Adds a new external signal alerting autopilot that
new nodes have been added to the channel graph or
an existing node has modified its channel
announcement. This allows autopilot to examine its
current state, and attempt to open channels if our
target state is not yet met.
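As a rough illustration of the mechanism, the sketch below wires a
hypothetical Agent to a buffered update channel; the type and method
names are illustrative, not autopilot's actual API:

```go
package main

import (
	"fmt"
	"time"
)

// graphUpdate is a hypothetical notification that the channel graph
// changed: a node was added or an existing node re-announced itself.
type graphUpdate struct {
	nodeKey string
}

// Agent is a toy stand-in for the autopilot agent. External callers
// nudge it through a buffered channel.
type Agent struct {
	graphUpdates chan graphUpdate
	quit         chan struct{}
}

// NewAgent creates the agent and starts its event loop.
func NewAgent() *Agent {
	a := &Agent{
		graphUpdates: make(chan graphUpdate, 100),
		quit:         make(chan struct{}),
	}
	go a.loop()
	return a
}

// OnNodeUpdate is the external signal: the gossip layer calls this
// whenever a node announcement is added or modified.
func (a *Agent) OnNodeUpdate(u graphUpdate) {
	select {
	case a.graphUpdates <- u:
	case <-a.quit:
	}
}

// loop re-examines the agent's state on every update and would open
// channels if the target state isn't met yet (elided here).
func (a *Agent) loop() {
	for {
		select {
		case u := <-a.graphUpdates:
			fmt.Println("graph updated by", u.nodeKey,
				"- re-evaluating channel targets")
		case <-a.quit:
			return
		}
	}
}

func main() {
	a := NewAgent()
	a.OnNodeUpdate(graphUpdate{nodeKey: "02abc..."})

	// Give the asynchronous loop a moment to run (demo only).
	time.Sleep(50 * time.Millisecond)
	close(a.quit)
}
```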
In this commit, we ensure that we're able to properly parse the cert and
macaroon paths for all relevant config variants. Before this commit,
the macaroon path would end up empty if a user wasn't running with the
default (mainnet, lnddir) settings. In this commit, we
remedy this by parsing each of the two (cert+macaroon) paths
independently.
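A simplified sketch of the fix's gist, with illustrative field names
and default paths rather than lncli's exact configuration:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// config mirrors the relevant lncli-style flags in simplified form.
type config struct {
	LndDir       string // --lnddir
	Network      string // --network (mainnet, testnet, ...)
	TLSCertPath  string // --tlscertpath, empty if not set by the user
	MacaroonPath string // --macaroonpath, empty if not set by the user
}

// resolvePaths fills in the cert and macaroon paths independently: each
// default is only applied when the user didn't override that specific
// path, so a custom --lnddir or --network no longer leaves the macaroon
// path empty.
func resolvePaths(cfg *config) {
	if cfg.TLSCertPath == "" {
		cfg.TLSCertPath = filepath.Join(cfg.LndDir, "tls.cert")
	}
	if cfg.MacaroonPath == "" {
		cfg.MacaroonPath = filepath.Join(
			cfg.LndDir, "data", "chain", "bitcoin", cfg.Network,
			"admin.macaroon",
		)
	}
}

func main() {
	cfg := &config{LndDir: "/home/alice/.lnd-custom", Network: "testnet"}
	resolvePaths(cfg)
	fmt.Println(cfg.TLSCertPath)
	fmt.Println(cfg.MacaroonPath)
}
```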
We make sure to return an error other than ErrIgnored, as ErrIgnored is
expected to only be returned for updates where we already have the
necessary information in the database.
If a channel ID was found in the rejectCache, it was possible that we
had rejected an invalid update for this channel earlier, and when
attempting to add the current update we couldn't distinguish that
failure from an outdated/ignored update.
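A sketch of the distinction, with hypothetical error values and a toy
reject cache standing in for the real ones:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrIgnored means we already have the update's information in the
// database: the update is outdated or redundant, but not invalid.
var ErrIgnored = errors.New("update ignored: already known")

// errRejected stands in for any other failure, e.g. a channel ID that
// sits in the reject cache because an earlier update was invalid.
var errRejected = errors.New("update rejected")

type rejectCache map[uint64]struct{}

// addUpdate returns a distinct error for reject-cache hits so callers
// can't mistake a rejection for a benign "already known" result.
func addUpdate(cache rejectCache, chanID uint64) error {
	if _, ok := cache[chanID]; ok {
		return fmt.Errorf("channel %d: %w", chanID, errRejected)
	}
	// ... attempt to persist the update; return ErrIgnored only when
	// the database already holds the necessary information ...
	return nil
}

func main() {
	cache := rejectCache{42: {}}
	err := addUpdate(cache, 42)
	fmt.Println(errors.Is(err, ErrIgnored), errors.Is(err, errRejected))
}
```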
Previously we would immediately return nil on the error channel for
premature ChannelUpdates, which would break the expectation that a
returned non-error meant the update was successfully added to the
database. This meant that the caller would believe the update was added
to the database, while it was actually still in volatile memory and
could be lost during restarts.
This change makes us handle premature ChannelUpdates as we handle other
premature announcements within the gossiper, by deferring sending on the
error channel until we have reprocessed the update.
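A minimal illustration of deferring the reply, assuming a hypothetical
message wrapper that carries the caller's error channel:

```go
package main

import "fmt"

// msg pairs an announcement with the error channel the caller is
// waiting on. Replying on err only after the update is truly persisted
// preserves the contract that a nil error means "in the database".
type msg struct {
	update    string
	err       chan error
	premature bool
}

type gossiper struct {
	// futureMsgs holds premature updates, keyed by the block height we
	// are waiting for, together with their un-replied error channels.
	futureMsgs map[uint32][]*msg
}

func (g *gossiper) process(m *msg, height uint32) {
	if m.premature {
		// Do NOT reply yet: park the message (and its error channel)
		// until the dependency arrives, then reprocess it.
		g.futureMsgs[height] = append(g.futureMsgs[height], m)
		return
	}
	// ... validate and persist the update ...
	m.err <- nil // only now is "no error" truthful
}

// onBlock reprocesses parked updates once their height is reached.
func (g *gossiper) onBlock(height uint32) {
	for _, m := range g.futureMsgs[height] {
		m.premature = false
		g.process(m, height)
	}
	delete(g.futureMsgs, height)
}

func main() {
	g := &gossiper{futureMsgs: make(map[uint32][]*msg)}
	m := &msg{update: "chan_update", err: make(chan error, 1), premature: true}
	g.process(m, 500_000)

	select {
	case <-m.err:
		fmt.Println("replied too early")
	default:
		fmt.Println("reply deferred until reprocessing")
	}

	g.onBlock(500_000)
	fmt.Println("after reprocessing, err =", <-m.err)
}
```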
Previously we wouldn't return anything in the case where the
announcement was meant for a chain we didn't recognize. After this
change we should return an error on the error channel in all flows
within the gossiper.
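For instance, the unknown-chain path might look roughly like this
(illustrative code, not the gossiper's actual types):

```go
package main

import (
	"bytes"
	"fmt"
)

// processForChain sketches the "unknown chain" flow: instead of
// silently dropping the announcement, an error is now always delivered
// on the caller's error channel.
func processForChain(msgChain, ourChain []byte, errChan chan<- error) {
	if !bytes.Equal(msgChain, ourChain) {
		errChan <- fmt.Errorf("announcement for unknown chain %x", msgChain)
		return
	}
	// ... normal validation and persistence, also replying on errChan ...
	errChan <- nil
}

func main() {
	errChan := make(chan error, 1)
	processForChain([]byte{0x01}, []byte{0x02}, errChan)
	fmt.Println(<-errChan)
}
```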
At ChannelArbitrator startup we now check the database close status of
the channel. If we detect that the channel is closed, but our state
machine hasn't advanced to reflect that (possibly because of a shutdown
before the state transition was finished), we manually trigger the state
transition to recover.
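Roughly, the startup check has this shape (a sketch with invented
types, not the actual ChannelArbitrator code):

```go
package main

import "fmt"

// arbState is a toy version of the arbitrator state machine's states.
type arbState int

const (
	stateDefault arbState = iota
	stateContractClosed
	stateFullyResolved
)

// channelArbitrator is a stripped-down stand-in for lnd's
// ChannelArbitrator.
type channelArbitrator struct {
	state arbState

	// dbIsClosed reports whether the database already considers the
	// channel closed (e.g. a force close was confirmed before a
	// shutdown interrupted the state transition).
	dbIsClosed func() bool
}

// Start checks for the "closed in the database but state machine
// lagging behind" condition and manually triggers the missed state
// transition to recover.
func (c *channelArbitrator) Start() {
	if c.dbIsClosed() && c.state < stateContractClosed {
		fmt.Println("channel closed in DB but state machine lagging; " +
			"triggering state transition")
		c.state = stateContractClosed
		// ... continue advancing the state machine as usual ...
	}
}

func main() {
	arb := &channelArbitrator{
		state:      stateDefault,
		dbIsClosed: func() bool { return true },
	}
	arb.Start()
	fmt.Println("state after start:", arb.state)
}
```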
This commit moves the responsibility for closing local and remote force
closes in the database from the chain watcher to the channel arbitrator.
We do this because we previously would close the channel in the
database, before sending the event to the channel arbitrator. This could
lead to a situation where the channel was marked closed, but the channel
arbitrator didn't receive the event before shutdown. As we don't listen
for chain events for channels that are closed, those channels would be
stuck in the pending close state forever, as the channel arbitrator
state machine wouldn't progress.
We fix this by letting the ChannelArbitrator close the channel in the
database. After the contract resolutions are logged (in the state
callback before transitioning to StateContractClosed) we mark the
channel closed in the database. This way we make sure that it is marked
closed only if the resolutions have been successfully persisted.
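In sketch form, the ordering this establishes (simplified, with
hypothetical helpers standing in for the database calls):

```go
package main

import "fmt"

// logContractResolutions and markChannelClosed stand in for the real
// database operations; only their ordering matters here.
func logContractResolutions() error { return nil }
func markChannelClosed() error      { return nil }

// advanceToContractClosed persists the contract resolutions first and
// only then marks the channel closed in the database. If we crash in
// between, the channel is still open in the database and the work is
// simply redone on restart, instead of the channel getting stuck in
// the pending close state forever.
func advanceToContractClosed() error {
	if err := logContractResolutions(); err != nil {
		return fmt.Errorf("unable to log resolutions: %w", err)
	}
	if err := markChannelClosed(); err != nil {
		return fmt.Errorf("unable to mark channel closed: %w", err)
	}
	return nil
}

func main() {
	if err := advanceToContractClosed(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("resolutions persisted, channel marked closed")
}
```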
This commit removes the state callback, and instead logs the contract
resolutions directly after receiving the unilateral close event. The
resolutions won't change, so there is no need to wait before logging
them, and this greatly simplifies the code.
This commit moves the logic handling responses to
locally-initiated payments to be asynchronous. The
reordering of operations into handleLocalDispatch
brings a serious performance burden to the switch's
main event loop. However, the at-most-once semantics
of the circuit map and the idempotency of the cleanup
methods allow these operations to safely run in parallel.
Prior to this commit, the async_payments_benchmark
would time out due to the forcibly serial nature of
the prior design. With this change, there is no
perceptible difference in the benchmark OMM, even
though we've added two extra db calls.
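The shape of the change, as a toy sketch; the real switch and its
circuit map are considerably more involved:

```go
package main

import (
	"fmt"
	"sync"
)

// switchCore is a toy model of the switch's main event loop.
type switchCore struct {
	wg sync.WaitGroup
}

// handleLocalResponse does the per-payment bookkeeping: deleting the
// circuit, acking the settle/fail, notifying the caller. With
// at-most-once circuit-map semantics and idempotent cleanup, it is safe
// to run this off the main loop, and even concurrently per payment.
func (s *switchCore) handleLocalResponse(paymentID uint64) {
	defer s.wg.Done()
	// ... the two extra db calls happen here, off the hot path ...
	fmt.Println("resolved local payment", paymentID)
}

// eventLoop stays responsive: instead of doing the response handling
// inline, it hands each response to its own goroutine.
func (s *switchCore) eventLoop(responses <-chan uint64) {
	for id := range responses {
		s.wg.Add(1)
		go s.handleLocalResponse(id)
	}
	s.wg.Wait()
}

func main() {
	responses := make(chan uint64, 3)
	for i := uint64(1); i <= 3; i++ {
		responses <- i
	}
	close(responses)

	(&switchCore{}).eventLoop(responses)
}
```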
Composes the new payment status helper methods such that
we only require one db txn per state transition. This
also allows us to remove the exclusive lock from the
control tower, and enable more concurrent requests.
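As a sketch, assuming a bbolt-style Update API that provides one
transaction per call; the helper and status names are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// tx is a toy stand-in for a single read-write database transaction.
type tx struct{ statuses map[string]string }

// db offers an Update method that runs a function inside one
// transaction, in the style of bbolt.
type db struct{ statuses map[string]string }

func (d *db) Update(f func(*tx) error) error {
	return f(&tx{statuses: d.statuses})
}

var ErrAlreadyInFlight = errors.New("payment already in flight")

// Helper methods composed inside a single transaction. Because the db
// serializes writers, no exclusive lock is needed in the control tower.
func fetchStatus(t *tx, hash string) string { return t.statuses[hash] }
func setStatus(t *tx, hash, status string)  { t.statuses[hash] = status }

// initPayment performs the whole state transition (read the current
// status and write the new one) in one db txn instead of one txn per
// helper call.
func initPayment(d *db, hash string) error {
	return d.Update(func(t *tx) error {
		if fetchStatus(t, hash) == "InFlight" {
			return ErrAlreadyInFlight
		}
		setStatus(t, hash, "InFlight")
		return nil
	})
}

func main() {
	d := &db{statuses: make(map[string]string)}
	fmt.Println(initPayment(d, "deadbeef")) // <nil>
	fmt.Println(initPayment(d, "deadbeef")) // payment already in flight
}
```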