We poll the database for the channel commit point with an exponential
backoff. This is meant to handle the case where we are in the process of
handling a channel sync, and the case where we detect a channel close
and must wait for the peer to come online to start channel sync before
we can proceed.
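
A minimal sketch of that polling loop, assuming a hypothetical fetch
closure and quit channel rather than lnd's actual types:

    package sketch

    import (
        "errors"
        "time"
    )

    // pollCommitPoint retries fetch until it succeeds, doubling the wait
    // between attempts up to a cap, and bails out early on shutdown.
    func pollCommitPoint(fetch func() ([]byte, error),
        quit <-chan struct{}) ([]byte, error) {

        backoff := 100 * time.Millisecond
        const maxBackoff = 30 * time.Second

        for {
            commitPoint, err := fetch()
            if err == nil {
                return commitPoint, nil
            }

            select {
            // Wait out the current interval, then double it for the
            // next attempt, up to the cap.
            case <-time.After(backoff):
                backoff *= 2
                if backoff > maxBackoff {
                    backoff = maxBackoff
                }

            case <-quit:
                return nil, errors.New("shutting down")
            }
        }
    }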
In this commit, we modify the newly introduced UtxoSweeper.CreateSweepTx
to accept the confirmation target as a param of the method rather than a
struct-level variable. We do this because it allows each caller to
decide, at sweep time, what the fee rate should be, rather than using a
global value that is meant to work in all scenarios. For example, any
time we're sweeping an output with a CLTV lock that has a dependent
transaction we need to sweep/cancel, we may require a higher fee rate
than a regular force close with a CSV output.
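
Roughly, the shape of the change looks like the sketch below; the names
and signatures are simplified placeholders, not lnd's exact API:

    package sketch

    // feeEstimator returns a fee rate (sat/kw) expected to confirm
    // within the given number of blocks.
    type feeEstimator interface {
        EstimateFeePerKW(confTarget uint32) (int64, error)
    }

    type utxoSweeper struct {
        estimator feeEstimator
    }

    // CreateSweepTx takes confTarget per call: an urgent CLTV sweep with
    // a dependent transaction can pass a low target (higher fee rate),
    // while a routine CSV sweep can pass a higher, cheaper one.
    func (s *utxoSweeper) CreateSweepTx(confTarget uint32) (int64, error) {
        feeRate, err := s.estimator.EstimateFeePerKW(confTarget)
        if err != nil {
            return 0, err
        }

        // ...build and return the sweep transaction using feeRate. The
        // fee rate is returned here only to keep the sketch compilable.
        return feeRate, nil
    }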
This commit restructures the initialization procedure
for chain watchers such that they can proceed in parallel.
This is primarily to help nodes running with the neutrino
backend, which otherwise forces a serial rescan for each
active channel to check for spentness.
Doing so allows the rescans to take advantage of batch
scheduling in registering for the spend notifications,
ensuring that only one or two passes are made, as opposed
to one for each channel.
Lastly, this commit ensures that the chain arb is properly
shut down if any of its chain watchers or channel arbs
fails to start, so as to cancel their goroutines before
exiting.
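
A condensed sketch of that start-up flow, assuming a simplified watcher
interface rather than the actual chain watcher type:

    package sketch

    import "sync"

    type watcher interface {
        Start() error
        Stop() error
    }

    func startAll(watchers []watcher) error {
        var (
            wg       sync.WaitGroup
            mu       sync.Mutex
            firstErr error
        )

        // Kick off every watcher concurrently so spend registrations can
        // be batched by the backend rather than rescanned one channel at
        // a time.
        for _, w := range watchers {
            wg.Add(1)
            go func(w watcher) {
                defer wg.Done()
                if err := w.Start(); err != nil {
                    mu.Lock()
                    if firstErr == nil {
                        firstErr = err
                    }
                    mu.Unlock()
                }
            }(w)
        }
        wg.Wait()

        // If anything failed, stop all watchers so their goroutines are
        // cancelled before we return the error.
        if firstErr != nil {
            for _, w := range watchers {
                _ = w.Stop()
            }
            return firstErr
        }

        return nil
    }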
At ChannelArbitrator startup we now check the database close status of
the channel. If we detect that the channel is closed, but our state
machine hasn't advanced to reflect that (possibly because of a shutdown
before the state transition was finished), we manually trigger the state
transition to recover.
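
In rough sketch form (the field and function names here are hypothetical,
not lnd's), the recovery check looks like this:

    package sketch

    type closeState int

    const (
        stateDefault closeState = iota
        stateContractClosed
    )

    // channelArbitrator is a stand-in for the real type; the two
    // closures represent the database lookup and the manually triggered
    // transition.
    type channelArbitrator struct {
        state closeState

        // isClosedInDB reports whether the channel is already marked
        // closed on disk.
        isClosedInDB func() (bool, error)

        // triggerCloseTransition re-runs the state transition a prior
        // shutdown may have interrupted.
        triggerCloseTransition func() error
    }

    func (c *channelArbitrator) start() error {
        closed, err := c.isClosedInDB()
        if err != nil {
            return err
        }

        // The database says the channel is closed, but the state machine
        // never advanced: recover by triggering the transition manually.
        if closed && c.state == stateDefault {
            return c.triggerCloseTransition()
        }

        return nil
    }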
This commit moves the responsibility for closing local and remote force
closes in the database from the chain watcher to the channel arbitrator.
We do this because we previously would close the channel in the
database, before sending the event to the channel arbitrator. This could
lead to a situation where the channel was marked closed, but the channel
arbitrator didn't receive the event before shutdown. As we don't listen
for chain events for channels that are closed, those channels would be
stuck in the pending close state forever, as the channel arbitrator
state machine wouldn't progress.
We fix this by letting the ChannelArbitrator close the channel in the
database. After the contract resolutions are logged (in the state
callback before transitioning to StateContractClosed) we mark the
channel closed in the database. This way we make sure that it is marked
closed only if the resolutions have been successfully persisted.
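
The resulting ordering can be reduced to a small sketch (function names
hypothetical): persist first, mark closed second, so a crash in between
simply replays the close on restart.

    package sketch

    // persistCloseThenMark logs the contract resolutions and only marks
    // the channel closed in the database once that write has succeeded.
    // If we crash between the two steps, the channel is still open on
    // disk and the close event is handled again after restart.
    func persistCloseThenMark(logResolutions, markClosed func() error) error {
        if err := logResolutions(); err != nil {
            return err
        }

        // The resolutions are safely persisted; marking the channel
        // closed now cannot leave it stuck in pending close.
        return markClosed()
    }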
This commit removes the state callback, and instead logs the contract
resolutions directly after receiving the unilateral close event. The
resolutions won't change, so there's no real need to wait to log them,
and this greatly simplifies the code.
This commit attempts to fix an inconsistency in when we consider an HTLC
to expire. When we first launched the resolver we would compare the
current block height against the expiry, while for new incoming blocks
we would compare against expiry-1.
This led to a flake in the integration tests during a call to
RestartNode after _exactly_ enough blocks for the HTLC to expire. In
some cases the resolver would see the new blocks and consider the HTLC
expired (because of the -1), while in other cases the resolver would
shut down before seeing the new blocks, and upon restart wouldn't act on
the new height because we did not compare against -1.
This commit fixes this by doing the same comparison in both cases.
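
As a tiny sketch, the fix amounts to sharing one predicate between the
startup path and the block-epoch path (the exact comparison lnd settled
on may differ; the point is that both paths use the same one):

    package sketch

    // htlcExpired is the single predicate used both when the resolver is
    // first launched and when a new block arrives, so the HTLC is
    // considered expired at the same height on both paths.
    func htlcExpired(currentHeight, expiry uint32) bool {
        return currentHeight >= expiry
    }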
Due to a recent change within the codebase to return estimated fee rates
in sat/kw, this commit ensures that we use this fee rate properly by
calculating a transaction's fees using its weight. This includes all of
the different transactions that are created within lnd (funding, sweeps,
etc.). On-chain transactions crafted through btcwallet still rely on a
sat/vbyte fee rate, since that's what btcwallet requires.
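
For reference, the weight-based fee computation boils down to something
like the sketch below (lnd has its own helper for this; the names here
are illustrative):

    package sketch

    // feeForWeight converts a fee rate in sat/kw (satoshis per 1000
    // weight units) and a transaction weight into an absolute fee in
    // satoshis.
    func feeForWeight(feePerKw, weight int64) int64 {
        // fee = rate (sat/kw) * weight (wu) / 1000 (wu per kw).
        return feePerKw * weight / 1000
    }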
This commit fixes a potential race condition during
shutdown that could allow the chain arb's
activeWatchers or activeChannels maps to be modified
while ranging over their contents. We fix this by
copying the contents into new maps with the mutex
held, before releasing the mutex and shutting down
each watcher or channel arbitrator.
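
The pattern, condensed into a sketch with simplified key and value types
(the real maps and their contents differ):

    package sketch

    import "sync"

    // stopper stands in for a chain watcher or channel arbitrator.
    type stopper interface {
        Stop() error
    }

    type chainArb struct {
        sync.Mutex
        activeWatchers map[uint64]stopper
    }

    // stopWatchers snapshots the map under the mutex, releases it, and
    // only then stops each watcher, so concurrent inserts or deletes on
    // the live map cannot race with the iteration.
    func (c *chainArb) stopWatchers() {
        c.Lock()
        watchers := make(map[uint64]stopper, len(c.activeWatchers))
        for id, w := range c.activeWatchers {
            watchers[id] = w
        }
        c.Unlock()

        for _, w := range watchers {
            _ = w.Stop()
        }
    }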