This commit moves the responsibility for marking the channel closed in
the database after local and remote force closes from the chain watcher
to the channel arbitrator.
We do this because we previously would close the channel in the
database, before sending the event to the channel arbitrator. This could
lead to a situation where the channel was marked closed, but the channel
arbitrator didn't receive the event before shutdown. As we don't listen
for chain events for channels that are closed, those channels would be
stuck in the pending close state forever, as the channel arbitrator
state machine wouldn't progress.
We fix this by letting the ChannelArbitrator close the channel in the
database. After the contract resolutions are logged (in the state
callback before transitioning to StateContractClosed) we mark the
channel closed in the database. This way we make sure that it is marked
closed only if the resolutions have been successfully persisted.
This commit removes the state callback, and instead logs the contract
resolutions directly after receiving the unilateral close event. The
resolutions won't change, so it's not really necessary to wait to log
them, and this greatly simplifies the code.
This commit attempts to fix an inconsistency in when we consider an HTLC
to expire. When we first launched the resolver we would compare the
current block height against the expiry, while for new incoming blocks
we would compare against expiry-1.
This led to a flake during integration tests, during a call to
RestartNode after _exactly_ enough blocks for the HTLC to expire. In
some cases the resolver would see the new blocks and consider the HTLC
to be expired (because of the -1), while in other cases the resolver
would shut down before seeing the new blocks, and upon restart wouldn't
act on the new height because we did not compare against -1.
This commit fixes this by doing the same comparison in both cases.
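As a minimal sketch (with an illustrative helper name, not the actual resolver fields), both code paths now call a single helper so they can never disagree on when an HTLC counts as expired:

    // hasExpired reports whether an HTLC can be timed out at the given
    // height. The initial check when the resolver is launched and the
    // check performed for each new block both call this one helper, so
    // the two paths always agree. The exact inequality shown here is
    // illustrative.
    func hasExpired(currentHeight, expiry uint32) bool {
            return currentHeight >= expiry
    }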
Due to a recent change within the codebase to return estimated fee rates
in sat/kw, this commit ensures that we use this fee rate properly by
calculating a transaction's fees using its weight. This includes all of
the different transactions that are created within lnd (funding, sweeps,
etc.). On-chain transactions still rely on a sat/vbyte fee rate since it's
required by btcwallet.
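As a rough sketch of how a sat/kw fee rate is applied (the type and method names mirror the shape of, but are not necessarily identical to, the lnwallet API):

    // SatPerKWeight expresses a fee rate in satoshis per 1000 weight units.
    type SatPerKWeight int64

    // FeeForWeight returns the total fee, in satoshis, for a transaction
    // of the given weight. The rate is denominated per 1000 weight units,
    // so the product is scaled down by 1000.
    func (s SatPerKWeight) FeeForWeight(weight int64) int64 {
            return int64(s) * weight / 1000
    }

For example, a sweep transaction weighing 700 weight units at 2500 sat/kw pays 2500 * 700 / 1000 = 1750 satoshis.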
This commit fixes a potential race condition during
shutdown, that could allow the chain arb's
activeWatchers or activeChannels maps to be modified
while ranging over their contents. We fix this by
copying the contents into new maps with the mutex
held, before releasing the mutex and shutting down
each watcher or channel arbitrator.
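A sketch of the copy-then-release pattern, using simplified placeholder types (in lnd the maps are keyed by channel point rather than string):

    import "sync"

    type watcher interface {
            Stop() error
    }

    type chainArb struct {
            sync.Mutex
            activeWatchers map[string]watcher
    }

    // Stop copies the watcher map while holding the mutex, then releases
    // the mutex before stopping each watcher. A concurrent insertion or
    // deletion can therefore no longer mutate the map while we range
    // over it.
    func (c *chainArb) Stop() {
            c.Lock()
            watchers := make(map[string]watcher, len(c.activeWatchers))
            for chanPoint, w := range c.activeWatchers {
                    watchers[chanPoint] = w
            }
            c.Unlock()

            for _, w := range watchers {
                    w.Stop()
            }
    }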
This commit makes the chainwatcher attempt to dispatch a remote close
when it detects a remote state with a state number higher than our
known remote state. This can mean that we lost some state, and we check
the database for (hopefully) a data loss commit point retrieved during
channel sync with the remote peer. If this commit point is found in the
database we use it to try to recover our funds from the commitment.
This commit fixes a potential race condition within the
IncomingContestResolver, that could cause us to miss a
preimage that was delivered in time.
Currently we query the db for the preimage, and then
subscribe for notifications. This permits the following
ordering of events:
- query for preimage, returns nothing
- preimage is added and delivered to subscribers
- subscribe to preimages
- preimage never comes through!!
We fix this by reordering to subscribe for preimages and
then query just in case it already exists. The effect is
that the query will always return a valid read of the
preimages that are currently queued for delivery.
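A sketch of the reordering with placeholder types (the real preimage store and subscription API differ, but the subscribe-before-query ordering is the point):

    import (
            "crypto/sha256"
            "errors"
    )

    type preimageSubscription struct {
            Updates <-chan [32]byte
            Cancel  func()
    }

    type preimageStore interface {
            SubscribeUpdates() *preimageSubscription
            LookupPreimage(hash [32]byte) ([32]byte, bool)
    }

    // resolvePreimage subscribes *before* querying, so a preimage that is
    // added between the two steps is guaranteed to show up on the
    // subscription rather than being lost.
    func resolvePreimage(store preimageStore, hash [32]byte) ([32]byte, error) {
            sub := store.SubscribeUpdates()
            defer sub.Cancel()

            // The subscription is live, so this query is a consistent read
            // of everything delivered so far.
            if pre, ok := store.LookupPreimage(hash); ok {
                    return pre, nil
            }

            for pre := range sub.Updates {
                    if sha256.Sum256(pre[:]) == hash {
                            return pre, nil
                    }
            }
            return [32]byte{}, errors.New("preimage subscription closed")
    }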
In this commit, we alter cooperative channel closures to also use
MarkChannelResolved in order to unify the logic for the different types
of channel closures.
We no longer have to mark the channel as fully closed in the database,
as it is done directly in the chainWatcher. Instead, we stop the watcher
and delete it from the set of active watchers.
This commit makes the dispatchCooperativeClose method mark the channel
fully closed directly, without registering for confirmation
notifications first. We can do this as recent changes to the
contractcourt changed the definition of a closed channel in the database
to have had its closing tx confirmed, and we only dispatch the
cooperative close once the transaction has 1 confirmation.
We also rename the markChanClosed method to notifyChanClosed, to more
clearly indicate that the ChainArbitrator no longer has to mark the
channel fully closed in the database.
In this commit, we attempt to fix a bug that's possible within the
Start() method of the ChainArbitrator. We pass the channel pointer directly
into the newActiveChannelArbitrator function causing it to close over
the loop variable. We later use the channel point directly to send
messages to other sub-systems. It's possible that we actually have the
shadowed loop variable and will send an incorrect message. Defensively,
we now re-bind the loop variable in order to ensure we point to the
proper channel.
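A self-contained sketch of the re-binding fix (placeholder types, not the actual lnd structs):

    type channelState struct {
            FundingOutpoint string
    }

    func collectChanPoints(channels []channelState) []*string {
            var points []*string
            for _, channel := range channels {
                    // Re-bind the loop variable (needed before Go 1.22).
                    // Without this, &channel.FundingOutpoint below would
                    // point into the single shared loop variable, so every
                    // entry would end up referring to the last channel in
                    // the slice.
                    channel := channel

                    points = append(points, &channel.FundingOutpoint)
            }
            return points
    }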
This commit adds a test for the case where the ChannelArbitrator fails
to broadcast its commitment during a force close because of
ErrDoubleSpend. We test that in this case it will still wait for a
commitment to be confirmed in-chain, then resolve.
This commit adds the new function closure option ContractBreach to the
ChainArbitrator config, a closure that is in turn used by the ChainWatcher
to reliably hand off a breach event to the breachArbiter.
This commit changes how the ChainWatcher notifies the breachArbiter
about a channel breach. Instead of assuming the breachArbiter is among
the clients subscribing to channel events, it will call a new method
contractBreach(), and assume the breachArbiter has reliably gotten the
breach info when this method returns with a nil error.
Since the breachArbiter was the only subsystem having a sync chain
subscription, we also remove the (now) unused syncDispatch option.
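A sketch of the handoff contract with placeholder types (the real closure receives the channel point and the breach retribution data):

    // ContractBreach is invoked by the chainWatcher when it detects a
    // breached commitment. The breach only counts as handed off if the
    // call returns a nil error; on a non-nil error the dispatch will be
    // retried, so the breachArbiter cannot silently miss the event.
    type breachHandoff struct {
            ContractBreach func(chanPoint string, breachInfo interface{}) error
    }

    func dispatchBreach(cfg *breachHandoff, chanPoint string, breach interface{},
            markPendingClosed func(string) error) error {

            if err := cfg.ContractBreach(chanPoint, breach); err != nil {
                    // Handoff failed: do not persist the close, so the
                    // breach will be re-dispatched later or after a
                    // restart.
                    return err
            }

            // Only after a successful handoff is the channel marked
            // pending closed.
            return markPendingClosed(chanPoint)
    }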
This moves the log message "channel marked pending-closed" to the point
where the channel actually has been marked pending closed, instead of
logging it before the database transaction has completed.
This commit removes the for loop in the closeObserver, as it wasn't
serving any purpose. After receiving a spend notification we would
return, breaking out of the loop. When getting a quit signal we would
also return, making the loop only do one iteration in any case.
This commit ensures that clients subscribing to channel events that are
marked "sync dispatch" are _not_ deleted from the list of clients when
they call Cancel(). Instead, a goroutine will be launched that will send
an error on every read of the ProcessACK channel.
This fixes a race in handing off the breach info while lnd was shutting
down. The breach arbiter could end up being shut down (and calling
Cancel()) while the ChainWatcher was in the process of dispatching a
breach. Since the breach arbiter was no longer among the registered
clients at this point, the ChainWatcher would assume the breach was
handed off successfully, and mark the channel as pending closed. When
lnd was then restarted, the breach arbiter would not know about the
breach, and the ChainWatcher wouldn't attempt to re-dispatch, as the
channel was already marked as pending closed.
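A sketch of the cancelled sync-dispatch client (simplified; the watcherQuit channel used to stop the goroutine is an assumption of this sketch):

    import "errors"

    type chainEventSubscription struct {
            // ProcessACK is read by the chainWatcher after dispatching an
            // event to a sync-dispatch client.
            ProcessACK chan error

            // watcherQuit signals that the chainWatcher itself is shutting
            // down (assumed here so the goroutine below can exit).
            watcherQuit chan struct{}
    }

    // Cancel no longer removes the client from the chainWatcher's client
    // set. Instead it keeps answering every ProcessACK read with an
    // error, so a dispatch that races with shutdown is reported as failed
    // and will be re-attempted after restart, rather than being treated
    // as a successful handoff.
    func (c *chainEventSubscription) Cancel() {
            go func() {
                    for {
                            select {
                            case c.ProcessACK <- errors.New("subscription cancelled"):
                            case <-c.watcherQuit:
                                    return
                            }
                    }
            }()
    }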
This commit adds MVP unit tests for the following scenarios in the
ChannelArbitrator:
1) A cooperative close is confirmed.
2) A remote force close is confirmed.
3) A local force close is requested and confirmed.
4) A local force close is requested, but a remote force close gets
confirmed.
This commit changes the ChainWatcher to only send a chain event in case
the various spends are _confirmed_ on-chain, not only seen on the
network.
A consequence of this is that we now give the ChainWatcher the
responsibility of marking the channel closed when the closing tx is
confirmed, instead of the ChannelArbitrator.
This commit changes the channel arbitrator state machine to only care
about commitment transactions that are being confirmed on-chain
according to the chain_watcher. This is meant to handle the cases where
we would broadcast our commitment, expecting it to get confirmed, but
instead a competing transaction was confirmed.
This commit readies the ChannelArbitrator state machine for the change
that will make the ChainWatcher only notify on confirmed commitments.
The state machine has gotten a new state, StateCommitmentBroadcasted,
which we'll transition to after we have broadcasted our own commitment.
From this state we'll go to the StateContractClosed state regardless of
which commitment the ChainWatcher notifies about, unifying the contract
resolution between the local and remote force close.
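A sketch of the relevant transitions (state names from the text; the transition logic is heavily simplified):

    type arbitratorState uint8

    const (
            StateDefault arbitratorState = iota
            StateBroadcastCommit
            StateCommitmentBroadcasted
            StateContractClosed
            StateFullyResolved
    )

    // nextState shows the unification: once our commitment has been
    // broadcast we park in StateCommitmentBroadcasted, and whichever
    // commitment (local or remote) the ChainWatcher later reports as
    // confirmed, the next state is StateContractClosed either way.
    func nextState(current arbitratorState, commitmentConfirmed bool) arbitratorState {
            switch current {
            case StateBroadcastCommit:
                    return StateCommitmentBroadcasted
            case StateCommitmentBroadcasted:
                    if commitmentConfirmed {
                            return StateContractClosed
                    }
            }
            return current
    }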
This commit adds a new method dispatchLocalClose, which will be called
in case our commitment is detected to spend the funding transaction. In
this case LocalUnilateralCloseInfo will be sent on the
LocalUnilateralClosure channel to all subscribers.
The UnilateralClosure channel is renamed to RemoteUnilateralClosure for
consistency.
This commit removes a short circuit that checked whether the set of
contract resolutions after a unilateral close was empty. After removing
it, the state machine will advance from StateDefault->StateContractClosed, in
which the stateCallback will be called, logging the state needed to
advance. Since this logged state is empty, the state machine will go
directly to StateFullyResolved, which will trigger the
MarkChannelResolved call. This means the behaviour is kept.
This commit adds a missing return to the resolveContract method, that
will ensure the goroutine exits if the ChannelArbitrator shuts down.
This fixes a potential deadlock during the integration tests.
We also promote some of the logs to Debug from Trace.
In this commit, we fix a long standing bug where at times a co-op
channel closure wouldn't be properly marked as fully closed in the
database. The culprit was a re-occurring code flaw we've seen many times
in the codebase: a closure variable that closes over a loop iterator
variable. Before this instance, I assumed that this could only pop up
when goroutines bind to the loop iterator within a closure. However,
this instance is the exact same issue, but within a regular closure that
has _delayed_ execution. As the closure doesn't execute until long after
the loop has finished executing, it may still be holding onto the _last_
item the loop iterator variable was assigned to.
The fix for this issue is very simple: re-assign the channel point
before creating the closure. Without this fix, we would go to call
db.MarkChanFullyClosed on a channel that may not yet actually be in
the pending close state, causing all executions to fail.
Fixes #1054.
Fixes #1056.
Fixes #1075.
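The delayed-execution variant looks roughly like this (placeholder names; onClosingTxConfirmed and markChanFullyClosed stand in for the real confirmation callback and database call):

    func watchCoopCloses(chanPoints []string,
            onClosingTxConfirmed func(cb func()),
            markChanFullyClosed func(chanPoint string) error) {

            for _, chanPoint := range chanPoints {
                    // Re-assign before building the closure. The callback
                    // only runs once the closing tx confirms, long after
                    // this loop has finished, so without the re-assignment
                    // every callback would close over the shared loop
                    // variable and mark only the last channel.
                    chanPoint := chanPoint

                    onClosingTxConfirmed(func() {
                            _ = markChanFullyClosed(chanPoint)
                    })
            }
    }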
This commit renames ForceCloseSummary to LocalForceCloseSummary, and
adds a new method NewLocalForceCloseSummary that can be used to derive a
LocalForceCloseSummary if our commitment transaction gets confirmed
in-chain. It is meant to accompany the NewUnilateralCloseSummary method,
which is used for the same purpose in the event of a remote commitment
being seen in-chain.
This commit mitigates a problem within the ChannelArbitrator, where
after a restart we would start up in the state StateBroadcastCommit but
fail to broadcast our commitment because a conflicting transaction (most
likely our own commitment) was already broadcast. A more complete fix
for this case will be added later, but this commit lets the
ChannelArbitrator continue, trying to close out the channel.
In this commit, we fix an existing goroutine leak within the
contractcourt package. If a goroutine dies, but it doesn’t actually
cancel the block epoch notification that it requested, then it’s
possible to leak thousands of goroutines. To remedy this situation, we
ensure that we’ll
*always* cancel the epoch notification once the goroutine has exited.
In this commit, we fix an existing flake on Travis related to the new
set of on-chain HTLC tests. In this timing flake, Bob would broadcast
his sweeping transaction, but *mid block mining*. As a result, the
output would never be properly swept, needing an additional block to be
mined. We’ll now wait for both Bob’s sweeping transaction, and Carol’s
sweep transaction to be confirmed before we attempt our assertions.
In this commit, we fix an existing bug in the implementation of the
resolution of the htlcOutgoingContestResolver. Before this commit, we
would _always_ watch the claim outpoint. However, if this is on the
remote party’s commitment transaction, then we would end up watching
the wrong output. We’ll now properly detect this by modifying which
output we watch, based on if we have a second level transaction or not.
In this commit, we add 6 new integration tests to test the various
actions that may need to be performed when either side goes on-chain to
fully resolve HTLC’s. Many of the tests are mirrors of each other as
they test sweeping/resolving HTLC’s from both commitment transactions.
In this commit, we modify the way that notifications are dispatched
within the chainWatcher. Before we would *always* wait for an ack back
before we started to clean up the database state. This would at times
lead to deadlocks. To remedy this, we now allow callers to decide if
they want notifications to be sync or not. The only current caller that
requires this is the breach arbiter.
In this commit, we modify the interaction between the chanCloser
sub-system and the chain notifier altogether. This fixes a series of
bugs as before this commit, we wouldn’t be able to detect if the remote
party actually broadcasted *any* of the transactions that we signed off
upon. This would be reflected to the user by having a “zombie” channel
close that would never actually be resolved.
Rather than the chanCloser watching for on-chain closes, we’ll now open
up a co-op close context to the chainWatcher (via a layer of
indirection via the ChainArbitrator), and report to it all possible
closes that we’ve signed. The chainWatcher will then be able to launch
a goroutine to properly update the database state once any of the
possible closure transactions confirms.
In this commit, we add the IsOurAddress field into the config of the
chain arb. With this new function closure, the chain arb is able to
detect co-op on chain closes automatically.
In this commit, we extend the chainWatcher to be able to automatically
detect co-op closes of the channel. With this change, it’s now able to
detect all types of closes on-chain. We detect a
co-op close due to the sequence number being finalized, as well as
paying to us directly in a regular p2wkh-like output.
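A sketch of the detection heuristic (simplified; the real check in lnd may differ in detail): the commitment transaction carries a non-final sequence, while the mutually signed closing transaction finalizes the sequence and pays us directly, identified via the new IsOurAddress closure.

    import "github.com/btcsuite/btcd/wire"

    // isCoopClose reports whether a confirmed spend of the funding output
    // looks like a cooperative close: the input sequence is finalized and
    // at least one output pays directly to an address we control.
    // isOurScript stands in for the IsOurAddress closure from the config.
    func isCoopClose(tx *wire.MsgTx, isOurScript func(pkScript []byte) bool) bool {
            if tx.TxIn[0].Sequence != wire.MaxTxInSequenceNum {
                    // Commitment transactions use a non-final sequence, so
                    // this rules out a co-op close.
                    return false
            }

            for _, txOut := range tx.TxOut {
                    if isOurScript(txOut.PkScript) {
                            return true
                    }
            }
            return false
    }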
In this commit, we add a new method to allow external sub-systems to
gain an intent to receive notifications once an on-chain event happens.
This will be used in place of the old channel signals directly on the
channel state machine object in a series of follow up commits.
In this commit, we modify the construction of the channel arbitrator to
accept a pointer to an event stream from the chain watcher that’s been
assigned to that channel. As a result, we no longer need a fresh
unilateral close signal, as the one we get from the chain watcher will
*always* be up to date.
For each active channel, we’ll now create a chainWatcher instance that
will be around until the channel is fully closed on chain.
In this commit, we add a new struct to the package, the chainWatcher.
The duty of this struct is to replace the functionality that was
previously implemented by the closeObserver of each channel. Rather
than the source of notification being tied to the lifetime of a
particular object, it’s now delegated to a persistent object that will
be around for the entire lifetime of the channel (until it’s closed).
This will serve to greatly simplify the code, and eliminate a large
class of bugs.
In this commit, we add the ChainArbitrator struct. The ChainArbitrator
is a special sub-system that will oversee the on-chain resolution of
all active channels, and also channels that are in the pending close
state. The ChainArbitrator maintains a set of ChannelArbitrators, one
for each channel that hasn’t yet been fully resolved.
Outside sub-systems should send new channels to the arbitrator once
they’ve opened. Additionally, they can also trigger manual
interventions to close out a channel on chain forcibly, or just to
signal that a channel has been closed cooperatively.
Finally, (for now) the ChainArbitrator should be notified once a fresh
set of signals for a channel becomes available. The ChannelArbitrator
for the channel will use this set of signals to be notified when an
on-chain event happens.
In this commit, we introduce a new interface, the ContractResolver. The
duty of a ContractResolver is to watch a contract on-chain, for all
possible transitions, and exit finally when the contract has been fully
resolved. Resolvers themselves can be recursive, meaning they can
produce another resolver to hand off the duties required to fully
resolve a contract.
Each resolver also has a ResolverKit which contains all the function
closures and interfaces that the resolver needs to properly do its job.
The 5 types of resolvers are:
* outgoing HTLC timeout
* outgoing HTLC contested
* incoming HTLC known preimage
* incoming HTLC contested (don’t yet know)
* commitment sweep
In the future, more advanced resolver types can be added as required.
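Sketched as an interface (method names are close to, but not guaranteed to match, the actual signatures):

    // ContractResolver watches a single contract on-chain and drives it
    // to completion. Resolution is recursive: Resolve may hand back a new
    // resolver that takes over the remaining work, or nil once nothing is
    // left to do.
    type ContractResolver interface {
            // Resolve performs the next step of resolution, returning the
            // follow-up resolver (or nil if the contract is now fully
            // resolved).
            Resolve() (ContractResolver, error)

            // IsResolved reports whether the contract needs no further
            // on-chain action.
            IsResolved() bool
    }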
In this commit, we add a new file: briefcase.go. The contents of this
file are the ArbitratorLog. This log will be used by the internal state
machine of each Channel Arbitrator to ensure that each state transition
is fully reflected on-disk, to ensure that the state machine is durable
and able to survive restarts.
This commit also adds a new implementation of the ArbitratorLog
interface backed by boltdb.
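A sketch of the log's role (a small subset of methods with simplified signatures; exactly when a transition is written relative to being acted on is an implementation detail of the state machine):

    type arbitratorState uint8

    // ArbitratorLog records the state machine's progress on disk. The
    // method set here is a simplified subset.
    type ArbitratorLog interface {
            // CurrentState returns the last state that was durably
            // committed, used to resume after a restart.
            CurrentState() (arbitratorState, error)

            // CommitState persists a transition to the given state.
            CommitState(state arbitratorState) error
    }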
In this commit, we add the primary struct of the package with a full
implementation. The duty of the ChannelArbitrator is to watch the set
of active contracts on a commitment transaction and act accordingly if
any of their redemption criteria have been met. Potential criteria
include: an HTLC about to time out, an HTLC about to time out that we
know the preimage to, or the remote party going to chain (forcing us to resolve
all pending contracts on chain).
The primary goroutine of this struct implements a persistent state
machine in order to ensure that mid contract resolution, we’re able to
properly survive restarts without losing our place, or forgetting about
a pending contract.
A ChannelArbitrator will stay alive until all contracts have been fully
resolved. This means that outside sub-systems no longer need to worry
about remembering to mark a channel as fully resolved, as it’s the job
of the ChannelArbitrator to do this task.
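As a rough sketch of how one step of such a persistent state machine can look (placeholder functions; the real ChannelArbitrator also reacts to HTLC deadlines, resolver progress, and user close requests), assuming the next state is committed to the log before it is acted on:

    type arbitratorState uint8

    type arbLog interface {
            CurrentState() (arbitratorState, error)
            CommitState(next arbitratorState) error
    }

    // stateStep advances the state machine by one transition. Because the
    // next state is written to the log before execute runs, a crash at
    // any point leaves us either in the old state or in the new one, and
    // a restart simply resumes from whatever CurrentState returns.
    func stateStep(log arbLog, transition func(arbitratorState) arbitratorState,
            execute func(arbitratorState) error) error {

            current, err := log.CurrentState()
            if err != nil {
                    return err
            }

            next := transition(current)
            if next == current {
                    return nil
            }

            if err := log.CommitState(next); err != nil {
                    return err
            }
            return execute(next)
    }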