This commit makes the chainwatcher attempt to dispatch a remote close
when it detects a remote state with a state number higher than our
known remote state. This indicates that we may have lost state, so we
check the database for a data loss commit point, hopefully retrieved
during channel sync with the remote peer. If this commit point is found
in the database, we use it to try to recover our funds from the
commitment.
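A rough sketch of the resulting chainwatcher logic, with hypothetical
names for the database accessor and the dispatch helper:

    // maybeRecoverFromDataLoss is a sketch, not lnd's actual API. It is
    // invoked when a commitment with an unexpectedly high state number
    // confirms on-chain.
    func maybeRecoverFromDataLoss(chanID, broadcastStateNum,
        knownRemoteStateNum uint64) error {

        if broadcastStateNum <= knownRemoteStateNum {
            return nil // Not a lost-state close, handled elsewhere.
        }

        // We appear to have lost state. Look for the commit point the
        // remote (hopefully) gave us during channel sync.
        commitPoint, err := fetchDataLossCommitPoint(chanID)
        if err != nil {
            // Without the commit point we can't derive our keys on
            // this commitment, so we can't sweep our output.
            return err
        }

        // Use the commit point to derive our keys and dispatch the
        // remote close handler to recover our funds.
        return dispatchRemoteForceClose(chanID, commitPoint)
    }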
This commit adds a check for the LocalUnrevokedCommitPoint sent to us by
the remote during channel reestablishment, ensuring it matches the point
they previously sent us.
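In pseudocode form (names are illustrative), the check amounts to:

    // expectedCommitPoint is the unrevoked commit point we have on
    // record for the remote's current local commitment height.
    if !expectedCommitPoint.IsEqual(msg.LocalUnrevokedCommitPoint) {
        return ErrInvalidLocalUnrevokedCommitPoint
    }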
This commit enumerates the various error cases we can encounter when we
compare our remote commit chain to the view the remote communicates to us
via msg.NextLocalCommitHeight.
We now compare this height to our remote tail and tip height, returning
the relevant error in case of an unrecoverable desync, and re-sending a
commitment signature (including log updates) in case we owe one.
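Roughly, the comparison looks like the following switch (a sketch of
the logic, not the exact code):

    switch {
    // The remote expects the commitment after our tip next: both
    // chains are in sync and nothing needs to be re-sent.
    case msg.NextLocalCommitHeight == remoteTipHeight+1:

    // The remote never received our last CommitSig: we owe them a
    // commitment signature, so we re-send it together with the log
    // updates it covered.
    case msg.NextLocalCommitHeight == remoteTipHeight:
        updates = append(updates, lastCommitSigMsgs...)

    // Any other height indicates an unrecoverable desync.
    default:
        return nil, ErrCannotSyncCommitChains
    }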
This commit enumerates the various error cases we can encounter when we
compare our local commit chain to the view the remote communicates to us
via msg.RemoteCommitTailHeight.
We now compare this height to our local tail height (note that there's
never a local "tip" at this point), returning the relevant error in case
of an unrecoverable desync, and re-sending a revocation in case we owe
one.
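Sketched the same way as the remote-chain comparison above (again
illustrative, not the exact code):

    switch {
    // The remote's view of our tail matches ours: in sync.
    case msg.RemoteCommitTailHeight == localTailHeight:

    // The remote never received our last RevokeAndAck: we owe them a
    // revocation, so we re-send it.
    case msg.RemoteCommitTailHeight+1 == localTailHeight:
        updates = append(updates, lastRevocationMsg)

    // The remote has seen revocations we don't remember sending:
    // we have lost state ourselves.
    case msg.RemoteCommitTailHeight > localTailHeight:
        return nil, ErrCommitSyncLocalDataLoss

    // Anything else is an unrecoverable desync.
    default:
        return nil, ErrCannotSyncCommitChains
    }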
This commit defines a few new errors that we can potentially encounter
during channel reestablishment:
* ErrInvalidLocalUnrevokedCommitPoint
* ErrCommitSyncLocalDataLoss
* ErrCommitSyncRemoteDataLoss
in addition to the already defined errors
* ErrInvalidLastCommitSecret
* ErrCannotSyncCommitChains
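The new errors are declared as package-level sentinels, roughly as
follows (the exact error strings here are illustrative):

    var (
        // ErrInvalidLocalUnrevokedCommitPoint is returned when the
        // commit point sent in the ChannelReestablish message doesn't
        // match the point we expect.
        ErrInvalidLocalUnrevokedCommitPoint = errors.New(
            "unrevoked commit point is invalid",
        )

        // ErrCommitSyncLocalDataLoss is returned when the remote's
        // view of our chain indicates that we have lost state.
        ErrCommitSyncLocalDataLoss = errors.New(
            "possible local commitment state data loss",
        )

        // ErrCommitSyncRemoteDataLoss is returned when the remote
        // appears to have lost commitment state.
        ErrCommitSyncRemoteDataLoss = errors.New(
            "possible remote commitment state data loss",
        )
    )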
In this commit we modify the integration tests slightly, by starting the
parties that get breached during the breach tests with --nolisten. We do
this to ensure that once the data loss protection logic is in place, the
nodes won't automatically connect, detect the state desync, and recover
before we are able to trigger the breach.
Corrects an instance where we held a reference to a boltdb
byte slice after returning from the transaction. This
can cause panics under certain conditions; we avoid it
by creating a copy of the key.
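The problematic pattern, and the fix, look roughly like this (bucket
and error names are placeholders):

    var keyCopy []byte
    err := db.View(func(tx *bolt.Tx) error {
        k, _ := tx.Bucket(myBucket).Cursor().First()
        if k == nil {
            return ErrNotFound
        }

        // k points into boltdb's memory-mapped pages and is only
        // valid for the lifetime of this transaction, so we copy it
        // before the closure returns.
        keyCopy = make([]byte, len(k))
        copy(keyCopy, k)
        return nil
    })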
This commit corrects our exit hop logic to return
FailFinalExpiryTooSoon if the following check is true:
pd.Timeout-expiryGraceDelta <= heightNow
Previously we returned FailFinalIncorrectCltvExpiry, which
should only be returned if the packet was misconstructed.
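In code, the corrected branch is roughly (a sketch around the check
above):

    if pd.Timeout-expiryGraceDelta <= heightNow {
        // The final expiry is too close for us to safely claim the
        // HTLC on-chain, so fail it back. Previously this branch
        // (wrongly) used FailFinalIncorrectCltvExpiry.
        failure = &lnwire.FailFinalExpiryTooSoon{}
    }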
This commit makes sure that force closed channels are also put into the
"waiting close" state before the commitment transaction is confirmed,
and that they exit this state when it confirms.
This was previously not verified, as the relevant check was added before
the "waiting close" state was introduced.
This commit fixes a flake within the integration tests, where we would
mine a set of blocks before checking if Bob's sweep tx was in the
mempool. Usually this would pass, since the blocks were generated before
the tx hit the miner's mempool, but sometimes the tx would be mined
first, and the subsequent mempool check would fail.
This commit fixes this by immediately waiting for Bob's sweep tx to hit
the mempool, as his funds are not time locked.
This commit replaces the debug Config struct with an empty
one, so that the command line flags are hidden in production
builds.
Production help before commit:
Tor:
--tor.active
--tor.socks=
--tor.dns=
--tor.streamisolation
--tor.control=
--tor.v2
--tor.v2privatekeypath=
--tor.v3
hodl:
--hodl.exit-settle
--hodl.add-incoming
--hodl.settle-incoming
--hodl.fail-incoming
--hodl.add-outgoing
--hodl.settle-outgoing
--hodl.fail-outgoing
--hodl.commit
--hodl.bogus-settle
Help Options:
-h, --help
Production help after commit:
Tor:
--tor.active
--tor.socks=
--tor.dns=
--tor.streamisolation
--tor.control=
--tor.v2
--tor.v2privatekeypath=
--tor.v3
Help Options:
-h, --help
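One way to achieve this (a sketch; the actual build tag name may
differ) is to guard the two Config definitions of the hodl package with
build constraints. In production builds:

    // config_production.go
    // +build !debug

    package hodl

    // Config is empty in production builds, so the flag parser has
    // nothing to register and the hodl section vanishes from --help.
    type Config struct{}

and in debug builds:

    // config_debug.go
    // +build debug

    package hodl

    // Config exposes the hodl flags only when built with the tag.
    type Config struct {
        ExitSettle bool `long:"exit-settle" description:"..."`
        // ...remaining hodl flags elided...
    }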
This commit fixes a potential race condition within the
IncomingContestResolver, that could cause us to miss a
preimage that was delivered in time.
Currently we query the db for the preimage, and then
subscribe for notifications. This permits the following
ordering of events:
- query for preimage, returns nothing
- preimage is added and delivered to subscribers
- subscribe to preimages
- preimage never comes through!!
We fix this by reordering the operations: we subscribe for preimages
first, then query in case the preimage already exists. The effect is
that the query will always return a valid read of the preimages that
are currently queued for delivery.
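The fixed ordering, sketched (the subscription and lookup names
approximate the witness beacon API, but treat them as illustrative):

    // Subscribe first: any preimage delivered from this point on
    // will be sent to our subscription.
    sub := preimageBeacon.SubscribeUpdates()
    defer sub.CancelSubscription()

    // Then query, in case the preimage arrived before we subscribed.
    // Between the two, every preimage is observed at least once.
    if preimage, ok := preimageBeacon.LookupPreimage(payHash); ok {
        return applyPreimage(preimage)
    }

    // Otherwise, block on sub's update channel (omitted).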
In this commit, we correct our size estimates for to-local scripts,
which are used on the commitment transaction and the htlc
success/timeout transactions. There have been observed cases of
transactions getting stuck because our estimates were too low, causing
the transactions to not be relayed.
Our previous estimate for the commitment to-local script was derived
from an older version of the script. Though that estimate is greater
than the actual size, it has been updated to the current estimate of 79
bytes.
This estimate assumes that CSV delays will be at most 4 bytes when
serialized. Since this value is expressed in relative block heights,
this should be more than sufficient for our needs, even though the
maximum possible size of the little-endian int64 is 9 bytes (plus an
OP_DATA).
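For reference, the 79 bytes break down as follows (a sketch of how the
constant is derived, assuming 33-byte compressed pubkeys and the 4-byte
CSV push above):

    // ToLocalScriptSize: upper bound on the to-local script size.
    const ToLocalScriptSize = 1 + // OP_IF
        1 + 33 + // OP_DATA + revocation pubkey
        1 + // OP_ELSE
        1 + 4 + // OP_DATA + csv_delay (at most 4 bytes)
        1 + // OP_CHECKSEQUENCEVERIFY
        1 + // OP_DROP
        1 + 33 + // OP_DATA + local delay pubkey
        1 + // OP_ENDIF
        1 // OP_CHECKSIG => 79 bytes total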
The other correction is to use the ToLocalScriptSize as our estimate for
htlc timeout/success scripts, as they are the same script. Previously,
our estimate was derived from the proper script, though we were 6 bytes
shy of the new to-local estimate, since we counted the csv_delay as 1
byte, and missed some other OP_DATAs.
All derived estimates have been updated based on the new and improved
ToLocalScriptSize estimate, and we fix some estimates that did not
include the witness length.
Finally, we correct some weight miscalculations in:
- AcceptedHtlcTimeoutWitnessSize: missing data push lengths
- OfferedHtlcSuccessWitnessSize: extra 73 byte sig, missing data push lengths
- OfferedHtlcPenaltyWitnessSize: missing 33 byte pubkey
In this commit, we fix a slight race condition that can occur when we go
to add a shell node for a node announcement, but then right afterwards,
a new block arrives that causes us to prune an unconnected node. To
ensure this doesn't happen, we now add shell nodes within the same db
transaction as AddChannelEdge. This ensures that the state is fully
consistent and shell nodes will be added atomically along with the new
channel edge.
As a result of this change, we no longer need to add shell nodes within
the ChannelRouter, as the database will take care of this operation as
it should.
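Conceptually, the insertion now happens in one transaction (a
simplified sketch with hypothetical helpers):

    err := db.Update(func(tx *bolt.Tx) error {
        // Create a shell node for any endpoint we haven't yet seen a
        // node announcement for, inside the same transaction...
        for _, pub := range [][]byte{edge.Node1Pub, edge.Node2Pub} {
            if !nodeExists(tx, pub) {
                if err := addShellNode(tx, pub); err != nil {
                    return err
                }
            }
        }

        // ...so a prune triggered by a new block can never observe
        // the edge without its endpoint nodes.
        return putChannelEdge(tx, edge)
    })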
In this commit, we fix an existing bug in the pruneGraphNodes method.
Before this commit, if a node was involved in a channel, but only one of
the edges was advertised, then either it or the other node would be
erroneously pruned from the graph. They shouldn't be pruned, as there's
still an edge connecting them, even though only half of that edge is
actually advertised.
In order to fix this, we'll now do two passes: the first pass will
populate a ref count map with all known nodes in the graph, and the
second pass will increment a node's ref count each time it appears as
the endpoint of a channel edge. With this two-pass method, we ensure
that nodes are only deleted if there are absolutely no edges pointing
to them within the graph.
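A sketch of the two passes (helper names hypothetical):

    refCounts := make(map[[33]byte]int)

    // Pass 1: seed the map so every known node starts at zero refs.
    for _, node := range allNodes(tx) {
        refCounts[node.PubKey] = 0
    }

    // Pass 2: bump both endpoints of every channel edge, no matter
    // how many halves of the edge are advertised.
    for _, info := range allChannelEdges(tx) {
        refCounts[info.Node1]++
        refCounts[info.Node2]++
    }

    // Prune only the nodes that no edge points to.
    for pub, refs := range refCounts {
        if refs == 0 {
            deleteNode(tx, pub)
        }
    }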
In this commit, we extend the TestPruneGraphNodes test to also cover the
case where a node is involved in a channel, but only a single edge for
that channel has been advertised. In order to test this, we add an
additional node to the graph, along with a new channel. However, this
channel will only have a single edge advertised. As a result, when we
prune the set of nodes, the only node that should be pruned is the one
that didn't have any edges at all.