This commit modifies the default BatchTicker
implementation such that it will generate a
new ticker with each call to Start(). This
allows us to create a new ticker after
releasing an old one due to the batch
being empty.
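As a rough sketch of the idea (illustrative only, not lnd's exact
Ticker interface), each call to Start simply constructs a fresh
time.Ticker:

    package ticker

    import "time"

    // BatchTicker here is a pared-down stand-in: Start creates a new
    // time.Ticker on every call, so a ticker that was stopped and
    // released earlier can always be restarted.
    type BatchTicker struct {
        interval time.Duration
        ticker   *time.Ticker
    }

    // Start creates a brand new underlying ticker and returns its
    // channel.
    func (t *BatchTicker) Start() <-chan time.Time {
        t.ticker = time.NewTicker(t.interval)
        return t.ticker.C
    }

    // Stop releases the current ticker's resources; a later call to
    // Start will create a new one.
    func (t *BatchTicker) Stop() {
        if t.ticker != nil {
            t.ticker.Stop()
        }
    }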
In this commit, we prevent the htlcManager from
being woken up by the batchTicker when there is no
work to be done. Profiling has shown that a significant
portion of CPU time was spent servicing these needless
wakeups, since the batch ticker fires endlessly whether
or not there is work pending. We resolve this by only
selecting on the batch ticker when we have a
non-empty batch of downstream packets from the
switch.
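The fix leans on a standard Go idiom: a receive from a nil channel
blocks forever, so the ticker case is only armed while the batch is
non-empty. A simplified sketch with illustrative names:

    package htlcswitch

    import "time"

    // batchLoop only wakes on ticker events when there are pending
    // packets to flush; while the batch is empty, tick stays nil and
    // that select case can never fire.
    func batchLoop(downstream <-chan interface{}, ticker *time.Ticker,
        flush func([]interface{})) {

        var batch []interface{}
        for {
            // Only arm the ticker case when we have a non-empty batch.
            var tick <-chan time.Time
            if len(batch) > 0 {
                tick = ticker.C
            }

            select {
            case pkt, ok := <-downstream:
                if !ok {
                    return
                }
                batch = append(batch, pkt)

            case <-tick:
                flush(batch)
                batch = batch[:0]
            }
        }
    }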
Corrects an instance where we held a reference to a boltdb
byte slice after returning from the transaction. Such slices
are only valid for the life of the transaction, so this can
cause panics under certain conditions. We avoid this by
creating a copy of the key.
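A minimal sketch of the pattern, assuming the bbolt API and an
illustrative bucket name:

    package channeldb

    import (
        bolt "github.com/coreos/bbolt"
    )

    // fetchFirstKey copies the key out of the mmap'd region before the
    // transaction closes, so the returned slice remains valid for the
    // caller.
    func fetchFirstKey(db *bolt.DB, bucket []byte) ([]byte, error) {
        var keyCopy []byte
        err := db.View(func(tx *bolt.Tx) error {
            b := tx.Bucket(bucket)
            if b == nil {
                return nil
            }

            k, _ := b.Cursor().First()
            if k != nil {
                // The slice returned by boltdb must not escape the
                // closure, so we copy it first.
                keyCopy = make([]byte, len(k))
                copy(keyCopy, k)
            }
            return nil
        })
        return keyCopy, err
    }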
In this commit, we allow the gossiper syncer to store the chunk size for
its respective encoding type. We do this to prevent a race condition
that would otherwise arise when the unit tests modify the values of the
encodingTypeToChunkSize map to allow for easier testing.
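Schematically, the syncer now snapshots its chunk size once at
construction instead of reading the shared map on every reply (names
below are illustrative):

    package discovery

    // gossipSyncer here is a pared-down stand-in: chunkSize is fixed
    // at creation for the syncer's encoding type, so later test-time
    // edits to the shared map cannot race with a running syncer.
    type gossipSyncer struct {
        chunkSize int32
    }

    func newGossipSyncer(encoding uint64,
        encodingTypeToChunkSize map[uint64]int32) *gossipSyncer {

        return &gossipSyncer{
            chunkSize: encodingTypeToChunkSize[encoding],
        }
    }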
This commit corrects our exit hop logic to return
FailFinalExpiryTooSoon if the following check is true:
pd.Timeout-expiryGraceDelta <= heightNow
Previously we returned FailFinalIncorrectCltvExpiry, which
should only be returned if the packet was misconstructed.
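Expressed as a small helper using the names above (a sketch, not the
actual switch code):

    package htlcswitch

    // finalExpiryTooSoon sketches the corrected check: once the HTLC's
    // absolute timeout, less our grace delta, has been reached at the
    // current height, the exit hop must fail with
    // FailFinalExpiryTooSoon.
    func finalExpiryTooSoon(timeout, expiryGraceDelta,
        heightNow uint32) bool {

        return timeout-expiryGraceDelta <= heightNow
    }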
This commit makes sure that channels being force closed are also put
into the state "waiting close" before the commitment transaction is
confirmed, and leave this state once it confirms.
This was previously not checked, as this code path was added before the
"waiting close" state was introduced.
This commit fixes a flake within the integration tests, where we would
mine a set of blocks before checking if Bob's sweep tx was in the
mempool. Usually this would pass, since the blocks were generated before
the tx hit the miner's mempool; but sometimes the tx arrived first, got
mined into one of those blocks, and was gone by the time we checked the
mempool.
This commit fixes this by waiting for Bob's sweep tx to appear in the
mempool before mining any blocks, which we can do immediately since his
funds are not time locked.
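The shape of the fix, sketched with btcd's rpcclient (the helper name
and signature are illustrative, not lnd's actual test harness):

    package itest

    import (
        "fmt"
        "time"

        "github.com/btcsuite/btcd/chaincfg/chainhash"
        "github.com/btcsuite/btcd/rpcclient"
    )

    // waitForTxInMempool polls the miner's mempool until a tx (Bob's
    // sweep) appears or the timeout expires; blocks are only mined
    // once this returns successfully.
    func waitForTxInMempool(miner *rpcclient.Client,
        timeout time.Duration) (*chainhash.Hash, error) {

        deadline := time.After(timeout)
        for {
            select {
            case <-deadline:
                return nil, fmt.Errorf("tx not found in mempool")
            case <-time.After(50 * time.Millisecond):
                mempool, err := miner.GetRawMempool()
                if err != nil {
                    return nil, err
                }
                if len(mempool) > 0 {
                    return mempool[0], nil
                }
            }
        }
    }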
This commit replaces the debug Config struct with an empty
one, so that the command line flags are hidden in production
builds.
Production help before commit:
Tor:
--tor.active
--tor.socks=
--tor.dns=
--tor.streamisolation
--tor.control=
--tor.v2
--tor.v2privatekeypath=
--tor.v3
hodl:
--hodl.exit-settle
--hodl.add-incoming
--hodl.settle-incoming
--hodl.fail-incoming
--hodl.add-outgoing
--hodl.settle-outgoing
--hodl.fail-outgoing
--hodl.commit
--hodl.bogus-settle
Help Options:
-h, --help
Production help after commit:
Tor:
--tor.active
--tor.socks=
--tor.dns=
--tor.streamisolation
--tor.control=
--tor.v2
--tor.v2privatekeypath=
--tor.v3
Help Options:
-h, --help
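The mechanism here is Go build tags: a file guarded by the debug tag
defines the full Config struct, while its production counterpart
defines an empty one, leaving the flag parser nothing to render. A
sketch with illustrative file contents:

    // hodl/config_dev.go
    // +build debug

    package hodl

    // Config exposes the developer-only flags in debug builds.
    type Config struct {
        ExitSettle bool `long:"exit-settle" description:"..."`
    }

    // hodl/config_prod.go
    // +build !debug

    package hodl

    // Config is empty in production builds, so no hodl flags are
    // surfaced on the command line.
    type Config struct{}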
This commit fixes a potential race condition within the
IncomingContestResolver that could cause us to miss a
preimage that was delivered in time.
Currently we query the db for the preimage, and then
subscribe for notifications. This permits the following
ordering of events:
- query for preimage, returns nothing
- preimage is added and delivered to subscribers
- subscribe to preimages
- preimage never comes through!!
We fix this by reordering to subscribe for preimages and
then query just in case it already exists. The effect is
that the query will always return a valid read of the
preimages that are currently queued for delivery.
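In sketch form, with an illustrative stand-in interface for the
preimage registry:

    package contractcourt

    import (
        "crypto/sha256"
        "errors"
    )

    // preimageRegistry is an illustrative stand-in for the witness
    // beacon the resolver actually uses.
    type preimageRegistry interface {
        SubscribeUpdates() (<-chan [32]byte, func(), error)
        LookupPreimage(payHash [32]byte) ([32]byte, bool)
    }

    // resolvePreimage sketches the corrected ordering: subscribe
    // first, then do the initial lookup. Any preimage added between
    // the two steps now shows up in one place or the other, never in
    // neither.
    func resolvePreimage(reg preimageRegistry,
        payHash [32]byte) ([32]byte, error) {

        updates, cancel, err := reg.SubscribeUpdates()
        if err != nil {
            return [32]byte{}, err
        }
        defer cancel()

        // The subscription is already live, so this read cannot race
        // with a concurrent delivery.
        if preimage, ok := reg.LookupPreimage(payHash); ok {
            return preimage, nil
        }

        // Otherwise, wait for the preimage to be delivered.
        for preimage := range updates {
            if sha256.Sum256(preimage[:]) == payHash {
                return preimage, nil
            }
        }
        return [32]byte{}, errors.New("preimage subscription closed")
    }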
In this commit, we correct our size estimates for to-local scripts,
which are used on the commitment transaction and the htlc
success/timeout transactions. There have been observed cases of
transactions getting stuck because our estimates were too low, causing
the transactions to not be relayed.
Our previous estimate for the commitment to-local script was derived
from an older version of the script. Though that estimate was greater
than the actual size, it has been updated to the current estimate of 79
bytes.
This estimate makes the assumption that CSV delays will be at most
4 bytes when serialized. Since this value is expressed in relative block
heights, this should be more than sufficient for our needs, even though
the maximum possible size for the little-endian int64 is 9 bytes (plus
an OP_DATA).
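For reference, the 79 bytes can be derived from the standard BOLT-3
to-local script layout; the constant below is a sketch mirroring the
estimate described above, assuming 33-byte key pushes and a csv_delay
of at most 4 bytes:

    package lnwallet

    // ToLocalScriptSize is a worst-case size estimate for the to-local
    // script:
    //
    //    OP_IF                     1 byte
    //      OP_DATA + revoke_key    1 + 33 bytes
    //    OP_ELSE                   1 byte
    //      OP_DATA + csv_delay     1 + 4 bytes
    //      OP_CHECKSEQUENCEVERIFY  1 byte
    //      OP_DROP                 1 byte
    //      OP_DATA + delay_key     1 + 33 bytes
    //    OP_ENDIF                  1 byte
    //    OP_CHECKSIG               1 byte
    const ToLocalScriptSize = 1 + 1 + 33 + 1 + 1 + 4 + 1 + 1 +
        1 + 33 + 1 + 1 // = 79 bytes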
The other correction is to use the ToLocalScriptSize as our estimate for
htlc timeout/success scripts, as they are the same script. Previously,
our estimate was derived from the proper script, though we were 6 bytes
shy of the new to-local estimate, since we counted the csv_delay as 1
byte, and missed some other OP_DATAs.
All derived estimates have been updated based on the new and improved
ToLocalScriptSize estimate, and we fix some estimates that did not
include the witness length itself.
Finally, we correct some weight miscalculations in:
- AcceptedHtlcTimeoutWitnessSize: missing data push lengths
- OfferedHtlcSuccessWitnessSize: extra 73 byte sig, missing data push lengths
- OfferedHtlcPenaltyWitnessSize: missing 33 byte pubkey
In this commit, we fix a slight race condition that can occur when we go
to add a shell node for a node announcement, but then right afterwards,
a new block arrives that causes us to prune an unconnected node. To
ensure this doesn't happen, we now add shell nodes within the same db
transaction as AddChannelEdge. This ensures that the state is fully
consistent and shell nodes will be added atomically along with the new
channel edge.
As a result of this change, we no longer need to add shell nodes within
the ChannelRouter, as the database will take care of this operation as
it should.
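Heavily simplified, the write path now looks something like this
(bucket names and value encoding are illustrative):

    package channeldb

    import bolt "github.com/coreos/bbolt"

    var (
        nodeBucket = []byte("graph-node")
        edgeBucket = []byte("graph-edge")
    )

    // addChannelEdge inserts the shell nodes inside the same Update
    // closure that writes the edge, so a concurrent block-triggered
    // prune can never observe the edge without its nodes.
    func addChannelEdge(db *bolt.DB, chanID, node1, node2 []byte) error {
        return db.Update(func(tx *bolt.Tx) error {
            nodes, err := tx.CreateBucketIfNotExists(nodeBucket)
            if err != nil {
                return err
            }

            // Insert unadvertised "shell" entries for any endpoint we
            // haven't yet seen a node announcement for.
            for _, pub := range [][]byte{node1, node2} {
                if nodes.Get(pub) == nil {
                    if err := nodes.Put(pub, []byte{}); err != nil {
                        return err
                    }
                }
            }

            edges, err := tx.CreateBucketIfNotExists(edgeBucket)
            if err != nil {
                return err
            }
            return edges.Put(chanID, append(node1, node2...))
        })
    }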
In this commit, we fix an existing bug in the pruneGraphNodes method.
Before this commit, if a node was involved in a channel, but only one of
the edges was advertised, then either it or the other node would be
erroneously pruned from the graph. Neither should be pruned, as there's
still an edge connecting them, even though only 1/2 of the edge is
actually advertised.
In order to fix this, we'll now do two passes: the first pass will
populate a ref count map of all known nodes in the graph, and the second
pass will increment the ref count each time a node is found as the
endpoint of a channel edge. With this two pass method, we ensure that
nodes are only deleted if there are absolutely no edges pointing to them
within the graph.
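Over plain maps, the two passes reduce to the following sketch (the
real implementation works against the bolt-backed graph):

    package channeldb

    // pruneGraphNodesSketch returns the nodes that should be deleted.
    func pruneGraphNodesSketch(nodes map[string]struct{},
        edgeEndpoints [][2]string) []string {

        // Pass 1: every known node starts with a ref count of zero.
        refCounts := make(map[string]int, len(nodes))
        for node := range nodes {
            refCounts[node] = 0
        }

        // Pass 2: each edge endpoint bumps its node's ref count,
        // regardless of how many of the edge's two directions are
        // actually advertised.
        for _, endpoints := range edgeEndpoints {
            refCounts[endpoints[0]]++
            refCounts[endpoints[1]]++
        }

        // Only nodes with no edges at all get pruned.
        var pruned []string
        for node, count := range refCounts {
            if count == 0 {
                pruned = append(pruned, node)
            }
        }
        return pruned
    }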
In this commit, we extend the TestPruneGraphNodes test to also test the
case of when a node is involved in a channel, but only a single edge for
that channel has been advertised. In order to test this, we add an
additional node to the graph, and also a new channel. However, this
channel will only have a single edge advertised. As a result, when we
prune the graph's nodes, the only node that should be removed is the one
that didn't have any edges at all.