This new method allows outside callers to sample the current state of
the gossipSyncer in a concurrent-safe manner. In order to achieve this,
we now only modify the g.state variable atomically.
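A minimal sketch of the pattern (not the actual lnd code; the syncState and setState names are illustrative):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// syncerState is a numeric label for the syncer's current stage.
type syncerState uint32

type gossipSyncer struct {
	// state is only ever read or written through the atomic package.
	state uint32
}

// syncState samples the current state without taking a lock, so it is
// safe to call from any goroutine.
func (g *gossipSyncer) syncState() syncerState {
	return syncerState(atomic.LoadUint32(&g.state))
}

// setState transitions the syncer to a new state atomically.
func (g *gossipSyncer) setState(s syncerState) {
	atomic.StoreUint32(&g.state, uint32(s))
}

func main() {
	g := &gossipSyncer{}
	g.setState(syncerState(3))
	fmt.Println(g.syncState())
}
```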
In this commit, we ensure that all indexes for a particular channel have
any relevant keys deleted once a channel is removed from the database.
Before this commit, if we pruned a channel due to closing, then its
entry in the channel update index would never be removed.
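A hedged sketch of the shape of this cleanup, using bbolt with illustrative bucket names rather than channeldb's real schema:

```go
package sketch

import (
	bolt "go.etcd.io/bbolt"
)

// deleteChannelIndexes removes every key that references the pruned
// channel from the per-channel indexes, including the channel update
// index that was previously skipped.
func deleteChannelIndexes(tx *bolt.Tx, chanID, updateKey []byte) error {
	if edgeIndex := tx.Bucket([]byte("edge-index")); edgeIndex != nil {
		if err := edgeIndex.Delete(chanID); err != nil {
			return err
		}
	}

	// Before the fix described above, this index was left untouched,
	// leaving orphaned entries behind after a channel close.
	if updateIndex := tx.Bucket([]byte("edge-update-index")); updateIndex != nil {
		if err := updateIndex.Delete(updateKey); err != nil {
			return err
		}
	}

	return nil
}
```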
In this commit, we add a new command line option to allow nodes (ideally
routing nodes) to disable receiving up-to-date channel updates
altogether. This may be desired, as it allows routing nodes to save on
bandwidth since they don't need the channel updates to passively forward
HTLCs. In the scenario that they _do_ want to update their routing
policies, the first failed HTLC due to a policy inconsistency will then
allow the routing node to propagate the new update to potential nodes
trying to route through it.
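As a rough illustration only, here is the kind of option this describes, expressed with the standard flag package; the flag name and plumbing are hypothetical, not lnd's actual config handling:

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Hypothetical flag; the real option lives in lnd's config parsing.
	noChanUpdates := flag.Bool("nochanupdates", false,
		"don't receive channel updates from peers; saves bandwidth "+
			"for nodes that only passively forward HTLCs")
	flag.Parse()

	if *noChanUpdates {
		fmt.Println("gossip filter will exclude channel updates")
	}
}
```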
In this commit, we create a new concrete implementation for the new
discovery.ChannelGraphTimeSeries interface. We also export the
createChannelAnnouncement method to allow the chanSeries struct to
re-use the existing code for creating wire messages from the database
structs.
In this commit, we add a new database migration required to update old
databases to the version of the database that tracks the update index
for the nodes and edge policies. The migration is straightforward: we
simply need to populate the new indexes for all the nodes, and then all
the edges.
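A hedged sketch of what such a migration can look like with bbolt; the bucket names, key layout, and lastUpdate callback are illustrative, not lnd's actual schema:

```go
package sketch

import (
	"encoding/binary"

	bolt "go.etcd.io/bbolt"
)

// migrateNodeUpdateIndex walks all existing nodes and writes an entry of
// the form updateTime || pubKey into the new index, so later horizon
// queries can find them.
func migrateNodeUpdateIndex(tx *bolt.Tx, lastUpdate func(node []byte) uint64) error {
	nodes := tx.Bucket([]byte("graph-nodes"))
	if nodes == nil {
		return nil
	}

	updateIndex, err := tx.CreateBucketIfNotExists(
		[]byte("node-update-index"),
	)
	if err != nil {
		return err
	}

	return nodes.ForEach(func(pubKey, nodeBytes []byte) error {
		// Skip nested buckets, which have a nil value.
		if nodeBytes == nil {
			return nil
		}

		// Index key: 8-byte big-endian timestamp followed by the
		// node's public key.
		indexKey := make([]byte, 8+len(pubKey))
		binary.BigEndian.PutUint64(indexKey[:8], lastUpdate(nodeBytes))
		copy(indexKey[8:], pubKey)

		return updateIndex.Put(indexKey, nil)
	})
}
```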
In this commit, we add a series of methods, and a new database index
that we'll use to implement the new discovery.ChannelGraphTimeSeries
interface. The primary change is that we now maintain two new
indexes tracking the last update time for each node, and the last update
time for each edge. These two indexes allow us to implement the
NodeUpdatesInHorizon and ChanUpdatesInHorizon methods. The remaining
methods added simply utilize the existing database indexes to allow us to
respond to any peer gossip range queries.
A set of new unit tests has been added to exercise the added logic.
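A hedged sketch of how a time-keyed index supports such horizon queries; the key layout (8-byte big-endian timestamp followed by an identifier) is an assumption for illustration, not channeldb's exact format:

```go
package sketch

import (
	"bytes"
	"encoding/binary"
	"time"

	bolt "go.etcd.io/bbolt"
)

// updatesInHorizon scans a time-keyed index whose keys are assumed to be
// updateTime || id, returning the ids of entries updated in [start, end].
func updatesInHorizon(tx *bolt.Tx, indexName []byte,
	start, end time.Time) [][]byte {

	index := tx.Bucket(indexName)
	if index == nil {
		return nil
	}

	var startKey, endKey [8]byte
	binary.BigEndian.PutUint64(startKey[:], uint64(start.Unix()))
	binary.BigEndian.PutUint64(endKey[:], uint64(end.Unix()))

	var ids [][]byte
	c := index.Cursor()
	for k, _ := c.Seek(startKey[:]); k != nil &&
		bytes.Compare(k[:8], endKey[:]) <= 0; k, _ = c.Next() {

		// The node pubkey or channel ID trails the 8-byte timestamp.
		ids = append(ids, k[8:])
	}

	return ids
}
```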
In this commit, we update the logic in the AuthenticatedGossiper to
ensure that it can properly create, manage, and dispatch messages to any
gossipSyncer instances created by the server.
With this set of changes, the gossiper now has complete knowledge of the
current set of peers we're connected to that support the new range
queries. Upon initial connect, InitSyncState will be called by the
server if the new peer understands the set of gossip queries. This will
then create a new spot in the peerSyncers map for the new syncer. For
each new gossip query message, we'll then attempt to dispatch the
message directly to the gossip syncer. When the peer has disconnected,
we then expect the server to call the PruneSyncState method which will
allow us to free up the resources.
Finally, when we go to broadcast messages, we'll send the messages
directly to the peers that have gossipSyncer instances active, so they
can properly be filtered out. For those that don't, we'll broadcast
directly, ensuring we skip *all* peers that have an active gossip
syncer.
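A minimal sketch of this bookkeeping with stubbed types; only InitSyncState, PruneSyncState, and the peerSyncers map come from the description above, the rest is illustrative:

```go
package sketch

import "sync"

// gossipSyncer is a stub standing in for the per-peer syncer.
type gossipSyncer struct {
	peer [33]byte
}

func newGossipSyncer(peer [33]byte) *gossipSyncer {
	return &gossipSyncer{peer: peer}
}

// ProcessQueryMsg is a hypothetical entry point for gossip query messages.
func (g *gossipSyncer) ProcessQueryMsg(msg interface{}) {}

type gossiper struct {
	sync.Mutex
	peerSyncers map[[33]byte]*gossipSyncer
}

// InitSyncState is called by the server when a peer that understands
// gossip queries connects, reserving a spot in the peerSyncers map.
func (g *gossiper) InitSyncState(peer [33]byte) {
	g.Lock()
	defer g.Unlock()

	if g.peerSyncers == nil {
		g.peerSyncers = make(map[[33]byte]*gossipSyncer)
	}
	if _, ok := g.peerSyncers[peer]; !ok {
		g.peerSyncers[peer] = newGossipSyncer(peer)
	}
}

// PruneSyncState is called by the server once the peer disconnects,
// freeing the syncer's resources.
func (g *gossiper) PruneSyncState(peer [33]byte) {
	g.Lock()
	defer g.Unlock()

	delete(g.peerSyncers, peer)
}

// dispatchQuery routes a gossip query message directly to the peer's
// syncer; it reports false if the peer has no active syncer.
func (g *gossiper) dispatchQuery(peer [33]byte, msg interface{}) bool {
	g.Lock()
	syncer, ok := g.peerSyncers[peer]
	g.Unlock()

	if !ok {
		return false
	}

	syncer.ProcessQueryMsg(msg)
	return true
}
```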
In this commit, we introduce a new struct, the gossipSyncer. The role of
this struct is to encapsulate the state machine required to implement
the new gossip query range feature recently added to the spec. With this
change, each peer that knows of this new feature will have a new
goroutine that will be managed by the gossiper.
Once created and started, the gossipSyncer will start to progress
through each possible state, finally ending at the chansSynced stage. In
this stage, it has synchronized state with the remote peer, and is
simply awaiting any new messages from the gossiper to send directly to
the peer. Each message will only be sent if the remote peer actually has
a set update horizon, and the message isn't before or after that
horizon.
A set of unit tests has been added to ensure that two state machines
properly terminate and synchronize channel state.
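A hedged sketch of the progression and the horizon check; aside from chansSynced, the state names and the passesHorizon helper are approximations, not the real gossipSyncer code:

```go
package sketch

import "time"

// syncerState enumerates the stages the syncer walks through; only
// chansSynced is named in the commit above, the intermediate names are
// approximations.
type syncerState uint32

const (
	// syncingChans: ask the remote peer for its known channel range.
	syncingChans syncerState = iota

	// waitingQueryRangeReply: awaiting the reply to our range query.
	waitingQueryRangeReply

	// queryNewChannels: fetch announcements for channels we lack.
	queryNewChannels

	// waitingQueryChanReply: awaiting replies to those channel queries.
	waitingQueryChanReply

	// chansSynced: fully synced; only forward gossip that falls within
	// the remote peer's update horizon.
	chansSynced
)

// passesHorizon reports whether a timestamped gossip message falls inside
// the horizon the remote peer asked for (a first timestamp plus a range
// in seconds). A message outside this window is not forwarded.
func passesHorizon(msgTime, firstTimestamp time.Time, timestampRange uint32) bool {
	endTime := firstTimestamp.Add(time.Duration(timestampRange) * time.Second)
	return !msgTime.Before(firstTimestamp) && !msgTime.After(endTime)
}
```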
In this commit, we add recognition of the data loss protected feature
bit. We already implement the full feature set, but had never added the
bit to our set of known features.
This commit changes the decayed log tests to create
a new temporary database for each test. Previously, all
instances referenced the same db path. Since the tests
are run in parallel, the tests would create/delete the
shared db out from under each other, causing flakes in
the unit tests.
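A minimal sketch of the per-test setup this describes; the helper name is illustrative:

```go
package sketch

import (
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"
)

// makeTempDBPath gives each test its own database path under a fresh
// temporary directory, so parallel tests no longer clobber a shared file.
func makeTempDBPath(t *testing.T) (string, func()) {
	tempDir, err := ioutil.TempDir("", "decayedlog")
	if err != nil {
		t.Fatalf("unable to create temp dir: %v", err)
	}

	cleanup := func() {
		os.RemoveAll(tempDir)
	}

	return filepath.Join(tempDir, "decayed.db"), cleanup
}
```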
This commit adds a test which will restore a channel from an OpenChannel
struct at various stages of the state transition cycle, ensuring the
HTLC local and remote add heights are restored properly.
This commit fixes a bug which would cause the add heights of the HTLCs
in the update log to be set wrongly. At times, an add height could be
incorrectly set, leading to the HTLCs not being accounted for correctly
when evaluating the HTLC views. This was caused by the assumption that
if the HTLC was not on the pending remote commit, then it was locked in
on both the local and the remote commit, which is not always true.
Instead of making this assumption, we instead now inspect the three
commits: the local, remote and pending remote; and set the add heights
accordingly. This should ensure that HTLCs are subtracted from the
balances only when they are first added.
In this commit, we fix a recently introduced bug. The issue is that
while we're failing the link, the peer we're attempting to force close
on may disconnect. As a result, if the peerTerminationWatcher exits
before we can add to the wait group (it's waiting on that), then we'll
run into a panic as we're attempting to increment the wait group while
another goroutine is calling wait.
The fix is to first check that the server isn't shutting down, and then
use the server's wait group rather than the peer's to synchronize
goroutines.
Fixes #1285.
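A hedged sketch of the shape of the fix; the types and the failLinkAndForceClose name are illustrative, not the actual server code:

```go
package sketch

import (
	"errors"
	"sync"
)

var errServerShuttingDown = errors.New("server shutting down")

type server struct {
	wg   sync.WaitGroup
	quit chan struct{}
}

// failLinkAndForceClose bails out early if the server is quitting, and
// registers the goroutine on the server's wait group so the Add can't
// race a peer-level Wait.
func (s *server) failLinkAndForceClose(forceClose func()) error {
	select {
	case <-s.quit:
		// The peerTerminationWatcher may already be blocked waiting
		// on the peer; adding to its wait group here could panic, so
		// refuse instead.
		return errServerShuttingDown
	default:
	}

	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		forceClose()
	}()

	return nil
}
```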
In this commit, we add a new index to the HTLC log. This new index is
meant to ensure that we don't attempt to modify an HTLC twice. An HTLC
modification is either a fail or a settle. This is the first in a series
of commits to fix an existing bug in the state machine that can cause a
panic if a remote node attempts to settle an HTLC twice.
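A minimal sketch of the idea behind the new index; the updateLog fields and method shown here are illustrative, not the actual lnwallet code:

```go
package sketch

import "fmt"

// updateLog is a stub carrying just the new bookkeeping: the set of HTLC
// indexes that have already had a settle or fail applied.
type updateLog struct {
	modifiedHTLCs map[uint64]struct{}
}

// markHtlcModified records a settle/fail for the HTLC, rejecting a second
// modification instead of letting the state machine panic later on.
func (l *updateLog) markHtlcModified(htlcIndex uint64) error {
	if _, ok := l.modifiedHTLCs[htlcIndex]; ok {
		return fmt.Errorf("htlc %d was already settled or failed",
			htlcIndex)
	}

	l.modifiedHTLCs[htlcIndex] = struct{}{}
	return nil
}
```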
Before the previous commit, we assumed the HTLC's timeout transaction
would be the only transaction in the mempool. In reality, after mining
some blocks for the HTLC to expire and waiting for the timeout
transaction to arrive in the mempool, at times we would instead detect
the funding output's sweeping transaction and proceed with the test under this
assumption, leading to the case where we would have to mine extra blocks
to include the HTLC sweeping transaction. This has been resolved in the
previous commit, so this fix is no longer needed.
This reverts commit e54f1ea4dbe59b2e53a94774995ae1711746c2f8.
In this commit, we address an existing flake that would be triggered
when testing HTLC timeouts. After force closing a channel and generating
enough blocks to expire an HTLC, we would wait for a transaction to
arrive in the mempool and assumed it was the timeout transaction.
Instead, we'd detect the funding output sweep transaction and attempt to
proceed with the test under the incorrect assumption that the timeout
transaction had been broadcast.