The policy update logic that resided partly in the gossiper and
partly in the rpc server is extracted into its own object.
This prepares for additional validation logic to be added for policy
updates that would otherwise make the gossiper heavier.
It is also a small first step towards separation of our own channel data
from the rest of the graph.
As a preparation for making the gossiper less responsible for validating
and supplementing local channel policy updates, this commit moves the
on-the-fly max htlc migration up the call tree. The plan for a follow up
commit is to move it out of the gossiper completely for local channel
updates, so that we don't need to return a list of final applied policies
anymore.
In this commit, we fix a bug where, if a user updates a forwarding policy to a
zero value, the update is applied to the policy correctly on-disk, but not
in-memory.
We solve this issue by having the gossiper return the list of on-disk updated
policies and passing these policies to the switch, so the switch can assume
that zero-valued fields are intentional and not just uninitialized.
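As a rough sketch of that flow (the type and function names below are illustrative, not lnd's actual API), the idea is simply that the switch consumes the policies exactly as they were persisted:

    package sketch

    // appliedPolicy is a hypothetical, pared-down stand-in for the forwarding
    // policy fields that end up on disk.
    type appliedPolicy struct {
        ChanID        uint64
        BaseFeeMsat   int64
        FeeRatePPM    int64
        TimeLockDelta uint16
    }

    // updateChanPolicies asks the gossiper-side logic for the exact policies it
    // persisted and hands those same values to the switch. Because the switch
    // sees the persisted values verbatim, a zero field is applied as intentional
    // rather than treated as uninitialized.
    func updateChanPolicies(persist func() ([]appliedPolicy, error),
        applyToSwitch func([]appliedPolicy) error) error {

        policies, err := persist()
        if err != nil {
            return err
        }
        return applyToSwitch(policies)
    }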
There's no need to broadcast these as we assume that online nodes have
already received them. For nodes that were offline, they should receive
them as part of their initial graph sync.
This ensures that the graph synced status is marked true at some point
once a historical sync has completed. Before this commit, a stalled
historical sync could cause us to never mark the graph as synced.
In this commit, we modify the `SampleNodeAddrs` method to no longer
retry itself. Instead, we'll now leave this task to the caller of this
method. Additionally, we'll no longer return with an error if we
can't hit a particular seed. Instead, we'll log the error and move on to
the next seed. Finally, we'll also no longer require that the DNS seed
has a secondary seed in order to support a wider array of DNS seeds.
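A sketch of the new per-seed behavior, with illustrative names (the real method carries more context than a bare hostname list): a failing seed is logged and skipped rather than aborting the sample, and any retrying is left to the caller:

    package sketch

    import (
        "log"
        "net"
    )

    // sampleNodeAddrs queries each DNS seed exactly once. A failing seed is
    // logged and skipped instead of aborting the whole sample, and there is no
    // retry loop here anymore: callers decide whether and when to try again.
    func sampleNodeAddrs(seeds []string) []net.IP {
        var addrs []net.IP
        for _, seed := range seeds {
            ips, err := net.LookupIP(seed)
            if err != nil {
                log.Printf("unable to query seed %v: %v", seed, err)
                continue // move on to the next seed
            }
            addrs = append(addrs, ips...)
        }
        return addrs
    }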
We do this to ensure the node announcement propagates to our channel
counterparty. At times, the node announcement does not propagate to them
when opening our first channel due to a race condition between
IsPublicNode and processing announcement signatures. This isn't
necessary for channel updates and announcement signatures as we send
those to our channel counterparty directly through the reliable sender.
TestSyncManagerHistoricalSyncOnReconnect tests that the sync manager will
re-trigger a historical sync when a new peer connects after a historical
sync has completed, but we have lost all peers.
To handle the case where we have been without peers, and get a new
connection, we reset the historical scan booleans when the first active
syncer is connected to trigger another historical sync.
Prior to this change, the numQueryResponses that we calculated would be
one more than what we actually wanted since it didn't account for the
initial QueryChannelRange msg. This resulted in the test sending one
more delayed query than was configured. This doesn't fundamentally
impact the test, but does make what happens in the test more reflective
of the configuration.
This commit makes all replies in the gossip syncer synchronous, meaning
that they will wait for each message to be successfully written to the
remote peer before attempting to send the next. This helps throttle
messages the remote peer has requested, preventing unintended
disconnects when the remote peer is slow to process messages. This
change also helps reduce congestion in the peer by forcing the syncer to
buffer the messages instead of dumping them into the peer's queue.
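A simplified sketch of the resulting send loop (the sendSync callback is a stand-in for the peer's synchronous write, not lnd's actual API): each reply blocks until the previous one has been written, which is what throttles the backlog:

    package sketch

    // replyMsg is a stand-in for a queued gossip reply message.
    type replyMsg struct{}

    // sendReplies writes each reply and waits for the write to complete before
    // sending the next. sendSync is assumed to return only once the message has
    // been written to the remote peer (or failed), so a slow peer naturally
    // slows us down instead of having its queue flooded.
    func sendReplies(replies []replyMsg, sendSync func(replyMsg) error) error {
        for _, msg := range replies {
            if err := sendSync(msg); err != nil {
                return err
            }
        }
        return nil
    }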
This commit creates a distinct replyHandler, completely isolating the
requesting state machine from the processing of queries from the remote
peer. Before, the two were interlaced, and the syncer could only reply to
messages in certain states. Now the two will be completely separated,
which is a preliminary step to make the replies synchronous (as otherwise
we would be blocking our own requesting state machine).
With this change, the channelGraphSyncer of each peer will drive the
replyHandler of the other. The two can now operate independently, or
even be spun up conditionally depending on advertised support for gossip
queries, as shown below:
         A                                        B

   channelGraphSyncer ---control-msg--->
                                              replyHandler
   channelGraphSyncer <--control-msg----
              gossiper <--gossip-msgs----

                       <--control-msg---- channelGraphSyncer
   replyHandler
                       ---control-msg---> channelGraphSyncer
                       ---gossip-msgs---> gossiper
In this commit, we begin to queue any active syncers until the initial
historical sync has completed. We do this to ensure we can properly
handle any new channel updates at tip. This is required for fresh nodes
that are syncing the channel graph for the first time. If we begin
accepting updates at tip while the initial historical sync is still
ongoing, then we risk not processing certain updates since we've yet to
learn of the channels themselves.
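A condensed sketch of that gating, with illustrative names: newly added active syncers are parked until the initial historical sync signals completion, and only then started:

    package sketch

    import "sync"

    // syncer is a stand-in for a GossipSyncer that can be started later.
    type syncer struct {
        start func()
    }

    // syncManager parks active syncers until the initial historical sync has
    // signalled completion, then releases them, so updates at tip are only
    // processed once the backlog of channels is known.
    type syncManager struct {
        mu              sync.Mutex
        initialSyncDone bool
        pendingActive   []*syncer
    }

    func (m *syncManager) addActiveSyncer(s *syncer) {
        m.mu.Lock()
        defer m.mu.Unlock()

        if !m.initialSyncDone {
            // Queue the syncer until the historical sync finishes.
            m.pendingActive = append(m.pendingActive, s)
            return
        }
        s.start()
    }

    func (m *syncManager) markInitialSyncDone() {
        m.mu.Lock()
        defer m.mu.Unlock()

        m.initialSyncDone = true
        for _, s := range m.pendingActive {
            s.start()
        }
        m.pendingActive = nil
    }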
In this commit, we add logic to handle a peer with whom we're performing
an initial historical sync disconnecting. This is required to ensure we
get as much of the graph as possible when starting a fresh node. It will
also serve useful to ensure we do not get stalled once we prevent active
GossipSyncers from starting until the initial historical sync has
completed.
Now that the roundRobinHandler is no longer present, this commit aims to
clean up and simplify some of the logic surrounding initializing/tearing
down new/stale GossipSyncers from the SyncManager. Along the way, we
also synchronize these calls with the syncerHandler, which will serve
useful in future work that allows us to recover from initial historical
sync disconnections.
Since ActiveSync GossipSyncers no longer synchronize our state with the
remote peers, none of the logic surrounding the round-robin is required
within the SyncManager.
In this commit, we remove the ability for ActiveSync GossipSyncers to
synchronize our graph with our remote peers. This serves as a starting
point towards allowing the daemon to only synchronize our graph through
historical syncs, which will be routinely done by the SyncManager.
In this commit, we extend the gossiper with support for external callers
to provide optional fields that can serve as useful when processing a
specific network announcement. This will serve useful for light clients,
which are unable to obtain the channel point and capacity for a given
channel, but can provide them manually for their own set of channels.
Assuming a graph size of 50,000 channels, an interval of 20 minutes
would cause nodes to consume about 600MB per month in bandwidth doing
these routine historical sync spot checks. In this commit, we increase
the interval to one hour, which consumes about 200MB per month.
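Back-of-envelope with the figures above: a 20-minute interval means roughly
2,160 historical syncs per month, so 600MB works out to roughly 280KB per
sync; at a one-hour interval (about 720 syncs per month) the same per-sync
cost comes to roughly 200MB.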
In this commit, we modify the main loop in `processChanPolicyUpdate` to
send updates for private channels directly to the remote peer via the
reliable message sender. This fixes a prior issue where the remote peer
wouldn't receive new updates as this method doesn't go through the
traditional path for channel updates.
In this commit, we add a new test case to exercise a recent bug fix to
ensure that we no longer broadcast private channel policy changes. Along
the way, a few helper functions were added to slim down the test to the
core logic compared to some of the existing tests in this package. In
the future, these new helper functions should be utilized more widely for
tests in this package in order to cut down on some of the duplicated
logic.
This commit reduces the number of channels a syncer will request from
the remote node in a single QueryShortChanIDs message. The current size
is derived from the chunkSize, which is meant to signal the maximum
number of short chan ids that can fit in a single ReplyChannelRange
message. For EncodingSortedPlain, this number is 8000, and we use the
same number to dictate the size of the batch from the remote peer.
We modify this by introducing a separately configurable batchSize, so
that both can be tuned independently. The value is chosen to reduce the
amount of buffering the remote party will perform, only requiring them
to queue 500 responses, as opposed to 8000. In turn, this reduces large
spikes in allocation on the remote node at the expense of a few extra
round trips for the control messages. However, this will be negligible since
the control messages are much smaller than the messages being returned.
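Illustrative constants only (the real names and exact values live in the syncer configuration): the encoding-derived chunk size stays at 8000 short channel IDs per ReplyChannelRange, while the new batch size caps a single QueryShortChanIDs at 500:

    package sketch

    const (
        // chunkSizeSortedPlain is the maximum number of short channel IDs that
        // fit in a single ReplyChannelRange under the plain encoding.
        chunkSizeSortedPlain = 8000

        // batchSize caps how many short channel IDs we request in one
        // QueryShortChanIDs, so the remote peer only ever has to buffer about
        // 500 responses at a time rather than 8000.
        batchSize = 500
    )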
In this commit, we address a bug where we'd attempt to replace the
stale active syncer when it transitioned to a passive syncer. This
replacement logic is only intended to happen when the active syncer
disconnects, as rotateActiveSyncerCandidate chooses and queues its own
replacement.
As required by the spec:
> SHOULD send all gossip messages whose timestamp is greater or equal to
first_timestamp, and less than first_timestamp plus timestamp_range.
In this commit, we introduce a new subsystem for the gossiper: the
SyncManager. This subsystem is a major overhaul of the way the daemon
drives the graph query sync state machine with its peers.
Along with this subsystem, we also introduce the concept of an active
syncer. An active syncer is simply a GossipSyncer currently operating
under an ActiveSync sync type. Before this commit, all GossipSyncers
would act as active syncers, which means that we were receiving new
graph updates from all of them. This isn't necessary, and it greatly
increases bandwidth usage as the network grows. The SyncManager changes
this by requiring a specific number of active syncers. Once we reach
this specified number, any future peers will have a GossipSyncer with a
PassiveSync sync type.
It is responsible for three main things:
1. Choosing different peers randomly to receive graph updates from to
ensure we don't only receive them from the same set of peers.
2. Choosing different peers to force a historical sync with to ensure we
have as much of the public network as possible. The first syncer
registered with the manager will also attempt a historical sync.
3. Managing an in-order queue of active syncers where the next cannot be
started until the current one has completed its state machine to ensure
they don't overlap and request the same set of channels, which
significantly reduces bandwidth usage and addresses a number of issues.
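A compressed view of how such a manager might be configured, using illustrative field names rather than the exact lnd config:

    package sketch

    import "time"

    // syncManagerCfg captures the knobs behind the three responsibilities
    // above: how many peers actively push us updates, how often we rotate
    // them, and how often we force a historical sync spot check.
    type syncManagerCfg struct {
        // NumActiveSyncers bounds how many peers run with an ActiveSync type;
        // all other peers default to PassiveSync.
        NumActiveSyncers int

        // RotateInterval controls how often a different peer is chosen to
        // receive graph updates from.
        RotateInterval time.Duration

        // HistoricalSyncInterval controls how often some peer is picked for a
        // full historical sync spot check.
        HistoricalSyncInterval time.Duration
    }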
In this commit, we introduce another feature to the GossipSyncer in
which it can deliver a signal to an external caller once it reaches its
terminal chansSynced state. This is yet to be used, but will serve
useful with a round-robin sync mechanism, where we wait for it to finish
syncing with a specific peer before moving on to the next.
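A minimal sketch of such a completion signal (method names are hypothetical): callers get a channel that is closed once the syncer enters chansSynced, which a round-robin driver can block on:

    package sketch

    import "sync"

    // gossipSyncerSketch holds just the pieces needed to illustrate the
    // completion signal.
    type gossipSyncerSketch struct {
        mu           sync.Mutex
        synced       bool
        syncedSignal chan struct{}
    }

    // resetSyncedSignal returns a channel that is closed once the syncer
    // reaches its terminal chansSynced state, so a round-robin coordinator can
    // block on it before starting the next syncer.
    func (g *gossipSyncerSketch) resetSyncedSignal() <-chan struct{} {
        g.mu.Lock()
        defer g.mu.Unlock()

        signal := make(chan struct{})
        if g.synced {
            // Already synced: close immediately so callers never block.
            close(signal)
            return signal
        }
        g.syncedSignal = signal
        return signal
    }

    // markChansSynced is called by the state machine upon entering chansSynced
    // and fires any pending signal.
    func (g *gossipSyncerSketch) markChansSynced() {
        g.mu.Lock()
        defer g.mu.Unlock()

        g.synced = true
        if g.syncedSignal != nil {
            close(g.syncedSignal)
            g.syncedSignal = nil
        }
    }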
In this commit, we introduce the ability for gossip syncers to perform
historical syncs. This allows us to reconcile any channels we're missing
that the remote peer has starting from the genesis block of the chain.
This commit serves as a prerequisite to the SyncManager, introduced in a
later commit, where we'll be able to make spot checks by performing
historical syncs with peers to ensure we have as much of the graph as
possible.
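In gossip-query terms, a historical sync amounts to a channel range query covering the whole chain. A hedged sketch using lnwire-style field names (local types, not the actual wire structs):

    package sketch

    import "math"

    // queryChannelRange mirrors the shape of the wire message used to ask a
    // peer which channels it knows of within a block range.
    type queryChannelRange struct {
        FirstBlockHeight uint32
        NumBlocks        uint32
    }

    // genHistoricalChanRangeQuery asks for every channel the remote peer knows
    // of: the range starts at the genesis block and spans the entire chain.
    func genHistoricalChanRangeQuery() queryChannelRange {
        return queryChannelRange{
            FirstBlockHeight: 0,
            NumBlocks:        math.MaxUint32,
        }
    }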
In this commit, we introduce the ability for GossipSyncers to
transition their sync type. This allows us to be more flexible with our
gossip syncers, as we can now prevent them from receiving new graph
updates at any time. It's now possible to transition between the
different sync types, as long as the GossipSyncer has reached its
terminal chansSynced sync state. Certain transitions require some
additional wire messages to be sent, like in the case of an ActiveSync
GossipSyncer transitioning to a PassiveSync type.
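A sketch of the transition rule (the helper below is hypothetical, not lnd's implementation): transitions are only honored from the terminal chansSynced state, and going from active to passive sends the wire message that stops the remote peer from forwarding new updates:

    package sketch

    import "errors"

    type syncerState int

    const (
        // chansSynced is the terminal state; other states are elided here.
        chansSynced syncerState = iota
    )

    type syncType int

    const (
        activeSync syncType = iota
        passiveSync
    )

    // processSyncTransition flips the syncer's sync type, sending whatever
    // wire messages the transition requires. sendUnsubscribe stands in for
    // sending a gossip timestamp range that tells the remote peer to stop
    // forwarding new updates.
    func processSyncTransition(state syncerState, from, to syncType,
        sendUnsubscribe func() error) (syncType, error) {

        if state != chansSynced {
            return from, errors.New("syncer not in terminal chansSynced state")
        }
        if from == activeSync && to == passiveSync {
            if err := sendUnsubscribe(); err != nil {
                return from, err
            }
        }
        return to, nil
    }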
With the introduction of the gossip sync manager in a later commit,
retrieving the backlog of updates within the last hour is no longer
necessary as we'll be forcing full syncs periodically.
In this commit, we introduce a new type: SyncerType. This type denotes
the type of sync a GossipSyncer is currently under. We only introduce
the two possible entry states, ActiveSync and PassiveSync. An ActiveSync
GossipSyncer will exchange channels with the remote peer and receive new
graph updates from them, while a PassiveSync GossipSyncer will not and
will only respond to the remote peer's queries.
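The two entry states can be pictured as an enum-style type along these lines (a sketch consistent with the description above, not necessarily the exact declaration):

    package sketch

    // SyncerType denotes the type of sync a GossipSyncer is currently
    // operating under.
    type SyncerType uint8

    const (
        // ActiveSync syncers exchange channels with the remote peer and
        // receive new graph updates from them.
        ActiveSync SyncerType = iota

        // PassiveSync syncers do neither, and only respond to the remote
        // peer's queries.
        PassiveSync
    )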
This commit does not modify the behavior and is only meant to be a
refactor.
In this commit, we address an assumption of the gossiper's recently
introduced reliable sender. The reliable sender is currently only used
for messages of unannounced channels. This makes sense as peers should
be able to retrieve messages from the network if they've previously
announced. However, within isMsgStale, we assumed that the reliable
sender would be used for every ChannelUpdate being sent, even if the
channel is already announced. Due to this, checking if the policy is
stale was unnecessary. But since this isn't the case, we should actually
be checking whether it is stale to prevent sending it later on.
In this commit, we address an issue with our router mock in which it was
not properly storing and retrieving edge policies. Previously, they were
being appended to a slice of policies, but this doesn't work when
attempting to update the same edge twice. Instead, the mock now keeps at
most two entries per edge, each one being the latest version of one
direction.
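The gist of the fix, sketched with simplified types: the mock keys policies by channel and direction and overwrites on update, rather than appending:

    package sketch

    // edgePolicy is a simplified stand-in for a channel edge policy; the low
    // bit of Flags indicates which of the two directions it applies to.
    type edgePolicy struct {
        ChannelID uint64
        Flags     uint8
    }

    // mockRouter keeps at most two policies per channel: the latest version of
    // each direction. Updating the same edge twice overwrites the old version
    // instead of appending a duplicate.
    type mockRouter struct {
        edges map[uint64][2]*edgePolicy
    }

    func newMockRouter() *mockRouter {
        return &mockRouter{edges: make(map[uint64][2]*edgePolicy)}
    }

    func (r *mockRouter) updateEdge(p *edgePolicy) {
        dirs := r.edges[p.ChannelID]
        dirs[p.Flags&1] = p
        r.edges[p.ChannelID] = dirs
    }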
In this commit, we leverage the recently introduced zombie edge index to
quickly reject announcements for edges we've previously deemed as
zombies. Care has been taken to ensure we don't reject fresh updates for
edges we've considered zombies.
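A sketch of the gating check (helper and parameters are illustrative): announcements for known zombies are rejected quickly, unless the update is strictly newer than what we knew of when the edge was marked a zombie:

    package sketch

    import "time"

    // shouldRejectAsZombie quickly rejects an announcement for an edge we've
    // previously marked as a zombie, while still letting genuinely fresh
    // updates through so a revived channel isn't ignored forever.
    func shouldRejectAsZombie(isZombie bool, zombieLastUpdate,
        updateTimestamp time.Time) bool {

        if !isZombie {
            return false
        }
        // Only reject if the update isn't newer than what we knew of when the
        // edge was marked a zombie.
        return !updateTimestamp.After(zombieLastUpdate)
    }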
In this commit, we also allow channel updates for our channels to be
sent reliably to our channel counterparty. This is especially crucial
for private channels, since they're not announced, in order to ensure
each party can receive funds from the other side.
In this commit, we implement a new subsystem for the gossiper that
uses some of the existing logic for resending channel announcement
signatures and implements it in a way to make it message-agnostic,
meaning that any type of message can be resent. Along the way we also
modify the way this works to prevent multiple goroutines per peer _and_
message.
A peerHandler will be spawned for each peer that we attempt to reliably
send messages to. This handler is responsible for managing requests
to reliably send messages to a peer while also taking the peer's
connection lifecycle into account by requesting notifications for when
the peer connects/disconnects. A peer connection notification is first
requested to determine when we should attempt to send any pending
messages. After the messages are sent, a peer disconnection notification
is requested to ensure we don't continue to request connection
notifications while the peer remains connected. Once there are no more
pending messages left to be sent for a given peer, the peerHandler can
be torn down.
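A highly condensed sketch of that lifecycle (the notification helpers are stand-ins for the daemon's connect/disconnect subscriptions):

    package sketch

    // peerHandlerSketch outlines the loop described above: wait for the peer
    // to connect, flush pending messages, then wait for a disconnect so that
    // we stop requesting connection notifications while the peer stays
    // online. Once nothing is left to send, the handler exits.
    func peerHandlerSketch(
        waitForConnect func() <-chan struct{},
        waitForDisconnect func() <-chan struct{},
        sendPending func() (remaining int),
    ) {
        for {
            // Ask to be told when the peer comes online, then block.
            <-waitForConnect()

            // The peer is online: try to deliver everything we have.
            if sendPending() == 0 {
                // No pending messages remain, so tear the handler down.
                return
            }

            // Some messages are still pending; wait for the peer to
            // disconnect before requesting another connection notification.
            <-waitForDisconnect()
        }
    }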
In this commit, we add a new store within the database that'll be
responsible for storing gossip messages which we need to reliably send
to peers. This aims to replace the current messageStore that exists
within the gossiper, so much of this logic is borrowed from there.
One of the main differences between the two is that we now index
messages with a new key format in which we take into account the
message's type. This allows us to store different messages for a
specific channel with a peer. The old key format is still supported in
order to prevent a database migration.
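The new composite key can be sketched as follows (a hedged illustration of the components involved, not the exact byte layout used in lnd):

    package sketch

    import "encoding/binary"

    // messageStoreKey builds the composite key used to index a reliably-sent
    // message: the destination peer, the channel the message concerns, and the
    // wire message type, so different message types for the same channel and
    // peer no longer collide.
    func messageStoreKey(peerPubKey [33]byte, shortChanID uint64,
        msgType uint16) []byte {

        key := make([]byte, 33+8+2)
        copy(key[:33], peerPubKey[:])
        binary.BigEndian.PutUint64(key[33:41], shortChanID)
        binary.BigEndian.PutUint16(key[41:], msgType)
        return key
    }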