Commit Graph

230 Commits

Author SHA1 Message Date
Olaoluwa Osuntokun
921eea9f57
discovery: update mockGraphSource to implement ForAllOutgoingChannels 2019-04-10 17:05:47 -07:00
Olaoluwa Osuntokun
aba32de1f4
discovery: set AnnSigner for mock signer in testCtx 2019-04-10 17:05:44 -07:00
Olaoluwa Osuntokun
13b91e6ea1 discovery: properly set short chan IDs for ann sigs in tests 2019-04-10 17:05:37 -07:00
Olaoluwa Osuntokun
3e5a6f1022
Merge pull request #2905 from cfromknecht/split-chunk-size
discovery: make batch size distinct from chunk size, reduce to 500
2019-04-09 19:27:20 -07:00
Conner Fromknecht
a4b4fe666a
discovery: make batch size distinct from chunk size, reduce to 500
This commit reduces the number of channels a syncer will request from
the remote node in a single QueryShortChanIDs message. The current size
is derived from the chunkSize, which is meant to signal the maximum
number of short chan ids that can fit in a single ReplyChannelRange
message. For EncodingSortedPlain, this number is 8000, and we use the
same number to dictate the size of the batch from the remote peer.

We modify this by introducing a separately configurable batchSize, so
that both can be tuned independently. The value is chosen to reduce the
amount of buffering the remote party will perform, only requiring them
to queue 500 responses as opposed to 8000. In turn, this reduces large
spikes in allocation on the remote node at the expense of a few extra
round trips for the control messages. However, this cost will be
negligible since the control messages are much smaller than the
messages being returned.
2019-04-06 15:27:26 -07:00
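
As a rough illustration of the split (a standalone sketch, not lnd's actual syncer code; names such as nextQueryBatch are made up), the channel IDs learned from ReplyChannelRange chunks of up to 8000 are re-requested from the peer in batches of 500:

```go
package main

import "fmt"

const (
	chunkSize = 8000 // max short chan IDs per ReplyChannelRange reply
	batchSize = 500  // max short chan IDs requested per QueryShortChanIDs
)

// nextQueryBatch pops at most batchSize short channel IDs from the queue of
// channels still missing locally.
func nextQueryBatch(pending []uint64) (batch, rest []uint64) {
	if len(pending) <= batchSize {
		return pending, nil
	}
	return pending[:batchSize], pending[batchSize:]
}

func main() {
	// Pretend the peer's ReplyChannelRange chunks left us with 1200
	// unknown channels; they'll be fetched in batches of 500, 500, 200.
	pending := make([]uint64, 1200)
	for i := range pending {
		pending[i] = uint64(i)
	}
	for len(pending) > 0 {
		var batch []uint64
		batch, pending = nextQueryBatch(pending)
		fmt.Printf("sending QueryShortChanIDs for %d channels\n", len(batch))
	}
}
```
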
Conner Fromknecht
9df6af237e
discovery/sync_manager: restore lazy gossip sends 2019-04-06 03:32:03 -07:00
Wilmer Paulino
00338c5ec2
discovery: properly handle SyncManager shutdown signal 2019-04-03 19:32:56 -07:00
Wilmer Paulino
46ceaf8cf6
discovery: only replace stale active syncer if disconnected
In this commit, we address a bug where we'd attempt to replace the
stale active syncer when it transitioned to a passive syncer. This
replacement logic is only intended to happen when the active syncer
disconnects, as rotateActiveSyncerCandidate chooses and queues its own
replacement.
2019-04-03 16:43:31 -07:00
Wilmer Paulino
8b6a9bb5d3
discovery: make timestamp range check inclusive within FilterGossipMsgs
As required by the spec:

> SHOULD send all gossip messages whose timestamp is greater or equal to
first_timestamp, and less than first_timestamp plus timestamp_range.
2019-04-03 15:44:46 -07:00
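
A minimal sketch of the inclusive check the spec calls for (not lnd's actual FilterGossipMsgs):

```go
// withinHorizon reports whether a gossip message's timestamp falls inside the
// peer's requested range per the spec text above: inclusive lower bound,
// exclusive upper bound. Standalone sketch, not lnd's FilterGossipMsgs.
func withinHorizon(ts, firstTimestamp, timestampRange uint32) bool {
	// Do the upper-bound arithmetic in 64 bits so it cannot wrap around.
	end := uint64(firstTimestamp) + uint64(timestampRange)
	return ts >= firstTimestamp && uint64(ts) < end
}
```
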
Wilmer Paulino
70be812747
discovery+server: use new gossiper's SyncManager subsystem 2019-04-03 15:44:43 -07:00
Wilmer Paulino
a188657b2f
discovery: introduce gossiper SyncManager subsystem
In this commit, we introduce a new subsystem for the gossiper: the
SyncManager. This subsystem is a major overhaul of the way the daemon
drives the graph query sync state machine with its peers.

Along with this subsystem, we also introduce the concept of an active
syncer. An active syncer is simply a GossipSyncer currently operating
under an ActiveSync sync type. Before this commit, all GossipSyncers
would act as active syncers, which means that we were receiving new
graph updates from all of them. This isn't necessary, as it greatly
increases bandwidth usage as the network grows. The SyncManager changes
this by requiring a specific number of active syncers. Once we reach
this specified number, any future peers will have a GossipSyncer with a
PassiveSync sync type.

The SyncManager is responsible for three main things:

1. Choosing different peers at random to receive graph updates from,
ensuring we don't only receive them from the same set of peers.

2. Choosing different peers to force a historical sync with to ensure we
have as much of the public network as possible. The first syncer
registered with the manager will also attempt a historical sync.

3. Managing an in-order queue of active syncers, where the next cannot
be started until the current one has completed its state machine. This
ensures they don't overlap and request the same set of channels, which
significantly reduces bandwidth usage and addresses a number of issues.
2019-04-03 15:08:32 -07:00
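
For illustration, the behavior described above suggests a configuration surface roughly like the following; the field names are assumptions, not necessarily lnd's real SyncManager config:

```go
package discoverysketch

import "time"

// syncManagerCfg is a hypothetical sketch of the knobs implied by the commit
// message above; lnd's real configuration may name these differently.
type syncManagerCfg struct {
	// NumActiveSyncers caps how many peers run with an ActiveSync sync
	// type; any peer beyond this count gets a PassiveSync GossipSyncer.
	NumActiveSyncers int

	// RotateInterval is how often a random active syncer is swapped for a
	// passive one, so graph updates don't always come from the same peers.
	RotateInterval time.Duration

	// HistoricalSyncInterval is how often a historical sync is forced with
	// a random peer to backfill any missing portion of the public graph.
	HistoricalSyncInterval time.Duration
}
```
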
Wilmer Paulino
e075817e44
discovery: introduce GossipSyncer signal delivery of chansSynced state
In this commit, we introduce another feature to the GossipSyncer in
which it can deliver a signal to an external caller once it reaches its
terminal chansSynced state. This is yet to be used, but will prove
useful with a round-robin sync mechanism, where we wait for one syncer
to finish syncing with a specific peer before moving on to the next.
2019-04-03 15:08:31 -07:00
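
One common Go pattern for delivering such a signal is to close a channel on entering the terminal state; this is a sketch of the idea, not lnd's exact implementation:

```go
// gossipSyncerSketch shows one common Go pattern for delivering the signal
// described above; it is not lnd's exact implementation.
type gossipSyncerSketch struct {
	// synced is closed exactly once when the terminal chansSynced state is
	// reached, releasing every goroutine blocked in waitUntilSynced.
	synced chan struct{}
}

// notifySynced is invoked by the state machine upon entering chansSynced.
func (s *gossipSyncerSketch) notifySynced() {
	close(s.synced)
}

// waitUntilSynced blocks the external caller until the syncer has finished.
func (s *gossipSyncerSketch) waitUntilSynced() {
	<-s.synced
}
```
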
Wilmer Paulino
042241dc48
discovery: allow gossip syncer to perform historical syncs
In this commit, we introduce the ability for gossip syncers to perform
historical syncs. This allows us to reconcile any channels we're missing
that the remote peer has starting from the genesis block of the chain.
This commit serves as a prerequisite to the SyncManager, introduced in a
later commit, where we'll be able to make spot checks by performing
historical syncs with peers to ensure we have as much of the graph as
possible.
2019-04-03 15:08:30 -07:00
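
In sketch form, a historical sync simply asks the remote peer for the channel range covering the whole chain, from genesis to the current tip (illustrative types, following the BOLT #7 query_channel_range shape):

```go
// queryChannelRangeSketch mirrors the shape of a BOLT #7 query_channel_range
// request; field names are illustrative rather than lnd's exact lnwire types.
type queryChannelRangeSketch struct {
	FirstBlockHeight uint32
	NumBlocks        uint32
}

// historicalQuery covers the chain from the genesis block to the current tip,
// which is what lets us reconcile channels we may have missed entirely.
func historicalQuery(bestHeight uint32) queryChannelRangeSketch {
	return queryChannelRangeSketch{
		FirstBlockHeight: 0,              // start at genesis
		NumBlocks:        bestHeight + 1, // span the entire chain
	}
}
```
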
Wilmer Paulino
ca4fbd598c
discovery: introduce GossipSyncer sync transitions
In this commit, we introduce the ability for GossipSyncers to
transition their sync type. This allows us to be more flexible with our
gossip syncers, as we can now prevent them from receiving new graph
updates at any time. It's now possible to transition between the
different sync types, as long as the GossipSyncer has reached its
terminal chansSynced sync state. Certain transitions require some
additional wire messages to be sent, like in the case of an ActiveSync
GossipSyncer transitioning to a PassiveSync type.
2019-04-03 15:08:29 -07:00
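
A sketch of the guard described above, with illustrative names: the transition is only honoured once the syncer's state machine has reached chansSynced:

```go
package discoverysketch

import "errors"

type syncerType uint8

const (
	activeSync syncerType = iota
	passiveSync
)

type syncerState uint8

const (
	syncingChans syncerState = iota // still exchanging channel ranges
	chansSynced                     // terminal state
)

type gossipSyncer struct {
	state    syncerState
	syncType syncerType
}

var errSyncNotDone = errors.New("transition requires terminal chansSynced state")

// processSyncTransition sketches the guard described above: a sync type change
// is only honoured once the state machine has finished in chansSynced. Names
// here are illustrative, not lnd's exact API.
func (g *gossipSyncer) processSyncTransition(newType syncerType) error {
	if g.state != chansSynced {
		return errSyncNotDone
	}

	// An ActiveSync -> PassiveSync transition would also send additional
	// wire messages to the remote peer here before the new type applies.
	g.syncType = newType
	return nil
}
```
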
Wilmer Paulino
acc42c1b68
discovery: set GossipSyncer update horizon to current time
With the introduction of the gossip sync manager in a later commit,
retrieving the backlog of updates within the last hour is no longer
necessary as we'll be forcing full syncs periodically.
2019-04-03 15:08:28 -07:00
Wilmer Paulino
8d7c0a9899
discovery: replace GossipSyncer syncChanUpdates flag with SyncerType
In this commit, we introduce a new type: SyncerType. This type denotes
the type of sync a GossipSyncer is currently under. We only introduce
the two possible entry states, ActiveSync and PassiveSync. An ActiveSync
GossipSyncer will exchange channels with the remote peer and receive new
graph updates from them, while a PassiveSync GossipSyncer will not and
will only respond to the remote peer's queries.

This commit does not modify the behavior and is only meant to be a
refactor.
2019-04-03 15:08:27 -07:00
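
As a sketch, the new type amounts to a small Go enum (lnd's actual declarations may differ):

```go
// SyncerType denotes whether a GossipSyncer actively receives new graph
// updates or only answers the remote peer's gossip queries. A sketch of the
// enum described above; lnd's actual declarations may differ slightly.
type SyncerType uint8

const (
	// ActiveSync: exchange channels with the peer and receive new updates.
	ActiveSync SyncerType = iota

	// PassiveSync: only respond to the remote peer's queries.
	PassiveSync
)

// String implements fmt.Stringer for log output.
func (t SyncerType) String() string {
	switch t {
	case ActiveSync:
		return "ActiveSync"
	case PassiveSync:
		return "PassiveSync"
	default:
		return "UnknownSync"
	}
}
```
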
Wilmer Paulino
7e92b9a4e2
discovery: export gossipSyncer 2019-04-03 15:08:26 -07:00
Wilmer Paulino
d954cfc4ba
discovery: include peerPub in gossipSyncerCfg 2019-04-03 15:08:25 -07:00
Wilmer Paulino
c72db902f0
discovery: replace waitPredicate with lntest version 2019-04-03 15:08:12 -07:00
Wilmer Paulino
0ab97957ea discovery: check if stale within isMsgStale for ChannelUpdate messages
In this commit, we address an assumption of the gossiper's recently
introduced reliable sender. The reliable sender is currently only used
for messages of unannounced channels. This makes sense, as peers should
be able to retrieve messages from the network if the channel has
previously been announced. However, within isMsgStale, we assumed that
the reliable sender would be used for every ChannelUpdate being sent,
even if the channel is already announced, in which case checking whether
the policy is stale would be unnecessary. Since this isn't the case, we
should actually check whether it is stale to prevent resending it later
on.
2019-03-28 17:22:23 -07:00
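
The staleness check in question reduces to comparing the update's timestamp against the policy already stored for that channel direction; a sketch with illustrative types:

```go
// knownPolicy is an illustrative stand-in for the edge policy we already have
// stored for one direction of a channel.
type knownPolicy struct {
	timestamp uint32
}

// isChanUpdateStale sketches the check described above: a ChannelUpdate is
// stale if a policy for the same channel direction with an equal or newer
// timestamp is already on record. Not lnd's actual isMsgStale.
func isChanUpdateStale(have *knownPolicy, updTimestamp uint32) bool {
	if have == nil {
		// Nothing stored yet, so the update cannot be stale.
		return false
	}
	return updTimestamp <= have.timestamp
}
```
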
Wilmer Paulino
93414bd27a discovery: clean up TestSignatureAnnouncementRetryAtStartup
In this commit, we remove code from
TestSignatureAnnouncementRetryAtStartup that is not crucial to the
assumptions the test is exercising.
2019-03-28 17:22:23 -07:00
Wilmer Paulino
95ed11b01b discovery: properly store and retrieve edge policies within mockGraphSource
In this commit, we address an issue with our router mock in which it was
not properly storing and retrieving edge policies. Previously, they were
being appended to a slice of policies, but this doesn't work correctly
when, for example, the same edge is updated twice. Instead, the slice
now contains at most two entries, each one being the latest version of
one direction.
2019-03-28 17:22:22 -07:00
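
One way to express the "latest version of each direction" behavior is to key policies by channel and direction instead of appending to a slice; the types below are illustrative, not the mock's actual code:

```go
type edgePolicy struct {
	timestamp uint32
}

type edgeKey struct {
	chanID    uint64
	direction uint8 // 0 or 1: one entry per direction
}

type policyStore struct {
	policies map[edgeKey]*edgePolicy
}

// updateEdgePolicy overwrites any existing entry so only the latest version of
// each direction is retained; updating the same edge twice no longer appends.
func (s *policyStore) updateEdgePolicy(chanID uint64, dir uint8, p *edgePolicy) {
	if s.policies == nil {
		s.policies = make(map[edgeKey]*edgePolicy)
	}
	s.policies[edgeKey{chanID: chanID, direction: dir}] = p
}
```
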
Wilmer Paulino
3a2c4ec594 discovery: add missing offline peer check before sending message reliably 2019-03-28 17:21:28 -07:00
Wilmer Paulino
3e81e89062 discovery: add log when attempting to send msg reliably and peer is offline 2019-03-28 17:21:28 -07:00
Wilmer Paulino
5cec4513de
discovery: reject announcements for known zombie edges
In this commit, we leverage the recently introduced zombie edge index to
quickly reject announcements for edges we've previously deemed as
zombies. Care has been taken to ensure we don't reject fresh updates for
edges we've considered zombies.
2019-03-27 13:08:03 -07:00
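
In sketch form, the check consults the zombie index first but still lets a sufficiently fresh update through (names and the exact freshness rule are assumptions, not lnd's implementation):

```go
// zombieIndex is an illustrative view of the index described above: it reports
// whether an edge was marked as a zombie and the timestamp of the last update
// seen for it.
type zombieIndex interface {
	IsZombieEdge(chanID uint64) (isZombie bool, lastSeen uint32)
}

// shouldRejectUpdate quickly rejects announcements for known zombie edges,
// while still letting a fresh update through so it can resurrect the edge.
func shouldRejectUpdate(idx zombieIndex, chanID uint64, updTimestamp uint32) bool {
	isZombie, lastSeen := idx.IsZombieEdge(chanID)
	if !isZombie {
		return false
	}
	return updTimestamp <= lastSeen
}
```
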
Wilmer Paulino
23796d3247
routing+discovery: extend ChannelGraphSource with zombie index methods 2019-03-27 13:07:30 -07:00
Conner Fromknecht
0ae06c8189
discovery+server: send lazy gossip msgs 2019-03-05 17:08:48 -08:00
Conner Fromknecht
f39edd8000
peer: add SendMessageLazy 2019-03-05 17:08:22 -08:00
Wilmer Paulino
12168f022e
server+discovery: send channel updates to remote peers reliably
In this commit, we also allow channel updates for our channels to be
sent reliably to our channel counterparty. This is especially crucial
for private channels, which are not announced to the network, to ensure
each party can receive funds from the other side.
2019-02-14 18:33:27 -08:00
Wilmer Paulino
4996d49118
server+discovery: use reliableSender to replace existing resend logic 2019-02-14 18:33:27 -08:00
Wilmer Paulino
2f679f6015
discovery/reliable_sender: implement message-agnostic reliable sender
In this commit, we implement a new subsystem for the gossiper that
uses some of the existing logic for resending channel announcement
signatures and implements it in a way to make it message-agnostic,
meaning that any type of message can be resent. Along the way we also
modify the way this works to prevent multiple goroutines per peer _and_
message.

A peerHandler will be spawned for each peer to which we attempt to send
a message reliably. This handler is responsible for managing requests
to reliably send messages to a peer while also taking the peer's
connection lifecycle into account by requesting notifications for when
the peer connects/disconnects. A peer connection notification is first
requested to determine when we should attempt to send any pending
messages. After the messages are sent, a peer disconnection notification
is requested to ensure we don't continue to request connection
notifications while the peer remains connected. Once there are no more
pending messages left to be sent for a given peer, the peerHandler can
be torn down.
2019-02-14 18:33:27 -08:00
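
The lifecycle described above maps naturally onto a per-peer goroutine; the following sketch uses placeholder primitives rather than lnd's real notification interfaces:

```go
// wireMsg and the peerHandler fields below are placeholders standing in for
// lnd's real notification and send primitives; the loop only illustrates the
// lifecycle described in the commit message.
type wireMsg interface{}

type peerHandler struct {
	// notifyOnline returns a channel that fires once when the peer next
	// connects; notifyOffline does the same for the next disconnection.
	notifyOnline  func() <-chan struct{}
	notifyOffline func() <-chan struct{}
	pendingMsgs   func() []wireMsg
	send          func(wireMsg) error
	quit          chan struct{}
}

func (h *peerHandler) run() {
	for {
		// Request a connection notification and wait for the peer.
		select {
		case <-h.notifyOnline():
		case <-h.quit:
			return
		}

		// Flush any messages that queued up while the peer was away.
		for _, msg := range h.pendingMsgs() {
			if err := h.send(msg); err != nil {
				break
			}
		}

		// Request a disconnect notification so we don't keep asking for
		// connection notifications while the peer remains connected.
		select {
		case <-h.notifyOffline():
		case <-h.quit:
			return
		}

		// Once no pending messages remain, the real handler can tear
		// itself down instead of looping again.
	}
}
```
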
Wilmer Paulino
6e556aa897
discovery/gossiper_test: prevent race conditions within mockGraphSource 2019-02-14 18:33:27 -08:00
Wilmer Paulino
73b4bc4b68
server+discovery: remove channeldb.DB reference within the gossiper
Now that we've replaced the built-in messageStore with the
channeldb.GossipMessageStore, the reference to channeldb.DB is no longer
needed.
2019-02-14 18:29:39 -08:00
Wilmer Paulino
2277535e6b
server+discovery: replace gossiper message store with MessageStore 2019-02-14 18:29:39 -08:00
Wilmer Paulino
847b064461
discovery/message_store: add gossip message store
In this commit, we add a new store within the database that'll be
responsible for storing gossip messages which we need to reliably send
to peers. This aims to replace the current messageStore that exists
within the gossiper, so much of this logic is borrowed from there.
One of the main differences between the two is that we now index
messages with a new key format in which we take into account the
message's type. This allows us to store different messages for a
specific channel with a peer. The old key format is still supported in
order to prevent a database migration.
2019-02-14 18:29:39 -08:00
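
A rough sketch of a key format that folds in the message type (the byte layout shown is an assumption for illustration, not the store's actual format):

```go
package discoverysketch

import "encoding/binary"

// messageStoreKey sketches a composite key that scopes a stored gossip message
// to a peer, a message type, and a channel, so different message types for the
// same channel don't overwrite each other.
func messageStoreKey(peerPub [33]byte, msgType uint16, shortChanID uint64) []byte {
	key := make([]byte, 0, 33+2+8)
	key = append(key, peerPub[:]...)

	var typ [2]byte
	binary.BigEndian.PutUint16(typ[:], msgType)
	key = append(key, typ[:]...)

	var cid [8]byte
	binary.BigEndian.PutUint64(cid[:], shortChanID)
	return append(key, cid[:]...)
}
```
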
Johan T. Halseth
7d34ce9d08
lnwire+multi: define HasMaxHtlc helper on msgFlags 2019-01-22 08:42:30 +01:00
Valentine Wallace
207c4f030a
discovery/gossiper: include max HTLC when rebroadcasting stale channel updates
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
2019-01-22 08:42:29 +01:00
Valentine Wallace
19c403711e
discovery/chan_series+utils: include max htlc when syncing with peers
In this commit, we ensure that max HTLC is included when we're
synchronizing ChannelUpdates with remote peers.
2019-01-22 08:42:28 +01:00
Valentine Wallace
513ac23479
discovery/gossiper: persist remote channel policy updates' max htlc 2019-01-22 08:42:28 +01:00
Valentine Wallace
15168c391e
discovery+routing: validate msg flags and max htlc in ChannelUpdates
In this commit, we alter the ValidateChannelUpdateAnn function in
ann_validation to validate a remote ChannelUpdate's message flags
and max HTLC field. If the message flag is set but the max HTLC
field is not set or vice versa, the ChannelUpdate fails validation.

Co-authored-by: Johan T. Halseth <johanth@gmail.com>
2019-01-22 08:42:27 +01:00
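
The rule reduces to a small consistency check; a sketch with an assumed flag bit:

```go
package discoverysketch

import "errors"

// maxHtlcFlagBit is an assumed bit position for the "max HTLC present" flag,
// used only for illustration.
const maxHtlcFlagBit uint8 = 1 << 0

// validateMaxHtlc sketches the rule described above: the message flag and the
// max HTLC field must either both be set or both be absent.
func validateMaxHtlc(msgFlags uint8, htlcMaximumMsat uint64) error {
	flagSet := msgFlags&maxHtlcFlagBit != 0
	fieldSet := htlcMaximumMsat != 0

	if flagSet != fieldSet {
		return errors.New("channel update max htlc flag and field disagree")
	}
	return nil
}
```
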
Valentine Wallace
f316cc6c7e
discovery/gossiper_test: set ChannelUpdate max htlc
In this commit, we alter the gossiper test's helper method
that creates channel updates to include the max htlc field
in the ChannelUpdates it creates.

Co-authored-by: Johan T. Halseth <johanth@gmail.com>
2019-01-22 08:42:26 +01:00
Valentine Wallace
7ab8900eb6
discovery/gossiper_test: mock AddEdge: set capacity
In this commit, we modify the mockGraphSource's `AddEdge`
method to set the capacity of the edge it's adding to be a large
capacity.

This will enable us to test the validation of each ChannelUpdate's
max HTLC, since future validation checks will ensure the specified
max HTLC is less than total channel capacity.
2019-01-22 08:42:26 +01:00
Valentine Wallace
0fd6004958
multi: partition lnwire.ChanUpdateFlag into ChannelFlags and MessageFlags
In this commit:

* we partition lnwire.ChanUpdateFlag into two (ChanUpdateChanFlags and
ChanUpdateMsgFlags), going from a uint16 to a pair of uint8s

* we rename the ChannelUpdate.Flags to ChannelFlags and add an
additional MessageFlags field, which will be used to indicate the
presence of the optional field HtlcMaximumMsat within the ChannelUpdate.

* we partition ChannelEdgePolicy.Flags into message and channel flags.
This change corresponds to the partitioning of the ChannelUpdate's Flags
field into MessageFlags and ChannelFlags.

Co-authored-by: Johan T. Halseth <johanth@gmail.com>
2019-01-22 08:42:26 +01:00
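
In sketch form, the partition looks roughly like this (names follow the commit message and may not match lnwire exactly):

```go
// The single uint16 flags field is split into two uint8 fields with distinct
// roles. Constant and field names here are illustrative.
type ChanUpdateMsgFlags uint8  // signals which optional fields are present
type ChanUpdateChanFlags uint8 // carries direction/disable bits for the channel

const (
	// ChanUpdateOptionMaxHtlc indicates that HtlcMaximumMsat is present.
	ChanUpdateOptionMaxHtlc ChanUpdateMsgFlags = 1 << 0
)

// channelUpdateSketch shows how the two flag fields sit alongside the optional
// max HTLC value in a ChannelUpdate-like message.
type channelUpdateSketch struct {
	MessageFlags    ChanUpdateMsgFlags
	ChannelFlags    ChanUpdateChanFlags
	HtlcMaximumMsat uint64 // only meaningful when the max HTLC flag is set
}
```
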
Johan T. Halseth
5dcd2a4530
gossiper: validate our own node announcement 2018-12-05 09:31:25 +01:00
Olaoluwa Osuntokun
640fe2558b
discovery: update TestReceiveRemoteChannelUpdateFirst due to stricter stale node checks
A recent commit modified the `IsNodeStale` method in the mocks to mirror
the actual implementation in the gossiper. As a result, we now expect
one less node announcement to be broadcast.
2018-12-03 21:04:28 -08:00
Olaoluwa Osuntokun
5ad2592891
Merge pull request #2123 from halseth/node-announcement-no-forward
[Tests] No forwarding of stale node announcements
2018-12-03 20:11:43 -08:00
Conner Fromknecht
5fc0a192ad
discovery/syncer: move CGTimeSeries to chan_series 2018-11-30 15:33:34 -08:00
Conner Fromknecht
1c81a8fad6
discovery/chan_series: copy chan_series to discovery 2018-11-30 15:33:34 -08:00
Olaoluwa Osuntokun
1fd3aac925
multi: switch from bolt package to bbolt package for all imports 2018-11-29 20:33:49 -08:00
Wilmer Paulino
55094f1470
discovery/gossiper: send node anns when constructing full chan proof
In this commit, we allow the gossiper to also broadcast the
corresponding node announcements, if we know of them, of a channel when
constructing its full proof. We do this to ensure peers (other than our
remote peer) receive all the relevant announcements for a channel.

The tests changes were made to ensure the new behavior introduced works
as intended. Previously, the node announcements for each test channel
announcement were not processed, so they never existed from the
gossiper's point of view.

This also addresses an existing flake in the integration test
`testNodeAnnouncement`. This problem arose due to the node announcement
being sent before the connection between Dave (node announcement sender)
and Alice (node announcement receiver) was initiated and the full
channel proof was constructed.
2018-11-11 17:48:07 -08:00