This commit modifies the nodeWithDist struct to use a route.Vertex
instead of a *channeldb.LightningNode. This change, coupled with
the new ForEachNodeChannel function, allows the Dijkstra-based findPath
algorithm to cut down on database lookups, since we no longer need to
call the FetchOtherNode function.
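A minimal sketch of the shape of this change (the local Vertex type and field
names are illustrative stand-ins, not the routing package's exact code):

    package routing

    // Vertex mirrors route.Vertex: a node's 33-byte serialized compressed
    // public key.
    type Vertex [33]byte

    // nodeWithDist is the distance-heap entry used during path finding. It
    // now carries only the vertex identifying the node rather than a full
    // *channeldb.LightningNode, so neighbours can be expanded through
    // ForEachNodeChannel without a FetchOtherNode database read.
    type nodeWithDist struct {
        // dist is the current best-known distance from the source node.
        dist float64

        // node is the public key (vertex) of the node this entry refers to.
        node Vertex
    }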
In this commit, we refactor DeleteChannelEdge to use ChannelIDs rather
than ChannelPoints. We do this as the only use of DeleteChannelEdge is
when we are pruning zombie channels from our graph. When running under a
light client, we are unable to obtain the ChannelPoint of each edge due
to the expensive operations required to do so. As a stop-gap, we'll
resort to using an edge's ChannelID instead, which is already gossiped
between nodes.
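A hedged sketch of the new deletion path, assuming a bolt-style store keyed by
the 8-byte channel ID; the bucket names and error handling are illustrative
rather than channeldb's exact schema:

    package channeldbsketch

    import (
        "encoding/binary"
        "fmt"

        bolt "go.etcd.io/bbolt"
    )

    // deleteChannelEdgeByID removes an edge using only its ChannelID (short
    // channel ID), which light clients already learn via gossip, instead of
    // requiring the on-chain ChannelPoint.
    func deleteChannelEdgeByID(tx *bolt.Tx, chanID uint64) error {
        edges := tx.Bucket([]byte("edges"))
        if edges == nil {
            return fmt.Errorf("edge bucket not found")
        }
        edgeIndex := edges.Bucket([]byte("edge-index"))
        if edgeIndex == nil {
            return fmt.Errorf("edge index not found")
        }

        // The edge index is keyed by the big-endian channel ID.
        var key [8]byte
        binary.BigEndian.PutUint64(key[:], chanID)

        if edgeIndex.Get(key[:]) == nil {
            return fmt.Errorf("channel %d not found", chanID)
        }

        return edgeIndex.Delete(key[:])
    }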
This commit removes the MarkEdgeZombie method from channeldb. This
method is currently not used in any live code paths in production, and
is only used in unit tests. However, incorrect usage of this method
could result in an edge being present in both the zombie and channel
indexes, which deviates from any state we would expect to see in
production. Removing the method will help mitigate the potential for
writing incorrect unit tests in the future, by forcing zombie edges to
be created via the relevant production APIs, e.g. DeleteChannelEdge.
The existing unit tests that used this method have been modified to use
DeleteChannelEdge instead. No regressions were discovered in the
process.
This commit modifies FetchChanInfos to skip any channels that are not in
the graph at the time of the call. Currently the entire call will fail
if the edge is not found, which stalls a gossip sync in the following
scenario:
1. Remote peer queries for a channel range
2. We return the set of channel ids in that range
3. A channel from that set is removed from the graph, e.g. via close.
4. Remote peer queries for the removed edge, causing the query to fail.
To remedy this, we will now skip any edges that are not known in the
database at the time of the query. This prevents the syncer state
machines from halting, which otherwise could only be resolved by
disconnecting and reconnecting.
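A minimal sketch of the new skip-on-missing behaviour; the record type and the
lookup callback are stand-ins for channeldb's actual edge records and index
reads:

    package channeldbsketch

    import "errors"

    // errEdgeNotFound mirrors channeldb's sentinel error for a channel that
    // isn't present in the graph.
    var errEdgeNotFound = errors.New("edge not found")

    // chanInfo stands in for the full edge info plus policies record.
    type chanInfo struct {
        ChannelID uint64
    }

    // fetchChanInfos returns the info for each requested channel, skipping
    // channels that have since been removed from the graph (e.g. closed)
    // instead of failing the entire call and stalling the gossip syncer.
    func fetchChanInfos(chanIDs []uint64,
        lookup func(uint64) (chanInfo, error)) ([]chanInfo, error) {

        infos := make([]chanInfo, 0, len(chanIDs))
        for _, cid := range chanIDs {
            info, err := lookup(cid)
            switch {
            // The channel vanished between steps 2 and 4 above: skip it.
            case errors.Is(err, errEdgeNotFound):
                continue
            case err != nil:
                return nil, err
            }
            infos = append(infos, info)
        }
        return infos, nil
    }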
This commit modifies FilterKnownChanIDs to skip edges that
we ourselves have deemed zombies. This prevents us from requesting
updates for them, which would waste bandwidth and CPU cycles.
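A compact sketch of the filtering logic; the two predicates stand in for the
edge-index and zombie-index lookups:

    package channeldbsketch

    // filterKnownChanIDs drops every channel ID we either already have in
    // the graph or have previously marked as a zombie, so we only request
    // announcements and updates for genuinely new channels.
    func filterKnownChanIDs(chanIDs []uint64,
        isKnown, isZombie func(uint64) bool) []uint64 {

        unknown := make([]uint64, 0, len(chanIDs))
        for _, cid := range chanIDs {
            // Skip channels we already have, and channels we've deemed
            // zombies: re-requesting either wastes bandwidth and CPU.
            if isKnown(cid) || isZombie(cid) {
                continue
            }
            unknown = append(unknown, cid)
        }
        return unknown
    }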
In this commit, we extend the graph's FetchChannelEdgesByID and
HasChannelEdge methods to also check the zombie index whenever the edge
to be looked up doesn't exist within the edge index. We do this to
signal to callers that the edge is known, but only as a zombie, and that
the only information we have about it is the node public keys of the two
parties involved in the edge.
In the event that an edge does exist within the zombie index, we make
an additional check on the edge's policies to ensure they are not within
the router's pruning window, which would indicate a fresh update.
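A rough sketch of the extended lookup, with the index reads abstracted behind
callbacks; the real methods also return update timestamps and richer errors:

    package channeldbsketch

    // hasChannelEdge reports whether we know of a channel at all, and if so
    // whether we only know it as a zombie. For zombies, the stored public
    // keys of the two endpoints are all the information that remains, so in
    // this sketch the keys are only populated on the zombie path.
    func hasChannelEdge(chanID uint64,
        inEdgeIndex func(uint64) bool,
        inZombieIndex func(uint64) (bool, [33]byte, [33]byte),
    ) (exists, isZombie bool, pub1, pub2 [33]byte) {

        // A live edge takes precedence over any zombie entry.
        if inEdgeIndex(chanID) {
            return true, false, pub1, pub2
        }

        // Fall through to the zombie index so callers can distinguish
        // "never seen" from "known, but only as a zombie".
        isZombie, pub1, pub2 = inZombieIndex(chanID)
        return false, isZombie, pub1, pub2
    }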
We mark the edges as zombies when pruning them to ensure we don't
attempt to reprocess them later on. This also applies to channels that
have been removed from the graph due to being stale.
In this commit, we add a zombie edge index to the database. This allows
us to quickly determine across restarts whether we're attempting to
process an edge we've previously deemed a zombie.
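A sketch of the on-disk layout, assuming a bolt-style sub-bucket keyed by
channel ID with the two node public keys as the value; bucket and function
names are illustrative:

    package channeldbsketch

    import (
        "encoding/binary"
        "fmt"

        bolt "go.etcd.io/bbolt"
    )

    // markZombieEdge records a pruned channel in the zombie index so that,
    // across restarts, we can cheaply recognize announcements for it and
    // avoid reprocessing them.
    func markZombieEdge(tx *bolt.Tx, chanID uint64, pub1, pub2 [33]byte) error {
        edges := tx.Bucket([]byte("edges"))
        if edges == nil {
            return fmt.Errorf("edge bucket not found")
        }
        zombieIndex, err := edges.CreateBucketIfNotExists([]byte("zombie-index"))
        if err != nil {
            return err
        }

        // Key: big-endian channel ID. Value: the two 33-byte node keys,
        // which is all we retain about the edge once it's a zombie.
        var key [8]byte
        binary.BigEndian.PutUint64(key[:], chanID)

        var value [66]byte
        copy(value[:33], pub1[:])
        copy(value[33:], pub2[:])

        return zombieIndex.Put(key[:], value[:])
    }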
In this commit, we convert all the serial `Update` calls to `Batch`
calls, which are optimistically batched together for concurrent writers.
This should slightly increase performance during the initial graph sync,
as well as for updates at tip, since we can coalesce more of these
individual transactions into a single transaction.
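A minimal sketch of the change using bbolt's Batch, which shares Update's
signature but coalesces closures from concurrent callers into one transaction;
the write helper is a stand-in for the real serialization code:

    package channeldbsketch

    import bolt "go.etcd.io/bbolt"

    // applyGraphUpdate was previously db.Update(write): one serialized
    // transaction per caller. With db.Batch, concurrent writers are
    // optimistically batched into a single underlying transaction. Note that
    // Batch may re-run the closure on conflict, so it must be idempotent and
    // must not publish side effects before Batch returns.
    func applyGraphUpdate(db *bolt.DB, write func(*bolt.Tx) error) error {
        return db.Batch(write)
    }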
If the max_htlc field is not found when fetching a ChannelEdgePolicy
from the DB, we treat this as an unknown policy.
This is done to ensure we won't propagate invalid data further. The data
will be overwritten with a valid policy when we receive an update for
this channel.
This shouldn't be very common, but old data added before this field was
validated could still be lingering in the DB.
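A sketch of the read-path check, assuming the policy's message flags carry an
option_channel_htlc_max bit; the names and the exact error are illustrative:

    package channeldbsketch

    import "errors"

    // errPolicyOptionalFieldNotFound marks a policy whose flags promise a
    // max_htlc value that isn't actually present on disk. Callers treat the
    // policy as unknown rather than propagating the invalid data.
    var errPolicyOptionalFieldNotFound = errors.New(
        "optional field not present in policy")

    // maxHTLCBit is the message-flag bit indicating max_htlc is included.
    const maxHTLCBit = 1 << 0

    // readMaxHTLC validates the deserialized max_htlc field against the
    // policy's message flags.
    func readMaxHTLC(msgFlags byte, maxHTLC *uint64) (uint64, error) {
        // Flag unset: the field is legitimately absent.
        if msgFlags&maxHTLCBit == 0 {
            return 0, nil
        }

        // Flag set but no value stored: old, never-validated data that
        // predates this check. Report the policy as unknown so a fresh
        // channel update can overwrite it.
        if maxHTLC == nil {
            return 0, errPolicyOptionalFieldNotFound
        }
        return *maxHTLC, nil
    }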
Adding this field will allow us to persist an edge's
max HTLC to disk, thus preserving it between restarts.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit:
* we partition lnwire.ChanUpdateFlag into two types, ChanUpdateChanFlags
and ChanUpdateMsgFlags, going from a single uint16 to a pair of uint8s
(sketched after this list)
* we rename the ChannelUpdate.Flags to ChannelFlags and add an
additional MessageFlags field, which will be used to indicate the
presence of the optional field HtlcMaximumMsat within the ChannelUpdate.
* we partition ChannelEdgePolicy.Flags into message and channel flags.
This change corresponds to the partitioning of the ChannelUpdate's Flags
field into MessageFlags and ChannelFlags.
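A sketch of the partitioned flag types described above (the constant values
are shown for illustration):

    package lnwiresketch

    // ChanUpdateMsgFlags is the first uint8 of the split flags field. It
    // describes which optional fields are present in the ChannelUpdate.
    type ChanUpdateMsgFlags uint8

    const (
        // ChanUpdateOptionMaxHtlc signals that the optional HtlcMaximumMsat
        // field is included in the update.
        ChanUpdateOptionMaxHtlc ChanUpdateMsgFlags = 1 << 0
    )

    // ChanUpdateChanFlags is the second uint8. It describes the channel
    // itself: which direction the update applies to, and whether the
    // channel is currently disabled.
    type ChanUpdateChanFlags uint8

    const (
        // ChanUpdateDirection denotes which end of the channel produced
        // the update.
        ChanUpdateDirection ChanUpdateChanFlags = 1 << 0

        // ChanUpdateDisabled signals that the channel is temporarily
        // disabled for routing.
        ChanUpdateDisabled ChanUpdateChanFlags = 1 << 1
    )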
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
Instead return ErrGraphNodeNotFound directly. If the node bucket was
created, it would be empty, and the call chain delChannelByEdge ->
fetchChanEdgePolicies -> fetchChanEdgePolicy ->
deserializeChanEdgePolicy -> fetchLightningNode would return this error
anyway.
In this commit, we add a method to the ChannelGraph struct that
determines whether a node is seen as public from the graph's source
node's point of view.
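A hedged sketch of the check, with the channel iteration abstracted; in the
real graph this walks the node's channels and inspects each edge's
announcement proof:

    package channeldbsketch

    // isPublicNode reports whether a node is public: from our source node's
    // point of view it is, if at least one of its channel edges carries an
    // announcement (authentication) proof, meaning the channel has been
    // advertised to the wider network.
    func isPublicNode(
        forEachChannel func(cb func(hasAuthProof bool) error) error,
    ) (bool, error) {

        var public bool
        err := forEachChannel(func(hasAuthProof bool) error {
            if hasAuthProof {
                public = true
            }
            return nil
        })
        return public, err
    }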
In this commit, we account for the additional case wherein the
announcement hasn't yet been written with the extra zero byte to
indicate that there aren't any remaining bytes to be read. Before this
commit, we accounted for the case where the announcement was written
with the extra byte, but now we ensure that legacy nodes that upgrade
will be able to boot properly.
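A sketch of the tolerant read, assuming a single trailing length byte for
brevity (the real encoding differs); the key point is that hitting EOF where
the byte would be is treated the same as an explicit zero:

    package channeldbsketch

    import "io"

    // readOptionalTail reads the optional trailing blob at the end of a
    // serialized announcement. Records written before the upgrade simply
    // stop, so EOF here means "no extra data" rather than corruption.
    func readOptionalTail(r io.Reader) ([]byte, error) {
        var length [1]byte
        if _, err := io.ReadFull(r, length[:]); err != nil {
            // Legacy record with no trailing byte at all.
            if err == io.EOF {
                return nil, nil
            }
            return nil, err
        }

        // Newer records always write the byte, possibly as an explicit zero.
        if length[0] == 0 {
            return nil, nil
        }

        data := make([]byte, length[0])
        if _, err := io.ReadFull(r, data); err != nil {
            return nil, err
        }
        return data, nil
    }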
In this commit, we add a new limit on the largest number of extra opaque
bytes that we'll allow to be written per vertex/edge. We do this in
order to limit the amount of disk space that we expose, as it's possible
that nodes may start to pad their announcements, adding an externalized
cost, since other nodes would need to continue to store and relay these
large announcements.
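A minimal sketch of the guard applied before persisting a vertex or edge; the
constant's value here is illustrative:

    package channeldbsketch

    import "fmt"

    // maxAllowedExtraOpaqueBytes caps how much unparsed announcement data we
    // are willing to store per vertex/edge.
    const maxAllowedExtraOpaqueBytes = 10000

    // checkExtraOpaqueData rejects oversized opaque payloads before they are
    // written, bounding the disk (and relay) cost a peer can impose on us.
    func checkExtraOpaqueData(extra []byte) error {
        if len(extra) > maxAllowedExtraOpaqueBytes {
            return fmt.Errorf("extra opaque data is too large: %d bytes, "+
                "max %d", len(extra), maxAllowedExtraOpaqueBytes)
        }
        return nil
    }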
In this commit, we add a mirror set of fields to the ones we recently
added to the set of gossip wire messages. With this set of fields in
place, we ensure that we'll be able to properly store and re-validate
gossip messages that contain a set of extra/optional fields.
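A sketch of the mirrored field on the storage structs (the record names are
stand-ins and all other fields are elided):

    package channeldbsketch

    // Each storage record gains an ExtraOpaqueData field mirroring the wire
    // messages, so unparsed optional fields survive storage and the original
    // announcement can be re-validated and re-broadcast byte-for-byte.
    type LightningNodeRecord struct {
        ExtraOpaqueData []byte
    }

    type ChannelEdgeInfoRecord struct {
        ExtraOpaqueData []byte
    }

    type ChannelEdgePolicyRecord struct {
        ExtraOpaqueData []byte
    }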
In this commit, we ensure that we de-duplicate the set of channel edges
returned from ChanUpdatesInHorizon. Other subsystems within lnd use this
method to retrieve and send all the channels with updates within a time
series to network peers. However, since the method looks at the edge
update index, which can include up to two entries per edge for each
policy, it's possible that we'd send channel announcements and updates
twice, wasting bandwidth.
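A sketch of the de-duplication, with the time-range scan over the update index
abstracted as an iterator:

    package channeldbsketch

    // chanUpdatesInHorizon collects the channel IDs with updates in the
    // requested horizon. The edge update index can contain up to two entries
    // per channel (one per policy), so we remember which channels we've
    // already emitted and skip the duplicates.
    func chanUpdatesInHorizon(
        scan func(cb func(chanID uint64) error) error) ([]uint64, error) {

        seen := make(map[uint64]struct{})
        var chanIDs []uint64

        err := scan(func(chanID uint64) error {
            // Second index entry for the same channel: already emitted.
            if _, ok := seen[chanID]; ok {
                return nil
            }
            seen[chanID] = struct{}{}
            chanIDs = append(chanIDs, chanID)
            return nil
        })
        return chanIDs, err
    }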
In this commit, we fix a lingering issue within the edge update index
where entries were not being properly pruned due to an incorrect
calculation of the offset of an edge's last update time. Since the
offset is computed from the end of the serialized policy rather than the
start, we need to subtract the size of every field that follows the
policy's last update time from the total length of the serialized edge
policy to arrive at the correct offset. This was also slightly off, as
the edge policy included
an extra byte, which has been fixed in the previous commit.
Instead of continuing with the slicing approach, however, we'll switch
to deserializing the raw bytes of an edge's policy to ensure this doesn't
happen in the future when/if the serialization methods change or extra
data is included.
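A sketch of rebuilding the index key from the deserialized policy, assuming
the update index is keyed by the 8-byte big-endian update timestamp followed
by the 8-byte channel ID:

    package channeldbsketch

    import (
        "encoding/binary"
        "time"
    )

    // edgeUpdateIndexKey derives the key to delete from the edge update
    // index. By working from the deserialized policy's LastUpdate and
    // ChannelID (passed in here) rather than slicing raw bytes at a computed
    // offset, the pruning logic stays correct even if the serialization
    // format changes or extra data is appended.
    func edgeUpdateIndexKey(lastUpdate time.Time, chanID uint64) [16]byte {
        var key [16]byte
        binary.BigEndian.PutUint64(key[:8], uint64(lastUpdate.Unix()))
        binary.BigEndian.PutUint64(key[8:], chanID)
        return key
    }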
In this commit, we fix an off-by-one error when slicing the public key
from the serialized node info byte slice. This would cause us to write
an extra byte to all edge policies. Even though the values were read
correctly, when attempting to calculate the offset of an edge's update
time going backwards, we'd always be incorrect, causing us to not
properly prune the edge update index.