This commit checks the router's quit channel between each block
during the initial call to syncGraphWithChain, which, in the worst case,
may have to scan several thousand blocks on startup if the node has not
been active for some time. Without this, a request to stop the daemon
will not complete until the rescan has finished, which for certain
backends could take several hours.
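As a rough sketch of the pattern (hypothetical names, not the actual
syncGraphWithChain code), a non-blocking select on the quit channel between
blocks lets the rescan abort as soon as a stop is requested:

```go
package sketch

import "errors"

// ErrRouterShuttingDown is returned when the rescan is aborted because a
// shutdown was requested.
var ErrRouterShuttingDown = errors.New("router shutting down")

// syncWithChain is an illustrative sketch only: between each block of the
// rescan we poll the quit channel so a shutdown request is honored promptly.
func syncWithChain(quit <-chan struct{}, bestHeight, chainTip uint32,
	processBlock func(height uint32) error) error {

	for height := bestHeight + 1; height <= chainTip; height++ {
		select {
		case <-quit:
			// Shutdown requested: abort the rescan early instead
			// of processing the remaining blocks.
			return ErrRouterShuttingDown
		default:
		}

		if err := processBlock(height); err != nil {
			return err
		}
	}
	return nil
}
```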
In this commit, we update the process that we use to generate a sphinx
packet to send our onion routed HTLC. Due to recent changes in the
`sphinx` package we use, we now need to use a new PaymentPath struct. As
a result, it no longer makes sense to split up the nodes in a route and
their per-hop payloads as they're now in the same struct. All tests have
been updated accordingly.
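As a rough illustration of the new shape (hypothetical type and field names;
the real definitions live in the sphinx package), each hop's node key and its
per-hop payload now travel together in a single path structure:

```go
// Hypothetical mirror of the PaymentPath idea described above: one entry per
// hop, carrying both the hop's public key and its per-hop payload, instead
// of two parallel slices.
type onionHop struct {
	nodePub    [33]byte // hop's compressed node public key
	hopPayload []byte   // serialized per-hop payload for this hop
}

type paymentPath []onionHop

// buildPath zips a route's hop keys and payloads (which must have equal
// length) into a single path structure.
func buildPath(pubKeys [][33]byte, payloads [][]byte) paymentPath {
	path := make(paymentPath, 0, len(pubKeys))
	for i := range pubKeys {
		path = append(path, onionHop{
			nodePub:    pubKeys[i],
			hopPayload: payloads[i],
		})
	}
	return path
}
```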
In this commit, we refactor DeleteChannelEdge to use ChannelIDs rather
than ChannelPoints. We do this as the only use of DeleteChannelEdge is
when we are pruning zombie channels from our graph. When running under a
light client, we are unable to obtain the ChannelPoint of each edge due
to the expensive operations required to do so. As a stop-gap, we'll
resort to using an edge's ChannelID instead, which is already
gossiped between nodes.
This serves as a stop-gap for light clients as blocks need to be
downloaded from the P2P network, and even with caches, would be too
costly for them to verify. Doing this has two side effects, however: we'll
no longer know the channel capacity or outpoint, both of which are
essential for some of lnd's responsibilities.
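As a purely illustrative sketch of the change in indexing (not channeldb's
real API):

```go
// graphStore sketches keying zombie pruning by the gossiped short channel
// ID instead of the funding outpoint.
type graphStore struct {
	// edges indexes channel edges by their 64-bit short channel ID,
	// which every node learns from gossip.
	edges map[uint64]struct{}
}

// deleteChannelEdge removes a channel edge by its short channel ID. A light
// client can always supply this ID, whereas recovering the ChannelPoint
// would require fetching and verifying the funding transaction.
func (g *graphStore) deleteChannelEdge(chanID uint64) {
	delete(g.edges, chanID)
}
```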
In this commit, we disable attempting to determine when a channel has
been closed out on-chain whenever AssumeChannelValid is active. Since
the flag indicates that performing this operation is expensive, we do
this as a temporary optimization until we can include proofs of channels
being closed in the gossip protocol.
With this change, the only way channels will be removed from the graph
is once they're considered zombies, which can happen when both
edges of a channel have their disabled bits set or when both edges
haven't had an update within the past two weeks.
To ensure we don't mark an edge as live again just because an update
with a fresh timestamp was received, we'll ensure that we reject any
new updates for zombie channels if they remain disabled when running
with AssumeChannelValid.
In this commit, we add an additional heuristic when running with
AssumeChannelValid. Since running with AssumeChannelValid implies that
we're not able to quickly determine whether channels are valid, we also
assume that any channel with the disabled bit set on both sides is a
zombie. This should be relatively safe to do, since the
disabled bits are usually set when the channel is closed on-chain. In
the case that they aren't, we'll have to wait until both edges haven't
had a new update within two weeks to prune them.
We do this to ensure we don't prune too aggressively, as it's possible
that we've only received the channel announcement for a channel, but not
its accompanying channel updates.
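The heuristic can be sketched roughly as follows (illustrative helper, not
the actual pruning code):

```go
// isZombieAssumeValid sketches the extra heuristic applied when
// AssumeChannelValid is active: if updates for both directions are known and
// both carry the disabled bit, treat the channel as a zombie. Channels
// missing one of the two updates are left to the regular two-week staleness
// check, so a channel we only have the announcement for isn't pruned too
// aggressively.
func isZombieAssumeValid(haveUpdate1, haveUpdate2,
	disabled1, disabled2 bool) bool {

	if !haveUpdate1 || !haveUpdate2 {
		return false
	}
	return disabled1 && disabled2
}
```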
This commit removes the QueryRoutes route cache. It is causing wrong
routes to be returned because not all of the request parameters are
stored.
The cache allowed high frequency QueryRoutes calls to the same
destination and with the same amount to be returned fast. This behaviour
can also be achieved by caching the request on the client side. If a
route is invalidated, for example by a channel update,
the subsequent SendToRoute call will fail. This is a trigger to call
QueryRoutes again for a fresh route.
In this commit, we extend the graph's FetchChannelEdgesByID and
HasChannelEdge methods to also check the zombie index whenever the edge
to be looked up doesn't exist within the edge index. We do this to
signal to callers that the edge is known, but only as a zombie, and the
only information that we have about the edge is the node public keys of
the two parties involved in the edge.
In the event that an edge does exist within the zombie index, we make
an additional check on edge policies to ensure they are not within the
router's pruning window, indicating that it is a fresh update.
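A rough sketch of the lookup order (illustrative store layout and return
values, not the exact channeldb signatures):

```go
// graphIndexes sketches consulting a zombie index when an edge is missing
// from the main edge index, as described above.
type graphIndexes struct {
	// edgeIndex holds fully-known edges keyed by short channel ID.
	edgeIndex map[uint64]struct{}

	// zombieIndex holds, for channels marked as zombies, only the two
	// node public keys of the edge's endpoints.
	zombieIndex map[uint64][2][33]byte
}

// hasChannelEdge reports whether the edge is known at all, whether it is
// only known as a zombie, and in the zombie case returns the endpoint keys
// that were retained for it.
func (g *graphIndexes) hasChannelEdge(chanID uint64) (exists, isZombie bool,
	pubKeys [2][33]byte) {

	if _, ok := g.edgeIndex[chanID]; ok {
		return true, false, pubKeys
	}
	if keys, ok := g.zombieIndex[chanID]; ok {
		return true, true, keys
	}
	return false, false, pubKeys
}
```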
Currently, public keys are represented either as a 33-byte array (Vertex) or
as a btcec.PublicKey struct. The latter isn't usable as a map key and cannot
easily be compared. Therefore the 33-byte array representation is used
predominantly throughout the code base.
This commit converts the argument types of source and target nodes for
path finding to Vertex. Path finding executes no crypto operations and
using Vertex simplifies the code.
Additionally, it prepares for the path finding source parameter to be
exposed over RPC in a follow-up commit without requiring conversion back
and forth between Vertex and btcec.PublicKey.
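For illustration (the real Vertex type lives in the routing package), a
fixed-size byte array is directly comparable and can key a map, which is all
path finding needs:

```go
// vertex is the 33-byte compressed public key as a value type. Fixed-size
// arrays are comparable in Go, so a vertex works directly as a map key and
// with ==, unlike a btcec.PublicKey struct.
type vertex [33]byte

// visitedSet shows the kind of bookkeeping path finding needs: plain map
// lookups keyed by vertex, with no crypto operations involved.
type visitedSet map[vertex]struct{}

// reachedTarget is a trivial example of comparing source/target nodes once
// both are expressed as vertexes.
func reachedTarget(current, target vertex) bool {
	return current == target
}
```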
This commit allows the execution of QueryRoutes to be controlled using
lists of black-listed edges and nodes. Any path returned will not pass
through the edges and/or nodes on the list.
This commit is a step to split the lnwallet package. It puts the Input
interface and implementations in a separate package along with all their
dependencies from lnwallet.
In this commit, we deprecate the `IncorrectHtlcAmount` onion error.
We'll still decode this error to use when retrying paths, but we'll no
longer send this ourselves. The `UnknownPaymentHash` error has been
amended to also include the value of the payment as well. This allows us
to worry about one less error.
In this commit, we ensure that when we update an edge
as a result of a ChannelUpdate being returned from an
onion failure, the max htlc portion of the channel update
is included in the edge update.
In this commit, we alter the ValidateChannelUpdateAnn function in
ann_validation to validate a remote ChannelUpdate's message flags
and max HTLC field. If the message flag is set but the max HTLC
field is not set or vice versa, the ChannelUpdate fails validation.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
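The flag/field consistency check described above can be sketched as follows
(simplified, with illustrative parameter types rather than the real lnwire
ones):

```go
package sketch

import "errors"

// maxHtlcFlag stands in for the message-flag bit that signals the optional
// max HTLC field is present (the bit position here is illustrative).
const maxHtlcFlag uint8 = 1 << 0

// validateMaxHtlc mirrors the rule described above: the max HTLC message
// flag and the max HTLC field must either both be set or both be absent.
func validateMaxHtlc(msgFlags uint8, htlcMaximumMsat uint64) error {
	hasFlag := msgFlags&maxHtlcFlag != 0

	switch {
	case hasFlag && htlcMaximumMsat == 0:
		return errors.New("max htlc flag set, but field is missing")
	case !hasFlag && htlcMaximumMsat != 0:
		return errors.New("max htlc field set, but flag is missing")
	}
	return nil
}
```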
In this commit:
* we partition lnwire.ChanUpdateFlag into two types (ChanUpdateChanFlags and
ChanUpdateMsgFlags), going from a uint16 to a pair of uint8s
* we rename the ChannelUpdate.Flags to ChannelFlags and add an
additional MessageFlags field, which will be used to indicate the
presence of the optional field HtlcMaximumMsat within the ChannelUpdate.
* we partition ChannelEdgePolicy.Flags into message and channel flags.
This change corresponds to the partitioning of the ChannelUpdate's Flags
field into MessageFlags and ChannelFlags.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
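Roughly, the resulting shape is the following (the real definitions live in
lnwire; bit assignments here are illustrative):

```go
// The single uint16 Flags field is split into two uint8-based types.
type chanUpdateMsgFlags uint8  // which optional fields are present
type chanUpdateChanFlags uint8 // properties of the channel direction itself

const (
	// msgFlagMaxHtlc signals that the optional HtlcMaximumMsat field is
	// present in the ChannelUpdate.
	msgFlagMaxHtlc chanUpdateMsgFlags = 1 << 0

	// chanFlagDirection identifies which of the two channel directions
	// the update applies to.
	chanFlagDirection chanUpdateChanFlags = 1 << 0

	// chanFlagDisabled marks the referenced direction as disabled.
	chanFlagDisabled chanUpdateChanFlags = 1 << 1
)

// channelUpdate sketches the resulting message layout.
type channelUpdate struct {
	MessageFlags    chanUpdateMsgFlags
	ChannelFlags    chanUpdateChanFlags
	HtlcMaximumMsat uint64 // only meaningful when msgFlagMaxHtlc is set
}
```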
In this commit we introduce pruning of channel edges instead of channels.
Channel failures apply to a single direction, and it is unnecessarily
restrictive to prune both directions.
Hop maps were used only in a test that verified the population of the hop
map itself, and in a single function (getFailedChannelID). That function has
been rewritten and the hop maps removed completely.
Fixes the following issues:
- If the FailFeeInsufficient failure contains an invalid channel update, it
is not possible to properly add the channel to the failed channels set.
- FailAmountBelowMinimum may apply a channel update, but does not retry.
- FailIncorrectCltvExpiry immediately prunes the vertex without
trying one more time.
In this commit, the logic for all three policy related errors is
aligned.
In this commit we add a check to HtlcSatifiesPolicy to verify that the
time lock for the outgoing htlc that is requested in the onion packet
isn't too far in the future.
Without this check, anyone could force an unreasonably long time lock on
the forwarding node.
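A simplified sketch of the bound (illustrative constant and names; the real
check lives in HtlcSatifiesPolicy):

```go
package sketch

import "errors"

// maxOutgoingCltvExpiry is an illustrative bound on how many blocks into
// the future an outgoing HTLC's time lock may be set.
const maxOutgoingCltvExpiry = 5000

// checkOutgoingTimeout rejects forwards whose requested outgoing time lock
// lies unreasonably far in the future, so a sender can't force the
// forwarding node to lock up funds for an excessive period.
func checkOutgoingTimeout(outgoingTimeout, currentHeight uint32) error {
	if outgoingTimeout > currentHeight+maxOutgoingCltvExpiry {
		return errors.New("outgoing time lock too far in the future")
	}
	return nil
}
```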
In this commit the dependency of unmarshallRoute on edge policies being
available is removed. Edge policies may be unknown and reported as nil.
SendToRoute does not need the policies, but it does need pubkeys of the
route hops. In this commit, unmarshallRoute is modified so that it
takes the pubkeys from edgeInfo instead of channelEdgePolicy.
In addition to this, the route structure is simplified; there is no longer a
connection to the database at that point. Fees are determined based on the
incoming and outgoing amounts.
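Concretely, the fee for a hop is simply the difference of the amounts,
sketched as (illustrative helper, amounts in millisatoshis):

```go
// hopFee derives the fee collected by a hop purely from the amounts flowing
// through it: the difference between what the hop receives on its incoming
// HTLC and what it forwards on the outgoing one. No channel policy lookup is
// needed, which is what allows unmarshallRoute to work with nil policies.
func hopFee(incomingAmt, outgoingAmt uint64) uint64 {
	return incomingAmt - outgoingAmt
}
```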
We make sure to return an error other than ErrIgnored, as ErrIgnored is
expected to only be returned for updates where we already have the
necessary information in the database.
If a channel ID was found in the rejectCache, there was a
possibility that we had rejected an invalid update for this channel
earlier, and when attempting to add the current update we wouldn't
distinguish the failure to add from an outdated/ignored update.
The commit ensures that for every channel, there will always
be two entries in the edges bucket. If the policy from one or
both ends of the channel is unknown, it is marked as such.
This allows efficient lookup of incoming edges. This is
required for backwards payment path finding.
In this commit, we fix a slight race condition that can occur when we go
to add a shell node for a node announcement, but then right afterwards,
a new block arrives that causes us to prune an unconnected node. To
ensure this doesn't happen, we now add shell nodes within the same db
transaction as AddChannelEdge. This ensures that the state is fully
consistent and shell nodes will be added atomically along with the new
channel edge.
As a result of this change, we no longer need to add shell nodes within
the ChannelRouter, as the database will take care of this operation as
it should.
In this commit, we update generateSphinxPacket to use newLogClosure
to delay the spew evaluation until log print time. Before this commit,
even if we weren't on the trace logging level, the spew call would
always be evaluated.
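The pattern looks roughly like this (the closure type is shown inline for
illustration; lnd's newLogClosure helper wraps the same idea):

```go
// logClosure defers formatting until the logger actually prints the value:
// the logger only formats its arguments if the message will be emitted, so
// String(), and hence the spew dump, is never invoked when trace logging is
// off.
type logClosure func() string

func (c logClosure) String() string { return c() }

// Before: spew.Sdump runs on every call, even with trace logging disabled.
//   log.Tracef("generated sphinx packet: %v", spew.Sdump(packet))
//
// After: the dump is only computed if the message is actually printed.
//   log.Tracef("generated sphinx packet: %v", logClosure(func() string {
//       return spew.Sdump(packet)
//   }))
```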
- Extend SendRequest and QueryRoutesRequest protos
- newRoute function takes fee limit and cuts off routes that exceed it
- queryRoutes, payInvoice and sendPayment commands take the feeLimit inputs and pass them down to newRoute
- When no feeLimit is included, don't enforce any feeLimits at all (by setting feeLimit to maxValue)
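The last point can be sketched roughly as follows (hypothetical helper,
amounts in millisatoshis):

```go
package sketch

import "math"

// effectiveFeeLimit treats an absent fee limit as "no limit" by substituting
// the maximum representable value, so fee filtering never rejects a route
// when the caller didn't ask for a cap.
func effectiveFeeLimit(requested uint64) uint64 {
	if requested == 0 {
		return math.MaxUint64
	}
	return requested
}
```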
In this commit, we modify the recent refactoring of the mission control
sub-system to overload the existing payment session, rather than create
a brand new one. This allows us to re-use more of the existing logic, and
also feedback into mission control the failures incurred by any user
selected routes.
In this commit, we introduce a new method to the channel router's config
struct: QueryBandwidth. This method allows the channel router to query
for the up-to-date available bandwidth of a particular link. In the case
that this link emanates from/to us, then we can query the switch to see
if the link is active (if not bandwidth is zero), and return the current
best estimate for the available bandwidth of the link. If the link
isn't one of ours, then we can thread through the total maximal
capacity of the link.
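Roughly, the new hook on the router's config looks like this (simplified,
illustrative types rather than the real signature):

```go
// routerConfig sketches the new dependency: a hook the router can call to
// learn an up-to-date bandwidth figure for a link.
type routerConfig struct {
	// QueryBandwidth returns the usable bandwidth of the given channel
	// in millisatoshis. For our own channels this reflects the live link
	// state (zero if the link is inactive); for channels that aren't
	// ours, the channel's total capacity is returned instead.
	QueryBandwidth func(chanID uint64) uint64
}
```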
In order to implement this, the missionControl struct will now query the
switch upon creation to obtain a fresh bandwidth snapshot. We take care
to do this in a distinct db transaction in order to not introduce a
circular waiting condition between the mutexes in bolt and the channel
state machine.
The aim of this change is to reduce the number of unnecessary failures
during HTLC payment routing as we'll now skip any links that are
inactive, or just don't have enough bandwidth for the payment. Nodes
that have several hundred channels (all of which are in various states of
activity and available bandwidth) should see a nice gain from this w.r.t.
payment latency.
This commit improves the shutdown of the router's
pending validation tasks, by ensuring the pending
tasks exit early if the validation barrier
receives a shutdown request.
Currently, any goroutines blocked by WaitForDependants
will continue execution after a shutdown is signaled.
This may lead to unexpected behavior as the relation
between updates is no longer upheld. It also has the
side effect of slowing down shutdown, since we
continue to process the remaining updates.
To remedy this, WaitForDependants now returns an error
that signals if a shutdown was requested. The blocked
goroutines can exit early upon seeing this error,
without also signaling completion of their task to
the dependent tasks, which will now properly
wait to read the validation barrier's quit signal.
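In sketch form (simplified, illustrative signatures rather than the real
ValidationBarrier API):

```go
// dependencyBarrier is an illustrative stand-in for the validation barrier.
type dependencyBarrier interface {
	WaitForDependants(job interface{}) error
	SignalDependants(job interface{})
}

// validateJob sketches the updated flow: if the barrier reports a shutdown
// while waiting on dependencies, the goroutine exits early and does not
// signal completion; dependants then observe the barrier's quit signal
// instead.
func validateJob(b dependencyBarrier, job interface{},
	process func(interface{}) error) error {

	if err := b.WaitForDependants(job); err != nil {
		// Shutdown requested: bail out without signaling dependants.
		return err
	}

	err := process(job)

	// Only signal dependants once the job has actually been processed.
	b.SignalDependants(job)
	return err
}
```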
In this commit we fix a lingering bug in the Mission Control logic we
execute in response to the FailUnknownNextPeer error. Historically, we
would treat this as the _next_ node not being online. As a result, we
would then prune away the vertex from the current reachable graph all
together. It was recently realized that this would at times be a bit
_too_ aggressive if the channel we attempt to route over was faulty,
down, or the incoming node had connectivity issues with the outgoing
node.
In light of this realization, we'll now instead only prune the _edge_
that we attempted to route over. This ensures that we'll continue to
explore the possible edges. Additionally, this guards us against failure
modes where nodes report FailUnknownNextPeer to other nodes in an
attempt to more closely control our retry logic.
This change is a stop-gap on the path to a more intelligent set of
autopilot heuristics.
Fixes #1114.
In this commit, we introduce the ability for payment sessions to store
an additional set of edges that can be used to assist a payment in
successfully reaching its destination.
In this commit, we add a new field of routing hints to payments over the
Lightning Network. These routing hints can later be used within the path
finding algorithm in order to craft a path that will reach the
destination successfully.
In this commit, we modify the way we handle FeeInsufficientErrors to
more aggressively route around nodes that repeatedly return the same
error to us. This will ensure we skip older nodes on the network which
are running a buggier, older version of lnd. Eventually most nodes will
upgrade to the new version, making this change less necessary.
We also update the existing test to properly use a multi-hop route to
ensure that we route around the offending node.
In this commit, we add vertex pruning for any non-final CLTV error.
Before this commit, we assumed that any source of this error was due to
the local node setting an incorrect time lock. However, it’s recently
been noticed on mainnet that there’s a set of nodes that don’t appear to
be properly synced to the chain. Without this patch, users aren’t able
to route successfully, as at the moment we’ll stop all path finding
attempts if we encounter this error.
In this commit, we address a number of edge cases that were unaccounted
for when responding to errors that can be sent back due to an HTLC
routing failure. Namely:
* We’ll no longer stop payment attempts if we’re unable to apply a
channel update; instead, we’ll log the error, prune the channel, and
continue.
* We’ll now remember which channels were pruned due to insufficient
fee errors. If we ever get a repeat fee error from a channel, then we
prune it. This ensures that we don’t get stuck in a loop due to a node
continually advertising the same fees.
* We also correct an error regarding which node we’d prune due to a
temporary or permanent node failure. Before this commit, we would prune
the next node, when we should actually be pruning the node that sent us
the error.
Finally, we also add a new test to exercise the fee insufficient error
handling and channel pruning.
Fixes #865.
In this commit, we add a new field to the LightningPayment struct:
PayAttemptTimeout. This new field allows the caller to control exactly
how much time should be spent attempting to route a payment to the
destination. The default value we’ll use is 60 seconds, but callers are
able to specify a different value. Once the timeout has passed, we’ll
abandon the payment attempt and return an error back to the original
caller.
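A rough sketch of the timeout handling (hypothetical names, assuming the
60-second default described above):

```go
package sketch

import (
	"errors"
	"time"
)

// defaultPayAttemptTimeout is used when the caller doesn't specify one.
const defaultPayAttemptTimeout = 60 * time.Second

// payAttemptLoop keeps trying routes until one succeeds or the attempt
// timeout elapses, then returns an error to the original caller.
func payAttemptLoop(timeout time.Duration,
	tryNextRoute func() (success bool, err error)) error {

	if timeout == 0 {
		timeout = defaultPayAttemptTimeout
	}
	deadline := time.After(timeout)

	for {
		select {
		case <-deadline:
			// Give up on the payment attempt entirely.
			return errors.New("payment attempt timed out")
		default:
		}

		ok, err := tryNextRoute()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
	}
}
```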
In this commit, we add a set of new methods to check the freshness of
an edge/node. This will allow callers to skip expensive validation in
the case that the router already knows of an item, or knows of a
fresher version of that item.
A set of tests have been added to ensure basic correctness of these new
methods.
In this commit, we modify the caching structure to return a set of
cached routes for a request if the number of routes requested is less
than or equal to the number of cached routes.
In order to reduce high CPU utilization during the initial network view
sync, we reduce the total number of active in-flight jobs that can
be launched.