In this commit, we update generateSphinxPacket to use newLogClosure
to delay spew evaluation until log print time. Before this commit,
the spew call would always be evaluated, even when we weren't at the
trace logging level.
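As a rough illustration (a minimal, self-contained sketch, not lnd's
exact helper), the closure satisfies fmt.Stringer so that spew.Sdump
only runs when the format verb is actually evaluated:

    package main

    import (
        "fmt"

        "github.com/davecgh/go-spew/spew"
    )

    // logClosure wraps a function so it satisfies fmt.Stringer; the
    // wrapped function only runs when the value is actually formatted.
    type logClosure func() string

    func (c logClosure) String() string { return c() }

    func newLogClosure(c func() string) logClosure {
        return logClosure(c)
    }

    func main() {
        pkt := struct{ Version byte }{}

        // spew.Sdump is deferred until the %v verb is evaluated; a
        // leveled logger that drops trace messages never runs it.
        fmt.Printf("sphinx packet: %v\n", newLogClosure(func() string {
            return spew.Sdump(pkt)
        }))
    }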
- Extend the SendRequest and QueryRoutesRequest protos
- newRoute now takes a fee limit and cuts off routes that exceed it (sketched below)
- The queryRoutes, payInvoice, and sendPayment commands take the feeLimit input and pass it down to newRoute
- When no feeLimit is given, don't enforce any fee limit at all (by setting feeLimit to the maximum value)
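A minimal sketch of the fee limit cutoff, using hypothetical types
rather than lnd's actual route struct:

    package routing

    import "errors"

    // route stands in for the real route type; TotalFees is the sum
    // of the fees across all hops.
    type route struct {
        TotalFees int64
    }

    var errFeeLimitExceeded = errors.New("route exceeds fee limit")

    // checkFeeLimit rejects any candidate route whose total fees
    // exceed the caller's limit. Passing the maximum int64 value (the
    // "no feeLimit" case above) makes the check a no-op.
    func checkFeeLimit(r *route, feeLimit int64) error {
        if r.TotalFees > feeLimit {
            return errFeeLimitExceeded
        }
        return nil
    }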
In this commit, we modify the recent refactoring of the mission control
sub-system to overload the existing payment session, rather than create
a brand new one. This allows us to re-use more of the existing logic, and
also feed back into mission control the failures incurred by any
user-selected routes.
In this commit, we introduce a new method to the channel router's config
struct: QueryBandwidth. This method allows the channel router to query
for the up-to-date available bandwidth of a particular link. If this
link emanates from or to us, then we can query the switch to see
whether the link is active (if not, the bandwidth is zero), and
return the current best estimate for the link's available bandwidth.
If the link isn't one of ours, then we thread through the total
maximal capacity of the link instead.
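A hedged sketch of the decision logic, with illustrative names rather
than lnd's exact API:

    package routing

    // edgeInfo stands in for the graph's channel edge record.
    type edgeInfo struct {
        ChannelID uint64
        Capacity  int64 // total channel capacity
        IsOurs    bool  // the channel emanates from/to us
    }

    // htlcSwitch abstracts the switch query used for our own links.
    type htlcSwitch interface {
        // ActiveLinkBandwidth returns the spendable bandwidth of a
        // local link, and false if the link is currently inactive.
        ActiveLinkBandwidth(chanID uint64) (int64, bool)
    }

    func queryBandwidth(s htlcSwitch, e *edgeInfo) int64 {
        if !e.IsOurs {
            // Remote link: the best estimate is its total capacity.
            return e.Capacity
        }
        bw, active := s.ActiveLinkBandwidth(e.ChannelID)
        if !active {
            // Inactive links can't carry HTLCs at all.
            return 0
        }
        return bw
    }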
In order to implement this, the missionControl struct will now query the
switch upon creation to obtain a fresh bandwidth snapshot. We take care
to do this in a distinct db transaction in order to not introduce a
circular waiting condition between the mutexes in bolt and the
channel state machine.
The aim of this change is to reduce the number of unnecessary failures
during HTLC payment routing as we'll now skip any links that are
inactive, or just don't have enough bandwidth for the payment. Nodes
that have several hundred channels (all in various states of
activity and available bandwidth) should see a nice gain from this
w.r.t. payment latency.
This commit improves the shutdown of the router's
pending validation tasks by ensuring the pending
tasks exit early if the validation barrier
receives a shutdown request.
Currently, any goroutines blocked by WaitForDependants
will continue execution after a shutdown is signaled.
This may lead to unexpected behavior as the relation
between updates is no longer upheld. It also has the
side effect of slowing down shutdown, since we
continue to process the remaining updates.
To remedy this, WaitForDependants now returns an error
that signals if a shutdown was requested. The blocked
goroutines can exit early upon seeing this error,
without also signaling completion of their task to
the dependent tasks, which will now properly wait
to read the validation barrier's quit signal.
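A condensed sketch of the new early-exit behavior; the signature here
is illustrative:

    package routing

    import "errors"

    // ErrVBShuttingDown signals to blocked goroutines that the
    // barrier is shutting down and the pending task should be
    // abandoned.
    var ErrVBShuttingDown = errors.New("validation barrier shutting down")

    type validationBarrier struct {
        quit chan struct{}
    }

    // waitForDependants blocks until all parent jobs signal
    // completion, or returns early with an error once shutdown is
    // requested.
    func (v *validationBarrier) waitForDependants(parentsDone chan struct{}) error {
        select {
        case <-parentsDone:
            return nil
        case <-v.quit:
            // The caller must exit without signaling completion, so
            // its dependants in turn read the quit signal and bail
            // out as well.
            return ErrVBShuttingDown
        }
    }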
In this commit we fix a lingering bug in the Mission Control logic we
execute in response to the FailUnknownNextPeer error. Historically,
we would treat this as the _next_ node not being online. As a result,
we would then prune the vertex from the current reachable graph
altogether. It was recently realized that this could at times be a
bit _too_ aggressive if the channel we attempted to route over was
faulty, down, or the incoming node had connectivity issues with the
outgoing node.
In light of this realization, we'll now instead only prune the _edge_
that we attempted to route over. This ensures that we'll continue to
explore the possible edges. Additionally, this guards us against failure
modes where nodes report FailUnknownNextPeer to other nodes in an
attempt to more closely control our retry logic.
This change is a stopgap on the path to a more intelligent set of
autopilot heuristics.
Fixes #1114.
In this commit, we introduce the ability for payment sessions to store
an additional set of edges that can be used to assist a payment in
successfully reaching its destination.
In this commit, we add a new field of routing hints to payments over the
Lightning Network. These routing hints can later be used within the path
finding algorithm in order to craft a path that will reach the
destination successfully.
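A hedged sketch of the shape these hints might take; the field names
are illustrative, not lnd's exact types:

    package routing

    // hopHint describes one hop of a partial, possibly private route,
    // carrying the forwarding policy path finding needs to traverse
    // it.
    type hopHint struct {
        NodeID          [33]byte // pubkey at the start of the hint
        ChannelID       uint64
        FeeBaseMsat     uint32
        FeeProportional uint32 // fee rate, in parts-per-million
        CLTVExpiryDelta uint16
    }

    // lightningPayment stands in for the payment struct gaining the
    // new field: each inner slice is a chain of hops ending at the
    // target.
    type lightningPayment struct {
        Target     [33]byte
        Amount     int64
        RouteHints [][]hopHint
    }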
In this commit, we modify the way we handle FeeInsufficientErrors to
more aggressively route around nodes that repeatedly return the same
error to us. This will ensure we skip older nodes on the network that
are running a buggier, older version of lnd. Eventually most nodes
will upgrade to the new version, making this change less necessary.
We also update the existing test to properly use a multi-hop route to
ensure that we route around the offending node.
In this commit, we add vertex pruning for any non-final CLTV error.
Before this commit, we assumed that any source of this error was due to
the local node setting an incorrect time lock. However, it's been
recently noticed on mainnet that there's a set of nodes that seem to
not be properly synced to the chain. Without this patch, users aren't
able to route successfully since, at the moment, we'll stop all path
finding attempts if we encounter this error.
In this commit, we address a number of edge cases that were unaccounted
for when responding to errors that can be sent back due to an HTLC
routing failure. Namely:
* We'll no longer stop payment attempts if we're unable to apply a
channel update; instead, we'll log the error, prune the channel, and
continue.
* We'll now remember which channels were pruned due to insufficient
fee errors. If we ever get a repeat fee error from a channel, then we
prune it (see the sketch after this list). This ensures that we don't
get stuck in a loop due to a node continually advertising the same
fees.
* We also correct an error in which node we’d prune due to a
temporary or permanent node failure. Before this commit, we would prune
the next node, when we should actually be pruning the node that sent us
the error.
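A minimal sketch of the second-chance fee error handling referenced
above, with illustrative names:

    package routing

    // feeErrorTracker remembers which channels have already sent us
    // a fee insufficient error during this payment attempt.
    type feeErrorTracker struct {
        seen map[uint64]struct{}
    }

    func newFeeErrorTracker() *feeErrorTracker {
        return &feeErrorTracker{seen: make(map[uint64]struct{})}
    }

    // reportFeeError records a fee error for chanID and returns true
    // if the channel should now be pruned.
    func (f *feeErrorTracker) reportFeeError(chanID uint64) bool {
        if _, ok := f.seen[chanID]; ok {
            // Second fee error from the same channel: prune it so a
            // node continually advertising the same fees can't trap
            // us in a loop.
            return true
        }
        // First offense: apply the fresh channel update and retry.
        f.seen[chanID] = struct{}{}
        return false
    }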
Finally, we also add a new test to exercise the fee insufficient error
handling and channel pruning.
Fixes #865.
In this commit, we add a new field to the LightningPayment struct:
PayAttemptTimeout. This new field allows the caller to control exactly
how much time should be spent attempting to route a payment to the
destination. The default value we'll use is 60 seconds, but callers
are able to specify a different value. Once the timeout has passed,
we'll abandon the payment attempt, and return an error back to the
original caller.
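A hedged sketch of how such a timeout can bound the retry loop; names
and control flow are illustrative, not lnd's exact implementation:

    package routing

    import (
        "errors"
        "time"
    )

    const defaultPayAttemptTimeout = 60 * time.Second

    // sendWithTimeout retries attempt() until it succeeds or the
    // caller's PayAttemptTimeout elapses. In practice each attempt
    // involves a network round trip, so the loop is not hot.
    func sendWithTimeout(timeout time.Duration, attempt func() error) error {
        if timeout == 0 {
            timeout = defaultPayAttemptTimeout
        }
        deadline := time.After(timeout)
        for {
            select {
            case <-deadline:
                // Abandon the payment and surface the error.
                return errors.New("payment attempt timed out")
            default:
            }
            if err := attempt(); err == nil {
                return nil
            }
            // Otherwise loop and try the next candidate route.
        }
    }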
In this commit, we add a set of new methods to check the freshness of
an edge/node. This will allow callers to skip expensive validation in
the case that the router already knows of an item, or knows of a
fresher version of that item.
A set of tests have been added to ensure basic correctness of these new
methods.
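The new queries might look roughly like the following; this is a
sketch, and the actual signatures may differ in detail:

    package routing

    import "time"

    // freshnessChecker captures the new queries: callers can skip
    // expensive validation for items we already know, or know a newer
    // version of.
    type freshnessChecker interface {
        // IsStaleNode reports whether we already have a node
        // announcement for this pubkey at least as recent as the
        // given timestamp.
        IsStaleNode(node [33]byte, timestamp time.Time) bool

        // IsKnownEdge reports whether the channel is already in the
        // graph.
        IsKnownEdge(chanID uint64) bool

        // IsStaleEdgePolicy reports whether we already have a policy
        // for this channel direction at least as recent as the given
        // timestamp.
        IsStaleEdgePolicy(chanID uint64, timestamp time.Time, dir uint8) bool
    }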
In this commit, we modify the caching structure to return a set of
cached routes for a request if the number of routes requested is less
than or equal to the number of cached routes.
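A minimal sketch of the relaxed cache-hit rule, with illustrative
types:

    package routing

    type route struct{} // stand-in for the cached route type

    // cachedRoutes serves a request for k routes from the cache
    // whenever at least k routes are cached for the key, returning
    // the first k of them.
    func cachedRoutes(cache map[string][]*route, key string, k int) ([]*route, bool) {
        routes, ok := cache[key]
        if !ok || len(routes) < k {
            // Not enough cached routes: fall back to path finding.
            return nil, false
        }
        return routes[:k], true
    }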
In order to reduce high CPU utilization during the initial network view
sync, we reduce the total number of active in-flight jobs that can
be launched.
In this commit, we now account for a case where a node sends us a
FailPermanentChannelFailure during a payment attempt. Before this
commit, we wouldn’t properly prune the edge to avoid re-using it. We
remedy this by properly attempting to prune the edge if possible.
Future changes will send a FailPermanentChannelFailure in the case
that we need to go on-chain for an outgoing HTLC, and cancel back the
incoming HTLC.
In this commit, we fix an existing bug that could cause lnd to crash if
we sent a payment, and the *destination* sent a temp channel failure
error message. When handling such a message, we’ll look in the nextHop
map to see which channel was *after* the node that sent the error.
However, if the destination sends this error, then there’ll be no entry
in this map.
To address this case, we now add a prevHop map. If we attempt to lookup
a node in the nextHop map, and they don’t have an entry, then we’ll
consult the prevHop map.
We also update the set of tests to ensure that we’re properly setting
both the prevHop map and the nextHop map.
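A minimal sketch of the fallback lookup, with illustrative names:

    package routing

    // lookupFailedChannel first consults nextHop for the channel
    // *after* the erring node; if the destination sent the error, no
    // such entry exists, so we fall back to the channel *into* it via
    // prevHop.
    func lookupFailedChannel(nextHop, prevHop map[[33]byte]uint64,
        errSource [33]byte) (uint64, bool) {

        if chanID, ok := nextHop[errSource]; ok {
            return chanID, true
        }
        chanID, ok := prevHop[errSource]
        return chanID, ok
    }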
This commit adds synchronization around the processing
of multiple ChannelEdgePolicy updates for the same
channel ID at the same time.
This fixes a bug that could cause the database access
HasChannelEdge to be out of date when the goroutine
came to the point where it was calling UpdateEdgePolicy.
This happened because a second goroutine would have
called UpdateEdgePolicy in the meantime.
This bug was quite benign: if it happened at
runtime, we would eventually get the lost
ChannelEdgePolicy again, either from a peer
re-sending it to us, or after failing a payment
because we were using outdated information.
However, it would cause some of the tests to
flake, since losing routing information made
payments we expected to go through fail.
This is fixed by introducing a new mutex type
whose Lock and Unlock methods take an additional
(id uint64) parameter, keeping an internal map
that tracks which IDs are currently locked and
the count of goroutines waiting for each mutex.
This ensures we can still process updates
concurrently, only preventing updates with the
same channel ID from running concurrently.
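In spirit, the new mutex type looks like the following condensed
sketch (details simplified):

    package routing

    import "sync"

    // cntMutex pairs a mutex with a count of goroutines holding or
    // waiting on it, so the map entry can be removed once unused.
    type cntMutex struct {
        cnt int
        sync.Mutex
    }

    // mutexForID serializes access per ID while letting different IDs
    // proceed concurrently.
    type mutexForID struct {
        mtx     sync.Mutex // guards the mutexes map
        mutexes map[uint64]*cntMutex
    }

    func newMutexForID() *mutexForID {
        return &mutexForID{mutexes: make(map[uint64]*cntMutex)}
    }

    func (m *mutexForID) Lock(id uint64) {
        m.mtx.Lock()
        mtx, ok := m.mutexes[id]
        if !ok {
            mtx = &cntMutex{}
            m.mutexes[id] = mtx
        }
        // Record that one more goroutine holds or waits for this ID.
        mtx.cnt++
        m.mtx.Unlock()

        mtx.Lock()
    }

    func (m *mutexForID) Unlock(id uint64) {
        m.mtx.Lock()
        mtx := m.mutexes[id]
        mtx.cnt--
        if mtx.cnt == 0 {
            // Nobody else holds or waits for this ID: garbage
            // collect the entry.
            delete(m.mutexes, id)
        }
        m.mtx.Unlock()

        mtx.Unlock()
    }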
In this commit, we modify the pruning semantics of the missionControl
struct. Before this commit, we would fetch a new pruned graph view on
each payment attempt. This served to instantly propagate any detected
failures to all outstanding payment attempts. However, it also meant
that we could at times get stuck in a retry loop: if sends take a few
seconds, we may prune an edge, try another, and by then the original
edge has become unpruned again.
To remedy this, we now introduce the concept of a paymentSession. The
session will start out as a snapshot of the latest graph prune view.
Any payment failures are now reported directly to the paymentSession
rather than missionControl. The rationale for this is that
edges/vertexes pruned as a result of failures will never decay for a
local payment session, only for the global prune view. With this in
place, we ensure that a session's prune view only grows.
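A hedged sketch of the session-local prune view, with illustrative
names:

    package routing

    // paymentSession tracks failures for a single payment; unlike the
    // global mission control view, these entries never decay, so the
    // session's prune view only ever grows.
    type paymentSession struct {
        prunedVertexes map[[33]byte]struct{}
        prunedEdges    map[uint64]struct{}
    }

    func newPaymentSession() *paymentSession {
        return &paymentSession{
            prunedVertexes: make(map[[33]byte]struct{}),
            prunedEdges:    make(map[uint64]struct{}),
        }
    }

    func (p *paymentSession) ReportVertexFailure(v [33]byte) {
        p.prunedVertexes[v] = struct{}{}
    }

    func (p *paymentSession) ReportEdgeFailure(chanID uint64) {
        p.prunedEdges[chanID] = struct{}{}
    }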
Fixes #536.
This commit fixes an existing bug within the ChannelRouter. Prior to
this commit, if the chain view skipped blocks or for some reason we had
a gap in blocks delivered, then we would simply accept them. This had
the potential to cause us to miss on-chain channel closure events. To
remedy this, we won’t process any blocks whose heights aren’t
*strictly* increasing.
A longer term fix would be to have the ChainView take a block height,
and re-dispatch any notifications from that height to the current
height.
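One way to realize the check, as a sketch; the real router tracks its
best height differently in detail:

    package routing

    import "fmt"

    type router struct {
        bestHeight uint32
    }

    // processBlock rejects any block whose height doesn't directly
    // follow our best known height, so a gap in the chain view can't
    // silently skip channel-close events.
    func (r *router) processBlock(height uint32) error {
        if height != r.bestHeight+1 {
            return fmt.Errorf("out of order block: expected height "+
                "%d, got %d", r.bestHeight+1, height)
        }
        r.bestHeight = height
        // ...prune channels closed by this block...
        return nil
    }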
In this commit, we modify the height value passed into UpdateFilter
upon restart. Before this commit, we would pass in the prune height,
which would cause a full rescan within the FilteredChainView if the
best height was greater than the prune height. This was redundant, as
we would shortly carry out a manual rescan in the method below. To
fix this, we now pass in the bestHeight. This isn't an issue, as the
syncGraphWithChain method will manually scan up to that best height.
In this commit, we ensure that if this is the first time that the
ChannelRouter is starting, then we set the pruned height+hash to the
current best height and hash. Otherwise, it's possible that we
attempt to update
the filter with a 0 prune height, which will restart a historical
rescan unnecessarily.
In this commit, we ensure that we only update the filter if we have a
non-zero chain view. Otherwise, a mini rescan may be kicked off
unnecessarily if we don't yet know of any channels in the greater
graph.