Extends the invalid payment details failure with the new accept height
field. This allows the sender to distinguish between a genuine invalid
details situation and a delay caused by intermediate nodes.
Currently the underlying array backing the hop's TLVRecords is modified
when combining custom records with the primitive forwarding info. This
commit uses a fresh slice to prevent modifications from mutating the
hop itself.
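To illustrate the aliasing issue, here is a minimal sketch (stand-in types,
not the actual lnd definitions) of combining records via a fresh slice so
the append can never write into the hop's backing array:

```go
package main

import "fmt"

// Record is a stand-in for a hop TLV record in this sketch.
type Record struct {
	Type  uint64
	Value []byte
}

// Hop is a simplified stand-in for the routing hop struct.
type Hop struct {
	TLVRecords []Record
}

// combineRecords returns the hop's custom records plus the primitive
// forwarding-info records. Copying into a fresh slice guarantees the
// append can never write into the array backing hop.TLVRecords.
func combineRecords(hop *Hop, fwd []Record) []Record {
	records := make([]Record, 0, len(hop.TLVRecords)+len(fwd))
	records = append(records, hop.TLVRecords...)
	return append(records, fwd...)
}

func main() {
	// Spare capacity makes a naive append(hop.TLVRecords, ...) reuse the
	// backing array; the fresh slice avoids that entirely.
	hop := &Hop{TLVRecords: make([]Record, 1, 4)}
	combined := combineRecords(hop, []Record{{Type: 2}})
	fmt.Println(len(hop.TLVRecords), len(combined)) // 1 2: hop unchanged
}
```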
This commit modifies paymentLifecycle so that it not only feeds
failures into mission control, but successes as well.
This allows for more accurate probability estimates. Previously,
the success probability for a successful pair and a pair with
no history was equal. There was no force that pushed towards
previously successful routes.
In this commit, we extend the path finding to be able to recognize when
a node needs the new TLV format, or the legacy format based on the
feature bits they expose. We also extend the `LightningPayment` struct
to allow the caller to specify an arbitrary set of TLV records which can
be used for a number of use-cases including various variants of
spontaneous payments.
In this commit, we extend the Hop struct to carry an arbitrary set of
TLV values, and add a new field that allows us to distinguish between
the modern and legacy TLV payload.
We add a new `PackPayload` method that will be used to encode the
required routing TLV fields along with any set of TLV fields that were
specified as part of path finding.
Finally, the `ToSphinxPath` has been extended to be able to recognize if
a hop needs the modern, or legacy payload.
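As a rough structural sketch (field names and the encoding below are
illustrative, not the exact lnd definitions), the extended hop might look
like this:

```go
package route

import (
	"encoding/binary"
	"io"
)

// Record stands in for a TLV record in this sketch.
type Record struct {
	Type  uint64
	Value []byte
}

// Hop sketches the fields discussed above.
type Hop struct {
	PubKeyBytes      [33]byte
	ChannelID        uint64
	AmtToForward     uint64 // msat
	OutgoingTimeLock uint32

	// LegacyPayload marks a hop that still requires the fixed-size
	// legacy onion payload instead of a TLV payload.
	LegacyPayload bool

	// TLVRecords holds any extra records supplied by the caller.
	TLVRecords []Record
}

// PackPayload encodes the required forwarding fields followed by any
// caller-supplied TLVRecords into w, in a simplified type/length/value
// form (the real encoding differs).
func (h *Hop) PackPayload(w io.Writer, nextChanID uint64) error {
	required := []Record{
		{Type: 2, Value: uint64Bytes(h.AmtToForward)},
		{Type: 4, Value: uint64Bytes(uint64(h.OutgoingTimeLock))},
		{Type: 6, Value: uint64Bytes(nextChanID)},
	}

	for _, r := range append(required, h.TLVRecords...) {
		if err := writeRecord(w, r); err != nil {
			return err
		}
	}
	return nil
}

func uint64Bytes(v uint64) []byte {
	var b [8]byte
	binary.BigEndian.PutUint64(b[:], v)
	return b[:]
}

func writeRecord(w io.Writer, r Record) error {
	hdr := append(uint64Bytes(r.Type), uint64Bytes(uint64(len(r.Value)))...)
	if _, err := w.Write(hdr); err != nil {
		return err
	}
	_, err := w.Write(r.Value)
	return err
}
```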
This commit overhauls the interpretation of failed payments. It changes
the interpretation rules so that we always apply the strongest possible
set of penalties, without making assumptions that would hurt good nodes.
Main changes are:
- Apply different rule sets for intermediate and final nodes. Both types
of nodes have different sets of failures that we expect. Penalize nodes
that send unexpected failure messages.
- Distinguish between direct payments and multi-hop payments. For direct
payments, we can infer more about the performance of our peer because we
trust ourselves.
- In many cases it is impossible for the sender to determine which of
the two nodes in a pair is responsible for the failure. In this
situation, we now penalize bidirectionally. This does not hurt the good
node of the pair, because only its connection to a bad node is
penalized.
- Previously we always penalized the outgoing connection of the
reporting node. This is incorrect for policy related failures: it could
also be that the reporting node received a wrongly crafted htlc from its
predecessor. By penalizing the incoming channel, we are sure to hit the
responsible node.
- FailExpiryTooSoon is a failure that could have been caused by any node
up to the reporting node by delaying forwarding of the htlc. We don't
know which node is responsible, therefore we now penalize all node pairs
in the route.
When an undecryptable failure comes back for a payment attempt, we
previously only penalized our own outgoing connection. However,
any node could have caused this failure. It is therefore better to
penalize all node connections along the route. Then at least we know for
sure that we will hit the responsible node.
This commit updates existing tests to not rely on mission control for
pruning of local channels. Information about local channels should
already be up to date before path finding starts. If not, the problem
should be fixed where bandwidth hints are set up.
This commit moves the payment outcome interpretation logic into a
separate file. Also, mission control isn't updated directly anymore, but
results are stored in an interpretedResult struct. This allows the
mission control state to be locked for a minimum amount of time and
makes it easier to unit test the result interpretation.
This commit converts several functions from returning a bool and a
failure reason to returning a single nillable failure reason. This
removes the confusion around how the two separate values should be
interpreted.
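A before/after sketch of the signature change (names are hypothetical,
not the exact lnd functions):

```go
package routing

// FailureReason is a stand-in for the routing failure reason type.
type FailureReason int

// Before: a bool plus a reason, whose valid combinations are easy to
// misread at the call site.
func interpretOld(attemptErr error) (final bool, reason FailureReason) {
	return false, 0
}

// After: a single nillable value. nil means "not a final failure, keep
// trying"; non-nil carries the reason the payment should be failed.
func interpretNew(attemptErr error) *FailureReason {
	return nil
}
```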
Previously mission control tracked failures on a per node, per channel basis.
This commit changes this to tracking on the level of directed node pairs. The goal
of moving to this coarser-grained level is to reduce the number of required
payment attempts without compromising payment reliability.
Align naming better with the lightning spec. The full name of the
failure (FailIncorrectOrUnknownPaymentDetails) is not used, because it
would cause too many long lines in the code.
This commit adds the BlockPadding value (currently 3) to sendpayment
calls so that if some blocks are mined while the htlc is in-flight, the
exit hop won't reject it.
The current approach iterates all channels in the graph in order to
filter those in need. This approach is time consuming, several seconds
on my mobile device for ~40,000 channels, while during this time the
db is locked in a transaction.
The proposed change is to use existing functionality that utilizes the
fact that channel updates are saved indexed by date. This method enables
us to go over only a small subset of the channels: those that were
updated before the "channel expiry" time, and to further filter them for
our needs.
The same graph that took several seconds to prune was pruned, after
the change, in several milliseconds.
In addition, for testing purposes, I added an Initiator field to the
testChannel structure to reflect the ChannelEdgePolicy direction.
If nodes return a channel policy related failure, they may get a second
chance. Our graph may not be up to date. Previously this logic was
contained in the payment session.
This commit moves that into global mission control and thereby removes
the last mission control state that was kept on the payment level.
Because mission control is not aware of the relation between payment
attempts and payments, the second chance logic is no longer based on
tracking second chances given per payment.
Instead a time based approach is used. If a node reports a policy
failure that prevents forwarding to its peer, it will get a second
chance. But it will get it only if the previous second chance was
long enough ago.
Also, those second chances are no longer dependent on whether an
associated channel update is valid. The node will get the second chance
regardless, to prevent creating a dependency between mission control and
the graph. Such a dependency would interfere with (future) replay of
history, because the graph may not be the same anymore at that point.
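A minimal sketch of such a time-based second chance gate, assuming an
illustrative one-minute interval (names and the exact keying are
simplified relative to mission control):

```go
package routing

import (
	"sync"
	"time"
)

// minSecondChanceInterval is the illustrative minimum time between two
// second chances granted to the same node.
const minSecondChanceInterval = time.Minute

type secondChancer struct {
	mu               sync.Mutex
	lastSecondChance map[[33]byte]time.Time
}

func newSecondChancer() *secondChancer {
	return &secondChancer{
		lastSecondChance: make(map[[33]byte]time.Time),
	}
}

// requestSecondChance reports whether a node that returned a policy
// failure should get another try. At most one second chance is granted
// per node per interval; otherwise the failure is penalized as usual.
func (s *secondChancer) requestSecondChance(now time.Time,
	fromNode [33]byte) bool {

	s.mu.Lock()
	defer s.mu.Unlock()

	last, ok := s.lastSecondChance[fromNode]
	if ok && now.Sub(last) < minSecondChanceInterval {
		return false
	}

	s.lastSecondChance[fromNode] = now
	return true
}
```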
This commit adds the pubkeyIndices map to the distanceHeap to avoid
duplicate entries on the heap. This happened in the earlier iteration
of the findPath algorithm and would cause the driving loop to
evaluate already evaluated entries when there was no need.
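A self-contained sketch of the dedup idea (simplified types, not the
actual lnd definitions): the heap records where each vertex currently
sits, so a relaxed distance updates the existing entry instead of
pushing a duplicate the main loop would re-evaluate later.

```go
package routing

import "container/heap"

type nodeWithDist struct {
	node [33]byte
	dist int64
}

// distanceHeap is a min-heap over path-finding distances with an index
// map that prevents duplicate entries for the same vertex.
type distanceHeap struct {
	nodes         []nodeWithDist
	pubkeyIndices map[[33]byte]int
}

func (d *distanceHeap) Len() int           { return len(d.nodes) }
func (d *distanceHeap) Less(i, j int) bool { return d.nodes[i].dist < d.nodes[j].dist }
func (d *distanceHeap) Swap(i, j int) {
	d.nodes[i], d.nodes[j] = d.nodes[j], d.nodes[i]
	d.pubkeyIndices[d.nodes[i].node] = i
	d.pubkeyIndices[d.nodes[j].node] = j
}
func (d *distanceHeap) Push(x interface{}) {
	n := x.(nodeWithDist)
	d.pubkeyIndices[n.node] = len(d.nodes)
	d.nodes = append(d.nodes, n)
}
func (d *distanceHeap) Pop() interface{} {
	n := d.nodes[len(d.nodes)-1]
	d.nodes = d.nodes[:len(d.nodes)-1]
	delete(d.pubkeyIndices, n.node)
	return n
}

// PushOrFix inserts a vertex or, if it is already queued, updates its
// distance in place and restores the heap invariant.
func (d *distanceHeap) PushOrFix(n nodeWithDist) {
	if idx, ok := d.pubkeyIndices[n.node]; ok {
		d.nodes[idx] = n
		heap.Fix(d, idx)
		return
	}
	heap.Push(d, n)
}
```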
This commit modifies the nodeWithDist struct to use a route.Vertex
instead of a *channeldb.LightningNode. This change, coupled with
the new ForEachNodeChannel function, allows the findPath Dijkstra's
algorithm to cut down on database lookups since we no longer need
to call the FetchOtherNode function.
This commit moves the call to PruneGraph outside of the loop
that collates all of the spentOutputs. With this change, if
a node has been offline for a long period of time, resyncing
with the chain no longer takes up as much memory (1MB vs 200MB
in some cases) or time. Previously, PruneGraph was called
for every block and allocated a very large map further down
in the pruneGraphNodes function. Now, pruneGraphNodes is only
called once.
Since nilling the pubkey curve will lead to a nil-pointer exception if
the key is later used for signature verification, we make sure to make a
copy before nilling and spewing.
This commit moves the default timeout out of router and thereby fixes a
bug that caused SendToRoute to not return the actual error, but a
timeout result instead. SendToRoute only tries a single route, so a
timeout should never happen.
This commit exposes the three main parameters that influence mission
control and path finding to the user as command line or config file
flags. It allows for fine-tuning for optimal results.
This commit adds an assertion to the SendToRoute test that the payment
value stored to the DB during SendToRoute execution is the correct one.
This assertion would fail before the previous commit that fixed a
missing value initialization.
Previously we would mistakenly use the payment value from the dummy
LightningPayment struct, which would obviously be 0 always. Now we
instead calculate the value from the given route.
Previously every payment had its own local mission control state which
was in effect only for that payment. In this commit most of the local
state is removed and payments all tap into the global mission control
probability estimator.
Furthermore the decay time of pruned edges and nodes is extended, so
that observations about the network can better benefit future payment
processes.
Last, the probability function is transformed from a binary output to a
gradual curve, allowing for a better trade-off between candidate routes.
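One way to picture such a gradual curve, as a hedged sketch (the
half-life form is illustrative, not necessarily the exact estimator):

```go
package routing

import (
	"math"
	"time"
)

// successProbability sketches a gradual (rather than binary) estimate:
// right after a failure the probability is zero, and it recovers toward
// the a-priori value with the configured half-life.
func successProbability(apriori float64, lastFailure, now time.Time,
	penaltyHalfLife time.Duration) float64 {

	if lastFailure.IsZero() {
		// No history for this pair: fall back to the a-priori estimate.
		return apriori
	}

	age := now.Sub(lastFailure)
	penalty := math.Pow(2, -age.Hours()/penaltyHalfLife.Hours())
	return apriori * (1 - penalty)
}
```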
This PR replaces the previously used edge and node ignore lists in path
finding with a probability-based system. It modifies path finding so that
it not only compares routes on fee and time lock, but also takes route
success probability into account.
Allowing routes to be compared based on success probability is achieved
by introducing a 'virtual' cost of a payment attempt and using that to
translate probability into another cost factor.
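A sketch of how a virtual attempt cost can fold probability into the
distance metric (the exact formula used in the implementation may
differ):

```go
package routing

import "math"

// probabilityAwareDist combines the usual fee and time-lock cost with a
// virtual attempt cost scaled by the expected number of attempts (1/p):
// a cheaper route with a poor success probability can now lose to a
// slightly more expensive but more reliable one.
func probabilityAwareDist(feeAndTimeLockCost int64, successProb float64,
	attemptCost int64) int64 {

	if successProb <= 0 {
		return math.MaxInt64
	}
	return feeAndTimeLockCost + int64(float64(attemptCost)/successProb)
}
```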
TestRouterPaymentStateMachine tests that the router interacts as
expected with the ControlTower during a payment lifecycle, such that
payment attempts are not sent twice to the switch, and results are
handled after a restart.
This commit makes the router use the ControlTower to drive the payment
life cycle state machine, to keep track of active payments across
restarts. This lets the router resume payments on startup, such that
their final results can be handled and stored when ready.
This encapsulates all state needed to resume a payment from any point of
the payment flow, and that must be shared between the different stages
of the execution. This is done to prepare for breaking the send loop
into smaller parts, and being able to resume the payment from any point
from persistent state.
In this commit we move the handover of the deobfuscator from the router
to the switch from the moment the payment is initiated to the moment the
result is queried.
We do this because only the router can recreate the deobfuscator after a
restart, and we are preparing for being able to handle results across
restarts.
Since the deobfuscator cannot be nil anymore, we can also get rid of
that special case.
This lets us distinguish a critical error from an actual payment result
(success or failure). This is important since we know that we can only
attempt another payment when a final result from the previous payment
attempt is received.
This commit moves the responsibility of generating a unique payment ID
from the switch to the router. This will make it easier for the router
to keep track of which HTLCs were successfully forwarded onto the
network, as it can query the switch for existing HTLCs as long as the
paymentIDs are kept.
The router is expected to maintain a map from paymentHash->paymentID,
such that they can be replayed on restart. This also lets the router
check the status of a sent payment after a restart, by querying the
switch for the paymentID in question.
This commit reevaluates the router's quit channel between each block
during the initial call to syncGraphWithChain, which, in the worst case,
may have to scan several thousand blocks on startup if the node has not
been active for some time. Without this, attempting to stop the daemon
will not exit until the rescan has completed, which for certain backends
could be several hours.
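A sketch of the loop shape (names illustrative): the quit channel is
polled between blocks so Stop() returns promptly even mid-rescan.

```go
package routing

import "errors"

// ErrRouterShuttingDown is returned when a chain sync is interrupted by
// shutdown (name illustrative).
var ErrRouterShuttingDown = errors.New("router shutting down")

// syncToHeight walks from the prune height up to the best height,
// checking the quit channel between every block so a shutdown request
// does not have to wait for the full rescan to finish.
func syncToHeight(quit <-chan struct{}, pruneHeight, bestHeight uint32,
	processBlock func(height uint32) error) error {

	for height := pruneHeight + 1; height <= bestHeight; height++ {
		select {
		case <-quit:
			return ErrRouterShuttingDown
		default:
		}

		if err := processBlock(height); err != nil {
			return err
		}
	}
	return nil
}
```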
In this commit, we update the process that we use to generate a sphinx
packet to send our onion routed HTLC. Due to recent changes in the
`sphinx` package we use, we now need to use a new PaymentPath struct. As
a result, it no longer makes sense to split up the nodes in a route and
their per hop payloads as they're now in the same struct. All tests have
been updated accordingly.
In this commit, we make our findPath function use an edge's MaxHTLC as
its available bandwidth instead of its Capacity. We do this as it's
possible for the capacity of an edge to not exist when operating as a
light client. For channels that do not support the MaxHTLC optional
field, we'll fall back to using the edge's Capacity.
In this commit, we refactor DeleteChannelEdge to use ChannelIDs rather
than ChannelPoints. We do this as the only use of DeleteChannelEdge is
when we are pruning zombie channels from our graph. When running under a
light client, we are unable to obtain the ChannelPoint of each edge due
to the expensive operations required to do so. As a stop-gap, we'll
resort towards using an edge's ChannelID instead, which is already
gossiped between nodes.
Since light clients no longer have access to an edge's capacity, they
are unable to validate whether the max HTLC value for an updated edge
policy respects the capacity limit. As a stop-gap, we'll skip this
check.
This serves as a stop-gap for light clients as blocks need to be
downloaded from the P2P network, and even with caches, would be too
costly for them to verify. Doing this has two side effects however:
we'll no longer know of the channel capacity and outpoint, which are
essential for some of lnd's responsibilities.
In this commit, we disable attempting to determine when a channel has
been closed out on-chain whenever AssumeChannelValid is active. Since
the flag indicates that performing this operation is expensive, we do
this as a temporary optimization until we can include proofs of channels
being closed in the gossip protocol.
With this change, the only way for channels to be removed from the graph
is once they're considered zombies: this can happen when both edges of a
channel have their disabled bits set, or when both edges haven't had an
update within the past two weeks.
To ensure we don't mark an edge as live again just because an update
with a fresh timestamp was received, we'll ensure that we reject any
new updates for zombie channels if they remain disabled when running
with AssumeChannelValid.
In this commit, we add an additional heuristic when running with
AssumeChannelValid. Since AssumeChannelValid being present assumes that
we're not able to quickly determine whether channels are valid, we also
assume that any channels with the disabled bit set on both sides are
considered zombie. This should be relatively safe to do, since the
disabled bits are usually set when the channel is closed on-chain. In
the case that they aren't, we'll have to wait until both edges haven't
had a new update within two weeks to prune them.
We do this to ensure we don't prune too aggressively, as it's possible
that we've only received the channel announcement for a channel, but not
its accompanying channel updates.
This commit removes the QueryRoutes route cache. It is causing wrong
routes to be returned because not all of the request parameters are
stored.
The cache allowed high frequency QueryRoutes calls to the same
destination and with the same amount to be returned fast. This behaviour
can also be achieved by caching the request on the client side. In case
a route is invalidated, for example because of a channel update, the
subsequent SendToRoute call will fail. This is a trigger to call
QueryRoutes again for a fresh route.
In this commit, we extend the graph's FetchChannelEdgesByID and
HasChannelEdge methods to also check the zombie index whenever the edge
to be looked up doesn't exist within the edge index. We do this to
signal to callers that the edge is known, but only as a zombie, and the
only information that we have about the edge are the node public keys of
the two parties involved in the edge.
In the event that an edge does exist within the zombie index, we make
an additional check on edge policies to ensure they are not within the
router's pruning window, indicating that it is a fresh update.
In this commit, we update the build to point to the latest version of
neutrino and btcwallet. The latest version of neutrino includes a number
of bug fixes and new features like reliable transaction broadcast. The
latest version of btcwallet contains a number of bug fixes related to
properly removing invalid transactions from its database.
Currently public keys are represented either as a 33-byte array (Vertex) or as a
btcec.PublicKey struct. The latter isn't usable as a map key and cannot
easily be compared. Therefore the 33-byte array
representation is used predominantly throughout the code base.
This commit converts the argument types of source and target nodes for
path finding to Vertex. Path finding executes no crypto operations and
using Vertex simplifies the code.
Additionally, it prepares for the path finding source parameter to be
exposed over rpc in a follow up commit without requiring conversion back
and forth between Vertex and btcec.PublicKey.
This commit allows the execution of QueryRoutes to be controlled using
lists of black-listed edges and nodes. Any path returned will not pass
through the edges and/or nodes on the list.
In this commit, we update the path finding logic to
ignore a channel if the HTLC value (including the fees
at the point) exceeds the max HTLC value (if set) of the
link.
Since the MaxHTLC field was recently added to the ChannelEdgePolicy struct,
and the Flags field was broken into ChannelFlags and MessageFlags, the
test edge policies should be updated accordingly.
This commit is a step to split the lnwallet package. It puts the Input
interface and implementations in a separate package along with all their
dependencies from lnwallet.
In this commit, we deprecate the `IncorrectHtlcAmount` onion error.
We'll still decode this error to use when retrying paths, but we'll no
longer send this ourselves. The `UnknownPaymentHash` error has been
amended to also include the value of the payment as well. This allows us
to worry about one less error.
In this commit, we ensure that when we update an edge
as a result of a ChannelUpdate being returned from an
onion failure, the max htlc portion of the channel update
is included in the edge update.
In this commit, we alter the ValidateChannelUpdateAnn function in
ann_validation to validate a remote ChannelUpdate's message flags
and max HTLC field. If the message flag is set but the max HTLC
field is not set or vice versa, the ChannelUpdate fails validation.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit:
* we partition lnwire.ChanUpdateFlag into two (ChanUpdateChanFlags and
ChanUpdateMsgFlags), from a uint16 to a pair of uint8's
* we rename the ChannelUpdate.Flags to ChannelFlags and add an
additional MessageFlags field, which will be used to indicate the
presence of the optional field HtlcMaximumMsat within the ChannelUpdate.
* we partition ChannelEdgePolicy.Flags into message and channel flags.
This change corresponds to the partitioning of the ChannelUpdate's Flags
field into MessageFlags and ChannelFlags.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit we introduce pruning of channel edges instead of channels.
Channel failures apply to a single direction and it is unnecessarily
restricting to prune both directions.
Hop maps were used in a test to verify the population of the hop map
itself and further only in a single function (getFailedChannelID).
Rewrote that function and removed the hop maps completely.
There is the general assumption that channel edge policy nodes are
ordered such that the node1 pubkey is smaller than the key of node 2. In
the test graph, this assumption didn't hold. This commit fixes the test
graph and also adds a check to prevent this from happening again.
This commit adds a new test that checks that the bandwidth hints are
considered correctly for local channels, and that disable flags are
ignored in this case.
To decouple our own path finding from the graph state, we don't consider
the disable bit when attempting to use local channels. Instead the
bandwidth hints will be zero for local inactive channels.
We also modify the unit test to check that the disable flag is ignored
for local edges.
Fixes the following issues:
- If FailFeeInsufficient contains an invalid channel update, it is not
possible to properly add the channel to the failed channels set.
- FailAmountBelowMinimum may apply a channel update, but does not retry.
- FailIncorrectCltvExpiry immediately prunes the vertex without
trying one more time.
In this commit, the logic for all three policy related errors is
aligned.
In this commit we add a check to HtlcSatifiesPolicy to verify that the
time lock for the outgoing htlc that is requested in the onion packet
isn't too far in the future.
Without this check, anyone could force an unreasonably long time lock on
the forwarding node.
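Sketched as a standalone check (the constant name and bound are
illustrative, not the exact lnd values):

```go
package routing

import "fmt"

// maxOutgoingCltvExpiry is an illustrative upper bound, in blocks, on
// how far in the future an outgoing HTLC's time lock may be.
const maxOutgoingCltvExpiry = 5000

// checkOutgoingCltv rejects HTLCs whose requested outgoing time lock is
// unreasonably far beyond the current height, so a sender cannot force
// this node to lock up funds for an excessive amount of time.
func checkOutgoingCltv(outgoingTimeout, currentHeight uint32) error {
	if outgoingTimeout > currentHeight+maxOutgoingCltvExpiry {
		return fmt.Errorf("outgoing expiry %d too far in the future "+
			"(current height %d)", outgoingTimeout, currentHeight)
	}
	return nil
}
```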
In this commit the dependency of unmarshallRoute on edge policies being
available is removed. Edge policies may be unknown and reported as nil.
SendToRoute does not need the policies, but it does need pubkeys of the
route hops. In this commit, unmarshallRoute is modified so that it
takes the pubkeys from edgeInfo instead of channelEdgePolicy.
In addition to this, the route structure is simplified: there is no
longer a connection to the database at that point, and fees are
determined based on incoming and outgoing amounts.
Previously, gossiper was the only object that validated channel
updates. Because updates can also be received as part of a
failed payment session in the routing package, validation logic
needs to be available there too. Gossiper already depends on
routing and having routing call the validation logic inside
gossiper would be a circular dependency. Therefore the validation
was moved to routing.
We make sure to return an error other than ErrIgnored, as ErrIgnored is
expected to only be returned for updates where we already have the
necessary information in the database.
In case of a channel ID found in the rejectCache, there was a
possibility that we had rejected an invalid update for this channel
earlier, and when attempting to add the current update we wouldn't
distinguish the failure to add from an outdated/ignored update.
The commit ensures that for every channel, there will always
be two entries in the edges bucket. If the policy from one or
both ends of the channel is unknown, it is marked as such.
This allows efficient lookup of incoming edges. This is
required for backwards payment path finding.
In this commit, we introduce a nice optimization with regards to lnd's
interaction with a bitcoind backend. Within lnd, we currently have three
different subsystems responsible for watching the chain: chainntnfs,
lnwallet, and routing/chainview. Each of these subsystems has an active
RPC and ZMQ connection to the underlying bitcoind node. This would incur
a toll on the underlying bitcoind node and would cause us to miss ZMQ
events, which are crucial to lnd. We remedy this issue by sharing the
same connection to a bitcoind node between the different clients within
lnd.
In this commit, we modify the test to explicitly give the neutrino
backend more time to catch up compared to the RPC backends. We do this
as a recent change has been made in the neutrino backend to wait for the
filter headers to finish syncing before proceeding with the rescan
itself. As a result, we'll need to account for this in the test and
sleep enough to give the backend a chance to catch up.
In this commit, we ensure that the neutrino backend meets the target
interface, and also we update the API usage for the internal neutrino
rescan struct to use the new InputWithScript struct.
In this commit, we update the existing UpdateFilter method to take the
new channeldb.EdgePoint struct in place of the prior wire.OutPoint. We
must do this as the caller now typically has this type due to the
preparation to enable lnd to be able to be compatible with the new
neutrino protocol.
In this commit, we fix a slight race condition that can occur when we go
to add a shell node for a node announcement, but then right afterwards,
a new block arrives that causes us to prune an unconnected node. To
ensure this doesn't happen, we now add shell nodes within the same db
transaction as AddChannelEdge. This ensures that the state is fully
consistent and shell nodes will be added atomically along with the new
channel edge.
As a result of this change, we no longer need to add shell nodes within
the ChannelRouter, as the database will take care of this operation as
it should.
In this commit, we fix an existing bug that could at times lead to a
panic if a user manually crafts a route via SendToRoute, and that route
results in a payment error. The fix is simple: create the map even
though it won't be used in the sessions since the user is feeding the
router manual routes.
In this commit, we modify the granularity of the locking
around the filterMtx in the bitcoind chainview, such that
we only lock once per block connected or filter update.
Currently, we acquire and release the lock for every
update to the map.
We also fix a bug that would cause us to not fully remove
all previous outpoints spent by a txn when doing manual filtering,
as we previously would only remove the first output
detected.
In this commit, we modify the granularity of the locking
around the filterMtx in the btcd chainview, such that we
only lock once per block connected or filter update.
Currently, we acquire and release the lock for every
update to the map.
We also fix a bug that would cause us to not fully remove
all previous outpoints spent by a txn when doing manual filtering,
as we previously would only remove the first output
detected.
In this commit, we update the generateSphinxPacket to use newLogClosure
to delay the spew evaluation until log print time. Before this commit,
even if we weren't on the trace logging level, the spew call would
always be evaluated.
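The idea, as a minimal reconstruction (illustrative, not the exact
helper definition):

```go
package routing

// logClosure defers rendering of an expensive string (such as a spew
// dump) until the logger actually formats the argument.
type logClosure func() string

// String satisfies fmt.Stringer, so the closure only runs at print time.
func (c logClosure) String() string { return c() }

func newLogClosure(c func() string) logClosure { return c }

// Usage: the spew.Sdump call below only executes if trace logging is on
// and the logger formats the value.
//
//	log.Tracef("Onion packet: %v", newLogClosure(func() string {
//		return spew.Sdump(sphinxPacket)
//	}))
```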
In this commit, a new weight function is introduced. It gives the time
lock a meaningful effect on route selection. It also removes the
squaring of the fee term, which led to suboptimal routes.
Unit test added that covers the weight function and asserts that the
lowest fee route is indeed returned.
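A sketch of a weight of this shape (the constant and scaling are
illustrative): the fee enters linearly, and the time lock contributes a
term proportional to both the locked amount and the CLTV delta.

```go
package routing

// riskFactorBillionths is an illustrative weighting of locked-up value
// per block of time lock, expressed in billionths.
const riskFactorBillionths = 15

// edgeWeight returns the cost of traversing an edge: the fee plus a
// time-lock penalty. The fee is no longer squared.
func edgeWeight(lockedAmtMsat, feeMsat int64, timeLockDelta uint16) int64 {
	timeLockPenalty := lockedAmtMsat * int64(timeLockDelta) *
		riskFactorBillionths / 1_000_000_000

	return feeMsat + timeLockPenalty
}
```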
This commit extends the unit tests for NewRoute with checks
on the total time lock for a route as well as the expected time
lock values for every hop along the route.
This commit fixes the logic inside the newRoute function to
address the following problems:
- Fee calculation for a hop does not include the fee that needs
to be paid to the next hop.
- The incoming channel capacity "sanity" check does not include
the fee to be paid to the current hop.
In this commit, we fix the incorrect expiry values in the
spec_example.json test file. Many of the time locks were incorrect which
allowed bugs within the path finding logic related to CLTV deltas to go
un-detected.
In this commit, we fix an existing bug in the newRoute method. Before
this commit we would use the time lock delta of the current hop to
compute the outgoing time lock for the current hop. This is incorrect as
the time lock delta of the _outgoing_ hop should be used, as this is
what we're paying for "transit" on. This is a bug left over from when we
switched the meaning of the CLTV delta on the ChannelUpdate message
sometime last year.
The fix is simple: use the CLTV delta of the prior (later in the route)
hop.
- Extend SendRequest and QueryRoutesRequest protos
- newRoute function takes fee limit and cuts off routes that exceed it
- queryRoutes, payInvoice and sendPayment commands take the feeLimit inputs and pass them down to newRoute
- When no feeLimit is included, don't enforce any feeLimits at all (by setting feeLimit to maxValue)
In this commit, we modify the recent refactoring of the mission control
sub-system to overload the existing payment session, rather than create
a brand new one. This allows us to re-use more of the existing logic, and
also feedback into mission control the failures incurred by any user
selected routes.
In this commit, we introduce a new method to the channel router's config
struct: QueryBandwidth. This method allows the channel router to query
for the up-to-date available bandwidth of a particular link. In the case
that this link emanates from/to us, then we can query the switch to see
if the link is active (if not bandwidth is zero), and return the current
best estimate for the available bandwidth of the link. If the link
isn't one of ours, then we can thread through the total maximal
capacity of the link.
In order to implement this, the missionControl struct will now query the
switch upon creation to obtain a fresh bandwidth snapshot. We take care
to do this in a distinct db transaction in order to not introduce a
circular waiting condition between the mutexes in bolt and the channel
state machine.
The aim of this change is to reduce the number of unnecessary failures
during HTLC payment routing as we'll now skip any links that are
inactive, or just don't have enough bandwidth for the payment. Nodes
that have several hundred channels (all of which in various states of
activity and available bandwidth) should see a nice gain from this w.r.t
payment latency.
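A sketch of the hook's shape (field and parameter types are illustrative,
not the exact lnd signature):

```go
package routing

// Config sketches the router config addition described above.
type Config struct {
	// QueryBandwidth returns the best estimate, in millisatoshis, of
	// the usable bandwidth of the given channel. For our own channels
	// this consults the switch (zero when the link is inactive); for
	// remote channels it falls back to the advertised capacity.
	QueryBandwidth func(chanID uint64) uint64
}
```

During path finding, an edge whose reported bandwidth cannot carry the
payment would then simply be skipped instead of burning a failed
attempt.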
This commit alters the neutrino chainview such that it
caches the filter entries corresponding to watched
outpoints at the moment they are added to the filter.
Previously, we would rederive each filter entry when
reconstructing the relevant filter entries, which
would lead to unnecessary work for the garbage collector. Now, each
entry is created at most once and reused across subsequent
reconstructions.
Adds a new error ErrVBarrierShuttingDown that is returned
from WaitForDependants if the validation barrier's quit
chan is closed. This allows any blocked goroutines to
distinguish whether the dependent task has been completed,
or if validation should be aborted entirely.
This commit improves the shutdown of the router's
pending validation tasks, by ensuring the pending
tasks exit early if the validation barrier
receives a shutdown request.
Currently, any goroutines blocked by WaitForDependants
will continue execution after a shutdown is signaled.
This may lead to unexpected behavior as the relation
between updates is no longer upheld. It also has the
side effect of slowing down shutdown, since we
continue to process the remaining updates.
To remedy this, WaitForDependants now returns an error
that signals if a shutdown was requested. The blocked
goroutines can exit early upon seeing this error,
without also signaling completion of their task to
the dependent tasks, which will now properly wait to read the
validation barrier's quit signal.
In this commit, we update the TestSendPaymentErrorPathPruning test to
reflect the new behavior w.r.t how we respond to UnknownPeer errors. In
this new test, we expect that we'll find an alternative route in light
of us getting an UnknownPeer error "pointing" to our destination node.
In this commit we fix a lingering bug in the Mission Control logic we
execute in response to the FailUnknownNextPeer error. Historically, we
would treat this as the _next_ node not being online. As a result, we
would then prune away the vertex from the current reachable graph all
together. It was recently realized, that this would at times be a bit
_too_ aggressive if the channel we attempt to route over was faulty,
down, or the incoming node had connectivity issues with the outgoing
node.
In light of this realization, we'll now instead only prune the _edge_
that we attempted to route over. This ensures that we'll continue to
explore the possible edges. Additionally, this guards us against failure
modes where nodes report FailUnknownNextPeer to other nodes in an
attempt to more closely control our retry logic.
This change is a stop gap on the path to a more intelligent set of
autopilot heuristics.
Fixes #1114.
In this commit, we modify our path finding algorithm to take an
additional set of edges that are currently not known to us that are
used to temporarily extend our graph during a payment session.
These edges should assist the sender of a payment in successfully
constructing a path to the destination.
These edges should usually represent private channels, as they are not
publicly advertised to the network for routing.
In this commit, we introduce the ability for payment sessions to store
an additional set of edges that can be used to assist a payment in
successfully reaching its destination.
In this commit, we add a new field of routing hints to payments over the
Lightning Network. These routing hints can later be used within the path
finding algorithm in order to craft a path that will reach the
destination successfully.
In this commit, we modify the way we handle FeeInsufficientErrors to
more aggressively route around nodes that repeatedly return the same
error to us. This will ensure we skip older nodes on the network which
are running a buggier older version of lnd. Eventually most nodes will
upgrade to this new version, making this change less needed.
We also update the existing test to properly use a multi-hop route to
ensure that we route around the offending node.
In this commit, we add a new node to the current default test graph
that we use for our path finding tests. This new node connects roasbeef
to sophon via a new route with very high fees. With this new node and
the two channels it adds, we can properly test that we’ll route around
failures that we run into during payment routing.
In this commit, we add vertex pruning for any non-final CLTV error.
Before this commit, we assumed that any source of this error was due to
the local node setting the incorrect time lock. However, it’s been
recently noticed on main net that there’re a set of nodes that seem to
not be properly scanned to the chain. Without this patch, users aren’t
able to route successfully, since at the moment we’ll stop all path
finding attempts if we encounter this error.
In this commit, we address a number of edge cases that were unaccounted
for when responding to errors that can be sent back due to an HTLC
routing failure. Namely:
* We’ll no longer stop payment attempts if we’re unable to apply a
channel update; instead, we’ll log the error, prune the channel and
continue.
* We’ll now remember which channels were pruned due to insufficient
fee errors. If we ever get a repeat fee error from a channel, then we
prune it. This ensures that we don’t get stuck in a loop due to a node
continually advertising the same fees.
* We also correct an error in which node we’d prune due to a
temporary or permanent node failure. Before this commit, we would prune
the next node, when we should actually be pruning the node that sent us
the error.
Finally, we also add a new test to exercise the fee insufficient error
handling and channel pruning.
Fixes #865.
In this commit, we add a new field to the LightningPayment struct:
PayAttemptTimeout. This new field allows the caller to control exactly
how much time should be spent attempting to route a payment to the
destination. The default value we’ll use is 60 seconds, but callers are
able to specify a different value. Once the timeout has passed, we’ll
abandon the payment attempt, and return an error back to the original
caller.
In this commit, we add a set of new methods to check the freshness of
an edge/node. This will allow callers to skip expensive validation in
the case that the router already knows of an item, or knows of a
fresher version of that item.
A set of tests have been added to ensure basic correctness of these new
methods.
In router_test, FindRoutes was passing DefaultFinalCLTVDelta where
numPaths is expected. This commit passes a default numPaths to the
FindRoutes calls so that the final CLTV delta is passed correctly.
In this commit, we modify the edgeWeight function that’s used within
the findPath method to weight fees more heavily than the time lock
value at an edge. We do this in order to greedily prefer lower fees
during path finding. This is a simple stop gap in place of more complex
weighting parameters that will be investigated later.
We also modify the edge distance to use an int64 rather than a float.
Finally, an additional test has been added in order to exercise this
new change. Before the commit, the test was failing as we preferred the
route with the lower total time lock.
In this commit, we modify the caching structure to return a set of
cached routes for a request if the number of routes requested is less
than or equal to the number of cached routes.
In this commit, we modify the findPaths method to take the max number
of routes to return. With this change, FindRoutes can eventually itself
also take a max number of routes in order to make the function useable
again.
In order to reduce high CPU utilization during the initial network view
sync, we slash down the total number of active in-flight jobs that can
be launched.
In this commit, we now account for a case where a node sends us a
FailPermanentChannelFailure during a payment attempt. Before this
commit, we wouldn’t properly prune the edge to avoid re-using it. We
remedy this by properly attempting to prune the edge if possible.
Future changes will send a FailPermanentChannelFailure in the case that
we need to go on-chain for an outgoing HTLC, and cancel back the
incoming HTLC.
In this commit, we fix an existing bug that could cause lnd to crash if
we sent a payment, and the *destination* sent a temp channel failure
error message. When handling such a message, we’ll look in the nextHop
map to see which channel was *after* the node that sent the payment.
However, if the destination sends this error, then there’ll be no entry
in this map.
To address this case, we now add a prevHop map. If we attempt to lookup
a node in the nextHop map, and they don’t have an entry, then we’ll
consult the prevHop map.
We also update the set of tests to ensure that we’re properly setting
both the prevHop map and the nextHop map.
This commit adds synchronization around the processing
of multiple ChannelEdgePolicy updates for the same
channel ID at the same time.
This fixes a bug that could cause the database access
HasChannelEdge to be out of date when the goroutine
came to the point where it was calling UpdateEdgePolicy.
This happened because a second goroutine would have
called UpdateEdgePolicy in the meantime.
This bug was quite benign: if it happened at
runtime, we would eventually get the lost
ChannelEdgePolicy again, either from a peer
sending it to us, or after failing a payment
because we were using outdated information.
However, it would cause some of the tests to
flake, since losing routing information made
payments we expected to go through fail.
This is fixed by introducing a new mutex type
that takes an additional (id uint64) parameter
when locking and unlocking, keeping an internal
map tracking which IDs are currently locked and
the count of goroutines waiting for the mutex.
This ensures we can still process updates
concurrently, only preventing updates with the
same channel ID from running concurrently.
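A minimal sketch of such a keyed mutex, assuming a reference-counted
map guarded by a regular mutex (not the exact implementation):

```go
package multimutex

import "sync"

type entry struct {
	mu  sync.Mutex
	ref int // goroutines holding or waiting for this ID
}

// Mutex serializes critical sections per uint64 ID, while letting
// different IDs proceed concurrently.
type Mutex struct {
	mtx     sync.Mutex
	entries map[uint64]*entry
}

func New() *Mutex {
	return &Mutex{entries: make(map[uint64]*entry)}
}

// Lock blocks until no other goroutine holds the lock for this ID.
func (m *Mutex) Lock(id uint64) {
	m.mtx.Lock()
	e, ok := m.entries[id]
	if !ok {
		e = &entry{}
		m.entries[id] = e
	}
	e.ref++
	m.mtx.Unlock()

	e.mu.Lock()
}

// Unlock releases the lock for this ID, removing its bookkeeping entry
// once no goroutine holds or waits for it.
func (m *Mutex) Unlock(id uint64) {
	m.mtx.Lock()
	e := m.entries[id]
	e.ref--
	if e.ref == 0 {
		delete(m.entries, id)
	}
	m.mtx.Unlock()

	e.mu.Unlock()
}
```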
In this commit, we modify the pruning semantics of the missionControl
struct. Before this commit, on each payment attempt, we would fetch a
new graph pruned view each time. This served to instantly propagate any
detected failures to all outstanding payment attempts. However, this
meant that we could at times get stuck in a retry loop if sends take a
few seconds: we may prune an edge, try another, and by then the original
edge has become unpruned again.
To remedy this, we now introduce the concept of a paymentSession. The
session will start out as a snapshot of the latest graph prune view.
Any payment failures are now reported directly to the paymentSession
rather than missionControl. The rationale for this is that
edges/vertexes pruned as a result of failures will never decay for a
local payment session, only for the global prune view. With this in
place, we ensure that the pruned view only grows for a session.
Fixes #536.
Before this commit, we wouldn’t properly set the TotalFees attribute.
As a result, our sorting algorithm at the end to select candidate
routes would simply maintain the time-lock order rather than also sort
by total fees. This commit fixes this issue and also allows the test
added in the prior commit to pass.
This commit fixes an existing bug within the ChannelRouter. Prior to
this commit, if the chain view skipped blocks or for some reason we had
a gap in blocks delivered, then we would simply accept them. This had
the potential to cause us to miss on-chain channel closure events. To
remedy this, we won’t process any blocks whose heights aren’t
*strictly* increasing.
A longer term fix would be to have the ChainView take a block height,
and re-dispatch any notifications from that height to the current
height.
In this commit, we implement adherence of the disabled bit within a
ChannelUpdate during path finding. If a channel is marked as disabled,
then we won’t attempt to route through it. A test has been added to
exercise this new check.
In this commit, we update path finding to skip an edge if the amount
we’re trying to route through it is below the MinHTLC (in mSAT) value
for that node. We also add a new test to exercise this behavior. In
order for our test to work properly, we’ve modified the JSON to make
the edge to Goku have a higher min HTLC value.
In this commit, we modify the high value passed into UpdateFilter upon
restart. Before this commit, we would pass in the prune height, which
would cause a full rescan within the FilteredChainView if the best
height was greater than the prune height. This was redundant as we would
shortly carry out a manual rescan in the method below. To fix this, we
now pass in the bestHeight; this isn’t an issue as the
syncGraphWithChain method will manually scan up to that best height.
In this commit, we add a new abstraction, the ValidationBarrier. This
struct will be used to allow parallel validation of announcements
within both the AuthenticatedGossiper as well as the ChannelRouter.
Naively validating the announcements in parallel would run into issues,
as it would be possible to validate an update announcement before
validating the channel announcement itself. We solve this by creating a
waiting dependency using the ValidationBarrier to ensure that the
dependent jobs wait until their parents have been fully validated.
In this commit we ensure that if this is the first time that the
ChannelRouter is starting, then we set the pruned height+hash to the
current best height. Otherwise, it’s possible that we attempt to update
the filter with a 0 prune height, which will restart a historical
rescan unnecessarily.
In this commit we ensure that we only update the filter, if we have a
non-zero chain view. Otherwise, a mini rescan may be kicked off
unnecessarily if we don’t yet know of any channels in the greater
graph.
Run go fmt so config file is formatted correctly. Also rename
newVertex to NewVertex in pathfind_test and notifications_test
as it is now exported from the routing package.
For Part 1 of Issue #275. Create isolated private struct in
networkHandler goroutine that will de-duplicate
announcements added to the batch. The struct contains maps
for each of channel announcements, channel updates, and
node announcements to keep track of unique announcements.
The struct has a Reset method to reset stored announcements, an
AddMsg(lnwire.Message) method to add a new message to the current
batch, and a Batch method to return the set of de-duplicated
announcements.
Also fix a few minor typos.
This commit alters the behavior of the router's logic on
startup, ensuring that the chain view is filtered using
the router's latest prune height. Before, the chain was
filtered using the bestHeight variable, which was
uninitialized, benignly forcing a rescan from genesis.
In tracking down this, we realized that we should
actually be using the prune height, as this is
representative of the channel view loaded from disk.
The best height/hash are now only used during
startup to determine if we are out of sync.
In this commit we fix an existing bug within the ChannelRouter. Before
this commit, we would sync our graph prune state, *then* update the
chain filter. This is incorrect as the blocks we manually pruned may
have included channel closing transactions. As a result, we would miss
the pruning of a set of channels, and assume that they were still
active.
In this commit, we fix this by reversing the order: we first update the
chain filter and THEN sync the channel graph.
In this commit we add a new test to the set of unit tests for the
ChannelRouter: TestRouterChansClosedOfflinePruneGraph. This tests that
if channels are closed while the ChannelRouter is down, then upon
restart the channels are properly recognized as being closed.
In this commit, we add a Reset() method to the mockChainView struct.
With this new method tests are able to fully simulate a restart of the
ChannelRouter. This is necessary as the FilteredChainView instances are
assumed to be stateless, and don’t write their state to disk before a
restart.
This commit adds a test for the FilteredChainView interfaces,
making sure they notify about disconnected/connected blocks
in the correct order during a reorg.
This commit makes use of the blockEventQueue within the neutrino
implementation of FilteredChainView to ensure connected and
disconnected blocks are consumed in order by the reader.
It also specifies that neutrino is not to send disconnected blocks
notifications during rescans, making it consistent with the btcd
implementation.
This commit moves btcd view away from using the deprecated
callbacks onBlockConnected/Disconnected, and instead use
onFilteredBlockConnected/disconnected.
This commit also implements the sending of disconnected blocks
over the staleBlocks channel. To send these blocks, the
blockEventQueue is used to ensure the ordering of blocks are
correctly kept.
It also changes the way filter updates are handled. Since we
now load the tx filter to the rpc server itself, we can call
RescanBlocks instead of manually filtering blocks. These
rescanned blocks are also added to the blockEventQueue,
ensuring the ordering is kept.
blockEventQueue is an ordered queue for block events sent from a
FilteredChainView. The two types of possible block events are
connected/new blocks, and disconnected/stale blocks. The
blockEventQueue keeps the order of these events intact, while
still being non-blocking. This is important in order for the
chainView's call to onFilteredBlockConnected/Disconnected to not
get blocked, and for the consumer of the block events to always
get the events in the correct order.
Before this commit, we would expect that structurally we don’t pay any
fee for the first hop, but do for the final hop. After the latest
commit, this is now flipped as when we say fee, we mean the fee that we
need to pay to transit a link. For the final hop, there’s no additional
distance to be traveled, so the fee is nothing.
In this commit we fix an existing miscalculation in the fees that we
prescribe within the onion payloads for multi-hop routes. Before this
commit, if a route had more than 3 hops, then we would erroneously give
the second to last hop zero fees.
In this commit we correct this behavior, and also re-write the fee
calculation code fragment within newRoute for readability and clarity.
There are now only two cases: this is the last hop, and this is any
other hop. In the case of the last hop, simply send the exact amount
with no additional fee. In the case of an intermediate hop, we use the
_prior_ (closer to the destination) hop to calculate the amount of fees
we need, which allows us to compute the incoming flow. Using that
incoming flow, we then can compute the amount that the hop should
forward out.
Partially fixes #391.
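The two-case logic can be sketched as a backward walk over the route
(names illustrative, simplified relative to newRoute):

```go
package routing

// hopForwardAmounts returns, for every hop, the amount it must forward,
// walking backwards from the destination. feeForHop(i, amt) is assumed
// to return the fee hop i charges for forwarding amt.
func hopForwardAmounts(destAmt int64, numHops int,
	feeForHop func(i int, amt int64) int64) []int64 {

	amtToForward := make([]int64, numHops)

	// Last hop: deliver exactly the payment amount, no extra fee.
	amtToForward[numHops-1] = destAmt

	// Intermediate hops: forward what the next (closer-to-destination)
	// hop forwards, plus the fee that next hop charges for doing so.
	for i := numHops - 2; i >= 0; i-- {
		fee := feeForHop(i+1, amtToForward[i+1])
		amtToForward[i] = amtToForward[i+1] + fee
	}

	return amtToForward
}
```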
In this commit we fix a slight bug within the existing SendPayment loop
which would cause the wrong error to be returned to users. Prior to
this commit, if we received an update identical to what we were already
aware of, then that error would be returned rather than the
ForwardingError that encapsulated this update.
In this commit we remedy this by properly returning the exact error.
Partially fixes #391.