In this commit, we fix a regression introduced by a recent bug fix in
this area. Before this change, we'd inspect the error returned by
`processSendError`, and then fail the payment from the PoV of mission
control using the returned error.
A recent refactoring removed `processSendError` and combined the logic
with `tryApplyChannelUpdate` in order to introduce a new
`handleSendError` method that consolidates the logic within the
`shardHandler`. Along the way, the behavior of the prior check was
replicated in the form of a new internal `failPayment` closure. However,
this closure ends up returning a `channeldb.FailureReason` instance,
which itself satisfies the `error` interface.
In the wild, when `SendToRoute` fails due to an error at the
destination, this new logic caused the `handleSendError` method to fail,
returning an unstructured error back to the caller instead of the usual
payment failure details.
We fix this by no longer treating the value returned by
`handleSendError` as a normal error. As written, `handleSendError` will
always return an error of type `*channeldb.FailureReason`, so instead of
propagating it directly, we check the type of the returned error and
update the control tower state accordingly.
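To illustrate the distinction, here is a minimal sketch that uses a stand-in
for `channeldb.FailureReason`; the `handleOutcome` helper and `markFailed`
callback are hypothetical names, not lnd's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// failureReason mirrors channeldb.FailureReason for illustration: it
// satisfies the error interface, but signals a terminal payment failure
// rather than an internal error.
type failureReason uint8

func (f failureReason) Error() string {
	return fmt.Sprintf("payment failed: reason %d", f)
}

// handleOutcome shows the intended control flow: a *failureReason is
// recorded with the control tower (represented here by the hypothetical
// markFailed callback), while anything else still propagates as a plain
// error.
func handleOutcome(sendErr error, markFailed func(failureReason) error) error {
	var reason *failureReason
	if errors.As(sendErr, &reason) {
		return markFailed(*reason)
	}

	return sendErr
}

func main() {
	r := failureReason(1)
	err := handleOutcome(&r, func(fr failureReason) error {
		fmt.Println("control tower: payment marked failed:", fr)
		return nil
	})
	fmt.Println("caller sees:", err)
}
```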
With this commit, the test added in the prior commit now passes.
Fixes #5477.
This commit moves the handleSendError method from ChannelRouter to
shardHandler. In doing so, shardHandler can now apply updates to the
in-memory paymentSession if they are found in the error message.
It seems #5246 introduced a subtle bug that sometimes led to the error
"out of order block: expecting height=1, got height=XXX" during startup.
Apparently, while the graph tip is being pruned, some blocks can come in
before we start our chain view and the new block subscription. By
querying the chain backend for the best height before syncing with the
graph, we ensure that we never miss a block.
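A simplified sketch of the described ordering, using stand-in interfaces
rather than the router's real dependencies:

```go
package sketch

// chainBackend and chainView are illustrative stand-ins for the chain
// access and block subscription interfaces; they are not lnd's actual
// types.
type chainBackend interface {
	BestHeight() (uint32, error)
}

type chainView interface {
	Start() error
}

// startRouting sketches the ordering fix: pin the backend's best height
// first, sync the graph up to that height, and only then start the block
// subscription, so blocks arriving during graph pruning are not silently
// skipped.
func startRouting(chain chainBackend, view chainView,
	syncGraphToHeight func(uint32) error) error {

	bestHeight, err := chain.BestHeight()
	if err != nil {
		return err
	}

	if err := syncGraphToHeight(bestHeight); err != nil {
		return err
	}

	// Blocks delivered from here on are at or above bestHeight and are
	// processed in order by the router's main loop.
	return view.Start()
}
```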
The router subsystem has its own goroutine that receives chain updates
and then does its (quite time consuming) work on each new block. To make
it possible to find out what block the router currently is synced to, we
export its internal best height through a new method.
In this commit we add a new error for when we fail to validate the
funding transaction (invalid script, etc) and mark it as a zombie like
the other failed validation cases.
In this commit, we start to add any channels that fail the normal chain
validation to the zombie index. With this change, we'll ensure that we
won't continue to re-process the same set of spent channels over and
over again.
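A rough sketch of the handling described above, with a stand-in error value
and zombie index interface rather than the real graph API:

```go
package sketch

import "errors"

// errInvalidFundingTx stands in for the new validation error described
// above; the zombieIndex interface is likewise an illustrative stand-in
// for the channel graph's zombie index.
var errInvalidFundingTx = errors.New("invalid funding transaction")

type zombieIndex interface {
	MarkEdgeZombie(chanID uint64, pubKey1, pubKey2 [33]byte) error
}

// handleFailedValidation sketches the new behavior: a channel that fails
// chain validation is recorded in the zombie index so the same
// announcement isn't re-validated on every pass, and the original
// validation error is still returned to the caller.
func handleFailedValidation(graph zombieIndex, chanID uint64,
	node1, node2 [33]byte, validationErr error) error {

	if !errors.Is(validationErr, errInvalidFundingTx) {
		return validationErr
	}

	if err := graph.MarkEdgeZombie(chanID, node1, node2); err != nil {
		return err
	}

	return validationErr
}
```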
Fixes #5191.
This ensures the waiting receiving channel always receives an error to
prevent a deadlock when processing a network update that fails due to
the validation barrier.
On commit d5aedbcbd9db510c974c9f7be5ab177ad6546994:
1000 @ 0x43a285 0x44a38f 0xc42e86 0xc80fda 0xc8682d 0xc976c9 0x46fce1
github.com/lightningnetwork/lnd/routing.(*ChannelRouter).AddNode+0x245 github.com/lightningnetwork/lnd/routing/router.go:2218
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).addNode+0x3b9 github.com/lightningnetwork/lnd/discovery/gossiper.go:1510
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).processNetworkAnnouncement+0x574c github.com/lightningnetwork/lnd/discovery/gossiper.go:1554
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).networkHandler.func1+0x24 github.com/lightningnetwork/lnd/discovery/gossiper.go:1043
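A minimal sketch of the signaling pattern the fix relies on, using stand-in
names rather than the validation barrier's actual API:

```go
package sketch

import (
	"errors"
	"sync"
)

// parentJob carries the outcome of a parent validation job. done is
// closed on every exit path, success or failure, which is the essence of
// the deadlock fix: dependents blocked on the job always wake up and
// always observe an error when validation failed.
type parentJob struct {
	mu   sync.Mutex
	err  error
	done chan struct{}
}

func newParentJob() *parentJob {
	return &parentJob{done: make(chan struct{})}
}

// finish records the result and releases all waiters exactly once.
func (p *parentJob) finish(err error) {
	p.mu.Lock()
	p.err = err
	p.mu.Unlock()
	close(p.done)
}

// waitForParent blocks until the parent finishes or the barrier quits,
// returning an error in both failure paths so no waiter is left hanging.
func waitForParent(p *parentJob, quit <-chan struct{}) error {
	select {
	case <-p.done:
		p.mu.Lock()
		defer p.mu.Unlock()
		return p.err

	case <-quit:
		return errors.New("validation barrier shutting down")
	}
}
```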
Since we want to support AMP payments using a different unique payment
identifier (AMP payments don't go to one specific hash), we change the
nomenclature to be Identifier instead of PaymentHash.
We'll let the payment's lifecycle register each shard it's sending with
the ShardTracker, canceling failed shards. This will be the foundation
for correct AMP derivation for each shard we'll send.
To distinguish the attempt's unique ID from the overall payment
identifier, we name it attemptID everywhere, and note that the
paymentHash argument won't be the actual payment hash for AMP payments.
In this commit, we add strict zombie pruning as a config level param.
This allows us to add the option for those that want a tighter graph,
without changing the default composition of the channel graph for most
users overnight.
In addition, we expand the test case slightly by testing that the self
node won't be pruned, and also that if there's a node with only a single
known stale edge, then both variants will prune that edge.
Since zombie pruning can be very slow on some devices (e.g. mobile), it
would stall lnd startup. Since it is not essential for pruning to be
finished for lnd to be functional, we instead delay the initial prune by
30 seconds.
Note that we could also wait for the graphPruneInterval to tick, but
since this is by default 2 hours, it is unlikely that a mobile app will
ever be open that long.
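A small sketch of the delayed initial prune, with illustrative names and
the 30 second delay mentioned above:

```go
package sketch

import "time"

// pruneLoop sketches the startup change: instead of pruning zombie
// channels immediately (which can stall startup on slow devices), the
// first prune fires after a short delay, and the regular interval ticker
// takes over afterwards. The function and parameter names are
// illustrative of the described behavior.
func pruneLoop(prune func(), interval time.Duration, quit <-chan struct{}) {
	const initialDelay = 30 * time.Second

	initial := time.NewTimer(initialDelay)
	defer initial.Stop()

	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		select {
		case <-initial.C:
			prune()

		case <-ticker.C:
			prune()

		case <-quit:
			return
		}
	}
}
```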
Previously, we would always allow dependent jobs to be processed,
regardless of the result of their parent job's validation. This isn't
correct, as a parent job contains actions necessary to successfully
process a dependent job. A prime example of this can be found within the
AuthenticatedGossiper, where an incoming channel announcement and update
are both processed, but if the channel announcement job fails to
complete, then the gossiper is unable to properly validate the update.
This commit aims to address this by preventing the dependent jobs from
running.
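A minimal sketch of the new control flow, where both callbacks are
illustrative stand-ins rather than the gossiper's real methods:

```go
package sketch

import "fmt"

// processDependent sketches the new ordering: a dependent gossip job
// (e.g. a channel update) only runs once its parent job (the matching
// channel announcement) has been validated successfully. waitForParent
// stands in for the barrier's wait primitive and returns the parent's
// validation error, if any.
func processDependent(waitForParent, process func() error) error {
	if err := waitForParent(); err != nil {
		// Previously the dependent job ran regardless of this result;
		// now a failed parent short-circuits it, since the update
		// cannot be validated without the announcement.
		return fmt.Errorf("skipping dependent job: %w", err)
	}

	return process()
}
```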
This commit reduces the number of concurrent validation operations the
router will perform when fully validating the channel graph. Reports
from several users indicate that GetInfo would hang for several minutes,
which is believed to be caused by attempting to validate massive amounts
of channels in parallel. This commit restores the limit to its value
from before the batched gossip improvements were added.
We keep the 1000 concurrent validation request limit for
AssumeChannelValid, since we don't fetch blocks in that case. This
allows us to still keep the performance benefits on mobile/low-resource
devices.
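A rough sketch of the two limits; the non-AssumeChannelValid value is
illustrative rather than lnd's exact constant:

```go
package sketch

import "runtime"

// newValidationSemaphore sketches the split described above: keep the
// high 1000-job concurrency limit when AssumeChannelValid is set (no
// block fetches are needed), and fall back to a much smaller, CPU-bound
// limit when every channel's funding output must be fetched and checked
// on-chain.
func newValidationSemaphore(assumeChannelValid bool) chan struct{} {
	limit := runtime.NumCPU() * 4
	if assumeChannelValid {
		limit = 1000
	}

	// Callers acquire a slot by sending into the channel and release it
	// by receiving, bounding the number of in-flight validation jobs.
	return make(chan struct{}, limit)
}
```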
In this commit, we extend the `BuildRoute` method and RPC on the router
sub-server to accept a raw payment address which will be included as
part of an MPP payload for the final hop. This change also allows users
to craft their own MPP paths using BuildRoute+SendToRoute. Our primary
goal, however, was to fix some broken itests, since we now require the
payAddr to be present for ALL payments other than keysend payments.
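A simplified sketch of what the final hop's payload now carries when a
payment address is supplied; the struct is a stand-in, not lnd's actual
record types:

```go
package sketch

// finalHopPayload is a simplified stand-in for the TLV payload of the
// route's final hop. With the payment address included, a route produced
// by BuildRoute can be handed to SendToRoute as one shard of a
// hand-crafted MPP payment.
type finalHopPayload struct {
	// AmtToForward is the amount carried by this route (shard).
	AmtToForward int64

	// PaymentAddr is the 32-byte payment address from the invoice,
	// letting the receiver correlate shards of the same payment.
	PaymentAddr [32]byte

	// TotalMsat is the total amount of the payment across all shards.
	TotalMsat int64
}

// newFinalHopPayload attaches the MPP fields to the final hop.
func newFinalHopPayload(amt, total int64, payAddr [32]byte) finalHopPayload {
	return finalHopPayload{
		AmtToForward: amt,
		PaymentAddr:  payAddr,
		TotalMsat:    total,
	}
}
```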
This allows for a 1000 different persistent operations to proceed
concurrently. Now that we are batching operations at the db level, the
average number of outstanding requests will be higher since the commit
latency has increased. To compensate, we allow for more outstanding
requests to keep the router busy while batches are constructed.
This commit clamps all user-chosen CLTVs in LND to be at least 18, which
is the new conservative value used in the spec. This minimum is applied
uniformly to forwarding CLTV deltas (via channel updates) as well as
final CLTV deltas for new invoices.
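A minimal sketch of the clamp, using an illustrative helper name:

```go
package sketch

// minCLTVDelta is the conservative floor from the spec update referenced
// above.
const minCLTVDelta = 18

// clampCLTVDelta applies the floor uniformly, whether the delta is a
// forwarding delta destined for a channel update or the final delta of a
// new invoice.
func clampCLTVDelta(delta uint16) uint16 {
	if delta < minCLTVDelta {
		return minCLTVDelta
	}

	return delta
}
```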
Modifies the payment session to launch additional pathfinding attempts
for lower amounts. If a single shot payment isn't possible, the goal is
to try to complete the payment using multiple htlcs. In previous
commits, the payment lifecycle has been prepared to deal with
partial-amount routes returned from the payment session. It will query
for additional shards if needed.
Additionally, a new RPC payment parameter is added that controls the
maximum number of shards that will be used for the payment.
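A rough sketch of the splitting loop; the halving strategy and all names
are illustrative of the described behavior, not the payment session's
actual implementation:

```go
package sketch

import "errors"

// errNoRoute stands in for the pathfinder reporting that no route exists
// for the requested amount.
var errNoRoute = errors.New("no route found for amount")

// requestRoute sketches the splitting behavior: when no single route can
// carry the remaining amount, the session retries pathfinding with half
// the amount (bounded by the caller's max shard count), so the lifecycle
// can complete the payment with several smaller HTLCs.
func requestRoute(findRoute func(amt int64) error, amt int64,
	activeShards, maxShards int) (int64, error) {

	for {
		err := findRoute(amt)
		switch {
		case err == nil:
			// A route for (possibly part of) the payment was
			// found; the lifecycle will come back for the rest.
			return amt, nil

		case errors.Is(err, errNoRoute) && activeShards < maxShards:
			// Splitting is only allowed while the shard budget
			// permits launching more HTLCs.
			amt /= 2
			if amt == 0 {
				return 0, errNoRoute
			}

		default:
			return 0, err
		}
	}
}
```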