This shortcut does not work when the destination is a private node. We
also don't have this shortcut for regular payments. This commit
aligns the behavior between SendPayment and QueryRoutes.
The default was increased for the main sendpayment RPC in commit
d3fa9767a9729756bab9b4a1121344b265410b1a. This commit sets the
same default for QueryRoutes, routerrpc.SendPayment and
router.EstimateRouteFee.
Update the type check used for local payment failures to assert on
the ClearTextError interface rather than on the ForwardingError type.
This change prepares for splitting payment errors up into Link and
Forwarding errors.
This commit adds a ClearTextError interface
which is implemented by non-opaque errors that
we know the underlying wire failure message for.
This interface is implemented by ForwardingErrors,
because we can fully decrypt the onion blob to
obtain the underlying failure reason. In following commits, this
interface will also be implemented by errors which originate at our
node, because we know the failure reason when we fail the htlc.
The lnwire interface is un-embedded in the
ForwardingError struct in favour of implementing
this interface. This change is made to protect
against accidental passing of a ForwardingError
to the wire, where the embedded FailureMessage interface would
present as a wire failure but would not serialize properly.
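A minimal sketch of how such an interface could look; the package
location, field names, and WireMessage accessor are illustrative
assumptions rather than the exact lnd definitions:

    package htlcswitch

    import (
    	"fmt"

    	"github.com/lightningnetwork/lnd/lnwire"
    )

    // ClearTextError is implemented by errors whose underlying wire
    // failure message is known in clear text.
    type ClearTextError interface {
    	error

    	// WireMessage returns the failure message that corresponds to
    	// this error.
    	WireMessage() lnwire.FailureMessage
    }

    // ForwardingError keeps the decrypted failure in an unexported
    // field instead of embedding lnwire.FailureMessage, so it can no
    // longer be passed to the wire by accident.
    type ForwardingError struct {
    	FailureSourceIdx int
    	msg              lnwire.FailureMessage
    }

    func (f *ForwardingError) WireMessage() lnwire.FailureMessage {
    	return f.msg
    }

    func (f *ForwardingError) Error() string {
    	return fmt.Sprintf("%v (source index: %v)", f.msg,
    		f.FailureSourceIdx)
    }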
In this commit, we update the routing package to use the new
`sphinx.NewOnionPacket` method. The new version of this method allows us
to specify _how_ the packet should be filled before it's used to create
a mix-header. This isn't a fundamental change (it's totally backwards
compatible); instead, it plugs a privacy leak that may have revealed
to the destination how long the true route was.
This commit creates a wrapper struct, grouping all parameters that
influence the final hop during route construction. This is a preliminary
step for passing in the receiver's invoice feature bits, which will be
used to select an appropriate payment or payload type.
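For illustration, the wrapper could look roughly like the following;
the field names are assumptions, not the exact struct that route
construction ends up with:

    package routing

    import "github.com/lightningnetwork/lnd/lnwire"

    // finalHopParams groups the inputs that only affect the last hop
    // of a constructed route.
    type finalHopParams struct {
    	amt       lnwire.MilliSatoshi
    	cltvDelta uint16

    	// records carries extra TLV records destined for the receiver.
    	records map[uint64][]byte

    	// destFeatures will later hold the receiver's invoice feature
    	// bits, used to choose a legacy or TLV payload type.
    	destFeatures *lnwire.FeatureVector
    }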
This commit prepares for more manipulation of custom records. A list of
tlv.Record types is more difficult to use than the more basic
map[uint64][]byte.
Furthermore, fields and variables are renamed to make them more
consistent.
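As a rough illustration of why the map form is handier (the TLV type
number and value here are made up):

    package main

    import "fmt"

    func main() {
    	// A hypothetical application-specific TLV type.
    	const customType uint64 = 65537

    	records := map[uint64][]byte{
    		customType: []byte("example value"),
    	}

    	// Lookups and merges are plain map operations; with a
    	// []tlv.Record we would have to iterate and compare types.
    	if val, ok := records[customType]; ok {
    		fmt.Printf("record %d: %x\n", customType, val)
    	}
    }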
With the introduction of the max CLTV limit parameter, nodes are able to
reject HTLCs that exceed it. This should also be applied to path
finding; otherwise, HTLCs crafted by the same node that exceed it would
never leave the switch. This wasn't a big deal when the previous max
CLTV limit was ~5000 blocks, but once it was lowered to 1008 the issue
became more apparent. Therefore, all of our path finding attempts are
now restricted to said limit in order to properly carry out HTLCs to
the network.
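An illustrative helper for the kind of check path finding now applies;
the names and the exact call site are assumptions:

    // cltvWithinLimit reports whether extending the route over an edge
    // keeps the accumulated time lock under the maximum CLTV limit
    // (e.g. 1008 blocks).
    func cltvWithinLimit(incomingTimeLock uint32, timeLockDelta uint16,
    	maxCLTV uint32) bool {

    	return incomingTimeLock+uint32(timeLockDelta) <= maxCLTV
    }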
In the process of moving to use the new package, we no longer need to
fetch the outpoint directly, and instead only need to pass the funding
transaction into the new verification logic.
In this commit, we update the router and link to support users
updating the max HTLC policy for their channels. By updating these internal
systems before updating the RPC server and lncli, we protect users from
being shown an option that doesn't actually work.
This commit modifies paymentLifecycle so that it not only feeds
failures into mission control, but successes as well.
This allows for more accurate probability estimates. Previously,
the success probability for a successful pair and a pair with
no history was equal. There was no force that pushed towards
previously successful routes.
In this commit, we extend the path finding to be able to recognize when
a node needs the new TLV format, or the legacy format based on the
feature bits they expose. We also extend the `LightningPayment` struct
to allow the caller to specify an arbitrary set of TLV records which can
be used for a number of use-cases including various variants of
spontaneous payments.
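A sketch of the feature-based decision; treating the exact feature bit
and helper names as assumptions:

    package routing

    import "github.com/lightningnetwork/lnd/lnwire"

    // supportsTLVOnion reports whether the destination advertises
    // support for the new TLV onion payload; if not, the legacy hop
    // payload format is used.
    func supportsTLVOnion(destFeatures *lnwire.FeatureVector) bool {
    	return destFeatures.HasFeature(lnwire.TLVOnionPayloadOptional)
    }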
This commit converts several functions from returning a bool and a
failure reason to returning a nillable failure reason only. This
removes confusion about how to interpret the two separate values.
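A sketch of the converted shape (types and names are illustrative):
nil means "no terminal failure yet", while a non-nil value carries the
reason, so there are no ambiguous value combinations.

    package routing

    import "github.com/lightningnetwork/lnd/channeldb"

    type attemptOutcome struct {
    	terminal bool
    	reason   channeldb.FailureReason
    }

    // failureReason returns nil while the payment may still succeed,
    // and a non-nil terminal reason otherwise.
    func (o *attemptOutcome) failureReason() *channeldb.FailureReason {
    	if !o.terminal {
    		return nil
    	}
    	return &o.reason
    }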
Previously mission control tracked failures on a per node, per channel basis.
This commit changes this to tracking on the level of directed node pairs. The goal
of moving to this coarser-grained level is to reduce the number of required
payment attempts without compromising payment reliability.
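Illustratively, the tracking key becomes a directed pair of node
identifiers; the type and field names below are assumptions:

    package routing

    import (
    	"time"

    	"github.com/lightningnetwork/lnd/routing/route"
    )

    // DirectedNodePair identifies the direction From -> To, aggregating
    // results over all channels between the two nodes.
    type DirectedNodePair struct {
    	From, To route.Vertex
    }

    // pairResult is the per-pair history that mission control keeps.
    type pairResult struct {
    	timestamp time.Time
    	success   bool
    }

    type pairHistory map[DirectedNodePair]pairResult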
The current approach iterates all channels in the graph in order to
filter those in need of pruning. This approach is time consuming
(several seconds on my mobile device for ~40,000 channels), and during
this time the db is locked in a transaction.
The proposed change is to use existing functionality that utilizes the
fact that channel updates are saved indexed by date. This method
enables us to go over only a small subset of the channels, only those
that were updated before the "channel expiry" time, and further filter
them for our needs.
After the change, the same graph that previously took several seconds
to prune was pruned in several milliseconds.
In addition, for testing purposes I added an Initiator field to the
testChannel structure to reflect the ChannelEdgePolicy direction.
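A sketch of the approach, with the index lookup expressed as an
assumed interface rather than the actual channeldb API:

    package routing

    import "time"

    // staleChannelSource is an assumed view of the graph's update-time
    // index.
    type staleChannelSource interface {
    	// ChannelsUpdatedBefore returns the channel IDs whose most
    	// recent update is older than the given horizon.
    	ChannelsUpdatedBefore(horizon time.Time) ([]uint64, error)
    }

    // staleCandidates returns only the small subset of channels that
    // were last updated before the expiry horizon; further filtering is
    // then applied to this subset instead of to the whole graph.
    func staleCandidates(g staleChannelSource,
    	chanExpiry time.Duration) ([]uint64, error) {

    	horizon := time.Now().Add(-chanExpiry)
    	return g.ChannelsUpdatedBefore(horizon)
    }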
If nodes return a channel policy related failure, they may get a second
chance. Our graph may not be up to date. Previously this logic was
contained in the payment session.
This commit moves that into global mission control and thereby removes
the last mission control state that was kept on the payment level.
Because mission control is not aware of the relation between payment
attempts and payments, the second chance logic is no longer based on
tracking second chances given per payment.
Instead, a time-based approach is used: if a node reports a policy
failure that prevents forwarding to its peer, it will get a second
chance, but only if the previous second chance was long enough ago.
Also, those second chances are no longer dependent on whether an
associated channel update is valid. The node gets the second chance
regardless, to prevent creating a dependency between mission control
and the graph. Such a dependency would interfere with (future) replay
of history, because the graph may not be the same anymore at that
point.
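A sketch of the time-based rule; the interval and names are
assumptions:

    package routing

    import (
    	"time"

    	"github.com/lightningnetwork/lnd/routing/route"
    )

    // minSecondChanceInterval is the assumed minimum time between two
    // second chances for the same reporting node.
    const minSecondChanceInterval = time.Minute

    type secondChanceTracker struct {
    	lastSecondChance map[route.Vertex]time.Time
    }

    // requestSecondChance grants a second chance for a policy failure
    // reported by fromNode, unless the previous one was too recent.
    func (t *secondChanceTracker) requestSecondChance(now time.Time,
    	fromNode route.Vertex) bool {

    	last, ok := t.lastSecondChance[fromNode]
    	if ok && now.Sub(last) < minSecondChanceInterval {
    		// Too soon: treat the reported failure as final.
    		return false
    	}

    	t.lastSecondChance[fromNode] = now
    	return true
    }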
This commit moves the call to PruneGraph outside of the loop
that collates all of the spentOutputs. With this change, if
a node has been offline for a long period of time, resyncing
with the chain no longer takes up as much memory (1MB vs 200MB
in some cases) or time. Previously, PruneGraph was called
for every block and allocated a very large map further down
in the pruneGraphNodes function. Now, pruneGraphNodes is only
called once.
Since nilling the pubkey curve will lead to a nil-pointer panic if
the key is later used for signature verification, we make a copy
before nilling and spewing.
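A sketch of the fix, assuming the pre-v2 btcec.PublicKey layout in
which Curve is an exported field; the helper name is hypothetical:

    package routing

    import (
    	"github.com/btcsuite/btcd/btcec"
    	"github.com/davecgh/go-spew/spew"
    )

    // dumpKeyWithoutCurve spews a copy of the key with its curve
    // nilled, leaving the original key intact for later signature
    // verification.
    func dumpKeyWithoutCurve(pub *btcec.PublicKey) string {
    	keyCopy := *pub
    	keyCopy.Curve = nil
    	return spew.Sdump(keyCopy)
    }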
This commit moves the default timeout out of router and thereby fixes a
bug that caused SendToRoute to not return the actual error, but a
timeout result instead. SendToRoute only tries a single route, so a
timeout should never happen.
Previously we would mistakenly use the payment value from the dummy
LightningPayment struct, which would always be 0. Now we instead
calculate the value from the given route.
Previously every payment had its own local mission control state which
was in effect only for that payment. In this commit most of the local
state is removed and payments all tap into the global mission control
probability estimator.
Furthermore the decay time of pruned edges and nodes is extended, so
that observations about the network can better benefit future payment
processes.
Last, the probability function is transformed from a binary output to
a gradual curve, allowing for a better trade-off between candidate
routes.
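An illustrative curve of that kind (not the exact estimator): the
failure penalty decays exponentially with the age of the last failure,
so the probability returns gradually to the a-priori value instead of
flipping from 0 back to 1 when a pruned edge expires.

    package routing

    import (
    	"math"
    	"time"
    )

    // pairProbability estimates the success probability for a node pair
    // given its last recorded failure.
    func pairProbability(now, lastFailure time.Time, apriori float64,
    	halfLife time.Duration) float64 {

    	if lastFailure.IsZero() {
    		return apriori
    	}

    	age := now.Sub(lastFailure)
    	weight := math.Pow(2, -float64(age)/float64(halfLife))

    	// Right after a failure the weight is ~1 and the probability
    	// ~0; as the failure ages, the probability climbs back towards
    	// the a-priori estimate.
    	return apriori * (1 - weight)
    }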
This commit makes the router use the ControlTower to drive the payment
life cycle state machine, to keep track of active payments across
restarts. This lets the router resume payments on startup, such that
their final results can be handled and stored when ready.
This encapsulates all state that is needed to resume a payment from
any point of the payment flow and that must be shared between the
different stages of the execution. This is done to prepare for
breaking the send loop into smaller parts, and for being able to
resume the payment from any point using persistent state.
In this commit we move the handing of the deobfuscator from the
router to the switch from the moment the payment is initiated to the
moment the result is queried.
We do this because only the router can recreate the deobfuscator after a
restart, and we are preparing for being able to handle results across
restarts.
Since the deobfuscator cannot be nil anymore, we can also get rid of
that special case.
This lets us distinguish a critical error from an actual payment result
(success or failure). This is important since we know that we can only
attempt another payment when a final result from the previous payment
attempt is received.
This commit moves the responsibility of generating a unique payment ID
from the switch to the router. This will make it easier for the router
to keep track of which HTLCs were successfully forwarded onto the
network, as it can query the switch for existing HTLCs as long as the
paymentIDs are kept.
The router is expected to maintain a map from paymentHash->paymentID,
such that they can be replayed on restart. This also lets the router
check the status of a sent payment after a restart, by querying the
switch for the paymentID in question.
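A rough sketch of what that bookkeeping amounts to; the names and the
switch-side lookup are assumptions:

    package routing

    import "github.com/lightningnetwork/lnd/lntypes"

    // attemptIndex maps each payment hash to the payment ID the router
    // assigned to its HTLC, persisted so it survives restarts.
    type attemptIndex map[lntypes.Hash]uint64

    // resultSource is an assumed view of the switch: given a payment
    // ID, it reports whether a result for that HTLC is known.
    type resultSource interface {
    	HasAttemptResult(paymentID uint64) (bool, error)
    }

    // checkAfterRestart asks the switch about an attempt that may have
    // been in flight when we shut down, instead of re-sending it.
    func checkAfterRestart(idx attemptIndex, sw resultSource,
    	hash lntypes.Hash) (bool, error) {

    	paymentID, ok := idx[hash]
    	if !ok {
    		// Nothing was in flight for this hash.
    		return false, nil
    	}
    	return sw.HasAttemptResult(paymentID)
    }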
This commit reevaluates the router's quit channel between each block
during the initial call to syncGraphWithChain, which, in the worst
case, may have to scan several thousand blocks on startup if the node
has not been active for some time. Without this, an attempt to stop
the daemon would not complete until the rescan had finished, which for
certain backends could take several hours.
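The shape of the fix, with illustrative names and error text: a
non-blocking read of the quit channel between blocks lets shutdown
interrupt a long rescan.

    package routing

    import "errors"

    // scanBlocks processes blocks sequentially, checking the quit
    // channel between blocks so a shutdown request can interrupt a long
    // rescan.
    func scanBlocks(quit <-chan struct{}, start, end uint32,
    	processBlock func(height uint32) error) error {

    	for height := start; height <= end; height++ {
    		select {
    		case <-quit:
    			return errors.New("router shutting down")
    		default:
    		}

    		if err := processBlock(height); err != nil {
    			return err
    		}
    	}

    	return nil
    }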
In this commit, we update the process that we use to generate a sphinx
packet to send our onion routed HTLC. Due to recent changes in the
`sphinx` package we use, we now need to use a new PaymentPath struct. As
a result, it no longer makes sense to split up the nodes in a route and
their per-hop payloads, as they're now in the same struct. All tests have
been updated accordingly.
In this commit, we refactor DeleteChannelEdge to use ChannelIDs rather
than ChannelPoints. We do this as the only use of DeleteChannelEdge is
when we are pruning zombie channels from our graph. When running under a
light client, we are unable to obtain the ChannelPoint of each edge due
to the expensive operations required to do so. As a stop-gap, we'll
resort to using an edge's ChannelID instead, which is already
gossiped between nodes.
This serves as a stop-gap for light clients, as blocks need to be
downloaded from the P2P network and, even with caches, would be too
costly for them to verify. Doing this has two side effects, however:
we'll no longer know the channel capacity and outpoint, which are
essential for some of lnd's responsibilities.
In this commit, we disable attempting to determine when a channel has
been closed out on-chain whenever AssumeChannelValid is active. Since
the flag indicates that performing this operation is expensive, we do
this as a temporary optimization until we can include proofs of channels
being closed in the gossip protocol.
With this change, the only way for channels to be removed from the
graph will be once they're considered zombies: this can happen when
both edges of a channel have their disabled bits set or when both
edges haven't had an update within the past two weeks.
To ensure we don't mark an edge as live again just because an update
with a fresh timestamp was received, we'll reject any new updates for
zombie channels if they remain disabled when running with
AssumeChannelValid.
In this commit, we add an additional heuristic when running with
AssumeChannelValid. Since AssumeChannelValid being present assumes that
we're not able to quickly determine whether channels are valid, we also
assume that any channels with the disabled bit set on both sides are
considered zombie. This should be relatively safe to do, since the
disabled bits are usually set when the channel is closed on-chain. In
the case that they aren't, we'll have to wait until both edges haven't
had a new update within two weeks to prune them.
We do this to ensure we don't prune too aggressively, as it's possible
that we've only received the channel announcement for a channel, but not
its accompanying channel updates.
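A sketch of the combined zombie test under AssumeChannelValid; the
expiry constant and names are illustrative:

    package routing

    import "time"

    // chanExpiry is the assumed staleness horizon for channel updates.
    const chanExpiry = 14 * 24 * time.Hour

    type edgePolicyInfo struct {
    	disabled   bool
    	lastUpdate time.Time
    }

    // isZombie reports whether a channel should be treated as a zombie:
    // both directions disabled, or neither direction updated recently.
    func isZombie(now time.Time, e1, e2 edgePolicyInfo) bool {
    	bothDisabled := e1.disabled && e2.disabled

    	bothStale := now.Sub(e1.lastUpdate) > chanExpiry &&
    		now.Sub(e2.lastUpdate) > chanExpiry

    	return bothDisabled || bothStale
    }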