With mpp it is no longer possible for findPath to determine that
there isn't enough local bandwidth, because the full payment amount
isn't known at that point.
In a follow-up, this payment outcome can be reintroduced on a higher
level (payment lifecycle).
The default was increased for the main sendpayment RPC in commit
d3fa9767a9729756bab9b4a1121344b265410b1a. This commit sets the
same default for QueryRoutes, routerrpc.SendPayment and
router.EstimateRouteFee.
Modifies TestMissingFeatureDep and TestDestPaymentAddr to use the test
ctx directly instead of generating a closure and using local state to
modify restrictions.
This commit brings us in line with recent modifications to the spec,
which say we shouldn't pay nodes whose feature vectors signal unknown
required features, and that we shouldn't route through nodes
signaling unknown required features.
Currently we assert that invoices don't have such features during
decoding, but now that users can specify feature vectors via the rpc
interface, it makes sense to perform this check deeper in the call
stack.
This will also allow us to remove the check from decoding entirely,
making decodepayreq more useful for debugging.
Also the max hop count check can be removed, because the real bound is
the payload size. By moving the check inside the search loop, we now
also backtrack when we hit the limit.
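A minimal sketch of the kind of check this moves deeper into the
call stack, using stand-in types rather than lnd's actual
lnwire.FeatureVector API:

```go
package main

import "fmt"

// featureVector is a minimal stand-in for a node's advertised feature
// bits, plus the bits this implementation understands.
type featureVector struct {
	set   map[int]bool
	known map[int]bool
}

// unknownRequiredFeatures returns the even ("required" by BOLT 9
// convention) bits that are set but not understood.
func (fv *featureVector) unknownRequiredFeatures() []int {
	var unknown []int
	for bit := range fv.set {
		if bit%2 == 0 && !fv.known[bit] {
			unknown = append(unknown, bit)
		}
	}
	return unknown
}

// suitableNode rejects any node, final or intermediate, that signals
// unknown required features.
func suitableNode(fv *featureVector) error {
	if unknown := fv.unknownRequiredFeatures(); len(unknown) > 0 {
		return fmt.Errorf("unknown required features: %v", unknown)
	}
	return nil
}

func main() {
	fv := &featureVector{set: map[int]bool{100: true}, known: map[int]bool{}}
	fmt.Println(suitableNode(fv))
}
```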
This commit fixes a potential bug in our test harness, by ensuring that
the constructed node policies are configured _after_ sorting. Currently
the node pubkeys are sorted, but additional parameters (max htlc,
disabled, etc) are applied using the unsorted policies.
Most of the constructors used today use the symmetric channel
constructor, so this shouldn't cause an issue with the majority of our
tests. We recently introduced an asymmetric channel constructor for
which this could have been an issue, however, no known issues were
discovered.
Lastly, we remove the direction from the configuration altogether, and
derive it purely from the final sorting of the pubkeys.
We move up the check for TLV support, since we will later use it to
determine if we can use dependent features, e.g. TLV records and payment
addresses.
This commit creates a wrapper struct, grouping all parameters that
influence the final hop during route construction. This is a preliminary
step for passing in the receiver's invoice feature bits, which will be
used to select an appropriate payment or payload type.
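An illustrative sketch of such a wrapper; the field set is assumed,
not lnd's exact definition:

```go
package main

import "fmt"

// finalHopParams groups the parameters that influence the final hop
// during route construction, so one value can be threaded through
// instead of a growing list of positional arguments.
type finalHopParams struct {
	amtMsat      int64     // amount the final hop should receive, in msat
	finalCltv    uint16    // final CLTV delta taken from the invoice
	destFeatures []int     // receiver's feature bits, used to pick payload type
	paymentAddr  *[32]byte // optional payment address, if present
}

// newRouteFinalHop shows the shape of the call site.
func newRouteFinalHop(p finalHopParams) {
	fmt.Printf("final hop: %d msat, cltv delta %d\n", p.amtMsat, p.finalCltv)
}

func main() {
	newRouteFinalHop(finalHopParams{amtMsat: 100000, finalCltv: 40})
}
```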
In this commit, we overwrite the final hop's features with either the
destination features or those loaded from the graph fallback. This
ensures that the same features used in pathfinding will be provided to
route construction.
An earlier commit validated the final hop's transitive feature
dependencies; here we add the same validation for non-final nodes.
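A minimal sketch of dependency validation, with an assumed dependency
table (per BOLT 9, payment_secret depends on var_onion_optin):

```go
package main

import "fmt"

// featureDeps maps an even ("required") feature bit to the bits it
// depends on. The single entry is illustrative: payment_secret (14/15)
// depends on var_onion_optin (8/9).
var featureDeps = map[int][]int{
	14: {8},
}

// validateDeps checks that every set feature has each dependency set
// in either its required or optional form. Because every set bit is
// visited, one pass also covers transitive dependencies.
func validateDeps(set map[int]bool) error {
	for bit := range set {
		for _, dep := range featureDeps[bit/2*2] {
			if !set[dep] && !set[dep+1] {
				return fmt.Errorf("feature %d requires feature %d", bit, dep)
			}
		}
	}
	return nil
}

func main() {
	// payment_secret (optional) set without var_onion_optin: invalid.
	fmt.Println(validateDeps(map[int]bool{15: true}))
}
```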
This commit adds an optional PaymentAddr field to the RestrictParams,
so that we can verify that the final hop can support it before doing
an expensive round of pathfinding.
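A sketch of the cheap up-front check, with stand-in types and the
BOLT 9 payment_secret bits:

```go
package main

import (
	"errors"
	"fmt"
)

// restrictParams is a stand-in for the pathfinding restrictions; only
// the field relevant here is modeled.
type restrictParams struct {
	paymentAddr *[32]byte // optional payment address from the invoice
}

// BOLT 9 payment_secret feature bits (required/optional).
const (
	paymentAddrRequired = 14
	paymentAddrOptional = 15
)

// checkPaymentAddr runs before pathfinding: if a payment address was
// supplied, the destination must signal payment_secret support.
func checkPaymentAddr(r *restrictParams, destFeatures map[int]bool) error {
	if r.paymentAddr == nil {
		return nil
	}
	if !destFeatures[paymentAddrRequired] && !destFeatures[paymentAddrOptional] {
		return errors.New("destination does not support payment addresses")
	}
	return nil
}

func main() {
	addr := [32]byte{}
	r := &restrictParams{paymentAddr: &addr}
	fmt.Println(checkPaymentAddr(r, map[int]bool{}))
}
```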
In this commit, we fix a bug that prevents us from sending custom
records to nodes that aren't in the graph. Previously we would simply
fail if we were unable to retrieve the node's features.
To remedy this, we add the option of supplying the destination's feature bits
into path finding. If present, we will use them directly without
consulting the graph, resolving the original issue. Instead, we will
only consult the graph as a fallback, which will still fail if the node
doesn't exist since the TLV features won't be populated in the empty
feature vector.
Furthermore, this also permits us to provide "virtual features" to
the pathfinding logic, where we make assumptions about what the
receiver supports even if the feature vector isn't actually taken
from an invoice. This can be useful in cases like keysend, where we
don't have an invoice, but we can still attempt the payment if we
assume the receiver supports TLV.
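A sketch of the lookup order and the keysend-style virtual feature
vector; names and bit numbers are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// BOLT 9 var_onion_optin (optional) bit, used here for illustration.
const tlvOptional = 9

// destinationFeatures prefers caller-supplied features and only falls
// back to the graph. A node missing from the graph still fails the
// fallback path, matching the behavior described above.
func destinationFeatures(supplied map[int]bool,
	graphLookup func() (map[int]bool, error)) (map[int]bool, error) {

	if supplied != nil {
		return supplied, nil
	}
	return graphLookup()
}

func main() {
	// Keysend-style "virtual features": no invoice, but we assume the
	// receiver understands TLV payloads.
	virtual := map[int]bool{tlvOptional: true}

	feats, err := destinationFeatures(virtual, func() (map[int]bool, error) {
		return nil, errors.New("node not found in graph")
	})
	fmt.Println(feats, err)
}
```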
This commit allows custom node features to be populated in specific test
instances. For consistency, we auto-populate an empty feature vector for
nodes that have nil feature vectors before writing them to the database.
When the (virtual) payment attempt cost is set to zero, probabilities
are no longer a factor in determining the best route. In case of routes
with equal costs, we'd just go with the first one found. This commit
refines this behavior by picking the route with the highest probability.
So even though probability doesn't affect the route cost, it is still
used as a tie breaker.
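A minimal sketch of the tie-breaking rule:

```go
package main

import "fmt"

// candidate pairs a route's total cost with its estimated success
// probability.
type candidate struct {
	cost int64   // fees plus virtual costs, in msat
	prob float64 // estimated success probability
}

// better reports whether a should replace b as the best route so far:
// lower cost wins, and probability breaks ties.
func better(a, b candidate) bool {
	if a.cost != b.cost {
		return a.cost < b.cost
	}
	return a.prob > b.prob
}

func main() {
	a := candidate{cost: 1000, prob: 0.95}
	b := candidate{cost: 1000, prob: 0.80}
	fmt.Println(better(a, b)) // true: equal cost, higher probability
}
```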
With the introduction of the max CLTV limit parameter, nodes are able
to reject HTLCs that exceed it. The limit should also be applied
during path finding, otherwise HTLCs crafted by the same node that
exceed it would never leave the switch. This wasn't a big deal while
the max CLTV limit was ~5000 blocks, but once it was lowered to 1008
the issue became more apparent. Therefore, all of our path finding
attempts are now restricted to said limit in order to properly carry
out HTLCs to the network.
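A sketch of the restriction as applied during path finding:

```go
package main

import "fmt"

// maxCltvLimit is the lowered limit mentioned above.
const maxCltvLimit = 1008

// withinCltvLimit shows the shape of the new restriction: while
// exploring routes, any partial route whose accumulated CLTV would
// exceed the limit is discarded.
func withinCltvLimit(accumulated uint32, edgeDelta uint16) bool {
	return accumulated+uint32(edgeDelta) <= maxCltvLimit
}

func main() {
	fmt.Println(withinCltvLimit(1000, 40)) // false: 1040 > 1008
}
```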
In this commit, we extend the path finding to be able to recognize
whether a node needs the new TLV format or the legacy format, based
on the feature bits it exposes. We also extend the `LightningPayment`
struct to allow the caller to specify an arbitrary set of TLV
records, which can be used for a number of use cases including
various variants of spontaneous payments.
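A sketch of the format decision, using the BOLT 9 var_onion_optin
bits:

```go
package main

import "fmt"

// BOLT 9 var_onion_optin bits; a hop that signals either understands
// the new TLV onion payload format.
const (
	tlvRequired = 8
	tlvOptional = 9
)

// payloadFormat picks the per-hop payload format from the hop's
// feature bits.
func payloadFormat(features map[int]bool) string {
	if features[tlvRequired] || features[tlvOptional] {
		return "tlv"
	}
	return "legacy"
}

func main() {
	fmt.Println(payloadFormat(map[int]bool{tlvOptional: true})) // tlv
}
```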
Previously mission control tracked failures on a per-node, per-channel
basis. This commit changes that to tracking at the level of directed
node pairs. The goal of moving to this coarser-grained level is to
reduce the number of required payment attempts without compromising
payment reliability.
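A sketch of the coarser-grained key; Vertex is a stand-in for a node
pubkey:

```go
package main

import (
	"fmt"
	"time"
)

// Vertex is a stand-in for a node public key.
type Vertex [33]byte

// DirectedNodePair is the new tracking key: an ordered (from, to)
// pair, so a failure between two nodes covers every channel between
// them in that direction.
type DirectedNodePair struct {
	From, To Vertex
}

func main() {
	lastFail := make(map[DirectedNodePair]time.Time)
	var a, b Vertex
	a[0], b[0] = 1, 2

	lastFail[DirectedNodePair{From: a, To: b}] = time.Now()
	fmt.Println(len(lastFail)) // one entry for all a->b channels
}
```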
The current approach iterates all channels in the graph in order to
filter those in need of pruning. This approach is time consuming:
several seconds on my mobile device for ~40,000 channels, during
which the db is locked in a transaction.
The proposed change is to use existing functionality that exploits
the fact that channel updates are saved indexed by date. This method
enables us to go over only a small subset of the channels: those that
were last updated before the "channel expiry" time, and to further
filter them for our needs.
After the change, the same graph that previously took several seconds
to prune is pruned in several milliseconds.
In addition, for testing purposes, I added an Initiator field to the
testChannel structure to reflect the ChannelEdgePolicy direction.
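A sketch of the index-based approach; updatesBefore stands in for the
by-date index query:

```go
package main

import (
	"fmt"
	"time"
)

// pruneCandidates asks the by-date update index for only those
// channels whose last update predates the expiry cutoff, instead of
// scanning every channel in the graph.
func pruneCandidates(updatesBefore func(time.Time) []uint64,
	now time.Time, chanExpiry time.Duration) []uint64 {

	cutoff := now.Add(-chanExpiry)
	return updatesBefore(cutoff)
}

func main() {
	index := func(cutoff time.Time) []uint64 {
		// A real index walks a time-ordered bucket and stops at the
		// cutoff; here we just return a fixed candidate set.
		return []uint64{42}
	}
	fmt.Println(pruneCandidates(index, time.Now(), 14*24*time.Hour))
}
```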
Previously every payment had its own local mission control state which
was in effect only for that payment. In this commit most of the local
state is removed and payments all tap into the global mission control
probability estimator.
Furthermore the decay time of pruned edges and nodes is extended, so
that observations about the network can better benefit future payment
processes.
Last, the probability function is transformed from a binary output
into a gradual curve, allowing for a better trade-off between
candidate routes.
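One plausible shape for such a curve, assuming an exponential
half-life recovery (the exact function in lnd may differ):

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// successProbability sketches the binary-to-gradual change: right
// after a failure the estimate drops to zero, then recovers toward the
// a-priori probability with an exponential half-life.
func successProbability(apriori float64,
	sinceFailure, halfLife time.Duration) float64 {

	if sinceFailure < 0 {
		return apriori // no failure observed
	}
	weight := math.Exp2(-float64(sinceFailure) / float64(halfLife))
	return apriori * (1 - weight)
}

func main() {
	hl := time.Hour
	for _, t := range []time.Duration{0, hl, 24 * hl} {
		fmt.Printf("%v: %.2f\n", t, successProbability(0.95, t, hl))
	}
	// 0s: 0.00, 1h0m0s: 0.48, 24h0m0s: 0.95
}
```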
This PR replaces the edge and node ignore lists previously used in
path finding with a probability-based system. It modifies path
finding so that it not only compares routes on fee and time lock, but
also takes route success probability into account.
Allowing routes to be compared based on success probability is achieved
by introducing a 'virtual' cost of a payment attempt and using that to
translate probability into another cost factor.
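A sketch of the translation: with success probability p the expected
number of attempts is 1/p, so the expected extra attempt cost is
attemptCost*(1/p - 1). The exact weighting in lnd may differ:

```go
package main

import "fmt"

// routeCost folds the 'virtual' attempt cost into the route cost via
// the expected number of retries.
func routeCost(feeMsat, attemptCostMsat int64, p float64) float64 {
	return float64(feeMsat) + float64(attemptCostMsat)*(1/p-1)
}

func main() {
	// Same fee, different probabilities: the riskier route costs more.
	fmt.Println(routeCost(1000, 10000, 0.95)) // ~1526
	fmt.Println(routeCost(1000, 10000, 0.50)) // 11000
}
```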
In this commit, we update the process that we use to generate a sphinx
packet to send our onion routed HTLC. Due to recent changes in the
`sphinx` package we use, we now need to use a new PaymentPath struct. As
a result, it no longer makes sense to split up the nodes in a route and
their per-hop payloads as they're now in the same struct. All tests have
been updated accordingly.
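An illustration of the shape change, with stand-ins for the sphinx
package's types:

```go
package main

import "fmt"

// onionHop pairs a hop's public key with its per-hop payload; these
// used to travel in two parallel slices.
type onionHop struct {
	nodePub [33]byte
	payload []byte
}

type paymentPath []onionHop

func main() {
	path := paymentPath{
		{payload: []byte{0x01}},
		{payload: []byte{0x02}},
	}
	fmt.Println(len(path)) // hops and payloads travel together
}
```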
In this commit, we add an additional heuristic when running with
AssumeChannelValid. Since AssumeChannelValid implies that we're not
able to quickly determine whether channels are valid, we also assume
that any channels with the disabled bit set on both sides are
zombies. This should be relatively safe to do, since the disabled
bits are usually set when the channel is closed on-chain. In the case
that they aren't, we'll have to wait until both edges haven't had a
new update within two weeks to prune them.
We do this to ensure we don't prune too aggressively, as it's possible
that we've only received the channel announcement for a channel, but not
its accompanying channel updates.
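A sketch of the combined heuristic:

```go
package main

import (
	"fmt"
	"time"
)

// staleWindow matches the two-week rule described above.
const staleWindow = 14 * 24 * time.Hour

// isZombie: under AssumeChannelValid, a channel with both directions
// disabled is a zombie; otherwise both edges must have gone without an
// update for the stale window before we prune.
func isZombie(e1Disabled, e2Disabled bool,
	e1Update, e2Update, now time.Time) bool {

	if e1Disabled && e2Disabled {
		return true
	}
	cutoff := now.Add(-staleWindow)
	return e1Update.Before(cutoff) && e2Update.Before(cutoff)
}

func main() {
	now := time.Now()
	fmt.Println(isZombie(true, true, now, now, now)) // true
}
```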