This commit reverts cb4cd49dc8d3b0255afe9ff29af9c46c2dbb2c98 to bring
back the insufficient local balance failure.
Distinguishing between this failure and a regular "no route" failure
prevents meaningless htlcs from being sent out.
With mpp, findPath can no longer determine that there isn't enough
local bandwidth, because the full payment amount isn't known at that
point.
In a follow-up, this payment outcome can be reintroduced on a higher
level (payment lifecycle).
We whitelist a set of "expected" errors that can be returned from
RequestRoute, by converting them into a new type noRouteError. For any
other error returned by RequestRoute, we'll now exit immediately.
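As a rough sketch of the pattern (the type, sentinel errors and helper
below are illustrative, not lnd's exact API; only the standard errors
package is used):

    // noRouteError marks an "expected" pathfinding failure so that
    // the payment lifecycle can tell it apart from internal errors.
    type noRouteError struct{ cause error }

    func (e noRouteError) Error() string {
        return "no route: " + e.cause.Error()
    }

    var (
        errNoPathFound         = errors.New("no path found")
        errInsufficientBalance = errors.New("insufficient local balance")
    )

    // wrapRouteError whitelists known failures from RequestRoute;
    // anything unexpected is returned as-is so the caller exits
    // immediately.
    func wrapRouteError(err error) error {
        switch {
        case err == nil:
            return nil
        case errors.Is(err, errNoPathFound),
            errors.Is(err, errInsufficientBalance):
            return noRouteError{cause: err}
        default:
            return err
        }
    }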
This commit brings us in line with recent modifications to the spec,
which say we shouldn't pay nodes whose feature vectors signal unknown required
features, and also that we shouldn't route through nodes signaling
unknown required features.
Currently we assert that invoices don't have such features during
decoding, but now that users can specify feature vectors via the rpc
interface, it makes sense to perform this check deeper in the call stack.
This will also allow us to remove the check from decoding entirely,
making decodepayreq more useful for debugging.
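A minimal sketch of the check, using only the BOLT 9 convention that
even feature bits are required and odd bits are optional:

    // hasUnknownRequiredFeatures reports whether the vector sets a
    // required (even) bit from a feature pair we don't know at all.
    // Unknown odd bits are fine: "it's OK to be odd".
    func hasUnknownRequiredFeatures(set, known map[uint32]bool) bool {
        for bit := range set {
            if bit%2 != 0 {
                continue // optional bit, safe to ignore
            }
            if !known[bit] && !known[bit+1] {
                return true
            }
        }
        return false
    }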
The max hop count check can also be removed, because the real bound is
the payload size. By moving the check inside the search loop, we now
also backtrack when we hit the limit.
We move up the check for TLV support, since we will later use it to
determine if we can use dependent features, e.g. TLV records and payment
addresses.
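Conceptually, resolving TLV support first lets the dependent checks
bail out early. A sketch, with assumed helper and parameter names:

    // checkDestTLV gates TLV-dependent features: payment addresses
    // and custom records can only travel in a TLV onion payload.
    func checkDestTLV(features *lnwire.FeatureVector,
        payAddr *[32]byte, customRecords map[uint64][]byte) error {

        if features.HasFeature(lnwire.TLVOnionPayloadOptional) {
            return nil
        }
        if payAddr != nil || len(customRecords) > 0 {
            return errors.New("destination lacks TLV support")
        }
        return nil
    }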
This commit creates a wrapper struct, grouping all parameters that
influence the final hop during route construction. This is a preliminary
step for passing in the receiver's invoice feature bits, which will be
used to select an appropriate payment or payload type.
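A sketch of such a wrapper; the exact field set in lnd may differ:

    // finalHopParams groups everything that shapes the final hop
    // when a route is built from a path.
    type finalHopParams struct {
        amt       lnwire.MilliSatoshi
        cltvDelta uint16

        // records are the custom TLV records for the final hop.
        records map[uint64][]byte

        // destFeatures carries the receiver's invoice feature bits,
        // later used to pick the payment or payload type.
        destFeatures *lnwire.FeatureVector
    }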
In this commit, we overwrite the final hop's features with either the
destination features or those loaded from the graph fallback. This
ensures that the same features used in pathfinding will be provided to
route construction.
In an earlier commit, we validated the final hop's transitive feature
dependencies; here we extend that validation to non-final nodes.
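lnd's feature package exposes a dependency validator for this; a
sketch of applying it to candidate nodes inside the search loop (the
surrounding loop and node variable are assumed):

    // Skip nodes whose feature vectors have unmet transitive
    // dependencies, mirroring the earlier final-hop validation.
    if err := feature.ValidateDeps(node.Features); err != nil {
        continue
    }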
This commit adds an optional PaymentAddr field to the RestrictParams, so
that we can verify the final hop can support it before doing an
expensive round of pathfinding.
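A sketch of the field and the cheap pre-check (names approximate):

    type RestrictParams struct {
        // ...existing restrictions elided...

        // PaymentAddr, when set, is the payment address the final
        // hop must be able to accept.
        PaymentAddr *[32]byte
    }

    // Before starting the search:
    if r.PaymentAddr != nil &&
        !destFeatures.HasFeature(lnwire.PaymentAddrOptional) {

        return nil, errors.New("destination doesn't support payment_addr")
    }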
In this commit, we fix a bug that prevents us from sending custom
records to nodes that aren't in the graph. Previously we would simply
fail if we were unable to retrieve the node's features.
To remedy this, we add the option of supplying the destination's feature bits
into path finding. If present, we will use them directly without
consulting the graph, resolving the original issue. Instead, we will
only consult the graph as a fallback, which will still fail if the node
doesn't exist since the TLV features won't be populated in the empty
feature vector.
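The lookup order, sketched with a hypothetical graph accessor:

    // destinationFeatures returns the features used for the final
    // hop: caller-supplied bits win; the graph is only a fallback.
    func destinationFeatures(r *RestrictParams,
        target route.Vertex) (*lnwire.FeatureVector, error) {

        // Caller-supplied features need no graph lookup, which also
        // covers destinations that aren't in the graph at all.
        if r.DestFeatures != nil {
            return r.DestFeatures, nil
        }

        // Fallback: an unknown node yields an empty vector, so
        // TLV-dependent payments still fail downstream.
        node, err := fetchGraphNode(target) // hypothetical accessor
        if err != nil {
            return nil, err
        }
        return node.Features, nil
    }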
Furthermore, this also permits us to provide "virtual features" into the
pathfinding logic, where we make assumptions about what the receiver
supports even if the feature vector isn't actually taken from an
invoice. This can be useful in cases like keysend, where we don't have an
invoice, but we can still attempt the payment if we assume the receiver
supports TLV.
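For keysend, such a virtual vector could be synthesized along these
lines (a sketch using lnwire's constructors):

    // Assume the receiver understands TLV onions even though we
    // have no invoice to prove it.
    destFeatures := lnwire.NewFeatureVector(
        lnwire.NewRawFeatureVector(lnwire.TLVOnionPayloadOptional),
        lnwire.Features,
    )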
This commit prepares for more manipulation of custom records. A list of
tlv.Record types is more difficult to use than the more basic
map[uint64][]byte.
Furthermore, fields and variables are renamed to make them more
consistent.
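For illustration, with the map form a record lookup needs no
tlv.Stream plumbing (the keysend record type is used as an example):

    preimage := [32]byte{ /* ... */ }
    customRecords := map[uint64][]byte{
        5482373484: preimage[:], // keysend preimage record type
    }
    if _, ok := customRecords[5482373484]; ok {
        // direct lookups and merges, no tlv.Stream needed
    }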
A unified policy is constructed differently for local channels than for
other channels on the network: more information is available for local
channels, and the unified policy makes use of it.
Previously we used the pathfinding source pubkey to determine whether to
apply the local channel logic or not. If queryroutes is executed with a
source node that isn't the self node, this wouldn't work.
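Sketch of the corrected condition, with assumed variable names:

    // queryroutes may run with an arbitrary source node, so the
    // source pubkey is not a reliable signal for "local".
    isLocalChan := fromNode == u.selfNode // previously: fromNode == source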
When the (virtual) payment attempt cost is set to zero, probabilities
are no longer a factor in determining the best route. In case of routes
with equal costs, we'd just go with the first one found. This commit
refines this behavior by picking the route with the highest probability.
So even though probability doesn't affect the route cost, it is still
used as a tie breaker.
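A sketch of the tie-break; the struct fields are assumptions:

    type nodeWithDist struct {
        dist        int64   // accumulated route cost
        probability float64 // estimated success probability
    }

    // betterThan prefers lower cost; on equal cost, the higher
    // success probability wins, so probability still matters even
    // when it no longer enters the cost itself.
    func betterThan(a, b *nodeWithDist) bool {
        if a.dist != b.dist {
            return a.dist < b.dist
        }
        return a.probability > b.probability
    }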
This prepares for routing to self. With the termination condition
checked at the start of the loop, the search would terminate
immediately because the source is equal to the target.
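Schematically (names assumed), moving the check from the loop head to
its tail lets the start node be processed even when source == target:

    // Before: the head check fails immediately for source == target.
    //
    //     for partialPath.node != source { ... }
    //
    // After: process the current node first, then check.
    for {
        // ...relax the edges leading into the current node,
        // then pop the next best candidate from the heap...
        if current.node == source {
            break
        }
    }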
In this commit we change path finding to no longer consider all channels
between a pair of nodes individually. We assume that nodes forward
non-strictly, and when we attempt a connection between two nodes, we don't
want to try multiple channels because their policies may not be identical.
Having distinct policies for channels to the same peer is against the
recommendation in the spec, but it happens in the wild, especially
since we recently changed the default cltv delta value.
What this commit introduces is a unified policy. This can be looked upon
as the greatest common denominator of all policies and should maximize
the probability of getting the payment forwarded.
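A sketch of the combination step, with assumed field names:

    type policy struct {
        feeBaseMsat   uint64
        timeLockDelta uint16
    }

    // unifyPolicies folds the policies of all channels between two
    // peers into one. Taking the most demanding values maximizes the
    // chance the HTLC is accepted whichever channel forwards it.
    func unifyPolicies(policies []*policy) *policy {
        u := &policy{}
        for _, p := range policies {
            if p.feeBaseMsat > u.feeBaseMsat {
                u.feeBaseMsat = p.feeBaseMsat
            }
            if p.timeLockDelta > u.timeLockDelta {
                u.timeLockDelta = p.timeLockDelta
            }
        }
        return u
    }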
The distance map now holds the edge the current path is coming from,
removing the need for the next map.
Both distance map and distanceHeap now hold pointers instead of the full
struct to reduce allocations and copies.
Both these changes reduced path finding time by ~5% and memory usage by
~2 MB.
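A sketch of the reworked entry; the edge type is illustrative:

    // nodeWithDist is stored by pointer in both the distance map and
    // the heap, and records the edge the path arrived through, which
    // replaces the separate next map.
    type nodeWithDist struct {
        node     route.Vertex
        dist     int64
        fromEdge *channeldb.ChannelEdgePolicy
    }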
Pre-sizing these structures avoids a lot of map resizing, which causes
copies and rehashing of entries. We mostly know that the map won't
exceed that size, and it doesn't affect memory usage in any significant
way.
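A sketch, assuming the node count is available up front:

    // Pre-size the search structures to the (approximate) node count
    // to avoid repeated map growth, which copies and rehashes all
    // entries.
    numNodes := graph.NodeCount() // hypothetical accessor
    distance := make(map[route.Vertex]*nodeWithDist, numNodes)
    distanceHeap := newDistanceHeap(numNodes) // hypothetical constructor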
Calling `ForEachNode` hits the DB, and allocates and parses every node
in the graph. Walking the channels also loads nodes from the DB, which
meant that each node was read, parsed and allocated several times per
run. This reduces runtime by ~10ms and memory usage by ~4 MB.
With the introduction of the max CLTV limit parameter, nodes are able to
reject HTLCs that exceed it. This should also be applied to path
finding, as otherwise HTLCs crafted by the node itself that exceed the
limit would never leave the switch. This wasn't a big deal while the
previous max CLTV limit was ~5000 blocks, but once it was lowered to
1008, the issue became more apparent. Therefore, all of our path finding
attempts are now restricted to said limit in order to properly carry out
HTLCs to the network.
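A sketch of the per-edge pruning inside the search; field and variable
names are approximations:

    // Prune any edge whose accumulated time lock would exceed the
    // max CLTV limit (1008 blocks by default), so we never craft an
    // HTLC that our own switch would reject.
    newTimeLock := toNodeDist.incomingCltv + uint32(edge.TimeLockDelta)
    if newTimeLock > r.CltvLimit {
        continue
    }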