The max hop count check can also be removed, since the real bound is
the payload size. By moving that check inside the search loop, we now
also backtrack when we hit the limit.
This commit fixes a potential bug in our test harness by ensuring that
the constructed node policies are configured _after_ sorting. Currently
the node pubkeys are sorted, but additional parameters (max htlc,
disabled, etc.) are applied using the unsorted policies.
Most tests today use the symmetric channel constructor, so this
shouldn't cause an issue for the majority of them. We recently
introduced an asymmetric channel constructor for which this could have
been a problem; however, no issues were discovered.
Lastly, we remove the direction from the configuration altogether, and
derive it purely from the final sorting of the pubkeys.
We move up the check for TLV support, since we will later use it to
determine if we can use dependent features, e.g. TLV records and payment
addresses.
This commit creates a wrapper struct, grouping all parameters that
influence the final hop during route construction. This is a preliminary
step for passing in the receiver's invoice feature bits, which will be
used to select an appropriate payment or payload type.
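As an illustration, such a grouping could look like the following (a
sketch only; the struct and field names here are assumptions, not
necessarily the ones used in the code):

    package routing

    import "github.com/lightningnetwork/lnd/lnwire"

    // finalHopParams groups the parameters that influence construction
    // of the final hop of a route. Sketch; field names are assumptions.
    type finalHopParams struct {
            amt       lnwire.MilliSatoshi
            cltvDelta uint16

            // records holds custom TLV records destined for the
            // receiver.
            records map[uint64][]byte

            // paymentAddr, if set, is delivered to the receiver in the
            // final hop payload.
            paymentAddr *[32]byte
    }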
In this commit, we overwrite the final hop's features with either the
destination features or those loaded from the graph fallback. This
ensures that the same features used in pathfinding will be provided to
route construction.
In an earlier commit, we validated the final hop's transitive feature
dependencies; here we add the same validation for non-final nodes.
This commit adds an optional PaymentAddr field to the RestrictParams, so
that we can verify that the final hop supports it before doing an
expensive round of pathfinding.
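The intended fail-fast check could look roughly like this (a sketch
under assumed names; the error value and function are illustrative):

    package routing

    import (
            "errors"

            "github.com/lightningnetwork/lnd/lnwire"
    )

    var errNoPaymentAddr = errors.New(
            "destination does not support payment addresses")

    // checkPaymentAddr fails fast when a payment address is supplied
    // but the destination hasn't signaled support for it, saving an
    // expensive pathfinding round. Sketch; names are illustrative.
    func checkPaymentAddr(addr *[32]byte,
            destFeatures *lnwire.FeatureVector) error {

            if addr != nil &&
                    !destFeatures.HasFeature(lnwire.PaymentAddrOptional) {

                    return errNoPaymentAddr
            }

            return nil
    }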
In this commit, we fix a bug that prevents us from sending custom
records to nodes that aren't in the graph. Previously we would simply
fail if we were unable to retrieve the node's features.
To remedy this, we add the option of supplying the destination's
feature bits to path finding. If present, we use them directly,
resolving the original issue, and consult the graph only as a fallback.
The fallback will still fail if the node doesn't exist, since the TLV
features won't be populated in the empty feature vector.
Furthermore, this also permits us to provide "virtual features" into the
pathfinding logic, where we make assumptions about what the receiver
supports even if the feature vector isn't actually taken from an
invoice. This can be useful in cases like keysend, where we don't have an
invoice, but we can still attempt the payment if we assume the receiver
supports TLV.
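The selection logic amounts to something like the following (an
illustrative sketch, not the exact code):

    package routing

    import "github.com/lightningnetwork/lnd/lnwire"

    // featuresFor returns the feature vector used for the destination.
    // Sketch; names are illustrative.
    func featuresFor(supplied *lnwire.FeatureVector,
            fromGraph func() (*lnwire.FeatureVector, error)) (
            *lnwire.FeatureVector, error) {

            // An explicitly supplied vector (taken from an invoice, or
            // assumed as with keysend) takes precedence.
            if supplied != nil {
                    return supplied, nil
            }

            // Otherwise fall back to the graph. Unknown nodes yield an
            // empty vector, so TLV-dependent payments still fail.
            return fromGraph()
    }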
This commit allows custom node features to be populated in specific test
instances. For consistency, we auto-populate an empty feature vector for
nodes that have nil feature vectors before writing them to the database.
Previously, if a payment was sent with custom records attached, path
finding wouldn't check whether the final node was capable of receiving
custom records in a TLV payload.
This commit prepares for more manipulation of custom records. A list of
tlv.Record types is more difficult to use than the more basic
map[uint64][]byte.
Furthermore, fields and variables are renamed to make them more
consistent.
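With the map representation, manipulating records becomes plain map
operations, for example (the record type numbers below are
illustrative only):

    package main

    import "fmt"

    func main() {
            var preimage [32]byte

            // Adding, overwriting and looking up custom records are
            // simple map operations. Record type numbers are
            // illustrative only.
            customRecords := map[uint64][]byte{
                    5482373484: preimage[:],
            }
            customRecords[65537] = []byte("experimental")

            if _, ok := customRecords[5482373484]; ok {
                    fmt.Println("preimage record present")
            }
    }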
Construction of the unified policy differs between local channels and
other channels on the network: more information is available for local
channels, and the unified policy makes use of it.
Previously we used the pathfinding source pubkey to determine whether
to apply the local channel logic. If queryroutes is executed with a
source node that isn't the self node, this wouldn't work.
When the (virtual) payment attempt cost is set to zero, probabilities
are no longer a factor in determining the best route. In case of routes
with equal costs, we'd just go with the first one found. This commit
refines this behavior by picking the route with the highest probability.
So even though probability doesn't affect the route cost, it is still
used as a tie breaker.
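The comparison then behaves roughly as follows (a sketch; in reality
this lives inside the path finding loop):

    package routing

    // isBetterRoute prefers the lower-cost route, and for equal costs
    // falls back to the higher success probability. Sketch; names are
    // illustrative.
    func isBetterRoute(costA, costB int64, probA, probB float64) bool {
            if costA != costB {
                    return costA < costB
            }

            // Equal cost: use probability as the tie breaker.
            return probA > probB
    }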
This prepares for routing to self. With the termination condition
checked at the start of the loop, the search would exit immediately
because the source equals the target.
This commit modifies the FetchPayment method to return MPPayment structs
converted from the legacy on-disk format. This allows us to attach the
HTLCs to the events given to clients subscribing to the outcome of an
HTLC.
This change also bubbles up to routerrpc/router_server, where we
populate HTLCAttempts in the response and extract the legacy route
field from them.
Previously we also used the a priori probability for our own untried
channels. This led to local channels that had already seen a success
being prioritized over untried local channels. In some cases, depending
on the configured payment attempt cost, this could lead to the payment
taking a two-hop route while a direct payment was also possible.
An InvalidOnionPayload implies that the onion was successfully received
by the reporting node, but that they were unable to extract the
contents. Since we assume our own behavior is correct, this most likely
points to an error in the reporter's implementation, or indicates that
we sent an unknown required type. Therefore we only penalize that single
hop, and consider the failure terminal if the receiver reported it.
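Expressed as a rule (an illustrative sketch under assumed names):

    package routing

    // interpretInvalidOnionPayload applies the rule above: penalize
    // only the reporting hop, and treat the failure as terminal when
    // the receiver itself reported it. Sketch; names are illustrative.
    func interpretInvalidOnionPayload(reporterIdx, finalIdx int) (
            penalizeIdx int, terminal bool) {

            return reporterIdx, reporterIdx == finalIdx
    }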
Probabilities are no longer returned for querymc calls. To still provide
some insight into the mission control internals, this commit adds a new
rpc that calculates a success probability estimate for a specific node
pair and amount.
This prepares for decoupling the result interpretation of a single
payment attempt from the information stored in mission control memory
on the history of a node pair. A planned follow-up where we store both
the last success and last failure requires this decoupling.
In this commit we change path finding to no longer consider all channels
between a pair of nodes individually. We assume that nodes forward
non-strictly, so when we attempt a connection between two nodes, there
is no point in trying each of their channels separately, even though
their policies may not be identical.
Having distinct policies for channels to the same peer goes against the
recommendation in the spec, but it happens in the wild, especially
since we recently changed the default cltv delta value.
What this commit introduces is a unified policy. This can be looked upon
as the greatest common denominator of all policies and should maximize
the probability of getting the payment forwarded.
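A simplified sketch of that construction (the real version considers
more fields and the payment amount; names are illustrative):

    package routing

    import "github.com/lightningnetwork/lnd/channeldb"

    // unifyPolicies collapses the policies of all channels between a
    // pair of nodes into one worst-case policy that every channel can
    // satisfy. Simplified sketch; assumes at least one policy.
    func unifyPolicies(
            policies []*channeldb.ChannelEdgePolicy) channeldb.ChannelEdgePolicy {

            unified := *policies[0]
            for _, p := range policies[1:] {
                    // Take the largest time lock delta...
                    if p.TimeLockDelta > unified.TimeLockDelta {
                            unified.TimeLockDelta = p.TimeLockDelta
                    }

                    // ...and the highest fee rate, so the result works
                    // for every channel between the pair.
                    if p.FeeProportionalMillionths > unified.FeeProportionalMillionths {
                            unified.FeeProportionalMillionths = p.FeeProportionalMillionths
                    }
            }

            return unified
    }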
The distance map now holds the edge via which the current path arrives
at each node, removing the need for the next map.
Both the distance map and the distanceHeap now hold pointers instead of
full structs to reduce allocations and copies.
Together, these changes reduced path finding time by ~5% and memory
usage by ~2 MB.
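A sketch of the resulting shape (field names are illustrative; the
real struct carries more data):

    package routing

    import (
            "github.com/lightningnetwork/lnd/channeldb"
            "github.com/lightningnetwork/lnd/routing/route"
    )

    // nodeWithDist is the entry stored (by pointer) in both the
    // distance map and the distance heap. Keeping the incoming edge
    // here removes the need for a separate next map. Sketch only.
    type nodeWithDist struct {
            dist int64

            // fromEdge is the edge over which the current best path
            // reaches this node.
            fromEdge *channeldb.ChannelEdgePolicy
    }

    // Storing pointers avoids copying the struct on every update.
    var distance map[route.Vertex]*nodeWithDist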
Pre-sizing these structures avoids a lot of map resizing, which causes
copies and rehashing of entries. We mostly know that the map won't
exceed that size, and it doesn't affect memory usage in any significant
way.
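For example (the count below is an arbitrary assumed bound, tuned to
the graph size):

    package main

    func main() {
            // An assumed upper bound on the number of entries.
            const estimatedNodeCount = 50000

            // The capacity hint avoids repeated growth, which copies
            // and rehashes every existing entry.
            distance := make(map[uint64]float64, estimatedNodeCount)
            _ = distance
    }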
Calling `ForEachNode` hits the DB, and allocates and parses every node
in the graph. Walking the channels also loads nodes from the DB, so this
meant that each node was read/parsed/allocated several times per run.
This reduces runtime by ~10 ms and memory usage by ~4 MB.
This commit changes mission control to partially base the estimated
probability for untried connections on historical results obtained in
previous payment attempts. This incentivizes routing nodes to keep all
of their channels in good shape.
Probability estimates are amount dependent. Previously we assumed a
fixed amount, but that makes less sense as we let probability depend
more strongly on the amount in the future.
This commit modifies the interpretation of node-level failures.
Previously only the failing node was marked. With this commit, the
incoming and outgoing connections involved in the route are also marked
as failed.
The change prepares for the removal of node-level failures in mission
control probability estimation.
This commit changes the in-memory structure of the mission control
state. It prepares for calculation of a node probability. For this we
need to be able to efficiently look up the last results for all channels
of a node.
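A sketch of such a layout (names and fields are illustrative; the real
state tracks more data):

    package routing

    import (
            "time"

            "github.com/lightningnetwork/lnd/routing/route"
    )

    // pairResult is the outcome of the most recent attempt over a node
    // pair. Sketch only.
    type pairResult struct {
            timestamp time.Time
            success   bool
    }

    // Keying by source node first makes "all results for a node's
    // channels" a single lookup, as the node probability calculation
    // requires.
    type missionControlState struct {
            lastPairResult map[route.Vertex]map[route.Vertex]pairResult
    }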
With the introduction of the max CLTV limit parameter, nodes are able to
reject HTLCs that exceed it. This should also be applied to path
finding; otherwise, HTLCs crafted by the same node that exceed the
limit would never leave the switch. This wasn't a big deal while the
previous max CLTV limit was ~5000 blocks, but once it was lowered to
1008 the issue became more apparent. Therefore, all of our path finding
attempts are now restricted to said limit in order to properly carry
out HTLCs to the network.
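In the search loop, the restriction amounts to a check like this (a
sketch under assumed names; the real check rejects an edge before it is
relaxed):

    package routing

    // edgeWithinCltvLimit rejects an edge when adding its time lock
    // delta to the path's accumulated CLTV would exceed the configured
    // limit. Sketch; names are illustrative.
    func edgeWithinCltvLimit(accumulatedCltv uint32, timeLockDelta uint16,
            cltvLimit uint32) bool {

            return accumulatedCltv+uint32(timeLockDelta) <= cltvLimit
    }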