This commit modifies the FetchPayment method to return MPPayment structs
converted from the legacy on-disk format. This allows us to attach the
HTLCs to the events given to clients subscribing to the outcome of an
HTLC.
This commit also bubbles the change up to the routerrpc/router_server by
populating HTLCAttempts in the response and extracting the legacy route
field from the HTLCAttempts.
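As a rough sketch of the shape of this conversion (types and fields here
are simplified placeholders, not the actual channeldb/routerrpc
definitions):

    package main

    import "fmt"

    // Hypothetical, simplified types; the real channeldb structures
    // carry more fields (fees, attempt IDs, resolution info, etc.).
    type Route struct {
        TotalAmtMsat int64
        Hops         []string
    }

    type HTLCAttempt struct {
        Route   Route
        Settled bool
    }

    type MPPayment struct {
        PaymentHash string
        HTLCs       []HTLCAttempt
    }

    // fromLegacy wraps a legacy single-route payment into the MPPayment
    // format, so subscribers can always read attempts from the HTLCs
    // field regardless of how the payment was stored on disk.
    func fromLegacy(hash string, r Route, settled bool) *MPPayment {
        return &MPPayment{
            PaymentHash: hash,
            HTLCs:       []HTLCAttempt{{Route: r, Settled: settled}},
        }
    }

    func main() {
        p := fromLegacy("...", Route{TotalAmtMsat: 1000, Hops: []string{"alice", "bob"}}, true)
        fmt.Println(len(p.HTLCs), p.HTLCs[0].Route.Hops)
    }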
Previously we also used the a priori probability for our own untried
channels. This led to local channels that had already seen a success
being prioritized over untried local channels. In some cases, depending
on the configured payment attempt cost, this could lead to the payment
taking a two-hop route while a direct payment was also possible.
An InvalidOnionPayload implies that the onion was successfully received
by the reporting node, but that it was unable to extract the contents.
Since we assume our own behavior is correct, this most likely points to
an error in the reporter's implementation or to us having sent an
unknown required type. Therefore we only penalize that single hop, and
consider the failure terminal if the receiver reported it.
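A minimal sketch of that interpretation rule, with made-up helper names
(the actual mission control interpretation in lnd tracks more state):

    package main

    import "fmt"

    // interpretInvalidOnionPayload penalizes only the hop that reported
    // the failure, and treats the failure as terminal when the reporter
    // is the final hop (the receiver).
    func interpretInvalidOnionPayload(routeLen, reporterIdx int) (penalizedHop int, terminal bool) {
        // Only the single reporting hop is penalized.
        penalizedHop = reporterIdx

        // If the receiver itself reported the failure, retrying other
        // routes won't help, so the payment outcome is terminal.
        terminal = reporterIdx == routeLen-1
        return penalizedHop, terminal
    }

    func main() {
        hop, final := interpretInvalidOnionPayload(3, 2)
        fmt.Printf("penalize hop %d, terminal=%v\n", hop, final)
    }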
Probabilities are no longer returned for querymc calls. To still provide
some insight into the mission control internals, this commit adds a new
RPC that calculates a success probability estimate for a specific node
pair and amount.
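Assuming the new RPC is exposed through routerrpc roughly like the
sketch below (method and message names here are illustrative, not the
exact generated API), a client-side call could look like this:

    package main

    import (
        "context"
        "fmt"
    )

    // Illustrative request/response shapes for the probability RPC.
    type ProbabilityRequest struct {
        FromNode []byte
        ToNode   []byte
        AmtMsat  int64
    }

    type ProbabilityResponse struct {
        Probability float64
    }

    // ProbabilityClient is an assumed client-side interface for the RPC.
    type ProbabilityClient interface {
        QueryProbability(ctx context.Context, req *ProbabilityRequest) (*ProbabilityResponse, error)
    }

    // stubClient stands in for a real gRPC client in this sketch.
    type stubClient struct{}

    func (stubClient) QueryProbability(_ context.Context, _ *ProbabilityRequest) (*ProbabilityResponse, error) {
        return &ProbabilityResponse{Probability: 0.6}, nil
    }

    func main() {
        var c ProbabilityClient = stubClient{}
        resp, err := c.QueryProbability(context.Background(), &ProbabilityRequest{
            FromNode: []byte{0x02},
            ToNode:   []byte{0x03},
            AmtMsat:  100000,
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("estimated success probability: %.2f\n", resp.Probability)
    }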
This prepares for decoupling the result interpretation of a single
payment attempt from the information stored in mission control memory
on the history of a node pair. A planned follow-up where we store both
the last success and last failure requires this decoupling.
In this commit we change path finding to no longer consider all channels
between a pair of nodes individually. We assume that nodes forward
non-strictly, and when we attempt a connection between two nodes we
don't want to try multiple channels just because their policies may not
be identical. Having distinct policies for channels to the same peer is
against the recommendation in the spec, but it happens in the wild,
especially since we recently changed the default CLTV delta value.
What this commit introduces is a unified policy. This can be looked upon
as the greatest common denominator of all policies and should maximize
the probability of getting the payment forwarded.
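A sketch of the greatest-common-denominator idea, using simplified
policy fields (the actual unification in lnd is more involved):

    package main

    import "fmt"

    // ChannelPolicy is a simplified per-channel forwarding policy.
    type ChannelPolicy struct {
        FeeBaseMsat   int64
        FeeRatePPM    int64
        TimeLockDelta uint16
    }

    // unifyPolicies takes the most restrictive value of each field
    // across all channels to the same peer, so that whichever channel
    // the remote node picks for non-strict forwarding, our payment
    // still satisfies its policy.
    func unifyPolicies(policies []ChannelPolicy) ChannelPolicy {
        var unified ChannelPolicy
        for _, p := range policies {
            if p.FeeBaseMsat > unified.FeeBaseMsat {
                unified.FeeBaseMsat = p.FeeBaseMsat
            }
            if p.FeeRatePPM > unified.FeeRatePPM {
                unified.FeeRatePPM = p.FeeRatePPM
            }
            if p.TimeLockDelta > unified.TimeLockDelta {
                unified.TimeLockDelta = p.TimeLockDelta
            }
        }
        return unified
    }

    func main() {
        u := unifyPolicies([]ChannelPolicy{
            {FeeBaseMsat: 1000, FeeRatePPM: 1, TimeLockDelta: 40},
            {FeeBaseMsat: 500, FeeRatePPM: 100, TimeLockDelta: 144},
        })
        fmt.Printf("%+v\n", u)
    }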
The distance map now holds the edge the current path is coming from,
removing the need for the next map.
Both the distance map and the distanceHeap now hold pointers instead of
the full struct to reduce allocations and copies.
Together these changes reduced path finding time by ~5% and memory usage
by ~2 MB.
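Roughly, the new layout looks like the sketch below (names simplified
from the actual routing package):

    package main

    import "fmt"

    // edge is a simplified graph edge used during path finding.
    type edge struct {
        from, to string
        fee      int64
    }

    // nodeDistance records, besides the running distance, the edge the
    // current path arrived through, so no separate "next" map is needed
    // to unwind the route afterwards.
    type nodeDistance struct {
        node     string
        distance int64
        fromEdge *edge
    }

    func main() {
        // Storing pointers means relaxation updates mutate a single
        // allocation instead of copying the struct into both the map
        // and the heap.
        distance := make(map[string]*nodeDistance)

        e := &edge{from: "alice", to: "bob", fee: 10}
        distance["bob"] = &nodeDistance{node: "bob", distance: 10, fromEdge: e}

        // Unwinding the path simply follows fromEdge backwards.
        fmt.Println(distance["bob"].fromEdge.from)
    }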
Pre-sizing these structures avoids a lot of map resizing, which causes
copies and rehashing of entries. In most cases we know that the map
won't exceed that size, and pre-sizing doesn't affect memory usage in
any significant way.
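In Go this is just the capacity hint to `make`, as in this small
example:

    package main

    import "fmt"

    func main() {
        // Number of graph nodes, known (or closely bounded) up front.
        nodeCount := 10000

        // Pre-sizing allocates enough buckets immediately, avoiding
        // repeated grow-and-rehash cycles as entries are inserted.
        distance := make(map[string]int64, nodeCount)

        for i := 0; i < nodeCount; i++ {
            distance[fmt.Sprintf("node-%d", i)] = 0
        }
        fmt.Println(len(distance))
    }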
Calling `ForEachNode` hits the DB, and allocates and parses every node
in the graph. Walking the channels also loads nodes from the DB, so this
meant that each node was read/parsed/allocated several times per run.
This reduces runtime by ~10 ms and memory usage by ~4 MB.
This commit changes mission control to partially base the estimated
probability for untried connections on historical results obtained in
previous payment attempts. This incentivizes routing nodes to keep all
of their channels in good shape.
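As an illustration of the direction (the actual weighting used by lnd's
mission control differs), an untried pair's estimate could be pulled
towards the node's historical success rate like this:

    package main

    import "fmt"

    // nodeProbability blends the a priori probability with the success
    // rate observed on the node's other channels. Weight and formula
    // here are illustrative only.
    func nodeProbability(aprioriProb float64, successes, failures int, weight float64) float64 {
        total := successes + failures
        if total == 0 {
            // No history for this node: fall back to the a priori value.
            return aprioriProb
        }
        successRate := float64(successes) / float64(total)

        return weight*successRate + (1-weight)*aprioriProb
    }

    func main() {
        fmt.Println(nodeProbability(0.6, 1, 3, 0.75))
    }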
Probability estimates are amount dependent. Previously we assumed a
fixed amount, but that makes less sense as we make the probability more
dependent on the amount in the future.
This commit modifies the interpretation of node-level failures.
Previously only the failing node was marked. With this commit, the
incoming and outgoing connections involved in the route are also marked
as failed.
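Conceptually the marking works like the sketch below (hypothetical
helper, not the actual mission control code):

    package main

    import "fmt"

    // pair identifies a directed connection between two route nodes.
    type pair struct{ from, to string }

    // markNodeFailure records the directed pairs into and out of the
    // failing node on the attempted route, instead of only marking the
    // node itself.
    func markNodeFailure(route []string, failedIdx int, failedPairs map[pair]bool) {
        if failedIdx > 0 {
            failedPairs[pair{route[failedIdx-1], route[failedIdx]}] = true
        }
        if failedIdx < len(route)-1 {
            failedPairs[pair{route[failedIdx], route[failedIdx+1]}] = true
        }
    }

    func main() {
        failed := make(map[pair]bool)
        markNodeFailure([]string{"alice", "bob", "carol", "dave"}, 1, failed)
        fmt.Println(failed)
    }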
The change prepares for the removal of node-level failures in mission
control probability estimation.
This commit changes the in-memory structure of the mission control
state. It prepares for calculation of a node probability. For this we
need to be able to efficiently look up the last results for all channels
of a node.
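A sketch of the nested layout this enables (simplified types; the real
state carries more detail per result):

    package main

    import "fmt"

    // pairResult is the last observed outcome for a directed node pair.
    type pairResult struct {
        success bool
    }

    // missionControlState keys results first by the sending node, so
    // that "all last results for this node's channels" is a single map
    // lookup when calculating a node probability.
    type missionControlState struct {
        lastResults map[string]map[string]pairResult
    }

    func (m *missionControlState) setLastResult(from, to string, r pairResult) {
        if m.lastResults == nil {
            m.lastResults = make(map[string]map[string]pairResult)
        }
        if m.lastResults[from] == nil {
            m.lastResults[from] = make(map[string]pairResult)
        }
        m.lastResults[from][to] = r
    }

    // resultsForNode returns every recorded pair result originating at
    // the given node in one lookup.
    func (m *missionControlState) resultsForNode(from string) map[string]pairResult {
        return m.lastResults[from]
    }

    func main() {
        var s missionControlState
        s.setLastResult("alice", "bob", pairResult{success: true})
        s.setLastResult("alice", "carol", pairResult{success: false})
        fmt.Println(len(s.resultsForNode("alice")))
    }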
With the introduction of the max CLTV limit parameter, nodes are able
to reject HTLCs that exceed it. This should also be applied to path
finding, otherwise HTLCs crafted by the same node that exceed the limit
would never leave the switch. This wasn't a big deal while the previous
max CLTV limit was ~5000 blocks, but once it was lowered to 1008 the
issue became more apparent. Therefore, all of our path finding attempts
are now restricted by this limit in order to properly carry out HTLCs
to the network.
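The restriction itself boils down to a check of this shape during path
finding (a simplified sketch, not the exact lnd plumbing):

    package main

    import "fmt"

    // withinCLTVLimit rejects a candidate route whose total time lock,
    // relative to the current height, exceeds the configured maximum,
    // so we never construct HTLCs the switch would refuse to send.
    func withinCLTVLimit(totalTimeLock, currentHeight, cltvLimit uint32) bool {
        return totalTimeLock-currentHeight <= cltvLimit
    }

    func main() {
        const cltvLimit = 1008 // the lowered default mentioned above

        fmt.Println(withinCLTVLimit(600900, 600000, cltvLimit)) // true: 900 blocks
        fmt.Println(withinCLTVLimit(601200, 600000, cltvLimit)) // false: 1200 blocks
    }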