In this commit, we fix an existing bug that could cause lnd to crash if
we sent a payment and the *destination* sent a temporary channel
failure error message. When handling such a message, we’ll look in the
nextHop map to see which channel was *after* the node that sent the
error. However, if the destination sends this error, then there’ll be
no entry in this map.
To address this case, we now add a prevHop map. If we attempt to look
up a node in the nextHop map and it has no entry, then we’ll consult
the prevHop map.
We also update the set of tests to ensure that we’re properly setting
both the prevHop map and the nextHop map.
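As a rough sketch of the lookup fallback (the map and function names
here are illustrative, not the exact ones used in the code):

    // lookupErrorSource returns the channel ID relevant to the node that
    // sent the failure. We first check the nextHop map (the channel *after*
    // the node); if the error came from the destination there is no such
    // entry, so we fall back to the prevHop map (the channel *before* it).
    func lookupErrorSource(errSource [33]byte,
        nextHop, prevHop map[[33]byte]uint64) (uint64, bool) {

        if chanID, ok := nextHop[errSource]; ok {
            return chanID, true
        }

        // The destination has no outgoing hop, so consult prevHop instead.
        chanID, ok := prevHop[errSource]
        return chanID, ok
    }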
This commit adds synchronization around the processing
of multiple ChannelEdgePolicy updates for the same
channel ID at the same time.
This fixes a bug where the result of the HasChannelEdge
database access could be out of date by the time the
goroutine got to the point of calling UpdateEdgePolicy,
because a second goroutine had called UpdateEdgePolicy
in the meantime.
This bug was quite benign: if it happened at runtime,
we would eventually get the lost ChannelEdgePolicy
again, either from a peer sending it to us or after
failing a payment because we were using outdated
information. However, it would cause some of the tests
to flake, since losing routing information made
payments we expected to go through fail.
This is fixed by introducing a new mutex type that
takes an additional (id uint64) parameter when locking
and unlocking, keeping an internal map that tracks
which IDs are currently locked and the number of
goroutines waiting for each. This ensures we can still
process updates concurrently, only preventing updates
with the same channel ID from running concurrently.
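A minimal sketch of such an ID-keyed mutex (simplified relative to the
actual implementation, which also counts waiters so idle map entries
can be removed once nobody holds or waits on them):

    import "sync"

    // mutexForID provides Lock/Unlock keyed by a uint64 ID so that only
    // goroutines operating on the same ID block each other.
    type mutexForID struct {
        mtx   sync.Mutex             // guards the map below
        locks map[uint64]*sync.Mutex // per-ID mutexes
    }

    func newMutexForID() *mutexForID {
        return &mutexForID{locks: make(map[uint64]*sync.Mutex)}
    }

    func (m *mutexForID) Lock(id uint64) {
        m.mtx.Lock()
        l, ok := m.locks[id]
        if !ok {
            l = &sync.Mutex{}
            m.locks[id] = l
        }
        m.mtx.Unlock()

        l.Lock()
    }

    func (m *mutexForID) Unlock(id uint64) {
        m.mtx.Lock()
        l := m.locks[id]
        m.mtx.Unlock()

        l.Unlock()
    }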
In this commit, we modify the pruning semantics of the missionControl
struct. Before this commit, we would fetch a fresh pruned graph view on
each payment attempt. This served to instantly propagate any detected
failures to all outstanding payment attempts. However, this meant that
we could at times get stuck in a retry loop: if a send takes a few
seconds, we may prune an edge and try another, only for the original
edge to become unpruned in the meantime.
To remedy this, we now introduce the concept of a paymentSession. The
session will start out as a snapshot of the latest pruned graph view.
Any payment failures are now reported directly to the paymentSession
rather than to missionControl. The rationale for this is that
edges/vertices pruned as a result of failures never decay for a local
payment session, only for the global pruned view. With this in place,
we ensure that a session’s pruned set only grows.
Fixes #536.
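A rough sketch of the paymentSession idea described above, using
illustrative type and field names rather than the exact lnd API:

    // graphPruneView and paymentSession are illustrative stand-ins for
    // the real types.
    type graphPruneView struct {
        edges    map[uint64]struct{}
        vertices map[[33]byte]struct{}
    }

    // paymentSession holds the edges/vertices pruned for a single payment.
    // Unlike the global mission control view, entries added here never
    // decay for the lifetime of the session, so the pruned set only grows.
    type paymentSession struct {
        pruneViewSnapshot graphPruneView

        failedEdges    map[uint64]struct{}
        failedVertices map[[33]byte]struct{}
    }

    // ReportEdgeFailure marks an edge as failed for the rest of the session.
    func (p *paymentSession) ReportEdgeFailure(chanID uint64) {
        p.failedEdges[chanID] = struct{}{}
    }

    // ReportVertexFailure marks a node as failed for the rest of the session.
    func (p *paymentSession) ReportVertexFailure(node [33]byte) {
        p.failedVertices[node] = struct{}{}
    }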
Before this commit, we wouldn’t properly set the TotalFees attribute.
As a result, the final sorting step used to select candidate routes
would simply maintain the time-lock ordering rather than also sorting
by total fees. This commit fixes the issue and also allows the test
added in the prior commit to pass.
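For illustration, the desired ordering over candidate routes looks
roughly like the following (the candidateRoute type and field names are
hypothetical stand-ins for the real route type):

    import "sort"

    // candidateRoute is an illustrative stand-in for the real route type.
    type candidateRoute struct {
        TotalFees     uint64 // total fees in millisatoshis
        TotalTimeLock uint32 // total CLTV across the route
    }

    // sortCandidateRoutes orders routes primarily by total fees, breaking
    // ties by total time lock, so the cheapest routes come first.
    func sortCandidateRoutes(routes []candidateRoute) {
        sort.Slice(routes, func(i, j int) bool {
            if routes[i].TotalFees != routes[j].TotalFees {
                return routes[i].TotalFees < routes[j].TotalFees
            }
            return routes[i].TotalTimeLock < routes[j].TotalTimeLock
        })
    }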
This commit fixes an existing bug within the ChannelRouter. Prior to
this commit, if the chain view skipped blocks or for some reason we had
a gap in blocks delivered, then we would simply accept them. This had
the potential to cause us to miss on-chain channel closure events. To
remedy this, we won’t process any blocks whose heights aren’t
*strictly* increasing.
A longer term fix would be to have the ChainView take a block height,
and re-dispatch any notifications from that height to the current
height.
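One way to implement such a guard, assuming the router tracks its best
known height (the exact check in the code may differ):

    // acceptBlock reports whether a newly delivered block should be
    // processed: its height must strictly increase our best known height,
    // so stale or out-of-order notifications are ignored instead of being
    // silently applied. (Names are illustrative.)
    func acceptBlock(bestKnownHeight, newHeight uint32) bool {
        return newHeight > bestKnownHeight
    }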
In this commit, we implement adherence to the disabled bit within a
ChannelUpdate during path finding. If a channel is marked as disabled,
then we won’t attempt to route through it. A test has been added to
exercise this new check.
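The check during path finding amounts to an edge filter along these
lines (the flag layout is an assumption in this sketch):

    // skipDisabledEdge reports whether an edge should be skipped during
    // path finding because its latest ChannelUpdate has the disabled bit
    // set. The bit position is assumed here (bit 1 of the flags field).
    func skipDisabledEdge(flags uint16) bool {
        const chanUpdateDisabled = 1 << 1
        return flags&chanUpdateDisabled != 0
    }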
In this commit, we update path finding to skip an edge if the amount
we’re trying to route through it is below the MinHTLC (in mSAT) value
for that node. We also add a new test to exercise this behavior. In
order for our test to work properly, we’ve modified the JSON to make
the edge to Goku have a higher min HTLC value.
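The corresponding edge filter is a simple comparison, sketched here
with illustrative names:

    // skipBelowMinHTLC reports whether an edge should be skipped because
    // the amount we are trying to route is below the edge's advertised
    // minimum HTLC. Both values are in millisatoshis.
    func skipBelowMinHTLC(amtMsat, minHTLCMsat uint64) bool {
        return amtMsat < minHTLCMsat
    }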
In this commit, we modify the height value passed into UpdateFilter
upon restart. Before this commit, we would pass in the prune height,
which would cause a full rescan within the FilteredChainView if the
best height was greater than the prune height. This was redundant, as
we would shortly carry out a manual rescan in the method below. To fix
this, we now pass in the bestHeight. This isn’t an issue, as the
syncGraphWithChain method will manually scan up to that best height.
In this commit, we add a new abstraction, the ValidationBarrier. This
struct will be used to allow parallel validation of announcements
within the AuthenticatedGossiper as well as the ChannelRouter. Naively
validating announcements in parallel would run into issues, as it would
be possible to validate an update announcement before validating the
channel announcement itself. We solve this by creating a waiting
dependency using the ValidationBarrier to ensure that dependent jobs
wait until their parents have been fully validated.
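A rough sketch of the barrier idea, with simplified signatures (the
real type is keyed by lnwire message and also bounds the number of
concurrent validation jobs):

    import "sync"

    // validationBarrier lets dependent jobs (e.g. validating a
    // ChannelUpdate) wait until their parent job (validating the
    // ChannelAnnouncement for the same channel) has completed.
    type validationBarrier struct {
        mtx     sync.Mutex
        signals map[uint64]chan struct{} // per-channel "parent done" signals
    }

    func newValidationBarrier() *validationBarrier {
        return &validationBarrier{signals: make(map[uint64]chan struct{})}
    }

    // InitJobIfNeeded registers a pending parent job for the given channel.
    func (v *validationBarrier) InitJobIfNeeded(chanID uint64) {
        v.mtx.Lock()
        defer v.mtx.Unlock()
        if _, ok := v.signals[chanID]; !ok {
            v.signals[chanID] = make(chan struct{})
        }
    }

    // WaitForParent blocks until the parent job for chanID has signalled
    // completion, or returns immediately if no parent was registered.
    func (v *validationBarrier) WaitForParent(chanID uint64) {
        v.mtx.Lock()
        sig, ok := v.signals[chanID]
        v.mtx.Unlock()
        if ok {
            <-sig
        }
    }

    // SignalParentDone is called once the channel announcement has been
    // fully validated, releasing any dependent jobs.
    func (v *validationBarrier) SignalParentDone(chanID uint64) {
        v.mtx.Lock()
        defer v.mtx.Unlock()
        if sig, ok := v.signals[chanID]; ok {
            close(sig)
            delete(v.signals, chanID)
        }
    }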
In this commit we ensure that if this is the first time that the
ChannelRouter is starting, then we set the pruned height+hash to the
current best height. Otherwise, it’s possible that we attempt to update
the filter with a 0 prune height, which will restart a historical
rescan unnecessarily.
In this commit we ensure that we only update the filter, if we have a
non-zero chain view. Otherwise, a mini rescan may be kicked off
unnecessarily if we don’t yet know of any channels in the greater
graph.
Run go fmt so config file is formatted correctly. Also rename
newVertex to NewVertex in pathfind_test and notifications_test
as it is now exported from the routing package.
For Part 1 of Issue #275. Create isolated private struct in
networkHandler goroutine that will de-duplicate
announcements added to the batch. The struct contains maps
for each of channel announcements, channel updates, and
node announcements to keep track of unique announcements.
The struct has a Reset method to reset stored announcements, an
AddMsg(lnwire.Message) method to add a new message to the current
batch, and a Batch method to return the set of de-duplicated
announcements.
Also fix a few minor typos.
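A sketch of the de-duplication struct described above. The keys here
are opaque strings for illustration; the real struct keys channel
announcements and updates by short channel ID (plus direction) and node
announcements by pubkey, and always keeps the most recent message per
key:

    // Message stands in for lnwire.Message in this sketch.
    type Message interface{}

    // deDupedAnnouncements keeps at most one announcement per key for the
    // current batch.
    type deDupedAnnouncements struct {
        channelAnnouncements map[string]Message
        channelUpdates       map[string]Message
        nodeAnnouncements    map[string]Message
    }

    // Reset clears all stored announcements, starting a fresh batch.
    func (d *deDupedAnnouncements) Reset() {
        d.channelAnnouncements = make(map[string]Message)
        d.channelUpdates = make(map[string]Message)
        d.nodeAnnouncements = make(map[string]Message)
    }

    // addChannelUpdate records a channel update, replacing any earlier
    // update with the same key in this batch. Similar helpers would exist
    // for channel and node announcements, dispatched from AddMsg via a
    // type switch on the lnwire message.
    func (d *deDupedAnnouncements) addChannelUpdate(key string, msg Message) {
        d.channelUpdates[key] = msg
    }

    // Batch returns the de-duplicated announcements collected so far.
    func (d *deDupedAnnouncements) Batch() []Message {
        msgs := make([]Message, 0, len(d.channelAnnouncements)+
            len(d.channelUpdates)+len(d.nodeAnnouncements))
        for _, msg := range d.channelAnnouncements {
            msgs = append(msgs, msg)
        }
        for _, msg := range d.channelUpdates {
            msgs = append(msgs, msg)
        }
        for _, msg := range d.nodeAnnouncements {
            msgs = append(msgs, msg)
        }
        return msgs
    }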
This commit alters the behavior of the router's logic on
startup, ensuring that the chain view is filtered using
the router's latest prune height. Before, the chain was
filtered using the bestHeight variable, which was
uninitialized, benignly forcing a rescan from genesis.
In tracking this down, we realized that we should
actually be using the prune height, as this is
representative of the channel view loaded from disk.
The best height/hash are now only used during
startup to determine if we are out of sync.
In this commit we fix an existing bug within the ChannelRouter. Before
this commit, we would sync our graph prune state, *then* update the
chain filter. This is incorrect, as the blocks we manually pruned may
have included channel closing transactions. As a result, we would miss
the pruning of a set of channels, and assume that they were still
active.
In this commit, we fix this by reversing the order: we first update the
chain filter and THEN sync the channel graph.
In this commit we add a new test to the set of unit tests for the
ChannelRouter: TestRouterChansClosedOfflinePruneGraph. This tests that
if channels are closed while the ChannelRouter is down, then upon
restart the channels are properly recognized as being closed.
In this commit, we add a Reset() method to the mockChainView struct.
With this new method, tests are able to fully simulate a restart of the
ChannelRouter. This is necessary as the FilteredChainView instances are
assumed to be stateless, and don’t write their state to disk before a
restart.
This commit adds a test for the FilteredChainView interfaces,
making sure they notify about disconnected/connected blocks
in the correct order during a reorg.
This commit makes use of the blockEventQueue within the neutrino
implementation of FilteredChainView to ensure connected and
disconnected blocks are consumed in order by the reader.
It also specifies that neutrino is not to send disconnected blocks
notifications during rescans, making it consistent with the btcd
implementation.
This commit moves the btcd chain view away from using the deprecated
onBlockConnected/Disconnected callbacks, and instead uses
onFilteredBlockConnected/Disconnected.
This commit also implements the sending of disconnected blocks
over the staleBlocks channel. To send these blocks, the
blockEventQueue is used to ensure the ordering of blocks is
correctly kept.
It also changes the way filter updates are handled. Since we
now load the tx filter into the RPC server itself, we can call
RescanBlocks instead of manually filtering blocks. These
rescanned blocks are also added to the blockEventQueue,
ensuring the ordering is kept.
blockEventQueue is an ordered queue for block events sent from a
FilteredChainView. The two types of possible block events are
connected/new blocks, and disconnected/stale blocks. The
blockEventQueue keeps the order of these events intact, while
still being non-blocking. This is important in order for the
chainView's call to onFilteredBlockConnected/Disconnected to not
get blocked, and for the consumer of the block events to always
get the events in the correct order.
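A simplified sketch of such a non-blocking, order-preserving queue (the
real blockEventQueue also carries the block data and supports a clean
shutdown; names here are illustrative):

    // blockEvent is either a newly connected block or a stale
    // (disconnected) block, tagged so consumers can tell them apart.
    type blockEvent struct {
        stale  bool
        height uint32
        // The block hash and filtered transactions would accompany the
        // event as well; they are omitted from this sketch.
    }

    // blockEventQueue buffers events so the producer (the chain view's
    // notification callback) is never stuck waiting on the consumer,
    // while delivering events to the consumer in strict FIFO order.
    type blockEventQueue struct {
        in  chan blockEvent
        out chan blockEvent
    }

    func newBlockEventQueue() *blockEventQueue {
        q := &blockEventQueue{
            in:  make(chan blockEvent),
            out: make(chan blockEvent),
        }
        go q.run()
        return q
    }

    // run buffers incoming events in a slice and drains them to the out
    // channel in the order they arrived.
    func (q *blockEventQueue) run() {
        var pending []blockEvent
        for {
            // With nothing pending, we can only receive new events.
            if len(pending) == 0 {
                ev, ok := <-q.in
                if !ok {
                    close(q.out)
                    return
                }
                pending = append(pending, ev)
                continue
            }

            select {
            case ev, ok := <-q.in:
                if !ok {
                    // Flush whatever is left, then signal completion.
                    for _, e := range pending {
                        q.out <- e
                    }
                    close(q.out)
                    return
                }
                pending = append(pending, ev)
            case q.out <- pending[0]:
                pending = pending[1:]
            }
        }
    }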
Before this commit, we would expect that structurally we don’t pay any
fee for the first hop, but do for the final hop. After the latest
commit, this is now flipped: when we say fee, we mean the fee that we
need to pay to transit a link. For the final hop, there’s no additional
distance to be traveled, so the fee is nothing.
In this commit we fix an existing miscalculation in the fees that we
prescribe within the onion payloads for multi-hop routes. Before this
commit, if a route had more than 3 hops, then we would erroneously give
the second to last hop zero fees.
In this commit we correct this behavior, and also re-write the fee
calculation code fragment within newRoute for readability and clarity.
There are now only two cases: this is the last hop, and this is any
other hop. In the case of the last hop, simply send the exact amount
with no additional fee. In the case of an intermediate hop, we use the
_prior_ (closer to the destination) hop to calculate the amount of fees
we need, which allows us to compute the incoming flow. Using that
incoming flow, we then can compute the amount that the hop should
forward out.
Partially fixes #391.
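A sketch of the backwards two-case calculation described above. The
names and fee schedule are illustrative, which node advertises which
policy is elided, and all amounts are in millisatoshis:

    // hopPolicy is an illustrative fee schedule for a hop.
    type hopPolicy struct {
        BaseFeeMsat uint64 // flat fee per forwarded HTLC
        FeeRatePPM  uint64 // proportional fee, parts-per-million
    }

    func feeFor(p hopPolicy, amtMsat uint64) uint64 {
        return p.BaseFeeMsat + amtMsat*p.FeeRatePPM/1_000_000
    }

    // computeHopAmounts returns, per hop, the amount that hop forwards
    // out, working backwards from the destination.
    func computeHopAmounts(policies []hopPolicy, paymentAmt uint64) []uint64 {
        n := len(policies)
        amts := make([]uint64, n)
        for i := n - 1; i >= 0; i-- {
            if i == n-1 {
                // Final hop: deliver the exact amount, no extra fee.
                amts[i] = paymentAmt
                continue
            }
            // Intermediate hop: forward what the next (closer to the
            // destination) hop must receive, i.e. its outgoing amount
            // plus the fee for forwarding it.
            amts[i] = amts[i+1] + feeFor(policies[i+1], amts[i+1])
        }
        return amts
    }

For example, with a 1,000,000 msat payment over three hops where each
policy charges a 1,000 msat base fee plus 100 parts-per-million, the
final hop forwards 1,000,000, the middle hop 1,001,100, and the first
hop 1,002,200 (the proportional part is truncated by integer division).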
In this commit we fix a slight bug within the existing SendPayment loop
which would cause the wrong error to be returned to users. Prior to
this commit, if we received an update identical to what we were already
aware of, then that error would be returned rather than the
ForwardingError that encapsulated this update.
In this commit we remedy this by properly returning the exact error.
Partially fixes #391.
In this commit we restore the in-memory ChannelRouter, as we’ll now
dynamically set the ChannelRouter’s pointer within the spec path
finding test example.