This was initially done because a few assertions throughout the
codebase required a channel's policy to be known. Now that those
assertions have been addressed, we no longer need to store restored
channels in the graph, as their policies were incomplete anyway.
Modifies the syncer.replyChanRangeQuery method to use the
LastBlockHeight method on the query. LastBlockHeight safely calculates
the ending block height and prevents an overflow of start_block +
num_blocks. Prior to this change, query messages whose start_block +
num_blocks overflowed uint32_max would return zero results in the
reply message.
Tests are added to cover the bug fix and ensure proper start and end
values are supplied to the channel graph filter.
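A minimal sketch of the overflow-safe calculation, under the assumption that the query exposes its start block and block count as uint32 fields (illustrative, not the exact lnwire implementation):

```go
package main

import "math"

// lastBlockHeight mirrors the idea behind LastBlockHeight: perform the
// addition in 64-bit space and clamp at the uint32 maximum so the end
// height can never wrap around.
func lastBlockHeight(firstBlockHeight, numBlocks uint32) uint32 {
	end := uint64(firstBlockHeight) + uint64(numBlocks)
	if end > math.MaxUint32 {
		return math.MaxUint32
	}
	return uint32(end)
}
```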
This fixes a decoding error when the list of short channel ids within a
QueryShortChanIDs message started with a zero sid.
BOLT-0007 specifies that lists of short channel ids should be sorted in
ascending order. Previously, this was checked within lnwire by comparing
each sid in the list against the previously decoded one, starting from
an initial empty (zero) sid. This meant that a list starting with a zero
sid couldn't be decoded, since its first element would _not_ be greater
than that initial value (namely: also zero).
Given that ordering can only be checked starting at the second element,
we adjust the check to ensure the proper behavior.
A unit test is also added to ensure no future regressions on this
behavior.
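A minimal sketch of the adjusted ordering check, using plain uint64 sids rather than lnwire's types:

```go
package main

import "fmt"

// checkAscending enforces the ascending-order requirement while still
// accepting a list whose first element is the zero sid: comparisons only
// start at the second element.
func checkAscending(sids []uint64) error {
	for i := 1; i < len(sids); i++ {
		if sids[i] <= sids[i-1] {
			return fmt.Errorf("sids not sorted in ascending "+
				"order: %d <= %d", sids[i], sids[i-1])
		}
	}
	return nil
}
```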
In this commit we add the ability to intercept forwarded htlc packets
straight from the RPC layer. The RPC layer handles a bidirectional
stream that communicates the intercepted packets to the client and
handles the client's responses by coordinating with the interceptable
switch.
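A self-contained sketch of that bidirectional pattern; the stream and message types below are hypothetical stand-ins, not lnd's actual RPC definitions:

```go
package main

// packet and resolution are illustrative stand-ins for an intercepted
// forward and the client's verdict on it.
type packet struct{ circuitKey uint64 }

type resolution struct {
	circuitKey uint64
	settle     bool
}

// interceptStream abstracts a bidirectional RPC stream.
type interceptStream interface {
	Send(packet) error
	Recv() (resolution, error)
}

// serveInterceptor pushes intercepted packets to the client on one
// goroutine and reads the client's resolutions on the other, handing each
// resolution back to the switch via resolve.
func serveInterceptor(s interceptStream, intercepted <-chan packet,
	resolve func(resolution) error) error {

	go func() {
		for pkt := range intercepted {
			// If sending fails the stream is broken; the Recv
			// loop below will return an error as well.
			if err := s.Send(pkt); err != nil {
				return
			}
		}
	}()

	for {
		res, err := s.Recv()
		if err != nil {
			return err
		}
		if err := resolve(res); err != nil {
			return err
		}
	}
}
```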
In this commit we implement a wrapper around the switch, called
InterceptableSwitch. This wrapper behaves like a proxy: it intercepts
forwarded packets and allows an external interceptor to signal whether
it wants to hold a given forward and resolve it manually later, or let
the switch execute its default behavior.
This infrastructure allows the RPC layer to expose an interceptor
registration API to the user, enabling the implementation of custom
routing behavior.
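An illustrative sketch of the proxy idea; the type and method names below are simplified stand-ins, not lnd's actual htlcswitch API:

```go
package main

// interceptedForward represents a forward the switch would otherwise
// apply its default behavior to (illustrative stand-in).
type interceptedForward struct{ circuitKey uint64 }

// ForwardInterceptor returns true if it takes ownership of the forward
// and will resolve it manually later, or false to let the switch proceed
// as usual.
type ForwardInterceptor func(interceptedForward) bool

// interceptableSwitch proxies forwards through an optional interceptor
// before handing them to the underlying switch.
type interceptableSwitch struct {
	interceptor ForwardInterceptor
	forward     func(interceptedForward) error // default switch behavior
}

// SetInterceptor registers the external interceptor, e.g. one backed by
// the RPC stream described above.
func (s *interceptableSwitch) SetInterceptor(i ForwardInterceptor) {
	s.interceptor = i
}

// ForwardPacket consults the interceptor first and only falls back to the
// default behavior if the packet is not held.
func (s *interceptableSwitch) ForwardPacket(fwd interceptedForward) error {
	if s.interceptor != nil && s.interceptor(fwd) {
		// The interceptor holds the packet and resolves it later.
		return nil
	}
	return s.forward(fwd)
}
```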
As part of the preparation for the switch interceptor feature, this
function is changed to return an error instead of an error channel
that is closed automatically.
Returning an error channel has become complex to maintain and
implement as more asynchronous flows are added to the switch.
The change doesn't affect the current behavior, which logs the
errors as before.
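Roughly, the shape of the change (a simplified sketch with hypothetical function names, not the actual switch code):

```go
package main

import "log"

// forwardOld shows the previous pattern: an error channel that is closed
// automatically, which becomes hard to maintain as more asynchronous
// flows are added. (Hypothetical stand-in.)
func forwardOld() chan error {
	errChan := make(chan error, 1)
	go func() {
		defer close(errChan)
		// ... asynchronous work that may send an error ...
	}()
	return errChan
}

// forwardNew shows the new pattern: the error is returned directly and
// the caller simply logs it, preserving the previous behavior.
func forwardNew() error {
	// ... the same work, reporting failure via the return value ...
	return nil
}

func main() {
	if err := <-forwardOld(); err != nil {
		log.Println(err)
	}
	if err := forwardNew(); err != nil {
		log.Println(err)
	}
}
```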
In this commit, we add an additional defense against starting up with an
invalid SCB state. With this commit, we'll now fail to start up if we're
unable to update or read the existing SCB state from disk. This will
prevent an lnd node from starting up with an invalid SCB file, an SCB
file it can't decrypt, or an SCB left over from another node.
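A sketch of the fail-fast idea at startup; the helper and path handling below are hypothetical, the point is only the refusal to continue:

```go
package main

import (
	"fmt"
	"os"
)

// verifyBackupFile aborts startup if an existing SCB file cannot be read;
// decryption and update checks would follow the same pattern.
// (Hypothetical helper, not lnd's actual startup code.)
func verifyBackupFile(path string) error {
	_, err := os.ReadFile(path)
	switch {
	case os.IsNotExist(err):
		// No existing backup on disk; nothing to validate.
		return nil
	case err != nil:
		return fmt.Errorf("unable to read existing SCB %s, "+
			"refusing to start: %w", path, err)
	}

	// Decryption/validation of the contents would happen here.
	return nil
}
```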
In this commit, we fix a bug that could possibly cause a user's on-disk
backup file to be wiped out if they ever started _another_ lnd node
with different channel state. To remedy this, before we swap out the
channel state with what's on disk, we'll first read out the contents of
the on-disk SCB file and _combine_ that with what we have in memory.
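A sketch of the combine-before-swap step, modelling the backups as a simple map from channel point to serialized backup (illustrative only):

```go
package main

// combineBackups merges what is already on disk with the in-memory set
// before the on-disk file is replaced, so that starting another lnd node
// with different channel state can no longer wipe existing entries.
func combineBackups(onDisk, inMemory map[string][]byte) map[string][]byte {
	combined := make(map[string][]byte, len(onDisk)+len(inMemory))
	for chanPoint, backup := range onDisk {
		combined[chanPoint] = backup
	}
	// The in-memory state takes precedence for channels present in both.
	for chanPoint, backup := range inMemory {
		combined[chanPoint] = backup
	}
	return combined
}
```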
Fixes #4377
Introduces a new chancloser package which exposes a ChanCloser
struct that handles the cooperative channel closure negotiation
and is meant to replace chancloser.go in the lnd package. Updates
all references to chancloser.go to instead use the chancloser package.
The current implementation of LeaseOutput already checked whether the
output had been leased by the persisted implementation, but not by the
in-memory one used by lnd internally. Without this check, we could
potentially end up with a double-spend error if lnd acquired the UTXO
internally before the LeaseOutput call.
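A simplified sketch of the added check; the real wallet code differs, and the names here are illustrative:

```go
package main

import (
	"errors"
	"sync"
)

var errOutputAlreadyLeased = errors.New("output already leased")

// leaser tracks persisted leases as well as outputs lnd has already
// reserved in memory. (Illustrative stand-in.)
type leaser struct {
	mu        sync.Mutex
	persisted map[string]struct{} // leases known to the persisted store
	inMemory  map[string]struct{} // UTXOs reserved internally by lnd
}

// leaseOutput rejects an outpoint that is leased in either set, preventing
// a double spend when lnd acquired the UTXO internally before the
// LeaseOutput call.
func (l *leaser) leaseOutput(outpoint string) error {
	l.mu.Lock()
	defer l.mu.Unlock()

	if _, ok := l.persisted[outpoint]; ok {
		return errOutputAlreadyLeased
	}
	if _, ok := l.inMemory[outpoint]; ok {
		return errOutputAlreadyLeased
	}

	l.persisted[outpoint] = struct{}{}
	return nil
}
```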
This addresses a panic when a notification is canceled after it's been
detected as included in a block and before its confirmation notification
is dispatched.
Use the new paginator struct for payments. Add some tests which
specifically test cases on and around the missing index we force in our
test to ensure that we properly handle this case. We also add a sanity
check in the test that checks that we can query when we have no
payments.
With our new index mapping sequence number to payment hash, it is
possible for more than one sequence number to point to the same hash,
because legacy lnd allowed duplicate payments under the same hash. We
now store these
payments in a nested bucket within the payments database. To allow
lookup of the correct payment from an index, we require matching of the
payment hash and sequence number.
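A simplified sketch of the lookup rule, where both the hash and the sequence number must match; the flat candidate slice here stands in for the nested bucket layout:

```go
package main

import "errors"

// payment is a minimal stand-in for a stored payment.
type payment struct {
	hash   [32]byte
	seqNum uint64
}

var errPaymentNotFound = errors.New("payment not found")

// lookupBySeqNum resolves an index entry (sequence number -> hash) to the
// correct payment: because legacy duplicates share a hash, the sequence
// number must match as well.
func lookupBySeqNum(candidates []payment, hash [32]byte,
	seqNum uint64) (payment, error) {

	for _, p := range candidates {
		if p.hash == hash && p.seqNum == seqNum {
			return p, nil
		}
	}
	return payment{}, errPaymentNotFound
}
```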
We now use the same method of pagination for invoices and payments.
Rather than duplicate logic across calls, we add a paginator struct
which can have query-specific logic plugged into it. This commit also
addresses an existing issue where a reverse query for invoices with an
offset larger than our last offset would not return any invoices. We
update this behaviour to act more like c.Seek and just start from the
last entry. This behaviour change is covered by a unit test that
previously checked for the lack of invoices.
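A rough sketch of the shared paginator idea, with query-specific behavior plugged in as a callback; this is simplified and in-memory, not the real kvdb-backed code:

```go
package main

// paginator pages through a sequence-number keyed set in either
// direction, delegating item fetching to a query-specific callback.
type paginator struct {
	reversed    bool
	indexOffset uint64
	maxItems    uint64

	// fetchItem loads the item stored under a sequence number, if any.
	fetchItem func(seqNum uint64) (interface{}, bool)

	// lastIndex is the highest sequence number in the store.
	lastIndex uint64
}

// query walks the index from the offset. When paginating backwards with
// an offset beyond the last entry, it behaves like a cursor Seek and
// simply starts from the last entry instead of returning nothing.
func (p *paginator) query() []interface{} {
	var items []interface{}

	cursor := p.indexOffset
	if p.reversed {
		if cursor == 0 || cursor > p.lastIndex {
			cursor = p.lastIndex
		} else {
			cursor--
		}
	} else {
		cursor++
	}

	for uint64(len(items)) < p.maxItems {
		if item, ok := p.fetchItem(cursor); ok {
			items = append(items, item)
		}

		if p.reversed {
			if cursor == 0 {
				break
			}
			cursor--
		} else {
			if cursor > p.lastIndex {
				break
			}
			cursor++
		}
	}
	return items
}
```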
In our current invoice pagination logic, we would not return any
invoices if our offset index was more than 1 off our last index and we
were paginating backwards. This commit adds a test case for this
behaviour before fixing it in the next commit.
Add an entry to a payments index bucket which maps sequence number
to payment hash when we initiate payments. This allows for more
efficient paginated queries. We create the top level bucket in its
own migration so that we do not need to create it on the fly.
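A simplified sketch of the two pieces using bbolt-style buckets; the bucket name and helpers are illustrative, not lnd's actual schema:

```go
package main

import (
	"encoding/binary"

	bolt "go.etcd.io/bbolt"
)

// paymentsIndexBucket is an illustrative name for the top-level bucket
// mapping sequence number -> payment hash.
var paymentsIndexBucket = []byte("payments-index")

// migrateCreateIndexBucket creates the top-level index bucket up front so
// it never needs to be created on the fly.
func migrateCreateIndexBucket(tx *bolt.Tx) error {
	_, err := tx.CreateBucketIfNotExists(paymentsIndexBucket)
	return err
}

// putPaymentIndex writes the sequence number -> payment hash entry when a
// payment is initiated, enabling efficient paginated queries later.
func putPaymentIndex(tx *bolt.Tx, seqNum uint64, paymentHash [32]byte) error {
	var key [8]byte
	binary.BigEndian.PutUint64(key[:], seqNum)

	indexes := tx.Bucket(paymentsIndexBucket)
	if indexes == nil {
		return bolt.ErrBucketNotFound
	}
	return indexes.Put(key[:], paymentHash[:])
}
```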
When we retry payments and provide them with a new sequence number, we
delete the index for their existing payment so that we do not have an
index that points to a non-existent payment.
If we delete a payment, we also delete its index entry. This prevents
us from looking up entries from indexes to payments that do not exist.
Update our current tests to include lookup of duplicate payments. We
do so in preparation for changing our lookup to be based on a new
payments index. We add an append duplicate function which will add a
duplicate payment with the minimum information required to successfully
read it from disk in tests.
This fixes a possible race condition during TestBreachSpends that could
cause the test to fail in a flaky way.
The error returned in PublishTransaction (publErr) could be modified by
the main test goroutine before PublishTransaction had a chance to
return, causing the wrong error value to be returned.
This was mostly visible as a flake during TestBreachSpends/all_spends
where adding a one second delay in the old code between the send in
publTx and the call to publMtx.Lock() would cause the last iteration of
the test loop to fail.
This is fixed by acquiring the lock and storing the expected error value
locally before the send, so that each call to PublishTransaction is
guaranteed to return the correct error value.
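A minimal sketch of the pattern; the types below are illustrative, not the breach arbiter test's actual mock:

```go
package main

import "sync"

// publisher wires a test's expected error to a mocked PublishTransaction.
// Capturing the expected error under the lock *before* signalling the send
// guarantees each call returns the value the test configured for it, even
// if the test goroutine changes publErr for the next iteration right after
// receiving from publTx.
type publisher struct {
	mu      sync.Mutex
	publErr error       // set by the test for the next publish
	publTx  chan string // receives the txid of each published tx
}

func (p *publisher) PublishTransaction(txid string) error {
	p.mu.Lock()
	err := p.publErr // snapshot the expected error first
	p.mu.Unlock()

	// Only signal the test after the return value has been captured.
	p.publTx <- txid
	return err
}
```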