In this commit, we fix a bug that could cause a user's on-disk backup
file to be wiped if they ever started _another_ lnd node with different
channel state. To remedy this, before we swap out the channel state
with what's on disk, we'll first read the contents of the on-disk SCB
file and _combine_ them with what we have in memory.
Fixes #4377
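As a rough sketch of the read-then-combine step (the SingleBackup type,
its fields, and mergeBackups below are illustrative stand-ins, not
lnd's actual chanbackup API):

```go
package chanbackupsketch

// SingleBackup is an illustrative stand-in for a per-channel static
// channel backup; lnd's real types live in the chanbackup package.
type SingleBackup struct {
	ChanPoint string // channel outpoint, used as the merge key
	Packed    []byte // serialized backup payload
}

// mergeBackups combines the backups already on disk with the ones held
// in memory, preferring the in-memory copy when both describe the same
// channel. Only the merged set is written back, so channels known to
// another node are no longer silently dropped.
func mergeBackups(onDisk, inMem []SingleBackup) []SingleBackup {
	combined := make(map[string]SingleBackup, len(onDisk)+len(inMem))
	for _, b := range onDisk {
		combined[b.ChanPoint] = b
	}
	for _, b := range inMem {
		combined[b.ChanPoint] = b
	}

	merged := make([]SingleBackup, 0, len(combined))
	for _, b := range combined {
		merged = append(merged, b)
	}
	return merged
}
```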
This commit changes the verification of our code against the spec test
vectors to use a more black-box approach. It exercises the channel state
machine via its external interface as much as possible, making this test
more robust. A consequence of this is that the test now runs from the
'root' data from which the test vectors are also derived, meaning that
more code is covered too.
Running from the root data is also a preparation for _producing_ test
vectors for the new anchor commitment format. This will be a matter of
changing the channel type and recording the produced commitment and htlc
txes.
Previously the success transaction was skipped during verification. With
this commit, the proper preimage insertion is carried out, allowing the
success tx to be checked too.
Introduces a new chancloser package which exposes a ChanCloser
struct that handles the cooperative channel closure negotiation
and is meant to replace chancloser.go in the lnd package. Updates
all references to chancloser.go to instead use the chancloser package.
The current implementation of LeaseOutput already checked whether the
output had been leased by the persisted implementation, but not by the
in-memory one used by lnd internally. Without this check, we could end
up with a double-spend error if lnd acquired the UTXO internally before
the LeaseOutput call.
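A minimal sketch of that extra check, assuming a wrapper with
illustrative names (leaser, inMemory, and this LeaseOutput signature
are not lnd's actual API): consult the in-memory reservations before
deferring to the persisted store.

```go
package walletsketch

import (
	"errors"
	"sync"
	"time"

	"github.com/btcsuite/btcd/wire"
)

// leaser is an illustrative wrapper: inMemory tracks outputs lnd has
// reserved internally, while persisted durably records leases.
type leaser struct {
	mu        sync.Mutex
	inMemory  map[wire.OutPoint]struct{}
	persisted interface {
		LeaseOutput(id [32]byte, op wire.OutPoint) (time.Time, error)
	}
}

// LeaseOutput refuses to lease an output that lnd already reserved in
// memory, instead of relying only on the persisted implementation's
// check. This avoids handing out a UTXO an internal flow is about to
// spend.
func (l *leaser) LeaseOutput(id [32]byte,
	op wire.OutPoint) (time.Time, error) {

	l.mu.Lock()
	defer l.mu.Unlock()

	if _, reserved := l.inMemory[op]; reserved {
		return time.Time{}, errors.New("output already leased")
	}
	return l.persisted.LeaseOutput(id, op)
}
```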
This addresses a panic when a notification is canceled after it has
been detected as included in a block but before its confirmation
notification is dispatched.
Use the new paginator struct for payments. Add tests which
specifically cover cases on and around the missing index we force in
our test, to ensure that we properly handle this case. We also add a
sanity check in the test that verifies we can query when we have no
payments.
With our new index from sequence number to payment hash, it is
possible for more than one sequence number to point to the same hash,
because legacy lnd allowed duplicate payments under the same hash. We
now store these payments in a nested bucket within the payments
database. To allow lookup of the correct payment from an index entry,
we require matching on both the payment hash and the sequence number.
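A rough sketch of that hash-plus-sequence-number lookup against a
bbolt-style layout; the bucket and key names ("payments", "duplicates",
"sequence-num") are stand-ins, not lnd's exact schema.

```go
package paymentsketch

import (
	"bytes"
	"errors"

	"go.etcd.io/bbolt"
)

// fetchPaymentBySeqNr resolves an index entry (payment hash plus
// sequence number) to the bucket holding that payment. If the payment
// stored directly under the hash doesn't carry the expected sequence
// number, the entry must refer to a legacy duplicate kept in a nested
// bucket keyed by its own sequence number.
func fetchPaymentBySeqNr(tx *bbolt.Tx,
	hash, seqNr []byte) (*bbolt.Bucket, error) {

	payments := tx.Bucket([]byte("payments"))
	if payments == nil {
		return nil, errors.New("payments bucket not found")
	}

	bucket := payments.Bucket(hash)
	if bucket == nil {
		return nil, errors.New("no payment for hash")
	}

	// Happy path: the top-level payment matches the sequence number
	// the index pointed us at.
	if bytes.Equal(bucket.Get([]byte("sequence-num")), seqNr) {
		return bucket, nil
	}

	// Otherwise look in the nested duplicates bucket.
	dups := bucket.Bucket([]byte("duplicates"))
	if dups == nil {
		return nil, errors.New("no payment for sequence number")
	}
	dup := dups.Bucket(seqNr)
	if dup == nil {
		return nil, errors.New("no payment for sequence number")
	}
	return dup, nil
}
```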
We now use the same method of pagination for invoices and payments.
Rather than duplicating logic across calls, we add a paginator struct
which can have query-specific logic plugged into it. This commit also
addresses an existing issue where a reverse query for invoices with an
offset larger than our last offset would not return any invoices. We
update this behaviour to act more like c.Seek and simply start from the
last entry. This behaviour change is covered by a unit test that
previously checked for the lack of invoices.
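Below is a rough shape of such a shared paginator, under the assumption
that cursor positioning, direction handling, and item decoding are
supplied per query; the field and method names are illustrative, not
the actual lnd types.

```go
package pagsketch

// paginator drives a single paginated query; the cursor positioning,
// advancing, and per-item handling are plugged in by the caller so the
// same loop can serve both invoice and payment queries.
type paginator struct {
	cursor     func() (k, v []byte)    // seek to the starting entry
	advance    func() (k, v []byte)    // step forward or backward
	accumulate func(k, v []byte) error // decode and collect one item
	maxItems   uint64
}

// query walks at most maxItems entries from the starting position.
func (p *paginator) query() error {
	k, v := p.cursor()
	for count := uint64(0); k != nil && count < p.maxItems; k, v = p.advance() {
		if err := p.accumulate(k, v); err != nil {
			return err
		}
		count++
	}
	return nil
}
```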
In our current invoice pagination logic, we would not return any
invoices if our offset index was more than 1 off our last index and we
were paginating backwards. This commit adds a test case for this
behaviour before fixing it in the next commit.
Add an entry to a payments index bucket which maps sequence number
to payment hash when we initiate payments. This allows for more
efficient paginated queries. We create the top level bucket in its
own migration so that we do not need to create it on the fly.
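Sketched below is what writing such an index entry could look like with
a bbolt-backed store; the bucket name and helper are illustrative only.

```go
package paymentsketch

import (
	"encoding/binary"
	"errors"

	"go.etcd.io/bbolt"
)

// putPaymentIndex records a sequence-number -> payment-hash entry when
// a payment is initiated, so paginated queries can walk payments in
// sequence order without scanning every payment bucket.
func putPaymentIndex(tx *bbolt.Tx, seqNr uint64, payHash [32]byte) error {
	// The top-level index bucket is created once by a dedicated
	// migration, so we never create it on the fly here.
	indexes := tx.Bucket([]byte("payments-index"))
	if indexes == nil {
		return errors.New("payments index bucket not found")
	}

	var key [8]byte
	binary.BigEndian.PutUint64(key[:], seqNr)

	return indexes.Put(key[:], payHash[:])
}
```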
When we retry payments and provide them with a new sequence number, we
delete the index entry for the existing payment so that we do not have
an index entry that points to a non-existent payment.
If we delete a payment, we also delete its index entry. This prevents
index entries from pointing to payments that no longer exist.
Update our current tests to include lookup of duplicate payments. We
do so in preparation for changing our lookup to be based on a new
payments index. We add an append duplicate function which will add a
duplicate payment with the minimum information required to successfully
read it from disk in tests.
This fixes a possible race condition during TestBreachSpends that could
cause the test to fail in a flaky way.
The error returned in PublishTransaction (publErr) could be modified by
the main test goroutine before PublishTransaction had a chance to
return, causing the wrong error value to be returned.
This was mostly visible as a flake during TestBreachSpends/all_spends,
where adding a one-second delay in the old code between the send on
publTx and the call to publMtx.Lock() would cause the last iteration of
the test loop to fail.
This is fixed by taking the lock and storing the expected error value
in a local variable before the send, so that each call to
PublishTransaction is guaranteed to return the correct error value.
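A distilled sketch of the pattern (the mockBackend struct and its
fields are illustrative, not the test's exact code): snapshot publErr
under the lock before signalling on publTx.

```go
package breachsketch

import (
	"sync"

	"github.com/btcsuite/btcd/wire"
)

// mockBackend stands in for the test's publishing backend: publErr is
// the error the test wants the next publish to return, and publTx
// signals the test that a publish happened.
type mockBackend struct {
	publMtx sync.Mutex
	publErr error
	publTx  chan *wire.MsgTx
}

// PublishTransaction snapshots the expected error while holding the
// lock, before notifying the test over publTx. The test goroutine can
// therefore no longer change publErr between the send and the return.
func (b *mockBackend) PublishTransaction(tx *wire.MsgTx) error {
	b.publMtx.Lock()
	err := b.publErr
	b.publMtx.Unlock()

	b.publTx <- tx

	return err
}
```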
This reworks the locking behavior of the Gossiper so that a race
condition on channel updates and block notifications doesn't cause any
loss of messages.
This fixes an issue that manifested mostly as flakes on itests during
WaitForNetworkChannelOpen calls.
The previous behavior allowed ChannelUpdates to be missed if they
happened concurrently with block notifications. The
processNetworkAnnouncement call would check the current block height,
then lock the gossiper and add the msg to the prematureAnnouncements
list. New blocks would trigger an update to the current block height,
then a lock and check of the aforementioned list.
However, especially during itests, the missing lock before checking the
height could cause a race condition if the following sequence of events
happened:
- A new ChannelUpdate message was received and started processing on a
separate goroutine
- The isPremature() call was made and verified that the ChannelUpdate
was in fact premature
- The goroutine was scheduled out
- A new block started processing in the gossiper. It updated the block
height, asked for and was granted the gossiper's lock, and verified
there were zero premature announcements. The lock was released.
- The goroutine processing the ChannelUpdate asked for the gossiper lock
and was granted it. It added the ChannelUpdate to the
prematureAnnouncements list. This message can now never be processed.
The way to fix this behavior is to ensure that the isPremature checks
done inside processNetworkAnnouncement and the best block updates are
made inside the same critical section (i.e. while holding the same
lock), so that they can't check and update the prematureAnnouncements
list concurrently.
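A minimal sketch of that critical-section shape, with illustrative
names (gossiper, premature, newBlock, and process are not the actual
lnd identifiers): the height update and the prematurity check both run
under the same mutex.

```go
package gossipsketch

import "sync"

// msg stands in for a network announcement that only becomes valid at
// a certain block height.
type msg struct{ height uint32 }

// gossiper holds the best known height and the announcements that
// arrived too early, both guarded by the same mutex.
type gossiper struct {
	mu         sync.Mutex
	bestHeight uint32
	premature  []msg
}

// newBlock bumps the height and drains the now-mature announcements in
// one critical section, so nothing can be appended "behind" the new
// height.
func (g *gossiper) newBlock(height uint32) []msg {
	g.mu.Lock()
	defer g.mu.Unlock()

	g.bestHeight = height

	var ready, still []msg
	for _, m := range g.premature {
		if m.height <= height {
			ready = append(ready, m)
		} else {
			still = append(still, m)
		}
	}
	g.premature = still
	return ready // reprocess these outside the lock
}

// process checks prematurity and, if needed, enqueues the message while
// holding the same lock newBlock uses, closing the race described above.
func (g *gossiper) process(m msg) bool {
	g.mu.Lock()
	defer g.mu.Unlock()

	if m.height > g.bestHeight {
		g.premature = append(g.premature, m)
		return false
	}
	return true
}
```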