To make sure the test that takes the longest overall time is always
started first, independent of the number of test tranches we run, we
move it to the beginning of the list. Because that test involves a lot
of waiting, it allows us to play around with the number of tranches more
efficiently.
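A minimal sketch of the reordering, with an illustrative `testCase` type and `moveToFront` helper (not the actual itest names):

```go
// testCase mirrors the shape of an itest entry; only the name matters
// for the reordering.
type testCase struct {
	name string
	test func()
}

// moveToFront returns a copy of the test list with the named test first,
// so it is always scheduled before everything else regardless of how the
// list is later split into tranches.
func moveToFront(tests []*testCase, name string) []*testCase {
	for i, tc := range tests {
		if tc.name != name {
			continue
		}

		reordered := make([]*testCase, 0, len(tests))
		reordered = append(reordered, tc)
		reordered = append(reordered, tests[:i]...)
		reordered = append(reordered, tests[i+1:]...)
		return reordered
	}

	return tests
}
```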
Updating the fee of the mock estimator _after_ starting Carol turned out
to be flaky and could lead to the new fee not being picked up in time
for the force close. That led to Carol not CPFP'ing the force-closed
transaction.
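Roughly, the test now sets the fee before the node is brought up; the exact harness calls shown here (`SetFeeEstimate`, `NewNode`) are assumptions about the itest helpers:

```go
// Bump the mock fee estimator *before* Carol is started. That way her
// sweeper already sees the higher fee rate when the force close happens,
// so she reliably CPFPs the commitment transaction instead of racing the
// fee update.
net.SetFeeEstimate(30000)

// Only now bring up Carol; she picks up the fee estimate on startup.
carol, err := net.NewNode("Carol", nil)
require.NoError(t.t, err, "unable to create carol")
```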
To allow running multiple test tranches in parallel, we need a way to
make sure the TCP ports don't collide. We'll work with offsets for the
ports, using a different offset for each tranche.
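A minimal sketch of such an offset scheme (the constants and names are illustrative, not the actual harness code):

```go
package itest

import "sync/atomic"

const (
	// basePort is the first port handed out to tranche 0.
	basePort = 10000

	// portsPerTranche is sized generously so one tranche can never
	// spill into its neighbour's range.
	portsPerTranche = 1000
)

// lastPortOffset is bumped atomically for every port handed out within
// the current tranche.
var lastPortOffset uint32

// nextAvailablePort returns a fresh port inside the range reserved for
// the given tranche index, so parallel tranches never collide on TCP
// ports.
func nextAvailablePort(trancheIndex uint32) int {
	offset := atomic.AddUint32(&lastPortOffset, 1)
	return basePort + int(trancheIndex)*portsPerTranche + int(offset)
}
```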
A user complained about getting a misleading error after a typo in the
mnemonic. The word was `hear`, and it passed the check even though it is
not in the list of valid words.
The reason is that we were checking whether the word is contained in the
variable `englishWordList` (a single string that includes all the words)
instead of checking whether the word is in the `defaultWordList` (which is
basically `englishWordList` split by spaces). That means `hear` passed the
check because `heart` appears in the list.
Related issue [4733](https://github.com/lightningnetwork/lnd/issues/4733)
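A sketch of the buggy versus fixed check, using variable names along the lines described above (the real code may differ):

```go
// englishWordList is one long, space-separated string containing every
// valid word; defaultWordList is that string split into a slice.
var defaultWordList = strings.Split(englishWordList, " ")

// Buggy: a substring match accepts "hear" because it appears inside
// "heart" (and "hears", "heard", ...) within the big string.
func isValidWordBuggy(w string) bool {
	return strings.Contains(englishWordList, w)
}

// Fixed: only an exact match against an entry of the slice counts.
func isValidWord(w string) bool {
	for _, candidate := range defaultWordList {
		if candidate == w {
			return true
		}
	}
	return false
}
```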
This fixes tests that surfaced as flaky. Usually these tests had
an off-by-one error when mining blocks while waiting for a
CSV-encumbered output.
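A sketch of the corrected block count; helper names such as `mineBlocks` and `assertNumTxsInMempool` are stand-ins for the itest utilities:

```go
// The commitment output confirms at height H. With a CSV delay of D the
// sweep only becomes valid in block H+D, so after the confirming block
// we mine exactly D-1 additional blocks and then expect the sweep to
// show up in the mempool. Mining one block too many (or too few) here is
// the kind of off-by-one that made these tests flaky.
mineBlocks(t, net, csvDelay-1, 0)
assertNumTxsInMempool(t, net.Miner, 1)
```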
As a preparation to fix an issue with the mempool wait, we clean up the
multi-hop itests a bit. We fix the formatting, use the require library
for assertions consistently and simplify some of the wait predicates.
Commonly used code is also extracted into functions.
In some cases the router isn't yet fully aware of all newly opened
channels. We need to give it some time to process the updates. Therefore
we add a wait around sending payments to give it a few more chances to
catch up.
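A hedged sketch of the retry wrapper, assuming lnd's `lntest/wait` package; the signature of `completePaymentRequests` and the surrounding variables (`alice`, `payReqs`, `defaultTimeout`) are assumptions about the test code:

```go
// Give the router a grace period: if a freshly opened channel is not yet
// in its graph, the first attempts may fail to find a route, so we retry
// the whole send for a short window instead of failing immediately.
err := wait.NoError(func() error {
	return completePaymentRequests(
		ctx, alice, alice.RouterClient, payReqs, true,
	)
}, defaultTimeout)
require.NoError(t.t, err, "unable to send payments")
```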
This fixes itest flakes that happen when a payment is attempted that
ends up causing a channel closure.
completePaymentRequests() monitors the open channels after a payment is
attempted in order to verify that the payment was actually dispatched to
a remote node before returning.
However, when the payment actually causes a channel closure (for
example, because the receiver sent an incorrect preimage), this logic
fails because the channel will no longer be found in the list of open
channels. This could cause a flake when there was enough time for the
channel to close before the check was performed.
One example of such a flaky test is failing_link.
This fixes the issue by also checking whether the total number of
channels was reduced, which indicates (assuming itest operations are
being executed serially) that one of the attempted payments affected at
least one channel.
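A sketch of the relaxed success predicate; the variable and helper names (`client`, `openChans` holding the channel list captured before the payments) are assumptions about the test code:

```go
// Inside the polling predicate of completePaymentRequests: the payments
// are considered dispatched if either every monitored channel shows new
// activity, or the number of open channels shrank, meaning one of the
// payments triggered a channel closure (itest operations run serially,
// so nothing else can be closing channels in the meantime).
newListResp, err := client.ListChannels(ctx, &lnrpc.ListChannelsRequest{})
if err != nil {
	return false
}

// If fewer channels are open now than before the payments were sent,
// at least one payment affected a channel; treat that as success.
if len(newListResp.Channels) < len(openChans) {
	return true
}
```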
This ensures the Carol and Dave nodes created in the single_hop series
of tests are synced to the latest chain tip generated by the miner
before creating the route that will eventually be used for payments.
This prevents a flake during CI tests where slow processing of blocks by
Carol could cause the generated route to have a final CLTV delta too
short for Dave to accept.
While we're at it, we switch the test to use the default CLTV delta
provided by QueryRoutes, which is now the global default CLTV.
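A sketch of the sync check, assuming the `wait` package and the miner handle exposed by the harness (the exact accessor names may differ):

```go
// Make sure Carol (and likewise Dave) has caught up to the miner's best
// block before we ask for a route, so the final CLTV delta is computed
// against the current tip rather than a stale height.
_, minerHeight, err := net.Miner.Node.GetBestBlock()
require.NoError(t.t, err, "unable to get best block")

err = wait.NoError(func() error {
	info, err := carol.GetInfo(ctx, &lnrpc.GetInfoRequest{})
	if err != nil {
		return err
	}
	if int32(info.BlockHeight) != minerHeight {
		return fmt.Errorf("carol at height %d, want %d",
			info.BlockHeight, minerHeight)
	}
	return nil
}, defaultTimeout)
require.NoError(t.t, err, "carol not synced to chain tip")
```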