This commit makes the router use the ControlTower to drive the payment
lifecycle state machine, to keep track of active payments across
restarts. This lets the router resume payments on startup, such that
their final results can be handled and stored when ready.
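As a rough sketch (method and type names here are illustrative, not
necessarily the final API), the startup path could look like:

    // On startup, ask the control tower for payments still in flight,
    // and resume each one so its final result can be handled and
    // stored when ready.
    payments, err := r.cfg.Control.FetchInFlightPayments()
    if err != nil {
        return err
    }
    for _, payment := range payments {
        go r.resumePayment(payment)
    }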
This encapsulates all the state needed to resume a payment from any
point of the payment flow, state that must be shared between the
different stages of the execution. This is done in preparation for
breaking the send loop into smaller parts, such that the payment can be
resumed from any point using only persistent state.
This commit gives a new responsibility to the control tower, letting it
populate the payment bucket structure as the payment goes through
its different stages.
The payment will transition through the states
Grounded->InFlight->Success/Failed, with the
CreationInfo/AttemptInfo/Preimage set accordingly at each step.
This will be the main driver for the router state machine.
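As a sketch of that state machine (constant names are illustrative):

    // PaymentStatus describes where in its lifecycle a payment
    // currently is.
    type PaymentStatus byte

    const (
        // StatusGrounded is the initial state: the payment is
        // known and its CreationInfo has been set, but no attempt
        // is in flight.
        StatusGrounded PaymentStatus = iota

        // StatusInFlight means an attempt (route+paymentID) has
        // been sent to the network; AttemptInfo is set.
        StatusInFlight

        // StatusCompleted means the payment succeeded and the
        // Preimage has been recorded.
        StatusCompleted

        // StatusFailed means the payment has terminally failed.
        StatusFailed
    )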
migrateOutgoingPayments moves the OutgoingPayments into a new bucket format
where they all reside in a top-level bucket indexed by the payment hash. In
this sub-bucket we store information relevant to this payment, such as the
payment status.
To avoid the router resending payments that have the status InFlight
(we cannot resume these pre-migration payments), we delete those
statuses, so only Completed payments remain in the new bucket
structure.
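In sketch form, with hypothetical bucket and key names, the filtering
step looks like:

    // Collect the sub-buckets to drop first, since mutating a bucket
    // while iterating it is not safe.
    var toDelete [][]byte
    err := sentPayments.ForEach(func(hash, _ []byte) error {
        sub := sentPayments.Bucket(hash)
        if sub == nil {
            return nil
        }

        // Pre-migration payments still marked InFlight cannot be
        // resumed, so anything not Completed is removed.
        if !bytes.Equal(sub.Get(paymentStatusKey), completedStatus) {
            toDelete = append(toDelete, hash)
        }
        return nil
    })
    if err != nil {
        return err
    }
    for _, hash := range toDelete {
        if err := sentPayments.DeleteBucket(hash); err != nil {
            return err
        }
    }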
This commit changes the format used to store payments within the
DB. Previously this was serialized as one continuous struct
OutgoingPayment, which also contained an Invoice struct of which we
were only using a few fields. We now split it up into three simpler
sub-structs: CreationInfo, AttemptInfo and PaymentPreimage.
We also want to associate the payments more closely with payment
statuses, so we move to this hierarchy:
There's one top-level bucket "sentPaymentsBucket" which contains a set
of sub-buckets indexed by a payment's payment hash. Each such sub-bucket
contains several fields:
paymentStatusKey -> the payment's status
paymentCreationInfoKey -> the payment's CreationInfo.
paymentAttemptInfoKey -> the payment's AttemptInfo.
paymentSettleInfoKey -> the payment's preimage (or zeroes for
non-settled payments)
The CreationInfo is information that is static during the whole payment
lifecycle. The AttemptInfo is set each time a new payment attempt
(route+paymentID) is sent on the network. The preimage is information
only known when a payment succeeds. It therefore makes sense to split
them.
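A sketch of how these fields end up on disk (serialization helpers
hypothetical):

    payments, err := tx.CreateBucketIfNotExists(sentPaymentsBucket)
    if err != nil {
        return err
    }

    // Each payment lives in its own sub-bucket, keyed by its payment
    // hash.
    bucket, err := payments.CreateBucketIfNotExists(paymentHash[:])
    if err != nil {
        return err
    }

    var b bytes.Buffer
    if err := serializeCreationInfo(&b, creationInfo); err != nil {
        return err
    }
    if err := bucket.Put(paymentCreationInfoKey, b.Bytes()); err != nil {
        return err
    }

    // paymentStatusKey, paymentAttemptInfoKey and paymentSettleInfoKey
    // are written the same way as the payment moves through its
    // lifecycle.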
We keep legacy serialization code for migration purposes.
We slightly alter testUnconfirmedChannelFunding such that, instead of
using an external deposit to test unconfirmed channel funding, we use
one of our own unconfirmed change outputs.
This is done since Neutrino currently has no way of knowing about
incoming unconfirmed outputs.
This commit adds the ability to connect and disconnect the chain
backend at will. We do this to let the chain backend initiate the
connection to the miner, not the other way around.
This is a preparation for using Neutrino as a backend, as it only allows
making outbound connections.
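In sketch form (method names illustrative), the backend config gains
something like:

    // BackendConfig lets the test harness drive the chain backend's
    // connection to the miner, since a backend such as Neutrino can
    // only make outbound connections.
    type BackendConfig interface {
        // ConnectMiner makes the chain backend dial out to the
        // miner.
        ConnectMiner() error

        // DisconnectMiner tears that connection down again.
        DisconnectMiner() error
    }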
We must also move the setup of the chain backend to after the miner's,
so we know the address to connect to.
This commit adds a persisted status bit-field to ClientSessions that
can be used to modify how they are handled by the client. Currently,
only a default CSessionActive status is defined. However, the intention
is that this could later be used to signal that a session is abandoned,
without needing to perform a db migration to add the field. As we move
forward with testing, this will likely be useful if a session gets
borked and we need a simple way for the client to temporarily ignore
certain sessions.
The field may be useful in signaling other types of status changes,
though this was the primary motivation that warranted the addition.
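In sketch form, mirroring the description above:

    // CSessionStatus is a bit-field describing the status of a
    // client session. Since the field is already persisted, new
    // statuses can be defined later without a db migration.
    type CSessionStatus uint8

    const (
        // CSessionActive is the default status: the session can be
        // used for backups.
        CSessionActive CSessionStatus = 0
    )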
Now that the committed and acked updates are persisted across restarts,
we will use them to filter out duplicate commit heights presented by the
client.
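A sketch of that filtering, with hypothetical field names (in practice
heights are tracked per channel):

    // Seed the set of commit heights we've already handled from the
    // persisted committed and acked updates.
    seen := make(map[uint64]struct{})
    for _, upd := range committedUpdates {
        seen[upd.BackupID.CommitHeight] = struct{}{}
    }
    for _, id := range ackedUpdates {
        seen[id.CommitHeight] = struct{}{}
    }

    // Any re-presented height is a duplicate, and is dropped rather
    // than committed twice.
    if _, ok := seen[update.BackupID.CommitHeight]; ok {
        return nil
    }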
This commit adds the full bbolt-backed client database as well as a set
of unit tests to assert that it exactly implements the same behavior as
the mock ClientDB.
In this commit, we address another issue that arose with the
introduction of the fee rate buckets. We'll use an example to explain
the problem space:
Let's say we have inputs A, B, and C within the same fee rate bucket. If
A's fee rate is bumped to a higher bucket, then it's currently possible
for the lower fee rate bucket to be swept first, which would produce an
invalid RBF transaction since we're removing an input from the original
without providing a higher fee. By the time we get to the higher fee
rate bucket, we broadcast a valid RBF transaction _only_ sweeping input
A, which would evict the transaction sweeping inputs B and C from the
mempool.
To prevent this eviction, we can simply broadcast the higher fee rate
sweep transactions first, to ensure we have valid RBF transactions.
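In sketch form:

    // Sweep buckets in descending fee rate order, so that a bumped
    // input is rebroadcast as a valid RBF replacement before its old,
    // lower fee rate bucket is swept.
    sort.Slice(buckets, func(i, j int) bool {
        return buckets[i].feeRate > buckets[j].feeRate
    })
    for _, bucket := range buckets {
        sweepBucket(bucket) // hypothetical helper
    }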
In this commit, we introduce support for arbitrary client fee
preferences when accepting input sweep requests. This is possible with
the addition of fee rate buckets. Fee rate buckets are buckets that
contain inputs with similar fee rates within a specific range, e.g.,
1-10 sat/vbyte, 11-20 sat/vbyte, etc. Having these buckets allows us to
batch and sweep inputs from different clients with similar fee rates
within a single transaction, allowing us to save on chain fees.
With this addition, we can now get rid of the UtxoSweeper's default fee
preference. As of this commit, all clients that use it to sweep inputs
specify the same fee preference they did before, so their behavior is
unchanged. Each of these can be fine-tuned later on given their use
cases.
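As a sketch of how a fee rate maps to a bucket, assuming the fixed
10 sat/vbyte width from the example above:

    const bucketWidth = 10 // sat/vbyte per bucket, per the example

    // feeRateBucket returns the index of the bucket a fee rate falls
    // into: 1-10 -> 0, 11-20 -> 1, and so on.
    func feeRateBucket(satPerVByte uint64) uint64 {
        if satPerVByte == 0 {
            satPerVByte = 1 // clamp to the 1 sat/vbyte relay floor
        }
        return (satPerVByte - 1) / bucketWidth
    }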
Prior to this commit, running `make unit-cover pkg=xx`
would ignore the selected package and run unit tests and
coverage for all packages.
After this commit, the package selected with pkg= is the
only one tested and for which coverage output is generated.
If no pkg is selected, the default is as before: all packages.
TestSyncManagerHistoricalSyncOnReconnect tests that the sync manager will
re-trigger a historical sync when a new peer connects after a historical
sync has completed, but we have lost all peers.
To handle the case where we have been without peers and then get a new
connection, we reset the historical scan booleans when the first active
syncer connects, triggering another historical sync.
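A sketch of that reset, with hypothetical field names:

    // When the first active syncer connects after a period without
    // peers, clear the flags guarding the one-time historical sync so
    // a new one is dispatched.
    m.Lock()
    if len(m.activeSyncers) == 1 {
        m.historicalSyncStarted = false
        m.historicalSyncCompleted = false
    }
    m.Unlock()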
A ClientChanSummary will be inserted for each channel registered with
the client, which for now will just track the sweep pkscript to use. In
the future, this will be extended with additional information to enable
the client to efficiently compute which historical states need to be
backed up under a given policy.
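In sketch form:

    // ClientChanSummary holds the client's per-channel bookkeeping.
    // More fields will be added later to let the client compute which
    // historical states still need to be backed up.
    type ClientChanSummary struct {
        // SweepPkScript is the script to which this channel's
        // justice transaction outputs will pay.
        SweepPkScript []byte
    }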
In advance of the upcoming wtdb.ClientDB, we'll modify the behavior
of the mockdb to be more like the final bbolt-backed one, and assert
that all of our tests still pass.
This commit replaces the map-based CommittedUpdates field with a slice.
When reading from disk, these will already be sorted by bbolt, so the
client can restore the updates as presented, without needing to sort
them first.
Since the key in the map variant was the sequence number, we refactor
the CommittedUpdate struct to have a sequence number and an embedded
CommittedUpdateBody (which is equivalent to the old CommittedUpdate).
The database is then expected to populate the sequence number from the
key on disk.
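In sketch form:

    // CommittedUpdate now carries its sequence number alongside an
    // embedded body equivalent to the old struct.
    type CommittedUpdate struct {
        // SeqNum is populated by the database from the key under
        // which the update is stored on disk.
        SeqNum uint16

        // CommittedUpdateBody holds what was previously the entire
        // CommittedUpdate.
        CommittedUpdateBody
    }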
Since the sequence number is now directly integrated in the
CommittedUpdate struct, this allows us to remove the now-redundant
seqNum argument from CommitUpdate.