In this commit, we address another issue that arose with the
introduction of the fee rate buckets. We'll use an example to explain
the problem space:
Let's say we have inputs A, B, and C within the same fee rate bucket. If
A's fee rate is bumped to a higher bucket, then it's currently possible
for the lower fee rate bucket to be swept first, which would produce an
invalid RBF transaction since we're removing an input from the original
without providing a higher fee. By the time we get to the higher fee
rate bucket, we broadcast a valid RBF transaction _only_ sweeping input
A, which would evict the transaction sweeping inputs B and C from the
mempool.
To prevent this eviction, we can simply broadcast the higher fee rate
sweep transactions first, to ensure we have valid RBF transactions.
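To make the ordering concrete, here is a minimal sketch of the fix; the bucket type and sweepBuckets helper are hypothetical names, not the sweeper's actual API:

```go
// Sketch only: sweep buckets in descending fee rate order so a bumped
// input (A) is re-broadcast as a valid RBF replacement before the lower
// bucket containing B and C is touched.
package main

import (
	"fmt"
	"sort"
)

// bucket groups inputs that share a similar fee rate (sat/vbyte).
type bucket struct {
	feeRate int      // sat/vbyte ceiling for this bucket
	inputs  []string // input identifiers, e.g. outpoints
}

func sweepBuckets(buckets []bucket) {
	// Broadcast the highest fee rate bucket first so that any
	// replacement (RBF) transaction pays a higher fee than the
	// transaction it evicts.
	sort.Slice(buckets, func(i, j int) bool {
		return buckets[i].feeRate > buckets[j].feeRate
	})
	for _, b := range buckets {
		fmt.Printf("broadcast sweep at %d sat/vbyte for %v\n",
			b.feeRate, b.inputs)
	}
}

func main() {
	sweepBuckets([]bucket{
		{feeRate: 10, inputs: []string{"B", "C"}},
		{feeRate: 20, inputs: []string{"A"}}, // A was bumped
	})
}
```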
In this commit, we introduce support for arbitrary client fee
preferences when accepting input sweep requests. This is possible with
the addition of fee rate buckets. A fee rate bucket groups inputs whose
fee rates fall within a specific range, e.g., 1-10 sat/vbyte,
11-20 sat/vbyte, etc. Having these buckets allows us to
batch and sweep inputs from different clients with similar fee rates
within a single transaction, allowing us to save on chain fees.
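For illustration, here is a sketch of how an input's fee rate could map to a bucket, assuming fixed 10 sat/vbyte wide buckets as in the example ranges above; the bucketIndex helper is hypothetical:

```go
// Sketch only: map a fee rate to a fixed-width bucket, so that inputs
// with similar fee rates land in the same bucket and can be swept in a
// single batched transaction.
package main

import "fmt"

const bucketWidth = 10 // sat/vbyte per bucket (assumption)

// bucketIndex returns the bucket an input belongs to: fee rates 1-10
// map to bucket 0, 11-20 to bucket 1, and so on.
func bucketIndex(feeRate int) int {
	if feeRate <= 0 {
		return 0
	}
	return (feeRate - 1) / bucketWidth
}

func main() {
	for _, rate := range []int{1, 10, 11, 20, 21} {
		fmt.Printf("%d sat/vbyte -> bucket %d\n", rate, bucketIndex(rate))
	}
}
```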
With this addition, we can now get rid of the UtxoSweeper's default fee
preference. As of this commit, any clients using it to sweep inputs
specify the same fee preference as before, so their behavior is
unchanged. Each of these can be fine-tuned later on given their use
cases.
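A hedged sketch of what the call-site change looks like; the FeePreference fields and sweepInput signature below only loosely mirror the sweep package and should be treated as assumptions:

```go
// Sketch only: instead of relying on a sweeper-wide default, each
// caller now passes its own fee preference explicitly.
package main

import "fmt"

// FeePreference expresses either a confirmation target or an explicit
// fee rate; exactly one is expected to be set (assumed shape).
type FeePreference struct {
	ConfTarget uint32 // blocks until confirmation
	FeeRate    uint64 // explicit fee rate, used if non-zero
}

func sweepInput(outpoint string, pref FeePreference) {
	fmt.Printf("sweeping %s with preference %+v\n", outpoint, pref)
}

func main() {
	// Callers that previously relied on the default now state it
	// explicitly, preserving their old behavior until each one is
	// tuned for its specific use case.
	sweepInput("txid:0", FeePreference{ConfTarget: 6})
}
```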
Prior to this commit, running `make unit-cover pkg=xx`
would ignore the selected package and run unit tests and
coverage for all packages.
After this commit, the package selected with pkg= is the
only one that is tested and for which coverage output is
generated. If no pkg is selected, the default is as
before: all packages.
TestSyncManagerHistoricalSyncOnReconnect tests that the sync manager will
re-trigger a historical sync when a new peer connects after a historical
sync has completed, but we have lost all peers.
To handle the case where we have been without peers, and get a new
connection, we reset the historical scan booleans when the first active
syncer is connected to trigger another historical sync.
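A simplified sketch of that reset, with hypothetical field and method names standing in for the SyncManager's actual ones (the real code would also guard this state against concurrent access):

```go
// Sketch only: when the first active syncer connects after a period
// with no peers, clear the "historical sync done" flag so another
// historical sync is triggered.
package main

import "fmt"

type syncManager struct {
	numActiveSyncers   int
	historicalSyncDone bool
}

func (m *syncManager) registerActiveSyncer(peer string) {
	// If this is the first active syncer after being without peers,
	// any previously completed historical sync may be stale, so
	// reset the flag to force a new one.
	if m.numActiveSyncers == 0 {
		m.historicalSyncDone = false
		fmt.Printf("first syncer %s connected; re-triggering "+
			"historical sync\n", peer)
	}
	m.numActiveSyncers++
}

func main() {
	m := &syncManager{historicalSyncDone: true}
	m.registerActiveSyncer("peerA")
}
```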
A ClientChanSummary will be inserted for each channel registered with
the client, which for now will just track the sweep pkscript to use. In
the future, this will be extended with additional information to enable
the client to efficiently compute which historical states need to be
backed up under a given policy.
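Sketched as a Go struct, assuming it lives alongside the other wtdb types; only the sweep pkscript is tracked for now, and the field set is illustrative:

```go
package wtdb

// ClientChanSummary tracks per-channel metadata the client needs when
// backing up states for that channel.
type ClientChanSummary struct {
	// SweepPkScript is the script to which justice transaction
	// outputs for this channel should be swept. Future fields, e.g.
	// the last backed-up state, would be added alongside it.
	SweepPkScript []byte
}
```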
In advance of the upcoming wtdb.ClientDB, we'll modify the behavior
of the mockdb to be more like the final bbolt-backed one, and assert
that all of our tests still pass.
This commit replaces the map-based CommittedUpdates field with a slice.
When reading from disk, these will already be sorted by bbolt, so the
client can restore the updates as presented without needing to sort them
first.
Since the key in the map variant was the sequence number, we refactor
the CommittedUpdate struct to have a sequence number and an embedded
CommittedUpdateBody (which is equivalent to the old CommittedUpdate).
The database is then expected to populate the sequence number from the
key on disk.
Since the sequence number is now directly integrated in the
CommittedUpdate struct, this allows us to remove the now-redundant
seqNum argument from CommitUpdate.
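A sketch of the resulting types, with body fields elided; the names follow the commit's description, but the exact definitions and field types are assumptions:

```go
package wtdb

// CommittedUpdateBody carries the contents of a committed update; it is
// equivalent to the old CommittedUpdate value stored under the map key.
type CommittedUpdateBody struct {
	// The update's actual fields (hint, encrypted blob, etc.) are
	// elided in this sketch.
}

// CommittedUpdate pairs the body with the sequence number that used to
// be the map key. The database populates SeqNum from the on-disk key,
// so CommitUpdate no longer needs a separate seqNum argument.
type CommittedUpdate struct {
	// SeqNum is the unique sequence number of this update.
	SeqNum uint16

	CommittedUpdateBody
}
```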
This commit renames the variables dbName to towerDBName and dbVersions
to towerDBVersions, to distinguish them from the upcoming clientDBName
and clientDBVersions. We also move reusable portions of the database
initialization and the default endianness into their own file so that
they can be shared between the tower and client databases.
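A rough sketch of the shared file, assuming a bbolt open helper and a big-endian default; the openDB name and exact signature are assumptions, not the commit's actual code:

```go
package wtdb

import (
	"encoding/binary"
	"os"
	"path/filepath"

	"go.etcd.io/bbolt"
)

// byteOrder is the default endianness shared by the tower and client
// databases.
var byteOrder = binary.BigEndian

// openDB opens (creating if necessary) a bbolt database at the given
// path; both the tower and client databases can reuse this helper
// instead of duplicating the initialization logic.
func openDB(dbPath, name string, mode os.FileMode) (*bbolt.DB, error) {
	if err := os.MkdirAll(dbPath, 0700); err != nil {
		return nil, err
	}
	return bbolt.Open(filepath.Join(dbPath, name), mode, nil)
}
```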
This commit upgrades the protobuf version. Compared to the previous
v1.2.0, it generates smaller diffs in the generated code. This change
was introduced in:
fffb0f7828
The previous value was set to two days, which may not be enough to
handle large fee spikes where users still wish to submit channels with
a low fee and don't mind lowering their time preference. Since the
timeout is only applied to channels that we don't initiate, there's no
real downside: it's not our funds that are locked up.
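Sketched as a constant change, assuming the timeout is expressed in blocks; the name and new value here are assumptions, not necessarily the ones in this commit:

```go
package funding

// maxWaitNumBlocksFundingConf is how many blocks we wait for a pending
// channel's funding transaction to confirm before forgetting the
// channel. It only applies to channels we did NOT initiate, so no local
// funds are locked up while waiting. Previously ~two days of blocks;
// the new value here (~two weeks at 10 min/block) is an assumption.
const maxWaitNumBlocksFundingConf = 2016
```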