This commit adds support for etcd namespaces. This is useful when
using a shared etcd database where separate users have access to
separate namespaces.
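For illustration, the following is a hedged sketch of how an etcd client
can be scoped to a namespace using the upstream clientv3 namespace
package, so that every key it touches is transparently prefixed. This is
not lnd's actual wiring, and the prefix "lnd-user-a/" is made up.

```go
package main

import (
	"log"

	"go.etcd.io/etcd/clientv3"
	"go.etcd.io/etcd/clientv3/namespace"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints: []string{"localhost:2379"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Wrap the KV, Watcher and Lease interfaces so that every operation
	// is confined to the "lnd-user-a/" namespace.
	cli.KV = namespace.NewKV(cli.KV, "lnd-user-a/")
	cli.Watcher = namespace.NewWatcher(cli.Watcher, "lnd-user-a/")
	cli.Lease = namespace.NewLease(cli.Lease, "lnd-user-a/")

	// From here on, a Put of "foo" actually writes "lnd-user-a/foo".
}
```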
Adds an outpoint index that stores a tlv stream. Currently the stream
only contains the outpoint's indexStatus. This should cut down on
big bbolt transactions in several places throughout the codebase.
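As a rough illustration, here is a hedged sketch of encoding a
single-record TLV stream with lnd's tlv package; the record type 0 and
the one-byte indexStatus value are assumptions for illustration, not the
index's actual layout.

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/lightningnetwork/lnd/tlv"
)

// encodeOutpointRecord encodes a hypothetical one-record stream holding
// the outpoint's index status.
func encodeOutpointRecord(indexStatus uint8) ([]byte, error) {
	// Type 0 is an arbitrary choice for this sketch.
	record := tlv.MakePrimitiveRecord(tlv.Type(0), &indexStatus)

	stream, err := tlv.NewStream(record)
	if err != nil {
		return nil, err
	}

	var b bytes.Buffer
	if err := stream.Encode(&b); err != nil {
		return nil, err
	}

	// The serialized stream is what would be stored under the
	// outpoint's key in the index.
	return b.Bytes(), nil
}

func main() {
	payload, err := encodeOutpointRecord(1)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", payload)
}
```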
Previously we wouldn't recreate some of the top-level buckets (TLBs)
that are now expected by our migration logic. This bug was preexisting,
but never surfaced because the other TLBs were not touched by this
unit test.
* mod: bump btcwallet version to accept db timeout
* btcwallet: add DBTimeout in config
* kvdb: add database timeout option for bbolt
This commit adds a DBTimeout option to the bbolt config. The relevant
functions walletdb.Open/Create are updated to use this config. In
addition, the bolt compactor also applies the new timeout option.
* channeldb: add DBTimeout in db options
This commit adds the DBTimeout option for channeldb. A new unit
test file is created to test the default options. In addition,
the params used in kvdb.Create inside channeldb_test are updated
with a DefaultDBTimeout value.
* contractcourt+routing: use DBTimeout in kvdb
This commit touches multiple test files in contractcourt and routing.
The calls to kvdb.Create and kvdb.Open are now updated with the new
DBTimeout param, using the default value kvdb.DefaultDBTimeout (see
the sketch after this list).
* lncfg: add DBTimeout option in db config
The DBTimeout option is added to the db config. A new unit test is
added to check that the default DB config is created as expected.
* migration: add DBTimeout param in kvdb.Create/kvdb.Open
* keychain: update tests to use DBTimeout param
* htlcswitch+chainreg: add DBTimeout option
* macaroons: support DBTimeout config in creation
This commit adds the DBTimeout during the creation of macaroons.db.
The usages of kvdb.Create and kvdb.Open in its tests are updated with
a timeout value, kvdb.DefaultDBTimeout.
* walletunlocker: add dbTimeout option in UnlockerService
This commit adds a new param, dbTimeout, during the creation of
UnlockerService. This param is then passed to wallet.NewLoader
inside various service calls, specifying a timeout value to be
used when opening the bbolt database. In addition, the macaroonService
is also called with this dbTimeout param.
* watchtower/wtdb: add dbTimeout param during creation
This commit adds the dbTimeout param for the creation of both
watchtower.db and wtclient.db.
* multi: add db timeout param for walletdb.Create
This commit adds the db timeout param for the function call
walletdb.Create. It touches only the test files found in chainntnfs,
lnwallet, and routing.
* lnd: pass DBTimeout config to relevant services
This commit enables lnd to pass the DBTimeout config to the following
services/config/functions:
- chainControlConfig
- walletunlocker
- wallet.NewLoader
- macaroons
- watchtower
In addition, the usage of wallet.Create is updated too.
* sample-config: add dbtimeout option
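As a rough sketch of the resulting call sites, assuming kvdb.Create
simply gains the timeout as an extra trailing argument (the exact
signature may differ); the path and freelist flag here are illustrative
only:

```go
package main

import (
	"log"

	"github.com/lightningnetwork/lnd/channeldb/kvdb"
)

func main() {
	// Open/create a bbolt-backed database, passing the new DB timeout
	// as the final argument. kvdb.DefaultDBTimeout is the default
	// mentioned in the commits above.
	db, err := kvdb.Create(
		kvdb.BoltBackendName, "test.db", true, kvdb.DefaultDBTimeout,
	)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```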
Similar to what we did for the local state handling, we extract the
handling of all known remote states we can act on (breach, current,
and pending state) into its own method.
Since we want to handle the case where we lost state (for both local
and remote close) last, we don't rely on the remote state number to
check which commitment we are looking at, but match on TXIDs directly.
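A hypothetical sketch of what matching on txid instead of the state
number looks like; all names here are illustrative, not the actual
channel arbitrator code.

```go
package main

import "github.com/btcsuite/btcd/chaincfg/chainhash"

// classifyRemoteClose decides which remote state we are looking at by
// comparing the confirmed spend's txid against the commitment txids we
// already know, falling back to "lost state" handling only as a last
// resort.
func classifyRemoteClose(spendTxid, currentTxid, pendingTxid chainhash.Hash,
	isRevoked func(chainhash.Hash) bool) string {

	switch {
	case spendTxid == currentTxid:
		return "remote published their current commitment"

	case spendTxid == pendingTxid:
		return "remote published their unrevoked pending commitment"

	case isRevoked(spendTxid):
		return "breach: a revoked commitment was published"

	default:
		return "unknown commitment: we may have lost state"
	}
}
```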
This change was largely motivated by high disk usage as a result of
channel update spam. With an in-memory graph, this would've gone
mostly undetected except for the increased bandwidth usage, which this
change doesn't aim to solve yet. To minimize the effect on disks, we
begin to rate limit channel updates in two ways. Keep-alive updates,
those which only bump their timestamp to signal liveness, are now
limited to one per lnd's rebroadcast interval (current default of
24h). Non-keep-alive updates are now limited to one per block per
direction.
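A hypothetical sketch of that policy; the types and field names below
are illustrative and do not mirror the gossiper's actual
implementation.

```go
package main

import "time"

// updateKey identifies a channel update by channel and direction.
type updateKey struct {
	chanID    uint64
	direction int
}

// rateLimiter holds the minimal state needed to apply the two limits
// described above.
type rateLimiter struct {
	rebroadcastInterval time.Duration
	bestHeight          uint32

	lastKeepAlive    map[uint64]time.Time
	lastUpdateHeight map[updateKey]uint32
}

// shouldDrop reports whether an incoming channel update should be rate
// limited rather than stored and rebroadcast.
func (r *rateLimiter) shouldDrop(key updateKey, keepAlive bool,
	now time.Time) bool {

	if keepAlive {
		// Keep-alive updates: at most one per rebroadcast interval.
		return now.Sub(r.lastKeepAlive[key.chanID]) < r.rebroadcastInterval
	}

	// Non-keep-alive updates: at most one per block per direction.
	return r.lastUpdateHeight[key] == r.bestHeight
}
```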
This commit adds bbolt's compaction feature to our bolt backend. It
tries to open the DB in read-only mode and create a compacted copy in
a temporary file. Once the compaction is successful, the temporary
file and the old source file are swapped atomically.
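The mechanism roughly corresponds to bbolt's exported Compact helper; a
hedged sketch follows (lnd's compactor adds its own options and the
atomic swap on top, and may not call this exact helper).

```go
package main

import (
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	// Open the source DB read-only so a live copy stays untouched.
	src, err := bolt.Open("channel.db", 0600, &bolt.Options{ReadOnly: true})
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	// The compacted copy is written into a temporary file first.
	dst, err := bolt.Open("channel.db.tmp", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()

	// Copy all buckets and keys, committing intermittently once roughly
	// 64 KiB of key/value data has been written.
	if err := bolt.Compact(dst, src, 65536); err != nil {
		log.Fatal(err)
	}

	// The temporary file would then be renamed over the source
	// atomically.
}
```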
Similarly to kvdb.View, this commit adds a reset closure to the
kvdb.Update call in order to be able to reset external state if the
underlying db backend needs to retry the transaction.
This commit adds a reset() closure to the kvdb.View function which
will be called before each attempt (including the first) of the view
transaction. The reset() closure can be used to reset external state
(e.g. slices or maps) where the view closure puts intermediate
results.
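A hedged sketch of the pattern, assuming the new signature is
kvdb.View(db, f, reset); the "invoices" bucket name is only an example.

```go
package main

import (
	"github.com/lightningnetwork/lnd/channeldb/kvdb"
)

// readAll collects every value from a hypothetical "invoices" bucket.
// The reset closure wipes the intermediate slice before the first
// attempt and before every retry, so a retried transaction starts from
// a clean slate.
func readAll(db kvdb.Backend) ([][]byte, error) {
	var values [][]byte

	err := kvdb.View(db, func(tx kvdb.RTx) error {
		bucket := tx.ReadBucket([]byte("invoices"))
		if bucket == nil {
			return nil
		}

		return bucket.ForEach(func(k, v []byte) error {
			values = append(values, v)
			return nil
		})
	}, func() {
		// reset: clear any partial results from a previous attempt.
		values = nil
	})

	return values, err
}
```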
This commit adds commitQueue, which is a lightweight contention
manager for STM transactions. The queue attempts to queue up
conflicting transactions for sequential execution, while leaving all
"unblocked" transactions to run freely in parallel.
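A simplified, hypothetical sketch of the idea (lnd's commitQueue
differs in detail): conflicting transactions are pushed onto a serial
lane while the rest run concurrently. Since the STM retries on conflict
anyway, this only needs to be best-effort.

```go
package main

import "sync"

// commitQueue serializes transactions whose key sets overlap with one
// already in flight, while letting non-conflicting ones run in
// parallel.
type commitQueue struct {
	mu      sync.Mutex
	running map[string]int // keys touched by transactions in flight
	serial  chan func()    // conflicting transactions run one by one
	wg      sync.WaitGroup
}

func newCommitQueue() *commitQueue {
	q := &commitQueue{
		running: make(map[string]int),
		serial:  make(chan func(), 100),
	}

	// Drain the serial lane in the background, one transaction at a
	// time.
	go func() {
		for task := range q.serial {
			task()
		}
	}()

	return q
}

// Add schedules the commit closure. It runs in parallel unless one of
// its keys conflicts with a transaction that is already scheduled or
// running.
func (q *commitQueue) Add(commit func(), keys ...string) {
	q.mu.Lock()
	conflict := false
	for _, k := range keys {
		if q.running[k] > 0 {
			conflict = true
			break
		}
	}
	for _, k := range keys {
		q.running[k]++
	}
	q.mu.Unlock()

	q.wg.Add(1)
	run := func() {
		defer q.wg.Done()
		commit()

		q.mu.Lock()
		for _, k := range keys {
			q.running[k]--
		}
		q.mu.Unlock()
	}

	if conflict {
		q.serial <- run
	} else {
		go run()
	}
}

// Wait blocks until every scheduled transaction has finished.
func (q *commitQueue) Wait() {
	q.wg.Wait()
}
```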
Since we will use peer flap rate to determine how we rate limit, we
store this value on disk per peer, per channel. This allows us to
restart with memory of our peers' past behaviour, so we don't give
badly behaving peers a fresh start on restart. The last flap timestamp
is stored with our flap count so that we can degrade this all-time
flap count over time for peers that have not recently flapped.
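A hypothetical sketch of what such a record could look like; the field
names and the halving-based decay policy are illustrative assumptions,
not lnd's actual types.

```go
package main

import "time"

// flapCount is a per-peer, per-channel record of flapping behaviour.
type flapCount struct {
	// Count is the all-time number of flaps recorded for the peer.
	Count uint32

	// LastFlap is the timestamp of the most recent flap. Storing it
	// lets us decay the all-time count for peers that have behaved
	// recently.
	LastFlap time.Time
}

// cooledCount decays the stored count by half for every cooldownPeriod
// that has passed since the peer last flapped (an illustrative policy
// only).
func (f *flapCount) cooledCount(now time.Time,
	cooldownPeriod time.Duration) uint32 {

	periods := int(now.Sub(f.LastFlap) / cooldownPeriod)
	count := f.Count
	for i := 0; i < periods && count > 0; i++ {
		count /= 2
	}

	return count
}
```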
This commit extends channeldb with the DeleteInvoices call which,
when passed a slice of delete references, will attempt to delete the
invoices pointed to by the references and also clean up all of our
invoice indexes.
This commit adds channeldb.ScanInvoices to scan through all invoices
in the database. The new call also replaces the existing
channeldb.FetchAllInvoicesWithPaymentHash call, in preparation for
collecting the invoices we'd like to delete and those to watch for
expiry in a single scan in later commits.
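A hedged sketch of how the two calls from this and the previous commit
could be combined; the signatures and field names approximate
channeldb's API at the time and may not match it exactly.

```go
package main

import (
	"github.com/lightningnetwork/lnd/channeldb"
	"github.com/lightningnetwork/lnd/lntypes"
)

// deleteCanceled scans all invoices once, collects delete references
// for the canceled ones, then removes them (and their index entries)
// with a single DeleteInvoices call.
func deleteCanceled(db *channeldb.DB) error {
	var refs []channeldb.InvoiceDeleteRef

	err := db.ScanInvoices(func(hash lntypes.Hash,
		inv *channeldb.Invoice) error {

		if inv.State == channeldb.ContractCanceled {
			refs = append(refs, channeldb.InvoiceDeleteRef{
				PayHash:  hash,
				AddIndex: inv.AddIndex,
			})
		}

		return nil
	}, func() {
		// reset: clear intermediate results if the scan is retried.
		refs = refs[:0]
	})
	if err != nil {
		return err
	}

	return db.DeleteInvoices(refs)
}
```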
We use the event timestamp of a forwarding event as its primary
storage key. On systems with poor clock resolution this can lead to
collisions between events if some of the timestamps are identical. We
fix this problem by shifting the timestamps at the nanosecond level
until only unique values remain.
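A minimal sketch of the idea, operating on bare timestamps rather than
the actual forwarding events:

```go
package main

import (
	"sort"
	"time"
)

// makeUniqueTimestamps sorts the event timestamps and bumps any
// duplicate by one nanosecond until every event maps to a unique
// storage key.
func makeUniqueTimestamps(timestamps []time.Time) []time.Time {
	sort.Slice(timestamps, func(i, j int) bool {
		return timestamps[i].Before(timestamps[j])
	})

	seen := make(map[int64]struct{}, len(timestamps))
	for i, ts := range timestamps {
		key := ts.UnixNano()

		// Shift forward one nanosecond at a time until the key is
		// unique.
		for {
			if _, ok := seen[key]; !ok {
				break
			}
			key++
		}

		seen[key] = struct{}{}
		timestamps[i] = time.Unix(0, key)
	}

	return timestamps
}
```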
In this commit, we unify the old and new values for `sync-freelist`,
and also ensure that we don't break behavior for any users that are
using the _old_ value.
To do this, we first rename what was `--db.bolt.no-sync-freelist`, to
`--db.bolt.sync-freelist`. This gets rid of the negation on the config
level, and lets us override that value if the user is specifying the
legacy config option.
In the future, we'll deprecate the old config option, in favor of the
new DB scoped option.
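A hypothetical sketch of the override direction, with made-up config
structs standing in for lnd's real ones: if the legacy flag is still
set, it wins over the new, scoped option.

```go
package main

// boltConfig is the new, DB-scoped bolt configuration.
type boltConfig struct {
	SyncFreelist bool
}

type dbConfig struct {
	Bolt boltConfig
}

type config struct {
	// SyncFreelist stands in for the legacy, soon-to-be-deprecated
	// top-level option.
	SyncFreelist bool

	DB dbConfig
}

// applyLegacyOverride lets the legacy option override the new scoped
// value when the user still sets it.
func applyLegacyOverride(cfg *config) {
	if cfg.SyncFreelist {
		cfg.DB.Bolt.SyncFreelist = true
	}
}
```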
This commit extends compatibility with the bbolt kvdb implementation,
which returns ErrIncompatibleValue in case of a bucket/value key
collision. Furthermore, the commit also adds an extra precondition to
the transaction when a key doesn't exist. This is needed because we
fix reads to a snapshot revision and other writers may commit the key
otherwise.
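A hedged sketch of such a precondition using the standard etcd clientv3
transaction API; the putIfAbsent helper is illustrative and not lnd's
code.

```go
package main

import (
	"context"

	"go.etcd.io/etcd/clientv3"
)

// putIfAbsent commits the write only if the key still does not exist:
// the "CreateRevision == 0" compare makes the transaction fail (so the
// STM can retry) if another writer created the key after our snapshot
// read.
func putIfAbsent(ctx context.Context, cli *clientv3.Client,
	key, value string) (bool, error) {

	resp, err := cli.Txn(ctx).If(
		clientv3.Compare(clientv3.CreateRevision(key), "=", 0),
	).Then(
		clientv3.OpPut(key, value),
	).Commit()
	if err != nil {
		return false, err
	}

	return resp.Succeeded, nil
}
```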
This fixes a long-standing force close bug. When we receive a
revocation, store the updates that the remote should sign next under
a new database key. Previously, these were not persisted which would
lead to force closure.
This commit removes the lock set, which was used to only add bucket
keys to the tx predicate while also bumping their mod version.
This was useful to reduce the size of the compare set, but wasn't
useful for reducing contention, as top-level buckets were always in
the lock set.
This commit moves makeTestDB to db.go and exports it so that we'll be
able to use this function in other unit tests to make them testable with
etcd if needed.
This commit changes the key derivation algorithm we use to emulate
buckets similar to bbolt. The issue with prefixing keys with either a
bucket or a value prefix is that the cursor couldn't effectively
iterate through all keys in a bucket, as it skipped the bucket keys.
While there are multiple ways to fix that issue (e.g. two pointers,
iterating value keys then bucket keys, etc.), the cleanest is to use,
instead of a prefix in the key, a postfix indicating whether a key is
a bucket or a value. This also simplifies all operations where we
(recursively) iterate a bucket, and is equivalent to the prefixing key
derivation with the addition that bucket and value keys are now
contiguous.
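A hypothetical sketch of the postfix idea; the marker byte values and
the flat key layout are simplified, and the real driver's key
derivation differs in detail.

```go
package main

// Marker bytes appended to every flat key. Because the marker sits at
// the end rather than the front, a range scan over the parent bucket's
// prefix visits bucket and value keys contiguously.
const (
	bucketPostfix = byte(0x00)
	valuePostfix  = byte(0x01)
)

// makeKey builds the flat database key for an entry inside the given
// parent bucket.
func makeKey(parentBucketID, name []byte, isBucket bool) []byte {
	key := make([]byte, 0, len(parentBucketID)+len(name)+1)
	key = append(key, parentBucketID...)
	key = append(key, name...)

	if isBucket {
		return append(key, bucketPostfix)
	}

	return append(key, valuePostfix)
}
```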