Similarly to kvdb.View, this commit adds a reset closure to the
kvdb.Update call in order to be able to reset external state if the
underlying db backend needs to retry the transaction.
This commit adds a reset() closure to the kvdb.View function which will
be called before each attempt (including the first) of the view
transaction. The reset() closure can be used to reset external state
(e.g. slices or maps) where the view closure accumulates intermediate
results.
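For illustration, a caller might use the reset closure like so (a
minimal sketch assuming the post-change kvdb.View signature; the
results slice is hypothetical):

    var results []string
    err := kvdb.View(db, func(tx kvdb.RTx) error {
        // Read from the db and append intermediate results.
        results = append(results, "...")
        return nil
    }, func() {
        // Called before each attempt (including the first), so a
        // retried transaction starts from clean external state.
        results = nil
    })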
This commit adds commitQueue, a lightweight contention manager for STM
transactions. The queue schedules conflicting transactions for
sequential execution, while leaving all "unblocked" transactions to run
freely in parallel.
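The idea, sketched below (an illustrative contention manager, not
lnd's actual commitQueue implementation): transactions that touch a
common key serialize on per-key locks, taken in sorted order to avoid
deadlock, while disjoint transactions run in parallel.

    import (
        "sort"
        "sync"
    )

    type commitQueueSketch struct {
        mu    sync.Mutex
        locks map[string]*sync.Mutex
    }

    func (q *commitQueueSketch) lockFor(key string) *sync.Mutex {
        q.mu.Lock()
        defer q.mu.Unlock()
        if q.locks == nil {
            q.locks = make(map[string]*sync.Mutex)
        }
        if _, ok := q.locks[key]; !ok {
            q.locks[key] = new(sync.Mutex)
        }
        return q.locks[key]
    }

    // Run executes tx while holding the lock of every key it declares.
    func (q *commitQueueSketch) Run(keys []string, tx func()) {
        sorted := append([]string(nil), keys...)
        sort.Strings(sorted)

        // Deduplicate so the same lock is never taken twice.
        uniq := sorted[:0]
        for _, k := range sorted {
            if len(uniq) == 0 || uniq[len(uniq)-1] != k {
                uniq = append(uniq, k)
            }
        }

        for _, k := range uniq {
            q.lockFor(k).Lock()
        }
        defer func() {
            for i := len(uniq) - 1; i >= 0; i-- {
                q.lockFor(uniq[i]).Unlock()
            }
        }()

        tx()
    }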
Since we will use peer flap rate to determine how we rate limit, we
store this value on disk per peer, per channel. This allows us to
restart with memory of our peers' past behaviour, so badly behaving
peers don't get a fresh start on restart. The last flap timestamp is
stored alongside the flap count so that we can degrade this all-time
count for peers that have not flapped recently.
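A sketch of the stored record, with illustrative field names rather
than lnd's actual type:

    import "time"

    type flapRecord struct {
        // FlapCount is the all-time flap count, which can be
        // degraded over time for peers that haven't flapped lately.
        FlapCount uint32

        // LastFlap is when the peer last flapped, and decides when
        // the count should start to decay.
        LastFlap time.Time
    }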
This commit extends channeldb with the DeleteInvoices call which, when
passed a slice of delete references, will attempt to delete the
invoices pointed to by those references and also clean up all our
invoice indexes.
This commit adds channeldb.ScanInvoices to scan through all invoices in
the database. The new call will also replace the existing
channeldb.FetchAllInvoicesWithPaymentHash call, in preparation for
collecting invoices we'd like to delete and watching for expiry in a
single scan in later commits.
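A sketch of how the two calls might compose (the delete reference's
fields, the expired() predicate, and the exact signatures are
assumptions, not verbatim API):

    // Collect invoices to delete in one scan, then delete them.
    var refs []channeldb.InvoiceDeleteRef
    err := db.ScanInvoices(
        func(hash lntypes.Hash, inv *channeldb.Invoice) error {
            if expired(inv) {
                refs = append(refs, channeldb.InvoiceDeleteRef{
                    PayHash:  hash,
                    AddIndex: inv.AddIndex,
                })
            }
            return nil
        },
        func() {
            // reset() for the underlying retry loop.
            refs = refs[:0]
        },
    )
    if err == nil {
        err = db.DeleteInvoices(refs)
    }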
We use the event timestamp of a forwarding event as its primary storage
key. On systems with coarse clock resolution this can lead to
collisions when some of the timestamps are identical. We fix this
problem by shifting colliding timestamps at the nanosecond level until
only unique values remain.
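A minimal sketch of the de-duplication, operating on raw nanosecond
timestamps (illustrative, not lnd's exact code):

    import "sort"

    // makeUniqueTimestamps shifts colliding timestamps up by one
    // nanosecond at a time until all values are unique, preserving
    // their relative order.
    func makeUniqueTimestamps(ts []uint64) {
        sort.Slice(ts, func(i, j int) bool { return ts[i] < ts[j] })
        for i := 1; i < len(ts); i++ {
            if ts[i] <= ts[i-1] {
                ts[i] = ts[i-1] + 1
            }
        }
    }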
In this commit, we unify the old and new values for `sync-freelist`,
and also ensure that we don't break behavior for any users that are
using the _old_ value.
To do this, we first rename what was `--db.bolt.no-sync-freelist`, to
`--db.bolt.sync-freelist`. This gets rid of the negation on the config
level, and lets us override that value if the user is specifying the
legacy config option.
In the future, we'll deprecate the old config option, in favor of the
new DB scoped option.
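The precedence described above might look roughly like this (field
names are illustrative):

    // A user still setting the legacy --sync-freelist option
    // overrides the new, DB-scoped default.
    if cfg.SyncFreelist {
        cfg.DB.Bolt.SyncFreelist = true
    }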
This commit extends compatibility with the bbolt kvdb implementation,
which returns ErrIncompatibleValue in case of a bucket/value key
collision. Furthermore, the commit adds an extra precondition to the
transaction when a key doesn't exist. This is needed because we fix
reads to a snapshot revision, and other writers may otherwise commit
the key in the meantime.
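With the etcd clientv3 API, a "key must still be absent at commit
time" precondition is typically expressed as a create-revision
comparison; a sketch (not necessarily lnd's exact code):

    import clientv3 "go.etcd.io/etcd/clientv3"

    // absentCompare builds the compare added to the commit txn for a
    // key that was read as missing: if another writer created the key
    // since our snapshot, the compare fails and the STM retries.
    func absentCompare(key string) clientv3.Cmp {
        return clientv3.Compare(clientv3.CreateRevision(key), "=", 0)
    }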
This fixes a long-standing force close bug. When we receive a
revocation, store the updates that the remote should sign next under
a new database key. Previously, these were not persisted, which would
lead to force closure.
This commit removes the lock set, which was used to add only bucket
keys to the tx predicate while also bumping their mod version.
This was useful for reducing the size of the compare set, but did
little to reduce contention, since top-level buckets were always in the
lock set.
This commit moves makeTestDB to db.go and exports it so that we'll be
able to use this function in other unit tests to make them testable with
etcd if needed.
This commit changes the key derivation algorithm we use to emulate
buckets similar to bbolt. The issue with prefixing keys with either a
bucket or a value prefix is that the cursor couldn't effectively
iterate through all keys in a bucket, as it skipped the bucket keys.
While there are multiple ways to fix that issue (e.g. two pointers,
iterating value keys then bucket keys, etc.), the cleanest is to use a
postfix indicating whether a key is a bucket or a value, instead of a
prefix. This also simplifies all operations where we (recursively)
iterate a bucket, and is equivalent to the prefixing key derivation
with the addition that bucket and value keys are now contiguous.
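An illustrative derivation (not the exact lnd code), assuming each
bucket has a fixed-size id and a one-byte postfix tags each key:

    // Postfix values marking the role of a flattened key.
    const (
        keyPostfixBucket = byte(0)
        keyPostfixValue  = byte(1)
    )

    // makeKey flattens a key within the bucket identified by
    // parentID. Every key under one bucket shares the parentID
    // prefix, so a cursor ranging over that prefix sees bucket and
    // value keys in one contiguous run.
    func makeKey(parentID, key []byte, isBucket bool) []byte {
        out := make([]byte, 0, len(parentID)+len(key)+1)
        out = append(out, parentID...)
        out = append(out, key...)
        if isBucket {
            return append(out, keyPostfixBucket)
        }
        return append(out, keyPostfixValue)
    }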
This commit moves the deletion of all updates under the unsigned
acked updates key from AppendRemoteCommitChain to
AdvanceCommitChainTail. This is done because if we went down after
signing for these updates but before receiving a revocation, we would
incorrectly reject their commitment signature:
    Alice                   Bob
           -----add----->
           -----sig----->
           <----rev------
           <----sig------
           -----rev----->
           <----fail-----
           <----sig------
           -----rev----->
           -----sig----->
              *reconnect*
           <----rev------
           <----add------
           x----sig------
It is also important to note that filtering is required when we
receive a revocation to ensure that we aren't erroneously deleting
remote updates. Take the following state transitions:
    Alice                   Bob
           -----add----->
           -----sig----->
           <----rev------
           <----sig------
           -----rev----->
           -----add----->
           -----sig----->
           <----fail-----
           <----sig------
           -----rev-----> (alice stores updates here)
           <----rev------
In the above case, if Alice deleted all updates rather than filtering
when receiving the final revocation from Bob, then Alice would have
to force close the channel due to missing updates. Since Alice hasn't
signed for any of the unsigned acked updates, she should not filter any
of them out.
This commit copies over the relevant zpay32 decoding logic to ensure
that our prior migrations aren't affected by upcoming changes to the
zpay32 package, most notably changes to the default final_cltv_expiry
and expiry values.
When a remote peer claims one of our outgoing htlcs on chain, we do
not care whether they claimed with multiple stages. We simply store
the claim outcome and then forget the resolver.
Incoming htlcs that are timed out or failed (invalid htlc or invoice
condition not met) save a single on-chain resolution, because we don't
need to take any further actions on them ourselves (we don't need to
worry about two-stage claims, since this is the success path for our
peer).
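Sketched as data (illustrative names), the record kept per resolved
htlc is just the final outcome:

    // resolverOutcome is an illustrative shape: a single record per
    // htlc, with no intermediate claim stages.
    type resolverOutcome struct {
        Outgoing bool  // direction of the htlc
        Outcome  uint8 // e.g. claimed, timed out, invalid htlc
    }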
Add a new top-level bucket which holds closed channels nested by chain
hash and contains additional information about channel closes. We add
resolver resolutions under their own key so that we can extend the
bucket with additional information if required.
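The resulting layout, sketched with illustrative key names:

    closed-channels (top-level bucket)
      └── <chain hash> (bucket)
            └── <channel outpoint> (bucket)
                  └── resolver-resolutions => serialized resolutions
                  └── ... (room for future close information)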
This is useful when we wish to have a channel frozen for a specific
number of blocks after its confirmation. This could also be done with
an absolute thaw height, but that does not suit cases where a strict
block delta needs to be enforced, as it's not possible to know for
certain when a channel will be included in the chain. To work around
this, we add a relative interpretation of the field: if its value is
below 500,000, it's interpreted as a relative height. This approach
allows us to avoid further database modifications to account for a
relative thaw height.
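The interpretation rule, as a sketch (names are illustrative; 500,000
is the threshold from the text):

    // relativeThawHeightThreshold mirrors the cutoff above: values
    // below it are relative block deltas, values at or above it are
    // absolute heights.
    const relativeThawHeightThreshold = 500000

    // absoluteThawHeight resolves a stored thaw height against the
    // channel's confirmation height.
    func absoluteThawHeight(thawHeight, confirmationHeight uint32) uint32 {
        if thawHeight < relativeThawHeightThreshold {
            return confirmationHeight + thawHeight
        }
        return thawHeight
    }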
Avoids indexing the all-zeros pay addr, since it is still in use by
legacy keysend. Without this, the pay addr index would reject all but
the first keysend, since they would be detected as duplicates within
the set id index.
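The guard, sketched:

    // blankPayAddr is the all-zeros payment address still used by
    // legacy keysend; it must never be indexed, or every keysend
    // after the first would collide in the set-id index.
    var blankPayAddr [32]byte

    func shouldIndexPayAddr(payAddr [32]byte) bool {
        return payAddr != blankPayAddr
    }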
This was initially done because there were a few assertions throughout
the codebase requiring a channel's policy to be known. Now that these
have been addressed, we no longer need to store restored channels in
the graph, as their policies were incomplete anyway.
Use the new paginator struct for payments. Add some tests which
specifically exercise cases on and around the missing index we force in
our test, to ensure that we properly handle this case. We also add a
sanity check verifying that we can query when we have no payments.
With our new index of sequence number to payment hash, it is possible
for more than one sequence number to point to the same hash, because
legacy lnd allowed duplicate payments under the same hash. We now store
these payments in a nested bucket within the payments database. To
allow lookup of the correct payment from an index, we require matching
of both the payment hash and the sequence number.
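The disambiguation, sketched with hypothetical shapes:

    // payment is a hypothetical shape for a stored (possibly
    // duplicate) payment.
    type payment struct {
        hash   [32]byte
        seqNum uint64
    }

    // findPayment picks the right entry when several payments share
    // a hash: both the hash and the index's sequence number must
    // match.
    func findPayment(candidates []payment, hash [32]byte,
        seqNum uint64) (*payment, bool) {

        for i := range candidates {
            if candidates[i].hash == hash &&
                candidates[i].seqNum == seqNum {

                return &candidates[i], true
            }
        }
        return nil, false
    }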
We now use the same method of pagination for invoices and payments.
Rather than duplicate logic across calls, we add a paginator struct
which can have query-specific logic plugged into it. This commit also
addresses an existing issue where a reverse query for invoices with an
offset larger than our last index would not return any invoices. We
update this behaviour to act more like c.Seek and simply start from the
last entry. This behaviour change is covered by a unit test that
previously checked for the lack of invoices.
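A sketch of the pluggable paginator (a hypothetical shape, not lnd's
exact type): the query-specific pieces are supplied as closures so
that invoices and payments share the same walking logic.

    type paginatorSketch struct {
        reversed    bool
        indexOffset uint64
        maxItems    uint64

        // seek positions the cursor at the entry nearest the given
        // index (like c.Seek), so an out-of-range offset starts from
        // the last entry instead of returning nothing.
        seek func(index uint64) (uint64, bool)

        // advance steps the cursor one entry forward, or backward
        // when reversed.
        advance func(index uint64, reversed bool) (uint64, bool)

        // fetch reads and accumulates the entry at the given index.
        fetch func(index uint64) error
    }

    func (p *paginatorSketch) query() error {
        idx, ok := p.seek(p.indexOffset)
        for n := uint64(0); ok && n < p.maxItems; n++ {
            if err := p.fetch(idx); err != nil {
                return err
            }
            idx, ok = p.advance(idx, p.reversed)
        }
        return nil
    }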
In our current invoice pagination logic, we would not return any
invoices if our offset index was more than 1 off our last index and we
were paginating backwards. This commit adds a test case for this
behaviour before fixing it in the next commit.
Add an entry to a payments index bucket which maps sequence number
to payment hash when we initiate payments. This allows for more
efficient paginated queries. We create the top level bucket in its
own migration so that we do not need to create it on the fly.
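The index entry itself is tiny; a sketch with a minimal bucket
interface standing in for the kvdb one (bucket and key layout
illustrative):

    import "encoding/binary"

    // putter is a minimal stand-in for a kvdb read-write bucket.
    type putter interface {
        Put(key, value []byte) error
    }

    // putPaymentIndex writes one index entry, mapping a payment's
    // big-endian sequence number to its payment hash.
    func putPaymentIndex(index putter, seqNum uint64,
        payHash [32]byte) error {

        var key [8]byte
        binary.BigEndian.PutUint64(key[:], seqNum)
        return index.Put(key[:], payHash[:])
    }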
When we retry payments and provide them with a new sequence number, we
delete the index for their existing payment so that we do not have an
index that points to a non-existent payment.
If we delete a payment, we also delete its index entry. This prevents
us from looking up entries from indexes to payments that do not exist.
Update our current tests to include lookup of duplicate payments. We
do so in preparation for changing our lookup to be based on a new
payments index. We add an append duplicate function which will add a
duplicate payment with the minimum information required to successfully
read it from disk in tests.
This commit extends the etcd.BackendConfig to also provide an abort
context and integrates it with the STM retry loop, in order to be able
to stop LND when conflicting transactions keep the loop running.
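Its effect on the retry loop, sketched with hypothetical names:

    import (
        "context"
        "errors"
    )

    // errConflict is a hypothetical sentinel for an STM commit that
    // lost a race with another writer.
    var errConflict = errors.New("stm: commit conflict")

    // runSTM retries apply on conflicts, but stops as soon as the
    // abort context is cancelled (e.g. during shutdown).
    func runSTM(ctx context.Context, apply func() error) error {
        for {
            select {
            case <-ctx.Done():
                return ctx.Err()
            default:
            }

            if err := apply(); err == nil ||
                !errors.Is(err, errConflict) {

                return err
            }
            // Conflict: loop and retry from a fresh snapshot.
        }
    }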