* mod: bump btcwallet version to accept db timeout
* btcwallet: add DBTimeOut in config
* kvdb: add database timeout option for bbolt
This commit adds a DBTimeout option in bbolt config. The relevant
functions walletdb.Open/Create are updated to use this config. In
addition, the bolt compacter also applies the new timeout option.
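As a rough sketch of what the timeout plumbing looks like at the bbolt
layer (everything except bolt.Options and its Timeout field is
illustrative):

    package boltsketch

    import (
        "time"

        bolt "go.etcd.io/bbolt"
    )

    // openBoltDB opens a bbolt database file but gives up after
    // dbTimeout instead of blocking forever on the file lock held by
    // another process.
    func openBoltDB(path string, dbTimeout time.Duration) (*bolt.DB, error) {
        return bolt.Open(path, 0600, &bolt.Options{
            Timeout: dbTimeout,
        })
    }

With a zero Timeout bbolt waits on the file lock indefinitely, which is
exactly the hang the new option is meant to bound.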
* channeldb: add DBTimeout in db options
This commit adds the DBTimeout option for channeldb. A new unit
test file is created to test the default options. In addition,
the params used in kvdb.Create inside channeldb_test are updated
with a DefaultDBTimeout value.
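The db options typically follow Go's functional-option pattern; a hedged
sketch of what the option and its default look like (the names
dbOptions, OptionModifier and OptionDBTimeout are illustrative, not
necessarily channeldb's):

    package channeldbsketch

    import "time"

    // DefaultDBTimeout is a placeholder default used when the caller
    // does not override the timeout.
    const DefaultDBTimeout = 60 * time.Second

    // dbOptions holds the tunable knobs applied when opening the DB.
    type dbOptions struct {
        dbTimeout time.Duration
    }

    // OptionModifier mutates a dbOptions instance.
    type OptionModifier func(*dbOptions)

    // OptionDBTimeout overrides the default bbolt lock timeout.
    func OptionDBTimeout(timeout time.Duration) OptionModifier {
        return func(o *dbOptions) {
            o.dbTimeout = timeout
        }
    }

    // defaultOptions returns the options a defaults unit test would
    // assert against.
    func defaultOptions() dbOptions {
        return dbOptions{dbTimeout: DefaultDBTimeout}
    }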
* contractcourt+routing: use DBTimeout in kvdb
This commit touches multiple test files in contractcourt and routing.
The calls to kvdb.Create and kvdb.Open are now updated with
the new param DBTimeout, using the default value kvdb.DefaultDBTimeout.
* lncfg: add DBTimeout option in db config
The DBTimeout option is added to db config. A new unit test is
added to check the default DB config is created as expected.
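A hedged sketch of the shape such a config section and its
default-checking test can take (struct names, tags and the 60 second
default are assumptions, not necessarily lncfg's):

    package lncfgsketch

    import (
        "testing"
        "time"
    )

    // DefaultDBTimeout is the assumed fallback applied when the user
    // does not set a timeout explicitly.
    const DefaultDBTimeout = 60 * time.Second

    // DB mirrors the shape of a db config section with a bolt backend.
    type DB struct {
        Backend string `long:"backend" description:"The selected database backend."`
        Bolt    *Bolt  `group:"bolt" namespace:"bolt"`
    }

    // Bolt holds the bolt-specific settings.
    type Bolt struct {
        DBTimeout time.Duration `long:"dbtimeout" description:"Timeout used when opening the database."`
    }

    // DefaultDB returns the config with default values filled in.
    func DefaultDB() *DB {
        return &DB{
            Backend: "bolt",
            Bolt:    &Bolt{DBTimeout: DefaultDBTimeout},
        }
    }

    // TestDefaultDBConfig asserts the defaults are what we expect.
    func TestDefaultDBConfig(t *testing.T) {
        cfg := DefaultDB()
        if cfg.Bolt.DBTimeout != DefaultDBTimeout {
            t.Fatalf("unexpected default timeout: %v", cfg.Bolt.DBTimeout)
        }
    }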
* migration: add DBTimeout param in kvdb.Create/kvdb.Open
* keychain: update tests to use DBTimeout param
* htlcswitch+chainreg: add DBTimeout option
* macaroons: support DBTimeout config in creation
This commit adds the DBTimeout during the creation of macaroons.db.
The usages of kvdb.Create and kvdb.Open in its tests are updated with
a timeout value using kvdb.DefaultDBTimeout.
* walletunlocker: add dbTimeout option in UnlockerService
This commit adds a new param, dbTimeout, during the creation of
UnlockerService. This param is then passed to wallet.NewLoader
inside various service calls, specifying a timeout value to be
used when opening the bbolt database. In addition, the macaroonService
is also called with this dbTimeout param.
* watchtower/wtdb: add dbTimeout param during creation
This commit adds the dbTimeout param for the creation of both
watchtower.db and wtclient.db.
* multi: add db timeout param for walletdb.Create
This commit adds the db timeout param for the function call
walletdb.Create. It touches only the test files found in chainntnfs,
lnwallet, and routing.
* lnd: pass DBTimeout config to relevant services
This commit enables lnd to pass the DBTimeout config to the following
services/config/functions:
- chainControlConfig
- walletunlocker
- wallet.NewLoader
- macaroons
- watchtower
In addition, the usage of wallet.Create is updated too.
* sample-config: add dbtimeout option
This ensures that the nodes will properly be shut down even if one fails
to start or any of them fail to connect. Previously the shutdown was
deferred only in the event that the setup was successful.
Certain checks were implemented with Errorf, which only logs the
failure. This results in the test harness panicking further down. We go
one step further and convert all calls in this file to use require.
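As an illustration of both points, deferring teardown before any
fallible step and failing fast with require instead of Errorf, a setup
in this style might look like the following sketch (the node type and
helpers are made up):

    package itestsketch

    import (
        "testing"

        "github.com/stretchr/testify/require"
    )

    type node struct{}

    func (n *node) start() error          { return nil }
    func (n *node) connect(o *node) error { return nil }
    func (n *node) shutdown()             {}

    // setupNodes registers cleanup before the steps that can fail, so
    // a failed start or connect still tears everything down, and uses
    // require so a failure aborts immediately instead of only being
    // logged like t.Errorf would.
    func setupNodes(t *testing.T) (*node, *node) {
        alice, bob := &node{}, &node{}

        // Registered up front: runs even if the assertions below fail.
        t.Cleanup(alice.shutdown)
        t.Cleanup(bob.shutdown)

        require.NoError(t, alice.start(), "unable to start alice")
        require.NoError(t, bob.start(), "unable to start bob")
        require.NoError(t, alice.connect(bob), "unable to connect nodes")

        return alice, bob
    }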
To make sure we build the exact version of btcd that is referenced in
the project's go.mod file and to not overwrite any binary the user might
already have installed on the system, we compile btcd into an explicit
file in the itest directory.
This should also speed up invocations of "make itest-only" because the
test harness doesn't always compile btcd on its own.
We also fix a bug with the version parsing where adding a "replace"
directive in the go.mod would cause the awk commands to extract the
wrong version. Because we no longer use the DEPGET goal to build and
install btcd, using a replace directive now actually works for itests.
To remove the need to have an extra make goal for the Windows itests, we
instead add the flag windows=1 that sets the make variable EXEC_SUFFIX
to properly add the ".exe" suffix to all executable names.
To make the Makefile a bit easier to understand, we remove the implicit
ITEST goal/command variable and switch all itest execution over to
explicit goals in the main Makefile.
With the new btcd version we can specify our own listen address
generator function for any btcd nodes. This should reduce flakiness as
the previous way of getting a free port was based on just picking a
random number, which led to conflicts.
We also double the default values for connection retries and timeouts,
effectively waiting up to 4 seconds in total now.
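For context on why a listen address generator helps: instead of guessing
a random port number, the generator can ask the OS for a free port by
binding to port zero, roughly like this sketch (the generator's exact
signature in the btcd harness is an assumption):

    package portsketch

    import (
        "fmt"
        "net"
    )

    // nextAvailablePort asks the kernel for an unused TCP port by
    // binding to port 0, then immediately releasing it.
    func nextAvailablePort() (int, error) {
        l, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            return 0, err
        }
        defer l.Close()

        return l.Addr().(*net.TCPAddr).Port, nil
    }

    // listenAddressGenerator is the kind of callback handed to the
    // test harness, returning a p2p and an RPC listen address.
    func listenAddressGenerator() (string, string, error) {
        p2p, err := nextAvailablePort()
        if err != nil {
            return "", "", err
        }
        rpc, err := nextAvailablePort()
        if err != nil {
            return "", "", err
        }
        return fmt.Sprintf("127.0.0.1:%d", p2p),
            fmt.Sprintf("127.0.0.1:%d", rpc), nil
    }

There is still a small window between releasing and re-binding the port,
but it is far less collision-prone than picking random numbers.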
This adds the scenario to the local force close test cases where a node
force closes one of its channels, then loses state (or does recovery)
before the commitment is confirmed. Without the previous commit this
would go undetected.
outdated local state
This commit fixes a bug that would cause us to not sweep our local
output in case we force closed, then lost state or attempted recovery.
The reason is that we would use our local commit height when deriving
our scripts, which would be incorrect. Instead we use the extracted
state number to derive the correct scripts, allowing us to sweep the
output.
Although this is an unlikely scenario, we would leave money on chain in
this case without any warning (since we would just end up with an empty
delay script) and forget about the spend.
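Per BOLT #3 the commitment number is obscured in the lock time and
sequence fields of the commitment transaction itself, so it can be
recovered from the broadcast transaction instead of our possibly stale
local height. A hedged sketch of that extraction (the helper name is
illustrative and the obfuscator derivation is omitted):

    package statehint

    import "github.com/btcsuite/btcd/wire"

    // extractStateNum recovers the 48-bit commitment number encoded in
    // a commitment transaction: the upper 24 bits live in the lower 24
    // bits of the input's sequence, the lower 24 bits in the lower 24
    // bits of the lock time, XORed with a per-channel obfuscator
    // derived from both parties' payment basepoints.
    func extractStateNum(commitTx *wire.MsgTx, obfuscator uint64) uint64 {
        stateHint := uint64(commitTx.TxIn[0].Sequence&0xFFFFFF)<<24 |
            uint64(commitTx.LockTime&0xFFFFFF)

        return stateHint ^ obfuscator
    }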
Similar to what we did for the local state handling, we extract handling
all known remote states we can act on (breach, current, pending state)
into its own method.
Since we want to handle the case where we lost state (both in case of
local and remote close) last, we don't rely on the remote state number
to check which commit we are looking at, but match on TXIDs directly.
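A minimal sketch of that dispatch, matching the confirmed spend's TXID
directly against the remote commitments we know about (types and field
names are illustrative, and breach handling is omitted):

    package remotesketch

    import "github.com/btcsuite/btcd/chaincfg/chainhash"

    // remoteCommitSet bundles the remote commitment TXIDs we can act on.
    type remoteCommitSet struct {
        currentTxid chainhash.Hash
        pendingTxid *chainhash.Hash // non-nil if an unrevoked pending commit exists
    }

    type action int

    const (
        actionCurrent action = iota
        actionPending
        actionUnknown // possibly lost state, handled last
    )

    // classifySpend matches the spending TXID directly instead of
    // relying on the remote state number, which can be ahead of what
    // we have on disk if we lost state.
    func classifySpend(spendTxid chainhash.Hash, commits remoteCommitSet) action {
        switch {
        case spendTxid == commits.currentTxid:
            return actionCurrent
        case commits.pendingTxid != nil && spendTxid == *commits.pendingTxid:
            return actionPending
        default:
            return actionUnknown
        }
    }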
The tests didn't really roll back the channel state, so we would only
rely on the state number to determine whether we had lost state. Now we
properly roll back the channel to a previous state, in preparation for
upcoming changes.
Since it was added, we never maintained the file, leading to decay in the
set of "actual" code owners. In practice, the current setup ends up
assigning most reviews to two or so active contributors. I think the idea
itself is sound, but the current implementation leads to certain
reviewers being over-assigned PRs, which at times causes those PRs to
stagnate since pretty much every PR gets assigned to the same set of
people.
In certain container setups, it's useful to optionally have lnd shut
down if it detects that its certs are expired. Assuming there's a
hypervisor to restart the container/pod, lnd will then come back up
with fully up-to-date certs after the restart.
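The check itself boils down to comparing the certificate's NotAfter
against the current time; a hedged sketch, without showing the option
name or where lnd would hook it in:

    package certsketch

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // errCertExpired is returned when the TLS certificate on disk is
    // past its NotAfter date and the operator asked for a shutdown in
    // that case rather than serving with a stale cert.
    var errCertExpired = errors.New("tls certificate is expired")

    // checkCertExpiry loads the PEM encoded certificate at certPath
    // and returns errCertExpired if it is no longer valid.
    func checkCertExpiry(certPath string) error {
        certBytes, err := os.ReadFile(certPath)
        if err != nil {
            return err
        }

        block, _ := pem.Decode(certBytes)
        if block == nil {
            return fmt.Errorf("unable to decode PEM block in %s", certPath)
        }

        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return err
        }

        if time.Now().After(cert.NotAfter) {
            return errCertExpired
        }

        return nil
    }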
This commit introduces a change in the key format used to reserve/lookup
session-key-indexes. Currently the reservations are stored under the
tower id, however this creates issues when multiple clients are using
the same database since only one reservation is permitted per tower.
We fix this by appending the blob type to the session-key-index locator.
This allows multiple clients to reserve keys for the same tower, but
still limits each client to one outstanding reservation. The changes are
made in a way such that we fall back to the legacy format if a
reservation under the new format is not found, but only if the blob type
matches blob.TypeAltruistCommit, which is so far the only actively
deployed blob type.
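A rough sketch of that lookup logic; the key layout and helper names are
made up, only the behaviour described above (new key = tower ID plus
blob type, legacy fallback only for the altruist-commit type) is
mirrored:

    package wtdbsketch

    import "encoding/binary"

    // blobTypeAltruistCommit stands in for the real blob type constant.
    const blobTypeAltruistCommit uint16 = 0

    // sessionKeyIndexKey builds the new-format locator: the 8-byte
    // tower ID followed by the 2-byte blob type, so each (tower, blob
    // type) pair gets its own reservation slot.
    func sessionKeyIndexKey(towerID uint64, blobType uint16) []byte {
        key := make([]byte, 8+2)
        binary.BigEndian.PutUint64(key[:8], towerID)
        binary.BigEndian.PutUint16(key[8:], blobType)
        return key
    }

    // lookupReservation first tries the new composite key and, only
    // for the legacy altruist-commit blob type, falls back to the old
    // tower-ID-only key so existing reservations keep working.
    func lookupReservation(bucket map[string][]byte, towerID uint64,
        blobType uint16) ([]byte, bool) {

        if v, ok := bucket[string(sessionKeyIndexKey(towerID, blobType))]; ok {
            return v, true
        }

        if blobType == blobTypeAltruistCommit {
            legacyKey := make([]byte, 8)
            binary.BigEndian.PutUint64(legacyKey, towerID)
            if v, ok := bucket[string(legacyKey)]; ok {
                return v, true
            }
        }

        return nil, false
    }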
To avoid the "Error outside of test" log and to properly terminate the
test if a sub test fails, we need to correctly invoke them using the
RunTestCase method.
In this commit, we add a new option to toggle gossip rate limiting. This
new option can be useful in contexts that require near instant
propagation of gossip messages like integration tests.
With this commit we make it possible to use an Onion v2 hidden service
address as the Neutrino backend.
This failed before because an .onion address cannot be looked up and
converted into an IP address through the normal DNS resolving process,
even when using a Tor socks proxy.
Instead, we turn any v2 .onion address into a fake IPv6 representation
before giving it to Neutrino's address manager and turn it back into an
Onion host address when actually dialing.
If we use a chain backend that only understands IP addresses (like
Neutrino for example), we need to turn any Onion v2 host addresses into
a fake IPv6 representation, otherwise it would be resolved incorrectly.
To do this, we use the same fake IPv6 address format that bitcoind and
btcd use internally to represent Onion v2 hidden service addresses.
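The fake representation in question is the OnionCat mapping: the 10
bytes obtained by base32-decoding a v2 onion name are placed behind the
fd87:d87e:eb43::/48 prefix, yielding a 16-byte IPv6 address. A sketch of
the round trip:

    package onionsketch

    import (
        "encoding/base32"
        "fmt"
        "net"
        "strings"
    )

    // onionCatPrefix is the fd87:d87e:eb43::/48 prefix bitcoind and
    // btcd use to embed v2 onion addresses in IPv6 form.
    var onionCatPrefix = []byte{0xFD, 0x87, 0xD8, 0x7E, 0xEB, 0x43}

    // onionToFakeIP maps "xxxxxxxxxxxxxxxx.onion" (16 base32 chars,
    // i.e. 10 bytes) to its fake IPv6 representation.
    func onionToFakeIP(onionHost string) (net.IP, error) {
        host := strings.TrimSuffix(onionHost, ".onion")
        decoded, err := base32.StdEncoding.DecodeString(strings.ToUpper(host))
        if err != nil {
            return nil, err
        }
        if len(decoded) != 10 {
            return nil, fmt.Errorf("%s is not a v2 onion address", onionHost)
        }

        return net.IP(append(append([]byte{}, onionCatPrefix...), decoded...)), nil
    }

    // fakeIPToOnion reverses the mapping before dialing, assuming ip
    // is a 16-byte address carrying the OnionCat prefix.
    func fakeIPToOnion(ip net.IP) string {
        host := base32.StdEncoding.EncodeToString(ip[6:16])
        return strings.ToLower(host) + ".onion"
    }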
Currently if the tower hangs up during session negotiation there is no
backoff applied. We add backoff here to avoid excessive CPU/network
utilization during unexpected failures.
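A minimal sketch of the kind of capped exponential backoff meant here
(the constants and loop shape are illustrative, not the exact negotiator
code):

    package backoffsketch

    import (
        "errors"
        "time"
    )

    const (
        minBackoff = 1 * time.Second
        maxBackoff = 1 * time.Minute
    )

    var errNegotiatorExiting = errors.New("negotiator exiting")

    // nextBackoff doubles the current delay, starting at minBackoff
    // and capping at maxBackoff.
    func nextBackoff(current time.Duration) time.Duration {
        if current == 0 {
            return minBackoff
        }
        if next := 2 * current; next < maxBackoff {
            return next
        }
        return maxBackoff
    }

    // negotiateWithBackoff retries fn until it succeeds or quit
    // closes, sleeping for an increasing interval between attempts so
    // a tower that keeps hanging up doesn't trigger a hot retry loop.
    func negotiateWithBackoff(fn func() error, quit <-chan struct{}) error {
        var backoff time.Duration
        for {
            if err := fn(); err == nil {
                return nil
            }

            backoff = nextBackoff(backoff)
            select {
            case <-time.After(backoff):
            case <-quit:
                return errNegotiatorExiting
            }
        }
    }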
Currently the ForceQuit call is scheduled after trying to stop the
backup queue. In certain cases, the call to stop the queue never
finishes, which means the force quit is never scheduled. We remedy this
by scheduling the call before any other operations to ensure we can always
exit ungracefully if necessary.
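The fix amounts to arming the force-quit path before the potentially
blocking shutdown steps; a sketch with illustrative names:

    package forcequitsketch

    import "time"

    // stopper abstracts the backup client pieces relevant here.
    type stopper interface {
        // Stop attempts a graceful shutdown and may block indefinitely
        // in the buggy scenario described above.
        Stop() error

        // ForceQuit tears everything down without waiting.
        ForceQuit()
    }

    // shutdown arms the force quit before anything that can block, so
    // a hung queue can no longer prevent the process from exiting.
    func shutdown(s stopper, forceQuitDelay time.Duration) error {
        time.AfterFunc(forceQuitDelay, s.ForceQuit)

        // Only now attempt the graceful stop, which may hang.
        return s.Stop()
    }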