In automated or unattended setups such as cluster/container
environments, unlocking the wallet through RPC presents a set of
challenges. Usually the password is already present in a file inside
the container anyway, so we might as well just read it from there.
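For illustration, a minimal sketch of reading such a password file
(the path and helper name are hypothetical; lnd's actual option
handling may differ):

```go
// Package walletpw: illustrative only; lnd's actual option handling
// may differ.
package walletpw

import (
	"bytes"
	"os"
)

// ReadPasswordFile loads the wallet password from a file, trimming the
// trailing newline that editors and shells typically append. The path
// would come from a (hypothetical) config option.
func ReadPasswordFile(path string) ([]byte, error) {
	pw, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	return bytes.TrimRight(pw, "\r\n"), nil
}
```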
This commit adds a new "waiting to start" state which may be used to
query whether we're still waiting to become the cluster leader. Once
we are the leader, we advance the state to "wallet not exist" or
"wallet locked", depending on wallet availability.
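A rough sketch of how that state progression could be modeled (the
names are illustrative, not the exact identifiers used in the code):

```go
// Package state: a rough model of the unlocker's lifecycle; the names
// are illustrative, not lnd's exact identifiers.
package state

type WalletState int

const (
	// WaitingToStart: standby, waiting to become the cluster leader.
	WaitingToStart WalletState = iota
	// WalletNotExist: we are the leader, but no wallet exists yet.
	WalletNotExist
	// WalletLocked: we are the leader and the wallet exists, but it
	// is still locked.
	WalletLocked
	// WalletUnlocked: the wallet has been unlocked.
	WalletUnlocked
)
```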
This commit also changes the initialization order so that the DB is
initialized after the RPC server is up. This will allow us to later
add an RPC endpoint for querying leadership status.
The grpc-gateway library that is used to transform REST calls into gRPC
uses a different method for reading a request body stream depending on
whether the RPC is a request-streaming one or not. We cannot determine
at runtime what kind of RPC the user is calling, so we add a new
parameter to the proxy that lists all request-streaming RPC calls.
In any case the client _has_ to send one request message initially to
kick off the request processing. Normally this can just be an empty
message. This can lead to problems if that empty message is not
expected by the gRPC server. But for the two currently existing
client-streaming RPCs this will only trigger a warning
(HTLC interceptor) or be ignored (channel acceptor).
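A minimal sketch of how the proxy could consult such a list (the
matching helper is illustrative; the two entries are the RPCs
mentioned above):

```go
// Package proxy: illustrative sketch of consulting a list of
// request-streaming RPCs.
package proxy

import "strings"

// clientStreamingURIs lists the full method URIs of all
// request-streaming RPCs the proxy must treat specially; the two
// entries are the RPCs mentioned above.
var clientStreamingURIs = []string{
	"/routerrpc.Router/HtlcInterceptor",
	"/lnrpc.Lightning/ChannelAcceptor",
}

// isClientStreaming reports whether a REST request maps onto a
// request-streaming gRPC method and therefore needs the streaming
// body reader.
func isClientStreaming(fullMethod string) bool {
	for _, uri := range clientStreamingURIs {
		if strings.EqualFold(uri, fullMethod) {
			return true
		}
	}
	return false
}
```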
This commit moves the stopping of the chain control services closer
to the point where they are started. Stopping the two services
"wallet" and "feeEstimator", which are started inside
newChainControlFromConfig, was moved from server.go to the cleanup
function.
In addition, chainView.Stop was removed from server.Stop, as it is
already handled by the router, where it is started.
The --profile flag now accepts either a bare port or a host:port
string. If --profile is set to a port, pprof debugging information
will be served over localhost. Otherwise, we will attempt to serve
pprof information on the specified host:port (if we are allowed to
listen on it).
We default to the safe option because if the port is reachable from
outside, anybody can connect and see the debugging information.
See: https://mmcloughlin.com/posts/your-pprof-is-showing
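A sketch of the intended port-vs-host:port handling, assuming a
hypothetical helper:

```go
// Package profile: sketch of normalizing the --profile value.
package profile

import (
	"fmt"
	"net"
	"strconv"
)

// NormalizeAddr turns the --profile value into a listen address. A
// bare port is bound to localhost only; an explicit host:port is used
// as given, since the operator has opted in to wider exposure.
func NormalizeAddr(profile string) (string, error) {
	// A bare port parses as an integer; serve on localhost only.
	if _, err := strconv.Atoi(profile); err == nil {
		return net.JoinHostPort("localhost", profile), nil
	}
	// Otherwise require a valid host:port pair.
	if _, _, err := net.SplitHostPort(profile); err != nil {
		return "", fmt.Errorf("invalid --profile value %q: %w",
			profile, err)
	}
	return profile, nil
}
```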
Downloading every block that contains a channel point takes a very long
time when syncing the graph on mainnet with Neutrino. Therefore it makes
sense to use routing.assumechanvalid=true since by using Neutrino a user
already accepts the different trust model.
Apparently the routing.assumechanvalid flag is unknown to, or
overlooked by, a lot of users.
This commit sets the default to routing.assumechanvalid=true for
Neutrino. Because the CLI library doesn't allow the user to set a bool
flag to false when its default is true, we need to add an additional
flag that is the inverse of the routing one, just for the case where a
Neutrino user explicitly wants to turn on channel validation.
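A sketch of the inverse-flag pattern described above (struct tags in
the style of the go-flags library; names and derivation logic are
illustrative):

```go
// Package config: sketch of the inverse-flag pattern; names are
// illustrative.
package config

// NeutrinoConfig exposes the inverse flag: a bool flag whose default
// is true cannot be switched off on the command line, so we add a
// second flag that defaults to false and re-enables validation.
type NeutrinoConfig struct {
	ValidateChannels bool `long:"validatechannels" description:"Validate channels by downloading their blocks (slow with Neutrino)."`
}

// EffectiveAssumeChanValid derives the routing setting: with Neutrino
// we assume channels are valid unless validation is explicitly
// requested; otherwise the user's routing.assumechanvalid value wins.
func EffectiveAssumeChanValid(usingNeutrino, assumeChanValid bool,
	cfg NeutrinoConfig) bool {

	if usingNeutrino {
		return !cfg.ValidateChannels
	}
	return assumeChanValid
}
```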
After unification of the WalletUnlocker and RPC services on the same gRPC
server, the WalletUnlocker will no longer be shut down after the wallet
has been unlocked.
In case --no-macaroons was used, this led to the caller getting stuck
after unlocking the wallet, since we would wait for a response on the
MacResponseChan. Previously we would always close the MacResponseChan
when shutting down the WalletUnlocker, but this is no longer done.
To fix this we close the channel after the wallet is unlocked,
regardless of which combination of --no-macaroons and --noseedbackup
is used.
This commit makes us gate the calls to the RPC servers according to the
current RPC state. This ensures we won't try to call the RPC server
before it has been fully initialized, and that we won't call the
walletUnlocker after the wallet has already been unlocked.
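As an illustration, such gating can be implemented as a gRPC
interceptor that consults the current state; the state names and error
messages below are a sketch, not the exact implementation:

```go
// Package rpcgate: sketch of gating RPC calls by state; the state
// names and error messages are illustrative.
package rpcgate

import (
	"context"
	"strings"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

type rpcState int

const (
	// walletLocked: only the WalletUnlocker may be called.
	walletLocked rpcState = iota
	// rpcActive: only the main RPC server may be called.
	rpcActive
)

// StateGate returns a unary interceptor that rejects calls that are
// not valid in the current state.
func StateGate(state func() rpcState) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{},
		info *grpc.UnaryServerInfo,
		handler grpc.UnaryHandler) (interface{}, error) {

		isUnlocker := strings.HasPrefix(
			info.FullMethod, "/lnrpc.WalletUnlocker/",
		)
		switch state() {
		case walletLocked:
			if !isUnlocker {
				return nil, status.Error(codes.Unavailable,
					"wallet not yet unlocked")
			}
		case rpcActive:
			if isUnlocker {
				return nil, status.Error(codes.Unavailable,
					"wallet already unlocked")
			}
		}
		return handler(ctx, req)
	}
}
```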
This commit achieves what we have been building up to: running the
WalletUnlockerService and the LightningService on the same gRPC server
simultaneously!
To achieve this, we first create the RPC server in an "interface only"
way, only creating the struct and setting the dependencies we have
available before the wallet has been unlocked. After the wallet has been
unlocked and we have created all the subsystems we need, we add those to
the RPC server, and start the sub-servers.
This means that the WalletUnlockerService and the LightningService both
will be registered and available at all times on the gRPC server.
However, before the wallet has been unlocked, the LightningService
should not be used since the RPC server is not yet ready to handle the
calls. Similarly, after the wallet has been unlocked, the
WalletUnlockerService should not be used. We will ensure this in the
following commits.
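The shape of this two-phase construction, sketched with placeholder
types:

```go
// Package rpcserver: the shape of the two-phase construction, with
// placeholder types.
package rpcserver

import "sync"

type mainServer struct{}

// rpcServer is created "interface only" at startup: the struct exists
// and is registered on the gRPC server, but its subsystems stay nil
// until the wallet has been unlocked.
type rpcServer struct {
	mu sync.RWMutex

	// started flips to true once AddDeps has run; until then the
	// Lightning service handlers must not be invoked (enforced by
	// the state gating described in the previous commit).
	started bool

	// server holds the post-unlock subsystems.
	server *mainServer
}

// AddDeps injects the post-unlock subsystems and marks the RPC server
// ready to serve the Lightning service.
func (r *rpcServer) AddDeps(s *mainServer) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.server = s
	r.started = true
}
```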
The external subserver config doesn't have to be defined more than
once, so there is no need to define it for every listener. Instead,
we move it to the ListenerConfig.
This adds a new package rpcperms which houses the InterceptorChain
struct. This is a central place where we'll craft interceptors to use
for the gRPC server, including macaroon enforcement.
This lets us add the interceptor chain to the gRPC server before the
macaroon service is ready, allowing us to avoid tearing down the gRPC
server after the wallet has been unlocked.
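A simplified sketch of the idea: the chain is registered on the gRPC
server at startup, and the macaroon check only becomes active once the
service is injected later (the real rpcperms implementation differs in
detail):

```go
// Package rpcperms: a simplified sketch of the chain; the real
// implementation differs in detail.
package rpcperms

import (
	"context"
	"sync"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// macaroonValidator is the part of the macaroon service the chain
// needs; it stays nil until the wallet has been unlocked.
type macaroonValidator interface {
	ValidateMacaroon(ctx context.Context, fullMethod string) error
}

// InterceptorChain is registered on the gRPC server at startup; the
// macaroon service is injected later without restarting the server.
type InterceptorChain struct {
	mu  sync.RWMutex
	mac macaroonValidator
}

// AddMacaroonService activates macaroon enforcement once the wallet
// has been unlocked.
func (c *InterceptorChain) AddMacaroonService(mac macaroonValidator) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.mac = mac
}

// UnaryInterceptor enforces macaroons as soon as the service is set.
func (c *InterceptorChain) UnaryInterceptor() grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{},
		info *grpc.UnaryServerInfo,
		handler grpc.UnaryHandler) (interface{}, error) {

		c.mu.RLock()
		mac := c.mac
		c.mu.RUnlock()

		if mac != nil {
			err := mac.ValidateMacaroon(ctx, info.FullMethod)
			if err != nil {
				return nil, status.Error(
					codes.PermissionDenied, err.Error(),
				)
			}
		}
		return handler(ctx, req)
	}
}
```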
Apparently dropping the wallet transaction history only fully takes
effect after the wallet is opened again from scratch. To do this
cleanly, we need to do it in the unlocker instead of lnd.
* mod: bump btcwallet version to accept db timeout
* btcwallet: add DBTimeout in config
* kvdb: add database timeout option for bbolt
This commit adds a DBTimeout option in the bbolt config. The relevant
functions walletdb.Open/Create are updated to use this config. In
addition, the bolt compactor also applies the new timeout option (see
the bbolt sketch after this list).
* channeldb: add DBTimeout in db options
This commit adds the DBTimeout option for channeldb. A new unit
test file is created to test the default options. In addition,
the params used in kvdb.Create inside channeldb_test are updated
with a DefaultDBTimeout value.
* contractcourt+routing: use DBTimeout in kvdb
This commit touches multiple test files in contractcourt and routing.
The calls to kvdb.Create and kvdb.Open are now updated with the new
DBTimeout param, using the default value kvdb.DefaultDBTimeout.
* lncfg: add DBTimeout option in db config
The DBTimeout option is added to the db config. A new unit test is
added to check that the default DB config is created as expected.
* migration: add DBTimeout param in kvdb.Create/kvdb.Open
* keychain: update tests to use DBTimeout param
* htlcswitch+chainreg: add DBTimeout option
* macaroons: support DBTimeout config in creation
This commit adds the DBTimeout during the creation of macaroons.db.
The usage of kvdb.Create and kvdb.Open in its tests is updated with
a timeout value using kvdb.DefaultDBTimeout.
* walletunlocker: add dbTimeout option in UnlockerService
This commit adds a new param, dbTimeout, during the creation of
UnlockerService. This param is then passed to wallet.NewLoader
inside various service calls, specifying a timeout value to be
used when opening the bbolt database. In addition, the macaroonService
is also called with this dbTimeout param.
* watchtower/wtdb: add dbTimeout param during creation
This commit adds the dbTimeout param for the creation of both
watchtower.db and wtclient.db.
* multi: add db timeout param for walletdb.Create
This commit adds the db timeout param for the function call
walletdb.Create. It touches only the test files found in chainntnfs,
lnwallet, and routing.
* lnd: pass DBTimeout config to relevant services
This commit enables lnd to pass the DBTimeout config to the following
services/configs/functions:
- chainControlConfig
- walletunlocker
- wallet.NewLoader
- macaroons
- watchtower
In addition, the usage of wallet.Create is updated too.
* sample-config: add dbtimeout option
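As referenced in the kvdb bullet above, here is a minimal sketch of
what the timeout means at the bbolt level; the constant's value here
is illustrative:

```go
// Package kvdb: what the timeout means at the bbolt level; the
// constant's value here is illustrative.
package kvdb

import (
	"time"

	bolt "go.etcd.io/bbolt"
)

// DefaultDBTimeout bounds how long we wait for the exclusive file
// lock on a bbolt database. Without it, a second process opening the
// same file blocks forever.
const DefaultDBTimeout = 60 * time.Second

// openDB opens a bbolt database, failing with an error instead of
// blocking indefinitely if the lock cannot be acquired in time.
func openDB(path string) (*bolt.DB, error) {
	return bolt.Open(path, 0600, &bolt.Options{
		Timeout: DefaultDBTimeout,
	})
}
```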
With this commit we make it possible to use an Onion v2 hidden service
address as the Neutrino backend.
This failed before because an .onion address cannot be looked up and
converted into an IP address through the normal DNS resolution
process, even when using a Tor SOCKS proxy.
Instead, we turn any v2 .onion address into a fake IPv6 representation
before giving it to Neutrino's address manager and turn it back into an
Onion host address when actually dialing.
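One common way to build such a fake representation is an OnionCat-style
mapping: decode the 16 base32 characters of the v2 onion host into 10
bytes and append them to a 6-byte IPv6 prefix. Whether this exact
prefix is used here is an assumption; the sketch shows the shape of
the round trip:

```go
// Package onion: sketch of an OnionCat-style v2 onion <-> fake IPv6
// round trip; whether this exact prefix is used here is an assumption.
package onion

import (
	"encoding/base32"
	"fmt"
	"net"
	"strings"
)

// onionPrefix is the OnionCat fd87:d87e:eb43::/48 prefix, used for
// illustration to embed a v2 onion address in an IPv6 address.
var onionPrefix = []byte{0xFD, 0x87, 0xD8, 0x7E, 0xEB, 0x43}

// ToFakeIP maps "<16 base32 chars>.onion" to a fake IPv6 address: the
// 16 base32 characters decode to exactly 10 bytes, which together
// with the 6-byte prefix fill all 16 bytes of an IPv6 address.
func ToFakeIP(host string) (net.IP, error) {
	b32 := strings.ToUpper(strings.TrimSuffix(host, ".onion"))
	raw, err := base32.StdEncoding.DecodeString(b32)
	if err != nil || len(raw) != 10 {
		return nil, fmt.Errorf("invalid v2 onion host %q", host)
	}
	return net.IP(append(append([]byte{}, onionPrefix...), raw...)), nil
}

// FromFakeIP reverses the mapping when actually dialing.
func FromFakeIP(ip net.IP) string {
	b32 := base32.StdEncoding.EncodeToString(ip[6:16])
	return strings.ToLower(b32) + ".onion"
}
```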
Because we'll need to return the macaroon through the wallet unlocker,
we cannot shut down its service before we have done so; otherwise
we'll end up in a deadlock. That's why we collect all shutdown
tasks and return them as a function that can be called after we've
initialized the macaroon service.
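The pattern, sketched with illustrative names:

```go
// Package unlocker: the deferred-shutdown pattern with illustrative
// names.
package unlocker

// Setup wires up the wallet unlocker and, instead of tearing things
// down before returning, hands back a shutdown function. The caller
// invokes it only after the macaroon has been returned to the client,
// avoiding the deadlock described above.
func Setup() (shutdown func(), err error) {
	var tasks []func()

	// Each subsystem appends its own cleanup as it is started, for
	// example stopping the unlocker service or closing databases.
	tasks = append(tasks, func() { /* stop unlocker service */ })
	tasks = append(tasks, func() { /* close macaroon DB */ })

	return func() {
		// Run the collected tasks in reverse start order.
		for i := len(tasks) - 1; i >= 0; i-- {
			tasks[i]()
		}
	}, nil
}
```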
As preparation for the next commit, where we add proper wallet
unlocker shutdown handling, we move the calls that require cleanup to
after the creation of the wallet unlocker service itself.
To make sure no macaroons are created anywhere if stateless
initialization was requested, we keep the requested initialization mode
in the memory of the macaroon service.