In this commit, we increase the default trickle delay from 30s to 1m30s.
We do this because, until we implement the new INV gossip mechanism, we
want to de-emphasise the quick propagation of updates through the
network, which eats up bandwidth.
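For reference, a minimal sketch of the new default, assuming the delay
is expressed in milliseconds and wired through a `defaultTrickleDelay`
constant (the constant name here is illustrative):

```go
// defaultTrickleDelay is the interval between gossip batches sent to
// peers. Raised from 30s to 1m30s to slow update propagation.
const defaultTrickleDelay = 90 * 1000 // milliseconds
```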
In this commit, we add the glue infrastructure to make the sub RPC
server system work properly. Our high-level goal is the following: using
only the lnrpc package (with no visibility into the sub RPC servers),
the RPC server is able to find, create, run, and manage the entire set
of present and future sub RPC servers. In order to achieve this, we use
the reflect package and build tags heavily to permit a loosely coupled
configuration parsing system for the sub RPC servers.
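As a rough sketch of the contract involved (the exact shape of the
interface in `lnrpc` may differ slightly), the dispatcher only needs to
be able to hand back a config blob by sub-server name:

```go
package lnrpc

// SubServerConfigDispatcher is the interface the main config struct
// satisfies so that sub RPC servers can be handed their configs without
// lnrpc needing to import them directly.
type SubServerConfigDispatcher interface {
	// FetchConfig returns the raw config registered under the given
	// sub-server name, and whether such a config was found.
	FetchConfig(subServerName string) (interface{}, bool)
}
```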
We start with a new `subRpcServerConfigs` struct which is _always_
present. This struct has its own group, and will house a series of
sub-configs, one for each sub RPC server. Each sub-config is actually
gated behind a build flag, and can be used to allow users on the command
line or in the config to specify arguments related to the sub-server. If
the config isn't present, then we don't attempt to parse it at all. If
it is, that means the sub RPC server has been registered, and we should
parse the contents of its config.
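A minimal sketch of what this can look like, using a signer sub-server
(`signrpc`) as an illustrative example and the go-flags style struct
tags used elsewhere in the config:

```go
// subRpcServerConfigs houses a sub-config for each sub RPC server. When
// a sub-server's build flag is off, its Config type compiles down to an
// empty struct, so there's nothing to parse.
type subRpcServerConfigs struct {
	// SignRPC holds the config for the signer sub-server. The group
	// and namespace tags expose its fields as --signrpc.* options.
	SignRPC *signrpc.Config `group:"signrpc" namespace:"signrpc"`
}
```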
The `subRpcServerConfigs` struct has two main methods:
`PopulateDependancies` and `FetchConfig`. The `PopulateDependancies`
method is used to dynamically locate and set the config fields for each
new sub-server. As the config may not actually have any fields (if the
build flag is off), we use the reflect package to determine if things
are compiled in or not, and if so, we dynamically set each of the config
parameters. The `FetchConfig` method implements the
`lnrpc.SubServerConfigDispatcher` interface. Our goal is to allow sub
servers to look up their actual config in this main config struct. We
achieve this by using reflect to look up the target field _as if it were
a key in a map_. If the field is found, then we check if it has any
actual attributes (it won't if the build flag is off); if it does, then
we return it, as we expect it to already be populated.
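A sketch of that map-like lookup, building on the struct above; it
assumes sub-server names match field names case-insensitively and that
every sub-config field is a pointer to a struct:

```go
import (
	"reflect"
	"strings"
)

// FetchConfig looks up the sub-server's config field as if the parent
// struct were a map keyed by sub-server name.
func (s *subRpcServerConfigs) FetchConfig(name string) (interface{}, bool) {
	configs := reflect.ValueOf(s).Elem()

	// Locate the field whose name matches the requested sub-server.
	field := configs.FieldByNameFunc(func(fieldName string) bool {
		return strings.EqualFold(fieldName, name)
	})
	if !field.IsValid() || field.IsNil() {
		return nil, false
	}

	// If the build flag is off, the config type has no fields at all,
	// so there's nothing useful to hand back to the sub-server.
	if field.Elem().NumField() == 0 {
		return nil, false
	}

	return field.Interface(), true
}
```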
This commit renames the confusing noencryptwallet
flag to noseedbackup, since this highlights the more
crucial aspect of the flag's behavior to the user.
The description has also been capitalized to urge
the user to think twice about what they're doing.
In this commit, we defer creating the base lnd directory until all flag
parsing is done. We do this as it's possible that the config file
specifies a different lnddir, in which case a directory created up front
would never actually be used.
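The resulting order of operations is roughly the following sketch, with
`cfg.LndDir` standing in for the final, fully parsed value:

```go
// Only create the base lnd directory once both the command line flags
// and the config file have been parsed, so a lnddir set in either place
// wins.
if err := os.MkdirAll(cfg.LndDir, 0700); err != nil {
	return nil, fmt.Errorf("unable to create lnd directory: %v", err)
}
```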
Due to recent changes to the BitcoindClient interface, we now require
the backing bitcoind to use different hosts for its ZMQ raw block and
raw transaction notifications. This was needed as the notification queue
maintained by the bitcoind node would sometimes overflow with
transactions and cause block notifications to be dropped/missed.
In this commit, we expand `extractBitcoindRPCParams` to account for this.
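Roughly, the new constraint looks like the following sketch; the
`zmqBlockHost`/`zmqTxHost` names are illustrative, read from the
`zmqpubrawblock` and `zmqpubrawtx` entries in bitcoin.conf:

```go
// The raw block and raw transaction ZMQ listeners must be distinct,
// otherwise a flood of transactions can crowd out block notifications.
if zmqBlockHost == zmqTxHost {
	return fmt.Errorf("zmqpubrawblock and zmqpubrawtx must be set " +
		"to different addresses")
}
```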
Previously, if the default Tor DNS host failed to resolve, it would
prevent `lnd` from starting due to the failed lookup. This should
instead fail silently, as the host is only needed during bootstrapping.
However, if the user has explicitly modified this host, we should let
them know of the error immediately.
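A sketch of the intended behaviour, using the standard resolver; the
`cfg.Tor.DNS` field and the `torDNSHostSet` flag tracking whether the
user overrode the default are both illustrative:

```go
// Try resolving the Tor DNS host up front. A failure is only fatal if
// the user explicitly configured the host themselves.
if _, err := net.LookupHost(cfg.Tor.DNS); err != nil {
	if torDNSHostSet {
		return nil, err
	}
	// The default bootstrap host merely failed to resolve; log it and
	// carry on, since it's only needed for network bootstrapping.
	log.Printf("unable to resolve Tor DNS host %v: %v", cfg.Tor.DNS, err)
}
```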
In this commit, we update all the lncfg methods in use to properly pass
in a new resolver. This is required in order to ensure that we don't
leak our DNS queries when Tor mode is active.
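The pattern is roughly the following sketch; the exact lncfg helper
signature and the `torController.ResolveTCPAddr` method are
illustrative, the point being that the resolver is injected rather than
hard-coded to the system one:

```go
// Default to the system resolver, but route lookups through Tor's SOCKS
// proxy whenever Tor mode is active so we don't leak DNS queries.
resolver := net.ResolveTCPAddr
if cfg.Tor.Active {
	resolver = torController.ResolveTCPAddr
}
addr, err := lncfg.ParseAddressString(rawAddr, defaultPort, resolver)
```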
In this commit, we update the set of Tor flags to use sane defaults when
not specified. We also include some new flags related to the recent
onion services changes. This allows users to easily get set up on Tor by
only specifying the tor.active flag. If needed, the defaults can still
be overridden.
In this commit, we add a new command line option to allow users (ideally
routing nodes) to disable receiving up-to-date channel updates
altogether. This may be desired, as it allows routing nodes to save on
bandwidth since they don't need the channel updates to passively forward
HTLCs. In the scenario that they _do_ want to update their routing
policies, the first HTLC that fails due to a policy inconsistency will
allow the routing node to propagate the new update to nodes attempting
to route through it.