In this commit, we add the glue infrastructure to make the sub RPC
server system work properly. Our high level goal is the following: using
only the lnrpc package (with no visibility into the sub RPC servers),
the RPC server is able to find, create, run, and manage the entire set
of present and future sub RPC servers. In order to achieve this, we use
the reflect package and build tags heavily to permit a loosely coupled
configuration parsing system for the sub RPC servers.
We start with a new `subRpcServerConfigs` struct which is _always_
present. This struct has its own group, and will house a series of
sub-configs, one for each sub RPC server. Each sub-config is actually
gated behind a build flag, and can be used to allow users on the command
line or in the config to specify arguments related to the sub-server. If
the config isn't present, then we don't attempt to parse it at all. If it
is, then that means the sub RPC server has been registered, and we should
parse the contents of its config.
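As a rough illustration, such a wrapper struct could look like the sketch
below (the field name and struct tags are assumptions for illustration, not
the exact lnd definitions):

```go
// subRpcServerConfigs houses one sub-config per sub RPC server. The concrete
// shape of each sub-config lives behind a build tag: with the tag off, the
// type has no fields, so the flag parser has nothing to pick up.
type subRpcServerConfigs struct {
	// SignRPC is the config for the signrpc sub-server. The group and
	// namespace tags give it its own section on the command line and in
	// the config file.
	SignRPC *signrpc.Config `group:"signrpc" namespace:"signrpc"`
}
```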
The `subRpcServerConfigs` struct has two main methods:
`PopulateDependancies` and `FetchConfig`. The `PopulateDependancies` method
is used to dynamically locate and set the config fields for each new
sub-server. As the config may not actually have any fields (if the build
flag is off), we use the reflect package to determine whether things are
compiled in or not, and if so, we dynamically set each of the config
parameters. The `FetchConfig` method implements the
`lnrpc.SubServerConfigDispatcher` interface. Our goal is to allow sub
servers to look up their actual config in this main config struct. We
achieve this by using reflect to look up the target field _as if it were
a key in a map_. If the field is found, then we check if it has any
actual attributes (it won't if the build flag is off). If it does, then we
return it, as we expect it to be populated already.
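A rough sketch of that lookup, building on the struct sketched above (the
exact reflect calls and naming are illustrative, not the verbatim lnd code):

```go
// FetchConfig treats the struct's fields as if they were entries in a map
// keyed by sub-server name, returning the matching sub-config only if it
// carries any real attributes.
func (s *subRpcServerConfigs) FetchConfig(name string) (interface{}, bool) {
	selfValue := reflect.ValueOf(s).Elem()

	// Look up the field whose name matches the registered sub-server name.
	field := selfValue.FieldByNameFunc(func(fieldName string) bool {
		return strings.EqualFold(fieldName, name)
	})
	if !field.IsValid() {
		return nil, false
	}

	// Dereference the pointer to inspect the concrete config struct. If
	// the build flag is off, the struct has zero fields, so there's
	// nothing meaningful to hand back to the sub-server.
	cfg := field.Elem()
	if !cfg.IsValid() || cfg.NumField() == 0 {
		return nil, false
	}

	return field.Interface(), true
}
```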
In this commit, we create an lnrpc.SubServerDriver for signrpc. Note that
this file will only have its init() method executed if the proper build
flag is on. As a result, only if the build flag is set will the RPC
server be registered and visible at the lnrpc package level for the root
server to manipulate.
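A hedged sketch of what such a driver file could look like; the exact
`SubServerDriver` fields and the `New` constructor are assumptions here:

```go
// +build signrpc

package signrpc

import (
	"fmt"

	"github.com/lightningnetwork/lnd/lnrpc"
)

// init only runs when this file is compiled in, i.e. when the signrpc build
// tag is set. Otherwise the root server never learns about this driver.
func init() {
	driver := &lnrpc.SubServerDriver{
		SubServerName: "SignRPC",
		New: func(cfg lnrpc.SubServerConfigDispatcher) (
			lnrpc.SubServer, error) {

			// New is a hypothetical package-level constructor
			// that pulls its config out of the dispatcher.
			return New(cfg)
		},
	}

	if err := lnrpc.RegisterSubServer(driver); err != nil {
		panic(fmt.Sprintf("failed to register sub-server driver %q: %v",
			driver.SubServerName, err))
	}
}
```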
In this commit, we add a full implementation of the new SignerServer sub
RPC service within the main root RPC service. This service is able to
fully manage its macaroons and service any connected clients. At the
moment, this service only has a single method: SignOutputRaw, which mimics
the existing lnwallet.Signer interface within lnd itself. As the APIs are
so similar, it will be possible for a client to directly use the
lnwallet.Signer interface, with a proxy that sends the request over RPC
and translates the proto layer on both sides. The client can't tell
whether it's using a remote or local RPC.
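To make the proxy idea concrete, here is a hedged sketch of a client-side
wrapper that exposes a SignOutputRaw-style method locally while forwarding
each call over gRPC; the request/response field names follow the signrpc
proto as described, but should be treated as illustrative:

```go
// rpcSigner forwards signing requests to a remote (or local) signrpc
// sub-server. Callers just see a SignOutputRaw method; they can't tell
// whether the signer lives in-process or behind an RPC boundary.
type rpcSigner struct {
	client signrpc.SignerClient // generated gRPC client stub
}

func (r *rpcSigner) SignOutputRaw(ctx context.Context, rawTx []byte,
	signDesc *signrpc.SignDescriptor) ([]byte, error) {

	resp, err := r.client.SignOutputRaw(ctx, &signrpc.SignReq{
		RawTxBytes: rawTx,
		SignDescs:  []*signrpc.SignDescriptor{signDesc},
	})
	if err != nil {
		return nil, err
	}
	if len(resp.RawSigs) == 0 {
		return nil, fmt.Errorf("no signature returned for input")
	}

	return resp.RawSigs[0], nil
}
```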
In this commit, we add the scaffolding for the future sub-server RPC
system. The idea is that each sub-server will implement this particular
interface. From there, a "root" RPC server is able to query this
registry, and dynamically create each sub-server instance without
knowing the details of each sub-server.
In the init() method of a sub-server's package, the sub-server calls
RegisterSubServer to claim its namespace. Afterwards, the root
RPC server can use the RegisteredSubServers() method to obtain a slice
of ALL registered sub-servers. Once this list is obtained, it can use
the New() method of the SubServerDriver struct to create a new
sub-server instance.
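A minimal sketch of that registry, assuming a mutex-guarded map inside the
lnrpc package; the SubServer and SubServerConfigDispatcher types are assumed
to be defined elsewhere, and the real `New` closure may return more values:

```go
// SubServerDriver pairs a sub-server's claimed namespace with a constructor
// closure the root server can invoke without knowing any package details.
type SubServerDriver struct {
	// SubServerName is the unique name the sub-server registers under.
	SubServerName string

	// New constructs the sub-server, fetching its config via the
	// dispatcher (return values simplified for this sketch).
	New func(subCfgs SubServerConfigDispatcher) (SubServer, error)
}

var (
	driverMtx sync.Mutex
	drivers   = make(map[string]*SubServerDriver)
)

// RegisterSubServer is called from a sub-server's init() to claim its
// namespace; registering the same name twice is an error.
func RegisterSubServer(driver *SubServerDriver) error {
	driverMtx.Lock()
	defer driverMtx.Unlock()

	if _, ok := drivers[driver.SubServerName]; ok {
		return fmt.Errorf("sub-server already registered: %v",
			driver.SubServerName)
	}

	drivers[driver.SubServerName] = driver
	return nil
}

// RegisteredSubServers returns all drivers registered so far, so the root
// RPC server can instantiate each one via its New() closure.
func RegisteredSubServers() []*SubServerDriver {
	driverMtx.Lock()
	defer driverMtx.Unlock()

	regDrivers := make([]*SubServerDriver, 0, len(drivers))
	for _, d := range drivers {
		regDrivers = append(regDrivers, d)
	}
	return regDrivers
}
```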
Each sub-server needs to be able to locate its primary config using the
SubServerConfigDispatcher interface. This can be a map of maps, or a
regular config struct. The main requirement is that the sub-server be
able to find a config under the same name that it registered with. This
chain of abstractions will allow the main RPC server to find, create,
and run each sub-server without knowing the details of its configuration
or its role.
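The dispatcher interface itself can be as small as a single lookup method.
A sketch, along with a toy map-backed implementation of the "map of configs"
option mentioned above:

```go
// SubServerConfigDispatcher lets a sub-server locate its primary config by
// the same name it registered with.
type SubServerConfigDispatcher interface {
	// FetchConfig returns the config registered under the given
	// sub-server name, and whether it was found at all.
	FetchConfig(subServerName string) (interface{}, bool)
}

// mapConfigDispatcher is a toy dispatcher backed by a plain map, one of the
// shapes the interface permits; the root server instead uses its main
// config struct for this purpose.
type mapConfigDispatcher map[string]interface{}

func (m mapConfigDispatcher) FetchConfig(name string) (interface{}, bool) {
	cfg, ok := m[name]
	return cfg, ok
}
```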
In this commit, we add a new proto generation script to match the one in
the main lnrpc package. This script differs, as we don't need to
generate the REST proxy stuff (for now).
In this commit, we introduce a new sub-package within the greater RPC
package. This new sub-package will house a new set of sub-RPC servers
to expose experimental features behind build flags for upstream
consumers. In this commit, we add the first config for the service,
which will simply expose the lnwallet.Signer interface over RPC.
In the default file, we have what the config will be if the build tag
(signrpc) is off. In this case, the config parser won't detect any new
fields, and specifying any will result in an error. In the active file, we
have the true config that the server will use. With this new setup, we'll
exploit these build flags heavily in order to create a generalized
framework for adding additional sub RPC servers.
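Roughly, the pair of files looks like this, one compiled with the tag off
and one with it on; the field in the active variant is illustrative:

```go
// config_default.go: compiled only when the signrpc build tag is off.

// +build !signrpc

package signrpc

// Config is empty when the sub-server is compiled out, so the flag parser
// registers nothing for it.
type Config struct{}
```

And the active counterpart:

```go
// config_active.go: compiled only when the signrpc build tag is on.

// +build signrpc

package signrpc

import "github.com/lightningnetwork/lnd/lnwallet"

// Config is the real config the signrpc server uses; the root server
// populates these dependencies via PopulateDependancies.
type Config struct {
	// Signer is the lnwallet.Signer instance exposed over RPC.
	Signer lnwallet.Signer
}
```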
In this commit, we update the makefile to be aware of go modules. Along
the way, we remove all references to dep as we no longer use it within
this project. Note that in order to allow usage of go modules within the
$GOPATH directory, we set the `GO111MODULE=on` environment variable.
In this commit, we modify all existing historical rescans for
ChainNotifier backends to scan backwards rather than forwards. If we
know that a transaction has been confirmed or an outpoint spent, then it's
likely that the event has recently transpired, assuming we've only been
offline for a short period of time. Therefore, if we scan backwards
rather than forwards, we can save potentially hundreds or thousands
of block fetches when the event happened close to the tip of the
chain.
We bound this search at the genesis block, to ensure we don't underflow
the uint32 used throughout the package in the main loop.
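A rough sketch of the bounded backwards scan; the block-fetching and
matching helpers here are placeholders, not real chainntnfs APIs:

```go
// scanBackwards walks from the chain tip down towards startHeight, returning
// the height at which the target confirmation or spend is found. The explicit
// check at height zero keeps the uint32 from wrapping around.
func scanBackwards(startHeight, bestHeight uint32) (uint32, error) {
	for height := bestHeight; height >= startHeight; height-- {
		block, err := fetchBlockAtHeight(height)
		if err != nil {
			return 0, err
		}

		if blockContainsTarget(block) {
			// Events that happened near the tip are found after
			// only a handful of fetches, instead of scanning the
			// whole range from startHeight forwards.
			return height, nil
		}

		// Never decrement past the genesis block: 0 - 1 on a uint32
		// wraps to math.MaxUint32 and the loop would never terminate.
		if height == 0 {
			break
		}
	}

	return 0, errTargetNotFound
}
```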
This adds the scenario where a channel is closed while the node is
offline, the node loses state, and then comes back online. In this case, the
node should attempt to resync the channel, and the peer should resend a
channel sync message for the closed channel, such that the node can
retrieve its funds.
We poll the database for the channel commit point with an exponential
backoff. This is meant to handle the case where we are in the process of
handling a channel sync, and the case where we detect a channel close
and must wait for the peer to come online to start channel sync before
we can proceed.
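A sketch of that polling loop with exponential backoff; the fetch closure
and the exact bounds are illustrative, not the test's actual helpers:

```go
// pollCommitPoint retries the supplied fetch closure until it succeeds,
// doubling the wait between attempts up to a cap, so the caller tolerates
// both an in-flight channel sync and a peer that has yet to come online.
func pollCommitPoint(fetch func() (*btcec.PublicKey, error)) (*btcec.PublicKey, error) {
	const (
		maxAttempts = 20
		maxBackoff  = 10 * time.Second
	)
	backoff := 100 * time.Millisecond

	for i := 0; i < maxAttempts; i++ {
		commitPoint, err := fetch()
		if err == nil {
			return commitPoint, nil
		}

		// Not there yet: back off and try again.
		time.Sleep(backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}

	return nil, errors.New("commit point not found in time")
}
```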