Add more fields to the channel acceptor response so that users have
more fine-grained control over their incoming channels. With our
chained acceptor, it is possible that we get inconsistent responses
from multiple chained acceptors. We create a merged response from all
the set fields in our various responses, but fail if we get different,
non-zero responses from our various acceptors. Separate merge functions
are used per type so that we avoid unexpected outcomes when comparing
interfaces (comparing types that aren't comparable panics), with
casting used where applicable to avoid code duplication.
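
A minimal sketch of such a per-type merge (the helper name and the
uint16 field type are illustrative, not the exact lnd code):

    import "fmt"

    // mergeUint16 merges a single uint16 field from two responses: an
    // unset (zero) value defers to the set one, equal values pass
    // through, and two different non-zero values are an error. Typed
    // helpers like this avoid comparing interface values, which panics
    // when the underlying types aren't comparable.
    func mergeUint16(current, updated uint16) (uint16, error) {
        switch {
        case current == 0:
            return updated, nil
        case updated == 0:
            return current, nil
        case current != updated:
            return 0, fmt.Errorf("conflicting values %v and %v",
                current, updated)
        default:
            return current, nil
        }
    }
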
This commit adds an optional error message to the channel acceptor's
response to allow operators to inform (or insult) unsuccessful channel
initiators as to the reason for their rejection.
This field is added in addition to the existing accept field to
maintain backwards compatibility. If we were to deprecate accept and
interpret a non-nil error as rejecting the channel, then a response
with accept=false and a nil error would be ambiguous: the server could
not tell whether it is a legacy rejection or an acceptance in the new
message format (due to the nil error), so we keep both fields.
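
Roughly, the response then carries both signals (field names below are
illustrative rather than the exact lnd types):

    // ChannelAcceptResponse keeps the legacy accept flag alongside the
    // new, optional error, so accept=false with no error remains an
    // unambiguous legacy rejection.
    type ChannelAcceptResponse struct {
        // Accept is the authoritative accept/reject decision.
        Accept bool

        // Error optionally tells the channel initiator why their
        // channel was rejected. It never decides acceptance on its
        // own.
        Error string
    }
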
This commit moves and partially refactors the channel acceptor logic
added in c2a6c86e into the channel acceptor package. This allows us to
use the same logic in our unit tests as in the rpcserver, rather than
needing to replicate it in the unit tests.
Two changes are made to the existing implementation (see the sketch
after this list):
- Rather than having the Accept function run a closure, the closure
originally used in the rpcserver is moved directly into Accept.
- The done channel used to signal client exit is moved into the
acceptor, because the rpc server does not need knowledge of this
detail (in addition to other fields required for mocking the actual
rpc).
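
A condensed sketch of the resulting shape (types and channel plumbing
are approximations of the real code, with timeouts and error handling
elided):

    // ChannelAcceptRequest is elided here; it wraps the incoming
    // OpenChannel message.
    type ChannelAcceptRequest struct{}

    // chanAcceptInfo pairs a request with the channel its verdict is
    // delivered on.
    type chanAcceptInfo struct {
        request  *ChannelAcceptRequest
        response chan bool
    }

    type RPCAcceptor struct {
        // requests feeds incoming channel requests to the goroutine
        // that forwards them to the rpc client.
        requests chan *chanAcceptInfo

        // done is closed when the rpc client exits. It now lives on
        // the acceptor itself, so the rpcserver needs no knowledge of
        // it.
        done chan struct{}
    }

    // Accept runs the round trip that previously lived as a closure in
    // the rpcserver: hand the request to the client, then wait for
    // either a verdict or the client going away.
    func (r *RPCAcceptor) Accept(req *ChannelAcceptRequest) bool {
        respChan := make(chan bool, 1)

        select {
        case r.requests <- &chanAcceptInfo{request: req, response: respChan}:
        case <-r.done:
            return false
        }

        select {
        case accepted := <-respChan:
            return accepted
        case <-r.done:
            return false
        }
    }
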
Crediting original committer as co-author:
Co-authored-by: Crypt-iQ
This will prevent the subservers from writing macaroons to disk
when the stateless_init flag is set to true. It accomplishes
this by storing the StatelessInit value in the Macaroon Service.
Because we'll need to return the macaroon through the wallet unlocker,
we cannot shut down its service before we have done so; otherwise we'll
end up in a deadlock. That's why we collect all shutdown tasks and
return them as a function that can be called after we've initialized
the macaroon service.
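
In pseudo-Go, the pattern looks roughly like this (the setup and
service names are placeholders, not the actual lnd calls):

    // setUpServices starts everything that needs cleanup but stops
    // nothing eagerly. It returns a single function that runs all
    // collected shutdown tasks, to be invoked only after the macaroon
    // has been returned through the wallet unlocker.
    func setUpServices() (shutdown func(), err error) {
        var shutdownFuncs []func()
        shutdown = func() {
            // Run cleanups in reverse order of registration.
            for i := len(shutdownFuncs) - 1; i >= 0; i-- {
                shutdownFuncs[i]()
            }
        }

        // startWalletUnlocker stands in for the real setup call.
        unlocker, err := startWalletUnlocker()
        if err != nil {
            return nil, err
        }
        shutdownFuncs = append(shutdownFuncs, unlocker.Stop)

        // ...later services append their own cleanup here...

        return shutdown, nil
    }
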
As preparation for the next commit, where we add proper wallet unlocker
shutdown handling, we move the calls that require cleanup down, to
after the creation of the wallet unlocker service itself.
To make sure no macaroons are created anywhere if stateless
initialization was requested, we keep the requested initialization mode
in the memory of the macaroon service.
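
Sketched below; the write-guard helper is hypothetical, the field on
the service is the point:

    import "io/ioutil"

    type Service struct {
        // StatelessInit records the initialization mode requested at
        // unlock time. When true, no macaroon may ever be written to
        // disk by this service or by any subserver using it.
        StatelessInit bool
    }

    // persistMacaroon is a hypothetical guard that every disk write
    // would pass through.
    func (svc *Service) persistMacaroon(path string, mac []byte) error {
        if svc.StatelessInit {
            // Stateless mode: macaroons are only handed out over
            // rpc, never persisted.
            return nil
        }
        return ioutil.WriteFile(path, mac, 0600)
    }
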
This commit adds the --stateless_init flag to all three wallet unlocker
operations. Once you initialize a wallet statelessly, you need to set
this flag for every further wallet unlocker operation. Otherwise you
risk leaking unencrypted macaroon information to the underlying system.
Similarly to kvdb.View, this commit adds a reset closure to the
kvdb.Update call in order to be able to reset external state if the
underlying db backend needs to retry the transaction.
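
For example, assuming the post-change signature kvdb.Update(db, f,
reset), external bookkeeping is rewound before every attempt:

    func storeValue(db kvdb.Backend, bucketKey, key, value []byte) error {
        // itemsWritten stands in for any external state mutated inside
        // the update closure.
        var itemsWritten int

        return kvdb.Update(db, func(tx kvdb.RwTx) error {
            bucket, err := tx.CreateTopLevelBucket(bucketKey)
            if err != nil {
                return err
            }

            itemsWritten++
            return bucket.Put(key, value)
        }, func() {
            // Called before each (re)try, so a retried transaction
            // doesn't double count.
            itemsWritten = 0
        })
    }
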
This commit adds a reset() closure to the kvdb.View function which will
be called before each retry (including the first attempt) of the view
transaction. The reset() closure can be used to reset external state
(e.g. slices or maps) where the view closure puts intermediate results.
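
A small usage sketch with the same assumed signature, kvdb.View(db, f,
reset); the bucket key and value handling are placeholders:

    // fetchAllValues collects the raw values of a bucket. The reset
    // closure clears the slice before each attempt, so results from an
    // aborted transaction never leak into the final answer.
    func fetchAllValues(db kvdb.Backend, bucketKey []byte) ([][]byte, error) {
        var values [][]byte

        err := kvdb.View(db, func(tx kvdb.RTx) error {
            bucket := tx.ReadBucket(bucketKey)
            if bucket == nil {
                return nil
            }

            return bucket.ForEach(func(_, v []byte) error {
                values = append(values, v)
                return nil
            })
        }, func() {
            values = nil
        })
        if err != nil {
            return nil, err
        }

        return values, nil
    }
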
Without this change, the high-fee logic is never tested, as the case is
instead caught by the dust-output logic. This change uses a higher fee
rate to ensure an output value above the dust limit while still
spending 20% on fees.
To allow nodes more control over the amount of time that their funds
will be locked up, we add a MaxLocalCSVDelay option which sets the
maximum CSV delay we will accept for all channels. We default to the
existing constant of 10000, and set a sane minimum on this value so
that clients cannot set unreasonably low maximum CSV delays which would
result in their node rejecting all channels.
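
The implied validation, sketched with an assumed sanity floor (the
actual minimum may differ):

    import "fmt"

    const (
        // defaultMaxLocalCSVDelay mirrors the previous hard-coded
        // constant.
        defaultMaxLocalCSVDelay = 10000

        // minMaxLocalCSVDelay is an assumed floor; without one, a tiny
        // maximum would make the node reject every channel.
        minMaxLocalCSVDelay = 144
    )

    func validateMaxLocalCSVDelay(value uint16) (uint16, error) {
        switch {
        case value == 0:
            // Unset: fall back to the existing default.
            return defaultMaxLocalCSVDelay, nil
        case value < minMaxLocalCSVDelay:
            return 0, fmt.Errorf("max local csv delay %v is below "+
                "minimum %v", value, minMaxLocalCSVDelay)
        default:
            return value, nil
        }
    }
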
To make sure the test that takes the longest overall time is always
started first, independent of the number of test tranches we run, we
move it to the beginning of the list. Because that test involves a lot
of waiting, it allows us to play around with the number of tranches more
efficiently.
Updating the fee of the mock estimator _after_ starting carol turned
out to be flaky and could lead to the new fee not being picked up in
time for the force close. That led to carol not cpfp'ing the force
closed transaction.
To allow running multiple test tranches in parallel, we need a way to
make sure the TCP ports don't collide. We'll work with offsets for the
ports, using a different offset for each tranche.
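
Schematically, with made-up constants:

    const (
        // basePort is where the first tranche starts allocating.
        basePort = 10000

        // trancheOffset spaces tranches far enough apart that their
        // port ranges can never overlap.
        trancheOffset = 1000
    )

    // nextPort returns the n-th port for the given tranche index;
    // distinct tranches draw from disjoint ranges, so parallel runs
    // cannot collide on a TCP port.
    func nextPort(tranche, n int) int {
        return basePort + tranche*trancheOffset + n
    }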