When we cancel a confirmation request, we should remove the request from
the height map regardless of the current height. Otherwise we end up in
a situation where, once the height is reached, a notification is
attempted for the already cancelled request, which results in a crash.
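A minimal sketch of the intended behaviour, with made-up type and field names rather than the actual notifier code:

```go
import "sync"

// requestID identifies a pending confirmation request.
type requestID uint64

// notifier is a stripped-down stand-in: pending requests are also indexed
// by the block height at which their notification should fire.
type notifier struct {
	mu            sync.Mutex
	confRequests  map[requestID]struct{}
	ntfnsByHeight map[uint32]map[requestID]struct{}
}

// cancelConf removes a request. The height-map entry is deleted
// unconditionally: skipping the deletion when confHeight was already
// reached left a dangling entry that crashed once the height notification
// was dispatched.
func (n *notifier) cancelConf(req requestID, confHeight uint32) {
	n.mu.Lock()
	defer n.mu.Unlock()

	delete(n.confRequests, req)

	if set, ok := n.ntfnsByHeight[confHeight]; ok {
		delete(set, req)
		if len(set) == 0 {
			delete(n.ntfnsByHeight, confHeight)
		}
	}
}
```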
This commit enables lnd to request and renew a Let's Encrypt
certificate. The certificate is used for both the gRPC and the REST
listeners. It allows clients to connect without having a copy of the
(public) server certificate.
Co-authored-by: Vegard Engen <vegard@engen.priv.no>
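For reference, enabling this in lnd.conf might look roughly like the sketch below; the option names are assumptions from memory, so check `lnd --help` or the sample config before relying on them:

```
[Application Options]
; Domain to request a Let's Encrypt certificate for, the listener the ACME
; HTTP challenge is served on, and where the certificate is stored.
; (Option names are illustrative, not verified.)
letsencryptdomain=node.example.com
letsencryptlisten=:80
letsencryptdir=~/.lnd/letsencrypt
```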
The disk availability health check is less critical than our chain
access check, and may break existing setups (particularly mobile) if we
enable it by default. Here we disable it by default, but leave our other
default values in place so that it can easily be flipped on.
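Flipping it on would then be a matter of setting something along these lines in lnd.conf; both the option names and the attempts-based enabling shown here are assumptions, not verified against config.go:

```
[healthcheck]
; Fraction of total disk space that must remain free, how often to check,
; and a non-zero attempt count to enable the check.
healthcheck.diskspace.diskrequired=0.1
healthcheck.diskspace.interval=12h
healthcheck.diskspace.attempts=2
```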
As we already create two channels in our PSBT funding flow itest, we can
easily just submit the final transaction for the second channel in the
raw wire format to test this new functionality.
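Concretely, the second channel's finalize step can hand over the raw transaction instead of a signed PSBT. This is a hedged sketch against the lnrpc funding API, with the surrounding itest harness variables (`net`, `ctx`, `t`, `pendingChanID`, `rawTxBytes`) assumed:

```go
// Finalize the pending channel by submitting the final transaction in raw
// wire format rather than as a signed PSBT.
_, err := net.Alice.FundingStateStep(ctx, &lnrpc.FundingTransitionMsg{
	Trigger: &lnrpc.FundingTransitionMsg_PsbtFinalize{
		PsbtFinalize: &lnrpc.FundingPsbtFinalize{
			PendingChanId: pendingChanID[:],
			FinalRawTx:    rawTxBytes,
		},
	},
})
if err != nil {
	t.Fatalf("finalizing channel with raw tx failed: %v", err)
}
```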
- let users specify their maximum wumbo channel size with a new config option which sets the maximum channel size lnd will accept
- the current implementation is a simple check by the fundingManager rather than anything to do with the ChannelAcceptor (see the sketch below)
- add test cases which verify that the maximum channel limit is respected for wumbo/non-wumbo channels
- use --maxchansize=0 to distinguish a set from an unset config value. If the user explicitly sets the maximum to 0 it has no effect, since 0 is currently used to tell the funding manager that the limit should not be enforced. This seems justifiable because --maxchansize=0 doesn't make sense at first glance.
- add an integration test case to ensure that config parsing and validation work correctly. I simplified the funding manager's check, electing to rely on config.go to correctly parse and set up either i) the non-wumbo default limit of 0.16 BTC or ii) the wumbo default soft limit of 10 BTC
Addresses: https://github.com/lightningnetwork/lnd/issues/4557
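The funding manager check referred to above is essentially the following; this is a sketch with assumed names, not the exact lnd code:

```go
import (
	"fmt"

	"github.com/btcsuite/btcutil"
)

// validateChanSize rejects funding requests above the configured maximum.
// A configured value of 0 means the limit is not enforced.
func validateChanSize(amt, maxChanSize btcutil.Amount) error {
	if maxChanSize != 0 && amt > maxChanSize {
		return fmt.Errorf("channel size %v exceeds maximum of %v",
			amt, maxChanSize)
	}
	return nil
}
```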
This is required to make restart work for LndMobile builds.
Not calling UnloadWallet would make `UnlockWallet` stall forever, as the
wallet file is still open.
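A minimal sketch of the idea, assuming a btcwallet-style loader:

```go
import "github.com/btcsuite/btcwallet/wallet"

// stopWallet releases the wallet database file handle before a restart.
// Without this, the next UnlockWallet call stalls forever because the file
// is still held open by the previous instance.
func stopWallet(loader *wallet.Loader) error {
	return loader.UnloadWallet()
}
```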
Due to a misunderstanding about how the entities/actions are encoded
inside the macaroon, only the first action was printed per entity.
Even though we add them as separate pairs in the macaroon service (for
example "offchain:read" and "offchain:write"), they are grouped in the
serialized macaroon ("offchain:read,write").
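Illustrative only: when printing, the grouped form has to be expanded back into one entity/action pair per action, along these lines:

```go
import "strings"

// expandPermissions turns serialized permissions such as
// "offchain:read,write" back into the individual entity/action pairs.
func expandPermissions(ops []string) [][2]string {
	var pairs [][2]string
	for _, op := range ops {
		parts := strings.SplitN(op, ":", 2)
		if len(parts) != 2 {
			continue
		}
		for _, action := range strings.Split(parts[1], ",") {
			pairs = append(pairs, [2]string{parts[0], action})
		}
	}
	return pairs
}
```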
To be spec compliant, we require the initiator to not pay the anchor
values into fees on coop close. We extract the balance calculation into
commitment.go, and add back the value of the anchors to the initiator's
balance.
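A hedged sketch of the adjustment, assuming the standard 330-satoshi anchor outputs (the helper and parameter names are made up):

```go
import "github.com/btcsuite/btcutil"

// anchorSize is the value of each of the two anchor outputs on the
// commitment transaction.
const anchorSize = btcutil.Amount(330)

// closeBalance returns the initiator's balance for the coop close
// transaction: the two anchor values are added back before the close fee
// is subtracted, so the anchors are not silently paid into fees.
func closeBalance(ourBalance, coopCloseFee btcutil.Amount,
	isInitiator, hasAnchors bool) btcutil.Amount {

	if isInitiator && hasAnchors {
		ourBalance += 2 * anchorSize
	}
	if isInitiator {
		ourBalance -= coopCloseFee
	}
	return ourBalance
}
```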
Give external subservers the option to also use their own validator to
check any macaroons attached to calls on their registered gRPC URIs.
This allows them to have their own root key ID database and permission
entities.
When external subservers register themselves to be served through the
same gRPC interface as the main lnd RPC, their requests are also
intercepted by the main lnd macaroon interceptor.
If the external subservers want to use their own macaroons that are
independent of lnd's, they need a way to override the default validator
of the macaroon interceptor. We add this mechanism with the concept of
external validators.
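Conceptually, the hook looks something like the following; the interface and lookup are a sketch with assumed names (`bakery.Op` comes from the macaroon-bakery package), not necessarily lnd's exact API:

```go
import (
	"context"

	"gopkg.in/macaroon-bakery.v2/bakery"
)

// MacaroonValidator is what an external subserver plugs in to validate
// macaroons on its own URIs, backed by its own root key ID database and
// permission entities.
type MacaroonValidator interface {
	ValidateMacaroon(ctx context.Context,
		requiredPermissions []bakery.Op, fullMethod string) error
}

// interceptor is a stand-in for the main lnd macaroon interceptor.
type interceptor struct {
	externalValidators map[string]MacaroonValidator
	defaultValidator   MacaroonValidator
}

// validatorFor returns the external validator registered for a fully
// qualified gRPC URI, falling back to lnd's default validator.
func (i *interceptor) validatorFor(fullMethod string) MacaroonValidator {
	if v, ok := i.externalValidators[fullMethod]; ok {
		return v
	}
	return i.defaultValidator
}
```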
When starting up with lnd.conf that contains the sample line
"tor.active", lnd crashes and prints the error:
malformed key=value (tor.active)
Using "tor.active=true" instead works as expected.
Since we store an all-time flap count for a peer, we add a cooldown
factor which discounts high flap counts accumulated in the past. This is
only applied to peers that have not flapped for at least a cooldown
period, so that we do not relax our rate limiting for peers that are
still behaving badly.
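A hedged sketch of the cooldown, with made-up parameter values:

```go
import "time"

const (
	// Illustrative values only.
	flapCountCooldownPeriod = 6 * time.Hour
	flapCountCooldownFactor = 0.95
)

// cooledFlapCount discounts a peer's all-time flap count only if the peer
// has not flapped for at least the cooldown period, so recently flapping
// peers keep their full count and their rate limiting is not weakened.
func cooledFlapCount(flapCount int, lastFlap, now time.Time) int {
	if now.Sub(lastFlap) < flapCountCooldownPeriod {
		return flapCount
	}
	return int(float64(flapCount) * flapCountCooldownFactor)
}
```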
Since we will use a peer's flap rate to determine how we rate limit it,
we store this value on disk per peer per channel. This allows us to
restart with a memory of our peers' past behaviour, so that badly
behaving peers don't get a fresh start on restart. The last flap
timestamp is stored with the flap count so that we can degrade this
all-time count over time for peers that have not recently flapped.
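Roughly, the persisted value is just the count plus the last flap time; an assumed sketch of the record:

```go
import "time"

// flapCount is the value persisted for each peer so that flap history
// survives restarts and stale counts can be degraded over time.
type flapCount struct {
	// Count is the all-time number of flaps (online/offline
	// transitions) recorded for the peer.
	Count uint32

	// LastFlap is the timestamp of the most recent flap, used to decide
	// whether the cooldown discount applies.
	LastFlap time.Time
}
```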
To prevent flapping peers from endlessly DoS-ing us with online and
offline events, we rate limit the number of events we will store per
period, using their flap rate to determine how often we will add their
events to our in-memory list of online events.
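A hedged sketch of how a flap count might map to a rate-limiting backoff; the tiers are made up:

```go
import "time"

// rateLimitPeriod returns how long we wait between storing events for a
// peer, scaling with how much it has flapped. Thresholds are illustrative.
func rateLimitPeriod(flapCount int) time.Duration {
	switch {
	case flapCount < 10:
		return 0
	case flapCount < 100:
		return time.Minute
	case flapCount < 1000:
		return 10 * time.Minute
	default:
		return time.Hour
	}
}
```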
Since we are tracking online events, we need to track the aggregate
change over the rate limited period, otherwise we will lose track of
a peer's current state. For example, if we store an online event, then
do not store the subsequent offline event, we will believe that the
peer is online when they actually aren't. To address this, we "stage"
a single event which keeps track of all the events that occurred while
we were rate limiting the peer. At the end of the rate limiting period,
we will store the last state for that peer, thereby ensuring that
we maintain our record of their most recent state.
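A hedged sketch of that staging logic, with assumed names and structure:

```go
import "time"

// event is a single online/offline transition for a peer.
type event struct {
	timestamp time.Time
	online    bool
}

// peerLog is a minimal stand-in for the per-peer event log.
type peerLog struct {
	rateLimit  time.Duration // derived from the peer's flap count
	lastStored time.Time
	staged     *event  // most recent event seen while rate limited
	events     []event // events we have actually stored
}

// observe stores an event directly when the peer is not currently rate
// limited. While it is, the event only overwrites the single staged entry,
// so the peer's most recent state is flushed once the period ends and no
// aggregate online/offline change is lost.
func (p *peerLog) observe(now time.Time, online bool) {
	e := event{timestamp: now, online: online}

	if now.Sub(p.lastStored) < p.rateLimit {
		p.staged = &e
		return
	}

	if p.staged != nil {
		p.events = append(p.events, *p.staged)
		p.staged = nil
	}
	p.events = append(p.events, e)
	p.lastStored = now
}
```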