We cap the maximum value we'll reserve for anchor channel fee bumping at
10 times the per-channel amount, so that nodes with a high number of
channels don't have to keep a very large amount around for the unlikely
scenario that they all close at the same time.
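A minimal sketch of the cap, assuming a hypothetical per-channel reserve
constant and helper name (neither is necessarily the real identifier):

```go
// requiredReserve returns the amount (in satoshis) to keep around for
// fee bumping anchor channel closes. The reserve grows with the number
// of anchor channels but is capped at 10x the per-channel amount.
func requiredReserve(numAnchorChans int) int64 {
	// Illustrative per-channel reserve; the real value is an
	// implementation detail.
	const perChannelReserve int64 = 10_000 // sats

	reserve := int64(numAnchorChans) * perChannelReserve

	// Cap at 10 times the per-channel amount so nodes with many
	// channels don't lock up an excessive amount of funds.
	if maxReserve := 10 * perChannelReserve; reserve > maxReserve {
		reserve = maxReserve
	}
	return reserve
}
```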
This commit adds height-based invoice expiry for hodl invoices
that have active htlcs. This allows us to cancel our intentionally
held htlcs before channels are force closed. We only add this for
hodl invoices because we expect regular invoices to automatically
be resolved.
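The core check is simple; as a sketch (the names and the exact safety
margin are assumptions), an accepted htlc on a hodl invoice is canceled
once the current height gets too close to the htlc's expiry:

```go
// shouldCancelHeldHtlc reports whether an intentionally held htlc
// should be canceled at the given block height. cltvDelta is the
// safety margin (in blocks) we keep before the htlc's expiry height,
// after which the channel would be force closed.
func shouldCancelHeldHtlc(htlcExpiry, currentHeight, cltvDelta uint32) bool {
	return currentHeight+cltvDelta >= htlcExpiry
}
```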
We still keep hodl invoices in the time-based expiry queue,
because we want to expire open invoices that reach their timeout
before any htlcs are added. Since htlcs are added after the
invoice is created, we enqueue new entries as htlcs arrive in the
invoice registry. In this commit, we allow duplicate entries for
an invoice to be added to the expiry queue, one per arriving htlc,
to keep the implementation simple. Our cancellation logic can
already handle the case where an entry has been canceled, so this
is ok.
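To illustrate the duplicate-entry approach (a simplified model, not the
actual registry code): each arriving htlc pushes a fresh entry onto the
height-based queue, and the cancel path skips entries whose invoice is
already canceled, which is what makes duplicates harmless.

```go
// blockExpiry ties an invoice to the height at which it should be
// canceled. Duplicate entries for the same invoice are allowed.
type blockExpiry struct {
	invoiceRef   string // hypothetical invoice identifier
	expiryHeight uint32
}

type expiryWatcher struct {
	queue    []blockExpiry   // in reality a priority queue keyed on height
	canceled map[string]bool // invoices that have already been canceled
}

func newExpiryWatcher() *expiryWatcher {
	return &expiryWatcher{canceled: make(map[string]bool)}
}

// addHtlc is called from the invoice registry whenever a new htlc is
// accepted for a hodl invoice. We simply append a new entry, even if
// the invoice is already tracked.
func (w *expiryWatcher) addHtlc(invoiceRef string, expiryHeight uint32) {
	w.queue = append(w.queue, blockExpiry{invoiceRef, expiryHeight})
}

// onBlock cancels every invoice whose entry has expired at the given
// height. Entries for already-canceled invoices are ignored.
func (w *expiryWatcher) onBlock(height uint32, cancel func(string)) {
	remaining := w.queue[:0]
	for _, e := range w.queue {
		switch {
		case e.expiryHeight > height:
			remaining = append(remaining, e)
		case !w.canceled[e.invoiceRef]:
			w.canceled[e.invoiceRef] = true
			cancel(e.invoiceRef)
		}
	}
	w.queue = remaining
}
```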
AMP invoices need to signal:
- AMPRequired, in order to avoid being paid by older clients that don't
support it.
- They can't advertise MPP as optional, otherwise older clients will
attempt to pay the invoice with a regular MPP payment.
Hence, the features advertised on AMP invoices are mutually exclusive
with those advertised on MPP invoices. Create a new feature set to
classify the two.
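As a sketch of the split (the feature-bit numbers and helper below are
illustrative, not lnd's exact identifiers): the regular invoice set
advertises MPP as optional, while the AMP set requires AMP and
deliberately leaves optional MPP out.

```go
// Hypothetical feature-bit numbers, for illustration only.
const (
	paymentAddrOptional = 15
	mppOptional         = 17
	ampRequired         = 30
)

// invoiceFeatures returns the feature bits to advertise on an invoice.
// AMP invoices get their own set: AMP is required so older clients
// won't attempt the payment at all, and optional MPP is omitted so
// they don't try to pay it as a regular MPP payment.
func invoiceFeatures(amp bool) map[int]struct{} {
	if amp {
		return map[int]struct{}{
			paymentAddrOptional: {},
			ampRequired:         {},
		}
	}
	return map[int]struct{}{
		paymentAddrOptional: {},
		mppOptional:         {},
	}
}
```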
It seems #5246 introduced a subtle bug that led to the error "out of
order block: expecting height=1, got height=XXX" sometimes showing up
during startup. Apparently it can happen that while the graph tip is
being pruned, some blocks come in before we start our chain view and
the new block subscription. By querying the chain backend for the best
height before syncing with the graph we ensure that we never miss a
block.
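Conceptually (the method names below are placeholders, not the real
chain view or graph APIs), the startup ordering becomes: query the
backend's best height first, sync the graph up to it, and only then
start the block subscription, so any block arriving in between can't be
treated as out of order.

```go
// Illustrative stand-ins for the real chain backend and graph types.
type chainBackend interface {
	BestBlock() (hash string, height uint32, err error)
	SubscribeBlocks(fromHeight uint32) error
}

type channelGraph interface {
	SyncToHeight(height uint32) error
}

// startRouter sketches the ordering fix: fetch the best height before
// syncing the graph, so blocks that arrive during pruning are never
// mistaken for the first block after the graph's stale tip.
func startRouter(chain chainBackend, graph channelGraph) error {
	// Ask the chain backend for its current tip first.
	_, bestHeight, err := chain.BestBlock()
	if err != nil {
		return err
	}

	// Sync/prune the graph up to that height.
	if err := graph.SyncToHeight(bestHeight); err != nil {
		return err
	}

	// Only now start the block subscription; anything above
	// bestHeight will be processed in order from here.
	return chain.SubscribeBlocks(bestHeight)
}
```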
The router has a lot of work to do for each block. So it might be
possible that it isn't yet up to date with the most recent block,
even if the wallet is. This can happen in environments with high CPU
load (such as parallel itests). Since the `synced_to_chain` flag in
the response of this call is used by many wallets (and also our
itests) to make sure everything is up to date, we add the router's
state to it. The flag will only toggle to true once the router has
also caught up.
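As a sketch of the combined check (the helper and its parameters are
assumptions, not the actual RPC server code):

```go
// syncedToChain computes the value reported in the synced_to_chain
// flag. It only returns true once both the wallet and the router have
// caught up to the chain's best height.
func syncedToChain(walletSynced bool, routerHeight, bestHeight uint32) bool {
	routerSynced := routerHeight >= bestHeight
	return walletSynced && routerSynced
}
```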
The router subsystem has its own goroutine that receives chain updates
and then does its (quite time consuming) work on each new block. To make
it possible to find out which block the router is currently synced to,
we export its internal best height through a new method.
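A minimal sketch of such an accessor, assuming a `SyncedHeight` method
name and a simplified router struct (both are assumptions about the new
method, not its exact shape):

```go
package routing // illustrative package name

import "sync"

// ChannelRouter tracks the last block it has fully processed.
type ChannelRouter struct {
	mu         sync.Mutex
	bestHeight uint32
}

// SyncedHeight returns the block height the router has processed up
// to. Callers such as the RPC server can compare it against the
// wallet's height when reporting synced_to_chain.
func (r *ChannelRouter) SyncedHeight() uint32 {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.bestHeight
}
```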
Having it set to nil caused https://github.com/lightningnetwork/lnd/issues/5115
The problem was several layers removed from the fix. The link decides to
clean up a `fwdPkg` only if it's completed, otherwise it renotifies the
HTLCs. A package is only set to complete if its `addAck` and
`settleFail` filters are full. For forwarded HTLCs, the `addAck` was
never being set, so the package would never be considered complete under
this criterion.
`addAck` is set for an HTLC when signing the next commitment TX in the
`LightningChannel`. The path for this is:
* `LightningChannel#SettleHtlc` adds the HTLC to `localUpdates`
* `LightningChannel#SignNextCommitment` builds the `ackAddRef` for all
updates with `SourceRef != nil`.
* `LightningChannel#SignNextCommitment` then passes the list of
`ackAddRef` to `OpenChannel#AppendRemoteCommitChain` to persist the new
acks in the filter.
Since `SourceRef` was nil for interceptor packages, `SignNextCommitment`
ignored it and the ack was never persisted.
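In simplified form (the types and field names below approximate the
channel state machine, they are not the exact lnd definitions), the
filter applied when signing the next commitment looks like this, which
is why a nil `SourceRef` silently drops the ack:

```go
// ackAddRef points back to the add entry in the forwarding package
// whose addAck filter should be set once the next commitment is signed.
type ackAddRef struct {
	height uint64
	index  uint16
}

// paymentDescriptor is a stand-in for an update in localUpdates.
type paymentDescriptor struct {
	// SourceRef identifies the incoming add this settle/fail responds
	// to. For intercepted packets it was left nil, so no ack was built.
	SourceRef *ackAddRef
}

// collectAckRefs mirrors the filtering done in SignNextCommitment:
// only updates that carry a SourceRef produce an ack to persist. With
// a nil SourceRef the fwdPkg's addAck filter never fills up and the
// package is never considered complete, so the link keeps renotifying
// the HTLC instead of cleaning the package up.
func collectAckRefs(updates []*paymentDescriptor) []ackAddRef {
	var acks []ackAddRef
	for _, pd := range updates {
		if pd.SourceRef == nil {
			continue
		}
		acks = append(acks, *pd.SourceRef)
	}
	return acks
}
```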
This commit makes the handoff procedure between the breacharbiter and
the chainwatcher use a function closure to mark the channel pending
closed in the DB. Doing it this way we know that the channel has been
marked pending closed in the DB when ProcessACK returns.
The reason we do this is that we really need a "two-way ACK" for the
breacharbiter to know it can go on with the breach handling. Earlier it
would just send the ACK on the channel and continue. This led to a race
where breach handling could finish before the chain watcher had marked
the channel pending closed in the database, which in turn led to the
breacharbiter failing to mark the channel fully closed.
We saw this causing flakes during itests.
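A sketch of the closure-based handoff (the struct and method shapes
below are simplified assumptions about the real contract between the
two subsystems):

```go
// breachResolution is handed from the chainwatcher to the
// breacharbiter when a breach is detected.
type breachResolution struct {
	// markPendingClosed is a closure provided by the chainwatcher
	// that marks the channel pending closed in the DB.
	markPendingClosed func() error
}

// ProcessACK is invoked by the breacharbiter once it has persisted its
// own breach state. Because the DB update runs inside this call, the
// channel is guaranteed to be marked pending closed by the time
// ProcessACK returns, so continuing with breach handling (and later
// marking the channel fully closed) can no longer race the
// chainwatcher.
func (b *breachResolution) ProcessACK() error {
	return b.markPendingClosed()
}
```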