This commit re-enables anchor channels by default.
During itests we still keep anchors opt-in, as many of the tests rely on
the behaviour of non-anchor channels.
Since private channels (most likely) won't be used for routing other
than as the last hop, they bear a smaller risk that we must quickly
force close them in order to resolve an HTLC upstream or downstream.
In the case where a private channel actually has to be force closed
to resolve a payment (if it is the first/last hop of a routed
payment), we can assume that the router will have UTXOs available
from the value reserved for the incoming public channel.
We cap the maximum value we'll reserve for anchor channel fee bumping at
10 times the per-channel amount such that nodes with a high number of
channels don't have to keep around a very large amount for the unlikely
scenario that they all close at the same time.
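
As a rough sketch of the capped reserve calculation (the constant and
function names here are illustrative, not lnd's actual internals):

    // Illustrative sketch only; these names do not match lnd's internals.
    const (
        perChannelReserveSat = 10_000 // assumed per-channel reserve, in sats
        maxReserveMultiplier = 10     // cap at 10x the per-channel amount
    )

    // requiredReserve returns the value to keep reserved for anchor
    // channel fee bumping, capped so that nodes with many channels
    // don't have to lock up an excessive amount.
    func requiredReserve(numAnchorChans int) int64 {
        n := numAnchorChans
        if n > maxReserveMultiplier {
            n = maxReserveMultiplier
        }
        return int64(n) * perChannelReserveSat
    }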
This commit adds height-based invoice expiry for hodl invoices
that have active htlcs. This allows us to cancel our intentionally
held htlcs before channels are force closed. We only add this for
hodl invoices because we expect regular invoices to be resolved
automatically.
We still keep hodl invoices in the time-based expiry queue,
because we want to expire open invoices that reach their timeout
before any htlcs are added. Since htlcs are added after the
invoice is created, we add new htlcs as they arrive in the
invoice registry. In this commit, we allow duplicate entries for an
invoice to be added to the expiry queue as each htlc arrives, which
keeps the implementation simple. Our cancellation
logic can already handle the case where an entry is already
canceled, so this is ok.
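
A rough sketch of that behaviour (the types and names are
hypothetical, not lnd's actual invoice registry):

    // invoiceExpiry is one entry in the height-based expiry queue.
    type invoiceExpiry struct {
        payHash      [32]byte
        expiryHeight uint32
    }

    // expiryQueue may contain duplicate entries for the same invoice:
    // one entry is pushed each time an htlc is accepted.
    type expiryQueue struct {
        entries []invoiceExpiry
    }

    // onHtlcAccepted pushes a fresh expiry entry whenever an htlc is
    // added to a hodl invoice.
    func (q *expiryQueue) onHtlcAccepted(hash [32]byte, height uint32) {
        q.entries = append(q.entries, invoiceExpiry{hash, height})
    }

    // expire cancels every invoice whose expiry height was reached.
    // Cancellation is idempotent, so duplicate entries are harmless.
    func (q *expiryQueue) expire(bestHeight uint32, cancel func([32]byte)) {
        remaining := q.entries[:0]
        for _, e := range q.entries {
            if e.expiryHeight <= bestHeight {
                cancel(e.payHash) // no-op if already canceled
                continue
            }
            remaining = append(remaining, e)
        }
        q.entries = remaining
    }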
AMP invoices need to signal:
- AMPRequired in order to avoid being paid by older clients that don't
support it.
- Can't advertise MPP as optional, otherwise older clients will
attempt to pay the invoice with a regular MPP payment.
Hence, the features advertised on AMP invoices are mutually exclusive
with those advertised on MPP invoices. Create a new feature set to
distinguish the two.
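
A sketch of the two disjoint sets using lnwire feature bits (the
variable names are illustrative and the exact bit composition in lnd
may differ):

    import "github.com/lightningnetwork/lnd/lnwire"

    var (
        // mppInvoiceFeatures are advertised on regular MPP invoices.
        mppInvoiceFeatures = lnwire.NewRawFeatureVector(
            lnwire.PaymentAddrRequired,
            lnwire.MPPOptional,
        )

        // ampInvoiceFeatures are advertised on AMP invoices: AMP is
        // required so older clients won't attempt the payment, and
        // the MPP bit is deliberately absent so they won't fall back
        // to a regular MPP payment.
        ampInvoiceFeatures = lnwire.NewRawFeatureVector(
            lnwire.PaymentAddrRequired,
            lnwire.AMPRequired,
        )
    )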
It seems #5246 introduced a subtle bug that led to the error "out of
order block: expecting height=1, got height=XXX" sometimes during
startup. Apparently it can happen that during pruning of the graph tip
some blocks can come in before we start our chain view and the new block
subscription. By querying the chain backend for the best height before
syncing with the graph we ensure that we never miss a block.
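
Conceptually, the fixed startup ordering looks like this (the
function and variable names are hypothetical; the real code lives in
the router's startup path):

    // Sketch of the fixed startup ordering; names are illustrative.
    func startRouter() error {
        // Query the chain backend for the best height *before*
        // syncing with the graph, so blocks arriving during pruning
        // can't be missed.
        _, bestHeight, err := chainBackend.GetBestBlock()
        if err != nil {
            return err
        }

        // Prune the graph tip up to bestHeight.
        if err := syncGraphWithChain(bestHeight); err != nil {
            return err
        }

        // Only now start the chain view and the new-block
        // subscription; anything mined after bestHeight is delivered
        // in order.
        return chainView.Start()
    }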
The router has a lot of work to do for each block. So it might be
possible that it isn't yet up to date with the most recent block,
even if the wallet is. This can happen in environments with high CPU
load (such as parallel itests). Since the `synced_to_chain` flag in
the response of this call is used by many wallets (and also our
itests) to make sure everything's up to date, we add the router's
state to it. So the flag will only toggle to true once the router has
also been able to catch up.
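
Roughly, the RPC server now combines both states when building the
response (the wallet and router accessors shown here are
illustrative):

    // Sketch of combining both sync states in the GetInfo response;
    // wallet.IsSynced and router.SyncedHeight are illustrative names.
    walletSynced := wallet.IsSynced()
    routerSynced := router.SyncedHeight() >= bestHeight

    resp := &lnrpc.GetInfoResponse{
        BlockHeight:   bestHeight,
        SyncedToChain: walletSynced && routerSynced,
    }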
The router subsystem has its own goroutine that receives chain updates
and then does its (quite time-consuming) work on each new block. To
make it possible to find out which block the router is currently
synced to, we
export its internal best height through a new method.
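
A minimal sketch of that accessor, assuming the router stores its
best height as an atomically updated uint32 (the field name is
illustrative):

    // import "sync/atomic"

    // SyncedHeight returns the block height the router is currently
    // synced to.
    func (r *ChannelRouter) SyncedHeight() uint32 {
        return atomic.LoadUint32(&r.bestHeight)
    }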