GetNodeInfo retrieves the policies for every edge the node belongs to.
When these policies are retrieved from the database, they're returned
in the following order: the first policy is the outgoing policy from the
node, and the second is the incoming policy to the node. This ordering
is not consistent with the ordering used by our other RPCs, like
GetChanInfo and DescribeGraph, where the policies are ordered based on
the smaller of the two node public keys of the edge.
We fix this by maintaining the same order as our other RPCs.
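As a rough sketch of that convention (not lnd's actual code;
RoutingPolicy and orderedPolicies are hypothetical names), the policy
belonging to the node with the lexicographically smaller public key
simply comes first:

```go
package sketch

import "bytes"

// RoutingPolicy is a stand-in for the RPC's routing policy message.
type RoutingPolicy struct {
	FeeBaseMsat   int64
	TimeLockDelta uint32
}

// orderedPolicies returns the two edge policies in the order used by
// GetChanInfo and DescribeGraph: the policy of the node with the
// lexicographically smaller public key comes first.
func orderedPolicies(nodeKey1, nodeKey2 []byte,
	policy1, policy2 *RoutingPolicy) (*RoutingPolicy, *RoutingPolicy) {

	if bytes.Compare(nodeKey1, nodeKey2) <= 0 {
		return policy1, policy2
	}
	return policy2, policy1
}
```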
Since we also must count revoked funds swept from second-level revoked
outputs, we move the funds counting into the updateBreachInfo method,
where we already check whether the spend is by us or the remote party.
We also clean up the logs a bit, to log the incremental sweep of funds
that can now happen.
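Very loosely sketched (breachState, countSpend and the field names are
hypothetical; only the check-then-count idea comes from this change):

```go
package sketch

import "log"

// breachState is a hypothetical holder for the running total of
// justice funds; the real counting happens in updateBreachInfo.
type breachState struct {
	totalFunds int64 // satoshis swept back from the breach so far
}

// countSpend adds the value of a breached output (or of a second-level
// output derived from it) to the total, but only if the spend was ours.
func (b *breachState) countSpend(spentByUs bool, value int64) {
	if !spentByUs {
		// The remote party swept this output; nothing to count.
		return
	}

	b.totalFunds += value
	log.Printf("Swept %d sats of breached funds, %d sats in total",
		value, b.totalFunds)
}
```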
We define a new struct justiceTxVariants, which holds three different
justice transactions:
1. The "normal" justice tx that spends all breached outputs
2. A tx that spends only the breached to_local output and to_remote output
(can be nil if none of these exist)
3. A tx that spends all the breached HTLC outputs (can be nil if no HTLC
outputs exist)
This will later be used to sweep the time sensitive outputs separately,
in case the normal justice tx doesn't confirm in time.
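A sketch of such a struct (the field names are illustrative; the real
definition lives in the breach arbiter code):

```go
package sketch

import "github.com/btcsuite/btcd/wire"

// justiceTxVariants holds the three justice transaction variants
// described above.
type justiceTxVariants struct {
	// spendAll sweeps every breached output in a single transaction.
	spendAll *wire.MsgTx

	// spendCommitOuts sweeps only the breached to_local and to_remote
	// outputs. Nil if neither of these outputs exists.
	spendCommitOuts *wire.MsgTx

	// spendHTLCs sweeps all breached HTLC outputs. Nil if there are
	// no HTLC outputs.
	spendHTLCs *wire.MsgTx
}
```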
Now that we no longer rely on the justice tx TXID, we can remove its
finalization. Instead we'll recreate the transaction from the persisted
retribution info when needed.
Since we potentially want to broadcast multiple versions of the justice
tx, we no longer wait for confirmation of a specific TXID, but instead
wait for the breached outputs to be spent.
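A sketch of that wait, assuming lnd's chainntnfs.ChainNotifier API
(RegisterSpendNtfn returning a SpendEvent whose Spend channel fires once
the outpoint is spent); the function itself and its parameters are
illustrative, not the actual breach arbiter code:

```go
package sketch

import (
	"github.com/btcsuite/btcd/wire"
	"github.com/lightningnetwork/lnd/chainntnfs"
)

// waitForBreachedSpends registers a spend notification for every
// breached outpoint and blocks until all of them have been spent, no
// matter which transaction ended up spending them.
func waitForBreachedSpends(notifier chainntnfs.ChainNotifier,
	outpoints []wire.OutPoint, pkScripts [][]byte,
	heightHint uint32) error {

	spendChans := make([]<-chan *chainntnfs.SpendDetail, 0, len(outpoints))
	for i := range outpoints {
		event, err := notifier.RegisterSpendNtfn(
			&outpoints[i], pkScripts[i], heightHint,
		)
		if err != nil {
			return err
		}
		spendChans = append(spendChans, event.Spend)
	}

	// Every breached outpoint must be spent before we consider the
	// breach fully resolved.
	for _, spendChan := range spendChans {
		<-spendChan
	}

	return nil
}
```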
Since we want to test more complex combinations of spends of the
breached outputs, we use two maps to track:
1. which transaction will spend each outpoint
2. which outpoints we expect the breacharbiter to include in the
justice tx
This lets us trigger spends of the individual outputs and, depending on
what we want to test, check whether the breacharbiter sweeps the
expected outpoints.
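Sketched as a struct (with hypothetical names, not the actual test
code), the two maps could look like this:

```go
package sketch

import "github.com/btcsuite/btcd/wire"

// breachTestScenario tracks, for a single test case, how each breached
// outpoint gets spent and what we expect the breacharbiter to do.
type breachTestScenario struct {
	// spendingTx maps each breached outpoint to the transaction the
	// test will broadcast to spend it.
	spendingTx map[wire.OutPoint]*wire.MsgTx

	// expectedSweeps holds the outpoints we expect the breacharbiter
	// to include in its justice tx for this scenario.
	expectedSweeps map[wire.OutPoint]struct{}
}
```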
This commit re-enables anchor channels by default.
During itests we still keep anchors opt-in, as many of the tests rely on
the behaviour of non-anchor channels.
Since private channels (most likely) won't be used for routing other
than as the last hop, they bear a smaller risk that we must quickly
force close them in order to resolve an HTLC up/downstream.
In the case where a private channel actually has to be force closed to
resolve a payment (if it is the first/last hop of a routed payment), we
can assume that the router will have UTXOs available from the value
reserved for the incoming public channel.
We cap the maximum value we'll reserve for anchor channel fee bumping
at 10 times the per-channel amount, so that nodes with a high number of
channels don't have to keep around a very large amount for the unlikely
scenario that they all close at the same time.
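In pseudo-numbers (the per-channel amount of 10 000 sats here is an
assumption; only the 10x cap comes from the text):

```go
package sketch

// anchorChanReservedValue is the amount (in satoshis) reserved per
// anchor channel; the 10 000 figure is an assumption, only the
// "10x per-channel" cap comes from the commit message.
const (
	anchorChanReservedValue    = 10_000
	maxAnchorChanReservedValue = 10 * anchorChanReservedValue
)

// requiredReserve returns the wallet balance to keep around for fee
// bumping anchor outputs, capped so a node with hundreds of channels
// doesn't have to hold a huge amount idle.
func requiredReserve(numAnchorChans int) int64 {
	reserved := int64(numAnchorChans) * anchorChanReservedValue
	if reserved > maxAnchorChanReservedValue {
		reserved = maxAnchorChanReservedValue
	}
	return reserved
}
```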
This commit adds height-based invoice expiry for hodl invoices
that have active htlcs. This allows us to cancel our intentionally
held htlcs before channels are force closed. We only add this for
hodl invoices because we expect regular invoices to automatically
be resolved.
We still keep hodl invoices in the time-based expiry queue,
because we want to expire open invoices that reach their timeout
before any htlcs are added. Since htlcs are added after the
invoice is created, we add an expiry entry for each htlc as it
arrives in the invoice registry. In this commit, we allow
duplicate entries for an invoice to be added to the expiry queue
as each htlc arrives, to keep the implementation simple. Our
cancellation logic can already handle the case where an entry is
already canceled, so this is ok.
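A sketch of what such a height-based entry and its processing could
look like (the names are hypothetical; the real logic lives in the
invoice registry's expiry watcher):

```go
package sketch

import "github.com/lightningnetwork/lnd/lntypes"

// heightExpiry marks a hodl invoice for cancellation once the chain
// reaches expiryHeight, so its held htlcs are resolved well before any
// channel would be force closed over them.
type heightExpiry struct {
	paymentHash  lntypes.Hash
	expiryHeight uint32
}

// processBlock cancels every invoice whose expiry height has been
// reached and returns the entries that are still pending. Duplicate
// entries for the same invoice are harmless: cancelling an already
// cancelled invoice is a no-op.
func processBlock(height uint32, pending []heightExpiry,
	cancel func(lntypes.Hash)) []heightExpiry {

	var remaining []heightExpiry
	for _, e := range pending {
		if e.expiryHeight <= height {
			cancel(e.paymentHash)
			continue
		}
		remaining = append(remaining, e)
	}

	return remaining
}
```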
AMP invoices need to signal:
- AMPRequired, in order to avoid being paid by older clients that don't
support it.
- No optional MPP bit, otherwise older clients will attempt to pay the
invoice with a regular MPP payment.
Hence, the features advertised on AMP invoices are mutually exclusive
with those advertised on MPP invoices. We create a new feature set to
distinguish the two.
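Sketched with lnwire's feature bits (the exact composition of lnd's
real feature sets may differ; this only illustrates that the AMP set
requires AMP and omits the MPP bit):

```go
package sketch

import "github.com/lightningnetwork/lnd/lnwire"

var (
	// mppInvoiceFeatures is what a regular (MPP) invoice advertises.
	mppInvoiceFeatures = lnwire.NewRawFeatureVector(
		lnwire.PaymentAddrRequired,
		lnwire.MPPOptional,
	)

	// ampInvoiceFeatures is what an AMP invoice advertises: AMP is
	// required, and the MPP bit is left out so older clients don't
	// attempt a plain MPP payment.
	ampInvoiceFeatures = lnwire.NewRawFeatureVector(
		lnwire.PaymentAddrRequired,
		lnwire.AMPRequired,
	)
)
```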
It seems #5246 introduced a subtle bug that led to the error "out of
order block: expecting height=1, got height=XXX" sometimes occurring
during startup. Apparently, while pruning the graph tip, some blocks can
come in before we start our chain view and the new block subscription.
By querying the chain backend for the best height before syncing with
the graph, we ensure that we never miss a block.
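A sketch of the fixed ordering, using a minimal stand-in interface and
hypothetical helper callbacks (syncGraphToHeight, subscribeBlocks):

```go
package sketch

import "github.com/btcsuite/btcd/chaincfg/chainhash"

// chainBackend is a minimal stand-in for the chain backend; GetBestBlock
// mirrors the kind of call used to fetch the current tip up front.
type chainBackend interface {
	GetBestBlock() (*chainhash.Hash, int32, error)
}

// startRouter queries the backend's best height first, syncs the graph
// up to that height, and only then starts the block subscription, so
// the subscription picks up from a height the graph is already synced
// to and no block is silently skipped.
func startRouter(chain chainBackend,
	syncGraphToHeight func(uint32) error,
	subscribeBlocks func(fromHeight uint32) error) error {

	_, bestHeight, err := chain.GetBestBlock()
	if err != nil {
		return err
	}

	if err := syncGraphToHeight(uint32(bestHeight)); err != nil {
		return err
	}

	return subscribeBlocks(uint32(bestHeight))
}
```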
The router has a lot of work to do for each block. So it might be
possible that it isn't yet up to date with the most recent block,
even if the wallet is. This can happen in environments with high CPU
load (such as parallel itests). Since the `synced_to_chain` flag in
the response of this call is used by many wallets (and also our
itests) to make sure everything's up to date, we add the router's
state to it. So the flag will only toggle to true once the router has
also caught up.
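The combined check can be sketched as follows (variable and function
names are hypothetical):

```go
package sketch

// syncedToChain only returns true once both the wallet and the router
// have caught up: the wallet reports itself synced and the router has
// processed blocks up to the wallet's best height.
func syncedToChain(walletSynced bool, walletHeight, routerHeight uint32) bool {
	return walletSynced && routerHeight >= walletHeight
}
```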
The router subsystem has its own goroutine that receives chain updates
and then does its (quite time-consuming) work on each new block. To make
it possible to find out what block the router is currently synced to, we
export its internal best height through a new method.
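A sketch of such a getter (the field and method names are illustrative,
not necessarily lnd's actual API):

```go
package sketch

import "sync"

// ChannelRouter holds only what's needed here to illustrate exposing
// the best height the router has fully processed.
type ChannelRouter struct {
	mu         sync.Mutex
	bestHeight uint32
}

// SyncedHeight returns the block height the router has finished
// processing, so callers like the rpcserver can compare it against the
// wallet's best height.
func (r *ChannelRouter) SyncedHeight() uint32 {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.bestHeight
}
```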