The router subsystem has its own goroutine that receives chain updates
and then does its (quite time-consuming) work on each new block. To make
it possible to find out which block the router is currently synced to,
we export its internal best height through a new method.
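A minimal sketch of what such a getter could look like (the type, field,
and method names are assumptions, not lnd's exact API): the router's
goroutine updates its best height after processing each block, and the
new method reads it under a lock.

```go
package main

import (
	"fmt"
	"sync"
)

// ChannelRouter is a minimal stand-in for the router subsystem; the
// field and method names here are illustrative assumptions.
type ChannelRouter struct {
	mu         sync.Mutex
	bestHeight uint32
}

// SyncedHeight exposes the block height the router has finished
// processing, so callers can tell how far the router has caught up.
func (r *ChannelRouter) SyncedHeight() uint32 {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.bestHeight
}

// setBestHeight would be called from the router's own goroutine after it
// has processed a new block notification.
func (r *ChannelRouter) setBestHeight(height uint32) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.bestHeight = height
}

func main() {
	r := &ChannelRouter{}
	r.setBestHeight(700000)
	fmt.Println("router synced to height", r.SyncedHeight())
}
```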
Having `SourceRef` set to nil caused https://github.com/lightningnetwork/lnd/issues/5115
The problem was several layers removed from the fix. The link decides to
clean up a `fwdPkg` only if it's completed, otherwise it renotifies the
HTLCs. A package is only marked complete if its `addAck` and
`settleFail` filters are full. For forwarded HTLCs, the `addAck` was
never being set, so the package would never be considered complete under
this criterion.
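A hedged sketch of that completeness rule (simplified types; the names
are assumptions rather than lnd's exact API):

```go
package main

import "fmt"

// pkgFilter is a simplified stand-in for the on-disk ack/settle-fail
// filters.
type pkgFilter struct {
	acked, total uint16
}

// isFull reports whether every entry in the filter has been set.
func (f pkgFilter) isFull() bool {
	return f.acked == f.total
}

// fwdPkg holds the two filters that gate cleanup of a forwarding package.
type fwdPkg struct {
	addAck     pkgFilter
	settleFail pkgFilter
}

// isComplete mirrors the rule described above: the link only cleans a
// package up once both filters are full; otherwise it re-notifies the
// HTLCs.
func (p fwdPkg) isComplete() bool {
	return p.addAck.isFull() && p.settleFail.isFull()
}

func main() {
	// With addAck never being set for forwarded HTLCs, the package can
	// never reach the complete state.
	pkg := fwdPkg{
		addAck:     pkgFilter{acked: 0, total: 1},
		settleFail: pkgFilter{acked: 1, total: 1},
	}
	fmt.Println("complete:", pkg.isComplete())
}
```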
`addAck` is set for an HTLC when signing the next commitment TX in the
`LightningChannel`. The path for this is:
* `LightningChannel#SettleHtlc` adds the HTLC to `localUpdates`
* `LightningChannel#SignNextCommitment` builds the `ackAddRef` for all
updates with `SourceRef != nil`.
* `LightningChannel#SignNextCommitment` then passes the list of
`ackAddRef` to `OpenChannel#AppendRemoteCommitChain` to persist the new
acks in the filter
Since `SourceRef` was nil for interceptor packages, `SignNextCommitment`
ignored it and the ack was never persisted.
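To illustrate why a nil `SourceRef` falls through the cracks, here is a
minimal sketch of that filtering step (simplified types; the helper and
field names are assumptions, not the actual `LightningChannel` code):

```go
package main

import "fmt"

// AddRef points back to the Add in a forwarding package that an ack
// should be written for; this is a simplified stand-in.
type AddRef struct {
	Height uint64
	Index  uint16
}

// paymentDescriptor is a simplified local update as held by the channel
// state machine.
type paymentDescriptor struct {
	SourceRef *AddRef
}

// collectAckAddRefs mimics the step described above: only updates that
// carry a SourceRef contribute an ackAddRef, so an update with a nil
// SourceRef never gets its addAck persisted when the refs are handed to
// AppendRemoteCommitChain.
func collectAckAddRefs(localUpdates []paymentDescriptor) []AddRef {
	var ackAddRefs []AddRef
	for _, pd := range localUpdates {
		if pd.SourceRef == nil {
			continue
		}
		ackAddRefs = append(ackAddRefs, *pd.SourceRef)
	}
	return ackAddRefs
}

func main() {
	updates := []paymentDescriptor{
		{SourceRef: &AddRef{Height: 10, Index: 0}},
		{SourceRef: nil}, // an intercepted HTLC missing its SourceRef
	}
	fmt.Println("acks to persist:", len(collectAckAddRefs(updates)))
}
```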
This commit makes the handoff procedure between the breacharbiter and
the chain watcher use a function closure to mark the channel pending
closed in the DB. Doing it this way, we know that the channel has been
marked pending closed in the DB when ProcessACK returns.
The reason we do this is that we really need a "two-way ACK" to let the
breacharbiter know it can proceed with the breach handling. Earlier it
would just send the ACK on the channel and continue. This led to a race
where breach handling could finish before the chain watcher had marked
the channel pending closed in the database, which in turn led to the
breacharbiter failing to mark the channel fully closed.
We saw this causing flakes during itests.
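A compact sketch of the two-way handoff (type and helper names are
assumptions, not lnd's exact code): the chain watcher hands over a
closure, and returning from it implies the DB write has completed.

```go
package main

import "fmt"

// ContractBreachEvent is a simplified stand-in for the event the chain
// watcher sends to the breacharbiter.
type ContractBreachEvent struct {
	// ProcessACK is the function closure supplied by the chain watcher.
	// It marks the channel pending closed in the DB before returning,
	// so the breacharbiter can rely on that write having happened.
	ProcessACK func(brarErr error) error
}

// markPendingClosed stands in for the chain watcher's DB write.
func markPendingClosed(chanPoint string) error {
	fmt.Println("chain watcher: marked", chanPoint, "pending closed")
	return nil
}

func main() {
	chanPoint := "fundingtxid:0"

	// Chain watcher side: hand the breacharbiter a closure instead of a
	// bare ACK channel.
	event := ContractBreachEvent{
		ProcessACK: func(brarErr error) error {
			if brarErr != nil {
				return brarErr
			}
			return markPendingClosed(chanPoint)
		},
	}

	// Breacharbiter side: only continue with breach handling (and later
	// mark the channel fully closed) once ProcessACK returns nil.
	if err := event.ProcessACK(nil); err != nil {
		fmt.Println("handoff failed:", err)
		return
	}
	fmt.Println("breacharbiter: safe to continue breach handling")
}
```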
To give users an idea of how the new auto-unlock flag can be used more
safely than just writing the password to a file, we add a new wallet
management document and describe the unlock feature in detail.
In automated or unattended setups such as cluster/container
environments, unlocking the wallet through RPC presents a set of
challenges. Usually the password is already present as a file somewhere
in the container anyway, so we might as well just read it from there.
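A minimal sketch of that idea, not lnd's actual implementation (the file
path is a hypothetical container secret mount): read the password from a
file at startup instead of waiting for an RPC unlock call.

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// readPasswordFile reads the wallet password from a file that is already
// present in the container and strips the trailing newline that editors
// and shells commonly add.
func readPasswordFile(path string) ([]byte, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	return bytes.TrimRight(raw, "\r\n"), nil
}

func main() {
	// Hypothetical mount point for a container secret.
	pw, err := readPasswordFile("/secrets/wallet-password")
	if err != nil {
		fmt.Println("could not read password file:", err)
		return
	}
	fmt.Printf("read %d password bytes\n", len(pw))
}
```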
This commit adds a new "waiting to start" state which can be used to
query whether we're still waiting to become the cluster leader. Once we
become the leader, we advance the state to "wallet not exist" or "wallet
locked" depending on wallet availability.
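A hedged sketch of that state progression (the enum and helper names are
assumptions, not lnd's exact RPC types):

```go
package main

import "fmt"

// WalletState is a simplified stand-in for the states described above.
type WalletState int

const (
	WaitingToStart WalletState = iota
	WalletNotExist
	WalletLocked
)

func (s WalletState) String() string {
	switch s {
	case WaitingToStart:
		return "waiting to start"
	case WalletNotExist:
		return "wallet not exist"
	case WalletLocked:
		return "wallet locked"
	default:
		return "unknown"
	}
}

// nextState captures the progression: stay in "waiting to start" until we
// win the leader election, then report the wallet's availability.
func nextState(isLeader, walletExists bool) WalletState {
	if !isLeader {
		return WaitingToStart
	}
	if !walletExists {
		return WalletNotExist
	}
	return WalletLocked
}

func main() {
	fmt.Println(nextState(false, false)) // still waiting for leadership
	fmt.Println(nextState(true, false))  // leader, but no wallet yet
	fmt.Println(nextState(true, true))   // leader, wallet present but locked
}
```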
This commit also changes the startup order so that DB init runs after
the RPC server is up. This will allow us to later add an RPC endpoint
for querying leadership status.
In this commit we add a new error for when we fail to validate the
funding transaction (invalid script, etc.) and mark the channel as a
zombie, like the other failed validation cases.
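A small sketch of the intended handling (the error name and graph
helpers are assumptions, not lnd's exact API):

```go
package main

import (
	"errors"
	"fmt"
)

// ErrInvalidFundingOutput is a hypothetical sentinel for the new error
// class.
var ErrInvalidFundingOutput = errors.New("channel funding output validation failed")

// graph is a tiny stand-in for the channel graph with a zombie index.
type graph struct {
	zombies map[uint64]bool
}

func (g *graph) markZombie(chanID uint64) {
	g.zombies[chanID] = true
}

// processEdge shows the intended handling: a funding transaction that
// fails validation is treated like the other validation failures and the
// channel is remembered as a zombie.
func (g *graph) processEdge(chanID uint64, fundingScriptValid bool) error {
	if !fundingScriptValid {
		g.markZombie(chanID)
		return ErrInvalidFundingOutput
	}
	return nil
}

func main() {
	g := &graph{zombies: make(map[uint64]bool)}
	err := g.processEdge(123456, false)
	fmt.Println("error:", err, "zombie:", g.zombies[123456])
}
```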