This commit fixes a bug that would cause us to fetch our peer's
ChannelUpdate in some cases, where we really wanted to fetch our own.
The reason this happened was that we passed the peer's pubkey to
fetchLastChanUpdate, making us match on their policy. This would lead to
ChannelUpdates being sent during routing which would have no effect on
the attempted path.
We fix this by always using our own pubkey in fetchLastChanUpdate, and
by using the common methods within the server to extract the update
even when only one of the policies is known.
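A minimal sketch of the selection logic, with hypothetical stand-ins
(EdgePolicy, ownPolicy) rather than the real graph types:

    package sketch

    import "fmt"

    // EdgePolicy is a hypothetical, simplified stand-in for a channel
    // edge policy in the graph.
    type EdgePolicy struct {
        NodeKey [33]byte // the node this policy belongs to
    }

    // ownPolicy returns the policy belonging to our own node by matching
    // on our pubkey rather than the peer's, and tolerates the case where
    // only one of the two policies is known.
    func ownPolicy(ourPub [33]byte, p1, p2 *EdgePolicy) (*EdgePolicy, error) {
        switch {
        case p1 != nil && p1.NodeKey == ourPub:
            return p1, nil
        case p2 != nil && p2.NodeKey == ourPub:
            return p2, nil
        default:
            return nil, fmt.Errorf("no local policy known for this channel")
        }
    }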
This commit fixes a bug within the peer, where we would always use the
default forwarding policy when adding new links to the switch. Mostly
this wasn't a problem, as we usually use the default values for new
channels, but min_htlc is a value that is set by the remote, not by us.
If the remote specified a custom min_htlc during channel creation, we
would still use our DefaultMinHtlc value when first adding the link,
making us try to forward HTLCs that would be rejected by the remote.
We fix this by getting the required min_htlc value from the channel
state machine, and setting this for the link's forwarding policy.
Note that on restarts we will query the database for our latest
ChannelEdgePolicy, and use that to craft the forwardingPolicy. This
means that the value will be set correctly if the policy is found in the
database.
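In simplified form (hypothetical types, not the actual peer code), the
idea is to seed the new link's policy from the channel state instead of
the defaults:

    package sketch

    // Channel is a hypothetical, minimal view of the channel state
    // machine: RemoteMinHTLC is the smallest HTLC the remote agreed to
    // accept when the channel was created.
    type Channel struct {
        RemoteMinHTLC uint64
    }

    // ForwardingPolicy mirrors the policy the switch applies to a link.
    type ForwardingPolicy struct {
        MinHTLC uint64
        BaseFee uint64
        FeeRate uint64
    }

    // linkPolicy builds the initial forwarding policy for a new link,
    // taking min_htlc from the channel state machine rather than from
    // our default, so we never try to forward HTLCs the remote would
    // reject.
    func linkPolicy(defaults ForwardingPolicy, ch *Channel) ForwardingPolicy {
        policy := defaults
        policy.MinHTLC = ch.RemoteMinHTLC
        return policy
    }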
This commit fixes a bug that would make us advertise the remote's
min_htlc value in our channel update.
The min_htlc value is set by a node Alice to limit its exposure to small
HTLCs, and the channel counterparty should not forward HTLCs of a value
smaller than this to Alice. This means that the value a node Bob should
advertise in its ChannelUpdate is the min_htlc value the counterparty
requires all HTLCs to be above.
Instead of populating the ChannelUpdate with the MinHtlc value found in
the remote constraints, we now use the value from the local constraints.
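A sketch of the change, with hypothetical Constraints and ChannelUpdate
types; the assumption (taken from the description above) is that the
local constraints hold the min_htlc the counterparty requires of HTLCs
forwarded to it:

    package sketch

    // Constraints is a hypothetical, minimal set of channel constraints.
    type Constraints struct {
        MinHTLC uint64
    }

    // ChannelUpdate is a minimal stand-in for the signed channel update.
    type ChannelUpdate struct {
        HtlcMinimumMsat uint64
    }

    // newChanUpdate populates our ChannelUpdate from the local
    // constraints instead of the remote ones, advertising the value the
    // counterparty requires all HTLCs forwarded to it to be above.
    func newChanUpdate(local, remote Constraints) ChannelUpdate {
        _ = remote // previously (and incorrectly) the source of MinHTLC
        return ChannelUpdate{
            HtlcMinimumMsat: local.MinHTLC,
        }
    }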
We make sure to return an error other than ErrIgnored, as ErrIgnored is
expected to only be returned for updates where we already have the
necessary information in the database.
If a channel ID was found in the rejectCache, there was a possibility
that we had rejected an invalid update for this channel earlier, and
when attempting to add the current update we wouldn't distinguish this
failure to add from an outdated/ignored update.
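Roughly, with a hypothetical helper rather than the real router code,
the distinction looks like this:

    package sketch

    import (
        "errors"
        "fmt"
    )

    // ErrIgnored must only be returned when we already have the
    // necessary information in the database.
    var ErrIgnored = errors.New("update ignored, already known")

    // addChanUpdate distinguishes a genuine rejection from an update we
    // simply already have: a hit in the rejectCache may stem from an
    // earlier invalid update, so it is only reported as ErrIgnored if
    // the update really is in the database.
    func addChanUpdate(chanID uint64, rejectCache map[uint64]struct{},
        haveUpdate func(uint64) bool) error {

        if _, rejected := rejectCache[chanID]; rejected {
            if haveUpdate(chanID) {
                return ErrIgnored
            }
            return fmt.Errorf("update for channel %d rejected", chanID)
        }

        // ... normal validation and database insertion ...
        return nil
    }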
Previously we would immediately return nil on the error channel for
premature ChannelUpdates, which would break the expectation that a
returned non-error meant the update had been successfully added to the
database. This meant that the caller would believe the update was added
to the database, while it was actually still in volatile memory and
could be lost during a restart.
This change makes us handle premature ChannelUpdates as we handle other
premature announcements within the gossiper, by deferring sending on the
error channel until we have reprocessed the update.
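A simplified sketch of the deferral, with hypothetical types (the real
gossiper queues the full network message together with its error
channel):

    package sketch

    // prematureUpdate pairs a premature ChannelUpdate with the caller's
    // error channel so the reply can be deferred.
    type prematureUpdate struct {
        update  interface{}
        errChan chan error
    }

    // queuePremature records the message, keyed by the block height it
    // is waiting for, instead of sending nil on the error channel right
    // away.
    func queuePremature(pending map[uint32][]prematureUpdate,
        height uint32, msg prematureUpdate) {

        pending[height] = append(pending[height], msg)
    }

    // reprocess runs once the update is no longer premature; only now do
    // we send on the error channel, so a nil reply really means the
    // update was persisted.
    func reprocess(msg prematureUpdate, process func(interface{}) error) {
        msg.errChan <- process(msg.update)
    }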
Previously we wouldn't return anything in the case where the
announcement was meant for a chain we didn't recognize. After this
change we should return an error on the error channel in all flows
within the gossiper.
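In sketch form, with a hypothetical handler, every flow now answers on
the error channel:

    package sketch

    import "fmt"

    // handleAnnouncement always sends on errChan, including for
    // announcements targeting a chain we do not recognize.
    func handleAnnouncement(chainHash, ourChain [32]byte,
        process func() error, errChan chan error) {

        if chainHash != ourChain {
            errChan <- fmt.Errorf("announcement for unknown chain %x",
                chainHash)
            return
        }
        errChan <- process()
    }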
At ChannelArbitrator startup we now check the database close status of
the channel. If we detect that the channel is closed, but our state
machine hasn't advanced to reflect that (possibly because of a shutdown
before the state transition was finished), we manually trigger the state
transition to recover.
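Roughly, with hypothetical types standing in for the arbitrator and its
persisted state:

    package sketch

    // ArbitratorState is a minimal stand-in for the arbitrator's state
    // machine states.
    type ArbitratorState int

    const (
        StateDefault ArbitratorState = iota
        StateContractClosed
    )

    // Arbitrator is a hypothetical, simplified channel arbitrator.
    type Arbitrator struct {
        state      ArbitratorState
        isClosedDB func() (bool, error) // channel close status on disk
        advance    func() error         // run the missed state transition
    }

    // Start checks whether the channel is already closed in the database
    // while the state machine was left behind by an earlier shutdown,
    // and if so manually triggers the transition to recover.
    func (a *Arbitrator) Start() error {
        closed, err := a.isClosedDB()
        if err != nil {
            return err
        }
        if closed && a.state == StateDefault {
            return a.advance()
        }
        return nil
    }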
This commit moves the responsibility for marking channels closed in the
database after local and remote force closes from the chain watcher to
the channel arbitrator.
We do this because we would previously mark the channel closed in the
database before sending the event to the channel arbitrator. This could
lead to a situation where the channel was marked closed, but the channel
arbitrator didn't receive the event before shutdown. As we don't listen
for chain events for channels that are closed, those channels would be
stuck in the pending close state forever, as the channel arbitrator
state machine wouldn't progress.
We fix this by letting the ChannelArbitrator close the channel in the
database. After the contract resolutions are logged (in the state
callback before transitioning to StateContractClosed) we mark the
channel closed in the database. This way we make sure that it is marked
closed only if the resolutions have been successfully persisted.
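The reordering, sketched with hypothetical interfaces in place of the
real arbitrator log and channel database:

    package sketch

    // arbitratorLog persists the contract resolutions for a closing
    // channel.
    type arbitratorLog interface {
        LogContractResolutions(resolutions interface{}) error
    }

    // closedMarker marks the channel closed in the channel database.
    type closedMarker interface {
        MarkChannelClosed() error
    }

    // closeContract logs the resolutions first and only then marks the
    // channel closed, so a channel can never be marked closed without
    // its resolutions safely persisted.
    func closeContract(log arbitratorLog, db closedMarker,
        resolutions interface{}) error {

        if err := log.LogContractResolutions(resolutions); err != nil {
            return err
        }
        return db.MarkChannelClosed()
    }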