In this commit, we add an additional heuristic when running with
AssumeChannelValid. Since the presence of AssumeChannelValid implies that
we're not able to quickly determine whether channels are valid, we also
treat any channel with the disabled bit set on both sides as a
zombie. This should be relatively safe to do, since the
disabled bits are usually set when the channel is closed on-chain. In
the case that they aren't, we'll have to wait until both edges haven't
had a new update within two weeks to prune them.
We do this to ensure we don't prune too aggressively, as it's possible
that we've only received the channel announcement for a channel, but not
its accompanying channel updates.
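As a rough illustration of the heuristic described above, here is a minimal
sketch (the types and function names are made up for illustration and are not
lnd's actual graph code): a channel is treated as a zombie right away when the
disabled bit is set on both edges, and otherwise only once both edges have gone
two weeks without an update.

```go
package main

import (
	"fmt"
	"time"
)

// edgePolicy is a stand-in for one side's routing policy on a channel.
// The real lnd types differ; this is only for illustration.
type edgePolicy struct {
	disabled   bool
	lastUpdate time.Time
}

// chanExpiry is the two-week staleness window described above.
const chanExpiry = 14 * 24 * time.Hour

// isZombie mirrors the heuristic: with AssumeChannelValid, a channel whose
// disabled bit is set on both sides is considered a zombie immediately;
// otherwise both edges must have gone without an update for two weeks
// before the channel is pruned.
func isZombie(e1, e2 edgePolicy, now time.Time) bool {
	if e1.disabled && e2.disabled {
		return true
	}
	e1Stale := now.Sub(e1.lastUpdate) > chanExpiry
	e2Stale := now.Sub(e2.lastUpdate) > chanExpiry
	return e1Stale && e2Stale
}

func main() {
	now := time.Now()
	closed := edgePolicy{disabled: true, lastUpdate: now}
	fmt.Println(isZombie(closed, closed, now)) // true: disabled on both sides
}
```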
This enables users to specify an external API for fee estimation.
The API is expected to return fees in the JSON format:
`{
fee_by_block_target: {
a: x,
b: y,
...
c: z
}
}`
where a, b, c are block targets and x, y, z are fees in sat/kb.
Note that a, b, c need not be contiguous.
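As a minimal sketch of consuming a response in that shape, the snippet below
decodes the map of block targets to fee rates; the struct and field names here
are made up for illustration and are not lnd's actual implementation.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// feeResponse mirrors the JSON schema above: a map from confirmation target
// (as a string key) to a fee rate in sat/kb.
type feeResponse struct {
	FeeByBlockTarget map[string]uint64 `json:"fee_by_block_target"`
}

func main() {
	raw := []byte(`{"fee_by_block_target": {"2": 50000, "6": 25000, "144": 5000}}`)

	var resp feeResponse
	if err := json.Unmarshal(raw, &resp); err != nil {
		panic(err)
	}

	// Targets need not be contiguous; a caller picks the closest one available.
	for target, feeSatPerKB := range resp.FeeByBlockTarget {
		fmt.Printf("target %s -> %d sat/kb\n", target, feeSatPerKB)
	}
}
```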
In this commit, we add a new interface which will allow callers to drop
in an arbitrary Web API for fee estimation with an arbitrary
request/response schema.
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
The check prevented the creation of port forwardings that were assumed
to already be present. After this change, port forwardings that might
have been removed from the NAT device can be re-created.
This commit removes the QueryRoutes route cache. It is causing wrong
routes to be returned because not all of the request parameters are
stored.
The cache allowed high-frequency QueryRoutes calls to the same
destination and with the same amount to be served quickly. The same
behaviour can be achieved by caching the returned route on the client
side. If a cached route is invalidated, for example by a channel update,
the subsequent SendToRoute call will fail. That failure is the trigger
to call QueryRoutes again for a fresh route.
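For clients that relied on the removed cache, a minimal client-side
equivalent might look like the sketch below. The types and method names are
hypothetical; the point is simply keying cached routes by destination and
amount and dropping an entry once a SendToRoute attempt over it fails.

```go
package main

import (
	"fmt"
	"sync"
)

// routeKey captures the parameters a client cares about when reusing a
// previously computed route; a real client may need more fields.
type routeKey struct {
	dest   string
	amtSat int64
}

// routeCache stores QueryRoutes results per (destination, amount) and drops
// them as soon as a SendToRoute attempt over them fails.
type routeCache struct {
	mu     sync.Mutex
	routes map[routeKey][]string // hop pubkeys, purely illustrative
}

func newRouteCache() *routeCache {
	return &routeCache{routes: make(map[routeKey][]string)}
}

func (c *routeCache) get(k routeKey) ([]string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	r, ok := c.routes[k]
	return r, ok
}

func (c *routeCache) put(k routeKey, route []string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.routes[k] = route
}

// invalidate removes a cached route, e.g. after SendToRoute fails because a
// channel along it was updated or closed.
func (c *routeCache) invalidate(k routeKey) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.routes, k)
}

func main() {
	cache := newRouteCache()
	k := routeKey{dest: "03abc...", amtSat: 100_000}
	cache.put(k, []string{"hop1", "hop2"})
	if r, ok := cache.get(k); ok {
		fmt.Println("reusing cached route:", r)
	}
	cache.invalidate(k) // SendToRoute failed: fetch a fresh route next time
}
```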
In this commit, we fix a bug that would cause a node with a hodl HTLC to
cancel back the HTLC upon restart if the invoice has been settled, but
the HTLC is still present on the commitment transaction. The underlying
issue of the HTLC still being present (not triggering a new commitment)
was fixed recently. However, for older nodes with a lingering HTLC, the
HTLC would still be failed back on restart.
In this commit, we make the check stricter by only performing these
checks for HTLCs that are in the open state. This ensures that we'll
only check these constraints the first time around, before the HTLC has
been transitioned to the accepted state.
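A minimal sketch of the stricter check follows; the state names and helper
are illustrative only and simplify the real invoice HTLC life cycle.

```go
package main

import "fmt"

// htlcState is a simplified invoice HTLC life cycle: an HTLC starts out
// open, is accepted once it passes the invoice checks, and is later settled
// or canceled.
type htlcState int

const (
	htlcStateOpen htlcState = iota
	htlcStateAccepted
	htlcStateSettled
	htlcStateCanceled
)

// shouldRunAcceptChecks returns true only for HTLCs that are still open, so
// the expiry/amount constraints are enforced just once, on first arrival.
// An already-accepted HTLC that reappears on restart is left alone instead
// of being failed back.
func shouldRunAcceptChecks(s htlcState) bool {
	return s == htlcStateOpen
}

func main() {
	fmt.Println(shouldRunAcceptChecks(htlcStateOpen))     // true: first pass
	fmt.Println(shouldRunAcceptChecks(htlcStateAccepted)) // false: skip on restart
}
```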
Assuming a graph size of 50,000 channels, an interval of 20 minutes
would cause nodes to consume about 600MB per month in bandwidth doing
these routine historical sync spot checks. In this commit, we increase
the interval to one hour, which consumes about 300MB per month.
This commit adds a brief delay when sending our channel reestablish
message if the link contains a restored channel to ensure we first have
a stable connection. Sending the message will cause the remote peer to
force close the channel, which currently may not be resumed reliably if
the connection is being torn down simultaneously. This delay can be
removed once the force close is reliable, but in the meantime it
improves the reliability of successfully closing out the channel and
allows the `channel_backup_restore/restore_during_creation` test to pass
reliably.
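As a sketch of the ordering described above (the delay constant, function
signature, and quit-channel pattern here are assumptions for illustration,
not lnd's actual link code):

```go
package main

import (
	"fmt"
	"time"
)

// reestablishDelay is an illustrative grace period before sending
// channel_reestablish for a restored channel, giving the fresh connection a
// moment to stabilize before the peer reacts by force closing.
const reestablishDelay = 5 * time.Second

// sendChanSync delays the send only when the link contains a restored
// channel, and bails out if the connection is torn down in the meantime.
func sendChanSync(restored bool, quit <-chan struct{}, send func()) {
	if restored {
		select {
		case <-time.After(reestablishDelay):
		case <-quit:
			return // connection torn down before we got to send
		}
	}
	send()
}

func main() {
	quit := make(chan struct{})
	sendChanSync(true, quit, func() { fmt.Println("channel_reestablish sent") })
}
```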
In this commit, we fix a slight bug in the existing implementation that
would prevent channel recovery if the recovering node was already
connected to their channel peer. As we need the link to be known at the
time of connection, if we're already connected, then the chan sync
message won't be sent again. By first disconnecting an existing peer, we
ensure that during the next connection (after the recovered channel is
added to the database), the regular chan sync message exchange will
take place as expected.
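The ordering this relies on can be sketched as follows; the peer-manager
type and its methods are made up for illustration and only show the
disconnect, persist, reconnect sequence described above.

```go
package main

import "fmt"

// peerManager is a toy stand-in for the daemon's peer handling.
type peerManager struct {
	connected map[string]bool
}

func (p *peerManager) isConnected(pub string) bool { return p.connected[pub] }

func (p *peerManager) disconnect(pub string) {
	delete(p.connected, pub)
	fmt.Println("disconnected", pub)
}

func (p *peerManager) connect(pub string) {
	p.connected[pub] = true
	// On a fresh connection the restored channel is now in the database, so
	// the normal chan sync exchange takes place for it.
	fmt.Println("connected", pub, "- chan sync exchanged for restored channels")
}

// restoreChannel shows the order of operations: drop any existing connection
// first, persist the recovered channel, then reconnect so the chan sync
// message is actually sent for it.
func restoreChannel(p *peerManager, peerPub string, persist func()) {
	if p.isConnected(peerPub) {
		p.disconnect(peerPub)
	}
	persist() // add the recovered channel to the database
	p.connect(peerPub)
}

func main() {
	p := &peerManager{connected: map[string]bool{"02peer": true}}
	restoreChannel(p, "02peer", func() {
		fmt.Println("recovered channel written to db")
	})
}
```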
In this commit, we modify the starting link logic to always send the
chan sync message to the remote peer in a synchronous manner. Otherwise,
it's possible that we fail very quickly below this block, and don't ever
send the message to the remote peer.
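A compact sketch of that ordering change is below; the function and
parameter names are illustrative only and stand in for the link's start-up
logic.

```go
package main

import (
	"errors"
	"fmt"
)

// startLink sends the chan sync (reestablish) message synchronously before
// the rest of the start-up logic runs, so an early failure further down can
// no longer prevent it from going out.
func startLink(sendChanSync func() error, rest func() error) error {
	// Block until the message has been handed off, instead of firing it off
	// asynchronously and racing the failure path below.
	if err := sendChanSync(); err != nil {
		return fmt.Errorf("unable to send chan sync: %w", err)
	}
	return rest()
}

func main() {
	err := startLink(
		func() error { fmt.Println("channel_reestablish sent"); return nil },
		func() error { return errors.New("start-up failed after send") },
	)
	fmt.Println(err)
}
```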
In this commit, we fix a bug in the existing logic for ConnectPeer that
would cause an SCB restore to fail if we were already connected to the
peer. To fix this, we now simply return success if we're already
connected to the peer.
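A minimal sketch of the fixed behaviour follows; the function, sentinel
error, and dial callback are hypothetical names used only to show the
"already connected counts as success" handling.

```go
package main

import (
	"errors"
	"fmt"
)

var errAlreadyConnected = errors.New("already connected to peer")

// connectPeer treats an "already connected" condition as success rather than
// an error, so an SCB restore that finds an existing connection no longer
// fails at this step.
func connectPeer(pub string, dial func(string) error) error {
	err := dial(pub)
	if errors.Is(err, errAlreadyConnected) {
		// Previously this bubbled up as a failure; now we report success.
		return nil
	}
	return err
}

func main() {
	dial := func(string) error { return errAlreadyConnected }
	fmt.Println(connectPeer("02peer", dial)) // <nil>
}
```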