This serves as a stop-gap for light clients, as blocks need to be
downloaded from the P2P network and, even with caches, would be too
costly for them to verify. Doing this has two side effects, however:
we'll no longer know a channel's capacity or its outpoint, both of
which are essential for some of lnd's responsibilities.
In this commit, we disable attempting to determine when a channel has
been closed out on-chain whenever AssumeChannelValid is active. Since
the flag indicates that performing this operation is expensive, we do
this as a temporary optimization until we can include proofs of channels
being closed in the gossip protocol.
With this change, the only way for channels to be removed from the
graph will be once they're considered zombies, which can happen when
both edges of a channel have their disabled bits set or when neither
edge has had an update within the past two weeks.
To avoid marking an edge as live again just because an update with a
fresh timestamp was received, we'll reject any new updates for zombie
channels if they remain disabled when running with AssumeChannelValid.
In this commit, we add an additional heuristic when running with
AssumeChannelValid. Since the presence of AssumeChannelValid implies
that we're not able to quickly determine whether channels are valid,
we also assume that any channel with the disabled bit set on both
sides is a zombie. This should be relatively safe to do, since the
disabled bits are usually set when the channel is closed on-chain. In
the case that they aren't, we'll have to wait until neither edge has
had a new update within two weeks before pruning the channel.
We do this to ensure we don't prune too aggressively, as it's possible
that we've only received the channel announcement for a channel, but not
its accompanying channel updates.
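As a hedged illustration of the heuristic above, the pruning decision
boils down to the following sketch (the type and field names are
hypothetical, not lnd's actual identifiers):

```go
package main

import (
	"fmt"
	"time"
)

// chanExpiry is the staleness window after which an edge without
// updates is considered stale (two weeks, per the text above).
const chanExpiry = 14 * 24 * time.Hour

// edgePolicy is a hypothetical stand-in for a channel edge's latest
// update: its disabled bit and update timestamp.
type edgePolicy struct {
	disabled   bool
	lastUpdate time.Time
}

// isZombie reports whether a channel should be treated as a zombie
// when running with AssumeChannelValid: both edges have their
// disabled bit set, or neither edge has updated within chanExpiry.
// A missing edge (nil) never counts as disabled, and its staleness
// falls back to the announcement's age, so a channel for which we've
// only seen the announcement isn't pruned prematurely.
func isZombie(e1, e2 *edgePolicy, announced, now time.Time) bool {
	bothDisabled := e1 != nil && e2 != nil &&
		e1.disabled && e2.disabled

	stale := func(e *edgePolicy) bool {
		if e == nil {
			return now.Sub(announced) > chanExpiry
		}
		return now.Sub(e.lastUpdate) > chanExpiry
	}

	return bothDisabled || (stale(e1) && stale(e2))
}

func main() {
	now := time.Now()
	live := &edgePolicy{lastUpdate: now}
	dead := &edgePolicy{disabled: true, lastUpdate: now}
	fmt.Println(isZombie(dead, dead, now, now)) // true: both disabled
	fmt.Println(isZombie(live, nil, now, now))  // false: e1 is fresh
}
```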
This new version of neutrino includes a number of fixes to its query
functionality, addresses stuck syncing issues, and improves peer
connection handling and banning.
Now that the official releases are built with subservers, can/should we build the docker images with subservers by default too?
This one-line change is all I needed to build lnd with subservers so that loop would work with the docker image.
This enables users to specify an external API for fee estimation.
The API is expected to return fees in the JSON format:
`{
    fee_by_block_target: {
        a: x,
        b: y,
        ...
        c: z
    }
}`
where a, b, c are block targets and x, y, z are fees in sat/kb.
Note that a, b, c need not be contiguous.
In this commit, we add a new interface which will allow callers to drop
in an arbitrary Web API for fee estimation with an arbitrary
request/response schema.
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
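For reference, here's a minimal sketch of what such a drop-in fee
source could look like; the interface and type names are illustrative
assumptions, not lnd's actual API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// WebFeeSource is a hypothetical version of the interface: it knows
// which URL to query and how to parse whatever schema comes back
// into a map of block target -> fee rate in sat/kb.
type WebFeeSource interface {
	QueryURL() string
	ParseResponse(r io.Reader) (map[uint32]uint32, error)
}

// feeByBlockTargetSource parses the `fee_by_block_target` JSON
// format described earlier.
type feeByBlockTargetSource struct {
	url string
}

func (s *feeByBlockTargetSource) QueryURL() string { return s.url }

func (s *feeByBlockTargetSource) ParseResponse(
	r io.Reader) (map[uint32]uint32, error) {

	var resp struct {
		Fees map[uint32]uint32 `json:"fee_by_block_target"`
	}
	if err := json.NewDecoder(r).Decode(&resp); err != nil {
		return nil, err
	}
	return resp.Fees, nil
}

func main() {
	// The URL and response body are made up for this example.
	src := &feeByBlockTargetSource{url: "https://fees.example.com/v1"}
	body := `{"fee_by_block_target": {"2": 25000, "6": 12500}}`

	fees, err := src.ParseResponse(strings.NewReader(body))
	if err != nil {
		panic(err)
	}
	fmt.Println(fees) // map[2:25000 6:12500]
}
```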
The check prevented the creation of port forwardings that were assumed
to already be present. After this change, port forwardings that might
have been removed from the NAT device can be re-created.
This commit removes the QueryRoutes route cache. It is causing wrong
routes to be returned because not all of the request parameters are
stored.
The cache allowed high-frequency QueryRoutes calls to the same
destination and with the same amount to be answered quickly. This
behaviour can also be achieved by caching the request on the client
side. In case a route is invalidated, for example by a channel update,
the subsequent SendToRoute call will fail. That failure is a trigger
to call QueryRoutes again for a fresh route.
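A hedged sketch of that client-side pattern (routerClient below is a
hypothetical stand-in for the actual RPC client, with routes reduced
to opaque strings):

```go
package routecache

import "fmt"

// routerClient is a hypothetical stand-in for the RPC client's
// QueryRoutes/SendToRoute calls.
type routerClient interface {
	QueryRoutes(dest string, amt int64) (route string, err error)
	SendToRoute(route string) error
}

// sendCached reuses a cached route for (dest, amt) and only falls
// back to QueryRoutes when no route is cached or the cached one
// fails, e.g. because a channel update invalidated it.
func sendCached(c routerClient, cache map[string]string,
	dest string, amt int64) error {

	key := fmt.Sprintf("%s/%d", dest, amt)

	if route, ok := cache[key]; ok {
		if err := c.SendToRoute(route); err == nil {
			return nil
		}
		// The cached route is stale; fall through to re-query.
	}

	route, err := c.QueryRoutes(dest, amt)
	if err != nil {
		return err
	}
	cache[key] = route
	return c.SendToRoute(route)
}
```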
TCP addresses resolved through net.ResolveTCPAddr give a default
network of "tcp", so we map the resolved address back to its concrete
network. This ensures that we can listen on the correct interface
(IPv4 vs IPv6).
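A minimal sketch of the mapping, using only the standard library
(listenTCP is a made-up helper name):

```go
package main

import (
	"fmt"
	"net"
)

// listenTCP resolves addr and recovers the concrete "tcp4"/"tcp6"
// network that net.ResolveTCPAddr collapses into plain "tcp", so the
// listener binds on the intended interface.
func listenTCP(addr string) (net.Listener, error) {
	tcpAddr, err := net.ResolveTCPAddr("tcp", addr)
	if err != nil {
		return nil, err
	}

	network := "tcp6"
	if tcpAddr.IP.To4() != nil {
		network = "tcp4"
	}

	return net.Listen(network, tcpAddr.String())
}

func main() {
	l, err := listenTCP("127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer l.Close()
	fmt.Println("listening on", l.Addr())
}
```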
In this commit, we fix a bug that would cause a node with a hodl HTLC
to cancel back the HTLC upon restart if the invoice has been settled
but the HTLC is still present on the commitment transaction. A fix for
the HTLC still being present (not triggering a new commitment) landed
recently. However, for older nodes with a lingering HTLC, the HTLC
would be failed back on restart.
In this commit, we make the check stricter by only performing these
checks for HTLCs that are in the open state. This ensures that we'll
only check these constraints the first time around, before the HTLC
has been transitioned to the accepted state.
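In sketch form, with hypothetical names standing in for the actual
invoice state machine, the stricter check looks like:

```go
package invoices

import "errors"

// htlcState is a hypothetical stand-in for the HTLC lifecycle.
type htlcState int

const (
	htlcStateOpen     htlcState = iota // first seen, not yet accepted
	htlcStateAccepted                  // locked in, awaiting settle/cancel
)

type htlc struct {
	state  htlcState
	amtSat int64
}

type invoice struct {
	amtSat int64
}

// checkOpenHTLC runs the acceptance constraints only while the HTLC
// is still open. An HTLC that already reached the accepted state
// (e.g. a hodl HTLC lingering across a restart) is skipped, so it
// can no longer be spuriously failed back.
func checkOpenHTLC(h *htlc, inv *invoice) error {
	if h.state != htlcStateOpen {
		return nil
	}
	if h.amtSat < inv.amtSat {
		return errors.New("htlc amount too low")
	}
	return nil
}
```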
Assuming a graph size of 50,000 channels, an interval of 20 minutes
would cause nodes to consume about 600MB per month in bandwidth doing
these routine historical sync spot checks. In this commit, we increase
the interval to one hour, which consumes about 300MB per month.
This commit adds a brief delay when sending our channel reestablish
message if the link contains a restored channel to ensure we first have
a stable connection. Sending the message will cause the remote peer to
force close the channel, which currently may not be resumed reliably if
the connection is being torn down simultaneously. This delay can be
removed once the force close is reliable, but in the meantime it
improves the reliability of successfully closing out the channel and
allows the `channel_backup_restore/restore_during_creation` test to
pass reliably.
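A hedged sketch of the delay (the names and the delay duration are
illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// reestablishDelay is an illustrative settle-down period before
// sending channel_reestablish for a restored channel.
const reestablishDelay = 5 * time.Second

// maybeDelayReestablish waits briefly before the reestablish message
// is sent for restored channels, unless the link shuts down first.
func maybeDelayReestablish(restored bool, quit <-chan struct{}) bool {
	if !restored {
		return true
	}
	select {
	case <-time.After(reestablishDelay):
		return true
	case <-quit:
		return false // connection torn down; don't send
	}
}

func main() {
	quit := make(chan struct{})
	if maybeDelayReestablish(true, quit) {
		fmt.Println("sending channel_reestablish")
	}
}
```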