In this commit, we modify the server to act as the agent that carries
out the SCB restoration protocol if the Init/Unlock methods include a
set of channels to be recovered.
The `openChannelShell` method now includes the new config information
within the open channel shells. Additionally, we now properly re-derive
all keys from our local chan config so they're usable immediately in
the channel state machine.
We extend the `chanDBRestorer.RestoreChansFromSingles` method to also
add the new channels to the chain arbitrator once they've been restored
on disk. We do this to ensure that we catch the channel closure on
chain once the DLP protocol begins.
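A minimal sketch of the new flow, using simplified stand-in types
(`channelShell`, `chainArbitrator`) rather than lnd's actual channeldb
and contractcourt definitions:

```go
package sketch

// channelShell is a trimmed stand-in for the on-disk shell created
// from a Single backup.
type channelShell struct {
	chanPoint string
}

// chainArbitrator watches restored channels so that their on-chain
// closes are detected once the DLP protocol begins.
type chainArbitrator struct {
	watched map[string]struct{}
}

func (c *chainArbitrator) WatchNewChannel(chanPoint string) {
	c.watched[chanPoint] = struct{}{}
}

type chanDBRestorer struct {
	chainArb *chainArbitrator
}

// RestoreChansFromSingles persists the channel shells to disk, then
// hands each restored channel to the chain arbitrator (the step this
// commit adds).
func (r *chanDBRestorer) RestoreChansFromSingles(shells []channelShell) error {
	// ... write the shells to the channel database ...

	for _, shell := range shells {
		r.chainArb.WatchNewChannel(shell.chanPoint)
	}

	return nil
}
```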
In this commit, we modify the filter we use to determine if we should
add a new channel to the switch to reflect the new channel restoration
state. For all other non-default states, we want to avoid loading the
channel, but for the restoration state, we need to load the link in
order to ensure we initiate the data loss protection protocol once we
connect to the remote peer.
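A sketch of the filter logic, with an illustrative status bit rather
than lnd's real channeldb flags:

```go
package sketch

// ChanStatus flags here are illustrative; lnd's real channel state
// flags live in the channeldb package.
type ChanStatus uint8

const (
	ChanStatusDefault  ChanStatus = 0
	ChanStatusRestored ChanStatus = 1 << 3
)

// shouldLoadLink reports whether a channel's link belongs in the
// switch: default channels always, restored channels so the DLP
// handshake fires on reconnect, and all other non-default states
// never.
func shouldLoadLink(status ChanStatus) bool {
	if status == ChanStatusDefault {
		return true
	}

	return status&ChanStatusRestored != 0
}
```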
In this commit, we extend the Init and Unlock methods to also parse
out and return optional SCB instances. With this change, when the user
creates their node, if they have an existing seed and also a set of SCBs
(either single or multi), they'll be able to recover both their on-chain
balance, and also any funds that were settled within their existing
channels.
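A simplified sketch of the extended return value; the type and field
names here are illustrative stand-ins, not lnd's actual walletunlocker
types:

```go
package sketch

// ChannelBackups stands in for the SCB payload that Init/Unlock can
// now hand back: per-channel Singles, a packed Multi, or both.
type ChannelBackups struct {
	Singles [][]byte // one serialized Single per channel
	Multi   []byte   // a serialized Multi covering all channels
}

// InitWalletMsg sketches the extended result: Backups is nil when the
// user isn't recovering any channels.
type InitWalletMsg struct {
	Passphrase []byte
	Seed       []byte
	Backups    *ChannelBackups
}
```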
In this commit, we upgrade the regular KeyRing instance to a
SecretKeyRing instance, as we need the upgraded instance in order to
recover SCBs. Since we don't currently store the full KeyLocator for
the key used to derive a shachain root for each channel, we instead
need to obtain the raw private key to re-derive this value.
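The shape of the upgrade, sketched with trimmed stand-ins that mirror
the keychain package's interfaces rather than their real definitions:

```go
package sketch

// KeyLocator and KeyDescriptor are trimmed stand-ins for the keychain
// package's types.
type KeyLocator struct {
	Family uint32
	Index  uint32
}

type KeyDescriptor struct {
	KeyLocator
	PubKey []byte
}

// KeyRing can only derive public keys.
type KeyRing interface {
	DeriveKey(loc KeyLocator) (KeyDescriptor, error)
}

// SecretKeyRing can additionally reveal the private key behind a
// descriptor, which is what's needed to re-derive a channel's shachain
// root when its full KeyLocator was never stored.
type SecretKeyRing interface {
	KeyRing
	DerivePrivKey(desc KeyDescriptor) ([]byte, error)
}
```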
In this commit, we address an assumption of the gossiper's recently
introduced reliable sender. The reliable sender is currently only used
for messages of unannounced channels, which makes sense, as peers
should be able to retrieve messages from the network once they've been
announced. However, within isMsgStale, we assumed that the reliable
sender would be used for every ChannelUpdate being sent, even if the
channel is already announced, which made checking whether the policy
is stale unnecessary. Since this isn't the case, we should actually
check whether the policy is stale to prevent sending it later on.
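A sketch of the corrected staleness check, with simplified stand-in
parameters: a queued ChannelUpdate is stale only if we already have a
policy for that edge with an equal or newer timestamp:

```go
package sketch

// isChanUpdateStale is a simplified stand-in for the isMsgStale logic
// described above.
func isChanUpdateStale(havePolicy bool, storedTimestamp,
	updateTimestamp uint32) bool {

	// Without a stored policy for this edge, the update can't be
	// stale.
	if !havePolicy {
		return false
	}

	// The update is stale if our stored policy is at least as recent.
	return updateTimestamp <= storedTimestamp
}
```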
In this commit, we address an issue with our router mock in which it
was not properly storing and retrieving edge policies. Previously,
they were appended to a slice of policies, but this breaks down when
attempting to update the same edge twice. Instead, the slice should
contain at most two entries, each one being the latest version of one
direction.
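A sketch of the corrected mock storage, keyed per direction rather
than appended to a slice (types are trimmed stand-ins):

```go
package sketch

// edgePolicy is a trimmed stand-in for the real routing policy type.
type edgePolicy struct {
	timestamp uint32
}

// mockRouter keeps at most one policy per edge direction, overwriting
// on repeat updates instead of appending.
type mockRouter struct {
	// policies[chanID] holds the latest policy for each of the two
	// directions of the edge.
	policies map[uint64][2]*edgePolicy
}

func newMockRouter() *mockRouter {
	return &mockRouter{policies: make(map[uint64][2]*edgePolicy)}
}

func (m *mockRouter) updateEdge(chanID uint64, dir int, p *edgePolicy) {
	pair := m.policies[chanID]
	pair[dir] = p // overwrite the previous version for this direction
	m.policies[chanID] = pair
}
```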
lnd_test: add address validation for send coins
This commit adds a test that checks that when a user calls sendcoins,
the receiving address is validated against the current network. If the
address is not compatible with the current network, an error is
returned to the user.
rpcserver: add a check for compatible network in SendCoins
This commit adds a check in SendCoins that checks whether the receiving
address is compatible with the current network.
Fixes #2677.
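A minimal sketch of such a check using the btcsuite APIs; the exact
wiring inside rpcserver differs:

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/chaincfg"
	"github.com/btcsuite/btcutil"
)

// validateAddr decodes the address and rejects it if it isn't valid
// for the node's active network.
func validateAddr(addrStr string, activeNet *chaincfg.Params) error {
	addr, err := btcutil.DecodeAddress(addrStr, activeNet)
	if err != nil {
		return err
	}

	if !addr.IsForNet(activeNet) {
		return fmt.Errorf("address %v is not valid for %v", addrStr,
			activeNet.Name)
	}

	return nil
}

func main() {
	// The genesis coinbase address is valid only on mainnet, so this
	// check fails against the testnet parameters.
	err := validateAddr(
		"1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa", &chaincfg.TestNet3Params,
	)
	fmt.Println(err)
}
```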
In this commit, we leverage the recently introduced zombie edge index to
quickly reject announcements for edges we've previously deemed
zombies. Care has been taken to ensure we don't reject fresh updates for
edges we've considered zombies.
In this commit, we extend the graph's FetchChannelEdgesByID and
HasChannelEdge methods to also check the zombie index whenever the edge
to be looked up doesn't exist within the edge index. We do this to
signal to callers that the edge is known, but only as a zombie, and
that the only information we have about it is the public keys of the
two nodes involved in the edge.
In the event that an edge does exist within the zombie index, we make
an additional check on the update's edge policies: a timestamp that
falls within the router's pruning window indicates a fresh update,
which we shouldn't reject.
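A sketch of the combined lookup and freshness check, with illustrative
stubs standing in for the real index queries:

```go
package sketch

import "time"

// hasChannelEdge distinguishes three cases: a known live edge, a known
// zombie, and an entirely unknown edge. The two index helpers below
// are illustrative stubs for the real bucket lookups.
func hasChannelEdge(chanID uint64) (exists, isZombie bool) {
	if edgeIndexContains(chanID) {
		return true, false
	}

	// The edge index missed, so fall back to the zombie index.
	return false, zombieIndexContains(chanID)
}

// isFreshUpdate reports whether an update for a zombie edge is recent
// enough (within the router's pruning window) that it shouldn't be
// rejected.
func isFreshUpdate(updateTime time.Time, pruneWindow time.Duration) bool {
	return time.Since(updateTime) < pruneWindow
}

func edgeIndexContains(chanID uint64) bool   { return false } // stub
func zombieIndexContains(chanID uint64) bool { return false } // stub
```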
We mark the edges as zombies when pruning them to ensure we don't
attempt to reprocess them later on. This also applies to channels that
have been removed from the graph due to being stale.
In this commit, we add a zombie edge index to the database. This allows
us to quickly determine across restarts whether we're attempting to
process an edge we've previously deemed a zombie.
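A sketch of what such an index could look like on top of bbolt, the KV
store backing lnd's channeldb; the bucket name and key layout here are
illustrative, not lnd's actual schema:

```go
package sketch

import (
	"encoding/binary"

	bolt "go.etcd.io/bbolt"
)

// zombieBucket is an illustrative bucket name.
var zombieBucket = []byte("zombie-index")

// markZombie records chanID along with the two node public keys, the
// only information retained about a zombie edge.
func markZombie(tx *bolt.Tx, chanID uint64, pub1, pub2 [33]byte) error {
	zombies, err := tx.CreateBucketIfNotExists(zombieBucket)
	if err != nil {
		return err
	}

	var k [8]byte
	binary.BigEndian.PutUint64(k[:], chanID)

	v := make([]byte, 0, 66)
	v = append(v, pub1[:]...)
	v = append(v, pub2[:]...)

	return zombies.Put(k[:], v)
}

// isZombie reports whether chanID was previously marked as a zombie.
func isZombie(tx *bolt.Tx, chanID uint64) bool {
	zombies := tx.Bucket(zombieBucket)
	if zombies == nil {
		return false
	}

	var k [8]byte
	binary.BigEndian.PutUint64(k[:], chanID)

	return zombies.Get(k[:]) != nil
}
```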
In this commit, we modify the primary `signal` package to instead catch
all signals. Before this commit, it would only catch the interrupt
signal sent from the kernel. With this new commit, we'll now also catch
(or attempt to catch): `SIGABRT`, `SIGTERM`, `SIGSTOP`, and `SIGQUIT`.
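A standalone sketch of catching these signals with the standard
library:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	interruptChan := make(chan os.Signal, 1)

	// SIGSTOP can't actually be intercepted on POSIX systems, hence
	// the "attempt to catch" hedge above: Notify will simply never
	// deliver it.
	signal.Notify(
		interruptChan,
		os.Interrupt,
		syscall.SIGABRT,
		syscall.SIGTERM,
		syscall.SIGSTOP,
		syscall.SIGQUIT,
	)

	fmt.Println("received signal:", <-interruptChan)
}
```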
In this commit, we update the `btcwallet` dep to the latest version that
includes a fix related to zero conf spends. Before this new version, if
a single output was spent multiple times by conflicting transactions,
then upon removing them all after a new transaction confirms, a number
of zero conf UTXOs could be left over.
Bumps the default read and write worker counts to be
well above the average number of peers a node has.
Since the worker counts specify only a maximum number
of concurrent read/write workers, the actual usage is
expected to converge to the requirements of the node
anyway. However, in preparation for a major release,
this is a conservative measure to ensure that the
default values aren't so low as to contribute to
network instability.
In this commit, we add a 5 minute idle timer to
the write handler. After catching the write
timeouts, it's been observed that some connections
have trouble reading a message for several hours.
This typically points to a deeper issue w/ the peer
or, e.g., the remote peer having switched networks.
This now
mirrors the idle timeout used in the read handler,
such that we will disconnect a peer if we are unable
to send or receive a message from the peer after 5
minutes.
We also modify the readHandler to drain its
idleTimer's channel in the event that the timer had
already fired, but we successfully read a message.
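The drain follows the standard pattern for safely reusing a
time.Timer, sketched here with an assumed 5 minute idleTimeout:

```go
package sketch

import "time"

const idleTimeout = 5 * time.Minute

// resetIdleTimer restarts the idle timer after a successful read. If
// the timer already fired, its channel may still hold the stale tick,
// which must be drained before Reset so it isn't later mistaken for a
// fresh timeout.
func resetIdleTimer(idleTimer *time.Timer) {
	if !idleTimer.Stop() {
		select {
		case <-idleTimer.C:
		default:
		}
	}

	idleTimer.Reset(idleTimeout)
}
```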
This commit reduces the peer's write timeout to 5s.
Now that the peer catches write timeouts and doesn't
disconnect, this will ensure we spend less time blocking
in the write pool in case others also need to access the
workers concurrently. Slower peers will now block for
at most 5s per attempt, with reattempts subject to
exponential backoff.
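A minimal sketch of a bounded write; lnd's actual write path goes
through the write pool, but the deadline mechanics are the same:

```go
package sketch

import (
	"net"
	"time"
)

// writeTimeout mirrors the new 5s bound.
const writeTimeout = 5 * time.Second

// writeMessage performs a single bounded write: a slow peer now holds
// a write worker for at most writeTimeout per attempt, with reattempts
// backed off exponentially by the caller.
func writeMessage(conn net.Conn, msg []byte) error {
	err := conn.SetWriteDeadline(time.Now().Add(writeTimeout))
	if err != nil {
		return err
	}

	_, err = conn.Write(msg)
	return err
}
```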