In this commit, we add a new type (ChannelShell) along with a new
method, RestoreChannelShells, which allows a caller to insert a series
of channel shells into the database. These channel shells will allow a
restored node to initiate the DLP protocol and recover its set of
existing channels.
When we insert a channel shell, we re-create the original link node, and
also add the outgoing edge to the channel graph. This way we can be sure
that upon startup, we attempt to connect to the remote peers, and that
the normal graph query commands will operate as expected.
If the ChanStatusRestored flag is set, then we don't need to write or
read the set of commits for a channel as they won't exist. This will be
the case when we restore a channel from an SCB.
The new syncNewChannel function will allow callers to insert a new
channel given its OpenChannel struct and a set of addresses for the
channel peer. This new method will also create a new LinkNode for the
peer if one doesn't already exist.
In this commit, we modify the String() method of the ChannelStatus type
to reflect the fact that it's a flag set. With these new changes, we'll
now print the variable name of each assigned bit with a bar delimiting
them all.
In this commit, we add a prefix naming scheme to the ChannelStatus enum
variables. We do this as it enables outside callers to more easily
identify each individual enum variable as a part of the greater
enum-like type.
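To make the two changes above concrete, here is a minimal sketch of a
flag-set String() using the prefixed names; the exact set of flags and
bit assignments are illustrative, not the precise channeldb
definitions:

    package channeldbsketch

    import "strings"

    // ChannelStatus is a bit vector recording a channel's current state.
    type ChannelStatus uint8

    // Prefixed flag names following the scheme described above.
    const (
        ChanStatusDefault           ChannelStatus = 0
        ChanStatusBorked            ChannelStatus = 1 << 0
        ChanStatusCommitBroadcasted ChannelStatus = 1 << 1
        ChanStatusRestored          ChannelStatus = 1 << 2
    )

    // chanStatusFlags pairs each bit with its variable name, in a fixed
    // order so that String() output is deterministic.
    var chanStatusFlags = []struct {
        flag ChannelStatus
        name string
    }{
        {ChanStatusBorked, "ChanStatusBorked"},
        {ChanStatusCommitBroadcasted, "ChanStatusCommitBroadcasted"},
        {ChanStatusRestored, "ChanStatusRestored"},
    }

    // String prints the variable name of each assigned bit, with a bar
    // delimiting them, e.g. "ChanStatusBorked|ChanStatusRestored".
    func (c ChannelStatus) String() string {
        if c == ChanStatusDefault {
            return "ChanStatusDefault"
        }

        var names []string
        for _, f := range chanStatusFlags {
            if c&f.flag == f.flag {
                names = append(names, f.name)
            }
        }
        return strings.Join(names, "|")
    }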
In this commit, we add a new AddrsForNode method. This method will allow
a wrapper struct to implement the new chanbackup.LiveChannelSource
interface, which is required to implement the full SCB feature set.
If the max_htlc field is not found when fetching a ChannelEdgePolicy
from the DB, we treat this as an unknown policy.
This is done to ensure we won't propagate invalid data further. The
data will be overwritten with valid values when we receive an update
for this channel.
It shouldn't be very common, but old data added before this field was
validated could still be lingering in the DB.
Adding this field will allow us to persist an edge's
max HTLC to disk, thus preserving it between restarts.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit:
* we partition lnwire.ChanUpdateFlag into two types (ChanUpdateChanFlags
and ChanUpdateMsgFlags), going from a uint16 to a pair of uint8s
* we rename the ChannelUpdate.Flags to ChannelFlags and add an
additional MessageFlags field, which will be used to indicate the
presence of the optional field HtlcMaximumMsat within the ChannelUpdate.
* we partition ChannelEdgePolicy.Flags into message and channel flags.
This change corresponds to the partitioning of the ChannelUpdate's Flags
field into MessageFlags and ChannelFlags.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
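A sketch of the resulting layout; the names mirror lnwire's, though the
snippet is a simplified illustration rather than the exact wire package
code:

    package lnwiresketch

    // ChanUpdateMsgFlags signals which optional fields are present in a
    // ChannelUpdate message.
    type ChanUpdateMsgFlags uint8

    // ChanUpdateChanFlags describes properties of the channel itself,
    // such as the direction the update applies to and whether the
    // channel is disabled.
    type ChanUpdateChanFlags uint8

    const (
        // ChanUpdateOptionMaxHtlc indicates that the optional
        // HtlcMaximumMsat field is present in the update.
        ChanUpdateOptionMaxHtlc ChanUpdateMsgFlags = 1 << 0

        // ChanUpdateDirection marks which of the two channel directions
        // this update applies to.
        ChanUpdateDirection ChanUpdateChanFlags = 1 << 0

        // ChanUpdateDisabled marks the channel as temporarily disabled.
        ChanUpdateDisabled ChanUpdateChanFlags = 1 << 1
    )

    // hasMaxHtlc reports whether an update advertises a max HTLC value.
    func hasMaxHtlc(msgFlags ChanUpdateMsgFlags) bool {
        return msgFlags&ChanUpdateOptionMaxHtlc != 0
    }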
This commit is a preparation for the addition of new invoice
states. A database migration is not needed because we keep
the same field length and values.
In this commit, we address an issue with the FetchWaitingCloseChannels
method where it would not properly return channels that are unconfirmed
and also have an unconfirmed closing transaction because they were
filtered out. We fix this by also fetching channels that remain
unconfirmed and are waiting on a confirmed closing transaction.
This will allow the recently added test TestFetchWaitingCloseChannels to
pass.
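A minimal sketch of the corrected predicate, using simplified stand-ins
for the channeldb types and status bits:

    package channeldbsketch

    // ChannelStatus and the status bit below are simplified stand-ins
    // for the channeldb definitions.
    type ChannelStatus uint8

    const (
        // ChanStatusCommitBroadcasted is set once the channel's closing
        // transaction has been broadcast.
        ChanStatusCommitBroadcasted ChannelStatus = 1 << 1
    )

    type OpenChannel struct {
        chanStatus ChannelStatus
        IsPending  bool // true while the funding tx is unconfirmed
    }

    // isWaitingClose sketches the corrected filter: a channel belongs in
    // the waiting-close set whenever its closing transaction has been
    // broadcast, even if its funding transaction is still unconfirmed.
    // The old filter excluded any channel with IsPending set, dropping
    // exactly those channels.
    func isWaitingClose(c *OpenChannel) bool {
        return c.chanStatus&ChanStatusCommitBroadcasted != 0
    }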
In this commit, we add a test case for FetchWaitingCloseChannels to
ensure it behaves as intended. Currently, this test fails due to not
fetching channels which are pending to be closed but are also pending to
be opened. This will be fixed in the following commit and should allow
the test to pass.
Instead return ErrGraphNodeNotFound directly. If the node bucket was
created it would be empty, and the call delChannelByEdge ->
fetchChanEdgePolicies -> fetchChanEdgePolicy ->
deserializeChanEdgePolicy -> fetchLightningNode would return this error
anyway.
This commit adds an optional field LastChanSyncMsg to the
CloseChannelSummary, which will be used to save the ChannelReestablish
message for the channel at the point of channel close.
This commit adds a new file legacy_serialization.go, where a copy of the
current deserializeCloseChannelSummary is made, called
deserializeCloseChannelSummaryV6.
The rationale is to keep old deserialization code around to be used
during migration, as it is hard to maintain compatibility with the old
format while changing the code in use.
In this commit, we add a method to the ChannelGraph struct that
determines whether a node is seen as public from the point of view of
the graph's source node.
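A hypothetical usage sketch; the method's exact signature is assumed
from the description above:

    package main

    import (
        "fmt"
        "log"

        "github.com/lightningnetwork/lnd/channeldb"
    )

    func main() {
        // Open the graph database; the path is illustrative.
        db, err := channeldb.Open("/path/to/lnd/graph/dir")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        graph := db.ChannelGraph()

        // A node is considered public if, from the source node's point
        // of view, it has at least one advertised channel.
        var nodePub [33]byte // compressed public key of the node
        isPublic, err := graph.IsPublicNode(nodePub)
        if err != nil {
            log.Fatal(err)
        }

        fmt.Printf("node is public: %v\n", isPublic)
    }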
Previously a call to QueryInvoices with reversed=true and index_offset=1
would make the cursor point to the first available invoice (num 1) that
would be returned as part of the response. This is inconsistent with the
other indexes, so we instead just return an empty list in this case.
A test case for this situation is also added.
In this commit, we ensure that we only create the sub-bucket for
channels once: at the time of creation. We do this as otherwise it's
possible that a method that mutates a channel's state is called after it
has already been closed on-chain, leading to the channel bucket being
recreated.
Using AbandonChannel, a channel can be abandoned. This means
removing all state without any on-chain or off-chain action.
A close summary is the only thing that is stored in the db after
abandoning.
A specific close type Abandoned is added. Abandoned channels
can be retrieved via the ClosedChannels RPC.
This commit modifies the edge update index
migration to avoid repetitive calls to
Cursor.First(). Instead, we construct a set
of all edge update keys to remove, and then do
a second pass to remove each from the bucket.
Additional log messages are included to notify
users of progress, as some have reported long
migration times.
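A sketch of the two-pass pattern against the bbolt API; the bucket
contents and staleness predicate are stand-ins for the migration's real
logic:

    package migrationsketch

    import bbolt "go.etcd.io/bbolt"

    // pruneEdgeUpdateIndex sketches the two-pass approach: one cursor
    // walk to collect every stale key, then a second pass to delete
    // them. This avoids reseeking with Cursor.First() after every
    // deletion.
    func pruneEdgeUpdateIndex(edgeUpdateIndex *bbolt.Bucket,
        shouldRemove func(key []byte) bool) error {

        // Pass 1: build the set of keys to remove. We copy each key,
        // since slices returned by a cursor are unsafe to retain once
        // the bucket is mutated.
        var keysToRemove [][]byte
        c := edgeUpdateIndex.Cursor()
        for k, _ := c.First(); k != nil; k, _ = c.Next() {
            if shouldRemove(k) {
                keyCopy := make([]byte, len(k))
                copy(keyCopy, k)
                keysToRemove = append(keysToRemove, keyCopy)
            }
        }

        // Pass 2: remove each collected key from the bucket.
        for _, key := range keysToRemove {
            if err := edgeUpdateIndex.Delete(key); err != nil {
                return err
            }
        }

        return nil
    }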
In this commit, we fix a bug in the latest database migration when
migrating from 0.4 to 0.5. There's an issue in bolt db where if one
deletes a bucket that has a key with a nil value, it thinks that key is
a sub-bucket and attempts a bucket deletion. This will fail as it's not
actually a sub-bucket. We get around this by using a cursor to manually
delete items in the bucket.
Fixes #1907.
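A sketch of the cursor-based workaround (bbolt API, simplified relative
to the actual migration code):

    package migrationsketch

    import bbolt "go.etcd.io/bbolt"

    // emptyBucketByCursor sketches the workaround: every key in this
    // bucket maps to a nil value, which bolt can mistake for a nested
    // bucket when tearing the bucket down, so we delete the entries one
    // at a time through a cursor before removing the bucket itself.
    func emptyBucketByCursor(b *bbolt.Bucket) error {
        c := b.Cursor()
        for k, _ := c.First(); k != nil; k, _ = c.Next() {
            // Cursor.Delete removes the entry under the cursor.
            if err := c.Delete(); err != nil {
                return err
            }
        }
        return nil
    }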
In this commit, we fix a bug in the latest migration that could cause
the migration to end in a panic. Additionally, we modify the migration
to exit early if the bucket wasn't found, as in this case, no migration
is required.
Fixes #1874.
In this commit, we no longer assume that the bucket hierarchy has been
created properly when applying the latest DB migration. Older nodes that
never obtained a channel graph, or that updated _before_ the query sync
feature was added, are missing buckets that the migration expects to
exist.
We fix this by simply creating the buckets as we go, if needed.
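A sketch of the defensive pattern using bbolt's CreateBucketIfNotExists;
the bucket keys are illustrative:

    package migrationsketch

    import bbolt "go.etcd.io/bbolt"

    // Illustrative bucket keys; the real migration uses channeldb's own.
    var (
        nodeBucket      = []byte("graph-node")
        edgeBucket      = []byte("graph-edge")
        edgeIndexBucket = []byte("edge-index")
    )

    // ensureGraphBuckets creates each level of the graph bucket
    // hierarchy on demand instead of assuming an earlier version of lnd
    // already did so.
    func ensureGraphBuckets(tx *bbolt.Tx) (*bbolt.Bucket, *bbolt.Bucket, error) {
        nodes, err := tx.CreateBucketIfNotExists(nodeBucket)
        if err != nil {
            return nil, nil, err
        }

        edges, err := tx.CreateBucketIfNotExists(edgeBucket)
        if err != nil {
            return nil, nil, err
        }

        // Sub-buckets hang off their parents, so they're created the
        // same way.
        if _, err := edges.CreateBucketIfNotExists(edgeIndexBucket); err != nil {
            return nil, nil, err
        }

        return nodes, edges, nil
    }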
In this commit, we fix a bug in the bucket creation code in
createChannelDB. This bug can cause migrations on older nodes to fail,
as we expect the bucket to already have been created. With this commit,
we ensure that all the buckets under the main node and edges bucket are
properly created. Otherwise, a set of the newer migrations will fail to
apply for nodes updating from 0.4.
In this commit, we move the declaration of the key for an unused bucket.
In the past, this bucket was used to store the revocation forwarding
package log. However, this has been moved under the key
`fwdPackagesKey`.
In this commit, we account for the additional case wherein the
announcement hasn't yet been written with the extra zero byte to
indicate that there aren't any remaining bytes to be read. Before this
commit, we accounted for the case where the announcement was written
with the extra byte, but now we ensure that legacy nodes that upgrade
will be able to boot properly.
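A sketch of EOF-tolerant reading; the length-prefix format shown is
illustrative rather than the exact encoding lnd uses:

    package serializationsketch

    import (
        "encoding/binary"
        "io"
    )

    // readExtraOpaqueData sketches tolerant deserialization: newer
    // writers append a length-prefixed opaque blob (with a zero length
    // meaning "nothing follows"), while legacy writers stopped right
    // after the fixed fields. Both io.EOF and io.ErrUnexpectedEOF
    // therefore mean "no extra data" rather than corruption.
    func readExtraOpaqueData(r io.Reader) ([]byte, error) {
        var length uint16
        err := binary.Read(r, binary.BigEndian, &length)
        switch {
        // Legacy announcement: the stream ends before the prefix.
        case err == io.EOF || err == io.ErrUnexpectedEOF:
            return nil, nil
        case err != nil:
            return nil, err
        }

        if length == 0 {
            return nil, nil
        }

        extra := make([]byte, length)
        if _, err := io.ReadFull(r, extra); err != nil {
            return nil, err
        }
        return extra, nil
    }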
In this commit, we add a new limit on the largest number of extra opaque
bytes that we'll allow to be written per vertex/edge. We do this in
order to bound the amount of disk space that we're exposed to, as it's
possible that nodes may start to pad their announcements, adding an
externalized cost since other nodes may need to continue to store and
relay these large announcements.
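A sketch of the cap; the constant is intended to mirror channeldb's
MaxAllowedExtraOpaqueBytes, and the validation helper is illustrative:

    package serializationsketch

    import "fmt"

    // MaxAllowedExtraOpaqueBytes caps how much opaque, optional data
    // we'll store per vertex/edge.
    const MaxAllowedExtraOpaqueBytes = 10000

    // validateExtraOpaqueBytes rejects oversized blobs before they're
    // ever written to disk, bounding the externalized storage and relay
    // cost.
    func validateExtraOpaqueBytes(extra []byte) error {
        if len(extra) > MaxAllowedExtraOpaqueBytes {
            return fmt.Errorf("extra opaque data is %d bytes, exceeds "+
                "max of %d", len(extra), MaxAllowedExtraOpaqueBytes)
        }
        return nil
    }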
In this commit, we add a mirror set of fields to the ones we recently
added to the set of gossip wire messages. With this set of fields in
place, we ensure that we'll be able to properly store and re-validate
gossip messages that contain a set of extra/optional fields.
In this commit, we ensure that we de-duplicate the set of channel edges
returned from ChanUpdatesInHorizon. Other subsystems within lnd use this
method to retrieve and send all the channels with updates within a time
series to network peers. However, since the method looks at the edge
update index, which can include up to two entries per edge for each
policy, it's possible that we'd send channel announcements and updates
twice, causing extra bandwidth usage.
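A sketch of the de-duplication pass, with minimal stand-ins for the
channeldb types:

    package graphsketch

    // Minimal stand-ins for the channeldb types involved.
    type ChannelEdgeInfo struct {
        ChannelID uint64
    }

    type ChannelEdge struct {
        Info *ChannelEdgeInfo
    }

    // dedupEdges sketches the de-duplication pass: the edge update index
    // can hold an entry per policy (so up to two per channel), and we
    // only want to emit each channel once.
    func dedupEdges(edges []ChannelEdge) []ChannelEdge {
        seen := make(map[uint64]struct{}, len(edges))
        unique := make([]ChannelEdge, 0, len(edges))
        for _, edge := range edges {
            if _, ok := seen[edge.Info.ChannelID]; ok {
                continue
            }
            seen[edge.Info.ChannelID] = struct{}{}
            unique = append(unique, edge)
        }
        return unique
    }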
In this commit, we ensure policies for edges we create in
TestChanUpdatesInHorizon have different update timestamps. This ensures
that there are two entries per edge in the edge update index. As a
result, the test currently fails, since ChanUpdatesInHorizon will return
duplicate channel edges due to looking at all the entries within the
edge update index. This will be addressed in a future commit to allow
the set of tests to pass once again.
In this commit, we introduce a migration to fix some of the recent
issues found w.r.t. the edge update index. The migration attempts to fix
two things:
1) Edge policies include an extra byte at the end due to reading an
extra byte for the node's public key from the serialized node info.
2) Properly prune all stale entries within the edge update index.
As a result of this migration, nodes will have a slightly smaller
channeldb. We will also no longer send stale edges to our peers in
response to their gossip queries, which should also fix the issue of
fetching channel announcements for closed channels.
In this commit, we extend TestChannelEdgePruningUpdateIndexDeletion test
to include one more update for each edge. By doing this, we can
correctly determine whether old entries were properly pruned from the
index once a new update has arrived.
Due to entries within the edge update index having a nil value, the
tests need to be modified to account for this. Previously, we'd assume
that if we were unable to retrieve a value for a certain key, the
entry was non-existent, which is why the improper pruning bug was not
caught. Instead, we'll assert the number of entries to be the expected
value and populate a lookup map to determine whether the correct entries
exist within it.
In this commit, we fix a lingering issue within the edge update index
where entries were not being properly pruned due to an incorrect
calculation of the offset of an edge's last update time. Since the
offset is being determined from the end to the start, we need to
subtract all the fields after an edge policy's last update time from the
total amount of bytes of the serialized edge policy to determine the
correct offset. This was also slightly off as the edge policy included
an extra byte, which has been fixed in the previous commit.
Instead of continuing with the slicing approach, however, we'll switch to
deserializing the raw bytes of an edge's policy to ensure this doesn't
happen in the future when/if the serialization methods change or extra
data is included.
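A sketch of the deserialization-based key reconstruction; the struct and
helper are simplified stand-ins, assuming the index key is an 8-byte
big-endian update timestamp followed by the 8-byte channel ID:

    package migrationsketch

    import (
        "encoding/binary"
        "time"
    )

    var byteOrder = binary.BigEndian

    // ChannelEdgePolicy stands in for the channeldb struct; only the
    // field needed to rebuild the index key is shown.
    type ChannelEdgePolicy struct {
        LastUpdate time.Time
    }

    // edgeUpdateIndexKey rebuilds the key a policy occupies in the edge
    // update index. Deriving the timestamp from the deserialized policy,
    // rather than slicing the raw bytes at a hand-computed offset, keeps
    // the pruning correct even if the serialization format changes.
    func edgeUpdateIndexKey(policy *ChannelEdgePolicy, chanID []byte) []byte {
        var key [8 + 8]byte
        byteOrder.PutUint64(key[:8], uint64(policy.LastUpdate.Unix()))
        copy(key[8:], chanID)
        return key[:]
    }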
In this commit, we fix an off-by-one error when slicing the public key
from the serialized node info byte slice. This would cause us to write
an extra byte to all edge policies. Even though the values were read
correctly, when attempting to calculate the offset of an edge's update
time going backwards, we'd always be incorrect, causing us to not
properly prune the edge update index.
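An illustrative reconstruction of the off-by-one; the exact offsets in
the real serialization may differ:

    package serializationsketch

    // pubKeyFromNodeInfo illustrates the fix: a compressed secp256k1
    // public key occupies exactly 33 bytes at the start of the
    // serialized node info, but the old slice took one byte too many, so
    // every edge policy written through it picked up a stray trailing
    // byte.
    func pubKeyFromNodeInfo(nodeInfo []byte) []byte {
        return nodeInfo[:33] // previously an off-by-one slice of 34 bytes
    }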
This commit splits FetchPaymentStatus and
UpdatePaymentStatus, such that they each invoke
helper methods that can be composed into different
db txns. This enables us to improve performance on
send/receive, as we can remove the exclusive lock
from the control tower, and allow concurrent calls
to utilize Batch more effectively.
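A sketch of the resulting shape: the public method opens its own
transaction and delegates to a helper that runs against a
caller-supplied txn, so several operations can be coalesced into one db
transaction. Names and bucket layout are simplified stand-ins, not the
exact channeldb code:

    package paymentsketch

    import bbolt "go.etcd.io/bbolt"

    // PaymentStatus and the bucket key are simplified stand-ins for the
    // channeldb definitions.
    type PaymentStatus byte

    const (
        StatusGrounded PaymentStatus = 0
        StatusInFlight PaymentStatus = 1
    )

    var paymentStatusBucket = []byte("payment-status")

    // FetchPaymentStatus opens its own read txn and delegates to the
    // helper below.
    func FetchPaymentStatus(db *bbolt.DB, hash [32]byte) (PaymentStatus, error) {
        var status PaymentStatus
        err := db.View(func(tx *bbolt.Tx) error {
            var err error
            status, err = fetchPaymentStatusTx(tx, hash)
            return err
        })
        return status, err
    }

    // fetchPaymentStatusTx does the actual lookup against a
    // caller-supplied txn, so reads and writes can be composed into a
    // single transaction (e.g. one Batch invocation) without an
    // external lock.
    func fetchPaymentStatusTx(tx *bbolt.Tx, hash [32]byte) (PaymentStatus, error) {
        bucket := tx.Bucket(paymentStatusBucket)
        if bucket == nil {
            return StatusGrounded, nil
        }

        statusBytes := bucket.Get(hash[:])
        if statusBytes == nil {
            return StatusGrounded, nil
        }

        return PaymentStatus(statusBytes[0]), nil
    }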