During the restore process, it's possible that we've already heard about
our prior edge from a node on the network (or from our channel peers). As
a result, we shouldn't exit if this happens, and should instead continue
with the rest of the restoration process.
In this commit, we modify the `AddrsForNode` method to no longer fail if
no graph node is found. We do this because when the backup is being
restored/created, it's possible that we don't yet have a
NodeAnnouncement for that node.
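As a rough sketch (the helper `fetchGraphNode` is illustrative rather
than the actual lnd API), the tolerant behavior boils down to treating a
not-found error as an empty result:

```go
// addrsForNode returns the addresses we know for a node, treating an
// unknown graph node as "no addresses yet" rather than a hard failure,
// since during a restore we may not have its NodeAnnouncement yet.
func addrsForNode(tx *bolt.Tx, nodePub []byte) ([]net.Addr, error) {
	node, err := fetchGraphNode(tx, nodePub) // illustrative helper
	switch {
	// Not an error during restore: we simply haven't heard a
	// NodeAnnouncement for this node yet.
	case err == ErrGraphNodeNotFound:
		return nil, nil

	case err != nil:
		return nil, err
	}

	return node.Addresses, nil
}
```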
In this commit, we move the location where we restore the channel status
to within the `RestoreChannelShells` method itself. Before this commit,
we attempted to use `ApplyChanStatus` which creates a DB transaction and
relies on a fully populated channel state, which in the restoration
case, we don't yet have.
In this commit, we add a zombie edge index to the database. This allows
us to quickly determine across restarts whether we're attempting to
process an edge we've previously deemed a zombie.
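A rough sketch of how such an index can be kept in bolt (the bucket name
and key layout here are assumptions, not necessarily lnd's actual ones):

```go
var zombieBucket = []byte("zombie-index")

// markZombieEdge records a channel ID as a known zombie so that, across
// restarts, we can cheaply skip edges we've already rejected.
func markZombieEdge(tx *bolt.Tx, chanID uint64) error {
	zombies, err := tx.CreateBucketIfNotExists(zombieBucket)
	if err != nil {
		return err
	}

	var key [8]byte
	binary.BigEndian.PutUint64(key[:], chanID)
	return zombies.Put(key[:], []byte{1})
}

// isZombieEdge reports whether a channel ID was previously marked as a
// zombie.
func isZombieEdge(tx *bolt.Tx, chanID uint64) bool {
	zombies := tx.Bucket(zombieBucket)
	if zombies == nil {
		return false
	}

	var key [8]byte
	binary.BigEndian.PutUint64(key[:], chanID)
	return zombies.Get(key[:]) != nil
}
```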
In this commit, we introduce a migration for the message store
sub-bucket that will migrate all keys within it to a new key format.
This new key format is composed of the peer's public key, followed by
the short channel ID, followed by the message type. This migration is
needed in order to remain backwards-compatible with messages that were
stored before the introduction of the new key format.
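The new key layout can be sketched as a simple concatenation (a sketch
based on the description above, assuming a 33-byte compressed public
key, an 8-byte short channel ID, and a 2-byte message type):

```go
// messageStoreKey builds the new key format for the message store:
// peer pubkey || short channel ID || message type.
func messageStoreKey(peerPub [33]byte, shortChanID uint64, msgType uint16) []byte {
	key := make([]byte, 33+8+2)
	copy(key[:33], peerPub[:])
	binary.BigEndian.PutUint64(key[33:41], shortChanID)
	binary.BigEndian.PutUint16(key[41:], msgType)
	return key
}
```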
In this commit, we add a new type (ChannelShell) along with a new
method, RestoreChannelShells, which allows a caller to insert a series
of channel shells into the database. These channel shells will allow a
restored node to initiate the DLP protocol and recover their set of
existing channels.
When we insert a channel shell, we re-create the original link node, and
also add the outgoing edge to the channel graph. This way we can be sure
that upon start up, we attempt to connect to the remote peers, and that
the normal graph query commands will operate as expected.
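A shell only needs enough state to contact the peer again and run the
data loss protection (DLP) protocol; as a sketch (treat the exact fields
as illustrative):

```go
// ChannelShell is a skeletal channel: just enough state for a restored
// node to locate its peer again and initiate the DLP protocol in order
// to recover its funds.
type ChannelShell struct {
	// NodeAddrs are the known addresses of the remote peer, used to
	// re-establish a persistent connection on start up.
	NodeAddrs []net.Addr

	// Chan is a partially populated channel state: only the fields
	// needed for DLP (identity keys, channel point, and so on) are
	// set.
	Chan *OpenChannel
}
```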
In this commit, we add a prefix naming scheme to the ChannelStatus enum
variables. We do this as it enables outside callers to more easily
identify each individual enum variable as part of the greater enum-like
type.
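For example, after the rename each variant reads as part of the enum
(the concrete names and bit values below are representative, not an
exhaustive list):

```go
// ChannelStatus is a bit vector tracking where a channel is within the
// open/close state machine.
type ChannelStatus uint8

const (
	// ChanStatusDefault is the normal state of an open channel.
	ChanStatusDefault ChannelStatus = 0

	// ChanStatusBorked indicates that the channel has entered an
	// irreconcilable state.
	ChanStatusBorked ChannelStatus = 1

	// ChanStatusCommitBroadcasted indicates that a commitment
	// transaction for this channel has been broadcast.
	ChanStatusCommitBroadcasted ChannelStatus = 1 << 1
)
```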
In this commit, we add a new AddrsForNode method. This method will allow
a wrapper struct to implement the new chanbackup.LiveChannelSource
interface, which is required to implement the full SCB feature set.
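The interface being targeted looks roughly like the following (a sketch
of chanbackup.LiveChannelSource; the methods besides AddrsForNode are
assumptions based on what the SCB flow needs):

```go
// LiveChannelSource is any type that can provide the set of currently
// live channels, along with the addresses needed to reach their peers.
type LiveChannelSource interface {
	// FetchAllChannels returns all currently open channels.
	FetchAllChannels() ([]*channeldb.OpenChannel, error)

	// AddrsForNode returns all known addresses for a node's
	// identity public key.
	AddrsForNode(nodePub *btcec.PublicKey) ([]net.Addr, error)
}
```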
In this commit, we address an issue with the FetchWaitingCloseChannels
method where it would not properly return channels that are unconfirmed
and also have an unconfirmed closing transaction, as these were filtered
out. We fix this by also fetching channels that remain unconfirmed but
are waiting for a confirmed closing transaction.
This will allow the recently added test TestFetchWaitingCloseChannels to
pass.
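A sketch of the corrected filter (the predicate name is illustrative):
a channel is waiting close whenever it carries a non-default status,
regardless of whether its funding transaction has confirmed:

```go
// isWaitingClose reports whether a channel is waiting for its closing
// transaction to confirm. Crucially, we no longer require the funding
// transaction itself to be confirmed: an unconfirmed channel whose
// commitment has already been broadcast must still be returned.
func isWaitingClose(c *OpenChannel) bool {
	return c.ChanStatus() != ChanStatusDefault
}
```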
In this commit, we fix a bug in the bucket creation code in
createChannelDB. This bug can cause migrations on older nodes to fail,
as we expect the bucket to already have been created. With this commit,
we ensure that all the buckets under the main node and edges bucket are
properly created. Otherwise, a set of the newer migrations will fail to
apply for nodes updating from 0.4.
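A sketch of the defensive bucket creation (the bucket-name constants are
illustrative):

```go
// createBuckets ensures the top-level node and edges buckets, along
// with their nested index buckets, exist before any migration runs.
func createBuckets(tx *bolt.Tx) error {
	nodes, err := tx.CreateBucketIfNotExists(nodeBucket)
	if err != nil {
		return err
	}
	if _, err := nodes.CreateBucketIfNotExists(aliasIndexBucket); err != nil {
		return err
	}

	edges, err := tx.CreateBucketIfNotExists(edgeBucket)
	if err != nil {
		return err
	}
	if _, err := edges.CreateBucketIfNotExists(edgeIndexBucket); err != nil {
		return err
	}
	_, err = edges.CreateBucketIfNotExists(edgeUpdateIndexBucket)
	return err
}
```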
In this commit, we introduce a migration to fix some of the recent
issues found w.r.t. the edge update index. The migration attempts to fix
two things:
1) Edge policies include an extra byte at the end due to reading an
extra byte for the node's public key from the serialized node info.
2) Properly prune all stale entries within the edge update index.
As a result of this migration, nodes will have a slightly smaller
channeldb. We will also no longer send stale edges to our peers in
response to their gossip queries, which should also fix the issue of
fetching channel announcements for closed channels.
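A heavily simplified sketch of the first fix (the real migration also
validates each policy and prunes the update index): re-write each stored
edge policy, dropping the stray trailing byte:

```go
// migrateEdgePolicies re-serializes every edge policy, stripping the
// extra trailing byte that was mistakenly read as part of the node's
// public key.
func migrateEdgePolicies(tx *bolt.Tx) error {
	edges := tx.Bucket(edgeBucket)
	if edges == nil {
		return nil
	}

	// Gather the policies first: bolt forbids mutating a bucket while
	// iterating over it, and returned values must be copied.
	policies := make(map[string][]byte)
	err := edges.ForEach(func(k, v []byte) error {
		// Nested buckets show up with nil values; skip them.
		if len(v) > 0 {
			policies[string(k)] = append([]byte(nil), v...)
		}
		return nil
	})
	if err != nil {
		return err
	}

	// Re-store each policy with the spurious final byte dropped.
	for k, v := range policies {
		if err := edges.Put([]byte(k), v[:len(v)-1]); err != nil {
			return err
		}
	}
	return nil
}
```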
The commit ensures that for every channel, there will always
be two entries in the edges bucket. If the policy from one or
both ends of the channel is unknown, it is marked as such.
This allows efficient lookup of incoming edges. This is
required for backwards payment path finding.
In this commit, we extend the server's functionality to prune link nodes
on startup. Since we currently only decide whether to prune a link node
from the database based on a channel close, it's possible that we have
link nodes lingering from before this functionality was added.
In this commit, we migrate the database away from a partially migrated
state. In a prior commit, we migrated the database in order to update
the Invoice struct with three new fields: add index, settle index, paid
amt. However, it was overlooked that the OutgoingPayment struct also
embedded an Invoice within it. As a result, nodes that upgraded to the
first migration found themselves unable to start up, or to call
listpayments, as the internal invoice within the OutgoingPayment hadn't
yet been updated. This would typically result in an OOM, as we went to
allocate a slice with an integer that should have been small, but may
have actually ended up being a set of random bytes, and so a very large
number.
In this commit, we finish the DB migration by also migrating the
internal invoice within each OutgoingPayment.
Fixes #1538.
Fixes #1546.
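A rough shape of the fix (the serialization helpers are illustrative
stand-ins for the package's legacy and current codecs):

```go
// migrateOutgoingPayments finishes the earlier invoice migration by
// re-encoding the invoice embedded within every OutgoingPayment.
func migrateOutgoingPayments(tx *bolt.Tx) error {
	payments := tx.Bucket(paymentBucket)
	if payments == nil {
		return nil
	}

	// Decode with the old format and re-encode with the new one,
	// gathering first and writing after, as bolt forbids mutating a
	// bucket mid-iteration.
	migrated := make(map[string][]byte)
	err := payments.ForEach(func(k, v []byte) error {
		payment, err := deserializeOutgoingPaymentLegacy(v)
		if err != nil {
			return err
		}

		var b bytes.Buffer
		if err := serializeOutgoingPayment(&b, payment); err != nil {
			return err
		}
		migrated[string(k)] = b.Bytes()
		return nil
	})
	if err != nil {
		return err
	}

	for k, v := range migrated {
		if err := payments.Put([]byte(k), v); err != nil {
			return err
		}
	}
	return nil
}
```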
In this commit, we add a new database migration required to update old
databases to the version of the database that tracks the update index
for the nodes and edge policies. The migration is straightforward: we
simply need to populate the new indexes for all the nodes, and then all
the edges.
This commit adds a new method, FetchWaitingCloseChannels, to the
database, used for fetching OpenChannels that have a ChanStatus !=
Default. These are channels that are borked, or that have had a
commitment broadcast and are now waiting for it to confirm.
The fetchChannels method is rewritten to return channels exclusively
based on whether they are pending or waitingClose.
This commit adds a FetchClosedChannel method to the
channeldb, which allows querying based on a known
channel point. This will be used in the nursery to
load channel close summaries, which can be used to
provide more accurate height hints when recovering
from failures.
In this commit we add a new MarkAsOpen method to the OpenChannel
struct. This method replaces the existing MarkChannelAsOpen method
which targeted the database struct itself.
This commit modifies the OpenChannel struct to include the full short
channel ID rather than simply the opening height. This new field will
be needed by an upcoming change to uniformly switch to using short
channel IDs when forwarding HTLCs due to the change in per-hop
payloads.
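A short channel ID packs the funding output's on-chain locator into a
single 8-byte value; a sketch of the usual encoding (this mirrors the
structure of lnwire.ShortChannelID):

```go
// ShortChannelID locates a channel's funding output on-chain.
type ShortChannelID struct {
	BlockHeight uint32 // 3 bytes on the wire
	TxIndex     uint32 // 3 bytes on the wire
	TxPosition  uint16 // 2 bytes on the wire
}

// ToUint64 packs the short channel ID into one integer: 3 bytes of
// block height, 3 bytes of transaction index, 2 bytes of output index.
func (c ShortChannelID) ToUint64() uint64 {
	return uint64(c.BlockHeight)<<40 |
		uint64(c.TxIndex)<<16 |
		uint64(c.TxPosition)
}
```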
This commit adds a field to the OpenChannel struct in order to start
tracking the height at which the funding transaction was initially
broadcast. Other sub-systems within lnd can now use this data to give a
more accurate height hint to the ChainNotifier, or to decide during the
funding workflow whether a channel should be forgotten after it fails
to confirm for N blocks.
This commit modifies the OpenChannel structure on-disk to also track
the opening height of a channel. This change is being made in order to
make lnd more light-client friendly. A follow up commit will modify
several areas of the codebase to use this new functionality.
This commit adds a new method to the database which allows callers to
mark a channel as fully closed once certain conditions have been
reached. If a channel was cooperatively closed, then it can be marked
as fully closed as soon as the closing transaction has been confirmed.
If the channel was marked as force closed, then it should be marked as
closed as soon as _all_ the funds in limbo have been swept.
This commit builds upon the prior commit by adding a new method that
allows callers to query the state of all fully closed and pending-close
channels within the database.
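A sketch of how a caller might use the resulting query surface (the
field names on the returned summaries are representative):

```go
// Fetch every close that has yet to fully resolve, e.g. force closes
// with funds still in limbo.
pendingCloses, err := db.FetchClosedChannels(true)
if err != nil {
	return err
}
for _, summary := range pendingCloses {
	fmt.Printf("channel %v still pending close\n", summary.ChanPoint)
}
```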
In order to facilitate persistence during the funding process, we added
the isPending flag to channels so that when the daemon restarts, we can
properly re-initialize the chain notifier and update the state of
channels that were going through the funding process.
This commit fixes a bug which would previously lead to corruption of
the channel state when a node had one or more channels open and one of
them was closed either forcibly or cooperatively. The source of the bug
itself was a typo: rather than using the constructed `deliveryKey`
variable to fetch/put/delete the delivery scripts, `deliveryScriptsKey`
(the key prefix itself) was used. This bug would cause the database to
be unable to read _any_ channel from the database after one was
deleted, as each channel would actually be reading/writing the
_exact same_ delivery script.
The fix for the bug itself is simple: eliminate the typo.
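In essence (reconstructed for illustration; `chanID` and the
key-derivation step stand in for the real code):

```go
// Before: the raw key prefix was used directly, so every channel's
// delivery scripts collided on the exact same database key.
scripts := nodeChanBucket.Get(deliveryScriptsKey)

// After: a per-channel key derived from the prefix plus the channel's
// identifier is used instead.
deliveryKey := append(append([]byte(nil), deliveryScriptsKey...), chanID...)
scripts = nodeChanBucket.Get(deliveryKey)
```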
Previously, the edge index bucket, which maps a channelPoint ->
channelID, wasn't properly created on start up during the initial
creation of the database. This caused some extraneous failures, as
queries would unnecessarily fail with bucket non-existence errors.
To fix this, we now properly create the bucket on start up if the
database doesn't exist, and also properly delete the bucket within the
Wipe() function.
This commit adds support for channel graph pruning, which is the method
used to keep the channel graph in sync with the current UTXO state. As
the channel graph is essentially a subset of the UTXO set, by
evaluating the channel graph against the set of outpoints spent within
a block, we're able to prune channels that've been closed by spending
their funding outpoint. A new method `PruneGraph` has been provided
which implements the described functionality.
Upon start up, any upper routing layers should sync forward in the
chain, pruning the channel graph with each newly found block. In order
to facilitate such channel graph reconciliation, a new method
`PruneTip` has been added which allows callers to query the current
pruning state of the channel graph.
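A sketch of how an upper layer might drive this reconciliation
(`ChainSource` and its methods are assumptions, and the signatures are
simplified relative to the real API):

```go
// syncGraph walks forward from the graph's current prune tip, pruning
// any channels whose funding outpoint was spent within each newly
// found block.
func syncGraph(graph *ChannelGraph, chain ChainSource) error {
	_, pruneHeight, err := graph.PruneTip()
	if err != nil {
		return err
	}

	for height := pruneHeight + 1; height <= chain.BestHeight(); height++ {
		block, err := chain.BlockAt(height)
		if err != nil {
			return err
		}

		// Every spent outpoint that funded a channel corresponds
		// to a channel that has now been closed.
		_, err = graph.PruneGraph(
			block.SpentOutPoints(), block.Hash(), height,
		)
		if err != nil {
			return err
		}
	}
	return nil
}
```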
This commit introduces a new capability to the database: storage of an
on-disk directed channel graph. The on-disk representation of the graph
within boltdb is essentially a modified adjacency list which separates
the storage of the edge’s existence and the storage of the edge
information itself.
The new objects provided within the ChannelGraph carry an API which
facilitates easy graph traversal via their ForEach* methods. As a
result, path finding algorithms can be expressed in a natural way using
the range methods as a for-range language extension within Go.
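For instance, a traversal can range over a node's channels directly (a
sketch; the real callbacks carry additional parameters such as both
directed edge policies):

```go
// Sum the capacity of every channel adjacent to a node by ranging
// over its edges, without loading the entire graph into memory.
var total btcutil.Amount
err := node.ForEachChannel(func(edge *ChannelEdgeInfo) error {
	total += edge.Capacity
	return nil
})
if err != nil {
	return err
}
```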
Additionally, caching will likely be added either at this layer or the
layer above (the RoutingManager) in order to keep queries and outgoing
payments speedy. In a future commit, a new set of RPCs to query the
state of a particular edge or node will also be added.
This commit performs a slight refactoring of the internals (and API) of
the [Fetch|Put]Meta methods. The changes are rather minor and simply
eliminate the conditional branching structure by using an internal
function. This new form is much easier to follow.
This commit modifies the composition of the boltdb pointer within the
DB struct to use embedding.
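The embedding itself is a one-line change that lets callers invoke
bolt's transaction API (View, Update, Begin, ...) directly on the DB
(sketch):

```go
// DB embeds a bolt database so its transaction API is promoted onto
// the channeldb type itself, e.g. db.View(...) and db.Update(...).
type DB struct {
	*bolt.DB
	dbPath string
}
```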
The rationale for this change is that the daemon may soon store some
semi-transient items within the database which requires us to expose
the boltdb’s transaction API. The logic for serialization of this data
will likely lie outside of the channeldb package as the items that may
be stored in the future will be specific to the current sub-systems
within the daemon and not generic channel related data.
This commit unexports the SyncVersions function, making it private and
moving it into the .Open() method. With this change, callers no longer
need to worry about ensuring the database version is in sync before
usage, as it will happen automatically once the database is opened.
This commit also unexports some additional variables within the package
that don't need to be consumed by the public, and exports the
DbVersionNumber attribute on the meta struct. Finally, some minor
formatting fixes have been carried out to ensure the new code more
closely matches the style of the rest of the codebase.
In this commit, an upgrade mechanism for the database was added which
makes the current schema rigid and upgradeable. An additional bucket,
'metaBucket', was added which stores keys that house meta-data related
to the current version/state of the database. 'createChannelDB' was
modified to create this new bucket+key during initialization. Also,
backup logic was added which makes a complete copy of the current
database during the migration process and restores the previous version
of the database if the migration fails.
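A sketch of the overall flow (all helper names are illustrative; the
real code snapshots the database file before applying migrations):

```go
// syncVersions applies any outstanding migrations in order, restoring
// the pre-migration copy of the database if any step fails.
func syncVersions(db *DB, migrations []func(*bolt.Tx) error) error {
	dbVersion, err := db.FetchVersion() // read from metaBucket
	if err != nil {
		return err
	}
	if int(dbVersion) >= len(migrations) {
		return nil // already up to date
	}

	backupPath, err := db.Backup() // full copy of the db file
	if err != nil {
		return err
	}

	for _, migrate := range migrations[dbVersion:] {
		if err := db.Update(migrate); err != nil {
			// Restore the snapshot, then surface the original
			// failure to the caller.
			if rerr := db.RestoreFrom(backupPath); rerr != nil {
				return rerr
			}
			return err
		}
	}
	return nil
}
```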
This commit introduces a new method to channeldb: ‘FetchAllChannels’.
This method can be used to obtain the state of all active (currently
open) channels within the database. It can be used to compute basic
channel-based metrics, or be exposed as an RPC in order to allow
clients to display/query channel data.
This commit slightly modifies the existing structure of the channeldb
scheme to replace the former concept of a “nodeID” with simply the
compressed public key of the remote node. This change paves the way for
adding useful indexes mapping a node to all its active channels and
the other way around.
Additionally, the current channeldb code was written before it was
agreed by many of those implementing Lightning that a node's ID will
simply be its compressed public key.
This commit adds a new bucket to the database which is dedicated to
storing data pertaining to p2p reachability for direct channel
counterparties. The data stored in this new bucket can be used within
heuristics when deciding to unilaterally close a channel due to
inactivity. Additionally, all known reachable IP addresses for a
particular LinkNode are to be stored and updated within the database in
order to facilitate the establishment of persistent connections to
direct channel counterparties.
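A sketch of the kind of record kept per counterparty (the fields shown
are representative, not lnd's exact LinkNode definition):

```go
// LinkNode houses p2p reachability data for a direct channel
// counterparty.
type LinkNode struct {
	// IdentityPub is the node's identity public key.
	IdentityPub *btcec.PublicKey

	// LastSeen records when we last had a live connection to the
	// node, feeding inactivity heuristics for unilateral closes.
	LastSeen time.Time

	// Addresses is the set of known reachable addresses, used to
	// re-establish persistent connections on start up.
	Addresses []net.Addr
}
```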
This commit removes the EncryptorDecryptor interface, and all related
usage within channeldb. This interface is no longer needed as wallet
specific secrets such as private keys are no longer stored within the
database.
This commit changes the current behavior around channeldb.Wipe().
Previously, if a channel had never been closed and a wipe was
attempted, then the wipe operation would fail and the transaction would
be rolled back.
This commit fixes this behavior by checking for the
bolt.ErrBucketNotFound error, and handling this specific error as a
noop.
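The fix amounts to a small error-class check (sketch):

```go
// wipeBucket deletes a top-level bucket, treating "never created" the
// same as "already empty".
func wipeBucket(tx *bolt.Tx, key []byte) error {
	err := tx.DeleteBucket(key)
	if err == bolt.ErrBucketNotFound {
		// Nothing was ever stored under this bucket; a noop
		// rather than a failure.
		return nil
	}
	return err
}
```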
This commit fixes a bug which would potentially cause a panic if a
channel returned from FetchOpenChannels attempted to access the
internal pointer to the database.
To fix this bug, the pointer is now properly set once the channel has
been loaded from the database.
This commit introduces the concept of “closing” an already active
channel. Closing a channel causes all the channel state to be purged
from the database, and also triggers the creation of a small “summary”
concerning the details of the previously open channel.
This commit also updates the previous test case(s), and includes the
close channel bucket in the database deletion in the .Wipe() method.
The state of OpenChannel on disk has now been partitioned into several
buckets+keys within the db. At the top level, a set of prefixed keys
store common data updated frequently (with every channel update).
These fields are stored at the top level in order to facilitate prefix
scans, and to avoid read/write amplification due to
serialization/deserialization with each read/write.
Within the active channel bucket, a nested bucket keyed on the node's
ID stores the remainder of the channel.
Additionally, OpenChannel now uses elkrem rather than shachain,
delivery scripts instead of addresses, stores the total net fees, and
splits the csv delay into separate values for the remote and local
nodes.
Several TODOs have been left lingering, to be visited in the near
future.
Commit includes basic tests for Open/Create. Additionally, rather than
relying on btcwallet’s addmgr for encryption/decryption, this package
now exposes a simple crypto system interface.
* Initial draft of brain dump of channeldb. Nothing yet set in stone.
* Will most likely move the storage of all structs to a more “column”
oriented approach, such that small updates like incrementing the total
satoshis sent don't result in the entire struct being serialized and
written.
* Some skeleton structs for other possible data we might want to store
are also included.
* Seems valuable to record as much data as possible for record keeping,
visualization, debugging, etc. Will need to set up a time+space+dirty
cache to ensure performance isn't impacted too much.