This commit modifies address handling in the NodeAnnouncement struct,
switching from net.TCPAddr to []net.Addr. This enables more flexible
address handling, supporting multiple address types and multiple
addresses for each node. This commit addresses the first part of issue
#131.
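
As a rough illustration of the new shape (only the address field is
shown; the rest of the NodeAnnouncement struct is elided), a slice of
the net.Addr interface can hold TCP, onion, or any other address type
side by side:

```go
package main

import (
	"fmt"
	"net"
)

// NodeAnnouncement is trimmed down to the field in question; the real
// struct carries many more fields.
type NodeAnnouncement struct {
	// Addresses replaces the old single net.TCPAddr field, allowing
	// multiple addresses of mixed types per node.
	Addresses []net.Addr
}

func main() {
	tcp := &net.TCPAddr{IP: net.ParseIP("10.0.0.1"), Port: 9735}
	ann := NodeAnnouncement{Addresses: []net.Addr{tcp}}
	for _, addr := range ann.Addresses {
		fmt.Printf("%s %s\n", addr.Network(), addr)
	}
}
```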
This commit adds support for channel graph pruning, which is the method
used to keep the channel graph in sync with the current UTXO state. As
the channel graph is essentially a subset of the UTXO set, by
evaluating the channel graph against the set of outputs spent within a
block we're able to prune channels that've been closed by spending
their funding outpoint. A new method `PruneGraph` has been provided
which implements the described functionality.
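
A compressed, in-memory sketch of the pruning operation follows; the
real graph lives in boltdb, and the OutPoint stand-in below substitutes
for the wire/chainhash types:

```go
package graph

// OutPoint identifies a transaction output (txid:index); a stand-in for
// wire.OutPoint that keeps this sketch self-contained.
type OutPoint struct {
	TxID  string
	Index uint32
}

// ChannelGraph tracks channels keyed by funding outpoint. The real
// store is boltdb-backed; a map keeps the illustration simple.
type ChannelGraph struct {
	channels  map[OutPoint]struct{}
	tipHash   string
	tipHeight uint32
}

// PruneGraph removes every channel whose funding outpoint appears in
// the set of outputs spent within a block, then advances the prune tip
// to that block.
func (c *ChannelGraph) PruneGraph(spent []OutPoint, blockHash string,
	blockHeight uint32) int {

	pruned := 0
	for _, op := range spent {
		if _, ok := c.channels[op]; ok {
			delete(c.channels, op)
			pruned++
		}
	}
	c.tipHash, c.tipHeight = blockHash, blockHeight
	return pruned
}

// PruneTip reports the hash and height of the last block the graph was
// pruned against.
func (c *ChannelGraph) PruneTip() (string, uint32) {
	return c.tipHash, c.tipHeight
}
```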
Upon startup, any upper routing layers should sync forward in the
chain, pruning the channel graph with each newly found block. In order
to facilitate such channel graph reconciliation, a new method
`PruneTip` has been added which allows callers to query the current
pruning state of the channel graph.
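
Building on the sketch above, the startup reconciliation might look
roughly like this; the ChainView interface is purely an assumption of
the sketch, standing in for whatever chain backend the routing layer
uses:

```go
package graph

// ChainView abstracts the chain backend; it is not part of the commit.
type ChainView interface {
	BestHeight() (uint32, error)
	// SpentOutpoints returns the hash of the block at the given
	// height along with the outpoints spent within it.
	SpentOutpoints(height uint32) (string, []OutPoint, error)
}

// syncGraphToChain walks forward from the graph's prune tip to the best
// known height, pruning channels closed in each newly found block.
func syncGraphToChain(graph *ChannelGraph, chain ChainView) error {
	_, tipHeight := graph.PruneTip()

	best, err := chain.BestHeight()
	if err != nil {
		return err
	}

	for height := tipHeight + 1; height <= best; height++ {
		hash, spent, err := chain.SpentOutpoints(height)
		if err != nil {
			return err
		}
		graph.PruneGraph(spent, hash, height)
	}
	return nil
}
```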
This commit introduces a new capability to the database: storage of an
on-disk directed channel graph. The on-disk representation of the graph
within boltdb is essentially a modified adjacency list which separates
the storage of the edge’s existence and the storage of the edge
information itself.
The new objects provided within the ChannelGraph carry an API which
facilitates easy graph traversal via their ForEach* methods. As a
result, path finding algorithms can be expressed in a natural way using
the range methods as a for-range language extension within Go.
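
The traversal style this enables looks roughly like the sketch below;
the types and callback shapes are illustrative stand-ins rather than
the package's exact API:

```go
package graph

// ChannelEdge and LightningNode are trimmed stand-ins for the real
// graph objects.
type ChannelEdge struct{ Capacity int64 }

type LightningNode struct {
	Alias    string
	channels []*ChannelEdge
}

func (n *LightningNode) ForEachChannel(f func(*ChannelEdge) error) error {
	for _, e := range n.channels {
		if err := f(e); err != nil {
			return err
		}
	}
	return nil
}

type Graph struct{ nodes []*LightningNode }

func (g *Graph) ForEachNode(f func(*LightningNode) error) error {
	for _, n := range g.nodes {
		if err := f(n); err != nil {
			return err
		}
	}
	return nil
}

// totalCapacity visits every edge of every node using the same nested
// closure style a path finding algorithm would use.
func totalCapacity(g *Graph) (int64, error) {
	var total int64
	err := g.ForEachNode(func(node *LightningNode) error {
		return node.ForEachChannel(func(edge *ChannelEdge) error {
			total += edge.Capacity
			return nil
		})
	})
	return total, err
}
```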
Additionally, caching will likely be added either at this layer or the
layer above (the RoutingManager) in order to keep queries and outgoing
payments speedy. In a future commit a new set of RPCs to query the
state of a particular edge or node will also be added.
In this commit, an upgrade mechanism for the database was added which
makes the current schema rigid and upgradeable. An additional bucket,
'metaBucket', was added which stores a key housing meta-data related to
the current version/state of the database. 'createChannelDB' was
modified to create this new bucket+key during initialization. Backup
logic was also added which makes a complete copy of the current
database during the migration process and restores the previous version
of the database if the migration fails.
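
A rough sketch of the moving parts, assuming boltdb's API; the bucket
and key names are illustrative, not the commit's exact identifiers:

```go
package meta

import (
	"encoding/binary"
	"io"
	"os"

	"github.com/boltdb/bolt"
)

var (
	metaBucket   = []byte("metadata") // illustrative name
	dbVersionKey = []byte("version")  // illustrative name
)

// currentVersion reads the schema version stored under the meta bucket,
// treating a missing bucket/key as version zero (a pre-versioned DB).
func currentVersion(db *bolt.DB) (uint32, error) {
	var version uint32
	err := db.View(func(tx *bolt.Tx) error {
		meta := tx.Bucket(metaBucket)
		if meta == nil {
			return nil
		}
		if v := meta.Get(dbVersionKey); v != nil {
			version = binary.BigEndian.Uint32(v)
		}
		return nil
	})
	return version, err
}

// backupDB copies the database file byte-for-byte before a migration so
// the previous version can be restored if the migration fails.
func backupDB(dbPath, backupPath string) error {
	src, err := os.Open(dbPath)
	if err != nil {
		return err
	}
	defer src.Close()

	dst, err := os.Create(backupPath)
	if err != nil {
		return err
	}
	defer dst.Close()

	_, err = io.Copy(dst, src)
	return err
}
```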
This commit adds a new bucket to the database which is dedicated to
storing data pertaining to p2p related reachability for direct channel
counterparties. The data stored in this new bucket can be used within
heuristics when deciding to unilaterally close a channel due to
inactivity. Additionally, all known reachable IP addresses for a
particular LinkNode are to be stored and updated within the database in
order to facilitate the establishment of persistent connections to
direct channel counterparties.
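
The per-counterparty record might look roughly like this; the field
names are illustrative rather than the commit's exact schema:

```go
package links

import (
	"net"
	"time"
)

// LinkNode captures p2p reachability data for a direct channel
// counterparty, keyed in its bucket by the node's identity key.
type LinkNode struct {
	// IdentityPub is the counterparty's compressed node public key.
	IdentityPub [33]byte

	// LastSeen records when the node was last reachable, feeding the
	// inactivity heuristics described above.
	LastSeen time.Time

	// Addresses holds every known reachable address for the node, so
	// persistent connections can be re-established across restarts.
	Addresses []net.Addr
}
```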
This commit moves the location of the invoice counter key, which is an
auto-incrementing primary key for all invoices. Rather than storing the counter
in the same top-level invoice bucket, the counter is now stored within the
invoiceIndex bucket. With this change, the top-level bucket can now cleanly be
scanned in a sequential manner to retrieve all invoices.
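
A sketch of the sequential scan this enables, assuming boltdb; the
bucket name and the Invoice/deserializeInvoice stand-ins are
hypothetical:

```go
package invoices

import "github.com/boltdb/bolt"

// Invoice and deserializeInvoice are stand-ins; the real record format
// lives elsewhere in the package.
type Invoice struct{ Raw []byte }

func deserializeInvoice(v []byte) (*Invoice, error) {
	return &Invoice{Raw: append([]byte(nil), v...)}, nil
}

// fetchAllInvoices walks the top-level invoice bucket front to back.
// With the counter tucked away in the nested invoiceIndex bucket, every
// remaining top-level key/value pair is an invoice.
func fetchAllInvoices(db *bolt.DB) ([]*Invoice, error) {
	var all []*Invoice
	err := db.View(func(tx *bolt.Tx) error {
		invoices := tx.Bucket([]byte("invoices"))
		if invoices == nil {
			return nil
		}
		return invoices.ForEach(func(k, v []byte) error {
			// Nested buckets (such as invoiceIndex) surface
			// with a nil value; skip them.
			if v == nil {
				return nil
			}
			inv, err := deserializeInvoice(v)
			if err != nil {
				return err
			}
			all = append(all, inv)
			return nil
		})
	})
	return all, err
}
```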
This commit adds the necessary database functionality required for a
high-level payment invoice workflow. Invoices can be added, detailing
the requirements for fulfillment, looked up by payment hash, and
finally also settled by payment hash. For record keeping and the
possibility of reconciling future disputes, invoices are currently
never deleted from disk. Instead, when an invoice is settled, a bit is
toggled indicating as much.
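
An in-memory analogue of the described workflow (the real store is
disk-backed; the names and fields here are illustrative):

```go
package invoices

import (
	"crypto/sha256"
	"errors"
	"sync"
)

// StoredInvoice mirrors the on-disk record in spirit: the terms needed
// for fulfillment plus a settled bit.
type StoredInvoice struct {
	Preimage [32]byte
	Value    int64
	Settled  bool
}

// InvoiceStore supports the add/lookup/settle cycle; records are never
// deleted, matching the record-keeping requirement above.
type InvoiceStore struct {
	mu       sync.Mutex
	invoices map[[32]byte]*StoredInvoice
}

func NewInvoiceStore() *InvoiceStore {
	return &InvoiceStore{invoices: make(map[[32]byte]*StoredInvoice)}
}

// AddInvoice stores an invoice under the hash of its preimage.
func (s *InvoiceStore) AddInvoice(inv *StoredInvoice) {
	s.mu.Lock()
	defer s.mu.Unlock()
	payHash := sha256.Sum256(inv.Preimage[:])
	s.invoices[payHash] = inv
}

// LookupInvoice fetches an invoice by its payment hash.
func (s *InvoiceStore) LookupInvoice(payHash [32]byte) (*StoredInvoice, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	inv, ok := s.invoices[payHash]
	if !ok {
		return nil, errors.New("invoice not found")
	}
	return inv, nil
}

// SettleInvoice toggles the settled bit rather than deleting the
// record, preserving it for future dispute reconciliation.
func (s *InvoiceStore) SettleInvoice(payHash [32]byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	inv, ok := s.invoices[payHash]
	if !ok {
		return errors.New("invoice not found")
	}
	inv.Settled = true
	return nil
}
```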
The current invoiceManager within the daemon will be modified to use
this persistent invoice store, only storing certain “debug” invoices in
memory as dictated by a command line flag.
This commit implements a state update log which is intended to record
the relevant information for each state transition on disk. For each
state transition a delta should be written recording the new state. A
new method is also provided which is able to retrieve a previous
channel state based on a state update number.
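
A sketch of the delta log; the names and the in-memory backing are
assumptions (the commit writes deltas to disk):

```go
package state

import "fmt"

// ChannelDelta records the new state reached by a single transition;
// the fields shown are illustrative.
type ChannelDelta struct {
	UpdateNum    uint64
	OurBalance   int64
	TheirBalance int64
}

// updateLog appends one delta per state transition and serves lookups
// by update number.
type updateLog struct {
	deltas map[uint64]*ChannelDelta
	next   uint64
}

// AppendDelta assigns the next update number and records the delta.
func (l *updateLog) AppendDelta(d *ChannelDelta) {
	d.UpdateNum = l.next
	l.deltas[l.next] = d
	l.next++
}

// FindPreviousState retrieves the channel state recorded at the given
// update number.
func (l *updateLog) FindPreviousState(num uint64) (*ChannelDelta, error) {
	d, ok := l.deltas[num]
	if !ok {
		return nil, fmt.Errorf("no state at update %d", num)
	}
	return d, nil
}
```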
At the moment no measures have been taken to optimize the space
utilization of each update on disk. There is some low-hanging fruit
which can be addressed at a later point. Ultimately the update log
itself should be implemented with an append-only flat file at the
storage level. In any case, the high level abstraction should be
maintainable independent of differences in the on-disk format itself.
This commit overhauls the current schema for storing active channels in
order to support tracking+updating multiple open channels for a
particular peer.
Channels are now uniquely identified by an outpoint (txid:index) rather
than an arbitrary hash value. As a result, the funding transaction is
no longer stored, as only the txin is required to look up the original
transaction, and to sign for new commitment states.
A new bucket, nested within the bucket for a node’s Lightning ID has
been created. This new bucket acts as an index to the active channels
for a particular peer by storing all the active channel points as keys
within the bucket. This bucket can then be scanned in a linear fashion,
or queried randomly in order to retrieve channel information.
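
A sketch of that linear scan, assuming boltdb; the bucket names are
illustrative:

```go
package channels

import "github.com/boltdb/bolt"

// openChannelBucket is an illustrative name for the top-level bucket.
var openChannelBucket = []byte("open-chan")

// activeChannelPoints lists every channel point recorded under a peer's
// nested bucket.
func activeChannelPoints(db *bolt.DB, lightningID [32]byte) ([][]byte, error) {
	var points [][]byte
	err := db.View(func(tx *bolt.Tx) error {
		openChans := tx.Bucket(openChannelBucket)
		if openChans == nil {
			return nil
		}
		nodeChans := openChans.Bucket(lightningID[:])
		if nodeChans == nil {
			return nil // No active channels with this peer.
		}
		// Keys are serialized channel points; the values carry
		// no information in this index.
		return nodeChans.ForEach(func(chanPoint, _ []byte) error {
			cp := append([]byte(nil), chanPoint...)
			points = append(points, cp)
			return nil
		})
	})
	return points, err
}
```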
The split between top-level, and channel-level keys remains the same.
The primary modification comes in using the channel ID (the funding
outpoint) as the key suffix for all top-level and channel-level keys.
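
The suffixing scheme amounts to something like the following sketch;
the prefix name is illustrative:

```go
package channels

import (
	"bytes"
	"encoding/binary"
)

// ChanOutPoint is a stand-in for wire.OutPoint (txid:index).
type ChanOutPoint struct {
	Hash  [32]byte
	Index uint32
}

// serializeOutpoint renders the channel ID (funding outpoint) in the
// fixed form used as a key suffix.
func serializeOutpoint(op ChanOutPoint) []byte {
	var b bytes.Buffer
	b.Write(op.Hash[:])
	// Writing fixed-size data to a bytes.Buffer cannot fail.
	binary.Write(&b, binary.BigEndian, op.Index)
	return b.Bytes()
}

// channelKey joins a key prefix with the outpoint suffix, so every
// top-level and channel-level key for one channel shares the same
// suffix and related records can be located from a single identifier.
func channelKey(prefix []byte, op ChanOutPoint) []byte {
	return append(append([]byte(nil), prefix...), serializeOutpoint(op)...)
}
```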
Commit includes basic tests for Open/Create. Additionally, rather than
relying on btcwallet’s addmgr for encryption/decryption, this package
now exposes a simple crypto system interface.
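
The interface is presumably along these lines; the exact method set is
an assumption of this sketch:

```go
package channels

// EncryptorDecryptor abstracts the crypto system the package relies on,
// replacing the direct dependency on btcwallet's address manager.
type EncryptorDecryptor interface {
	// Encrypt seals the plaintext for storage on disk.
	Encrypt(in []byte) ([]byte, error)

	// Decrypt recovers plaintext previously sealed with Encrypt.
	Decrypt(in []byte) ([]byte, error)

	// OverheadSize reports the ciphertext expansion in bytes.
	OverheadSize() uint32
}
```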
* Initial draft of brain dump of chandler. Nothing yet set in stone.
* Will most likely move the storage of all structs to a more “column”
oriented approach, such that small updates like incrementing the total
satoshi sent don’t result in the entire struct being serialized and
written.
* Some skeleton structs for other possible data we might want to store
are also included.
* Seems valuable to record as much data as possible for record keeping,
visualization, debugging, etc. Will need to set up a time+space+dirty
cache to ensure performance isn’t impacted too much.