This commit overhauls the current schema for storing active channels in
order to support tracking+updating multiple open channels for a
particular peer.
Channels are now uniquely identified by their funding outpoint
(txid:index) rather than an arbitrary hash value. As a result, the
funding transaction is no longer stored, as only the txin is required
to look up the original transaction and to sign for new commitment
states.
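
As a rough illustration (the fixed 36-byte txid:index encoding below
is an assumption, not necessarily the exact on-disk format),
serializing such an outpoint into a database key might look like:

    package channeldb

    import "encoding/binary"

    // outPoint is a minimal stand-in for the funding outpoint type.
    type outPoint struct {
        Txid  [32]byte // hash of the funding transaction
        Index uint32   // index of the funding output within that tx
    }

    // channelIDKey flattens an outpoint into a 36-byte slice usable
    // as a database key which uniquely identifies the channel.
    func channelIDKey(op outPoint) []byte {
        key := make([]byte, 36)
        copy(key[:32], op.Txid[:])
        binary.BigEndian.PutUint32(key[32:], op.Index)
        return key
    }
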
A new bucket, nested within the bucket for a node’s Lightning ID, has
been created. This new bucket acts as an index to the active channels
for a particular peer, storing all the active channel points as keys
within the bucket. This bucket can then be scanned linearly, or
queried randomly, in order to retrieve channel information.
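
Sketched against the boltdb API (the bucket names here are
hypothetical, purely for illustration):

    package channeldb

    import "github.com/boltdb/bolt"

    // registerChannelPoint records an active channel point in the
    // per-node index bucket. The channel point itself is the key; no
    // value is needed.
    func registerChannelPoint(tx *bolt.Tx, nodeID, chanPoint []byte) error {
        openChans, err := tx.CreateBucketIfNotExists([]byte("open-chan-bucket"))
        if err != nil {
            return err
        }
        // Nested bucket keyed on the node's Lightning ID.
        nodeChans, err := openChans.CreateBucketIfNotExists(nodeID)
        if err != nil {
            return err
        }
        return nodeChans.Put(chanPoint, nil)
    }

    // forEachChannelPoint linearly scans all active channel points
    // for the given node, invoking f for each one.
    func forEachChannelPoint(tx *bolt.Tx, nodeID []byte,
        f func(chanPoint []byte) error) error {

        openChans := tx.Bucket([]byte("open-chan-bucket"))
        if openChans == nil {
            return nil
        }
        nodeChans := openChans.Bucket(nodeID)
        if nodeChans == nil {
            return nil
        }
        return nodeChans.ForEach(func(k, _ []byte) error {
            return f(k)
        })
    }
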
The split between top-level and channel-level keys remains the same.
The primary modification is the use of the channel ID (the funding
outpoint) as the key suffix for all top-level and channel-level keys.
The state of OpenChannel on disk has now been partitioned into
several buckets+keys within the db. At the top level, a set of
prefixed keys stores common data that is updated frequently (with
every channel update). These fields are stored at the top level in
order to facilitate prefix scans, and to avoid read/write
amplification due to serialization/deserialization with each
read/write.
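
The prefix-scan pattern this enables might look roughly like the
following sketch (assuming each top-level key is a short field prefix
followed by the 36-byte channel ID):

    package channeldb

    import (
        "bytes"

        "github.com/boltdb/bolt"
    )

    // scanFieldForAllChannels visits one frequently-updated field for
    // every channel by seeking a cursor to the field's key prefix.
    func scanFieldForAllChannels(b *bolt.Bucket, prefix []byte,
        f func(chanID, value []byte) error) error {

        c := b.Cursor()
        for k, v := c.Seek(prefix); k != nil && bytes.HasPrefix(k, prefix); k, v = c.Next() {
            // The channel ID (funding outpoint) is the key suffix.
            if err := f(k[len(prefix):], v); err != nil {
                return err
            }
        }
        return nil
    }
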
Within the active channel bucket, a nested bucket keyed on the node’s
ID stores the remainder of the channel.
Additionally, OpenChannel now uses elkrem rather than shachain, uses
delivery scripts instead of addresses, stores the total net fees, and
splits the CSV delay into separate local and remote values.
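
In rough terms, the reworked struct might carry fields along these
lines (the names and types are guesses for illustration, not the
actual definition; outPoint is the type from the earlier sketch):

    package channeldb

    // Sketch only: a plausible shape for the reworked OpenChannel.
    type OpenChannel struct {
        // Funding outpoint (txid:index), the channel's unique ID.
        ChanID outPoint

        // Elkrem trees replace the prior shachain-based revocation
        // scheme: a sender for the preimages we reveal, a receiver
        // for the ones our counterparty reveals.
        LocalElkrem  elkremSender
        RemoteElkrem elkremReceiver

        // Raw delivery scripts rather than decoded addresses.
        OurDeliveryScript   []byte
        TheirDeliveryScript []byte

        // Running total of net fees paid over the channel's lifetime.
        TotalNetFees uint64

        // The CSV delay is now tracked per side rather than as a
        // single shared value.
        LocalCsvDelay  uint32
        RemoteCsvDelay uint32
    }

    // Placeholder types standing in for the real elkrem tree state.
    type (
        elkremSender   struct{}
        elkremReceiver struct{}
    )
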
Several TODOs have been left lingering, to be visited in the near
future.
* Initial draft of a brain dump of channeldb. Nothing yet set in
stone.
* Will most likely move the storage of all structs to a more “column”
oriented approach, such that small updates like incrementing the
total satoshis sent don’t result in the entire struct being
serialized and written (a sketch of this pattern follows the list).
* Some skeleton structs for other possible data we might want to store
are also included.
* It seems valuable to record as much data as possible for record
keeping, visualization, debugging, etc. We’ll need to set up a
time+space+dirty cache to ensure performance isn’t impacted too much.
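
For instance, under the column-oriented layout floated above, bumping
a running counter would rewrite only its own small 8-byte value (a
sketch; the key name is hypothetical):

    package channeldb

    import (
        "encoding/binary"

        "github.com/boltdb/bolt"
    )

    // incrementSatSent bumps the satoshis-sent counter for one
    // channel by rewriting just that 8-byte value, leaving the rest
    // of the channel's serialized state untouched.
    func incrementSatSent(b *bolt.Bucket, chanID []byte, amt uint64) error {
        key := append([]byte("sat-sent"), chanID...)

        var total uint64
        if v := b.Get(key); v != nil {
            total = binary.BigEndian.Uint64(v)
        }
        total += amt

        var buf [8]byte
        binary.BigEndian.PutUint64(buf[:], total)
        return b.Put(key, buf[:])
    }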