A cooperative closure of a LightningChannel proceeds in two steps.
First, the party who wishes to close the channel sends a signature for
the closing transaction. Next, the responder reconstructs the closing
transaction exactly as the initiator did, using a canonical
input/output ordering and the currently settled balance within the
channel. The responder then broadcasts the closing transaction. It is
the responsibility of the initiator to watch for this broadcast within
the network so it can clean up any resources committed to the active
channel.
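As a rough illustration only, the Go sketch below walks through that flow; the type, method, and helper names (InitCooperativeClose, CompleteCooperativeClose, watchForClose) are hypothetical and do not reflect the actual LightningChannel API.

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the real types; names are
// illustrative only.
type closeSig []byte

type lightningChannel struct {
	ourBalance   int64 // settled balance in satoshis
	theirBalance int64
}

// InitCooperativeClose is run by the party wishing to close the channel: it
// builds the closing transaction (canonical input/output ordering, current
// settled balances) and returns a signature for it to send to the responder.
func (c *lightningChannel) InitCooperativeClose() closeSig {
	// ... build the closing tx deterministically, sign our input ...
	return closeSig("initiator-sig")
}

// CompleteCooperativeClose is run by the responder: it reconstructs the same
// closing transaction independently, attaches both signatures, and broadcasts
// the result.
func (c *lightningChannel) CompleteCooperativeClose(initiatorSig closeSig) string {
	// ... rebuild the identical closing tx, verify initiatorSig, add our own
	// signature, then broadcast ...
	txid := "closing-txid"
	fmt.Printf("responder broadcasting closing tx %s (sig %q)\n", txid, initiatorSig)
	return txid
}

// watchForClose models the initiator's responsibility: once the closing tx is
// seen on the network, release any resources committed to the channel.
func watchForClose(txid string, cleanup func()) {
	fmt.Println("initiator saw closing tx", txid, "on the network")
	cleanup()
}

func main() {
	ch := &lightningChannel{ourBalance: 50000, theirBalance: 50000}

	// Step 1: initiator signs and sends its closing signature.
	sig := ch.InitCooperativeClose()

	// Step 2: responder reconstructs the closing tx and broadcasts it.
	txid := ch.CompleteCooperativeClose(sig)

	// Step 3: initiator watches for the broadcast and cleans up.
	watchForClose(txid, func() { fmt.Println("channel resources released") })
}
```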
* Hooks into the ChainNotifier infrastructure to receive a notification
once the funding transaction reaches a sufficient number of
confirmations (see the sketch below).
* Still need to set up the notification routing within a
LightningChannel to watch for uncooperative closures and broadcasts of
revoked channel states.
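For illustration, here is a minimal sketch of what waiting on such a confirmation notification might look like; the confNotifier interface and its RegisterConf method are assumptions for this example, not the real ChainNotifier API.

```go
package main

import "fmt"

// confNotifier is an assumed, minimal notifier interface; the real
// ChainNotifier infrastructure is richer than this.
type confNotifier interface {
	// RegisterConf returns a channel that fires once the given txid has
	// reached numConfs confirmations.
	RegisterConf(txid string, numConfs uint32) <-chan struct{}
}

// fakeNotifier signals immediately; a real implementation would watch the
// chain and signal only after the requested confirmation depth.
type fakeNotifier struct{}

func (fakeNotifier) RegisterConf(txid string, numConfs uint32) <-chan struct{} {
	ch := make(chan struct{}, 1)
	ch <- struct{}{}
	return ch
}

// waitForFunding blocks until the funding transaction is buried deep enough,
// at which point the channel can be considered open.
func waitForFunding(n confNotifier, fundingTxid string, numConfs uint32) {
	<-n.RegisterConf(fundingTxid, numConfs)
	fmt.Printf("funding tx %s has %d confirmations, channel is open\n",
		fundingTxid, numConfs)
}

func main() {
	waitForFunding(fakeNotifier{}, "funding-txid", 6)
}
```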
* Updates to the channel are made atomic and consistent via a proxy
object, “ChannelUpdate”, which encapsulates an update transaction. Only
one update transaction may be outstanding at any time.
* Update transactions are initiated via AddHTLC or SettleHTLC.
* Once an update transaction has begun, completing it takes two further
steps: first, a signature from the counterparty for our new version of
the commitment tx must be verified (VerifyNewCommitmentSigs); finally,
to atomically commit the transaction, the counterparty’s pre-image to
their previous revocation hash must be validated (Commit). A sketch of
this lifecycle appears after these proxy-object notes.
* Moved sorting of the transaction outside of createCommitTx, which
also allows us to add HTLCs before sorting.
* On the fence about the proxy object design; will revisit once we
start to implement the p2p code.
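To make the proxy-object flow above concrete, here is a rough Go sketch of the intended lifecycle (AddHTLC/SettleHTLC → VerifyNewCommitmentSigs → Commit); the struct fields, method signatures, and error handling are invented for illustration and don't reflect the real types.

```go
package main

import (
	"errors"
	"fmt"
)

// channelUpdate is an illustrative stand-in for the "ChannelUpdate" proxy:
// it encapsulates a single pending update transaction against the channel.
type channelUpdate struct {
	sigVerified bool
	committed   bool
}

type lightningChannel struct {
	pending *channelUpdate // only one update may be outstanding at a time
}

// AddHTLC begins a new update transaction. (SettleHTLC would look similar.)
func (c *lightningChannel) AddHTLC(amount int64, rHash [32]byte) (*channelUpdate, error) {
	if c.pending != nil {
		return nil, errors.New("an update transaction is already outstanding")
	}
	// ... add the HTLC output, then sort the commitment tx canonically
	// (sorting now happens outside createCommitTx so HTLCs can be added first) ...
	c.pending = &channelUpdate{}
	return c.pending, nil
}

// VerifyNewCommitmentSigs checks the counterparty's signature over our new
// version of the commitment tx.
func (u *channelUpdate) VerifyNewCommitmentSigs(theirSig []byte) error {
	// ... verify theirSig against the new commitment tx ...
	u.sigVerified = true
	return nil
}

// Commit atomically applies the update once the counterparty's pre-image for
// their previous revocation hash has been validated.
func (u *channelUpdate) Commit(revocationPreimage [32]byte) error {
	if !u.sigVerified {
		return errors.New("commitment signature not yet verified")
	}
	// ... check that revocationPreimage hashes to the previous revocation hash ...
	u.committed = true
	return nil
}

func main() {
	c := &lightningChannel{}
	update, _ := c.AddHTLC(1000, [32]byte{})
	_ = update.VerifyNewCommitmentSigs([]byte("their-sig"))
	_ = update.Commit([32]byte{})
	fmt.Println("update committed:", update.committed)
}
```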
* Initial draft of brain dump of chandler. Nothing yet set in stone.
* Will most likely move the storage of all structs to a more “column”
oriented approach, such that small updates, like incrementing the total
satoshis sent, don’t result in the entire struct being serialized and
written (see the storage sketch at the end of these notes).
* Some skeleton structs for other possible data we might want to store
are also included.
* Seems valuable to record as much data as possible for record keeping,
visualization, debugging, etc. Will need to set up a time+space+dirty
cache to ensure performance isn’t impacted too much.
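As an illustration of the “column” oriented idea above, the sketch below keeps each frequently-updated field under its own key in a generic key/value store, so bumping a single counter rewrites only that value rather than a whole serialized struct. The kvStore type and key layout are hypothetical, not the actual storage schema.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// kvStore is a stand-in for whatever key/value backend ends up being used;
// its shape here is assumed purely for illustration.
type kvStore map[string][]byte

// Row-oriented: the whole channel record is serialized and rewritten on every
// change, even a one-field update.
// Column-oriented: each small, frequently-updated field lives under its own
// key, so an update touches only the bytes for that field.

func satoshisSentKey(chanID string) string {
	return "chan/" + chanID + "/total_satoshis_sent"
}

// addSatoshisSent increments the running total for a channel without
// re-serializing any larger struct.
func addSatoshisSent(db kvStore, chanID string, delta uint64) {
	key := satoshisSentKey(chanID)
	var total uint64
	if raw, ok := db[key]; ok {
		total = binary.BigEndian.Uint64(raw)
	}
	total += delta

	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, total)
	db[key] = buf
}

func main() {
	db := kvStore{}
	addSatoshisSent(db, "chan0", 1000)
	addSatoshisSent(db, "chan0", 250)
	fmt.Println("total sent:",
		binary.BigEndian.Uint64(db[satoshisSentKey("chan0")]))
}
```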