In this commit, we add a new MultiFile struct. We'll use this struct to
store the latest multi-channel backup on disk, swap it out atomically,
and finally extract+unpack the contents of the multi-file. The format
that's written to disk is the same as a regular packed Multi. The
contents of this new file are meant to be used to safely implement an
always up-to-date multi backup file on disk, as a way for users to
easily rsync or watch via fsnotify (when it changes) the backup state
of their channels.
We implement an atomic update and swap in the UpdateAndSwap method,
which relies on the underlying file system supporting an atomic rename
syscall. We first make a temporary backup file, write the latest
contents to that, then swap the temp file with the main file using
rename(2). This way, we ensure that we always have a single up-to-date
file: if the protocol aborts before the rename, we can detect this,
remove the temp file, and attempt another swap.
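For illustration, a minimal sketch of that write-then-rename flow
(the package, field, and helper names here are assumptions, not the
exact implementation):

```go
package chanbackup

import (
	"io/ioutil"
	"os"
)

// MultiFile tracks the on-disk location of the packed multi-channel
// backup, along with the temp file we stage updates in.
type MultiFile struct {
	// fileName is the path of the main backup file.
	fileName string

	// tempFileName is the path of the temporary file written before
	// the atomic swap.
	tempFileName string
}

// UpdateAndSwap writes the new packed backup to the temp file, then
// atomically renames it over the main file.
func (m *MultiFile) UpdateAndSwap(newBackup []byte) error {
	// Write the fresh backup to the temp file first. If we crash at
	// this point, the main backup file is still intact.
	if err := ioutil.WriteFile(m.tempFileName, newBackup, 0644); err != nil {
		return err
	}

	// rename(2) atomically replaces the main file with the temp file,
	// so readers always observe a single, complete, up-to-date backup.
	return os.Rename(m.tempFileName, m.fileName)
}
```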
In this commit, we add a series of functions that will allow users to
recover existing channel backups. We do this using two primary
interfaces: the ChannelRestorer and the PeerConnector. The first
interface allows us to abstract away the details w.r.t. exactly how a
channel is restored. Instead, we simply expect that the channel backup
will be inserted as a sort of "channel shell" which contains only the
data required to initiate the data loss protection protocol. The second
interface is how we instruct the Lightning node to connect out to the
channel peer given its known addresses.
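A hedged sketch of what these two interfaces could look like (the
method names, parameter types, and the placeholder Single type are
assumptions for illustration):

```go
package chanbackup

import (
	"net"

	"github.com/btcsuite/btcd/btcec"
)

// Single is a placeholder for the per-channel static backup type
// described elsewhere in this series; it's stubbed out here only so
// the sketch is self-contained.
type Single struct{}

// ChannelRestorer abstracts away exactly how a channel backup is
// restored: implementations insert a "channel shell" containing only
// the data needed to initiate the data loss protection protocol.
type ChannelRestorer interface {
	// RestoreChansFromSingles restores the passed set of single
	// channel backups into the local channel database.
	RestoreChansFromSingles(backups ...Single) error
}

// PeerConnector abstracts how we instruct the Lightning node to
// connect out to a channel peer given its known addresses.
type PeerConnector interface {
	// ConnectPeer attempts a connection to the target peer using any
	// of its advertised addresses.
	ConnectPeer(nodePub *btcec.PublicKey, addrs []net.Addr) error
}
```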
In this commit, we introduce a series of interfaces and methods that
will allow external callers to backup either all channels, or a specific
channel identified by its channel point. In order to abstract away the
details w.r.t _how_ we obtain the set of open channels, or their storage
mechanisms, we introduce a new LiveChannelSource interface. This
interface allows us to fetch all channels, fetch a channel by its
channel point, and also fetch all the known addresses for a node, as
we'll need these in order to connect out to the node in the case of a
recovery attempt.
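As a rough sketch, a LiveChannelSource along those lines might look as
follows (the method names and types are assumptions; channeldb.OpenChannel
stands in for however the open channel state is actually represented):

```go
package chanbackup

import (
	"net"

	"github.com/btcsuite/btcd/btcec"
	"github.com/btcsuite/btcd/wire"
	"github.com/lightningnetwork/lnd/channeldb"
)

// LiveChannelSource hides how we obtain the set of currently open
// channels and their storage mechanism.
type LiveChannelSource interface {
	// FetchAllChannels returns all channels that are currently open.
	FetchAllChannels() ([]*channeldb.OpenChannel, error)

	// FetchChannel returns the open channel identified by the passed
	// channel point.
	FetchChannel(chanPoint wire.OutPoint) (*channeldb.OpenChannel, error)

	// AddrsForNode returns all known addresses for the target node,
	// so we can connect out to it during a recovery attempt.
	AddrsForNode(nodePub *btcec.PublicKey) ([]net.Addr, error)
}
```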
In this commit, we introduce the Multi struct. Multi is a series of
static channel backups. This type of backup can contain ALL the channel
backup state in a single packed blob. This is suitable for storing on
your file system, cloud storage, etc. Systems will be in place within
lnd to ensure that one can easily obtain the latest version of the Multi
for the node, and also that it will be kept up to date if channel state
changes.
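A minimal sketch of what such a Multi could hold (the field names and
the placeholder Single type are assumptions for illustration):

```go
package chanbackup

// MultiBackupVersion denotes the outer encoding version of a packed
// Multi.
type MultiBackupVersion byte

// Single is a placeholder for the per-channel static backup type;
// it's stubbed out here only to keep the sketch self-contained.
type Single struct{}

// Multi is a series of static channel backups bundled together so
// that ALL channel backup state can be packed into a single blob,
// suitable for a file system, cloud storage, etc.
type Multi struct {
	// Version is the version of the packed blob's outer encoding.
	Version MultiBackupVersion

	// StaticBackups is the set of per-channel backups contained
	// within this multi backup.
	StaticBackups []Single
}
```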
In this commit, we add the initial implementation of the SCB structure.
Given an SCB, and a user's seed, it will be possible to recover the
settled balance of a channel in the event of total or partial data
loss. The SCB contains all information required to initiate the data
loss protection protocol once we restore the channel and connect to the
remote channel peer.
The primary way outside callers will interact with this package is via
the Pack and Unpack methods. Packing means writing a
serialized+encrypted version of the SCB to an io.Writer. Unpacking does
the opposite.
The encoding format itself uses the same encoding as we do on the wire
within Lightning. Each encoded backup begins with a version so we can
easily add or modify the serialization format in the future, if new
channel types appear, or we need to add/remove fields.
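A sketch of that version-first layout (the Single fields are elided
and the method body only shows the shape; the names here are
assumptions, with the actual field encoding mirroring the Lightning
wire format):

```go
package chanbackup

import "io"

// SingleBackupVersion is the version of a serialized single channel
// backup.
type SingleBackupVersion byte

// Single contains everything needed to initiate the data loss
// protection protocol for one channel (fields elided in this sketch).
type Single struct {
	// Version is written first so the format can evolve for new
	// channel types or added/removed fields.
	Version SingleBackupVersion

	// ... channel point, peer addresses, key derivation info, etc.
}

// Serialize encodes the backup to w, leading with the version byte so
// an unpacker knows how to parse the fields that follow. Pack would
// then encrypt this serialization before writing it out, and Unpack
// does the opposite.
func (s *Single) Serialize(w io.Writer) error {
	if _, err := w.Write([]byte{byte(s.Version)}); err != nil {
		return err
	}

	// The remaining fields would follow here, using the same encoding
	// style as the Lightning wire protocol.
	return nil
}
```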
In this commit, we implement a series of new crypto operations that will
allow us to encrypt and decrypt a set of serialized channel backups.
The various backups may have distinct encodings when serialized, but
the functions defined in this file treat them as simple opaque blobs.
For encryption, we utilize chacha20poly1305 with a random 24 byte nonce.
We use the larger nonce size as it can be safely generated via a
CSPRNG without fear of collisions between generated nonces. To
encrypt a blob, we then use this nonce as the AD (associated data) and
prepend the nonce to the front of the ciphertext package.
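A compact sketch of that encryption path, assuming the
golang.org/x/crypto/chacha20poly1305 package for the XChaCha20-Poly1305
AEAD and a 32-byte key derived as described below (the helper name is
hypothetical):

```go
package chanbackup

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/chacha20poly1305"
)

// encryptPayload seals an opaque backup blob with XChaCha20-Poly1305
// using a fresh random 24-byte nonce. The nonce doubles as the
// associated data and is prepended to the ciphertext so the decryptor
// can recover it.
func encryptPayload(plaintext []byte, key [32]byte) ([]byte, error) {
	aead, err := chacha20poly1305.NewX(key[:])
	if err != nil {
		return nil, err
	}

	// The 24-byte nonce is large enough to draw from a CSPRNG without
	// worrying about collisions across encryptions.
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, fmt.Errorf("unable to generate nonce: %v", err)
	}

	// Seal appends ciphertext+tag to the nonce passed as dst, yielding
	// the final package: nonce || ciphertext || tag.
	return aead.Seal(nonce, nonce, plaintext, nonce), nil
}
```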
For key generation, in order to ensure the user only needs their
passphrase and the backup file, we utilize the existing keychain to
derive the encryption key. To ensure that we don't force any hardware
signer to be aware of our crypto operations, we instead opt to utilize
a public key that will be hashed to derive our symmetric key. The
assumption here is that this key will only be exposed to this software,
and never derived as a public facing address.
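A hedged sketch of that derivation (the KeyRing interface and method
name here are simplified stand-ins for the real keychain abstraction):

```go
package chanbackup

import (
	"crypto/sha256"

	"github.com/btcsuite/btcd/btcec"
)

// KeyRing is a minimal stand-in for the keychain: it can hand out a
// public key at a dedicated location without ever exposing the
// private key.
type KeyRing interface {
	// DeriveBackupPubKey returns a public key reserved for channel
	// backup encryption and never used as a public-facing address.
	DeriveBackupPubKey() (*btcec.PublicKey, error)
}

// genEncryptionKey hashes the derived public key to produce the
// 32-byte symmetric key consumed by encryptPayload above. The hardware
// signer only ever needs to derive a public key; the hash happens
// purely in software.
func genEncryptionKey(keyRing KeyRing) ([32]byte, error) {
	var key [32]byte

	pub, err := keyRing.DeriveBackupPubKey()
	if err != nil {
		return key, err
	}

	key = sha256.Sum256(pub.SerializeCompressed())
	return key, nil
}
```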
In this commit, we extend the remote/receiver chain claim integration
test to assert that the on-disk representation of the invoice on the
receiving side (Carol) is marked as settled due to claiming the HTLC
on-chain.
In this commit, we extend the htlcSuccessResolver to settle the invoice,
if any, of the corresponding on-chain HTLC sweep. This ensures that the
invoice state is consistent with the case where the HTLC is claimed
"off-chain".
Previously, contract resolvers that needed to publish a second-level tx
did not have access to the original HTLC amount.
This commit reconstructs this amount from data that is already persisted
in the arbitrator log.
Co-authored-by: Joost Jager <joost.jager@gmail.com>
In this commit, we update our btcwallet dependency to point to the
latest version. This latest version redefines what the wallet will
consider as an initial sync. We'll now define it by determining if the
wallet has synced up to its birthday block, rather than looking at the
number of UTXOs in the wallet. This was needed, especially for light
clients, because the old heuristic would cause unnecessary rescans from
the wallet's birthday whenever the wallet had no UTXOs.
In this commit, we verify that ChannelUpdates for newly
funded channels contain the max HTLC that we expect.
We expect the max HTLC value of each ChannelUpdate to
equal the maximum pending msats in HTLCs required by
the remote peer.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit, we set a default max HTLC value in ChannelUpdates
sent out for newly funded channels. As a result, we also default
to setting `MessageFlags` equal to 1 in each new ChannelUpdate, since
the max HTLC field is an optional field and MessageFlags indicates
the presence of optional fields within the ChannelUpdate.
For a default max HTLC, we choose the maximum msats worth of
HTLCs that can be pending (or in-flight) on our side of the channel.
The reason for this is because the spec specifies that the max
HTLC present in a ChannelUpdate must be less than or equal to
both total channel capacity and the maximum in-flight amount set
by the peer. Since this in-flight value will always be less than
or equal to channel capacity, it is a safe spec-compliant default.
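In code form, applying that default boils down to something like the
following sketch, assuming lnd's lnwire.ChannelUpdate message with its
HtlcMaximumMsat field and max-HTLC message flag; the surrounding
function and package placement are hypothetical:

```go
package discovery

import "github.com/lightningnetwork/lnd/lnwire"

// applyDefaultMaxHtlc fills in the optional max HTLC field of a new
// ChannelUpdate for a just-funded channel and flips the message flag
// bit that signals its presence.
func applyDefaultMaxHtlc(update *lnwire.ChannelUpdate,
	remoteMaxInFlight lnwire.MilliSatoshi) {

	// The spec requires max_htlc <= channel capacity and <= the peer's
	// max-in-flight amount; the in-flight limit already satisfies both,
	// so it's a safe default.
	update.HtlcMaximumMsat = remoteMaxInFlight

	// Setting this bit yields MessageFlags == 1 when no other optional
	// fields are present.
	update.MessageFlags |= lnwire.ChanUpdateOptionMaxHtlc
}
```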
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit, we ensure that when we update an edge
as a result of a ChannelUpdate being returned from an
onion failure, the max htlc portion of the channel update
is included in the edge update.
This method is called to convert an EdgePolicy to a ChannelUpdate. We
make sure to carry over the max_htlc value.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
If the max_htlc field is not found when fetching a ChannelEdgePolicy
from the DB, we treat this as an unknown policy.
This is done to ensure we won't propagate invalid data further. The
data will be overwritten with a valid copy when we receive an update
for this channel.
This shouldn't be very common, but old data added before this field was
validated could still be lingering in the DB.
Adding this field will allow us to persist an edge's
max HTLC to disk, thus preserving it between restarts.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>