In this commit, we implement a series of new crypto operations that will
allow us to encrypt and decrypt a set of serialized channel backups.
The various backups may have distinct encodings when serialized, but the
functions defined in this file treat them as simple opaque blobs.
For encryption, we utilize chacha20poly1305 with a random 24-byte nonce.
We use the larger nonce size as nonces can then be safely generated via a
CSPRNG without fear of collisions between generated nonces. To encrypt a
blob, we use this nonce as the AD (associated data) and prepend the nonce
to the resulting ciphertext package.
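The sealing step might look roughly like the sketch below, assuming the
24-byte nonce comes from the XChaCha20-Poly1305 construction exposed by
golang.org/x/crypto/chacha20poly1305; the helper name encryptBlob is
illustrative, not the actual function in this file:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/chacha20poly1305"
)

// encryptBlob seals an opaque backup blob under a 32-byte key. The random
// 24-byte nonce doubles as the associated data and is prepended to the
// resulting ciphertext package.
func encryptBlob(key, blob []byte) ([]byte, error) {
	aead, err := chacha20poly1305.NewX(key)
	if err != nil {
		return nil, err
	}

	// Draw the nonce from a CSPRNG; at 24 bytes, random generation is
	// safe against collisions.
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}

	// Seal appends ciphertext+tag to the nonce passed as dst, yielding
	// nonce || ciphertext || tag. The nonce is also used as the AD.
	return aead.Seal(nonce, nonce, blob, nonce), nil
}

func main() {
	key := make([]byte, chacha20poly1305.KeySize)
	pkg, err := encryptBlob(key, []byte("serialized channel backup"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("package is %d bytes\n", len(pkg))
}
```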
For key generation, in order to ensure the user only needs their
passphrase and the backup file, we utilize the existing keychain to
derive a private key. In order to ensure that we don't force any
hardware signer to be aware of our crypto operations, we instead opt to
utilize a public key that will be hashed to derive our private key. The
assumption here is that this key will only be exposed to this software,
and never derived as a public facing address.
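A minimal sketch of the derivation idea, hashing a keychain-derived public
key to obtain the symmetric key (deriveBackupKey and the 33-byte placeholder
are illustrative; the real keychain plumbing differs):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// deriveBackupKey hashes the compressed serialization of a keychain-derived
// public key to obtain a 32-byte encryption key. A hardware signer only ever
// needs to hand over a public key, so it stays unaware of these crypto
// operations.
func deriveBackupKey(pubKeyBytes []byte) [sha256.Size]byte {
	return sha256.Sum256(pubKeyBytes)
}

func main() {
	// Placeholder for a 33-byte compressed public key reserved for this
	// purpose and never used as a public-facing address.
	pubKey := make([]byte, 33)
	key := deriveBackupKey(pubKey)
	fmt.Printf("derived a %d-byte key\n", len(key))
}
```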
In this commit, we verify that ChannelUpdates for newly
funded channels contain the max HTLC that we expect.
We expect the max HTLC value of each ChannelUpdate to
equal the maximum pending msats in HTLCs required by
the remote peer.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit, we set a default max HTLC value in ChannelUpdates
sent out for newly funded channels. As a result, we also default
to setting `MessageFlags` equal to 1 in each new ChannelUpdate, since
the max HTLC field is an optional field and MessageFlags indicates
the presence of optional fields within the ChannelUpdate.
For a default max HTLC, we choose the maximum msats worth of
HTLCs that can be pending (or in-flight) on our side of the channel.
This is because the spec specifies that the max
HTLC present in a ChannelUpdate must be less than or equal to
both total channel capacity and the maximum in-flight amount set
by the peer. Since this in-flight value will always be less than
or equal to channel capacity, it is a safe spec-compliant default.
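Sketched with simplified stand-in types (the real definitions live in
lnwire), the default amounts to:

```go
package main

import "fmt"

// chanUpdateMsgFlags is a simplified stand-in for the lnwire message flags.
type chanUpdateMsgFlags uint8

// Bit 0 of the message flags signals that htlc_maximum_msat is present.
const msgFlagMaxHtlc chanUpdateMsgFlags = 1 << 0

type channelUpdate struct {
	MessageFlags    chanUpdateMsgFlags
	HtlcMaximumMsat uint64
}

// newFundedChannelUpdate fills in the default max HTLC for a freshly funded
// channel: the remote peer's maximum pending (in-flight) msat, which the spec
// guarantees is <= the channel capacity.
func newFundedChannelUpdate(remoteMaxPendingMsat uint64) channelUpdate {
	return channelUpdate{
		MessageFlags:    msgFlagMaxHtlc, // i.e. MessageFlags = 1
		HtlcMaximumMsat: remoteMaxPendingMsat,
	}
}

func main() {
	upd := newFundedChannelUpdate(5000000000)
	fmt.Printf("msg_flags=%d max_htlc=%d msat\n",
		upd.MessageFlags, upd.HtlcMaximumMsat)
}
```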
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit, we ensure that when we update an edge
as a result of a ChannelUpdate being returned from an
onion failure, the max htlc portion of the channel update
is included in the edge update.
This method is called to convert an EdgePolicy to a ChannelUpdate. We
make sure to carry over the max_htlc value.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
If the max_htlc field is not found when fetching a ChannelEdgePolicy
from the DB, we treat this as an unknown policy.
This is done to ensure we won't propagate invalid data any further. The
data will be overwritten with valid values once we receive a fresh update
for this channel.
This shouldn't be very common, but old data added before this field was
validated could still be lingering in the DB.
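A sketch of the guard on the read path, using hypothetical names (the actual
error value and serialization format are those of the channeldb package):

```go
package sketch

import (
	"encoding/binary"
	"errors"
	"io"
)

// errPolicyOptionalFieldNotFound is an illustrative sentinel signalling that
// the stored policy predates the max_htlc field and must be treated as an
// unknown policy.
var errPolicyOptionalFieldNotFound = errors.New("optional field not found")

// deserializeMaxHtlc reads the trailing max_htlc field from a stored edge
// policy. If the bytes are missing (old data written before this field was
// validated), the sentinel error is returned so the caller can discard the
// policy instead of propagating invalid data.
func deserializeMaxHtlc(r io.Reader) (uint64, error) {
	var buf [8]byte
	if _, err := io.ReadFull(r, buf[:]); err != nil {
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return 0, errPolicyOptionalFieldNotFound
		}
		return 0, err
	}
	return binary.BigEndian.Uint64(buf[:]), nil
}
```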
Adding this field will allow us to persist an edge's
max HTLC to disk, thus preserving it between restarts.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit, we alter the ValidateChannelUpdateAnn function in
ann_validation to validate a remote ChannelUpdate's message flags
and max HTLC field. If the message flag is set but the max HTLC
field is not set or vice versa, the ChannelUpdate fails validation.
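Roughly, the new check behaves like this sketch (simplified types; the
capacity bound is the one enforced elsewhere in this series):

```go
package sketch

import (
	"errors"
	"fmt"
)

// Bit 0 of the message flags signals that htlc_maximum_msat is present.
const msgFlagMaxHtlc uint8 = 1 << 0

// validateMaxHtlc rejects a ChannelUpdate whose message flag and max HTLC
// field disagree, and whose max HTLC exceeds the channel capacity.
func validateMaxHtlc(msgFlags uint8, maxHtlcMsat, capacityMsat uint64) error {
	flagSet := msgFlags&msgFlagMaxHtlc != 0

	switch {
	case flagSet && maxHtlcMsat == 0:
		return errors.New("max htlc flag set, but field not set")
	case !flagSet && maxHtlcMsat != 0:
		return errors.New("max htlc field set, but flag not set")
	case flagSet && maxHtlcMsat > capacityMsat:
		return fmt.Errorf("max htlc %d exceeds capacity %d",
			maxHtlcMsat, capacityMsat)
	}

	return nil
}
```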
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit, we alter the gossiper test's helper method
that creates channel updates to include the max htlc field
in the ChannelUpdates it creates.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit, we modify the mockGraphSource's `AddEdge`
method to set the capacity of the edge it's adding to be a large
capacity.
This will enable us to test the validation of each ChannelUpdate's
max HTLC, since future validation checks will ensure the specified
max HTLC is less than total channel capacity.
In this commit, we add a field to the ChannelUpdate
denoting the maximum HTLC we support sending over
this channel, a field which was recently added to the
spec.
This field serves multiple purposes. In the short
term, it enables nodes to signal the largest HTLC
they're willing to carry, allows light clients who
don't verify channel existence to have some guidance
when routing HTLCs, and finally may allow nodes to
preserve a portion of bandwidth at all times.
In the long term, this field can be used by
implementations of AMP to guide payment splitting,
as it makes apparent to a node the largest
HTLC that can be routed over a particular channel.
This PR was made possible by the merge of #1825,
which enables older nodes to properly retain and
verify signatures on updates that include new fields
(like this new max HTLC field) that they haven't yet
been updated to recognize.
In addition, the new ChannelUpdate fields are added to
the lnwire fuzzing tests.
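On the wire, the optional field is gated by the message flags; a simplified
decoding sketch (not the lnwire code itself):

```go
package sketch

import (
	"encoding/binary"
	"io"
)

// Bit 0 of the message flags advertises htlc_maximum_msat.
const msgFlagMaxHtlc uint8 = 1 << 0

// decodeMaxHtlc reads the optional htlc_maximum_msat field only when the
// message flags advertise its presence; otherwise the bytes simply aren't
// there to read.
func decodeMaxHtlc(r io.Reader, msgFlags uint8) (uint64, error) {
	if msgFlags&msgFlagMaxHtlc == 0 {
		return 0, nil
	}

	var buf [8]byte
	if _, err := io.ReadFull(r, buf[:]); err != nil {
		return 0, err
	}
	return binary.BigEndian.Uint64(buf[:]), nil
}
```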
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit, we make the ChanUpdate[Chan|Msg]Flags bitfields easier to
read by writing a String method for them, rather than printing them as a
bare decimal that has to be decoded by hand.
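A sketch of the idea, again with a stand-in type rather than the actual
lnwire definition:

```go
package main

import (
	"fmt"
	"strings"
)

// chanUpdateMsgFlags stands in for the lnwire bitfield type.
type chanUpdateMsgFlags uint8

const msgFlagMaxHtlc chanUpdateMsgFlags = 1 << 0

// String renders the bitfield as its set of named flags instead of a bare
// decimal, which is much easier to read in logs.
func (f chanUpdateMsgFlags) String() string {
	var flags []string
	if f&msgFlagMaxHtlc != 0 {
		flags = append(flags, "max_htlc")
	}
	return fmt.Sprintf("%08b(%s)", uint8(f), strings.Join(flags, "|"))
}

func main() {
	fmt.Println(msgFlagMaxHtlc) // 00000001(max_htlc)
}
```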
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit:
* we partition lnwire.ChanUpdateFlag into two types, ChanUpdateChanFlags and
ChanUpdateMsgFlags, going from a single uint16 to a pair of uint8s
* we rename ChannelUpdate.Flags to ChannelFlags and add an additional
MessageFlags field, which will be used to indicate the presence of the
optional HtlcMaximumMsat field within the ChannelUpdate
* we partition ChannelEdgePolicy.Flags into message and channel flags. This
change corresponds to the partitioning of the ChannelUpdate's Flags field
into MessageFlags and ChannelFlags.
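In stand-in form, the shape of the split is roughly:

```go
package sketch

// The single uint16 Flags field becomes two uint8 bitfields: one advertising
// optional fields (message flags) and one carrying channel-level flags such
// as direction and the disabled bit.
type (
	chanUpdateMsgFlags  uint8
	chanUpdateChanFlags uint8
)

type channelUpdate struct {
	MessageFlags    chanUpdateMsgFlags  // new: signals optional fields
	ChannelFlags    chanUpdateChanFlags // renamed from Flags
	HtlcMaximumMsat uint64              // present iff the max-htlc bit is set
}
```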
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit, we modify GetTestTxidAndScript to generate new P2PKH
scripts. This is needed to properly test confirmations and spends of
unique scripts on-chain within the set of interface-level test cases.
In this commit, we extend the NeutrinoNotifier to support registering
scripts for spend notifications. Once the script has been detected as
spent within the chain, a spend notification will be dispatched through
the Spend channel of the SpendEvent returned upon registration.
For scripts that have been spent in the past, the rescan logic has been
modified to match on the script rather than the outpoint. A concurrent
queue for relevant transactions has been added to proxy notifications
from the underlying rescan to the txNotifier. This is needed for
scripts, as we cannot perform a historical rescan for scripts through
`GetUtxo`, like we do with outpoints.
For scripts that are unspent, a filter update is sent to the underlying
rescan to ensure that we match and dispatch on the script when
processing new blocks.
Along the way, we also address an issue where we'd miss detecting that
an outpoint/script has been spent in the future due to not receiving a
historical dispatch request from the underlying txNotifier. To fix this,
we ensure that we always request the backend to notify us of the spend
once it detects it at tip, regardless of whether a historical rescan was
detected or not.
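From a client's point of view, the registration surface sketched below
captures the behaviour described above; the interface and type names here
are illustrative, not the exact chainntnfs API:

```go
package sketch

// spendDetail is a stand-in for the details delivered once the registered
// script is spent (spending tx, input index, height, etc.).
type spendDetail struct {
	SpendingHeight int32
}

// spendEvent stands in for the notifier's SpendEvent: the Spend channel fires
// once the script has been detected as spent within the chain.
type spendEvent struct {
	Spend <-chan *spendDetail
}

// scriptSpendNotifier sketches the new capability: clients register an output
// script (instead of only an outpoint) along with a height hint for the
// historical rescan.
type scriptSpendNotifier interface {
	RegisterScriptSpend(pkScript []byte, heightHint uint32) (*spendEvent, error)
}

// awaitSpend registers for and blocks on a spend of the given script.
func awaitSpend(n scriptSpendNotifier, pkScript []byte,
	heightHint uint32) (*spendDetail, error) {

	event, err := n.RegisterScriptSpend(pkScript, heightHint)
	if err != nil {
		return nil, err
	}
	return <-event.Spend, nil
}
```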
In this commit, we extend the BitcoindNotifier to support registering
scripts for spend notifications. Once the script has been detected as
spent within the chain, a spend notification will be dispatched through
the Spend channel of the SpendEvent returned upon registration.
For scripts that have been spent in the past, the rescan logic has been
modified to match on the script rather than the outpoint. This is done
by re-deriving the script of the output a transaction input is spending
and checking whether it matches ours.
For scripts that are unspent, a request to the backend will be sent to
alert the BitcoindNotifier of when the script was spent by a
transaction. To make this request we encode the script as an address, as
this is what the backend uses to detect the spend. The transaction will
then be proxied through the txUpdates concurrent queue, which will hand
it off to the underlying txNotifier and dispatch spend notifications to
the relevant clients.
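The address encoding step can be sketched with the btcsuite txscript helpers
(import paths as used at the time; the surrounding request plumbing is
omitted):

```go
package sketch

import (
	"github.com/btcsuite/btcd/chaincfg"
	"github.com/btcsuite/btcd/txscript"
	"github.com/btcsuite/btcutil"
)

// scriptToAddrs re-encodes an output script as the address(es) it pays to, so
// the script can be handed to a backend that watches for spends by address.
func scriptToAddrs(pkScript []byte,
	params *chaincfg.Params) ([]btcutil.Address, error) {

	// ExtractPkScriptAddrs recovers the addresses encoded by a standard
	// output script, e.g. the P2PKH or P2WPKH address it pays to.
	_, addrs, _, err := txscript.ExtractPkScriptAddrs(pkScript, params)
	if err != nil {
		return nil, err
	}
	return addrs, nil
}
```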
Along the way, we also address an issue where we'd miss detecting that
an outpoint/script has been spent in the future due to not receiving a
historical dispatch request from the underlying txNotifier. To fix this,
we ensure that we always request the backend to notify us of the spend
once it detects it at tip, regardless of whether a historical rescan was
detected or not.
In this commit, we extend the BtcdNotifier to support registering
scripts for spend notifications. Once the script has been detected as
spent within the chain, a spend notification will be dispatched through
the Spend channel of the SpendEvent returned upon registration.
For scripts that have been spent in the past, the rescan logic has been
modified to match on the script rather than the outpoint. This is done
by encoding the script as an address.
For scripts that are unspent, a request to the backend will be sent to
alert the BtcdNotifier of when the script was spent by a transaction. To
make this request we encode the script as an address, as this is what
the backend uses to detect the spend. The transaction will then be
proxied through the txUpdates concurrent queue, which will hand it off
to the underlying txNotifier and dispatch spend notifications to the
relevant clients.
Along the way, we also address an issue where we'd miss detecting that
an outpoint/script has been spent in the future due to not receiving a
historical dispatch request from the underlying txNotifier. To fix this,
we ensure that we always request the backend to notify us of the spend
once it detects it at tip, regardless of whether a historical rescan was
detected or not.
In this commit, we extend the NeutrinoNotifier to support registering
scripts for confirmation notifications. Once the script has been
detected as confirmed within the chain, a confirmation notification will
be dispatched through the Confirmed channel of the ConfirmationEvent
returned upon registration.
For scripts that have confirmed in the past, the `historicalConfDetails`
method has been modified to determine whether the script has been
confirmed by locating the script in an output of a confirmed
transaction.
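The matching step reduces to a script comparison over the outputs of each
confirmed transaction, roughly as follows (wire.TxOut replaced by a
stand-in):

```go
package sketch

import "bytes"

// txOut is a minimal stand-in for wire.TxOut.
type txOut struct {
	Value    int64
	PkScript []byte
}

// scriptConfirmed reports whether any output of a confirmed transaction pays
// to the registered script, which is how a historical rescan decides that a
// script-based confirmation request has already been satisfied.
func scriptConfirmed(outputs []txOut, pkScript []byte) bool {
	for _, out := range outputs {
		if bytes.Equal(out.PkScript, pkScript) {
			return true
		}
	}
	return false
}
```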
For scripts that have yet to confirm, a filter update is sent to the
underlying rescan to ensure that we match and dispatch on the script
when processing new blocks.
Along the way, we also address an issue where we'd miss detecting that a
transaction/script has confirmed in the future due to not receiving a
historical dispatch request from the underlying txNotifier. To fix this,
we ensure that we always update our filters to detect the confirmation
at tip, regardless of whether a historical rescan was detected or not.
In this commit, we extend the BitcoindNotifier to support registering
scripts for confirmation notifications. Once the script has been
detected as confirmed within the chain, a confirmation notification will
be dispatched through the Confirmed channel of the ConfirmationEvent
returned upon registration.
For scripts that have confirmed in the past, the `historicalConfDetails`
method has been modified to skip the txindex and go straight to scanning
the chain manually if the confirmation request is for a script. When
scanning the chain, we'll determine whether the script has been
confirmed by locating the script in an output of a confirmed
transaction.
For scripts that have yet to confirm, they will be properly tracked
within the TxNotifier.
In this commit, we extend the BtcdNotifier to support registering
scripts for confirmation notifications. Once the script has been
detected as confirmed within the chain, a confirmation notification will
be dispatched through the Confirmed channel of the ConfirmationEvent
returned upon registration.
For scripts that have confirmed in the past, the `historicalConfDetails`
method has been modified to skip the txindex and go straight to scanning
the chain manually if the confirmation request is for a script. When
scanning the chain, we'll determine whether the script has been
confirmed by locating the script in an output of a confirmed
transaction.
For scripts that have yet to confirm, they will be properly tracked
within the TxNotifier.