Extends the invalid payment details failure with the new accept height
field. This allows the sender to distinguish between a genuinely invalid
details situation and a delay caused by intermediate nodes.
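A minimal sketch of the shape of the extended failure (field names are
illustrative):

    // failIncorrectDetails sketches the extended failure: alongside the
    // payment amount, it now carries the height at which the HTLC was
    // accepted, so the sender can tell a genuinely invalid payment apart
    // from one that was simply held up along the route.
    type failIncorrectDetails struct {
        amount uint64 // payment amount, in millisatoshis
        height uint32 // block height at which the HTLC was accepted
    }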
Align naming more closely with the lightning spec. The full name of the
failure (FailIncorrectOrUnknownPaymentDetails) is not used, because it
would cause too many long lines in the code.
Methods on failure message types used to be defined on value receivers.
This allowed assignment of a failure message to ForwardingError both as
a value and as a pointer. This is error-prone, especially when using a
type switch.
In this commit the failure message methods are changed so that they
target pointer receivers.
Two instances where a value was assigned instead of a reference are
fixed.
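As a stand-alone illustration (the types below are stand-ins, not lnd's
actual failure messages), pointer receivers force a single canonical
form, so a type switch can't silently miss the value form:

    package main

    import "fmt"

    // FailureMessage mimics the interface implemented by onion failure
    // messages.
    type FailureMessage interface {
        Code() uint16
    }

    // FailTemporaryChannelFailure is a stand-in failure type.
    type FailTemporaryChannelFailure struct{}

    // With a pointer receiver, only *FailTemporaryChannelFailure satisfies
    // FailureMessage, so assignments must always use a pointer and a type
    // switch needs just one case per failure. With a value receiver, both
    // the value and the pointer would satisfy the interface, and a switch
    // matching only one form would silently fall through to default.
    func (f *FailTemporaryChannelFailure) Code() uint16 { return 0x1007 }

    func main() {
        var msg FailureMessage = &FailTemporaryChannelFailure{}

        switch msg.(type) {
        case *FailTemporaryChannelFailure:
            fmt.Println("matched pointer form")
        default:
            fmt.Println("missed")
        }
    }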
In this commit, we modify the decoding of the FailUnknownPaymentHash
message to ensure we're able to fully decode the legacy serialization of
the onion error. We do this by catching the `io.EOF` error as it's
returned when _no_ bytes are read. If this is the case, then only the
error type was serialized and not also the optional amount.
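A runnable sketch of the pattern, using a simplified stand-in for the
real message:

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "io"
    )

    // failUnknownPaymentHash is a simplified stand-in: the amount field
    // is optional and absent in legacy serializations.
    type failUnknownPaymentHash struct {
        Amount uint64
    }

    // decode tolerates the legacy encoding by treating io.EOF (no bytes
    // read at all) as "amount not present" rather than as a failure. A
    // partial read still surfaces as io.ErrUnexpectedEOF.
    func (f *failUnknownPaymentHash) decode(r io.Reader) error {
        err := binary.Read(r, binary.BigEndian, &f.Amount)
        if err == io.EOF {
            return nil
        }
        return err
    }

    func main() {
        var legacy bytes.Buffer // empty body: only the error type was sent
        f := &failUnknownPaymentHash{}
        fmt.Println(f.decode(&legacy), f.Amount) // <nil> 0
    }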
In this commit, we fix a bug in the way we defined the even/odd bits for
a particular feature. The check for whether a feature bit is part of a
pair assumes that the paired bit has the exact same name as the bit being
queried. The way we defined our feature map didn't take note of this
assumption; as a result, any attempt to require a new bit moving from
optional to required would fail since the bit would be found, but the
names differed.
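A small self-contained sketch of the invariant the fix restores (bit
values and names here are illustrative): both halves of an even/odd pair
must be registered under the exact same name for the pair lookup to
succeed:

    package main

    import "fmt"

    type FeatureBit uint16

    const (
        GossipQueriesRequired FeatureBit = 6
        GossipQueriesOptional FeatureBit = 7
    )

    // featureNames maps each known bit to its name. The pair check below
    // assumes an even/odd pair shares the exact same name, so both entries
    // must match for optional->required transitions to be recognized.
    var featureNames = map[FeatureBit]string{
        GossipQueriesRequired: "gossip-queries",
        GossipQueriesOptional: "gossip-queries",
    }

    // isFeaturePair reports whether bit and its even/odd counterpart are
    // both defined under the same name.
    func isFeaturePair(bit FeatureBit) bool {
        name, ok := featureNames[bit]
        pairName, pairOK := featureNames[bit^1]
        return ok && pairOK && name == pairName
    }

    func main() {
        fmt.Println(isFeaturePair(GossipQueriesOptional)) // true
    }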
In this commit, we deprecate the `IncorrectHtlcAmount` onion error.
We'll still decode this error to use when retrying paths, but we'll no
longer send this ourselves. The `UnknownPaymentHash` error has been
amended to also include the value of the payment as well. This allows us
to worry about one less error.
In this commit, we add a field to the ChannelUpdate
denoting the maximum HTLC we support sending over
this channel, a field which was recently added to the
spec.
This field serves multiple purposes. In the short
term, it enables nodes to signal the largest HTLC
they're willing to carry, allows light clients who
don't verify channel existence to have some guidance
when routing HTLCs, and finally may allow nodes to
preserve a portion of bandwidth at all times.
In the long term, this field can be used by
implementations of AMP to guide payment splitting,
as it makes apparent to a node the largest possible
HTLC that can be routed over a particular channel.
This PR was made possible by the merge of #1825,
which enables older nodes to properly retain and
verify signatures on updates that include new fields
(like this new max HTLC field) that they haven't yet
been updated to recognize.
In addition, the new ChannelUpdate fields are added to
the lnwire fuzzing tests.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit, we add a String method to the ChanUpdate[Chan|Msg]Flags
bitfields, since a bitfield printed out in decimal is annoying to parse.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
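A possible shape for such a String method (a sketch, the exact formatting
may differ), rendering the bits in binary so they can be read at a
glance:

    package main

    import "fmt"

    type (
        ChanUpdateMsgFlags  uint8
        ChanUpdateChanFlags uint8
    )

    // String renders the bitfield in binary instead of decimal, so the
    // individual flag bits are immediately visible in logs.
    func (f ChanUpdateMsgFlags) String() string {
        return fmt.Sprintf("%08b", uint8(f))
    }

    func (f ChanUpdateChanFlags) String() string {
        return fmt.Sprintf("%08b", uint8(f))
    }

    func main() {
        fmt.Println(ChanUpdateMsgFlags(1), ChanUpdateChanFlags(3))
        // Output: 00000001 00000011
    }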
In this commit:
* we partition lnwire.ChanUpdateFlag into two (ChanUpdateChanFlags and
ChanUpdateMsgFlags), going from a uint16 to a pair of uint8s (see the
sketch after this commit)
* we rename the ChannelUpdate.Flags field to ChannelFlags and add an
additional MessageFlags field, which will be used to indicate the
presence of the optional HtlcMaximumMsat field within the ChannelUpdate.
* we partition ChannelEdgePolicy.Flags into message and channel flags.
This change corresponds to the partitioning of the ChannelUpdate's Flags
field into MessageFlags and ChannelFlags.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
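A trimmed-down sketch of the resulting message shape (field set
abbreviated; the flag bit position follows the spec's
option_channel_htlc_max):

    package main

    import "fmt"

    type (
        ChanUpdateMsgFlags  uint8
        ChanUpdateChanFlags uint8
    )

    // ChanUpdateOptionMaxHtlc is the message-flag bit indicating that the
    // optional HtlcMaximumMsat field is present.
    const ChanUpdateOptionMaxHtlc ChanUpdateMsgFlags = 1 << 0

    // channelUpdate shows only the fields relevant to the split: the old
    // single uint16 Flags field becomes MessageFlags plus ChannelFlags.
    type channelUpdate struct {
        MessageFlags    ChanUpdateMsgFlags
        ChannelFlags    ChanUpdateChanFlags
        HtlcMaximumMsat uint64 // only serialized when the flag above is set
    }

    func main() {
        upd := channelUpdate{
            MessageFlags:    ChanUpdateOptionMaxHtlc,
            HtlcMaximumMsat: 990000000,
        }
        fmt.Println(upd.MessageFlags&ChanUpdateOptionMaxHtlc != 0) // true
    }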
In this commit we fix a compatibility issue with other implementations.
Before this commit, when writing out an onion error that includes a
`ChannelUpdate` we would use the `MaxPayloadLength` to get the length to
encode. However, a recent update has modified that to be the max
`brontide` payload length as it's possible to pad out the message with
optional fields we're unaware of. As a result, we would always write out
a length of 65KB or so. This didn't affect our parser, since we ignore
the length and decode the channel update directly. However, other
implementations depended on the length rather than just reading the
channel update, meaning they weren't able to decode our onion errors that
included channel updates.
In this commit we fix that by introducing a new
`writeOnionErrorChanUpdate` which will write out the precise length
instead of using the max payload size.
Fixes #2450.
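A sketch of the approach, assuming the lnwire ChannelUpdate type (the
real helper may differ in detail): encode into a scratch buffer first, so
the exact encoded size, not MaxPayloadLength, becomes the length prefix:

    import (
        "bytes"
        "encoding/binary"
        "io"

        "github.com/lightningnetwork/lnd/lnwire"
    )

    // writeOnionErrorChanUpdate writes the 2-byte length of the encoded
    // update followed by the update itself, rather than the maximum
    // brontide payload size.
    func writeOnionErrorChanUpdate(w io.Writer,
        chanUpdate *lnwire.ChannelUpdate, pver uint32) error {

        var b bytes.Buffer
        if err := chanUpdate.Encode(&b, pver); err != nil {
            return err
        }

        if err := binary.Write(w, binary.BigEndian, uint16(b.Len())); err != nil {
            return err
        }

        _, err := w.Write(b.Bytes())
        return err
    }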
In this commit, we ensure that node aliases are validated when we read
them from the wire. Before this commit, we would read the raw bytes
without checking for validity, which could result in us writing an
invalid node alias to disk. We've fixed this, and also updated the
quickcheck tests to generate valid strings.
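A minimal sketch of the kind of validation applied, assuming the rules
are the spec's 32-byte cap and valid UTF-8:

    import (
        "fmt"
        "unicode/utf8"
    )

    // validateAlias rejects aliases that couldn't have been produced by a
    // well-behaved node before they can be persisted to disk.
    func validateAlias(raw []byte) (string, error) {
        if len(raw) > 32 {
            return "", fmt.Errorf("alias too long: %d bytes", len(raw))
        }
        if !utf8.Valid(raw) {
            return "", fmt.Errorf("alias is not valid UTF-8")
        }
        return string(raw), nil
    }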
In this commit, we export the ReadElements and WriteElements functions.
We do this as exporting these functions makes it possible for outside
packages to define serializations which use the BOLT 1.0 wire format.
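For example, an outside package could now round-trip its own record
through the BOLT 1.0 wire format (the field choice here is just
illustrative):

    import (
        "bytes"

        "github.com/lightningnetwork/lnd/lnwire"
    )

    // serializeRecord encodes a custom record using the exported helper.
    func serializeRecord(chanID lnwire.ShortChannelID,
        amt lnwire.MilliSatoshi) ([]byte, error) {

        var b bytes.Buffer
        if err := lnwire.WriteElements(&b, chanID, amt); err != nil {
            return nil, err
        }
        return b.Bytes(), nil
    }

    // deserializeRecord decodes the same record back out.
    func deserializeRecord(raw []byte) (lnwire.ShortChannelID,
        lnwire.MilliSatoshi, error) {

        var (
            chanID lnwire.ShortChannelID
            amt    lnwire.MilliSatoshi
        )
        err := lnwire.ReadElements(bytes.NewReader(raw), &chanID, &amt)
        return chanID, amt, err
    }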
In this commit we add a check to HtlcSatifiesPolicy to verify that the
time lock for the outgoing htlc that is requested in the onion packet
isn't too far in the future.
Without this check, anyone could force an unreasonably long time lock on
the forwarding node.
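A sketch of the bound that's enforced (the constant below is a
hypothetical cap, not lnd's exact value):

    import "fmt"

    // maxOutgoingCltvExpiry is a hypothetical cap, in blocks, on how far
    // in the future an outgoing HTLC's timeout may be.
    const maxOutgoingCltvExpiry = 5000

    // checkOutgoingTimelock rejects onion payloads that ask the forwarding
    // node to lock funds unreasonably far beyond the current height.
    func checkOutgoingTimelock(outgoingTimeout, bestHeight uint32) error {
        if outgoingTimeout > bestHeight+maxOutgoingCltvExpiry {
            return fmt.Errorf("outgoing timeout %d exceeds height %d by "+
                "more than %d blocks", outgoingTimeout, bestHeight,
                maxOutgoingCltvExpiry)
        }
        return nil
    }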
This commit adds the required feature name to our
set of local known features. This will allow other
peers connecting to us to set the required gossip
queries feature bit. This is required for the
subsequent commits, which instruct the server to
set the bit depending on user-configured preferences.
In this commit, we add a new field to all the existing gossip messages:
ExtraOpaqueData. We do this because, before this commit, if we came
across a ChannelUpdate message with a set of optional fields, we wouldn't
be able to properly parse the signatures related to the message. If we
never corrected this behavior, then we would violate the
forwards-compatibility principle we use when parsing existing messages.
As these messages can now be padded out to the max message size, we've
increased the MaxPayloadLength value for all of these messages.
Fixes #1814.
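A minimal sketch of the idea: whatever remains after the fields we
understand is captured verbatim and written back out unchanged, so
signatures over the full message still verify:

    import "io"

    // decodeTail captures any bytes following the known fields into
    // ExtraOpaqueData instead of discarding them.
    func decodeTail(r io.Reader) ([]byte, error) {
        return io.ReadAll(r)
    }

    // encodeTail re-emits the opaque bytes exactly as they were received.
    func encodeTail(w io.Writer, extraOpaqueData []byte) error {
        _, err := w.Write(extraOpaqueData)
        return err
    }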
In this commit, we add a compatibility mode for older versions of
c-lightning to ensure that we're able to properly parse all their channel
updates. An older version of c-lightning would send out encapsulated
onion error messages with an additional type prefix. This would throw off
our parsing, as we didn't expect the type prefix, and so we were always 2
bytes off. In order to ensure that we're able to parse these messages and
make adjustments to our path finding, we'll first check to see if the
type prefix is there. If so, we'll snip off two bytes from the front and
continue with parsing. If the bytes aren't found, then we can proceed as
normal and parse the request.
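A runnable sketch of the shim (258 is channel_update's wire type; the
helper name is illustrative):

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    // msgChannelUpdate is the wire type of channel_update.
    const msgChannelUpdate uint16 = 258

    // maybeStripTypePrefix snips off the leading type bytes that older
    // c-lightning versions prepend to the encapsulated channel update, so
    // the remaining bytes can be fed to the normal parser.
    func maybeStripTypePrefix(data []byte) []byte {
        if len(data) >= 2 &&
            binary.BigEndian.Uint16(data[:2]) == msgChannelUpdate {

            return data[2:]
        }
        return data
    }

    func main() {
        legacy := []byte{0x01, 0x02, 0xde, 0xad} // 258 prefix + payload
        fmt.Printf("%x\n", maybeStripTypePrefix(legacy)) // dead
    }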
In this commit, we alter the behavior of the regular
short channel id encoding, such that it returns a nil
slice if the decoded number of elements is 0. This is
done so that it matches the behavior of the zlib
decompression, allowing us to test both using the
same corpus.
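A sketch of the adjusted decoding for the plain encoding (uint64 stands
in for the short channel ID type):

    import (
        "encoding/binary"
        "io"
    )

    // decodeShortChanIDs returns a nil slice, not an empty allocated one,
    // when there are no elements, matching the zlib path which only ever
    // appends as it inflates.
    func decodeShortChanIDs(r io.Reader, numIDs int) ([]uint64, error) {
        if numIDs == 0 {
            return nil, nil
        }

        ids := make([]uint64, 0, numIDs)
        for i := 0; i < numIDs; i++ {
            var id uint64
            if err := binary.Read(r, binary.BigEndian, &id); err != nil {
                return nil, err
            }
            ids = append(ids, id)
        }
        return ids, nil
    }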
Modifies the behavior of the quick test for
MsgQueryShortChanIDs, such that the generated
slice of expected short chan ids is always nil
if no elements are returned. This mimics the
behavior of the zlib decompression, where
elements are appended to the slice instead of
assigned to a preallocated slice.
In this commit, we add a new package level mutex. Each time we decode a
new set of chan IDs w/ zlib, we also grab this mutex. The purpose here
is to ensure that we only EVER allocate the maxZlibBufSize globally
across all peers. Otherwise, it may be possible for us to allocate up to
64 MB for _each_ peer, exposing an easy OOM attack vector.
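A minimal sketch of the guard, with inflateShortChanIDs standing in for
the size-capped zlib decode sketched under the next commit:

    import "sync"

    // zlibDecodeMtx serializes all zlib inflations so that at most one
    // maxZlibBufSize allocation is live process-wide, rather than one per
    // peer.
    var zlibDecodeMtx sync.Mutex

    func decodeShortChanIDsZlib(compressed []byte) ([]uint64, error) {
        zlibDecodeMtx.Lock()
        defer zlibDecodeMtx.Unlock()

        // inflateShortChanIDs is a stand-in for the capped decode.
        return inflateShortChanIDs(compressed)
    }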
In this commit, we implement zlib encoding and decoding for the channel
range queries. Notably, we utilize an io.LimitedReader to ensure that we
can enforce a hard cap on the total number of bytes we'll ever allocate
in a decoding attempt.
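A runnable sketch of the capped decode (the buffer-size constant is
illustrative):

    package main

    import (
        "bytes"
        "compress/zlib"
        "encoding/binary"
        "fmt"
        "io"
    )

    // maxZlibBufSize caps how many inflated bytes a single decode may
    // produce.
    const maxZlibBufSize = 64 << 20

    // decodeZlibShortChanIDs inflates a zlib-compressed block of 8-byte
    // short channel IDs, never reading more than maxZlibBufSize bytes of
    // decompressed data no matter what the stream claims.
    func decodeZlibShortChanIDs(compressed []byte) ([]uint64, error) {
        zr, err := zlib.NewReader(bytes.NewReader(compressed))
        if err != nil {
            return nil, err
        }
        defer zr.Close()

        // Once the LimitedReader's budget is exhausted, further reads
        // return io.EOF and decoding stops.
        limited := &io.LimitedReader{R: zr, N: maxZlibBufSize}

        var ids []uint64
        for {
            var id uint64
            err := binary.Read(limited, binary.BigEndian, &id)
            if err == io.EOF {
                break
            }
            if err != nil {
                return nil, err
            }
            ids = append(ids, id)
        }
        return ids, nil
    }

    func main() {
        var buf bytes.Buffer
        zw := zlib.NewWriter(&buf)
        binary.Write(zw, binary.BigEndian, uint64(1))
        binary.Write(zw, binary.BigEndian, uint64(2))
        zw.Close()

        fmt.Println(decodeZlibShortChanIDs(buf.Bytes())) // [1 2] <nil>
    }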