Before this commit, both writing and reading an encoded empty set of
short channel IDs from the wire would fail. We treated decoding an
empty set as a caller error, and we wrote out the zlib encoding of an
empty set in a way that neither we nor the other implementations were
able to read back.
To fix this, rather than giving zlib an empty buffer to compress (which
produces output that still contains the zlib header and framing), we
simply write out an empty slice. When decoding, if the query body is
empty, we'll return a nil slice.
With the above changes, we'll now always write out an empty short
channel ID set as:
```
0001 (1 byte follows) || <encoding_type>
```
A new test has also been added to exercise this case for both known
encoding types.
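As a rough illustration of that write path (the helper name and encoding
constants below are invented for this sketch, not lnd's actual API), an empty
set now serializes to just the encoding type byte, so the two-byte length
prefix ends up as `0001`:
```
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// Encoding types for the gossip queries. The constant names are assumed
// for this sketch and may not match the real identifiers.
const (
	encodingSortedPlain byte = 0
	encodingSortedZlib  byte = 1
)

// writeShortChanIDs is a simplified sketch of the write path: the encoding
// type byte is always emitted, and for an empty set nothing else follows.
// The caller is assumed to prepend the two-byte length of the resulting
// buffer, which for an empty set is simply 0x0001.
func writeShortChanIDs(w *bytes.Buffer, encType byte, scids []uint64) error {
	if err := w.WriteByte(encType); err != nil {
		return err
	}

	// An empty set is encoded as the encoding type alone, regardless of
	// which encoding was requested.
	if len(scids) == 0 {
		return nil
	}

	// For the plain encoding, each short channel ID is an 8-byte big
	// endian integer (the zlib path would compress this same payload).
	for _, scid := range scids {
		if err := binary.Write(w, binary.BigEndian, scid); err != nil {
			return err
		}
	}

	return nil
}

func main() {
	var b bytes.Buffer
	if err := writeShortChanIDs(&b, encodingSortedPlain, nil); err != nil {
		panic(err)
	}

	// Prints "0001 00": a one-byte body containing only the encoding type.
	fmt.Printf("%04x %02x\n", b.Len(), b.Bytes())
}
```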
In this commit, we export the ReadElements and WriteElements functions.
Exporting them makes it possible for outside packages to define
serializations that use the BOLT 1.0 wire format.
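As a hedged sketch of what this enables (assuming the exported functions keep
the io.Writer/io.Reader plus variadic element signatures and accept plain
unsigned integer elements; `customRecord` and `roundTrip` are hypothetical
names in an outside package):
```
package mypkg

import (
	"bytes"
	"io"

	"github.com/lightningnetwork/lnd/lnwire"
)

// customRecord is a hypothetical type in an outside package that reuses the
// BOLT 1.0 style encoding via the newly exported helpers.
type customRecord struct {
	ID     uint64
	Amount uint32
}

// Encode serializes the record with lnwire.WriteElements.
func (c *customRecord) Encode(w io.Writer) error {
	return lnwire.WriteElements(w, c.ID, c.Amount)
}

// Decode reads the fields back with lnwire.ReadElements.
func (c *customRecord) Decode(r io.Reader) error {
	return lnwire.ReadElements(r, &c.ID, &c.Amount)
}

// roundTrip shows the two helpers composing: write into a buffer, then read
// the same fields back out.
func roundTrip(in *customRecord) (*customRecord, error) {
	var b bytes.Buffer
	if err := in.Encode(&b); err != nil {
		return nil, err
	}

	var out customRecord
	if err := out.Decode(&b); err != nil {
		return nil, err
	}

	return &out, nil
}
```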
In this commit, we alter the behavior of the regular short channel ID
encoding so that it returns a nil slice if the decoded number of
elements is 0. This is done so that it matches the behavior of the zlib
decompression, allowing us to test both using the same corpus.
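A minimal sketch of that decode behavior, with an invented helper name rather
than lnd's actual code:
```
package scidsketch

import (
	"encoding/binary"
	"fmt"
)

// decodePlainShortChanIDs sketches the uncompressed decode path (the helper
// name is invented for this example). A body with zero elements yields a nil
// slice instead of an error, matching the zlib decompression path.
func decodePlainShortChanIDs(queryBody []byte) ([]uint64, error) {
	// Nothing after the encoding type byte means an empty set.
	if len(queryBody) == 0 {
		return nil, nil
	}

	if len(queryBody)%8 != 0 {
		return nil, fmt.Errorf("encoded short channel IDs are not a " +
			"multiple of 8 bytes")
	}

	scids := make([]uint64, 0, len(queryBody)/8)
	for i := 0; i < len(queryBody); i += 8 {
		scids = append(scids, binary.BigEndian.Uint64(queryBody[i:i+8]))
	}

	return scids, nil
}
```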
In this commit, we add a new package-level mutex. Each time we decode a
new set of chan IDs with zlib, we also grab this mutex. The purpose is
to ensure that we only ever allocate maxZlibBufSize bytes at a time,
globally across all peers. Otherwise, it would be possible to allocate
up to 64 MB for _each_ peer, exposing an easy OOM attack vector.
In this commit, we implement zlib encoding and decoding for the channel
range queries. Notably, we use an io.LimitedReader to enforce a hard cap
on the total number of bytes we'll ever allocate in a decoding attempt.
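Putting the last two commits together, here is a rough sketch of how the zlib
decode path could look: the package-level mutex serializes decodes across
peers, and an io.LimitedReader caps how much decompressed data a single
attempt may allocate. The function and variable names are assumed for
illustration, and lnd's real implementation may structure this differently:
```
package zlibsketch

import (
	"bytes"
	"compress/zlib"
	"encoding/binary"
	"fmt"
	"io"
	"sync"
)

// maxZlibBufSize caps how many decompressed bytes a single decode attempt may
// allocate; the 64 MB figure comes from the commit message above.
const maxZlibBufSize = 64 << 20

// zlibDecodeMtx serializes zlib decodes across all peers so that at most one
// maxZlibBufSize buffer is ever live at a time (the package-level mutex
// described above). The variable name is assumed for this sketch.
var zlibDecodeMtx sync.Mutex

// decodeZlibShortChanIDs is a rough sketch, not lnd's actual implementation:
// it decompresses the query body under the global mutex, reading the output
// through an io.LimitedReader so it can never exceed the hard cap.
func decodeZlibShortChanIDs(queryBody []byte) ([]uint64, error) {
	zlibDecodeMtx.Lock()
	defer zlibDecodeMtx.Unlock()

	// An empty body decodes to a nil slice, matching the plain encoding.
	if len(queryBody) == 0 {
		return nil, nil
	}

	zr, err := zlib.NewReader(bytes.NewReader(queryBody))
	if err != nil {
		return nil, err
	}
	defer zr.Close()

	// Allow at most maxZlibBufSize decompressed bytes; the extra byte lets
	// us detect when the peer's payload would have exceeded the cap.
	limited := &io.LimitedReader{R: zr, N: maxZlibBufSize + 1}
	decompressed, err := io.ReadAll(limited)
	if err != nil {
		return nil, err
	}
	if int64(len(decompressed)) > maxZlibBufSize {
		return nil, fmt.Errorf("decompressed payload exceeds %d bytes",
			maxZlibBufSize)
	}
	if len(decompressed)%8 != 0 {
		return nil, fmt.Errorf("decompressed payload is not a " +
			"multiple of 8 bytes")
	}

	scids := make([]uint64, 0, len(decompressed)/8)
	for i := 0; i < len(decompressed); i += 8 {
		scids = append(scids, binary.BigEndian.Uint64(decompressed[i:i+8]))
	}

	return scids, nil
}
```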
In this commit, we fix a slight bug in the parsing of encoded short
channel IDs. Before this commit, we would always assume that the remote
peer was sending us the sorted+encoded variant of the short channel
IDs. In the case that they weren't (as there isn't yet a feature bit to
signal this), we'll now assert this check and fail early, since at the
moment we don't support any sort of compression.
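A minimal sketch of the check described here (the names are invented and
lnd's actual parser differs in detail): read the encoding type byte up front
and fail early on any encoding we don't support:
```
package encsketch

import "fmt"

// encodingSortedPlain is the uncompressed, sorted encoding type; the constant
// name is assumed for this sketch.
const encodingSortedPlain byte = 0

// parseEncodedShortChanIDs sketches the corrected parsing path: read the
// encoding type byte first and fail early on anything unsupported, instead of
// blindly treating the payload as the sorted+encoded variant.
func parseEncodedShortChanIDs(encodedIDs []byte) (byte, []byte, error) {
	if len(encodedIDs) == 0 {
		return 0, nil, fmt.Errorf("missing encoding type byte")
	}

	encType, body := encodedIDs[0], encodedIDs[1:]
	switch encType {
	case encodingSortedPlain:
		// Uncompressed, sorted short channel IDs follow.
		return encType, body, nil

	default:
		// Reject compressed or unknown encodings up front rather than
		// misparsing the payload as raw short channel IDs.
		return 0, nil, fmt.Errorf("unsupported encoding type: %v",
			encType)
	}
}
```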