When we initiate a payment, add an entry to a payments index bucket
that maps its sequence number to its payment hash. This allows for more
efficient paginated queries. We create the top-level bucket in its
own migration so that we do not need to create it on the fly.
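For illustration, a minimal sketch of what the migration and the index
write could look like, assuming bbolt and made-up bucket/key names (the
actual channeldb schema and migration framework differ in detail):

```go
// Sketch only: bucket and key names below are hypothetical.
package paymentsidx

import (
	"encoding/binary"

	"go.etcd.io/bbolt"
)

var paymentsIndexBucket = []byte("payments-index-bucket")

// migrationCreateIndexBucket creates the top-level index bucket once, so
// the write path never has to create it on the fly.
func migrationCreateIndexBucket(tx *bbolt.Tx) error {
	_, err := tx.CreateBucketIfNotExists(paymentsIndexBucket)
	return err
}

// putPaymentIndexEntry maps a payment's sequence number to its payment
// hash when the payment is initiated, enabling paginated queries by
// sequence number without scanning every payment.
func putPaymentIndexEntry(tx *bbolt.Tx, seqNum uint64, payHash [32]byte) error {
	var key [8]byte
	binary.BigEndian.PutUint64(key[:], seqNum)

	return tx.Bucket(paymentsIndexBucket).Put(key[:], payHash[:])
}
```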
When we retry a payment and assign it a new sequence number, we delete
the index entry for the existing payment so that we are not left with
an index entry that points to a non-existent payment.
If we delete a payment, we also delete its index entry. This prevents
us from following index entries to payments that no longer exist.
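A matching sketch of the deletion path, under the same hypothetical
layout as the sketch above; it is used both when a payment is retried
under a new sequence number and when the payment itself is removed:

```go
// Sketch only: not the actual channeldb code.
package paymentsidx

import (
	"encoding/binary"

	"go.etcd.io/bbolt"
)

var paymentsIndexBucket = []byte("payments-index-bucket")

// deletePaymentIndexEntry removes the index entry for a sequence number
// so the index never points at a payment that no longer exists.
func deletePaymentIndexEntry(tx *bbolt.Tx, seqNum uint64) error {
	var key [8]byte
	binary.BigEndian.PutUint64(key[:], seqNum)

	return tx.Bucket(paymentsIndexBucket).Delete(key[:])
}
```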
This commit migrates the payments in the database to a new structure
that allows for multiple htlcs per payment. The migration introduces a
new sub-bucket that contains a list of htlcs and moves the old single
htlc into it.
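A minimal sketch of the per-payment migration step, assuming
hypothetical bucket and key names and glossing over serialization:

```go
// Sketch only: the real migration iterates all payments and handles
// serialization details that are omitted here.
package paymentsmigration

import "go.etcd.io/bbolt"

var (
	paymentHtlcsBucket = []byte("payment-htlcs-bucket") // new sub-bucket
	oldAttemptInfoKey  = []byte("attempt-info")         // legacy single-htlc key
)

// migratePaymentToHtlcs moves the single legacy htlc record of one
// payment bucket into a new sub-bucket that can hold a list of htlcs.
func migratePaymentToHtlcs(payment *bbolt.Bucket) error {
	oldHtlc := payment.Get(oldAttemptInfoKey)
	if oldHtlc == nil {
		// Nothing to migrate for this payment.
		return nil
	}

	// Copy the value out of the mmap'd page before re-inserting it.
	htlcBytes := append([]byte(nil), oldHtlc...)

	htlcs, err := payment.CreateBucketIfNotExists(paymentHtlcsBucket)
	if err != nil {
		return err
	}

	// Store the existing htlc as the first (and only) entry in the list.
	var attemptID [8]byte
	if err := htlcs.Put(attemptID[:], htlcBytes); err != nil {
		return err
	}

	// Remove the legacy key now that the htlc lives in the sub-bucket.
	return payment.Delete(oldAttemptInfoKey)
}
```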
The btclog package has been changed to define its own logging
interface (rather than reusing seelog's) and provides a default
implementation for callers to use.
There are two primary advantages to the new logger implementation.
First, all log messages are created before the call returns. Compared
to seelog, this prevents data races when mutable variables are logged.
Second, the new logger does not implement any kind of artificial rate
limiting (what seelog refers to as "adaptive logging"). Log messages
are written as soon as possible, and the application appears to
perform much better when watching standard output.
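To illustrate the first point, a small sketch of eager formatting (not
the actual btclog code): the message string is fully built on the
caller's goroutine, so later mutation of the logged values cannot race
with the write.

```go
// Sketch only: a logger that formats messages before returning.
package logsketch

import (
	"fmt"
	"io"
	"time"
)

// Logger writes fully formatted lines to out.
type Logger struct {
	out io.Writer
}

// Infof builds the message here, on the caller's goroutine, rather than
// deferring formatting to a background logging goroutine as seelog does.
func (l *Logger) Infof(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	fmt.Fprintf(l.out, "%s [INF] %s\n",
		time.Now().Format("2006-01-02 15:04:05"), msg)
}
```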
Because log rotation is not a feature of the btclog logging
implementation, it is handled by the main package by importing a file
rotation package that reads log output from an io.Reader and writes it
to a rotating set of output files. The rotator has been configured
with the same defaults that btcd previously used in the seelog config
(10MB file limit with a maximum of 3 rolls) but now compresses newly
created roll files. Due to the high compressibility of log text, the
compressed files typically shrink to around 15-30% of the original
10MB file.
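For illustration only, a self-contained sketch of the rotation policy
described above (10MB threshold, 3 compressed rolls); the actual
rotation package btcd imports exposes an io.Reader-driven API and
handles more edge cases:

```go
// Sketch only: a simple rotating, compressing writer.
package logrotationsketch

import (
	"compress/gzip"
	"fmt"
	"io"
	"os"
)

const (
	maxFileSize = 10 * 1024 * 1024 // 10MB before rolling
	maxRolls    = 3                // keep at most 3 compressed rolls
)

type rotatingWriter struct {
	path string
	file *os.File
	size int64
}

func newRotatingWriter(path string) (*rotatingWriter, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
	if err != nil {
		return nil, err
	}
	info, err := f.Stat()
	if err != nil {
		f.Close()
		return nil, err
	}
	return &rotatingWriter{path: path, file: f, size: info.Size()}, nil
}

// Write rolls the live file before the size limit would be exceeded.
func (w *rotatingWriter) Write(p []byte) (int, error) {
	if w.size+int64(len(p)) > maxFileSize {
		if err := w.rotate(); err != nil {
			return 0, err
		}
	}
	n, err := w.file.Write(p)
	w.size += int64(n)
	return n, err
}

// rotate compresses the current file into roll #1 and starts a fresh
// live file, discarding the oldest roll beyond maxRolls.
func (w *rotatingWriter) rotate() error {
	w.file.Close()

	// Shift existing rolls: log.2.gz -> log.3.gz, log.1.gz -> log.2.gz.
	os.Remove(fmt.Sprintf("%s.%d.gz", w.path, maxRolls))
	for i := maxRolls - 1; i >= 1; i-- {
		os.Rename(fmt.Sprintf("%s.%d.gz", w.path, i),
			fmt.Sprintf("%s.%d.gz", w.path, i+1))
	}

	// Log text compresses well, so the roll is typically a small
	// fraction of the 10MB input.
	if err := gzipFile(w.path, w.path+".1.gz"); err != nil {
		return err
	}

	f, err := os.Create(w.path)
	if err != nil {
		return err
	}
	w.file, w.size = f, 0
	return nil
}

// gzipFile compresses src into dst.
func gzipFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	gz := gzip.NewWriter(out)
	if _, err := io.Copy(gz, in); err != nil {
		return err
	}
	return gz.Close()
}
```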
* Initial draft of brain dump of chandler. Nothing yet set in stone.
* Will most likely move the storage of all structs to a more “column”
oriented approach, so that small updates like incrementing the total
satoshis sent don’t result in the entire struct being serialized and
written (see the sketch after this list).
* Some skeleton structs for other possible data we might want to store
are also included.
* Seems valuable to record as much data as possible for record keeping,
visualization, debugging, etc. Will need to set up a time+space+dirty
cache to ensure performance isn’t impacted too much.
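As a rough sketch of the “column” oriented idea mentioned above (bbolt,
hypothetical key names, not the eventual channeldb layout), each field
lives under its own key so a counter bump only rewrites 8 bytes:

```go
// Sketch only: per-field keys instead of one serialized struct.
package chandlersketch

import (
	"encoding/binary"

	"go.etcd.io/bbolt"
)

var totalSatSentKey = []byte("total-sat-sent")

// addSatoshisSent bumps a single per-channel counter. Only this 8-byte
// value is rewritten, not the whole channel state struct.
func addSatoshisSent(chanBucket *bbolt.Bucket, amt uint64) error {
	total := amt
	if cur := chanBucket.Get(totalSatSentKey); len(cur) == 8 {
		total += binary.BigEndian.Uint64(cur)
	}

	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], total)
	return chanBucket.Put(totalSatSentKey, buf[:])
}
```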