In this commit, we fix an issue that was recently introduced to the way
we handle historical dispatches for the neutrino notifier. In a recent
change, we now return an error if we’re unable to actually find the
transaction that spends an outpoint. If this is the case, then the
outpoint is actually unspent, and we should proceed as normal.
In this commit, we fix a race condition related to the way we attempt
to query to see if an outpoint has already been spent by the time it’s
registered within the ChainNotifier. If the transaction creating the
outpoint hasn’t made it into the mempool by the time we execute the
GetTxOut call, then we’ll attempt to query for the transaction itself.
In this case, if we query for the transaction, then the block hash
field will be empty as it hasn’t yet made it into a block. Under the
previous logic, we’d then attempt to force a rescan. This is an issue
as the forced rescan will fail since it’ll try to fetch the block hash
of all zeroes.
In this commit, we fix this issue by only entering this “fallback to
rescan” logic iff the transaction has actually been mined.
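For illustration, the guard looks roughly like the sketch below. This is
not the notifier’s actual code: it assumes an rpcclient-style chain
connection (GetTxOut and GetRawTransactionVerbose from btcd’s rpcclient
and btcjson packages), and maybeHistoricalSpendRescan and
dispatchHistoricalSpend are hypothetical names.

    // Assumes imports of btcd's rpcclient and chaincfg/chainhash packages.
    func maybeHistoricalSpendRescan(chainConn *rpcclient.Client,
        txid *chainhash.Hash, index uint32) error {

        // If GetTxOut still sees the output, it's unspent and in the UTXO
        // set: nothing more to do here.
        txOut, err := chainConn.GetTxOut(txid, index, true)
        if err != nil {
            return err
        }
        if txOut != nil {
            return nil
        }

        // The output wasn't found, so look up the creating transaction
        // itself to decide whether a historical rescan is needed.
        tx, err := chainConn.GetRawTransactionVerbose(txid)
        if err != nil {
            return err
        }

        // Only fall back to a rescan iff the transaction has actually been
        // mined: an unconfirmed transaction carries an empty BlockHash, and
        // rescanning from an all-zero block hash would fail.
        if tx.BlockHash == "" {
            return nil
        }
        blockHash, err := chainhash.NewHashFromStr(tx.BlockHash)
        if err != nil {
            return err
        }
        return dispatchHistoricalSpend(chainConn, blockHash)
    }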
All implementations of the ChainNotifier interface support registering
notifications on transaction confirmations. This struct is intended to
be used internally by ChainNotifier implementations to handle much of
this logic.
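A rough sketch of the shape such a helper can take is shown below. Every
name here (confRequest, txConfTracker, connectBlock) is illustrative only,
not the actual exported API, and txids are assumed to be chainhash.Hash
values.

    // confRequest is a single registration: the watched txid, how many
    // confirmations it needs, and a channel signalled once that depth is hit.
    type confRequest struct {
        txid     chainhash.Hash
        numConfs uint32
        event    chan uint32 // receives the height at which numConfs was reached
    }

    // txConfTracker is a hypothetical internal helper. A ChainNotifier
    // implementation feeds it each connected block, and it dispatches the
    // registered confirmation notifications once enough blocks have been
    // mined on top of the block containing each watched transaction.
    type txConfTracker struct {
        confHeights map[chainhash.Hash]uint32 // height each watched tx confirmed at
        requests    map[chainhash.Hash][]*confRequest
    }

    func (t *txConfTracker) connectBlock(height uint32, txids []chainhash.Hash) {
        // Record the confirmation height of any watched transactions
        // included in this block.
        for _, txid := range txids {
            if _, ok := t.requests[txid]; ok {
                t.confHeights[txid] = height
            }
        }

        // Dispatch every request that has now reached its required depth.
        for txid, reqs := range t.requests {
            confHeight, confirmed := t.confHeights[txid]
            if !confirmed {
                continue
            }
            numConfs := height - confHeight + 1

            remaining := reqs[:0]
            for _, req := range reqs {
                if numConfs >= req.numConfs {
                    // The event channel should be buffered (or drained
                    // promptly) so a slow consumer can't block the notifier.
                    req.event <- height
                } else {
                    remaining = append(remaining, req)
                }
            }
            t.requests[txid] = remaining
        }
    }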
In this commit, we fix an existing bug within the logic of the neutrino
notifier. Rather than properly dispatching only once a transaction had
reached the expected number of confirmations, the historical dispatch
logic would trigger as soon as the transaction reached a single
confirmation.
This was due to the fact that we were using the scanHeight variable,
which would be set to zero, to calculate the number of confirmations.
The value would end up being the current height, which is generally
always greater than the number of expected confirmations. To remedy
this, we’ll now properly use the block height the transaction was
originally confirmed in to know when to dispatch.
This also applies a fix that was discovered in
93981a85c0b47622a3a5e7089b8bca9b80b834c5.
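Concretely, the corrected arithmetic looks roughly like the following;
the variable names are illustrative, not the notifier’s actual ones.

    // Buggy: scanHeight was still zero during historical dispatches, so the
    // result collapsed to (roughly) the current height and the comparison
    // below always passed.
    //     confs := currentHeight - scanHeight + 1
    //
    // Fixed: count confirmations from the block the transaction was
    // originally confirmed in.
    confs := currentHeight - txConfirmedHeight + 1
    if confs >= numConfsRequired {
        // Dispatch the confirmation notification now.
    } else {
        // Queue it until the chain reaches
        // txConfirmedHeight + numConfsRequired - 1.
    }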
In this commit, we extend the existing historical dispatch test case to
detect any instances of early dispatches. This catches a class of bug
within a ChainNotifier where the notifier will *always* dispatch early
no matter the number of confirmations. Currently, this test fails for
the neutrino notifier.
In the historical dispatch of btcdnotify, the dispatcher checks if a
transaction has been included in a block. If this check happens before
the notifier has processed the update, it's possible that the
currentHeight of the notifier and the currentHeight of the chain might
be out of sync, which causes an off-by-one error when calculating a
target height for the transaction confirmation. This change uses the
height of the block the transaction was found in, rather than the
currentHeight that’s known by the notifier, eliminating the off-by-one.
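A sketch of the corrected target-height calculation, with illustrative
names (the exact expressions in the notifier may differ):

    // Before: anchored to the notifier's possibly stale view of the tip,
    // which can lag the chain by a block at this point.
    //     targetHeight := currentHeight + numConfs - 1
    //
    // After: anchored to the height of the block the transaction was
    // actually mined in, which is exact regardless of notifier lag.
    targetHeight := blockHeight + numConfs - 1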
This race condition can occur if a transaction is included in a block
right when a notification is being added to the notifier for it AND when
the notification requires more than one confirmation. In this case, the
confirmation gets added to the confirmation heap twice.
This commit adds a test for a consumer that registers for a transaction
confirmation but takes some time to check whether that confirmation has
occurred.
The test reveals a race condition that can cause btcdnotify to add a
confirmation entry to its internal heap twice. If the notification
consumer is not prompt in reading from the confirmation channel, this
can cause the notifier to block indefinitely.
This commit reduces the neutrino.WaitForMoreCFHeaders parameter when
instantiating a neutrino instance, as a lower value will allow the tests
to complete more quickly.
This commit fixes a prior bug in the logic for registering a new spend
notification. Previously, if the transaction wasn’t found in the
mempool or already confirmed within the chain, then
GetRawTransactionVerbose would return an error, which would cause the
function itself to exit with an error.
This issue would then cause the server to be unable to start up as the
breach arbiter would be unable to register for spend notifications for
all the channels that it needed to be watching.
We fix this error simply by recognizing the particular JSON-RPC error
that will be returned in this scenario and treating it as a benign
error.
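The check itself looks roughly like the sketch below, which assumes
btcd’s btcjson error types; btcjson.ErrRPCNoTxInfo is the code btcd
returns when it has no information about a transaction, and the
surrounding registration function is elided.

    tx, err := b.chainConn.GetRawTransactionVerbose(txid)
    if err != nil {
        // A "no transaction info" error just means the transaction isn't
        // in the mempool or the chain yet. Treat it as benign and continue
        // with the registration; any other error is a real failure.
        jsonErr, ok := err.(*btcjson.RPCError)
        if !ok || jsonErr.Code != btcjson.ErrRPCNoTxInfo {
            return nil, err
        }
        tx = nil
    }
    // If tx is non-nil, the transaction was found and a historical dispatch
    // can be attempted; otherwise fall through and continue registering the
    // spend notification.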
This commit fixes a prior mishandled error when attempting historical
confirmation dispatches. In the prior version of this code fragment, if
the transaction under the spotlight wasn’t found within the mempool, or
already in the chain, then an error would be returned by
b.chainConn.GetRawTransactionVerbose, which would cause the function to
exit with an error. This behavior was incorrect, as during transaction
re-broadcasts, it was possible for the transaction to not yet be a member
of either set.
We fix this issue by ensuring that we treat the JSON error code as a
benign error and continue with the notification registration.
The btclog package has been changed to define its own logging
interface (rather than seelog's) and provides a default implementation
for callers to use.
There are two primary advantages to the new logger implementation.
First, all log messages are created before the call returns. Compared
to seelog, this prevents data races when mutable variables are logged.
Second, the new logger does not implement any kind of artificial rate
limiting (what seelog refers to as "adaptive logging"). Log messages
are written out as soon as possible, and the application will appear to
perform much better when watching standard output.
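For reference, the default implementation can be used along these lines;
this is a minimal sketch against the btclog API, and the subsystem tag and
log lines are made up.

    package main

    import (
        "os"

        "github.com/btcsuite/btclog"
    )

    func main() {
        // Each message is fully formatted before the logging call returns,
        // so logged values are captured at call time rather than when the
        // sink eventually flushes.
        backend := btclog.NewBackend(os.Stdout)

        log := backend.Logger("MAIN")
        log.SetLevel(btclog.LevelDebug)

        log.Infof("current height: %d", 500000)
        log.Debugf("connected to peer %s", "127.0.0.1:8333")
    }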
Because log rotation is not a feature of the btclog logging
implementation, it is handled by the main package by importing a file
rotation package that provides an io.Reader interface for directing
output to a rotating log file. The rotator has been configured
with the same defaults that btcd previously used in the seelog config
(10MB file limits with a maximum of 3 rolls) but now compresses newly
created roll files. Due to the high compressibility of log text, the
compressed files typically reduce to around 15-30% of the original
10MB file.
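A sketch of how the main package can wire this up, assuming the
jrick/logrotate rotator package; the 10MB threshold and 3 rolls mirror the
defaults above, the compress flag reflects the new behavior, and the
returned writer would be handed to the btclog backend.

    package main

    import (
        "io"
        "log"

        "github.com/jrick/logrotate/rotator"
    )

    // initLogRotator returns a writer whose output lands in logFile and is
    // rotated at roughly 10MB, keeping at most 3 compressed rolls.
    func initLogRotator(logFile string) (io.WriteCloser, error) {
        // Threshold is given in KB; compress rolled files; keep 3 rolls.
        r, err := rotator.New(logFile, 10*1024, true, 3)
        if err != nil {
            return nil, err
        }

        // The rotator consumes log output through an io.Reader, so the
        // logger writes into one end of a pipe and the rotator reads from
        // the other.
        pr, pw := io.Pipe()
        go r.Run(pr)

        return pw, nil
    }

    func main() {
        w, err := initLogRotator("btcd.log")
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        // Pass w to btclog.NewBackend(w) so all subsystem loggers share the
        // rotating file.
    }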