TestChainNotifier wraps the ChainNotifier interface to allow adding additional testing methods with access to private fields in the notifiers. These testing methods are only compiled when the build tag "debug" is set. UnsafeStart allows starting a notifier with a specified best block.
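As a rough sketch (the exact method set and the UnsafeStart parameters in lnd may differ), the debug-only interface could look like this:

    //go:build debug

    package chainntnfs

    import "github.com/btcsuite/btcd/chaincfg/chainhash"

    // TestChainNotifier embeds ChainNotifier and adds methods that are only
    // compiled in when the "debug" build tag is set, so they never ship in
    // release builds.
    type TestChainNotifier interface {
        ChainNotifier

        // UnsafeStart starts the notifier as though the given block were its
        // best known block, letting tests exercise the catch-up and reorg
        // handling paths. The parameter list here is an assumption made for
        // illustration.
        UnsafeStart(bestHeight int32, bestHash *chainhash.Hash,
            syncHeight int32, generateBlocks func() error) error
    }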
UnsafeStart is useful for the purpose of testing cases where a notifier's best block is out of date when it receives a new block.
This resolves the situation where a notifier's chain backend skips a series of blocks, causing the notifier to need to dispatch historical block notifications to clients.
Additionally, if the current notifier's best block has been reorged out, this logic enables the notifier to rewind to the common ancestor between the current chain and the outdated best block, and to dispatch notifications from that ancestor.
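In heavily simplified form, the catch-up and rewind flow looks roughly like the sketch below; the types and helper methods are illustrative stand-ins, not lnd's actual internals:

    package chainsketch

    // blockMeta is a minimal stand-in for the notifier's block bookkeeping.
    type blockMeta struct {
        Height   int32
        Hash     string
        PrevHash string
    }

    // backend abstracts the few chain queries the catch-up logic needs.
    type backend interface {
        IsOnMainChain(hash string) (bool, error)
        BlockByHash(hash string) (blockMeta, error)
        BlockAtHeight(height int32) (blockMeta, error)
        BestBlock() (blockMeta, error)
    }

    // notifier holds the callbacks used to fan notifications out to clients.
    type notifier struct {
        chain        backend
        onConnect    func(blockMeta)
        onDisconnect func(blockMeta)
    }

    // catchUpToTip rewinds an outdated best block to its common ancestor with
    // the main chain and then replays every missed block up to the current tip.
    func (n *notifier) catchUpToTip(outdated blockMeta) error {
        // Rewind: if the outdated best block was reorged out, walk back along
        // its ancestors, dispatching disconnect notifications, until we reach
        // a block that is still on the main chain.
        ancestor := outdated
        for {
            onChain, err := n.chain.IsOnMainChain(ancestor.Hash)
            if err != nil {
                return err
            }
            if onChain {
                break
            }

            n.onDisconnect(ancestor)

            ancestor, err = n.chain.BlockByHash(ancestor.PrevHash)
            if err != nil {
                return err
            }
        }

        // Catch up: replay every block between the common ancestor and the
        // backend's current tip so clients receive the historical block
        // notifications they missed.
        tip, err := n.chain.BestBlock()
        if err != nil {
            return err
        }
        for height := ancestor.Height + 1; height <= tip.Height; height++ {
            block, err := n.chain.BlockAtHeight(height)
            if err != nil {
                return err
            }
            n.onConnect(block)
        }

        return nil
    }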
Notifying clients only after the block has been successfully connected prevents the situation where we notify clients about a newly connected block, and then the block connection itself fails. We also want to set our best block in between connecting the block and notifying clients, in case a client makes queries about the new block they have received.
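A minimal sketch of that ordering (all names here are placeholders rather than lnd's actual methods):

    package chainsketch

    // handleBlockConnected shows only the order of operations when a new
    // block arrives; the three helpers stand in for the notifier's real
    // methods.
    func handleBlockConnected(
        connect func() error, // apply the block to our internal state
        setBestBlock func(), // record the new tip as our best block
        notifyClients func(), // fan the block notification out to clients
    ) error {
        // Connect first: if this fails, clients were never told about a
        // block that didn't actually stick.
        if err := connect(); err != nil {
            return err
        }

        // Record our new best block before notifying, so a client that
        // queries us about the block right after hearing of it sees a
        // consistent view.
        setBestBlock()

        // Only now notify the registered clients.
        notifyClients()

        return nil
    }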
If the chain backend misses telling the notifier about a series of disconnected blocks, the notifier is now able to disconnect the tip to its new best block.
If a client passes in their best known block when registering for block notifications, check to see if it's behind our best block. If so, dispatch the missed block notifications to the client.
This is necessary because clients that persist their best known block can miss new blocks while registering for notifications.
Clients can optionally pass their best known block into RegisterBlockEpochNtfn. This enables the notifiers to catch clients up on blocks they may have missed.
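A hedged usage sketch: the RegisterBlockEpochNtfn call and BlockEpoch struct follow lnd's chainntnfs package, but treat the exact signatures as approximate and the surrounding helper as illustrative.

    package chainsketch

    import (
        "log"

        "github.com/btcsuite/btcd/chaincfg/chainhash"
        "github.com/lightningnetwork/lnd/chainntnfs"
    )

    // consumeEpochs registers for block epoch notifications while handing the
    // notifier the client's last processed block, so blocks missed while the
    // client was offline are replayed before live notifications resume.
    func consumeEpochs(notifier chainntnfs.ChainNotifier,
        lastHash *chainhash.Hash, lastHeight int32) error {

        bestBlock := &chainntnfs.BlockEpoch{
            Hash:   lastHash,
            Height: lastHeight,
        }

        epochClient, err := notifier.RegisterBlockEpochNtfn(bestBlock)
        if err != nil {
            return err
        }
        defer epochClient.Cancel()

        for epoch := range epochClient.Epochs {
            // Missed blocks arrive first, in order, followed by new tips as
            // they are connected.
            log.Printf("block %v at height %d", epoch.Hash, epoch.Height)
        }

        return nil
    }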
In this commit, we introduce a nice optimization with regards to lnd's
interaction with a bitcoind backend. Within lnd, we currently have three
different subsystems responsible for watching the chain: chainntnfs,
lnwallet, and routing/chainview. Each of these subsystems has an active
RPC and ZMQ connection to the underlying bitcoind node. This incurs
a toll on the underlying bitcoind node and can cause us to miss ZMQ
events, which are crucial to lnd. We remedy this issue by sharing the
same connection to a bitcoind node between the different clients within
lnd.
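The sharing pattern is essentially the following; the types below are made up for illustration and are not lnd's or btcwallet's actual connection API.

    package chainsketch

    import "sync"

    // bitcoindConn is a toy model of a single shared bitcoind connection: one
    // RPC client and one set of ZMQ subscriptions, fanned out to any number
    // of lightweight per-subsystem clients.
    type bitcoindConn struct {
        mu      sync.Mutex
        clients []chan string // each subsystem gets its own event channel
    }

    // newClient hands a subsystem (chainntnfs, lnwallet, routing/chainview,
    // ...) its own stream of chain events without opening another RPC/ZMQ
    // connection to bitcoind.
    func (c *bitcoindConn) newClient() <-chan string {
        c.mu.Lock()
        defer c.mu.Unlock()

        events := make(chan string, 100)
        c.clients = append(c.clients, events)
        return events
    }

    // dispatch relays a single event received over the shared ZMQ
    // subscription to every registered client.
    func (c *bitcoindConn) dispatch(event string) {
        c.mu.Lock()
        defer c.mu.Unlock()

        for _, events := range c.clients {
            events <- event
        }
    }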
This commit fixes a bug within the bitcoind notifier logic, which would
ignore the passed mempool argument and notify spentness regardless of
whether the spending transaction was confirmed. The logic used to fix this is
similar to what is already done for the btcd backend.
In this commit, we fix a recently introduced bug which can result in a
panic when bitcoind nodes without a txindex active are started. The
issue was that we would still dereference the transaction's blockhash, which
would be nil if we detected that the backend didn't have the txindex
active.
Before this commit, we required full nodes to have the transaction
index enabled. This allowed us to fetch historical details about
transactions in order to register and dispatch confirmation and spend
notifications.
This commit allows us to drop that requirement by providing a fallback
method to use when the transaction index is not enabled. This fallback
method relies on manually scanning blocks for the transactions
requested, starting from the earliest height the transactions could have
been included in, to the current height in the chain.
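A simplified illustration of that fallback (not the actual implementation; the types and helper interface are stand-ins):

    package chainsketch

    // confRequest identifies a transaction we need a confirmation
    // notification for, along with the earliest height it could have been
    // included at (e.g. the client's stated height hint).
    type confRequest struct {
        txid       string
        heightHint int32
    }

    // blockSource abstracts the only chain access the fallback needs:
    // fetching the transaction IDs contained in the block at a given height.
    type blockSource interface {
        TxIDsAtHeight(height int32) ([]string, error)
    }

    // scanForConf walks every block from the request's height hint up to the
    // current tip and looks for the txid itself, rather than asking the node
    // for the transaction directly via the (absent) txindex.
    func scanForConf(chain blockSource, req confRequest,
        currentHeight int32) (int32, bool, error) {

        for height := req.heightHint; height <= currentHeight; height++ {
            txids, err := chain.TxIDsAtHeight(height)
            if err != nil {
                return 0, false, err
            }
            for _, txid := range txids {
                if txid == req.txid {
                    // Found the confirming block.
                    return height, true, nil
                }
            }
        }

        // Not yet confirmed anywhere in the scanned range.
        return 0, false, nil
    }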
In this commit, we add a new Updates channel to our ConfirmationEvent
struct. This channel will be used to deliver updates to a subscriber of
a confirmation notification. Updates will be delivered at every
incremental height of the chain with the number of confirmations
remaining for the transaction to be considered confirmed by the
subscriber.
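For example, a subscriber might consume the new channel as sketched below; the channel and field names follow lnd's chainntnfs.ConfirmationEvent, but treat the exact types as approximate across versions.

    package chainsketch

    import (
        "log"

        "github.com/lightningnetwork/lnd/chainntnfs"
    )

    // waitForConf consumes a previously registered confirmation event. The
    // Updates channel delivers, at each new height, the number of
    // confirmations still required before the Confirmed channel fires.
    func waitForConf(confEvent *chainntnfs.ConfirmationEvent) {
        for {
            select {
            case confsLeft := <-confEvent.Updates:
                // e.g. "3 confirmations remaining", then "2", "1", ...
                log.Printf("%d confirmations remaining", confsLeft)

            case conf := <-confEvent.Confirmed:
                log.Printf("confirmed in block %v at height %d",
                    conf.BlockHash, conf.BlockHeight)
                return
            }
        }
    }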
This commit moves the call to the bitcoind backend to start watching an
outpoint for spentness to after we have recorded the outpoint in our
list of clients. This is done to avoid a race we observed with the btcd
backend, which can likely also occur with bitcoind.
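The ordering fix, with illustrative names only:

    package chainsketch

    import "sync"

    // spendRegistry records the client before asking the backend to watch
    // the outpoint, so a spend notification that arrives immediately can
    // always be matched to a registered client.
    type spendRegistry struct {
        mu      sync.Mutex
        clients map[string][]chan struct{} // outpoint -> waiting clients

        watchOutpoint func(outpoint string) error // backend call
    }

    func (r *spendRegistry) registerSpendNtfn(outpoint string) (<-chan struct{}, error) {
        // Record the client first...
        spent := make(chan struct{}, 1)
        r.mu.Lock()
        r.clients[outpoint] = append(r.clients[outpoint], spent)
        r.mu.Unlock()

        // ...and only then tell the backend to start watching. If the
        // backend reports the spend right away, the client above is already
        // in place to receive it.
        if err := r.watchOutpoint(outpoint); err != nil {
            return nil, err
        }
        return spent, nil
    }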
In this commit, we fix a lingering bug related to the way that we
deliver block epoch notifications to end users. Before this commit, we
would launch a new goroutine for *each block*. This was done in order
to ensure that the notification dispatch wouldn’t block the main
goroutine that was dispatching the notifications. This method achieved
the goal, but had a nasty side effect that the goroutines could be
re-ordered during scheduling, meaning that in the case of fast
successive blocks, notifications would be delivered out of order.
Receiving out of order notifications is either disallowed, or can cause
sub-systems that rely on these notifications to get into weird states.
In order to fix this issue, we’ll no longer launch a new goroutine to
deliver each notification to an awaiting client. Instead, each client
will now gain a concurrent in-order queue for notification delivery.
Due to the internal design of chainntnfs.ConcurrentQueue, the caller
should never block, yet the receivers will receive notifications in
order. This change solves the re-ordering issue and also minimizes the
number of goroutines that we’ll create in order to deliver block epoch
notifications.
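The delivery pattern looks roughly like the sketch below. ConcurrentQueue is shown here via lnd's queue package, where the type lives in more recent versions; the constructor and channel accessor names should be treated as approximate.

    package main

    import (
        "fmt"

        "github.com/lightningnetwork/lnd/queue"
    )

    // blockEpoch is a stand-in for the payload delivered to each client.
    type blockEpoch struct {
        height int32
    }

    func main() {
        // Each registered client owns its own concurrent queue. The
        // dispatcher pushes into ChanIn and never blocks on a slow client,
        // while a single per-client goroutine (here, the main goroutine)
        // drains ChanOut and sees the blocks strictly in order.
        epochQueue := queue.NewConcurrentQueue(20)
        epochQueue.Start()
        defer epochQueue.Stop()

        for height := int32(1); height <= 3; height++ {
            epochQueue.ChanIn() <- &blockEpoch{height: height}
        }

        for i := 0; i < 3; i++ {
            epoch := (<-epochQueue.ChanOut()).(*blockEpoch)
            fmt.Println("client sees height", epoch.height)
        }
    }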
In this commit, we fix a race condition related to the way we attempt
to query to see if an outpoint has already been spent by the time it’s
registered within the ChainNotifier. If the transaction creating the
outpoint hasn’t made it into the mempool by the time we execute the
GetTxOut call, then we’ll attempt to query for the transaction itself.
In this case, if we query for the transaction, then the block hash
field will be empty as it hasn’t yet made it into a block. Under the
previous logic, we’d then attempt to force a rescan. This is an issue
as the forced rescan will fail since it’ll try to fetch the block hash
of all zeroes.
In this commit, we fix this issue by only entering this “fallback to
rescan” logic if the transaction has actually been mined.
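The corrected decision looks roughly like this; the rpcclient calls are btcd's, but the surrounding function is a simplified illustration rather than lnd's exact code.

    package chainsketch

    import (
        "github.com/btcsuite/btcd/chaincfg/chainhash"
        "github.com/btcsuite/btcd/rpcclient"
    )

    // shouldHistoricalRescan reports whether a historical rescan is
    // warranted for an outpoint: only if the transaction that created it has
    // actually been mined, i.e. its verbose lookup carries a non-empty block
    // hash.
    func shouldHistoricalRescan(client *rpcclient.Client,
        txid *chainhash.Hash, vout uint32) (bool, string, error) {

        // If the output is still unspent (checking the mempool too), there
        // is nothing historical to look for.
        txOut, err := client.GetTxOut(txid, vout, true)
        if err != nil {
            return false, "", err
        }
        if txOut != nil {
            return false, "", nil
        }

        // The output wasn't found as unspent, so inspect the creating
        // transaction itself.
        tx, err := client.GetRawTransactionVerbose(txid)
        if err != nil {
            return false, "", err
        }

        // An empty block hash means the transaction hasn't been mined yet
        // (it may only exist in the mempool). Forcing a rescan here would
        // try to fetch a block hash of all zeroes, so we skip it.
        if tx.BlockHash == "" {
            return false, "", nil
        }

        // The transaction is mined: a rescan from its confirming block can
        // determine whether and where the outpoint was spent.
        return true, tx.BlockHash, nil
    }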