Previously we were excluding non-dev test files from our coverage
report, which included interface_test.go as well as the bitcoindnotify
and btcdnotify dev tests.
Adds an outpoint index that stores a tlv stream. Currently the stream
only contains the outpoint's indexStatus. This should cut down on
big bbolt transactions in several places throughout the codebase.
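
As a rough sketch of the idea (not the actual implementation; the bucket name, the indexStatus values and the helper names are assumptions), the per-outpoint state can be written as a TLV stream under a dedicated bbolt bucket so that further records can be added later without a migration:

```go
// Package outpointindex is a standalone sketch, not lnd's actual code:
// bucket name, record type and helpers are illustrative.
package outpointindex

import (
	"bytes"
	"encoding/binary"

	"github.com/btcsuite/btcd/wire"
	"github.com/lightningnetwork/lnd/tlv"
	"go.etcd.io/bbolt"
)

// outpointBucket is the top-level bucket holding one entry per outpoint.
var outpointBucket = []byte("outpoint-index")

// indexStatus is the single value currently carried in the stream.
type indexStatus uint8

// serializeOutpoint encodes an outpoint as txid || big-endian index so
// it can be used as a bucket key.
func serializeOutpoint(op *wire.OutPoint) []byte {
	var key [36]byte
	copy(key[:32], op.Hash[:])
	binary.BigEndian.PutUint32(key[32:], op.Index)
	return key[:]
}

// writeOutpointStatus stores the outpoint's status as a TLV stream so
// that more records can be appended later without a migration.
func writeOutpointStatus(db *bbolt.DB, op *wire.OutPoint,
	status indexStatus) error {

	return db.Update(func(tx *bbolt.Tx) error {
		bucket, err := tx.CreateBucketIfNotExists(outpointBucket)
		if err != nil {
			return err
		}

		// Encode the status as TLV record type 0.
		s := uint8(status)
		stream, err := tlv.NewStream(
			tlv.MakePrimitiveRecord(0, &s),
		)
		if err != nil {
			return err
		}

		var buf bytes.Buffer
		if err := stream.Encode(&buf); err != nil {
			return err
		}

		return bucket.Put(serializeOutpoint(op), buf.Bytes())
	})
}
```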
Previously we wouldn't recreate some of the top-level buckets that our
migration logic now expects to exist. This bug was preexisting, but
never surfaced because the other top-level buckets were not touched by
this unit test.
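
For illustration, a minimal sketch of recreating the expected top-level buckets in the test database (the bucket names below are placeholders, not the real keys):

```go
// Package migtest is a sketch only; the bucket names are placeholders,
// not the real top-level bucket keys.
package migtest

import "go.etcd.io/bbolt"

// topLevelBuckets lists the buckets the migration logic now expects to
// already exist before it runs.
var topLevelBuckets = [][]byte{
	[]byte("channels"),
	[]byte("invoices"),
	[]byte("payments"),
}

// recreateTopLevelBuckets makes sure every expected top-level bucket is
// present, creating any that are missing.
func recreateTopLevelBuckets(db *bbolt.DB) error {
	return db.Update(func(tx *bbolt.Tx) error {
		for _, name := range topLevelBuckets {
			if _, err := tx.CreateBucketIfNotExists(name); err != nil {
				return err
			}
		}
		return nil
	})
}
```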
We risked deadlocking on shutdown if a client (in our case a contract
resolver) attempted to schedule a sweep of an input after the
ChainNotifier had been shut down. The ChainNotifier shutdown causes the
`collector` goroutine to exit without handling incoming requests,
deadlocking the client (since the ChainArbitrator is stopped before the
Sweeper in the server).
To fix this we could change the order in which these subsystems are
stopped, but that wouldn't guarantee that no other client could end up
in the same deadlock scenario. Instead, we keep handling incoming
requests even after the collector has exited (immediately returning an
error), until the sweeper is signalled to shut down.
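
A minimal sketch of that shutdown behaviour, with illustrative names rather than the actual sweeper types: once the collector has exited, requests are still drained and answered with an error until the quit channel closes.

```go
// Package sweepersketch is illustrative only; sweepInputMessage,
// errShuttingDown and the channel layout are assumptions, not the real
// sweeper types.
package sweepersketch

import "errors"

// errShuttingDown is returned to clients that try to schedule a sweep
// while the sweeper is shutting down.
var errShuttingDown = errors.New("sweeper shutting down")

// sweepInputMessage is a scheduled-sweep request carrying the channel
// the client blocks on for a response.
type sweepInputMessage struct {
	resultChan chan error
}

type sweeper struct {
	newInputs chan *sweepInputMessage
	quit      chan struct{}
}

// drainAfterCollectorExit runs once the collector goroutine has
// returned. Instead of leaving newInputs unserviced (which would
// deadlock any client still trying to schedule a sweep), it fails each
// request immediately until Stop closes the quit channel.
func (s *sweeper) drainAfterCollectorExit() {
	for {
		select {
		case msg := <-s.newInputs:
			msg.resultChan <- errShuttingDown

		case <-s.quit:
			return
		}
	}
}
```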
This commit unsets the option_static_remote_key required feature bit
for peers that we have existing legacy channels with. This ensures that
we can still connect to such peers even if they do not recognize this
feature bit.
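
A hedged sketch of the idea (the helper and the hasLegacyChannel flag are hypothetical, not the actual server code):

```go
// Package peersketch is illustrative only.
package peersketch

import "github.com/lightningnetwork/lnd/lnwire"

// maybeUnsetStaticRemoteKeyRequired clears the required bit for peers
// we still have legacy (pre-static-remote-key) channels with, so the
// connection is not rejected over an unknown required feature. The
// hasLegacyChannel flag is a hypothetical lookup result.
func maybeUnsetStaticRemoteKeyRequired(features *lnwire.RawFeatureVector,
	hasLegacyChannel bool) {

	if !hasLegacyChannel ||
		!features.IsSet(lnwire.StaticRemoteKeyRequired) {
		return
	}

	// Drop the required bit; advertising the optional bit instead keeps
	// signalling support without forcing it on the legacy peer.
	features.Unset(lnwire.StaticRemoteKeyRequired)
	features.Set(lnwire.StaticRemoteKeyOptional)
}
```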
To avoid running into the "server is still starting" error when trying
to close a channel, we first wait for the error to disappear before we
try closing the actual channel.
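
A minimal sketch of the retry pattern, assuming lnd's lntest/wait helper and a hypothetical probe/close pair of callbacks:

```go
// Package itestsketch illustrates the retry; probe and closeChannel are
// hypothetical callbacks standing in for the real RPC calls.
package itestsketch

import (
	"strings"
	"time"

	"github.com/lightningnetwork/lnd/lntest/wait"
)

// defaultTimeout stands in for the shared itest timeout.
const defaultTimeout = 30 * time.Second

// closeChannelWhenReady first waits until the node no longer reports
// that the server is still starting, and only then performs the actual
// channel close.
func closeChannelWhenReady(probe, closeChannel func() error) error {
	err := wait.NoError(func() error {
		if err := probe(); err != nil && strings.Contains(
			err.Error(), "server is still starting",
		) {
			// Still starting up: return the error so the helper
			// retries until the timeout expires.
			return err
		}
		return nil
	}, defaultTimeout)
	if err != nil {
		return err
	}

	return closeChannel()
}
```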
This commit replaces most of the hard-coded 10, 15, 20 and 30 second
timeouts with the default timeout. This should allow darwin users to
successfully run the parallel itests locally as well.
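
Illustratively, the pattern after the change looks roughly like the following (the exact constant value is an assumption):

```go
// Package timeoutsketch shows the pattern only; the constant value is
// an assumption, not necessarily the value used by the itests.
package timeoutsketch

import (
	"time"

	"github.com/lightningnetwork/lnd/lntest/wait"
)

// defaultTimeout replaces the scattered per-check literals so every
// platform gets the same, more generous deadline.
const defaultTimeout = 30 * time.Second

// assertEventually polls a check with the shared default deadline
// instead of a hard-coded 10, 15, 20 or 30 second value.
func assertEventually(check func() bool) error {
	return wait.Predicate(check, defaultTimeout)
}
```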
It seems that our itests don't behave correctly in high CPU usage
scenarios such as running 6 tests in parallel. Some goroutines don't
get enough execution time, causing checks to time out.
By reducing the total number of parallel tests, we hope to give all
goroutines some more breathing room.
In high CPU usage scenarios such as our parallel itests, it seems that
some goroutines just don't get any CPU time before our test timeouts
expire. By polling 10 times less frequently, we hope to reduce the
overall number of goroutines that are spawned because of the RPC
requests within the polling code.
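
A minimal sketch of the polling change, with illustrative interval values rather than the real ones:

```go
// Package pollsketch illustrates the change; the interval values are
// not the real ones.
package pollsketch

import "time"

// pollInterval was previously an order of magnitude smaller; probing
// ten times less frequently keeps the number of in-flight RPC requests
// (and the goroutines they spawn) down under heavy CPU contention.
const pollInterval = 200 * time.Millisecond

// pollUntil evaluates pred every pollInterval until it returns true or
// the timeout expires.
func pollUntil(pred func() bool, timeout time.Duration) bool {
	deadline := time.After(timeout)

	ticker := time.NewTicker(pollInterval)
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C:
			if pred() {
				return true
			}
		case <-deadline:
			return false
		}
	}
}
```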