This commit adds wallet_best_block_timestamp to the gRPC interface.
This is done in order to allow clients to calculate progress while
lnd syncs to the blockchain. wallet_best_block_timestamp is exposed
via the GetInfo() RPC call. Additionally, IsSynced() now returns the
WalletBestBlockTimestamp as the second value in its return tuple,
providing additional detail when querying the status of the sync. The
BtcWallet interface has also been updated accordingly.
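As a rough sketch, the updated interface method might look like the
following (the exact signature is an assumption, not the verbatim lnd
definition):

    // IsSynced reports whether the wallet is caught up with the chain
    // and, as the second return value, the timestamp of the wallet's
    // best known block, which GetInfo() exposes to clients so they can
    // estimate sync progress.
    IsSynced() (bool, int64, error)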
This commit was created to support the issue
[Add progress bar for chain sync](lightninglabs/lightning-app#10) in
lightning-app.
In this commit, we update the commit of my btcd fork, as the latest
commit includes a fix for a breaking change (or bug fix). After this
commit, we’ll be able to run the set of integration tests with Go
1.10.
New tests are added for creating, decoding, and re-encoding
litecoin invoices for both mainnet and testnet, as well as a test
that expects an error when the active network mismatches the
invoice.
This change fixes a bug when an invoice is decoded for a network
whose bech32 segwit prefix is longer than 2 characters. The length
of the Bech32HRPSegwit network parameter is used to determine
where in the human-readable portion of the invoice the amount
begins, rather than assuming it begins after the first four
characters.
Decode() now returns an error when the encoded invoice does
not match the active network.
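In sketch form, both checks look roughly like this (the function and
parameter names are illustrative, not the actual decoder code):

    // decodeHRP verifies the invoice was encoded for the active network
    // and returns the amount portion of the human-readable part. The
    // amount's offset is derived from the length of the network's bech32
    // segwit prefix rather than assumed to follow the first four
    // characters, so prefixes like litecoin's "ltc" work correctly.
    func decodeHRP(hrp, bech32HRPSegwit string) (string, error) {
        prefix := "ln" + bech32HRPSegwit // e.g. "lnltc" on litecoin mainnet
        if !strings.HasPrefix(hrp, prefix) {
            return "", fmt.Errorf("invoice not for active network "+
                "(expected prefix %q)", prefix)
        }
        return hrp[len(prefix):], nil
    }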
Changes the minimum hrp length check to >= 3 instead of >= 4.
Also removes a redundant "if ...; err != nil" check that was raising
a warning in invoice.go.
This commit removes the inspection of the return error
from broadcastTx. This is done since the new error
checking added to PublishTransaction will return a nil
error in case the transaction already exists in the
mempool.
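Roughly, the behavior being relied upon can be sketched as follows (the
helper name and rejection string are assumptions):

    // publishIgnoringDuplicates maps a rejection that only says the
    // transaction is already known into a nil error, which is why
    // broadcastTx no longer needs to inspect the returned error itself.
    func publishIgnoringDuplicates(publish func() error) error {
        err := publish()
        if err != nil && strings.Contains(err.Error(), "already have transaction") {
            // Assumed rejection text; the exact string depends on the backend.
            return nil
        }
        return err
    }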
In this commit, we modify the edgeWeight function that’s used within
the findPath method to weight fees more heavily than the time lock
value at an edge. We do this in order to greedily prefer lower fees
during path finding. This is a simple stop gap in place of more complex
weighting parameters that will be investigated later.
We also modify the edge distance to use an int64 rather than a float.
Finally, an additional test has been added in order to exercise this
new change. Before this commit, the test was failing as we preferred the
route with the lower total time lock.
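A toy version of the weighting described above might look like this
(the scaling constant and function name are assumptions):

    // edgeWeightSketch computes an int64 distance for an edge, scaling
    // the fee so that it dominates the time lock component and lower-fee
    // routes are greedily preferred during path finding.
    func edgeWeightSketch(feeMSat int64, timeLockDelta uint16) int64 {
        const feeWeight = 100 // assumed factor by which fees outweigh the time lock
        return feeMSat*feeWeight + int64(timeLockDelta)
    }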
In this commit, we modify the caching structure to return a set of
cached routes for a request if the number of routes requested is less
than or equal to the number of cached routes.
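In rough form, the lookup now behaves like this sketch (names and types
are simplified stand-ins):

    // Route stands in for the routing package's route type in this sketch.
    type Route struct{ Hops []string }

    // cachedRoutesFor serves a request from the cache only when the number
    // of routes asked for is at most the number already cached for the key.
    func cachedRoutesFor(cache map[string][]Route, key string, numReq int) ([]Route, bool) {
        routes, ok := cache[key]
        if !ok || numReq > len(routes) {
            return nil, false
        }
        return routes[:numReq], true
    }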
In this commit, we modify the findPaths method to take the max number
of routes to return. With this change, FindRoutes itself can eventually
also take a max number of routes in order to make the function usable
again.
In this commit, we make a series of modifications to the way we handle
reading edges and vertices from disk, in order to reduce the amount of
garbage generated:
1. Properly use PubKeyBytes as required rather than PubKey().
2. Return direct structs rather than pointers, and leave it to the
runtime to perform escape analysis (see the sketch below).
3. Inline the former readSig() method when reading sigs from disk.
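Point 2 can be illustrated with a minimal sketch (the type and helper
below are placeholders, not the actual channeldb code):

    // edgeInfo stands in for the deserialized edge type.
    type edgeInfo struct {
        NodeKey1Bytes [33]byte
        NodeKey2Bytes [33]byte
    }

    // readEdgeInfo returns the struct by value rather than as *edgeInfo,
    // leaving it to the compiler's escape analysis to decide whether a
    // heap allocation is actually required.
    func readEdgeInfo(r io.Reader) (edgeInfo, error) {
        var e edgeInfo
        if _, err := io.ReadFull(r, e.NodeKey1Bytes[:]); err != nil {
            return edgeInfo{}, err
        }
        if _, err := io.ReadFull(r, e.NodeKey2Bytes[:]); err != nil {
            return edgeInfo{}, err
        }
        return e, nil
    }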
In this commit, we fix a lingering bug related to the way that we
deliver block epoch notifications to end users. Before this commit, we
would launch a new goroutine for *each block*. This was done in order
to ensure that the notification dispatch wouldn’t block the main
goroutine that was dispatching the notifications. This method achieved
the goal, but had a nasty side effect that the goroutines could be
re-ordered during scheduling, meaning that in the case of fast
successive blocks, notifications would be delivered out of order.
Receiving out of order notifications is either disallowed, or can cause
sub-systems that rely on these notifications to get into weird states.
In order to fix this issue, we’ll no longer launch a new goroutine to
deliver each notification to an awaiting client. Instead, each client
will now gain a concurrent in-order queue for notification delivery.
Due to the internal design of chainntnfs.ConcurrentQueue, the caller
should never block, yet the receivers will receive notifications in
order. This change solves the re-ordering issue and also minimizes the
number of goroutines that we’ll create in order to deliver block epoch
notifications.
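As a rough sketch of the property being relied upon (this is not the
actual chainntnfs.ConcurrentQueue implementation), a per-client queue
can decouple the dispatcher from slow clients while preserving order:

    // epochQueue buffers notifications so that pushes from the dispatcher
    // never block, while a single draining goroutine delivers them to the
    // client strictly in arrival order.
    type epochQueue struct {
        mtx     sync.Mutex
        pending []int64 // block heights stand in for full epoch notifications
        wakeup  chan struct{}
    }

    func newEpochQueue() *epochQueue {
        return &epochQueue{wakeup: make(chan struct{}, 1)}
    }

    func (q *epochQueue) push(height int64) {
        q.mtx.Lock()
        q.pending = append(q.pending, height)
        q.mtx.Unlock()

        // Non-blocking signal: the dispatcher is never stalled by a slow client.
        select {
        case q.wakeup <- struct{}{}:
        default:
        }
    }

    func (q *epochQueue) drain(out chan<- int64) {
        for range q.wakeup {
            q.mtx.Lock()
            batch := q.pending
            q.pending = nil
            q.mtx.Unlock()

            for _, h := range batch {
                out <- h // in-order delivery to the awaiting client
            }
        }
    }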
In this commit, we modify our initialization of neutrino to also pass
in the custom dialer and name resolver function. With this change, if
lnd is configured to use Tor, then neutrino will as well. This means
that *both* the Bitcoin P2P as well as the Lightning P2P traffic will
be proxied over Tor.
In this commit, we extend the TorDial function and add a new attribute
to the TorProxyNet struct to allow the caller to opt for stream
isolation or not. Using stream isolation, we ensure that each new
connection uses a distinct circuit.
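Conceptually, stream isolation over a SOCKS5 proxy can be sketched like
this, using golang.org/x/net/proxy (the credential scheme shown is an
illustration, not lnd's exact code):

    // isolatedDial opens a connection through the Tor SOCKS5 proxy using
    // freshly generated credentials. Tor places streams authenticated with
    // different credentials on distinct circuits, so each dial gets its
    // own circuit.
    func isolatedDial(socksAddr, target string) (net.Conn, error) {
        auth := &proxy.Auth{
            User:     strconv.FormatInt(rand.Int63(), 10),
            Password: strconv.FormatInt(rand.Int63(), 10),
        }
        dialer, err := proxy.SOCKS5("tcp", socksAddr, auth, proxy.Direct)
        if err != nil {
            return nil, err
        }
        return dialer.Dial("tcp", target)
    }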
In this commit, we fix a regression introduced by a recent change that
allowed the agent to detect failed channels and blacklist the offending
nodes, promising faster convergence with the ideal state of the
heuristic.
The source of this bug is that we would use the set of blacklisted
nodes in order to compute how many additional channels we should open.
If 10 failures happened, then we would think that we had opened up 10
channels, and stop much earlier than we actually should.
To fix this, while ensuring we don’t retry failed peers, the
NeedMoreChans method will now return *how many* more channels to open,
and the Select method will take in how many channels it should try to
open *exactly*.
In this commit, we update the chan closer state machine to be fully
spec compliant. A recent change was made to the spec to enforce a
strict ordering such that the negotiation is well formed. To enact
this change, we’ll ensure that before the closeFeeNegotiation phase,
the *initiator* is always the one that sends the CloseSigned message
first.
In this commit, we fix a slight miscalculation within the GetInfo call.
Before this commit, we would list any channel that the peer knew of as
active, instead of those which are, well, actually *active*. We fix
this by skipping any channels that we don’t have the remote revocation
for.
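In sketch form, the new filter looks something like this (the type and
field names are assumptions about the persisted channel record):

    // channelState is a stand-in for the persisted channel record; the
    // revocation field is nil until the remote party's revocation arrives.
    type channelState struct {
        RemoteNextRevocation []byte
    }

    // countActiveChannels is illustrative: only channels for which we
    // already have the remote revocation are reported as active.
    func countActiveChannels(chans []*channelState) uint32 {
        var active uint32
        for _, c := range chans {
            if c.RemoteNextRevocation == nil {
                continue
            }
            active++
        }
        return active
    }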