This commit gates calls to the RPC servers according to the current
RPC state. This ensures we won't try to call the RPC server before it
has been fully initialized, and that we won't call the walletUnlocker
after the wallet has already been unlocked.
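As a rough illustration of the gating (hypothetical types and error
values, not lnd's actual code), each service checks the current state
before serving a call:

    package main

    import (
        "errors"
        "fmt"
    )

    // rpcState is a hypothetical stand-in for the server's startup state.
    type rpcState int

    const (
        // walletLocked: only the walletUnlocker service may be called.
        walletLocked rpcState = iota

        // walletUnlocked: the wallet is open, but the main RPC server
        // hasn't finished initializing yet.
        walletUnlocked

        // rpcActive: the main RPC server is fully initialized.
        rpcActive
    )

    var (
        errRPCStarting    = errors.New("the RPC server is still starting")
        errWalletUnlocked = errors.New("wallet already unlocked")
    )

    // gateMainRPC rejects calls to the main RPC server until it is
    // fully active.
    func gateMainRPC(s rpcState) error {
        if s != rpcActive {
            return errRPCStarting
        }
        return nil
    }

    // gateWalletUnlocker rejects walletUnlocker calls once the wallet
    // has already been unlocked.
    func gateWalletUnlocker(s rpcState) error {
        if s != walletLocked {
            return errWalletUnlocked
        }
        return nil
    }

    func main() {
        fmt.Println(gateMainRPC(walletUnlocked))   // RPC still starting
        fmt.Println(gateWalletUnlocker(rpcActive)) // wallet already unlocked
    }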
We also test that legacy keysend payments are promoted to AMP payments
on the receiver side by asserting basic properties of the fields
returned via the RPC.
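For illustration, the receiver-side assertion looks roughly like the
following; the IsKeysend and IsAmp fields on lnrpc.Invoice are stated
here as assumptions about the response shape:

    package itest

    import (
        "testing"

        "github.com/lightningnetwork/lnd/lnrpc"
        "github.com/stretchr/testify/require"
    )

    // assertKeysendPromotedToAMP spells out the basic properties we
    // expect on the promoted invoice.
    func assertKeysendPromotedToAMP(t *testing.T, inv *lnrpc.Invoice) {
        // The payment arrived as a keysend payment...
        require.True(t, inv.IsKeysend)

        // ...but was promoted to, and settled as, an AMP payment.
        require.True(t, inv.IsAmp)
        require.Equal(t, lnrpc.Invoice_SETTLED, inv.State)
    }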
In this commit, we thread through the necessary state to allow users to
set a max shard amount. If this value is set, then this'll effectively
serve as a ceiling for all our split attempts. If we need to split,
we'll first try to use `paymentAmt/2`; if that's bigger than
`MaxShardAmt`, then we'll use the latter instead.
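In other words, the shard size is the minimum of the two values. A
minimal sketch (illustrative names, not the actual routing code):

    package main

    import "fmt"

    // nextShardAmt halves the remaining amount for the next split
    // attempt, but never exceeds the configured max shard amount
    // (a zero value means no ceiling is set).
    func nextShardAmt(paymentAmt, maxShardAmt int64) int64 {
        shard := paymentAmt / 2
        if maxShardAmt > 0 && shard > maxShardAmt {
            shard = maxShardAmt
        }
        return shard
    }

    func main() {
        // With an 80k ceiling, paymentAmt/2 wins: 50000.
        fmt.Println(nextShardAmt(100_000, 80_000))

        // With a 30k ceiling, the ceiling wins: 30000.
        fmt.Println(nextShardAmt(100_000, 30_000))
    }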
Ideally, in the future we'll have a dynamic way to automatically set
both the `MaxShardAmt` and `MaxParts` for users. Until then, exposing
these two new fields will allow us to experiment with setting them
automatically using the RPC interface, and also give users a bit more
control over how we attempt to route payments, akin to coin control for
on-chain payments.
Fixes #4730
Currently when numgraphsyncpeers=0, lnd will still attempt to perform
an initial historical sync. We change this behavior here to forgo
historical sync entirely when numgraphsyncpeers is zero, since the
routing table isn't being updated anyway while the node is active.
This permits a no-graph lnd mode where no syncing occurs at all.
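A hedged sketch of the new gate, with an illustrative config struct
standing in for the real one:

    package main

    import "fmt"

    // syncCfg mirrors the relevant knob; NumGraphSyncPeers corresponds
    // to the numgraphsyncpeers option.
    type syncCfg struct {
        NumGraphSyncPeers int
    }

    // shouldHistoricalSync reports whether an initial historical graph
    // sync should be attempted at all.
    func shouldHistoricalSync(cfg syncCfg) bool {
        // With zero sync peers the routing table is never kept up to
        // date, so historical sync is skipped entirely.
        return cfg.NumGraphSyncPeers > 0
    }

    func main() {
        fmt.Println(shouldHistoricalSync(syncCfg{NumGraphSyncPeers: 0})) // false
        fmt.Println(shouldHistoricalSync(syncCfg{NumGraphSyncPeers: 3})) // true
    }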
This PR updates the hold invoice itest to create a private channel,
and sets the private option on the invoices it creates, adding
coverage for the inclusion of hop hints.
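For illustration, invoice creation with the private option set looks
roughly like this (AddInvoice and the Private field are existing lnrpc
surface; the helper itself is hypothetical):

    package itest

    import (
        "context"

        "github.com/lightningnetwork/lnd/lnrpc"
    )

    // addPrivateInvoice creates an invoice that asks the node to embed
    // hop hints for private channels in the payment request.
    func addPrivateInvoice(ctx context.Context,
        client lnrpc.LightningClient) (*lnrpc.AddInvoiceResponse, error) {

        return client.AddInvoice(ctx, &lnrpc.Invoice{
            Value:   10_000,
            Private: true,
        })
    }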
Rather than performing this call in the SyncManager, we give each
gossipSyncer the ability to mark the first sync completed. This permits
pinned syncers to contribute towards the rpc-level synced_to_graph
value, allowing the value to be true after the first pinned syncer or
regular syncer completes. Unlike regular syncers, pinned syncers can
proceed in parallel, possibly decreasing the waiting time if consumers
rely on this field before proceeding to load their application.
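A minimal sketch of the per-syncer signaling using sync.Once; the
names are illustrative rather than lnd's exact internals:

    package main

    import (
        "fmt"
        "sync"
    )

    // gossipSyncer marks its own first sync as done exactly once.
    type gossipSyncer struct {
        once          sync.Once
        firstSyncDone chan struct{}
    }

    func newGossipSyncer() *gossipSyncer {
        return &gossipSyncer{firstSyncDone: make(chan struct{})}
    }

    // markFirstSyncComplete closes the channel at most once, so pinned
    // and regular syncers can race to satisfy synced_to_graph.
    func (g *gossipSyncer) markFirstSyncComplete() {
        g.once.Do(func() { close(g.firstSyncDone) })
    }

    func main() {
        pinned, regular := newGossipSyncer(), newGossipSyncer()
        go pinned.markFirstSyncComplete()
        go regular.markFirstSyncComplete()

        // synced_to_graph can flip to true as soon as whichever syncer
        // finishes first signals completion.
        select {
        case <-pinned.firstSyncDone:
            fmt.Println("first sync completed by a pinned syncer")
        case <-regular.firstSyncDone:
            fmt.Println("first sync completed by a regular syncer")
        }
    }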
Now that the HTLC second-level transactions are going through the
sweeper instead of the nursery, there are a few things we must account
for.
1. The sweeper sweeps the CSV-locked HTLC output one block earlier
than the nursery.
2. The sweeper aggregates several HTLC second levels into one
transaction. This also means it is not enough to check the txids of
the transactions spent by the final sweep; we must instead use the
actual outpoints to distinguish them, as sketched below.
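A small sketch of the outpoint-based matching from point 2, using the
btcsuite wire types (the helper name is illustrative):

    package sweep

    import "github.com/btcsuite/btcd/wire"

    // sweepSpendsOutpoint reports whether the aggregated sweep
    // transaction spends the given HTLC outpoint. Matching txids alone
    // is ambiguous once one sweep spends several second-level outputs.
    func sweepSpendsOutpoint(sweep *wire.MsgTx, op wire.OutPoint) bool {
        for _, txIn := range sweep.TxIn {
            // wire.OutPoint is comparable, so direct equality works.
            if txIn.PreviousOutPoint == op {
                return true
            }
        }
        return false
    }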
In the case of anchor channel types, we mine one less block before we expect
the second level sweep to appear in the mempool, since the sweeper
sweeps one block earlier than the nursery.
Since the tests set quite a high fee rate before the node goes to
chain, the HTLCs wouldn't be economical to sweep at this fee rate.
Before the sweeper handled the second-level transactions this was not
a problem, since the fees were set when the second-level transactions
were created, before the fee estimate was increased.
To avoid running into the "server is still starting" error when trying
to close a channel, we first wait for the error to disappear before we
try closing the actual channel.
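Sketched with lnd's lntest/wait helpers; the GetInfo probe is an
assumption, any cheap RPC that surfaces the startup error would do:

    package itest

    import (
        "context"
        "time"

        "github.com/lightningnetwork/lnd/lnrpc"
        "github.com/lightningnetwork/lnd/lntest/wait"
    )

    // waitUntilServerStarted polls until the node stops returning the
    // "server is still starting" error, or the timeout expires.
    func waitUntilServerStarted(client lnrpc.LightningClient) error {
        return wait.NoError(func() error {
            _, err := client.GetInfo(
                context.Background(), &lnrpc.GetInfoRequest{},
            )
            return err
        }, 30*time.Second)
    }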
This commit replaces most of the hardcoded 10, 15, 20 and 30 second
timeouts with the default timeout. This should allow darwin users to
successfully run the parallel itests locally as well.
In high CPU usage scenarios such as our parallel itests, it seems that
some goroutines just don't get any CPU time before our test timeouts
expire. By polling 10 times less frequently, we hope to reduce the
overall number of goroutines that are spawned because of the RPC
requests within the polling code.