This commit adds a new "waiting to start" state which may be used to
query whether we're still waiting to become the cluster leader. Once
we become the leader, we advance the state to "wallet not exist" or
"wallet locked" depending on wallet availability.
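As a rough sketch of that progression (the names and ordering here are
illustrative, not lnd's exact enum):

```go
package statesketch

// walletState sketches the intended progression; the identifiers do
// not mirror lnd's exact enum values.
type walletState uint8

const (
	// waitingToStart: still waiting to become the cluster leader.
	waitingToStart walletState = iota

	// walletNotExist: we are the leader, but no wallet exists yet.
	walletNotExist

	// walletLocked: a wallet exists but has not been unlocked.
	walletLocked
)

// nextState advances out of waitingToStart once leadership is won,
// choosing the follow-up state based on wallet availability.
func nextState(walletExists bool) walletState {
	if !walletExists {
		return walletNotExist
	}
	return walletLocked
}
```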
Since we don't have to worry about network latency within our
integration tests, we can shorten the broadcast timeout for neutrino
integration tests from 5s to 1s.
Since we want to support AMP payments using a different unique payment
identifier (AMP payments don't go to one specific hash), we change the
nomenclature from PaymentHash to Identifier.
Our aggregate htlc test depends on our previous behavior
where recipients would allow channels with pending hold
invoice htlcs to force close. Now that we have an expiry
watcher to prevent these force closes, we can't rely on
this for tests because the recipient will cancel the htlcs
back before they expire.
This commit adds a test for a hold invoice which is accepted
off-chain and held by the recipient until it expires, at which
point the payer force-closes the channel. With this test we
demonstrate two bugs in our handling of hold invoice state
in the invoice registry when the htlc times out on chain:
- Htlcs not updated: even when we've timed out, we don't
update the htlc state accordingly.
- Invoice can be settled: the invoice can be settled even
though it's expired on chain.
Reproduce the case where we allow settling of invoices whose
htlcs have actually timed out on chain. This bug can rarely
occur if a hodl invoice goes to chain and is manually settled
after it has timed out. Funds are SAFU, but this could be a
headache because the invoice says it's settled when no funds
were claimed.
This commit updates our multi-hop force close test to use a hodl
invoice so that we can reproduce some bugs which will require
the preimage for the invoice that is timed out on chain.
This commit deprecates/replaces the old field `sat_per_byte` with
`sat_per_vbyte`. While the old field's name suggests sat per byte, it
actually uses sat per virtual byte. We use the Hidden param to hide all
the deprecated flags: from now on they won't show up in the help menu,
but they stay valid and can still be passed from the cli. Thus bash
scripts referencing these fields won't break.
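A minimal sketch of the approach using urfave/cli's `Hidden` field; the
`sendcoins` wiring and the fallback logic are illustrative:

```go
package main

import (
	"os"

	"github.com/urfave/cli"
)

func main() {
	app := cli.NewApp()
	app.Commands = []cli.Command{{
		Name: "sendcoins",
		Flags: []cli.Flag{
			// Deprecated flag: hidden from the help output but
			// still valid, so existing scripts keep working.
			cli.Int64Flag{
				Name:   "sat_per_byte",
				Usage:  "(deprecated) use sat_per_vbyte",
				Hidden: true,
			},
			cli.Int64Flag{
				Name:  "sat_per_vbyte",
				Usage: "a manual fee expressed in sat/vbyte",
			},
		},
		Action: func(c *cli.Context) error {
			// Fall back to the deprecated flag if the new one
			// wasn't set.
			satPerVByte := c.Int64("sat_per_vbyte")
			if satPerVByte == 0 {
				satPerVByte = c.Int64("sat_per_byte")
			}
			_ = satPerVByte
			return nil
		},
	}}
	_ = app.Run(os.Args)
}
```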
This commit makes us gate the calls to the RPC servers according to the
current RPC state. This ensures we won't try to call the RPC server
before it has been fully initialized, and that we won't call the
walletUnlocker after the wallet has already been unlocked.
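A rough sketch of such gating as a gRPC unary interceptor, assuming
hypothetical state values and an `isWalletUnlockerCall` helper; lnd's
actual implementation differs:

```go
package rpcgate

import (
	"context"
	"strings"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// rpcState sketches the server's lifecycle; values are illustrative.
type rpcState uint8

const (
	stateStarting rpcState = iota
	stateWalletLocked
	stateWalletUnlocked
	stateRPCActive
)

type server struct {
	state rpcState // real code would guard this with a mutex
}

// isWalletUnlockerCall is an illustrative helper matching requests
// destined for the WalletUnlocker service.
func isWalletUnlockerCall(fullMethod string) bool {
	return strings.Contains(fullMethod, "WalletUnlocker")
}

// stateInterceptor gates unary RPCs on the current server state.
func (s *server) stateInterceptor(ctx context.Context, req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler) (interface{}, error) {

	unlocker := isWalletUnlockerCall(info.FullMethod)
	switch {
	// Don't serve regular RPCs before full initialization.
	case s.state < stateRPCActive && !unlocker:
		return nil, status.Error(
			codes.Unavailable, "server still starting",
		)

	// Don't serve the unlocker once the wallet is already unlocked.
	case s.state >= stateWalletUnlocked && unlocker:
		return nil, status.Error(
			codes.Unavailable, "wallet already unlocked",
		)
	}

	return handler(ctx, req)
}
```

Such an interceptor would be registered via `grpc.UnaryInterceptor`
when constructing the gRPC server.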
We also test that legacy keysend payments are promoted to AMP payments
on the receiver side by asserting basic properties of the fields
returned via the RPC.
In this commit, we thread through the necessary state to allow users to
set a max shard amount. If this value is set, it effectively serves as
a ceiling for all our split attempts. If we need to split, we'll first
try to use `paymentAmt/2`; if that's bigger than `MaxShardAmt`, then
we'll use the latter instead.
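A minimal sketch of that rule (names like `nextShardAmt` are
illustrative, not lnd's identifiers):

```go
package routing

import "github.com/lightningnetwork/lnd/lnwire"

// nextShardAmt halves the amount, but never exceeds the
// user-provided ceiling.
func nextShardAmt(paymentAmt,
	maxShardAmt lnwire.MilliSatoshi) lnwire.MilliSatoshi {

	shard := paymentAmt / 2

	// A zero maxShardAmt means no ceiling was configured.
	if maxShardAmt != 0 && shard > maxShardAmt {
		shard = maxShardAmt
	}

	return shard
}
```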
Ideally, in the future, we'll have a dynamic way to automatically set
both `MaxShardAmt` and `MaxParts` for users. Until then, exposing
these two new fields will allow us to experiment with setting them
automatically using the RPC interface, and also give users a bit more
control over how we attempt to route payments, akin to coin control for
on-chain payments.
Fixes #4730
Currently when numgraphsyncpeers=0, lnd will still attempt to perform
an initial historical sync. We change this behavior here to forgo
historical sync entirely when numgraphsyncpeers is zero, since the
routing table isn't being updated anyway while the node is active.
This permits a no-graph lnd mode where no syncing occurs at all.
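A rough sketch of the gate, with an illustrative `numActiveSyncers`
field standing in for numgraphsyncpeers:

```go
package discovery

// SyncManager here is a sketch; the field and method names are
// illustrative of the gate, not lnd's exact wiring.
type SyncManager struct {
	numActiveSyncers int // mirrors numgraphsyncpeers
}

func (m *SyncManager) maybeHistoricalSync() {
	// With zero sync peers the routing table is never updated while
	// the node is active, so skip the initial historical sync too.
	if m.numActiveSyncers == 0 {
		return
	}

	m.historicalSync()
}

func (m *SyncManager) historicalSync() {
	// ... choose a peer and kick off the initial historical sync ...
}
```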
This PR updates the hold invoice itest to create a private
channel and sets the private option on the invoices it creates,
adding coverage for the inclusion of hop hints.
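A minimal sketch of the invoice setup via the `invoicesrpc` service;
the helper name and parameters are illustrative:

```go
package itest

import (
	"context"

	"github.com/lightningnetwork/lnd/lnrpc/invoicesrpc"
)

// addPrivateHoldInvoice sketches the itest's invoice setup: the
// Private flag asks lnd to embed hop hints for private channels.
func addPrivateHoldInvoice(ctx context.Context,
	client invoicesrpc.InvoicesClient, payHash []byte,
	amt int64) (*invoicesrpc.AddHoldInvoiceResp, error) {

	req := &invoicesrpc.AddHoldInvoiceRequest{
		Hash:    payHash,
		Value:   amt,
		Private: true, // include hop hints for private channels
	}

	return client.AddHoldInvoice(ctx, req)
}
```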
Rather than performing this call in the SyncManager, we give each
gossipSyncer the ability to mark the first sync completed. This permits
pinned syncers to contribute towards the rpc-level synced_to_graph
value, allowing the value to be true after the first pinned syncer or
regular syncer completes. Unlike regular syncers, pinned syncers can
proceed in parallel, possibly decreasing the waiting time if consumers
rely on this field before proceeding to load their application.
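One way the per-syncer signal could be sketched, using a `sync.Once`;
the field and method names are illustrative, not lnd's actual ones:

```go
package discovery

import "sync"

// gossipSyncer sketches per-syncer signaling: each syncer marks its
// own first completed sync instead of the SyncManager doing it
// centrally.
type gossipSyncer struct {
	firstSyncOnce sync.Once
	firstSyncDone chan struct{} // closed after the first sync
}

// markFirstSyncComplete may be called by pinned and regular syncers
// alike; whichever finishes first feeds the rpc-level
// synced_to_graph value.
func (g *gossipSyncer) markFirstSyncComplete() {
	g.firstSyncOnce.Do(func() {
		close(g.firstSyncDone)
	})
}
```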
Now that the HTLC second-level transactions are going through the
sweeper instead of the nursery, there are a few things we must account
for.
1. The sweeper sweeps the CSV locked HTLC output one block earlier than
the nursery.
2. The sweeper aggregates several HTLC second levels into one
transaction. This also means it is not enough to check the txids of
the transactions spent by the final sweep; we must use the actual
outpoints to distinguish them (see the sketch below).
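A minimal sketch of the outpoint-based check (the helper name is
illustrative):

```go
package itest

import "github.com/btcsuite/btcd/wire"

// sweepsOutpoint reports whether the aggregated sweep transaction
// spends the given outpoint. Matching on outpoints rather than txids
// is required because one sweep can spend several second-level
// outputs at once.
func sweepsOutpoint(sweepTx *wire.MsgTx, op wire.OutPoint) bool {
	for _, txIn := range sweepTx.TxIn {
		if txIn.PreviousOutPoint == op {
			return true
		}
	}
	return false
}
```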
In the case of anchor channel types, we mine one less block before we expect
the second level sweep to appear in the mempool, since the sweeper
sweeps one block earlier than the nursery.
Since the tests set quite a high fee rate before the node goes to chain,
the HTLCs wouldn't be economical to sweep at that fee rate.
Before the sweeper handled the second-level transactions this was not a
problem, since the fees were set when the second-level transactions were
created, before the fee estimate was increased.
To avoid running into the "server is still starting" error when trying
to close a channel, we first wait for the error to disappear before we
try closing the actual channel.
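A minimal sketch of that wait loop; `closeAttempt` and the poll
interval are illustrative:

```go
package itest

import (
	"context"
	"strings"
	"time"
)

// waitUntilStarted retries the close attempt until the "server is
// still starting" error disappears, the call succeeds, or the
// context expires.
func waitUntilStarted(ctx context.Context,
	closeAttempt func(context.Context) error) error {

	ticker := time.NewTicker(200 * time.Millisecond)
	defer ticker.Stop()

	for {
		err := closeAttempt(ctx)
		switch {
		case err == nil:
			return nil

		// Keep polling while the server reports it is starting up.
		case strings.Contains(err.Error(), "server is still starting"):

		default:
			return err
		}

		select {
		case <-ticker.C:
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```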
This commit replaces most of the hard coded 10, 15, 20 and 30 second
timeouts with the default timeout. This should allow darwin users to
successfully run the parallel itests locally as well.
In high CPU usage scenarios such as our parallel itests, it seems that
some goroutines just don't get any CPU time before our test timeouts
expire. By polling 10 times less frequently, we hope to reduce the
overall number of goroutines that are spawned because of the RPC
requests within the polling code.
This ensures that the nodes will properly be shut down even if one fails
to start or any of them fail to connect. Previously the shutdown was
deferred only if the setup was successful.
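A minimal sketch of the pattern, with stand-in harness helpers (all
names illustrative):

```go
package itest

import "testing"

type node struct{}

// setupNodes, startNodes, connectNodes and shutdownNodes are
// illustrative stand-ins for the harness helpers.
func setupNodes(t *testing.T) []*node  { t.Helper(); return []*node{{}, {}} }
func startNodes(nodes []*node) error   { return nil }
func connectNodes(nodes []*node) error { return nil }
func shutdownNodes(nodes []*node)      {}

func TestHarnessShutdown(t *testing.T) {
	nodes := setupNodes(t)

	// Deferred before any fallible step, so a failed start or a
	// failed connect still tears every node down.
	defer shutdownNodes(nodes)

	if err := startNodes(nodes); err != nil {
		t.Fatalf("unable to start nodes: %v", err)
	}
	if err := connectNodes(nodes); err != nil {
		t.Fatalf("unable to connect nodes: %v", err)
	}
}
```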