Since the batch interval can potentially be long, adding local updates
to the graph could be slow. This would delay operations such as adding
our own channel updates and announcements during the funding process,
and updating edge policies for local channels.
Now we instead check whether the update is remote or not, and only for
remote updates use the SchedulerOption to lazily add them to the graph.
We do this instead of relying on the source of the AnnounceSignatures
message, as we filter out the source when broadcasting any
announcements, which would lead to the remote node never receiving our
channel update. Note that this is done more for the sake of correctness
and to address a flake within the integration tests, as channel updates
are sent directly and reliably to channel counterparties.
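As a minimal, self-contained sketch of that dispatch, with stand-in
types that only mirror the shape of lnd's batch.SchedulerOption and
LazyAdd (the addToGraph helper and its flag are assumptions for
illustration, not the gossiper's actual code):

    package main

    import "fmt"

    // SchedulerOption mirrors the shape of a functional option accepted
    // by a batch scheduler, as in lnd's batch.SchedulerOption.
    type SchedulerOption func(*schedulerOptions)

    type schedulerOptions struct {
        lazy bool
    }

    // LazyAdd marks a request as batchable: the write may wait for the
    // full batch interval before it is committed to the graph.
    func LazyAdd() SchedulerOption {
        return func(o *schedulerOptions) { o.lazy = true }
    }

    // addToGraph sketches the dispatch described above: local updates
    // are written immediately, while remote updates opt into lazy
    // batching.
    func addToGraph(update string, isRemote bool) {
        var opts []SchedulerOption
        if isRemote {
            // Only remote updates can tolerate the batch delay.
            opts = append(opts, LazyAdd())
        }

        var applied schedulerOptions
        for _, opt := range opts {
            opt(&applied)
        }
        fmt.Printf("update %q: lazy=%v\n", update, applied.lazy)
    }

    func main() {
        addToGraph("our own channel update", false) // immediate write
        addToGraph("remote channel update", true)   // batched lazily
    }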
Currently when numgraphsyncpeers=0, lnd will still attempt to perform
an initial historical sync. We change this behavior here to forgo
historical sync entirely when numgraphsyncpeers is zero, since the
routing table isn't being updated anyway while the node is active.
This permits a no-graph lnd mode where no syncing occurs at all.
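A minimal sketch of the gating, assuming NumActiveSyncers is the
internal counterpart of the numgraphsyncpeers flag (the types below
are stand-ins, not the real SyncManager):

    package main

    import "fmt"

    // cfg is a stand-in for the relevant slice of the sync manager's
    // config; NumActiveSyncers is assumed to be the internal
    // counterpart of numgraphsyncpeers.
    type cfg struct {
        NumActiveSyncers int
    }

    type syncManager struct {
        cfg cfg
    }

    // maybeHistoricalSync sketches the new gating: with zero sync
    // peers the routing table is never updated while the node runs,
    // so the initial historical sync is skipped entirely.
    func (m *syncManager) maybeHistoricalSync() {
        if m.cfg.NumActiveSyncers == 0 {
            fmt.Println("numgraphsyncpeers=0: skipping historical sync")
            return
        }
        fmt.Println("choosing a peer for the initial historical sync")
    }

    func main() {
        m := &syncManager{cfg: cfg{NumActiveSyncers: 0}}
        m.maybeHistoricalSync()
    }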
This PR updates the hold invoice itest to create a private channel and
sets the private option on the invoices it creates, adding coverage
for the inclusion of hop hints.
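Roughly, the change amounts to the following, assuming lnd's
invoicesrpc API; the helper and its wiring are illustrative, with the
actual harness details omitted:

    package itest

    import (
        "context"
        "testing"

        "github.com/lightningnetwork/lnd/lnrpc/invoicesrpc"
    )

    // addPrivateHoldInvoice sketches the itest change: the hold
    // invoice is created with Private set, so the payment request
    // carries hop hints for the private channel. The client and
    // payment hash come from the surrounding harness in the real test.
    func addPrivateHoldInvoice(t *testing.T, ctx context.Context,
        client invoicesrpc.InvoicesClient, payHash []byte) string {

        req := &invoicesrpc.AddHoldInvoiceRequest{
            Memo:    "testing",
            Hash:    payHash,
            Value:   1000,
            Private: true, // include hop hints for private channels
        }
        resp, err := client.AddHoldInvoice(ctx, req)
        if err != nil {
            t.Fatalf("unable to add hold invoice: %v", err)
        }
        return resp.PaymentRequest
    }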
With this commit, if --tor.active is specified, then IPv6 addresses
will no longer go through the connmgr.TorLookupIP function from btcd.
This function does not have proper IPv6 support and would fail with
the error "tor general error". Instead, we now use the system resolver
for these addresses.
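A simplified sketch of the dispatch, with torLookup standing in for
btcd's connmgr.TorLookupIP (the resolveIP helper and its signature are
assumptions for illustration, not lnd's actual resolver plumbing):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // resolveIP sketches the fix: IPv6 literals bypass the Tor lookup,
    // which has no proper IPv6 support and fails with "tor general
    // error", and are handed to the system resolver instead.
    func resolveIP(host string, torActive bool,
        torLookup func(string) ([]net.IP, error)) ([]net.IP, error) {

        // An IPv6 address goes to the system resolver directly.
        if ip := net.ParseIP(host); ip != nil && strings.Contains(host, ":") {
            return net.LookupIP(host)
        }
        if torActive {
            return torLookup(host)
        }
        return net.LookupIP(host)
    }

    func main() {
        ips, err := resolveIP("::1", true, nil)
        fmt.Println(ips, err)
    }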
To fix an issue where the vendor.tar.gz in a release build had a
different hash if the mobile RPC stubs were in the mobile/ folder, we
clean those out first.
The culprit was the `google.golang.org/grpc/test/bufconn` package,
which is currently only used in the mobile RPC stubs and nowhere else.
Therefore the vendor/modules.txt was different when vendoring was run
with the generated mobile RPC stubs present.
This allows users to see progress whenever the docker image is
[re]built, and (especially on non-linux hosts) track the size of the
build context being uploaded. Currently no output is displayed, so
it's hard to attribute the source of latency, e.g. network latency,
building layers, a large work directory, etc.
AFAICT it's not possible to flip back from being synced_to_chain, so we
remove the underlying call that could reflect this. The method is moved
into the test file since it's still used to test correctness of other
portions of the flow.
Rather than performing this call in the SyncManager, we give each
gossipSyncer the ability to mark the first sync completed. This permits
pinned syncers to contribute towards the rpc-level synced_to_graph
value, allowing the value to be true after the first pinned syncer or
regular syncer completes. Unlike regular syncers, pinned syncers can
proceed in parallel, possibly decreasing the waiting time if consumers
rely on this field before proceeding to load their application.
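A sketch of the idempotent completion signal, assuming a
markGraphSynced callback shared by all syncers; the types are
stand-ins for illustration, not lnd's actual SyncManager:

    package main

    import (
        "fmt"
        "sync"
    )

    // graphSyncSignal sketches the per-syncer completion signal
    // described above; the concrete wiring into each gossipSyncer is
    // an assumption for illustration.
    type graphSyncSignal struct {
        once   sync.Once
        synced bool
    }

    // markGraphSynced is handed to every syncer, pinned or regular.
    // The first one to finish its initial sync flips the rpc-level
    // synced_to_graph value; sync.Once keeps the signal idempotent
    // when pinned syncers complete in parallel.
    func (s *graphSyncSignal) markGraphSynced() {
        s.once.Do(func() {
            s.synced = true
            fmt.Println("synced_to_graph is now true")
        })
    }

    func main() {
        sig := &graphSyncSignal{}

        var wg sync.WaitGroup
        for i := 0; i < 3; i++ { // e.g. three pinned syncers racing
            wg.Add(1)
            go func() {
                defer wg.Done()
                sig.markGraphSynced()
            }()
        }
        wg.Wait()
    }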
A pinned syncer is an ActiveSyncer that is configured to always remain
active for the lifetime of the connection. Pinned syncers do not count
towards the total NumActiveSyncers count, whose members are rotated
periodically. This feature allows nodes to more tightly synchronize
their routing tables by ensuring they are always receiving gossip from
a distinguished subset of peers.
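A sketch of the resulting selection logic; the pinned set keyed by
peer pubkey and the chooseSyncerType helper are illustrative
stand-ins, not the actual configuration surface:

    package main

    import "fmt"

    type syncerType int

    const (
        activeSync syncerType = iota
        passiveSync
        pinnedSync
    )

    // chooseSyncerType sketches the selection described above: pinned
    // peers always gossip actively and are excluded from the rotating
    // NumActiveSyncers budget.
    func chooseSyncerType(peer string, pinned map[string]struct{},
        numActive, maxActive int) syncerType {

        if _, ok := pinned[peer]; ok {
            return pinnedSync
        }
        if numActive < maxActive {
            return activeSync
        }
        return passiveSync
    }

    func main() {
        pinned := map[string]struct{}{"peerA": {}}

        // peerA is pinned: always active, even with the rotation full.
        fmt.Println(chooseSyncerType("peerA", pinned, 3, 3) == pinnedSync)

        // peerB is not pinned and all active slots are taken: passive.
        fmt.Println(chooseSyncerType("peerB", pinned, 3, 3) == passiveSync)
    }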