Corrects an instance in which we hold a reference to a boltdb
byte slice after returning from the transaction. This
can cause panics under certain conditions, which we
avoid by creating a copy of the key.
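A minimal sketch of the pattern, using the bbolt API (the bucket and helper names are illustrative):

```go
package channeldb

import (
	bolt "go.etcd.io/bbolt"
)

// firstKey returns a copy of the first key in the given bucket. Slices
// returned by boltdb are only valid for the life of the transaction;
// referencing them afterwards can panic once the backing mmap is
// remapped, so we copy the key before the transaction closes.
func firstKey(db *bolt.DB, bucket []byte) ([]byte, error) {
	var keyCopy []byte
	err := db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket(bucket)
		if b == nil {
			return nil
		}
		if k, _ := b.Cursor().First(); k != nil {
			keyCopy = make([]byte, len(k))
			copy(keyCopy, k)
		}
		return nil
	})
	return keyCopy, err
}
```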
In this commit, we allow the gossiper syncer to store the chunk size for
its respective encoding type. We do this to prevent a race condition
that would otherwise arise when the unit tests modify the values of the
encodingTypeToChunkSize map to allow for easier testing.
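A sketch of the idea: capture the chunk size once at construction, so test-only writes to the shared map can't race with a running syncer (the constructor and field are illustrative; the map name is from this change):

```go
package discovery

// encodingTypeToChunkSize maps a query encoding type to the number of
// short channel IDs sent per reply chunk (values illustrative).
var encodingTypeToChunkSize = map[uint8]int32{
	0x00: 8000,
}

type gossipSyncer struct {
	// chunkSize is captured at construction time rather than read from
	// the shared map on each request.
	chunkSize int32
}

func newGossipSyncer(encoding uint8) *gossipSyncer {
	return &gossipSyncer{
		chunkSize: encodingTypeToChunkSize[encoding],
	}
}
```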
In this commit, we randomize the order of the different bootstrappers in
order to avoid always querying potentially unreliable
bootstrappers first.
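The randomization itself is a simple in-place shuffle; a sketch, with a stand-in for the real bootstrapper interface:

```go
package discovery

import "math/rand"

// bootstrapper is a stand-in for the real peer bootstrapper interface.
type bootstrapper interface {
	SampleNodeAddrs(numAddrs uint32) error
}

// shuffleBootstrappers randomizes the query order in place so that a
// potentially unreliable bootstrapper isn't always tried first.
func shuffleBootstrappers(bs []bootstrapper) {
	rand.Shuffle(len(bs), func(i, j int) {
		bs[i], bs[j] = bs[j], bs[i]
	})
}
```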
In this commit, we fix the logging when adding new gossip syncers. The
old log would log the byte array, rather than the byte slice. We fix
this by slicing before logging.
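The difference is visible with plain fmt, which the logger delegates to (the key type here is illustrative):

```go
package main

import "fmt"

func main() {
	var nodeID [33]byte
	nodeID[0] = 0x02

	// Formatting the array value prints decimal bytes: [2 0 0 ...].
	fmt.Printf("peer=%v\n", nodeID)

	// Slicing first and using %x prints the compact hex form instead.
	fmt.Printf("peer=%x\n", nodeID[:])
}
```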
This commit changes the gossiper to direct messages to
peer objects, instead of sending them through the
server every time. The primary motivation is to reduce
contention on the server's mutex and, more importantly,
avoid deadlocks in the Triangle of Death.
In this commit, we go through the codebase looking for TCP address
assumptions and modifying them to include the recently introduced onion
addresses. This enables us to fully support onion addresses within the
daemon.
In this commit, we fix a bug where a fallback SRV lookup would leak
information if `lnd` was set to route connections over Tor. We solve
this by using the network-specific functions rather than the standard
ones found in the `net` package.
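Conceptually, the change routes every lookup through an interface chosen at startup instead of calling the stdlib directly; a sketch with illustrative names:

```go
package discovery

import "net"

// Net abstracts the net-package operations we need so that a Tor-backed
// implementation can be swapped in when routing over Tor, keeping any
// fallback lookup from escaping the proxy.
type Net interface {
	Dial(network, address string) (net.Conn, error)
	LookupHost(host string) ([]string, error)
	LookupSRV(service, proto, name string) (string, []*net.SRV, error)
}

// clearNet satisfies Net with the standard library; a Tor counterpart
// would proxy each call through the SOCKS5 listener.
type clearNet struct{}

func (clearNet) Dial(network, address string) (net.Conn, error) {
	return net.Dial(network, address)
}

func (clearNet) LookupHost(host string) ([]string, error) {
	return net.LookupHost(host)
}

func (clearNet) LookupSRV(service, proto, name string) (string, []*net.SRV, error) {
	return net.LookupSRV(service, proto, name)
}
```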
In this commit, we fix an existing deadlock in the
gossiper->server->peer pipeline by ensuring that we're not holding the
syncer mutex while we attempt to have a syncer filter out the rest of
gossip messages.
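The shape of the fix, with illustrative types: snapshot the syncers under the mutex, release it, then filter.

```go
package discovery

import "sync"

type gossipSyncer struct{}

func (s *gossipSyncer) FilterGossipMsgs(msgs ...interface{}) {}

type gossiper struct {
	syncerMtx   sync.RWMutex
	peerSyncers map[string]*gossipSyncer
}

// filterMsgs copies the syncer set under the mutex and releases it
// *before* calling into each syncer. Holding syncerMtx across the
// filter call can deadlock when a syncer re-enters the
// gossiper->server->peer pipeline.
func (g *gossiper) filterMsgs(msgs ...interface{}) {
	g.syncerMtx.RLock()
	syncers := make([]*gossipSyncer, 0, len(g.peerSyncers))
	for _, s := range g.peerSyncers {
		syncers = append(syncers, s)
	}
	g.syncerMtx.RUnlock()

	for _, s := range syncers {
		s.FilterGossipMsgs(msgs...)
	}
}
```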
In this commit, we fix an existing bug caused by a scheduling race
condition. We'll now ensure that if we get a gossip message from a peer
before we create an instance for it, then we create one on the spot so
we can service the message. Before this commit, we would drop the first
message, and therefore never sync up with the peer at all, causing them
to miss channel announcements.
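The on-the-spot creation amounts to a find-or-create on the syncer map; a sketch with illustrative names:

```go
package discovery

import "sync"

type gossipSyncer struct{ peerID string }

func newGossipSyncer(peerID string) *gossipSyncer {
	return &gossipSyncer{peerID: peerID}
}

type gossiper struct {
	syncerMtx   sync.Mutex
	peerSyncers map[string]*gossipSyncer
}

// findOrCreateSyncer returns the syncer for a peer, creating one on the
// spot if the peer's first gossip message raced ahead of the server's
// init call. Without this, that first message would be dropped and the
// peers would never sync channel announcements.
func (g *gossiper) findOrCreateSyncer(peerID string) *gossipSyncer {
	g.syncerMtx.Lock()
	defer g.syncerMtx.Unlock()

	s, ok := g.peerSyncers[peerID]
	if !ok {
		s = newGossipSyncer(peerID)
		g.peerSyncers[peerID] = s
	}
	return s
}
```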
In this commit, we extend the AuthenticatedGossiper to take advantage of
the new query features in the case that it gets a channel update without
first receiving the full channel announcement. If this happens, we'll
attempt to find a syncer that's fully synced, and request the channel
announcement from it.
This new method allows outside callers to sample the current state of
the gossipSyncer in a concurrent-safe manner. In order to achieve this,
we now only modify the g.state variable atomically.
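A sketch of the concurrent-safe sampling; chansSynced appears later in this series, while the other state names are illustrative:

```go
package discovery

import "sync/atomic"

type syncerState uint32

const (
	syncingChans syncerState = iota // illustrative
	queryNewChannels                // illustrative
	chansSynced
)

type gossipSyncer struct {
	// state is only ever accessed via the atomic helpers below.
	state uint32
}

// syncState samples the current state without taking a lock.
func (g *gossipSyncer) syncState() syncerState {
	return syncerState(atomic.LoadUint32(&g.state))
}

// setSyncState transitions the state machine atomically.
func (g *gossipSyncer) setSyncState(s syncerState) {
	atomic.StoreUint32(&g.state, uint32(s))
}
```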
In this commit, we create a new concrete implementation for the new
discovery.ChannelGraphTimeSeries interface. We also export the
createChannelAnnouncement method to allow the chanSeries struct to
re-use the existing code for creating wire messages from the database
structs.
In this commit, we update the logic in the AuthenticatedGossiper to
ensure that it can properly create, manage, and dispatch messages to any
gossipSyncer instances created by the server.
With this set of changes, the gossiper now has complete knowledge of the
current set of peers we're connected to that support the new range
queries. Upon initial connect, InitSyncState will be called by the
server if the new peer understands the set of gossip queries. This will
then create a new spot in the peerSyncers map for the new syncer. For
each new gossip query message, we'll then attempt to dispatch the
message directly to the gossip syncer. When the peer has disconnected,
we then expect the server to call the PruneSyncState method which will
allow us to free up the resources.
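The lifecycle reduces to map bookkeeping keyed by peer; a self-contained sketch with illustrative types:

```go
package discovery

import "sync"

type gossipSyncer struct{ peerID string }

func newGossipSyncer(peerID string) *gossipSyncer {
	return &gossipSyncer{peerID: peerID}
}

type gossiper struct {
	syncerMtx   sync.Mutex
	peerSyncers map[string]*gossipSyncer
}

// InitSyncState is called by the server once a newly connected peer
// signals gossip query support, reserving a spot in the map.
func (g *gossiper) InitSyncState(peerID string) {
	g.syncerMtx.Lock()
	defer g.syncerMtx.Unlock()
	g.peerSyncers[peerID] = newGossipSyncer(peerID)
}

// PruneSyncState is called by the server on disconnect so the syncer's
// resources can be freed.
func (g *gossiper) PruneSyncState(peerID string) {
	g.syncerMtx.Lock()
	defer g.syncerMtx.Unlock()
	delete(g.peerSyncers, peerID)
}
```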
Finally, when we go to broadcast messages, we'll send the messages
directly to the peers that have gossipSyncer instances active, so they
can properly be filtered out. For those that don't, we'll broadcast
directly, ensuring we skip *all* peers that have an active gossip
syncer.
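In sketch form (illustrative types again), the broadcast path becomes:

```go
package discovery

import "sync"

type gossipSyncer struct{}

func (s *gossipSyncer) FilterGossipMsgs(msgs ...interface{}) {}

// server is a stand-in for the real server's broadcast hook.
type server struct{}

func (*server) sendToAllExcept(skip map[string]struct{}, msgs ...interface{}) {}

type gossiper struct {
	syncerMtx   sync.RWMutex
	peerSyncers map[string]*gossipSyncer
	srv         *server
}

// broadcast hands messages to syncer-backed peers for horizon
// filtering, then broadcasts to everyone else, skipping *all* peers
// with an active syncer so nothing is delivered twice.
func (g *gossiper) broadcast(msgs ...interface{}) {
	g.syncerMtx.RLock()
	skip := make(map[string]struct{}, len(g.peerSyncers))
	syncers := make([]*gossipSyncer, 0, len(g.peerSyncers))
	for peerID, s := range g.peerSyncers {
		skip[peerID] = struct{}{}
		syncers = append(syncers, s)
	}
	g.syncerMtx.RUnlock()

	for _, s := range syncers {
		s.FilterGossipMsgs(msgs...)
	}
	g.srv.sendToAllExcept(skip, msgs...)
}
```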
In this commit, we introduce a new struct, the gossipSyncer. The role of
this struct is to encapsulate the state machine required to implement
the new gossip query range feature recently added to the spec. With this
change, each peer that knows of this new feature will have a new
goroutine that will be managed by the gossiper.
Once created and started, the gossipSyncer will start to progress
through each possible state, finally ending at the chansSynced stage. In
this stage, it has synchronized state with the remote peer, and is
simply awaiting any new messages from the gossiper to send directly to
the peer. Each message will only be sent if the remote peer has actually
set an update horizon, and the message isn't before or after that
horizon.
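The horizon check itself is small; a sketch whose field names loosely follow the gossip timestamp filter (illustrative):

```go
package discovery

import "time"

// updateHorizon mirrors the remote peer's timestamp filter: only
// updates stamped within [firstTimestamp, firstTimestamp+range] should
// be forwarded.
type updateHorizon struct {
	firstTimestamp time.Time
	timestampRange time.Duration
}

// passes reports whether a message with the given timestamp falls
// inside the peer's requested horizon.
func (h *updateHorizon) passes(ts time.Time) bool {
	end := h.firstTimestamp.Add(h.timestampRange)
	return !ts.Before(h.firstTimestamp) && !ts.After(end)
}
```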
A set of unit tests has been added to ensure that two state machines
properly terminate and synchronize channel state.
In this commit, we update the testUpdateChannelPolicy to exercise the
recent set of changes within the switch. If one applies this test to a
fresh branch (without those new changes), it should fail. This is due to
the fact that before, Bob would attempt to apply the constraints of the
incoming link (which we updated) instead of the outgoing link. With the
recent set of changes, the test now properly passes.
In this commit, we fix an existing deadlock in the
processChanPolicyUpdate method. Before this commit, within
processChanPolicyUpdate, we would directly call updateChannel *within*
the ForEachChannel closure. This would at times result in a deadlock, as
updateChannel will itself attempt to create a write transaction in order
to persist the newly updated channel.
We fix this deadlock by simply performing another loop once we know the
set of channels that we wish to update. This second loop will actually
update the channels on disk.
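A sketch of the two-pass shape (the database and edge types are stand-ins):

```go
package discovery

type channelEdge struct{ needsPolicyUpdate bool }

// channelDB is a stand-in for the real channel database.
type channelDB struct{ edges []*channelEdge }

// ForEachChannel visits every channel inside a single read transaction.
func (db *channelDB) ForEachChannel(f func(*channelEdge) error) error {
	for _, e := range db.edges {
		if err := f(e); err != nil {
			return err
		}
	}
	return nil
}

// updateChannel persists an edge inside its own write transaction.
func (db *channelDB) updateChannel(e *channelEdge) error { return nil }

// updateChanPolicies collects matching channels inside the read
// transaction, then updates them in a second loop. Calling
// updateChannel from within ForEachChannel would open a write
// transaction while the read transaction is held, which can deadlock.
func updateChanPolicies(db *channelDB) error {
	var toUpdate []*channelEdge
	err := db.ForEachChannel(func(edge *channelEdge) error {
		if edge.needsPolicyUpdate {
			toUpdate = append(toUpdate, edge)
		}
		return nil
	})
	if err != nil {
		return err
	}

	for _, edge := range toUpdate {
		if err := db.updateChannel(edge); err != nil {
			return err
		}
	}
	return nil
}
```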
In this commit, we fix the inability of some users to connect to the DNS
seed using our direct TCP fallback. We do this as some resolvers filter
out our large SRV requests due to their size (they also include public
keys). Instead, we’ll use a direct TCP resolution in this case.
However, after a recent change, we forgot the period at the end of the
target DNS host. This is an issue as the domain needs to be fully
qualified.
The fix is easy: add a period within our string formatting to target
the proper sub-domain and SRV target.
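For illustration (the seed host here is an assumption):

```go
package main

import "fmt"

func main() {
	const dnsSeed = "nodes.lightning.directory" // illustrative seed

	// Without the trailing period the name is relative and may be
	// expanded against a local search domain; with it, the name is
	// fully qualified.
	target := fmt.Sprintf("soa.%s.", dnsSeed)
	fmt.Println(target) // soa.nodes.lightning.directory.
}
```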
Fixes #854.
In this commit, we update the DNS bootstrapper to match the new query
semantics expected by the new DNS server. We no longer hard code the
target DNS host, and instead, we’ll re-use the same target endpoint as
we only need the soaShim in order to establish a direct TCP connection
for the queries.
In this commit, we reduce the amount of unnecessary work that the
gossiper can carry out. When CPU profiling some nodes, I noticed that
we’d spend a lot of time validating the signatures for an announcement,
only to realize that the router already had it.
To remedy this, we’ll use the new methods added to the channel router
in order to avoid unnecessarily validating an announcement that is
actually stale. This should reduce memory usage (since signature
validation uses big ints under the hood), as well as idle CPU usage.
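The shape of the check, with placeholder names for the new router queries:

```go
package discovery

// router is a stand-in exposing the staleness queries added to the
// channel router; the method names here are illustrative.
type router interface {
	IsStale(msg interface{}) bool
	Process(msg interface{}) error
}

// handleAnnouncement consults the router *before* verifying signatures,
// so announcements we already know about never reach the expensive
// big-int signature checks.
func handleAnnouncement(r router, msg interface{},
	verifySigs func(interface{}) error) error {

	if r.IsStale(msg) {
		return nil // already known; skip validation entirely
	}
	if err := verifySigs(msg); err != nil {
		return err
	}
	return r.Process(msg)
}
```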
This commit adds a new module named 'torsvc' which houses all Tor
functionality in an attempt to isolate it and make it reusable in
other projects. Some additional tweaks were made to config.go and
to the bootstrapper.
This commit adds Tor support. Users can set the --TorSocks flag
to specify which port Tor's SOCKS5 proxy is listening on so that
lnd can connect to it. When this flag is set, ALL traffic gets
routed over Tor, including DNS traffic. Special functions for
DNS lookups were added, and since Tor doesn't natively support
SRV requests, the proxySRV function connects us to a DNS
server via Tor so that SRV requests can be issued directly
to the DNS server.
Co-authored-by: MeshCollider <dobsonsa68@gmail.com>
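Dialing through the SOCKS5 listener can be sketched with golang.org/x/net/proxy (the addresses are illustrative):

```go
package torsvc

import (
	"net"

	"golang.org/x/net/proxy"
)

// dialViaTor dials target through Tor's SOCKS5 proxy, e.g. the address
// configured via --TorSocks (commonly 127.0.0.1:9050).
func dialViaTor(socksAddr, target string) (net.Conn, error) {
	dialer, err := proxy.SOCKS5("tcp", socksAddr, nil, proxy.Direct)
	if err != nil {
		return nil, err
	}
	return dialer.Dial("tcp", target)
}
```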
In order to reduce high CPU utilization during the initial network view
sync, we cap the total number of active in-flight jobs that can
be launched.
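One common way to bound in-flight work is a buffered-channel semaphore; a sketch (the limit and job type are illustrative):

```go
package discovery

import "sync"

// runBounded executes jobs with at most maxJobs running concurrently,
// keeping CPU usage in check during the initial network view sync.
func runBounded(jobs []func(), maxJobs int) {
	sem := make(chan struct{}, maxJobs)
	var wg sync.WaitGroup

	for _, job := range jobs {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot
		go func(run func()) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			run()
		}(job)
	}
	wg.Wait()
}
```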