In this commit, we fix an issue where connections made through Tor's
SOCKS proxy would result in the remote address being the address of the
proxy itself.
We fix this by using an internal proxyConn struct that sets the correct
address at the time of the connection.
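As a rough sketch of the approach (the field name is illustrative), the wrapper simply embeds the connection returned by the SOCKS dialer and overrides RemoteAddr:

```go
import "net"

// proxyConn embeds the connection handed back by the SOCKS dialer, but
// remembers the address we actually intended to reach.
type proxyConn struct {
	net.Conn
	remoteAddr net.Addr
}

// RemoteAddr overrides the embedded connection's method, which would
// otherwise report the proxy's own address.
func (c *proxyConn) RemoteAddr() net.Addr {
	return c.remoteAddr
}
```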
In this commit, we clean up the tor package to better follow the
Effective Go guidelines. Most of the changes revolve around naming,
where we'd have things like `torsvc.TorDial`. This was simplified to
`tor.Dial`, along with many other similar renames.
We check whether the channel is FullySynced instead of comparing the local
and remote commit chain heights, as the heights might not be in sync.
FullySynced was recently modified to compare the message indexes instead,
which _should_ be in sync between the chains.
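A minimal sketch of that comparison, using illustrative stand-in types rather than the channel state machine's actual internals:

```go
// commitment and commitmentChain are illustrative stand-ins for the channel
// state machine's internal types.
type commitment struct {
	ourMessageIndex   uint64
	theirMessageIndex uint64
}

type commitmentChain struct {
	commitments []*commitment
}

func (c *commitmentChain) tip() *commitment {
	return c.commitments[len(c.commitments)-1]
}

// fullySynced reports whether both parties have signed for the same set of
// updates: rather than comparing chain heights (which may legitimately lag
// one another), we compare the message indexes at the tip of each chain.
func fullySynced(local, remote *commitmentChain) bool {
	oursSynced := local.tip().ourMessageIndex == remote.tip().ourMessageIndex
	theirsSynced := local.tip().theirMessageIndex == remote.tip().theirMessageIndex

	return oursSynced && theirsSynced
}
```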
The test TestChanSyncOweRevocationAndCommitForceTransition is altered to
ensure the two chains are at different heights before the test starts, in
order to trigger the case that would previously fail to resend the
commitment signature.
This commit changes cancelReservationCtx to hold the resMtx from start
to finish. Previously it would lock only when accessing the maps, and at
different times, meaning that other goroutines (I'm looking at you,
PeerTerminationWatcher) could come in and grab the context in between
locks, possibly leading to a race.
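The shape of the fix, with illustrative types standing in for the funding manager's own:

```go
import (
	"fmt"
	"sync"
)

// reservationCtx and manager stand in for the funding manager and its
// per-channel reservation context.
type reservationCtx struct{}

type manager struct {
	resMtx       sync.Mutex
	reservations map[[32]byte]*reservationCtx
}

// cancelReservation holds resMtx from start to finish, so no other goroutine
// (such as a peer termination watcher) can grab the context between the
// lookup and the delete.
func (m *manager) cancelReservation(pendingChanID [32]byte) (*reservationCtx, error) {
	m.resMtx.Lock()
	defer m.resMtx.Unlock()

	ctx, ok := m.reservations[pendingChanID]
	if !ok {
		return nil, fmt.Errorf("unknown reservation: %x", pendingChanID)
	}
	delete(m.reservations, pendingChanID)

	return ctx, nil
}
```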
This commit moves the responsibility of sending a funding error on the
reservation error channel into failFundingFlow, reducing the number of
places where we need to remember to send it.
In this commit we fix an existing bug caused by a scheduling race
condition. We'll now ensure that if we get a gossip message from a peer
before we've created a syncer instance for it, we create one on the spot so
we can service the message. Before this commit, we would drop the first
message, and therefore never sync up with the peer at all, causing them
to miss channel announcements.
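A sketch of the lookup-or-create path (types and names are illustrative, not the gossiper's exact API):

```go
import "sync"

// syncer and gossiper stand in for the per-peer gossip syncer and the
// AuthenticatedGossiper.
type syncer struct {
	peerID [33]byte
}

func newSyncer(peerID [33]byte) *syncer { return &syncer{peerID: peerID} }
func (s *syncer) start()                {}

type gossiper struct {
	syncerMtx   sync.Mutex
	peerSyncers map[[33]byte]*syncer
}

// findOrCreateSyncer returns the syncer for a peer, creating and starting one
// on the spot if a gossip query arrives before the server has registered the
// peer. Previously that first message would simply be dropped.
func (d *gossiper) findOrCreateSyncer(peerID [33]byte) *syncer {
	d.syncerMtx.Lock()
	defer d.syncerMtx.Unlock()

	if s, ok := d.peerSyncers[peerID]; ok {
		return s
	}

	s := newSyncer(peerID)
	d.peerSyncers[peerID] = s
	s.start()

	return s
}
```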
In this commit, we extend the AuthenticatedGossiper to take advantage of
the new query features in the case that it gets a channel update without
first receiving the full channel announcement. If this happens, we'll
attempt to find a syncer that's fully synced, and request the channel
announcement from it.
This new method allows outside callers to sample the current state of
the gossipSyncer in a concurrent-safe manner. In order to achieve this,
we now only modify the g.state variable atomically.
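A sketch of the access pattern (names are illustrative): since the state field is only ever read and written through sync/atomic, the sampling method needs no lock.

```go
import "sync/atomic"

type gossipSyncerSketch struct {
	// state holds the current syncer state; it is only ever accessed
	// atomically so it can be sampled from any goroutine.
	state uint32
}

// syncState samples the current state without taking any locks.
func (g *gossipSyncerSketch) syncState() uint32 {
	return atomic.LoadUint32(&g.state)
}

// setSyncState transitions the syncer to a new state atomically.
func (g *gossipSyncerSketch) setSyncState(newState uint32) {
	atomic.StoreUint32(&g.state, newState)
}
```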
In this commit, we ensure that all indexes for a particular channel have
any relevant keys deleted once a channel is removed from the database.
Before this commit, if we pruned a channel due to closing, then its
entry in the channel update index would never be removed.
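A sketch of the cleanup using the bbolt API (bucket name and key layout are illustrative): when a channel's edges are removed, the matching key in the update index, keyed by update time followed by channel ID, must be deleted as well.

```go
import (
	"encoding/binary"

	bolt "go.etcd.io/bbolt"
)

// deleteEdgeUpdateIndexEntry removes the update-index key for a pruned
// channel so it doesn't linger after the channel itself is gone.
func deleteEdgeUpdateIndexEntry(tx *bolt.Tx, lastUpdate int64, chanID uint64) error {
	updateIndex := tx.Bucket([]byte("edge-update-index"))
	if updateIndex == nil {
		return nil
	}

	var key [16]byte
	binary.BigEndian.PutUint64(key[:8], uint64(lastUpdate))
	binary.BigEndian.PutUint64(key[8:], chanID)

	return updateIndex.Delete(key[:])
}
```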
In this commit, we add a new command line option to allow nodes (ideally
routing nodes) to disable receiving up-to-date channel updates altogether.
This may be desired, as it allows routing nodes to save on
bandwidth as they don't need the channel updates to passively forward
HTLCs. In the scenario that they _do_ want to update their routing
policies, the first failed HTLC due to policy inconsistency will then
allow the routing node to propagate the new update to potential nodes
trying to route through it.
In this commit, we create a new concrete implementation for the new
discovery.ChannelGraphTimeSeries interface. We also export the
createChannelAnnouncement method to allow the chanSeries struct to
re-use the existing code for creating wire messages from the database
structs.
In this commit, we add a new database migration required to update old
databases to the version that tracks the update indexes for the nodes and
edge policies. The migration is straightforward: we simply need to populate
the new indexes for all the nodes, and then all the edges.
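A sketch of the migration's shape (bucket names, key layout, and the decode helper are all illustrative):

```go
import (
	"encoding/binary"

	bolt "go.etcd.io/bbolt"
)

// nodeLastUpdate is a hypothetical helper that decodes a stored node and
// returns its last-update timestamp.
func nodeLastUpdate(nodeBytes []byte) int64 { return 0 }

// migrateNodeUpdateIndex walks every stored node and writes an entry into the
// new update-time index, keyed by (lastUpdate || nodePubKey). The edge update
// index is populated the same way for every channel edge policy.
func migrateNodeUpdateIndex(tx *bolt.Tx) error {
	nodes := tx.Bucket([]byte("graph-node"))
	if nodes == nil {
		return nil
	}

	nodeUpdateIndex, err := tx.CreateBucketIfNotExists([]byte("node-update-index"))
	if err != nil {
		return err
	}

	return nodes.ForEach(func(pubKey, nodeBytes []byte) error {
		// Skip nested buckets, which have a nil value.
		if nodeBytes == nil {
			return nil
		}

		var key [8 + 33]byte
		binary.BigEndian.PutUint64(key[:8], uint64(nodeLastUpdate(nodeBytes)))
		copy(key[8:], pubKey)

		return nodeUpdateIndex.Put(key[:], nil)
	})
}
```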
In this commit, we add a series of methods, and a new database index
that we'll use to implement the new discovery.ChannelGraphTimeSeries
interface. The primary change is that we now maintain two new
indexes tracking the last update time for each node, and the last update
time for each edge. These two indexes allow us to implement the
NodeUpdatesInHorizon and ChanUpdatesInHorizon methods. The remaining
methods added simply utilize the existing database indexes to allow us to
respond to any peer gossip range queries.
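A sketch of how such a time-prefixed index answers a horizon query (bucket name and key layout illustrative): because each key begins with the big-endian update timestamp, a cursor seek to the start of the horizon followed by a linear scan yields exactly the entries updated within it.

```go
import (
	"bytes"
	"encoding/binary"
	"time"

	bolt "go.etcd.io/bbolt"
)

// updatesInHorizon returns the identifiers (node keys or channel IDs) of all
// index entries whose last update falls within [startTime, endTime].
func updatesInHorizon(tx *bolt.Tx, startTime, endTime time.Time) [][]byte {
	index := tx.Bucket([]byte("node-update-index"))
	if index == nil {
		return nil
	}

	var start, end [8]byte
	binary.BigEndian.PutUint64(start[:], uint64(startTime.Unix()))
	binary.BigEndian.PutUint64(end[:], uint64(endTime.Unix()))

	var hits [][]byte
	cursor := index.Cursor()
	for k, _ := cursor.Seek(start[:]); k != nil && bytes.Compare(k[:8], end[:]) <= 0; k, _ = cursor.Next() {
		// The identifier trails the 8-byte timestamp prefix.
		hits = append(hits, k[8:])
	}

	return hits
}
```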
A set of new unit tests has been added to exercise the added logic.
In this commit, we update the logic in the AuthenticatedGossiper to
ensure that it can properly create, manage, and dispatch messages to any
gossipSyncer instances created by the server.
With this set of changes, the gossiper now has complete knowledge of the
current set of peers we're connected to that support the new range
queries. Upon initial connect, InitSyncState will be called by the
server if the new peer understands the set of gossip queries. This will
then create a new spot in the peerSyncers map for the new syncer. For
each new gossip query message, we'll then attempt to dispatch the
message directly to the gossip syncer. When the peer has disconnected,
we then expect the server to call the PruneSyncState method which will
allow us to free up the resources.
Finally, when we go to broadcast messages, we'll send the messages
directly to the peers that have gossipSyncer instances active, so they
can properly be filtered out. For those that don't, we'll broadcast
directly, ensuring we skip *all* peers that have an active gossip
syncer.
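A sketch of the resulting broadcast split (all types are illustrative stand-ins, not the gossiper's actual API):

```go
import "sync"

// message, peerSyncer, and srv stand in for lnwire messages, the per-peer
// gossip syncer, and the server's broadcast facility.
type message interface{}

type peerSyncer struct{}

// FilterGossipMsgs forwards only the messages that fall within the remote
// peer's advertised update horizon.
func (s *peerSyncer) FilterGossipMsgs(msgs ...message) {}

type srv struct{}

// BroadcastMessage sends msgs to every connected peer except those in skip.
func (s *srv) BroadcastMessage(skip map[[33]byte]struct{}, msgs ...message) {}

type broadcaster struct {
	syncerMtx   sync.RWMutex
	peerSyncers map[[33]byte]*peerSyncer
	server      *srv
}

// broadcast routes announcements through each active syncer, so the peer's
// gossip filter is applied, then broadcasts directly to everyone else while
// skipping *all* peers that have an active syncer.
func (b *broadcaster) broadcast(msgs ...message) {
	b.syncerMtx.RLock()
	defer b.syncerMtx.RUnlock()

	skip := make(map[[33]byte]struct{}, len(b.peerSyncers))
	for peerID, s := range b.peerSyncers {
		skip[peerID] = struct{}{}
		s.FilterGossipMsgs(msgs...)
	}

	b.server.BroadcastMessage(skip, msgs...)
}
```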
In this commit, we introduce a new struct, the gossipSyncer. The role of
this struct is to encapsulate the state machine required to implement
the new gossip query range feature recently added to the spec. With this
change, each peer that knows of this new feature will have a new
goroutine that will be managed by the gossiper.
Once created and started, the gossipSyncer will start to progress
through each possible state, finally ending at the chansSynced stage. In
this stage, it has synchronized state with the remote peer, and is
simply awaiting any new messages from the gossiper to send directly to
the peer. Each message will only be sent if the remote peer has actually
set an update horizon, and the message falls within that
horizon.
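A sketch of that progression; only chansSynced is named above, so the intermediate state names below are illustrative:

```go
// syncerState enumerates the stages the gossipSyncer walks through before it
// settles into steady-state gossip forwarding.
type syncerState uint32

const (
	// syncingChans: send our query for the peer's known channel range.
	syncingChans syncerState = iota

	// waitingQueryRangeReply: collect the peer's replies listing the
	// channel IDs it knows of.
	waitingQueryRangeReply

	// queryNewChannels: request the announcements for the channels we're
	// missing.
	queryNewChannels

	// waitingQueryChanReply: await those announcements.
	waitingQueryChanReply

	// chansSynced: steady state; only forward new gossip that falls
	// within the update horizon the peer has set.
	chansSynced
)
```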
A set of unit tests has been added to ensure that two state machines
properly terminate and synchronize channel state.