In this commit, we introduce a new method to the channel router's config
struct: QueryBandwidth. This method allows the channel router to query
for the up-to-date available bandwidth of a particular link. If the link
emanates from or terminates at us, then we can query the switch to see
if the link is active (if not, its bandwidth is zero), and return the
current best estimate for the available bandwidth of the link. If the
link isn't one of ours, then we thread through the total maximal
capacity of the link.
In order to implement this, the missionControl struct will now query
the switch upon creation to obtain a fresh bandwidth snapshot. We take
care to do this in a distinct db transaction in order to not introduce
a circular waiting condition between the mutexes in bolt and the
channel state machine.
The aim of this change is to reduce the number of unnecessary failures
during HTLC payment routing as we'll now skip any links that are
inactive, or just don't have enough bandwidth for the payment. Nodes
that have several hundred channels (all of which are in various states
of activity and available bandwidth) should see a nice gain from this
w.r.t. payment latency.
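
A minimal sketch of the decision logic, with a hypothetical Link
interface standing in for the switch's link type:

    package sketch

    // Link is a hypothetical stand-in for a link managed by the switch.
    type Link interface {
        EligibleToForward() bool
        Bandwidth() int64 // in millisatoshis
    }

    // queryBandwidth mirrors the logic above: for one of our own links,
    // return the switch's live estimate (zero if inactive); for a
    // foreign link, thread through its total capacity.
    func queryBandwidth(ourLink Link, capacityMSat int64) int64 {
        if ourLink == nil {
            // Not one of our links: report the total maximal capacity.
            return capacityMSat
        }
        if !ourLink.EligibleToForward() {
            // The link is inactive, so its usable bandwidth is zero.
            return 0
        }
        return ourLink.Bandwidth()
    }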
This commit adds a simple scheduling mechanism for
resolving potential deadlocks when dropping a stale
connection (via pubkey inspection).
Ideally, we'd like to wait to activate a new peer until
the previous one has exited entirely. However, the current
logic attempts to disconnect (and wait) until the peer
has been cleaned up fully, which can result in
deadlocks with other portions of the codebase, since
other blocking methods may also need to acquire the
mutex before the peer can exit.
When existing connections are replaced, they now
schedule a callback that is executed inside the
peerTerminationWatcher. Since the watcher waits for
the clean exit of the prior peer, this callback
executes with a clean slate: it adds the new peer to
the server's maps and invokes the peer's Start() method.
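
A pared-down sketch of the scheduling pattern (field and method names
here are illustrative, not the exact server API):

    package sketch

    import "sync"

    // server is a pared-down stand-in for lnd's server struct.
    type server struct {
        mu sync.Mutex

        // scheduledPeerConnection maps a peer's pubkey to a callback
        // that the termination watcher runs once the old peer exits.
        scheduledPeerConnection map[string]func()
    }

    // replacePeer schedules the new peer's activation instead of
    // blocking until the stale peer is torn down, which could deadlock
    // with other methods contending for s.mu.
    func (s *server) replacePeer(pubKey string, start func()) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.scheduledPeerConnection[pubKey] = start
    }

    // peerTerminationWatcher runs after a peer has fully exited; with
    // a clean slate, it fires any callback scheduled for that pubkey.
    func (s *server) peerTerminationWatcher(pubKey string) {
        s.mu.Lock()
        cb := s.scheduledPeerConnection[pubKey]
        delete(s.scheduledPeerConnection, pubKey)
        s.mu.Unlock()

        if cb != nil {
            cb() // adds the peer to the maps and calls Start()
        }
    }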
This skips creating errChans when sending messages to
peers during broadcast. This should be a minor memory
optimization, and it avoids channel sends on error
channels that would never be read.
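
A minimal sketch of the change, with hypothetical peer and message
types:

    package sketch

    // message and peer are stand-ins for lnwire.Message and the
    // server's peer type.
    type message interface{}

    type peer interface {
        // queueMsg enqueues msg; errChan, if non-nil, receives the
        // send result. A nil errChan skips the allocation entirely.
        queueMsg(msg message, errChan chan error)
    }

    // broadcast fans msg out to all peers. No caller waits on the
    // per-send result during a broadcast, so we pass a nil errChan.
    func broadcast(peers []peer, msg message) {
        for _, p := range peers {
            p.queueMsg(msg, nil)
        }
    }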
In this commit, we ensure that any time we send a TempChannelFailure
destined for a multi-hop source sender, we'll always package the
latest channel update along with it.
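
Roughly, the failure is now constructed with the update attached;
fetchLatestUpdate below is a hypothetical lookup against our local
graph state:

    package sketch

    import "github.com/lightningnetwork/lnd/lnwire"

    // tempChanFailure packages our freshest ChannelUpdate for the
    // failed link into the TemporaryChannelFailure sent back upstream,
    // letting the multi-hop sender refresh its view of our policy.
    func tempChanFailure(
        fetchLatestUpdate func() (*lnwire.ChannelUpdate, error),
    ) (lnwire.FailureMessage, error) {

        update, err := fetchLatestUpdate()
        if err != nil {
            return nil, err
        }
        return lnwire.NewTemporaryChannelFailure(update), nil
    }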
This commit makes the server populate the ChainArbitrator's
ContractBreach method with a closure that will reliably hand off the
breach event to the breachArbiter. The server will now forward the
breach event to the breachArbiter, and only let the closure return a
nil error once the breachArbiter ACKs this event.
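
A sketch of the handoff/ACK pattern (channel names are illustrative):

    package sketch

    import "errors"

    // breachEvent stands in for the contract breach details handed
    // from the ChainArbitrator to the breachArbiter.
    type breachEvent struct{}

    // contractBreach is the shape of the closure the server plugs into
    // the ChainArbitrator: it forwards the event to the breachArbiter
    // and only returns a nil error once the arbiter has ACK'd it.
    func contractBreach(newEvents chan<- breachEvent,
        ack <-chan struct{}, quit <-chan struct{},
        ev breachEvent) error {

        // Hand the event off to the breachArbiter.
        select {
        case newEvents <- ev:
        case <-quit:
            return errors.New("server shutting down")
        }

        // Block until the breachArbiter acknowledges receipt, so the
        // caller only treats the breach as handled after the handoff
        // is durable.
        select {
        case <-ack:
            return nil
        case <-quit:
            return errors.New("server shutting down")
        }
    }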
In this commit, we fix a minor logging bug introduced in a prior commit.
Before we would directly modify the *net.TCPAddr that was a part of the
brontide connection. This achieved our goal, but would print weird log
messages as we mutated the port in the already established connection.
In this commit, we fix that by ensuring we create a copy iff it's a
net.TCPAddr, then modify that and replace the instance in the
lnwire.NetAddress.
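
The fix boils down to something like the following (helper name is
illustrative):

    package sketch

    import "net"

    // withPort returns addr with its port rewritten. For *net.TCPAddr
    // we copy the struct first: mutating the address in place would
    // also change what gets logged for the already-established
    // connection.
    func withPort(addr net.Addr, port int) net.Addr {
        tcpAddr, ok := addr.(*net.TCPAddr)
        if !ok {
            return addr
        }
        addrCopy := *tcpAddr
        addrCopy.Port = port
        return &addrCopy
    }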
Fixes #991.
This commit changes the behavior of our connection
reestablishment, and resolves some minor issues that
could lead to uncancelled requests or an infinite
connection loop.
- Will not attempt to Remove connection requests with
an ID of 0. This can happen for reconnect attempts
that get scheduled, but have not started at the
time the server cancels the connection requests.
- Adds a per-peer cancellation channel, that is
  closed upon a successful inbound or outbound
  connection. The goroutine spawned to handle the
  reconnect by the peerTerminationWatcher now
  selects on this channel, and skips reconnecting
  if it is closed before the backoff matures (see
  the sketch after this list).
- Properly computes the backoff when no entry in
  persistentPeersBackoff is found. Previously, a
  value of 0 would be returned, causing all
  subsequent backoff attempts to use a backoff of 0.
- Cancels a peer's retries and removes connections
  immediately after receiving an inbound connection,
  to mimic the structure of OutboundPeerConnected.
- Cancels all persistent connection requests after
calling DisconnectPeers.
- Allows additional connection attempts to peers, even
  if there already exists a pending connection attempt.
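
A sketch of the cancellation select mentioned above (the connMgr hook
is simplified):

    package sketch

    import "time"

    // connMgr is a simplified stand-in for the connection manager.
    type connMgr interface {
        Connect(pubKey string)
    }

    // retryAfterBackoff models the goroutine spawned by the
    // peerTerminationWatcher: it waits out the backoff, but aborts if
    // the per-peer cancel channel closes first (i.e. a new connection
    // was already established).
    func retryAfterBackoff(cm connMgr, pubKey string,
        backoff time.Duration, cancel <-chan struct{}) {

        select {
        case <-time.After(backoff):
            cm.Connect(pubKey)
        case <-cancel:
            // A fresh connection succeeded before the backoff
            // matured; skip the reconnect attempt entirely.
        }
    }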
In this commit, we fix an existing bug within the codebase: if a peer
connected to us inbound, then we'd attempt to use the assigned port when
re-establishing a connection to them. We fix this issue in this commit
by adding a new method to look up any advertisements for the peer, and
use the specified port that matches our connection attempt. If we can't
find a proper advertisement, then we'll simply use the default peer
port.
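
A simplified sketch of the selection (the real lookup consults the
peer's latest node announcement):

    package sketch

    import "net"

    // advertisedAddr prefers an advertised address whose host matches
    // our connection target; failing that, it falls back to the
    // default peer port rather than the peer's ephemeral inbound port.
    func advertisedAddr(target net.IP, advertised []*net.TCPAddr,
        defaultPort int) *net.TCPAddr {

        for _, addr := range advertised {
            if addr.IP.Equal(target) {
                return addr
            }
        }
        return &net.TCPAddr{IP: target, Port: defaultPort}
    }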
In this commit, we modify the storage location of the sphinx replay
database to be under the precise network directory, and not only the
graph sub-directory. Before this commit, due to the usage of
filepath.Dir(), the db would lie under /graph/, rather than, say,
/graph/simnet.
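
Illustratively (the filename shown is indicative), the path is now
built from the network-specific directory instead of its parent:

    package sketch

    import "path/filepath"

    // replayDBPath shows the corrected layout: the sphinx replay db
    // lives under the network-specific graph directory, e.g.
    // <datadir>/graph/simnet/sphinxreplay.db instead of
    // <datadir>/graph/sphinxreplay.db.
    func replayDBPath(graphDir, network string) string {
        return filepath.Join(graphDir, network, "sphinxreplay.db")
    }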
This commit adds a backoff policy to the peer termination
watcher to avoid getting stuck in tight connection loops
with failing peers. The maximum backoff is now set to 128s,
and each backoff is randomized so that two instances using
the same algorithm have some hope of desynchronizing.
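
A sketch of the policy; the exact jitter scheme here is illustrative:

    package sketch

    import (
        "math/rand"
        "time"
    )

    const maximumBackoff = time.Second * 128

    // nextBackoff doubles the previous backoff, caps it at 128s, and
    // randomizes the result so two nodes running the same algorithm
    // have some hope of desynchronizing their reconnect attempts.
    func nextBackoff(prev time.Duration) time.Duration {
        backoff := prev * 2
        if backoff <= 0 {
            backoff = time.Second
        }
        if backoff > maximumBackoff {
            backoff = maximumBackoff
        }

        // Apply +/-25% jitter around the computed backoff.
        margin := int64(backoff / 4)
        jitter := time.Duration(rand.Int63n(2*margin) - margin)
        return backoff + jitter
    }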
This commit adds the `lnnet` package which contains an
implementation of the newly created LightningNet interface which
multiplexes the Dial and DNS-related functions to use net
by default and torsvc if a flag is specified. This modularization
makes for cleaner code.
This commit adds a new interface named NetInterface and two
implementations of it: RegularNet & TorProxyNet. These two structs
are used in config.go in an attempt to clean up the code and
abstract away the dialer and DNS functions.
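
The shape of the abstraction looks roughly like this (the method set
shown is indicative, not the exact interface):

    package sketch

    import (
        "net"
        "time"
    )

    // netInterface abstracts dialing and DNS so config.go can pick a
    // clearnet (RegularNet) or Tor-proxied (TorProxyNet) backend once,
    // and the rest of the code stays agnostic to the transport.
    type netInterface interface {
        Dial(network, address string,
            timeout time.Duration) (net.Conn, error)
        LookupHost(host string) ([]string, error)
        LookupSRV(service, proto, name string) (string, []*net.SRV,
            error)
    }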
This commit adds a new module named 'torsvc' which houses all Tor
functionality in an attempt to isolate it and make it reusable in
other projects. Some additional tweaks were made to config.go and
to the bootstrapper.
This commit adds Tor support. Users can set the --TorSocks flag
to specify which port Tor's SOCKS5 proxy is listening on so that
lnd can connect to it. When this flag is set, ALL traffic gets
routed over Tor including DNS traffic. Special functions for
DNS lookups were added, and since Tor doesn't natively support
SRV requests, the proxySRV function connects us to a DNS server
via Tor so that SRV requests can be issued directly to the DNS
server.
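
The core of the proxied dial path looks roughly like this, using
golang.org/x/net/proxy (address values are examples):

    package sketch

    import (
        "net"

        "golang.org/x/net/proxy"
    )

    // dialViaTor dials the destination through Tor's SOCKS5 proxy,
    // e.g. torSocks = "127.0.0.1:9050" as given by --TorSocks. The
    // proxy performs the resolution, so no DNS leaks locally.
    func dialViaTor(torSocks, address string) (net.Conn, error) {
        dialer, err := proxy.SOCKS5("tcp", torSocks, nil, proxy.Direct)
        if err != nil {
            return nil, err
        }
        return dialer.Dial("tcp", address)
    }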
Co-authored-by: MeshCollider <dobsonsa68@gmail.com>