In this commit, we modify the peer's writeMessage
method to properly handle errors returned from
encoding an lnwire message and from setting the
write deadline on the connection. Since an error
would likely result in an empty byte slice, the
worst case seems to be that we may have tried to
send an empty message on the wire.
Lastly, we correct the way we compute bytes sent
on the wire to properly count the number of bytes
*written*, and not just the length of the encoded
message.
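A minimal sketch of the hardened write path, assuming illustrative names for the connection, byte counter, and timeout (conn, bytesSent, writeMessageTimeout are stand-ins, not the exact identifiers in peer.go):
```
func (p *peer) writeMessage(msg lnwire.Message) error {
	var buf bytes.Buffer

	// Surface encoding failures instead of silently writing an empty
	// payload to the wire.
	if _, err := lnwire.WriteMessage(&buf, msg, 0); err != nil {
		return err
	}

	// Likewise surface any failure to arm the write deadline.
	deadline := time.Now().Add(writeMessageTimeout)
	if err := p.conn.SetWriteDeadline(deadline); err != nil {
		return err
	}

	// Count the bytes actually written, not the encoded length.
	n, err := p.conn.Write(buf.Bytes())
	atomic.AddUint64(&p.bytesSent, uint64(n))
	return err
}
```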
In this commit, we remove the per channel `sigPool` within the
`lnwallet.LightningChannel` struct. With this change, we ensure that as
the number of channels grows, the number of goroutines idling in the
sigPool stays constant. Currently on the daemon, most channels are
likely inactive, with only a handful consistently carrying out channel
updates. As a result, this change should reduce idle CPU usage, as we
have fewer goroutines sitting in select loops.
In order to make this change, the `SigPool` itself has been publicly
exported such that outside callers can make a `SigPool` and pass it into
newly created channels. Since the sig pool now lives outside the
channel, we were also able to do away with the Stop() method on the
channel altogether.
Finally, the server is the sub-system that is currently responsible for
managing the `SigPool` within lnd.
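Roughly, the new wiring in the server looks like the sketch below; the constructor arguments and field names are assumptions and may not match the exact signatures:
```
// One SigPool for the whole daemon, sized by CPU count rather than by
// the number of channels.
sigPool := lnwallet.NewSigPool(runtime.NumCPU(), cc.signer)
if err := sigPool.Start(); err != nil {
	return err
}

// Every channel that is opened or restored afterwards shares this pool
// instead of spinning up its own worker goroutines.
channel, err := lnwallet.NewLightningChannel(cc.signer, sigPool, dbChannel)
```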
This commit makes AddNewChannel expect an OpenChannel instead of a
LightningChannel struct. This moves the responsibility for starting the
LightningChannel from the fundingmanager to the peer, and we can defer
the channel restoration until we know that the channel is not already
active.
In this commit, we also check ErrEdgeNotFound when attempting to send an
active/inactive channel update for a channel to the network. We do this
as it's possible that a channel has confirmed, but it still does not
meet the required number of confirmations to be publicly announced.
In this commit, we fix a small bug with regards to the persistent peer
connection pruning logic. Before this commit, we'd prune a persistent
connection to a peer if all of its links happened to be inactive. This
isn't ideal, as the channels are still open, so we should always be
attempting to connect to the peer. We fix this by looking at the
set of channels on-disk instead and prune the persistent connection if
there aren't any.
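A sketch of the new check, with illustrative names for both the DB lookup and the pruning helper:
```
// Only prune the persistent connection when no channels with this peer
// remain on disk; inactive links alone are no longer a reason to drop it.
chans, err := s.chanDB.FetchOpenChannels(peerPubKey)
if err != nil {
	return err
}
if len(chans) == 0 {
	s.prunePersistentPeerConnection(peerPubKey)
}
```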
This commit moves the gossip sync dispatch
such that it is more tightly coupled to the
life cycle of the peer. In testing, I noticed
that the gossip syncer needs to be dispatched
before the first gossip messages come across
the wire.
The prior spawn location in the server happens
after starting all of the peer's goroutines,
which could permit an ordering where the
gossip syncer has not yet been registered.
The new location registers the gossip syncer
within the read handler, such that the call
blocks before any messages are read.
In this commit, we add a quit channel to the AddMsg method of the
msgStream struct. Before this commit, if the queue was full, the
readHandler would block and be unable to exit. We remedy this by
leveraging the existing quit channel of the peer as an additional select
case within the AddMsg method.
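Conceptually (simplifying the internal queue down to a channel; the real queue differs), the guarded enqueue looks like:
```
// AddMsg enqueues a message for the stream's consumer, but gives up if
// the peer is shutting down so the readHandler can always exit.
func (ms *msgStream) AddMsg(msg lnwire.Message) {
	select {
	case ms.msgs <- msg:
	case <-ms.peer.quit:
	}
}
```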
In this commit, we thread the peer's quit channel through to the
execution of the apply function for a msgStream. This change ensures
that if the target is still processing the message, then the peer is
able to exit cleanly and not block indefinitely.
In this commit, we move the atomic var increment that signals the
consumer goroutine has exited into a defer statement at the top of the
method. This cleans up some duplicate code and also adheres to the
pattern of using defers to clean up any dependent goroutine state on
exit.
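The resulting shape of the consumer, with the queue simplified down to a channel for illustration:
```
func (ms *msgStream) msgConsumer() {
	// Signal on every exit path that the consumer goroutine has
	// finished, rather than repeating the store before each return.
	defer atomic.StoreInt32(&ms.streamShutdown, 1)

	for {
		select {
		case msg := <-ms.msgs:
			ms.apply(msg)
		case <-ms.quit:
			return
		}
	}
}
```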
In this commit, we raise the readHandler wait group done into a defer
statement at the top of the method. This fixes an existing bug that
would cause the readHandler to declare it had exited, yet possibly still
be waiting on the chan message stream below to exit.
This commit fixes a bug that would cause us to fetch our peer's
ChannelUpdate in some cases, where we really wanted to fetch our own.
The reason this happened was that we passed the peer's pubkey to
fetchLastChanUpdate, making us match on their policy. This would lead to
ChannelUpdates being sent during routing which would have no effect on
the attempted path.
We fix this by always using our own pubkey in fetchLastChanUpdate, and
by using the common methods within the server to extract the update even
when only one policy is known.
This commit fixes a bug within the peer, where we would always use the
default forwarding policy when adding new links to the switch. Mostly
this wasn't a problem, as we most often use the default values for new
channels, but the min_htlc value is set by the remote, not us.
If the remote specified a custom min_htlc during channel creation, we
would still use our DefaultMinHtlc value when first adding the link,
making us try to forward HTLCs that would be rejected by the remote.
We fix this by getting the required min_htlc value from the channel
state machine, and setting this for the link's forwarding policy.
Note that on restarts we will query the database for our latest
ChannelEdgePolicy, and use that to craft the forwardingPolicy. This
means that the value will be set correctly if the policy is found in the
database.
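A sketch of how the link's policy is now assembled; the helper is hypothetical, while the real code reads the constraint from the channel state machine (or from the stored ChannelEdgePolicy after a restart):
```
// forwardingPolicyForChannel starts from our defaults but honors the
// minimum HTLC value the remote required when the channel was created.
func forwardingPolicyForChannel(defaults htlcswitch.ForwardingPolicy,
	remoteMinHTLC lnwire.MilliSatoshi) htlcswitch.ForwardingPolicy {

	policy := defaults
	policy.MinHTLC = remoteMinHTLC
	return policy
}
```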
Sometimes when performing an initial sync, the remote
node isn't able to pull messages off the wire because
long-running tasks have saturated its queues. With
a shorter write timeout, we will give up trying to send
messages and teardown the connection, even though the
peer is still active.
This commit adds additional synchronization logic to
WaitForDisconnect, such that it can be spawned before
Start has been executed by the server. Without
modification, the current version will return
immediately since no goroutines will have been
spawned.
To solve this, we modify WaitForDisconnect to block until:
1) the peer is disconnected,
2) the peer is successfully started,
before watching the waitgroup.
In the first case, the waitgroup will block until all
(if any) spawned goroutines have exited. Otherwise, if
Start is successful, we can switch to watching the
waitgroup, knowing that the waitgroup counter is positive.
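Sketched with an illustrative startReady signal (the name is an assumption):
```
// WaitForDisconnect blocks until the peer has disconnected. It is safe
// to call before Start has completed: we only begin watching the
// waitgroup once the peer has either started successfully (startReady
// closed at the end of Start) or been disconnected (quit closed).
func (p *peer) WaitForDisconnect() {
	select {
	case <-p.startReady:
	case <-p.quit:
	}
	p.wg.Wait()
}
```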
In this commit, we modify the existing message sending functionality
within the fundingmanager. Because each message send requires holding
the server's lock to retrieve the peer, we might run into a case where
the lock is held for longer than usual, blocking the fundingmanager from
sending the message. We remedy this
by taking a similar approach to some recent changes within the gossiper.
We now keep track of each peer within the internal fundingmanager
messages and send messages directly to them.
In this commit, we add a timeout within the writeMessage method when we go to write to the socket. We do this as otherwise, if the other peer is blocked for some reason, we'll never actually unblock ourselves, which may cause issues in other sub-systems waiting on this write call. For now, we use a value of 10 seconds, and will adjust in the future if we deem this time period too short.
In this commit, we move the block height dependency from the links in
the switch to the switch itself. This is possible due to a recent change
where links no longer depend on the block height to update their
commitment fees.
We'll now only have the switch be alerted of new blocks coming in and
links will retrieve the height from it atomically.
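A rough sketch of the new dependency direction, with illustrative names:
```
// The switch records the best height as block notifications arrive...
func (s *Switch) updateBestHeight(height uint32) {
	atomic.StoreUint32(&s.bestHeight, height)
}

// ...and links simply read it atomically when they need it, instead of
// maintaining their own block subscriptions.
func (s *Switch) BestHeight() uint32 {
	return atomic.LoadUint32(&s.bestHeight)
}
```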
In this commit, we modify the behavior of links updating their
commitment fees. Rather than attempting to update the commitment fee for
each link every time a new block comes in, we'll use a timer with a
random interval between 10 and 60 minutes for each link to determine
when to update their corresponding commitment fee. This prevents us from
oscillating the fee rate for our various commitment transactions.
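The per-link timer is along these lines (a sketch using math/rand and the bounds above):
```
// Pick a random interval in [10m, 60m) so links don't all re-evaluate
// their commitment fee at the same time, or on every block.
minInterval := 10 * time.Minute
maxInterval := 60 * time.Minute
interval := minInterval + time.Duration(rand.Int63n(int64(maxInterval-minInterval)))
updateFeeTimer := time.NewTimer(interval)
```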
In this commit we fix an existing bug which could cause internal state
inconsistency between the switch, funding manager, and the peer. Before
this commit, we would _always_ add a new channel to the channelManager.
However, due to recent logic changes, it may be the case that this isn't the
channel that will ultimately reside in the link. As a result, we would
be unable to process incoming FundingLocked messages properly, as we
would mutate the incorrect channel in memory.
We remedy this by deferring the insertion of the new channel into the
activeChannels map until the end of the loadActiveChannels method, where
we know that this will be the link that persists.
In this commit, we fix a recently introduced bug. The issue is that
while we're failing the link, the peer we're attempting to force close
on may disconnect. As a result, if the peerTerminationWatcher exits
before we can add to the wait group (it's waiting on that), then we'll
run into a panic as we're attempting to increment the wait group while
another goroutine is calling wait.
The fix is to first check that the server isn't shutting down, and then
use the server's wait group rather than the peer to synchronize
goroutines.
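The guarded version looks roughly like this (a sketch, with s standing in for the server):
```
// Bail out early if the server is already shutting down.
select {
case <-s.quit:
	return
default:
}

// Track the cleanup on the server's waitgroup rather than the peer's,
// whose waitgroup may already be in Wait() via the termination watcher.
s.wg.Add(1)
go func() {
	defer s.wg.Done()

	// ... remove the link and force close the channel ...
}()
```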
Fixes #1285.
This commit makes the peer aware of the LinkFailureErrors that can
happen during link operation, and makes it start a goroutine to
properly remove the link and force close the channel.
This commit attempts to resolve some potential deadlock
scenarios during a peer disconnect.
Currently, writeMessage returns a nil error when disconnecting.
This should have minimal impact on the writeHandler, as the
subsequent loop selects on the quit chan, and will cause it to
exit. However, if this happens when sending the init message,
the Start() method will attempt to proceed even though the peer
has been disconnected.
In addition, this commit changes the behavior of synchronous
write errors, by using a non-blocking select. Though unlikely,
this prevents any cases where multiple errors are returned, and
the errors are not being pulled from the other side of the errChan.
This prevents any naked sends on the errChan from stalling the peer's
shutdown.
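The synchronous error path now uses a non-blocking send, roughly:
```
// If nobody is draining errChan anymore (e.g. a second error for the
// same message), don't let the send stall the peer's shutdown.
select {
case errChan <- err:
default:
}
```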
In this commit, we ensure that any time we send a TempChannelFailure
that's destined for a multi-hop source sender, then we'll always package
the latest channel update along with it.
In this commit, we fix a bug that could at times cause a deadlock when a
peer is attempting to disconnect. The issue was that when a peer goes to
disconnect, it needs to stop any active msgStream instances. The Stop()
method of the msgStream would block until an atomic variable was set to
indicate that the stream had fully exited. However, in the case that we
disconnected lower in the msgConsumer loop, we would never set the
streamShutdown variable, meaning that msgStream.Stop() would never
unblock.
The fix for this is simple: set the streamShutdown variable within the
quit case of the second select statement in the msgConsumer goroutine.
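Sketching the second select of the msgConsumer goroutine, with the queue simplified to a channel for illustration:
```
select {
case msg := <-ms.msgs:
	ms.apply(msg)

case <-ms.quit:
	// Mark the stream as fully shut down on this exit path as well,
	// so that Stop() waiting on streamShutdown can never block forever.
	atomic.StoreInt32(&ms.streamShutdown, 1)
	return
}
```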
In this commit, we make a very small change to the way writing messages
works in the peer, which should have large implications w.r.t. reducing
memory usage amongst chatty nodes.
When profiling the heap on one of my nodes earlier, I noticed this
fragment:
```
Showing top 20 nodes out of 68
flat flat% sum% cum cum%
0 0% 0% 75.53MB 54.61% main.(*peer).writeHandler
75.53MB 54.61% 54.61% 75.53MB 54.61% main.(*peer).writeMessage
```
Which points to an inefficiency with the way we handle allocations when
writing new messages; drilling down further, we see:
```
(pprof) list writeMessage
Total: 138.31MB
ROUTINE ======================== main.(*peer).writeMessage in /root/go/src/github.com/lightningnetwork/lnd/peer.go
75.53MB 75.53MB (flat, cum) 54.61% of Total
. . 1104: p.logWireMessage(msg, false)
. . 1105:
. . 1106: // As the Lightning wire protocol is fully message oriented, we only
. . 1107: // allows one wire message per outer encapsulated crypto message. So
. . 1108: // we'll create a temporary buffer to write the message directly to.
75.53MB 75.53MB 1109: var msgPayload [lnwire.MaxMessagePayload]byte
. . 1110: b := bytes.NewBuffer(msgPayload[0:0:len(msgPayload)])
. . 1111:
. . 1112: // With the temp buffer created and sliced properly (length zero, full
. . 1113: // capacity), we'll now encode the message directly into this buffer.
. . 1114: n, err := lnwire.WriteMessage(b, msg, 0)
(pprof) list writeHandler
Total: 138.31MB
ROUTINE ======================== main.(*peer).writeHandler in /root/go/src/github.com/lightningnetwork/lnd/peer.go
0 75.53MB (flat, cum) 54.61% of Total
. . 1148:
. . 1149: // Write out the message to the socket, closing the
. . 1150: // 'sentChan' if it's non-nil, The 'sentChan' allows
. . 1151: // callers to optionally synchronize sends with the
. . 1152: // writeHandler.
. 75.53MB 1153: err := p.writeMessage(outMsg.msg)
. . 1154: if outMsg.errChan != nil {
. . 1155: outMsg.errChan <- err
. . 1156: }
. . 1157:
. . 1158: if err != nil {
```
Ah hah! We create a _new_ buffer each time we want to write a message
out. This is unnecessary and _very_ wasteful (as seen by the profile).
The fix is simple: re-use a buffer unique to each peer when writing out
messages. Since we know what the max message size is, we just allocate
one of these 65KB buffers for each peer, and keep it around until the
peer is removed.
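The reuse looks roughly like this (simplified; the real method also handles the write deadline and byte accounting):
```
type peer struct {
	conn net.Conn

	// writeBuf is allocated once per peer and reused for every
	// outgoing message; it's sized to the largest possible payload.
	writeBuf [lnwire.MaxMessagePayload]byte
}

func (p *peer) writeMessage(msg lnwire.Message) error {
	// Slice the persistent buffer to zero length, full capacity, and
	// encode the message directly into it.
	b := bytes.NewBuffer(p.writeBuf[0:0:len(p.writeBuf)])
	if _, err := lnwire.WriteMessage(b, msg, 0); err != nil {
		return err
	}

	_, err := p.conn.Write(b.Bytes())
	return err
}
```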
In this commit, we follow up to the prior commit by ensuring we won't
accept a co-op close request for a channel with active HTLCs. When
creating a chanCloser for the first time, we'll check the set of HTLCs
and reject the request (by sending a wire error) if the target channel
still has active HTLCs.
In this commit, we fix a minor deviation in our implementation from the
specification. Before if we encountered an unknown error type, we would
disconnect the peer. Instead, we’ll now just continue along parsing the
remainder of the messages. This flared up recently due to some
c-lightning related incompatibilities that emerged on mainnet.
In this commit, we fix a goroutine leak that could occur if, while
loading channels, an error occurred in any of the steps after we created
the channel object but before it was actually loaded into the switch. If
an error occurs at any step, we ensure that we’ll stop the channel.
Otherwise, the sigPool goroutines would still be lingering and never be
stopped.
This commit adds a set used to track channels we consider failed. This
is done to ensure we don't end up in a connect/disconnect loop when we
attempt to re-sync the channel state of a failed channel with a peer.
In this commit, we remove the DecodeHopIterator method from the
ChannelLinkConfig struct. We do this as we no longer use this method,
since we only ever use the DecodeHopIterators method now.
In this commit, we modify the msgStream struct to ensure that it has a
cap on the number of messages it’ll buffer. Currently we have two
msgStream structs per peer: the first for the discovery messages, and
the second for any messages that modify channel state. Due to
inefficiencies in the current protocol for reconciling graph state upon
connection (just dump the entire damn thing), when a node first starts
up, this can lead to very high memory usage as all peers will
concurrently send their initial message dump which can be in the
thousands of messages on testnet.
Our fix is simple: make the message stream into a _bounded_ message
stream. The newMsgStream function now has a new argument: bufSize.
Internally, we’ll take this bufSize and create more or less an internal
semaphore for the producer. Each time the producer gets a new message,
it’ll try to read an item from the channel. If the queue still has
room, then this will succeed immediately. If not, then we’ll block
until the consumer actually finishes processing a message and then
signals by sending a new item into the channel.
We choose an initial value of 1000. This was chosen as there’s already
a max limit of outstanding adds on the commitment, and a value of 1000
should allow any incoming messages to be safely flushed and processed
by the gossiper.
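A sketch of the bounded stream, with the internal queue simplified to a channel for illustration; the semaphore mechanics are the part that matters:
```
type msgStream struct {
	producerSema chan struct{}
	msgs         chan lnwire.Message
}

func newMsgStream(bufSize uint32) *msgStream {
	ms := &msgStream{
		producerSema: make(chan struct{}, bufSize),
		msgs:         make(chan lnwire.Message, bufSize),
	}
	// Seed the semaphore with bufSize slots.
	for i := uint32(0); i < bufSize; i++ {
		ms.producerSema <- struct{}{}
	}
	return ms
}

// AddMsg blocks once bufSize messages are outstanding, back-pressuring
// the readHandler instead of buffering without bound.
func (ms *msgStream) AddMsg(msg lnwire.Message) {
	<-ms.producerSema
	ms.msgs <- msg
}

// msgConsumer returns a slot to the producer only after a message has
// been fully processed.
func (ms *msgStream) msgConsumer(apply func(lnwire.Message)) {
	for msg := range ms.msgs {
		apply(msg)
		ms.producerSema <- struct{}{}
	}
}
```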