routing: rewrite package to conform to BOLT07 and factor in fees+timelocks

This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.

As a major step towards a more realistic path finding algorithm, fees are
now properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but the finalized route
would afterwards exceed the capacity of a single link, the route is
marked as invalid.

Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest total time
lock. Fee calculation also isn't finalized, since we aren't yet using
milli-satoshi throughout the daemon. The final TODO item within the PR is
to perform a proper multi-path search and rank the results based on a
summation heuristic, rather than simply returning the first (out of many)
route found.

On the server side, once nodes are initially connected to the daemon, our
routing table will be synced with the peer's using a naive "just send
everything" scheme to hold us over until I spec out a more efficient
graph reconciliation protocol. Additionally, the routing table is now
pruned by the channel router itself once new blocks arrive, rather than
depending on peers to tell us when a channel flaps or is closed.

Finally, validation of peer announcements isn't yet fully implemented; it
will be handled within the pending discovery package, which was blocking
on the completion of this package. Most of the routing message processing
will be moved out of this package and into the discovery package, where
full validation will be carried out.

package routing

import (
	"bytes"
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
	"time"

	"github.com/btcsuite/btcd/btcec"
	"github.com/btcsuite/btcd/wire"
	"github.com/btcsuite/btcutil"
	"github.com/davecgh/go-spew/spew"
	"github.com/go-errors/errors"
	sphinx "github.com/lightningnetwork/lightning-onion"
	"github.com/lightningnetwork/lnd/amp"
	"github.com/lightningnetwork/lnd/batch"
	"github.com/lightningnetwork/lnd/channeldb"
	"github.com/lightningnetwork/lnd/channeldb/kvdb"
	"github.com/lightningnetwork/lnd/clock"
	"github.com/lightningnetwork/lnd/htlcswitch"
	"github.com/lightningnetwork/lnd/input"
	"github.com/lightningnetwork/lnd/lntypes"
	"github.com/lightningnetwork/lnd/lnwallet"
	"github.com/lightningnetwork/lnd/lnwallet/chanvalidate"
	"github.com/lightningnetwork/lnd/lnwire"
	"github.com/lightningnetwork/lnd/multimutex"
	"github.com/lightningnetwork/lnd/record"
	"github.com/lightningnetwork/lnd/routing/chainview"
	"github.com/lightningnetwork/lnd/routing/route"
	"github.com/lightningnetwork/lnd/routing/shards"
	"github.com/lightningnetwork/lnd/ticker"
	"github.com/lightningnetwork/lnd/zpay32"
)

const (
	// DefaultPayAttemptTimeout is the default payment attempt timeout.
	// The payment attempt timeout defines the duration after which we
	// stop trying more routes for a payment.
	DefaultPayAttemptTimeout = time.Duration(time.Second * 60)

	// DefaultChannelPruneExpiry is the default duration used to determine
	// if a channel should be pruned or not.
	DefaultChannelPruneExpiry = time.Duration(time.Hour * 24 * 14)

	// DefaultFirstTimePruneDelay is the time we'll wait after startup
	// before attempting to prune the graph for zombie channels. We don't
	// do it immediately after startup to allow lnd to start up without
	// getting blocked by this job.
	DefaultFirstTimePruneDelay = 30 * time.Second

	// defaultStatInterval governs how often the router will log non-empty
	// stats related to processing new channels, updates, or node
	// announcements.
	defaultStatInterval = time.Minute

	// MinCLTVDelta is the minimum CLTV value accepted by LND for all
	// timelock deltas. This includes both forwarding CLTV deltas set on
	// channel updates, as well as final CLTV deltas used to create BOLT 11
	// payment requests.
	//
	// NOTE: For payment requests, BOLT 11 stipulates that a final CLTV
	// delta of 9 should be used when no value is decoded. This however
	// leads to inflexibility in upgrading this default parameter, since it
	// can create inconsistencies around the assumed value between sender
	// and receiver. Specifically, if the receiver assumes a higher value
	// than the sender, the receiver will always see the received HTLCs as
	// invalid due to their timelock not meeting the required delta.
	//
	// We skirt this by always setting an explicit CLTV delta when creating
	// invoices. This allows LND nodes to freely update the minimum without
	// creating incompatibilities during the upgrade process. For some time
	// LND has used an explicit default final CLTV delta of 40 blocks for
	// bitcoin (160 for litecoin), though we now clamp the lower end of
	// this range for user-chosen deltas to 18 blocks to be conservative.
	MinCLTVDelta = 18
)

var (
	// ErrRouterShuttingDown is returned if the router is in the process of
	// shutting down.
	ErrRouterShuttingDown = fmt.Errorf("router shutting down")
)
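
// exceedsLinkCapacity is an illustrative sketch (not part of the original
// file) of the capacity check described in the commit message above: a path
// that fits within a link's capacity before fees are applied may no longer
// fit once the fee for that hop is added on top, in which case the route
// must be marked as invalid. The function and parameter names here are
// hypothetical.
func exceedsLinkCapacity(amtToForward, hopFee,
	linkCapacity lnwire.MilliSatoshi) bool {

	// The link must be able to carry the forwarded amount plus the fee
	// charged for forwarding it.
	return amtToForward+hopFee > linkCapacity
}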

// ChannelGraphSource represents the source of information about the topology
// of the lightning network. It's responsible for the addition of nodes, edges,
// applying edge updates, and returning the current block height with which the
// topology is synchronized.
type ChannelGraphSource interface {
	// AddNode is used to add information about a node to the router
	// database. If the node with this pubkey is not present in an existing
	// channel, it will be ignored.
	AddNode(node *channeldb.LightningNode, op ...batch.SchedulerOption) error

	// AddEdge is used to add an edge/channel to the topology of the
	// router. Once all information about the channel has been gathered,
	// this edge/channel might be used in the construction of payment
	// paths.
	AddEdge(edge *channeldb.ChannelEdgeInfo, op ...batch.SchedulerOption) error

	// AddProof updates the channel edge info with the proof which is
	// needed to properly announce the edge to the rest of the network.
	AddProof(chanID lnwire.ShortChannelID, proof *channeldb.ChannelAuthProof) error

	// UpdateEdge is used to update edge information. Without this update
	// the edge is considered as not fully constructed.
	UpdateEdge(policy *channeldb.ChannelEdgePolicy, op ...batch.SchedulerOption) error

	// IsStaleNode returns true if the graph source has a node announcement
	// for the target node with a more recent timestamp. This method will
	// also return true if we don't have an active channel announcement for
	// the target node.
	IsStaleNode(node route.Vertex, timestamp time.Time) bool

	// IsPublicNode determines whether the given vertex is seen as a public
	// node in the graph from the graph's source node's point of view.
	IsPublicNode(node route.Vertex) (bool, error)

	// IsKnownEdge returns true if the graph source already knows of the
	// passed channel ID either as a live or zombie edge.
	IsKnownEdge(chanID lnwire.ShortChannelID) bool

	// IsStaleEdgePolicy returns true if the graph source has a channel
	// edge for the passed channel ID (and flags) that has a more recent
	// timestamp.
	IsStaleEdgePolicy(chanID lnwire.ShortChannelID, timestamp time.Time,
		flags lnwire.ChanUpdateChanFlags) bool

	// MarkEdgeLive clears an edge from our zombie index, deeming it as
	// live.
	MarkEdgeLive(chanID lnwire.ShortChannelID) error

	// ForAllOutgoingChannels is used to iterate over all channels
	// emanating from the "source" node which is the center of the
	// star-graph.
	ForAllOutgoingChannels(cb func(c *channeldb.ChannelEdgeInfo,
		e *channeldb.ChannelEdgePolicy) error) error

	// CurrentBlockHeight returns the block height from the PoV of the
	// router subsystem.
	CurrentBlockHeight() (uint32, error)

	// GetChannelByID returns the channel by its channel ID.
	GetChannelByID(chanID lnwire.ShortChannelID) (*channeldb.ChannelEdgeInfo,
		*channeldb.ChannelEdgePolicy, *channeldb.ChannelEdgePolicy, error)

	// FetchLightningNode attempts to look up a target node by its identity
	// public key. channeldb.ErrGraphNodeNotFound is returned if the node
	// doesn't exist within the graph.
	FetchLightningNode(route.Vertex) (*channeldb.LightningNode, error)

	// ForEachNode is used to iterate over every node in the known graph.
	ForEachNode(func(node *channeldb.LightningNode) error) error

	// ForEachChannel is used to iterate over every channel in the known
	// graph.
	ForEachChannel(func(chanInfo *channeldb.ChannelEdgeInfo,
		e1, e2 *channeldb.ChannelEdgePolicy) error) error
}
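
// countGraphChannels is an illustrative sketch (not part of the original
// interface) of how a caller holding a ChannelGraphSource might walk the
// known graph via ForEachChannel; here it simply tallies the number of
// channels. The function name is hypothetical.
func countGraphChannels(graph ChannelGraphSource) (int, error) {
	var numChannels int

	// ForEachChannel invokes the callback once per known channel and
	// aborts the iteration if the callback returns an error.
	err := graph.ForEachChannel(func(chanInfo *channeldb.ChannelEdgeInfo,
		e1, e2 *channeldb.ChannelEdgePolicy) error {

		numChannels++
		return nil
	})

	return numChannels, err
}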

// PaymentAttemptDispatcher is used by the router to send payment attempts onto
// the network, and receive their results.
type PaymentAttemptDispatcher interface {
	// SendHTLC is a function that directs a link-layer switch to
	// forward a fully encoded payment to the first hop in the route
	// denoted by its public key. A non-nil error is to be returned if the
	// payment was unsuccessful.
	SendHTLC(firstHop lnwire.ShortChannelID,
		attemptID uint64,
		htlcAdd *lnwire.UpdateAddHTLC) error

	// GetPaymentResult returns the result of the payment attempt with
	// the given attemptID. The paymentHash should be set to the payment's
	// overall hash, or in case of AMP payments the payment's unique
	// identifier.
	//
	// The method returns a channel where the payment result will be sent
	// when available, or an error is encountered during forwarding. When a
	// result is received on the channel, the HTLC is guaranteed to no
	// longer be in flight. The switch shutting down is signaled by
	// closing the channel. If the attemptID is unknown,
	// ErrPaymentIDNotFound will be returned.
	GetPaymentResult(attemptID uint64, paymentHash lntypes.Hash,
		deobfuscator htlcswitch.ErrorDecrypter) (
		<-chan *htlcswitch.PaymentResult, error)

	// CleanStore calls the underlying result store, telling it is safe to
	// delete all entries except the ones in the keepPids map. This should
	// be called periodically to let the switch clean up payment results
	// that we have handled.
	// NOTE: New payment attempts MUST NOT be made after the keepPids map
	// has been created and this method has returned.
	CleanStore(keepPids map[uint64]struct{}) error
}
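
// dispatchAndWait is an illustrative sketch (not part of the original
// interface) of how a caller might use a PaymentAttemptDispatcher: hand an
// HTLC to the switch, subscribe to the attempt's result, and block until a
// result arrives or the switch shuts down. The function and parameter names
// are hypothetical.
func dispatchAndWait(p PaymentAttemptDispatcher,
	firstHop lnwire.ShortChannelID, attemptID uint64, hash lntypes.Hash,
	htlcAdd *lnwire.UpdateAddHTLC,
	deobfuscator htlcswitch.ErrorDecrypter) (*htlcswitch.PaymentResult,
	error) {

	// Hand the HTLC off to the switch for forwarding to the first hop.
	if err := p.SendHTLC(firstHop, attemptID, htlcAdd); err != nil {
		return nil, err
	}

	// Subscribe to the result of this attempt. The returned channel is
	// closed if the switch shuts down before a result is available.
	resultChan, err := p.GetPaymentResult(attemptID, hash, deobfuscator)
	if err != nil {
		return nil, err
	}

	result, ok := <-resultChan
	if !ok {
		return nil, fmt.Errorf("switch shutting down")
	}

	return result, nil
}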

// PaymentSessionSource is an interface that defines a source for the router to
// retrieve new payment sessions.
type PaymentSessionSource interface {
	// NewPaymentSession creates a new payment session that will produce
	// routes to the given target. An optional set of routing hints can be
	// provided in order to populate additional edges to explore when
	// finding a path to the payment's destination.
	NewPaymentSession(p *LightningPayment) (PaymentSession, error)

	// NewPaymentSessionEmpty creates a new paymentSession instance that is
	// empty, and will be exhausted immediately. Used for failure reporting
	// to mission control for resumed payments that we don't want to make
	// more attempts for.
	NewPaymentSessionEmpty() PaymentSession
}

// MissionController is an interface that exposes failure reporting and
// probability estimation.
type MissionController interface {
	// ReportPaymentFail reports a failed payment to mission control as
	// input for future probability estimates. It returns a non-nil
	// FailureReason if this error is a final error, meaning no further
	// payment attempts need to be made.
	ReportPaymentFail(attemptID uint64, rt *route.Route,
		failureSourceIdx *int, failure lnwire.FailureMessage) (
		*channeldb.FailureReason, error)

	// ReportPaymentSuccess reports a successful payment to mission control
	// as input for future probability estimates.
	ReportPaymentSuccess(attemptID uint64, rt *route.Route) error

	// GetProbability is expected to return the success probability of a
	// payment from fromNode to toNode along the given edge, for the given
	// amount.
	GetProbability(fromNode, toNode route.Vertex,
		amt lnwire.MilliSatoshi) float64
}
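
// edgeIsViable is an illustrative sketch (not part of the original interface)
// of how a path finding routine might consult mission control: an edge is
// only worth exploring if the estimated success probability for the
// (fromNode, toNode, amount) triple clears a caller-chosen threshold. The
// function name and threshold parameter are hypothetical.
func edgeIsViable(mc MissionController, fromNode, toNode route.Vertex,
	amt lnwire.MilliSatoshi, minProbability float64) bool {

	return mc.GetProbability(fromNode, toNode, amt) >= minProbability
}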

// FeeSchema is the set fee configuration for a Lightning Node on the network.
// Using the coefficients described within the schema, the required fee to
// forward outgoing payments can be derived.
type FeeSchema struct {
	// BaseFee is the base amount of milli-satoshis that will be charged
	// for ANY payment forwarded.
	BaseFee lnwire.MilliSatoshi

	// FeeRate is the rate that will be charged for forwarding payments.
	// This value should be interpreted as the numerator for a fraction
	// (fixed point arithmetic) whose denominator is 1 million. As a result
	// the effective fee rate charged per mSAT will be: (amount *
	// FeeRate/1,000,000).
	FeeRate uint32
}
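
// feeForAmount is an illustrative helper (not part of the original file) that
// applies the formula documented on FeeRate above: the total forwarding fee
// for an amount amt is BaseFee plus the proportional component
// amt * FeeRate / 1,000,000. The function name is hypothetical.
func feeForAmount(schema FeeSchema, amt lnwire.MilliSatoshi) lnwire.MilliSatoshi {
	// FeeRate is the numerator of a fixed-point fraction whose
	// denominator is one million.
	proportionalFee := amt * lnwire.MilliSatoshi(schema.FeeRate) / 1000000

	return schema.BaseFee + proportionalFee
}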

// ChannelPolicy holds the parameters that determine the policy we enforce
// when forwarding payments on a channel. These parameters are communicated
// to the rest of the network in ChannelUpdate messages.
type ChannelPolicy struct {
	// FeeSchema holds the fee configuration for a channel.
	FeeSchema

	// TimeLockDelta is the required HTLC timelock delta to be used
	// when forwarding payments.
	TimeLockDelta uint32

	// MaxHTLC is the maximum HTLC size including fees we are allowed to
	// forward over this channel.
	MaxHTLC lnwire.MilliSatoshi

	// MinHTLC is the minimum HTLC size including fees we are allowed to
	// forward over this channel.
	MinHTLC *lnwire.MilliSatoshi
}
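
// amountWithinPolicy is an illustrative sketch (not part of the original
// file) of how the HTLC bounds above might be checked for a prospective
// forward. MinHTLC is a pointer, so a nil value is treated here as "no
// explicit minimum configured"; that interpretation is an assumption made
// for the sake of the example.
func amountWithinPolicy(policy ChannelPolicy, amt lnwire.MilliSatoshi) bool {
	if policy.MinHTLC != nil && amt < *policy.MinHTLC {
		return false
	}

	return amt <= policy.MaxHTLC
}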

// Config defines the configuration for the ChannelRouter. ALL elements within
// the configuration MUST be non-nil for the ChannelRouter to carry out its
// duties.
type Config struct {
	// Graph is the channel graph that the ChannelRouter will use to gather
	// metrics from and also to carry out path finding queries.
	// TODO(roasbeef): make into an interface
	Graph *channeldb.ChannelGraph

	// Chain is the router's source to the most up-to-date blockchain data.
	// All incoming advertised channels will be checked against the chain
	// to ensure that the channels advertised are still open.
	Chain lnwallet.BlockChainIO

	// ChainView is an instance of a FilteredChainView which is used to
	// watch the sub-set of the UTXO set (the set of active channels) that
	// we need in order to properly maintain the channel graph.
	ChainView chainview.FilteredChainView

	// Payer is an instance of a PaymentAttemptDispatcher and is used by
	// the router to send payment attempts onto the network, and receive
	// their results.
	Payer PaymentAttemptDispatcher

	// Control keeps track of the status of ongoing payments, ensuring we
	// can properly resume them across restarts.
	Control ControlTower

	// MissionControl is a shared memory of sorts that executions of
	// payment path finding use in order to remember which vertexes/edges
	// were pruned from prior attempts. During SendPayment execution,
	// errors sent by nodes are mapped into a vertex or edge to be pruned.
	// Each run will then take into account this set of pruned
	// vertexes/edges to reduce route failure and pass on graph information
	// gained to the next execution.
	MissionControl MissionController

	// SessionSource defines a source for the router to retrieve new
	// payment sessions.
	SessionSource PaymentSessionSource

	// ChannelPruneExpiry is the duration used to determine if a channel
	// should be pruned or not. If the delta between now and when the
	// channel was last updated is greater than ChannelPruneExpiry, then
	// the channel is marked as a zombie channel eligible for pruning.
	ChannelPruneExpiry time.Duration

	// GraphPruneInterval is used as an interval to determine how often we
	// should examine the channel graph to garbage collect zombie channels.
	GraphPruneInterval time.Duration

	// FirstTimePruneDelay is the time we'll wait after startup before
	// attempting to prune the graph for zombie channels. We don't do it
	// immediately after startup to allow lnd to start up without getting
	// blocked by this job.
	FirstTimePruneDelay time.Duration

	// QueryBandwidth is a method that allows the router to query the lower
	// link layer to determine the up-to-date available bandwidth at a
	// prospective link to be traversed. If the link isn't available, then
	// a value of zero should be returned. Otherwise, the current
	// up-to-date knowledge of the available bandwidth of the link should
	// be returned.
	QueryBandwidth func(edge *channeldb.ChannelEdgeInfo) lnwire.MilliSatoshi

	// NextPaymentID is a method that guarantees to return a new, unique ID
	// each time it is called. This is used by the router to generate a
	// unique payment ID for each payment it attempts to send, such that
	// the switch can properly handle the HTLC.
	NextPaymentID func() (uint64, error)

	// AssumeChannelValid toggles whether or not the router will check for
	// spentness of channel outpoints. For neutrino, this saves long
	// rescans from blocking initial usage of the daemon.
	AssumeChannelValid bool

	// PathFindingConfig defines global path finding parameters.
	PathFindingConfig PathFindingConfig

	// Clock is a mockable time provider.
	Clock clock.Clock

	// StrictZombiePruning determines if we attempt to prune zombie
	// channels according to stricter criteria. If true, then we'll prune
	// a channel if only *one* of the edges is considered a zombie.
	// Otherwise, we'll only prune the channel when both edges have a very
	// dated last update.
	StrictZombiePruning bool
}
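
// exampleRouterConfig is an illustrative sketch (not part of the original
// file) of how a few of the Config fields above might be populated. It is
// deliberately partial: per the Config documentation, a real configuration
// must set every field. The bandwidthByChan map, the function name, and the
// chosen prune interval are assumptions for the example; the bandwidth
// callback follows the documented contract of reporting zero for links that
// aren't available.
func exampleRouterConfig(graph *channeldb.ChannelGraph,
	bandwidthByChan map[uint64]lnwire.MilliSatoshi) *Config {

	return &Config{
		Graph:               graph,
		ChannelPruneExpiry:  DefaultChannelPruneExpiry,
		GraphPruneInterval:  time.Hour,
		FirstTimePruneDelay: DefaultFirstTimePruneDelay,
		Clock:               clock.NewDefaultClock(),
		QueryBandwidth: func(
			edge *channeldb.ChannelEdgeInfo) lnwire.MilliSatoshi {

			// A missing map entry yields zero, matching the
			// documented contract for unavailable links.
			return bandwidthByChan[edge.ChannelID]
		},
	}
}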

// EdgeLocator is a struct used to identify a specific edge.
type EdgeLocator struct {
	// ChannelID is the channel of this edge.
	ChannelID uint64

	// Direction takes the value of 0 or 1 and is identical in definition
	// to the channel direction flag. A value of 0 means the direction from
	// the lower node pubkey to the higher.
	Direction uint8
}

// String returns a human-readable version of the EdgeLocator values.
func (e *EdgeLocator) String() string {
	return fmt.Sprintf("%v:%v", e.ChannelID, e.Direction)
}

// ChannelRouter is the layer 3 router within the Lightning stack. Below the
// ChannelRouter is the HtlcSwitch, and below that is the Bitcoin blockchain
// itself. The primary role of the ChannelRouter is to respond to queries for
// potential routes that can support a payment amount, and also general graph
// reachability questions. The router will prune the channel graph
// automatically as new blocks are discovered which spend certain known funding
// outpoints, thereby closing their respective channels.
type ChannelRouter struct {
	ntfnClientCounter uint64 // To be used atomically.

	started uint32 // To be used atomically.
	stopped uint32 // To be used atomically.

	bestHeight uint32 // To be used atomically.

	// cfg is a copy of the configuration struct that the ChannelRouter was
	// initialized with.
	cfg *Config

	// selfNode is the center of the star-graph centered around the
	// ChannelRouter. The ChannelRouter uses this node as a starting point
	// when doing any path finding.
	selfNode *channeldb.LightningNode

	// newBlocks is a channel in which new blocks connected to the end of
	// the main chain are sent over, and blocks updated after a call to
	// UpdateFilter.
	newBlocks <-chan *chainview.FilteredBlock

	// staleBlocks is a channel in which blocks disconnected from the end
	// of our currently known best chain are sent over.
	staleBlocks <-chan *chainview.FilteredBlock

	// networkUpdates is a channel that carries new topology update
	// messages from outside the ChannelRouter to be processed by the
	// networkHandler.
	networkUpdates chan *routingMsg

	// topologyClients maps a client's unique notification ID to a
	// topologyClient client that contains its notification dispatch
	// channel.
	topologyClients map[uint64]*topologyClient

	// ntfnClientUpdates is a channel that's used to send new updates to
	// topology notification clients to the ChannelRouter. Updates either
	// add a new notification client, or cancel notifications for an
	// existing client.
	ntfnClientUpdates chan *topologyClientUpdate

	// channelEdgeMtx is a mutex we use to make sure we process only one
	// ChannelEdgePolicy at a time for a given channelID, to ensure
	// consistency between the various database accesses.
	channelEdgeMtx *multimutex.Mutex

	// statTicker is a resumable ticker that logs the router's progress as
	// it discovers channels or receives updates.
	statTicker ticker.Ticker

	// stats tracks newly processed channels, updates, and node
	// announcements over a window of defaultStatInterval.
	stats *routerStats

	sync.RWMutex

	quit chan struct{}
	wg   sync.WaitGroup
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
}

// A compile time check to ensure ChannelRouter implements the
// ChannelGraphSource interface.
var _ ChannelGraphSource = (*ChannelRouter)(nil)
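
// The assertion above is Go's standard compile-time interface check: assigning
// a typed nil pointer to a blank variable of the interface type makes the
// build fail if *ChannelRouter ever stops satisfying ChannelGraphSource,
// rather than surfacing the mismatch at runtime. A generic sketch of the same
// idiom using standard-library types (purely illustrative, not lnd-specific):
//
//	var _ fmt.Stringer = (*bytes.Buffer)(nil) // compiles only if the method set matches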

// New creates a new instance of the ChannelRouter with the specified
// configuration parameters. As part of initialization, if the router detects
// that the channel graph isn't fully in sync with the latest state of the
// UTXO set (since the channel graph is a subset of the UTXO set), then the
// router will proceed to fully sync to the latest state of the UTXO set.
func New(cfg Config) (*ChannelRouter, error) {
	selfNode, err := cfg.Graph.SourceNode()
	if err != nil {
		return nil, err
	}

	r := &ChannelRouter{
		cfg:               &cfg,
		networkUpdates:    make(chan *routingMsg),
		topologyClients:   make(map[uint64]*topologyClient),
		ntfnClientUpdates: make(chan *topologyClientUpdate),
		channelEdgeMtx:    multimutex.NewMutex(),
		selfNode:          selfNode,
		statTicker:        ticker.New(defaultStatInterval),
		stats:             new(routerStats),
		quit:              make(chan struct{}),
	}

	return r, nil
}
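
// A minimal lifecycle sketch for callers of this package, under the assumption
// that the caller has already built the router's dependencies. The Config
// fields shown are the ones this file dereferences (Graph, Chain, ChainView);
// the variable names are placeholders and the remaining required Config fields
// are deliberately elided:
//
//	r, err := routing.New(routing.Config{
//		Graph:     graphDB,      // channel graph database
//		Chain:     chainBackend, // best-block and block-hash queries
//		ChainView: chainView,    // filtered block notifications
//		// ... remaining Config fields elided ...
//	})
//	if err != nil {
//		return err
//	}
//	if err := r.Start(); err != nil {
//		return err
//	}
//	defer r.Stop()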

// Start launches all the goroutines the ChannelRouter requires to carry out
// its duties. If the router has already been started, then this method is a
// noop.
func (r *ChannelRouter) Start() error {
	if !atomic.CompareAndSwapUint32(&r.started, 0, 1) {
		return nil
	}

	log.Tracef("Channel Router starting")

	bestHash, bestHeight, err := r.cfg.Chain.GetBestBlock()
	if err != nil {
		return err
	}

	// If the graph has never been pruned, or hasn't fully been created
	// yet, then we don't treat this as an explicit error.
	if _, _, err := r.cfg.Graph.PruneTip(); err != nil {
		switch {
		case err == channeldb.ErrGraphNeverPruned:
			fallthrough
		case err == channeldb.ErrGraphNotFound:
			// If the graph has never been pruned, then we'll set
			// the prune height to the current best height of the
			// chain backend.
			_, err = r.cfg.Graph.PruneGraph(
				nil, bestHash, uint32(bestHeight),
			)
			if err != nil {
				return err
			}

		default:
			return err
		}
	}

	// If AssumeChannelValid is present, then we won't rely on pruning
	// channels from the graph based on their spentness, but whether they
	// are considered zombies or not. We will start zombie pruning after a
	// small delay, to avoid slowing down startup of lnd.
	if r.cfg.AssumeChannelValid {
		time.AfterFunc(r.cfg.FirstTimePruneDelay, func() {
			select {
			case <-r.quit:
				return
			default:
			}

			log.Info("Initial zombie prune starting")
			if err := r.pruneZombieChans(); err != nil {
				log.Errorf("Unable to prune zombies: %v", err)
			}
		})
	} else {
		// Otherwise, we'll use our filtered chain view to prune
		// channels as soon as they are detected as spent on-chain.
		if err := r.cfg.ChainView.Start(); err != nil {
			return err
		}

		// Once the instance is active, we'll fetch the channels we'll
		// receive notifications over.
		r.newBlocks = r.cfg.ChainView.FilteredBlocks()
		r.staleBlocks = r.cfg.ChainView.DisconnectedBlocks()

		// Before we perform our manual block pruning, we'll construct
		// and apply a fresh chain filter to the active
		// FilteredChainView instance. We do this beforehand, as
		// otherwise we may miss on-chain events because the filter
		// hasn't properly been applied.
		channelView, err := r.cfg.Graph.ChannelView()
		if err != nil && err != channeldb.ErrGraphNoEdgesFound {
			return err
		}

		log.Infof("Filtering chain using %v channels active",
			len(channelView))

		if len(channelView) != 0 {
			err = r.cfg.ChainView.UpdateFilter(
				channelView, uint32(bestHeight),
			)
			if err != nil {
				return err
			}
		}

		// Before we begin normal operation of the router, we first
		// need to synchronize the channel graph to the latest state of
		// the UTXO set.
		if err := r.syncGraphWithChain(); err != nil {
			return err
		}

		// Finally, before we proceed, we'll prune any unconnected
		// nodes from the graph in order to ensure we maintain a tight
		// graph of "useful" nodes.
		err = r.cfg.Graph.PruneGraphNodes()
		if err != nil && err != channeldb.ErrGraphNodesNotFound {
			return err
		}
	}

	// If any payments are still in flight, we resume them to make sure
	// their results are properly handled.
	payments, err := r.cfg.Control.FetchInFlightPayments()
	if err != nil {
		return err
	}

	// Before we restart existing payments and start accepting more
	// payments to be made, we clean the network result store of the
	// Switch. We do this here at startup to ensure no more payments can be
	// made concurrently, so we know the toKeep map will be up-to-date
	// until the cleaning has finished.
	toKeep := make(map[uint64]struct{})
	for _, p := range payments {
		for _, a := range p.HTLCs {
			toKeep[a.AttemptID] = struct{}{}
		}
	}

	log.Debugf("Cleaning network result store.")
	if err := r.cfg.Payer.CleanStore(toKeep); err != nil {
		return err
	}

	for _, payment := range payments {
		log.Infof("Resuming payment %v", payment.Info.PaymentIdentifier)
		r.wg.Add(1)
		go func(payment *channeldb.MPPayment) {
			defer r.wg.Done()

			// Get the hashes used for the outstanding HTLCs.
			htlcs := make(map[uint64]lntypes.Hash)
			for _, a := range payment.HTLCs {
				a := a

				// We check whether the individual attempts
				// have their HTLC hash set; if not, we'll fall
				// back to the overall payment hash.
				hash := payment.Info.PaymentIdentifier
				if a.Hash != nil {
					hash = *a.Hash
				}

				htlcs[a.AttemptID] = hash
			}

			// Since we are not supporting creating more shards
			// after a restart (only receiving the result of the
			// shards already outstanding), we create a simple
			// shard tracker that will map the attempt IDs to
			// hashes used for the HTLCs. This will be enough also
			// for AMP payments, since we only need the hashes for
			// the individual HTLCs to regenerate the circuits, and
			// we don't currently persist the root share necessary
			// to re-derive them.
			shardTracker := shards.NewSimpleShardTracker(
				payment.Info.PaymentIdentifier, htlcs,
			)

			// We create a dummy, empty payment session such that
			// we won't make another payment attempt when the
			// result for the in-flight attempt is received.
			paySession := r.cfg.SessionSource.NewPaymentSessionEmpty()

			// We pass in a zero timeout value, to indicate we
			// don't need it to time out. It will stop immediately
			// after the existing attempt has finished anyway. We
			// also set a zero fee limit, as no more routes should
			// be tried.
			_, _, err := r.sendPayment(
				payment.Info.Value, 0,
				payment.Info.PaymentIdentifier, 0, paySession,
				shardTracker,
			)
			if err != nil {
				log.Errorf("Resuming payment %v failed: %v.",
					payment.Info.PaymentIdentifier, err)
				return
			}

			log.Infof("Resumed payment %v completed.",
				payment.Info.PaymentIdentifier)
		}(payment)
	}

	r.wg.Add(1)
	go r.networkHandler()

	return nil
}
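
// The payment-resumption block above hinges on a small piece of bookkeeping:
// only switch results whose attempt ID belongs to a still-in-flight payment
// are kept, and everything else is garbage collected via CleanStore. Restated
// in isolation (payments is the slice returned by FetchInFlightPayments
// above):
//
//	toKeep := make(map[uint64]struct{})
//	for _, p := range payments {
//		for _, a := range p.HTLCs {
//			toKeep[a.AttemptID] = struct{}{}
//		}
//	}
//	// Any stored result whose attempt ID is absent from toKeep is dropped.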

// Stop signals the ChannelRouter to gracefully halt all routines. This method
// will *block* until all goroutines have exited. If the channel router has
// already stopped then this method will return immediately.
func (r *ChannelRouter) Stop() error {
	if !atomic.CompareAndSwapUint32(&r.stopped, 0, 1) {
		return nil
	}

	log.Tracef("Channel Router shutting down")

	// Our filtered chain view could only have been started if
	// AssumeChannelValid isn't present.
	if !r.cfg.AssumeChannelValid {
		if err := r.cfg.ChainView.Stop(); err != nil {
			return err
		}
	}

	close(r.quit)
	r.wg.Wait()

	return nil
}
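
// Shutdown is coordinated through the quit channel and wait group declared on
// the struct: Stop closes r.quit and then blocks on r.wg, while every
// long-lived goroutine registers with r.wg and selects on r.quit. A generic
// sketch of that pattern (workCh is a placeholder, not a field of this
// struct):
//
//	r.wg.Add(1)
//	go func() {
//		defer r.wg.Done()
//		for {
//			select {
//			case <-r.quit:
//				return
//			case msg := <-workCh:
//				_ = msg // process the message
//			}
//		}
//	}()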

// syncGraphWithChain attempts to synchronize the current channel graph with
// the latest UTXO set state. This process involves pruning from the channel
// graph any channels which have been closed by spending their funding output
// since we've been down.
func (r *ChannelRouter) syncGraphWithChain() error {
	// First, we'll need to check to see if we're already in sync with the
	// latest state of the UTXO set.
	bestHash, bestHeight, err := r.cfg.Chain.GetBestBlock()
	if err != nil {
		return err
	}
	r.bestHeight = uint32(bestHeight)

	pruneHash, pruneHeight, err := r.cfg.Graph.PruneTip()
	if err != nil {
		switch {
		// If the graph has never been pruned, or hasn't fully been
		// created yet, then we don't treat this as an explicit error.
		case err == channeldb.ErrGraphNeverPruned:
		case err == channeldb.ErrGraphNotFound:
		default:
			return err
		}
	}

	log.Infof("Prune tip for Channel Graph: height=%v, hash=%v",
		pruneHeight, pruneHash)

	switch {
	// If the graph has never been pruned, then we can exit early as this
	// entails it's being created for the first time and hasn't seen any
	// block or created channels.
	case pruneHeight == 0 || pruneHash == nil:
		return nil

	// If the block hashes and heights match exactly, then we don't need to
	// prune the channel graph as we're already fully in sync.
	case bestHash.IsEqual(pruneHash) && uint32(bestHeight) == pruneHeight:
		return nil
	}

	// If the main chain block hash at the prune height is different from
	// the prune hash, this might indicate the database is on a stale
	// branch.
	mainBlockHash, err := r.cfg.Chain.GetBlockHash(int64(pruneHeight))
	if err != nil {
		return err
	}

	// While we are on a stale branch of the chain, walk backwards to find
	// the first common block.
	for !pruneHash.IsEqual(mainBlockHash) {
		log.Infof("channel graph is stale. Disconnecting block %v "+
			"(hash=%v)", pruneHeight, pruneHash)

		// Prune the graph for every channel that was opened at height
		// >= pruneHeight.
		_, err := r.cfg.Graph.DisconnectBlockAtHeight(pruneHeight)
		if err != nil {
			return err
		}

		pruneHash, pruneHeight, err = r.cfg.Graph.PruneTip()
		if err != nil {
			switch {
			// If at this point the graph has never been pruned, we
			// can exit as this entails we are back to the point
			// where it hasn't seen any block or created channels,
			// so there's nothing left to prune.
			case err == channeldb.ErrGraphNeverPruned:
				return nil

			case err == channeldb.ErrGraphNotFound:
				return nil

			default:
				return err
			}
		}

		mainBlockHash, err = r.cfg.Chain.GetBlockHash(int64(pruneHeight))
		if err != nil {
			return err
		}
	}

	log.Infof("Syncing channel graph from height=%v (hash=%v) to "+
		"height=%v (hash=%v)", pruneHeight, pruneHash, bestHeight,
		bestHash)

	// If we're not yet caught up, then we'll walk forward in the chain
	// pruning the channel graph with each new block that hasn't yet been
	// consumed by the channel graph.
	var spentOutputs []*wire.OutPoint
	for nextHeight := pruneHeight + 1; nextHeight <= uint32(bestHeight); nextHeight++ {
		// Break out of the rescan early if a shutdown has been
		// requested, otherwise long rescans will block the daemon from
		// shutting down promptly.
		select {
		case <-r.quit:
			return ErrRouterShuttingDown
		default:
		}

		// Using the next height, request a manual block pruning from
		// the chain view for the particular block hash.
		nextHash, err := r.cfg.Chain.GetBlockHash(int64(nextHeight))
		if err != nil {
			return err
		}

		filterBlock, err := r.cfg.ChainView.FilterBlock(nextHash)
		if err != nil {
			return err
		}

		// We're only interested in the prior outputs that have been
		// spent in the block, so collate all the referenced previous
		// outpoints within each tx and input.
		for _, tx := range filterBlock.Transactions {
			for _, txIn := range tx.TxIn {
				spentOutputs = append(spentOutputs,
					&txIn.PreviousOutPoint)
			}
		}
	}

	// With the spent outputs gathered, attempt to prune the channel graph,
	// also passing in the best hash+height so the prune tip can be
	// updated.
	closedChans, err := r.cfg.Graph.PruneGraph(
		spentOutputs, bestHash, uint32(bestHeight),
	)
	if err != nil {
		return err
	}

	log.Infof("Graph pruning complete: %v channels were closed since "+
		"height %v", len(closedChans), pruneHeight)

	return nil
}
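
// The sync above has two phases: first rewind the prune tip until it sits on
// the main chain, then scan forward collecting spent funding outpoints up to
// the best height. A condensed, illustrative restatement of the rewind phase
// (error handling elided; the calls mirror the ones used in the function
// body):
//
//	for !pruneHash.IsEqual(mainBlockHash) {
//		_, _ = r.cfg.Graph.DisconnectBlockAtHeight(pruneHeight)
//		pruneHash, pruneHeight, _ = r.cfg.Graph.PruneTip()
//		mainBlockHash, _ = r.cfg.Chain.GetBlockHash(int64(pruneHeight))
//	}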

// pruneZombieChans is a method that will be called periodically to prune out
// any "zombie" channels. We consider channels zombies if *both* edges haven't
// been updated since our zombie horizon. If AssumeChannelValid is present,
// we'll also consider channels zombies if *both* edges are disabled. This
// usually signals that a channel has been closed on-chain. We do this
// periodically to keep a healthy, lively routing table.
func (r *ChannelRouter) pruneZombieChans() error {
	chansToPrune := make(map[uint64]struct{})
	chanExpiry := r.cfg.ChannelPruneExpiry

	log.Infof("Examining channel graph for zombie channels")

	// A helper method to detect if the channel belongs to this node.
	isSelfChannelEdge := func(info *channeldb.ChannelEdgeInfo) bool {
		return info.NodeKey1Bytes == r.selfNode.PubKeyBytes ||
			info.NodeKey2Bytes == r.selfNode.PubKeyBytes
	}

	// First, we'll collect all the channels which are eligible for garbage
	// collection due to being zombies.
	filterPruneChans := func(info *channeldb.ChannelEdgeInfo,
		e1, e2 *channeldb.ChannelEdgePolicy) error {

		// Exit early in case this channel is already marked to be
		// pruned.
		if _, markedToPrune := chansToPrune[info.ChannelID]; markedToPrune {
			return nil
		}

		// We'll ensure that we don't attempt to prune our *own*
		// channels from the graph, as in any case this should be
		// re-advertised by the sub-system above us.
		if isSelfChannelEdge(info) {
			return nil
		}

		// If either edge hasn't been updated for a period of
		// chanExpiry, then we'll mark the channel itself as eligible
		// for graph pruning.
		e1Zombie := e1 == nil || time.Since(e1.LastUpdate) >= chanExpiry
		e2Zombie := e2 == nil || time.Since(e2.LastUpdate) >= chanExpiry

		if e1Zombie {
			log.Tracef("Node1 pubkey=%x of chan_id=%v is zombie",
				info.NodeKey1Bytes, info.ChannelID)
		}
		if e2Zombie {
			log.Tracef("Node2 pubkey=%x of chan_id=%v is zombie",
				info.NodeKey2Bytes, info.ChannelID)
		}

		// If we're using strict zombie pruning, then a channel is only
		// considered live if both edges have a recent update we know
		// of.
		var channelIsLive bool
		switch {
		case r.cfg.StrictZombiePruning:
			channelIsLive = !e1Zombie && !e2Zombie

		// Otherwise, if we're using the less strict variant, then a
		// channel is considered live if either of the edges has a
		// recent update.
		default:
			channelIsLive = !e1Zombie || !e2Zombie
		}

		// Return early if the channel is still considered to be live
		// with the current set of configuration parameters.
		if channelIsLive {
			return nil
		}

		log.Debugf("ChannelID(%v) is a zombie, collecting to prune",
			info.ChannelID)

		// TODO(roasbeef): add ability to delete single directional edge
		chansToPrune[info.ChannelID] = struct{}{}

		return nil
	}

	// If AssumeChannelValid is present, we'll look at the disabled bit for
	// both edges. If they're both disabled, then we can interpret this as
	// the channel being closed and can prune it from our graph.
	if r.cfg.AssumeChannelValid {
		disabledChanIDs, err := r.cfg.Graph.DisabledChannelIDs()
		if err != nil {
			return fmt.Errorf("unable to get disabled channel "+
				"ids: %v", err)
		}

		disabledEdges, err := r.cfg.Graph.FetchChanInfos(disabledChanIDs)
		if err != nil {
			return fmt.Errorf("unable to fetch disabled channel "+
				"edges: %v", err)
		}

		// Ensure we won't prune our own channels from the graph.
		for _, disabledEdge := range disabledEdges {
			if !isSelfChannelEdge(disabledEdge.Info) {
				chansToPrune[disabledEdge.Info.ChannelID] = struct{}{}
			}
		}
	}

	startTime := time.Unix(0, 0)
	endTime := time.Now().Add(-1 * chanExpiry)
	oldEdges, err := r.cfg.Graph.ChanUpdatesInHorizon(startTime, endTime)
	if err != nil {
		return fmt.Errorf("unable to fetch expired channel "+
			"updates: %v", err)
	}

	for _, u := range oldEdges {
		filterPruneChans(u.Info, u.Policy1, u.Policy2)
	}

	log.Infof("Pruning %v zombie channels", len(chansToPrune))
	if len(chansToPrune) == 0 {
		return nil
	}

	// With the set of zombie-like channels obtained, we'll do another pass
	// to delete them from the channel graph.
	toPrune := make([]uint64, 0, len(chansToPrune))
	for chanID := range chansToPrune {
		toPrune = append(toPrune, chanID)
		log.Tracef("Pruning zombie channel with ChannelID(%v)", chanID)
	}
	err = r.cfg.Graph.DeleteChannelEdges(r.cfg.StrictZombiePruning, toPrune...)
	if err != nil {
		return fmt.Errorf("unable to delete zombie channels: %v", err)
	}

	// With the channels pruned, we'll also attempt to prune any nodes that
	// were a part of them.
	err = r.cfg.Graph.PruneGraphNodes()
	if err != nil && err != channeldb.ErrGraphNodesNotFound {
		return fmt.Errorf("unable to prune graph nodes: %v", err)
	}

	return nil
}
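
// The liveness rule applied by filterPruneChans can be summarized in a single
// expression. Under strict pruning both directions must have a fresh update;
// otherwise one fresh direction is enough to keep the channel (illustrative
// restatement, not additional logic):
//
//	live := !e1Zombie || !e2Zombie        // default: either edge is fresh
//	if r.cfg.StrictZombiePruning {
//		live = !e1Zombie && !e2Zombie // strict: both edges are fresh
//	}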
// networkHandler is the primary goroutine for the ChannelRouter. The roles of
// this goroutine include answering queries related to the state of the
// network, pruning the graph on new block notification, applying network
// updates, and registering new topology clients.
//
// NOTE: This MUST be run as a goroutine.
func (r *ChannelRouter) networkHandler() {
	defer r.wg.Done()
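	// The graph prune ticker periodically wakes the handler below so that
	// stale "zombie" channels can be swept from the graph via
	// pruneZombieChans.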
	graphPruneTicker := time.NewTicker(r.cfg.GraphPruneInterval)
	defer graphPruneTicker.Stop()

	defer r.statTicker.Stop()

	r.stats.Reset()
	// We'll use this validation barrier to ensure that we process all jobs
	// in the proper order during parallel validation.
	//
	// NOTE: For AssumeChannelValid, we bump up the maximum number of
	// concurrent validation requests since there are no blocks being
	// fetched. This significantly increases the performance of IGD for
	// neutrino nodes.
	//
	// However, we dial back to a multiple of the number of cores when
	// fully validating, to avoid fetching up to 1000 blocks from the
	// backend. On bitcoind, this will empirically cause massive latency
	// spikes when executing this many concurrent RPC calls. Critical
	// subsystems or basic RPC calls that rely on calls such as GetBestBlock
	// will hang due to excessive load.
	//
	// See https://github.com/lightningnetwork/lnd/issues/4892.
	var validationBarrier *ValidationBarrier
	if r.cfg.AssumeChannelValid {
		validationBarrier = NewValidationBarrier(1000, r.quit)
	} else {
		validationBarrier = NewValidationBarrier(
			4*runtime.NumCPU(), r.quit,
		)
	}
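	// As a rough sketch of how the barrier is used below (the exact
	// semantics live in ValidationBarrier itself): each incoming update is
	// registered with InitJobDependencies, processed in its own goroutine
	// once WaitForDependants clears, and then finished with
	// SignalDependants and CompleteJob so that dependent announcements can
	// proceed and the concurrency slot is released.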
	for {
		// If there are stats, resume the statTicker.
		if !r.stats.Empty() {
			r.statTicker.Resume()
		}
		select {
		// A new fully validated network update has just arrived. As a
		// result we'll modify the channel graph accordingly depending
		// on the exact type of the message.
		case update := <-r.networkUpdates:
			// We'll set up any dependants, and wait until a free
			// slot for this job opens up; this allows us to not
			// have thousands of goroutines active.
			validationBarrier.InitJobDependencies(update.msg)

			r.wg.Add(1)
			go func() {
				defer r.wg.Done()
				defer validationBarrier.CompleteJob()

				// If this message has an existing dependency,
				// then we'll wait until that has been fully
				// validated before we proceed.
				err := validationBarrier.WaitForDependants(
					update.msg,
				)
				if err != nil {
					if err != ErrVBarrierShuttingDown &&
						err != ErrParentValidationFailed {

						log.Warnf("unexpected error "+
							"during validation "+
							"barrier shutdown: %v",
							err)
					}
					return
				}

				// Process the routing update to determine if
				// this is either a new update from our PoV or
				// an update to a prior vertex/edge we
				// previously accepted.
				err = r.processUpdate(update.msg, update.op...)
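				// Report the result back to the caller that
				// submitted this update.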
				update.err <- err

				// If this message had any dependencies, then
				// we can now signal them to continue.
				allowDependents := err == nil ||
					IsError(err, ErrIgnored, ErrOutdated)
				validationBarrier.SignalDependants(
					update.msg, allowDependents,
				)
				if err != nil {
					return
				}

				// Send off a new notification for the newly
				// accepted update.
				topChange := &TopologyChange{}
				err = addToTopologyChange(
					r.cfg.Graph, topChange, update.msg,
				)
				if err != nil {
					log.Errorf("unable to update topology "+
						"change notification: %v", err)
					return
				}

				if !topChange.isEmpty() {
					r.notifyTopologyChange(topChange)
				}
			}()

			// TODO(roasbeef): remove all unconnected vertexes
			// after N blocks pass with no corresponding
			// announcements.
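		// A stale block has arrived, meaning the block previously at
		// our best height has been disconnected from the main chain.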
		case chainUpdate, ok := <-r.staleBlocks:
			// If the channel has been closed, then this indicates
			// the daemon is shutting down, so we exit ourselves.
			if !ok {
				return
			}

			// Since this block is stale, we update our best height
			// to the previous block.
			blockHeight := uint32(chainUpdate.Height)
			atomic.StoreUint32(&r.bestHeight, blockHeight-1)

			// Update the channel graph to reflect that this block
			// was disconnected.
			_, err := r.cfg.Graph.DisconnectBlockAtHeight(blockHeight)
			if err != nil {
				log.Errorf("unable to prune graph with stale "+
					"block: %v", err)
				continue
			}

			// TODO(halseth): notify client about the reorg?

		// A new block has arrived, so we can prune the channel graph
		// of any channels which were closed in the block.
		case chainUpdate, ok := <-r.newBlocks:
			// If the channel has been closed, then this indicates
			// the daemon is shutting down, so we exit ourselves.
			if !ok {
				return
			}

			// We'll ensure that any new blocks received attach
			// directly to the end of our main chain. If not, then
			// we've somehow missed some blocks. We don't process
			// this block, as otherwise we may miss on-chain
			// events.
			currentHeight := atomic.LoadUint32(&r.bestHeight)
			if chainUpdate.Height != currentHeight+1 {
				log.Errorf("out of order block: expecting "+
					"height=%v, got height=%v", currentHeight+1,
					chainUpdate.Height)
				continue
			}

			// Once a new block arrives, we update our running
			// track of the height of the chain tip.
			blockHeight := uint32(chainUpdate.Height)
			atomic.StoreUint32(&r.bestHeight, blockHeight)
			log.Infof("Pruning channel graph using block %v (height=%v)",
				chainUpdate.Hash, blockHeight)

			// We're only interested in the prior outputs that have
			// been spent in the block, so collate all the
			// referenced previous outpoints within each tx and
			// input.
			var spentOutputs []*wire.OutPoint
			for _, tx := range chainUpdate.Transactions {
				for _, txIn := range tx.TxIn {
					spentOutputs = append(spentOutputs,
						&txIn.PreviousOutPoint)
				}
			}

			// With the spent outputs gathered, attempt to prune
			// the channel graph, also passing in the hash+height
			// of the block being pruned so the prune tip can be
			// updated.
			chansClosed, err := r.cfg.Graph.PruneGraph(spentOutputs,
				&chainUpdate.Hash, chainUpdate.Height)
			if err != nil {
				log.Errorf("unable to prune routing table: %v", err)
				continue
			}

			log.Infof("Block %v (height=%v) closed %v channels",
				chainUpdate.Hash, blockHeight, len(chansClosed))

			if len(chansClosed) == 0 {
				continue
			}

			// Notify all currently registered clients of the newly
			// closed channels.
			closeSummaries := createCloseSummaries(blockHeight, chansClosed...)
			r.notifyTopologyChange(&TopologyChange{
				ClosedChannels: closeSummaries,
			})

		// A new notification client update has arrived. We're either
		// gaining a new client, or cancelling notifications for an
		// existing client.
		case ntfnUpdate := <-r.ntfnClientUpdates:
			clientID := ntfnUpdate.clientID

			if ntfnUpdate.cancel {
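				// Look up the client under the read lock, and
				// if it's registered, remove it, signal its
				// goroutines to exit, and wait for them before
				// closing its notification channel.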
				r.RLock()
				client, ok := r.topologyClients[ntfnUpdate.clientID]
				r.RUnlock()
				if ok {
					r.Lock()
					delete(r.topologyClients, clientID)
					r.Unlock()

					close(client.exit)
					client.wg.Wait()

					close(client.ntfnChan)
				}

				continue
			}
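
			// Otherwise, this is a new client, so register its
			// notification channel and exit handle so it starts
			// receiving topology changes.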
			r.Lock()
			r.topologyClients[ntfnUpdate.clientID] = &topologyClient{
				ntfnChan: ntfnUpdate.ntfnChan,
				exit: make(chan struct{}),
			}
			r.Unlock()

		// The graph prune ticker has ticked, so we'll examine the
		// state of the known graph to filter out any zombie channels
		// for pruning.
		case <-graphPruneTicker.C:
			if err := r.pruneZombieChans(); err != nil {
				log.Errorf("Unable to prune zombies: %v", err)
			}

		// Log any stats if we've processed a non-empty number of
		// channels, updates, or nodes. We'll only pause the ticker if
		// the last window contained no updates to avoid resuming and
		// pausing while consecutive windows contain new info.
		case <-r.statTicker.Ticks():
			if !r.stats.Empty() {
				log.Infof(r.stats.String())
			} else {
				r.statTicker.Pause()
			}
			r.stats.Reset()

		// The router has been signalled to exit, so we exit our main
		// loop so the wait group can be decremented.
		case <-r.quit:
			return
		}
	}
}

// assertNodeAnnFreshness returns a non-nil error if we have an announcement in
// the database for the passed node with a timestamp newer than the passed
// timestamp. ErrIgnored will be returned if we don't yet know of the node, and
// ErrOutdated will be returned if we already have a timestamp that's equal to
// or newer than the passed timestamp.
func (r *ChannelRouter) assertNodeAnnFreshness(node route.Vertex,
	msgTimestamp time.Time) error {
	// If we are not already aware of this node, it means that we don't
	// know about any channel using this node. To avoid a DoS attack by
	// node announcements, we will ignore such nodes. If we do know about
	// this node, check that this update brings info newer than what we
	// already have.
	lastUpdate, exists, err := r.cfg.Graph.HasLightningNode(node)
	if err != nil {
		return errors.Errorf("unable to query for the "+
			"existence of node: %v", err)
	}
	if !exists {
		return newErrf(ErrIgnored, "Ignoring node announcement"+
			" for node not found in channel graph (%x)",
			node[:])
	}

	// If we've reached this point then we're aware of the vertex being
	// advertised. So we now check if the new message has a newer timestamp;
	// if not, then we won't accept the new data as it would override newer
	// data.
	if !lastUpdate.Before(msgTimestamp) {
		return newErrf(ErrOutdated, "Ignoring outdated "+
			"announcement for %x", node[:])
	}

	return nil
}

// processUpdate processes a new authenticated network update, which may be a
// channel/edge announcement, a node announcement, or a channel/edge update. If
// the update didn't affect the internal state of the router due to either
// being out of date, invalid, or redundant, then an error is returned.
func (r *ChannelRouter) processUpdate(msg interface{},
	op ...batch.SchedulerOption) error {

	switch msg := msg.(type) {
	case *channeldb.LightningNode:
		// Before we add the node to the database, we'll check to see
		// if the announcement is "fresh" or not. If it isn't, then
		// we'll return an error.
		err := r.assertNodeAnnFreshness(msg.PubKeyBytes, msg.LastUpdate)
		if err != nil {
			return err
		}

		if err := r.cfg.Graph.AddLightningNode(msg, op...); err != nil {
			return errors.Errorf("unable to add node %v to the "+
				"graph: %v", msg.PubKeyBytes, err)
		}

		log.Tracef("Updated vertex data for node=%x", msg.PubKeyBytes)
		r.stats.incNumNodeUpdates()

	case *channeldb.ChannelEdgeInfo:
		// Prior to processing the announcement we first check if we
		// already know of this channel; if so, then we can exit early.
		_, _, exists, isZombie, err := r.cfg.Graph.HasChannelEdge(
			msg.ChannelID,
		)
		if err != nil && err != channeldb.ErrGraphNoEdgesFound {
			return errors.Errorf("unable to check for edge "+
				"existence: %v", err)
		}
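		// If the edge is already marked as a zombie, or we already
		// have it in the graph, there's nothing further to do with
		// this announcement.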
		if isZombie {
			return newErrf(ErrIgnored, "ignoring msg for zombie "+
				"chan_id=%v", msg.ChannelID)
		}
		if exists {
			return newErrf(ErrIgnored, "ignoring msg for known "+
				"chan_id=%v", msg.ChannelID)
		}

		// If AssumeChannelValid is present, then we are unable to
		// perform any of the expensive checks below, so we'll
		// short-circuit our path straight to adding the edge to our
		// graph.
		if r.cfg.AssumeChannelValid {
			if err := r.cfg.Graph.AddChannelEdge(msg, op...); err != nil {
				return fmt.Errorf("unable to add edge: %v", err)
			}
			log.Tracef("New channel discovered! Link "+
				"connects %x and %x with ChannelID(%v)",
				msg.NodeKey1Bytes, msg.NodeKey2Bytes,
				msg.ChannelID)
			r.stats.incNumEdgesDiscovered()
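
			// The edge has been added, so skip the on-chain
			// funding checks below for this announcement.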
			break
		}

		// Before we can add the channel to the channel graph, we need
		// to obtain the full funding outpoint that's encoded within
		// the channel ID.
		channelID := lnwire.NewShortChanIDFromInt(msg.ChannelID)
		fundingTx, err := r.fetchFundingTx(&channelID)
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
if err != nil {
|
2019-10-01 05:55:03 +03:00
|
|
|
return errors.Errorf("unable to fetch funding tx for "+
|
2017-03-19 21:40:25 +03:00
|
|
|
"chan_id=%v: %v", msg.ChannelID, err)
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
}
		// Recreate the witness output to be sure that the bitcoin
		// keys and channel value declared in the channel edge
		// correspond to reality.
		witnessScript, err := input.GenMultiSigScript(
			msg.BitcoinKey1Bytes[:], msg.BitcoinKey2Bytes[:],
		)
		if err != nil {
			return err
		}
		pkScript, err := input.WitnessScriptHash(witnessScript)
		if err != nil {
			return err
		}

		// Next we'll validate that this channel is actually well
		// formed. If this check fails, then this channel either
		// doesn't exist, or isn't the one that was meant to be created
		// according to the passed channel proofs.
		fundingPoint, err := chanvalidate.Validate(&chanvalidate.Context{
			Locator: &chanvalidate.ShortChanIDChanLocator{
				ID: channelID,
			},
			MultiSigPkScript: pkScript,
			FundingTx:        fundingTx,
		})
		if err != nil {
			return err
		}

		// Now that we have the funding outpoint of the channel, ensure
		// that it hasn't yet been spent. If so, then this channel has
		// been closed so we'll ignore it.
		fundingPkScript, err := input.WitnessScriptHash(witnessScript)
		if err != nil {
			return err
		}
		chanUtxo, err := r.cfg.Chain.GetUtxo(
			fundingPoint, fundingPkScript, channelID.BlockHeight,
			r.quit,
		)
		if err != nil {
			return fmt.Errorf("unable to fetch utxo "+
				"for chan_id=%v, chan_point=%v: %v",
				msg.ChannelID, fundingPoint, err)
		}

		// TODO(roasbeef): this is a hack, needs to be removed
		// after commitment fees are dynamic.
		msg.Capacity = btcutil.Amount(chanUtxo.Value)
		msg.ChannelPoint = *fundingPoint
		if err := r.cfg.Graph.AddChannelEdge(msg, op...); err != nil {
			return errors.Errorf("unable to add edge: %v", err)
		}

		log.Tracef("New channel discovered! Link "+
			"connects %x and %x with ChannelPoint(%v): "+
			"chan_id=%v, capacity=%v",
			msg.NodeKey1Bytes, msg.NodeKey2Bytes,
			fundingPoint, msg.ChannelID, msg.Capacity)
		r.stats.incNumEdgesDiscovered()

		// As a new edge has been added to the channel graph, we'll
		// update the current UTXO filter within our active
		// FilteredChainView so we are notified if/when this channel is
		// closed.
		filterUpdate := []channeldb.EdgePoint{
			{
				FundingPkScript: fundingPkScript,
				OutPoint:        *fundingPoint,
			},
		}
		err = r.cfg.ChainView.UpdateFilter(
			filterUpdate, atomic.LoadUint32(&r.bestHeight),
		)
		if err != nil {
			return errors.Errorf("unable to update chain "+
				"view: %v", err)
		}

	case *channeldb.ChannelEdgePolicy:
		// We make sure to hold the mutex for this channel ID,
		// such that no other goroutine is concurrently doing
		// database accesses for the same channel ID.
		r.channelEdgeMtx.Lock(msg.ChannelID)
		defer r.channelEdgeMtx.Unlock(msg.ChannelID)

		edge1Timestamp, edge2Timestamp, exists, isZombie, err :=
			r.cfg.Graph.HasChannelEdge(msg.ChannelID)
		if err != nil && err != channeldb.ErrGraphNoEdgesFound {
			return errors.Errorf("unable to check for edge "+
				"existence: %v", err)
		}

		// If the channel is marked as a zombie in our database, and
		// we consider this a stale update, then we should not apply
		// the policy.
		isStaleUpdate := time.Since(msg.LastUpdate) > r.cfg.ChannelPruneExpiry
		if isZombie && isStaleUpdate {
			return newErrf(ErrIgnored, "ignoring stale update "+
				"(flags=%v|%v) for zombie chan_id=%v",
				msg.MessageFlags, msg.ChannelFlags,
				msg.ChannelID)
		}

		// If the channel doesn't exist in our database, we cannot
		// apply the updated policy.
		if !exists {
			return newErrf(ErrIgnored, "ignoring update "+
				"(flags=%v|%v) for unknown chan_id=%v",
				msg.MessageFlags, msg.ChannelFlags,
				msg.ChannelID)
		}

		// As edges are directional, each node has a unique policy for
		// the direction of the edge they control. Therefore we first
		// check if we already have the most up to date information for
		// that edge. If this message has a timestamp not strictly
		// newer than what we already know of, we can exit early.
		switch {

		// A flag set of 0 indicates this is an announcement for the
		// "first" node in the channel.
		case msg.ChannelFlags&lnwire.ChanUpdateDirection == 0:
			// Ignore outdated message.
			if !edge1Timestamp.Before(msg.LastUpdate) {
				return newErrf(ErrOutdated, "Ignoring "+
					"outdated update (flags=%v|%v) for "+
					"known chan_id=%v", msg.MessageFlags,
					msg.ChannelFlags, msg.ChannelID)
			}

		// Similarly, a flag set of 1 indicates this is an announcement
		// for the "second" node in the channel.
		case msg.ChannelFlags&lnwire.ChanUpdateDirection == 1:
			// Ignore outdated message.
			if !edge2Timestamp.Before(msg.LastUpdate) {
				return newErrf(ErrOutdated, "Ignoring "+
					"outdated update (flags=%v|%v) for "+
					"known chan_id=%v", msg.MessageFlags,
					msg.ChannelFlags, msg.ChannelID)
			}
		}

		// Now that we know this isn't a stale update, we'll apply the
		// new edge policy to the proper directional edge within the
		// channel graph.
		if err = r.cfg.Graph.UpdateEdgePolicy(msg, op...); err != nil {
			err := errors.Errorf("unable to add channel: %v", err)
			log.Error(err)
			return err
		}

		log.Tracef("New channel update applied: %v",
			newLogClosure(func() string { return spew.Sdump(msg) }))
		r.stats.incNumChannelUpdates()

	default:
		return errors.Errorf("wrong routing update message type")
	}

	return nil
}
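
// NOTE: The direction bit within ChannelFlags selects which of a channel's
// two directed edges a policy update applies to. A minimal illustrative
// sketch of the dispatch performed in the switch above, shown in isolation
// rather than as additional router logic:
//
//	if msg.ChannelFlags&lnwire.ChanUpdateDirection == 0 {
//		// Policy advertised by the "first" node in the channel.
//	} else {
//		// Policy advertised by the "second" node in the channel.
//	}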

// fetchFundingTx returns the funding transaction identified by the passed
// short channel ID.
//
// TODO(roasbeef): replace with call to GetBlockTransaction? (would allow to
// later use getblocktxn)
func (r *ChannelRouter) fetchFundingTx(
	chanID *lnwire.ShortChannelID) (*wire.MsgTx, error) {

	// First fetch the block hash by the block number encoded, then use
	// that hash to fetch the block itself.
	blockNum := int64(chanID.BlockHeight)
	blockHash, err := r.cfg.Chain.GetBlockHash(blockNum)
	if err != nil {
		return nil, err
	}

	fundingBlock, err := r.cfg.Chain.GetBlock(blockHash)
	if err != nil {
		return nil, err
	}

	// As a sanity check, ensure that the advertised transaction index is
	// within the bounds of the total number of transactions within a
	// block.
	numTxns := uint32(len(fundingBlock.Transactions))
	if chanID.TxIndex > numTxns-1 {
		return nil, fmt.Errorf("tx_index=#%v is out of range "+
			"(max_index=%v), network_chan_id=%v", chanID.TxIndex,
			numTxns-1, chanID)
	}

	return fundingBlock.Transactions[chanID.TxIndex], nil
}
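
// Example: a minimal usage sketch of fetchFundingTx, assuming a started
// ChannelRouter r and a raw short channel ID value rawChanID obtained
// elsewhere (both names are illustrative, not part of this file):
//
//	chanID := lnwire.NewShortChanIDFromInt(rawChanID)
//	fundingTx, err := r.fetchFundingTx(&chanID)
//	if err != nil {
//		// The block hash lookup failed, the block couldn't be
//		// fetched, or the encoded tx index was out of range.
//		return err
//	}
//	log.Debugf("funding tx: %v", fundingTx.TxHash())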

// routingMsg couples a routing-related topology update to an error channel.
type routingMsg struct {
	msg interface{}
	op  []batch.SchedulerOption
	err chan error
}
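
// Example: a minimal sketch of how an update can be coupled with a buffered
// error channel before being handed off for processing. The networkUpdates
// channel and ErrRouterShuttingDown error named here are assumptions for
// illustration and are not defined in this excerpt:
//
//	rMsg := &routingMsg{
//		msg: update,
//		op:  ops,
//		err: make(chan error, 1),
//	}
//
//	select {
//	case r.networkUpdates <- rMsg:
//	case <-r.quit:
//		return ErrRouterShuttingDown
//	}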

// FindRoute attempts to query the ChannelRouter for the optimum path to a
// particular target destination to which it is able to send `amt` after
// factoring in channel capacities and cumulative fees along the route.
func (r *ChannelRouter) FindRoute(source, target route.Vertex,
	amt lnwire.MilliSatoshi, restrictions *RestrictParams,
	destCustomRecords record.CustomSet,
	routeHints map[route.Vertex][]*channeldb.ChannelEdgePolicy,
	finalExpiry uint16) (*route.Route, error) {

	log.Debugf("Searching for path to %v, sending %v", target, amt)

	// We'll attempt to obtain a set of bandwidth hints that can help us
	// eliminate certain routes early on in the path finding process.
	bandwidthHints, err := generateBandwidthHints(
		r.selfNode, r.cfg.QueryBandwidth,
	)
	if err != nil {
		return nil, err
	}

	// We'll fetch the current block height so we can properly calculate
	// the required HTLC time locks within the route.
	_, currentHeight, err := r.cfg.Chain.GetBestBlock()
	if err != nil {
		return nil, err
	}

	// Now that we know the destination is reachable within the graph,
	// we'll execute our path finding algorithm.
	finalHtlcExpiry := currentHeight + int32(finalExpiry)

	routingTx, err := newDbRoutingTx(r.cfg.Graph)
	if err != nil {
		return nil, err
	}
	defer func() {
		err := routingTx.close()
		if err != nil {
			log.Errorf("Error closing db tx: %v", err)
		}
	}()

	path, err := findPath(
		&graphParams{
			additionalEdges: routeHints,
			bandwidthHints:  bandwidthHints,
			graph:           routingTx,
		},
		restrictions,
		&r.cfg.PathFindingConfig,
		source, target, amt, finalHtlcExpiry,
	)
	if err != nil {
		return nil, err
	}

	// Create the route with absolute time lock values.
	route, err := newRoute(
		source, path, uint32(currentHeight),
		finalHopParams{
			amt:       amt,
			totalAmt:  amt,
			cltvDelta: finalExpiry,
			records:   destCustomRecords,
		},
	)
	if err != nil {
		return nil, err
	}

	go log.Tracef("Obtained path to send %v to %x: %v",
		amt, target, newLogClosure(func() string {
			return spew.Sdump(route)
		}),
	)

	return route, nil
}
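
// Example: a minimal sketch of calling FindRoute. The source and target
// vertices, the restriction set, and the final CLTV delta below are
// illustrative placeholders rather than values taken from this file:
//
//	amt := lnwire.NewMSatFromSatoshis(10000)
//	restrictions := &RestrictParams{
//		// Populate fee and CLTV restrictions as needed.
//	}
//	route, err := r.FindRoute(
//		sourceVertex, targetVertex, amt,
//		restrictions, nil, nil, uint16(40),
//	)
//	if err != nil {
//		return err
//	}
//	log.Debugf("found %d-hop route", len(route.Hops))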

// generateNewSessionKey generates a new ephemeral private key to be used for
// a payment attempt.
func generateNewSessionKey() (*btcec.PrivateKey, error) {
	// Generate a new random session key to ensure that we don't trigger
	// any replay.
	//
	// TODO(roasbeef): add more sources of randomness?
	return btcec.NewPrivateKey(btcec.S256())
}
|
|
|
|
|
2017-02-02 05:29:46 +03:00
|
|
|
// generateSphinxPacket generates then encodes a sphinx packet which encodes
|
|
|
|
// the onion route specified by the passed layer 3 route. The blob returned
|
|
|
|
// from this function can immediately be included within an HTLC add packet to
|
|
|
|
// be sent to the first hop within the route.
|
2019-05-16 16:27:29 +03:00
|
|
|
func generateSphinxPacket(rt *route.Route, paymentHash []byte,
|
|
|
|
sessionKey *btcec.PrivateKey) ([]byte, *sphinx.Circuit, error) {
|
2018-07-30 23:40:56 +03:00
|
|
|
|
2019-01-11 07:03:07 +03:00
|
|
|
// Now that we know we have an actual route, we'll map the route into a
|
|
|
|
// sphinx payument path which includes per-hop paylods for each hop
|
|
|
|
// that give each node within the route the necessary information
|
|
|
|
// (fees, CLTV value, etc) to properly forward the payment.
|
|
|
|
sphinxPath, err := rt.ToSphinxPath()
|
|
|
|
if err != nil {
|
|
|
|
return nil, nil, err
|
2017-02-02 05:29:46 +03:00
|
|
|
}
|
|
|
|
|
2017-06-16 23:42:55 +03:00
|
|
|
log.Tracef("Constructed per-hop payloads for payment_hash=%x: %v",
|
2018-07-01 01:14:22 +03:00
|
|
|
paymentHash[:], newLogClosure(func() string {
|
2019-06-12 13:19:43 +03:00
|
|
|
path := make([]sphinx.OnionHop, sphinxPath.TrueRouteLength())
|
2019-05-13 20:55:15 +03:00
|
|
|
for i := range path {
|
2019-06-12 13:19:43 +03:00
|
|
|
hopCopy := sphinxPath[i]
|
|
|
|
hopCopy.NodePub.Curve = nil
|
|
|
|
path[i] = hopCopy
|
2019-05-13 20:55:15 +03:00
|
|
|
}
|
|
|
|
return spew.Sdump(path)
|
2018-07-01 01:14:22 +03:00
|
|
|
}),
|
|
|
|
)
|
2017-02-02 05:29:46 +03:00
|
|
|
|
|
|
|
// Next generate the onion routing packet which allows us to perform
|
|
|
|
// privacy preserving source routing across the network.
|
2018-07-01 01:14:22 +03:00
|
|
|
sphinxPacket, err := sphinx.NewOnionPacket(
|
2019-01-11 07:03:07 +03:00
|
|
|
sphinxPath, sessionKey, paymentHash,
|
2020-01-07 05:28:54 +03:00
|
|
|
sphinx.DeterministicPacketFiller,
|
2018-07-01 01:14:22 +03:00
|
|
|
)
|
2017-02-02 05:29:46 +03:00
|
|
|
if err != nil {
|
2017-06-29 16:40:45 +03:00
|
|
|
return nil, nil, err
|
2017-02-02 05:29:46 +03:00
|
|
|
}
|
|
|
|
|
2018-02-07 06:13:07 +03:00
|
|
|
// Finally, encode Sphinx packet using its wire representation to be
|
2017-02-02 05:29:46 +03:00
|
|
|
// included within the HTLC add packet.
|
|
|
|
var onionBlob bytes.Buffer
|
|
|
|
if err := sphinxPacket.Encode(&onionBlob); err != nil {
|
2017-06-29 16:40:45 +03:00
|
|
|
return nil, nil, err
|
2017-02-02 05:29:46 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
log.Tracef("Generated sphinx packet: %v",
|
|
|
|
newLogClosure(func() string {
|
2019-06-12 13:19:43 +03:00
|
|
|
// We make a copy of the ephemeral key and unset the
|
|
|
|
// internal curve here in order to keep the logs from
|
|
|
|
// getting noisy.
|
|
|
|
key := *sphinxPacket.EphemeralKey
|
|
|
|
key.Curve = nil
|
|
|
|
packetCopy := *sphinxPacket
|
|
|
|
packetCopy.EphemeralKey = &key
|
|
|
|
return spew.Sdump(packetCopy)
|
2017-02-02 05:29:46 +03:00
|
|
|
}),
|
|
|
|
)
|
|
|
|
|
2017-06-29 16:40:45 +03:00
|
|
|
return onionBlob.Bytes(), &sphinx.Circuit{
|
|
|
|
SessionKey: sessionKey,
|
2019-01-11 07:03:07 +03:00
|
|
|
PaymentPath: sphinxPath.NodeKeys(),
|
2017-06-29 16:40:45 +03:00
|
|
|
}, nil
|
2017-02-02 05:29:46 +03:00
|
|
|
}
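// Illustrative usage sketch (not part of the original file): a caller that
// already holds a *route.Route could combine the two helpers above roughly as
// follows. The variables rt and payHash are hypothetical.
//
//	sessionKey, err := generateNewSessionKey()
//	if err != nil {
//		return err
//	}
//	onionBlob, circuit, err := generateSphinxPacket(rt, payHash[:], sessionKey)
//	if err != nil {
//		return err
//	}
//	// onionBlob can now be placed into the HTLC add message for the first
//	// hop, while circuit retains the session key and per-hop keys for this
//	// attempt.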
|
|
|
|
|
|
|
|
// LightningPayment describes a payment to be sent through the network to the
|
|
|
|
// final destination.
|
|
|
|
type LightningPayment struct {
|
|
|
|
// Target is the node that the payment should be routed towards.
|
2019-04-05 18:36:11 +03:00
|
|
|
Target route.Vertex
|
2017-02-02 05:29:46 +03:00
|
|
|
|
2017-03-09 01:27:46 +03:00
|
|
|
// Amount is the value of the payment to send through the network in
|
2017-08-22 09:43:20 +03:00
|
|
|
// milli-satoshis.
|
|
|
|
Amount lnwire.MilliSatoshi
|
2017-02-02 05:29:46 +03:00
|
|
|
|
2018-04-19 17:26:41 +03:00
|
|
|
// FeeLimit is the maximum fee in millisatoshis that the payment should
|
|
|
|
// accept when sending it through the network. The payment will fail
|
|
|
|
// if there isn't a route with lower fees than this limit.
|
|
|
|
FeeLimit lnwire.MilliSatoshi
|
2018-02-01 01:36:10 +03:00
|
|
|
|
2019-02-13 13:53:32 +03:00
|
|
|
// CltvLimit is the maximum time lock that is allowed for attempts to
|
|
|
|
// complete this payment.
|
2019-10-11 22:46:10 +03:00
|
|
|
CltvLimit uint32
|
2019-02-13 13:53:32 +03:00
|
|
|
|
2021-03-31 13:23:08 +03:00
|
|
|
// paymentHash is the r-hash value to use within the HTLC extended to
|
|
|
|
// the first hop. This won't be set for AMP payments.
|
|
|
|
paymentHash *lntypes.Hash
|
2017-02-02 05:29:46 +03:00
|
|
|
|
2021-04-12 16:05:48 +03:00
|
|
|
// amp is an optional field that is set if and only if this is an AMP
|
|
|
|
// payment.
|
|
|
|
amp *AMPOptions
|
|
|
|
|
2017-10-19 08:00:03 +03:00
|
|
|
// FinalCLTVDelta is the CLTV expiry delta to use for the _final_ hop
|
|
|
|
// in the route. This means that the final hop will have a CLTV delta
|
2019-04-18 10:45:21 +03:00
|
|
|
// of at least: currentHeight + FinalCLTVDelta.
|
|
|
|
FinalCLTVDelta uint16
|
2017-10-19 08:00:03 +03:00
|
|
|
|
2018-03-20 04:44:41 +03:00
|
|
|
// PayAttemptTimeout is a timeout value that we'll use to determine
|
|
|
|
// when we should abandon the payment attempt after consecutive
|
|
|
|
// payment failures. This prevents us from attempting to send a payment
|
2019-06-07 12:27:55 +03:00
|
|
|
// indefinitely. A zero value means the payment will never time out.
|
|
|
|
//
|
2019-05-23 21:05:29 +03:00
|
|
|
// TODO(halseth): make wallclock time to allow resume after startup.
|
2018-03-20 04:44:41 +03:00
|
|
|
PayAttemptTimeout time.Duration
|
|
|
|
|
2018-03-27 07:00:24 +03:00
|
|
|
// RouteHints represents the different routing hints that can be used to
|
|
|
|
// assist a payment in reaching its destination successfully. These
|
|
|
|
// hints will act as intermediate hops along the route.
|
|
|
|
//
|
|
|
|
// NOTE: This is optional unless required by the payment. When providing
|
|
|
|
// multiple routes, ensure the hop hints within each route are chained
|
|
|
|
// together and sorted in forward order in order to reach the
|
|
|
|
// destination successfully.
|
2019-02-19 11:09:01 +03:00
|
|
|
RouteHints [][]zpay32.HopHint
|
2018-03-27 07:00:24 +03:00
|
|
|
|
2020-05-07 12:48:39 +03:00
|
|
|
// OutgoingChannelIDs is the list of channels that are allowed for the
|
|
|
|
// first hop. If nil, any channel may be used.
|
|
|
|
OutgoingChannelIDs []uint64
|
2019-02-01 15:53:27 +03:00
|
|
|
|
2019-11-18 14:08:42 +03:00
|
|
|
// LastHop is the pubkey of the last node before the final destination
|
|
|
|
// is reached. If nil, any node may be used.
|
|
|
|
LastHop *route.Vertex
|
|
|
|
|
2019-12-19 10:56:59 +03:00
|
|
|
// DestFeatures specifies the set of features we assume the final node
|
|
|
|
// has for pathfinding. Typically these will be taken directly from an
|
|
|
|
// invoice, but they can also be manually supplied or assumed by the
|
|
|
|
// sender. If a nil feature vector is provided, the router will try to
|
|
|
|
// fallback to the graph in order to load a feature vector for a node in
|
|
|
|
// the public graph.
|
|
|
|
DestFeatures *lnwire.FeatureVector
|
|
|
|
|
|
|
|
// PaymentAddr is the payment address specified by the receiver. This
|
|
|
|
// field should be a random 32-byte nonce presented in the receiver's
|
|
|
|
// invoice to prevent probing of the destination.
|
|
|
|
PaymentAddr *[32]byte
|
|
|
|
|
2019-05-30 02:31:12 +03:00
|
|
|
// PaymentRequest is an optional payment request that this payment is
|
|
|
|
// attempting to complete.
|
|
|
|
PaymentRequest []byte
|
|
|
|
|
2019-12-11 12:52:27 +03:00
|
|
|
// DestCustomRecords are TLV records that are to be sent to the final
|
2019-07-31 07:41:58 +03:00
|
|
|
// hop in the new onion payload format. If the destination does not
|
|
|
|
// understand this new onion payload format, then the payment will
|
|
|
|
// fail.
|
2019-12-11 12:52:27 +03:00
|
|
|
DestCustomRecords record.CustomSet
|
2020-01-28 18:07:34 +03:00
|
|
|
|
2020-04-22 10:19:11 +03:00
|
|
|
// MaxParts is the maximum number of partial payments that may be used
|
2020-04-14 11:10:12 +03:00
|
|
|
// to complete the full amount.
|
2020-04-22 10:19:11 +03:00
|
|
|
MaxParts uint32
|
2021-02-12 05:01:21 +03:00
|
|
|
|
|
|
|
// MaxShardAmt is the largest shard that we'll attempt to split using.
|
|
|
|
// If this field is set, and we need to split, rather than attempting
|
|
|
|
// half of the original payment amount, we'll use this value if half
|
|
|
|
// the payment amount is greater than it.
|
|
|
|
//
|
|
|
|
// NOTE: This field is _optional_.
|
|
|
|
MaxShardAmt *lnwire.MilliSatoshi
|
2017-02-02 05:29:46 +03:00
|
|
|
}
|
|
|
|
|
2021-04-12 16:05:48 +03:00
|
|
|
// AMPOptions houses information that must be known in order to send an AMP
|
|
|
|
// payment.
|
|
|
|
type AMPOptions struct {
|
|
|
|
SetID [32]byte
|
|
|
|
RootShare [32]byte
|
|
|
|
}
|
|
|
|
|
2021-03-31 13:23:08 +03:00
|
|
|
// SetPaymentHash sets the given hash as the payment's overall hash. This
|
|
|
|
// should only be used for non-AMP payments.
|
|
|
|
func (l *LightningPayment) SetPaymentHash(hash lntypes.Hash) error {
|
2021-03-31 13:44:59 +03:00
|
|
|
if l.amp != nil {
|
|
|
|
return fmt.Errorf("cannot set payment hash for AMP payment")
|
|
|
|
}
|
|
|
|
|
2021-03-31 13:23:08 +03:00
|
|
|
l.paymentHash = &hash
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2021-03-31 13:44:59 +03:00
|
|
|
// SetAMP sets the given AMP options for the payment.
|
|
|
|
func (l *LightningPayment) SetAMP(amp *AMPOptions) error {
|
|
|
|
if l.paymentHash != nil {
|
|
|
|
return fmt.Errorf("cannot set amp options for payment " +
|
|
|
|
"with payment hash")
|
|
|
|
}
|
|
|
|
|
|
|
|
l.amp = amp
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2021-03-31 13:23:08 +03:00
|
|
|
// Identifier returns a 32-byte slice that uniquely identifies this single
|
|
|
|
// payment. For non-AMP payments this will be the payment hash, for AMP
|
|
|
|
// payments this will be the used SetID.
|
|
|
|
func (l *LightningPayment) Identifier() [32]byte {
|
2021-03-31 13:44:59 +03:00
|
|
|
if l.amp != nil {
|
|
|
|
return l.amp.SetID
|
|
|
|
}
|
|
|
|
|
2021-03-31 13:23:08 +03:00
|
|
|
return *l.paymentHash
|
|
|
|
}
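// The sketch below (an illustration, not part of the original file) shows how
// a non-AMP LightningPayment might be populated before being handed to
// SendPayment. destVertex, invoiceHash and router are hypothetical values.
//
//	payment := &LightningPayment{
//		Target:         destVertex,
//		Amount:         lnwire.MilliSatoshi(100_000),
//		FeeLimit:       lnwire.MilliSatoshi(1_000),
//		FinalCLTVDelta: 40,
//	}
//	if err := payment.SetPaymentHash(invoiceHash); err != nil {
//		// Setting a hash only fails if AMP options were set first.
//		return err
//	}
//	preimage, rt, err := router.SendPayment(payment)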
|
|
|
|
|
2017-02-02 05:29:46 +03:00
|
|
|
// SendPayment attempts to send a payment as described within the passed
|
|
|
|
// LightningPayment. This function is blocking and will return either: when the
|
|
|
|
// payment is successful, or all candidate routes have been attempted and
|
|
|
|
// resulted in a failed payment. If the payment succeeds, then a non-nil Route
|
|
|
|
// will be returned which describes the path the successful payment traversed
|
2017-02-21 10:57:43 +03:00
|
|
|
// within the network to reach the destination. Additionally, the payment
|
|
|
|
// preimage will also be returned.
|
2019-05-10 19:00:15 +03:00
|
|
|
func (r *ChannelRouter) SendPayment(payment *LightningPayment) ([32]byte,
|
|
|
|
*route.Route, error) {
|
|
|
|
|
2021-04-12 16:21:59 +03:00
|
|
|
paySession, shardTracker, err := r.preparePayment(payment)
|
2019-05-28 17:36:08 +03:00
|
|
|
if err != nil {
|
|
|
|
return [32]byte{}, nil, err
|
|
|
|
}
|
|
|
|
|
2020-04-01 01:13:22 +03:00
|
|
|
log.Tracef("Dispatching SendPayment for lightning payment: %v",
|
|
|
|
spewPayment(payment))
|
|
|
|
|
2019-05-28 17:36:08 +03:00
|
|
|
// Since this is the first time this payment is being made, we pass nil
|
|
|
|
// for the existing attempt.
|
2020-04-01 01:13:22 +03:00
|
|
|
return r.sendPayment(
|
2021-03-31 13:23:08 +03:00
|
|
|
payment.Amount, payment.FeeLimit, payment.Identifier(),
|
2021-04-12 16:21:59 +03:00
|
|
|
payment.PayAttemptTimeout, paySession, shardTracker,
|
2020-04-01 01:13:22 +03:00
|
|
|
)
|
2019-05-28 17:36:08 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
// SendPaymentAsync is the non-blocking version of SendPayment. The payment
|
|
|
|
// result needs to be retrieved via the control tower.
|
|
|
|
func (r *ChannelRouter) SendPaymentAsync(payment *LightningPayment) error {
|
2021-04-12 16:21:59 +03:00
|
|
|
paySession, shardTracker, err := r.preparePayment(payment)
|
2019-05-28 17:36:08 +03:00
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
// Since this is the first time this payment is being made, we pass nil
|
|
|
|
// for the existing attempt.
|
|
|
|
r.wg.Add(1)
|
|
|
|
go func() {
|
|
|
|
defer r.wg.Done()
|
|
|
|
|
2020-04-01 01:13:22 +03:00
|
|
|
log.Tracef("Dispatching SendPayment for lightning payment: %v",
|
|
|
|
spewPayment(payment))
|
|
|
|
|
|
|
|
_, _, err := r.sendPayment(
|
2021-03-31 13:23:08 +03:00
|
|
|
payment.Amount, payment.FeeLimit, payment.Identifier(),
|
2021-04-12 16:21:59 +03:00
|
|
|
payment.PayAttemptTimeout, paySession, shardTracker,
|
2020-04-01 01:13:22 +03:00
|
|
|
)
|
2019-05-28 17:36:08 +03:00
|
|
|
if err != nil {
|
2021-03-31 13:23:08 +03:00
|
|
|
log.Errorf("Payment %x failed: %v",
|
|
|
|
payment.Identifier(), err)
|
2019-05-28 17:36:08 +03:00
|
|
|
}
|
|
|
|
}()
|
|
|
|
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2020-04-01 01:13:22 +03:00
|
|
|
// spewPayment returns a log closure that provides a spewed string
|
|
|
|
// representation of the passed payment.
|
|
|
|
func spewPayment(payment *LightningPayment) logClosure {
|
|
|
|
return newLogClosure(func() string {
|
|
|
|
// Make a copy of the payment with a nilled Curve
|
|
|
|
// before spewing.
|
|
|
|
var routeHints [][]zpay32.HopHint
|
|
|
|
for _, routeHint := range payment.RouteHints {
|
|
|
|
var hopHints []zpay32.HopHint
|
|
|
|
for _, hopHint := range routeHint {
|
|
|
|
h := hopHint.Copy()
|
|
|
|
h.NodeID.Curve = nil
|
|
|
|
hopHints = append(hopHints, h)
|
|
|
|
}
|
|
|
|
routeHints = append(routeHints, hopHints)
|
|
|
|
}
|
|
|
|
p := *payment
|
|
|
|
p.RouteHints = routeHints
|
|
|
|
return spew.Sdump(p)
|
|
|
|
})
|
|
|
|
}
|
|
|
|
|
2019-05-28 17:36:08 +03:00
|
|
|
// preparePayment creates the payment session and registers the payment with the
|
|
|
|
// control tower.
|
|
|
|
func (r *ChannelRouter) preparePayment(payment *LightningPayment) (
|
2021-04-12 16:21:59 +03:00
|
|
|
PaymentSession, shards.ShardTracker, error) {
|
2019-05-28 17:36:08 +03:00
|
|
|
|
2018-01-30 11:03:29 +03:00
|
|
|
// Before starting the HTLC routing attempt, we'll create a fresh
|
|
|
|
// payment session which will report our errors back to mission
|
|
|
|
// control.
|
2020-04-01 01:13:22 +03:00
|
|
|
paySession, err := r.cfg.SessionSource.NewPaymentSession(payment)
|
2018-01-30 11:03:29 +03:00
|
|
|
if err != nil {
|
2021-04-12 16:21:59 +03:00
|
|
|
return nil, nil, err
|
2018-01-30 11:03:29 +03:00
|
|
|
}
|
|
|
|
|
2019-05-23 21:05:29 +03:00
|
|
|
// Record this payment hash with the ControlTower, ensuring it is not
|
|
|
|
// already in-flight.
|
2019-07-31 07:41:58 +03:00
|
|
|
//
|
|
|
|
// TODO(roasbeef): store records as part of creation info?
|
2019-05-23 21:05:29 +03:00
|
|
|
info := &channeldb.PaymentCreationInfo{
|
2021-03-31 13:23:08 +03:00
|
|
|
PaymentIdentifier: payment.Identifier(),
|
|
|
|
Value: payment.Amount,
|
|
|
|
CreationTime: r.cfg.Clock.Now(),
|
|
|
|
PaymentRequest: payment.PaymentRequest,
|
2019-05-23 21:05:29 +03:00
|
|
|
}
|
|
|
|
|
2021-04-12 16:21:59 +03:00
|
|
|
// Create a new ShardTracker that we'll use during the life cycle of
|
|
|
|
// this payment.
|
2021-04-12 16:05:48 +03:00
|
|
|
var shardTracker shards.ShardTracker
|
|
|
|
switch {
|
|
|
|
|
|
|
|
// If this is an AMP payment, we'll use the AMP shard tracker.
|
|
|
|
case payment.amp != nil:
|
|
|
|
shardTracker = amp.NewShardTracker(
|
|
|
|
payment.amp.RootShare, payment.amp.SetID,
|
|
|
|
*payment.PaymentAddr, payment.Amount,
|
|
|
|
)
|
|
|
|
|
|
|
|
// Otherwise we'll use the simple tracker that will map each attempt to
|
|
|
|
// the same payment hash.
|
|
|
|
default:
|
|
|
|
shardTracker = shards.NewSimpleShardTracker(
|
2021-03-31 13:23:08 +03:00
|
|
|
payment.Identifier(), nil,
|
2021-04-12 16:05:48 +03:00
|
|
|
)
|
|
|
|
}
|
2021-04-12 16:21:59 +03:00
|
|
|
|
2021-03-31 13:23:08 +03:00
|
|
|
err = r.cfg.Control.InitPayment(payment.Identifier(), info)
|
2019-05-23 21:05:29 +03:00
|
|
|
if err != nil {
|
2021-04-12 16:21:59 +03:00
|
|
|
return nil, nil, err
|
2019-05-23 21:05:29 +03:00
|
|
|
}
|
|
|
|
|
2021-04-12 16:21:59 +03:00
|
|
|
return paySession, shardTracker, nil
|
2018-01-30 11:03:29 +03:00
|
|
|
}
|
|
|
|
|
2018-08-08 12:09:30 +03:00
|
|
|
// SendToRoute attempts to send a payment with the given hash through the
|
2020-05-06 16:36:51 +03:00
|
|
|
// provided route. This function is blocking and will return the attempt
|
|
|
|
// information as it is stored in the database. For a successful htlc, this
|
|
|
|
// information will contain the preimage. If an error occurs after the attempt
|
|
|
|
// was initiated, both return values will be non-nil.
|
2021-03-31 13:23:08 +03:00
|
|
|
func (r *ChannelRouter) SendToRoute(htlcHash lntypes.Hash, rt *route.Route) (
|
2020-05-06 16:36:51 +03:00
|
|
|
*channeldb.HTLCAttempt, error) {
|
2018-01-30 11:03:29 +03:00
|
|
|
|
2019-06-04 11:13:33 +03:00
|
|
|
// Calculate amount paid to receiver.
|
2020-04-01 01:13:24 +03:00
|
|
|
amt := rt.ReceiverAmt()
|
2018-08-08 12:09:30 +03:00
|
|
|
|
2020-04-01 01:13:27 +03:00
|
|
|
// If this is meant as a MP payment shard, we set the amount
|
|
|
|
// for the creating info to the total amount of the payment.
|
|
|
|
finalHop := rt.Hops[len(rt.Hops)-1]
|
|
|
|
mpp := finalHop.MPP
|
|
|
|
if mpp != nil {
|
|
|
|
amt = mpp.TotalMsat()
|
|
|
|
}
|
|
|
|
|
2021-03-31 13:23:08 +03:00
|
|
|
// For non-AMP payments the overall payment identifier will be the same
|
|
|
|
// hash as used for this HTLC.
|
|
|
|
paymentIdentifier := htlcHash
|
|
|
|
|
2021-04-08 22:07:15 +03:00
|
|
|
// For AMP-payments, we'll use the setID as the unique ID for the
|
|
|
|
// overall payment.
|
|
|
|
amp := finalHop.AMP
|
|
|
|
if amp != nil {
|
|
|
|
paymentIdentifier = amp.SetID()
|
|
|
|
}
|
|
|
|
|
2019-05-23 21:05:29 +03:00
|
|
|
// Record this payment hash with the ControlTower, ensuring it is not
|
|
|
|
// already in-flight.
|
|
|
|
info := &channeldb.PaymentCreationInfo{
|
2021-03-31 13:23:08 +03:00
|
|
|
PaymentIdentifier: paymentIdentifier,
|
|
|
|
Value: amt,
|
|
|
|
CreationTime: r.cfg.Clock.Now(),
|
|
|
|
PaymentRequest: nil,
|
2019-05-23 21:05:29 +03:00
|
|
|
}
|
|
|
|
|
2021-03-31 13:23:08 +03:00
|
|
|
err := r.cfg.Control.InitPayment(paymentIdentifier, info)
|
2020-04-01 01:13:27 +03:00
|
|
|
switch {
|
|
|
|
// If this is an MPP attempt and the hash is already registered with
|
|
|
|
// the database, we can go on to launch the shard.
|
|
|
|
case err == channeldb.ErrPaymentInFlight && mpp != nil:
|
|
|
|
|
|
|
|
// Any other error is not tolerated.
|
|
|
|
case err != nil:
|
2020-05-06 16:36:51 +03:00
|
|
|
return nil, err
|
2019-05-23 21:05:29 +03:00
|
|
|
}
|
2018-08-08 12:09:30 +03:00
|
|
|
|
2021-03-31 13:23:08 +03:00
|
|
|
log.Tracef("Dispatching SendToRoute for HTLC hash %v: %v",
|
|
|
|
htlcHash, newLogClosure(func() string {
|
2020-04-01 01:13:24 +03:00
|
|
|
return spew.Sdump(rt)
|
2020-04-01 01:13:22 +03:00
|
|
|
}),
|
|
|
|
)
|
2019-06-04 11:13:33 +03:00
|
|
|
|
2021-04-12 16:21:59 +03:00
|
|
|
// Since the HTLC hashes and preimages are specified manually over the
|
|
|
|
// RPC for SendToRoute requests, we don't have to worry about creating
|
|
|
|
// a ShardTracker that can generate hashes for AMP payments. Instead we
|
|
|
|
// create a simple tracker that can just return the hash for the single
|
|
|
|
// shard we'll now launch.
|
2021-03-31 13:23:08 +03:00
|
|
|
shardTracker := shards.NewSimpleShardTracker(htlcHash, nil)
|
2021-04-12 16:21:59 +03:00
|
|
|
|
2020-04-01 01:13:24 +03:00
|
|
|
// Launch a shard along the given route.
|
|
|
|
sh := &shardHandler{
|
2021-04-12 16:21:59 +03:00
|
|
|
router: r,
|
2021-03-31 13:23:08 +03:00
|
|
|
identifier: paymentIdentifier,
|
2021-04-12 16:21:59 +03:00
|
|
|
shardTracker: shardTracker,
|
2020-04-01 01:13:24 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
var shardError error
|
2021-04-12 16:21:59 +03:00
|
|
|
attempt, outcome, err := sh.launchShard(rt, false)
|
2020-04-01 01:13:24 +03:00
|
|
|
|
|
|
|
// With SendToRoute, it can happen that the route exceeds protocol
|
|
|
|
// constraints. Mark the payment as failed with an internal error.
|
|
|
|
if err == route.ErrMaxRouteHopsExceeded ||
|
|
|
|
err == sphinx.ErrMaxRoutingInfoSizeExceeded {
|
|
|
|
|
|
|
|
log.Debugf("Invalid route provided for payment %x: %v",
|
2021-03-31 13:23:08 +03:00
|
|
|
paymentIdentifier, err)
|
2020-04-01 01:13:24 +03:00
|
|
|
|
|
|
|
controlErr := r.cfg.Control.Fail(
|
2021-03-31 13:23:08 +03:00
|
|
|
paymentIdentifier, channeldb.FailureReasonError,
|
2020-04-01 01:13:24 +03:00
|
|
|
)
|
|
|
|
if controlErr != nil {
|
2020-05-06 16:36:51 +03:00
|
|
|
return nil, controlErr
|
2020-04-01 01:13:24 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// In any case, don't continue if there is an error.
|
2019-05-23 22:17:16 +03:00
|
|
|
if err != nil {
|
2020-05-06 16:36:51 +03:00
|
|
|
return nil, err
|
2020-04-01 01:13:24 +03:00
|
|
|
}
|
|
|
|
|
2020-05-06 16:36:51 +03:00
|
|
|
var htlcAttempt *channeldb.HTLCAttempt
|
2020-04-01 01:13:24 +03:00
|
|
|
switch {
|
|
|
|
// Failed to launch shard.
|
|
|
|
case outcome.err != nil:
|
|
|
|
shardError = outcome.err
|
2020-05-06 16:36:51 +03:00
|
|
|
htlcAttempt = outcome.attempt
|
2019-05-23 22:17:16 +03:00
|
|
|
|
2020-04-01 01:13:24 +03:00
|
|
|
// Shard successfully launched, wait for the result to be available.
|
|
|
|
default:
|
|
|
|
result, err := sh.collectResult(attempt)
|
|
|
|
if err != nil {
|
2020-05-06 16:36:51 +03:00
|
|
|
return nil, err
|
2019-05-23 22:17:16 +03:00
|
|
|
}
|
|
|
|
|
2020-04-01 01:13:24 +03:00
|
|
|
// We got a successful result.
|
|
|
|
if result.err == nil {
|
2020-05-06 16:36:51 +03:00
|
|
|
return result.attempt, nil
|
2020-04-01 01:13:24 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
// The shard failed, break switch to handle it.
|
|
|
|
shardError = result.err
|
2020-05-06 16:36:51 +03:00
|
|
|
htlcAttempt = result.attempt
|
2020-04-01 01:13:24 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
// Since for SendToRoute we won't retry in case the shard fails, we'll
|
|
|
|
// mark the payment failed with the control tower immediately. Process
|
|
|
|
// the error to check if it maps into a terminal error code, if not use
|
|
|
|
// a generic NO_ROUTE error.
|
|
|
|
reason := r.processSendError(
|
|
|
|
attempt.AttemptID, &attempt.Route, shardError,
|
|
|
|
)
|
|
|
|
if reason == nil {
|
|
|
|
r := channeldb.FailureReasonNoRoute
|
|
|
|
reason = &r
|
|
|
|
}
|
|
|
|
|
2021-03-31 13:23:08 +03:00
|
|
|
err = r.cfg.Control.Fail(paymentIdentifier, *reason)
|
2020-04-01 01:13:24 +03:00
|
|
|
if err != nil {
|
2020-05-06 16:36:51 +03:00
|
|
|
return nil, err
|
2019-05-23 22:17:16 +03:00
|
|
|
}
|
|
|
|
|
2020-05-06 16:36:51 +03:00
|
|
|
return htlcAttempt, shardError
|
2018-01-30 11:03:29 +03:00
|
|
|
}
|
|
|
|
|
2020-04-01 01:13:22 +03:00
|
|
|
// sendPayment attempts to send a payment to the passed payment hash. This
|
|
|
|
// function is blocking and will return either: when the payment is successful,
|
|
|
|
// or all candidate routes have been attempted and resulted in a failed
|
|
|
|
// payment. If the payment succeeds, then a non-nil Route will be returned
|
|
|
|
// which describes the path the successful payment traversed within the network
|
|
|
|
// to reach the destination. Additionally, the payment preimage will also be
|
|
|
|
// returned.
|
2019-05-23 21:05:29 +03:00
|
|
|
//
|
|
|
|
// The existing attempt argument should be set to nil if this is a payment that
|
|
|
|
// hasn't had any payment attempt sent to the switch yet. If it has had an
|
|
|
|
// attempt already, it should be passed such that the result can be retrieved.
|
|
|
|
//
|
|
|
|
// This method relies on the ControlTower's internal payment state machine to
|
|
|
|
// carry out its execution. After restarts it is safe, and assumed, that the
|
|
|
|
// router will call this method for every payment still in-flight according to
|
|
|
|
// the ControlTower.
|
|
|
|
func (r *ChannelRouter) sendPayment(
|
2021-03-31 13:23:08 +03:00
|
|
|
totalAmt, feeLimit lnwire.MilliSatoshi, identifier lntypes.Hash,
|
2021-04-12 16:21:59 +03:00
|
|
|
timeout time.Duration, paySession PaymentSession,
|
|
|
|
shardTracker shards.ShardTracker) ([32]byte, *route.Route, error) {
|
2017-03-21 04:58:21 +03:00
|
|
|
|
routing: modify SendPayment loop to be lazy, iterative, and use missionControl
In this commit we modify the SendPayment loop to optimize for
time-to-first-payment-success-or-failure. The prior logic would first
attempt to find at least 100 routes to the destination, then
iteratively prune them away as errors were encountered. In this commit,
we modify this approach to instead take a lazy approach: we first find
the current “best” path, attempt to send to that, and if an error
occurs we prune a section of the graph by reporting to missionControl,
then continue.
With this new approach, if the first known path has sufficient
capacity, and is available, then the payment speed is greatly improved
from the PoV of users. Additionally, we avoid the excessive computation
of crawling most of the graph in the k-shortest paths loop. With the
decay on missionControl, all routes will now feed information into the
central knowledge hung, allowing all payments to iteratively find out
the inactive portions of the payment graph.
2017-10-17 05:05:39 +03:00
|
|
|
// We'll also fetch the current block height so we can properly
|
|
|
|
// calculate the required HTLC time locks within the route.
|
|
|
|
_, currentHeight, err := r.cfg.Chain.GetBestBlock()
|
2017-10-03 07:58:34 +03:00
|
|
|
if err != nil {
|
2018-01-30 11:03:29 +03:00
|
|
|
return [32]byte{}, nil, err
|
2017-02-02 05:29:46 +03:00
|
|
|
}
|
|
|
|
|
2019-05-23 21:05:29 +03:00
|
|
|
// Now set up a paymentLifecycle struct with these params, such that we
|
|
|
|
// can resume the payment from the current state.
|
|
|
|
p := &paymentLifecycle{
|
2020-04-01 01:13:22 +03:00
|
|
|
router: r,
|
2020-04-01 01:13:22 +03:00
|
|
|
totalAmount: totalAmt,
|
|
|
|
feeLimit: feeLimit,
|
2021-03-31 13:23:08 +03:00
|
|
|
identifier: identifier,
|
2020-04-01 01:13:22 +03:00
|
|
|
paySession: paySession,
|
2021-04-12 16:21:59 +03:00
|
|
|
shardTracker: shardTracker,
|
2020-04-01 01:13:22 +03:00
|
|
|
currentHeight: currentHeight,
|
2019-05-16 16:27:29 +03:00
|
|
|
}
|
2019-05-16 16:27:28 +03:00
|
|
|
|
2019-06-07 12:27:55 +03:00
|
|
|
// If a timeout is specified, create a timeout channel. If no timeout is
|
|
|
|
// specified, the channel is left nil and will never abort the payment
|
|
|
|
// loop.
|
2020-04-01 01:13:22 +03:00
|
|
|
if timeout != 0 {
|
|
|
|
p.timeoutChan = time.After(timeout)
|
2019-06-07 12:27:55 +03:00
|
|
|
}
|
|
|
|
|
2019-05-23 21:05:29 +03:00
|
|
|
return p.resumePayment()
|
|
|
|
|
2019-01-30 16:20:51 +03:00
|
|
|
}
|
2018-03-20 05:08:55 +03:00
|
|
|
|
2019-06-26 09:39:34 +03:00
|
|
|
// tryApplyChannelUpdate tries to apply a channel update present in the failure
|
|
|
|
// message if any.
|
|
|
|
func (r *ChannelRouter) tryApplyChannelUpdate(rt *route.Route,
|
|
|
|
errorSourceIdx int, failure lnwire.FailureMessage) error {
|
|
|
|
|
|
|
|
// It makes no sense to apply our own channel updates.
|
|
|
|
if errorSourceIdx == 0 {
|
|
|
|
log.Errorf("Channel update of ourselves received")
|
|
|
|
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
|
|
|
// Extract channel update if the error contains one.
|
|
|
|
update := r.extractChannelUpdate(failure)
|
|
|
|
if update == nil {
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
|
|
|
// Parse pubkey to allow validation of the channel update. This should
|
|
|
|
// always succeed, otherwise there is something wrong in our
|
|
|
|
// implementation. Therefore return an error.
|
|
|
|
errVertex := rt.Hops[errorSourceIdx-1].PubKeyBytes
|
|
|
|
errSource, err := btcec.ParsePubKey(
|
|
|
|
errVertex[:], btcec.S256(),
|
|
|
|
)
|
|
|
|
if err != nil {
|
|
|
|
log.Errorf("Cannot parse pubkey: idx=%v, pubkey=%v",
|
|
|
|
errorSourceIdx, errVertex)
|
|
|
|
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
// Apply channel update.
|
|
|
|
if !r.applyChannelUpdate(update, errSource) {
|
2019-11-06 02:04:24 +03:00
|
|
|
log.Debugf("Invalid channel update received: node=%v",
|
2019-06-26 09:39:34 +03:00
|
|
|
errVertex)
|
|
|
|
}
|
|
|
|
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2019-01-30 16:20:51 +03:00
|
|
|
// processSendError analyzes the error for the payment attempt received from the
|
|
|
|
// switch and updates mission control and/or channel policies. Depending on the
|
|
|
|
// error type, this error is either the final outcome of the payment or we need
|
2020-11-24 16:16:03 +03:00
|
|
|
// to continue with an alternative route. A final outcome is indicated by a
|
|
|
|
// non-nil return value.
|
2021-04-07 16:03:54 +03:00
|
|
|
func (r *ChannelRouter) processSendError(attemptID uint64, rt *route.Route,
|
2019-08-05 13:13:58 +03:00
|
|
|
sendErr error) *channeldb.FailureReason {
|
2019-06-19 12:12:10 +03:00
|
|
|
|
2019-08-05 13:13:58 +03:00
|
|
|
internalErrorReason := channeldb.FailureReasonError
|
|
|
|
|
|
|
|
reportFail := func(srcIdx *int,
|
|
|
|
msg lnwire.FailureMessage) *channeldb.FailureReason {
|
2019-06-26 14:00:35 +03:00
|
|
|
|
|
|
|
// Report outcome to mission control.
|
2019-08-05 13:13:58 +03:00
|
|
|
reason, err := r.cfg.MissionControl.ReportPaymentFail(
|
2021-04-07 16:03:54 +03:00
|
|
|
attemptID, rt, srcIdx, msg,
|
2019-06-26 14:00:35 +03:00
|
|
|
)
|
|
|
|
if err != nil {
|
|
|
|
log.Errorf("Error reporting payment result to mc: %v",
|
|
|
|
err)
|
|
|
|
|
2019-08-05 13:13:58 +03:00
|
|
|
return &internalErrorReason
|
2019-06-26 14:00:35 +03:00
|
|
|
}
|
|
|
|
|
2019-08-05 13:13:58 +03:00
|
|
|
return reason
|
2019-06-26 14:00:35 +03:00
|
|
|
}
|
|
|
|
|
2019-06-19 12:12:10 +03:00
|
|
|
if sendErr == htlcswitch.ErrUnreadableFailureMessage {
|
2019-06-26 12:34:25 +03:00
|
|
|
log.Tracef("Unreadable failure when sending htlc")
|
2019-06-19 12:12:10 +03:00
|
|
|
|
2019-06-26 14:00:35 +03:00
|
|
|
return reportFail(nil, nil)
|
2019-06-26 12:34:25 +03:00
|
|
|
}
|
2020-01-14 16:07:42 +03:00
|
|
|
|
|
|
|
// If the error is a ClearTextError, we have received a valid wire
|
|
|
|
// failure message, either from our own outgoing link or from a node
|
|
|
|
// down the route. If the error is not related to the propagation of
|
|
|
|
// our payment, we can stop trying because an internal error has
|
|
|
|
// occurred.
|
|
|
|
rtErr, ok := sendErr.(htlcswitch.ClearTextError)
|
2019-06-19 12:12:10 +03:00
|
|
|
if !ok {
|
2019-08-05 13:13:58 +03:00
|
|
|
return &internalErrorReason
|
2019-06-19 12:12:10 +03:00
|
|
|
}
|
2018-03-20 05:08:55 +03:00
|
|
|
|
2020-01-14 16:07:42 +03:00
|
|
|
// failureSourceIdx is the index of the node that the failure occurred
|
|
|
|
// at. If the ClearTextError received is not a ForwardingError the
|
|
|
|
// payment error occurred at our node, so we leave this value as 0
|
|
|
|
// to indicate that the failure occurred locally. If the error is a
|
|
|
|
// ForwardingError, it did not originate at our node, so we set
|
|
|
|
// failureSourceIdx to the index of the node where the failure occurred.
|
|
|
|
failureSourceIdx := 0
|
|
|
|
source, ok := rtErr.(*htlcswitch.ForwardingError)
|
|
|
|
if ok {
|
|
|
|
failureSourceIdx = source.FailureSourceIdx
|
|
|
|
}
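// For illustration (not part of the original file): given a route
// self -> A -> B, an error raised by our own outgoing link keeps
// failureSourceIdx at 0, a failure forwarded back by A yields
// failureSourceIdx == 1, and one from the final node B yields 2.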
|
2017-10-03 08:03:18 +03:00
|
|
|
|
2020-01-14 16:07:42 +03:00
|
|
|
// Extract the wire failure and apply channel update if it contains one.
|
|
|
|
// If we received an unknown failure message from a node along the
|
|
|
|
// route, the failure message will be nil.
|
|
|
|
failureMessage := rtErr.WireMessage()
|
2019-06-26 09:39:34 +03:00
|
|
|
if failureMessage != nil {
|
|
|
|
err := r.tryApplyChannelUpdate(
|
|
|
|
rt, failureSourceIdx, failureMessage,
|
|
|
|
)
|
|
|
|
if err != nil {
|
2019-08-05 13:13:58 +03:00
|
|
|
return &internalErrorReason
|
2019-06-26 09:39:34 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-06-26 12:34:25 +03:00
|
|
|
log.Tracef("Node=%v reported failure when sending htlc",
|
|
|
|
failureSourceIdx)
|
|
|
|
|
2019-06-26 14:00:35 +03:00
|
|
|
return reportFail(&failureSourceIdx, failureMessage)
|
2017-10-03 08:03:18 +03:00
|
|
|
}
|
|
|
|
|
2019-06-26 09:39:34 +03:00
|
|
|
// extractChannelUpdate examines the error and extracts the channel update.
|
|
|
|
func (r *ChannelRouter) extractChannelUpdate(
|
|
|
|
failure lnwire.FailureMessage) *lnwire.ChannelUpdate {
|
|
|
|
|
|
|
|
var update *lnwire.ChannelUpdate
|
|
|
|
switch onionErr := failure.(type) {
|
|
|
|
case *lnwire.FailExpiryTooSoon:
|
|
|
|
update = &onionErr.Update
|
|
|
|
case *lnwire.FailAmountBelowMinimum:
|
|
|
|
update = &onionErr.Update
|
|
|
|
case *lnwire.FailFeeInsufficient:
|
|
|
|
update = &onionErr.Update
|
|
|
|
case *lnwire.FailIncorrectCltvExpiry:
|
|
|
|
update = &onionErr.Update
|
|
|
|
case *lnwire.FailChannelDisabled:
|
|
|
|
update = &onionErr.Update
|
|
|
|
case *lnwire.FailTemporaryChannelFailure:
|
|
|
|
update = onionErr.Update
|
|
|
|
}
|
|
|
|
|
|
|
|
return update
|
|
|
|
}
|
|
|
|
|
2018-08-16 22:35:59 +03:00
|
|
|
// applyChannelUpdate validates a channel update and if valid, applies it to the
|
2018-10-24 11:28:31 +03:00
|
|
|
// database. It returns a bool indicating whether the update was successful.
|
2018-08-16 22:35:59 +03:00
|
|
|
func (r *ChannelRouter) applyChannelUpdate(msg *lnwire.ChannelUpdate,
|
2018-10-24 11:28:31 +03:00
|
|
|
pubKey *btcec.PublicKey) bool {
|
2017-10-03 08:03:18 +03:00
|
|
|
|
2019-01-12 20:59:43 +03:00
|
|
|
ch, _, _, err := r.GetChannelByID(msg.ShortChannelID)
|
|
|
|
if err != nil {
|
|
|
|
log.Errorf("Unable to retrieve channel by id: %v", err)
|
|
|
|
return false
|
|
|
|
}
|
|
|
|
|
|
|
|
if err := ValidateChannelUpdateAnn(pubKey, ch.Capacity, msg); err != nil {
|
2018-10-24 11:28:31 +03:00
|
|
|
log.Errorf("Unable to validate channel update: %v", err)
|
|
|
|
return false
|
2018-08-16 22:35:59 +03:00
|
|
|
}
|
|
|
|
|
2019-01-12 20:59:43 +03:00
|
|
|
err = r.UpdateEdge(&channeldb.ChannelEdgePolicy{
|
2018-01-31 07:26:26 +03:00
|
|
|
SigBytes: msg.Signature.ToSignatureBytes(),
|
2017-10-03 08:03:18 +03:00
|
|
|
ChannelID: msg.ShortChannelID.ToUint64(),
|
|
|
|
LastUpdate: time.Unix(int64(msg.Timestamp), 0),
|
2019-01-12 20:59:43 +03:00
|
|
|
MessageFlags: msg.MessageFlags,
|
|
|
|
ChannelFlags: msg.ChannelFlags,
|
2017-10-03 08:03:18 +03:00
|
|
|
TimeLockDelta: msg.TimeLockDelta,
|
|
|
|
MinHTLC: msg.HtlcMinimumMsat,
|
2019-01-12 20:59:45 +03:00
|
|
|
MaxHTLC: msg.HtlcMaximumMsat,
|
2017-10-03 08:03:18 +03:00
|
|
|
FeeBaseMSat: lnwire.MilliSatoshi(msg.BaseFee),
|
|
|
|
FeeProportionalMillionths: lnwire.MilliSatoshi(msg.FeeRate),
|
|
|
|
})
|
2018-08-20 15:28:09 +03:00
|
|
|
if err != nil && !IsError(err, ErrIgnored, ErrOutdated) {
|
2018-10-24 11:28:31 +03:00
|
|
|
log.Errorf("Unable to apply channel update: %v", err)
|
|
|
|
return false
|
2017-10-03 08:03:18 +03:00
|
|
|
}
|
|
|
|
|
2018-10-24 11:28:31 +03:00
|
|
|
return true
|
2016-12-27 08:20:26 +03:00
|
|
|
}
|
2017-03-19 21:40:25 +03:00
|
|
|
|
2017-07-14 22:32:00 +03:00
|
|
|
// AddNode is used to add information about a node to the router database. If
|
|
|
|
// the node with this pubkey is not present in an existing channel, it will
|
|
|
|
// be ignored.
|
2017-04-01 15:33:17 +03:00
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
2021-01-27 15:39:18 +03:00
|
|
|
func (r *ChannelRouter) AddNode(node *channeldb.LightningNode,
|
|
|
|
op ...batch.SchedulerOption) error {
|
|
|
|
|
2017-03-19 21:40:25 +03:00
|
|
|
rMsg := &routingMsg{
|
|
|
|
msg: node,
|
2021-01-27 15:39:18 +03:00
|
|
|
op: op,
|
2017-03-19 21:40:25 +03:00
|
|
|
err: make(chan error, 1),
|
|
|
|
}
|
|
|
|
|
|
|
|
select {
|
|
|
|
case r.networkUpdates <- rMsg:
|
2017-09-26 06:55:04 +03:00
|
|
|
select {
|
|
|
|
case err := <-rMsg.err:
|
|
|
|
return err
|
|
|
|
case <-r.quit:
|
2019-04-05 18:36:11 +03:00
|
|
|
return ErrRouterShuttingDown
|
2017-09-26 06:55:04 +03:00
|
|
|
}
|
2017-03-19 21:40:25 +03:00
|
|
|
case <-r.quit:
|
2019-04-05 18:36:11 +03:00
|
|
|
return ErrRouterShuttingDown
|
2017-03-19 21:40:25 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-04-01 15:33:17 +03:00
|
|
|
// AddEdge is used to add an edge/channel to the topology of the router. After
|
2017-10-11 05:48:44 +03:00
|
|
|
// all information about the channel has been gathered, this edge/channel might
|
|
|
|
// be used in the construction of a payment path.
|
2017-04-01 15:33:17 +03:00
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
2021-01-27 15:39:18 +03:00
|
|
|
func (r *ChannelRouter) AddEdge(edge *channeldb.ChannelEdgeInfo,
|
|
|
|
op ...batch.SchedulerOption) error {
|
|
|
|
|
2017-03-19 21:40:25 +03:00
|
|
|
rMsg := &routingMsg{
|
|
|
|
msg: edge,
|
2021-01-27 15:39:18 +03:00
|
|
|
op: op,
|
2017-03-19 21:40:25 +03:00
|
|
|
err: make(chan error, 1),
|
|
|
|
}
|
|
|
|
|
|
|
|
select {
|
|
|
|
case r.networkUpdates <- rMsg:
|
2017-09-26 06:55:04 +03:00
|
|
|
select {
|
|
|
|
case err := <-rMsg.err:
|
|
|
|
return err
|
|
|
|
case <-r.quit:
|
2019-04-05 18:36:11 +03:00
|
|
|
return ErrRouterShuttingDown
|
2017-09-26 06:55:04 +03:00
|
|
|
}
|
2017-03-19 21:40:25 +03:00
|
|
|
case <-r.quit:
|
2019-04-05 18:36:11 +03:00
|
|
|
return ErrRouterShuttingDown
|
2017-03-19 21:40:25 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-04-01 15:33:17 +03:00
|
|
|
// UpdateEdge is used to update edge information. Without this message, the edge
|
|
|
|
// is considered as not fully constructed.
|
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
2021-01-27 15:39:18 +03:00
|
|
|
func (r *ChannelRouter) UpdateEdge(update *channeldb.ChannelEdgePolicy,
|
|
|
|
op ...batch.SchedulerOption) error {
|
|
|
|
|
2017-03-19 21:40:25 +03:00
|
|
|
rMsg := &routingMsg{
|
|
|
|
msg: update,
|
2021-01-27 15:39:18 +03:00
|
|
|
op: op,
|
2017-03-19 21:40:25 +03:00
|
|
|
err: make(chan error, 1),
|
|
|
|
}
|
|
|
|
|
|
|
|
select {
|
|
|
|
case r.networkUpdates <- rMsg:
|
2017-09-26 06:55:04 +03:00
|
|
|
select {
|
|
|
|
case err := <-rMsg.err:
|
|
|
|
return err
|
|
|
|
case <-r.quit:
|
2019-04-05 18:36:11 +03:00
|
|
|
return ErrRouterShuttingDown
|
2017-09-26 06:55:04 +03:00
|
|
|
}
|
2017-03-19 21:40:25 +03:00
|
|
|
case <-r.quit:
|
2019-04-05 18:36:11 +03:00
|
|
|
return ErrRouterShuttingDown
|
2017-03-19 21:40:25 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// CurrentBlockHeight returns the block height from POV of the router subsystem.
|
2017-04-01 15:33:17 +03:00
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
2017-03-19 21:40:25 +03:00
|
|
|
func (r *ChannelRouter) CurrentBlockHeight() (uint32, error) {
|
|
|
|
_, height, err := r.cfg.Chain.GetBestBlock()
|
|
|
|
return uint32(height), err
|
|
|
|
}
|
|
|
|
|
2017-03-30 04:01:28 +03:00
|
|
|
// GetChannelByID returns the channel by the channel id.
|
2017-04-01 15:33:17 +03:00
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
2017-03-30 04:01:28 +03:00
|
|
|
func (r *ChannelRouter) GetChannelByID(chanID lnwire.ShortChannelID) (
|
|
|
|
*channeldb.ChannelEdgeInfo,
|
|
|
|
*channeldb.ChannelEdgePolicy,
|
|
|
|
*channeldb.ChannelEdgePolicy, error) {
|
2017-04-01 15:33:17 +03:00
|
|
|
|
2017-03-30 04:01:28 +03:00
|
|
|
return r.cfg.Graph.FetchChannelEdgesByID(chanID.ToUint64())
|
|
|
|
}
|
|
|
|
|
2018-11-05 09:56:39 +03:00
|
|
|
// FetchLightningNode attempts to look up a target node by its identity public
|
|
|
|
// key. channeldb.ErrGraphNodeNotFound is returned if the node doesn't exist
|
|
|
|
// within the graph.
|
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
2019-04-05 18:36:11 +03:00
|
|
|
func (r *ChannelRouter) FetchLightningNode(node route.Vertex) (*channeldb.LightningNode, error) {
|
2019-12-20 12:14:13 +03:00
|
|
|
return r.cfg.Graph.FetchLightningNode(nil, node)
|
2018-11-05 09:56:39 +03:00
|
|
|
}
|
|
|
|
|
2017-03-19 21:40:25 +03:00
|
|
|
// ForEachNode is used to iterate over every node in the router topology.
|
2017-04-01 15:33:17 +03:00
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
2017-03-19 21:40:25 +03:00
|
|
|
func (r *ChannelRouter) ForEachNode(cb func(*channeldb.LightningNode) error) error {
|
2020-05-07 01:45:50 +03:00
|
|
|
return r.cfg.Graph.ForEachNode(func(_ kvdb.RTx, n *channeldb.LightningNode) error {
|
2017-04-14 23:14:54 +03:00
|
|
|
return cb(n)
|
|
|
|
})
|
2017-03-19 21:40:25 +03:00
|
|
|
}
|
|
|
|
|
2017-12-18 05:40:05 +03:00
|
|
|
// ForAllOutgoingChannels is used to iterate over all outgoing channels owned by
|
2017-04-01 15:33:17 +03:00
|
|
|
// the router.
|
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
2017-08-22 09:58:59 +03:00
|
|
|
func (r *ChannelRouter) ForAllOutgoingChannels(cb func(*channeldb.ChannelEdgeInfo,
|
|
|
|
*channeldb.ChannelEdgePolicy) error) error {
|
2017-04-01 15:33:17 +03:00
|
|
|
|
2020-05-07 01:45:50 +03:00
|
|
|
return r.selfNode.ForEachChannel(nil, func(_ kvdb.RTx, c *channeldb.ChannelEdgeInfo,
|
2017-08-22 09:58:59 +03:00
|
|
|
e, _ *channeldb.ChannelEdgePolicy) error {
|
2017-04-14 23:14:54 +03:00
|
|
|
|
2018-06-18 13:35:22 +03:00
|
|
|
if e == nil {
|
2020-04-14 20:56:05 +03:00
|
|
|
return fmt.Errorf("channel from self node has no policy")
|
2018-06-18 13:35:22 +03:00
|
|
|
}
|
|
|
|
|
2017-08-22 09:58:59 +03:00
|
|
|
return cb(c, e)
|
2017-03-19 21:40:25 +03:00
|
|
|
})
|
|
|
|
}
|
|
|
|
|
2017-04-01 15:33:17 +03:00
|
|
|
// ForEachChannel is used to iterate over every known edge (channel) within our
|
|
|
|
// view of the channel graph.
|
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
2017-03-19 21:40:25 +03:00
|
|
|
func (r *ChannelRouter) ForEachChannel(cb func(chanInfo *channeldb.ChannelEdgeInfo,
|
|
|
|
e1, e2 *channeldb.ChannelEdgePolicy) error) error {
|
2017-04-01 15:33:17 +03:00
|
|
|
|
2017-03-19 21:40:25 +03:00
|
|
|
return r.cfg.Graph.ForEachChannel(cb)
|
|
|
|
}
|
2017-03-27 20:00:38 +03:00
|
|
|
|
2017-04-01 15:33:17 +03:00
|
|
|
// AddProof updates the channel edge info with proof which is needed to
|
|
|
|
// properly announce the edge to the rest of the network.
|
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
2017-03-27 20:00:38 +03:00
|
|
|
func (r *ChannelRouter) AddProof(chanID lnwire.ShortChannelID,
|
|
|
|
proof *channeldb.ChannelAuthProof) error {
|
|
|
|
|
|
|
|
info, _, _, err := r.cfg.Graph.FetchChannelEdgesByID(chanID.ToUint64())
|
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
info.AuthProof = proof
|
|
|
|
return r.cfg.Graph.UpdateChannelEdge(info)
|
|
|
|
}
|
2018-02-25 06:34:03 +03:00
|
|
|
|
|
|
|
// IsStaleNode returns true if the graph source has a node announcement for the
|
|
|
|
// target node with a more recent timestamp.
|
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
2019-04-05 18:36:11 +03:00
|
|
|
func (r *ChannelRouter) IsStaleNode(node route.Vertex, timestamp time.Time) bool {
|
2018-02-25 06:34:03 +03:00
|
|
|
// If our attempt to assert that the node announcement is fresh fails,
|
|
|
|
// then we know that this is actually a stale announcement.
|
|
|
|
return r.assertNodeAnnFreshness(node, timestamp) != nil
|
|
|
|
}
|
|
|
|
|
2018-10-18 01:47:12 +03:00
|
|
|
// IsPublicNode determines whether the given vertex is seen as a public node in
|
|
|
|
// the graph from the graph's source node's point of view.
|
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
2019-04-05 18:36:11 +03:00
|
|
|
func (r *ChannelRouter) IsPublicNode(node route.Vertex) (bool, error) {
|
2018-10-18 01:47:12 +03:00
|
|
|
return r.cfg.Graph.IsPublicNode(node)
|
|
|
|
}
|
|
|
|
|
2018-02-25 06:34:03 +03:00
|
|
|
// IsKnownEdge returns true if the graph source already knows of the passed
|
2019-03-27 23:06:57 +03:00
|
|
|
// channel ID either as a live or zombie edge.
|
2018-02-25 06:34:03 +03:00
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
|
|
|
func (r *ChannelRouter) IsKnownEdge(chanID lnwire.ShortChannelID) bool {
|
2019-03-27 23:06:57 +03:00
|
|
|
_, _, exists, isZombie, _ := r.cfg.Graph.HasChannelEdge(chanID.ToUint64())
|
|
|
|
return exists || isZombie
|
2018-02-25 06:34:03 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
// IsStaleEdgePolicy returns true if the graph source has a channel edge for
|
|
|
|
// the passed channel ID (and flags) that have a more recent timestamp.
|
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
|
|
|
func (r *ChannelRouter) IsStaleEdgePolicy(chanID lnwire.ShortChannelID,
|
2019-01-12 20:59:43 +03:00
|
|
|
timestamp time.Time, flags lnwire.ChanUpdateChanFlags) bool {
|
2018-02-25 06:34:03 +03:00
|
|
|
|
2019-03-27 23:06:57 +03:00
|
|
|
edge1Timestamp, edge2Timestamp, exists, isZombie, err :=
|
|
|
|
r.cfg.Graph.HasChannelEdge(chanID.ToUint64())
|
2018-02-25 06:34:03 +03:00
|
|
|
if err != nil {
|
|
|
|
return false
|
|
|
|
|
|
|
|
}
|
|
|
|
|
2019-04-17 23:24:32 +03:00
|
|
|
// If we know of the edge as a zombie, then we'll make some additional
|
|
|
|
// checks to determine if the new policy is fresh.
|
2019-03-27 23:06:57 +03:00
|
|
|
if isZombie {
|
2019-04-17 23:24:32 +03:00
|
|
|
// When running with AssumeChannelValid, we also prune channels
|
|
|
|
// if both of their edges are disabled. We'll mark the new
|
|
|
|
// policy as stale if it remains disabled.
|
|
|
|
if r.cfg.AssumeChannelValid {
|
|
|
|
isDisabled := flags&lnwire.ChanUpdateDisabled ==
|
|
|
|
lnwire.ChanUpdateDisabled
|
|
|
|
if isDisabled {
|
|
|
|
return true
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Otherwise, we'll fall back to our usual ChannelPruneExpiry.
|
2019-03-27 23:06:57 +03:00
|
|
|
return time.Since(timestamp) > r.cfg.ChannelPruneExpiry
|
|
|
|
}
|
|
|
|
|
2018-02-25 06:34:03 +03:00
|
|
|
// If we don't know of the edge, then it means it's fresh (thus not
|
|
|
|
// stale).
|
|
|
|
if !exists {
|
|
|
|
return false
|
|
|
|
}
|
|
|
|
|
|
|
|
// As edges are directional, each node has a unique policy for the
|
|
|
|
// direction of the edge they control. Therefore we first check if we
|
|
|
|
// already have the most up to date information for that edge. If so,
|
|
|
|
// then we can exit early.
|
|
|
|
switch {
|
|
|
|
// A flag set of 0 indicates this is an announcement for the "first"
|
|
|
|
// node in the channel.
|
|
|
|
case flags&lnwire.ChanUpdateDirection == 0:
|
|
|
|
return !edge1Timestamp.Before(timestamp)
|
|
|
|
|
|
|
|
// Similarly, a flag set of 1 indicates this is an announcement for the
|
|
|
|
// "second" node in the channel.
|
|
|
|
case flags&lnwire.ChanUpdateDirection == 1:
|
|
|
|
return !edge2Timestamp.Before(timestamp)
|
|
|
|
}
|
|
|
|
|
|
|
|
return false
|
|
|
|
}
|
2019-03-27 23:07:30 +03:00
|
|
|
|
|
|
|
// MarkEdgeLive clears an edge from our zombie index, deeming it as live.
|
|
|
|
//
|
|
|
|
// NOTE: This method is part of the ChannelGraphSource interface.
|
|
|
|
func (r *ChannelRouter) MarkEdgeLive(chanID lnwire.ShortChannelID) error {
|
|
|
|
return r.cfg.Graph.MarkEdgeLive(chanID.ToUint64())
|
|
|
|
}
|
2019-06-24 10:08:04 +03:00
|
|
|
|
|
|
|
// generateBandwidthHints is a helper function that's utilized by the main
|
|
|
|
// findPath function in order to obtain hints from the lower layer w.r.t to the
|
|
|
|
// available bandwidth of edges on the network. Currently, we'll only obtain
|
|
|
|
// bandwidth hints for the edges we directly have open ourselves. Obtaining
|
|
|
|
// these hints allows us to reduce the number of extraneous attempts as we can
|
|
|
|
// skip channels that are inactive, or just don't have enough bandwidth to
|
|
|
|
// carry the payment.
|
|
|
|
func generateBandwidthHints(sourceNode *channeldb.LightningNode,
|
|
|
|
queryBandwidth func(*channeldb.ChannelEdgeInfo) lnwire.MilliSatoshi) (map[uint64]lnwire.MilliSatoshi, error) {
|
|
|
|
|
|
|
|
// First, we'll collect the set of outbound edges from the target
|
|
|
|
// source node.
|
|
|
|
var localChans []*channeldb.ChannelEdgeInfo
|
2020-05-07 01:45:50 +03:00
|
|
|
err := sourceNode.ForEachChannel(nil, func(tx kvdb.RTx,
|
2019-06-24 10:08:04 +03:00
|
|
|
edgeInfo *channeldb.ChannelEdgeInfo,
|
|
|
|
_, _ *channeldb.ChannelEdgePolicy) error {
|
|
|
|
|
|
|
|
localChans = append(localChans, edgeInfo)
|
|
|
|
return nil
|
|
|
|
})
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
// Now that we have all of our outbound edges, we'll populate the set
|
|
|
|
// of bandwidth hints, querying the lower switch layer for the most up
|
|
|
|
// to date values.
|
|
|
|
bandwidthHints := make(map[uint64]lnwire.MilliSatoshi)
|
|
|
|
for _, localChan := range localChans {
|
|
|
|
bandwidthHints[localChan.ChannelID] = queryBandwidth(localChan)
|
|
|
|
}
|
|
|
|
|
|
|
|
return bandwidthHints, nil
|
|
|
|
}
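// Illustrative shape of the result (not part of the original file): for a
// node with two local channels the hint map might look like
//
//	bandwidthHints = map[uint64]lnwire.MilliSatoshi{
//		689_530_000_000_001: 250_000_000, // ~250k sat spendable
//		689_530_000_000_002: 0,           // inactive channel
//	}
//
// where a zero value lets path finding skip channels that cannot carry the
// payment.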
|
2019-08-29 14:03:37 +03:00
|
|
|
|
|
|
|
// ErrNoChannel is returned when a route cannot be built because there are no
|
|
|
|
// channels that satisfy all requirements.
|
|
|
|
type ErrNoChannel struct {
|
|
|
|
position int
|
|
|
|
fromNode route.Vertex
|
|
|
|
}
|
|
|
|
|
|
|
|
// Error returns a human readable string describing the error.
|
|
|
|
func (e ErrNoChannel) Error() string {
|
|
|
|
return fmt.Sprintf("no matching outgoing channel available for "+
|
|
|
|
"node %v (%v)", e.position, e.fromNode)
|
|
|
|
}
|
|
|
|
|
|
|
|
// BuildRoute returns a fully specified route based on a list of pubkeys. If
|
|
|
|
// amount is nil, the minimum routable amount is used. To force a specific
|
|
|
|
// outgoing channel, use the outgoingChan parameter.
|
|
|
|
func (r *ChannelRouter) BuildRoute(amt *lnwire.MilliSatoshi,
|
|
|
|
hops []route.Vertex, outgoingChan *uint64,
|
2020-11-24 07:17:16 +03:00
|
|
|
finalCltvDelta int32, payAddr *[32]byte) (*route.Route, error) {
|
2019-08-29 14:03:37 +03:00
|
|
|
|
|
|
|
log.Tracef("BuildRoute called: hopsCount=%v, amt=%v",
|
|
|
|
len(hops), amt)
|
|
|
|
|
2020-05-07 12:48:39 +03:00
|
|
|
var outgoingChans map[uint64]struct{}
|
|
|
|
if outgoingChan != nil {
|
|
|
|
outgoingChans = map[uint64]struct{}{
|
|
|
|
*outgoingChan: {},
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-08-29 14:03:37 +03:00
|
|
|
// If no amount is specified, we need to build a route for the minimum
|
|
|
|
// amount that this route can carry.
|
|
|
|
useMinAmt := amt == nil
|
|
|
|
|
|
|
|
// We'll attempt to obtain a set of bandwidth hints that helps us select
|
|
|
|
// the best outgoing channel to use in case no outgoing channel is set.
|
|
|
|
bandwidthHints, err := generateBandwidthHints(
|
|
|
|
r.selfNode, r.cfg.QueryBandwidth,
|
|
|
|
)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
2020-01-27 14:33:53 +03:00
|
|
|
// Fetch the current block height outside the routing transaction, to
|
|
|
|
// prevent the rpc call blocking the database.
|
|
|
|
_, height, err := r.cfg.Chain.GetBestBlock()
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
2019-09-30 16:45:16 +03:00
|
|
|
// Allocate a list that will contain the unified policies for this
|
2019-08-29 14:03:37 +03:00
|
|
|
// route.
|
2019-09-30 16:45:16 +03:00
|
|
|
edges := make([]*unifiedPolicy, len(hops))
|
2019-08-29 14:03:37 +03:00
|
|
|
|
2019-09-30 16:45:16 +03:00
|
|
|
var runningAmt lnwire.MilliSatoshi
|
2019-08-29 14:03:37 +03:00
|
|
|
if useMinAmt {
|
|
|
|
// For minimum amount routes, aim to deliver at least 1 msat to
|
|
|
|
// the destination. There are nodes in the wild that have a
|
|
|
|
// min_htlc channel policy of zero, which could lead to a zero
|
|
|
|
// amount payment being made.
|
2019-09-30 16:45:16 +03:00
|
|
|
runningAmt = 1
|
2019-08-29 14:03:37 +03:00
|
|
|
} else {
|
|
|
|
// If an amount is specified, we need to build a route that
|
|
|
|
// delivers exactly this amount to the final destination.
|
2019-09-30 16:45:16 +03:00
|
|
|
runningAmt = *amt
|
2019-08-29 14:03:37 +03:00
|
|
|
}
|
|
|
|
|
2020-01-27 14:33:53 +03:00
|
|
|
// Open a transaction to execute the graph queries in.
|
|
|
|
routingTx, err := newDbRoutingTx(r.cfg.Graph)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
defer func() {
|
|
|
|
err := routingTx.close()
|
|
|
|
if err != nil {
|
|
|
|
log.Errorf("Error closing db tx: %v", err)
|
|
|
|
}
|
|
|
|
}()
|
|
|
|
|
2019-08-29 14:03:37 +03:00
|
|
|
// Traverse hops backwards to accumulate fees in the running amounts.
|
|
|
|
source := r.selfNode.PubKeyBytes
|
|
|
|
for i := len(hops) - 1; i >= 0; i-- {
|
|
|
|
toNode := hops[i]
|
|
|
|
|
|
|
|
var fromNode route.Vertex
|
|
|
|
if i == 0 {
|
|
|
|
fromNode = source
|
|
|
|
} else {
|
|
|
|
fromNode = hops[i-1]
|
|
|
|
}
|
|
|
|
|
|
|
|
localChan := i == 0
|
|
|
|
|
2019-09-30 16:45:16 +03:00
|
|
|
// Build unified policies for this hop based on the channels
|
|
|
|
// known in the graph.
|
2020-05-07 12:48:39 +03:00
|
|
|
u := newUnifiedPolicies(source, toNode, outgoingChans)
|
2019-08-29 14:03:37 +03:00
|
|
|
|
2020-01-27 14:33:53 +03:00
|
|
|
err := u.addGraphPolicies(routingTx)
|
2019-09-30 16:45:16 +03:00
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
2019-08-29 14:03:37 +03:00
|
|
|
|
2019-09-30 16:45:16 +03:00
|
|
|
// Exit if there are no channels.
|
|
|
|
unifiedPolicy, ok := u.policies[fromNode]
|
|
|
|
if !ok {
|
|
|
|
return nil, ErrNoChannel{
|
|
|
|
fromNode: fromNode,
|
|
|
|
position: i,
|
2019-08-29 14:03:37 +03:00
|
|
|
}
|
2019-09-30 16:45:16 +03:00
|
|
|
}
|
2019-08-29 14:03:37 +03:00
|
|
|
|
2019-09-30 16:45:16 +03:00
|
|
|
// If using min amt, increase amt if needed.
|
|
|
|
if useMinAmt {
|
|
|
|
min := unifiedPolicy.minAmt()
|
|
|
|
if min > runningAmt {
|
|
|
|
runningAmt = min
|
2019-08-29 14:03:37 +03:00
|
|
|
}
|
2019-09-30 16:45:16 +03:00
|
|
|
}
|
2019-08-29 14:03:37 +03:00
|
|
|
|
2019-09-30 16:45:16 +03:00
|
|
|
// Get a forwarding policy for the specific amount that we want
|
|
|
|
// to forward.
|
|
|
|
policy := unifiedPolicy.getPolicy(runningAmt, bandwidthHints)
|
|
|
|
if policy == nil {
|
|
|
|
return nil, ErrNoChannel{
|
|
|
|
fromNode: fromNode,
|
|
|
|
position: i,
|
2019-08-29 14:03:37 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-09-30 16:45:16 +03:00
|
|
|
// Add fee for this hop.
|
|
|
|
if !localChan {
|
|
|
|
runningAmt += policy.ComputeFee(runningAmt)
|
2019-08-29 14:03:37 +03:00
|
|
|
}
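// Worked example (an illustration, not part of the original file), assuming
// the standard BOLT07 fee schedule of base fee plus proportional fee:
//
//	// policy: FeeBaseMSat = 1_000, FeeProportionalMillionths = 100
//	// forwarding runningAmt = 1_000_000 msat over this hop adds
//	// fee = 1_000 + 1_000_000*100/1_000_000 = 1_100 msat,
//	// so runningAmt grows to 1_001_100 msat before the next (earlier) hop.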
|
|
|
|
|
2019-09-30 16:45:16 +03:00
|
|
|
log.Tracef("Select channel %v at position %v", policy.ChannelID, i)
|
|
|
|
|
|
|
|
edges[i] = unifiedPolicy
|
|
|
|
}
|
|
|
|
|
|
|
|
// Now that we've arrived at the start of the route and know the route
|
|
|
|
// total amount, we make a forward pass. Because the amount may have
|
|
|
|
// been increased in the backward pass, fees need to be recalculated and
|
|
|
|
// amount ranges re-checked.
|
|
|
|
var pathEdges []*channeldb.ChannelEdgePolicy
|
|
|
|
receiverAmt := runningAmt
|
|
|
|
for i, edge := range edges {
|
|
|
|
policy := edge.getPolicy(receiverAmt, bandwidthHints)
|
|
|
|
if policy == nil {
|
2019-08-29 14:03:37 +03:00
|
|
|
return nil, ErrNoChannel{
|
2019-09-30 16:45:16 +03:00
|
|
|
fromNode: hops[i-1],
|
2019-08-29 14:03:37 +03:00
|
|
|
position: i,
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-09-30 16:45:16 +03:00
|
|
|
if i > 0 {
|
|
|
|
// Decrease the amount to send while going forward.
|
|
|
|
receiverAmt -= policy.ComputeFeeFromIncoming(
|
|
|
|
receiverAmt,
|
|
|
|
)
|
|
|
|
}
|
2019-08-29 14:03:37 +03:00
|
|
|
|
2019-09-30 16:45:16 +03:00
|
|
|
pathEdges = append(pathEdges, policy)
|
2019-08-29 14:03:37 +03:00
|
|
|
}
|
|
|
|
|
2019-09-30 16:45:16 +03:00
|
|
|
// Build and return the final route.
|
2019-08-29 14:03:37 +03:00
|
|
|
return newRoute(
|
2019-12-19 10:55:08 +03:00
|
|
|
source, pathEdges, uint32(height),
|
|
|
|
finalHopParams{
|
2020-11-24 07:17:16 +03:00
|
|
|
amt: receiverAmt,
|
|
|
|
totalAmt: receiverAmt,
|
|
|
|
cltvDelta: uint16(finalCltvDelta),
|
|
|
|
records: nil,
|
|
|
|
paymentAddr: payAddr,
|
2019-12-19 10:55:08 +03:00
|
|
|
},
|
2019-08-29 14:03:37 +03:00
|
|
|
)
|
|
|
|
}
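// Hypothetical invocation (not part of the original file), building a route
// over two known hops for a fixed amount with no outgoing channel restriction:
//
//	amt := lnwire.MilliSatoshi(100_000)
//	rt, err := r.BuildRoute(&amt, []route.Vertex{hopA, hopB}, nil, 40, nil)
//	if err != nil {
//		return err
//	}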
|