package routing

import (
	"bytes"
	"crypto/sha256"
	"encoding/binary"
	"encoding/hex"
	"encoding/json"
	"errors"
	"fmt"
	"io/ioutil"
	"math"
	"math/big"
	"net"
	"os"
	"reflect"
	"strings"
	"testing"
	"time"

	"github.com/btcsuite/btcd/btcec"
	"github.com/btcsuite/btcd/chaincfg/chainhash"
	"github.com/btcsuite/btcd/wire"
	"github.com/btcsuite/btcutil"
	"github.com/lightningnetwork/lnd/channeldb"
	"github.com/lightningnetwork/lnd/feature"
	"github.com/lightningnetwork/lnd/lnwire"
	"github.com/lightningnetwork/lnd/record"
	"github.com/lightningnetwork/lnd/routing/route"
	"github.com/lightningnetwork/lnd/zpay32"
)
const (
	// basicGraphFilePath is the file path for a basic graph used within
	// the tests. The basic graph consists of 5 nodes with 5 channels
	// connecting them.
	basicGraphFilePath = "testdata/basic_graph.json"

	// specExampleFilePath is a file path which stores an example which
	// implementations will use in order to ensure that they're calculating
	// the payload for each hop in the path properly.
	specExampleFilePath = "testdata/spec_example.json"

	// noFeeLimit is the maximum value of a payment through Lightning. We
	// can use this value to signal there is no fee limit since payments
	// should never be larger than this.
	noFeeLimit = lnwire.MilliSatoshi(math.MaxUint32)
)
var (
	noRestrictions = &RestrictParams{
		FeeLimit:          noFeeLimit,
		ProbabilitySource: noProbabilitySource,
		CltvLimit:         math.MaxUint32,
	}

	testPathFindingConfig = &PathFindingConfig{}

	tlvFeatures = lnwire.NewFeatureVector(
		lnwire.NewRawFeatureVector(
			lnwire.TLVOnionPayloadOptional,
		), lnwire.Features,
	)

	payAddrFeatures = lnwire.NewFeatureVector(
		lnwire.NewRawFeatureVector(
			lnwire.PaymentAddrOptional,
		), lnwire.Features,
	)

	tlvPayAddrFeatures = lnwire.NewFeatureVector(
		lnwire.NewRawFeatureVector(
			lnwire.TLVOnionPayloadOptional,
			lnwire.PaymentAddrOptional,
		), lnwire.Features,
	)

	mppFeatures = lnwire.NewRawFeatureVector(
		lnwire.TLVOnionPayloadOptional,
		lnwire.PaymentAddrOptional,
		lnwire.MPPOptional,
	)

	unknownRequiredFeatures = lnwire.NewFeatureVector(
		lnwire.NewRawFeatureVector(100), lnwire.Features,
	)
)
var (
	testSig = &btcec.Signature{
		R: new(big.Int),
		S: new(big.Int),
	}
	_, _ = testSig.R.SetString("63724406601629180062774974542967536251589935445068131219452686511677818569431", 10)
	_, _ = testSig.S.SetString("18801056069249825825291287104931333862866033135609736119018462340006816851118", 10)

	testAuthProof = channeldb.ChannelAuthProof{
		NodeSig1Bytes:    testSig.Serialize(),
		NodeSig2Bytes:    testSig.Serialize(),
		BitcoinSig1Bytes: testSig.Serialize(),
		BitcoinSig2Bytes: testSig.Serialize(),
	}
)
// noProbabilitySource is used in testing to return the same probability 1 for
// all edges.
func noProbabilitySource(route.Vertex, route.Vertex, lnwire.MilliSatoshi) float64 {
	return 1
}
// testGraph is the struct which corresponds to the JSON format used to encode
// graphs within the files in the testdata directory.
//
// TODO(roasbeef): add test graph auto-generator
type testGraph struct {
	Info  []string   `json:"info"`
	Nodes []testNode `json:"nodes"`
	Edges []testChan `json:"edges"`
}
// testNode represents a node within the test graph above. We skip certain
// information such as the node's IP address as that information isn't needed
// for our tests.
type testNode struct {
	Source bool   `json:"source"`
	PubKey string `json:"pubkey"`
	Alias  string `json:"alias"`
}
// testChan represents the JSON version of a payment channel. This struct
// matches the JSON that's encoded under the "edges" key within the test graph.
type testChan struct {
	Node1        string `json:"node_1"`
	Node2        string `json:"node_2"`
	ChannelID    uint64 `json:"channel_id"`
	ChannelPoint string `json:"channel_point"`
	ChannelFlags uint8  `json:"channel_flags"`
	MessageFlags uint8  `json:"message_flags"`
	Expiry       uint16 `json:"expiry"`
	MinHTLC      int64  `json:"min_htlc"`
	MaxHTLC      int64  `json:"max_htlc"`
	FeeBaseMsat  int64  `json:"fee_base_msat"`
	FeeRate      int64  `json:"fee_rate"`
	Capacity     int64  `json:"capacity"`
}
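// For illustration only, a hypothetical graph file matching the struct tags
// above (the values here are made up and are not taken from the real testdata
// files) would look roughly like:
//
//	{
//	  "info": ["..."],
//	  "nodes": [
//	    {"source": true, "pubkey": "02...", "alias": "A"},
//	    {"source": false, "pubkey": "03...", "alias": "B"}
//	  ],
//	  "edges": [
//	    {
//	      "node_1": "02...", "node_2": "03...",
//	      "channel_id": 1, "channel_point": "<txid>:0",
//	      "channel_flags": 0, "message_flags": 1,
//	      "expiry": 144, "min_htlc": 1000, "max_htlc": 100000000,
//	      "fee_base_msat": 1000, "fee_rate": 1, "capacity": 100000
//	    }
//	  ]
//	}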
// makeTestGraph creates a new instance of a channeldb.ChannelGraph for testing
// purposes. A callback which cleans up the created temporary directories is
// also returned and intended to be executed after the test completes.
func makeTestGraph() (*channeldb.ChannelGraph, func(), error) {
	// First, create a temporary directory to be used for the duration of
	// this test.
	tempDirName, err := ioutil.TempDir("", "channeldb")
	if err != nil {
		return nil, nil, err
	}

	// Next, create channeldb for the first time.
	cdb, err := channeldb.Open(tempDirName)
	if err != nil {
		return nil, nil, err
	}

	cleanUp := func() {
		cdb.Close()
		os.RemoveAll(tempDirName)
	}

	return cdb.ChannelGraph(), cleanUp, nil
}
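// A minimal usage sketch (illustrative only, assuming a *testing.T named t is
// in scope) would be:
//
//	graph, cleanUp, err := makeTestGraph()
//	if err != nil {
//		t.Fatalf("unable to create test graph: %v", err)
//	}
//	defer cleanUp()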
// parseTestGraph returns a fully populated ChannelGraph given a path to a JSON
|
|
|
|
// file which encodes a test graph.
|
2018-08-16 22:35:38 +03:00
|
|
|
func parseTestGraph(path string) (*testGraphInstance, error) {
|
2017-02-23 22:56:47 +03:00
|
|
|
graphJSON, err := ioutil.ReadFile(path)
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
if err != nil {
|
2018-08-16 22:35:38 +03:00
|
|
|
return nil, err
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
// First unmarshal the JSON graph into an instance of the testGraph
|
|
|
|
// struct. Using the struct tags created above in the struct, the JSON
|
|
|
|
// will be properly parsed into the struct above.
|
|
|
|
var g testGraph
|
2017-02-23 22:56:47 +03:00
|
|
|
if err := json.Unmarshal(graphJSON, &g); err != nil {
|
2018-08-16 22:35:38 +03:00
|
|
|
return nil, err
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
// We'll use this fake address for the IP address of all the nodes in
|
|
|
|
// our tests. This value isn't needed for path finding so it doesn't
|
|
|
|
// need to be unique.
|
2017-02-17 12:29:23 +03:00
|
|
|
var testAddrs []net.Addr
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
testAddr, err := net.ResolveTCPAddr("tcp", "192.0.0.1:8888")
|
|
|
|
if err != nil {
|
2018-08-16 22:35:38 +03:00
|
|
|
return nil, err
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
}
|
2017-02-17 12:29:23 +03:00
|
|
|
testAddrs = append(testAddrs, testAddr)
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
|
|
|
|
// Next, create a temporary graph database for usage within the test.
|
|
|
|
graph, cleanUp, err := makeTestGraph()
|
|
|
|
if err != nil {
|
2018-08-16 22:35:38 +03:00
|
|
|
return nil, err
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
}
|
|
|
|
|
2019-04-05 18:36:11 +03:00
|
|
|
aliasMap := make(map[string]route.Vertex)
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
var source *channeldb.LightningNode
|
|
|
|
|
|
|
|
// First we insert all the nodes within the graph as vertexes.
|
|
|
|
for _, node := range g.Nodes {
|
|
|
|
pubBytes, err := hex.DecodeString(node.PubKey)
|
|
|
|
if err != nil {
|
2018-08-16 22:35:38 +03:00
|
|
|
return nil, err
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
dbNode := &channeldb.LightningNode{
|
2017-07-14 22:32:00 +03:00
|
|
|
HaveNodeAnnouncement: true,
|
2018-01-31 07:26:26 +03:00
|
|
|
AuthSigBytes: testSig.Serialize(),
|
2018-08-16 21:47:29 +03:00
|
|
|
LastUpdate: testTime,
|
2017-07-14 22:32:00 +03:00
|
|
|
Addresses: testAddrs,
|
|
|
|
Alias: node.Alias,
|
|
|
|
Features: testFeatures,
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
}
|
2018-01-31 07:26:26 +03:00
|
|
|
copy(dbNode.PubKeyBytes[:], pubBytes)
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
|
|
|
|
// We require all aliases within the graph to be unique for our
|
|
|
|
// tests.
|
|
|
|
if _, ok := aliasMap[node.Alias]; ok {
|
2018-08-16 22:35:38 +03:00
|
|
|
return nil, errors.New("aliases for nodes " +
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
"must be unique!")
|
|
|
|
}
|
|
|
|
|
2017-02-23 22:56:47 +03:00
|
|
|
// If the alias is unique, then add the node to the
|
|
|
|
// alias map for easy lookup.
|
2019-03-05 18:55:19 +03:00
|
|
|
aliasMap[node.Alias] = dbNode.PubKeyBytes
|
2017-02-23 22:56:47 +03:00
|
|
|
|
routing: rewrite package to conform to BOLT07 and factor in fees+timelocks
This commit overhauls the routing package significantly to simplify the
code, conform to the rest of the coding style within the package, and
observe the new authenticated gossiping scheme outlined in BOLT07.
As a major step towards a more realistic path finding algorithm, fees
are properly calculated and observed during path finding. If a path has
sufficient capacity _before_ fees are applied, but afterwards the
finalized route would exceed the capacity of a single link, the route
is marked as invalid.
Currently a naive weighting algorithm is used which only factors in the
time-lock delta at each hop, thereby optimizing for the lowest time
lock. Fee calculation also isn’t finalized since we aren’t yet using
milli-satoshi throughout the daemon. The final TODO item within the PR
is to properly perform a multi-path search and rank the results based
on a summation heuristic rather than just return the first (out of
many) route found.
On the server side, once nodes are initially connected to the daemon,
our routing table will be synced with the peer’s using a naive “just
send everything scheme” to hold us over until I spec out some a
efficient graph reconciliation protocol. Additionally, the routing
table is now pruned by the channel router itself once new blocks arrive
rather than depending on peers to tell us when a channel flaps or is
closed.
Finally, the validation of peer announcements aren’t yet fully
implemented as they’ll be implemented within the pending discovery
package that was blocking on the completion of this package. Most off
the routing message processing will be moved out of this package and
into the discovery package where full validation will be carried out.
2016-12-27 08:20:26 +03:00
|
|
|
		// If the node is tagged as the source, then we create a
		// pointer to it so we can mark the source in the graph
		// properly.
		if node.Source {
			// If we come across a node that's marked as the
			// source, and we've already set the source in a prior
			// iteration, then the JSON has an error as only ONE
			// node can be the source in the graph.
			if source != nil {
				return nil, errors.New("JSON is invalid " +
					"multiple nodes are tagged as the source")
			}

			source = dbNode
		}

		// With the node fully parsed, add it as a vertex within the
		// graph.
		if err := graph.AddLightningNode(dbNode); err != nil {
			return nil, err
		}
	}

	if source != nil {
		// Set the selected source node.
		if err := graph.SetSourceNode(source); err != nil {
			return nil, err
		}
	}

	// With all the vertexes inserted, we can now insert the edges into the
	// test graph.
	for _, edge := range g.Edges {
		node1Bytes, err := hex.DecodeString(edge.Node1)
		if err != nil {
			return nil, err
		}

		node2Bytes, err := hex.DecodeString(edge.Node2)
		if err != nil {
			return nil, err
		}
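
		// Test data is expected to already follow the BOLT #7
		// ordering convention, where node_id_1 is the lexicographically
		// smaller of the two keys; out-of-order pairs are rejected
		// below rather than silently swapped.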
		if bytes.Compare(node1Bytes, node2Bytes) == 1 {
			return nil, fmt.Errorf(
				"channel %v node order incorrect",
				edge.ChannelID,
			)
		}

		fundingTXID := strings.Split(edge.ChannelPoint, ":")[0]
		txidBytes, err := chainhash.NewHashFromStr(fundingTXID)
		if err != nil {
			return nil, err
		}
		fundingPoint := wire.OutPoint{
			Hash:  *txidBytes,
			Index: 0,
		}

		// We first insert the existence of the edge between the two
		// nodes.
		edgeInfo := channeldb.ChannelEdgeInfo{
			ChannelID:    edge.ChannelID,
			AuthProof:    &testAuthProof,
			ChannelPoint: fundingPoint,
			Capacity:     btcutil.Amount(edge.Capacity),
		}

		copy(edgeInfo.NodeKey1Bytes[:], node1Bytes)
		copy(edgeInfo.NodeKey2Bytes[:], node2Bytes)
		copy(edgeInfo.BitcoinKey1Bytes[:], node1Bytes)
		copy(edgeInfo.BitcoinKey2Bytes[:], node2Bytes)

		err = graph.AddChannelEdge(&edgeInfo)
		if err != nil && err != channeldb.ErrEdgeAlreadyExist {
			return nil, err
		}

		edgePolicy := &channeldb.ChannelEdgePolicy{
			SigBytes:                  testSig.Serialize(),
			MessageFlags:              lnwire.ChanUpdateMsgFlags(edge.MessageFlags),
			ChannelFlags:              lnwire.ChanUpdateChanFlags(edge.ChannelFlags),
			ChannelID:                 edge.ChannelID,
			LastUpdate:                testTime,
			TimeLockDelta:             edge.Expiry,
			MinHTLC:                   lnwire.MilliSatoshi(edge.MinHTLC),
			MaxHTLC:                   lnwire.MilliSatoshi(edge.MaxHTLC),
			FeeBaseMSat:               lnwire.MilliSatoshi(edge.FeeBaseMsat),
			FeeProportionalMillionths: lnwire.MilliSatoshi(edge.FeeRate),
		}
		if err := graph.UpdateEdgePolicy(edgePolicy); err != nil {
			return nil, err
		}
	}

	return &testGraphInstance{
		graph:    graph,
		cleanUp:  cleanUp,
		aliasMap: aliasMap,
	}, nil
}

type testChannelPolicy struct {
	Expiry      uint16
	MinHTLC     lnwire.MilliSatoshi
	MaxHTLC     lnwire.MilliSatoshi
	FeeBaseMsat lnwire.MilliSatoshi
	FeeRate     lnwire.MilliSatoshi
	LastUpdate  time.Time
	Disabled    bool
	Features    *lnwire.FeatureVector
}
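
// Note that FeeRate is written to the graph as FeeProportionalMillionths, so
// it is interpreted as a parts-per-million rate on the forwarded amount, while
// FeeBaseMsat is a flat per-forward fee.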

type testChannelEnd struct {
	Alias string

	*testChannelPolicy
}

func symmetricTestChannel(alias1, alias2 string, capacity btcutil.Amount,
	policy *testChannelPolicy, chanID ...uint64) *testChannel {

	// Leaving id zero will result in auto-generation of a channel id
	// during graph construction.
	var id uint64
	if len(chanID) > 0 {
		id = chanID[0]
	}

	policy2 := *policy

	return asymmetricTestChannel(
		alias1, alias2, capacity, policy, &policy2, id,
	)
}
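
// As a usage sketch (the aliases here are arbitrary), a bidirectional channel
// with identical policies on both ends can be declared as:
//
//	symmetricTestChannel("alice", "bob", 100000, &testChannelPolicy{
//		Expiry:  144,
//		FeeRate: 400,
//		MinHTLC: 1,
//		MaxHTLC: 100000000,
//	})
//
// while asymmetricTestChannel below allows each end to advertise a different
// policy.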

func asymmetricTestChannel(alias1, alias2 string, capacity btcutil.Amount,
	policy1, policy2 *testChannelPolicy, id uint64) *testChannel {

	return &testChannel{
		Capacity: capacity,
		Node1: &testChannelEnd{
			Alias:             alias1,
			testChannelPolicy: policy1,
		},
		Node2: &testChannelEnd{
			Alias:             alias2,
			testChannelPolicy: policy2,
		},
		ChannelID: id,
	}
}

type testChannel struct {
	Node1     *testChannelEnd
	Node2     *testChannelEnd
	Capacity  btcutil.Amount
	ChannelID uint64
}

type testGraphInstance struct {
	graph   *channeldb.ChannelGraph
	cleanUp func()

	// aliasMap is a map from a node's alias to its public key. This type
	// is provided in order to allow easy lookups from a human-memorable
	// alias to an exact node public key.
	aliasMap map[string]route.Vertex

	// privKeyMap maps a node alias to its private key. This is used to be
	// able to mock a remote node's signing behaviour.
	privKeyMap map[string]*btcec.PrivateKey
}

// createTestGraphFromChannels returns a fully populated ChannelGraph based on
// a set of test channels. Additional required information like keys are
// derived in a deterministic way and added to the channel graph. A list of
// nodes is not required and is derived from the channel data. The goal is to
// keep instantiating a test channel graph as lightweight as possible.
func createTestGraphFromChannels(testChannels []*testChannel, source string) (
	*testGraphInstance, error) {

	// We'll use this fake address for the IP address of all the nodes in
	// our tests. This value isn't needed for path finding so it doesn't
	// need to be unique.
	var testAddrs []net.Addr
	testAddr, err := net.ResolveTCPAddr("tcp", "192.0.0.1:8888")
	if err != nil {
		return nil, err
	}
	testAddrs = append(testAddrs, testAddr)

	// Next, create a temporary graph database for usage within the test.
	graph, cleanUp, err := makeTestGraph()
	if err != nil {
		return nil, err
	}

	aliasMap := make(map[string]route.Vertex)
	privKeyMap := make(map[string]*btcec.PrivateKey)

	nodeIndex := byte(0)
	addNodeWithAlias := func(alias string, features *lnwire.FeatureVector) (
		*channeldb.LightningNode, error) {

		keyBytes := []byte{
			0, 0, 0, 0, 0, 0, 0, 0,
			0, 0, 0, 0, 0, 0, 0, 0,
			0, 0, 0, 0, 0, 0, 0, 0,
			0, 0, 0, 0, 0, 0, 0, nodeIndex + 1,
		}

		privKey, pubKey := btcec.PrivKeyFromBytes(btcec.S256(),
			keyBytes)

		if features == nil {
			features = lnwire.EmptyFeatureVector()
		}

		dbNode := &channeldb.LightningNode{
			HaveNodeAnnouncement: true,
			AuthSigBytes:         testSig.Serialize(),
			LastUpdate:           testTime,
			Addresses:            testAddrs,
			Alias:                alias,
			Features:             features,
		}

		copy(dbNode.PubKeyBytes[:], pubKey.SerializeCompressed())

		privKeyMap[alias] = privKey

		// With the node fully parsed, add it as a vertex within the
		// graph.
		if err := graph.AddLightningNode(dbNode); err != nil {
			return nil, err
		}

		aliasMap[alias] = dbNode.PubKeyBytes
		nodeIndex++

		return dbNode, nil
	}
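
	// The closure above derives keys deterministically: the nth node added
	// gets private key n+1, encoded as a 32-byte big-endian integer, so
	// the alias-to-pubkey mapping is stable across test runs.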

	// Add the source node.
	dbNode, err := addNodeWithAlias(source, lnwire.EmptyFeatureVector())
	if err != nil {
		return nil, err
	}

	if err = graph.SetSourceNode(dbNode); err != nil {
		return nil, err
	}

	// Initialize variable that keeps track of the next channel id to
	// assign if none is specified.
	nextUnassignedChannelID := uint64(100000)

	for _, testChannel := range testChannels {
		for _, node := range []*testChannelEnd{
			testChannel.Node1, testChannel.Node2} {

			_, exists := aliasMap[node.Alias]
			if !exists {
				var features *lnwire.FeatureVector
				if node.testChannelPolicy != nil {
					features =
						node.testChannelPolicy.Features
				}
				_, err := addNodeWithAlias(
					node.Alias, features,
				)
				if err != nil {
					return nil, err
				}
			}
		}

		channelID := testChannel.ChannelID

		// If no channel id is specified, generate an id.
		if channelID == 0 {
			channelID = nextUnassignedChannelID
			nextUnassignedChannelID++
		}

		var hash [sha256.Size]byte
		hash[len(hash)-1] = byte(channelID)

		fundingPoint := &wire.OutPoint{
			Hash:  chainhash.Hash(hash),
			Index: 0,
		}
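
		// The funding outpoint is synthesized from the channel id:
		// only the last byte of the txid is set, which keeps outpoints
		// distinct for the small channel counts used in these tests.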

		// Sort nodes
		node1 := testChannel.Node1
		node2 := testChannel.Node2
		node1Vertex := aliasMap[node1.Alias]
		node2Vertex := aliasMap[node2.Alias]
		if bytes.Compare(node1Vertex[:], node2Vertex[:]) == 1 {
			node1, node2 = node2, node1
			node1Vertex, node2Vertex = node2Vertex, node1Vertex
		}

		// We first insert the existence of the edge between the two
		// nodes.
		edgeInfo := channeldb.ChannelEdgeInfo{
			ChannelID:    channelID,
			AuthProof:    &testAuthProof,
			ChannelPoint: *fundingPoint,
			Capacity:     testChannel.Capacity,

			NodeKey1Bytes:    node1Vertex,
			BitcoinKey1Bytes: node1Vertex,
			NodeKey2Bytes:    node2Vertex,
			BitcoinKey2Bytes: node2Vertex,
		}

		err = graph.AddChannelEdge(&edgeInfo)
		if err != nil && err != channeldb.ErrEdgeAlreadyExist {
			return nil, err
		}

		if node1.testChannelPolicy != nil {
			var msgFlags lnwire.ChanUpdateMsgFlags
			if node1.MaxHTLC != 0 {
				msgFlags |= lnwire.ChanUpdateOptionMaxHtlc
			}
			var channelFlags lnwire.ChanUpdateChanFlags
			if node1.Disabled {
				channelFlags |= lnwire.ChanUpdateDisabled
			}

			edgePolicy := &channeldb.ChannelEdgePolicy{
				SigBytes:                  testSig.Serialize(),
				MessageFlags:              msgFlags,
				ChannelFlags:              channelFlags,
				ChannelID:                 channelID,
				LastUpdate:                node1.LastUpdate,
				TimeLockDelta:             node1.Expiry,
				MinHTLC:                   node1.MinHTLC,
				MaxHTLC:                   node1.MaxHTLC,
				FeeBaseMSat:               node1.FeeBaseMsat,
				FeeProportionalMillionths: node1.FeeRate,
			}
			if err := graph.UpdateEdgePolicy(edgePolicy); err != nil {
				return nil, err
			}
		}
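
		// The node2 policy below is identical in shape, but it also
		// sets lnwire.ChanUpdateDirection so the update is attributed
		// to the second node of the channel; the node1 policy leaves
		// that bit unset.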
		if node2.testChannelPolicy != nil {
			var msgFlags lnwire.ChanUpdateMsgFlags
			if node2.MaxHTLC != 0 {
				msgFlags |= lnwire.ChanUpdateOptionMaxHtlc
			}
			var channelFlags lnwire.ChanUpdateChanFlags
			if node2.Disabled {
				channelFlags |= lnwire.ChanUpdateDisabled
			}
			channelFlags |= lnwire.ChanUpdateDirection

			edgePolicy := &channeldb.ChannelEdgePolicy{
				SigBytes:                  testSig.Serialize(),
				MessageFlags:              msgFlags,
				ChannelFlags:              channelFlags,
				ChannelID:                 channelID,
				LastUpdate:                node2.LastUpdate,
				TimeLockDelta:             node2.Expiry,
				MinHTLC:                   node2.MinHTLC,
				MaxHTLC:                   node2.MaxHTLC,
				FeeBaseMSat:               node2.FeeBaseMsat,
				FeeProportionalMillionths: node2.FeeRate,
			}
			if err := graph.UpdateEdgePolicy(edgePolicy); err != nil {
				return nil, err
			}
		}

		channelID++
	}

	return &testGraphInstance{
		graph:      graph,
		cleanUp:    cleanUp,
		aliasMap:   aliasMap,
		privKeyMap: privKeyMap,
	}, nil
}

// TestFindLowestFeePath tests that out of two routes with identical total
// time lock values, the route with the lowest total fee should be returned.
// The fee rates are chosen such that the test failed on the previous edge
// weight function where one of the terms was fee squared.
func TestFindLowestFeePath(t *testing.T) {
	t.Parallel()

	// Set up a test graph with two paths from roasbeef to target. Both
	// paths have equal total time locks, but the path through b has lower
	// fees (700 compared to 800 for the path through a).
	testChannels := []*testChannel{
		symmetricTestChannel("roasbeef", "first", 100000, &testChannelPolicy{
			Expiry:  144,
			FeeRate: 400,
			MinHTLC: 1,
			MaxHTLC: 100000000,
		}),
		symmetricTestChannel("first", "a", 100000, &testChannelPolicy{
			Expiry:  144,
			FeeRate: 400,
			MinHTLC: 1,
			MaxHTLC: 100000000,
		}),
		symmetricTestChannel("a", "target", 100000, &testChannelPolicy{
			Expiry:  144,
			FeeRate: 400,
			MinHTLC: 1,
			MaxHTLC: 100000000,
		}),
		symmetricTestChannel("first", "b", 100000, &testChannelPolicy{
			Expiry:  144,
			FeeRate: 100,
			MinHTLC: 1,
			MaxHTLC: 100000000,
		}),
		symmetricTestChannel("b", "target", 100000, &testChannelPolicy{
			Expiry:  144,
			FeeRate: 600,
			MinHTLC: 1,
			MaxHTLC: 100000000,
		}),
	}
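
	// Every hop advertises the same Expiry of 144, so the two candidate
	// routes carry identical total time locks and only their fees can
	// break the tie.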

	ctx := newPathFindingTestContext(t, testChannels, "roasbeef")
	defer ctx.cleanup()

	const (
		startingHeight = 100
		finalHopCLTV   = 1
	)

	paymentAmt := lnwire.NewMSatFromSatoshis(100)
	target := ctx.keyFromAlias("target")
	path, err := ctx.findPath(target, paymentAmt)
	if err != nil {
		t.Fatalf("unable to find path: %v", err)
	}
	route, err := newRoute(
		ctx.source, path, startingHeight,
		finalHopParams{
			amt:       paymentAmt,
			cltvDelta: finalHopCLTV,
			records:   nil,
		},
	)
	if err != nil {
		t.Fatalf("unable to create path: %v", err)
	}
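
	// Both candidate routes run roasbeef -> first -> {a|b} -> target, so
	// the second hop (Hops[1]) identifies which branch was selected.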
	// Assert that the lowest fee route is returned.
	if route.Hops[1].PubKeyBytes != ctx.keyFromAlias("b") {
		t.Fatalf("expected route to pass through b, "+
			"but got a route through %v",
			ctx.aliasFromKey(route.Hops[1].PubKeyBytes))
	}
}

func getAliasFromPubKey(pubKey route.Vertex,
	aliases map[string]route.Vertex) string {

	for alias, key := range aliases {
		if key == pubKey {
			return alias
		}
	}
	return ""
}

type expectedHop struct {
	alias     string
	fee       lnwire.MilliSatoshi
	fwdAmount lnwire.MilliSatoshi
	timeLock  uint32
}

type basicGraphPathFindingTestCase struct {
	target                string
	paymentAmt            btcutil.Amount
	feeLimit              lnwire.MilliSatoshi
	expectedTotalAmt      lnwire.MilliSatoshi
	expectedTotalTimeLock uint32
	expectedHops          []expectedHop
	expectFailureNoPath   bool
}

var basicGraphPathFindingTests = []basicGraphPathFindingTestCase{
	// Basic route with one intermediate hop.
	{target: "sophon", paymentAmt: 100, feeLimit: noFeeLimit,
		expectedTotalTimeLock: 102, expectedTotalAmt: 100110,
		expectedHops: []expectedHop{
			{alias: "songoku", fwdAmount: 100000, fee: 110, timeLock: 101},
			{alias: "sophon", fwdAmount: 100000, fee: 0, timeLock: 101},
		}},

	// Basic direct (one hop) route.
	{target: "luoji", paymentAmt: 100, feeLimit: noFeeLimit,
		expectedTotalTimeLock: 101, expectedTotalAmt: 100000,
		expectedHops: []expectedHop{
			{alias: "luoji", fwdAmount: 100000, fee: 0, timeLock: 101},
		}},

	// Three hop route where fees need to be added in to the forwarding
	// amount. The high fee hop phamnuwen should be avoided.
	{target: "elst", paymentAmt: 50000, feeLimit: noFeeLimit,
		expectedTotalTimeLock: 103, expectedTotalAmt: 50050210,
		expectedHops: []expectedHop{
			{alias: "songoku", fwdAmount: 50000200, fee: 50010, timeLock: 102},
			{alias: "sophon", fwdAmount: 50000000, fee: 200, timeLock: 101},
			{alias: "elst", fwdAmount: 50000000, fee: 0, timeLock: 101},
		}},

	// Three hop route where fees need to be added in to the forwarding
	// amount. However this time the fwdAmount becomes too large for the
	// roasbeef <-> songoku channel. Then there is no other option than to
	// choose the expensive phamnuwen channel. This test case was failing
	// before the route search was executed backwards.
	{target: "elst", paymentAmt: 100000, feeLimit: noFeeLimit,
		expectedTotalTimeLock: 103, expectedTotalAmt: 110010220,
		expectedHops: []expectedHop{
			{alias: "phamnuwen", fwdAmount: 100000200, fee: 10010020, timeLock: 102},
			{alias: "sophon", fwdAmount: 100000000, fee: 200, timeLock: 101},
			{alias: "elst", fwdAmount: 100000000, fee: 0, timeLock: 101},
		}},

	// Basic route with fee limit.
	{target: "sophon", paymentAmt: 100, feeLimit: 50,
		expectFailureNoPath: true,
	}}
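
// In each case expectedTotalAmt is what the source must send: the first hop's
// fwdAmount plus that hop's fee (for example 100000200 + 10010020 = 110010220
// on the phamnuwen route), with downstream fees already folded into the
// preceding fwdAmount values.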

func TestBasicGraphPathFinding(t *testing.T) {
	t.Parallel()

	testGraphInstance, err := parseTestGraph(basicGraphFilePath)
	if err != nil {
		t.Fatalf("unable to create graph: %v", err)
	}
	defer testGraphInstance.cleanUp()

	// With the test graph loaded, we'll test some basic path finding using
	// the pre-generated graph. Consult the testdata/basic_graph.json file
	// to follow along with the assumptions we'll use to test the path
	// finding.
	for _, testCase := range basicGraphPathFindingTests {
		t.Run(testCase.target, func(subT *testing.T) {
			testBasicGraphPathFindingCase(subT, testGraphInstance, &testCase)
		})
	}
}
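
// Each case above runs as a named subtest, so a single scenario can be
// selected with, for example, go test -run TestBasicGraphPathFinding/sophon.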

func testBasicGraphPathFindingCase(t *testing.T, graphInstance *testGraphInstance,
	test *basicGraphPathFindingTestCase) {

	aliases := graphInstance.aliasMap
	expectedHops := test.expectedHops
	expectedHopCount := len(expectedHops)

	sourceNode, err := graphInstance.graph.SourceNode()
	if err != nil {
		t.Fatalf("unable to fetch source node: %v", err)
	}
	sourceVertex := route.Vertex(sourceNode.PubKeyBytes)

	const (
		startingHeight = 100
		finalHopCLTV   = 1
	)

	paymentAmt := lnwire.NewMSatFromSatoshis(test.paymentAmt)
	target := graphInstance.aliasMap[test.target]
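	// The restrictions below are intentionally loose: noProbabilitySource
	// and an effectively unlimited CLTV budget keep the search driven by
	// fees and time locks alone, so only the per-case FeeLimit can make
	// path finding fail here.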
	path, err := dbFindPath(
		graphInstance.graph, nil, nil,
		&RestrictParams{
			FeeLimit:          test.feeLimit,
			ProbabilitySource: noProbabilitySource,
			CltvLimit:         math.MaxUint32,
		},
		testPathFindingConfig,
		sourceNode.PubKeyBytes, target, paymentAmt,
		startingHeight+finalHopCLTV,
	)
	if test.expectFailureNoPath {
		if err == nil {
			t.Fatal("expected no path to be found")
		}
		return
	}

	if err != nil {
		t.Fatalf("unable to find path: %v", err)
	}

	route, err := newRoute(
		sourceVertex, path, startingHeight,
		finalHopParams{
			amt:       paymentAmt,
			cltvDelta: finalHopCLTV,
			records:   nil,
		},
	)
	if err != nil {
		t.Fatalf("unable to create path: %v", err)
	}

	if len(route.Hops) != len(expectedHops) {
		t.Fatalf("route is of incorrect length, expected %v got %v",
			expectedHopCount, len(route.Hops))
	}

	// Check that each hop in the route points to the expected node.
	for i := 0; i < len(expectedHops); i++ {
		if route.Hops[i].PubKeyBytes != aliases[expectedHops[i].alias] {
			t.Fatalf("%v-th hop should be %v, is instead: %v",
				i, expectedHops[i],
				getAliasFromPubKey(route.Hops[i].PubKeyBytes,
					aliases))
		}
	}

	// Next, we'll assert that the "next hop" field in each route payload
	// properly points to the channel ID that the HTLC should be forwarded
	// along.
	sphinxPath, err := route.ToSphinxPath()
	if err != nil {
		t.Fatalf("unable to make sphinx path: %v", err)
	}
	if sphinxPath.TrueRouteLength() != expectedHopCount {
		t.Fatalf("incorrect number of hop payloads: expected %v, got %v",
			expectedHopCount, sphinxPath.TrueRouteLength())
	}

	// Hops should point to the next hop.
	for i := 0; i < len(expectedHops)-1; i++ {
		var expectedHop [8]byte
		binary.BigEndian.PutUint64(expectedHop[:], route.Hops[i+1].ChannelID)

		hopData, err := sphinxPath[i].HopPayload.HopData()
		if err != nil {
			t.Fatalf("unable to make hop data: %v", err)
		}

		if !bytes.Equal(hopData.NextAddress[:], expectedHop[:]) {
			t.Fatalf("hop %d has incorrect next hop: expected %x, got %x",
				i, expectedHop[:], hopData.NextAddress[:])
		}
	}

	// The final hop should have a next hop value of all zeroes in order
	// to indicate it's the exit hop.
	var exitHop [8]byte
	lastHopIndex := len(expectedHops) - 1

	hopData, err := sphinxPath[lastHopIndex].HopPayload.HopData()
	if err != nil {
		t.Fatalf("unable to create hop data: %v", err)
	}

	if !bytes.Equal(hopData.NextAddress[:], exitHop[:]) {
		t.Fatalf("final hop has incorrect next hop: expected %x, got %x",
			exitHop[:], hopData.NextAddress[:])
	}

	var expectedTotalFee lnwire.MilliSatoshi
	for i := 0; i < expectedHopCount; i++ {
		// We'll ensure that the amount to forward, and fees
		// computed for each hop are correct.
		fee := route.HopFee(i)
		if fee != expectedHops[i].fee {
			t.Fatalf("fee incorrect for hop %v: expected %v, got %v",
				i, expectedHops[i].fee, fee)
		}

		if route.Hops[i].AmtToForward != expectedHops[i].fwdAmount {
			t.Fatalf("forwarding amount for hop %v incorrect: "+
				"expected %v, got %v",
				i, expectedHops[i].fwdAmount,
				route.Hops[i].AmtToForward)
		}

		// We'll also assert that the outgoing CLTV value for each
		// hop was set accordingly.
		if route.Hops[i].OutgoingTimeLock != expectedHops[i].timeLock {
			t.Fatalf("outgoing time-lock for hop %v is incorrect: "+
				"expected %v, got %v", i,
				expectedHops[i].timeLock,
				route.Hops[i].OutgoingTimeLock)
		}

		expectedTotalFee += expectedHops[i].fee
	}

	if route.TotalAmount != test.expectedTotalAmt {
		t.Fatalf("total amount incorrect: "+
			"expected %v, got %v",
			test.expectedTotalAmt, route.TotalAmount)
	}

	if route.TotalTimeLock != test.expectedTotalTimeLock {
		t.Fatalf("expected time lock of %v, instead have %v",
			test.expectedTotalTimeLock, route.TotalTimeLock)
	}
}

// TestPathFindingWithAdditionalEdges asserts that we are able to find paths to
// nodes that do not exist in the graph by way of hop hints. We also test that
// the path can support custom TLV records for the receiver under the
// appropriate circumstances.
func TestPathFindingWithAdditionalEdges(t *testing.T) {
	t.Parallel()

	graph, err := parseTestGraph(basicGraphFilePath)
	if err != nil {
		t.Fatalf("unable to create graph: %v", err)
	}
	defer graph.cleanUp()

	sourceNode, err := graph.graph.SourceNode()
	if err != nil {
		t.Fatalf("unable to fetch source node: %v", err)
	}

	paymentAmt := lnwire.NewMSatFromSatoshis(100)

	// In this test, we'll check that we're able to find paths through
	// private channels when they're provided as additional edges to our
	// path finding algorithm. To do so, we'll create a new node, doge, and
	// create a private channel between it and songoku. We'll then attempt
	// to find a path from our source node, roasbeef, to doge.
	dogePubKeyHex := "03dd46ff29a6941b4a2607525b043ec9b020b3f318a1bf281536fd7011ec59c882"
	dogePubKeyBytes, err := hex.DecodeString(dogePubKeyHex)
	if err != nil {
		t.Fatalf("unable to decode public key: %v", err)
	}
	dogePubKey, err := btcec.ParsePubKey(dogePubKeyBytes, btcec.S256())
	if err != nil {
		t.Fatalf("unable to parse public key from bytes: %v", err)
	}

	doge := &channeldb.LightningNode{}
	doge.AddPubKey(dogePubKey)
	doge.Alias = "doge"
	copy(doge.PubKeyBytes[:], dogePubKeyBytes)
	graph.aliasMap["doge"] = doge.PubKeyBytes

	// Create the channel edge going from songoku to doge and include it in
	// our map of additional edges.
	songokuToDoge := &channeldb.ChannelEdgePolicy{
		Node:                      doge,
		ChannelID:                 1337,
		FeeBaseMSat:               1,
		FeeProportionalMillionths: 1000,
		TimeLockDelta:             9,
	}

	additionalEdges := map[route.Vertex][]*channeldb.ChannelEdgePolicy{
		graph.aliasMap["songoku"]: {songokuToDoge},
	}
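
	// These additional edges are only layered on top of the persisted
	// graph for the duration of the query, which is how invoice hop hints
	// for private channels are fed into path finding.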

	find := func(r *RestrictParams) (
		[]*channeldb.ChannelEdgePolicy, error) {

		return dbFindPath(
			graph.graph, additionalEdges, nil,
			r, testPathFindingConfig,
			sourceNode.PubKeyBytes, doge.PubKeyBytes, paymentAmt,
			0,
		)
	}

	// We should now be able to find a path from roasbeef to doge.
	path, err := find(noRestrictions)
	if err != nil {
		t.Fatalf("unable to find private path to doge: %v", err)
	}

	// The path should represent the following hops:
	// roasbeef -> songoku -> doge
	assertExpectedPath(t, graph.aliasMap, path, "songoku", "doge")
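
	// The three queries below cover the destination-feature cases: custom
	// records with no known features, an explicitly empty feature vector,
	// and a feature vector with the TLV bit set.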

	// Now, set custom records for the final hop. This should fail since no
	// dest features are set, and we won't have a node ann to fall back on.
	restrictions := *noRestrictions
	restrictions.DestCustomRecords = record.CustomSet{70000: []byte{}}

	_, err = find(&restrictions)
	if err != errNoTlvPayload {
		t.Fatalf("path shouldn't have been found: %v", err)
	}

	// Set empty dest features so we don't try the fallback. We should still
	// fail since the tlv feature isn't set.
	restrictions.DestFeatures = lnwire.EmptyFeatureVector()

	_, err = find(&restrictions)
	if err != errNoTlvPayload {
		t.Fatalf("path shouldn't have been found: %v", err)
	}

	// Finally, set the tlv feature in the payload and assert we found the
	// same path as before.
	restrictions.DestFeatures = tlvFeatures

	path, err = find(&restrictions)
	if err != nil {
		t.Fatalf("path should have been found: %v", err)
	}
	assertExpectedPath(t, graph.aliasMap, path, "songoku", "doge")
}

// TestNewRoute tests whether the construction of hop payloads by newRoute
// is executed correctly.
func TestNewRoute(t *testing.T) {

	var sourceKey [33]byte
	sourceVertex := route.Vertex(sourceKey)

	testPaymentAddr := [32]byte{0x01, 0x02, 0x03}

	const (
		startingHeight = 100
		finalHopCLTV   = 1
	)
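
	// createHop returns an edge policy with the given fee schedule and
	// CLTV delta. The attached node advertises no features; individual
	// test cases overwrite the final hop's feature vector below when
	// needed.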
	createHop := func(baseFee lnwire.MilliSatoshi,
		feeRate lnwire.MilliSatoshi,
		bandwidth lnwire.MilliSatoshi,
		timeLockDelta uint16) *channeldb.ChannelEdgePolicy {

		return &channeldb.ChannelEdgePolicy{
			Node: &channeldb.LightningNode{
				Features: lnwire.NewFeatureVector(
					nil, nil,
				),
			},
			FeeProportionalMillionths: feeRate,
			FeeBaseMSat:               baseFee,
			TimeLockDelta:             timeLockDelta,
		}
	}

	testCases := []struct {
		// name identifies the test case in the test output.
		name string

		// hops is the list of hops (the route) that gets passed into
		// the call to newRoute.
		hops []*channeldb.ChannelEdgePolicy

		// paymentAmount is the amount that is sent into the route
		// indicated by hops.
		paymentAmount lnwire.MilliSatoshi

		// destFeatures is a feature vector that, if non-nil, will
		// overwrite the final hop's feature vector in the graph.
		destFeatures *lnwire.FeatureVector

		// paymentAddr is an optional payment address to set for the
		// final hop.
		paymentAddr *[32]byte

		// expectedFees is a list of fees that every hop is expected
		// to charge for forwarding.
		expectedFees []lnwire.MilliSatoshi

		// expectedTimeLocks is a list of time lock values that every
		// hop is expected to specify in its outgoing HTLC. The time
		// lock values in this list are relative to the current block
		// height.
		expectedTimeLocks []uint32

		// expectedTotalAmount is the total amount that is expected to
		// be returned from newRoute. This amount should include all
		// the fees to be paid to intermediate hops.
		expectedTotalAmount lnwire.MilliSatoshi

		// expectedTotalTimeLock is the time lock that is expected to
		// be returned from newRoute. This is the time lock that should
		// be specified in the HTLC that is sent by the source node.
		// expectedTotalTimeLock is relative to the current block height.
		expectedTotalTimeLock uint32

		// expectError indicates whether the newRoute call is expected
		// to fail or succeed.
		expectError bool

		// expectedErrorCode indicates the expected error code when
		// expectError is true.
		expectedErrorCode errorCode

		// expectedTLVPayload indicates whether the final hop is
		// expected to use the TLV onion payload format.
		expectedTLVPayload bool

		// expectedMPP is the MPP record expected on the final hop, if
		// any.
		expectedMPP *record.MPP
	}{
		{
			// For a single hop payment, no fees are expected to be paid.
			name:          "single hop",
			paymentAmount: 100000,
			hops: []*channeldb.ChannelEdgePolicy{
				createHop(100, 1000, 1000000, 10),
			},
			expectedFees:          []lnwire.MilliSatoshi{0},
			expectedTimeLocks:     []uint32{1},
			expectedTotalAmount:   100000,
			expectedTotalTimeLock: 1,
		}, {
			// For a two hop payment, only the fee for the first hop
			// needs to be paid. The destination hop does not require
			// a fee to receive the payment.
			name:          "two hop",
			paymentAmount: 100000,
			hops: []*channeldb.ChannelEdgePolicy{
				createHop(0, 1000, 1000000, 10),
				createHop(30, 1000, 1000000, 5),
			},
			expectedFees:          []lnwire.MilliSatoshi{130, 0},
			expectedTimeLocks:     []uint32{1, 1},
			expectedTotalAmount:   100130,
			expectedTotalTimeLock: 6,
		}, {
			// Same as the two hop case, except that the destination
			// advertises the TLV onion feature, so the final hop is
			// expected to get a TLV payload.
			name:          "two hop tlv onion feature",
			destFeatures:  tlvFeatures,
			paymentAmount: 100000,
			hops: []*channeldb.ChannelEdgePolicy{
				createHop(0, 1000, 1000000, 10),
				createHop(30, 1000, 1000000, 5),
			},
			expectedFees:          []lnwire.MilliSatoshi{130, 0},
			expectedTimeLocks:     []uint32{1, 1},
			expectedTotalAmount:   100130,
			expectedTotalTimeLock: 6,
			expectedTLVPayload:    true,
		}, {
			// Same as the two hop case, but the destination also
			// supports payment addresses, so the final hop is
			// expected to carry an MPP record for this single-shot
			// payment.
			name:          "two hop single shot mpp",
			destFeatures:  tlvPayAddrFeatures,
			paymentAddr:   &testPaymentAddr,
			paymentAmount: 100000,
			hops: []*channeldb.ChannelEdgePolicy{
				createHop(0, 1000, 1000000, 10),
				createHop(30, 1000, 1000000, 5),
			},
			expectedFees:          []lnwire.MilliSatoshi{130, 0},
			expectedTimeLocks:     []uint32{1, 1},
			expectedTotalAmount:   100130,
			expectedTotalTimeLock: 6,
			expectedTLVPayload:    true,
			expectedMPP: record.NewMPP(
				100000, testPaymentAddr,
			),
		}, {
			// A three hop payment where the first and second hop
			// will both charge 1 msat. The fee for the first hop
			// is actually slightly higher than 1, because the amount
			// to forward also includes the fee for the second hop.
			// This gets rounded down to 1.
			name:          "three hop",
			paymentAmount: 100000,
			hops: []*channeldb.ChannelEdgePolicy{
				createHop(0, 10, 1000000, 10),
				createHop(0, 10, 1000000, 5),
				createHop(0, 10, 1000000, 3),
			},
			expectedFees:          []lnwire.MilliSatoshi{1, 1, 0},
			expectedTotalAmount:   100002,
			expectedTimeLocks:     []uint32{4, 1, 1},
			expectedTotalTimeLock: 9,
		}, {
			// A three hop payment where the fee of the first hop
			// is slightly higher (1010 instead of 1000) than the fee
			// at the second hop, because of the increased amount to
			// forward.
			name:          "three hop with fee carry over",
			paymentAmount: 100000,
			hops: []*channeldb.ChannelEdgePolicy{
				createHop(0, 10000, 1000000, 10),
				createHop(0, 10000, 1000000, 5),
				createHop(0, 10000, 1000000, 3),
			},
			expectedFees:          []lnwire.MilliSatoshi{1010, 1000, 0},
			expectedTotalAmount:   102010,
			expectedTimeLocks:     []uint32{4, 1, 1},
			expectedTotalTimeLock: 9,
		}, {
			// A three hop payment where the fee policies of the first
			// and second hop are just high enough to show the fee
			// carry over effect.
			name:          "three hop with minimal fees for carry over",
			paymentAmount: 100000,
			hops: []*channeldb.ChannelEdgePolicy{
				createHop(0, 10000, 1000000, 10),

				// First hop charges 0.1% so the second hop fee
				// should show up in the first hop fee as 1 msat
				// extra.
				createHop(0, 1000, 1000000, 5),

				// Second hop charges a fixed 1000 msat.
				createHop(1000, 0, 1000000, 3),
			},
			expectedFees:          []lnwire.MilliSatoshi{101, 1000, 0},
			expectedTotalAmount:   101101,
			expectedTimeLocks:     []uint32{4, 1, 1},
			expectedTotalTimeLock: 9,
		}}
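
	// A note on the carry-over arithmetic: in the "fee carry over" case
	// every hop charges a 1% fee rate. The second hop charges 1% of the
	// 100,000 msat payment, i.e. 1,000 msat, so the first hop must forward
	// 101,000 msat and charges 1% of that, i.e. 1,010 msat, for a total of
	// 102,010 msat.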

	for _, testCase := range testCases {
		testCase := testCase

		// Overwrite the final hop's features if the test requires a
		// custom feature vector.
		if testCase.destFeatures != nil {
			finalHop := testCase.hops[len(testCase.hops)-1]
			finalHop.Node.Features = testCase.destFeatures
		}
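
		// assertRoute checks the route returned by newRoute against
		// the expectations encoded in the test case.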
		assertRoute := func(t *testing.T, route *route.Route) {
			if route.TotalAmount != testCase.expectedTotalAmount {
				t.Errorf("Expected total amount to be %v"+
					", but got %v instead",
					testCase.expectedTotalAmount,
					route.TotalAmount)
			}

			for i := 0; i < len(testCase.expectedFees); i++ {
				fee := route.HopFee(i)
				if testCase.expectedFees[i] != fee {

					t.Errorf("Expected fee for hop %v to "+
						"be %v, but got %v instead",
						i, testCase.expectedFees[i],
						fee)
				}
			}

			expectedTimeLockHeight := startingHeight +
				testCase.expectedTotalTimeLock

			if route.TotalTimeLock != expectedTimeLockHeight {

				t.Errorf("Expected total time lock to be %v"+
					", but got %v instead",
					expectedTimeLockHeight,
					route.TotalTimeLock)
			}

			for i := 0; i < len(testCase.expectedTimeLocks); i++ {
				expectedTimeLockHeight := startingHeight +
					testCase.expectedTimeLocks[i]

				if expectedTimeLockHeight !=
					route.Hops[i].OutgoingTimeLock {

					t.Errorf("Expected time lock for hop "+
						"%v to be %v, but got %v instead",
						i, expectedTimeLockHeight,
						route.Hops[i].OutgoingTimeLock)
				}
			}

			finalHop := route.Hops[len(route.Hops)-1]
			if !finalHop.LegacyPayload !=
				testCase.expectedTLVPayload {

				t.Errorf("Expected final hop tlv payload: %t, "+
					"but got: %t instead",
					testCase.expectedTLVPayload,
					!finalHop.LegacyPayload)
			}

			if !reflect.DeepEqual(
				finalHop.MPP, testCase.expectedMPP,
			) {
				t.Errorf("Expected final hop mpp field: %v, "+
					"but got: %v instead",
					testCase.expectedMPP, finalHop.MPP)
			}
		}

		t.Run(testCase.name, func(t *testing.T) {
			route, err := newRoute(
				sourceVertex, testCase.hops, startingHeight,
				finalHopParams{
					amt:         testCase.paymentAmount,
					totalAmt:    testCase.paymentAmount,
					cltvDelta:   finalHopCLTV,
					records:     nil,
					paymentAddr: testCase.paymentAddr,
				},
			)

			if testCase.expectError {
				expectedCode := testCase.expectedErrorCode
				if err == nil || !IsError(err, expectedCode) {
					t.Fatalf("expected newRoute to fail "+
						"with error code %v but got "+
						"%v instead",
						expectedCode, err)
				}
			} else {
				if err != nil {
					t.Errorf("unable to create route: %v", err)
					return
				}

				assertRoute(t, route)
			}
		})
	}
}

// TestNewRoutePathTooLong asserts that pathfinding fails when a route would
// exceed the maximum number of hops, and that attaching large custom records
// further reduces the number of hops that can be used.
func TestNewRoutePathTooLong(t *testing.T) {
	t.Parallel()

	var testChannels []*testChannel

	// Set up a linear network of 21 hops.
	fromNode := "start"
	for i := 0; i < 21; i++ {
		toNode := fmt.Sprintf("node-%v", i+1)
		c := symmetricTestChannel(fromNode, toNode, 100000, &testChannelPolicy{
			Expiry:  144,
			FeeRate: 400,
			MinHTLC: 1,
			MaxHTLC: 100000001,
		})
		testChannels = append(testChannels, c)

		fromNode = toNode
	}

	ctx := newPathFindingTestContext(t, testChannels, "start")
	defer ctx.cleanup()

	// Assert that we can find 20 hop routes.
	node20 := ctx.keyFromAlias("node-20")
	payAmt := lnwire.MilliSatoshi(100001)
	_, err := ctx.findPath(node20, payAmt)
	if err != nil {
		t.Fatalf("unexpected pathfinding failure: %v", err)
	}
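
	// The 20-hop ceiling comes from the fixed-size onion packet: its
	// payload only has room for a limited number of per-hop payloads, so
	// a 21-hop route can never be constructed.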

	// Assert that finding a 21 hop route fails.
	node21 := ctx.keyFromAlias("node-21")
	_, err = ctx.findPath(node21, payAmt)
	if err != errNoPathFound {
		t.Fatalf("expected no-path-found error, but got %v", err)
	}

	// Assert that we can't find a 20 hop route if custom records make it
	// exceed the maximum payload size.
	ctx.restrictParams.DestFeatures = tlvFeatures
	ctx.restrictParams.DestCustomRecords = map[uint64][]byte{
		100000: bytes.Repeat([]byte{1}, 100),
	}
	_, err = ctx.findPath(node20, payAmt)
	if err != errNoPathFound {
		t.Fatalf("expected no-path-found error, but got %v", err)
	}
}

// TestPathNotAvailable asserts that querying for a path to a target that is
// unknown or unreachable within the graph returns an error.
func TestPathNotAvailable(t *testing.T) {
	t.Parallel()

	graph, err := parseTestGraph(basicGraphFilePath)
	if err != nil {
		t.Fatalf("unable to create graph: %v", err)
	}
	defer graph.cleanUp()

	sourceNode, err := graph.graph.SourceNode()
	if err != nil {
		t.Fatalf("unable to fetch source node: %v", err)
	}

	// With the test graph loaded, we'll test that queries for targets that
	// are either unreachable within the graph, or unknown, result in an
	// error.
	unknownNodeStr := "03dd46ff29a6941b4a2607525b043ec9b020b3f318a1bf281536fd7011ec59c882"
	unknownNodeBytes, err := hex.DecodeString(unknownNodeStr)
	if err != nil {
		t.Fatalf("unable to parse bytes: %v", err)
	}
	var unknownNode route.Vertex
	copy(unknownNode[:], unknownNodeBytes)

	_, err = dbFindPath(
		graph.graph, nil, nil,
		noRestrictions, testPathFindingConfig,
		sourceNode.PubKeyBytes, unknownNode, 100, 0,
	)
	if err != errNoPathFound {
		t.Fatalf("path shouldn't have been found: %v", err)
	}
}

// TestDestTLVGraphFallback asserts that we properly detect when we can send
// TLV records to a receiver, and also that we fall back to the receiver's
// node announcement if we don't have invoice features.
func TestDestTLVGraphFallback(t *testing.T) {
	t.Parallel()

	testChannels := []*testChannel{
|
|
|
|
asymmetricTestChannel("roasbeef", "luoji", 100000,
|
|
|
|
&testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeRate: 400,
|
|
|
|
MinHTLC: 1,
|
|
|
|
MaxHTLC: 100000000,
|
|
|
|
}, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeRate: 400,
|
|
|
|
MinHTLC: 1,
|
|
|
|
MaxHTLC: 100000000,
|
|
|
|
}, 0),
|
|
|
|
asymmetricTestChannel("roasbeef", "satoshi", 100000,
|
|
|
|
&testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeRate: 400,
|
|
|
|
MinHTLC: 1,
|
|
|
|
MaxHTLC: 100000000,
|
|
|
|
}, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeRate: 400,
|
|
|
|
MinHTLC: 1,
|
|
|
|
MaxHTLC: 100000000,
|
|
|
|
Features: tlvFeatures,
|
|
|
|
}, 0),
|
|
|
|
}
|
|
|
|
|
|
|
|
ctx := newPathFindingTestContext(t, testChannels, "roasbeef")
|
|
|
|
defer ctx.cleanup()
|
|
|
|
|
2020-03-17 13:32:07 +03:00
|
|
|
sourceNode, err := ctx.graph.SourceNode()
|
	if err != nil {
		t.Fatalf("unable to fetch source node: %v", err)
	}

	find := func(r *RestrictParams,
		target route.Vertex) ([]*channeldb.ChannelEdgePolicy, error) {

		return dbFindPath(
			ctx.graph, nil, nil,
			r, testPathFindingConfig,
			sourceNode.PubKeyBytes, target, 100, 0,
		)
	}

	// Luoji's node ann has an empty feature vector.
	luoji := ctx.testGraphInstance.aliasMap["luoji"]

	// Satoshi's node ann supports TLV.
	satoshi := ctx.testGraphInstance.aliasMap["satoshi"]

	restrictions := *noRestrictions

	// Add custom records w/o any dest features.
	restrictions.DestCustomRecords = record.CustomSet{70000: []byte{}}

	// Path to luoji should fail because his node ann features are empty.
	_, err = find(&restrictions, luoji)
	if err != errNoTlvPayload {
		t.Fatalf("path shouldn't have been found: %v", err)
	}

	// However, path to satoshi should succeed via the fallback because his
	// node ann features have the TLV bit.
	path, err := find(&restrictions, satoshi)
	if err != nil {
		t.Fatalf("path should have been found: %v", err)
	}
	assertExpectedPath(t, ctx.testGraphInstance.aliasMap, path, "satoshi")

	// Add empty destination features. This should cause both paths to fail,
	// since this overrides anything in the graph.
	restrictions.DestFeatures = lnwire.EmptyFeatureVector()

	_, err = find(&restrictions, luoji)
	if err != errNoTlvPayload {
		t.Fatalf("path shouldn't have been found: %v", err)
	}
	_, err = find(&restrictions, satoshi)
	if err != errNoTlvPayload {
		t.Fatalf("path shouldn't have been found: %v", err)
	}

	// Finally, set the TLV dest feature. We should succeed in finding a
	// path to luoji.
	restrictions.DestFeatures = tlvFeatures

	path, err = find(&restrictions, luoji)
	if err != nil {
		t.Fatalf("path should have been found: %v", err)
	}
	assertExpectedPath(t, ctx.testGraphInstance.aliasMap, path, "luoji")
}

// TestMissingFeatureDep asserts that we fail path finding when the
// destination's features are broken, in that the feature vector doesn't signal
// all transitive dependencies.
func TestMissingFeatureDep(t *testing.T) {
	t.Parallel()

	testChannels := []*testChannel{
		asymmetricTestChannel("roasbeef", "conner", 100000,
			&testChannelPolicy{
				Expiry:  144,
				FeeRate: 400,
				MinHTLC: 1,
				MaxHTLC: 100000000,
			},
			&testChannelPolicy{
				Expiry:   144,
				FeeRate:  400,
				MinHTLC:  1,
				MaxHTLC:  100000000,
				Features: payAddrFeatures,
			}, 0,
		),
		asymmetricTestChannel("conner", "joost", 100000,
			&testChannelPolicy{
				Expiry:   144,
				FeeRate:  400,
				MinHTLC:  1,
				MaxHTLC:  100000000,
				Features: payAddrFeatures,
			},
			&testChannelPolicy{
				Expiry:  144,
				FeeRate: 400,
				MinHTLC: 1,
				MaxHTLC: 100000000,
			}, 0,
		),
	}

	ctx := newPathFindingTestContext(t, testChannels, "roasbeef")
	defer ctx.cleanup()

	// Conner's node in the graph has a broken feature vector, since it
	// signals payment addresses without signaling tlv onions. Pathfinding
	// should fail since we validate transitive feature dependencies for the
	// final node.
	conner := ctx.keyFromAlias("conner")
	joost := ctx.keyFromAlias("joost")

	_, err := ctx.findPath(conner, 100)
	if err != feature.NewErrMissingFeatureDep(
		lnwire.TLVOnionPayloadOptional,
	) {
		t.Fatalf("path shouldn't have been found: %v", err)
	}

	// Now, set the TLV and payment address features to override the
	// broken features found in the graph. We should succeed in finding a
	// path to conner.
	ctx.restrictParams.DestFeatures = tlvPayAddrFeatures

	path, err := ctx.findPath(conner, 100)
	if err != nil {
		t.Fatalf("path should have been found: %v", err)
	}
	assertExpectedPath(t, ctx.testGraphInstance.aliasMap, path, "conner")

	// Finally, try to find a route to joost through conner. The
	// destination features are set properly from the previous assertions,
	// but conner's feature vector in the graph is still broken. We expect
	// errNoPathFound and not the missing feature dep err above since
	// intermediate hops are simply skipped if they have invalid feature
	// vectors, leaving no possible route to joost.
	_, err = ctx.findPath(joost, 100)
	if err != errNoPathFound {
		t.Fatalf("path shouldn't have been found: %v", err)
	}
}

// TestUnknownRequiredFeatures asserts that we fail path finding when the
// destination requires an unknown required feature, and that we skip
// intermediaries that signal unknown required features.
func TestUnknownRequiredFeatures(t *testing.T) {
	t.Parallel()

	testChannels := []*testChannel{
		asymmetricTestChannel("roasbeef", "conner", 100000,
			&testChannelPolicy{
				Expiry:  144,
				FeeRate: 400,
				MinHTLC: 1,
				MaxHTLC: 100000000,
			},
			&testChannelPolicy{
				Expiry:   144,
				FeeRate:  400,
				MinHTLC:  1,
				MaxHTLC:  100000000,
				Features: unknownRequiredFeatures,
			}, 0,
		),
		asymmetricTestChannel("conner", "joost", 100000,
			&testChannelPolicy{
				Expiry:   144,
				FeeRate:  400,
				MinHTLC:  1,
				MaxHTLC:  100000000,
				Features: unknownRequiredFeatures,
			},
			&testChannelPolicy{
				Expiry:  144,
				FeeRate: 400,
				MinHTLC: 1,
				MaxHTLC: 100000000,
			}, 0,
		),
	}

	ctx := newPathFindingTestContext(t, testChannels, "roasbeef")
	defer ctx.cleanup()

	conner := ctx.keyFromAlias("conner")
	joost := ctx.keyFromAlias("joost")

	// Conner's node in the graph has an unknown required feature (100).
	// Pathfinding should fail since we check the destination's features for
	// unknown required features before beginning pathfinding.
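	// (Bit 100 is even, and even feature bits are required under the
	// "it's OK to be odd" rule, so the destination cannot be paid without
	// understanding it.)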
	expErr := feature.NewErrUnknownRequired([]lnwire.FeatureBit{100})
	_, err := ctx.findPath(conner, 100)
	if !reflect.DeepEqual(err, expErr) {
		t.Fatalf("path shouldn't have been found: %v", err)
	}

	// Now, try to find a route to joost through conner. The destination
	// features are valid, but conner's feature vector in the graph still
	// requires feature 100. We expect errNoPathFound and not the error
	// above since intermediate hops are simply skipped if they have invalid
	// feature vectors, leaving no possible route to joost. This asserts
	// that we don't try to route _through_ nodes with unknown required
	// features.
	_, err = ctx.findPath(joost, 100)
	if err != errNoPathFound {
		t.Fatalf("path shouldn't have been found: %v", err)
	}
}

// TestDestPaymentAddr asserts that we properly detect when we can send a
// payment address to a receiver, and also that we fall back to the receiver's
// node announcement if we don't have any invoice features.
func TestDestPaymentAddr(t *testing.T) {
	t.Parallel()

	testChannels := []*testChannel{
		symmetricTestChannel("roasbeef", "luoji", 100000,
			&testChannelPolicy{
				Expiry:  144,
				FeeRate: 400,
				MinHTLC: 1,
				MaxHTLC: 100000000,
			},
		),
	}

	ctx := newPathFindingTestContext(t, testChannels, "roasbeef")
	defer ctx.cleanup()

	luoji := ctx.keyFromAlias("luoji")

	// Add payment address w/o any invoice features.
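	// The 32-byte value below is just a dummy payment address; only its
	// presence matters for these assertions.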
	ctx.restrictParams.PaymentAddr = &[32]byte{1}

	// Add empty destination features. This should cause us to fail, since
	// this overrides anything in the graph.
	ctx.restrictParams.DestFeatures = lnwire.EmptyFeatureVector()

	_, err := ctx.findPath(luoji, 100)
	if err != errNoPaymentAddr {
		t.Fatalf("path shouldn't have been found: %v", err)
	}

	// Now, set the TLV and payment address features for the destination. We
	// should succeed in finding a path to luoji.
	ctx.restrictParams.DestFeatures = tlvPayAddrFeatures

	path, err := ctx.findPath(luoji, 100)
	if err != nil {
		t.Fatalf("path should have been found: %v", err)
	}
	assertExpectedPath(t, ctx.testGraphInstance.aliasMap, path, "luoji")
}

func TestPathInsufficientCapacity(t *testing.T) {
	t.Parallel()

	graph, err := parseTestGraph(basicGraphFilePath)
	if err != nil {
		t.Fatalf("unable to create graph: %v", err)
	}
	defer graph.cleanUp()

	sourceNode, err := graph.graph.SourceNode()
	if err != nil {
		t.Fatalf("unable to fetch source node: %v", err)
	}

	// Next, test that attempting to find a path which the current channel
	// graph cannot support due to insufficient capacity triggers an error.

	// To test this we'll attempt to make a payment of 1 BTC, or 100 million
	// satoshis. The largest channel in the basic graph is of size 100k
	// satoshis, so we shouldn't be able to find a path to sophon even
	// though we have a 2-hop link.
	target := graph.aliasMap["sophon"]
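	// Note: btcutil.SatoshiPerBitcoin is 100,000,000 satoshis, well above
	// the 100k satoshi capacity of the largest channel in the basic graph.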
	payAmt := lnwire.NewMSatFromSatoshis(btcutil.SatoshiPerBitcoin)
	_, err = dbFindPath(
		graph.graph, nil, nil,
		noRestrictions, testPathFindingConfig,
		sourceNode.PubKeyBytes, target, payAmt, 0,
	)
	if err != errInsufficientBalance {
t.Fatalf("graph shouldn't be able to support payment: %v", err)
|
|
|
|
}
|
2017-12-01 09:31:18 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
// TestRouteFailMinHTLC tests that if we attempt to route an HTLC which is
// smaller than the advertised minHTLC of an edge, then path finding fails.
func TestRouteFailMinHTLC(t *testing.T) {
	t.Parallel()

	graph, err := parseTestGraph(basicGraphFilePath)
	if err != nil {
		t.Fatalf("unable to create graph: %v", err)
	}
	defer graph.cleanUp()

	sourceNode, err := graph.graph.SourceNode()
	if err != nil {
		t.Fatalf("unable to fetch source node: %v", err)
	}

	// We'll now attempt to route an HTLC of 10 SAT from roasbeef to Son
	// Goku. However, the min HTLC of Son Goku is 1k SAT; as a result, this
	// attempt should fail.
	target := graph.aliasMap["songoku"]
	payAmt := lnwire.MilliSatoshi(10)
	_, err = dbFindPath(
		graph.graph, nil, nil,
		noRestrictions, testPathFindingConfig,
		sourceNode.PubKeyBytes, target, payAmt, 0,
	)
	if err != errNoPathFound {
		t.Fatalf("graph shouldn't be able to support payment: %v", err)
	}
}

// TestRouteFailMaxHTLC tests that if we attempt to route an HTLC which is
// larger than the advertised max HTLC of an edge, then path finding fails.
func TestRouteFailMaxHTLC(t *testing.T) {
	t.Parallel()

	// Set up a test graph:
	// roasbeef <--> firstHop <--> secondHop <--> target
	// We will be adjusting the max HTLC of the edge between the first and
	// second hops.
	var firstToSecondID uint64 = 1
	testChannels := []*testChannel{
		symmetricTestChannel("roasbeef", "first", 100000, &testChannelPolicy{
			Expiry:  144,
			FeeRate: 400,
			MinHTLC: 1,
			MaxHTLC: 100000001,
		}),
		symmetricTestChannel("first", "second", 100000, &testChannelPolicy{
			Expiry:  144,
			FeeRate: 400,
			MinHTLC: 1,
			MaxHTLC: 100000002,
		}, firstToSecondID),
		symmetricTestChannel("second", "target", 100000, &testChannelPolicy{
			Expiry:  144,
			FeeRate: 400,
			MinHTLC: 1,
			MaxHTLC: 100000003,
		}),
	}

	ctx := newPathFindingTestContext(t, testChannels, "roasbeef")
	defer ctx.cleanup()

	// First, attempt to send a payment greater than the max HTLC we are
	// about to set, which should succeed.
	target := ctx.keyFromAlias("target")
	payAmt := lnwire.MilliSatoshi(100001)
	_, err := ctx.findPath(target, payAmt)
	if err != nil {
		t.Fatalf("graph should've been able to support payment: %v", err)
	}

	// Next, update the middle edge policy to only allow payments up to 100k
	// msat.
	graph := ctx.testGraphInstance.graph
	_, midEdge, _, err := graph.FetchChannelEdgesByID(firstToSecondID)
	if err != nil {
		t.Fatalf("unable to fetch channel edges by ID: %v", err)
	}
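	// Bit 0 of message_flags is the option_channel_htlc_max flag, so
	// setting MessageFlags to 1 marks the max_htlc field set below as
	// present in this policy.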
	midEdge.MessageFlags = 1
	midEdge.MaxHTLC = payAmt - 1
	if err := graph.UpdateEdgePolicy(midEdge); err != nil {
		t.Fatalf("unable to update edge: %v", err)
	}

	// We'll now attempt to route through that edge with a payment above
	// 100k msat, which should fail.
	_, err = ctx.findPath(target, payAmt)
	if err != errNoPathFound {
		t.Fatalf("graph shouldn't be able to support payment: %v", err)
	}
}

// TestRouteFailDisabledEdge tests that if we attempt to route to an edge
// that's disabled, then that edge is disqualified, and the routing attempt
// will fail. We also test that this is true only for non-local edges, as we'll
// ignore the disable flags, with the assumption that the correct bandwidth is
// found among the bandwidth hints.
func TestRouteFailDisabledEdge(t *testing.T) {
	t.Parallel()

	graph, err := parseTestGraph(basicGraphFilePath)
	if err != nil {
		t.Fatalf("unable to create graph: %v", err)
	}
	defer graph.cleanUp()

	sourceNode, err := graph.graph.SourceNode()
	if err != nil {
		t.Fatalf("unable to fetch source node: %v", err)
	}

	// First, we'll try to route from roasbeef -> sophon. This should
	// succeed without issue, and return a single path via phamnuwen.
	target := graph.aliasMap["sophon"]
	payAmt := lnwire.NewMSatFromSatoshis(105000)
	_, err = dbFindPath(
		graph.graph, nil, nil,
		noRestrictions, testPathFindingConfig,
		sourceNode.PubKeyBytes, target, payAmt, 0,
	)
	if err != nil {
		t.Fatalf("unable to find path: %v", err)
	}

	// Disable the edge roasbeef->phamnuwen. This should not impact the
	// path finding, as we don't consider the disable flag for local
	// channels (and roasbeef is the source).
	roasToPham := uint64(999991)
	_, e1, e2, err := graph.graph.FetchChannelEdgesByID(roasToPham)
	if err != nil {
		t.Fatalf("unable to fetch edge: %v", err)
	}
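	// lnwire.ChanUpdateDisabled is the disable bit of channel_flags;
	// OR-ing it into both policies marks the channel as disabled in each
	// direction.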
	e1.ChannelFlags |= lnwire.ChanUpdateDisabled
	if err := graph.graph.UpdateEdgePolicy(e1); err != nil {
		t.Fatalf("unable to update edge: %v", err)
	}
	e2.ChannelFlags |= lnwire.ChanUpdateDisabled
	if err := graph.graph.UpdateEdgePolicy(e2); err != nil {
		t.Fatalf("unable to update edge: %v", err)
	}

	_, err = dbFindPath(
		graph.graph, nil, nil,
		noRestrictions, testPathFindingConfig,
		sourceNode.PubKeyBytes, target, payAmt, 0,
	)
	if err != nil {
		t.Fatalf("unable to find path: %v", err)
	}

	// Now, we'll modify the edge from phamnuwen -> sophon, to read that
	// it's disabled.
	phamToSophon := uint64(99999)
	_, e, _, err := graph.graph.FetchChannelEdgesByID(phamToSophon)
	if err != nil {
		t.Fatalf("unable to fetch edge: %v", err)
	}
	e.ChannelFlags |= lnwire.ChanUpdateDisabled
	if err := graph.graph.UpdateEdgePolicy(e); err != nil {
		t.Fatalf("unable to update edge: %v", err)
	}

	// If we attempt to route through that edge, we should get a failure as
	// it is no longer eligible.
	_, err = dbFindPath(
		graph.graph, nil, nil,
		noRestrictions, testPathFindingConfig,
		sourceNode.PubKeyBytes, target, payAmt, 0,
	)
	if err != errNoPathFound {
		t.Fatalf("graph shouldn't be able to support payment: %v", err)
	}
}

// TestPathSourceEdgesBandwidth tests that explicitly passing in a set of
// bandwidth hints is used by the path finding algorithm to consider whether to
// use a local channel.
func TestPathSourceEdgesBandwidth(t *testing.T) {
	t.Parallel()

	graph, err := parseTestGraph(basicGraphFilePath)
	if err != nil {
		t.Fatalf("unable to create graph: %v", err)
	}
	defer graph.cleanUp()

	sourceNode, err := graph.graph.SourceNode()
	if err != nil {
		t.Fatalf("unable to fetch source node: %v", err)
	}

	// First, we'll try to route from roasbeef -> sophon. This should
	// succeed without issue, and return a path via songoku, as that's the
	// cheapest path.
	target := graph.aliasMap["sophon"]
	payAmt := lnwire.NewMSatFromSatoshis(50000)
	path, err := dbFindPath(
		graph.graph, nil, nil,
		noRestrictions, testPathFindingConfig,
		sourceNode.PubKeyBytes, target, payAmt, 0,
	)
	if err != nil {
		t.Fatalf("unable to find path: %v", err)
	}
	assertExpectedPath(t, graph.aliasMap, path, "songoku", "sophon")

	// Now we'll set the bandwidth of the edge roasbeef->songoku and
	// roasbeef->phamnuwen to 0.
	roasToSongoku := uint64(12345)
	roasToPham := uint64(999991)
	bandwidths := map[uint64]lnwire.MilliSatoshi{
		roasToSongoku: 0,
		roasToPham:    0,
	}

	// Since both these edges have a bandwidth of zero, no path should be
	// found.
	_, err = dbFindPath(
		graph.graph, nil, bandwidths,
		noRestrictions, testPathFindingConfig,
		sourceNode.PubKeyBytes, target, payAmt, 0,
	)
	if err != errNoPathFound {
		t.Fatalf("graph shouldn't be able to support payment: %v", err)
	}

	// Set the bandwidth of roasbeef->phamnuwen high enough to carry the
	// payment.
	bandwidths[roasToPham] = 2 * payAmt

	// Now, if we attempt to route again, we should find the path via
	// phamnuwen, as the other source edge won't be considered.
	path, err = dbFindPath(
		graph.graph, nil, bandwidths,
		noRestrictions, testPathFindingConfig,
		sourceNode.PubKeyBytes, target, payAmt, 0,
	)
	if err != nil {
		t.Fatalf("unable to find path: %v", err)
	}
	assertExpectedPath(t, graph.aliasMap, path, "phamnuwen", "sophon")

	// Finally, set the roasbeef->songoku bandwidth, but also set its
	// disable flag.
	bandwidths[roasToSongoku] = 2 * payAmt
	_, e1, e2, err := graph.graph.FetchChannelEdgesByID(roasToSongoku)
	if err != nil {
		t.Fatalf("unable to fetch edge: %v", err)
	}
	e1.ChannelFlags |= lnwire.ChanUpdateDisabled
	if err := graph.graph.UpdateEdgePolicy(e1); err != nil {
		t.Fatalf("unable to update edge: %v", err)
	}
	e2.ChannelFlags |= lnwire.ChanUpdateDisabled
	if err := graph.graph.UpdateEdgePolicy(e2); err != nil {
		t.Fatalf("unable to update edge: %v", err)
	}

	// Since we ignore disable flags for local channels, a path should
	// still be found.
	path, err = dbFindPath(
		graph.graph, nil, bandwidths,
		noRestrictions, testPathFindingConfig,
		sourceNode.PubKeyBytes, target, payAmt, 0,
	)
	if err != nil {
		t.Fatalf("unable to find path: %v", err)
	}
	assertExpectedPath(t, graph.aliasMap, path, "songoku", "sophon")
}

func TestPathInsufficientCapacityWithFee(t *testing.T) {
	t.Parallel()

	// TODO(roasbeef): encode live graph to json

	// TODO(roasbeef): need to add a case, or modify the fee ratio for one,
	// to ensure a path that has sufficient capacity going forward no
	// longer works once fees are applied.
}

func TestPathFindSpecExample(t *testing.T) {
	t.Parallel()

	// All our path finding tests will assume a starting height of 100, so
	// we'll pass that in to ensure that the router uses 100 as the current
	// height.
	const startingHeight = 100
	ctx, cleanUp, err := createTestCtxFromFile(startingHeight, specExampleFilePath)
	if err != nil {
		t.Fatalf("unable to create router: %v", err)
	}
	defer cleanUp()

	// We'll first exercise the scenario of a direct payment from Bob to
	// Carol, so we set "B" as the source node so path finding starts from
	// Bob.
	bob := ctx.aliases["B"]
	bobNode, err := ctx.graph.FetchLightningNode(nil, bob)
	if err != nil {
		t.Fatalf("unable to find bob: %v", err)
	}
	if err := ctx.graph.SetSourceNode(bobNode); err != nil {
		t.Fatalf("unable to set source node: %v", err)
	}

	// Query for a route of 4,999,999 mSAT to carol.
	carol := ctx.aliases["C"]
	const amt lnwire.MilliSatoshi = 4999999
	route, err := ctx.router.FindRoute(
		bobNode.PubKeyBytes, carol, amt, noRestrictions, nil, nil,
		zpay32.DefaultFinalCLTVDelta,
	)
	if err != nil {
		t.Fatalf("unable to find route: %v", err)
	}

	// Now we'll examine the route returned for correctness.
	//
	// It should be sending the exact payment amount as there are no
	// additional hops.
	if route.TotalAmount != amt {
		t.Fatalf("wrong total amount: got %v, expected %v",
			route.TotalAmount, amt)
	}
	if route.Hops[0].AmtToForward != amt {
		t.Fatalf("wrong forward amount: got %v, expected %v",
			route.Hops[0].AmtToForward, amt)
	}

	fee := route.HopFee(0)
	if fee != 0 {
		t.Fatalf("wrong hop fee: got %v, expected %v", fee, 0)
	}

	// The CLTV expiry should be the current height plus 9 (the expiry for
	// the B -> C channel).
	if route.TotalTimeLock !=
		startingHeight+zpay32.DefaultFinalCLTVDelta {

		t.Fatalf("wrong total time lock: got %v, expecting %v",
			route.TotalTimeLock,
			startingHeight+zpay32.DefaultFinalCLTVDelta)
	}

	// Next, we'll set A as the source node so we can assert that we create
	// the proper route for any queries starting with Alice.
	alice := ctx.aliases["A"]
	aliceNode, err := ctx.graph.FetchLightningNode(nil, alice)
	if err != nil {
		t.Fatalf("unable to find alice: %v", err)
	}
	if err := ctx.graph.SetSourceNode(aliceNode); err != nil {
		t.Fatalf("unable to set source node: %v", err)
	}
	ctx.router.selfNode = aliceNode
	source, err := ctx.graph.SourceNode()
	if err != nil {
		t.Fatalf("unable to retrieve source node: %v", err)
	}
	if source.PubKeyBytes != alice {
		t.Fatalf("source node not set")
	}

	// We'll now request a route from A -> B -> C.
	route, err = ctx.router.FindRoute(
		source.PubKeyBytes, carol, amt, noRestrictions, nil, nil,
		zpay32.DefaultFinalCLTVDelta,
	)
	if err != nil {
		t.Fatalf("unable to find routes: %v", err)
	}

	// The route should be two hops.
	if len(route.Hops) != 2 {
		t.Fatalf("route should be %v hops, is instead %v", 2,
			len(route.Hops))
	}

	// The total amount should factor in a fee of 10199 and also use a CLTV
	// delta total of 29 (20 + 9).
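	// That is, the sender must supply the 4,999,999 msat payment plus the
	// 10,199 msat fee Bob charges for the B -> C hop:
	// 4999999 + 10199 = 5010198 msat.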
	expectedAmt := lnwire.MilliSatoshi(5010198)
	if route.TotalAmount != expectedAmt {
		t.Fatalf("wrong amount: got %v, expected %v",
			route.TotalAmount, expectedAmt)
	}
	if route.TotalTimeLock != startingHeight+29 {
		t.Fatalf("wrong total time lock: got %v, expecting %v",
			route.TotalTimeLock, startingHeight+29)
	}

	// Ensure that the hops of the route are properly crafted.
	//
	// After taking the fee, Bob should be forwarding the remainder, which
	// is the exact payment amount destined for Carol.
	if route.Hops[0].AmtToForward != amt {
		t.Fatalf("wrong forward amount: got %v, expected %v",
			route.Hops[0].AmtToForward, amt)
	}

	// We shouldn't pay any fee for the first hop. The fee that we pay for
	// the second hop is applied to the first hop, so we should get a fee
	// of exactly:
	//
	//  * 200 + 4999999 * 2000 / 1000000 = 10199
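	// That is, a 200 msat base fee plus a proportional fee of 2000
	// millionths of the 4,999,999 msat forwarded: the proportional part
	// rounds down to 9999 msat, and 200 + 9999 = 10199 msat.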

	fee = route.HopFee(0)
	if fee != 10199 {
		t.Fatalf("wrong hop fee: got %v, expected %v", fee, 10199)
	}

	// While for the final hop, as there's no additional hop afterwards, we
	// pay no fee.
	fee = route.HopFee(1)
	if fee != 0 {
		t.Fatalf("wrong hop fee: got %v, expected %v", fee, 0)
	}

	// The outgoing CLTV value itself should be the current height plus 9
	// to meet Carol's requirements.
	if route.Hops[0].OutgoingTimeLock !=
		startingHeight+zpay32.DefaultFinalCLTVDelta {

		t.Fatalf("wrong total time lock: got %v, expecting %v",
			route.Hops[0].OutgoingTimeLock,
			startingHeight+zpay32.DefaultFinalCLTVDelta)
	}

	// For B -> C, we assert that the final hop also has the proper
	// parameters.
	lastHop := route.Hops[1]
	if lastHop.AmtToForward != amt {
		t.Fatalf("wrong forward amount: got %v, expected %v",
			lastHop.AmtToForward, amt)
	}
	if lastHop.OutgoingTimeLock !=
		startingHeight+zpay32.DefaultFinalCLTVDelta {

		t.Fatalf("wrong total time lock: got %v, expecting %v",
			lastHop.OutgoingTimeLock,
			startingHeight+zpay32.DefaultFinalCLTVDelta)
	}
}

func assertExpectedPath(t *testing.T, aliasMap map[string]route.Vertex,
	path []*channeldb.ChannelEdgePolicy, nodeAliases ...string) {

	if len(path) != len(nodeAliases) {
		t.Fatal("number of hops and number of aliases do not match")
	}

	for i, hop := range path {
		if hop.Node.PubKeyBytes != aliasMap[nodeAliases[i]] {
			t.Fatalf("expected %v to be pos #%v in hop, instead "+
				"%v was", nodeAliases[i], i, hop.Node.Alias)
		}
	}
}

// TestNewRouteFromEmptyHops tests that the NewRouteFromHops function returns an
// error when the hop list is empty.
func TestNewRouteFromEmptyHops(t *testing.T) {
	t.Parallel()

	var source route.Vertex
	_, err := route.NewRouteFromHops(0, 0, source, []*route.Hop{})
	if err != route.ErrNoRouteHopsProvided {
		t.Fatalf("expected empty hops error: instead got: %v", err)
	}
}

// TestRestrictOutgoingChannel asserts that an outgoing channel restriction is
|
|
|
|
// obeyed by the path finding algorithm.
|
|
|
|
func TestRestrictOutgoingChannel(t *testing.T) {
|
|
|
|
t.Parallel()
|
|
|
|
|
2020-05-08 14:53:22 +03:00
|
|
|
// Define channel ID constants.
|
|
|
|
const (
|
2020-05-07 12:48:39 +03:00
|
|
|
chanSourceA = 1
|
|
|
|
chanATarget = 4
|
|
|
|
chanSourceB1 = 2
|
|
|
|
chanSourceB2 = 3
|
|
|
|
chanBTarget = 5
|
|
|
|
chanSourceTarget = 6
|
2020-05-08 14:53:22 +03:00
|
|
|
)
|
|
|
|
|
2019-02-01 15:53:27 +03:00
|
|
|
// Set up a test graph with three possible paths from roasbeef to
|
2020-05-08 14:53:22 +03:00
|
|
|
// target. The path through chanSourceB1 is the highest cost path.
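// (The b->target channel charges a fee rate of 800 versus 400 on
// a->target and no fee on the direct roasbeef->target channel, which is
// what makes the route through b the most expensive option.)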
|
2019-02-01 15:53:27 +03:00
|
|
|
testChannels := []*testChannel{
|
|
|
|
symmetricTestChannel("roasbeef", "a", 100000, &testChannelPolicy{
|
2019-11-18 13:34:47 +03:00
|
|
|
Expiry: 144,
|
2020-05-08 14:53:22 +03:00
|
|
|
}, chanSourceA),
|
2019-02-01 15:53:27 +03:00
|
|
|
symmetricTestChannel("a", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeRate: 400,
|
2020-05-08 14:53:22 +03:00
|
|
|
}, chanATarget),
|
2019-02-01 15:53:27 +03:00
|
|
|
symmetricTestChannel("roasbeef", "b", 100000, &testChannelPolicy{
|
2019-11-18 13:34:47 +03:00
|
|
|
Expiry: 144,
|
2020-05-08 14:53:22 +03:00
|
|
|
}, chanSourceB1),
|
2019-02-01 15:53:27 +03:00
|
|
|
symmetricTestChannel("roasbeef", "b", 100000, &testChannelPolicy{
|
2019-11-18 13:34:47 +03:00
|
|
|
Expiry: 144,
|
2020-05-08 14:53:22 +03:00
|
|
|
}, chanSourceB2),
|
2019-02-01 15:53:27 +03:00
|
|
|
symmetricTestChannel("b", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
2019-11-18 13:34:47 +03:00
|
|
|
FeeRate: 800,
|
2020-05-08 14:53:22 +03:00
|
|
|
}, chanBTarget),
|
2020-05-07 12:48:39 +03:00
|
|
|
symmetricTestChannel("roasbeef", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
}, chanSourceTarget),
|
2019-02-01 15:53:27 +03:00
|
|
|
}
|
|
|
|
|
2019-11-18 12:52:11 +03:00
|
|
|
ctx := newPathFindingTestContext(t, testChannels, "roasbeef")
|
|
|
|
defer ctx.cleanup()
|
2019-02-01 15:53:27 +03:00
|
|
|
|
|
|
|
const (
|
|
|
|
startingHeight = 100
|
|
|
|
finalHopCLTV = 1
|
|
|
|
)
|
|
|
|
|
|
|
|
paymentAmt := lnwire.NewMSatFromSatoshis(100)
|
2019-11-18 12:52:11 +03:00
|
|
|
target := ctx.keyFromAlias("target")
|
2020-05-08 14:53:22 +03:00
|
|
|
outgoingChannelID := uint64(chanSourceB1)
|
2019-02-01 15:53:27 +03:00
|
|
|
|
2020-05-07 12:48:39 +03:00
|
|
|
// Find the best path given the restriction to only use chanSourceB1 as the
|
|
|
|
// outgoing channel.
|
|
|
|
ctx.restrictParams.OutgoingChannelIDs = []uint64{outgoingChannelID}
|
2019-11-18 12:52:11 +03:00
|
|
|
path, err := ctx.findPath(target, paymentAmt)
|
2019-02-01 15:53:27 +03:00
|
|
|
if err != nil {
|
|
|
|
t.Fatalf("unable to find path: %v", err)
|
|
|
|
}
|
|
|
|
|
2020-05-08 14:53:22 +03:00
|
|
|
// Assert that the route starts with channel chanSourceB1, in line with
|
|
|
|
// the specified restriction.
|
|
|
|
if path[0].ChannelID != chanSourceB1 {
|
|
|
|
t.Fatalf("expected route to pass through channel %v, "+
|
|
|
|
"but channel %v was selected instead", chanSourceB1,
|
|
|
|
path[0].ChannelID)
|
2019-02-01 15:53:27 +03:00
|
|
|
}
|
2020-05-07 12:48:39 +03:00
|
|
|
|
|
|
|
// If a direct channel to target is allowed as well, that channel is
|
|
|
|
// expected to be selected because the routing fees are zero.
|
|
|
|
ctx.restrictParams.OutgoingChannelIDs = []uint64{
|
|
|
|
chanSourceB1, chanSourceTarget,
|
|
|
|
}
|
|
|
|
path, err = ctx.findPath(target, paymentAmt)
|
|
|
|
if err != nil {
|
|
|
|
t.Fatalf("unable to find path: %v", err)
|
|
|
|
}
|
|
|
|
if path[0].ChannelID != chanSourceTarget {
|
|
|
|
t.Fatalf("expected route to pass through channel %v",
|
|
|
|
chanSourceTarget)
|
|
|
|
}
|
2019-02-01 15:53:27 +03:00
|
|
|
}
|
2019-02-13 12:08:53 +03:00
|
|
|
|
2019-11-18 13:54:15 +03:00
|
|
|
// TestRestrictLastHop asserts that a last hop restriction is obeyed by the path
|
|
|
|
// finding algorithm.
|
|
|
|
func TestRestrictLastHop(t *testing.T) {
|
|
|
|
t.Parallel()
|
|
|
|
|
|
|
|
// Set up a test graph with three possible paths from roasbeef to
|
|
|
|
// target. The path via channels 1 and 2 is the lowest cost path.
|
|
|
|
testChannels := []*testChannel{
|
|
|
|
symmetricTestChannel("source", "a", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
}, 1),
|
|
|
|
symmetricTestChannel("a", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeRate: 400,
|
|
|
|
}, 2),
|
|
|
|
symmetricTestChannel("source", "b", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
}, 3),
|
|
|
|
symmetricTestChannel("b", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeRate: 800,
|
|
|
|
}, 4),
|
|
|
|
}
|
|
|
|
|
|
|
|
ctx := newPathFindingTestContext(t, testChannels, "source")
|
|
|
|
defer ctx.cleanup()
|
|
|
|
|
|
|
|
paymentAmt := lnwire.NewMSatFromSatoshis(100)
|
|
|
|
target := ctx.keyFromAlias("target")
|
|
|
|
lastHop := ctx.keyFromAlias("b")
|
|
|
|
|
|
|
|
// Find the best path given the restriction to use b as the last hop.
|
|
|
|
// This should force pathfinding to not take the lowest cost option.
|
|
|
|
ctx.restrictParams.LastHop = &lastHop
|
|
|
|
path, err := ctx.findPath(target, paymentAmt)
|
|
|
|
if err != nil {
|
|
|
|
t.Fatalf("unable to find path: %v", err)
|
|
|
|
}
|
|
|
|
if path[0].ChannelID != 3 {
|
|
|
|
t.Fatalf("expected route to pass through channel 3, "+
|
|
|
|
"but channel %v was selected instead",
|
|
|
|
path[0].ChannelID)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-02-13 12:08:53 +03:00
|
|
|
// TestCltvLimit asserts that a cltv limit is obeyed by the path finding
|
|
|
|
// algorithm.
|
|
|
|
func TestCltvLimit(t *testing.T) {
|
2019-10-11 22:46:10 +03:00
|
|
|
t.Run("no limit", func(t *testing.T) { testCltvLimit(t, 2016, 1) })
|
2019-02-13 12:08:53 +03:00
|
|
|
t.Run("no path", func(t *testing.T) { testCltvLimit(t, 50, 0) })
|
|
|
|
t.Run("force high cost", func(t *testing.T) { testCltvLimit(t, 80, 3) })
|
|
|
|
}
|
|
|
|
|
|
|
|
func testCltvLimit(t *testing.T, limit uint32, expectedChannel uint64) {
|
|
|
|
t.Parallel()
|
|
|
|
|
|
|
|
// Set up a test graph with three possible paths to the target. The path
|
|
|
|
// through a is the lowest cost with a high time lock (144). The path
|
|
|
|
// through b has a higher cost but a lower time lock (100). The path
|
|
|
|
// through c and d (two hops) has the same cost as the path through b,
|
|
|
|
// but the total time lock is lower (60).
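// Given those time locks, the sub-tests above roughly work out as
// follows (a sketch that ignores the final hop CLTV and fee details):
// with an effectively unlimited CLTV budget (2016) the cheapest path
// through a (channel 1) wins, with a limit of 50 no path fits and no
// route is found, and with a limit of 80 only the longer c->d path
// (channel 3) satisfies the constraint, forcing the higher cost route.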
|
|
|
|
testChannels := []*testChannel{
|
|
|
|
symmetricTestChannel("roasbeef", "a", 100000, &testChannelPolicy{}, 1),
|
|
|
|
symmetricTestChannel("a", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeBaseMsat: 10000,
|
|
|
|
MinHTLC: 1,
|
|
|
|
}),
|
|
|
|
symmetricTestChannel("roasbeef", "b", 100000, &testChannelPolicy{}, 2),
|
|
|
|
symmetricTestChannel("b", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 100,
|
|
|
|
FeeBaseMsat: 20000,
|
|
|
|
MinHTLC: 1,
|
|
|
|
}),
|
|
|
|
symmetricTestChannel("roasbeef", "c", 100000, &testChannelPolicy{}, 3),
|
|
|
|
symmetricTestChannel("c", "d", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 30,
|
|
|
|
FeeBaseMsat: 10000,
|
|
|
|
MinHTLC: 1,
|
|
|
|
}),
|
|
|
|
symmetricTestChannel("d", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 30,
|
|
|
|
FeeBaseMsat: 10000,
|
|
|
|
MinHTLC: 1,
|
|
|
|
}),
|
|
|
|
}
|
|
|
|
|
2019-11-18 12:52:11 +03:00
|
|
|
ctx := newPathFindingTestContext(t, testChannels, "roasbeef")
|
|
|
|
defer ctx.cleanup()
|
2019-02-13 12:08:53 +03:00
|
|
|
|
|
|
|
paymentAmt := lnwire.NewMSatFromSatoshis(100)
|
2019-11-18 12:52:11 +03:00
|
|
|
target := ctx.keyFromAlias("target")
|
2019-02-13 12:08:53 +03:00
|
|
|
|
2019-11-18 12:52:11 +03:00
|
|
|
ctx.restrictParams.CltvLimit = limit
|
|
|
|
path, err := ctx.findPath(target, paymentAmt)
|
2019-02-13 12:08:53 +03:00
|
|
|
if expectedChannel == 0 {
|
|
|
|
// Finish test if we expect no route.
|
2019-11-19 22:00:01 +03:00
|
|
|
if err == errNoPathFound {
|
2019-02-13 12:08:53 +03:00
|
|
|
return
|
|
|
|
}
|
|
|
|
t.Fatal("expected no path to be found")
|
|
|
|
}
|
|
|
|
if err != nil {
|
|
|
|
t.Fatalf("unable to find path: %v", err)
|
|
|
|
}
|
|
|
|
|
|
|
|
const (
|
|
|
|
startingHeight = 100
|
|
|
|
finalHopCLTV = 1
|
|
|
|
)
|
|
|
|
route, err := newRoute(
|
2019-12-19 10:55:08 +03:00
|
|
|
ctx.source, path, startingHeight,
|
|
|
|
finalHopParams{
|
|
|
|
amt: paymentAmt,
|
|
|
|
cltvDelta: finalHopCLTV,
|
|
|
|
records: nil,
|
|
|
|
},
|
2019-02-13 12:08:53 +03:00
|
|
|
)
|
|
|
|
if err != nil {
|
|
|
|
t.Fatalf("unable to create path: %v", err)
|
|
|
|
}
|
|
|
|
|
|
|
|
// Assert that the route starts with the expected channel.
|
|
|
|
if route.Hops[0].ChannelID != expectedChannel {
|
|
|
|
t.Fatalf("expected route to pass through channel %v, "+
|
|
|
|
"but channel %v was selected instead", expectedChannel,
|
|
|
|
route.Hops[0].ChannelID)
|
|
|
|
}
|
|
|
|
}
|
2019-03-19 13:45:10 +03:00
|
|
|
|
|
|
|
// TestProbabilityRouting asserts that path finding not only takes into account
|
|
|
|
// fees but also success probability.
|
|
|
|
func TestProbabilityRouting(t *testing.T) {
|
|
|
|
t.Parallel()
|
|
|
|
|
|
|
|
testCases := []struct {
|
2019-05-13 18:00:35 +03:00
|
|
|
name string
|
|
|
|
p10, p11, p20 float64
|
|
|
|
minProbability float64
|
|
|
|
expectedChan uint64
|
2019-03-19 13:45:10 +03:00
|
|
|
}{
|
|
|
|
// Test two variations with probabilities that should multiply
|
|
|
|
// to the same total route probability. In both cases the three
|
|
|
|
// hop route should be the best route. The three hop route has a
|
|
|
|
// probability of 0.5 * 0.8 = 0.4. The fee is 5 (chan 10) + 8
|
|
|
|
// (chan 11) = 13. Path finding distance should work out to: 13
|
|
|
|
// + 10 (attempt penalty) / 0.4 = 38. The two hop route is 25 +
|
|
|
|
// 10 / 0.7 = 39.
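//
// Put differently, these cases assume a cost model along the lines
// of (sketch):
//
//   distance = total_fee + attempt_penalty / route_success_probability
//
// so a cheaper route can still lose out to a more reliable one.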
|
|
|
|
{
|
|
|
|
name: "three hop 1",
|
|
|
|
p10: 0.8, p11: 0.5, p20: 0.7,
|
2019-05-13 18:00:35 +03:00
|
|
|
minProbability: 0.1,
|
|
|
|
expectedChan: 10,
|
2019-03-19 13:45:10 +03:00
|
|
|
},
|
|
|
|
{
|
|
|
|
name: "three hop 2",
|
|
|
|
p10: 0.5, p11: 0.8, p20: 0.7,
|
2019-05-13 18:00:35 +03:00
|
|
|
minProbability: 0.1,
|
|
|
|
expectedChan: 10,
|
2019-03-19 13:45:10 +03:00
|
|
|
},
|
|
|
|
|
|
|
|
// If the probability of the two hop route is increased, its
|
|
|
|
// distance becomes 25 + 10 / 0.85 = 37. This is less than the
|
|
|
|
// three hop route with its distance 38. So with an attempt
|
|
|
|
// penalty of 10, the higher fee route is chosen because of the
|
|
|
|
// compensation for success probability.
|
|
|
|
{
|
|
|
|
name: "two hop higher cost",
|
|
|
|
p10: 0.5, p11: 0.8, p20: 0.85,
|
2019-05-13 18:00:35 +03:00
|
|
|
minProbability: 0.1,
|
|
|
|
expectedChan: 20,
|
|
|
|
},
|
|
|
|
|
|
|
|
// If the same probabilities are used with a probability lower bound of
|
|
|
|
// 0.5, we expect the three hop route with probability 0.4 to be
|
|
|
|
// excluded and the two hop route to be picked.
|
|
|
|
{
|
|
|
|
name: "probability limit",
|
|
|
|
p10: 0.8, p11: 0.5, p20: 0.7,
|
|
|
|
minProbability: 0.5,
|
|
|
|
expectedChan: 20,
|
|
|
|
},
|
|
|
|
|
|
|
|
// With a probability limit above the probability of both routes, we
|
|
|
|
// expect no route to be returned. This expectation is signaled by using
|
|
|
|
// expected channel 0.
|
|
|
|
{
|
|
|
|
name: "probability limit no routes",
|
|
|
|
p10: 0.8, p11: 0.5, p20: 0.7,
|
|
|
|
minProbability: 0.8,
|
|
|
|
expectedChan: 0,
|
2019-03-19 13:45:10 +03:00
|
|
|
},
|
|
|
|
}
|
|
|
|
|
|
|
|
for _, tc := range testCases {
|
|
|
|
t.Run(tc.name, func(t *testing.T) {
|
|
|
|
testProbabilityRouting(
|
2019-05-13 18:00:35 +03:00
|
|
|
t, tc.p10, tc.p11, tc.p20,
|
|
|
|
tc.minProbability, tc.expectedChan,
|
2019-03-19 13:45:10 +03:00
|
|
|
)
|
|
|
|
})
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-05-13 18:00:35 +03:00
|
|
|
func testProbabilityRouting(t *testing.T, p10, p11, p20, minProbability float64,
|
2019-03-19 13:45:10 +03:00
|
|
|
expectedChan uint64) {
|
|
|
|
|
|
|
|
t.Parallel()
|
|
|
|
|
|
|
|
// Set up a test graph with two possible paths to the target: a three
|
|
|
|
// hop path (via channels 10 and 11) and a two hop path (via channel
|
|
|
|
// 20).
|
|
|
|
testChannels := []*testChannel{
|
|
|
|
symmetricTestChannel("roasbeef", "a1", 100000, &testChannelPolicy{}),
|
|
|
|
symmetricTestChannel("roasbeef", "b", 100000, &testChannelPolicy{}),
|
|
|
|
symmetricTestChannel("a1", "a2", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeBaseMsat: lnwire.NewMSatFromSatoshis(5),
|
|
|
|
MinHTLC: 1,
|
|
|
|
}, 10),
|
|
|
|
symmetricTestChannel("a2", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeBaseMsat: lnwire.NewMSatFromSatoshis(8),
|
|
|
|
MinHTLC: 1,
|
|
|
|
}, 11),
|
|
|
|
symmetricTestChannel("b", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 100,
|
|
|
|
FeeBaseMsat: lnwire.NewMSatFromSatoshis(25),
|
|
|
|
MinHTLC: 1,
|
|
|
|
}, 20),
|
|
|
|
}
|
|
|
|
|
2019-11-18 12:52:11 +03:00
|
|
|
ctx := newPathFindingTestContext(t, testChannels, "roasbeef")
|
|
|
|
defer ctx.cleanup()
|
2019-07-29 16:10:58 +03:00
|
|
|
|
2019-11-18 12:52:11 +03:00
|
|
|
alias := ctx.testGraphInstance.aliasMap
|
2019-03-19 13:45:10 +03:00
|
|
|
|
|
|
|
paymentAmt := lnwire.NewMSatFromSatoshis(100)
|
2019-11-18 12:52:11 +03:00
|
|
|
target := ctx.testGraphInstance.aliasMap["target"]
|
2019-03-19 13:45:10 +03:00
|
|
|
|
|
|
|
// Configure a probability source with the test parameters.
|
2019-11-18 12:52:11 +03:00
|
|
|
ctx.restrictParams.ProbabilitySource = func(fromNode, toNode route.Vertex,
|
2019-03-19 19:09:27 +03:00
|
|
|
amt lnwire.MilliSatoshi) float64 {
|
|
|
|
|
|
|
|
if amt == 0 {
|
|
|
|
t.Fatal("expected non-zero amount")
|
|
|
|
}
|
2019-03-19 13:45:10 +03:00
|
|
|
|
2019-07-29 16:10:58 +03:00
|
|
|
switch {
|
|
|
|
case fromNode == alias["a1"] && toNode == alias["a2"]:
|
2019-03-19 13:45:10 +03:00
|
|
|
return p10
|
2019-07-29 16:10:58 +03:00
|
|
|
case fromNode == alias["a2"] && toNode == alias["target"]:
|
2019-03-19 13:45:10 +03:00
|
|
|
return p11
|
2019-07-29 16:10:58 +03:00
|
|
|
case fromNode == alias["b"] && toNode == alias["target"]:
|
2019-03-19 13:45:10 +03:00
|
|
|
return p20
|
|
|
|
default:
|
|
|
|
return 1
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-11-18 12:52:11 +03:00
|
|
|
ctx.pathFindingConfig = PathFindingConfig{
|
|
|
|
PaymentAttemptPenalty: lnwire.NewMSatFromSatoshis(10),
|
|
|
|
MinProbability: minProbability,
|
|
|
|
}
|
|
|
|
|
|
|
|
path, err := ctx.findPath(target, paymentAmt)
|
2019-05-13 18:00:35 +03:00
|
|
|
if expectedChan == 0 {
|
2019-11-19 22:00:01 +03:00
|
|
|
if err != errNoPathFound {
|
2019-05-13 18:00:35 +03:00
|
|
|
t.Fatalf("expected no path found, but got %v", err)
|
|
|
|
}
|
|
|
|
return
|
|
|
|
}
|
2019-03-19 13:45:10 +03:00
|
|
|
if err != nil {
|
|
|
|
t.Fatal(err)
|
|
|
|
}
|
|
|
|
|
|
|
|
// Assert that the route passes through the expected channel.
|
|
|
|
if path[1].ChannelID != expectedChan {
|
|
|
|
t.Fatalf("expected route to pass through channel %v, "+
|
|
|
|
"but channel %v was selected instead", expectedChan,
|
|
|
|
path[1].ChannelID)
|
|
|
|
}
|
|
|
|
}
|
2019-11-18 12:52:11 +03:00
|
|
|
|
2019-12-01 16:55:01 +03:00
|
|
|
// TestEqualCostRouteSelection asserts that route probability will be used as a
|
|
|
|
// tie breaker in case the path finding costs are equal.
|
|
|
|
func TestEqualCostRouteSelection(t *testing.T) {
|
|
|
|
t.Parallel()
|
|
|
|
|
|
|
|
// Set up a test graph with two possible paths to the target: via a and
|
|
|
|
// via b. The routing fees and probabilities are chosen such that the
|
|
|
|
// algorithm will first explore target->a->source (backwards search).
|
|
|
|
// This route has fee 6 and a penalty of 4 for the 25% success
|
|
|
|
// probability. The algorithm will then proceed with evaluating
|
|
|
|
// target->b->source, which has a fee of 8 and a penalty of 2 for the
|
|
|
|
// 50% success probability. Both routes have the same path finding cost
|
|
|
|
// of 10. It is expected that in that case, the highest probability
|
|
|
|
// route (through b) is chosen.
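// Concretely, with a payment attempt penalty of 1: via a the cost is
// 6 + 1/0.25 = 10, and via b it is 8 + 1/0.5 = 10 (a rough sketch of
// the same cost model as in the probability routing test above).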
|
|
|
|
testChannels := []*testChannel{
|
|
|
|
symmetricTestChannel("source", "a", 100000, &testChannelPolicy{}),
|
|
|
|
symmetricTestChannel("source", "b", 100000, &testChannelPolicy{}),
|
|
|
|
symmetricTestChannel("a", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeBaseMsat: lnwire.NewMSatFromSatoshis(6),
|
|
|
|
MinHTLC: 1,
|
|
|
|
}, 1),
|
|
|
|
symmetricTestChannel("b", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 100,
|
|
|
|
FeeBaseMsat: lnwire.NewMSatFromSatoshis(8),
|
|
|
|
MinHTLC: 1,
|
|
|
|
}, 2),
|
|
|
|
}
|
|
|
|
|
|
|
|
ctx := newPathFindingTestContext(t, testChannels, "source")
|
|
|
|
defer ctx.cleanup()
|
|
|
|
|
|
|
|
alias := ctx.testGraphInstance.aliasMap
|
|
|
|
|
|
|
|
paymentAmt := lnwire.NewMSatFromSatoshis(100)
|
|
|
|
target := ctx.testGraphInstance.aliasMap["target"]
|
|
|
|
|
|
|
|
ctx.restrictParams.ProbabilitySource = func(fromNode, toNode route.Vertex,
|
|
|
|
amt lnwire.MilliSatoshi) float64 {
|
|
|
|
|
|
|
|
switch {
|
|
|
|
case fromNode == alias["source"] && toNode == alias["a"]:
|
|
|
|
return 0.25
|
|
|
|
case fromNode == alias["source"] && toNode == alias["b"]:
|
|
|
|
return 0.5
|
|
|
|
default:
|
|
|
|
return 1
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
ctx.pathFindingConfig = PathFindingConfig{
|
|
|
|
PaymentAttemptPenalty: lnwire.NewMSatFromSatoshis(1),
|
|
|
|
}
|
|
|
|
|
|
|
|
path, err := ctx.findPath(target, paymentAmt)
|
|
|
|
if err != nil {
|
|
|
|
t.Fatal(err)
|
|
|
|
}
|
|
|
|
|
|
|
|
if path[1].ChannelID != 2 {
|
|
|
|
t.Fatalf("expected route to pass through channel %v, "+
|
|
|
|
"but channel %v was selected instead", 2,
|
|
|
|
path[1].ChannelID)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-11-18 11:15:05 +03:00
|
|
|
// TestNoCycle tries to guide the path finding algorithm into reconstructing an
|
|
|
|
// endless route. It asserts that the algorithm is able to handle this properly.
|
|
|
|
func TestNoCycle(t *testing.T) {
|
|
|
|
t.Parallel()
|
|
|
|
|
|
|
|
// Set up a test graph with two paths: source->a->target and
|
|
|
|
// source->b->c->target. The fees are set up such that, searching
|
|
|
|
// backwards, the algorithm will evaluate the following end of the route
|
|
|
|
// first: ->target->c->target. This does not make sense, because if
|
|
|
|
// target is reached, there is no need to continue to c. A proper
|
|
|
|
// implementation will then go on with alternative routes. It will then
|
|
|
|
// consider ->a->target because its cost is lower than the alternative
|
|
|
|
// ->b->c->target and finally find source->a->target as the best route.
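// (For reference: the a->target hop charges a 600 msat base fee, while
// the b->c->target alternative costs 2000 msat in total, so the route
// through a is the cheaper of the two acyclic options.)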
|
|
|
|
testChannels := []*testChannel{
|
|
|
|
symmetricTestChannel("source", "a", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
}, 1),
|
|
|
|
symmetricTestChannel("source", "b", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
}, 2),
|
|
|
|
symmetricTestChannel("b", "c", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeBaseMsat: 2000,
|
|
|
|
}, 3),
|
|
|
|
symmetricTestChannel("c", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeBaseMsat: 0,
|
|
|
|
}, 4),
|
|
|
|
symmetricTestChannel("a", "target", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeBaseMsat: 600,
|
|
|
|
}, 5),
|
|
|
|
}
|
|
|
|
|
|
|
|
ctx := newPathFindingTestContext(t, testChannels, "source")
|
|
|
|
defer ctx.cleanup()
|
|
|
|
|
|
|
|
const (
|
|
|
|
startingHeight = 100
|
|
|
|
finalHopCLTV = 1
|
|
|
|
)
|
|
|
|
|
|
|
|
paymentAmt := lnwire.NewMSatFromSatoshis(100)
|
|
|
|
target := ctx.keyFromAlias("target")
|
|
|
|
|
|
|
|
// Find the best path from source to target. We expect the cheapest
|
|
|
|
// route source->a->target to be found.
|
|
|
|
path, err := ctx.findPath(target, paymentAmt)
|
|
|
|
if err != nil {
|
|
|
|
t.Fatalf("unable to find path: %v", err)
|
|
|
|
}
|
|
|
|
route, err := newRoute(
|
2019-12-19 10:55:08 +03:00
|
|
|
ctx.source, path, startingHeight,
|
|
|
|
finalHopParams{
|
|
|
|
amt: paymentAmt,
|
|
|
|
cltvDelta: finalHopCLTV,
|
|
|
|
records: nil,
|
|
|
|
},
|
2019-11-18 11:15:05 +03:00
|
|
|
)
|
|
|
|
if err != nil {
|
|
|
|
t.Fatalf("unable to create path: %v", err)
|
|
|
|
}
|
|
|
|
|
|
|
|
if len(route.Hops) != 2 {
|
|
|
|
t.Fatalf("unexpected route")
|
|
|
|
}
|
|
|
|
if route.Hops[0].ChannelID != 1 {
|
|
|
|
t.Fatalf("unexpected first hop")
|
|
|
|
}
|
|
|
|
if route.Hops[1].ChannelID != 5 {
|
|
|
|
t.Fatalf("unexpected second hop")
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-11-18 12:19:20 +03:00
|
|
|
// TestRouteToSelf tests that it is possible to find a route to the self node.
|
|
|
|
func TestRouteToSelf(t *testing.T) {
|
|
|
|
t.Parallel()
|
|
|
|
|
|
|
|
testChannels := []*testChannel{
|
|
|
|
symmetricTestChannel("source", "a", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeBaseMsat: 500,
|
|
|
|
}, 1),
|
|
|
|
symmetricTestChannel("source", "b", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeBaseMsat: 1000,
|
|
|
|
}, 2),
|
|
|
|
symmetricTestChannel("a", "b", 100000, &testChannelPolicy{
|
|
|
|
Expiry: 144,
|
|
|
|
FeeBaseMsat: 1000,
|
|
|
|
}, 3),
|
|
|
|
}
|
|
|
|
|
|
|
|
ctx := newPathFindingTestContext(t, testChannels, "source")
|
|
|
|
defer ctx.cleanup()
|
|
|
|
|
|
|
|
paymentAmt := lnwire.NewMSatFromSatoshis(100)
|
|
|
|
target := ctx.source
|
|
|
|
|
|
|
|
// Find the best path to self. We expect this to be source->a->source,
|
|
|
|
// because a charges the lowest forwarding fee.
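// (Channel 1's policy carries a 500 msat base fee versus 1000 msat via
// b, so the circular route source->a->source costs 500 msat in fees.)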
|
|
|
|
path, err := ctx.findPath(target, paymentAmt)
|
|
|
|
if err != nil {
|
|
|
|
t.Fatalf("unable to find path: %v", err)
|
|
|
|
}
|
|
|
|
ctx.assertPath(path, []uint64{1, 1})
|
|
|
|
|
|
|
|
outgoingChanID := uint64(1)
|
|
|
|
lastHop := ctx.keyFromAlias("b")
|
2020-05-07 12:48:39 +03:00
|
|
|
ctx.restrictParams.OutgoingChannelIDs = []uint64{outgoingChanID}
|
2019-11-18 12:19:20 +03:00
|
|
|
ctx.restrictParams.LastHop = &lastHop
|
|
|
|
|
|
|
|
// Find the best path to self given that we want to go out via channel 1
|
|
|
|
// and return through node b.
|
|
|
|
path, err = ctx.findPath(target, paymentAmt)
|
|
|
|
if err != nil {
|
|
|
|
t.Fatalf("unable to find path: %v", err)
|
|
|
|
}
|
|
|
|
ctx.assertPath(path, []uint64{1, 3, 2})
|
|
|
|
}
|
|
|
|
|
2019-11-18 12:52:11 +03:00
|
|
|
type pathFindingTestContext struct {
|
|
|
|
t *testing.T
|
2020-03-17 13:32:07 +03:00
|
|
|
graph *channeldb.ChannelGraph
|
2019-11-18 12:52:11 +03:00
|
|
|
restrictParams RestrictParams
|
2020-04-16 11:45:00 +03:00
|
|
|
bandwidthHints map[uint64]lnwire.MilliSatoshi
|
2019-11-18 12:52:11 +03:00
|
|
|
pathFindingConfig PathFindingConfig
|
|
|
|
testGraphInstance *testGraphInstance
|
|
|
|
source route.Vertex
|
|
|
|
}
|
|
|
|
|
|
|
|
func newPathFindingTestContext(t *testing.T, testChannels []*testChannel,
|
|
|
|
source string) *pathFindingTestContext {
|
|
|
|
|
|
|
|
testGraphInstance, err := createTestGraphFromChannels(
|
|
|
|
testChannels, source,
|
|
|
|
)
|
|
|
|
if err != nil {
|
|
|
|
t.Fatalf("unable to create graph: %v", err)
|
|
|
|
}
|
|
|
|
|
|
|
|
sourceNode, err := testGraphInstance.graph.SourceNode()
|
|
|
|
if err != nil {
|
|
|
|
t.Fatalf("unable to fetch source node: %v", err)
|
|
|
|
}
|
|
|
|
|
|
|
|
ctx := &pathFindingTestContext{
|
|
|
|
t: t,
|
|
|
|
testGraphInstance: testGraphInstance,
|
|
|
|
source: route.Vertex(sourceNode.PubKeyBytes),
|
2019-11-21 14:39:08 +03:00
|
|
|
pathFindingConfig: *testPathFindingConfig,
|
2020-03-17 13:32:07 +03:00
|
|
|
graph: testGraphInstance.graph,
|
|
|
|
restrictParams: *noRestrictions,
|
2019-11-18 12:52:11 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
return ctx
|
|
|
|
}
|
|
|
|
|
|
|
|
func (c *pathFindingTestContext) keyFromAlias(alias string) route.Vertex {
|
|
|
|
return c.testGraphInstance.aliasMap[alias]
|
|
|
|
}
|
|
|
|
|
|
|
|
func (c *pathFindingTestContext) aliasFromKey(pubKey route.Vertex) string {
|
|
|
|
for alias, key := range c.testGraphInstance.aliasMap {
|
|
|
|
if key == pubKey {
|
|
|
|
return alias
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return ""
|
|
|
|
}
|
|
|
|
|
|
|
|
func (c *pathFindingTestContext) cleanup() {
|
|
|
|
c.testGraphInstance.cleanUp()
|
|
|
|
}
|
|
|
|
|
|
|
|
func (c *pathFindingTestContext) findPath(target route.Vertex,
|
|
|
|
amt lnwire.MilliSatoshi) ([]*channeldb.ChannelEdgePolicy,
|
|
|
|
error) {
|
|
|
|
|
2020-03-17 13:32:07 +03:00
|
|
|
return dbFindPath(
|
2020-04-16 11:45:00 +03:00
|
|
|
c.graph, nil, c.bandwidthHints, &c.restrictParams,
|
|
|
|
&c.pathFindingConfig, c.source, target, amt, 0,
|
2019-11-18 12:52:11 +03:00
|
|
|
)
|
|
|
|
}
|
2019-11-18 12:19:20 +03:00
|
|
|
|
|
|
|
func (c *pathFindingTestContext) assertPath(path []*channeldb.ChannelEdgePolicy, expected []uint64) {
|
|
|
|
if len(path) != len(expected) {
|
|
|
|
c.t.Fatalf("expected path of length %v, but got %v",
|
|
|
|
len(expected), len(path))
|
|
|
|
}
|
|
|
|
|
|
|
|
for i, edge := range path {
|
|
|
|
if edge.ChannelID != expected[i] {
|
|
|
|
c.t.Fatalf("expected hop %v to be channel %v, "+
|
|
|
|
"but got %v", i, expected[i], edge.ChannelID)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2020-03-17 13:32:07 +03:00
|
|
|
|
|
|
|
// dbFindPath calls findPath after getting a db transaction from the database
|
|
|
|
// graph.
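// The routing transaction is closed via a deferred call once path
// finding has completed.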
|
|
|
|
func dbFindPath(graph *channeldb.ChannelGraph,
|
|
|
|
additionalEdges map[route.Vertex][]*channeldb.ChannelEdgePolicy,
|
|
|
|
bandwidthHints map[uint64]lnwire.MilliSatoshi,
|
|
|
|
r *RestrictParams, cfg *PathFindingConfig,
|
|
|
|
source, target route.Vertex, amt lnwire.MilliSatoshi,
|
|
|
|
finalHtlcExpiry int32) ([]*channeldb.ChannelEdgePolicy, error) {
|
|
|
|
|
|
|
|
routingTx, err := newDbRoutingTx(graph)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
defer func() {
|
|
|
|
err := routingTx.close()
|
|
|
|
if err != nil {
|
|
|
|
log.Errorf("Error closing db tx: %v", err)
|
|
|
|
}
|
|
|
|
}()
|
|
|
|
|
|
|
|
return findPath(
|
|
|
|
&graphParams{
|
|
|
|
additionalEdges: additionalEdges,
|
|
|
|
bandwidthHints: bandwidthHints,
|
|
|
|
graph: routingTx,
|
|
|
|
},
|
|
|
|
r, cfg, source, target, amt, finalHtlcExpiry,
|
|
|
|
)
|
|
|
|
}
|