Since we will use peer flap rate to determine how we rate limit, we
store this value on disk per peer per channel. This allows us to
restart with memory of our peers' past behaviour, so badly behaving
peers do not get a fresh start on restart. The last flap timestamp is
stored with our flap count so that we can degrade this all-time flap
count over time for peers that have not recently flapped.
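As a rough sketch of the cool-down idea (names like peerFlapInfo and
cooledFlapCount are illustrative, not the exact chanfitness API):

```go
package main

import (
	"fmt"
	"time"
)

// peerFlapInfo is the value we persist per peer: the all-time flap
// count and the timestamp of the most recent flap.
type peerFlapInfo struct {
	flapCount    int
	lastFlapTime time.Time
}

// cooledFlapCount degrades the all-time flap count based on how long
// ago the peer last flapped, halving it for every coolDown period
// that has elapsed, so long-stable peers are treated leniently.
func (p *peerFlapInfo) cooledFlapCount(now time.Time,
	coolDown time.Duration) int {

	halvings := int(now.Sub(p.lastFlapTime) / coolDown)
	cooled := p.flapCount
	for i := 0; i < halvings && cooled > 0; i++ {
		cooled /= 2
	}

	return cooled
}

func main() {
	info := &peerFlapInfo{
		flapCount:    40,
		lastFlapTime: time.Now().Add(-48 * time.Hour),
	}

	// Two 24h cool-down periods have passed, so 40 is halved twice.
	fmt.Println(info.cooledFlapCount(time.Now(), 24*time.Hour)) // 10
}
```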
To prevent flapping peers from endlessly DoS-ing us with online and
offline events, we rate limit the number of events we will store per
period, using their flap rate to determine how often we will add their
events to our in-memory list of online events.
Since we are tracking online events, we need to track the aggregate
change over the rate-limited period, otherwise we will lose track of
a peer's current state. For example, if we store an online event but
do not store the subsequent offline event, we will believe that the
peer is online when they actually are not. To address this, we "stage"
a single event which keeps track of all the events that occurred while
we were rate limiting the peer. At the end of the rate limiting period,
we store the last state for that peer, thereby ensuring that
we maintain our record of their most recent state.
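A minimal sketch of the staging idea, with hypothetical names rather
than the real chanfitness types:

```go
package main

import (
	"fmt"
	"time"
)

// event is a single online/offline transition for a peer.
type event struct {
	online    bool
	timestamp time.Time
}

// rateLimitedLog stores events for a peer, staging the most recent
// event while the peer is being rate limited so that the peer's final
// state within the period is never lost.
type rateLimitedLog struct {
	events      []event
	staged      *event
	periodStart time.Time
	period      time.Duration
}

// add records an event directly if we are not rate limiting, and
// stages it otherwise. Staging overwrites the previously staged event,
// so only the last state seen during the period survives.
func (l *rateLimitedLog) add(e event, limited bool) {
	if !limited {
		l.events = append(l.events, e)
		return
	}

	l.staged = &e
}

// flush commits the staged event, if any, at the end of the rate
// limiting period, preserving the peer's most recent state.
func (l *rateLimitedLog) flush(now time.Time) {
	if l.staged != nil && now.Sub(l.periodStart) >= l.period {
		l.events = append(l.events, *l.staged)
		l.staged = nil
		l.periodStart = now
	}
}

func main() {
	log := &rateLimitedLog{periodStart: time.Now(), period: time.Minute}

	log.add(event{online: true, timestamp: time.Now()}, false)

	// These flaps are rate limited; only the last one is staged.
	log.add(event{online: false, timestamp: time.Now()}, true)
	log.add(event{online: true, timestamp: time.Now()}, true)

	log.flush(time.Now().Add(2 * time.Minute))
	fmt.Println(len(log.events)) // 2: stored event plus staged event.
}
```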
When dealing with online events, we actually need to track our events
by peer, not by channel. All we need to track channels is a set of
online events for the peer which at least contains the events relevant
to those channels. This change refactors chanfitness to track by peer.
We currently query the store for uptime and lifespan individually. As
we add more fields, this design will require more queries. This change
combines the requests into a single channel info request so that we do
not need to add unnecessary boilerplate going forward.
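Illustratively, the combined request could look something like this
(channelInfo and getChanInfo are stand-in names, not the actual API):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// channelInfo bundles all the fields we want for a channel into one
// response, so adding a new field later does not require another
// store query.
type channelInfo struct {
	lifetime time.Duration
	uptime   time.Duration
}

// eventStore is a stand-in for the chanfitness store.
type eventStore struct {
	channels map[uint64]*channelInfo
}

// getChanInfo answers a single combined request instead of separate
// uptime and lifespan queries.
func (s *eventStore) getChanInfo(chanID uint64) (*channelInfo, error) {
	info, ok := s.channels[chanID]
	if !ok {
		return nil, errors.New("channel not found")
	}

	return info, nil
}

func main() {
	store := &eventStore{channels: map[uint64]*channelInfo{
		123: {lifetime: time.Hour, uptime: 45 * time.Minute},
	}}

	info, err := store.getChanInfo(123)
	if err != nil {
		panic(err)
	}
	fmt.Println(info.uptime, info.lifetime)
}
```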
To get our uptime, we first filter our event log to get online periods.
This change updates this code to be tolerant of consecutive online or
offline events in the log. This tolerance will be required for rate
limiting, because we will not record every event for anti-DoS reasons:
we could record an online event, ignore an offline event and then
record another offline event. We could just ignore such duplicate
events, but we will also need this tolerance once we persist uptime,
since a peer's last event before restart may be an online event, and
we will record another online event when we come back up.
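A simplified sketch of duplicate-tolerant filtering (the real
implementation also has to close a trailing open period at the current
time, which is omitted here for brevity):

```go
package main

import (
	"fmt"
	"time"
)

// event is an online/offline transition recorded for a peer.
type event struct {
	online    bool
	timestamp time.Time
}

// onlinePeriod is a span during which the peer was continuously online.
type onlinePeriod struct {
	start, end time.Time
}

// getOnlinePeriods pairs each online event with the next offline
// event, skipping consecutive duplicate states, so the log may legally
// contain online,online or offline,offline runs.
func getOnlinePeriods(events []event) []onlinePeriod {
	var (
		periods []onlinePeriod
		start   time.Time
		online  bool
	)

	for _, e := range events {
		// Ignore events that do not change the peer's state.
		if e.online == online {
			continue
		}
		online = e.online

		if e.online {
			start = e.timestamp
			continue
		}

		periods = append(periods, onlinePeriod{
			start: start,
			end:   e.timestamp,
		})
	}

	return periods
}

func main() {
	now := time.Now()
	events := []event{
		{online: true, timestamp: now.Add(-3 * time.Hour)},
		// Duplicate online event, e.g. recorded again after restart.
		{online: true, timestamp: now.Add(-2 * time.Hour)},
		{online: false, timestamp: now.Add(-time.Hour)},
	}

	periods := getOnlinePeriods(events)
	fmt.Println(len(periods), periods[0].end.Sub(periods[0].start))
	// Output: 1 2h0m0s
}
```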
As we add more elements to the chanfitness subsystem, we will require
more complex testing. The current tests are built around the inability
to mock subscriptions, which is remedied by the addition of our own
mock. This setup allows us to run the full store in a test, rather than
having to manually spin up the main goroutine. Mocking our
subscriptions is required so that we can block our subscribe updates on
consumption; using the real package provides no guarantee that the
client receives the update before shutdown, which produces test flakes.
This change also makes a move towards separating the testing of our
event store from the testing of the underlying event logs, to prepare
for further refactoring.
The current implementation of subscribe is difficult to mock because
the queue that we send updates on is unexported, so you cannot create
a subscribe.Client object and then add your own updates. While it is
possible to run a subscribe server in tests, subscribe servers shut
down before dispatching their updates to all clients, which can be
flaky (and is difficult to work around). In this commit, we add a
subscription interface so that these testing struggles can be addressed
with a mock.
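A minimal sketch of what such an interface and mock could look like
(the shape here is illustrative; the real interface in lnd may differ):

```go
package main

import "fmt"

// subscription abstracts the subscribe server so tests can swap in a
// mock that blocks until updates are consumed.
type subscription interface {
	Updates() <-chan interface{}
	Cancel()
}

// mockSubscription feeds test-controlled updates over an unbuffered
// channel, guaranteeing the client has received each update before
// the test proceeds, which avoids shutdown races and flakes.
type mockSubscription struct {
	updates chan interface{}
}

var _ subscription = (*mockSubscription)(nil)

func newMockSubscription() *mockSubscription {
	return &mockSubscription{updates: make(chan interface{})}
}

func (m *mockSubscription) Updates() <-chan interface{} {
	return m.updates
}

func (m *mockSubscription) Cancel() {
	close(m.updates)
}

// sendUpdate blocks until the consumer reads the update, so the test
// knows it was delivered.
func (m *mockSubscription) sendUpdate(u interface{}) {
	m.updates <- u
}

func main() {
	sub := newMockSubscription()

	done := make(chan struct{})
	go func() {
		defer close(done)
		for u := range sub.Updates() {
			fmt.Println("got update:", u)
		}
	}()

	sub.sendUpdate("peer online") // Returns only once consumed.
	sub.Cancel()
	<-done
}
```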
The original PR was written with 4 spaces instead of 8, so we do a
once-off fix here rather than fixing bit by bit in the subsequent
commits and cluttering them for review.
To stop the compilers of some IDEs from complaining about types and
functions they cannot find, we rename all files that contain tests
back to lnd_xxx_test.go to make sure they are compiled correctly.
As a convenience method for users to look up what RPC method URIs exist
and what permissions they require, we add a new ListPermissions call
that simply returns all registered URIs (including internal and external
subservers) and their required permissions.
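For example, a gRPC client could query the new call roughly like this
(assuming the generated lnrpc types; TLS and macaroon credential setup
is elided for brevity and would be needed against a real node):

```go
package main

import (
	"context"
	"fmt"

	"github.com/lightningnetwork/lnd/lnrpc"
	"google.golang.org/grpc"
)

func main() {
	// Connection setup is simplified here; a real client must
	// supply TLS and macaroon credentials.
	conn, err := grpc.Dial("localhost:10009", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := lnrpc.NewLightningClient(conn)
	resp, err := client.ListPermissions(
		context.Background(), &lnrpc.ListPermissionsRequest{},
	)
	if err != nil {
		panic(err)
	}

	// Each registered URI maps to the macaroon permissions it
	// requires.
	for uri, perms := range resp.MethodPermissions {
		for _, p := range perms.Permissions {
			fmt.Printf("%s requires %s:%s\n",
				uri, p.Entity, p.Action)
		}
	}
}
```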
To make the permission system even more fine-grained, we want to allow
users to specify exact gRPC URIs in the macaroon permissions instead of
just broad entity/action groups.
For this we add the special entity "uri", which allows a URI-specific
permission to be defined as "uri:/lnrpc.Lightning/GetInfo" for example,
instead of the more coarse "info:read" which gives access to multiple
URIs.
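For example, a macaroon restricted to a single URI could be baked
roughly as follows, using the existing BakeMacaroon RPC (the helper
function here is hypothetical):

```go
package macaroonexample

import (
	"context"

	"github.com/lightningnetwork/lnd/lnrpc"
)

// bakeURIMacaroon bakes a macaroon that is only valid for the exact
// GetInfo URI, rather than the broader info:read entity/action pair.
// The special "uri" entity carries the full gRPC URI as its action.
func bakeURIMacaroon(ctx context.Context,
	client lnrpc.LightningClient) (string, error) {

	resp, err := client.BakeMacaroon(ctx, &lnrpc.BakeMacaroonRequest{
		Permissions: []*lnrpc.MacaroonPermission{
			{
				Entity: "uri",
				Action: "/lnrpc.Lightning/GetInfo",
			},
		},
	})
	if err != nil {
		return "", err
	}

	return resp.Macaroon, nil
}
```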
In this commit, we update the `sample-lnd.conf` example config file to
be up to date with all the new configuration parameters we've added
over the past few releases.
This commit changes the logic for garbage collecting forwarding
packages such that they are removed once when the function is first
called, and then again upon subsequent ticks. This allows us to bump
the peer timer to 1 hour to limit the number of db transactions
happening in lnd. The forwarding packages need to be removed initially
because otherwise a flappy node will never have them garbage collected.
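The pattern is roughly an initial sweep followed by a ticker loop,
sketched here with stand-in names:

```go
package main

import (
	"fmt"
	"time"
)

// gcForwardingPackages removes completed forwarding packages once
// immediately and then on every tick, so flappy nodes that never stay
// up for a full interval still get cleaned up.
func gcForwardingPackages(quit <-chan struct{},
	interval time.Duration, sweep func()) {

	// Initial sweep on start, before the first tick fires.
	sweep()

	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C:
			sweep()

		case <-quit:
			return
		}
	}
}

func main() {
	quit := make(chan struct{})
	go gcForwardingPackages(quit, time.Hour, func() {
		fmt.Println("removing completed forwarding packages")
	})

	time.Sleep(100 * time.Millisecond)
	close(quit)
}
```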
As a follow-up to #4560 we actually need to hold the reservation mutex
during the full loop where we count the pending reservations. Otherwise
the results might become inaccurate for concurrent funding flows.
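Sketched with stand-in names, the fix amounts to holding the mutex
across the whole count rather than per lookup:

```go
package main

import (
	"fmt"
	"sync"
)

type fundingManager struct {
	resMtx sync.Mutex

	// reservations tracks pending funding reservations per peer.
	reservations map[string][]struct{}
}

// numPendingReservations counts pending reservations while holding the
// mutex for the full loop, so concurrent funding flows cannot mutate
// the map mid-count and skew the result.
func (f *fundingManager) numPendingReservations() int {
	f.resMtx.Lock()
	defer f.resMtx.Unlock()

	var count int
	for _, res := range f.reservations {
		count += len(res)
	}

	return count
}

func main() {
	f := &fundingManager{reservations: map[string][]struct{}{
		"peerA": make([]struct{}, 2),
		"peerB": make([]struct{}, 1),
	}}
	fmt.Println(f.numPendingReservations()) // 3
}
```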
We change the external funding test to now test two more things: first,
that we can open multiple externally funded channels without needing to
lift the default --maxpendingchannels setting; second, that we can use
the safer pending_funding_shim_only flag of the AbandonChannel RPC to
get rid of the never-confirming external channels.
As a preparation to test accepting multiple externally funded channels
at the same time, we extract the deriveFundingShim function from the
external funding integration test.
To make sure we can use the abandonchannel RPC for getting rid of
externally funded channels whose funding transaction was never
published, we allow the RPC to be used on non-dev builds for externally
funded and pending channels only.
Externally funded channels are expected by the user and explicitly
registered through the use of a funding shim, and should therefore not
count towards the max pending channel count, which is primarily there
to mitigate DoS attacks.
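A simplified sketch of the counting rule (field and function names are
illustrative, not the exact lnd funding manager API):

```go
package main

import "fmt"

type pendingChannel struct {
	hasFundingShim bool
}

// countTowardsLimit tallies only channels without a funding shim:
// externally funded channels were explicitly registered by the user,
// so they are excluded from the DoS-motivated pending channel cap.
func countTowardsLimit(pending []pendingChannel) int {
	var count int
	for _, c := range pending {
		if c.hasFundingShim {
			continue
		}
		count++
	}

	return count
}

func main() {
	pending := []pendingChannel{
		{hasFundingShim: true},
		{hasFundingShim: false},
		{hasFundingShim: false},
	}
	fmt.Println(countTowardsLimit(pending)) // 2
}
```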