In this commit, we add a simple bash script that parses the current PR
number from an environment variable in the GH actions context and uses
it to check whether the PR has been referenced in the release notes.
This isn't 100% foolproof, but it should catch most of the common
cases.
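A minimal sketch of what such a check step can look like, assuming the
release notes live under docs/release-notes/ and reference PRs by
number (both the path and the link format are assumptions here, not the
exact script):

    - name: check release notes reference this PR
      run: |
        # For pull_request events, GITHUB_REF has the form
        # refs/pull/<number>/merge, so the PR number is the third field.
        pr_number=$(echo "$GITHUB_REF" | cut -d '/' -f 3)
        if ! grep -r "pull/${pr_number}" docs/release-notes/; then
          echo "PR #${pr_number} does not appear in the release notes"
          exit 1
        fi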
The monitoring server still needs to be enabled using prometheus.enable,
so including this in the default build does not add an additional HTTP
server unless the user opts in.
The golang build cache seems to only grow over time and is now causing
disk space issues on the release builder. Since the release build has to
build for targets that aren't built during other GH actions, and our
releases are too far apart to be hitting the cache anyway, we suspect
the cache doesn't actually help that much.
Removing it might make the build take a bit longer, but at least it
won't fill up the builder's virtual disk anymore.
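If the caching happens through actions/setup-go (v4 and later cache by
default), opting the release workflow out is a one-line change; this is
only a sketch of that variant, the actual change may simply delete a
separate actions/cache step:

    - name: setup go
      uses: actions/setup-go@v4
      with:
        go-version: '1.21'   # placeholder version
        # Skip the build/module cache for release builds: releases are
        # too far apart to get cache hits, and the cache only fills up
        # the runner's disk.
        cache: false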
Due to a misunderstanding of how the gpg command line options work, we
didn't actually create detached signatures, because the --clear-sign
flag would override that. We update our verification script to download
only the detached signatures and verify them against the main manifest
file.
We also update the signing instructions.
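As an illustration of what verification now boils down to (the manifest
name below is a placeholder; the real file names and logic live in the
signing docs and the verification script):

    - name: verify release signatures
      run: |
        # Signers produce a detached, ASCII-armored signature with
        # "gpg --detach-sign --armor <manifest>". The old instructions
        # also passed --clear-sign, which replaced the detached output
        # with a clear-signed manifest. Verification then checks the
        # detached signature against the untouched manifest file:
        gpg --verify manifest-v0.0.0.txt.asc manifest-v0.0.0.txt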
Because we now build a docker image for the RPC compilation, we can save
some execution minutes by running the mobile RPC and code compilation
checks in the same step of the CI workflow.
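Roughly, the combined step looks like this (the make targets are
illustrative, not necessarily the repository's exact target names):

    # Both checks need the same RPC-compiler docker image, so running
    # them in one step means the image is only set up once per run.
    - name: run RPC and mobile RPC compilation checks
      run: |
        make rpc-check
        make mobile-rpc-check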
To avoid leaking sensitive information such as Docker Hub credentials
through compromised actions repositories, we use our own, vendored
actions for all steps that potentially touch sensitive information.
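In practice that means any step handling secrets references an action
checked into this repository instead of a third-party one; a sketch,
with the local path and secret names as placeholders:

    - name: login to Docker Hub
      # A copy of the login action vendored under .github/actions, so
      # the credentials never pass through code outside our control.
      uses: ./.github/actions/docker-login
      with:
        username: ${{ secrets.DOCKER_USERNAME }}
        password: ${{ secrets.DOCKER_API_KEY }}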
To enable building docker images for ARM64 platforms as well,
we just need to specify the desired target platforms and the Docker
Buildx service will do the job for us (provided the base images support
the given platforms, which is the case for golang).
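A sketch of the relevant build step, assuming the official Docker build
actions are used (the image tag is a placeholder):

    - name: set up QEMU
      uses: docker/setup-qemu-action@v1

    - name: set up Docker Buildx
      uses: docker/setup-buildx-action@v1

    - name: build and push multi-platform image
      uses: docker/build-push-action@v2
      with:
        push: true
        # Buildx cross-builds one image per listed platform, as long as
        # the base image (golang here) is published for each of them.
        platforms: linux/amd64,linux/arm64
        tags: exampleorg/lnd:latest   # placeholder tag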
This commit adds another GitHub workflow that is activated for each
pushed tag. The release binaries are compiled from that tag for all
supported architectures. A new release in the GitHub repository is then
drafted for the tag and the finished binary packages are uploaded to
that release.
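Reduced to its essentials, the workflow is shaped like this (the make
invocation is illustrative; the steps that draft the release and upload
the packages follow the build):

    name: release

    on:
      push:
        tags:
          - 'v*'

    jobs:
      release:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2

          - name: build release binaries for all architectures
            run: |
              # Strip the "refs/tags/" prefix to get the tag being built.
              tag=${GITHUB_REF#refs/tags/}
              make release tag="$tag"   # make target is illustrative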
We add a GitHub workflow that is triggered whenever a new version tag is
pushed. It will trigger a docker image build for that version and
automatically push it to the specified repo.
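Condensed to the essential step (the image name is a placeholder, and a
docker login step is assumed to run beforehand):

    - name: build and push image for the pushed tag
      run: |
        tag=${GITHUB_REF#refs/tags/}
        docker build -t exampleorg/lnd:"$tag" .
        docker push exampleorg/lnd:"$tag"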
Because we now have conditionally compiled code that depends on the
architecture it is built for, we want to make sure we can build all
architectures that we also release. Since GitHub builds are very fast,
we can easily do this instead of only compiling for a select few
architectures.
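One compact way to exercise this is to cross-compile, without running
anything, for each OS/arch pair we release; the list below is only an
excerpt:

    - name: build all release architectures
      run: |
        # go build cross-compiles fine, so this fails exactly when the
        # conditionally compiled code breaks for one of the pairs.
        for target in linux/amd64 linux/arm64 darwin/amd64 windows/amd64; do
          echo "building for ${target}"
          GOOS=${target%/*} GOARCH=${target#*/} go build ./...
        done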
We add a GitHub action to our workflow that makes sure all command line
flags of lnd that are available with the default build tags are
contained in the sample-lnd.conf file.
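The idea, sketched as a single step (the real check is a script in the
repository; the flag extraction shown here is illustrative only):

    - name: check sample-lnd.conf covers all default flags
      run: |
        make build
        # List every long flag the default binary exposes and make sure
        # each one is at least mentioned in the sample config file.
        for flag in $(./lnd-debug --help 2>&1 | grep -oE -- '--[a-z.]+' | sort -u); do
          if ! grep -q "${flag#--}" sample-lnd.conf; then
            echo "flag ${flag} is missing from sample-lnd.conf"
            exit 1
          fi
        done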
To free up build slots on Travis, we decided to run the non-flaky parts
of the CI pipeline in GitHub Workflows/Actions only. The integration
tests, on the other hand, are removed from GitHub because individual
actions cannot be restarted there, which forced us to restart the whole
workflow whenever a single test was flaky.
This split should give us the best of both worlds: fast runs of the
small checks, linting, and unit tests, with an easy overview of what
failed directly in the PR; more free build slots on Travis for more
advanced integration tests on other architectures and/or operating
systems; and the option to restart a single flaky integration test on
Travis.
Checkout v1 has a known flake:
https://github.com/actions/checkout/issues/23#issuecomment-572688577.
For our linter to pass, we need to check out our full history (the
default depth is 1 commit). We could set fetch-depth, but the linter's
start point commit would eventually fall outside that depth and we'd
need yet another fix. Instead, we add an extra step to our linter to
fetch the full history so that the linter's reference commit is always
found.
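Sketched as a workflow step (the fix may equally live in the lint make
target itself), it boils down to un-shallowing the checkout before the
linter runs:

    - name: fetch full history for the linter
      # The linter diffs against an older reference commit that a
      # depth-1 checkout does not contain; converting the shallow clone
      # into a full one makes sure that commit can always be found.
      run: git fetch --prune --unshallow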
The continue-on-error flag was added to make sure the log files of
failed itests would always be uploaded. But this has the side effect
of marking the whole job as successful, even if the itests themselves
failed. The failure condition on the log file steps already solves
that, so continue-on-error is not needed anymore.
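For reference, the pattern that makes continue-on-error unnecessary:
the upload steps carry their own failure condition, so they still run
when the itests fail while the job result stays accurate (artifact name
and path are placeholders):

    - name: run itests
      run: make itest

    - name: upload log files on failure
      if: failure()
      uses: actions/upload-artifact@v2
      with:
        name: itest-logs
        path: logs-itest*.zip   # placeholder path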