This repository contains documentation, tools and scripts for managing our self-hosted GitHub Actions runners on Hetzner.

NOTE: Though the repository is public to provide a reference for how to set up self-hosted GitHub Actions runners on Hetzner, the scripts will need tweaking to fit other organisations (e.g. hard-coded references to our "mynewsdesk" organisation).

NOTE 2: Self-hosted runners will only run on private repositories by default. See Public vs Private repositories for more information.
Prerequisites: Activate and reboot the target server into Hetzner's Rescue OS with the devops-talos-manager SSH key.
The steps documented below have been automated in the `bin/add-runner` script. You can run it like this:

```shell
GITHUB_TOKEN=<root-user-personal-access-token> IP=<server-ip> RUNNER_NAME=<runner-name> bin/add-runner
```
We use `RUNNER_NAME` in the style of `AX52-<n>`, where `AX52` is the name of the Hetzner server type and `<n>` is a number that increments for each additional server.
Once logged in to the Rescue OS we can use the `installimage` tool to install Ubuntu 24.04 on the server:

```shell
IP=<ip>
RUNNER_NAME=AX52-<n>

ssh root@$IP -i ~/.ssh/devops-talos-manager.pem "/root/.oldroot/nfs/install/installimage -a \
  -n github-actions-runner-$RUNNER_NAME \
  -r no \
  -i /root/.oldroot/nfs/install/../images/Ubuntu-2404-noble-amd64-base.tar.gz \
  -p /boot/efi:esp:256M,swap:swap:31G,/boot:ext3:1024M,/:ext4:all \
  -d nvme0n1,nvme1n1 && reboot"
```
NOTE: Because `installimage` is an alias for `/root/.oldroot/nfs/install/installimage`, we need to specify the full path to run it directly via `ssh`.
```shell
IP=<ip>

scp -i ~/.ssh/devops-talos-manager.pem bin/bootstrap root@$IP:
ssh root@$IP -i ~/.ssh/devops-talos-manager.pem "chmod +x bootstrap && time ./bootstrap && reboot"
```
The approach for bootstrapping the GitHub runner agents is based on example scripts found at: https://github.com/actions/runner/tree/8db8bbe13a0dabc165d0ff19a1ecb85a4fe86dd8/scripts
```shell
IP=<ip>
RUNNER_NAME=AX52-<n>
GITHUB_TOKEN=<personal-access-token>

scp -i ~/.ssh/devops-talos-manager.pem bin/install-runner-agent root@$IP:
ssh root@$IP -i ~/.ssh/devops-talos-manager.pem "
  chmod +x install-runner-agent &&
  GITHUB_TOKEN=$GITHUB_TOKEN RUNNER_NAME=$RUNNER_NAME ./install-runner-agent"
```
Installation notes:

- The installer will add a service with the name `actions.runner.mynewsdesk.<$RUNNER_NAME>.service`
- The service configuration is found under `/etc/systemd/system/`
- The script used for running the service is at `/home/runner/actions-runner/runsvc.sh`
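As a sketch of how the service name maps to a runner, the systemd unit name can be derived from `RUNNER_NAME` like this (the runner name `AX52-1` is just an example):

```shell
# Derive the systemd service name from the runner name (example runner AX52-1)
RUNNER_NAME=AX52-1
SERVICE="actions.runner.mynewsdesk.${RUNNER_NAME}.service"
echo "$SERVICE"
# On the runner itself you could then inspect it with e.g.:
#   systemctl status "$SERVICE"
```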
The actual GitHub runner is updated automatically. From https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners:

> Receive automatic updates for the self-hosted runner application only, though you may disable automatic updates of the runner. For more information about controlling runner software updates on self-hosted runners, see "Autoscaling with self-hosted runners." You are responsible for updating the operating system and all other software.
To update the OS and other components, run `bin/update`. You can also run `bin/execute-command "<command>"` to conveniently run any command on all GitHub runners. Inspect the scripts in the `bin/` directory for more information.
In our previous CI setup using Buildkite we pre-baked all necessary credentials into the EC2 image. This had a major benefit: a lot of integrations required zero configuration and everything "just worked" out of the box. The downside was that updating the image required quite a bit of manual work, making credential rotation more painful.

Since GitHub Actions provides support for secrets management, we decided to try leveraging this during our migration from Buildkite to GitHub Actions.
Authentication with the GitHub API and git repositories is done via personal access tokens. Reference our `GH_PERSONAL_ACCESS_TOKEN` organisation secret for complete access to all our private repositories.
When using the pre-installed `gh` CLI tool you can set the `GH_TOKEN` environment variable and it should just work.

To pull/push GitHub repositories you can use git URLs in the format `https://[email protected]/mynewsdesk/<repository>.git`.
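As a sketch, a workflow step using the `gh` CLI with the token from our organisation secret could look like this (the step name and the particular `gh` subcommand are illustrative):

```yaml
- name: List open pull requests
  env:
    GH_TOKEN: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
  run: gh pr list --state open
```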
If you want to use repository dispatch in GitHub Actions you can use something like:

```yaml
- uses: peter-evans/repository-dispatch@v2
  with:
    token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
    event-type: <event-type>
    client-payload: '{ "some": "values" }'
```
The `docker` CLI is pre-installed on the runner. You can use the docker/login-action to authenticate `docker` with a private Docker registry. To authenticate with the Google Artifact Registry we use for storing images for our Kubernetes cluster, use:

```yaml
- uses: docker/login-action@v2
  with:
    registry: ${{ vars.K_DOCKER_REGISTRY }}
    username: _json_key
    password: ${{ secrets.K_DOCKER_REGISTRY_JSON_KEY }}
```
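After logging in, a subsequent step can build and push an image to the registry. A minimal sketch using docker/build-push-action (the `<image>` name and `latest` tag are placeholders, not our actual naming scheme):

```yaml
- uses: docker/build-push-action@v4
  with:
    push: true
    tags: ${{ vars.K_DOCKER_REGISTRY }}/${{ vars.K_DOCKER_NAMESPACE }}/<image>:latest
```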
The `k` CLI is pre-installed on the runner. Configuration can be done via the official k-action action. You can use e.g. our `GITOPS_REPOSITORY_URL` and `KUBE_CONFIG` organisation secrets together with the `K_DOCKER_REGISTRY` and `K_DOCKER_NAMESPACE` organisation variables to configure `k`:

```yaml
- uses: reclaim-the-stack/k-action@master
  with:
    gitops-repository-url: ${{ secrets.GITOPS_REPOSITORY_URL }}
    kube-config: ${{ secrets.KUBE_CONFIG }}
    registry: ${{ vars.K_DOCKER_REGISTRY }}
    registry-namespace: ${{ vars.K_DOCKER_NAMESPACE }}
```
NOTE: For the docker registry integration to work you also need to use the docker/login-action action as described above.

NOTE: If you're using the k-action action, it will implicitly configure `kubectl` as well.
The `kubectl` CLI is pre-installed on the runner. Configuration and authentication are normally handled via the `~/.kube/config` configuration file. Most GitHub actions dealing with Kubernetes allow passing in a complete kubeconfig as a base64 encoded string. You can reference our `KUBE_CONFIG` organisation secret for this purpose.
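If you need `kubectl` configured directly rather than via such an action, a minimal sketch of a workflow step, assuming `KUBE_CONFIG` holds the kubeconfig as a base64 encoded string:

```yaml
- name: Configure kubectl
  run: |
    mkdir -p ~/.kube
    echo "${{ secrets.KUBE_CONFIG }}" | base64 -d > ~/.kube/config
```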
The `heroku` CLI is pre-installed on the runner. Set the `HEROKU_API_KEY` and `HEROKU_ORGANIZATION` environment variables and `heroku` should just work. `HEROKU_ORGANIZATION` should be set to `mynewsdesk` and `HEROKU_API_KEY` can be set to our `HEROKU_API_KEY` organisation secret.

If you just want to git pull/push to a Heroku git remote you can use a URL in the format `https://heroku:[email protected]/<app-name>.git`.
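As a sketch, a deploy step pushing to a Heroku git remote could look like this (the `<app-name>` placeholder and the `main` target branch are assumptions):

```yaml
- name: Deploy to Heroku
  env:
    HEROKU_API_KEY: ${{ secrets.HEROKU_API_KEY }}
    HEROKU_ORGANIZATION: mynewsdesk
  run: git push https://heroku:[email protected]/<app-name>.git HEAD:main
```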
The `aws` CLI is pre-installed on the runner. For authentication, put the contents of `AWS_CLI_CREDENTIALS` into `~/.aws/credentials`. This gives you access to `dev`, `staging` and `prod` AWS profiles with full access to their respective AWS accounts, e.g.:

```yaml
- name: Prepare AWS CLI credentials
  run: |
    mkdir -p ~/.aws
    echo "${{ secrets.AWS_CLI_CREDENTIALS }}" > ~/.aws/credentials
```
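Subsequent steps can then select one of the profiles via `--profile`. A sketch (the `s3 ls` command is just an illustration):

```yaml
- name: List S3 buckets in staging
  run: aws s3 ls --profile staging
```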
The runners are configured to only run workflows from private repositories by default to adhere to GitHub's recommended security best practices.
For public repositories (and forks of public repositories), GitHub's own runners can be used instead. An example of a forked repository using GitHub's own runners is mynewsdesk/omniauth-redirect-proxy.