Using CI/CD in Home Infrastructure
As I’ve written about previously, I’ve always had a pretty large home lab setup. Unfortunately, it has led to a huge electric bill that I can’t justify for the few services that are actually running. I had plans of grandeur to support a bunch of other things from home, including a huge storage footprint for Live Music, but it just was not cost effective.
Instead, I’ve been building out all new home infrastructure, almost 100% based on the new Raspberry Pi 4 w/ 4GB memory. I’ve found that the Pi 4, coupled with the PoE HAT, makes a tremendously powerful ally in building a solid home infrastructure.
Along the way, I realized that I was very tired of managing a ton of config files and management systems, when I would much rather just modify some files on my laptop and push them to a git repo. So, my first step was to deploy a Pi with Gitlab on it, along with an older Pi 3B+ as a Gitlab Runner. For each new service I deployed (each on its own Pi), I made sure to build out a Gitlab CI/CD file. I then allow the runner to ssh into the respective service host, update the config files, and reload the service.
Why not config management? Because, I wanted to understand CI/CD a bit better. Also, my experience with config management has been mostly bleh.
Let’s look through the anatomy of my Pi-Hole setup. Below you’ll see my
.gitlab-ci.yml file. First an overview of the whole thing, then I’ll break it down into parts.
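Here is a representative sketch of the whole file (the IP addresses, usernames, and paths are illustrative stand-ins, not my actual values):

```yaml
image: whiskykilo/debian-ssh:latest

stages:
  - backup
  - deploy

before_script:
  - eval $(ssh-agent -s)
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - cp ./cicd/id_rsa ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - ssh-keyscan 192.168.1.10 192.168.1.11 >> ~/.ssh/known_hosts

backup:
  stage: backup
  script:
    - mkdir -p backups/pihole1 backups/pihole2
    - scp pi@192.168.1.10:/etc/pihole/adlists.list backups/pihole1/
    - scp pi@192.168.1.11:/etc/pihole/adlists.list backups/pihole2/
  artifacts:
    paths:
      - backups/
    expire_in: 30 days

deploy:
  stage: deploy
  script:
    - scp -r configs pi@192.168.1.10:/home/pi/
    - ssh pi@192.168.1.10 'bash /home/pi/configs/update_configs.sh'
    - sleep 10
    - scp -r configs pi@192.168.1.11:/home/pi/
    - ssh pi@192.168.1.11 'bash /home/pi/configs/update_configs.sh'
```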
There’s a lot there, so let’s break it down.
First I declare the docker image I want to use with my build script, in this case it’s a slightly modified stock Debian image. The only thing I did was add an SSH client. This simplified the
before_script section and reduced the deployment time because I no longer needed to install the SSH client as part of the build process. You can visit the repository for whiskykilo/debian-ssh to see how simple this container is (yes, this Gitlab instance is running on one of my Pis at home).
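The image can be as small as a Dockerfile like this (a sketch of what the repository likely contains; the exact base tag is an assumption):

```dockerfile
# Stock Debian plus an SSH client, nothing else
FROM debian:buster-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends openssh-client \
 && rm -rf /var/lib/apt/lists/*
```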
The before_script section runs a bunch of commands in the build container to ensure that we’re able to connect to the remote service host over ssh. The 4th command,
cp ./cicd/id_rsa ~/.ssh/id_rsa copies the private key for my Gitlab server to the
.ssh folder in the container. Now that I’m typing this up, I’m seeing that this step could be simplified but I’ll leave it for now. There is a way to use Build Variables, but I was not able to get it to work properly. It’s on my to-do list for later.
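The section looks something like this (the host IPs here are placeholders):

```yaml
before_script:
  - eval $(ssh-agent -s)
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - cp ./cicd/id_rsa ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - ssh-keyscan 192.168.1.10 192.168.1.11 >> ~/.ssh/known_hosts
```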
Next up, I back everything up!
In this stage, we create some directories to store backups from my Pi-Holes and then
scp the files from the Pi-Holes to the build container. Then I declare a few build artifacts, in this case the backed-up files. These are stored on the Gitlab server for 30 days, in case I need to revert. I will be moving a lot of this to variables in the future (IP addresses, usernames, etc.).
Right now, the artifacts are being stored on the local machine, but I will be offloading them to a MinIO-based object store in the near future.
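The backup stage, sketched with placeholder IPs and paths:

```yaml
backup:
  stage: backup
  script:
    - mkdir -p backups/pihole1 backups/pihole2
    - scp pi@192.168.1.10:/etc/pihole/adlists.list backups/pihole1/
    - scp pi@192.168.1.11:/etc/pihole/adlists.list backups/pihole2/
  artifacts:
    paths:
      - backups/
    expire_in: 30 days
```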
Now we dump out the new config files and run a script to shut the service off, copy the new configs, and then restart the service.
Once again, I’m copying the folder containing my config files, in the case of Pi-Hole it’s mostly blocklists for now. Then I run
update_configs.sh on each host, with a 10 second sleep between them to ensure DNS resolution remains up at home. I need to introduce some error handling in the sleep phase to ensure I don’t screw up both Pi-Holes and end up without DNS.
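The deploy stage, again with stand-in addresses, copies the configs folder to each host, runs the update script, and sleeps between hosts so only one Pi-Hole is ever down at a time:

```yaml
deploy:
  stage: deploy
  script:
    - scp -r configs pi@192.168.1.10:/home/pi/
    - ssh pi@192.168.1.10 'bash /home/pi/configs/update_configs.sh'
    - sleep 10
    - scp -r configs pi@192.168.1.11:/home/pi/
    - ssh pi@192.168.1.11 'bash /home/pi/configs/update_configs.sh'
```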
Below, you’ll see my
update_configs.sh file, which is pretty straightforward:
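A sketch of the script (the paths are assumptions based on Pi-Hole’s defaults at the time):

```bash
#!/bin/bash
set -euo pipefail

# Turn Pi-Hole off while we swap the blocklist
pihole disable

# Remove the current ad list and replace it with the freshly pushed one
sudo rm -f /etc/pihole/adlists.list
sudo cp /home/pi/configs/adlists.list /etc/pihole/adlists.list

# Rebuild gravity so the new list takes effect, then turn Pi-Hole back on
pihole -g
pihole enable
```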
I disable Pi-Hole, remove the current ad list, replace it with the new one, then re-enable Pi-Hole.
I’ve been able to re-use most of this CI file for the other services running at my house, including Wireguard, Caddy (reverse proxy), and Home Assistant. Plex will be added shortly, as well as Sonarr, Radarr, Jackett, etc.