Using CI/CD in Home Infrastructure

December 16, 2019

As I’ve written about previously, I’ve always had a pretty large home lab setup. Unfortunately, it has led to a huge electric bill that I can’t justify for the few services that are actually running. I had grand plans to support a bunch of other things from home, including a huge storage footprint for Live Music, but it just was not cost effective.

Instead, I’ve been building out all new home infrastructure, almost 100% based on the new Raspberry Pi 4 with 4GB of memory. I’ve found that the Pi 4, coupled with the PoE HAT, makes for a tremendously powerful ally in building solid home infrastructure.

Along the way, I realized that I was very tired of managing a ton of config files and management systems, when I would much rather just modify some files on my laptop and push them into a git repo. So, my first step was to deploy a Pi with Gitlab on it, along with an older Pi 3B+ as a Gitlab Runner. For each new service I deployed (each on its own Pi), I built out a Gitlab CI/CD file. The runner then SSHes into the respective service host, updates the config files, and reloads the service.
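
Each service gets its own small repo. For the Pi-Hole example below, the repo layout looks roughly like this:

pihole/
├── .gitlab-ci.yml          # the pipeline described below
├── configs/
│   └── adlists.list        # config files pushed out to the hosts
└── cicd/
    ├── id_rsa              # the key the runner uses to reach the hosts
    └── update_configs.sh   # script run on each host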

Why not config management? Because I wanted to understand CI/CD a bit better. Also, my experience with config management has been mostly bleh.

Example: Pi-Hole

Let’s walk through the anatomy of my Pi-Hole setup. Below you’ll see my .gitlab-ci.yml file. First, an overview of the whole thing; then I’ll break it down into parts.

image: whiskykilo/debian-ssh

before_script:
  - ssh-keyscan 10.45.2.8 >> ~/.ssh/known_hosts
  - ssh-keyscan 10.45.2.9 >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - cp ./cicd/id_rsa ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa

backup_running_config:
  stage: .pre
  script:
    - mkdir backup
    - mkdir backup/pihole1
    - mkdir backup/pihole2
    - scp -r -i ~/.ssh/id_rsa root@10.45.2.8:/etc/pihole/* backup/pihole1/
    - scp -r -i ~/.ssh/id_rsa root@10.45.2.9:/etc/pihole/* backup/pihole2/
  artifacts:
    name: "pihole-$CI_COMMIT_TAG"
    paths:
      - backup
    expire_in: 30 days

replace_running_config:
  stage: deploy
  script:
    - scp -r -i ~/.ssh/id_rsa ./configs root@10.45.2.8:/tmp/
    - scp -r -i ~/.ssh/id_rsa ./configs root@10.45.2.9:/tmp/
    - scp -i ~/.ssh/id_rsa ./cicd/update_configs.sh root@10.45.2.8:/tmp/update_configs.sh
    - scp -i ~/.ssh/id_rsa ./cicd/update_configs.sh root@10.45.2.9:/tmp/update_configs.sh
    - ssh -i ~/.ssh/id_rsa root@10.45.2.8 'bash /tmp/update_configs.sh'
    - sleep 10
    - ssh -i ~/.ssh/id_rsa root@10.45.2.9 'bash /tmp/update_configs.sh'

There’s a lot there, so let’s break it down.

image: whiskykilo/debian-ssh

before_script:
  - ssh-keyscan 10.45.2.8 >> ~/.ssh/known_hosts
  - ssh-keyscan 10.45.2.9 >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - cp ./cicd/id_rsa ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa

First, I declare the Docker image I want to use for my build, in this case a slightly modified stock Debian image. The only thing I did was add an SSH client, which simplified the before_script section and reduced deployment time because I no longer need to install the SSH client as part of every build. You can visit the repository for whiskykilo/debian-ssh to see how simple this container is (yes, this Gitlab instance is running on one of my Pis at home).
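
I won’t reproduce that repo here, but the image amounts to something along these lines (a sketch, not the exact Dockerfile):

FROM debian:buster-slim

# Add an SSH client (which also provides scp) so the runner can reach the service hosts
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssh-client \
    && rm -rf /var/lib/apt/lists/* \
    && mkdir -p /root/.ssh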

The before_script section runs a bunch of commands in the build container to make sure we’re able to connect to the remote service hosts over ssh. The fourth command, cp ./cicd/id_rsa ~/.ssh/id_rsa, copies the private key the runner uses to reach those hosts into the .ssh folder in the container. Now that I’m typing this up, I’m seeing that this step could be simplified, but I’ll leave it for now. There is a way to use CI/CD Variables instead of keeping the key in the repo, but I was not able to get it to work properly. It’s on my to-do list for later.
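
For reference, the variable-based approach would look something like this. This is a sketch, assuming a CI/CD variable named SSH_PRIVATE_KEY (a name I made up) defined in the project settings and holding the key contents:

before_script:
  - mkdir -p ~/.ssh
  # Write the key from the CI/CD variable instead of shipping it in the repo
  - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - ssh-keyscan 10.45.2.8 10.45.2.9 >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts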

Next up, I back everything up!

backup_running_config:
  stage: .pre
  script:
    - mkdir backup
    - mkdir backup/pihole1
    - mkdir backup/pihole2
    - scp -r -i ~/.ssh/id_rsa root@10.45.2.8:/etc/pihole/* backup/pihole1/
    - scp -r -i ~/.ssh/id_rsa root@10.45.2.9:/etc/pihole/* backup/pihole2/
  artifacts:
    name: "pihole-$CI_COMMIT_TAG"
    paths:
      - backup
    expire_in: 30 days

In this stage, we create some directories to store backups from my Pi-Holes and then scp the files from the Pi-Holes into the build container. Then I declare that I want to create a few build artifacts; in this case, the artifacts are the backed-up files. These are stored on the Gitlab server for 30 days, in case I need to revert. I will be moving a lot of the hard-coded values (IP addresses, usernames, etc.) to variables in the future, as sketched below.
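
A rough sketch of what that might look like, with placeholder names (PIHOLE1, PIHOLE2, DEPLOY_USER) that could also live in the project’s CI/CD settings instead of the file:

variables:
  PIHOLE1: "10.45.2.8"
  PIHOLE2: "10.45.2.9"
  DEPLOY_USER: "root"

backup_running_config:
  stage: .pre
  script:
    - mkdir -p backup/pihole1 backup/pihole2
    - scp -r -i ~/.ssh/id_rsa $DEPLOY_USER@$PIHOLE1:/etc/pihole/* backup/pihole1/
    - scp -r -i ~/.ssh/id_rsa $DEPLOY_USER@$PIHOLE2:/etc/pihole/* backup/pihole2/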

Right now, the artifacts are being stored on the Gitlab server’s local disk, but I will be offloading them onto a Minio-based object store in the near future.

Now we dump out the new config files and run a script to shut the service off, copy the new configs, and then restart the service.

replace_running_config:
  stage: deploy
  script:
    - scp -r -i ~/.ssh/id_rsa ./configs root@10.45.2.8:/tmp/
    - scp -r -i ~/.ssh/id_rsa ./configs root@10.45.2.9:/tmp/
    - scp -i ~/.ssh/id_rsa ./cicd/update_configs.sh root@10.45.2.8:/tmp/update_configs.sh
    - scp -i ~/.ssh/id_rsa ./cicd/update_configs.sh root@10.45.2.9:/tmp/update_configs.sh
    - ssh -i ~/.ssh/id_rsa root@10.45.2.8 'bash /tmp/update_configs.sh'
    - sleep 10
    - ssh -i ~/.ssh/id_rsa root@10.45.2.9 'bash /tmp/update_configs.sh'

Once again, I’m copying the folder containing my config files, which for Pi-Hole is mostly blocklists for now. Then I run update_configs.sh on each host, with a 10 second sleep between them to keep DNS resolution up at home. I still need to introduce some error handling around that sleep, as sketched below, so a bad deploy can’t take out both Pi-Holes and leave me without DNS.
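
One option is to check that the first Pi-Hole is actually answering queries before touching the second one. A rough sketch of the tail end of the deploy job, assuming dig (from dnsutils) is available in the build image and using a placeholder domain:

    - ssh -i ~/.ssh/id_rsa root@10.45.2.8 'bash /tmp/update_configs.sh'
    - sleep 10
    # Fail the job here if the first Pi-Hole isn't resolving, leaving the second untouched
    - dig @10.45.2.8 +short example.com | grep -q . || exit 1
    - ssh -i ~/.ssh/id_rsa root@10.45.2.9 'bash /tmp/update_configs.sh'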

Below, you’ll see my update_configs.sh file, which is pretty straightforward:

#!/bin/bash

pihole disable
rm -rf /etc/pihole/adlists.list
cp /tmp/configs/adlists.list /etc/pihole/adlists.list
pihole enable

I disable Pi-Hole, remove the current ad list, replace it with the new one, then re-enable Pi-Hole.

I’ve mostly been able to re-use this CI file for the other services running at my house, including Wireguard, Caddy (reverse proxy), Home Assistant, and others. Plex will be added shortly, as well as Sonarr, Radarr, Jackett, etc.