

Kubernetes. For a homelab, the stripped-down k3s is fantastic and surprisingly easy to get going.
Once you’ve got Kubernetes set up, you can lean on the many tools already out there for things like deploying complex projects (Helm) and monitoring (Prometheus/Grafana). OpenLens is a nice piece of software for inspecting and controlling your cluster too, as is k9s.
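To make the Helm part concrete: a chart is basically a bundle of templated manifests plus a `values.yaml` of settings you can override per install. A minimal, entirely hypothetical values file (the names here are illustrative, not from any real chart) might look like:

```yaml
# values.yaml (hypothetical): settings the chart's templates reference
image:
  repository: registry.example.com/my-app
  tag: "1.2.2"
replicaCount: 2
postgres:
  enabled: true
```

You would then deploy with something like `helm install my-app ./chart -f values.yaml`, supplying a different values file per environment to override individual settings.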


I’ve used FluxCD in the past and have looked into ArgoCD, but honestly, I’ve not seen any big benefit from either. I use k8s both at home and at work, and in both cases we do “imperative” deploys: you run
`helm install ...` either directly or via the CI, and stuff is deployed. So for example, at my last job our GitLab CI had a section triggered exclusively for merges into `master` that ran `helm install ...` for all three environments. We had three `values.yaml` files, one for each environment, and when we wanted to deploy a new version, the process was:

1. Tag a new version (say `1.2.3`) and push it to the repo. This would trigger a build and push the resulting image into the container registry.
2. Update the `tag:` value in the relevant environment file(s). So if we wanted to deploy `1.2.3` to `development` but not yet to `staging` or `production`, the `tag:` value in each of the environment files would look like this:
   - `k8s/chart/environments/development.yaml`: `tag: 1.2.3`
   - `k8s/chart/environments/staging.yaml`: `tag: 1.2.2`
   - `k8s/chart/environments/production.yaml`: `tag: 1.2.2`
3. Once that change is pushed, the CI automatically applies it with `helm install ...` and makes sure that all three environments are what they’re supposed to be.

As for dependent services, those should all be in your Helm chart so they’re stood up and torn down together. The specific case you mention, where “Service A” depends on “Service B” but is stood up before “Service B” is ready, is a classic problem, but easily solved:
The dependent service (“A” in this case) should have an entrypoint that checks for everything else before starting. Here’s what I’m using right now in a project:
```sh
#!/bin/sh
# Block until PostgreSQL accepts TCP connections, then hand off to the
# container's real command.
while ! nc -z postgres 5432; do
  echo "Waiting for postgres..."
  sleep 0.1
done
echo "PostgreSQL started"
touch /tmp/ready
exec "$@"
```

I’ve even got some code that checks that all the Django migrations have run first, for the same situation. The Kubernetes philosophy is that any container should be able to die at any time and eventually be brought back up, and every container needs to be prepared for this. Typically this means that your containers should operate on the basis of “if I can’t work, die, and hope the problem is solved by the time Kubernetes redeploys me”.
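If you’d rather not bake the wait loop into the image itself, the same gating can be done with a Kubernetes initContainer. Here’s a sketch of a Deployment excerpt; the names (`service-a`, a `postgres` Service on port 5432) are assumptions for illustration:

```yaml
# Hypothetical Deployment pod spec: the initContainer must exit
# successfully before the main container is started.
spec:
  initContainers:
    - name: wait-for-postgres
      image: busybox:1.36
      command:
        - sh
        - -c
        - "until nc -z postgres 5432; do echo 'Waiting for postgres...'; sleep 2; done"
  containers:
    - name: service-a
      image: registry.example.com/service-a:1.2.3
```

The trade-off is that the entrypoint approach works anywhere the image runs (docker compose, plain Docker), while the initContainer keeps the image itself free of orchestration concerns.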