It is not always all about serverless.
You know how it is. You spend years in a serverless world, enjoying every day. Then a new project is assigned to you at work: microservices on Kubernetes + ... Then another client with similar requirements, deployed on bare metal, on EKS (AWS), on Azure, etc. My professional work is now closer to the Java/microservices/K8s world than to the TypeScript/React/AWS/serverless one. But this is good. So I've decided to describe a fun project I did in my home lab. Why not? It is all exciting tech.
Ok, so what is this post about?
I built a single-node Kubernetes cluster on my home PC: an Intel NUC with an 11th-gen Intel CPU and 32 GB of RAM. It runs K3s, a lightweight Kubernetes distribution. On top of that sits ArgoCD, a GitOps continuous-delivery tool that drives all applications deployed on the cluster. Those applications are a Postgres database deployed with the Zalando operator and a REST application implemented in Quarkus, a Kubernetes-native Java framework. All of this is connected to the world via a public domain and a Cloudflare Tunnel, so no public IP is required to host your app from a PC sitting at your desk. The Quarkus project and the ArgoCD configuration are stored on GitHub, and GitHub Actions is configured to build a container image and push it to an AWS ECR repo. Another Quarkus application is configured to sync AWS credentials (ECR tokens and Secrets Manager entries) into the cluster. Why not use the power of AWS and delegate the heavy tasks to them? S3 for Postgres backups, ECR as a private container image repository, and AWS Secrets Manager to manage secrets.
Ok, a few links to the projects mentioned:
https://k3s.io
https://argo-cd.readthedocs.io/en/stable/
https://www.cloudflare.com/products/tunnel/
https://kubernetes.io
https://quarkus.io
https://github.com/zalando/postgres-operator

I often experiment with many different dev stacks, and I like to deploy them to the public internet; it motivates me to finish them. For serverless/Lambda stacks it is easy to start fresh and maintain multiple projects at the same time, also from a cost perspective. There are, however, many use cases where you need RDS or Redis, and therefore a VPC and VPN. That gets complex and expensive very quickly, and also hard to maintain and monitor. Of course, I'm talking about hobby projects developed by a single person after work. I can't spend too many hours on such configuration, because those projects would never be finished, and sometimes never even work.
A K8s setup is a nice alternative to serverless applications. It is cheaper and easier to maintain, you have many options to monitor your app and manage the logs, and you are not limited to what the cloud provider offers: you can install any stack you want. Of course, not at cloud scale, so consider this a playground only. Adding GitHub Actions to the mix speeds things up considerably. Start a new project, add the actions, update your ArgoCD deployment file, and that is all that is needed. After you push a change, the project is built, pushed to AWS ECR, fetched by ArgoCD, and deployed on your cluster.
So that is all: a different stack with different capabilities, cheaper than AWS, and running on my existing idle PC.
I will describe it from a very high level. All the provided links point to documentation; please check the QuickStart pages to understand how these frameworks, libraries, and operators work.
Ok, I started with an Intel NUC with Pop!_OS installed. I'm also running a Plex server and a desktop environment on this machine. In the first step, I installed the K3s distribution. I'm not a DevOps expert, and this is a home lab: installing K3s consists of running a shell script and copying the kubeconfig into your user directory. Once you have it configured and can control it from another machine, install ArgoCD. I have a GitHub project that stores the configuration for all applications I want deployed through ArgoCD. These configs are Helm charts, plain Kubernetes manifests, etc. The idea is that all of them live in the Git repo and you decide which to pick up and install on your cluster. Once they are installed, ArgoCD synchronizes the repository state with Kubernetes and applies any required updates. The picture at the top shows all the described apps in my default cluster.
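For reference, that bootstrap boils down to a handful of commands. This is a minimal sketch based on the K3s and ArgoCD quick-start guides; the hostname and paths are assumptions for illustration, so adapt them to your setup:

```shell
# Install K3s on the NUC (single-node server); the script sets up a systemd service
curl -sfL https://get.k3s.io | sh -

# Copy the kubeconfig so kubectl on another machine can talk to the cluster
# (nuc.local is a placeholder; also edit the server address inside the file,
# since it points at 127.0.0.1 by default)
scp root@nuc.local:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Install ArgoCD into its own namespace, straight from the upstream manifests
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```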
From now on, whenever I want to add an application to the cluster I perform a few steps:
1. Ensure I know how to install the app. It can be a Helm chart or a container image in an ECR repository; I need to know the location and the install type.
2. Add a new folder with the app config to the ArgoCD repo.
3. Execute the ArgoCD command to deploy it to my cluster.
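The folder from step 2 typically holds an ArgoCD Application manifest pointing at the chart or plain manifests. A hedged sketch of such a manifest; the repo URL, path, and names here are made up for illustration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-rest-api              # hypothetical app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/argocd-config  # hypothetical config repo
    targetRevision: HEAD
    path: apps/my-rest-api       # the folder added in step 2
  destination:
    server: https://kubernetes.default.svc
    namespace: my-rest-api
  syncPolicy:
    automated:                   # let ArgoCD sync changes from Git automatically
      prune: true
      selfHeal: true
```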
When the application is one I develop myself, I can configure GitHub Actions to automate CI/CD. Every commit results in a new image being built and pushed into my private AWS ECR repo. ArgoCD can then sync and deploy my application after the new image is published. You can easily separate dev and prod environments and decide what is promoted to go live.
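Such a workflow can be sketched roughly like this. The region, repository name, and branch are placeholders, and the action versions may differ from what I actually run:

```yaml
# .github/workflows/build.yml -- build the image and push it to ECR on every commit
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1                # placeholder region
      - id: ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push
        run: |
          IMAGE=${{ steps.ecr.outputs.registry }}/my-rest-api:${{ github.sha }}
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
```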
The application stack does not matter; in the end it is a container image, whether Java, Python, or JS. In this stack, I deployed a relational database. Postgres operators are very popular and mature; I decided to use the Zalando operator. Its documentation was not easy for me to follow, but it all makes sense once you understand the backup/restore concept they decided to implement.
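With the operator installed, a database is declared as a custom resource and the operator creates the pods, volumes, and users for you. A minimal sketch in the shape of the operator's example manifests; the team, user, and database names are made up:

```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster   # Zalando convention: <teamId>-<name>
  namespace: default
spec:
  teamId: "acid"
  numberOfInstances: 1         # one replica is enough for a home lab
  volume:
    size: 5Gi
  users:
    app_user:                  # hypothetical application user
      - superuser
      - createdb
  databases:
    app_db: app_user           # database name -> owner
  postgresql:
    version: "15"
```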
I'm using Quarkus and deployed a native image to the cluster. Native images are smaller and start faster than regular Java ones, and for a single home-lab machine it makes sense to use the native build they offer. It is a built-in feature of Quarkus, so why not use it? I have one additional Quarkus application that I use for cluster management: it synchronizes AWS ECR tokens and AWS Secrets Manager entries into K8s secrets. Vault is overkill to install on a single machine, and I can use the nice AWS UI to manage my secrets.
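The idea behind that sync app can be reproduced by hand with the AWS CLI; my Quarkus tool just automates the equivalent of commands like these (account ID, region, and secret names are placeholders). ECR tokens expire after 12 hours, which is why the refresh has to be automated:

```shell
# Fetch a short-lived ECR token and store it as an image-pull secret
TOKEN=$(aws ecr get-login-password --region eu-west-1)
kubectl create secret docker-registry ecr-pull-secret \
  --docker-server=123456789012.dkr.ecr.eu-west-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$TOKEN"

# Copy a Secrets Manager entry into a plain K8s secret
kubectl create secret generic app-config \
  --from-literal=db-password="$(aws secretsmanager get-secret-value \
      --secret-id my-app/db-password --query SecretString --output text)"
```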
The final step is to expose applications to the public internet. I'm already using a few Cloudflare services, and they offer the Tunnel product. This was a perfect option for me, because in my country a public IP costs $300 per month (or more). I registered my domain with Cloudflare. Tunnel setup is very easy: there is a command-line tool, and Cloudflare provides a monitoring UI for running tunnels. For example, my REST API is exposed via a tunnel and consumed by an SPA hosted on Cloudflare Workers. This, plus the extra DDoS protection from Cloudflare, is perfect for my home-lab hosting needs.
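The command-line flow with cloudflared looks roughly like this; the tunnel name, hostname, and local port are placeholders, so check Cloudflare's docs for the current procedure:

```shell
# Authenticate cloudflared against your Cloudflare account
cloudflared tunnel login

# Create a named tunnel and point a DNS record of your domain at it
cloudflared tunnel create homelab
cloudflared tunnel route dns homelab api.example.com

# Run the tunnel, forwarding the public hostname to the local service port
cloudflared tunnel run --url http://localhost:8080 homelab
```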
That is all :) Kubernetes is fun; try it and find a use case for it.