Every so often, I’ll have a conversation with a mentor or an experienced friend and realize something unsettling: modern software engineering isn’t designed for deep understanding.
Back in the day (pre-2010), being a software engineer meant dealing with constraints. Without infinite compute, you had to worry about memory management. Without Kubernetes, scaling and networking were deliberate, hands-on efforts. Deployment wasn’t a button click; the terminal was your lifeline. Hardware mattered.
I never worked in that environment. My entire career has been in the era of cloud abstraction, where any infrastructure challenge can be solved by throwing money at AWS. Need GPUs? There’s an API call for that. Need distributed storage? Just pick a managed service. If you have a corporate card, the cloud makes everything feel easy.
The Cloud’s Convenience Comes at a Cost
The problem? The cloud is rigid. Everything has to fit into the provider's box. AI models must run on their AI services. Data pipelines must flow through their data tools. Even containerized applications are often forced into their container orchestration (ECS on EC2 vs. Fargate, anyone?).
You don’t own your infrastructure; you’re just renting access. And at the end of the day, you’re at the mercy of trillion-dollar companies that can change pricing, deprecate services, or lock you into their ecosystem.
I don’t like this.
So, I’m building my own infrastructure. The hard way.
The Current State of My Home Lab
Right now, my setup is simple:
2 Raspberry Pis running cron jobs, Prometheus, and Postgres
A refurbished Windows tower running PedroGPT
Tailscale for remote access
It works… mostly. But Pedro has a two-minute latency at best, and anything real-time is out of the question.
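To make that concrete, the monitoring half of this setup boils down to a small Prometheus scrape config, something like the sketch below. The hostnames, ports, and exporters are illustrative assumptions, not my exact setup:

```yaml
# prometheus.yml -- illustrative scrape config for the two Pis.
# Hostnames and exporter ports are placeholders.
global:
  scrape_interval: 30s        # the Pis don't need second-level resolution

scrape_configs:
  - job_name: "node"          # host metrics via node_exporter
    static_configs:
      - targets: ["pi-1:9100", "pi-2:9100"]

  - job_name: "postgres"      # database metrics via postgres_exporter
    static_configs:
      - targets: ["pi-2:9187"]
```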
What’s Next: Kubernetes
To take things to the next level, I’m setting up a Kubernetes cluster using my Raspberry Pis, some mini PCs, and my Windows machine (running WSL2).
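Assuming a lightweight distribution like k3s (a popular choice for Raspberry Pi clusters, and an assumption here rather than a final decision), joining a worker node is mostly a one-file config. The hostname and token below are placeholders:

```yaml
# /etc/rancher/k3s/config.yaml on a worker (agent) node.
# Server hostname and token are placeholders.
server: https://pi-1:6443          # the control-plane node
token: "<join token from /var/lib/rancher/k3s/server/node-token>"
node-label:
  - "hardware=raspberry-pi"        # handy for scheduling constraints later
```

With that file in place, installing k3s in agent mode picks it up automatically and the node registers with the control plane.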
I use Kubernetes at work, so why not at home? Running my own cluster means I can experiment with real-world infrastructure problems like:
Running multiple services that interact with an LLM (via llama.cpp; see the sketch after this list)
Learning more about networking and service discovery
Understanding self-hosting at scale
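The first item is the one I'm most excited about, so here's a rough sketch of what it could look like on the cluster. This is a hypothetical manifest, not a tested setup: the image tag, model path, and hostPath volume are all assumptions.

```yaml
# Hypothetical Deployment + Service for a llama.cpp server on the cluster.
# Image, model path, and volume are assumptions; adjust for real hardware.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llama-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llama-server
  template:
    metadata:
      labels:
        app: llama-server
    spec:
      containers:
        - name: llama-server
          image: ghcr.io/ggml-org/llama.cpp:server   # upstream server image
          args: ["-m", "/models/model.gguf", "--host", "0.0.0.0", "--port", "8080"]
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: models
              mountPath: /models
      volumes:
        - name: models
          hostPath:
            path: /srv/models                        # hypothetical path on the node
---
apiVersion: v1
kind: Service
metadata:
  name: llama-server
spec:
  selector:
    app: llama-server
  ports:
    - port: 8080
      targetPort: 8080
```

The Service is the part that matters for the second bullet: every other pod in the cluster can reach the model at http://llama-server:8080 by name alone, which is exactly the kind of service discovery I want to build intuition for.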
Sure, there are free cloud tiers for this kind of experimentation, but that misses the point: I want to own the stack, not rent it.
Why Bother?
I’ve always been a huge proponent of learning by doing.
Early in my career, I forced myself to learn by attending meetups and giving talks. Now, I’m at a point where on-the-job learning has plateaued. The edge of my technical knowledge isn’t being pushed by work anymore—it’s being pushed by my own curiosity.
A friend once told me:
At work, we write CRUD apps. In our personal time, we build real software.
That hit hard. My Twitch streams aren’t about making polished content or chasing monetization—they’re about learning in real time. Right now, I’m focused on:
Understanding LLMs beyond API wrappers
Mastering self-hosting
Getting better at networking (because connecting software across multiple devices is harder than it looks)
Join the Journey
If you’re passionate about learning, experimenting, and pushing the boundaries of your technical knowledge, let’s build together!
Let’s break free from the cloud’s constraints and rediscover the fundamentals of software engineering—one home lab experiment at a time.