PedroCLI > Clawd/Molt/etc
Why I’m Building PedroCLI And Why This Became a Learning Project
I started a small side project a few months ago because I heard how cool Cursor's background jobs were, and I wanted background jobs of my own.
This wasn’t a new idea; I had wanted to build Pedro into a code reviewer and editor ever since I tried the Claude Code research preview in March 2025. I knew that I could set up tool calls and that my self-hosted AI could be an agent that edits and interacts with files through tools. This seemed like a good time to build it, so that I would never have to use a non-terminal-based code editor.
This would also force me to work with tool calls, something I had been dying to do for months, so I started building my own system.
That system became PedroCLI.
NOTE: the double credit limit on Claude Code from Christmas to New Year's was also a huge motivation, so I am using Claude Code to build a replacement.
What PedroCLI Is (At a High Level)
PedroCLI is a local-first agent system. It’s designed to help me with real work: coding, writing, planning, and long-running tasks that I have to do outside of my day job.
Today, PedroCLI includes:
A job system for background execution
Agent workflows
Tool calling with validation and observable results
Context compaction driven by memory limitations
A web UI for mobile interaction
Local model execution for my homelab
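To make the first item concrete, here is a minimal sketch of what a background job system can look like. This is my illustration, not PedroCLI's actual API: names like `JobQueue` and the status strings are assumptions.

```python
import queue
import threading
import uuid

# Hypothetical sketch of a background job system. Names and status values
# are illustrative, not PedroCLI's actual implementation.
class JobQueue:
    def __init__(self):
        self._queue = queue.Queue()
        self._status = {}   # job_id -> "queued" | "running" | "done" | "failed"
        self._results = {}
        self._lock = threading.Lock()
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, fn, *args):
        """Enqueue a callable and return a job id the caller can poll."""
        job_id = str(uuid.uuid4())
        with self._lock:
            self._status[job_id] = "queued"
        self._queue.put((job_id, fn, args))
        return job_id

    def _worker(self):
        while True:
            job_id, fn, args = self._queue.get()
            with self._lock:
                self._status[job_id] = "running"
            try:
                result = fn(*args)
                with self._lock:
                    self._results[job_id] = result
                    self._status[job_id] = "done"
            except Exception as exc:
                with self._lock:
                    self._results[job_id] = exc
                    self._status[job_id] = "failed"
            finally:
                self._queue.task_done()

    def status(self, job_id):
        with self._lock:
            return self._status.get(job_id)

    def wait(self):
        """Block until every submitted job has finished."""
        self._queue.join()
```

The point of the sketch is observability: every job has an id and a status you can inspect from elsewhere (say, a web UI on your phone), instead of a chat session you have to babysit.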
Now Pedro, my homelab AI, has grown beyond a simple chatbot into something much more powerful.
How This Turned Into a Learning Project
Originally, I thought this would be a small utility.
Instead, like all homelab projects, it turned into a forcing function for learning.
Every assumption I had about agents was challenged once I removed the safety nets:
Quantized models behave very differently from hosted ones
Self-iterating agents break down under real constraints
Context windows are not memory; they’re a budget
Tool calling is API design, not prompt design
“TASK_COMPLETE” is just a word the LLM gives me
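That last lesson deserves a sketch. If "TASK_COMPLETE" is just a word the model emits, the fix is to verify completion against observable side effects instead of trusting the claim. The function below is hypothetical, my illustration rather than PedroCLI's actual code:

```python
from pathlib import Path

# Hypothetical sketch: never trust "TASK_COMPLETE" on its own.
# Accept a completion claim only if the claimed artifacts actually exist.
def verify_completion(agent_output: str, expected_files: list[str]) -> bool:
    if "TASK_COMPLETE" not in agent_output:
        return False  # the agent never even claimed to finish
    # Check observable side effects on disk, not the model's say-so.
    return all(Path(p).exists() for p in expected_files)
```

The same idea generalizes: run the tests, lint the diff, stat the files. The model's completion token is a hint, not evidence.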
Each fix required a refactor—not just of code, but of how I thought about agentic systems.
At some point, I realized this wasn't just a tool I was building; it was a practice environment.
Why This Is Valuable for Everyone
If you work with AI systems long enough, you eventually hit a wall where prompting stops being enough. This is true for everyone from the engineer integrating AI into their systems to the research scientist trying to distill the new model.
What’s next then?
This project forces you to confront:
How agents actually fail in production-like settings
Why autonomy without guardrails is fragile
How architecture changes when hardware is the constraint
What changes when you own the entire execution stack
Where model limitations shape system design
For AI practitioners, especially those coming from software engineering, this is where intuition gets built. Not just from reading Anthropic blog posts, but from watching systems break and then fixing them.
Rather than being an example of “how to build agents correctly,” PedroCLI is a record of how my mental model of agents kept breaking and what I learned by repairing it.
What This Series Will Cover
This post is the lead-in; it sets the context for everything that follows.
The rest of the series will walk through the major lessons learned while building PedroCLI, including:
Why free-form, self-iterating agents failed
How phased workflows restored reliability
Why agent jobs claimed completion without doing any work
What context compaction actually looks like in practice
How tool calling went wrong, and how it was fixed
Why logit manipulation helped, hurt, and then helped again
Why some “correct” abstractions (like MCP) didn’t belong here
Where I draw the line on automating my life
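One topic above, context compaction, is easy to preview in miniature. Treating the context window as a budget means evicting history once it no longer fits. This is a crude sketch under my own assumptions (word count standing in for a real tokenizer, oldest-first eviction), not PedroCLI's actual strategy:

```python
# Hypothetical sketch of "context as a budget": drop the oldest messages
# until the history fits. Word count stands in for a real tokenizer here;
# a real system would count tokens with the model's own tokenizer.
def compact(messages: list[str], budget: int) -> list[str]:
    def cost(msg: str) -> int:
        return len(msg.split())

    kept = list(messages)
    while kept and sum(cost(m) for m in kept) > budget:
        kept.pop(0)  # evict the oldest message first
    return kept
```

Real compaction is messier (summarization, pinning the system prompt, keeping tool results), which is exactly what that post will dig into.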
Each post builds on the last. Some will be architectural, others will be deeply technical, and then some will just be uncomfortable realizations about how much trust we hand to systems we don’t understand.
Why I’m Sharing This
I’m not publishing this as a finished product or a best-practices guide.
I’m publishing it because learning in public is how I sharpen my thinking, and because I suspect a lot of other practitioners are running into the same wall—where AI tools are powerful, but opaque.
If you’re looking for agent hype, this series probably isn’t for you.
If you're interested in what actually happens when you treat AI like actual software (constrained, fallible, and worth understanding), then this is exactly what we'll be exploring next.
This is PedroCLI, and this is where the real learning starts.
Stay Connected
Want to stay updated on what I’m working on? Here’s where you can find me:
Twitch: https://twitch.tv/soypetetech
Discord: https://discord.gg/ExTAH54KCE
GitHub: https://github.com/soypete
Linktree: https://linktr.ee/soypetetech