Mar 2026 • Build Log #003
My current OpenClaw setup
What I am actually running right now, why it is structured this way, and how I use it day to day.
My OpenClaw setup is still pretty small, but it is finally starting to feel like an actual system instead of a toy. The point is not to build an "AI operating system" for the sake of sounding futuristic. The point is much simpler: keep an assistant close to my real workflow, make it useful enough that I actually use it, and let the setup compound over time.
The purpose
I want OpenClaw to be a low-friction execution layer around my work, not a chatbot novelty. That means a few things:
- It should be easy to reach from the place I already use all the time.
- It should remember useful context through files, not fake memory.
- It should help with writing, planning, infra work, and repeatable technical tasks.
- It should stay under my control when real commands or changes are involved.
So the current setup is optimized for practicality: one personal agent, direct chat access, local files as memory, and enough tooling to do real work without turning the whole thing into a science project.
What is in the setup right now
- One main personal agent.
- Telegram as the primary front door.
- A local-mode OpenClaw gateway bound to loopback.
- A workspace at ~/.openclaw/workspace that acts as persistent context.
- OpenAI Codex as the default model.
- The coding tool profile enabled, plus web search through Brave.
- Workspace-local skills for repeatable tasks.
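Since the gateway is supposed to be loopback-only, I like being able to verify that claim rather than trust it. Here is a minimal sketch of that check; the port number (8787) is an assumption, so substitute whatever your gateway config actually uses:

```shell
# Sanity check: is the gateway listening only on loopback?
# PORT is an assumption; replace it with your configured gateway port.
PORT=8787
listener="$(ss -ltn 2>/dev/null | grep ":$PORT " || true)"
case "$listener" in
  ""|*"127.0.0.1:$PORT"*|*"[::1]:$PORT"*)
    msg="ok: gateway is loopback-only (or not running)" ;;
  *)
    msg="WARNING: gateway appears exposed: $listener" ;;
esac
echo "$msg"
```

A wildcard bind would show up as 0.0.0.0:8787 or [::]:8787 in the `ss` output and trip the warning branch.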
How the setup is structured
The biggest conceptual split in OpenClaw is between config/state and workspace. That matters a lot because it affects how I think about memory, backups, and behavior.
The workspace is the agent's home. That is where the operating instructions, persona, memory, tool notes, heartbeat checklist, and local skills live. In other words: the workspace is where the useful continuity lives.
The global OpenClaw directory holds the machine-level stuff: config, credentials, sessions, and system state. I do not treat that like normal working memory. It is runtime plumbing, not the assistant's brain.
What the workspace gives me
The workspace approach is one of the best parts of OpenClaw because it forces something honest: if I want continuity, I need to write it down.
- AGENTS.md defines behavior and workflow rules.
- SOUL.md defines tone, boundaries, and style.
- USER.md captures who I am and how I want help.
- TOOLS.md stores operational notes about the machine and infra.
- memory/YYYY-MM-DD.md stores daily logs.
- MEMORY.md stores distilled long-term memory.
- skills/ holds workspace-specific skills for recurring workflows.
This is a much better model than pretending the assistant magically remembers everything. It does not. Files persist. Chat context is temporary. That constraint is good because it pushes the setup toward explicitness.
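Because it is all plain files, the whole layout can be scaffolded (or re-checked on a new machine) with a few lines of shell. Nothing here is OpenClaw-specific; the paths just mirror the list above, and the OPENCLAW_WS override is only there so the sketch can be tested outside $HOME:

```shell
# Scaffold the workspace layout described above: plain files and folders.
WS="${OPENCLAW_WS:-$HOME/.openclaw/workspace}"
mkdir -p "$WS/memory" "$WS/skills"
for f in AGENTS.md SOUL.md USER.md TOOLS.md MEMORY.md; do
  [ -f "$WS/$f" ] || touch "$WS/$f"   # never clobber existing notes
done
touch "$WS/memory/$(date +%F).md"     # today's daily log, e.g. memory/2026-03-14.md
```

Idempotent on purpose: running it again on an existing workspace changes nothing.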
Why Telegram is the front door
Telegram is the easiest way for me to keep the assistant close to daily life. If something is too annoying to access, I will stop using it. A command palette on a server is powerful; a chat I already open constantly is more usable.
Right now that means I can talk to the agent from Telegram for planning, notes, summaries, setup work, and lightweight execution. It keeps the barrier low. That matters more than theoretical elegance.
Execution model
My setup is intentionally not "fully autonomous." The assistant can inspect, read, organize, write non-destructive files, and do a lot of useful work. But for risky or meaningful system changes, I want explicit control.
That ends up being a good middle ground. I do not need to micromanage every small action, but I also do not want an agent freelancing on infrastructure. The result is closer to a competent operator sitting next to me than some fantasy of unsupervised autonomy.
Skills are where it starts getting interesting
OpenClaw supports AgentSkills-compatible skill folders, and that is where the setup starts becoming more than a generic assistant. Skills let me encode repeatable workflows in a way that is explicit, local, and reusable.
I am using workspace-local skills for tasks that I know will recur, especially deployment and operational flows. The nice part is that I do not need to remember exact commands every time. I can just ask naturally for the task, and the assistant has a focused procedure to follow.
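To make that concrete, here is a sketch of what one of these workspace-local skills looks like on disk: a folder under skills/ containing a SKILL.md with a short frontmatter header and the procedure. The "deploy-site" name and its steps are illustrative, not my actual skill:

```shell
# Hypothetical workspace-local skill in the AgentSkills-style layout:
# a folder with a SKILL.md describing one repeatable procedure.
WS="${OPENCLAW_WS:-$HOME/.openclaw/workspace}"
mkdir -p "$WS/skills/deploy-site"
cat > "$WS/skills/deploy-site/SKILL.md" <<'EOF'
---
name: deploy-site
description: Build the blog, push it to the VPS, verify, and log the deploy.
---
1. Run the static site build locally.
2. rsync the output directory to the server.
3. Curl the live URL and confirm a 200 response.
4. Append a one-line deploy note to today's memory/ log.
EOF
```

The payoff is exactly what the paragraph above describes: I ask for "deploy the site" in chat, and the agent follows a written procedure instead of improvising commands.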
That is the direction I care about most: not adding novelty, but steadily turning repeated work into cleaner reusable patterns.
Current workflow
- Use Telegram as the main interaction layer.
- Let the agent read the workspace at session start and recover context from files.
- Use it for writing, planning, research, infra tasks, and repetitive technical operations.
- Persist important changes into memory files and operational docs instead of trusting chat history.
- Use skills for recurring deploy/update workflows.
- Spawn background subagents for heavier work so the main conversation stays responsive, and have them report back with the result.
- Keep heartbeats small, mainly for periodic memory maintenance and light background review.
What this setup is not
It is not a giant multi-agent setup yet. It is not a universal automation platform. And it is not trying to replace clear thinking. Right now it is one practical personal agent with enough memory, structure, and tooling to become more useful every week.
That is enough.
Where I think it goes next
The next meaningful step is not random complexity. It is tighter separation of roles and cleaner operating loops. More specifically:
- More reusable skills for deploys, debugging, content updates, and maintenance.
- Better periodic memory review so useful context gets promoted instead of rotting in daily notes.
- Possibly more isolated agents later when the workflows actually justify them.
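The memory-review step is the one that already has an obvious mechanical core. A sketch of the "promote instead of rot" pass: surface daily logs old enough that anything still worth keeping should be distilled into MEMORY.md. The 14-day window is an arbitrary assumption, and the paths follow my workspace layout:

```shell
# List daily logs older than 14 days as candidates for promotion into
# MEMORY.md. Window and paths are assumptions, not an OpenClaw feature.
WS="${OPENCLAW_WS:-$HOME/.openclaw/workspace}"
mkdir -p "$WS/memory"
stale="$(find "$WS/memory" -name '*-*-*.md' -mtime +14 | sort)"
if [ -n "$stale" ]; then
  printf 'Review candidates:\n%s\n' "$stale"
else
  echo "No daily logs older than 14 days."
fi
```

The actual distillation stays a judgment call (mine or the agent's during a heartbeat); the script only keeps the candidates from being forgotten.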
For now, the setup is doing what it should do: stay close to real work, reduce friction, and slowly turn repeated actions into infrastructure.