I have mentioned OpenClaw previously on this platform, but I feel it is worth calling attention to again. OpenClaw is an open source project that uses a clever series of markdown files and prompt strategies to completely transform the generative AI experience. Gone are the days of opening a web browser and starting a two-dimensional chat with Claude. Now, you open a messaging app, send a voice message, and your agent actually gets things done.

In the future, everyone will have a personal AI assistant, and OpenClaw's open source nature is perfectly suited to win the user base for this application. Think of an LLM like fire – useful in its raw form, but limited. You can catch a fish and cook it over an open flame, but fire isn't good for much beyond warmth and cooking. Harness that fire in an internal combustion engine, and now you're really going places.

OpenClaw runs locally and uses the LLM like the gasoline in the engine. It generates markdown files to record and update details relevant to the user, improving its memory and functionality over time. It then inserts all of that context into every prompt, so the user experience is highly customized and feels lifelike.

The beauty of this setup is that if OpenAI comes out with a better model than Anthropic's Claude Opus 4.6 next week, it is trivial to switch your OpenClaw over to that model and continue along with all your memory and customization intact, because that context lives with your OpenClaw instance, not the model provider. No model provider lock-in means LLM computation becomes a commodity. Hopefully, one day soon we'll be able to self-host that commodity compute as well.
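The decoupling described above can be sketched in a few lines. The field names and model identifiers here are placeholders of my own, not OpenClaw's real configuration schema – the point is simply that the model is one swappable setting while the memory location never changes:

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    model: str       # which LLM backend to call (placeholder identifiers)
    memory_dir: str  # markdown memory lives here, independent of the model

# Start on one provider's model...
config = AgentConfig(model="provider-a/model-x", memory_dir="memory/")

# ...and switching providers is a one-line change; memory_dir is untouched,
# so every note the agent has accumulated carries over to the new model.
config.model = "provider-b/model-y"
```

This is what makes the "gasoline in the engine" framing apt: the fuel is interchangeable, and the engine – your accumulated context – is what you actually own.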

If any of this is interesting to my readers, please go check out www.makenomistakes.shop, a business I have launched to help onboard non-technical users to the OpenClaw project.