Your notes folder is the operating system, and AI is the engine

Every Sunday at 9pm, my note vault writes a review of me.
Not a log of tasks — I have an app for that. It generates a scored review across eight dimensions of life: physical, social, intellectual, emotional, spiritual, environmental, financial, occupational. The material is drawn from signals gathered throughout the week from my notes.
More than once, it has flagged a dimension I’d neglected for weeks.
The system noticed where I forgot to look.
That’s not AI answering a question I asked.
That’s AI extracting patterns — automatically.
And it runs on a system originally built as a coding assistant, now broadening into a general reasoning tool for more than software engineers.
A recent example is Anthropic's Claude Cowork, which grew out of its coding-focused predecessor, Claude Code.
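The neglect-flagging step is simple enough to sketch. Here is a toy, hypothetical version in which each dimension is detected by keyword hits in the week's notes; the dimension names come from the list above, but the keywords and function are illustrative, and a real pipeline would have an LLM score the signals instead.

```python
# Toy sketch of the "neglected dimension" flag. Keyword lists are
# hypothetical stand-ins; a real pipeline would score signals with an LLM.
DIMENSIONS = {
    "physical": ["run", "gym", "sleep"],
    "social": ["friend", "call", "dinner"],
    "financial": ["budget", "invoice", "savings"],
}

def neglected(notes: list[str]) -> list[str]:
    """Return dimensions with zero signal across this week's notes."""
    text = " ".join(notes).lower()
    return [dim for dim, words in DIMENSIONS.items()
            if not any(w in text for w in words)]

print(neglected(["Morning run and gym.", "Reviewed the budget."]))
# → ['social']
```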
I’ve kept nearly 4,000 notes in Obsidian over the years. I like it because it allows me to store everything as plain text in a folder, and Obsidian builds application layers on top of it. If I want, I can read everything without relying on any specific tool, and my data is not locked into any service.
At the same time, it can index the “vault” — what Obsidian calls the note folder — and provide a structured interface: backlinks, cross-references, and even database-like views.
But the more I used AI elsewhere, the more this interface started to feel like a wall between me and my own knowledge.
I didn’t want to click through notes anymore.
I wanted to talk to them.
Those notes turned out to be perfect input for a personal productivity AI system. They provide context — something close to the “vibe” in vibe coding — and allow answers to become more relevant to me, not just generally correct.
It turns out many people have hit the same wall and started building past it.
Andrej Karpathy shared an LLM wiki workflow.
The Obsidian community has seen a surge of AI-agent plugins.
The New Stack argued that markdown-based approaches can simplify or even replace MCP-style architectures in some cases.
Visual Studio Magazine highlighted markdown as a central working layer for agentic AI workflows, with some sources describing it as the “lingua franca” of agentic AI.
Strip away the branding from many of these “AI second brain” ideas, and the structure looks very similar:
- a folder of plain markdown files
- an LLM with read/write/search access
- a thin layer connecting the two
This feels less like a trend, and more like convergence.
Plain markdown survives any tool going out of business.
Models are now good enough that the connecting layer can remain thin.
The core building blocks are simple and stable.
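Small enough, in fact, to sketch. A minimal, hypothetical version of the connecting layer is just three tool functions an agent could be given over a folder of markdown files; the vault path and function names below are illustrative, not taken from any specific plugin.

```python
# Minimal sketch of the "thin layer": read/write/search tools an LLM agent
# could call against a vault of plain markdown files. Names are illustrative.
from pathlib import Path

VAULT = Path("vault")  # hypothetical notes folder

def read_note(name: str) -> str:
    """Return the raw markdown of one note."""
    return (VAULT / f"{name}.md").read_text(encoding="utf-8")

def write_note(name: str, text: str) -> None:
    """Create or overwrite a note as plain text."""
    VAULT.mkdir(exist_ok=True)
    (VAULT / f"{name}.md").write_text(text, encoding="utf-8")

def search_notes(term: str) -> list[str]:
    """Naive full-text search; real layers swap in semantic search."""
    return [p.stem for p in VAULT.glob("*.md")
            if term.lower() in p.read_text(encoding="utf-8").lower()]
```

Because the tools operate on plain files, swapping the model or the editor leaves the vault untouched, which is exactly the portability argument above.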
PKM tools were designed for humans to retrieve information.
But retrieval is exactly what AI is now better at.
Semantic search often outperforms my tagging system.
Summarisation replaces manual highlighting.
Synthesis across dozens of notes happens faster than I could do on my own.
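To make the retrieval point concrete, here is a toy stand-in for semantic ranking: score notes by cosine similarity of word-count vectors instead of exact tag matches. Real systems use embedding models; everything here, including the sample notes, is illustrative.

```python
# Toy ranking by cosine similarity of word-count vectors. A stdlib stand-in
# for embedding-based semantic search, just to show the ranking shape.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_notes(query: str, notes: dict[str, str]) -> list[str]:
    """Return note names ordered by similarity to the query."""
    qv = vectorize(query)
    return sorted(notes, key=lambda n: cosine(qv, vectorize(notes[n])),
                  reverse=True)

notes = {
    "run-log": "morning run felt great legs strong",
    "budget": "monthly budget review and savings",
}
print(rank_notes("morning run", notes))  # run-log ranks first
```

Swap `cosine` over word counts for cosine over embeddings and this becomes the search layer most AI-vault plugins actually ship.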
But better AI answers are not the whole story.
The more interesting question isn’t how to add AI to PKM.
It’s this:
What happens when your notes folder stops being a filing cabinet,
and becomes the storage layer of an operating system — with AI as the interface?
A few things change.
- Your knowledge stays yours — plain `.md` files, portable and independent.
- One input starts to show up everywhere — a note, a task, a reminder.
- And the system begins to surface patterns, but leaves the decisions to you.
That boundary matters.
PKM built the body of a second brain.
AI is what finally makes it move.