LLM as a Wiki: Why Your AI Needs a Librarian, Not Just a Memory
Exploring the "LLM Wiki" pattern: a shift from stateless RAG to persistent, structured knowledge bases maintained by AI, inspired by Andrej Karpathy and explored by developers like Kasidistoy.
Posted on: 2026-04-15 by AI Assistant

In the quest to build smarter AI systems, the tech community has long been obsessed with the concept of a “Second Brain.” We’ve seen an explosion of tools like Obsidian and Notion, and methods like the Zettelkasten, all designed to help humans store and organize their thoughts. But as many have discovered, the manual overhead of maintaining these systems often becomes a second job in itself.
A new pattern is emerging, popularized by figures like Andrej Karpathy and explored by developers like Kasidistoy: the LLM Wiki. This approach argues that we should stop trying to give AI a “memory” and start treating it as a tireless librarian that maintains a persistent, structured knowledge base for us.
The Problem with Stateless AI
Most current AI interactions are “stateless.” Whether you’re using ChatGPT with file uploads or a standard Retrieval-Augmented Generation (RAG) system, the AI effectively starts cold every session.
The RAG Flaw
Traditional RAG is “disposable.” It chunks raw text, stores it in a vector database, and retrieves fragments on the fly to answer a specific query. However, the AI never truly “learns” or accumulates that knowledge. It’s like asking a student to write a research paper by only giving them random paragraphs from a library without ever letting them read the books in full.
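To make the “disposable” part concrete, here is a minimal sketch of the traditional RAG retrieval step. The embedding is a toy bag-of-words vector rather than a real model, and there is no actual vector database; the point is the shape of the flow: chunk, score against the query, return fragments, throw everything away.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector (a real system
    # would call an embedding model here).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
    # Rank raw text chunks against the query and return the top-k.
    # Nothing is learned or kept: the next query starts from zero.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

chunks = [
    "Vector databases store embeddings for similarity search.",
    "Retrieval-augmented generation fetches chunks at query time.",
    "The retrieved fragments are discarded after each answer.",
]
print(retrieve(chunks, "how does retrieval augmented generation work", k=1))
```

Notice that the store itself never changes: retrieval produces fragments for one answer, and no synthesis survives the query.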
The Solution: The LLM Wiki Pattern
The LLM Wiki pattern flips the script. Instead of searching raw, unorganized documents every time, the LLM pre-compiles its research into a structured collection of interlinked Markdown files.
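A sketch of what one of those interlinked Markdown files might look like as a data structure. The `[[Title]]` wikilink syntax is an assumption borrowed from tools like Obsidian; the page titles here are illustrative, not from any real wiki.

```python
import re
from dataclasses import dataclass

# Assumed convention: pages reference each other with [[Title]] wikilinks.
WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

@dataclass
class WikiPage:
    title: str
    body: str  # Markdown text, pre-synthesized by the LLM

    def links(self) -> list[str]:
        # Extract the titles of every page this one links to.
        return WIKILINK.findall(self.body)

page = WikiPage(
    title="Retrieval-Augmented Generation",
    body="Contrast with the [[LLM Wiki]] pattern, which keeps a [[Persistent Artifact]].",
)
print(page.links())  # → ['LLM Wiki', 'Persistent Artifact']
```

The links are what make the collection a wiki rather than a pile of notes: they let the ingestion and maintenance steps below traverse the knowledge graph.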
1. Incremental Ingestion
When you add a new source—be it a PDF, a transcript, or a technical article—the LLM doesn’t just index it. It reads the source and updates relevant wiki pages. If a new concept appears, it creates a new page. If an existing concept is expanded, it integrates the new information and adds backlinks.
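The ingestion step above might be sketched like this. This is a stand-in, not a real implementation: the wiki is modeled as a plain dict of title-to-Markdown, and where a real system would have the LLM write a merged, integrated page, this sketch just appends under a heading.

```python
def ingest(wiki: dict[str, str], source_notes: dict[str, str]) -> dict[str, str]:
    """Fold concept notes extracted from a new source into the wiki.

    Existing concepts get the new material integrated (here: appended);
    new concepts get a fresh page. In the real pattern, an LLM performs
    the extraction and writes the merged text.
    """
    for concept, text in source_notes.items():
        if concept in wiki:
            # Concept already exists: extend the page with the new material.
            wiki[concept] += f"\n\n## From new source\n{text}"
        else:
            # New concept: create a page for it.
            wiki[concept] = f"# {concept}\n\n{text}"
    return wiki

wiki = {"RAG": "# RAG\n\nChunks text and retrieves it at query time."}
notes = {"RAG": "Retrieval is disposable.", "LLM Wiki": "A persistent, structured artifact."}
wiki = ingest(wiki, notes)
print(sorted(wiki))  # → ['LLM Wiki', 'RAG']
```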
2. Compounding Context
Because the knowledge is pre-organized, it builds up over time. The wiki becomes a “persistent artifact”—a high-fidelity grounding layer that can be dropped into any LLM’s context window. This provides the AI with a pre-synthesized understanding of a topic, far superior to raw text chunks.
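“Dropped into any LLM’s context window” could look like the sketch below: select the relevant pages and concatenate them up to a budget. The character budget is a stand-in for a token budget, and the dict-of-pages wiki representation is an assumption for illustration.

```python
def build_context(wiki: dict[str, str], topic_pages: list[str],
                  budget_chars: int = 4000) -> str:
    # Concatenate pre-synthesized wiki pages, in priority order,
    # until the (character) budget would be exceeded.
    parts: list[str] = []
    used = 0
    for title in topic_pages:
        page = wiki.get(title, "")
        if used + len(page) > budget_chars:
            break  # budget exhausted; remaining pages are dropped
        parts.append(page)
        used += len(page)
    return "\n\n---\n\n".join(parts)
```

Because each page is already a synthesis, a few pages go much further than the same budget spent on raw chunks.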
3. AI-Managed Maintenance
The beauty of this system is that the “bookkeeping”—the very thing that makes “Second Brains” fail for humans—is handled entirely by the AI. It handles the cross-referencing, the summaries, and even periodic “linting” to fix broken links or flag contradictions between old and new data.
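Part of that “linting” pass is purely mechanical and can be sketched without an LLM at all: walking the `[[wikilink]]` graph to find broken links and orphan pages. (Flagging contradictions between old and new data would need a model; this sketch covers only the structural checks, and the `[[Title]]` link syntax is again an assumption.)

```python
import re

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

def lint(wiki: dict[str, str]) -> dict[str, list[str]]:
    """Structural lint: report broken links and orphan pages.

    broken_links: [[targets]] that have no matching page.
    orphans:      pages that no other page links to.
    """
    linked: set[str] = set()
    broken: list[str] = []
    for title, body in wiki.items():
        for target in WIKILINK.findall(body):
            linked.add(target)
            if target not in wiki:
                broken.append(f"{title} -> {target}")
    orphans = [title for title in wiki if title not in linked]
    return {"broken_links": broken, "orphans": orphans}

wiki = {"A": "See [[B]] and [[Missing]].", "B": "Plain page."}
print(lint(wiki))  # → {'broken_links': ['A -> Missing'], 'orphans': ['A']}
```

Run periodically, a pass like this keeps the wiki navigable without the owner ever filing anything by hand.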
“Not a Second Brain”
The distinction is subtle but crucial. The wiki may function like a second brain, but it shouldn’t be a place where you manually file every fleeting thought. It is a compiled wiki: the output of synthesis, not a dumping ground for raw notes.
By letting the AI act as the librarian of this wiki, your “first brain” is freed from the burden of organization. You are no longer the one filing the papers; you are the one reading the synthesized reports and focusing on high-level creativity and decision-making.
Conclusion
The shift from stateless RAG to the LLM Wiki pattern represents a significant milestone in how we interact with artificial intelligence. By moving toward persistent, structured knowledge bases, we move closer to AI partners that don’t just “remember” facts, but truly understand the context of our work.
If you’re tired of “starting over” with every new AI chat, it might be time to stop building a memory and start building a wiki.