Immutable Architecture
The four-layer model. Why monolithic LLM-workspace deployments break under upgrades, model switches, and distribution — and the immutable substrate that fixes it.
maiaOS is a model-agnostic operating system for working with LLMs, designed around a brain-inspired architecture — a small fast frontal lobe routing to specialist cortices, not a single big model trying to be the whole house.
Most LLM-powered workspaces are tightly coupled to a single CLI, a single vendor, a single filesystem layout. Upgrading the runtime means risking the data. Switching models means starting over. Distributing across devices is impossible. Memory, identity, skills, and history are entangled with the tool, not the operator.
Today's dominant LLM-app pattern: the biggest available model is the front door and also the entire house. maiaOS inverts that. The front door is deliberately small and fast — a frontal lobe that routes, synthesises, and decides who to ask. Behind it sit specialist cortices: heavy reasoning, fast conversation, creative generation, vision, audio, memory consolidation, salience monitoring.
The frontal lobe doesn't do the heavy thinking. It's the switchboard, not the calculator. Aggregate intelligence comes from orchestrating specialists, not from a single ever-larger model.
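The routing pattern above can be sketched in a few lines. Nothing here is shipped maiaOS code; `FrontalLobe`, `Specialist`, and the keyword classifier are hypothetical stand-ins for the small fast router and the specialist cortices the text describes, with a real deployment presumably using a small model for classification rather than keywords.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Specialist:
    """A hypothetical cortex: a name plus a handler (stand-in for a model call)."""
    name: str
    handle: Callable[[str], str]

class FrontalLobe:
    """The switchboard: classifies a request and dispatches it.
    It never does the heavy thinking itself."""

    def __init__(self) -> None:
        self.cortices: dict[str, Specialist] = {}

    def register(self, intent: str, specialist: Specialist) -> None:
        self.cortices[intent] = specialist

    def classify(self, request: str) -> str:
        # Stand-in for a small, fast classifier model.
        if "image" in request:
            return "vision"
        if "derive" in request or "prove" in request:
            return "reasoning"
        return "conversation"

    def route(self, request: str) -> str:
        intent = self.classify(request)
        specialist = self.cortices.get(intent)
        if specialist is None:
            raise KeyError(f"no cortex registered for intent {intent!r}")
        return specialist.handle(request)

lobe = FrontalLobe()
lobe.register("reasoning", Specialist("heavy-reasoner", lambda r: f"[reasoned] {r}"))
lobe.register("conversation", Specialist("fast-chat", lambda r: f"[chat] {r}"))
lobe.register("vision", Specialist("vision", lambda r: f"[seen] {r}"))

print(lobe.route("derive the gradient"))  # dispatched to the heavy reasoner
```

The design point is that the router stays cheap and swappable: specialists can be added, upgraded, or replaced without touching the front door.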
The four-layer model separates what changes from what doesn't, and what's recoverable from what isn't. Each layer is backed up, deployed, and reasoned about independently — and each has a brain analogue, because the layering wasn't designed brain-first but turned out strikingly brain-shaped anyway.
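A sketch of what that separation might look like as data. The layer names here (substrate, skills, memory, workspace) and the retention numbers are placeholders, not published maiaOS names; the point is the two axes the text describes: whether a layer changes, and whether it can be rebuilt if lost.

```python
from dataclasses import dataclass
from enum import Enum

class Recoverability(Enum):
    RECOVERABLE = "recoverable"      # can be rebuilt from sources
    IRREPLACEABLE = "irreplaceable"  # loss is permanent; must be backed up

@dataclass(frozen=True)
class Layer:
    name: str
    mutable: bool
    recoverability: Recoverability
    retention_days: int  # hypothetical retention rule

# Placeholder layer names and values, for illustration only.
LAYERS = [
    Layer("substrate", mutable=False, recoverability=Recoverability.RECOVERABLE,   retention_days=0),
    Layer("skills",    mutable=True,  recoverability=Recoverability.RECOVERABLE,   retention_days=30),
    Layer("memory",    mutable=True,  recoverability=Recoverability.IRREPLACEABLE, retention_days=365),
    Layer("workspace", mutable=True,  recoverability=Recoverability.IRREPLACEABLE, retention_days=90),
]

def backup_plan(layers: list[Layer]) -> dict[str, str]:
    """Irreplaceable layers get backups; recoverable ones only need a rebuild recipe."""
    return {
        layer.name: "backup" if layer.recoverability is Recoverability.IRREPLACEABLE else "rebuild"
        for layer in layers
    }

print(backup_plan(LAYERS))
```

Once layers carry this classification, backup, deployment, and upgrade policy fall out mechanically: an immutable recoverable layer can be replaced wholesale, while irreplaceable data is never touched by an upgrade.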
Every load-bearing decision lives in a SPEC. The SPEC is the contract: code follows it, not the other way around. These four are the foundational specs the rest of maiaOS builds on. All four are currently in Draft and will evolve as implementation feedback lands.

The four-layer model itself, the spec this page describes: why monolithic LLM-workspace deployments break under upgrades, model switches, and distribution, and the immutable substrate that fixes it.
Per-layer backup classification, retention rules, and verification. Treats irreplaceable data differently from recoverable data — because they are different.
A process standard for every HTTP API in the maiaOS ecosystem. Patterns, ordering, verification rules — so the secure path is the default path.
Cross-LLM coordination. A session manager that sits outside Claude Code, Codex, Gemini, Aider — so concurrent sessions can see each other, share context, and avoid blind conflicts.
The four foundational SPECs are drafted and live. The brain-architecture framing — frontal lobe, cortex taxonomy, regulatory cortices — is conceptual and being worked through publicly. There is no shipped product. There is a thinking process, and these are its current artefacts. Watch this page evolve.