A brain-inspired AI operating system

A brain, not a monolith.

maiaOS is a model-agnostic operating system for working with LLMs, designed around a brain-inspired architecture — a small fast frontal lobe routing to specialist cortices, not a single big model trying to be the whole house.

This is thinking out loud, not a product. maiaOS is highly conceptual and under active development: specifications are drafts, and the brain-architecture framing is live notes, not a finished spec. The thinking is published here so the work is visible. Expect change.

LLM workspaces are mortgaged to whichever model happens to be best this quarter.

Most LLM-powered workspaces are tightly coupled to a single CLI, a single vendor, a single filesystem layout. Upgrading the runtime means risking the data. Switching models means starting over. Distributing across devices is impossible. Memory, identity, skills, and history are entangled with the tool, not the operator.

The Algorithm, the skills, the memory, the identity — these are model-agnostic. The deployment shouldn't assume a single runtime forever.

A small frontal lobe. A team of specialist cortices.

Today's dominant LLM-app pattern: the biggest available model is the front door and also the entire house. maiaOS inverts that. The front door is deliberately small and fast — a frontal lobe that routes, synthesises, and decides who to ask. Behind it sit specialist cortices: heavy reasoning, fast conversation, creative generation, vision, audio, memory consolidation, salience monitoring.

The frontal lobe doesn't do the heavy thinking. It's the switchboard, not the calculator. Aggregate intelligence comes from orchestrating specialists, not from a single ever-larger model.

The prefrontal cortex is not where most computation happens — it's where decisions about computation happen.
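The routing idea above can be sketched in a few lines. This is an illustrative sketch only, assuming a cheap predicate per specialist; the names (`Cortex`, `route`) are invented for this example and are not part of maiaOS.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Cortex:
    name: str
    handles: Callable[[str], bool]  # cheap predicate: can this specialist take the task?
    run: Callable[[str], str]       # the heavy work lives here, not in the router

def route(task: str, cortices: list[Cortex], fallback: Cortex) -> str:
    """The frontal lobe: decide who to ask, then hand off. No heavy thinking here."""
    for cortex in cortices:
        if cortex.handles(task):
            return cortex.run(task)
    return fallback.run(task)

# Usage: a heavy reasoning cortex, with fast conversation as the default.
reasoning = Cortex("reasoning", lambda t: "prove" in t, lambda t: f"[deep] {t}")
chat = Cortex("chat", lambda t: True, lambda t: f"[fast] {t}")

print(route("prove the lemma", [reasoning], fallback=chat))  # prints: [deep] prove the lemma
```

The point of the shape is that `route` stays small and fast no matter how capable the specialists become; scaling happens by adding cortices, not by growing the router.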

Four principles, applied everywhere.

Immutable where it matters
System layers are read-only and reproducible. Upgrades replace, not merge. The state you care about lives elsewhere.
Model-agnostic by design
Claude, Codex, Gemini, Aider, future tools — maiaOS doesn't pick a winner. The substrate sits underneath all of them.
Distribution-ready
Built so devices can become thin clients of a personal Maia service, not islands carrying their own state.
Verifiable, always
Backups verified, APIs tested, sessions reconciled. Anything unverified is a hope, not a system.

Four layers. Each maps to a brain region.

The four-layer model separates what changes from what doesn't, and what's recoverable from what isn't. Each layer is backed up, deployed, and reasoned about independently — and each has a brain analogue, because the layering wasn't designed brain-first but turned out strikingly brain-shaped anyway.

coreOS
Immutable system code. Algorithm, skill engine, hooks. Recoverable from a git registry — short backup chain, fast restore. Analogue: brainstem · genetically-encoded substrate. Doesn't change per-thought.
userOS
Personal customisation. Custom skills, preferences, project trees. Irreplaceable — the strongest backup tier. Analogue: autobiographical self. Identity, long-held preferences, life context.
memoryOS
Persistent memory. Auto-memory, learned preferences, identity. Continuously written, continuously protected. Analogue: hippocampus + cortical memory. Episodic, semantic, procedural.
runtimeOS
Ephemeral state. Session caches, in-flight work. Recomposable on demand — minimal backup overhead. Analogue: working memory · active neurons. Current attention, scratch, transient plans.
drivers
Runtime adapters. Thin wiring between the frontal lobe and a cortex — SDK/API binding, transport, envelope. The work happens in the cortex; the connection is the driver. Analogue: peripheral nervous system · sensorimotor adapters.
The gap today: there is no persistent frontal lobe. Whichever runtime wakes up becomes the temporary executive, and leaves when the session ends. Closing that gap is the next direction of work.
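The driver idea above — thin wiring, with the work in the cortex — can be sketched as a structural interface. The `Protocol` name and the `send`/envelope shape here are assumptions for illustration, not the maiaOS driver API.

```python
from typing import Protocol

class Driver(Protocol):
    """Thin wiring between frontal lobe and cortex: transport + envelope, nothing more."""
    def send(self, envelope: dict) -> dict: ...

class EchoDriver:
    """A stand-in adapter; a real driver would bind an SDK or HTTP API here."""
    def __init__(self, cortex_name: str) -> None:
        self.cortex_name = cortex_name

    def send(self, envelope: dict) -> dict:
        # Wrap the payload in a response envelope; no intelligence in the driver.
        return {"from": self.cortex_name, "result": envelope["task"]}

driver: Driver = EchoDriver("vision")
response = driver.send({"task": "describe image"})
```

Because drivers only carry transport and envelope, swapping a cortex's backing model means swapping one adapter, and the rest of the system never notices.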

SPEC-driven where the architecture is settled.

Every load-bearing decision lives in a SPEC. The SPEC is the contract — code follows it, not the other way around. These four are the foundational specs the rest of maiaOS builds on. All currently in Draft — they will evolve as implementation feedback lands.

SPEC-001

Immutable Architecture

The four-layer model. Why monolithic LLM-workspace deployments break under upgrades, model switches, and distribution — and the immutable substrate that fixes it.

Foundation · Draft
SPEC-002

Backup Model

Per-layer backup classification, retention rules, and verification. Treats irreplaceable data differently from recoverable data — because they are different.

Reliability · Draft
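The per-layer classification SPEC-002 describes can be pictured as a small policy table. The tier fields and retention values below are invented for illustration; the spec defines the real rules.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BackupPolicy:
    recoverable: bool    # can the layer be rebuilt from a registry?
    retention_days: int  # how long copies are kept
    verify: bool         # is restore actually tested, not assumed?

# Example values only — one policy per layer, matching the four-layer model.
POLICIES = {
    "coreOS":    BackupPolicy(recoverable=True,  retention_days=30,  verify=True),
    "userOS":    BackupPolicy(recoverable=False, retention_days=365, verify=True),
    "memoryOS":  BackupPolicy(recoverable=False, retention_days=365, verify=True),
    "runtimeOS": BackupPolicy(recoverable=True,  retention_days=1,   verify=False),
}

def strongest_tier() -> list[str]:
    """Irreplaceable layers get the strongest protection."""
    return [name for name, p in POLICIES.items() if not p.recoverable]
```

Encoding the distinction as data rather than convention is the point: irreplaceable and recoverable layers get mechanically different treatment.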
SPEC-003

API Security Standard

A process standard for every HTTP API in the maiaOS ecosystem. Patterns, ordering, verification rules — so the secure path is the default path.

Security · Draft
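One way to read "the secure path is the default path": handlers are only registered through a wrapper that runs verification first. The names here (`secure_route`, `check_auth`) and the token check are invented for this sketch, not SPEC-003's actual patterns.

```python
from typing import Callable

def check_auth(request: dict) -> bool:
    # Stand-in verification; a real check would validate a token or signature.
    return request.get("token") == "valid"

def secure_route(handler: Callable[[dict], str]) -> Callable[[dict], str]:
    """Every handler passes through auth before its own logic can run."""
    def wrapped(request: dict) -> str:
        if not check_auth(request):
            return "401 Unauthorized"
        return handler(request)
    return wrapped

@secure_route
def get_profile(request: dict) -> str:
    return "200 OK"
```

With this ordering baked into registration, forgetting the auth check is no longer a possible mistake — the insecure path simply doesn't exist.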
SPEC-004

Session Manager

Cross-LLM coordination. A session manager that sits outside Claude Code, Codex, Gemini, Aider — so concurrent sessions can see each other, share context, and avoid blind conflicts.

Coordination · Draft
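A minimal sketch of the coordination idea: a registry that lives outside any one CLI, so a new session can discover its peers before touching shared state. The file path, locking-free design, and field names here are assumptions for illustration, not SPEC-004's design.

```python
import json
import os
import tempfile

# Illustrative location only — a shared file visible to every runtime.
REGISTRY = os.path.join(tempfile.gettempdir(), "maia-sessions.json")

def _load() -> dict:
    if not os.path.exists(REGISTRY):
        return {}
    with open(REGISTRY) as f:
        return json.load(f)

def register(session_id: str, runtime: str, workdir: str) -> list[str]:
    """Announce a session; return the ids of other sessions in the same workdir."""
    sessions = _load()
    peers = [sid for sid, s in sessions.items()
             if s["workdir"] == workdir and sid != session_id]
    sessions[session_id] = {"runtime": runtime, "workdir": workdir}
    with open(REGISTRY, "w") as f:
        json.dump(sessions, f)
    return peers
```

A Claude Code session and an Aider session registering against the same project directory would each see the other listed, which is the precondition for sharing context instead of conflicting blindly.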

Where this is.

The four foundational SPECs are drafted and live. The brain-architecture framing — frontal lobe, cortex taxonomy, regulatory cortices — is conceptual and being worked through publicly. There is no shipped product. There is a thinking process, and these are its current artefacts. Watch this page evolve.