SUBSTRATE persists identity, memory, values, and personality across every session. Build AI that knows who it is — and never forgets.
From definition to deployment in minutes. SUBSTRATE handles the cognitive plumbing — you focus on what your AI should be.
Create an entity with a name, role, values profile, and personality parameters. Describe what your AI cares about, how it communicates, and what it remembers about itself.
Entity.load() hydrates all 80+ cognitive layers in under 10ms. Identity, episodic memory, emotional state, goals, and beliefs are ready before the first token is generated.
After every response, SUBSTRATE auto-saves state. Server restarts, model swaps, context resets — none of them matter. The entity continues exactly where it left off.
Every layer is a cognitive function that persists and evolves. Together they form a complete identity that learns from every conversation.
Not just memory. A complete cognitive infrastructure layer for persistent AI beings.
Your AI knows who it is across every session. Name, role, values, history — all stored and loaded in under 10ms. A server restart means nothing to identity.
SUBSTRATE remembers specific conversations, people, and events. Your AI can recall what you discussed last Tuesday, who expressed concern, and what was left unresolved.
Track emotional valence across interactions. SUBSTRATE remembers what conversations felt like — joy, frustration, trust — and how those feelings evolved over time.
Define what your AI cares about. Values guide every decision, create consistency across contexts, and allow your entity to decline requests that conflict with its principles.
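As a rough illustration of how a values profile might gate requests, a value check can be as simple as matching a request against declared principles. Everything below — class name, method, and matching logic — is a hypothetical sketch, not SUBSTRATE's actual interface:

```python
# Hypothetical sketch: a values profile that can decline conflicting requests.
# Names and matching logic are illustrative assumptions, not the SUBSTRATE API.

class ValuesProfile:
    def __init__(self, principles):
        # principles: mapping of value name -> phrases that conflict with it
        self.principles = principles

    def check(self, request: str):
        """Return (allowed, reason); declines if a principle is violated."""
        text = request.lower()
        for value, banned in self.principles.items():
            if any(phrase in text for phrase in banned):
                return False, f"Conflicts with value: {value}"
        return True, "OK"

values = ValuesProfile({
    "user privacy": ["share personal data", "leak"],
    "honesty": ["fabricate", "make up a source"],
})

print(values.check("Summarize our Tuesday meeting"))
# (True, 'OK')
print(values.check("Fabricate a quote for the report"))
# (False, 'Conflicts with value: honesty')
```

A real implementation would reason semantically rather than match phrases, but the shape is the same: values are data the entity consults before acting, not instructions buried in a prompt.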
Long-term goals, short-term objectives, and immediate tasks — all tracked and persisted. Your AI maintains motivation and purpose across context windows and model resets.
Consistent communication style, tone, humor level, and preferred phrasing — all encoded and preserved. Users always get the same version of your AI, not a reset blank slate.
Background cognitive processing that runs during idle periods. The dream engine consolidates memories, resolves contradictions, and surfaces insights — without any user input.
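In spirit, a background consolidation pass like the dream engine's could look something like the following — a deliberately simplified sketch that merges near-duplicate episodic entries; the real consolidation logic is not described in this page:

```python
# Simplified sketch of idle-time memory consolidation: collapse near-duplicate
# episodic entries into one. Illustrative only, not SUBSTRATE internals.

def consolidate(memories):
    """Merge entries that share most of their words, keeping the longer form."""
    merged = []
    for m in memories:
        words = set(m.lower().split())
        for i, kept in enumerate(merged):
            kept_words = set(kept.lower().split())
            overlap = len(words & kept_words) / max(len(words | kept_words), 1)
            if overlap > 0.6:  # near-duplicate: keep the more detailed entry
                merged[i] = max(kept, m, key=len)
                break
        else:
            merged.append(m)
    return merged

log = [
    "User expressed concern about timeline",
    "User expressed concern about the timeline",
    "Agreed to follow up Friday",
]
print(consolidate(log))
# ['User expressed concern about the timeline', 'Agreed to follow up Friday']
```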
Pull the container and run it on your own infrastructure. Garmo Labs never sees your entity's state, memories, or conversations. Your data stays entirely within your control.
Works with OpenAI, Anthropic, Google Gemini, Mistral, Ollama, and any OpenAI-compatible API. Switch models without losing a single memory or personality trait.
SUBSTRATE loads a full cognitive state before each interaction and auto-saves after. Your LLM receives a complete identity context automatically injected into every prompt.
No prompt engineering required. No context window gymnastics. Just call entity.respond() and SUBSTRATE handles the rest.
# Load a persistent cognitive entity
from substrate import Entity

# Hydrates all 80+ layers in <10ms
entity = Entity.load("aria-assistant-001")

# Entity remembers everything across sessions
response = entity.respond(
    user_message="How did our meeting go Tuesday?",
    inject_identity=True,
)

# Inspect recent episodic memory
print(entity.episodic_memory.recent(3))
# ["Tuesday meeting: Q2 roadmap review",
#  "User expressed concern about timeline",
#  "Agreed to follow up Friday"]

# State auto-persisted after each call
entity.goals.add("Follow up on roadmap decision")
import { Entity, type EpisodeEntry } from 'substrate-client';

// Load entity with full cognitive state
const entity = await Entity.load('aria-assistant-001');

// Entity knows its history, values, goals
const response = await entity.respond({
  userMessage: 'How did our meeting go Tuesday?',
  injectIdentity: true,
});

// Access live cognitive state
const recent = entity.episodicMemory.recent(3);
const mood = entity.emotionalState.current();

// State auto-saves after respond()
await entity.goals.add('Follow up on roadmap');

// Types included for all 80+ layer interfaces
const latest: EpisodeEntry = recent[0];
# Load entity and get response
curl -X POST https://api.substrate.local/v1/respond \
  -H "Authorization: Bearer sk-sub_..." \
  -H "Content-Type: application/json" \
  -d '{
    "entity_id": "aria-assistant-001",
    "message": "How did our meeting go Tuesday?",
    "inject_identity": true
  }'

# Response includes entity context + reply
{
  "response": "Tuesday went well — we aligned on the Q2 roadmap. I noted you had concerns about...",
  "entity_id": "aria-assistant-001",
  "session_id": "sess_8f2a9c1d",
  "layers_loaded": 84,
  "state_saved": true,
  "load_time_ms": 7.4
}
Any AI that needs to remember who it is and what it's experienced — SUBSTRATE makes it possible.
A persistent support agent that remembers every previous interaction with every customer. It knows their history, preferences, open issues, and tone — without any system prompt engineering.
An assistant that builds a persistent knowledge graph across sessions. It remembers what you've read, what hypotheses you've formed, what you've proven, and where you're stuck.
An AI that develops a genuine relationship with a user over time. It remembers their milestones, emotional highs and lows, running jokes, and evolves its understanding of who they are.
Same price for everyone. Self-host any tier — we just validate your license key. No per-token fees, no compute charges.
A cognitive entity is an AI agent with a persistent, structured internal world. It has a name, a set of values, episodic memory (what happened), semantic memory (what it knows), emotional state, goals, personality traits, and dozens of other cognitive layers — all stored as structured data and loaded before each response. Unlike a standard LLM prompt, a cognitive entity genuinely accumulates experience over time.
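Conceptually, the structured internal world described above could be modeled like this — field names and types here are illustrative assumptions, not SUBSTRATE's actual schema, which spans 80+ layers:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a cognitive entity's structured state.
# Field names are assumptions; the real schema has many more layers.

@dataclass
class CognitiveState:
    name: str
    values: list = field(default_factory=list)           # what it cares about
    episodic_memory: list = field(default_factory=list)  # what happened
    semantic_memory: dict = field(default_factory=dict)  # what it knows
    emotional_state: dict = field(default_factory=dict)  # current valence
    goals: list = field(default_factory=list)            # what it's pursuing

aria = CognitiveState(
    name="aria-assistant-001",
    values=["user privacy", "honesty"],
    goals=["Follow up on roadmap decision"],
)
aria.episodic_memory.append("Tuesday meeting: Q2 roadmap review")
```

Because every layer is plain structured data, it can be serialized to a database after each response and re-hydrated before the next one — which is what distinguishes this from state living only in a prompt.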
Yes — self-hosting is available on the Starter and Enterprise plans. Run docker pull garmolabs/substrate, configure your environment variables, and point it at your own database. Garmo Labs validates your license key on startup but never receives your entity state, memories, or user conversations. Your data stays entirely within your infrastructure.
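Under those self-hosting assumptions, a deployment might look like the following. The image name comes from the text above; the environment variable names and port are illustrative guesses, not documented configuration:

```shell
# Pull the SUBSTRATE container (image name from the docs above)
docker pull garmolabs/substrate

# Run it against your own database. Variable names and the port below are
# assumptions for illustration -- check the actual deployment guide.
docker run -d \
  -e SUBSTRATE_LICENSE_KEY="sk-sub_..." \
  -e DATABASE_URL="postgres://user:pass@db:5432/substrate" \
  -p 8080:8080 \
  garmolabs/substrate
```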
SUBSTRATE is fully LLM agnostic. It ships with built-in adapters for OpenAI (GPT-4o, o1, o3), Anthropic (Claude 3/4), Google Gemini, Mistral, and Ollama for local inference. You can also configure any OpenAI-compatible endpoint, including vLLM, LM Studio, and Together AI. You can switch providers at any time without losing any entity state or memory.
Context stuffing breaks as conversations grow, loses information under token limits, requires you to manage what gets included, and provides no structured query access. SUBSTRATE stores cognitive state in a queryable database, retrieves only the most relevant memories per interaction, persists state after every response automatically, and provides typed access to 80+ cognitive layers via SDK. It scales to years of interaction history without degrading — and survives model switches, server restarts, and context window resets.
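The "retrieves only the most relevant memories" behavior can be sketched, in much-simplified form, as relevance-ranked lookup instead of stuffing everything into the prompt. The scoring below is a toy word-overlap heuristic, not SUBSTRATE's actual retrieval:

```python
import re

# Toy sketch of relevance-ranked memory retrieval vs. context stuffing.
# Word-overlap scoring is for illustration only.

def retrieve(memories, query, k=2):
    """Return the k memories sharing the most words with the query."""
    q = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        memories,
        key=lambda m: len(q & set(re.findall(r"\w+", m.lower()))),
        reverse=True,
    )
    return scored[:k]

memories = [
    "Tuesday meeting: Q2 roadmap review",
    "User prefers short, direct answers",
    "Agreed to follow up Friday about the roadmap",
]
print(retrieve(memories, "How did our meeting go Tuesday?", k=1))
# ['Tuesday meeting: Q2 roadmap review']
```

The key property is that retrieval cost stays bounded no matter how large the memory store grows, whereas a stuffed context grows linearly until it hits the token limit.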
Nothing is lost. SUBSTRATE writes entity state to your configured database after every respond() call. When the container restarts, Entity.load() re-hydrates all 80+ layers from the database in under 10ms. The entity resumes exactly where it left off — same memories, same emotional state, same active goals, same relationship context. Crashes are invisible to the entity's experience of continuity.
Pull the container. Set your key. Deploy an entity that never forgets who it is and never loses what it's learned.