CORTEX vs. Mem0 vs. LanceDB vs. Markdown Files
**Local**
- CORTEX: single runtime, $0 embeddings. `npm install`, run locally; ONNX embeddings built in.
- Mem0: Docker stack plus external APIs. Three containers (API + Postgres + Neo4j), plus an OpenAI key.
- LanceDB: embedded file plus external API. Local vector store, but needs OpenAI for embeddings.
- Markdown Files: flat file, no dependencies. No vector search, no semantic retrieval.
**Cloud (free tier)**
- CORTEX: up to 1.6M retrievals/mo, 2M writes, 5GB storage, 20M real-time messages, $0 embedding cost. Roughly 1,600× Mem0's free retrieval quota.
- Mem0: 1k retrievals/mo, 10K memories, plus external embedding API costs.
**Cloud (~$250/mo)**
- CORTEX: up to 33M retrievals/mo, 20M writes, 10GB storage, 500M real-time messages, $0 embedding cost. Roughly 660× Mem0's retrieval quota at this price point.
- Mem0: 50k retrievals/mo at $249/month, fully managed.
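The quota multipliers quoted above follow directly from the retrieval numbers; a quick arithmetic check:

```python
# Free tier: CORTEX allows up to 1.6M retrievals/mo, Mem0 allows 1k.
free_ratio = 1_600_000 / 1_000
print(free_ratio)  # 1600.0 -> the "x1,600 vs. Mem0" figure

# ~$250 tier: CORTEX allows up to 33M retrievals/mo, Mem0 (at $249/mo) allows 50k.
paid_ratio = 33_000_000 / 50_000
print(paid_ratio)  # 660.0 -> the "x660 vs. Mem0" figure
```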
**Persistence**
- CORTEX: distributed via Fabric; survives restarts and replicates to multiple locations.
- Mem0: cloud or self-hosted; platform managed, or Docker (API + Postgres + Neo4j).
- LanceDB / Markdown Files: local file; dies with the agent process.
**Multi-agent sharing**
- CORTEX: namespace isolation plus shared pools, with per-agent, per-team, or global scoping.
- Mem0: user/agent/session scoping via `user_id`, `agent_id`, and `run_id`.
- LanceDB / Markdown Files: none; known memory bleed issues.
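To make the scoping distinction concrete, here is a minimal sketch in plain Python (a hypothetical illustration, not CORTEX's or Mem0's actual API) of namespace-isolated memory with an opt-in shared pool. Without this kind of keying, every agent reads and writes the same store, which is the "memory bleed" failure mode:

```python
from collections import defaultdict

class NamespacedMemory:
    """Toy memory store keyed by namespace (illustration only, not a real API)."""

    SHARED = "__shared__"  # global pool visible to every agent

    def __init__(self):
        self._store = defaultdict(list)  # namespace -> list of memories

    def add(self, namespace, memory, shared=False):
        # Shared writes land in the global pool; everything else stays private.
        self._store[self.SHARED if shared else namespace].append(memory)

    def recall(self, namespace):
        # An agent sees its own namespace plus the shared pool --
        # never another agent's private memories.
        return list(self._store[namespace]) + list(self._store[self.SHARED])

mem = NamespacedMemory()
mem.add("agent-a", "A's private note")
mem.add("agent-b", "B's private note")
mem.add("agent-a", "team-wide fact", shared=True)

print(mem.recall("agent-a"))  # ["A's private note", "team-wide fact"]
print(mem.recall("agent-b"))  # ["B's private note", "team-wide fact"]
```

The same keying idea extends to per-team scopes by treating a team name as another namespace that members opt into.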
**Search**
- CORTEX: HNSW vector search plus attribute filters; hybrid retrieval in a single query.
- Mem0: semantic search plus graph relationships; entity extraction via Neo4j.
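As an illustration of hybrid retrieval (vector similarity combined with attribute filters in one query), here is a small self-contained sketch. It uses brute-force cosine similarity as a stand-in for a real HNSW index, and the record layout is invented for the example:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy memory records: an embedding plus filterable attributes.
records = [
    {"text": "deploy notes", "vec": [0.9, 0.1], "attrs": {"agent": "a", "topic": "ops"}},
    {"text": "recipe ideas", "vec": [0.1, 0.9], "attrs": {"agent": "a", "topic": "food"}},
    {"text": "other agent's ops note", "vec": [0.95, 0.05], "attrs": {"agent": "b", "topic": "ops"}},
]

def hybrid_search(query_vec, filters, k=2):
    # 1. Apply the attribute filter (a real engine pushes this into the index scan).
    candidates = [r for r in records
                  if all(r["attrs"].get(key) == val for key, val in filters.items())]
    # 2. Rank the survivors by vector similarity (stand-in for an HNSW lookup).
    candidates.sort(key=lambda r: cosine(query_vec, r["vec"]), reverse=True)
    return [r["text"] for r in candidates[:k]]

print(hybrid_search([1.0, 0.0], {"agent": "a"}))  # ['deploy notes', 'recipe ideas']
```

The point of doing both steps in one query is that filtering happens before ranking, so an agent never retrieves a semantically similar memory from outside its scope.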
**Also an app runtime**
- CORTEX: ✔ Unified Runtime (Harper); DB, cache, API, messaging in one process.