Now in Early Access

Your AI agent forgets
everything. We fixed that.

One API to remember, recall, and share memories across agents. Install in one line. Works with any framework.

$ pip install mrmemory
# That's it. Your agent has memory now.

Free for 7 days · Then $5/mo · Cancel anytime

Every conversation starts from zero

You're building sophisticated AI — and it can't remember what you told it 5 minutes ago. The workarounds are worse than the problem.

💸

Stuff the Prompt

Shove everything in context. Token costs explode.

🕐

Build Your Own

Roll a vector DB pipeline. Weeks of work that isn't your product.

🤷

Just… Don't

Most people skip memory entirely. Users suffer in silence.

Memory that just works

Seven powerful primitives. One clean API. Zero infrastructure to manage.

🧠

Remember

Store memories with automatic embedding and indexing. No vector DB setup. Just remember().

🔍

Recall

Semantic search that returns what's relevant — not keyword matches. Real cosine similarity scores.
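The similarity scores mentioned above are plain cosine similarity over embedding vectors. A toy stdlib sketch of the math (the 3-dimensional vectors here are made up for illustration; real embeddings come from the service):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" -- purely illustrative, not real model output
query = [0.9, 0.1, 0.0]
memory = [0.8, 0.2, 0.1]
print(round(cosine_similarity(query, memory), 2))  # → 0.98
```

A score near 1.0 means the memory points in almost the same semantic direction as the query, which is why near-paraphrases rank above keyword matches.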

🤝

Share

Multi-agent memory sharing with real-time WebSocket sync. Your agents finally share a brain.

✨

Auto-Remember

Send raw conversations, get intelligent extraction. Auto-dedup, entity tagging, and more.

🗜️

Compress

Merge similar memories into dense representations. Cut token costs 40-60% while keeping recall sharp.
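The core idea behind compression is simple: near-duplicate memories get collapsed into one representative. A minimal stdlib sketch using string similarity (the real service merges on embedding similarity, and the 0.8 threshold here is an illustrative assumption):

```python
from difflib import SequenceMatcher

def compress(memories, threshold=0.8):
    """Greedily drop memories that are near-duplicates of one already kept."""
    kept = []
    for text in memories:
        if any(SequenceMatcher(None, text, k).ratio() >= threshold for k in kept):
            continue  # near-duplicate of a kept memory; skip it
        kept.append(text)
    return kept

memories = [
    "User prefers dark mode",
    "User prefers dark mode in the app",
    "User is based in Berlin",
]
print(compress(memories))
# → ['User prefers dark mode', 'User is based in Berlin']
```

Fewer stored memories means fewer tokens injected at recall time, which is where the cost savings come from.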

✏️

Self-Edit

Agents update, merge, and prune their own memories. Full autonomous memory management.

🛡️

Governance

Three-layer memory governance: private scratchpad, provisional with LLM judge, and core verified memories.
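The three layers form a promotion pipeline: scratchpad notes become provisional candidates, and a judge decides which candidates graduate to core. A minimal sketch of that flow (the tier names mirror the description above; `promote` and the stand-in judge are illustrative, and the real judge is an LLM call):

```python
from enum import Enum

class Tier(Enum):
    SCRATCHPAD = 1   # private working notes, never shared
    PROVISIONAL = 2  # candidates awaiting the judge's review
    CORE = 3         # verified, long-lived memories

def promote(memory, tier, judge):
    """Advance a memory one tier; the judge gates entry into core."""
    if tier is Tier.SCRATCHPAD:
        return Tier.PROVISIONAL
    if tier is Tier.PROVISIONAL and judge(memory):
        return Tier.CORE
    return tier

# Stand-in judge: accept anything non-empty (the service uses an LLM here)
tier = promote("User prefers dark mode", Tier.PROVISIONAL, judge=lambda m: bool(m))
print(tier)
```

Keeping the judge between provisional and core is what prevents a hallucinated fact from ever reaching an agent's trusted memory.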

🔗

LangChain Native

Drop-in MrMemoryCheckpointer and MrMemoryStore for LangGraph. One import away.

🦀

Rust Core

Built in Rust for speed and reliability. Sub-millisecond overhead. No garbage collector pauses.

Three lines to integrate

SDKs for Python and TypeScript. Integrations with LangChain, CrewAI, AutoGen, and more.

main.py
from mrmemory import MrMemory

client = MrMemory("amr_sk_...", agent_id="my-agent")

# Store a memory
client.remember("User prefers concise answers and dark mode")

# Recall with semantic search
results = client.recall("What are the user's preferences?")

for memory in results:
    print(memory.content, memory.similarity)
# → "User prefers concise answers and dark mode" 0.94
⚡ Real-time · WebSocket sync
🔒 Isolated · Per-tenant encryption
🌐 Any framework · REST + WebSocket API
🦀 Rust core · No GC pauses
📦 Published · PyPI + npm
Simple, predictable pricing

No surprise fees. No usage-based gotchas. Start free, scale when ready.

Pro
$25/mo
For production workloads
  • 100,000 memories
  • 500,000 API calls/month
  • Unlimited agents
  • WebSocket real-time sync
  • Memory governance
  • Priority support
  • Self-hosting option
Common questions
How is this different from a vector database?
A vector DB is infrastructure. MrMemory is a product. You don't need to manage embeddings, write retrieval logic, handle tenant isolation, or build cleanup pipelines. You call remember() and recall(). We handle everything else.
What frameworks do you support?
All of them. MrMemory is a REST + WebSocket API with SDKs for Python and TypeScript. We have first-class integrations with LangChain, CrewAI, AutoGen, and OpenClaw. If your framework can make HTTP calls, it works.
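To illustrate the "any framework that speaks HTTP" point, here is what a raw call could look like with nothing but the standard library. The endpoint path and payload shape are assumptions for illustration; check the API reference for the real routes:

```python
import json
import urllib.request

API_KEY = "amr_sk_..."  # your MrMemory API key

# Hypothetical URL and body -- only the plain-HTTP mechanics are the point here
req = urllib.request.Request(
    "https://api.mrmemory.example/v1/memories",
    data=json.dumps({"content": "User prefers concise answers"}).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted in this sketch
print(req.get_method(), req.full_url)
```

No SDK required: any language with an HTTP client can do the same.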
Is my data secure?
Yes. Every tenant gets isolated storage with row-level PostgreSQL isolation and separate Qdrant collections. API keys are SHA-256 hashed, never stored plaintext. Data encrypted at rest.
Can I self-host?
Yes — we provide a Docker Compose setup for self-hosting, and the Pro plan includes a supported self-hosting option.
How long does integration take?
Under 2 minutes. Install the SDK, add your API key, call remember(). That's it.
Stop building memory infra.
Start building agents.

Give your agents a brain in under 2 minutes.

Free for 7 days · Then $5/mo · Cancel anytime

Developer updates, tutorials, zero spam.