Why Does OpenClaw Keep Forgetting? A Complete Memory Solution from Single-Session to Permanent Memory
Ever had this experience? You chat with OpenClaw for hours until the context exceeds its limit and old messages get truncated. The next time you restart the session, the AI has completely forgotten earlier decisions; your scattered technical notes are impossible to find; and memory disappears when you switch devices.
As a flexible Agent framework, OpenClaw has grown a variety of memory solutions in its ecosystem, each solving a different problem. This article maps the landscape and helps you find what suits you in just a minute.
What Problems Does Memory Actually Need to Solve?
Let's break down the requirements. AI memory actually spans three quite different dimensions:
| Dimension | Problem | Solution Approach |
|---|---|---|
| Single-session memory | Conversations exceed model context window, direct truncation loses information | Compress old messages, keep context within window while minimizing information loss |
| Cross-session memory | After restarting session or switching devices, AI completely forgets previous conversations and decisions | Persistent storage of history, automatically retrieve relevant content when needed |
| External knowledge base | Personal notes and technical documents need to be searchable by AI and integrated into conversations | Build document index, support hybrid search retrieval |
Different solutions focus on different aspects. Let's go through them one by one, with detailed configuration steps.
Solution 1: lossless-claw — Single-Session Lossless Compression
Project URL: https://github.com/Martian-Engineering/lossless-claw
One-line positioning: Replaces OpenClaw's native sliding-window truncation with DAG-based hierarchical compression, so no original information is lost.
Core Features
- LCM (Lossless Context Management) algorithm: old messages aren't deleted outright but are hierarchically compressed into summaries, forming a Directed Acyclic Graph (DAG)
- Originals preserved forever: every original message is stored in SQLite; the Agent can use `lcm_grep`/`lcm_expand` to search and expand them when it needs details
- Works out of the box: after installation it replaces the `contextEngine` slot and runs automatically; you don't need to manage it
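To make the DAG idea concrete, here is a toy sketch (not lossless-claw's actual implementation; the names `compress` and `expand` are illustrative): old messages are folded into summary nodes that keep pointers to everything they cover, so any detail can be re-expanded later.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in the compression DAG: an original message or a summary."""
    text: str
    children: list = field(default_factory=list)  # nodes this summary covers

def compress(messages, group_size=4):
    """Hierarchically fold messages into summary nodes, LCM-style.

    Originals are never deleted: each summary keeps pointers to the
    nodes it covers, so any detail can be re-expanded on demand.
    """
    level = [Node(m) for m in messages]
    while len(level) > 1:
        level = [
            Node(f"summary({len(group)} msgs)", children=group)
            for group in (level[i:i + group_size]
                          for i in range(0, len(level), group_size))
        ]
    return level[0]  # root of the DAG

def expand(node):
    """Recover the original messages beneath any node (like lcm_expand)."""
    if not node.children:
        return [node.text]
    return [m for child in node.children for m in expand(child)]
```

Calling `expand` on the root always reproduces the original messages, which is the "lossless" property; a real engine puts LLM-written summaries in the parent nodes and gives the Agent tools like `lcm_expand` to walk back down.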
Installation and Configuration Steps
The install command from the project README configures the slot automatically. If you need to configure it manually, edit `~/.openclaw/openclaw.json` and set the `contextEngine` slot.
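A minimal sketch of that manual configuration, assuming the plugin registers under the name `lossless-claw` (the key path comes from the configuration checklist later in this article, not from the project docs):

```json
{
  "plugins": {
    "slots": {
      "contextEngine": "lossless-claw"
    }
  }
}
```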
Recommended configuration (environment variable approach):
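Using the values from the parameter table that follows, the setup is three exports (where you put them, e.g. your shell profile, is up to you):

```shell
# Keep the 32 most recent messages uncompressed
export LCM_FRESH_TAIL_COUNT=32
# -1 = unlimited hierarchical compression depth
export LCM_INCREMENTAL_MAX_DEPTH=-1
# Trigger compression at 75% of the context window
export LCM_CONTEXT_THRESHOLD=0.75
```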
| Parameter | Recommended Value | Description |
|---|---|---|
| `LCM_FRESH_TAIL_COUNT` | 32 | Don't compress the most recent N messages; keep their original content |
| `LCM_INCREMENTAL_MAX_DEPTH` | -1 | -1 means unlimited hierarchical compression depth |
| `LCM_CONTEXT_THRESHOLD` | 0.75 | Trigger compression when 75% of the context window is in use |
Restart OpenClaw for the change to take effect. To verify, have a long conversation until compression triggers, then ask the Agent about something said at the beginning; if it can retrieve the detail via `lcm_expand`, the setup is working.
Suitable for you if:
- ✅ You frequently have long conversations and want to avoid losing original information when exceeding context window
- ✅ You only need within-session compression and are happy to rely on OpenClaw's native mechanism across sessions
- ✅ Pure local, no additional services needed
Solution 2: qmd — OpenClaw Native Hybrid Search Memory Backend
Project URL: https://github.com/tobi/qmd
Official docs: https://docs.openclaw.ai/concepts/memory#qmd-backend-experimental
One-line positioning: OpenClaw natively integrated local hybrid search backend, replacing built-in SQLite indexer with BM25+vector+reranking triple hybrid search.
Core Features
- Native integration: Directly integrated as OpenClaw memory backend, no separate MCP configuration needed
- Full local stack: Three GGUF models (embedding + reranking + query expansion) run locally, no cloud services needed
- Triple hybrid search: BM25 keywords + vector semantics + LLM reranking, much more accurate than single search methods
- Automatic fallback: Automatically falls back to built-in SQLite indexer when qmd fails, ensuring memory tools always work
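Conceptually, the three stages compose like this toy sketch (qmd's real pipeline and fusion method aren't documented here; reciprocal-rank fusion is just one common way to merge ranked lists):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal-rank fusion: merge several ranked lists of document ids."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

def hybrid_search(bm25_ranked, vector_ranked, rerank_fn, max_results=6):
    """BM25 keywords + vector semantics fused, then reranked: three stages."""
    candidates = rrf_fuse([bm25_ranked, vector_ranked])
    return rerank_fn(candidates)[:max_results]
```

Here `rerank_fn` stands in for the local reranking model; it receives the fused candidate ids and returns them reordered.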
Prerequisites
qmd runs on the Bun runtime, so install Bun first (https://bun.sh).
Install qmd
Follow the install instructions in the project README linked above.
Configure in OpenClaw
Edit ~/.openclaw/openclaw.json, add the following configuration:
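Putting the parameters from the table below into one block, a sketch of the configuration might look like this (the `agents.defaults.memory` nesting is taken from this article's checklist, and the note paths are placeholders for your own directories):

```json
{
  "agents": {
    "defaults": {
      "memory": {
        "backend": "qmd",
        "qmd": {
          "citations": "auto",
          "includeDefaultMemory": true,
          "update": { "interval": "5m" },
          "limits": { "maxResults": 6 },
          "paths": ["~/notes", "~/projects/docs"]
        }
      }
    }
  }
}
```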
Configuration explanation:
| Parameter | Recommended Value | Description |
|---|---|---|
| `backend` | `"qmd"` | Enable qmd as the memory backend |
| `citations` | `"auto"` | Automatically add source citations to search results |
| `includeDefaultMemory` | `true` | Automatically index MEMORY.md and memory/**/*.md |
| `update.interval` | `"5m"` | Refresh the index every 5 minutes |
| `limits.maxResults` | 6 | Return at most 6 search results |
| `paths[]` | custom | Additional Markdown note directories |
Verify configuration
Restart OpenClaw and ask the Agent to search your notes. Note that because of the automatic SQLite fallback, a working search alone doesn't guarantee qmd itself is serving the results.
Manually pre-download models (optional)
qmd downloads its models automatically on the first search; if you'd rather pre-download them, use the command given in the qmd README.
Recommended configuration for multilingual users
Swap the default models for multilingual variants (see the qmd README for the exact model names). After changing models, regenerate the index so existing embeddings are rebuilt with the new model.
Suitable for you if:
- ✅ You need OpenClaw native integrated memory search, not external MCP service
- ✅ You have large amounts of personal Markdown notes/technical documents and want AI to search and cite them
- ✅ You prefer pure local solutions, not dependent on cloud services
- ✅ You need hybrid search (keywords + semantics + reranking), more accurate than single vector search
Solution 3: Nowledge Mem — Local-First Structured Cross-Session Memory
Website: https://mem.nowledge.co/zh/docs/integrations/openclaw
One-line positioning: Local-first structured knowledge memory, lets AI remember decisions and preferences you've made, automatically recalls across sessions.
Core Features
- Structured memory: each memory is tagged with a type (fact/decision/preference/plan/…), a timestamp, and source associations, not just text piled together
- Knowledge evolution: when your thinking changes, a new version is added while history is preserved, so you can see how ideas evolved
- Fully automatic: `autoRecall` inserts relevant memories at session start; `autoCapture` saves and extracts memories at session end
- Local-first: defaults to local storage, no cloud account needed, privacy fully under your control
- Cross-tool sharing: Cursor/Claude/OpenClaw share the same memory library; knowledge doesn't belong to any single tool
Installation and Configuration Steps
1. Install Nowledge Mem and start
Follow official instructions: https://mem.nowledge.co/zh/docs/install to complete nmem CLI installation.
Verify that the `nmem` CLI responds before continuing.
2. Install OpenClaw plugin
Install the `openclaw-nowledge-mem` plugin following the integration guide linked above.
3. Configure OpenClaw
Edit ~/.openclaw/openclaw.json, replace memory slot:
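A sketch of the slot replacement plus the options explained below. The slot name matches this article's checklist, but the exact nesting of the three options is an assumption, so check the integration guide for the authoritative shape:

```json
{
  "plugins": {
    "slots": {
      "memory": "openclaw-nowledge-mem"
    }
  },
  "memory": {
    "autoRecall": true,
    "autoCapture": false,
    "maxRecallResults": 8
  }
}
```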
Configuration explanation:
| Parameter | Recommended Value | Description |
|---|---|---|
| `autoRecall` | `true` | Automatically recall relevant memories and insert them into context at session start |
| `autoCapture` | `false` | Auto-save at session end; `false` means memories are only saved when you use /remember, the recommended starting point for beginners |
| `maxRecallResults` | 5-12 | How many results to retrieve and insert into context |
4. Restart OpenClaw and verify
Save a memory with /remember, start a new session with /new, and check that the Agent recalls it.
Suitable for you if:
- ✅ Your biggest pain point is “AI forgets what I decided next session”
- ✅ You want AI to remember your preferences, decisions, facts, new sessions automatically bring context
- ✅ You prefer local-first, don't want forced cloud sync
- ✅ You need structured knowledge, not just storing text chunks
Solution 4: mem9.ai — Cloud Persistent Memory Infrastructure
Website: https://mem9.ai/
One-line positioning: Cloud-hosted persistent memory infrastructure, providing your Agent with cross-device cross-session cloud storage.
Core Features
- Zero operations: Works out of the box, no need to set up services yourself, create persistent backend in seconds
- Progressive hybrid search: Start with pure keywords, add embeddings to automatically upgrade to hybrid search, no need to rebuild index
- Cross-Agent sharing: Multiple Agents share memory, cloud sync consistent
- Open source and self-hostable: Apache 2.0 licensed; if you'd rather not use the official cloud, you can host it yourself
Installation and Configuration Steps
Follow official instructions: https://mem9.ai/SKILL.md
Quick steps:
- Register account at https://mem9.ai to get API key
- Install the plugin in OpenClaw, using the command given in SKILL.md
- Add your API key to an environment variable or the OpenClaw configuration, then restart for it to take effect
Suitable for you if:
- ✅ You use OpenClaw on multiple devices, need memory to follow you
- ✅ You don't want to deal with infrastructure, want cloud to manage everything
- ✅ You need multiple Agents to share memory
- ✅ You accept cloud storage, trust service provider's security solution
One-Minute Selection Comparison Table
| Your need | Recommended solution | Notes |
|---|---|---|
| No information loss in long single-session conversations | lossless-claw | A must-install; solves the context-overflow problem |
| Cross-session remember decisions/preferences | Nowledge Mem | Local-first, structured, recommended for most individual users |
| Multi-device sync + zero operations | mem9.ai | Cloud-hosted, convenient |
| Search personal Markdown notes/documents | qmd | OpenClaw native integration, no additional services |
| Want everything | Install all | lossless (single-session compression) + Nowledge/qmd (cross-session/documents) don't conflict, work best together |
Recommended Combination Solutions
Individual developer/Heavy note user (completely local):
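The checklist below collapses into a single sketch of `~/.openclaw/openclaw.json` (key paths follow the checklist; the note directory is a placeholder):

```json
{
  "plugins": {
    "slots": { "contextEngine": "lossless-claw" }
  },
  "agents": {
    "defaults": {
      "memory": {
        "backend": "qmd",
        "qmd": { "paths": ["~/notes"] }
      }
    }
  }
}
```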
This is the most concise local full-stack solution, data entirely on your own machine.
Configuration checklist:
- `plugins.slots.contextEngine` = lossless-claw
- `agents.defaults.memory.backend` = `"qmd"`
- `agents.defaults.memory.qmd.paths` configured with your note directories
Users needing cross-session memory:
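As a sketch, the checklist below corresponds to a configuration like this (same caveats: key paths come from the checklist and are not verified against each plugin's docs):

```json
{
  "plugins": {
    "slots": {
      "contextEngine": "lossless-claw",
      "memory": "openclaw-nowledge-mem"
    }
  },
  "agents": {
    "defaults": {
      "memory": { "backend": "qmd" }
    }
  }
}
```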
Configuration checklist:
- `plugins.slots.contextEngine` = lossless-claw
- `plugins.slots.memory` = openclaw-nowledge-mem
- `agents.defaults.memory.backend` = `"qmd"`
Multi-device mobile user:
Use mem9.ai as the cloud memory backend. Convenient: you can pick up the conversation on whichever device you open.
Verify Correct Configuration
After installing all components, run through this checklist:
- lossless-claw: start a long conversation, chat until compression triggers, then ask the AI about something said at the beginning; if it can retrieve it via `lcm_expand`, it's OK
- Nowledge Mem: save one memory with /remember, start a new session with /new, and ask whether it can recall it
- qmd: ask the AI to “search my notes for content about XXX” and check that it returns the right results
All pass, your OpenClaw is now an AI assistant with “photographic memory” 🎉
What's Next: Looking Forward to PostgreSQL Native Memory Backend
If you're a PostgreSQL heavy user like me, here's something to look forward to: the OpenClaw community is discussing a native PostgreSQL + pgvector memory backend (Issue #15093).
Why is it worth looking forward to?
The current qmd solution is powerful, but has pain points:
- qmd depends on Bun + GGUF models, which means higher resource usage
- The subprocess → CLI → SQLite → GGUF model chain has too many failure points
- Local models perform poorly on VPS/container environments
Advantages of PostgreSQL native backend:
- ✅ Zero dependencies: a direct database connection via the `pg` package, no subprocess needed
- ✅ Reuse existing infrastructure: many users already run PostgreSQL, so there's nothing extra to deploy
- ✅ Production-grade reliability: pgvector is battle-tested at scale
- ✅ Multi-instance sharing: multiple OpenClaw instances can share the same memory database
- ✅ Hybrid search: pgvector + tsvector for dual keyword + vector retrieval
- ✅ Debug-friendly: inspect state directly with `psql`, no SQLite parsing needed
- ✅ Massive vector support: VectorChord handles tens of millions of vectors seamlessly, and its IVF + RaBitQ architecture enables low-cost billion-scale retrieval
- ✅ High-performance full-text search: VectorChord-bm25 replaces native tsvector, and its Block-WeakAnd algorithm significantly improves BM25 ranking query performance
- ✅ Professional tokenization: the integrated pg_tokenizer.rs multilingual tokenizer supports Chinese and other language-specific tokenization
Configuration preview (proposed design):
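Since the backend is still at the RFC stage, any configuration is speculative; a sketch that mirrors the shape of the qmd backend (every key name here is a guess) might look like:

```json
{
  "agents": {
    "defaults": {
      "memory": {
        "backend": "postgres",
        "postgres": {
          "connectionString": "postgres://localhost:5432/openclaw_memory"
        }
      }
    }
  }
}
```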
Status: RFC stage, awaiting maintainer feedback. If you also want this feature, go to the Issue and leave a 👍 to show support.
Once merged, I'll write a practical guide on “migrating from qmd to PostgreSQL memory backend”.
Related Reading
- Why I Only Bet on PostgreSQL for Agent Projects: Vector Search - Deep dive into vector retrieval's role in Agent applications
- Ins and Outs of Vector Databases - Vector search technology fundamentals
This article is based on the solutions currently available in the OpenClaw ecosystem; additions and updates are welcome as new projects appear.