Ever had this experience? You chat with OpenClaw for hours, the context exceeds its limit, and old messages get truncated. The next time you restart the session, the AI has completely forgotten earlier decisions; scattered technical notes are impossible to find; and your memory disappears the moment you switch devices.

As a flexible Agent framework, OpenClaw has grown an ecosystem of memory solutions, each solving a different problem. This article maps the landscape and helps you find the right fit in one minute.


What Problems Does Memory Actually Need to Solve?

Let's break down the requirements. AI memory actually spans three distinct dimensions:

  • Single-session memory: conversations exceed the model's context window, and direct truncation loses information. Approach: compress old messages to keep the context within the window while minimizing information loss.
  • Cross-session memory: after restarting a session or switching devices, the AI completely forgets previous conversations and decisions. Approach: persist history and automatically retrieve relevant content when needed.
  • External knowledge base: personal notes and technical documents need to be searchable by the AI and woven into conversations. Approach: build a document index that supports hybrid search retrieval.

Different solutions focus on different aspects. Let's go through them one by one, with detailed configuration steps.


Solution 1: lossless-claw — Single-Session Lossless Compression

Project URL: https://github.com/Martian-Engineering/lossless-claw

One-line positioning: Replaces OpenClaw's native sliding-window truncation with DAG-based hierarchical compression, so no original information is lost.

Core Features

  • LCM (Lossless Context Management) algorithm: old messages aren't deleted outright but hierarchically compressed into summaries, forming a Directed Acyclic Graph (DAG)
  • Originals preserved forever: all original messages are stored in SQLite; the Agent can use lcm_grep / lcm_expand to search and expand them whenever details are needed
  • Works out of the box: after installation it fills the contextEngine slot and runs automatically; users don't need to manage it
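Conceptually, the DAG works like this: when compression triggers, a batch of old messages is summarized into a node that records links back to its children, so the originals stay reachable. The sketch below is a hypothetical illustration; the Node shape and the summarize stub are invented, not lossless-claw source:

```typescript
// Hypothetical sketch of DAG-style hierarchical compression (NOT actual
// lossless-claw source; the Node shape and summarize stub are invented).
type Node = { id: number; text: string; children: number[] };

// Stand-in summarizer: a real implementation would call an LLM here.
const summarize = (texts: string[]): string =>
  `[summary of ${texts.length} messages]`;

// Compress everything except the fresh tail into one summary node that
// keeps links back to its children, so originals stay reachable
// (this is what lets a tool like lcm_expand recover details).
function compress(nodes: Node[], freshTail: number): Node[] {
  if (nodes.length <= freshTail) return nodes;
  const old = nodes.slice(0, nodes.length - freshTail);
  const tail = nodes.slice(nodes.length - freshTail);
  const summary: Node = {
    id: Math.max(...nodes.map(n => n.id)) + 1,
    text: summarize(old.map(n => n.text)),
    children: old.map(n => n.id),
  };
  return [summary, ...tail];
}
```

Repeatedly compacting summary nodes themselves produces deeper levels of the DAG, which is how hierarchical (rather than one-shot) compression arises.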

Installation and Configuration Steps

# 1. Install via OpenClaw plugin
openclaw plugins install @martian-engineering/lossless-claw

# If you're running OpenClaw from source
pnpm openclaw plugins install @martian-engineering/lossless-claw

The installation command automatically configures the slot. If you need manual configuration, edit ~/.openclaw/openclaw.json:

{
  "plugins": {
    "slots": {
      "contextEngine": "lossless-claw"
    }
  }
}

Recommended configuration (environment variable approach):

# Add to your ~/.profile or ~/.zshrc
export LCM_FRESH_TAIL_COUNT=32
export LCM_INCREMENTAL_MAX_DEPTH=-1
export LCM_CONTEXT_THRESHOLD=0.75

  • LCM_FRESH_TAIL_COUNT = 32: don't compress the most recent N messages; keep their original content
  • LCM_INCREMENTAL_MAX_DEPTH = -1: -1 means unlimited hierarchical compression depth
  • LCM_CONTEXT_THRESHOLD = 0.75: trigger compression once 75% of the context window is in use
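To make the interplay of these knobs concrete, here is a toy sketch of the trigger logic under my reading of the parameter descriptions (illustrative only, not the plugin's actual code):

```typescript
// Toy reading of how the LCM knobs interact (illustrative only, not the
// plugin's real logic).
function shouldCompress(
  tokensUsed: number,
  contextWindow: number,
  threshold = 0.75, // LCM_CONTEXT_THRESHOLD
): boolean {
  return tokensUsed / contextWindow >= threshold;
}

// LCM_FRESH_TAIL_COUNT: the newest N messages are exempt from compression.
function splitFreshTail<T>(messages: T[], freshTailCount = 32) {
  const cut = Math.max(0, messages.length - freshTailCount);
  return { compressible: messages.slice(0, cut), fresh: messages.slice(cut) };
}
```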

Restart OpenClaw to take effect. Verify:

openclaw plugins list
# Should see lossless-claw enabled

Suitable for you if:

  • ✅ You frequently have long conversations and want to avoid losing original information when the context window overflows
  • ✅ You only need single-session compression; cross-session persistence still relies on OpenClaw's native mechanisms
  • ✅ You want a purely local solution with no additional services

Solution 2: qmd — OpenClaw Native Hybrid Search Memory Backend

Project URL: https://github.com/tobi/qmd

Official docs: https://docs.openclaw.ai/concepts/memory#qmd-backend-experimental

One-line positioning: A local hybrid search backend natively integrated into OpenClaw, replacing the built-in SQLite indexer with triple hybrid search: BM25 + vectors + reranking.

Core Features

  • Native integration: Directly integrated as OpenClaw memory backend, no separate MCP configuration needed
  • Full local stack: Three GGUF models (embedding + reranking + query expansion) run locally, no cloud services needed
  • Triple hybrid search: BM25 keywords + vector semantics + LLM reranking, much more accurate than single search methods
  • Automatic fallback: Automatically falls back to built-in SQLite indexer when qmd fails, ensuring memory tools always work
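For intuition on the hybrid-merge step, Reciprocal Rank Fusion (RRF) is one standard way to combine a BM25 ranking with a vector ranking before reranking. Whether qmd uses RRF specifically is an assumption on my part; the sketch only illustrates the general idea:

```typescript
// Reciprocal Rank Fusion: merge ranked lists (e.g. a BM25 list and a
// vector-similarity list) by summing 1/(k + rank) per document.
// Whether qmd uses RRF specifically is an assumption.
function rrf(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((doc, rank) => {
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);
}
```

A document ranked high by either method floats to the top, which is why hybrid retrieval beats any single method on mixed keyword/semantic queries.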

Prerequisites

# 1. Install extension-enabled SQLite (macOS)
brew install sqlite

# 2. Install Bun (if not already installed)
curl -fsSL https://bun.sh/install | bash

Install qmd

# Global qmd installation (note: use GitHub repo URL, not npm package)
bun install -g https://github.com/tobi/qmd

# Or download release directly
# https://github.com/tobi/qmd/releases

Configure in OpenClaw

Edit ~/.openclaw/openclaw.json, add the following configuration:

{
  "agents": {
    "defaults": {
      "memory": {
        "backend": "qmd",
        "citations": "auto",
        "qmd": {
          "includeDefaultMemory": true,
          "update": {
            "interval": "5m",
            "debounceMs": 15000
          },
          "limits": {
            "maxResults": 6,
            "timeoutMs": 4000
          },
          "scope": {
            "default": "deny",
            "rules": [
              { "action": "allow", "match": { "chatType": "direct" } }
            ]
          },
          "paths": [
            { "name": "notes", "path": "~/notes", "pattern": "**/*.md" },
            { "name": "docs", "path": "~/project/docs", "pattern": "**/*.md" }
          ]
        }
      }
    }
  }
}

Configuration explanation:

  • backend = "qmd": enable qmd as the memory backend
  • citations = "auto": automatically add source citations to search results
  • qmd.includeDefaultMemory = true: automatically index MEMORY.md and memory/**/*.md
  • qmd.update.interval = "5m": refresh the index every 5 minutes
  • qmd.limits.maxResults = 6: return at most 6 search results
  • qmd.paths[]: additional Markdown note directories to index
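The scope block deserves a note: it gates which chats may query memory. A plausible reading of its semantics, with default "deny" plus an allow rule for direct chats, looks like this (my interpretation, not OpenClaw source):

```typescript
// My interpretation of scope semantics: rules are checked in order, the
// first match wins, and the default applies when nothing matches.
type Rule = { action: "allow" | "deny"; match: { chatType?: string } };

function allowed(
  chatType: string,
  rules: Rule[],
  fallback: "allow" | "deny",
): boolean {
  for (const rule of rules) {
    // A rule with no chatType constraint matches every chat.
    if (!rule.match.chatType || rule.match.chatType === chatType) {
      return rule.action === "allow";
    }
  }
  return fallback === "allow";
}
```

Under this reading, the example config lets direct chats search memory while group chats fall through to the deny default.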

Verify configuration

# Restart OpenClaw
openclaw restart

# Check if qmd is running properly
openclaw status
# Should show memory backend: qmd

Manually pre-download models (optional)

qmd will automatically download models on first search, but you can pre-download manually:

# Use the same XDG config as OpenClaw
STATE_DIR="${OPENCLAW_STATE_DIR:-$HOME/.openclaw}"
export XDG_CONFIG_HOME="$STATE_DIR/agents/main/qmd/xdg-config"
export XDG_CACHE_HOME="$STATE_DIR/agents/main/qmd/xdg-cache"

# Force refresh index + generate embeddings
qmd update
qmd embed

# Warm up / trigger first-time model downloads
qmd query "test" -c memory-root --json >/dev/null 2>&1

Switch the embedding model (optional):

# Add to ~/.profile; Qwen3-Embedding supports 119 languages
export QMD_EMBED_MODEL="hf:Qwen/Qwen3-Embedding-0.6B-GGUF/Qwen3-Embedding-0.6B-Q8_0.gguf"

After changing the model, regenerate the index:

# Use the XDG config above, then execute
qmd embed --force

Suitable for you if:

  • ✅ You want memory search natively integrated into OpenClaw rather than an external MCP service
  • ✅ You have a large body of personal Markdown notes and technical documents you want the AI to search and cite
  • ✅ You prefer a purely local solution with no dependence on cloud services
  • ✅ You need hybrid search (keywords + semantics + reranking), which is more accurate than vector search alone

Solution 3: Nowledge Mem — Local-First Structured Cross-Session Memory

Website: https://mem.nowledge.co/zh/docs/integrations/openclaw

One-line positioning: A local-first structured knowledge memory that lets the AI remember the decisions and preferences you've made, with automatic recall across sessions.

Core Features

  • Structured memory: each memory is tagged with a type (fact/decision/preference/plan/…), a timestamp, and source associations, rather than being a pile of raw text
  • Knowledge evolution: when your understanding changes, a new version is added while history is preserved, so you can see how ideas evolved
  • Fully automatic: autoRecall inserts relevant memories at session start; autoCapture extracts and saves them at session end
  • Local-first: defaults to local storage, no cloud account needed; privacy stays fully under your control
  • Cross-tool sharing: Cursor, Claude, and OpenClaw share the same memory library; knowledge doesn't belong to any single tool
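A minimal sketch of what "structured, versioned memory" can look like as data. The shape below is my illustration; Nowledge Mem's real schema may differ:

```typescript
// Illustrative data model for structured, versioned memories. The shape is
// my sketch; Nowledge Mem's actual schema may differ.
type MemoryKind = "fact" | "decision" | "preference" | "plan";

interface MemoryVersion { text: string; at: string }

interface Memory {
  kind: MemoryKind;
  sources: string[];          // e.g. the sessions this memory came from
  versions: MemoryVersion[];  // newest last; history is never deleted
}

// "Knowledge evolution": revising a memory appends a version rather than
// overwriting the old one.
function revise(mem: Memory, text: string, at: string): Memory {
  return { ...mem, versions: [...mem.versions, { text, at }] };
}

const current = (mem: Memory): string =>
  mem.versions[mem.versions.length - 1].text;
```

The key design choice is append-only versions: you can always answer both "what do we believe now?" and "what did we believe in January, and why did it change?".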

Installation and Configuration Steps

1. Install Nowledge Mem and start

Follow the official instructions at https://mem.nowledge.co/zh/docs/install to install the nmem CLI and start the service.

Verify:

nmem status
# Should show Nowledge Mem is running

2. Install OpenClaw plugin

openclaw plugins install @nowledge/openclaw-nowledge-mem

3. Configure OpenClaw

Edit ~/.openclaw/openclaw.json, replace memory slot:

{
  "plugins": {
    "slots": {
      "memory": "openclaw-nowledge-mem"
    },
    "entries": {
      "openclaw-nowledge-mem": {
        "enabled": true,
        "config": {
          "autoRecall": true,
          "autoCapture": false,
          "maxRecallResults": 5
        }
      }
    }
  }
}

Configuration explanation:

  • autoRecall = true: automatically recall relevant memories and insert them into context at session start
  • autoCapture = false: auto-save at session end; false means memories are saved only when you use /remember, the recommended starting point for beginners
  • maxRecallResults = 5 to 12: how many results to retrieve and insert into context
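Conceptually, autoRecall retrieves the top maxRecallResults memories and prepends them to the session context. A minimal sketch, with an invented prompt format:

```typescript
// Sketch of autoRecall's effect: inject the top-N relevant memories into
// the context at session start. The prompt format here is invented.
function formatRecall(
  memories: { text: string; score: number }[],
  maxResults = 5, // maxRecallResults
): string {
  const top = [...memories]
    .sort((a, b) => b.score - a.score)
    .slice(0, maxResults);
  if (top.length === 0) return "";
  return "Relevant memories:\n" + top.map(m => `- ${m.text}`).join("\n");
}
```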

4. Restart OpenClaw and verify

# Verify configuration
openclaw nowledge-mem status
# Should show "Nowledge Mem is accessible"

# Test workflow:
# 1. /remember we chose PostgreSQL for task event database
# 2. /recall PostgreSQL
# 3. /new start new session
# 4. Ask: what database did we choose for task events?
# 5. If it can answer, configuration is successful

Suitable for you if:

  • ✅ Your biggest pain point is “the AI forgets next session what I decided”
  • ✅ You want the AI to remember your preferences, decisions, and facts, so new sessions automatically carry that context
  • ✅ You prefer local-first storage and don't want forced cloud sync
  • ✅ You need structured knowledge, not just stored text chunks

Solution 4: mem9.ai — Cloud Persistent Memory Infrastructure

Website: https://mem9.ai/

One-line positioning: Cloud-hosted persistent memory infrastructure that gives your Agent cross-device, cross-session storage.

Core Features

  • Zero operations: works out of the box; no services to set up yourself, and a persistent backend is created in seconds
  • Progressive hybrid search: start with pure keyword search; add embeddings and it automatically upgrades to hybrid search, with no index rebuild
  • Cross-Agent sharing: multiple Agents share the same memory, kept consistent via cloud sync
  • Open source, self-hostable: Apache 2.0 licensed; if you'd rather not use the official cloud, you can host it yourself
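The "progressive" part means documents without embeddings are still searchable by keywords, and gain semantic scoring the moment embeddings exist, with no index rebuild. A toy sketch of that idea (not mem9's actual code; the 50/50 blend and term-overlap scorer are placeholders):

```typescript
// Toy sketch of progressive hybrid search (not mem9's code): docs without
// embeddings are scored by keywords alone; docs with embeddings get a
// blended score. The 50/50 blend and term-overlap scorer are placeholders.
type Doc = { id: string; text: string; embedding?: number[] };

const cosine = (a: number[], b: number[]): number => {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
};

// Crude term-overlap score, standing in for a real BM25 implementation.
const keywordScore = (query: string, text: string): number =>
  query.toLowerCase().split(/\s+/)
    .filter(t => text.toLowerCase().includes(t)).length;

function score(doc: Doc, query: string, queryEmb?: number[]): number {
  const kw = keywordScore(query, doc.text);
  if (doc.embedding && queryEmb) {
    return 0.5 * kw + 0.5 * cosine(doc.embedding, queryEmb);
  }
  return kw; // keyword-only until embeddings exist
}
```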

Installation and Configuration Steps

Follow official instructions: https://mem9.ai/SKILL.md

Quick steps:

  1. Register account at https://mem9.ai to get API key
  2. Install plugin in OpenClaw:
# Follow SKILL.md instructions for one-step installation, mem9 provides automatic installation script
  3. Configure the API key as an environment variable or in the OpenClaw configuration, then restart for it to take effect.

Suitable for you if:

  • ✅ You use OpenClaw on multiple devices, need memory to follow you
  • ✅ You don't want to deal with infrastructure, want cloud to manage everything
  • ✅ You need multiple Agents to share memory
  • ✅ You accept cloud storage, trust service provider's security solution

One-Minute Selection Comparison Table

  • Lossless long conversations within a session → lossless-claw: a must-install; solves the context-overflow problem
  • Remembering decisions and preferences across sessions → Nowledge Mem: local-first and structured; recommended for most individual users
  • Multi-device sync with zero operations → mem9.ai: cloud-hosted and convenient
  • Searching personal Markdown notes and documents → qmd: native OpenClaw integration, no additional services
  • Everything at once → install them all: lossless-claw (single-session compression) and Nowledge Mem/qmd (cross-session/documents) don't conflict; they work best together

Individual developer / heavy note-taker (fully local):

lossless-claw  +  qmd
│                 └── OpenClaw native hybrid search memory backend
└──────────────────── Single-session lossless compression; long-conversation originals are never lost

This is the leanest fully local stack; your data stays entirely on your own machine.

Configuration checklist:

  • plugins.slots.contextEngine = lossless-claw
  • agents.defaults.memory.backend = "qmd"
  • agents.defaults.memory.qmd.paths configured with your note directories

Users needing cross-session memory:

lossless-claw  +  Nowledge Mem  +  qmd
│                 │                └── OpenClaw native hybrid search memory backend
│                 └─────────────────── Cross-session memory of decisions and preferences; local-first, structured
└───────────────────────────────────── Single-session lossless compression

Configuration checklist:

  • plugins.slots.contextEngine = lossless-claw
  • plugins.slots.memory = openclaw-nowledge-mem
  • agents.defaults.memory.backend = "qmd"

Multi-device mobile user:

lossless-claw  +  mem9.ai  +  qmd
│                 │           └── OpenClaw native hybrid search
│                 └────────────── Cloud cross-device memory sync, zero operations
└──────────────────────────────── Single-session compression

Convenient: you can pick up the conversation on whichever device you open.


Verify Correct Configuration

After installing all components, run through this checklist:

  1. lossless-claw: start a long conversation and keep chatting until compression triggers, then ask the AI whether it remembers what was said at the beginning; if it can retrieve it via lcm_expand, you're set
  2. Nowledge Mem: /remember something, /new to start a fresh session, then ask whether it recalls it
  3. qmd: ask the AI to “search my notes for content about XXX” and check whether it returns the right results

If all three pass, your OpenClaw is now an AI assistant with a “photographic memory” 🎉


What's Next: Looking Forward to PostgreSQL Native Memory Backend

If you're a PostgreSQL heavy user like me, here's something to look forward to: the OpenClaw community is discussing a native PostgreSQL + pgvector memory backend (Issue #15093).

Why is it worth looking forward to?

The current qmd solution is powerful, but has pain points:

  • qmd depends on Bun plus GGUF models, so resource usage is relatively high
  • The subprocess → CLI → SQLite → GGUF model chain has too many failure points
  • Local models perform poorly in VPS/container environments

Advantages of PostgreSQL native backend:

  • ✅ Zero dependencies: Direct pg package database connection, no subprocess needed
  • ✅ Reuse existing infrastructure: Many users already run PostgreSQL, no additional deployment
  • ✅ Production-grade reliability: pgvector is battle-tested at scale
  • ✅ Multi-instance sharing: Multiple OpenClaw instances can share the same memory database
  • ✅ Hybrid search: pgvector + tsvector for BM25 + vector dual retrieval
  • ✅ Debug-friendly: Inspect state directly with psql, no SQLite parsing needed
  • ✅ Massive vector support: Seamlessly handle tens of millions of vectors via VectorChord, IVF+RaBitQ architecture enables low-cost billion-scale retrieval
  • ✅ High-performance full-text search: Use VectorChord-bm25 to replace native tsvector, Block-WeakAnd algorithm significantly improves BM25 ranking query performance
  • ✅ Professional tokenization: Integrated pg_tokenizer.rs multilingual tokenizer, supports Chinese and other professional tokenization
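To see what vectorWeight/textWeight could mean in practice, here is one common convention: min-max normalize each result list, then blend. This is a sketch against the draft config's 0.7/0.3 weights; the RFC may specify different normalization:

```typescript
// Sketch of the proposed weighted hybrid scoring (vectorWeight 0.7 /
// textWeight 0.3 from the draft config). Min-max normalization per list is
// one common convention; the RFC may specify another.
function normalize(scores: Map<string, number>): Map<string, number> {
  const vals = [...scores.values()];
  const lo = Math.min(...vals);
  const span = (Math.max(...vals) - lo) || 1; // avoid divide-by-zero
  return new Map([...scores].map(([id, s]) => [id, (s - lo) / span] as [string, number]));
}

function hybridScore(
  vectorScores: Map<string, number>, // e.g. pgvector cosine similarities
  textScores: Map<string, number>,   // e.g. full-text rank scores
  vectorWeight = 0.7,
  textWeight = 0.3,
): Map<string, number> {
  const v = normalize(vectorScores);
  const t = normalize(textScores);
  const ids = new Set([...v.keys(), ...t.keys()]);
  return new Map([...ids].map(id =>
    [id, vectorWeight * (v.get(id) ?? 0) + textWeight * (t.get(id) ?? 0)] as [string, number]));
}
```

Normalizing per list matters because raw full-text rank scores and cosine similarities live on different scales; blending them unnormalized would let one signal silently dominate.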

Configuration preview (proposed design):

{
  "memory": {
    "backend": "postgres",
    "postgres": {
      "connectionString": "postgresql://user:pass@host:5432/dbname",
      "tablePrefix": "openclaw_memory",
      "embedding": {
        "provider": "voyage",
        "model": "voyage-3-lite",
        "dimensions": 512
      },
      "hybrid": {
        "enabled": true,
        "vectorWeight": 0.7,
        "textWeight": 0.3
      }
    }
  }
}

Status: RFC stage, awaiting maintainer feedback. If you also want this feature, go to the Issue and leave a 👍 to show support.

Once merged, I'll write a practical guide on “migrating from qmd to PostgreSQL memory backend”.



This article is compiled from existing solutions in the OpenClaw ecosystem. Contributions and updates are welcome as new projects appear.