
bench: add distilled-knowledge retrieval corpus (30 docs, 30 QA pairs) #1111

Open

LavaDMan wants to merge 2 commits into MemPalace:develop from LavaDMan:bench/distilled-knowledge

Conversation

@LavaDMan

Summary

All existing benchmarks test episodic memory — conversational recall
(LongMemEval, LoCoMo, MemBench, ConvoMem). This PR adds a benchmark for a
different memory category that currently has no coverage:

Semantic / procedural memory — retrieving engineering constraints,
architectural decisions, and system invariants from prose documents. The kind of
knowledge engineers distill into runbooks, architecture decision records, and
internal wikis.

What's in this PR

  • benchmarks/distilled_knowledge_bench.py — self-contained benchmark script
    (no external data download; corpus embedded in the file)
  • benchmarks/distilled_knowledge/README.md — methodology and results
  • benchmarks/BENCHMARKS.md — updated to document the new benchmark

Corpus: 30 prose documents covering cross-cutting engineering constraints:
concurrency / locking, database driver gotchas, package security gates, process
isolation, lifecycle state management, and error handling edge cases.

QA pairs: 30 paraphrased queries. Question wording deliberately avoids the
exact vocabulary in the target document to test semantic similarity, not keyword
matching. Each pair includes a paraphrase_note documenting the specific
vocabulary shift.
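
For concreteness, here is a sketch of what one embedded QA pair might look like. Only the paraphrase_note field is named in this PR; the other keys, the ID scheme, and the example content are illustrative assumptions, not actual corpus entries:

```python
# Hypothetical shape of one embedded QA pair. Only `paraphrase_note`
# is named in the PR -- the other keys and values are illustrative.
QA_PAIR = {
    "doc_id": "concurrency-file-locks",   # assumed ID scheme
    "question": "Why not guard the GPU with an in-process lock?",
    "paraphrase_note": (
        "Query says 'in-process lock' instead of the document's "
        "'asyncio.Lock', and 'GPU' instead of 'VRAM'."
    ),
}
```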

Results on raw ChromaDB baseline

Measured with MemPal v3.3.x, ChromaDB ephemeral client, default
all-MiniLM-L6-v2 embeddings:

Metric     Score   Count
Recall@1   76.7%   23/30
Recall@5   100%    30/30

Recall@5 is perfect — every correct document appears in the top-5. The ~23pp
gap to Recall@1 is consistent with the pattern seen on LongMemEval and LoCoMo;
hybrid retrieval or reranking would likely push R@1 into the 90s.
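
As a minimal sketch of how Recall@k can be measured against this baseline, assuming ChromaDB's ephemeral client and its default embedding function (all-MiniLM-L6-v2). The toy corpus variables below are placeholders; the real 30 documents and QA pairs are embedded in benchmarks/distilled_knowledge_bench.py:

```python
import chromadb

# Placeholder corpus -- stand-ins for the embedded docs and QA pairs.
DOCS = {"doc-1": "File locks guard VRAM access across worker processes..."}
QA = [("Why not an in-process lock for the GPU?", "doc-1")]

client = chromadb.EphemeralClient()            # in-memory, no persistence
coll = client.create_collection("distilled")   # default embedder: all-MiniLM-L6-v2
coll.add(ids=list(DOCS), documents=list(DOCS.values()))

def recall_at_k(k: int) -> float:
    """Fraction of queries whose target doc appears in the top-k results."""
    hits = 0
    for question, expected_id in QA:
        res = coll.query(query_texts=[question], n_results=k)
        hits += expected_id in res["ids"][0]   # ids are returned per-query
    return hits / len(QA)

print(f"Recall@1 = {recall_at_k(1):.1%}, Recall@5 = {recall_at_k(5):.1%}")
```

With a 30-document corpus, n_results=5 returns five candidates per query, so Recall@5 counts a hit whenever the target document appears anywhere among them.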

How to run

# No data download needed
python benchmarks/distilled_knowledge_bench.py
python benchmarks/distilled_knowledge_bench.py --top-k 1 --verbose

Relation to existing benchmarks

This benchmark is additive; it complements the existing suite rather than critiquing it:

Benchmark                                     Memory type   Query style
LongMemEval / LoCoMo / MemBench / ConvoMem    Episodic      "What did X say?"
This PR                                       Semantic      "How does X work?"

Real production systems accumulate both types. A memory system that handles
episodic recall perfectly but cannot retrieve "why do we use file locks instead
of asyncio.Lock?" is incomplete for agentic use cases.

🤖 Generated with Claude Code

A follow-up commit replaces internal names with generic equivalents:
- injection_scan_score → security_scan_score
- vram_mutex parenthetical removed (Redis lock kept)
- complete_mandate_task → a completion handler
- authorized_ring filter → access-ring filter

Engineering principles remain intact; no AEGIS project name changes.
