# 04 Advanced Configuration
Henry edited this page Aug 23, 2025
Comprehensive guide for advanced MCP Memory Service configuration, integration patterns, and best practices.
❌ Bad: "Fixed the thing with the API"
✅ Good: "Fixed authentication timeout issue in /api/users endpoint by increasing JWT expiration to 24 hours"
❌ Bad: "Use this configuration"
✅ Good: "PostgreSQL connection pool configuration for production - handles 1000 concurrent connections"
❌ Bad: "Updated recently"
✅ Good: "Updated Python to 3.11.4 on 2025-07-20 to fix asyncio performance issue"
```markdown
# Meeting Notes - API Design Review
Date: 2025-07-20
Attendees: Team Lead, Backend Team

## Decisions:
- RESTful design for public API
- GraphQL for internal services

## Action Items:
- [ ] Create OpenAPI spec
- [ ] Set up API gateway
```
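A note like this can also be stored programmatically. The sketch below builds a store-ready payload in the shape this guide uses elsewhere (`content` plus `tags`); `build_meeting_memory` is a hypothetical helper, not part of the service API.

```python
from datetime import date

def build_meeting_memory(title, decisions, action_items, meeting_date=None):
    """Format meeting notes into a store-ready payload (hypothetical helper)."""
    meeting_date = meeting_date or date.today().isoformat()
    lines = [f"# Meeting Notes - {title}", f"Date: {meeting_date}", "## Decisions:"]
    lines += [f"- {d}" for d in decisions]
    lines.append("## Action Items:")
    lines += [f"- [ ] {item}" for item in action_items]
    return {"content": "\n".join(lines), "tags": ["meeting-notes", meeting_date]}

payload = build_meeting_memory(
    "API Design Review",
    decisions=["RESTful design for public API", "GraphQL for internal services"],
    action_items=["Create OpenAPI spec", "Set up API gateway"],
    meeting_date="2025-07-20",
)
```

The returned dict can be passed straight to the store call shown in later sections.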
Use consistent hierarchies:
```
project: project-alpha
component: project-alpha-frontend
specific: project-alpha-frontend-auth
```
- Project/Product: `project-name`, `product-x`
- Technology: `python`, `react`, `postgres`
- Type: `bug-fix`, `feature`, `documentation`
- Status: `completed`, `in-progress`, `blocked`
- Priority: `urgent`, `high`, `normal`, `low`
- Use lowercase: `python` not `Python`
- Use hyphens: `bug-fix` not `bug_fix`
- Be consistent: `postgresql` not `postgres`/`pg`/`psql`
- Avoid versions in tags: `python` not `python3.11`
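These conventions can be enforced in code before a tag ever reaches the store. A minimal normalizer sketch — the alias table is an assumption based on the `postgresql` example above; extend it to match your own taxonomy:

```python
import re

# Canonical replacements for known variants (assumed convention — extend as needed)
ALIASES = {"postgres": "postgresql", "pg": "postgresql", "psql": "postgresql"}

def normalize_tag(tag):
    """Lowercase, hyphenate, drop trailing version numbers, map aliases."""
    tag = tag.strip().lower().replace("_", "-").replace(" ", "-")
    tag = re.sub(r"[0-9]+(\.[0-9]+)*$", "", tag).rstrip("-")
    return ALIASES.get(tag, tag)
```

Running every tag through a function like this keeps `Python3.11`, `bug_fix`, and `pg` from fragmenting your tag space into near-duplicates.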
✅ "What did we decide about authentication last week?"
✅ "Show me all Python debugging sessions"
✅ "Find memories about database optimization"
```python
# Text search for broad matching
general_results = await memory.search("authentication")

# Tag search for precise filtering
tagged_results = await memory.search_by_tag(["auth", "security"])

# Time-based for recent context
recent_results = await memory.recall("last week")
```
Morning:
- Review yesterday's memories
- Tag any untagged entries
- Quick search test
Evening:
- Store key decisions/learnings
- Update task progress
- Run tag consolidation
- Archive completed project memories
- Review and improve poor tags
- Delete test/temporary memories
- Generate weekly summary
- Analyze tag usage statistics
- Merge redundant tags
- Update tagging guidelines
- Performance optimization check
- Backup important memories
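The tag-analysis steps above can be sketched as a small report generator. `tag_report` is a hypothetical helper that assumes memories are dicts carrying a `tags` list; rarely used tags become candidates for merging or cleanup:

```python
from collections import Counter

def tag_report(memories, min_count=2):
    """Summarize tag usage across memories (assumed shape: dicts with a 'tags' list).

    Tags used fewer than min_count times are flagged for review —
    candidates for merging into a canonical tag or deleting.
    """
    counts = Counter(tag for m in memories for tag in m.get("tags", []))
    return {
        "frequent": {t: c for t, c in counts.items() if c >= min_count},
        "review": sorted(t for t, c in counts.items() if c < min_count),
    }
```

Feed it the results of a broad search or export, then merge the `review` tags into their canonical equivalents.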
Automatically store commit information:
```bash
#!/bin/bash
# .git/hooks/post-commit
COMMIT_MSG=$(git log -1 --pretty=%B)
BRANCH=$(git branch --show-current)
FILES=$(git diff-tree --no-commit-id --name-only -r HEAD)

# Store in memory service
echo "Store commit memory: Branch: $BRANCH, Message: $COMMIT_MSG, Files: $FILES" | \
  mcp-memory-cli store --tags "git,commit,$BRANCH"
```
Create a command to store code snippets:
````javascript
// extension.js
vscode.commands.registerCommand('mcp.storeSnippet', async () => {
  const editor = vscode.window.activeTextEditor;
  if (!editor) return; // no active editor — nothing to store
  const selection = editor.document.getText(editor.selection);
  const language = editor.document.languageId;
  await mcpClient.storeMemory({
    content: `Code snippet:
\`\`\`${language}
${selection}
\`\`\``,
    tags: ['code-snippet', language, 'vscode']
  });
});
````
Store deployment information:
```yaml
# .github/workflows/deploy.yml
- name: Store Deployment Memory
  run: |
    MEMORY="Deployment to ${{ github.event.inputs.environment }}
    Version: ${{ github.sha }}
    Status: ${{ job.status }}
    Timestamp: $(date -u +"%Y-%m-%dT%H:%M:%SZ")"
    curl -X POST http://localhost:8080/memory/store \
      -H "Content-Type: application/json" \
      -d "{\"content\": \"$MEMORY\", \"tags\": [\"deployment\", \"${{ github.event.inputs.environment }}\"]}"
```
Daily summary automation:
```python
# daily_summary.py
import asyncio
from datetime import datetime

import schedule

async def daily_memory_summary():
    # Collect today's memories
    today = datetime.now().date()
    memories = await memory_service.recall("today")

    # Generate summary (extract_topics/count_completed are assumed helpers)
    summary = f"Daily Summary for {today}:\n"
    summary += f"- Total memories: {len(memories)}\n"
    summary += f"- Key topics: {extract_topics(memories)}\n"
    summary += f"- Completed tasks: {count_completed(memories)}\n"

    # Store summary
    await memory_service.store(
        content=summary,
        tags=["daily-summary", str(today)]
    )

# Schedule for 6 PM daily
schedule.every().day.at("18:00").do(lambda: asyncio.run(daily_memory_summary()))
```
Automatically capture important events:
```javascript
// error_logger.js
class MemoryErrorLogger {
  constructor(memoryService) {
    this.memory = memoryService;
  }

  async logError(error, context) {
    // Store error details
    await this.memory.store({
      content: `Error: ${error.message}
Stack: ${error.stack}
Context: ${JSON.stringify(context)}`,
      tags: ['error', 'automated', context.service]
    });

    // Check for similar errors
    const similar = await this.memory.search(`error ${error.message.split(' ')[0]}`);
    if (similar.length > 0) {
      console.log('Similar errors found:', similar.length);
    }
  }
}
```
Simple HTTP interface for memory operations:
```python
# Async views require Flask 2.0+ with the async extra: pip install "flask[async]"
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/memory/store', methods=['POST'])
async def store_memory():
    data = request.json
    result = await memory_service.store(
        content=data['content'],
        tags=data.get('tags', [])
    )
    return jsonify({"id": result.id})

@app.route('/memory/search', methods=['GET'])
async def search_memories():
    query = request.args.get('q')
    results = await memory_service.search(query)
    return jsonify([r.to_dict() for r in results])
```
Trigger memory storage from external services:
```javascript
// webhook_handler.js
app.post('/webhook/github', async (req, res) => {
  const { action, pull_request, repository } = req.body;

  if (action === 'closed' && pull_request.merged) {
    await memoryService.store({
      content: `PR Merged: ${pull_request.title}
Repo: ${repository.name}
Files changed: ${pull_request.changed_files}`,
      tags: ['github', 'pr-merged', repository.name]
    });
  }

  res.status(200).send('OK');
});
```
Automatically document decisions:
```python
from datetime import datetime

class DecisionLogger:
    def __init__(self, memory_service):
        self.memory = memory_service

    async def log_decision(self, decision_type, title, rationale, alternatives):
        content = f"""
Decision: {title}
Type: {decision_type}
Date: {datetime.now().isoformat()}
Rationale: {rationale}
Alternatives Considered:
{chr(10).join(f'- {alt}' for alt in alternatives)}
"""
        await self.memory.store(
            content=content,
            tags=['decision', decision_type, 'architecture']
        )
```
Broadcast important updates:
```python
async def share_team_update(update_type, content, team_members):
    # Store in memory with team visibility
    memory = await memory_service.store(
        content=f"Team Update ({update_type}): {content}",
        tags=['team-update', update_type, 'shared']
    )

    # Notify team members (example with Slack)
    for member in team_members:
        await notify_slack(
            channel=member.slack_id,
            message=f"New {update_type} update stored: {memory.id}"
        )
```
```python
# ❌ Inefficient: Multiple separate searches
results1 = await search("python")
results2 = await search("debugging")
results3 = await search("error")

# ✅ Efficient: Combined search
results = await search("python debugging error")
```
```python
# For browsing
results = await search(query, limit=10)

# For existence check
results = await search(query, limit=1)

# For analysis
results = await search(query, limit=50)
```
```python
# ❌ Individual operations
for memory in memories:
    await store_memory(memory)

# ✅ Batch operation
await store_memories_batch(memories)
```
❌ Don't store:
- Passwords or API keys
- Personal identification numbers
- Credit card information
- Private keys

✅ Store references instead:
"AWS credentials stored in vault under key 'prod-api-key'"
```python
# Tag sensitive memories
await store_memory(
    content="Architecture decision for payment system",
    tags=["architecture", "payments", "confidential"]
)

# Set expiration for temporary data
await store_memory(
    content="Temporary debug log",
    tags=["debug", "temporary"],
    metadata={"expires": "2025-08-01"}
)
```
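An expiration sweep can then act on that metadata. The sketch below only selects candidates; actual deletion would go through the service's delete API. It assumes memories are dicts with an `id` and an optional ISO-formatted `expires` entry in their metadata:

```python
from datetime import date

def expired_ids(memories, today=None):
    """Return ids of memories whose metadata['expires'] date has passed.

    Sketch only — the memory shape (id + metadata dict) is an assumption.
    """
    today = today or date.today()
    ids = []
    for m in memories:
        expires = m.get("metadata", {}).get("expires")
        if expires and date.fromisoformat(expires) < today:
            ids.append(m["id"])
    return ids
```

Run this on a schedule (for example, alongside the daily summary job) and pass the result to a bulk delete.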
- Regular automated backups
- Encrypted backup storage
- Test restore procedures
- Version control for configurations
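A minimal export step for the first two points might look like this sketch; encryption and off-host storage are left to your backup tooling:

```python
import json
from datetime import date
from pathlib import Path

def backup_memories(memories, dest_dir):
    """Dump a list of memory dicts to a dated JSON file and return its path.

    A minimal sketch — real backups should also be encrypted, shipped
    off-host, and restore-tested regularly.
    """
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    path = dest_dir / f"memories-{date.today().isoformat()}.json"
    path.write_text(json.dumps(memories, indent=2))
    return path
```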
```bash
# Store with tags
store "content" --tags "tag1,tag2"

# Search recent
search "query" --time "last week"

# Clean up
delete --older-than "6 months" --tag "temporary"

# Export important
export --tag "important" --format json
```
```python
# Decision tracking
f"Decision: {title} | Rationale: {why} | Date: {when}"

# Error documentation
f"Error: {message} | Solution: {fix} | Prevention: {how}"

# Learning capture
f"TIL: {concept} | Source: {where} | Application: {how}"
```
- Use Consistent Tagging: Establish tag conventions for automated entries
- Rate Limiting: Implement limits to prevent memory spam
- Error Handling: Always handle memory service failures gracefully
- Async Operations: Use async patterns to avoid blocking
- Batch Operations: Group related memories when possible
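Rate limiting and graceful error handling can be combined in one small wrapper. This is a sketch, assuming a `backend` object exposing `store(content, tags)`; writes that arrive too soon or that fail are dropped rather than allowed to break the caller:

```python
import time

class RateLimitedStore:
    """Wrap a memory backend with a minimum interval between writes.

    Sketch only — `backend` is assumed to expose store(content, tags).
    """

    def __init__(self, backend, min_interval=1.0, clock=time.monotonic):
        self.backend = backend
        self.min_interval = min_interval
        self.clock = clock
        self._last = None

    def store(self, content, tags):
        now = self.clock()
        if self._last is not None and now - self._last < self.min_interval:
            return False  # dropped: too soon after the previous write
        try:
            self.backend.store(content, tags)
        except Exception:
            return False  # fail gracefully; the caller's flow continues
        self._last = now
        return True
```

The boolean return lets automated callers log dropped writes without retry storms; swap the `return False` branches for queueing or backoff if losing entries is unacceptable.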
These advanced configurations will help you build a powerful, integrated memory system that grows more valuable over time.