Advanced Configuration

A comprehensive guide to advanced MCP Memory Service configuration, integration patterns, and best practices.

Table of Contents

  • Best Practices
  • Integration Patterns
  • Workflow Examples
  • Performance Optimization
  • Security Configuration
  • Quick Reference
  • Integration Best Practices

Best Practices

Memory Storage Guidelines

Write Clear, Searchable Content

❌ Bad: "Fixed the thing with the API"
✅ Good: "Fixed authentication timeout issue in /api/users endpoint by increasing JWT expiration to 24 hours"

Include Context and Specifics

❌ Bad: "Use this configuration"
✅ Good: "PostgreSQL connection pool configuration for production - handles 1000 concurrent connections"

❌ Bad: "Updated recently"  
✅ Good: "Updated Python to 3.11.4 on 2025-07-20 to fix asyncio performance issue"

Structure Long Content

# Meeting Notes - API Design Review
Date: 2025-07-20
Attendees: Team Lead, Backend Team

## Decisions:
- RESTful design for public API
- GraphQL for internal services

## Action Items:
- [ ] Create OpenAPI spec
- [ ] Set up API gateway

Tagging Strategy

Hierarchical Tagging

Use consistent hierarchies:

project: project-alpha
component: project-alpha-frontend
specific: project-alpha-frontend-auth
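
For example, a memory about auth work in the frontend can carry all three levels, so it surfaces in broad and narrow tag searches alike (a minimal sketch using the store call shown later on this page):

# Tag one memory at every level of the hierarchy
await memory_service.store(
    content="Switched project-alpha frontend auth to PKCE flow",
    tags=[
        "project-alpha",                # project level
        "project-alpha-frontend",       # component level
        "project-alpha-frontend-auth",  # specific level
    ]
)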

Standard Tag Categories

  1. Project/Product: project-name, product-x
  2. Technology: python, react, postgres
  3. Type: bug-fix, feature, documentation
  4. Status: completed, in-progress, blocked
  5. Priority: urgent, high, normal, low

Tag Naming Rules

  • Use lowercase: python not Python
  • Use hyphens: bug-fix not bug_fix
  • Be consistent: postgresql not postgres/pg/psql
  • Avoid versions in tags: python not python3.11
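
To keep automated entries consistent with these rules, normalize tags before storing them. A minimal sketch; the alias map and helper are illustrative, not part of the service API:

# Illustrative helper - extend the alias map for your own vocabulary
TAG_ALIASES = {
    "postgres": "postgresql",
    "pg": "postgresql",
    "psql": "postgresql",
    "py": "python",
}

def normalize_tag(tag: str) -> str:
    """Lowercase, hyphenate, and resolve known aliases."""
    tag = tag.strip().lower().replace("_", "-").replace(" ", "-")
    return TAG_ALIASES.get(tag, tag)

# normalize_tag("Bug_Fix") -> "bug-fix"; normalize_tag("Postgres") -> "postgresql"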

Search Optimization

Use Natural Language

✅ "What did we decide about authentication last week?"
✅ "Show me all Python debugging sessions"
✅ "Find memories about database optimization"

Combine Search Strategies

# Text search for broad matching
general_results = await memory.search("authentication")

# Tag search for precise filtering
tagged_results = await memory.search_by_tag(["auth", "security"])

# Time-based for recent context
recent_results = await memory.recall("last week")

Maintenance Routines

Daily (5 minutes)

Morning:
- Review yesterday's memories
- Tag any untagged entries
- Quick search test

Evening:
- Store key decisions/learnings
- Update task progress

Weekly (30 minutes)

  1. Run tag consolidation
  2. Archive completed project memories
  3. Review and improve poor tags
  4. Delete test/temporary memories
  5. Generate weekly summary
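
Step 4 lends itself to a small script. A rough sketch, assuming the search_by_tag call shown above and a hypothetical delete(memory_id) method (substitute your deployment's actual deletion API):

# weekly_cleanup.py - sketch only
async def weekly_cleanup():
    # Step 4: remove test/temporary memories
    stale = await memory_service.search_by_tag(["temporary", "test"])
    for memory in stale:
        await memory_service.delete(memory.id)  # hypothetical delete API
    print(f"Removed {len(stale)} temporary memories")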

Monthly (1 hour)

  1. Analyze tag usage statistics
  2. Merge redundant tags
  3. Update tagging guidelines
  4. Performance optimization check
  5. Backup important memories
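
For step 1, a frequency count over stored tags highlights merge candidates. A sketch; get_all_memories() stands in for however your deployment enumerates memories:

from collections import Counter

async def tag_usage_report():
    counts = Counter()
    for memory in await memory_service.get_all_memories():  # stand-in API
        counts.update(memory.tags)

    # Tags used only once or twice are usually typos or merge candidates
    for tag, count in counts.most_common():
        print(f"{tag}: {count}")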

Integration Patterns

Development Tool Integrations

Git Hooks Integration

Automatically store commit information:

#!/bin/bash
# .git/hooks/post-commit

COMMIT_MSG=$(git log -1 --pretty=%B)
BRANCH=$(git branch --show-current)
FILES=$(git diff-tree --no-commit-id --name-only -r HEAD)

# Store in memory service
echo "Store commit memory: Branch: $BRANCH, Message: $COMMIT_MSG, Files: $FILES" | \
  mcp-memory-cli store --tags "git,commit,$BRANCH"

VS Code Extension Pattern

Create a command to store code snippets:

// extension.js
vscode.commands.registerCommand('mcp.storeSnippet', async () => {
    const editor = vscode.window.activeTextEditor;
    if (!editor) {
        return; // no active editor, nothing to store
    }
    const selection = editor.document.getText(editor.selection);
    const language = editor.document.languageId;

    await mcpClient.storeMemory({
        content: `Code snippet:
\`\`\`${language}
${selection}
\`\`\``,
        tags: ['code-snippet', language, 'vscode']
    });
});

CI/CD Pipeline Integration

Store deployment information:

# .github/workflows/deploy.yml
- name: Store Deployment Memory
  run: |
    MEMORY="Deployment to ${{ github.event.inputs.environment }}
    Version: ${{ github.sha }}
    Status: ${{ job.status }}
    Timestamp: $(date -u +"%Y-%m-%dT%H:%M:%SZ")"

    # Build the payload with jq so the multi-line string is escaped as valid JSON
    PAYLOAD=$(jq -n \
      --arg content "$MEMORY" \
      --arg env "${{ github.event.inputs.environment }}" \
      '{content: $content, tags: ["deployment", $env]}')

    curl -X POST http://localhost:8080/memory/store \
      -H "Content-Type: application/json" \
      -d "$PAYLOAD"

Automation Patterns

Scheduled Memory Collection

Daily summary automation:

# daily_summary.py
import asyncio
import time
from datetime import datetime

import schedule

async def daily_memory_summary():
    # Collect today's memories
    today = datetime.now().date()
    memories = await memory_service.recall("today")

    # Generate summary (extract_topics and count_completed are
    # project-specific helpers, not part of the service API)
    summary = f"Daily Summary for {today}:\n"
    summary += f"- Total memories: {len(memories)}\n"
    summary += f"- Key topics: {extract_topics(memories)}\n"
    summary += f"- Completed tasks: {count_completed(memories)}\n"

    # Store summary
    await memory_service.store(
        content=summary,
        tags=["daily-summary", str(today)]
    )

# Schedule for 6 PM daily
schedule.every().day.at("18:00").do(lambda: asyncio.run(daily_memory_summary()))

# schedule only triggers jobs while this loop runs
while True:
    schedule.run_pending()
    time.sleep(60)

Event-Driven Memory Creation

Automatically capture important events:

// error_logger.js
class MemoryErrorLogger {
    constructor(memoryService) {
        this.memory = memoryService;
    }
    
    async logError(error, context) {
        // Store error details
        await this.memory.store({
            content: `Error: ${error.message}
Stack: ${error.stack}
Context: ${JSON.stringify(context)}`,
            tags: ['error', 'automated', context.service]
        });
        
        // Check for similar errors
        const similar = await this.memory.search(`error ${error.message.split(' ')[0]}`);
        if (similar.length > 0) {
            console.log('Similar errors found:', similar.length);
        }
    }
}

API Integration

REST API Wrapper

Simple HTTP interface for memory operations:

# Requires Flask 2.0+ installed with async support: pip install "flask[async]"
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/memory/store', methods=['POST'])
async def store_memory():
    data = request.json
    result = await memory_service.store(
        content=data['content'],
        tags=data.get('tags', [])
    )
    return jsonify({"id": result.id})

@app.route('/memory/search', methods=['GET'])
async def search_memories():
    query = request.args.get('q')
    results = await memory_service.search(query)
    return jsonify([r.to_dict() for r in results])
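
Calling the wrapper from a script is then straightforward (a sketch using the requests library, assuming Flask's default port 5000):

import requests

# Store a memory through the wrapper
resp = requests.post(
    "http://localhost:5000/memory/store",
    json={"content": "Rotated staging TLS certificate", "tags": ["ops", "tls"]},
)
print(resp.json()["id"])

# Search for it again
results = requests.get(
    "http://localhost:5000/memory/search", params={"q": "TLS certificate"}
).json()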

Webhook Integration

Trigger memory storage from external services:

// webhook_handler.js
app.post('/webhook/github', async (req, res) => {
    const { action, pull_request, repository } = req.body;
    
    if (action === 'closed' && pull_request.merged) {
        await memoryService.store({
            content: `PR Merged: ${pull_request.title}
Repo: ${repository.name}
Files changed: ${pull_request.changed_files}`,
            tags: ['github', 'pr-merged', repository.name]
        });
    }
    
    res.status(200).send('OK');
});

Workflow Examples

Documentation Workflow

Automatically document decisions:

# decision_logger.py
from datetime import datetime

class DecisionLogger:
    def __init__(self, memory_service):
        self.memory = memory_service

    async def log_decision(self, decision_type, title, rationale, alternatives):
        alternatives_list = "\n".join(f"- {alt}" for alt in alternatives)
        content = (
            f"Decision: {title}\n"
            f"Type: {decision_type}\n"
            f"Date: {datetime.now().isoformat()}\n\n"
            f"Rationale: {rationale}\n\n"
            f"Alternatives Considered:\n{alternatives_list}"
        )

        await self.memory.store(
            content=content,
            tags=['decision', decision_type, 'architecture']
        )
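
Logging a decision is then a single call (the values here are illustrative):

logger = DecisionLogger(memory_service)
await logger.log_decision(
    decision_type="api-design",
    title="Adopt GraphQL for internal services",
    rationale="Reduces over-fetching between microservices",
    alternatives=["REST with sparse fieldsets", "gRPC"],
)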

Team Knowledge Sharing

Broadcast important updates:

async def share_team_update(update_type, content, team_members):
    # Store in memory with team visibility
    memory = await memory_service.store(
        content=f"Team Update ({update_type}): {content}",
        tags=['team-update', update_type, 'shared']
    )
    
    # Notify team members (example with Slack)
    for member in team_members:
        await notify_slack(
            channel=member.slack_id,
            message=f"New {update_type} update stored: {memory.id}"
        )

Performance Optimization

Optimize Search Queries

# ❌ Inefficient: Multiple separate searches
results1 = await search("python")
results2 = await search("debugging")
results3 = await search("error")

# ✅ Efficient: Combined search
results = await search("python debugging error")

Use Appropriate Result Limits

# For browsing
results = await search(query, limit=10)

# For existence check
results = await search(query, limit=1)

# For analysis
results = await search(query, limit=50)

Batch Operations

# ❌ Individual operations
for memory in memories:
    await store_memory(memory)

# ✅ Batch operation
await store_memories_batch(memories)
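
If your deployment exposes only single-item stores, a client-side helper can still batch work while bounding concurrency. A sketch built on asyncio.gather; store_memory is the single-item call from the example above:

import asyncio

async def store_batch(memories, concurrency=10):
    # Cap in-flight requests so a large batch doesn't overwhelm the service
    semaphore = asyncio.Semaphore(concurrency)

    async def store_one(memory):
        async with semaphore:
            return await store_memory(memory)

    return await asyncio.gather(*(store_one(m) for m in memories))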

Security Configuration

Sensitive Information Guidelines

# ❌ Don't store
- Passwords or API keys
- Personal identification numbers
- Credit card information
- Private keys

# ✅ Store references instead
"AWS credentials stored in vault under key 'prod-api-key'"

Access Control

# Tag sensitive memories
await store_memory(
    content="Architecture decision for payment system",
    tags=["architecture", "payments", "confidential"]
)

Data Retention

# Set expiration for temporary data
await store_memory(
    content="Temporary debug log",
    tags=["debug", "temporary"],
    metadata={"expires": "2025-08-01"}
)
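
Nothing above enforces the expires field by itself, so pair it with a periodic sweep. A sketch assuming the metadata shape shown and a hypothetical delete(memory_id) method:

from datetime import date

async def purge_expired():
    # Honor the expires metadata on temporary memories
    for memory in await memory_service.search_by_tag(["temporary"]):
        expires = (memory.metadata or {}).get("expires")
        if expires and date.fromisoformat(expires) < date.today():
            await memory_service.delete(memory.id)  # hypothetical delete API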

Backup Strategy

  • Regular automated backups
  • Encrypted backup storage
  • Test restore procedures
  • Version control for configurations

Quick Reference

Essential Commands

# Store with tags
store "content" --tags "tag1,tag2"

# Search recent
search "query" --time "last week"

# Clean up
delete --older-than "6 months" --tag "temporary"

# Export important
export --tag "important" --format json

Common Patterns

# Decision tracking
f"Decision: {title} | Rationale: {why} | Date: {when}"

# Error documentation  
f"Error: {message} | Solution: {fix} | Prevention: {how}"

# Learning capture
f"TIL: {concept} | Source: {where} | Application: {how}"

Integration Best Practices

  1. Use Consistent Tagging: Establish tag conventions for automated entries
  2. Rate Limiting: Implement limits to prevent memory spam
  3. Error Handling: Always handle memory service failures gracefully
  4. Async Operations: Use async patterns to avoid blocking
  5. Batch Operations: Group related memories when possible
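
Points 2 and 3 combine naturally in a thin client wrapper. A sketch; the per-minute budget and fallback behavior are arbitrary choices, not service requirements:

import time

class SafeMemoryClient:
    def __init__(self, memory_service, max_per_minute=60):
        self.memory = memory_service
        self.max_per_minute = max_per_minute
        self._timestamps = []

    async def store(self, content, tags):
        # Rate limiting: skip stores beyond the per-minute budget
        now = time.monotonic()
        self._timestamps = [t for t in self._timestamps if now - t < 60]
        if len(self._timestamps) >= self.max_per_minute:
            return None  # or queue for a retry later
        self._timestamps.append(now)

        # Error handling: a memory failure should never break the caller
        try:
            return await self.memory.store(content=content, tags=tags)
        except Exception as exc:
            print(f"Memory store failed, continuing: {exc}")
            return None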

These advanced configurations will help you build a powerful, integrated memory system that grows more valuable over time.
