
Codexa

A powerful CLI tool that ingests your codebase and allows you to ask questions about it using Retrieval-Augmented Generation (RAG).


Installation • Quick Start • Commands • Configuration • Examples



Features

  • 🔒 Privacy-First: All data processing happens locally by default
  • ⚡ Fast & Efficient: Local embeddings and optimized vector search
  • 🤖 Multiple LLM Support: Works with Groq (cloud)
  • 💾 Local Storage: SQLite database for embeddings and context
  • 🎯 Smart Chunking: Intelligent code splitting with configurable overlap
  • 📊 Streaming Output: Real-time response streaming for better UX
  • 🎨 Multiple File Types: Supports TypeScript, JavaScript, Python, Go, Rust, Java, and more
  • 🧠 Smart Configuration: Automatically detects project languages and optimizes config
  • 🛡️ Intelligent Filtering: Automatically excludes binaries, large files, and build artifacts
  • ⚙️ Highly Configurable: Fine-tune chunking, retrieval, and model parameters
  • 🚀 Zero Setup: Works out of the box with sensible defaults

⚠️ Codebase Size Limitation: Codexa is optimized for small to medium-sized codebases. It currently supports projects with up to 200 files and 20,000 chunks. For larger codebases, consider using more restrictive includeGlobs patterns to focus on specific directories or file types.
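
For example, a configuration that narrows includeGlobs to a single source tree keeps the index well under those limits (the paths below are illustrative):

{
  "includeGlobs": [
    "src/**/*.ts",
    "docs/**/*.md"
  ]
}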

Installation

Prerequisites

Before installing Codexa, ensure you have the following:

  • Node.js: v20.0.0 or higher

    node --version  # Should be v20.0.0 or higher
  • For Cloud LLM (Groq): A Groq API key from console.groq.com

Installation Methods

Choose the installation method that works best for your system:

Method 1: npm (Recommended)

Install Codexa globally using npm:

npm install -g codexa

Verify installation:

codexa

Method 2: Homebrew (macOS)

Install codexa using Homebrew on macOS:

First, add the tap:

brew tap sahitya-chandra/codexa

Then install:

brew install codexa

Updating Codexa

To update codexa to the latest version:

If installed via npm:

npm install -g codexa@latest

If installed via Homebrew:

brew upgrade codexa

Check your current version:

codexa --version

💡 Tip: Keep Codexa updated to get the latest features, bug fixes, and security updates.

LLM Setup

Codexa requires an LLM to generate answers; currently, Groq (cloud) is the supported provider.

Groq provides fast cloud-based LLMs with a generous free tier.

Step 1: Get a Groq API Key

  1. Visit console.groq.com
  2. Sign up or log in
  3. Navigate to API Keys section
  4. Create a new API key
  5. Copy your API key (starts with gsk_)

Step 2: Set GROQ API Key

Run the following command to securely save your API key:

codexa config set GROQ_API_KEY "gsk_your_api_key_here"

This will save the key to your local configuration file (.codexarc.json).

Step 3: Verify API Key is Set

codexa config get GROQ_API_KEY

Step 4: Configure Codexa

Codexa defaults to using Groq when you run codexa init. If you need to manually configure, edit .codexarc.json:

{
  "modelProvider": "groq",
  "model": "openai/gpt-oss-120b",
  "embeddingProvider": "local",
  "embeddingModel": "Xenova/all-MiniLM-L6-v2"
}

Models you can use:

  • openai/gpt-oss-120b (recommended, default)
  • llama-3.1-70b-versatile

Quick Setup Summary

For Groq:

# 1. Get API key from console.groq.com

# 2. Run codexa init (defaults to Groq)
codexa init

# 3. Set GROQ API key
codexa config set GROQ_API_KEY "gsk_your_key"

# 4. Proceed to ingestion

Quick Start

Once Codexa is installed and your LLM is configured, you're ready to use it:

  1. Navigate to your project directory:

    cd /path/to/your/project
  2. Initialize Codexa:

    codexa init

    This creates a .codexarc.json configuration file with sensible defaults.

  3. Set GROQ API Key

    codexa config set GROQ_API_KEY "gsk_your_key"

    This will save the key to your local configuration file (.codexarc.json).

  4. Ingest your codebase:

    codexa ingest

    This indexes your codebase and creates embeddings. First run may take a few minutes.

  5. Ask questions:

    codexa ask "How does the authentication flow work?"
    codexa ask "What is the main entry point of this application?"
    codexa ask "Show me how error handling is implemented"

Commands

init

Creates a .codexarc.json configuration file optimized for your codebase.

codexa init

What it does:

  • Analyzes your codebase to detect languages, package managers, and frameworks
  • Creates optimized config with language-specific include/exclude patterns
  • Generates .codexarc.json in the project root with tailored settings
  • Can be safely run multiple times (won't overwrite existing config)

Detection Capabilities:

  • Languages: TypeScript, JavaScript, Python, Go, Rust, Java, Kotlin, Scala, C/C++, Ruby, PHP, Swift, Dart, and more
  • Package Managers: npm, yarn, pnpm, pip, poetry, go, cargo, maven, gradle, sbt, bundler, composer, and more
  • Frameworks: Next.js, React, Django, Flask, Rails, Laravel, Spring, Flutter, and more

Example Output:

Analyzing codebase...
✓ Detected: typescript, javascript (npm, yarn)

✓ Created .codexarc.json with optimized settings for your codebase!

┌ 🚀 Setup Complete ──────────────────────────────────────────┐
│                                                             │
│   Next Steps:                                               │
│                                                             │
│   1. Review .codexarc.json - Update provider keys if needed │
│   2. Set your GROQ API Key: codexa config set GROQ_API_KEY  │
│   3. Run: codexa ingest - Start indexing your codebase      │
│   4. Run: codexa ask "your question" - Ask questions        │
│                                                             │
└─────────────────────────────────────────────────────────────┘

ingest

Indexes the codebase and generates embeddings for semantic search.

codexa ingest [options]

Options:

  • -f, --force - Clear existing index and rebuild from scratch

Examples:

# Standard ingestion
codexa ingest

# Force rebuild (useful if you've updated code significantly)
codexa ingest --force

What it does:

  1. Scans your repository based on includeGlobs and excludeGlobs patterns
  2. Filters files - Automatically excludes binaries, large files (>5MB), and build artifacts
  3. Chunks files into manageable segments
  4. Generates vector embeddings for each chunk
  5. Stores everything in .codexa/index.db (SQLite database)

Smart Filtering:

  • Automatically skips binary files (executables, images, archives, etc.)
  • Excludes files larger than the configured size limit (default: 5MB)
  • Filters based on file content analysis (not just extensions)
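
Codexa's exact detection logic is internal; as a rough illustration, a common content-analysis heuristic treats a file as binary if its first few kilobytes contain a NUL byte. A minimal sketch (the helper name is hypothetical, not Codexa's actual code):

import { openSync, readSync, closeSync } from "node:fs";

// Hypothetical sketch, not Codexa's implementation: sample the first
// 8 KB and treat the file as binary if it contains a NUL byte.
function isProbablyBinary(path: string, sampleSize = 8192): boolean {
  const fd = openSync(path, "r");
  try {
    const buf = Buffer.alloc(sampleSize);
    const bytesRead = readSync(fd, buf, 0, sampleSize, 0);
    return buf.subarray(0, bytesRead).includes(0);
  } finally {
    closeSync(fd);
  }
}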

Note: First ingestion may take a few minutes depending on your codebase size. Subsequent ingestions are faster as they only process changed files.


config

Manage configuration values, including API keys.

codexa config <action> [key] [value]

Actions:

  • set <key> <value> - Set a configuration value
  • get <key> - Get a configuration value
  • list - List all configuration values

Examples:

# Set Groq API Key
codexa config set GROQ_API_KEY "gsk_..."

# Check current key
codexa config get GROQ_API_KEY

ask

Ask natural language questions about your codebase.

codexa ask <question...> [options]

Arguments:

  • <question...> - Your question (can be multiple words)

Options:

  • --stream - Enable streaming output

Examples:

# Basic question
codexa ask "How does user authentication work?"

# Question with multiple words
codexa ask "What is the main entry point of this application?"

# Enable streaming
codexa ask "Summarize the codebase structure" --stream

How it works:

  1. Converts your question to a vector embedding
  2. Searches the codebase for relevant chunks using vector similarity
  3. Retrieves the top-K most relevant code sections
  4. Sends question + context to the LLM
  5. Returns a contextual answer about your codebase

Configuration

Configuration File

Codexa uses a .codexarc.json file in your project root for configuration. This file is automatically created when you run codexa init.

Location: .codexarc.json (project root)

Format: JSON

Dynamic Configuration Generation

When you run codexa init, Codexa automatically:

  1. Analyzes your codebase structure to detect:

    • Languages present (by file extensions)
    • Package managers used (by config files)
    • Frameworks detected (by dependencies and config files)
  2. Generates optimized patterns:

    • Include patterns: Only file extensions relevant to detected languages
    • Exclude patterns: Language-specific build artifacts, dependency directories, and cache folders
    • Smart defaults: Based on your project type
  3. Applies best practices:

    • Excludes common build outputs (dist/, build/, target/, etc.)
    • Excludes dependency directories (node_modules/, vendor/, .venv/, etc.)
    • Includes important config files and documentation
    • Filters binaries and large files automatically

This means your config is tailored to your project from the start, ensuring optimal indexing performance!

Environment Variables

Some settings can be configured via environment variables:

Variable       Description                   Required For
GROQ_API_KEY   Groq API key for cloud LLM    Groq provider

Example:

# Using config command (Recommended)
codexa config set GROQ_API_KEY "gsk_your_key_here"

# Or using environment variables
export GROQ_API_KEY="gsk_your_key_here" # macOS/Linux

Configuration Options

modelProvider

Type: "groq"
Default: "groq"

The LLM provider to use for generating answers.

  • "groq" - Uses Groq's cloud API (requires GROQ_API_KEY)

model

Type: string
Default: "openai/gpt-oss-120b"

The model identifier to use.

embeddingProvider

Type: "local"
Default: "local"

The embedding provider for vector search.

  • "local" - Uses @xenova/transformers (runs entirely locally)

embeddingModel

Type: string
Default: "Xenova/all-MiniLM-L6-v2"

The embedding model for generating vector representations. This model is downloaded automatically on first use.

maxChunkSize

Type: number
Default: 200

Maximum number of lines per code chunk. Larger values = more context per chunk but fewer chunks.

chunkOverlap

Type: number
Default: 20

Number of lines to overlap between consecutive chunks. Helps maintain context at chunk boundaries.
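
To see how maxChunkSize and chunkOverlap interact, here is a minimal line-based chunking sketch (illustrative only, not Codexa's internal chunker). With the defaults, each chunk starts 180 lines (200 - 20) after the previous one:

// Illustrative line-based chunking with overlap. With maxChunkSize=200
// and chunkOverlap=20, chunks start at lines 0, 180, 360, ...
function chunkLines(lines: string[], maxChunkSize = 200, chunkOverlap = 20): string[] {
  const chunks: string[] = [];
  const step = maxChunkSize - chunkOverlap;
  for (let start = 0; start < lines.length; start += step) {
    chunks.push(lines.slice(start, start + maxChunkSize).join("\n"));
    if (start + maxChunkSize >= lines.length) break; // last chunk reached
  }
  return chunks;
}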

includeGlobs

Type: string[]
Default: ["**/*.ts", "**/*.tsx", "**/*.js", "**/*.jsx", "**/*.py", "**/*.go", "**/*.rs", "**/*.java", "**/*.md", "**/*.json"]

File patterns to include in indexing. Supports glob patterns.

Examples:

{
  "includeGlobs": [
    "**/*.ts",
    "**/*.tsx",
    "src/**/*.js",
    "lib/**/*.py"
  ]
}

excludeGlobs

Type: string[]
Default: ["node_modules/**", ".git/**", "dist/**", "build/**", ".codexa/**", "package-lock.json"]

File patterns to exclude from indexing.

Examples:

{
  "excludeGlobs": [
    "node_modules/**",
    ".git/**",
    "dist/**",
    "**/*.test.ts",
    "coverage/**"
  ]
}

historyDir

Type: string
Default: ".codexa/sessions"

Directory to store conversation history for session management.

dbPath

Type: string
Default: ".codexa/index.db"

Path to the SQLite database storing code chunks and embeddings.

temperature

Type: number
Default: 0.2

Controls randomness in LLM responses (0.0 = deterministic, 1.0 = creative).

  • Lower values (0.0-0.3): More focused, deterministic answers
  • Higher values (0.7-1.0): More creative, varied responses

topK

Type: number
Default: 4

Number of code chunks to retrieve and use as context for each question. Higher values provide more context but may include less relevant information.

maxFileSize

Type: number
Default: 5242880 (5MB)

Maximum file size in bytes. Files larger than this will be excluded from indexing. Helps avoid processing large binary files or generated artifacts.

Example (10485760 bytes = 10 MB):

{
  "maxFileSize": 10485760
}

skipBinaryFiles

Type: boolean
Default: true

Whether to automatically skip binary files during indexing. Binary detection uses both file extension and content analysis.

Example:

{
  "skipBinaryFiles": true
}

skipLargeFiles

Type: boolean
Default: true

Whether to skip files exceeding maxFileSize during indexing. Set to false if you want to include all files regardless of size.

Example:

{
  "skipLargeFiles": true,
  "maxFileSize": 10485760
}

Example Configurations

Groq Cloud Provider (Default)

{
  "modelProvider": "groq",
  "model": "openai/gpt-oss-120b",
  "embeddingProvider": "local",
  "embeddingModel": "Xenova/all-MiniLM-L6-v2",
  "maxChunkSize": 300,
  "chunkOverlap": 20,
  "temperature": 0.2,
  "topK": 4
}

Remember: Set GROQ_API_KEY:

codexa config set GROQ_API_KEY "your-api-key"

Optimized for Large Codebases

{
  "modelProvider": "groq",
  "model": "openai/gpt-oss-120b",
  "maxChunkSize": 150,
  "chunkOverlap": 15,
  "topK": 6,
  "temperature": 0.1,
  "includeGlobs": [
    "src/**/*.ts",
    "src/**/*.tsx",
    "lib/**/*.ts"
  ],
  "excludeGlobs": [
    "node_modules/**",
    "dist/**",
    "**/*.test.ts",
    "**/*.spec.ts",
    "coverage/**"
  ]
}

Examples

Basic Workflow

# 1. Initialize in your project
cd my-project
codexa init

# 2. Set Groq API key
codexa config set GROQ_API_KEY <your-groq-key>

# 3. Index your codebase
codexa ingest

# 4. Ask questions
codexa ask "What is the main purpose of this codebase?"
codexa ask "How does the user authentication work?"
codexa ask "Where is the API routing configured?"

Force Re-indexing

# After significant code changes
codexa ingest --force

Working with Specific File Types

Update .codexarc.json to focus on specific languages:

{
  "includeGlobs": [
    "**/*.ts",
    "**/*.tsx"
  ],
  "excludeGlobs": [
    "node_modules/**",
    "**/*.test.ts",
    "**/*.spec.ts"
  ]
}

How It Works

Codexa uses Retrieval-Augmented Generation (RAG) to answer questions about your codebase:

1. Ingestion Phase

When you run codexa ingest:

  1. File Discovery: Scans your repository using glob patterns (includeGlobs/excludeGlobs)
  2. Code Chunking: Splits files into manageable chunks with configurable overlap
  3. Embedding Generation: Creates vector embeddings for each chunk using local transformers
  4. Storage: Stores chunks and embeddings in a SQLite database (.codexa/index.db)
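
The embedding step uses @xenova/transformers. As a standalone sketch of what step 3 boils down to, here is the library's feature-extraction pipeline applied to one chunk with the default model (this snippet is illustrative, not Codexa's source):

import { pipeline } from "@xenova/transformers";

// Load the default embedding model (downloaded automatically on first use).
const extractor = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");

// Mean-pooled, normalized embedding for one code chunk (384 dimensions).
const output = await extractor("export function add(a, b) { return a + b; }", {
  pooling: "mean",
  normalize: true,
});
const embedding = Array.from(output.data);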

2. Query Phase

When you run codexa ask:

  1. Question Embedding: Converts your question into a vector embedding
  2. Vector Search: Finds the most similar code chunks using cosine similarity
  3. Context Retrieval: Selects top-K most relevant chunks as context
  4. LLM Generation: Sends question + context to your configured LLM
  5. Response: Returns an answer grounded in your actual codebase
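
As a rough sketch of steps 2 and 3 (not Codexa's internal code), cosine-similarity retrieval over stored embeddings looks like this:

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored chunks against the question embedding and keep the top K.
function retrieve(
  question: number[],
  chunks: { text: string; embedding: number[] }[],
  topK = 4,
): { text: string; score: number }[] {
  return chunks
    .map((c) => ({ text: c.text, score: cosine(question, c.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}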

Benefits

  • Privacy: All processing happens locally by default
  • Speed: Local embeddings and vector search are very fast
  • Accuracy: Answers are based on your actual code, not generic responses
  • Context-Aware: Understands relationships across your codebase

Architecture

┌─────────────────┐
│   User Query    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐     ┌──────────────┐
│  Embedding      │────▶│   Vector     │
│  Generation     │     │   Search     │
└─────────────────┘     └──────┬───────┘
                               │
                               ▼
                        ┌──────────────┐
                        │   Context    │
                        │   Retrieval  │
                        └──────┬───────┘
                               │
                               ▼
┌─────────────────┐     ┌──────────────┐
│   SQLite DB     │◀────│   LLM        │
│   (Chunks +     │     │   (Groq)     │
│   Embeddings)   │     │              │
└─────────────────┘     └──────┬───────┘
                               │
                               ▼
                        ┌──────────────┐
                        │   Answer     │
                        └──────────────┘

Key Components:

  • Chunker: Splits code files into semantic chunks
  • Embedder: Generates vector embeddings (local transformers)
  • Retriever: Finds relevant chunks using vector similarity
  • LLM Client: Generates answers (Groq cloud)
  • Database: SQLite for storing chunks and embeddings
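
One way to picture how the query-side components compose (the interface and function names below are illustrative assumptions, not Codexa's actual exports):

// Illustrative component contracts; names and shapes are assumptions.
interface Embedder {
  embed(text: string): Promise<number[]>;
}
interface Retriever {
  topK(queryEmbedding: number[], k: number): Promise<{ text: string; score: number }[]>;
}
interface LLMClient {
  answer(question: string, context: string[]): Promise<string>;
}

// The ask pipeline, end to end: embed, retrieve, generate.
async function ask(q: string, embedder: Embedder, retriever: Retriever, llm: LLMClient) {
  const queryEmbedding = await embedder.embed(q);
  const hits = await retriever.topK(queryEmbedding, 4);
  return llm.answer(q, hits.map((h) => h.text));
}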

Troubleshooting

"GROQ_API_KEY not set" Error

Problem: Using Groq provider but API key is missing.

Solutions:

  1. Set the API key using the config command (Recommended):
    codexa config set GROQ_API_KEY "your-api-key"
  2. Or set the environment variable:
    export GROQ_API_KEY="your-api-key" # macOS/Linux
  3. Verify it's set:
    codexa config get GROQ_API_KEY

Ingestion is Very Slow

Problem: First ingestion takes too long.

Solutions:

  1. The dynamic config should already optimize patterns - check that your .codexarc.json was generated correctly
  2. Reduce maxFileSize to exclude more large files
  3. Reduce maxChunkSize to create more, smaller chunks
  4. Add more patterns to excludeGlobs to skip unnecessary files
  5. Be more specific with includeGlobs to focus on important files
  6. Use --force only when necessary (incremental updates are faster)
  7. Ensure skipBinaryFiles and skipLargeFiles are enabled (default)

Poor Quality Answers

Problem: Answers are not relevant or accurate.

Solutions:

  1. Increase topK to retrieve more context:

    {
      "topK": 6
    }
  2. Adjust temperature for more focused answers:

    {
      "temperature": 0.1
    }
  3. Re-index after significant code changes:

    codexa ingest --force
  4. Ask more specific questions

Database Locked Error

Problem: SQLite database is locked (multiple processes accessing it).

Solutions:

  1. Ensure only one codexa process runs at a time
  2. If using concurrent processes, each should use a different dbPath
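
For example, a second process (say, a CI job) could point at its own database file; the path below is illustrative:

{
  "dbPath": ".codexa/index-ci.db"
}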

Missing Files in Index

Problem: Some files aren't being indexed.

Solutions:

  1. Check includeGlobs patterns in .codexarc.json
  2. Verify files aren't excluded by excludeGlobs
  3. Run with --force to rebuild:
    codexa ingest --force
  4. Check file permissions (ensure Codexa can read the files)

FAQ

Q: Can I use Codexa with private/confidential code?
A: Yes, with one caveat: indexing and embedding happen entirely locally, but answering a question sends the question plus the retrieved top-K code chunks to the cloud LLM provider (Groq).

Q: How much disk space does Codexa use?
A: Typically 10-50MB per 1000 files, depending on file sizes. The SQLite database stores chunks and embeddings.

Q: Can I use Codexa in CI/CD?
A: Yes, but you'll need to ensure your LLM provider is accessible. For CI/CD, consider using Groq (cloud).

Q: Does Codexa work with monorepos?
A: Yes! Adjust includeGlobs and excludeGlobs to target specific packages or workspaces.
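
For example, to index only two workspaces in a monorepo (package paths are illustrative):

{
  "includeGlobs": [
    "packages/api/**/*.ts",
    "packages/shared/**/*.ts"
  ],
  "excludeGlobs": [
    "node_modules/**",
    "**/dist/**"
  ]
}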

Q: Can I use multiple LLM providers?
A: You can switch providers by updating modelProvider in .codexarc.json. Each repository can have its own configuration.

Q: How often should I re-index?
A: Codexa only processes changed files on subsequent runs, so you can run ingest frequently. Use --force only when you need a complete rebuild.

Q: Is there a way to query the database directly?
A: The SQLite database (.codexa/index.db) can be queried directly, but the schema is internal. Use Codexa's commands for all operations.

Q: Can I customize the prompt sent to the LLM?
A: Currently, the prompt is fixed, but this may be configurable in future versions.

Contributing

Contributions are welcome! Please see our Contributing Guide for details.

Quick start:

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

For major changes, please open an issue first to discuss what you would like to change.

See CONTRIBUTING.md for detailed guidelines.

License

This project is licensed under the MIT License - see the LICENSE file for details.


Made with ❤️ by the Codexa team

Report Bug • Request Feature
