Systematic problem-solving framework for Claude Code CLI based on Princeton NLP research.
Tree of Thought (ToT) enables AI to solve complex problems through systematic exploration of solution spaces. Based on Princeton NLP's research, it:
- Generates multiple solution paths at each decision point
- Evaluates and compares different approaches systematically
- Backtracks and explores alternatives when paths don't work
- Finds optimal solutions through structured search
Install the /tot command for Claude Code CLI:
```bash
npm install -g tree-of-thought-cli
```

Then use it in Claude Code:

```
/tot "your problem description"
```
The installer places:
- `/tot` command in `~/.claude/commands/`
- Algorithm guides in `~/.claude/tot/core/`
- Problem templates in `~/.claude/tot/templates/`
- Usage examples in `~/.claude/tot/examples/`
Claude Code reads documentation files and implements ToT dynamically. No code execution - purely prompt-based and transparent.
Enable Multi-AI mode with Gemini and Codex MCP for maximum diversity and deeper analysis:
```bash
# Install Gemini MCP - provides system architecture expertise
claude mcp add gemini-cli -s user -- npx -y gemini-mcp-tool

# 1. Install OpenAI Codex CLI
npm install -g @openai/codex
# or: brew install codex

# 2. Login to Codex
codex login

# 3. Add Codex MCP to Claude Code
claude mcp add codex -s user -- codex mcp-server

# Check if MCPs are configured
claude mcp list
```

You should see gemini-cli and codex in the MCP server list.
💡 Note:
- No MCPs: `/tot` runs in Claude-only mode (~15s, 6 thoughts)
- Gemini only: Claude + Gemini Hybrid mode (~20s)
- Codex only: Claude + Codex Hybrid mode (~25s)
- Both MCPs: Multi-AI mode (~20s, maximum diversity) ✨
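The fallback rules above can be sketched as a small helper (illustrative only; `select_mode` and its return values are hypothetical names, not part of the package):

```python
def select_mode(has_gemini: bool, has_codex: bool) -> str:
    """Map available MCP servers to the /tot execution mode.

    Hypothetical sketch of the fallback rules described above;
    the real command resolves this from `claude mcp list` output.
    """
    if has_gemini and has_codex:
        return "multi-ai"      # ~20s, maximum diversity
    if has_gemini:
        return "hybrid-cg"     # Claude + Gemini, ~20s
    if has_codex:
        return "hybrid-cx"     # Claude + Codex, ~25s
    return "claude-only"       # ~15s, 6 thoughts
```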
```
/tot "Production app memory grows 50MB/hour after user logout"
```

Output:

```
Level 1: Generate 5 hypotheses
├─ [8.5] Unclosed event listeners
├─ [9.1] Timer not cleared ✅
├─ [7.9] Cache not released
...
Level 2: Expand top 3
├─ Timer → [9.5] Search for setInterval ✅
├─ Timer → [8.8] Search for setTimeout
...
Solution: Found 3 setInterval calls without cleanup
```
```
/tot "Design real-time notification system for 100k concurrent users"
/tot "Database query takes 5 seconds on 1M+ rows with JOINs"
```
Based on the Princeton ToT paper, with Multi-AI enhancements:
- Generate 6 diverse solution approaches (n_generate=6)
- Multi-AI: 2 Claude + 2 Gemini + 2 Codex
- Hybrid: 3 from each of 2 AIs
- Single: 6 from one AI
- Evaluate each approach 3 times independently (n_evaluate=3)
- Select top 3-4 for further exploration (n_select=3-4)
- Search using BFS or DFS with early stopping
- Return optimal solution path
- Claude: Practical, user-focused, proven patterns
- Gemini: Innovative architecture, creative system design
- Codex: Algorithm optimization, performance analysis
This prevents thought overlap and maximizes solution diversity.
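The generate, evaluate, select loop described above can be sketched in Python (a minimal illustration; `propose` and `score` stand in for LLM calls and are hypothetical names supplied by the caller, not the package's API):

```python
from statistics import mean

def tot_bfs(problem, propose, score, depth=3,
            n_generate=6, n_evaluate=3, n_select=3):
    """Breadth-first Tree of Thought search (illustrative sketch).

    `propose(problem, path, n)` returns n candidate next thoughts;
    `score(problem, path)` rates a partial path. Both would be
    LLM calls in practice.
    """
    frontier = [[]]  # each element is a path of thoughts
    for _ in range(depth):
        candidates = []
        for path in frontier:
            for thought in propose(problem, path, n_generate):
                new_path = path + [thought]
                # Evaluate each candidate n_evaluate times and average
                rating = mean(score(problem, new_path)
                              for _ in range(n_evaluate))
                candidates.append((rating, new_path))
        # Keep only the top n_select paths for the next level
        candidates.sort(key=lambda c: c[0], reverse=True)
        frontier = [path for _, path in candidates[:n_select]]
    return frontier[0]  # highest-rated complete path
```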
| Mode | MCPs Required | Execution Time | AI Distribution | Use Case |
|---|---|---|---|---|
| Multi-AI (Default) | Gemini + Codex | ~20s | Claude 33% + Gemini 33% + Codex 33% | Maximum diversity ✨ |
| Hybrid CG (`--hybrid cg`) | Gemini | ~18s | Claude 50% + Gemini 50% | Practical + Architecture |
| Hybrid CX (`--hybrid cx`) | Codex | ~22s | Claude 50% + Codex 50% | Practical + Performance |
| Hybrid GX (`--hybrid gx`) | Gemini + Codex | ~20s | Gemini 50% + Codex 50% | Architecture + Performance |
| Claude-Only (`-c`) | None | ~15s | Claude 100% | Quick, practical solutions |
| Gemini-Only (`-g`) | Gemini | ~15s | Gemini 100% | Architecture focus only |
| Codex-Only (`-x`) | Codex | ~18s | Codex 100% | Performance focus only |
📌 Key Point: MCPs not configured? Auto-fallback to Claude-only mode (~15s). All modes work without MCPs!
```
/tot "your problem"    # Default: 3 AI perspectives
/tot --ratio 2:2:2     # Equal distribution (default)
/tot --ratio 3:2:1     # Claude-focused
/tot --ratio 1:2:3     # Performance-focused
```

Characteristics:
- Speed: ~20 seconds (all AIs run in parallel)
- Thoughts: 6 total (2 from each AI)
- Strengths: Maximum diversity, balanced perspectives, comprehensive coverage
- Best for: Complex problems requiring multiple viewpoints
AI Roles:
- Claude (33%): Practical solutions, proven patterns, quick wins
- Gemini (33%): System architecture, creative design, scalability
- Codex (33%): Algorithm optimization, performance tuning, profiling
Auto-Fallback: If Gemini or Codex unavailable, Claude generates replacement thoughts automatically.
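The ratio split and auto-fallback could be sketched like this (hypothetical helper; the real `/tot` command resolves this through prompts, not code):

```python
def distribute_thoughts(ratio, available, total=6):
    """Split `total` thoughts across AIs per a `ratio` string such as
    "2:2:2" (Claude:Gemini:Codex), reassigning any unavailable AI's
    share to Claude (auto-fallback). Illustrative sketch only.
    """
    ais = ["claude", "gemini", "codex"]
    parts = [int(p) for p in ratio.split(":")]
    weight = sum(parts)
    counts = {ai: total * p // weight for ai, p in zip(ais, parts)}
    # Fallback: Claude absorbs the share of any missing MCP
    for ai in ("gemini", "codex"):
        if ai not in available:
            counts["claude"] += counts.pop(ai, 0)
    return counts
```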
```
# Claude + Gemini (Practical + Architecture)
/tot --hybrid cg "your problem"

# Claude + Codex (Practical + Performance) - Classic
/tot --hybrid cx "your problem"

# Gemini + Codex (Architecture + Performance)
/tot --hybrid gx "your problem"
```

Characteristics:
- Speed: ~18-22 seconds
- Thoughts: 6 total (3 from each AI)
- Best for: Focused expertise in 2 specific areas
```
/tot -c "your problem"   # Claude-only (practical)
/tot -g "your problem"   # Gemini-only (architecture)
/tot -x "your problem"   # Codex-only (performance)
```

Characteristics:
- Speed: ~15-18 seconds
- Thoughts: 6 total (all from one AI)
- Best for: Quick analysis, specific expertise needed
```
/tot debug "Memory leak grows 50MB/hour"
/tot -c debug "React component infinite re-render"

/tot refactor "PaymentService has 500 lines, needs modularization"
/tot --ratio 7:3 refactor "Extract algorithm to separate module"

/tot design "Real-time notification system for 100k users"
/tot -c design "API versioning strategy"

/tot optimize "Database query takes 5s on 1M+ rows"
/tot --ratio 3:7 optimize "Reduce bundle size from 2MB to 500KB"

/tot "How to implement distributed cache invalidation?"
/tot -c "Best approach for handling file uploads in React?"
```

| Scenario | Recommended Mode | Reason |
|---|---|---|
| Complex system design | Multi-AI (default) | Need practical + architecture + performance |
| Quick bug fix | Claude-Only (`-c`) | Faster, sufficient for common issues |
| Architecture planning | `--hybrid cg` or `-g` | Gemini's design expertise |
| Algorithm optimization | `--hybrid cx` or `-x` | Codex's performance focus |
| Standard refactoring | Claude-Only (`-c`) | Proven patterns work well |
| Novel architecture | Multi-AI (default) | Maximum diversity needed |
| Performance tuning | `--hybrid gx` | Architecture + Optimization |
| Time-critical | Claude-Only (`-c`) | Fastest (~15s) |
| Algorithm | Best For | Memory | Speed |
|---|---|---|---|
| BFS | Comprehensive exploration | High | Medium |
| DFS | Deep analysis with backtracking | Low | Fast |
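For contrast with BFS, a DFS variant with pruning-based backtracking and early stopping might look like this (illustrative sketch, reusing the same hypothetical `propose`/`score` callables a caller would supply):

```python
def tot_dfs(problem, propose, score, depth=3,
            n_generate=3, threshold=7.0, stop_at=9.0):
    """Depth-first ToT: follow promising thoughts, prune paths rated
    below `threshold` (backtracking), and stop early once a complete
    path reaches `stop_at`. Illustrative sketch; `propose` and
    `score` would be LLM calls in practice.
    """
    best = (float("-inf"), None)

    def visit(path):
        nonlocal best
        rating = score(problem, path) if path else 0.0
        if path and rating < threshold:
            return False              # prune: backtrack to the parent
        if len(path) == depth:
            if rating > best[0]:
                best = (rating, path)
            return rating >= stop_at  # early stop on a great path
        for thought in propose(problem, path, n_generate):
            if visit(path + [thought]):
                return True
        return False

    visit([])
    return best[1]
```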
- Command Reference - Full `/tot` usage guide
- Multi-AI Template - Multi-AI usage patterns
- Core Algorithms - BFS, DFS, evaluation methods
- Problem Templates - Debug, refactor, design
- Usage Examples - Real-world scenarios
- Output Format - Transparent thought display
- Gemini MCP Integration - Gemini setup & usage
- Codex MCP Integration - Codex setup & usage
```
tree-of-thought/
├── packages/cli/              # npm package
│   ├── commands/tot.md        # Command definition
│   └── scripts/install.js     # Installation script
└── docs/
    ├── guide/
    │   ├── core/              # 15+ algorithm guides
    │   └── templates/         # 6 problem templates
    ├── examples/              # 6 usage examples
    └── OUTPUT_FORMAT.md
```
Contributions welcome! Open an issue or pull request.
```bash
git clone https://github.com/youkchansim/tree-of-thought.git
cd tree-of-thought/packages/cli

# Test locally
npm link
# The /tot command will be available in Claude Code

# Make changes to documentation in docs/guide/

# Publish updates
npm version patch
npm publish
```

MIT License - See LICENSE file.
Based on Princeton NLP's Tree of Thought research:
- Tree of Thoughts: Deliberate Problem Solving with Large Language Models (Yao et al., 2023)
- Princeton NLP Repository
- Issues: github.com/youkchansim/tree-of-thought/issues
- Author: @youkchansim