Releases: numman-ali/opencode-openai-codex-auth
v4.0.2 - Fix Compaction & Agent Creation
Bugfix Release
Fixes compaction context loss, agent creation, and SSE/JSON response handling.
Fixed
- **Compaction losing context**: v4.0.1 was too aggressive in filtering tool calls; it removed ALL `function_call`/`function_call_output` items when tools weren't present. Now only orphaned outputs (those without matching calls) are filtered, preserving matched pairs for compaction context.
- **Agent creation failing**: The `/agent create` command failed with "Invalid JSON response" because we were returning SSE streams instead of JSON for `generateText()` requests.
- **SSE/JSON response handling**: The plugin now properly detects the original request intent: `streamText()` requests get SSE passthrough, while `generateText()` requests get SSE→JSON conversion.
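The intent detection can be sketched roughly as follows; `prepareRequest` and `responseMode` are illustrative names, not the plugin's actual exports:

```typescript
// Hypothetical sketch: capture the caller's original `stream` flag before
// forcing `stream: true` for the Codex API, then pick the response mode.
interface RequestBody {
  stream?: boolean;
  [key: string]: unknown;
}

function prepareRequest(body: RequestBody): { body: RequestBody; wasStreaming: boolean } {
  const wasStreaming = body.stream === true; // generateText() sends stream: false/undefined
  return { body: { ...body, stream: true }, wasStreaming }; // Codex API always requires stream: true
}

// streamText() gets the SSE stream passed through unchanged; for
// generateText() the SSE events are collected into a single JSON payload.
function responseMode(wasStreaming: boolean): "sse-passthrough" | "sse-to-json" {
  return wasStreaming ? "sse-passthrough" : "sse-to-json";
}
```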
Added
- `gpt-5.1-chat-latest` model support: added to the model map; normalizes to `gpt-5.1`.
Technical Details
- **Compaction fix**: OpenCode sends `item_reference` with `fc_*` IDs for function calls. We filter these for stateless mode, but v4.0.1 then removed ALL tool items. Now we only remove orphaned `function_call_output` items (where no matching `function_call` exists).
- **Agent creation fix**: We were forcing `stream: true` for all requests and returning SSE for all responses. Now we capture the original `stream` value before transformation and convert SSE→JSON only when the original request wasn't streaming.
- The Codex API always receives `stream: true` (required), but response handling is based on the original intent.
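A minimal sketch of the orphan-output filter described above (illustrative names and types; the real implementation lives in the plugin's request transformer):

```typescript
// Hypothetical sketch: keep matched function_call / function_call_output
// pairs, drop only outputs whose call_id has no corresponding
// function_call left in the input array.
interface InputItem {
  type: string;
  call_id?: string;
}

function dropOrphanedOutputs(items: InputItem[]): InputItem[] {
  // Collect the call_ids of all function_call items that survived filtering.
  const callIds = new Set(
    items.filter((i) => i.type === "function_call").map((i) => i.call_id),
  );
  // Remove only function_call_output items with no matching call.
  return items.filter(
    (i) => i.type !== "function_call_output" || callIds.has(i.call_id),
  );
}
```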
Upgrade
Update your opencode.json:
```json
{
  "plugin": ["opencode-openai-codex-auth@4.0.2"]
}
```
If stuck on an old version, clear the cache:
```bash
rm -rf ~/.cache/opencode/node_modules ~/.cache/opencode/bun.lock
```
Full Changelog: v4.0.1...v4.0.2
v4.0.1 - Bugfix Release
Bugfix Release
Fixes API errors during summary/compaction and GitHub rate limiting.
Fixed
- **Orphaned `function_call_output` errors**: Fixed 400 errors during summary/compaction requests when OpenCode sends `item_reference` pointers to server-stored function calls. The plugin now filters out `function_call` and `function_call_output` items when no tools are present in the request.
- **GitHub API rate limiting**: Added a fallback mechanism when fetching Codex instructions from GitHub. If the API returns 403 (rate limit), the plugin now falls back to parsing the HTML releases page.
Technical Details
- **Root cause**: OpenCode's secondary model (gpt-5-nano) uses `item_reference` with `fc_*` IDs to reference stored function calls. Our plugin filters `item_reference` for stateless mode (`store: false`), leaving `function_call_output` orphaned. The Codex API rejects requests with orphaned outputs.
- **Fix**: When `hasTools === false`, filter out all `function_call` and `function_call_output` items from the input array.
- **GitHub fallback chain**: API endpoint → HTML page → redirect URL parsing → HTML regex parsing.
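The `hasTools === false` filter can be sketched as follows (illustrative names; the actual code lives in the request transformer):

```typescript
// Hypothetical sketch of the v4.0.1 fix: when the request declares no
// tools, remove every function_call / function_call_output item so the
// stateless Codex API never sees an output whose originating call was
// sent as an (already filtered) item_reference.
interface Item {
  type: string;
}

function stripToolItems(input: Item[], hasTools: boolean): Item[] {
  if (hasTools) return input; // tools present: leave the conversation intact
  return input.filter(
    (i) => i.type !== "function_call" && i.type !== "function_call_output",
  );
}
```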
Upgrade
Update your opencode.json:
```json
{
  "plugin": ["opencode-openai-codex-auth@4.0.1"]
}
```
If stuck on an old version, clear the cache:
```bash
rm -rf ~/.cache/opencode/node_modules ~/.cache/opencode/bun.lock
```
Full Changelog: v4.0.0...v4.0.1
v4.0.0 - 🎉 Major Release: Full Codex Max Support & Prompt Engineering Overhaul
This release brings full GPT-5.1 Codex Max support with dedicated prompts, plus complete parity with Codex CLI's prompt selection logic.
🚀 Highlights
- Full Codex Max support with dedicated prompt including frontend design guidelines
- Model-specific prompts matching Codex CLI's prompt selection logic
- GPT-5.0 → GPT-5.1 migration as legacy models are phased out by OpenAI
✨ Model-Specific System Prompts
The plugin now fetches the correct Codex prompt based on model family, matching Codex CLI's model_family.rs logic:
| Model Family | Prompt File | Lines | Use Case |
|---|---|---|---|
| `gpt-5.1-codex-max*` | `gpt-5.1-codex-max_prompt.md` | 117 | Codex Max with frontend design guidelines |
| `gpt-5.1-codex*`, `codex-*` | `gpt_5_codex_prompt.md` | 105 | Focused coding prompt |
| `gpt-5.1*` | `gpt_5_1_prompt.md` | 368 | Full behavioral guidance |
🔄 Legacy GPT-5.0 → GPT-5.1 Migration
All legacy GPT-5.0 models automatically normalize to GPT-5.1 equivalents:
- `gpt-5-codex` → `gpt-5.1-codex`
- `gpt-5` → `gpt-5.1`
- `gpt-5-mini`, `gpt-5-nano` → `gpt-5.1`
- `codex-mini-latest` → `gpt-5.1-codex-mini`
🔧 Technical Improvements
- New `ModelFamily` type: `"codex-max" | "codex" | "gpt-5.1"`
- Lazy instruction loading: instructions fetched per-request based on the model
- Separate caching per family: better cache efficiency
- Model family logging: debug with the `modelFamily` field in logs
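The family detection can be sketched as a most-specific-prefix-first check, mirroring the `model_family.rs` ordering mentioned above (illustrative code, not the plugin's actual implementation):

```typescript
// Hypothetical sketch of model-family detection. Order matters: the
// longest prefix must be tested first, or every Codex Max model would
// fall into the plain "codex" family.
type ModelFamily = "codex-max" | "codex" | "gpt-5.1";

function getModelFamily(model: string): ModelFamily {
  if (model.startsWith("gpt-5.1-codex-max")) return "codex-max";
  if (model.startsWith("gpt-5.1-codex") || model.startsWith("codex-")) return "codex";
  return "gpt-5.1"; // everything else gets the full behavioral-guidance prompt
}
```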
🧪 Test Coverage
- 191 unit tests (16 new for model family detection)
- 13 integration tests with family verification
- All tests passing ✅
📝 Full Changelog
See CHANGELOG.md for complete details.
Installation:
```bash
npm install opencode-openai-codex-auth@4.0.0
```
v3.3.0: GPT 5.1 Enforcement + Configuration Standardization
This release enforces GPT 5.1 model identifiers across all configurations and documentation, removes deprecated GPT 5.0 models, and establishes `config/full-opencode.json` as the only officially supported configuration. These changes address the temperamental behavior of GPT 5 models and ensure users have a reliable, tested setup that works consistently with OpenCode features.
🏷️ Model Naming & Deprecation
GPT 5.1 Standardization
Impact: 🟡 MEDIUM - Configuration update required
Changes:
All model identifiers updated to GPT 5.1 naming convention:
- ✅ `gpt-5-codex-low` → `gpt-5.1-codex-low`
- ✅ `gpt-5-codex-medium` → `gpt-5.1-codex-medium`
- ✅ `gpt-5-codex-high` → `gpt-5.1-codex-high`
- ✅ `gpt-5-codex-mini-medium` → `gpt-5.1-codex-mini-medium`
- ✅ `gpt-5-codex-mini-high` → `gpt-5.1-codex-mini-high`
- ✅ `gpt-5-low` → `gpt-5.1-low`
- ✅ `gpt-5-medium` → `gpt-5.1-medium`
- ✅ `gpt-5-high` → `gpt-5.1-high`
File: config/full-opencode.json
Display names updated:
- "GPT 5 Codex Low (OAuth)" → "GPT 5.1 Codex Low (OAuth)"
- All variants now clearly show "5.1" in the TUI
Deprecated Models Removed
Impact: 🔴 HIGH - Breaking change for users on GPT 5.0 models
Removed from config:
- ❌ `gpt-5-minimal` - no longer supported
- ❌ `gpt-5-mini` - no longer supported
- ❌ `gpt-5-nano` - no longer supported
Reason:
OpenAI is phasing out GPT 5.0 models. These models exhibited unreliable behavior and are being replaced by the GPT 5.1 family.
Migration:
Users on deprecated models should switch to:
- `gpt-5-minimal` → `gpt-5.1-low` (similar fast performance)
- `gpt-5-mini` → `gpt-5.1-low` (lightweight reasoning)
- `gpt-5-nano` → `gpt-5.1-low` (minimal reasoning)
File: config/full-opencode.json - Now ships with 8 verified GPT 5.1 variants instead of 11 mixed 5.0/5.1 models
Codex Mini Context Limits Corrected
Impact: 🟢 LOW - Improves accuracy
Problem:
Codex Mini was configured with incorrect context limits (200k/100k), which didn't match actual API specifications.
Fix:
Updated Codex Mini limits to correct values:
- Context: 200k → 272k tokens
- Output: 100k → 128k tokens
Impact:
- ✅ OpenCode now displays accurate token usage for Codex Mini variants
- ✅ Auto-compaction works correctly with proper limits
- ✅ Matches actual API behavior
File: config/full-opencode.json:69-70, 85-86
⚠️ Configuration Enforcement
Full Config Now Required
Impact: 🔴 CRITICAL - Affects all users
What Changed:
The plugin now strongly enforces config/full-opencode.json as the only officially supported configuration.
Why This Matters:
GPT 5 models have proven to be temperamental:
- Some variants work reliably
- Some don't respond correctly
- Some may give errors unexpectedly
The full configuration has been thoroughly tested and verified to work consistently. Minimal configurations lack critical metadata and may fail unpredictably.
Documentation Updates:
README.md:
- Changed "Recommended: Full Configuration" → "⚠️ REQUIRED: Full Configuration (Only Supported Setup)"
- Added explicit warning: "GPT 5 models can be temperamental - some work, some don't, some may error"
- Marked the minimal config section as "❌ NOT RECOMMENDED - DO NOT USE"
- Added a detailed "Why this doesn't work" section explaining:
  - Missing model metadata breaks OpenCode features
  - No support for usage limits or context compaction
  - Cannot guarantee stable operation
docs/getting-started.md:
- Removed "Option B: Minimal Configuration"
- Replaced with "❌ Minimal Configuration (NOT SUPPORTED - DO NOT USE)"
- Added comprehensive warnings about GPT 5 models requiring proper configuration
docs/configuration.md:
- Added warnings throughout about using the official `full-opencode.json`
- Updated "Recommended" → "⚠️ REQUIRED: Use Pre-Configured File"
- Added a migration guide showing the GPT 5.0 → GPT 5.1 upgrade path
config/README.md:
- Complete restructure from "Configuration Examples" → "Configuration"
- Added a "⚠️ REQUIRED Configuration File" section
- Marked `minimal-opencode.json` as NOT SUPPORTED
- Marked `full-opencode-gpt5.json` as DEPRECATED
- Added a clear "❌ Other Configurations (NOT SUPPORTED)" section
Impact:
- ✅ Users get reliable, tested configuration
- ✅ OpenCode features (auto-compaction, usage sidebar) work properly
- ✅ Reduces support issues from misconfiguration
⚠️ Users must migrate from minimal configs to full config
Why Minimal Configs Don't Work
Missing Metadata:
Minimal configs lack per-model limit metadata that OpenCode requires for:
- Token usage display
- Automatic context compaction
- Usage sidebar widgets
GPT 5 Temperamental Behavior:
Without proper configuration:
- Some model variants may fail
- Error messages may be unclear
- Behavior is unpredictable
No Support Guarantee:
The plugin team cannot guarantee stable operation with custom or minimal configs. Only full-opencode.json is tested and verified.
📝 Documentation Overhaul
Comprehensive Warning System
All documentation now includes:
- ⚠️ Prominent warnings about GPT 5 models' temperamental behavior
- 🔴 Clear "DO NOT USE" sections for unsupported configs
- ✅ Migration paths from deprecated models to GPT 5.1
- 📋 Detailed explanations of why full config is required
Files Updated:
- `README.md` - Main plugin documentation
- `docs/getting-started.md` - Installation guide
- `docs/configuration.md` - Configuration reference
- `docs/index.md` - Documentation home
- `config/README.md` - Configuration directory guide
- `AGENTS.md` - Developer agent guidance
Model Variant Table Updated
README.md Model Table:
All 8 GPT 5.1 variants now clearly listed:
| CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
|---|---|---|---|
| `gpt-5.1-codex-low` | GPT 5.1 Codex Low (OAuth) | Low | Fast code generation |
| `gpt-5.1-codex-medium` | GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code tasks |
| `gpt-5.1-codex-high` | GPT 5.1 Codex High (OAuth) | High | Complex code & tools |
| `gpt-5.1-codex-mini-medium` | GPT 5.1 Codex Mini Medium (OAuth) | Medium | Latest Codex mini tier |
| `gpt-5.1-codex-mini-high` | GPT 5.1 Codex Mini High (OAuth) | High | Codex Mini with maximum reasoning |
| `gpt-5.1-low` | GPT 5.1 Low (OAuth) | Low | Faster responses with light reasoning |
| `gpt-5.1-medium` | GPT 5.1 Medium (OAuth) | Medium | Balanced general-purpose tasks |
| `gpt-5.1-high` | GPT 5.1 High (OAuth) | High | Deep reasoning, complex problems |
Added warning:
⚠️ Important: GPT 5 models can be temperamental - some variants may work better than others, some may give errors, and behavior may vary. Stick to the presets above, configured in `full-opencode.json`, for best results.
Usage Examples Updated
All code examples now use GPT 5.1 naming:
```bash
# Use different reasoning levels for gpt-5.1-codex
opencode run "simple task" --model=openai/gpt-5.1-codex-low
opencode run "complex task" --model=openai/gpt-5.1-codex-high

# Use different reasoning levels for gpt-5.1
opencode run "quick question" --model=openai/gpt-5.1-low
opencode run "deep analysis" --model=openai/gpt-5.1-high

# Use Codex Mini variants
opencode run "balanced task" --model=openai/gpt-5.1-codex-mini-medium
opencode run "complex code" --model=openai/gpt-5.1-codex-mini-high
```
Files Updated:
- `README.md` - All examples use 5.1 naming
- `docs/getting-started.md` - Installation examples updated
- `docs/configuration.md` - Configuration examples updated
- `config/README.md` - Usage examples updated
🔧 Technical Improvements
Model Normalization Map
New file: lib/request/helpers/model-map.ts
Purpose:
Centralized model normalization logic for consistent handling of all GPT 5.1 variants.
Features:
- Explicit mapping of all known model variants
- Fallback pattern matching for custom names
- Support for both 5.0 (deprecated) and 5.1 families
- Handles provider prefixes (`openai/gpt-5.1-codex-low`)
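The normalization logic can be sketched as an explicit map plus prefix handling. This is an illustrative subset, not the contents of the real `lib/request/helpers/model-map.ts`:

```typescript
// Hypothetical sketch: explicit legacy → 5.1 mapping with provider-prefix
// stripping. The real table covers more variants.
const MODEL_MAP: Record<string, string> = {
  "gpt-5-codex": "gpt-5.1-codex",
  "gpt-5": "gpt-5.1",
  "gpt-5-mini": "gpt-5.1",
  "gpt-5-nano": "gpt-5.1",
  "codex-mini-latest": "gpt-5.1-codex-mini",
};

function normalizeModel(raw: string): string {
  // Strip a provider prefix like "openai/" before lookup.
  const bare = raw.includes("/") ? raw.split("/").pop()! : raw;
  // Unknown names fall through unchanged (fallback pattern matching
  // in the real implementation handles custom variants).
  return MODEL_MAP[bare] ?? bare;
}
```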
Impact:
- ✅ More maintainable code
- ✅ Easier to add new model variants
- ✅ Clear documentation of supported models
Model Validation Script
New file: scripts/validate-model-map.sh
Purpose:
Automated validation that ensures:
- All models in config are recognized by normalization
- No orphaned model definitions
- Config and code stay in sync
Usage:
```bash
./scripts/validate-model-map.sh
```
Impact:
- ✅ Catches configuration errors early
- ✅ Prevents regression when adding new models
- ✅ Automated quality assurance
Enhanced Test Coverage
File: test/request-transformer.test.ts
New tests added:
- GPT 5.1 model normalization
- Deprecated model handling
- Codex Mini limit verification
- Model variant recognition
Impact:
- ✅ Ensures 5.1 migration works correctly
- ✅ Verifies limit metadata accuracy
- ✅ Prevents regression
📋 CHANGELOG Updates
File: CHANGELOG.md
Added comprehensive entry for v3.3.0 documenting:
- GPT 5.1 standardization
- Deprecated model removal
- Configuration enforcement
- Documentation overhaul
🎯 Verification
Configuration verified:
- ✅ All 8 GPT 5.1 variants defined in `full-opencode.json`
- ✅ Correct context limits (272k/128k) for all models
- ✅ Proper reasoning effort settings per variant
- ✅ Required options (`store: false`, `include: ["reasoning.encrypted_content"]`)
Documentation verified:
- ✅ All examples use GPT 5.1 naming
- ✅ Warnings about temperamental behavior consistent across docs
- ✅ Migration paths from GPT 5.0 to GPT 5.1 clear
- ✅ Full config enf...
v3.1.0 - Support for Codex Mini (medium/high)
To update the plugin (same as README):
```bash
# Clear plugin cache
(cd ~ && sed -i.bak '/"opencode-openai-codex-auth"/d' .cache/opencode/package.json && rm -rf .cache/opencode/node_modules/opencode-openai-codex-auth)

# Restart OpenCode - it will reinstall the latest version
opencode
```
You'll need to add `gpt-5-codex-mini-medium` and `gpt-5-codex-mini-high` presets (200k input / 100k output) to your opencode.json. I recommend copying directly from the README.
v3.0.0 - Prompt caching + usage-aware errors
Highlights
- Host-provided `prompt_cache_key` is now the single source of truth for Codex caching; we only forward headers and body fields when OpenCode supplies them.
- Usage-limit errors mirror the Codex CLI summary (5-hour + weekly windows) so OpenCode users know exactly when quota resets.
- Documentation refreshed: canonical config snippets, auto-compaction caveats, and a CHANGELOG that now tracks every release.
- Security clean-up: pinned `vite` and `electron` to address upstream advisories.
Full Changelog
- Detailed breakdown: https://github.com/numman-ali/opencode-openai-codex-auth/blob/main/CHANGELOG.md#300---2025-11-04
- Compare diff: v2.1.2...v3.0.0
v2.1.2: Compliance Updates + Critical Bug Fixes
This release includes comprehensive OpenAI ToS compliance updates and fixes 4 critical bugs that prevented per-model options and multi-turn conversations from working correctly.
🔒 Compliance & Legal Updates
Terms of Service & Usage Guidelines
This release adds comprehensive compliance documentation to ensure users understand proper usage:
- ⚠️ Terms of Service & Usage Notice - Clear guidance on personal use only
- 📋 Rate Limits & Responsible Use - Best practices for API usage
- ❓ Frequently Asked Questions - 6 common TOS compliance questions
- 📄 LICENSE Update - MIT with Usage Disclaimer for personal development
- 💼 Compliance Header - Added to index.ts documenting intended use
Key Points:
- ✅ For personal development only with your own ChatGPT Plus/Pro subscription
- ✅ Uses OpenAI's official OAuth authentication (same as Codex CLI)
- ❌ NOT for: Commercial resale, multi-user services, or API resale
- ❌ NOT a "free API alternative" - uses your existing subscription
- 📋 Users are responsible for compliance with OpenAI's Terms of Use
New Documentation Files
Compliance & Security:
- `CONTRIBUTING.md` - Contribution guidelines with compliance requirements
- `SECURITY.md` - Security policy, vulnerability reporting, best practices
- `docs/privacy.md` - Comprehensive privacy & data handling documentation
User Guides:
- `docs/getting-started.md` - Complete installation guide with compliance notice
- `docs/configuration.md` - Advanced configuration options
- `docs/troubleshooting.md` - Debug techniques and compliance troubleshooting
Developer Guides:
- `docs/development/ARCHITECTURE.md` - Technical architecture deep dive
- `docs/development/CONFIG_FLOW.md` - Config system internals
- `docs/development/CONFIG_FIELDS.md` - Field usage guide
- `docs/development/TESTING.md` - Testing guide and verification
GitHub Templates:
- `.github/ISSUE_TEMPLATE/bug_report.md` - Bug reports with compliance checklist
- `.github/ISSUE_TEMPLATE/feature_request.md` - Feature requests with compliance confirmation
- `.github/ISSUE_TEMPLATE/config.yml` - Links to OpenAI support and ToS
🐛 Critical Bug Fixes
Bug #1: Per-Model Options Ignored (Config Lookup Mismatch)
Severity: 🔴 HIGH
Problem:
Users configured different reasoningEffort for each model variant (low/medium/high), but all variants behaved identically. The plugin was normalizing model names before config lookup, causing it to miss per-model configurations.
Fix: lib/request/request-transformer.ts:277
- Use original model name for config lookup
- Normalize only for API request
- Per-model options now correctly applied ✅
Impact:
- ✅ Different reasoning levels properly applied per variant
- ✅ Users can customize each model independently
Bug #2: Multi-Turn Conversations Fail (AI SDK Compatibility)
Severity: 🔴 CRITICAL
Problem:
Multi-turn conversations failed with: `AI_APICallError: Item with id 'msg_abc123' not found. Items are not persisted when store is set to false.`
Root Causes:
- The AI SDK sends `item_reference` (not in the Codex API spec)
- IDs weren't completely stripped for stateless mode
- Only `rs_*` IDs were filtered, so `msg_*` and `assistant_*` IDs leaked through
Research:
- Tested `store: true` → the API returned an error (the ChatGPT backend requires `store: false`)
- Codex CLI never sends ANY IDs in stateless mode
- Full message history is needed for LLM context
Fix: lib/request/request-transformer.ts:114-135
- Filter out AI SDK `item_reference` items
- Strip ALL IDs from remaining items (not just `rs_*`)
- Preserve full message history for LLM context
Impact:
- ✅ No more "item not found" errors
- ✅ Multi-turn conversations work flawlessly
- ✅ Full context preserved via `reasoning.encrypted_content`
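The stateless-mode cleanup can be sketched as follows (illustrative names; the actual fix is in `lib/request/request-transformer.ts`):

```typescript
// Hypothetical sketch: drop AI SDK item_reference entries and strip
// every server-assigned id (msg_*, rs_*, assistant_*, ...) while
// keeping the full message history for LLM context.
interface Item {
  type: string;
  id?: string;
  [key: string]: unknown;
}

function sanitizeForStatelessMode(items: Item[]): Item[] {
  return items
    .filter((i) => i.type !== "item_reference") // not in the Codex API spec
    .map((i) => {
      const copy = { ...i };
      delete copy.id; // strip ALL ids, not just rs_*
      return copy;
    });
}
```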
Bug #3: Case-Sensitive Normalization
Severity: 🟡 MEDIUM
Problem:
Old verbose config names like "GPT 5 Codex Low (ChatGPT Subscription)" didn't normalize correctly because pattern matching was case-sensitive.
Fix: lib/request/request-transformer.ts:22-27
- Added `toLowerCase()` for case-insensitive matching
- Handles uppercase/mixed-case user input
Impact:
- ✅ Backwards compatible with old verbose config names
- ✅ Old configs work seamlessly
Bug #4: GitHub API Rate Limiting
Severity: 🟡 MEDIUM
Problem:
Plugin checked GitHub on every request to fetch latest Codex instructions, exhausting the 60 req/hour rate limit during testing.
Fix: lib/prompts/codex.ts:50-53, lib/prompts/opencode-codex.ts:47-50
- Added 15-minute cache TTL check
- Only fetches if cache is stale
Impact:
- ✅ Prevents rate limit exhaustion
- ✅ Faster plugin initialization
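The TTL guard described in Bug #4 can be sketched like this (illustrative names and shape, not the plugin's actual cache structure):

```typescript
// Hypothetical sketch of the 15-minute TTL check around the GitHub fetch:
// only refetch Codex instructions when the cached copy is stale.
const CACHE_TTL_MS = 15 * 60 * 1000;

interface InstructionCache {
  fetchedAt: number; // epoch milliseconds of the last successful fetch
  content: string;
}

function isStale(cache: InstructionCache | null, now: number): boolean {
  return cache === null || now - cache.fetchedAt > CACHE_TTL_MS;
}
```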
✨ Enhancements
Debug Logging System
New environment variable for troubleshooting:
```bash
DEBUG_CODEX_PLUGIN=1 opencode run "test"
```
Shows:
- Config lookups and option resolution
- Message ID filtering
- Model normalization
- Per-model vs global options
New functions: `logDebug()` and `logWarn()` in `lib/logger.ts`
Optimized Config Structure
New Recommended Config:
```json
{
  "models": {
    "gpt-5-codex-low": {
      "name": "GPT 5 Codex Low (OAuth)",
      "options": { "reasoningEffort": "low" }
    }
  }
}
```
Improvements:
- ✅ Matches Codex CLI official preset IDs
- ✅ Cleaner CLI usage: `--model=openai/gpt-5-codex-low`
- ✅ Removed the redundant `id` field
- ✅ Added a `name` field for TUI display
File: config/full-opencode.json
Note: Old config format still fully supported (100% backwards compatible)
Automated Testing
Created scripts/test-all-models.sh for automated integration testing:
- Tests all 9 custom model configurations
- Tests all 4 default OpenCode models
- Verifies backwards compatibility
- Uses `ENABLE_PLUGIN_REQUEST_LOGGING` to verify actual API requests
🧪 Testing
Test Coverage
- ✅ 159 unit tests passing (30+ new tests added)
- ✅ 14 integration tests passing (actual API verification)
- ✅ All documented scenarios have test coverage
Integration Test Results
✅ 14/14 model configuration tests passing
✅ All 9 custom models verified
✅ All 4 default models verified
✅ Backwards compatibility confirmed
✅ Multi-turn conversations tested successfully
📝 Breaking Changes
None! This release is 100% backwards compatible.
Old configs continue to work:
- Verbose names like `"GPT 5 Codex Low (ChatGPT Subscription)"` ✅
- Configs with an `id` field ✅
- All existing configurations ✅
📦 What's Changed
Modified (9 files, +600 lines)
- `lib/request/request-transformer.ts` - Config lookup & ID filtering fixes
- `lib/prompts/codex.ts` - 15-minute cache TTL
- `lib/prompts/opencode-codex.ts` - 15-minute cache TTL
- `lib/logger.ts` - Debug logging system
- `config/full-opencode.json` - Optimized structure
- `test/request-transformer.test.ts` - 30+ new tests
- `LICENSE` - Added usage disclaimer
- `index.ts` - Added compliance header
- `README.md` - Added ToS, Rate Limits, FAQ sections
Created (20 files, +4,500 lines)
- Complete documentation suite (user + developer guides)
- Privacy, security, and contribution policies
- GitHub issue templates
- Automated testing script
🎯 Verification
All scenarios tested and verified:
- ✅ Default models work without custom config
- ✅ Custom models with per-variant options
- ✅ Old config format (backwards compatible)
- ✅ Multi-turn conversations (no errors)
- ✅ Model switching with different options
- ✅ Complete ID filtering
- ✅ 100% backwards compatibility
📚 Documentation
Complete documentation now available:
- User Guides: Installation, configuration, troubleshooting
- Developer Guides: Architecture, config system, testing
- Policies: Privacy, security, contribution guidelines
- GitHub Pages: https://numman-ali.github.io/opencode-openai-codex-auth/
🔗 Important Links
- OpenAI Terms of Use: https://openai.com/policies/terms-of-use/
- OpenAI Usage Policies: https://openai.com/policies/usage-policies/
- OpenAI Platform API: https://platform.openai.com/ (for production use)
- OpenAI Codex CLI: https://github.com/openai/codex
⚠️ Usage Reminder
This plugin is for personal development use only with your own ChatGPT Plus/Pro subscription. It uses OpenAI's official OAuth authentication (same as Codex CLI).
Not for: Commercial services, API resale, or multi-user applications.
For production applications, use the OpenAI Platform API.
🚀 Upgrade Instructions
Update the Plugin
OpenCode does NOT auto-update plugins. To get v2.1.2:
```bash
# Clear plugin cache
(cd ~ && sed -i.bak '/"opencode-openai-codex-auth"/d' .cache/opencode/package.json && rm -rf .cache/opencode/node_modules/opencode-openai-codex-auth)

# Restart OpenCode - it will reinstall the latest version
opencode
```
Optional: Migrate Config
Your existing config continues to work, but you can optionally migrate to the cleaner structure:
Before:
```json
{
  "GPT 5 Codex Low (ChatGPT Subscription)": {
    "id": "gpt-5-codex",
    "options": { "reasoningEffort": "low" }
  }
}
```
After:
```json
{
  "gpt-5-codex-low": {
    "name": "GPT 5 Codex Low (OAuth)",
    "options": { "reasoningEffort": "low" }
  }
}
```
🎉 What's Next
This release completes the v2.1.x series with comprehensive bug fixes, compliance coverage, and documentation.
Coming in future releases:
- Additional model presets
- Enhanced debugging capabilities
- Performance optimizations
Full Changelog: v2.1.1...v2.1.2
v2.1.1 - Fix cache clear command
🐛 Bug Fix
Fixed cache clear command causing directory issues
Updated the plugin cache clearing command in README to run in a subshell, preventing directory navigation issues when users execute it from within the cache folder being deleted.
Before:
```bash
sed -i.bak '/"opencode-openai-codex-auth"/d' ~/.cache/opencode/package.json && rm -rf ~/.cache/opencode/node_modules/opencode-openai-codex-auth
```
After:
```bash
(cd ~ && sed -i.bak '/"opencode-openai-codex-auth"/d' .cache/opencode/package.json && rm -rf .cache/opencode/node_modules/opencode-openai-codex-auth)
```
The command now:
- Runs in a subshell to avoid affecting the user's current directory
- Changes to home directory first
- Uses relative paths from home
- Returns to original directory after completion
📦 Installation
```json
"plugin": ["opencode-openai-codex-auth"]
```
Full Changelog: v2.1.0...v2.1.1
v2.1.0 - Task Tool & MCP Awareness with ETag-Based Prompt Verification
🎯 What's New
This release enhances the CODEX_MODE bridge prompt with awareness of OpenCode's advanced capabilities and implements a robust cache-based verification system for more reliable prompt filtering.
✨ New Features
🔧 Task Tool Awareness
The bridge prompt now documents OpenCode's sub-agent system, making the model aware it can delegate complex work to specialized agents:
- Explains Task tool availability and dynamic agent types
- Provides guidance on when to use sub-agents (complex analysis, isolated context)
- References tool schema for current agent capabilities
🔌 MCP Tool Awareness
Documents Model Context Protocol tool naming conventions so the model knows these capabilities exist:
- Explains the `mcp__<server>__<tool>` prefix convention
- Encourages discovery of available MCP integrations
- Promotes usage when functionality matches task needs
✅ ETag-Based OpenCode Prompt Verification
Implements robust caching system for OpenCode's codex.txt to ensure 100% accurate prompt filtering:
- Fetches OpenCode's official codex.txt from GitHub
- Uses HTTP conditional requests (ETag) for efficient updates
- Dual verification: cached content match + text signature fallback
- Prevents accidentally filtering custom AGENTS.md content
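The conditional-request flow can be sketched as follows; the function names here are illustrative, and the actual logic lives in `lib/prompts/opencode-codex.ts`:

```typescript
// Hypothetical sketch of the ETag flow: send If-None-Match when a cached
// copy exists; a 304 response means the cached body is still current.
interface EtagCache {
  etag: string;
  body: string;
}

function conditionalHeaders(cache: EtagCache | null): Record<string, string> {
  return cache ? { "If-None-Match": cache.etag } : {};
}

function resolveBody(
  status: number,
  freshBody: string | null,
  cache: EtagCache | null,
): string {
  if (status === 304 && cache) return cache.body; // upstream unchanged
  if (status === 200 && freshBody !== null) return freshBody; // new content
  throw new Error(`unexpected response: ${status}`);
}
```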
📚 AGENTS.md (formerly CLAUDE.md)
Renamed development guide to be applicable to all AI agents:
- Updated header to reference Claude Code, Codex, and other AI agents
- Comprehensive coding guidance for agent-assisted development
- Documents 7-step fetch flow, module organization, and key design patterns
🔨 Technical Improvements
New Architecture:
- Made request transformation pipeline async to support cache fetching
- Created `lib/prompts/opencode-codex.ts` for prompt verification (109 lines)
- Enhanced `isOpenCodeSystemPrompt()` with a dual verification system
Bridge Prompt Update:
- Added "Advanced Tools" section with Task tool and MCP documentation
- Increased from ~450 tokens to ~550 tokens
- Better model awareness of available capabilities
Test Coverage:
- Added 6 new tests verifying AGENTS.md content safety
- Total: 129 comprehensive tests (was 123)
- All tests passing ✅
📋 What's Changed
New Files:
- `lib/prompts/opencode-codex.ts` - OpenCode prompt verification cache
Updated Files:
- `lib/prompts/codex-opencode-bridge.ts` - Added Advanced Tools section
- `lib/request/request-transformer.ts` - Enhanced prompt detection
- `lib/request/fetch-helpers.ts` - Made transformation async
- `index.ts` - Await async transformation
- `AGENTS.md` - Renamed from CLAUDE.md
- `README.md` - Updated test count, added Task/MCP mentions
- `.gitignore` - Added opencode.json and .opencode/
- `package.json` - Version bump to 2.1.0
- `test/request-transformer.test.ts` - Added 6 tests, async updates
🎁 Benefits
- Better Task Delegation - Model knows it can use Task tool for specialized work
- MCP Discoverability - Model aware of MCP tools and naming conventions
- Robust Filtering - Cache-based verification prevents false positives
- Future-Proof - Automatically updates when OpenCode changes their prompt
- Broader Applicability - AGENTS.md guidance applies to all AI agents
📦 Installation
```bash
# In your opencode config, use:
"plugin": ["opencode-openai-codex-auth"]

# Or upgrade from a previous version:
sed -i.bak '/"opencode-openai-codex-auth"/d' ~/.cache/opencode/package.json && rm -rf ~/.cache/opencode/node_modules/opencode-openai-codex-auth
```
⚠️ Breaking Changes
None - all changes are additive and backward compatible.
📊 Stats
- 10 files changed: +652 insertions, -671 deletions
- 129 tests: All passing ✅
- Bridge prompt: ~550 tokens (was ~450)
- Test coverage: Comprehensive including AGENTS.md safety verification
🔗 Full Changelog
See PR #15 for complete technical details.
Full Changelog: v2.0.0...v2.1.0
v2.0.0 - TypeScript Rewrite + CODEX_MODE
🚀 Version 2.0.0 - Major Release
This major release includes a complete TypeScript rewrite with enhanced configuration, CODEX_MODE for better Codex CLI parity, and comprehensive library reorganization.
✨ Highlights
🎯 CODEX_MODE (PR #10)
New configurable mode for better Codex CLI compatibility
- Enabled by default - Provides optimal Codex CLI experience out of the box
- Bridge Prompt - ~450 tokens (~90% reduction vs full OpenCode prompt)
- Configurable - via `~/.opencode/openai-codex-auth-config.json` or the `CODEX_MODE` env var
- Smart Tool Mapping - critical tool replacements, verification checklist, working style guidelines
Configuration:
```jsonc
{
  "codexMode": true // default
}
```
Environment Override:
```bash
CODEX_MODE=0 opencode run "task" # Disable temporarily
CODEX_MODE=1 opencode run "task" # Enable temporarily
```
🔄 Complete TypeScript Rewrite (PR #9)
Modern, type-safe, well-tested codebase
- Strict TypeScript - Full type safety with comprehensive type definitions
- 123 Tests - Comprehensive test coverage (up from 0)
- Modular Architecture - 10 focused helper functions, semantic folder structure
- Enhanced Configuration - User-configurable reasoning, summaries, and verbosity
- Animated OAuth UI - Beautiful success page with matrix rain effects
📦 What's New
CODEX_MODE Features
- ✅ Codex-OpenCode bridge prompt for CLI parity
- ✅ Configurable via config file or environment variable
- ✅ Priority: env var > config file > default (true)
- ✅ OpenCode system prompt filtering
- ✅ Tool name confusion prevention
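The documented priority (env var > config file > default `true`) can be sketched as a small resolver; `resolveCodexMode` is an illustrative name, not the plugin's actual export:

```typescript
// Hypothetical sketch of CODEX_MODE resolution: the CODEX_MODE
// environment variable wins, then the config-file value, then the
// default of true.
function resolveCodexMode(
  env: string | undefined, // process.env.CODEX_MODE
  configValue: boolean | undefined, // "codexMode" from the config file
): boolean {
  if (env === "0") return false;
  if (env === "1") return true;
  return configValue ?? true; // enabled by default
}
```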
TypeScript Rewrite
- ✅ Complete migration from .mjs to .ts
- ✅ Strict type checking with comprehensive interfaces
- ✅ 123 comprehensive tests with Vitest
- ✅ Modular helper functions (all < 40 lines)
- ✅ Enhanced error handling and logging
Enhanced Configuration
- ✅ 9 pre-configured model variants matching Codex CLI
- ✅ User-configurable reasoning - effort, summary, verbosity
- ✅ Provider-level and model-level options - Fine-grained control
- ✅ Plugin configuration support - `~/.opencode/openai-codex-auth-config.json`
Improved OAuth Flow
- ✅ Animated OAuth success page with matrix rain
- ✅ Better error handling and user feedback
- ✅ Automatic token refresh
📁 Library Reorganization
```
lib/
├── auth/                  # OAuth authentication
│   ├── auth.ts
│   ├── browser.ts
│   └── server.ts
├── prompts/               # System prompts
│   ├── codex.ts
│   └── codex-opencode-bridge.ts
├── request/               # Request handling
│   ├── fetch-helpers.ts
│   ├── request-transformer.ts
│   └── response-handler.ts
├── config.ts              # Plugin configuration
├── constants.ts
├── logger.ts
├── types.ts
└── oauth-success.html
```
🧪 Testing
- 123 total tests (all passing ✅)
- 8 test files covering all functionality
- 12 new tests for CODEX_MODE configuration
- Zero test regressions
Test suites:
- ✅ Authentication & OAuth
- ✅ Browser utilities
- ✅ Configuration parsing
- ✅ Plugin configuration
- ✅ Fetch helpers
- ✅ Logger
- ✅ Request transformation
- ✅ Response handling
📚 Documentation
README Updates
- ✅ Complete rewrite with better organization
- ✅ Plugin configuration section
- ✅ CODEX_MODE documentation
- ✅ Installation guide (full vs minimal config)
- ✅ Configuration examples
- ✅ Model variants table
- ✅ Troubleshooting guide
Configuration Files
- ✅ `config/full-opencode.json` - 9 model variants (recommended)
- ✅ `config/minimal-opencode.json` - Minimal setup
- ✅ `config/README.md` - Configuration guide
🔧 Configuration
Recommended: Full Configuration
For the complete experience with all reasoning variants:
```jsonc
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-openai-codex-auth"],
  "model": "openai/gpt-5-codex",
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "medium",
        "reasoningSummary": "auto",
        "textVerbosity": "medium",
        "include": ["reasoning.encrypted_content"]
      },
      "models": {
        "GPT 5 Codex Low": { ... },
        "GPT 5 Codex Medium": { ... },
        "GPT 5 Codex High": { ... }
        // + 6 more variants
      }
    }
  }
}
```
See config/full-opencode.json for the complete configuration.
Plugin Configuration
Create ~/.opencode/openai-codex-auth-config.json:
```json
{
  "codexMode": true
}
```
⚠️ Breaking Changes
CODEX_MODE Enabled by Default
- Impact: Users now get the Codex-OpenCode bridge prompt instead of tool remap message
- Benefit: Better Codex CLI parity, fewer tool confusion errors
- Migration: No action needed - works better by default
- To disable: set `{ "codexMode": false }` in the config file, or use the `CODEX_MODE=0` environment variable
TypeScript Migration
- Impact: Distributed files are now .js (compiled from .ts) instead of .mjs
- Benefit: Better IDE support, type safety, source maps
- Migration: No action needed - imports remain compatible
Node.js 20+ Required
- Impact: Requires Node.js >= 20.0.0
- Benefit: Better performance, modern features
- Migration: Upgrade Node.js if using older version
📊 Stats
- 19 files changed (+717, -35)
- 123 tests (all passing)
- 2 major PRs (#9, #10)
- ~450 token bridge prompt (90% reduction)
- 9 model variants in full config
🙏 Credits
Built with ❤️ using Claude Code
📝 Upgrade Notes
From v1.0.x
1. Update the plugin reference: `{ "plugin": ["opencode-openai-codex-auth@2.0.0"] }`
2. Optional - use the full configuration: copy `config/full-opencode.json` into your opencode config to get 9 pre-configured model variants
3. Optional - configure CODEX_MODE: it is enabled by default (recommended); to disable, create `~/.opencode/openai-codex-auth-config.json` with `{ "codexMode": false }`
4. Verify your Node.js version: `node --version` (should be >= 20.0.0)
🔗 Links
- PR #9: TypeScript Rewrite - #9
- PR #10: CODEX_MODE - #10
- Full Changelog: v1.0.4...v2.0.0