9 changes: 5 additions & 4 deletions AGENTS.md
@@ -4,7 +4,7 @@ This file provides coding guidance for AI agents (including Claude Code, Codex,

## Overview

This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5.1-codex`, `gpt-5.1-codex-mini`, `gpt-5-codex`, `gpt-5-codex-mini`, `gpt-5.1`, and `gpt-5` models through their ChatGPT subscription instead of using OpenAI Platform API credits.
This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`, `gpt-5-codex`, `gpt-5-codex-mini`, `gpt-5.1`, and `gpt-5` models through their ChatGPT subscription instead of using OpenAI Platform API credits.

**Key architecture principle**: 7-step fetch flow that intercepts opencode's OpenAI SDK requests, transforms them for the ChatGPT backend API, and handles OAuth token management.

@@ -41,7 +41,7 @@ The main entry point orchestrates a **7-step fetch flow**:
1. **Token Management**: Check token expiration, refresh if needed
2. **URL Rewriting**: Transform OpenAI Platform API URLs → ChatGPT backend API (`https://chatgpt.com/backend-api/codex/responses`)
3. **Request Transformation**:
- Normalize model names (all variants → `gpt-5.1`, `gpt-5.1-codex`, `gpt-5.1-codex-mini`, `gpt-5`, `gpt-5-codex`, or `codex-mini-latest`)
- Normalize model names (all variants → `gpt-5.1`, `gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`, `gpt-5`, `gpt-5-codex`, or `codex-mini-latest`)
- Inject Codex system instructions from latest GitHub release
- Apply reasoning configuration (effort, summary, verbosity)
- Add CODEX_MODE bridge prompt (default) or tool remap message (legacy)
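
Taken together, steps 1-3 behave like a thin wrapper around `fetch`. The sketch below is illustrative only, assuming hypothetical helpers (`refreshTokenIfNeeded`, `normalizeModel`, `getCodexInstructions`) and a simplified request payload; it is not the plugin's actual implementation.

```typescript
// Illustrative sketch of steps 1-3 (helper names and payload shape are assumptions).
declare function refreshTokenIfNeeded(): Promise<string>;   // hypothetical token helper
declare function normalizeModel(model: string): string;     // hypothetical slug normalizer
declare function getCodexInstructions(): Promise<string>;   // hypothetical cached instructions fetch

const CODEX_URL = "https://chatgpt.com/backend-api/codex/responses";

async function codexFetch(_input: RequestInfo, init: RequestInit = {}): Promise<Response> {
  // 1. Token management: refresh the OAuth access token if it is near expiry.
  const token = await refreshTokenIfNeeded();

  // 2. URL rewriting: Platform API URL -> ChatGPT backend API URL.
  const url = CODEX_URL;

  // 3. Request transformation: normalize the model slug, inject Codex
  //    instructions, and apply the reasoning configuration.
  const body = JSON.parse(String(init.body ?? "{}"));
  body.model = normalizeModel(body.model ?? "gpt-5.1-codex");
  body.instructions = await getCodexInstructions();
  body.reasoning = { effort: body.reasoning?.effort ?? "medium", summary: "auto" };

  const headers = new Headers(init.headers);
  headers.set("Authorization", `Bearer ${token}`);
  return fetch(url, { ...init, method: "POST", headers, body: JSON.stringify(body) });
}
```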
@@ -98,13 +98,14 @@ The main entry point orchestrates a **7-step fetch flow**:
- Plugin defaults: `reasoningEffort: "medium"`, `reasoningSummary: "auto"`, `textVerbosity: "medium"`

**4. Model Normalization**:
- All `gpt-5.1-codex-max*` variants → `gpt-5.1-codex-max`
- All `gpt-5.1-codex*` variants → `gpt-5.1-codex`
- All `gpt-5.1-codex-mini*` variants → `gpt-5.1-codex-mini`
- All `gpt-5-codex` variants → `gpt-5-codex`
- All `gpt-5-codex-mini*` or `codex-mini-latest` variants → `codex-mini-latest`
- All `gpt-5.1` variants → `gpt-5.1`
- All `gpt-5` variants → `gpt-5`
- `minimal` effort auto-normalized to `low` for gpt-5-codex (API limitation) and clamped to `medium` (or `high` when requested) for Codex Mini
- `minimal` effort auto-normalized to `low` for Codex families and clamped to `medium` (or `high` when requested) for Codex Mini
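
A minimal sketch of these rules, assuming plain prefix matching (the plugin's real matcher may differ):

```typescript
// Hypothetical restatement of the normalization table above. Order matters:
// the longer codex-max and codex-mini prefixes must be checked before the
// plain codex prefix, and anything else falls back to the gpt-5 family.
function normalizeModel(model: string): string {
  const m = model.toLowerCase();
  if (m.startsWith("gpt-5.1-codex-max")) return "gpt-5.1-codex-max";
  if (m.startsWith("gpt-5.1-codex-mini")) return "gpt-5.1-codex-mini";
  if (m.startsWith("gpt-5.1-codex")) return "gpt-5.1-codex";
  if (m.startsWith("gpt-5-codex-mini") || m.startsWith("codex-mini")) return "codex-mini-latest";
  if (m.startsWith("gpt-5-codex")) return "gpt-5-codex";
  if (m.startsWith("gpt-5.1")) return "gpt-5.1";
  return "gpt-5";
}
```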

**5. Codex Instructions Caching**:
- Fetches from latest release tag (not main branch)
@@ -150,7 +151,7 @@ This plugin **intentionally differs from opencode defaults** because it accesses

| Setting | opencode Default | This Plugin Default | Reason |
|---------|-----------------|---------------------|--------|
| `reasoningEffort` | "high" (gpt-5) | "medium" | Matches Codex CLI default |
| `reasoningEffort` | "high" (gpt-5) | "medium" (Codex Max defaults to "high") | Matches Codex CLI default and Codex Max capabilities |
| `textVerbosity` | "low" (gpt-5) | "medium" | Matches Codex CLI default |
| `reasoningSummary` | "detailed" | "auto" | Matches Codex CLI default |
| gpt-5-codex config | (excluded) | Full support | opencode excludes gpt-5-codex from auto-config |
9 changes: 9 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,15 @@

All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD).

## [3.3.0] - 2025-11-19
### Added
- GPT 5.1 Codex Max support: normalization, per-model defaults, and new presets (`gpt-5.1-codex-max`, `gpt-5.1-codex-max-xhigh`) with extended reasoning options (including `none`/`xhigh`) while keeping the 272k context / 128k output limits.
- Typing and config support for new reasoning options (`none`/`xhigh`, summary `off`/`on`) plus updated test matrix entries.

### Changed
- Codex Mini clamping now downgrades unsupported `xhigh` to `high` and guards against `none`/`minimal` inputs.
- Documentation, config guides, and validation scripts now reflect 13 verified GPT 5.1 variants (3 codex, 5 codex-max, 2 codex-mini, 3 general), including Codex Max. See README for details on pre-configured variants.

## [3.2.0] - 2025-11-14
### Added
- GPT 5.1 model family support: normalization for `gpt-5.1`, `gpt-5.1-codex`, and `gpt-5.1-codex-mini` plus new GPT 5.1-only presets in the canonical `config/full-opencode.json`.
117 changes: 106 additions & 11 deletions README.md
@@ -33,7 +33,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
## Features

- **ChatGPT Plus/Pro OAuth authentication** - Use your existing subscription
- **8 pre-configured GPT 5.1 variants** - GPT 5.1, GPT 5.1 Codex, and GPT 5.1 Codex Mini presets for common reasoning levels
- **13 pre-configured GPT 5.1 variants** - GPT 5.1, GPT 5.1 Codex, GPT 5.1 Codex Max, and GPT 5.1 Codex Mini presets for common reasoning levels (including `gpt-5.1-codex-max` and `gpt-5.1-codex-max-low/medium/high/xhigh`)
- ⚠️ **GPT 5.1 only** - Older GPT 5.0 models are deprecated and may not work reliably
- **Zero external dependencies** - Lightweight with only @openauthjs/openauth
- **Auto-refreshing tokens** - Handles token expiration automatically
@@ -130,6 +130,86 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
"store": false
}
},
"gpt-5.1-codex-max": {
"name": "GPT 5.1 Codex Max (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-max-low": {
"name": "GPT 5.1 Codex Max Low (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"options": {
"reasoningEffort": "low",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-max-medium": {
"name": "GPT 5.1 Codex Max Medium (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-max-high": {
"name": "GPT 5.1 Codex Max High (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-max-xhigh": {
"name": "GPT 5.1 Codex Max Extra High (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"options": {
"reasoningEffort": "xhigh",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-mini-medium": {
"name": "GPT 5.1 Codex Mini Medium (OAuth)",
"limit": {
@@ -219,8 +299,9 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
**Global config**: `~/.config/opencode/opencode.json`
**Project config**: `<project>/.opencode.json`

This gives you 8 GPT 5.1 variants with different reasoning levels:
This gives you 13 GPT 5.1 variants with different reasoning levels:
- **gpt-5.1-codex** (low/medium/high) - Latest Codex model presets
- **gpt-5.1-codex-max** (low/medium/high/xhigh) - Codex Max presets (`gpt-5.1-codex-max-low/medium/high/xhigh`)
- **gpt-5.1-codex-mini** (medium/high) - Latest Codex mini tier presets
- **gpt-5.1** (low/medium/high) - Latest general-purpose reasoning presets

Expand Down Expand Up @@ -293,6 +374,8 @@ If using the full configuration, select from the model picker in opencode, or sp
# Use different reasoning levels for gpt-5.1-codex
opencode run "simple task" --model=openai/gpt-5.1-codex-low
opencode run "complex task" --model=openai/gpt-5.1-codex-high
opencode run "large refactor" --model=openai/gpt-5.1-codex-max-high
opencode run "research-grade analysis" --model=openai/gpt-5.1-codex-max-xhigh

# Use different reasoning levels for gpt-5.1
opencode run "quick question" --model=openai/gpt-5.1-low
@@ -312,6 +395,11 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t
| `gpt-5.1-codex-low` | GPT 5.1 Codex Low (OAuth) | Low | Fast code generation |
| `gpt-5.1-codex-medium` | GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code tasks |
| `gpt-5.1-codex-high` | GPT 5.1 Codex High (OAuth) | High | Complex code & tools |
| `gpt-5.1-codex-max` | GPT 5.1 Codex Max (OAuth) | High | Default Codex Max preset with large-context support |
| `gpt-5.1-codex-max-low` | GPT 5.1 Codex Max Low (OAuth) | Low | Fast exploratory large-context work |
| `gpt-5.1-codex-max-medium` | GPT 5.1 Codex Max Medium (OAuth) | Medium | Balanced large-context builds |
| `gpt-5.1-codex-max-high` | GPT 5.1 Codex Max High (OAuth) | High | Long-horizon builds, large refactors |
| `gpt-5.1-codex-max-xhigh` | GPT 5.1 Codex Max Extra High (OAuth) | xHigh | Deep multi-hour agent loops, research/debug marathons |
| `gpt-5.1-codex-mini-medium` | GPT 5.1 Codex Mini Medium (OAuth) | Medium | Latest Codex mini tier |
| `gpt-5.1-codex-mini-high` | GPT 5.1 Codex Mini High (OAuth) | High | Codex Mini with maximum reasoning |
| `gpt-5.1-low` | GPT 5.1 Low (OAuth) | Low | Faster responses with light reasoning |
@@ -322,6 +410,8 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t
**Display**: TUI shows the friendly name (e.g., "GPT 5.1 Codex Low (OAuth)")

> **Note**: All `gpt-5.1-codex-mini*` presets map directly to the `gpt-5.1-codex-mini` slug with standard Codex limits (272k context / 128k output).
>
> **Note**: Codex Max presets use the `gpt-5.1-codex-max` slug with 272k context and 128k output. Use `gpt-5.1-codex-max-low/medium/high/xhigh` to pick the reasoning level (only `-xhigh` uses `xhigh` reasoning).
>
> **⚠️ Important**: GPT 5 models can be temperamental - some variants may work better than others, some may give errors, and behavior may vary. Stick to the presets above configured in `full-opencode.json` for best results.
@@ -357,14 +447,16 @@ When no configuration is specified, the plugin uses these defaults for all GPT-5
- **`reasoningSummary: "auto"`** - Automatically adapts summary verbosity
- **`textVerbosity: "medium"`** - Balanced output length

Codex Max defaults to `reasoningEffort: "high"` when selected, while other families default to `medium`.

These defaults match the official Codex CLI behavior and can be customized (see Configuration below).
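
As a compact picture of how those defaults apply per family, here is a sketch (constant and function names are illustrative, not the plugin's exports):

```typescript
// Hypothetical view of the plugin-wide defaults plus the Codex Max override.
const PLUGIN_DEFAULTS = {
  reasoningEffort: "medium",
  reasoningSummary: "auto",
  textVerbosity: "medium",
} as const;

function defaultReasoningEffort(normalizedModel: string): "medium" | "high" {
  // Codex Max starts at "high"; every other family keeps the global "medium".
  return normalizedModel === "gpt-5.1-codex-max" ? "high" : PLUGIN_DEFAULTS.reasoningEffort;
}
```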

## Configuration

### ⚠️ REQUIRED: Use Pre-Configured File

**YOU MUST use [`config/full-opencode.json`](./config/full-opencode.json)** - this is the only officially supported configuration:
- 8 pre-configured GPT 5.1 model variants with verified settings
- 13 pre-configured GPT 5.1 model variants with verified settings
- Optimal configuration for each reasoning level
- All variants visible in the opencode model selector
- Required metadata for OpenCode features to work properly
Expand All @@ -379,16 +471,19 @@ If you want to customize settings yourself, you can configure options at provide

#### Available Settings

⚠️ **Important**: The two base models have different supported values.
⚠️ **Important**: Each model family supports a different set of values.

| Setting | GPT-5 Values | GPT-5-Codex Values | Plugin Default |
|---------|-------------|-------------------|----------------|
| `reasoningEffort` | `minimal`, `low`, `medium`, `high` | `low`, `medium`, `high` | `medium` |
| `reasoningSummary` | `auto`, `detailed` | `auto`, `detailed` | `auto` |
| `textVerbosity` | `low`, `medium`, `high` | `medium` only | `medium` |
| `include` | Array of strings | Array of strings | `["reasoning.encrypted_content"]` |
| Setting | GPT-5 / GPT-5.1 Values | GPT-5.1-Codex Values | GPT-5.1-Codex-Max Values | Plugin Default |
|---------|-----------------------|----------------------|---------------------------|----------------|
| `reasoningEffort` | `minimal`, `low`, `medium`, `high` | `low`, `medium`, `high` | `none`, `low`, `medium`, `high`, `xhigh` | `medium` (global), `high` default for Codex Max |
| `reasoningSummary` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed`, `off`, `on` | `auto` |
| `textVerbosity` | `low`, `medium`, `high` | `medium` or `high` | `medium` or `high` | `medium` |
| `include` | Array of strings | Array of strings | Array of strings | `["reasoning.encrypted_content"]` |

> **Note**: `minimal` effort is auto-normalized to `low` for gpt-5-codex (not supported by the API).
> **Notes**:
> - `minimal` effort is auto-normalized to `low` for Codex models.
> - Codex Mini clamps to `medium`/`high`; `xhigh` downgrades to `high`.
> - Codex Max supports `none`/`xhigh` plus extended reasoning options while keeping the same 272k context / 128k output limits.
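
These clamping rules can be pictured as one small step after normalization. The sketch below is illustrative, assumes already-normalized family names, and infers the handling of `low` on Codex Mini from the clamp-to-`medium` rule rather than documented behavior:

```typescript
// Illustrative per-family effort clamping (not the plugin's actual export).
type Effort = "none" | "minimal" | "low" | "medium" | "high" | "xhigh";
type Family = "gpt-5" | "gpt-5.1" | "codex" | "codex-max" | "codex-mini";

function clampEffort(family: Family, effort: Effort): Effort {
  if (family === "codex-max") return effort;                     // none..xhigh all pass through
  if (family === "codex-mini") {
    if (effort === "xhigh" || effort === "high") return "high";  // xhigh downgrades to high
    return "medium";                                             // none/minimal/low clamp to medium (low is assumed)
  }
  if (family === "codex") {
    return effort === "minimal" ? "low" : effort;                // minimal is not supported by the API
  }
  return effort;                                                 // gpt-5 / gpt-5.1 accept minimal..high as-is
}
```
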
#### Global Configuration Example

8 changes: 4 additions & 4 deletions config/README.md
@@ -14,15 +14,15 @@ cp config/full-opencode.json ~/.config/opencode/opencode.json

**Why this is required:**
- GPT 5 models can be temperamental and need proper configuration
- Contains 8 verified GPT 5.1 model variants (Codex, Codex Mini, and general GPT 5.1)
- Contains 13 verified GPT 5.1 model variants (Codex, Codex Max including `gpt-5.1-codex-max-low/medium/high/xhigh`, Codex Mini, and general GPT 5.1)
- Includes all required metadata for OpenCode features
- Guaranteed to work reliably
- Global options for all models + per-model configuration overrides

**What's included:**
- All supported GPT 5.1 variants: gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-mini
- Proper reasoning effort settings for each variant
- Context limits (272k context / 128k output)
- All supported GPT 5.1 variants: gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini
- Proper reasoning effort settings for each variant (including new `xhigh` for Codex Max)
- Context limits (272k context / 128k output for all Codex families, including Codex Max)
- Required options: `store: false`, `include: ["reasoning.encrypted_content"]`

### ❌ Other Configurations (NOT SUPPORTED)
82 changes: 81 additions & 1 deletion config/full-opencode.json
@@ -63,6 +63,86 @@
"store": false
}
},
"gpt-5.1-codex-max": {
"name": "GPT 5.1 Codex Max (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
Comment on lines +66 to +81

**Copilot AI** (Nov 20, 2025): Inconsistent output limit for gpt-5.1-codex-max. The documentation in README.md and docs/getting-started.md specifies 400000 for the base gpt-5.1-codex-max preset, indicating expanded output capacity, but this configuration file uses 128000. This should be updated to 400000 to match the documented capabilities and the other configuration files.

**Owner**: The correct value is 272000. OpenAI's context of 400000 comes from the combination of input and output. See openai/codex#4728.
"gpt-5.1-codex-max-low": {
"name": "GPT 5.1 Codex Max Low (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"options": {
"reasoningEffort": "low",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-max-medium": {
"name": "GPT 5.1 Codex Max Medium (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-max-high": {
"name": "GPT 5.1 Codex Max High (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-max-xhigh": {
"name": "GPT 5.1 Codex Max Extra High (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"options": {
"reasoningEffort": "xhigh",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
Comment on lines +130 to +145

**Copilot AI** (Nov 20, 2025): Inconsistent output limit for gpt-5.1-codex-max-xhigh. The documentation in README.md and docs/getting-started.md specifies 400000 for the xhigh preset, indicating it should have the expanded output window, but this configuration file uses 128000. This should be updated to 400000 to match the documented capabilities and the other configuration files.

**Owner**: The correct value is 272000. OpenAI's context of 400000 comes from the combination of input and output. See openai/codex#4728.
"gpt-5.1-codex-mini-medium": {
"name": "GPT 5.1 Codex Mini Medium (OAuth)",
"limit": {
@@ -146,4 +226,4 @@
}
}
}
}
}