
Commit 13f4c88

docs: Enforce GPT 5.1 full-opencode.json as only supported configuration
2 parents 63f9496 + cf34b14 commit 13f4c88

File tree

16 files changed

+563
-370
lines changed


AGENTS.md

Lines changed: 5 additions & 2 deletions
@@ -4,7 +4,7 @@ This file provides coding guidance for AI agents (including Claude Code, Codex,
 ## Overview

-This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5-codex`, `gpt-5-codex-mini`, and `gpt-5` models through their ChatGPT subscription instead of using OpenAI Platform API credits.
+This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5.1-codex`, `gpt-5.1-codex-mini`, `gpt-5-codex`, `gpt-5-codex-mini`, `gpt-5.1`, and `gpt-5` models through their ChatGPT subscription instead of using OpenAI Platform API credits.

 **Key architecture principle**: 7-step fetch flow that intercepts opencode's OpenAI SDK requests, transforms them for the ChatGPT backend API, and handles OAuth token management.
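The token-management part of the fetch flow described above can be sketched as follows. This is an illustrative assumption, not the plugin's actual code; the interface shape and the refresh skew are invented for the example.

```typescript
// Hypothetical sketch of the token-management step (step 1 of the fetch
// flow). The Tokens shape and the 60s refresh skew are assumptions.
interface Tokens {
  accessToken: string;
  refreshToken: string;
  expiresAt: number; // epoch milliseconds
}

// Refresh slightly before the recorded expiry so an in-flight request
// does not race the deadline.
function needsRefresh(t: Tokens, now: number = Date.now(), skewMs: number = 60_000): boolean {
  return now >= t.expiresAt - skewMs;
}

const soon: Tokens = { accessToken: "a", refreshToken: "r", expiresAt: Date.now() + 30_000 };
const fresh: Tokens = { accessToken: "a", refreshToken: "r", expiresAt: Date.now() + 3_600_000 };
console.log(needsRefresh(soon), needsRefresh(fresh)); // true false
```

A real implementation would call the OAuth refresh endpoint when this returns true and persist the new tokens before continuing with the request.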

@@ -41,7 +41,7 @@ The main entry point orchestrates a **7-step fetch flow**:
 1. **Token Management**: Check token expiration, refresh if needed
 2. **URL Rewriting**: Transform OpenAI Platform API URLs → ChatGPT backend API (`https://chatgpt.com/backend-api/codex/responses`)
 3. **Request Transformation**:
-   - Normalize model names (all variants → `gpt-5`, `gpt-5-codex`, or `codex-mini-latest`)
+   - Normalize model names (all variants → `gpt-5.1`, `gpt-5.1-codex`, `gpt-5.1-codex-mini`, `gpt-5`, `gpt-5-codex`, or `codex-mini-latest`)
   - Inject Codex system instructions from latest GitHub release
   - Apply reasoning configuration (effort, summary, verbosity)
   - Add CODEX_MODE bridge prompt (default) or tool remap message (legacy)
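The URL-rewriting step can be sketched as below. Only the target endpoint comes from the docs above; the host/path matching logic is an assumption about how the plugin might detect Platform API requests.

```typescript
// Hypothetical sketch of step 2 (URL rewriting). The matching conditions
// are assumptions; only the target endpoint is taken from the docs.
function rewriteUrl(input: string): string {
  const url = new URL(input);
  if (url.hostname === "api.openai.com" && url.pathname.endsWith("/responses")) {
    // Redirect Platform Responses API calls to the ChatGPT Codex backend.
    return "https://chatgpt.com/backend-api/codex/responses";
  }
  return input; // anything else passes through untouched
}

console.log(rewriteUrl("https://api.openai.com/v1/responses"));
```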
@@ -98,8 +98,11 @@ The main entry point orchestrates a **7-step fetch flow**:
 - Plugin defaults: `reasoningEffort: "medium"`, `reasoningSummary: "auto"`, `textVerbosity: "medium"`

 **4. Model Normalization**:
+- All `gpt-5.1-codex*` variants → `gpt-5.1-codex`
+- All `gpt-5.1-codex-mini*` variants → `gpt-5.1-codex-mini`
 - All `gpt-5-codex` variants → `gpt-5-codex`
 - All `gpt-5-codex-mini*` or `codex-mini-latest` variants → `codex-mini-latest`
+- All `gpt-5.1` variants → `gpt-5.1`
 - All `gpt-5` variants → `gpt-5`
 - `minimal` effort auto-normalized to `low` for gpt-5-codex (API limitation) and clamped to `medium` (or `high` when requested) for Codex Mini
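The normalization and effort rules above can be sketched as follows. The function names are illustrative assumptions, not the plugin's actual exports; note that more specific families must be matched before shorter prefixes.

```typescript
// Hypothetical sketch of the normalization rules listed above.
// Order matters: more specific families are matched first, because
// "gpt-5.1-codex-mini-high" also starts with "gpt-5.1-codex" and "gpt-5.1".
function normalizeModel(model: string): string {
  const m = model.toLowerCase();
  if (m.startsWith("gpt-5.1-codex-mini")) return "gpt-5.1-codex-mini";
  if (m.startsWith("gpt-5.1-codex")) return "gpt-5.1-codex";
  if (m.startsWith("gpt-5.1")) return "gpt-5.1";
  if (m.startsWith("gpt-5-codex-mini") || m.startsWith("codex-mini")) return "codex-mini-latest";
  if (m.startsWith("gpt-5-codex")) return "gpt-5-codex";
  return "gpt-5"; // all remaining gpt-5 variants
}

// Sketch of the effort adjustments: `minimal` is not accepted by
// gpt-5-codex, and Codex Mini is clamped to medium unless high is requested.
function adjustEffort(slug: string, effort: string): string {
  if (slug === "gpt-5-codex" && effort === "minimal") return "low";
  if (slug === "codex-mini-latest") return effort === "high" ? "high" : "medium";
  return effort;
}

console.log(normalizeModel("gpt-5.1-codex-low"));      // gpt-5.1-codex
console.log(adjustEffort("codex-mini-latest", "low")); // medium
```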

CHANGELOG.md

Lines changed: 9 additions & 0 deletions
@@ -2,6 +2,15 @@
 All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD).

+## [3.2.0] - 2025-11-14
+### Added
+- GPT 5.1 model family support: normalization for `gpt-5.1`, `gpt-5.1-codex`, and `gpt-5.1-codex-mini`, plus new GPT 5.1-only presets in the canonical `config/full-opencode.json`.
+- Documentation updates (README, docs, AGENTS) describing the 5.1 families, their reasoning defaults, and how they map to ChatGPT slugs and token limits.
+
+### Changed
+- Model normalization docs and tests now explicitly cover both 5.0 and 5.1 Codex/general families and the two Codex Mini tiers.
+- The legacy GPT 5.0 full configuration is now published as `config/full-opencode-gpt5.json`; new installs should prefer the 5.1 presets.
+
 ## [3.1.0] - 2025-11-11
 ### Added
 - Codex Mini support end-to-end: normalization to the `codex-mini-latest` slug, proper reasoning defaults, and two new presets (`gpt-5-codex-mini-medium` / `gpt-5-codex-mini-high`).

README.md

Lines changed: 77 additions & 112 deletions
@@ -33,7 +33,8 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
 ## Features

 - **ChatGPT Plus/Pro OAuth authentication** - Use your existing subscription
-- **11 pre-configured model variants** - Includes Codex Mini (medium/high) alongside all gpt-5 and gpt-5-codex presets
+- **8 pre-configured GPT 5.1 variants** - GPT 5.1, GPT 5.1 Codex, and GPT 5.1 Codex Mini presets for common reasoning levels
+- ⚠️ **GPT 5.1 only** - Older GPT 5.0 models are deprecated and may not work reliably
 - **Zero external dependencies** - Lightweight with only @openauthjs/openauth
 - **Auto-refreshing tokens** - Handles token expiration automatically
 - **Prompt caching** - Reuses responses across turns via stable `prompt_cache_key`
@@ -52,9 +53,15 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
 **No npm install needed!** opencode automatically installs plugins when you add them to your config.

-#### Recommended: Full Configuration (Codex CLI Experience)
+#### ⚠️ REQUIRED: Full Configuration (Only Supported Setup)

-For the complete experience with all reasoning variants matching the official Codex CLI:
+**IMPORTANT**: You MUST use the full configuration from [`config/full-opencode.json`](./config/full-opencode.json). Other configurations are not officially supported and may not work reliably.
+
+**Why the full config is required:**
+- GPT 5 models can be temperamental - some work, some don't, some may error
+- The full config has been tested and verified to work
+- Minimal configs lack proper model metadata for OpenCode features
+- Older GPT 5.0 models are deprecated and being phased out by OpenAI

 1. **Copy the full configuration** from [`config/full-opencode.json`](./config/full-opencode.json) to your opencode config file:
 ```json
@@ -75,8 +82,8 @@ For the complete experience with all reasoning variants matching the official Co
         "store": false
       },
       "models": {
-        "gpt-5-codex-low": {
-          "name": "GPT 5 Codex Low (OAuth)",
+        "gpt-5.1-codex-low": {
+          "name": "GPT 5.1 Codex Low (OAuth)",
           "limit": {
             "context": 272000,
             "output": 128000
@@ -91,8 +98,8 @@ For the complete experience with all reasoning variants matching the official Co
             "store": false
           }
         },
-        "gpt-5-codex-medium": {
-          "name": "GPT 5 Codex Medium (OAuth)",
+        "gpt-5.1-codex-medium": {
+          "name": "GPT 5.1 Codex Medium (OAuth)",
           "limit": {
             "context": 272000,
             "output": 128000
@@ -107,8 +114,8 @@ For the complete experience with all reasoning variants matching the official Co
             "store": false
          }
         },
-        "gpt-5-codex-high": {
-          "name": "GPT 5 Codex High (OAuth)",
+        "gpt-5.1-codex-high": {
+          "name": "GPT 5.1 Codex High (OAuth)",
           "limit": {
             "context": 272000,
             "output": 128000
@@ -123,11 +130,11 @@ For the complete experience with all reasoning variants matching the official Co
             "store": false
           }
         },
-        "gpt-5-codex-mini-medium": {
-          "name": "GPT 5 Codex Mini Medium (OAuth)",
+        "gpt-5.1-codex-mini-medium": {
+          "name": "GPT 5.1 Codex Mini Medium (OAuth)",
           "limit": {
-            "context": 200000,
-            "output": 100000
+            "context": 272000,
+            "output": 128000
           },
           "options": {
             "reasoningEffort": "medium",
@@ -139,11 +146,11 @@ For the complete experience with all reasoning variants matching the official Co
             "store": false
           }
         },
-        "gpt-5-codex-mini-high": {
-          "name": "GPT 5 Codex Mini High (OAuth)",
+        "gpt-5.1-codex-mini-high": {
+          "name": "GPT 5.1 Codex Mini High (OAuth)",
           "limit": {
-            "context": 200000,
-            "output": 100000
+            "context": 272000,
+            "output": 128000
           },
           "options": {
             "reasoningEffort": "high",
@@ -155,24 +162,8 @@ For the complete experience with all reasoning variants matching the official Co
             "store": false
           }
         },
-        "gpt-5-minimal": {
-          "name": "GPT 5 Minimal (OAuth)",
-          "limit": {
-            "context": 272000,
-            "output": 128000
-          },
-          "options": {
-            "reasoningEffort": "minimal",
-            "reasoningSummary": "auto",
-            "textVerbosity": "low",
-            "include": [
-              "reasoning.encrypted_content"
-            ],
-            "store": false
-          }
-        },
-        "gpt-5-low": {
-          "name": "GPT 5 Low (OAuth)",
+        "gpt-5.1-low": {
+          "name": "GPT 5.1 Low (OAuth)",
           "limit": {
             "context": 272000,
             "output": 128000
@@ -187,8 +178,8 @@ For the complete experience with all reasoning variants matching the official Co
             "store": false
           }
         },
-        "gpt-5-medium": {
-          "name": "GPT 5 Medium (OAuth)",
+        "gpt-5.1-medium": {
+          "name": "GPT 5.1 Medium (OAuth)",
           "limit": {
             "context": 272000,
             "output": 128000
@@ -203,8 +194,8 @@ For the complete experience with all reasoning variants matching the official Co
             "store": false
           }
         },
-        "gpt-5-high": {
-          "name": "GPT 5 High (OAuth)",
+        "gpt-5.1-high": {
+          "name": "GPT 5.1 High (OAuth)",
           "limit": {
             "context": 272000,
             "output": 128000
@@ -218,38 +209,6 @@ For the complete experience with all reasoning variants matching the official Co
           ],
           "store": false
         }
-        },
-        "gpt-5-mini": {
-          "name": "GPT 5 Mini (OAuth)",
-          "limit": {
-            "context": 272000,
-            "output": 128000
-          },
-          "options": {
-            "reasoningEffort": "low",
-            "reasoningSummary": "auto",
-            "textVerbosity": "low",
-            "include": [
-              "reasoning.encrypted_content"
-            ],
-            "store": false
-          }
-        },
-        "gpt-5-nano": {
-          "name": "GPT 5 Nano (OAuth)",
-          "limit": {
-            "context": 272000,
-            "output": 128000
-          },
-          "options": {
-            "reasoningEffort": "minimal",
-            "reasoningSummary": "auto",
-            "textVerbosity": "low",
-            "include": [
-              "reasoning.encrypted_content"
-            ],
-            "store": false
-          }
       }
     }
   }
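Read together, the fragments above imply that one complete 5.1 model entry in `config/full-opencode.json` looks roughly like the following. This is reconstructed from the diff: the `options` fields are copied from the deleted 5.0 entries and the stated plugin defaults, so treat the exact values and field order as assumptions and copy from the real file when in doubt.

```json
"gpt-5.1-codex-medium": {
  "name": "GPT 5.1 Codex Medium (OAuth)",
  "limit": {
    "context": 272000,
    "output": 128000
  },
  "options": {
    "reasoningEffort": "medium",
    "reasoningSummary": "auto",
    "textVerbosity": "medium",
    "include": [
      "reasoning.encrypted_content"
    ],
    "store": false
  }
}
```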
@@ -260,25 +219,25 @@ For the complete experience with all reasoning variants matching the official Co
 **Global config**: `~/.config/opencode/opencode.json`
 **Project config**: `<project>/.opencode.json`

-This gives you 11 model variants with different reasoning levels:
-- **gpt-5-codex** (low/medium/high) - Code-optimized reasoning
-- **gpt-5-codex-mini** (medium/high) - Cheaper Codex tier with 200k/100k tokens
-- **gpt-5** (minimal/low/medium/high) - General-purpose reasoning
-- **gpt-5-mini** and **gpt-5-nano** - Lightweight variants
+This gives you 8 GPT 5.1 variants with different reasoning levels:
+- **gpt-5.1-codex** (low/medium/high) - Latest Codex model presets
+- **gpt-5.1-codex-mini** (medium/high) - Latest Codex mini tier presets
+- **gpt-5.1** (low/medium/high) - Latest general-purpose reasoning presets

-All appear in the opencode model selector as "GPT 5 Codex Low (OAuth)", "GPT 5 High (OAuth)", etc.
+All appear in the opencode model selector as "GPT 5.1 Codex Low (OAuth)", "GPT 5.1 High (OAuth)", etc.

 ### Prompt caching & usage limits

 Codex backend caching is enabled automatically. When OpenCode supplies a `prompt_cache_key` (its session identifier), the plugin forwards it unchanged so Codex can reuse work between turns. The plugin no longer synthesizes its own cache IDs—if the host omits `prompt_cache_key`, Codex will treat the turn as uncached. The bundled CODEX_MODE bridge prompt is synchronized with the latest Codex CLI release, so opencode and Codex stay in lock-step on tool availability. When your ChatGPT subscription nears a limit, opencode surfaces the plugin's friendly error message with the 5-hour and weekly windows, mirroring the Codex CLI summary.

-> **Auto-compaction note:** OpenCode's context auto-compaction and usage sidebar only populate when the full configuration above is used (the minimal config lacks the per-model metadata OpenCode needs). Stick with `config/full-opencode.json` if you want live token counts and automatic history compaction inside the UI.
+> **⚠️ IMPORTANT:** You MUST use the full configuration above. OpenCode's context auto-compaction and usage sidebar only work with the full config. Additionally, GPT 5 models require proper configuration - minimal configs are NOT supported and may fail unpredictably.

-#### Alternative: Minimal Configuration
+#### Minimal Configuration (NOT RECOMMENDED - DO NOT USE)

-For a simpler setup (uses plugin defaults: medium reasoning, auto summaries):
+**DO NOT use minimal configurations** - they are not supported for GPT 5.1 and will not work reliably:

 ```json
+// ❌ DO NOT USE THIS - WILL NOT WORK RELIABLY
 {
   "$schema": "https://opencode.ai/config.json",
   "plugin": [
@@ -288,7 +247,11 @@ For a simpler setup (uses plugin defaults: medium reasoning, auto summaries):
 }
 ```

-**Note**: This gives you basic functionality but you won't see the different reasoning variants in the model selector.
+**Why this doesn't work:**
+- GPT 5 models are temperamental and need proper configuration
+- Missing model metadata breaks OpenCode features
+- No support for usage limits or context compaction
+- Cannot guarantee stable operation

 2. **That's it!** opencode will auto-install the plugin on first run.

@@ -327,17 +290,17 @@ Check [releases](https://github.com/numman-ali/opencode-openai-codex-auth/releas
 If using the full configuration, select from the model picker in opencode, or specify via command line:

 ```bash
-# Use different reasoning levels for gpt-5-codex
-opencode run "simple task" --model=openai/gpt-5-codex-low
-opencode run "complex task" --model=openai/gpt-5-codex-high
+# Use different reasoning levels for gpt-5.1-codex
+opencode run "simple task" --model=openai/gpt-5.1-codex-low
+opencode run "complex task" --model=openai/gpt-5.1-codex-high

-# Use different reasoning levels for gpt-5
-opencode run "quick question" --model=openai/gpt-5-minimal
-opencode run "deep analysis" --model=openai/gpt-5-high
+# Use different reasoning levels for gpt-5.1
+opencode run "quick question" --model=openai/gpt-5.1-low
+opencode run "deep analysis" --model=openai/gpt-5.1-high

-# Or with minimal config (uses defaults)
-opencode run "create a hello world file" --model=openai/gpt-5-codex
-opencode run "solve this complex problem" --model=openai/gpt-5
+# Use Codex Mini variants
+opencode run "balanced task" --model=openai/gpt-5.1-codex-mini-medium
+opencode run "complex code" --model=openai/gpt-5.1-codex-mini-high
 ```

 ### Available Model Variants (Full Config)
@@ -346,22 +309,21 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t

 | CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
 |--------------|------------------|-----------------|----------|
-| `gpt-5-codex-low` | GPT 5 Codex Low (OAuth) | Low | Fast code generation |
-| `gpt-5-codex-medium` | GPT 5 Codex Medium (OAuth) | Medium | Balanced code tasks |
-| `gpt-5-codex-high` | GPT 5 Codex High (OAuth) | High | Complex code & tools |
-| `gpt-5-codex-mini-medium` | GPT 5 Codex Mini Medium (OAuth) | Medium | Cheaper Codex tier (200k/100k) |
-| `gpt-5-codex-mini-high` | GPT 5 Codex Mini High (OAuth) | High | Codex Mini with maximum reasoning |
-| `gpt-5-minimal` | GPT 5 Minimal (OAuth) | Minimal | Quick answers, simple tasks |
-| `gpt-5-low` | GPT 5 Low (OAuth) | Low | Faster responses with light reasoning |
-| `gpt-5-medium` | GPT 5 Medium (OAuth) | Medium | Balanced general-purpose tasks |
-| `gpt-5-high` | GPT 5 High (OAuth) | High | Deep reasoning, complex problems |
-| `gpt-5-mini` | GPT 5 Mini (OAuth) | Low | Lightweight tasks |
-| `gpt-5-nano` | GPT 5 Nano (OAuth) | Minimal | Maximum speed |
-
-**Usage**: `--model=openai/<CLI Model ID>` (e.g., `--model=openai/gpt-5-codex-low`)
-**Display**: TUI shows the friendly name (e.g., "GPT 5 Codex Low (OAuth)")
-
-> **Note**: All `gpt-5-codex-mini*` presets normalize to the ChatGPT slug `codex-mini-latest` (200k input / 100k output tokens).
+| `gpt-5.1-codex-low` | GPT 5.1 Codex Low (OAuth) | Low | Fast code generation |
+| `gpt-5.1-codex-medium` | GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code tasks |
+| `gpt-5.1-codex-high` | GPT 5.1 Codex High (OAuth) | High | Complex code & tools |
+| `gpt-5.1-codex-mini-medium` | GPT 5.1 Codex Mini Medium (OAuth) | Medium | Latest Codex mini tier |
+| `gpt-5.1-codex-mini-high` | GPT 5.1 Codex Mini High (OAuth) | High | Codex Mini with maximum reasoning |
+| `gpt-5.1-low` | GPT 5.1 Low (OAuth) | Low | Faster responses with light reasoning |
+| `gpt-5.1-medium` | GPT 5.1 Medium (OAuth) | Medium | Balanced general-purpose tasks |
+| `gpt-5.1-high` | GPT 5.1 High (OAuth) | High | Deep reasoning, complex problems |
+
+**Usage**: `--model=openai/<CLI Model ID>` (e.g., `--model=openai/gpt-5.1-codex-low`)
+**Display**: TUI shows the friendly name (e.g., "GPT 5.1 Codex Low (OAuth)")
+
+> **Note**: All `gpt-5.1-codex-mini*` presets map directly to the `gpt-5.1-codex-mini` slug with standard Codex limits (272k context / 128k output).
+
+> **⚠️ Important**: GPT 5 models can be temperamental - some variants may work better than others, some may give errors, and behavior may vary. Stick to the presets above configured in `full-opencode.json` for best results.

 All accessed via your ChatGPT Plus/Pro subscription.
@@ -371,10 +333,10 @@ All accessed via your ChatGPT Plus/Pro subscription.

 ```yaml
 # ✅ Correct
-model: openai/gpt-5-codex-low
+model: openai/gpt-5.1-codex-low

 # ❌ Wrong - will fail
-model: gpt-5-codex-low
+model: gpt-5.1-codex-low
 ```

 See [Configuration Guide](https://numman-ali.github.io/opencode-openai-codex-auth/configuration) for advanced usage.
@@ -399,12 +361,15 @@ These defaults match the official Codex CLI behavior and can be customized (see

 ## Configuration

-### Recommended: Use Pre-Configured File
+### ⚠️ REQUIRED: Use Pre-Configured File

-The easiest way to get started is to use [`config/full-opencode.json`](./config/full-opencode.json), which provides:
-- 11 pre-configured model variants matching Codex CLI presets
-- Optimal settings for each reasoning level
+**YOU MUST use [`config/full-opencode.json`](./config/full-opencode.json)** - this is the only officially supported configuration:
+- 8 pre-configured GPT 5.1 model variants with verified settings
+- Optimal configuration for each reasoning level
 - All variants visible in the opencode model selector
+- Required metadata for OpenCode features to work properly
+
+**Do NOT use other configurations** - they are not supported and may fail unpredictably with GPT 5 models.

 See [Installation](#installation) for setup instructions.

0 commit comments
