Commit 63f9496

feat: Add Codex Mini model tier
2 parents: 2495b13 + de77d13

17 files changed: +256 −58 lines

AGENTS.md
Lines changed: 4 additions & 3 deletions

@@ -4,7 +4,7 @@ This file provides coding guidance for AI agents (including Claude Code, Codex,
 ## Overview
 
-This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5-codex` and `gpt-5` models through their ChatGPT subscription instead of using OpenAI Platform API credits.
+This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5-codex`, `gpt-5-codex-mini`, and `gpt-5` models through their ChatGPT subscription instead of using OpenAI Platform API credits.
 
 **Key architecture principle**: 7-step fetch flow that intercepts opencode's OpenAI SDK requests, transforms them for the ChatGPT backend API, and handles OAuth token management.

@@ -41,7 +41,7 @@ The main entry point orchestrates a **7-step fetch flow**:
 1. **Token Management**: Check token expiration, refresh if needed
 2. **URL Rewriting**: Transform OpenAI Platform API URLs → ChatGPT backend API (`https://chatgpt.com/backend-api/codex/responses`)
 3. **Request Transformation**:
-   - Normalize model names (all variants → `gpt-5` or `gpt-5-codex`)
+   - Normalize model names (all variants → `gpt-5`, `gpt-5-codex`, or `codex-mini-latest`)
    - Inject Codex system instructions from latest GitHub release
    - Apply reasoning configuration (effort, summary, verbosity)
    - Add CODEX_MODE bridge prompt (default) or tool remap message (legacy)

@@ -99,8 +99,9 @@ The main entry point orchestrates a **7-step fetch flow**:
 **4. Model Normalization**:
 - All `gpt-5-codex` variants → `gpt-5-codex`
+- All `gpt-5-codex-mini*` or `codex-mini-latest` variants → `codex-mini-latest`
 - All `gpt-5` variants → `gpt-5`
-- `minimal` effort auto-normalized to `low` for gpt-5-codex (API limitation)
+- `minimal` effort auto-normalized to `low` for gpt-5-codex (API limitation) and clamped to `medium` (or `high` when requested) for Codex Mini
 
 **5. Codex Instructions Caching**:
 - Fetches from latest release tag (not main branch)
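The normalization bullets in step 4 above can be sketched in TypeScript. This is a hedged reconstruction from the documented rules, not the plugin's actual source; only the function name mirrors the helper documented elsewhere in the repo:

```typescript
// Sketch of the model-normalization rules described in step 4.
// Most specific pattern wins: codex-mini before codex, everything else → gpt-5.
function normalizeModel(model: string | undefined): string {
  if (!model) return "gpt-5"; // safe default when no model is supplied
  const m = model.toLowerCase();
  if (m.includes("codex-mini")) return "codex-mini-latest"; // covers all gpt-5-codex-mini* presets
  if (m.includes("codex")) return "gpt-5-codex";
  return "gpt-5"; // all gpt-5 variants and unknown models fall back here
}
```

Checking `codex-mini` before `codex` matters: a preset like `gpt-5-codex-mini-high` contains both substrings, and only the more specific branch yields the correct ChatGPT slug.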

CHANGELOG.md
Lines changed: 8 additions & 0 deletions

@@ -2,6 +2,14 @@
 All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD).
 
+## [3.1.0] - 2025-11-11
+### Added
+- Codex Mini support end-to-end: normalization to the `codex-mini-latest` slug, proper reasoning defaults, and two new presets (`gpt-5-codex-mini-medium` / `gpt-5-codex-mini-high`).
+- Documentation & configuration updates describing the Codex Mini tier (200k input / 100k output tokens) plus refreshed totals (11 presets, 160+ unit tests).
+
+### Fixed
+- Prevented Codex Mini from inheriting the lightweight (`minimal`) reasoning profile used by `gpt-5-mini`/`nano`, ensuring the API always receives supported effort levels.
+
 ## [3.0.0] - 2025-11-04
 ### Added
 - Codex-style usage-limit messaging that mirrors the 5-hour and weekly windows reported by the Codex CLI.

CONTRIBUTING.md
Lines changed: 1 addition & 1 deletion

@@ -36,7 +36,7 @@ All contributions MUST:
 ## Code Standards
 
 - **TypeScript:** All code must be TypeScript with strict type checking
-- **Testing:** Include tests for new functionality (we maintain 159+ unit tests)
+- **Testing:** Include tests for new functionality (we maintain 160+ unit tests)
 - **Documentation:** Update README.md for user-facing changes
 - **Modular design:** Keep functions focused and under 40 lines
 - **No external dependencies:** Minimize dependencies (currently only @openauthjs/openauth)

README.md
Lines changed: 41 additions & 4 deletions

@@ -33,7 +33,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
 ## Features
 
 - **ChatGPT Plus/Pro OAuth authentication** - Use your existing subscription
-- **9 pre-configured model variants** - Low/Medium/High reasoning for both gpt-5 and gpt-5-codex
+- **11 pre-configured model variants** - Includes Codex Mini (medium/high) alongside all gpt-5 and gpt-5-codex presets
 - **Zero external dependencies** - Lightweight with only @openauthjs/openauth
 - **Auto-refreshing tokens** - Handles token expiration automatically
 - **Prompt caching** - Reuses responses across turns via stable `prompt_cache_key`

@@ -43,7 +43,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
 - **Automatic tool remapping** - Codex tools → opencode tools
 - **Configurable reasoning** - Control effort, summary verbosity, and text output
 - **Usage-aware errors** - Shows clear guidance when ChatGPT subscription limits are reached
-- **Type-safe & tested** - Strict TypeScript with 159 unit tests + 14 integration tests
+- **Type-safe & tested** - Strict TypeScript with 160+ unit tests + 14 integration tests
 - **Modular architecture** - Easy to maintain and extend
 
 ## Installation

@@ -123,6 +123,38 @@ For the complete experience with all reasoning variants matching the official Co
       "store": false
     }
   },
+  "gpt-5-codex-mini-medium": {
+    "name": "GPT 5 Codex Mini Medium (OAuth)",
+    "limit": {
+      "context": 200000,
+      "output": 100000
+    },
+    "options": {
+      "reasoningEffort": "medium",
+      "reasoningSummary": "auto",
+      "textVerbosity": "medium",
+      "include": [
+        "reasoning.encrypted_content"
+      ],
+      "store": false
+    }
+  },
+  "gpt-5-codex-mini-high": {
+    "name": "GPT 5 Codex Mini High (OAuth)",
+    "limit": {
+      "context": 200000,
+      "output": 100000
+    },
+    "options": {
+      "reasoningEffort": "high",
+      "reasoningSummary": "detailed",
+      "textVerbosity": "medium",
+      "include": [
+        "reasoning.encrypted_content"
+      ],
+      "store": false
+    }
+  },
   "gpt-5-minimal": {
     "name": "GPT 5 Minimal (OAuth)",
     "limit": {

@@ -228,8 +260,9 @@ For the complete experience with all reasoning variants matching the official Co
 **Global config**: `~/.config/opencode/opencode.json`
 **Project config**: `<project>/.opencode.json`
 
-This gives you 9 model variants with different reasoning levels:
+This gives you 11 model variants with different reasoning levels:
 - **gpt-5-codex** (low/medium/high) - Code-optimized reasoning
+- **gpt-5-codex-mini** (medium/high) - Cheaper Codex tier with 200k/100k tokens
 - **gpt-5** (minimal/low/medium/high) - General-purpose reasoning
 - **gpt-5-mini** and **gpt-5-nano** - Lightweight variants

@@ -316,6 +349,8 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t
 | `gpt-5-codex-low` | GPT 5 Codex Low (OAuth) | Low | Fast code generation |
 | `gpt-5-codex-medium` | GPT 5 Codex Medium (OAuth) | Medium | Balanced code tasks |
 | `gpt-5-codex-high` | GPT 5 Codex High (OAuth) | High | Complex code & tools |
+| `gpt-5-codex-mini-medium` | GPT 5 Codex Mini Medium (OAuth) | Medium | Cheaper Codex tier (200k/100k) |
+| `gpt-5-codex-mini-high` | GPT 5 Codex Mini High (OAuth) | High | Codex Mini with maximum reasoning |
 | `gpt-5-minimal` | GPT 5 Minimal (OAuth) | Minimal | Quick answers, simple tasks |
 | `gpt-5-low` | GPT 5 Low (OAuth) | Low | Faster responses with light reasoning |
 | `gpt-5-medium` | GPT 5 Medium (OAuth) | Medium | Balanced general-purpose tasks |

@@ -326,6 +361,8 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t
 **Usage**: `--model=openai/<CLI Model ID>` (e.g., `--model=openai/gpt-5-codex-low`)
 **Display**: TUI shows the friendly name (e.g., "GPT 5 Codex Low (OAuth)")
 
+> **Note**: All `gpt-5-codex-mini*` presets normalize to the ChatGPT slug `codex-mini-latest` (200k input / 100k output tokens).
+
 All accessed via your ChatGPT Plus/Pro subscription.
 
 ### Using in Custom Commands

@@ -365,7 +402,7 @@ These defaults match the official Codex CLI behavior and can be customized (see
 ### Recommended: Use Pre-Configured File
 
 The easiest way to get started is to use [`config/full-opencode.json`](./config/full-opencode.json), which provides:
-- 9 pre-configured model variants matching Codex CLI presets
+- 11 pre-configured model variants matching Codex CLI presets
 - Optimal settings for each reasoning level
 - All variants visible in the opencode model selector

config/README.md
Lines changed: 1 addition & 1 deletion

@@ -28,7 +28,7 @@ cp config/full-opencode.json ~/.config/opencode/opencode.json
 This demonstrates:
 - Global options for all models
 - Per-model configuration overrides
-- All supported model variants (gpt-5-codex, gpt-5, gpt-5-mini, gpt-5-nano)
+- All supported model variants (gpt-5-codex, gpt-5-codex-mini, gpt-5, gpt-5-mini, gpt-5-nano)
 
 ## Usage

config/full-opencode.json
Lines changed: 32 additions & 0 deletions

@@ -63,6 +63,38 @@
       "store": false
     }
   },
+  "gpt-5-codex-mini-medium": {
+    "name": "GPT 5 Codex Mini Medium (OAuth)",
+    "limit": {
+      "context": 200000,
+      "output": 100000
+    },
+    "options": {
+      "reasoningEffort": "medium",
+      "reasoningSummary": "auto",
+      "textVerbosity": "medium",
+      "include": [
+        "reasoning.encrypted_content"
+      ],
+      "store": false
+    }
+  },
+  "gpt-5-codex-mini-high": {
+    "name": "GPT 5 Codex Mini High (OAuth)",
+    "limit": {
+      "context": 200000,
+      "output": 100000
+    },
+    "options": {
+      "reasoningEffort": "high",
+      "reasoningSummary": "detailed",
+      "textVerbosity": "medium",
+      "include": [
+        "reasoning.encrypted_content"
+      ],
+      "store": false
+    }
+  },
   "gpt-5-minimal": {
     "name": "GPT 5 Minimal (OAuth)",
     "limit": {

docs/README.md
Lines changed: 1 addition & 1 deletion

@@ -28,7 +28,7 @@ This plugin bridges two different systems with careful engineering:
 4. **15-Minute Caching** - Prevents GitHub API rate limit exhaustion
 5. **Per-Model Configuration** - Enables quality presets with quick switching
 
-**Testing**: 159 unit tests + 14 integration tests with actual API verification
+**Testing**: 160+ unit tests + 14 integration tests with actual API verification
 
 ---

docs/configuration.md
Lines changed: 4 additions & 2 deletions

@@ -57,7 +57,9 @@ Controls computational effort for reasoning.
 - `medium` - Balanced (default)
 - `high` - Maximum code quality
 
-**Note**: `minimal` auto-converts to `low` for gpt-5-codex (API limitation)
+**Notes**:
+- `minimal` auto-converts to `low` for gpt-5-codex (API limitation)
+- `gpt-5-codex-mini*` only supports `medium` or `high`; lower settings are clamped to `medium`
 
 **Example:**
 ```json

@@ -379,7 +381,7 @@ CODEX_MODE=1 opencode run "task" # Temporarily enable
 ## Configuration Files
 
 **Provided Examples:**
-- [config/full-opencode.json](../config/full-opencode.json) - Complete with 9 variants
+- [config/full-opencode.json](../config/full-opencode.json) - Complete with 11 variants (adds Codex Mini presets)
 - [config/minimal-opencode.json](../config/minimal-opencode.json) - Minimal setup
 
 > **Why choose the full config?** OpenCode's auto-compaction and usage widgets rely on the per-model `limit` metadata present only in `full-opencode.json`. Use the minimal config only if you don't need those UI features.
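The effort rules in the notes above can be sketched as a small helper. This is a hypothetical illustration of the documented clamping behavior; `clampEffort` is an illustrative name, not part of the plugin's API:

```typescript
// Hypothetical sketch of the documented effort rules, keyed on the
// already-normalized model slug.
type Effort = "minimal" | "low" | "medium" | "high";

function clampEffort(modelSlug: string, effort: Effort): Effort {
  // gpt-5-codex rejects "minimal", so it is auto-converted to "low".
  if (modelSlug === "gpt-5-codex" && effort === "minimal") return "low";
  // Codex Mini supports only "medium" or "high"; anything lower clamps to "medium".
  if (modelSlug === "codex-mini-latest") {
    return effort === "high" ? "high" : "medium";
  }
  return effort; // gpt-5 and friends accept the full range
}
```

So a user who configures `reasoningEffort: "low"` on a Codex Mini preset would still send `medium` to the backend, matching the note above.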

docs/development/ARCHITECTURE.md
Lines changed: 3 additions & 3 deletions

@@ -262,8 +262,8 @@ let include: Vec<String> = if reasoning.is_some() {
 └─ tools: [...]
 
 2. Model Normalization
-├─ Detect codex/gpt-5 variants
-└─ Normalize to "gpt-5" or "gpt-5-codex"
+├─ Detect codex/gpt-5/codex-mini variants
+└─ Normalize to "gpt-5", "gpt-5-codex", or "codex-mini-latest"
 
 3. Config Merging
 ├─ Global options (provider.openai.options)

@@ -314,7 +314,7 @@ let include: Vec<String> = if reasoning.is_some() {
 | **store Parameter** | `false` (ChatGPT) | `false` ||
 | **Message IDs** | Stripped in stateless | Stripped ||
 | **reasoning.encrypted_content** | ✅ Included | ✅ Included ||
-| **Model Normalization** | "gpt-5" / "gpt-5-codex" | Same ||
+| **Model Normalization** | "gpt-5" / "gpt-5-codex" / "codex-mini-latest" | Same ||
 | **Reasoning Effort** | medium (default) | medium (default) ||
 | **Text Verbosity** | medium (codex), low (gpt-5) | Same ||

docs/development/TESTING.md
Lines changed: 18 additions & 5 deletions

@@ -577,6 +577,8 @@ opencode
 normalizeModel("gpt-5-codex")           // → "gpt-5-codex" ✅
 normalizeModel("gpt-5-codex-low")       // → "gpt-5-codex" ✅
 normalizeModel("GPT-5-CODEX-HIGH")      // → "gpt-5-codex" ✅
+normalizeModel("gpt-5-codex-mini-high") // → "codex-mini-latest" ✅
+normalizeModel("codex-mini-latest")     // → "codex-mini-latest" ✅
 normalizeModel("my-codex-model")        // → "gpt-5-codex" ✅
 normalizeModel("gpt-5")                 // → "gpt-5" ✅
 normalizeModel("gpt-5-mini")            // → "gpt-5" ✅

@@ -590,17 +592,25 @@ normalizeModel("random-model") // → "gpt-5" ✅ (fallback)
 ```typescript
 export function normalizeModel(model: string | undefined): string {
   if (!model) return "gpt-5";
-  if (model.includes("codex")) return "gpt-5-codex"; // Check codex first
-  if (model.includes("gpt-5")) return "gpt-5";       // Then gpt-5
-  return "gpt-5";                                    // Safe fallback
+  const normalized = model.toLowerCase();
+  if (normalized.includes("codex-mini")) {
+    return "codex-mini-latest"; // Also matches "codex-mini-latest" itself
+  }
+  if (normalized.includes("codex")) {
+    return "gpt-5-codex";
+  }
+  if (normalized.includes("gpt-5") || normalized.includes("gpt 5")) {
+    return "gpt-5";
+  }
+  return "gpt-5"; // Safe fallback
 }
 ```

 **Why this works:**
-- ✅ Case-insensitive (`.includes()` works with any case)
+- ✅ Case-insensitive (`.toLowerCase()` + `.includes()`)
 - ✅ Pattern-based (works with any naming)
 - ✅ Safe fallback (unknown models → `gpt-5`)
-- ✅ Codex priority (checks "codex" before "gpt-5")
+- ✅ Codex priority with explicit Codex Mini support (`codex-mini*` → `codex-mini-latest`)
 
 ---

@@ -657,12 +667,14 @@ describe('normalizeModel', () => {
 test('handles all default models', () => {
   expect(normalizeModel('gpt-5')).toBe('gpt-5')
   expect(normalizeModel('gpt-5-codex')).toBe('gpt-5-codex')
+  expect(normalizeModel('gpt-5-codex-mini')).toBe('codex-mini-latest')
   expect(normalizeModel('gpt-5-mini')).toBe('gpt-5')
   expect(normalizeModel('gpt-5-nano')).toBe('gpt-5')
 })
 
 test('handles custom preset names', () => {
   expect(normalizeModel('gpt-5-codex-low')).toBe('gpt-5-codex')
+  expect(normalizeModel('openai/gpt-5-codex-mini-high')).toBe('codex-mini-latest')
   expect(normalizeModel('gpt-5-high')).toBe('gpt-5')
 })

@@ -672,6 +684,7 @@ describe('normalizeModel', () => {
 test('handles edge cases', () => {
   expect(normalizeModel(undefined)).toBe('gpt-5')
+  expect(normalizeModel('codex-mini-latest')).toBe('codex-mini-latest')
   expect(normalizeModel('random')).toBe('gpt-5')
 })
 })
