BREAKING CHANGE: Major prompt engineering overhaul
This release brings full parity with Codex CLI's prompt selection:
Model-Specific Prompts:
- gpt-5.1-codex-max* → gpt-5.1-codex-max_prompt.md (117 lines, frontend design)
- gpt-5.1-codex*, codex-* → gpt_5_codex_prompt.md (105 lines, focused coding)
- gpt-5.1* → gpt_5_1_prompt.md (368 lines, full behavioral guidance)
Legacy GPT-5.0 → GPT-5.1 Migration:
- gpt-5-codex → gpt-5.1-codex
- gpt-5 → gpt-5.1
- gpt-5-mini, gpt-5-nano → gpt-5.1
- codex-mini-latest → gpt-5.1-codex-mini
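The migration table above amounts to a small lookup before any other handling. A hypothetical TypeScript sketch (the plugin's actual function and constant names may differ):

```typescript
// Legacy GPT-5.0 → GPT-5.1 normalization, as listed in the release notes.
// Names here are illustrative, not the plugin's real API.
const LEGACY_MODEL_MAP: Record<string, string> = {
  "gpt-5-codex": "gpt-5.1-codex",
  "gpt-5": "gpt-5.1",
  "gpt-5-mini": "gpt-5.1",
  "gpt-5-nano": "gpt-5.1",
  "codex-mini-latest": "gpt-5.1-codex-mini",
};

function normalizeModel(model: string): string {
  // Unknown models pass through unchanged.
  return LEGACY_MODEL_MAP[model] ?? model;
}
```

Models already on GPT-5.1 fall through the map untouched, so normalization is safe to run on every request.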
New Features:
- ModelFamily type for prompt selection ("codex-max" | "codex" | "gpt-5.1")
- getModelFamily() function for model family detection
- Lazy instruction loading per model family
- Separate caching per model family
- Model family logged in request logs
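The prompt-selection rules above are prefix matches on the normalized model name. A minimal sketch, assuming prefix matching is how the detection works (the real `getModelFamily()` implementation may differ):

```typescript
// Model family drives which prompt file is loaded, per the rules above.
type ModelFamily = "codex-max" | "codex" | "gpt-5.1";

function getModelFamily(normalizedModel: string): ModelFamily {
  // Order matters: the most specific prefix must be checked first,
  // otherwise "gpt-5.1-codex-max" would match the "gpt-5.1-codex" rule.
  if (normalizedModel.startsWith("gpt-5.1-codex-max")) return "codex-max";
  if (
    normalizedModel.startsWith("gpt-5.1-codex") ||
    normalizedModel.startsWith("codex-")
  ) {
    return "codex";
  }
  // Everything else in the gpt-5.1* range gets the full behavioral prompt.
  return "gpt-5.1";
}
```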
Fixes:
- OpenCode prompt cache URL (main → dev branch)
- Multi-model session log detection in test script
Test Coverage:
- 191 unit tests (16 new for model family detection)
- 13 integration tests with family verification
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
AGENTS.md (+17 −8)
@@ -4,7 +4,7 @@ This file provides coding guidance for AI agents (including Claude Code, Codex,
 ## Overview

-This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`, `gpt-5-codex`, `gpt-5-codex-mini`, `gpt-5.1`, and `gpt-5` models through their ChatGPT subscription instead of using OpenAI Platform API credits.
+This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`, and `gpt-5.1` models through their ChatGPT subscription instead of using OpenAI Platform API credits. Legacy GPT-5.0 models are automatically normalized to their GPT-5.1 equivalents.

 **Key architecture principle**: 7-step fetch flow that intercepts opencode's OpenAI SDK requests, transforms them for the ChatGPT backend API, and handles OAuth token management.
@@ -97,19 +97,28 @@ The main entry point orchestrates a **7-step fetch flow**:
+- `gpt-5.1*` → `gpt_5_1_prompt.md` (368 lines, full behavioral guidance)
+- New `ModelFamily` type (`"codex-max" | "codex" | "gpt-5.1"`) for prompt selection.
+- New `getModelFamily()` function to determine prompt selection based on the normalized model name.
+- Model family now logged in request logs for debugging (`modelFamily` field in after-transform logs).
+- 16 new unit tests for model family detection (now **191 total unit tests**).
+- Integration tests now verify correct model family selection (13 integration tests with family verification).
+
+### Changed
+
+- **Legacy GPT-5.0 models now map to GPT-5.1**: All legacy `gpt-5` model variants automatically normalize to their `gpt-5.1` equivalents as GPT-5.0 is being phased out by OpenAI:
+  - `gpt-5-codex` → `gpt-5.1-codex`
+  - `gpt-5` → `gpt-5.1`
+  - `gpt-5-mini`, `gpt-5-nano` → `gpt-5.1`
+  - `codex-mini-latest` → `gpt-5.1-codex-mini`
+- **Lazy instruction loading**: Instructions are now fetched per-request based on model family (not pre-loaded at initialization).
+- **Separate caching per model family**: Each model family has its own cached prompt file:
+- Fixed OpenCode prompt cache URL to fetch from the `dev` branch instead of the non-existent `main` branch.
+- Fixed model configuration test script to correctly identify model logs in multi-model sessions (opencode uses a small model like `gpt-5-nano` for title generation alongside the user's selected model).
+
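The lazy, per-family loading and caching described under Changed could be sketched as follows in TypeScript; the cache shape and helper names are assumptions, not the plugin's actual internals:

```typescript
type ModelFamily = "codex-max" | "codex" | "gpt-5.1";

// One cache entry per model family, so each family's prompt file is
// fetched at most once and families never overwrite each other.
const promptCache = new Map<ModelFamily, string>();

async function getInstructions(
  family: ModelFamily,
  fetchPrompt: (family: ModelFamily) => Promise<string>,
): Promise<string> {
  const cached = promptCache.get(family);
  if (cached !== undefined) return cached; // reuse previously fetched prompt
  const prompt = await fetchPrompt(family); // fetched lazily, per request
  promptCache.set(family, prompt);
  return prompt;
}
```

Injecting the fetcher keeps the cache logic testable without network access.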
+### Technical Details
+
+This release brings full parity with Codex CLI's prompt engineering:
+
+- **Codex family** (105 lines): Concise, tool-focused prompt for coding tasks
+- **Codex Max family** (117 lines): Adds frontend design guidelines for UI work
+- GPT 5.1 Codex Max support: normalization, per-model defaults, and new presets (`gpt-5.1-codex-max`, `gpt-5.1-codex-max-xhigh`) with extended reasoning options (including `none`/`xhigh`) while keeping the 272k context / 128k output limits.