Feat/codex max models #44
Conversation
Thought I had a ghost in the machine when trying to test out my implementation in the main opencode repo. Turns out it has this plugin already initialized in its .opencode config lol

Looking forward to it!

This is very good @cau1k. Going to finalise checks tonight, merge, and then publish. Would you consider being a permanent collaborator? You're the first one to cover it to the same patterns/standards I go for ^_^
Pull Request Overview
This pull request adds comprehensive support for the GPT-5.1 Codex Max model family, including the new xhigh (extra high) reasoning effort level. The changes introduce 5 new Codex Max presets (base, low, medium, high, and xhigh), expand the type system to support new reasoning options (none/xhigh for effort, off/on for summary), and update all configuration files, documentation, and tests to reflect the new capabilities.
Key Changes
- Adds GPT-5.1 Codex Max model normalization and reasoning configuration with `xhigh` effort support, exclusive to Codex Max (see the sketch after this list)
- Extends type definitions to include `none`/`xhigh` reasoning effort and `off`/`on` summary options
- Updates configuration files, documentation, and test scripts with 5 new Codex Max presets and expanded output limits
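The preserve-or-downgrade rule is easiest to see as code. Here is a minimal sketch, assuming hypothetical helper names (`isCodexMax`, `resolveEffort`) rather than the plugin's actual internals in lib/request/request-transformer.ts:

```typescript
type ReasoningEffort = "none" | "low" | "medium" | "high" | "xhigh";

// All five Codex Max presets normalize to this model id.
function isCodexMax(model: string): boolean {
  return model === "gpt-5.1-codex-max";
}

function resolveEffort(model: string, configured?: ReasoningEffort): ReasoningEffort {
  // Per the overview above: Codex Max defaults to "high" when no effort is set.
  if (configured === undefined) {
    return "high";
  }
  // "xhigh" is exclusive to Codex Max; any other model downgrades it to "high".
  if (configured === "xhigh" && !isCodexMax(model)) {
    return "high";
  }
  return configured;
}

// resolveEffort("gpt-5.1-codex-max", "xhigh") -> "xhigh" (preserved)
// resolveEffort("gpt-5.1-codex", "xhigh")     -> "high"  (downgraded)
```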
Reviewed Changes
Copilot reviewed 13 out of 14 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| lib/types.ts | Extended ConfigOptions and ReasoningConfig to include new reasoning effort values (none, xhigh) and summary options (off, on) |
| lib/request/request-transformer.ts | Added Codex Max detection, default effort of high, and logic to preserve xhigh only for Codex Max while downgrading it to high for other models |
| lib/request/helpers/model-map.ts | Added 5 new Codex Max preset mappings (base, low, medium, high, xhigh) all normalizing to gpt-5.1-codex-max |
| test/request-transformer.test.ts | Added tests for Codex Max normalization, default effort, xhigh preservation, and downgrade behavior for non-Max models; updated default effort expectations |
| scripts/test-all-models.sh | Added 4 Codex Max test scenarios with different reasoning levels (low, medium, high, xhigh) |
| config/full-opencode.json | Added 5 Codex Max model configurations with varied reasoning efforts and output limits |
| docs/getting-started.md | Added Codex Max configuration examples and usage notes explaining expanded output capabilities |
| docs/configuration.md | Documented Codex Max-specific reasoning options and behavior differences from standard Codex |
| docs/index.md | Updated feature list to reflect 12 GPT 5.1 variants including Codex Max |
| README.md | Added Codex Max configurations, usage examples, and updated model comparison table |
| CHANGELOG.md | Documented version 3.3.0 changes including Codex Max support and new reasoning options |
| AGENTS.md | Updated architecture documentation to include Codex Max in model normalization flow |
| config/README.md | Updated variant count and descriptions to include Codex Max presets |
| package-lock.json | Dependency resolution changes (unrelated to feature) |
| "gpt-5.1-codex-max": { | ||
| "name": "GPT 5.1 Codex Max (OAuth)", | ||
| "limit": { | ||
| "context": 272000, | ||
| "output": 128000 | ||
| }, | ||
| "options": { | ||
| "reasoningEffort": "high", | ||
| "reasoningSummary": "detailed", | ||
| "textVerbosity": "medium", | ||
| "include": [ | ||
| "reasoning.encrypted_content" | ||
| ], | ||
| "store": false | ||
| } | ||
| }, |
Copilot AI commented on Nov 20, 2025:
Inconsistent output limit for gpt-5.1-codex-max. The documentation in README.md and docs/getting-started.md specifies 400000 for the base gpt-5.1-codex-max preset, indicating expanded output capacity, but this configuration file uses 128000. This should be updated to 400000 to match the documented capabilities and the other configuration files.
The correct value is 272000.
OpenAI's context of 400000 comes from the combination of input and output (272000 input + 128000 output = 400000 total).
See here: openai/codex#4728
| "gpt-5.1-codex-max-xhigh": { | ||
| "name": "GPT 5.1 Codex Max Extra High (OAuth)", | ||
| "limit": { | ||
| "context": 272000, | ||
| "output": 128000 | ||
| }, | ||
| "options": { | ||
| "reasoningEffort": "xhigh", | ||
| "reasoningSummary": "detailed", | ||
| "textVerbosity": "medium", | ||
| "include": [ | ||
| "reasoning.encrypted_content" | ||
| ], | ||
| "store": false | ||
| } | ||
| }, |
Copilot AI commented on Nov 20, 2025:
Inconsistent output limit for gpt-5.1-codex-max-xhigh. The documentation in README.md and docs/getting-started.md specifies 400000 for the xhigh preset, indicating it should have the expanded output window, but this configuration file uses 128000. This should be updated to 400000 to match the documented capabilities and the other configuration files.
The correct value is 272000.
OpenAI's context of 400000 comes from the combination of input and output, as above.
See here: openai/codex#4728
@numman-ali Of course, happy to help (: Feel free to reach out on Discord @ caulkenstein. Good catches by Copilot!
Co-authored-by: Copilot <[email protected]>
numman-ali left a comment:
Config changes are required, as they affect compaction behaviour in opencode.
Normally I tell my coding agent to read all comments on the PR (using the gh CLI, e.g. `gh pr view 44 --comments`), and it then goes ahead with rectifying everything while I just review.
openai/codex#6879: It might be worth looking into these 2 PRs, as this is the main appeal of the new max models.
Looking forward to the new max model!
The opencode compaction logic isn't in the purview of plugins; opencode doesn't expose plugin hooks related to compaction events or opencode storage access. The compaction logic would need to be modified in opencode's source if Dax were looking to adopt logic similar to what codex does client side and then call the compaction endpoint. I don't think that makes sense; it's better that all opencode compaction logic stays consistent and doesn't change behavior depending on authentication type (i.e. OAuth use shouldn't differ from using OpenAI models through zen, vercel, openrouter, etc.).
If you are eager to try out codex max in opencode, I'd recommend using https://github.com/open-hax/codex or pulling this PR:
I might try this! Then I can remove the repo from my local machine?
I would actually advise the clone approach for all users, whether getting this feature early or not. Not that I don't trust the maintainer or contributors (I'm one), but the dependency isn't tracked, it's auto-updated periodically by opencode with no notice, and that opens an attack vector for npm hijacking. As an alternative you could pin the version so you have more control, but I like having copies of the source too, just so I can look into what's going on if I want to tweak something. So I use this method (git clone) for all the opencode plugins I run.
- `high` - Maximum code quality

**GPT-5.1-Codex-Max Values:**
- `none` - No dedicated reasoning phase
`none` does not seem to be supported for GPT-5.1-Codex-Max:

```
[openai-codex-plugin] 400 error: {
  "error": {
    "message": "Unsupported value: 'none' is not supported with the 'gpt-5.1-codex-max' model. Supported values are: 'low', 'medium', 'high', and 'xhigh'.",
    "type": "invalid_request_error",
    "param": "reasoning.effort",
    "code": "unsupported_value",
    "status": 400
  }
}
```
Surprisingly, `none` works with GPT-5.1, while the documented `minimal` errors out:

```
[openai-codex-plugin] 400 error: {
  "error": {
    "message": "Unsupported value: 'minimal' is not supported with the 'gpt-5.1' model. Supported values are: 'none', 'low', 'medium', and 'high'.",
    "type": "invalid_request_error",
    "param": "reasoning.effort",
    "code": "unsupported_value",
    "status": 400
  }
}
```
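Taken together, the two error payloads pin down which `reasoning.effort` values each model actually accepts. A client-side guard could reject bad values before the request round-trips; the sketch below is illustrative, built only from the observed error messages (the `SUPPORTED_EFFORTS` map and `assertSupportedEffort` are not part of the plugin):

```typescript
// Supported reasoning.effort values per model, taken verbatim from the two
// unsupported_value error payloads above. Other models would need the same
// empirical check before being added here.
const SUPPORTED_EFFORTS: Record<string, readonly string[]> = {
  "gpt-5.1": ["none", "low", "medium", "high"],
  "gpt-5.1-codex-max": ["low", "medium", "high", "xhigh"],
};

// Throws locally instead of letting the API return a 400.
function assertSupportedEffort(model: string, effort: string): void {
  const supported = SUPPORTED_EFFORTS[model];
  if (supported && !supported.includes(effort)) {
    throw new Error(
      `reasoning.effort '${effort}' is not supported by '${model}'; ` +
        `supported values: ${supported.join(", ")}`,
    );
  }
}
```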
Sorry, been quite busy. Merging in now and reviewing final updates before release. For everyone watching, I think this might be the last update, as OpenCode have confirmed they are working on their own plugin, so it's only a matter of time before this becomes redundant.
Closes #43. Adds full gpt-5.1-codex-max support with new model and reasoning effort (xhigh/extra high). Tests updated to cover the new model. User tested with pro account.
Largely written by gpt-5.1-codex-max (: