Conversation

@cau1k cau1k commented Nov 19, 2025

Closes #43. Adds full gpt-5.1-codex-max support, including the new model and the xhigh (extra high) reasoning effort. Tests updated to cover the new model. Manually tested with a Pro account.

Largely written by gpt-5.1-codex-max (:

cau1k commented Nov 20, 2025

Thought I had a ghost in the machine when trying to test out my implementation in the main opencode repo. Turns out it has this plugin already initialized in its .opencode config lol

@iamhenry

looking forward to it!

@numman-ali (Owner)

This is very good @cau1k - going to finalise checks tonight, merge and then publish

Would you consider being a permanent collaborator? You're the first one to cover it to the same patterns/standards I go for ^_^

@numman-ali numman-ali requested a review from Copilot November 20, 2025 21:37
@numman-ali numman-ali self-assigned this Nov 20, 2025
@numman-ali numman-ali added the enhancement New feature or request label Nov 20, 2025
Copilot finished reviewing on behalf of numman-ali November 20, 2025 21:42
Copilot AI left a comment

Pull Request Overview

This pull request adds comprehensive support for the GPT-5.1 Codex Max model family, including the new xhigh (extra high) reasoning effort level. The changes introduce 5 new Codex Max presets (base, low, medium, high, and xhigh), expand the type system to support new reasoning options (none/xhigh for effort, off/on for summary), and update all configuration files, documentation, and tests to reflect the new capabilities.

Key Changes

  • Adds GPT-5.1 Codex Max model normalization and reasoning configuration with xhigh effort support (exclusive to Codex Max; see the sketch after this list)
  • Extends type definitions to include none/xhigh reasoning effort and off/on summary options
  • Updates configuration files, documentation, and test scripts with 5 new Codex Max presets and expanded output limits
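A minimal sketch of that normalization-plus-downgrade rule, under assumed names (the actual implementation lives in lib/request/request-transformer.ts, so treat the shapes here as hypothetical):

```ts
// Sketch only: names and shapes are assumptions, not the plugin's API.
type ReasoningEffort = "none" | "low" | "medium" | "high" | "xhigh";

// All Codex Max presets normalize to this model id.
function isCodexMax(model: string): boolean {
  return model === "gpt-5.1-codex-max";
}

function resolveEffort(model: string, requested?: ReasoningEffort): ReasoningEffort {
  const effort = requested ?? "high"; // default effort is high
  // xhigh is exclusive to Codex Max; downgrade to high for other models.
  if (effort === "xhigh" && !isCodexMax(model)) {
    return "high";
  }
  return effort;
}
```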

Reviewed Changes

Copilot reviewed 13 out of 14 changed files in this pull request and generated 4 comments.

| File | Description |
| --- | --- |
| lib/types.ts | Extended `ConfigOptions` and `ReasoningConfig` to include new reasoning effort values (`none`, `xhigh`) and summary options (`off`, `on`) |
| lib/request/request-transformer.ts | Added Codex Max detection, default effort of `high`, and logic to preserve `xhigh` only for Codex Max while downgrading it to `high` for other models |
| lib/request/helpers/model-map.ts | Added 5 new Codex Max preset mappings (base, low, medium, high, xhigh), all normalizing to `gpt-5.1-codex-max` (sketched below) |
| test/request-transformer.test.ts | Added tests for Codex Max normalization, default effort, `xhigh` preservation, and downgrade behavior for non-Max models; updated default effort expectations |
| scripts/test-all-models.sh | Added 4 Codex Max test scenarios with different reasoning levels (low, medium, high, xhigh) |
| config/full-opencode.json | Added 5 Codex Max model configurations with varied reasoning efforts and output limits |
| docs/getting-started.md | Added Codex Max configuration examples and usage notes explaining expanded output capabilities |
| docs/configuration.md | Documented Codex Max-specific reasoning options and behavior differences from standard Codex |
| docs/index.md | Updated feature list to reflect 12 GPT 5.1 variants including Codex Max |
| README.md | Added Codex Max configurations, usage examples, and an updated model comparison table |
| CHANGELOG.md | Documented version 3.3.0 changes, including Codex Max support and new reasoning options |
| AGENTS.md | Updated architecture documentation to include Codex Max in the model normalization flow |
| config/README.md | Updated variant count and descriptions to include Codex Max presets |
| package-lock.json | Dependency resolution changes (unrelated to this feature) |
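The model-map change is the simplest piece. A sketch of the five preset mappings, assuming a plain string-to-string map (the real keys live in lib/request/helpers/model-map.ts; the exact shape is an assumption):

```ts
// Assumed shape: preset id -> normalized model id. All five Codex Max
// presets normalize to the same underlying model.
const CODEX_MAX_PRESETS: Record<string, string> = {
  "gpt-5.1-codex-max": "gpt-5.1-codex-max",
  "gpt-5.1-codex-max-low": "gpt-5.1-codex-max",
  "gpt-5.1-codex-max-medium": "gpt-5.1-codex-max",
  "gpt-5.1-codex-max-high": "gpt-5.1-codex-max",
  "gpt-5.1-codex-max-xhigh": "gpt-5.1-codex-max",
};
```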


Comment on lines +66 to +81
"gpt-5.1-codex-max": {
"name": "GPT 5.1 Codex Max (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
Copilot AI Nov 20, 2025

Inconsistent output limit for gpt-5.1-codex-max. The documentation in README.md and docs/getting-started.md specifies 400000 for the base gpt-5.1-codex-max preset, indicating expanded output capacity, but this configuration file uses 128000. This should be updated to 400000 to match the documented capabilities and the other configuration files.

numman-ali (Owner)

The correct value is 272000.

OpenAI's context of 400000 comes from combining input and output.

See openai/codex#4728
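(That is, assuming the values in this config: 272,000 input context + 128,000 output = 400,000 total.)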

Comment on lines +130 to +145
"gpt-5.1-codex-max-xhigh": {
"name": "GPT 5.1 Codex Max Extra High (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"options": {
"reasoningEffort": "xhigh",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
Copilot AI Nov 20, 2025

Inconsistent output limit for gpt-5.1-codex-max-xhigh. The documentation in README.md and docs/getting-started.md specifies 400000 for the xhigh preset, indicating it should have the expanded output window, but this configuration file uses 128000. This should be updated to 400000 to match the documented capabilities and the other configuration files.

numman-ali (Owner)

As above, the correct value is 272000; OpenAI's 400000 context combines input and output (see openai/codex#4728).


cau1k commented Nov 20, 2025

@numman-ali Of course, happy to help (: Feel free to reach out on Discord @ caulkenstein

Good catches by Copilot!

cau1k and others added 2 commits November 20, 2025 17:04
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Copilot <[email protected]>
@numman-ali numman-ali left a comment

Config changes are required, as they affect compaction behaviour in opencode.

Normally I tell my coding agent to read all the comments on the PR (using the gh CLI), and it goes ahead and rectifies everything; I just review.

@cau1k cau1k requested a review from numman-ali November 20, 2025 22:55
@hellowodl

openai/codex#6879
openai/codex#6894

It might be worth it to look into these 2 PRs, as this is the main appeal of the new max models.

@EvanNotFound

looking forward to the new max model!

@ben-vargas (Contributor)

> openai/codex#6879 openai/codex#6894
>
> It might be worth it to look into these 2 PRs, as this is the main appeal of the new max models.

The opencode compaction logic isn't in the purview of plugins; opencode doesn't expose plugin hooks for compaction events or access to opencode storage. Opencode's compaction logic would need to be modified in its source if Dax wanted to adopt logic similar to what codex does client side and then call the compaction endpoint. I don't think that makes sense; better that all opencode compaction logic is consistent and doesn't change behavior depending on authentication type (i.e. OAuth use shouldn't differ from using OpenAI models through zen, Vercel, OpenRouter, etc.).


cau1k commented Nov 24, 2025

If you are eager to try out codex max in opencode, I'd recommend using https://github.com/open-hax/codex or pulling this PR:

  1. gh repo clone numman-ali/opencode-openai-codex-auth
  2. cd opencode-openai-codex-auth
  3. gh pr checkout 44
  4. Add "plugin": [ "file://path/to/opencode-openai-codex-auth" ] to your opencode config

@iamhenry

> If you are eager to try out codex max in opencode, I'd recommend using https://github.com/open-hax/codex or pulling this PR:
>
> 1. gh repo clone numman-ali/opencode-openai-codex-auth
> 2. cd opencode-openai-codex-auth
> 3. gh pr checkout 44
> 4. Add "plugin": [ "file://path/to/opencode-openai-codex-auth" ] to your opencode config

I might try this! Then I can remove the repo from my local machine?

@ben-vargas (Contributor)

> If you are eager to try out codex max in opencode, I'd recommend using https://github.com/open-hax/codex or pulling this PR:
>
> 1. gh repo clone numman-ali/opencode-openai-codex-auth
> 2. cd opencode-openai-codex-auth
> 3. gh pr checkout 44
> 4. Add "plugin": [ "file://path/to/opencode-openai-codex-auth" ] to your opencode config

I would actually advise the clone approach for all users, whether getting this feature early or not. Not that I don't trust the maintainer or contributors (I'm one), but the dependency isn't tracked, it's auto-updated periodically by opencode with no notice, and it opens an attack vector for npm hijacking.

You could alternatively pin the version so you have more control, but I like having a copy of the source too, so I can look into what's going on if I want to tweak something. So I use this method (git clone) for all the opencode plugins I run.

- `high` - Maximum code quality

**GPT-5.1-Codex-Max Values:**
- `none` - No dedicated reasoning phase
@chmistj chmistj Nov 25, 2025

`none` does not seem to be supported for GPT-5.1-Codex-Max:

[openai-codex-plugin] 400 error: {
  "error": {
    "message": "Unsupported value: 'none' is not supported with the 'gpt-5.1-codex-max' model. Supported values are: 'low', 'medium', 'high', and 'xhigh'.",
    "type": "invalid_request_error",
    "param": "reasoning.effort",
    "code": "unsupported_value",
    "status": 400
  }
}


Surprisingly, `none` works with GPT-5.1, while the documented `minimal` errors out:

[openai-codex-plugin] 400 error: {
  "error": {
    "message": "Unsupported value: 'minimal' is not supported with the 'gpt-5.1' model. Supported values are: 'none', 'low', 'medium', and 'high'.",
    "type": "invalid_request_error",
    "param": "reasoning.effort",
    "code": "unsupported_value",
    "status": 400
  }
}
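Based on the two error responses above, a guard along these lines would catch both cases before the request is sent (a sketch with hypothetical names, not the plugin's actual code):

```ts
// Supported reasoning efforts per model, taken from the API error
// messages above; this helper and its names are assumptions.
const SUPPORTED_EFFORTS: Record<string, string[]> = {
  "gpt-5.1": ["none", "low", "medium", "high"],
  "gpt-5.1-codex-max": ["low", "medium", "high", "xhigh"],
};

function assertEffortSupported(model: string, effort: string): void {
  const supported = SUPPORTED_EFFORTS[model];
  if (supported && !supported.includes(effort)) {
    throw new Error(
      `'${effort}' is not supported with the '${model}' model. ` +
      `Supported values are: ${supported.join(", ")}.`
    );
  }
}
```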

@numman-ali (Owner)

Sorry, been quite busy, merging in now and reviewing final updates before release

For everyone watching, I think this might be the last update: OpenCode have confirmed they are working on their own plugin, so it's only a matter of time before this becomes redundant.

@numman-ali numman-ali merged commit c5afe9a into numman-ali:main Nov 25, 2025