v1.0.2 - Smart Caching and Stability Improvements


@numman-ali numman-ali released this 02 Oct 11:14

🎯 Major Improvements

Smart ETag-Based Caching

  • Replaced the 24-hour TTL cache with HTTP ETag-based conditional requests
  • Only downloads instructions when the content actually changes (304 Not Modified responses)
  • Significantly reduces GitHub API calls while staying up to date
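
The conditional-request flow can be sketched roughly like this (a minimal illustration; the cache shape and function name are assumptions, not the plugin's actual internals):

```typescript
// Illustrative ETag cache: send If-None-Match on repeat requests and
// reuse the cached body when the server answers 304 Not Modified.
interface CacheEntry {
  etag: string;
  body: string;
}

const cache = new Map<string, CacheEntry>();

async function fetchWithEtag(url: string): Promise<string> {
  const cached = cache.get(url);
  const headers: Record<string, string> = {};
  if (cached) headers["If-None-Match"] = cached.etag;

  const res = await fetch(url, { headers });

  // 304: server confirms the cached copy is still current — no download.
  if (res.status === 304 && cached) return cached.body;

  // 200: content changed (or first fetch); store body with its new ETag.
  const body = await res.text();
  const etag = res.headers.get("etag");
  if (etag) cache.set(url, { etag, body });
  return body;
}
```

Unlike a fixed TTL, this never serves stale instructions and never re-downloads unchanged ones; the only recurring cost is a lightweight 304 round trip.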

Release Tag Tracking for Stability

  • Now fetches Codex instructions from the latest GitHub release tag instead of the main branch
  • Ensures compatibility with ChatGPT Codex API (main branch may have unreleased features)
  • Prevents "Instructions are not valid" errors from bleeding-edge changes
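
Resolving the release tag before fetching might look like the sketch below (the function name is hypothetical, and which repo the instructions come from is an assumption):

```typescript
// Hypothetical helper: ask the GitHub API for the latest published
// release, then pin instruction fetches to that tag instead of main.
async function latestReleaseTag(owner: string, repo: string): Promise<string> {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/releases/latest`,
  );
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
  const release = (await res.json()) as { tag_name: string };
  return release.tag_name; // e.g. "v1.0.2"
}
```

Because a release tag only ever points at published code, instructions fetched at that tag cannot pick up unreleased main-branch changes that the ChatGPT Codex API would reject.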

🐛 Bug Fixes

Model Normalization

  • Fixed default model fallback: unsupported models now default to gpt-5 (not gpt-5-codex)
  • Preserves user's choice between gpt-5 and gpt-5-codex when explicitly specified
  • Only codex model variants normalize to gpt-5-codex
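
The three rules above amount to something like this sketch (function and constant names are illustrative, not the plugin's actual identifiers):

```typescript
// Illustrative normalization: explicit supported choices pass through,
// codex variants map to gpt-5-codex, everything else falls back to gpt-5.
const SUPPORTED = new Set(["gpt-5", "gpt-5-codex"]);

function normalizeModel(model: string): string {
  if (SUPPORTED.has(model)) return model;            // preserve explicit choice
  if (model.includes("codex")) return "gpt-5-codex"; // codex variants only
  return "gpt-5";                                    // default fallback
}
```

The fix is the last line: previously the fallback for unsupported models was gpt-5-codex, which silently overrode non-codex model requests.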

Error Prevention

  • Added body.text initialization check to prevent TypeError on body.text.verbosity
  • Improved error handling in request transformation
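
The guard is essentially an initialize-before-write check, along these lines (the shape of the request body and the function name are assumptions for illustration):

```typescript
// Sketch of the fix: ensure body.text exists before assigning
// body.text.verbosity, so the assignment can't throw a TypeError.
interface RequestBody {
  text?: { verbosity?: string };
  [key: string]: unknown;
}

function setVerbosity(body: RequestBody, verbosity: string): RequestBody {
  body.text ??= {}; // initialize if undefined — prevents the TypeError
  body.text.verbosity = verbosity;
  return body;
}
```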

Code Quality

  • Standardized all console.error prefixes to [openai-codex-plugin]
  • Updated documentation to reflect ETag caching implementation
  • Added npm version and downloads badges to README

📚 Documentation

  • Updated README with accurate caching behavior description
  • Added npm package badges for version tracking
  • Clarified release-based fetching strategy

📦 Installation

npm install opencode-openai-codex-auth@1.0.2

Or add to your opencode.json:

```json
{
  "plugin": ["opencode-openai-codex-auth"],
  "model": "openai/gpt-5-codex"
}
```

Full Changelog: v1.0.1...v1.0.2