# v1.0.2 - Smart Caching and Stability Improvements

## 🎯 Major Improvements

### Smart ETag-Based Caching
- Replaced 24-hour TTL cache with HTTP ETag-based conditional requests
- Only downloads instructions when content actually changes (304 Not Modified responses)
- Significantly reduces GitHub API calls while staying up-to-date
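The conditional-request flow described above can be sketched roughly as follows. This is a minimal illustration, not the plugin's actual code; `EtagCache` and `fetchWithEtag` are hypothetical names:

```typescript
// Hypothetical sketch of ETag-based conditional fetching.
interface EtagCache {
  etag: string | null;
  body: string | null;
}

async function fetchWithEtag(url: string, cache: EtagCache): Promise<string> {
  const headers: Record<string, string> = {};
  if (cache.etag) headers["If-None-Match"] = cache.etag; // conditional request

  const res = await fetch(url, { headers });
  if (res.status === 304 && cache.body !== null) {
    return cache.body; // 304 Not Modified: server content unchanged, reuse cache
  }

  // Content changed (or first fetch): store the new ETag and body
  cache.etag = res.headers.get("etag");
  cache.body = await res.text();
  return cache.body;
}
```

Unlike a fixed 24-hour TTL, the cached copy never goes stale silently: every call revalidates with the server, but a 304 response carries no body, so unchanged content costs almost nothing.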
### Release Tag Tracking for Stability
- Now fetches Codex instructions from latest GitHub release tag instead of main branch
- Ensures compatibility with ChatGPT Codex API (main branch may have unreleased features)
- Prevents "Instructions are not valid" errors from bleeding-edge changes
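Resolving the latest release tag before fetching can be sketched like this, using GitHub's `releases/latest` API endpoint. The function name and repo path are illustrative assumptions, not the plugin's actual code:

```typescript
// Hypothetical sketch: pin instruction fetches to the latest release tag
// instead of the main branch.
async function latestReleaseTag(repo: string): Promise<string> {
  const res = await fetch(`https://api.github.com/repos/${repo}/releases/latest`);
  const data = await res.json();
  return data.tag_name; // fetch raw files at this tag, never at main
}
```

Pinning to a release tag means the plugin only ever sees instruction content that shipped with a published release, so unreleased changes on main cannot break compatibility.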
## 🐛 Bug Fixes

### Model Normalization
- Fixed default model fallback: unsupported models now default to `gpt-5` (not `gpt-5-codex`)
- Preserves the user's choice between `gpt-5` and `gpt-5-codex` when explicitly specified
- Only codex model variants normalize to `gpt-5-codex`
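The normalization rule above can be expressed as a small function. This is a sketch of the described behavior, assuming a supported-model set of `gpt-5` and `gpt-5-codex`; the function name is hypothetical:

```typescript
// Hypothetical sketch of the model-normalization rule.
function normalizeModel(model: string): string {
  const supported = new Set(["gpt-5", "gpt-5-codex"]);
  if (supported.has(model)) return model;          // preserve explicit choices
  if (model.includes("codex")) return "gpt-5-codex"; // codex variants normalize
  return "gpt-5";                                   // everything else falls back
}
```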
### Error Prevention
- Added a `body.text` initialization check to prevent a TypeError on `body.text.verbosity`
- Improved error handling in request transformation
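The guard can be sketched as follows. The `setVerbosity` helper and its shape are illustrative assumptions, not the plugin's actual code:

```typescript
// Hypothetical sketch: initialize body.text before writing a nested property,
// so body.text.verbosity never throws on an undefined parent object.
function setVerbosity(body: { text?: { verbosity?: string } }, level: string): void {
  body.text ??= {}; // without this, body.text.verbosity = ... throws a TypeError
  body.text.verbosity = level;
}
```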
### Code Quality
- Standardized all `console.error` prefixes to `[openai-codex-plugin]`
- Updated documentation to reflect the ETag caching implementation
- Added npm version and downloads badges to README
## 📚 Documentation
- Updated README with accurate caching behavior description
- Added npm package badges for version tracking
- Clarified release-based fetching strategy
## 📦 Installation

```bash
npm install opencode-openai-codex-auth@1.0.2
```

Or add to your `opencode.json`:

```json
{
  "plugin": ["opencode-openai-codex-auth"],
  "model": "openai/gpt-5-codex"
}
```

**Full Changelog**: v1.0.1...v1.0.2