
Commit b51e9a9

numman-ali and claude committed
fix: v4.0.2 - Fix compaction context loss and agent creation
**Fixed**:
- Compaction losing context: Only filter orphaned function_call_output items, preserve matched pairs
- Agent creation failing: Properly detect streaming vs non-streaming requests
- SSE/JSON response handling: Convert SSE→JSON for generateText(), passthrough for streamText()

**Added**:
- gpt-5.1-chat-latest model support (normalizes to gpt-5.1)

**Technical**:
- Capture original stream value before transformation
- API always gets stream=true, but response handling based on original intent
- Orphan detection: only remove function_call_output without matching function_call

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
1 parent afbbf0b commit b51e9a9

File tree

9 files changed, +74 -25 lines changed

CHANGELOG.md

Lines changed: 17 additions & 0 deletions
@@ -2,6 +2,23 @@
 All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD).
 
+## [4.0.2] - 2025-11-27
+
+**Bugfix release**: Fixes compaction context loss, agent creation, and SSE/JSON response handling.
+
+### Fixed
+- **Compaction losing context**: v4.0.1 was too aggressive in filtering tool calls - it removed ALL `function_call`/`function_call_output` items when tools weren't present. Now only **orphaned** outputs (without matching calls) are filtered, preserving matched pairs for compaction context.
+- **Agent creation failing**: The `/agent create` command was failing with "Invalid JSON response" because we were returning SSE streams instead of JSON for `generateText()` requests.
+- **SSE/JSON response handling**: Properly detect original request intent - `streamText()` requests get SSE passthrough, `generateText()` requests get SSE→JSON conversion.
+
+### Added
+- **`gpt-5.1-chat-latest` model support**: Added to the model map; normalizes to `gpt-5.1`.
+
+### Technical Details
+- Root cause of the compaction issue: OpenCode sends `item_reference` items with `fc_*` IDs for function calls. We filter these for stateless mode, but v4.0.1 then removed ALL tool items. Now we only remove orphaned `function_call_output` items (where no matching `function_call` exists).
+- Root cause of the agent creation issue: We were forcing `stream: true` for all requests and returning SSE for all responses. Now we capture the original `stream` value before transformation and convert SSE→JSON only when the original request wasn't streaming.
+- The Codex API always receives `stream: true` (required), but response handling is based on original intent.
+
 ## [4.0.1] - 2025-11-27
 
 **Bugfix release**: Fixes API errors during summary/compaction and GitHub rate limiting.

README.md

Lines changed: 5 additions & 5 deletions
@@ -60,7 +60,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
 #### Recommended: Pin the Version
 
 ```json
-"plugin": ["opencode-openai-codex-auth@4.0.1"]
+"plugin": ["opencode-openai-codex-auth@4.0.2"]
 ```
 
 **Why pin versions?** OpenCode uses Bun's lockfile which pins resolved versions. If you use `"opencode-openai-codex-auth"` without a version, it resolves to "latest" once and **never updates** even when new versions are published.
@@ -74,7 +74,7 @@ Simply change the version in your config and restart OpenCode:
 "plugin": ["opencode-openai-codex-auth@4.0.0"]
 
 // To:
-"plugin": ["opencode-openai-codex-auth@4.0.1"]
+"plugin": ["opencode-openai-codex-auth@4.0.2"]
 ```
 
 OpenCode will detect the version mismatch and install the new version automatically.
@@ -108,7 +108,7 @@ Check [releases](https://github.com/numman-ali/opencode-openai-codex-auth/releas
 {
   "$schema": "https://opencode.ai/config.json",
   "plugin": [
-    "opencode-openai-codex-auth@4.0.1"
+    "opencode-openai-codex-auth@4.0.2"
   ],
   "provider": {
     "openai": {
@@ -517,7 +517,7 @@ Apply settings to all models:
 ```json
 {
   "$schema": "https://opencode.ai/config.json",
-  "plugin": ["opencode-openai-codex-auth@4.0.1"],
+  "plugin": ["opencode-openai-codex-auth@4.0.2"],
   "model": "openai/gpt-5-codex",
   "provider": {
     "openai": {
@@ -537,7 +537,7 @@ Create your own named variants in the model selector:
 ```json
 {
   "$schema": "https://opencode.ai/config.json",
-  "plugin": ["opencode-openai-codex-auth@4.0.1"],
+  "plugin": ["opencode-openai-codex-auth@4.0.2"],
   "provider": {
     "openai": {
       "models": {

config/full-opencode.json

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 {
   "$schema": "https://opencode.ai/config.json",
   "plugin": [
-    "opencode-openai-codex-auth@4.0.1"
+    "opencode-openai-codex-auth@4.0.2"
   ],
   "provider": {
     "openai": {

index.ts

Lines changed: 6 additions & 2 deletions
@@ -162,13 +162,17 @@ export const OpenAIAuthPlugin: Plugin = async ({ client }: PluginInput) => {
 
   // Step 3: Transform request body with model-specific Codex instructions
   // Instructions are fetched per model family (codex-max, codex, gpt-5.1)
+  // Capture original stream value before transformation
+  // generateText() sends no stream field, streamText() sends stream=true
+  const originalBody = init?.body ? JSON.parse(init.body as string) : {};
+  const isStreaming = originalBody.stream === true;
+
   const transformation = await transformRequestForCodex(
     init,
     url,
     userConfig,
     codexMode,
   );
-  const hasTools = transformation?.body.tools !== undefined;
   const requestInit = transformation?.updatedInit ?? init;
 
   // Step 4: Create headers with OAuth and ChatGPT account info
@@ -203,7 +207,7 @@ export const OpenAIAuthPlugin: Plugin = async ({ client }: PluginInput) => {
     return await handleErrorResponse(response);
   }
 
-  return await handleSuccessResponse(response, hasTools);
+  return await handleSuccessResponse(response, isStreaming);
   },
 };
 },
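The capture step in the diff above can be sketched in isolation. This is a hedged, standalone illustration, not the plugin's actual code: `captureStreamIntent` is a hypothetical helper name, and the body shape is reduced to the one field that matters here.

```typescript
type RequestBody = { stream?: boolean; [key: string]: unknown };

// Read the caller's streaming intent BEFORE the transform forces stream=true.
// generateText() sends no stream field; streamText() sends stream: true.
function captureStreamIntent(rawBody: string | undefined): {
  isStreaming: boolean;
  outgoingBody: RequestBody;
} {
  const original: RequestBody = rawBody ? JSON.parse(rawBody) : {};
  const isStreaming = original.stream === true;
  // The Codex API requires stream=true regardless of the caller's intent
  const outgoingBody: RequestBody = { ...original, stream: true };
  return { isStreaming, outgoingBody };
}
```

The key design point is ordering: intent is read from the raw body before transformation, so the forced `stream: true` on the outgoing request cannot mask what the caller originally asked for.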

lib/request/fetch-helpers.ts

Lines changed: 8 additions & 8 deletions
@@ -279,24 +279,24 @@ export async function handleErrorResponse(
 
 /**
  * Handles successful responses from the Codex API
- * Converts SSE to JSON for non-tool requests
+ * Converts SSE to JSON for non-streaming requests (generateText)
+ * Passes through SSE for streaming requests (streamText)
  * @param response - Success response from API
- * @param hasTools - Whether the request included tools
- * @returns Processed response (SSE→JSON for non-tool, stream for tool requests)
+ * @param isStreaming - Whether this is a streaming request (stream=true in body)
+ * @returns Processed response (SSE→JSON for non-streaming, stream for streaming)
  */
 export async function handleSuccessResponse(
   response: Response,
-  hasTools: boolean,
+  isStreaming: boolean,
 ): Promise<Response> {
   const responseHeaders = ensureContentType(response.headers);
 
-  // For non-tool requests (compact/summarize), convert streaming SSE to JSON
-  // generateText() expects a non-streaming JSON response, not SSE
-  if (!hasTools) {
+  // For non-streaming requests (generateText), convert SSE to JSON
+  if (!isStreaming) {
     return await convertSseToJson(response, responseHeaders);
   }
 
-  // For tool requests, return stream as-is (streamText handles SSE)
+  // For streaming requests (streamText), return stream as-is
   return new Response(response.body, {
     status: response.status,
     statusText: response.statusText,
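The SSE→JSON branch delegates to `convertSseToJson`, whose body isn't shown in this diff. As a rough sketch of what such a conversion could look like: the function below scans SSE `data:` lines and returns the payload of a terminal event. The event name `"response.completed"` and its `response` field are assumptions about the wire format, not a copy of the real helper.

```typescript
// Hypothetical sketch (assumed event shape, not the plugin's convertSseToJson):
// extract the final non-streaming payload from a buffered SSE body.
function sseToJson(sseText: string): unknown {
  for (const line of sseText.split("\n")) {
    if (!line.startsWith("data:")) continue; // ignore comments/event fields
    const payload = line.slice(5).trim();
    if (payload === "[DONE]") continue;
    const event = JSON.parse(payload) as { type?: string; response?: unknown };
    // Assumed: the terminal event carries the complete response object
    if (event.type === "response.completed") return event.response;
  }
  throw new Error("no terminal event found in SSE stream");
}
```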

lib/request/helpers/model-map.ts

Lines changed: 1 addition & 0 deletions
@@ -43,6 +43,7 @@ export const MODEL_MAP: Record<string, string> = {
   "gpt-5.1-low": "gpt-5.1",
   "gpt-5.1-medium": "gpt-5.1",
   "gpt-5.1-high": "gpt-5.1",
+  "gpt-5.1-chat-latest": "gpt-5.1",
 
   // ============================================================================
   // GPT-5 Codex Models (LEGACY - maps to gpt-5.1-codex as gpt-5 is being phased out)
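The lookup this map enables is a plain table-driven normalization. A minimal sketch, with `MODEL_MAP` reduced to the entries shown above and a pass-through fallback for unknown names (the fallback is an assumption, not confirmed from this diff):

```typescript
// Reduced stand-in for the real table in lib/request/helpers/model-map.ts
const MODEL_MAP: Record<string, string> = {
  "gpt-5.1-low": "gpt-5.1",
  "gpt-5.1-medium": "gpt-5.1",
  "gpt-5.1-high": "gpt-5.1",
  "gpt-5.1-chat-latest": "gpt-5.1",
};

// Hypothetical helper: map an alias to its canonical Codex model name,
// passing unknown names through unchanged.
function normalizeModel(model: string): string {
  return MODEL_MAP[model] ?? model;
}
```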

lib/request/request-transformer.ts

Lines changed: 15 additions & 4 deletions
@@ -398,6 +398,7 @@ export async function transformRequestBody(
   // Codex required fields
   // ChatGPT backend REQUIRES store=false (confirmed via testing)
   body.store = false;
+  // Always set stream=true for API - response handling detects original intent
   body.stream = true;
   body.instructions = codexInstructions;
 
@@ -441,12 +442,22 @@ export async function transformRequestBody(
     body.input = addToolRemapMessage(body.input, !!body.tools);
   }
 
+  // Filter orphaned function_call_output items (where function_call was an item_reference that got filtered)
+  // Keep matched pairs for compaction context
   if (!body.tools && body.input) {
-    body.input = body.input.filter(
-      (item) =>
-        item.type !== "function_call" &&
-        item.type !== "function_call_output",
+    // Collect all call_ids from function_call items
+    const functionCallIds = new Set(
+      body.input
+        .filter((item) => item.type === "function_call" && item.call_id)
+        .map((item) => item.call_id),
     );
+    // Only filter function_call_output items that don't have a matching function_call
+    body.input = body.input.filter((item) => {
+      if (item.type === "function_call_output") {
+        return functionCallIds.has(item.call_id);
+      }
+      return true;
+    });
   }
 }
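The orphan filter added above can be exercised on its own. This sketch reproduces the diff's logic outside `transformRequestBody`; the helper name `dropOrphanedOutputs` and the reduced `InputItem` type are mine, not the plugin's.

```typescript
type InputItem = { type: string; call_id?: string; [key: string]: unknown };

// Collect the call_ids of surviving function_call items, then drop only the
// function_call_output items with no matching call (matched pairs survive,
// which is what preserves compaction context).
function dropOrphanedOutputs(input: InputItem[]): InputItem[] {
  const functionCallIds = new Set(
    input
      .filter((item) => item.type === "function_call" && item.call_id)
      .map((item) => item.call_id),
  );
  return input.filter((item) => {
    if (item.type === "function_call_output") {
      return functionCallIds.has(item.call_id);
    }
    return true;
  });
}
```

Note the two-pass structure: ids are collected before filtering, so a matched pair survives regardless of the order its items appear in `input`.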

package.json

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 {
   "name": "opencode-openai-codex-auth",
-  "version": "4.0.1",
+  "version": "4.0.2",
   "description": "OpenAI ChatGPT (Codex backend) OAuth auth plugin for opencode - use your ChatGPT Plus/Pro subscription instead of API credits",
   "main": "./dist/index.js",
   "types": "./dist/index.d.ts",

test/request-transformer.test.ts

Lines changed: 20 additions & 4 deletions
@@ -847,13 +847,12 @@ describe('Request Transformer Module', () => {
     expect(result.reasoning?.effort).toBe('medium');
   });
 
-  it('should drop function_call items when no tools present', async () => {
+  it('should drop orphaned function_call_output when no tools present', async () => {
     const body: RequestBody = {
       model: 'gpt-5-codex',
       input: [
         { type: 'message', role: 'user', content: 'hello' },
-        { type: 'function_call', role: 'assistant', name: 'write', arguments: '{}' } as any,
-        { type: 'function_call_output', role: 'assistant', call_id: 'call_1', output: '{}' } as any,
+        { type: 'function_call_output', role: 'assistant', call_id: 'orphan_call', output: '{}' } as any,
       ],
     };
 
@@ -862,7 +861,24 @@ describe('Request Transformer Module', () => {
     expect(result.tools).toBeUndefined();
     expect(result.input).toHaveLength(1);
     expect(result.input![0].type).toBe('message');
-    expect(result.input![0].role).toBe('user');
+  });
+
+  it('should keep matched function_call pairs when no tools present (for compaction)', async () => {
+    const body: RequestBody = {
+      model: 'gpt-5-codex',
+      input: [
+        { type: 'message', role: 'user', content: 'hello' },
+        { type: 'function_call', call_id: 'call_1', name: 'write', arguments: '{}' } as any,
+        { type: 'function_call_output', call_id: 'call_1', output: 'success' } as any,
+      ],
+    };
+
+    const result = await transformRequestBody(body, codexInstructions);
+
+    expect(result.tools).toBeUndefined();
+    expect(result.input).toHaveLength(3);
+    expect(result.input![1].type).toBe('function_call');
+    expect(result.input![2].type).toBe('function_call_output');
   });
 
   describe('CODEX_MODE parameter', () => {
