assistant-streaming example should decode the streamed reply #405


Open

chrbsg opened this issue May 20, 2025 · 3 comments

Comments


chrbsg commented May 20, 2025

examples/beta/assistant-streaming/main.go sends a query requesting a streamed response.
But the code that handles the stream just prints the type of each streamed chunk:

        for stream.Next() {
                evt := stream.Current()
                println(fmt.Sprintf("%T", evt.Data))
        }

The example should demonstrate full decoding and printing of the streamed reply.

(The standard non-Assistant query streaming uses an openai.ChatCompletionAccumulator, but this does not appear to handle openai.AssistantStreamEventUnion types.)
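One minimal way the example could decode events, without depending on each variant's concrete type, is to marshal the payload back to JSON and print it. This is only a sketch against the same loop as above — it assumes `evt.Data` marshals cleanly and that the surrounding file imports `encoding/json`, `fmt`, and `log`:

        for stream.Next() {
                evt := stream.Current()
                // Pretty-print the full event payload instead of just its Go type name.
                b, err := json.MarshalIndent(evt.Data, "", "  ")
                if err != nil {
                        log.Printf("marshal event: %v", err)
                        continue
                }
                fmt.Println(string(b))
        }
        if err := stream.Err(); err != nil {
                log.Fatal(err)
        }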

jacobzim-stl (Collaborator) commented May 20, 2025

Hi @chrbsg, unfortunately this SDK isn't going to support the Assistants API post-beta (as of the latest release).

However, I'd be open to expanding the responses streaming example. Would that be helpful?

chrbsg (Author) commented May 21, 2025

Yes,

  • the regular streaming example should demonstrate decoding events, not just receiving them
  • if the Assistants API is no longer supported, that should be noted in the README and the assistants code removed from the SDK (otherwise, while the code is still present, people will expect it to work and will open bug reports when it doesn't, etc.)
  • just as background to this issue, I was specifically interested in streaming Assistant responses as we have been experiencing occasional (but reproducible at the time) extremely high latency with the Assistants API (e.g. 58 seconds to return a response to a simple "hi"), along with Runs becoming stuck in the "cancelled" state (which appears to be a longstanding bug). I was hoping that streamed responses might not have these issues, but it appears not. Given that Assistants are now deprecated and unsupported, we should probably just remove this option for our users.
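For the first point, a decoding version of the regular streaming example might look like the following sketch. It is based on the openai-go README's ChatCompletionAccumulator pattern at the time of this issue — the parameter shapes, `openai.UserMessage`, and the model constant are assumptions that may have changed between SDK releases:

        ctx := context.Background()
        client := openai.NewClient() // reads OPENAI_API_KEY from the environment

        stream := client.Chat.Completions.NewStreaming(ctx, openai.ChatCompletionNewParams{
                Messages: []openai.ChatCompletionMessageParamUnion{
                        openai.UserMessage("Say hi"),
                },
                Model: openai.ChatModelGPT4o,
        })

        // Accumulate chunks so the complete message is available after the
        // stream ends, while printing each content delta as it arrives.
        acc := openai.ChatCompletionAccumulator{}
        for stream.Next() {
                chunk := stream.Current()
                acc.AddChunk(chunk)
                if len(chunk.Choices) > 0 {
                        fmt.Print(chunk.Choices[0].Delta.Content)
                }
        }
        if err := stream.Err(); err != nil {
                log.Fatal(err)
        }
        fmt.Println("\nfull reply:", acc.Choices[0].Message.Content)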

@Betula-L

@jacobzim-stl why isn't this SDK going to support the assistants API? Other SDKs (e.g. Java, Python) still have it.
