⚡️ Speed up function _execute_openai_request by 95% in PR #1214 (openai-apikey-passthrough)
Here’s an optimized version of your function. The **vast majority of runtime (over 99%)** comes from the two lines that interact with the OpenAI SDK.
- `client = OpenAI(api_key=openai_api_key)`
- `client.chat.completions.create(…)`
The first can be improved by **reusing the client instance** instead of creating a new one on every call. For repeated calls in the same process, **persisting the OpenAI client** avoids paying the client-construction overhead each time.
Here’s an optimized implementation.
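A minimal sketch of the per-key caching pattern described below. The `get_cached_client` helper and its injectable `factory` parameter are illustrative names, not the PR's exact code; the deferred import keeps the module loadable even where the `openai` package is absent.

```python
# Cache of OpenAI clients, keyed by API key, so repeated calls in the
# same process reuse one client instead of constructing a new one.
_client_cache: dict = {}


def get_cached_client(api_key: str, factory=None):
    """Return a cached client for this API key, creating it on first use.

    `factory` is injectable for testing; by default it builds a real
    OpenAI client (hypothetical helper, not the PR's exact code).
    """
    if api_key not in _client_cache:
        if factory is None:
            from openai import OpenAI  # deferred so import works without the SDK
            factory = lambda key: OpenAI(api_key=key)
        _client_cache[api_key] = factory(api_key)
    return _client_cache[api_key]
```

`_execute_openai_request` would then call `get_cached_client(openai_api_key).chat.completions.create(…)` instead of constructing `OpenAI(api_key=openai_api_key)` on every invocation.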
**Key optimizations:**
- The OpenAI client is created only once per unique API key, drastically reducing object creation overhead.
- No changes to the function signature or return values.
- Thread safety is not handled explicitly; if you plan to call this concurrently, guard client creation with a lock, or keep per-thread clients via `threading.local`.
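One way to add the locking mentioned above, as a sketch (the `get_client_threadsafe` name and `factory` parameter are illustrative):

```python
import threading

_lock = threading.Lock()
_clients: dict = {}


def get_client_threadsafe(api_key: str, factory):
    # A single lock around the dict keeps client creation race-free;
    # the critical section is tiny, so contention cost is negligible
    # next to the network call that follows.
    with _lock:
        if api_key not in _clients:
            _clients[api_key] = factory(api_key)
        return _clients[api_key]
```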
**If you never use multiple API keys in one process,** you may further simplify by keeping a single module-global client instance.
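The single-client variant could look like this lazy singleton (a sketch; reading the key from `OPENAI_API_KEY` and the injectable `factory` are assumptions, not the PR's code):

```python
import os

_client = None  # lazily created process-wide singleton


def get_client(factory=None):
    """Return the process-wide client, creating it on the first call.

    `factory` is injectable for testing; the default builds a real
    OpenAI client from the OPENAI_API_KEY environment variable.
    """
    global _client
    if _client is None:
        if factory is None:
            from openai import OpenAI  # deferred so import works without the SDK
            factory = lambda: OpenAI(api_key=os.environ["OPENAI_API_KEY"])
        _client = factory()
    return _client
```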
This is as fast as possible on the **client side**. The remote API call, which dominates total runtime, cannot be further optimized from inside the client.