added the missing encrypted content to the code example #1861

Merged · 1 commit · May 27, 2025
53 changes: 47 additions & 6 deletions examples/o-series/o3o4-mini_prompting_guide.ipynb
@@ -176,10 +176,28 @@
"# Responses API\n",
"\n",
"### Reasoning Items for Better Performance\n",
"We’ve released a [cookbook](https://cookbook.openai.com/examples/responses_api/reasoning_items) detailing the benefits of using the Responses API. It is worth restating a few of the main points in this guide as well. o3/o4-mini are both trained with their internal reasoning persisted between tool calls within a single turn. Persisting these reasoning items between tool calls during inference therefore leads to higher intelligence and performance, in the form of better decisions about when and how a tool gets called. The Responses API allows you to persist these reasoning items (maintained either by us, or by you through encrypted content if you do not want us to handle state management), while Chat Completions doesn’t. Switching to the Responses API and allowing the model access to reasoning items between function calls is the easiest way to squeeze as much performance as possible out of function calls. Here is the example from the cookbook, reproduced for convenience, showing how you can pass back the reasoning item using `encrypted_content` in a way in which we do not retain any state on our end\n",
"\n",
"```\n",
"We’ve released a [cookbook](https://cookbook.openai.com/examples/responses_api/reasoning_items) detailing the benefits of using the Responses API. It is worth restating a few of the main points in this guide as well. o3/o4-mini are both trained with their internal reasoning persisted between tool calls within a single turn. Persisting these reasoning items between tool calls during inference therefore leads to higher intelligence and performance, in the form of better decisions about when and how a tool gets called. The Responses API allows you to persist these reasoning items (maintained either by us, or by you through encrypted content if you do not want us to handle state management), while Chat Completions doesn’t. Switching to the Responses API and allowing the model access to reasoning items between function calls is the easiest way to squeeze as much performance as possible out of function calls. Here is the example from the cookbook, reproduced for convenience, showing how you can pass back the reasoning item using `encrypted_content` in a way in which we do not retain any state on our end:\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The current temperature in Paris is about 18.8 °C.\n"
]
}
],
"source": [
"from openai import OpenAI\n",
"import requests\n",
"import json\n",
"client = OpenAI()\n",
"\n",
"\n",
"def get_weather(latitude, longitude):\n",
" response = requests.get(f\"https://api.open-meteo.com/v1/forecast?latitude={latitude}&longitude={longitude}&current=temperature_2m,wind_speed_10m&hourly=temperature_2m,relative_humidity_2m,wind_speed_10m\")\n",
@@ -206,14 +224,37 @@
"context = [{\"role\": \"user\", \"content\": \"What's the weather like in Paris today?\"}]\n",
"\n",
"response = client.responses.create(\n",
" model=\"o4-mini\",\n",
" model=\"o3\",\n",
" input=context,\n",
" tools=tools,\n",
" store=False,\n",
" include=[\"reasoning.encrypted_content\"] # Encrypted chain of thought is passed back in the response\n",
")\n",
"\n",
"\n",
"response.output\n",
"```\n"
"context += response.output # Add the response to the context (including the encrypted chain of thought)\n",
"tool_call = response.output[1]\n",
"args = json.loads(tool_call.arguments)\n",
"\n",
"result = get_weather(args[\"latitude\"], args[\"longitude\"])\n",
"\n",
"context.append({ \n",
" \"type\": \"function_call_output\",\n",
" \"call_id\": tool_call.call_id,\n",
" \"output\": str(result)\n",
"})\n",
"\n",
"response_2 = client.responses.create(\n",
" model=\"o3\",\n",
" input=context,\n",
" tools=tools,\n",
" store=False,\n",
" include=[\"reasoning.encrypted_content\"]\n",
")\n",
"\n",
"print(response_2.output_text)"
]
},
{