|
8 | 8 | "OpenAI now offers function calling using [reasoning models](https://platform.openai.com/docs/guides/reasoning?api-mode=responses). Reasoning models are trained to follow logical chains of thought, making them better suited for complex or multi-step tasks.\n",
|
9 | 9 | "> _Reasoning models like o3 and o4-mini are LLMs trained with reinforcement learning to perform reasoning. Reasoning models think before they answer, producing a long internal chain of thought before responding to the user. Reasoning models excel in complex problem solving, coding, scientific reasoning, and multi-step planning for agentic workflows. They're also the best models for Codex CLI, our lightweight coding agent._\n",
|
10 | 10 | "\n",
|
11 |
| - "For the most part, using these models via the API is very simple and comparable to using familiar classic 'chat' models. \n", |
| 11 | + "For the most part, using these models via the API is very simple and comparable to using familiar 'chat' models. \n", |
12 | 12 | "\n",
|
13 | 13 | "However, there are some nuances to bear in mind, particularly when it comes to using features such as function calling. \n",
|
14 | 14 | "\n",
|
15 |
| - "All examples in this notebook use the newer [Responses API](https://community.openai.com/t/introducing-the-responses-api/1140929) which provides convenient abstractions for managing conversation state. The principles here are however relevant when using the older chat completions API." |
| 15 | + "All examples in this notebook use the newer [Responses API](https://community.openai.com/t/introducing-the-responses-api/1140929) which provides convenient abstractions for managing conversation state. However the principles here are relevant when using the older chat completions API." |
16 | 16 | ]
|
17 | 17 | },
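For readers who have not used the Responses API before, a minimal call with a reasoning model might look like the sketch below. The model name and the prompt are illustrative placeholders, not taken from the notebook.

```python
from openai import OpenAI

client = OpenAI()

# Minimal Responses API call against a reasoning model.
# "o4-mini" and the prompt are placeholders for illustration.
response = client.responses.create(
    model="o4-mini",
    input="Outline a three-step plan for migrating a database with zero downtime.",
)

print(response.output_text)
```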
|
18 | 18 | {
|
|
86 | 86 | "source": [
|
87 | 87 | "Nice and easy!\n",
|
88 | 88 | "\n",
|
89 |
| - "We're asking relatively complex questions that may requires the model to reason out a plan and proceed through it in steps, but this reasoning is hidden from us. We simply wait a little longer before being shown the output. \n", |
| 89 | + "We're asking relatively complex questions that may require the model to reason out a plan and proceed through it in steps, but this reasoning is hidden from us - we simply wait a little longer before being shown the response. \n", |
| 90 | + "\n", |
90 | 91 | "However, if we inspect the output we can see that the model has made use of a hidden set of 'reasoning' tokens that were included in the model context window, but not exposed to us as end users.\n",
|
91 |
| - "We can see these tokens and a summary of the reasoning (but not the literal tokens used) in the response" |
| 92 | + "We can see these tokens and a summary of the reasoning (but not the literal tokens used) in the response." |
92 | 93 | ]
|
93 | 94 | },
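As a rough sketch of what that inspection can look like: the attribute names below follow the Responses API usage object and output items, but treat them as assumptions to verify against your SDK version.

```python
# Reasoning tokens are counted separately inside the usage details.
usage = response.usage
print("Output tokens:", usage.output_tokens)
print("Reasoning tokens:", usage.output_tokens_details.reasoning_tokens)

# The output list may also contain 'reasoning' items; when a summary has
# been requested, it appears here rather than the raw chain of thought.
for item in response.output:
    if item.type == "reasoning":
        print(item.summary)
```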
|
94 | 95 | {
|
|
129 | 130 | "cell_type": "markdown",
|
130 | 131 | "metadata": {},
|
131 | 132 | "source": [
|
132 |
| - "It is important to know about these reasoning tokens, because it means we will consume our available context window more quickly than with traditional chat models. More on this later.\n", |
| 133 | + "It is important to know about these reasoning tokens, because it means we will consume our available context window more quickly than with traditional chat models.\n", |
133 | 134 | "\n",
|
134 | 135 | "## Calling custom functions\n",
|
135 | 136 | "What happens if we ask the model a complex request that also requires the use of custom tools?\n",
|
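The notebook defines its own tools in the cells that follow; purely as a sketch of the shape involved, a function tool for the Responses API is declared with a flat schema along these lines (the `get_city_temperature` name and its parameters are hypothetical, not the notebook's actual tools):

```python
# Illustrative only - the notebook's real tool definitions appear below.
tools = [
    {
        "type": "function",
        "name": "get_city_temperature",  # hypothetical function name
        "description": "Return the current temperature for a given city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "Name of the city"},
            },
            "required": ["city"],
        },
    }
]
```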
|
182 | 183 | "# Let's add this to our defaults so we don't have to pass it every time\n",
|
183 | 184 | "MODEL_DEFAULTS[\"tools\"] = tools\n",
|
184 | 185 | "\n",
|
185 |
| - "response = client.responses.create(input=\"What's the internal ID for the lowest-temperature city?\", previous_response_id=response.id, **MODEL_DEFAULTS)\n", |
| 186 | + "response = client.responses.create(\n", |
| 187 | + " input=\"What's the internal ID for the lowest-temperature city?\",\n", |
| 188 | + " previous_response_id=response.id,\n", |
| 189 | + " **MODEL_DEFAULTS)\n", |
186 | 190 | "print(response.output_text)\n"
|
187 | 191 | ]
|
188 | 192 | },
|
|
219 | 223 | "metadata": {},
|
220 | 224 | "source": [
|
221 | 225 | "Along with the reasoning step, the model has successfully identified the need for a tool call and passed back instructions to send to our function call. \n",
|
222 |
| - "Let's invoke the function and pass the results back to the model so it can continue.\n", |
| 226 | + "\n", |
| 227 | + "Let's invoke the function and send the results to the model so it can continue reasoning.\n", |
223 | 228 | "Function responses are a special kind of message, so we need to structure our next message as a special kind of input:\n",
|
224 | 229 | "```json\n",
|
225 | 230 | "{\n",
|
|
410 | 415 | "* We may wish to store messages in our own database for audit purposes rather than relying on OpenAI's storage and orchestration\n",
|
411 | 416 | "* etc.\n",
|
412 | 417 | "\n",
|
413 |
| - "In these situations we will treat the API as stateless - rather than using `previous_message_id` we will instead make and maintain an array of conversation items that we add to and pass as input. This allows us full control of the conversation.\n", |
| 418 | + "In these situations we may wish to take full control of the conversation. Rather than using `previous_message_id` we can instead treat the API as 'stateless' and make and maintain an array of conversation items that we send to the model as input each time.\n", |
414 | 419 | "\n",
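A minimal sketch of that loop, reusing `client` and `MODEL_DEFAULTS` from earlier in the notebook and assuming a local `call_function` helper (hypothetical here) that dispatches to whichever tools have been defined:

```python
import json

# We own the conversation state: a growing list of input items.
conversation = [{"role": "user", "content": "What's the weather like in Paris?"}]

response = client.responses.create(input=conversation, **MODEL_DEFAULTS)

# Append everything the model returned - including reasoning items and
# function calls - so nothing is dropped from the history.
conversation += response.output

# Execute any function calls and add their outputs to the conversation.
for item in response.output:
    if item.type == "function_call":
        result = call_function(item.name, json.loads(item.arguments))
        conversation.append({
            "type": "function_call_output",
            "call_id": item.call_id,
            "output": json.dumps(result),
        })

# Send the full history back so the model can continue where it left off.
response = client.responses.create(input=conversation, **MODEL_DEFAULTS)
print(response.output_text)
```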
|
415 | 420 | "This poses some Reasoning model specific nuances to consider. \n",
|
416 | 421 | "* In particular, it is essential that we preserve any reasoning and function call responses in our conversation history.\n",
|
|