
Commit 4a81b4e

release: 1.80.0 (#2367)
* codegen metadata
* chore(docs): grammar improvements
* feat(api): new API tools
* release: 1.80.0

---------

Co-authored-by: stainless-app[bot] <142633134+stainless-app[bot]@users.noreply.github.com>
1 parent 5bc7307 commit 4a81b4e

Some content is hidden: large commits have some file diffs collapsed by default.

48 files changed (+1681, -81 lines)

.release-please-manifest.json

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 {
-  ".": "1.79.0"
+  ".": "1.80.0"
 }

.stats.yml

Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
 configured_endpoints: 101
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-262e171d0a8150ea1192474d16ba3afdf9a054b399f1a49a9c9b697a3073c136.yml
-openapi_spec_hash: 33e00a48df8f94c94f46290c489f132b
-config_hash: d8d5fda350f6db77c784f35429741a2e
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-a5651cb97f86d1e2531af6aef8c5230f1ea350560fbae790ca2e481b30a6c217.yml
+openapi_spec_hash: 66a5104fd3bb43383cf919225df7a6fd
+config_hash: bb657c3fed232a56930035de3aaed936

CHANGELOG.md

Lines changed: 13 additions & 0 deletions
@@ -1,5 +1,18 @@
 # Changelog
 
+## 1.80.0 (2025-05-21)
+
+Full Changelog: [v1.79.0...v1.80.0](https://github.com/openai/openai-python/compare/v1.79.0...v1.80.0)
+
+### Features
+
+* **api:** new API tools ([d36ae52](https://github.com/openai/openai-python/commit/d36ae528d55fe87067c4b8c6b2c947cbad5e5002))
+
+
+### Chores
+
+* **docs:** grammar improvements ([e746145](https://github.com/openai/openai-python/commit/e746145a12b5335d8841aff95c91bbbde8bae8e3))
+
 ## 1.79.0 (2025-05-16)
 
 Full Changelog: [v1.78.1...v1.79.0](https://github.com/openai/openai-python/compare/v1.78.1...v1.79.0)

SECURITY.md

Lines changed: 2 additions & 2 deletions
@@ -16,13 +16,13 @@ before making any information public.
 ## Reporting Non-SDK Related Security Issues
 
 If you encounter security issues that are not directly related to SDKs but pertain to the services
-or products provided by OpenAI please follow the respective company's security reporting guidelines.
+or products provided by OpenAI, please follow the respective company's security reporting guidelines.
 
 ### OpenAI Terms and Policies
 
 Our Security Policy can be found at [Security Policy URL](https://openai.com/policies/coordinated-vulnerability-disclosure-policy).
 
-Please contact [email protected] for any questions or concerns regarding security of our services.
+Please contact [email protected] for any questions or concerns regarding the security of our services.
 
 ---
 

api.md

Lines changed: 18 additions & 0 deletions
@@ -717,6 +717,10 @@ from openai.types.responses import (
     ResponseFunctionToolCallItem,
     ResponseFunctionToolCallOutputItem,
     ResponseFunctionWebSearch,
+    ResponseImageGenCallCompletedEvent,
+    ResponseImageGenCallGeneratingEvent,
+    ResponseImageGenCallInProgressEvent,
+    ResponseImageGenCallPartialImageEvent,
     ResponseInProgressEvent,
     ResponseIncludable,
     ResponseIncompleteEvent,
@@ -730,14 +734,28 @@ from openai.types.responses import (
     ResponseInputMessageItem,
     ResponseInputText,
     ResponseItem,
+    ResponseMcpCallArgumentsDeltaEvent,
+    ResponseMcpCallArgumentsDoneEvent,
+    ResponseMcpCallCompletedEvent,
+    ResponseMcpCallFailedEvent,
+    ResponseMcpCallInProgressEvent,
+    ResponseMcpListToolsCompletedEvent,
+    ResponseMcpListToolsFailedEvent,
+    ResponseMcpListToolsInProgressEvent,
     ResponseOutputAudio,
     ResponseOutputItem,
     ResponseOutputItemAddedEvent,
     ResponseOutputItemDoneEvent,
     ResponseOutputMessage,
     ResponseOutputRefusal,
     ResponseOutputText,
+    ResponseOutputTextAnnotationAddedEvent,
+    ResponseQueuedEvent,
+    ResponseReasoningDeltaEvent,
+    ResponseReasoningDoneEvent,
     ResponseReasoningItem,
+    ResponseReasoningSummaryDeltaEvent,
+    ResponseReasoningSummaryDoneEvent,
     ResponseReasoningSummaryPartAddedEvent,
     ResponseReasoningSummaryPartDoneEvent,
     ResponseReasoningSummaryTextDeltaEvent,
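The api.md hunks above only extend the documented export list; as a minimal sketch (assuming openai>=1.80.0 is installed, with classes picked arbitrarily from the diff), the new event types should be importable directly from openai.types.responses:

# Quick import check for a few of the newly documented event types
# (assumes openai>=1.80.0 is installed).
from openai.types.responses import (
    ResponseImageGenCallPartialImageEvent,
    ResponseMcpCallCompletedEvent,
    ResponseQueuedEvent,
)

print(ResponseQueuedEvent.__name__, ResponseMcpCallCompletedEvent.__name__)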

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [project]
 name = "openai"
-version = "1.79.0"
+version = "1.80.0"
 description = "The official Python library for the openai API"
 dynamic = ["readme"]
 license = "Apache-2.0"

src/openai/_streaming.py

Lines changed: 2 additions & 2 deletions
@@ -59,7 +59,7 @@ def __stream__(self) -> Iterator[_T]:
             if sse.data.startswith("[DONE]"):
                 break
 
-            if sse.event is None or sse.event.startswith("response.") or sse.event.startswith('transcript.'):
+            if sse.event is None or sse.event.startswith("response.") or sse.event.startswith("transcript."):
                 data = sse.json()
                 if is_mapping(data) and data.get("error"):
                     message = None
@@ -161,7 +161,7 @@ async def __stream__(self) -> AsyncIterator[_T]:
             if sse.data.startswith("[DONE]"):
                 break
 
-            if sse.event is None or sse.event.startswith("response.") or sse.event.startswith('transcript.'):
+            if sse.event is None or sse.event.startswith("response.") or sse.event.startswith("transcript."):
                 data = sse.json()
                 if is_mapping(data) and data.get("error"):
                     message = None
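Both hunks only normalize quote style, but the predicate they touch is the gate that decides which SSE payloads get inspected for an embedded error object. A stand-alone sketch of that check (illustrative, not the SDK's internal API):

from typing import Optional

def should_inspect_payload(event: Optional[str]) -> bool:
    # Unnamed SSE events plus Responses / transcription events carry JSON bodies
    # that may include an "error" object worth surfacing to the caller.
    return event is None or event.startswith("response.") or event.startswith("transcript.")

assert should_inspect_payload(None)
assert should_inspect_payload("response.completed")
assert should_inspect_payload("transcript.text.delta")
assert not should_inspect_payload("thread.message.delta")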

src/openai/_version.py

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 # File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
 
 __title__ = "openai"
-__version__ = "1.79.0"  # x-release-please-version
+__version__ = "1.80.0"  # x-release-please-version

src/openai/helpers/local_audio_player.py

Lines changed: 1 addition & 1 deletion
@@ -65,7 +65,7 @@ async def play(
             if input.dtype == np.int16 and self.dtype == np.float32:
                 audio_content = (input.astype(np.float32) / 32767.0).reshape(-1, self.channels)
             elif input.dtype == np.float32:
-                audio_content = cast('npt.NDArray[np.float32]', input)
+                audio_content = cast("npt.NDArray[np.float32]", input)
             else:
                 raise ValueError(f"Unsupported dtype: {input.dtype}")
         else:
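The line being reformatted sits next to the int16-to-float32 conversion that the player applies before handing samples to the audio device; a small stand-alone sketch of that normalization (assuming NumPy and a single channel) is:

import numpy as np

# Scale signed 16-bit PCM into roughly [-1.0, 1.0] and lay it out as one column
# per channel, mirroring the conversion shown in the hunk above.
pcm = np.array([0, 16384, -32768, 32767], dtype=np.int16)
channels = 1
audio = (pcm.astype(np.float32) / 32767.0).reshape(-1, channels)
print(audio.flatten())  # approximately [0.0, 0.5, -1.0, 1.0]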

src/openai/lib/_parsing/_responses.py

Lines changed: 7 additions & 0 deletions
@@ -103,6 +103,13 @@ def parse_response(
             or output.type == "file_search_call"
             or output.type == "web_search_call"
             or output.type == "reasoning"
+            or output.type == "mcp_call"
+            or output.type == "mcp_approval_request"
+            or output.type == "image_generation_call"
+            or output.type == "code_interpreter_call"
+            or output.type == "local_shell_call"
+            or output.type == "mcp_list_tools"
+            or output.type == 'exec'
         ):
             output_list.append(output)
         elif TYPE_CHECKING:  # type: ignore
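This hunk widens the set of output.type values that parse_response passes through into the parsed output list. A hedged sketch of how calling code might branch on the newly allowed item types (the type strings come from the diff; everything else here is illustrative, not SDK API):

# Illustrative dispatch over the output item types added in this release.
def describe_output_item(item: object) -> str:
    item_type = getattr(item, "type", None)
    if item_type == "mcp_call":
        return "MCP tool call"
    if item_type == "mcp_approval_request":
        return "MCP approval request awaiting a decision"
    if item_type == "image_generation_call":
        return "image generation call"
    if item_type == "code_interpreter_call":
        return "code interpreter call"
    if item_type == "local_shell_call":
        return "local shell call"
    if item_type == "mcp_list_tools":
        return "MCP tool listing"
    return f"other output item: {item_type}"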

src/openai/lib/streaming/responses/_events.py

Lines changed: 36 additions & 0 deletions
@@ -9,6 +9,7 @@
     ParsedResponse,
     ResponseErrorEvent,
     ResponseFailedEvent,
+    ResponseQueuedEvent,
     ResponseCreatedEvent,
     ResponseTextDoneEvent as RawResponseTextDoneEvent,
     ResponseAudioDoneEvent,
@@ -19,22 +20,39 @@
     ResponseInProgressEvent,
     ResponseRefusalDoneEvent,
     ResponseRefusalDeltaEvent,
+    ResponseMcpCallFailedEvent,
+    ResponseReasoningDoneEvent,
     ResponseOutputItemDoneEvent,
+    ResponseReasoningDeltaEvent,
     ResponseContentPartDoneEvent,
     ResponseOutputItemAddedEvent,
     ResponseContentPartAddedEvent,
+    ResponseMcpCallCompletedEvent,
+    ResponseMcpCallInProgressEvent,
+    ResponseMcpListToolsFailedEvent,
     ResponseAudioTranscriptDoneEvent,
     ResponseTextAnnotationDeltaEvent,
     ResponseAudioTranscriptDeltaEvent,
+    ResponseMcpCallArgumentsDoneEvent,
+    ResponseReasoningSummaryDoneEvent,
+    ResponseImageGenCallCompletedEvent,
+    ResponseMcpCallArgumentsDeltaEvent,
+    ResponseMcpListToolsCompletedEvent,
+    ResponseReasoningSummaryDeltaEvent,
+    ResponseImageGenCallGeneratingEvent,
+    ResponseImageGenCallInProgressEvent,
+    ResponseMcpListToolsInProgressEvent,
     ResponseWebSearchCallCompletedEvent,
     ResponseWebSearchCallSearchingEvent,
     ResponseFileSearchCallCompletedEvent,
     ResponseFileSearchCallSearchingEvent,
     ResponseWebSearchCallInProgressEvent,
     ResponseFileSearchCallInProgressEvent,
+    ResponseImageGenCallPartialImageEvent,
     ResponseReasoningSummaryPartDoneEvent,
     ResponseReasoningSummaryTextDoneEvent,
     ResponseFunctionCallArgumentsDoneEvent,
+    ResponseOutputTextAnnotationAddedEvent,
     ResponseReasoningSummaryPartAddedEvent,
     ResponseReasoningSummaryTextDeltaEvent,
     ResponseFunctionCallArgumentsDeltaEvent as RawResponseFunctionCallArgumentsDeltaEvent,
@@ -109,6 +127,24 @@ class ResponseCompletedEvent(RawResponseCompletedEvent, GenericModel, Generic[Te
         ResponseReasoningSummaryPartDoneEvent,
         ResponseReasoningSummaryTextDeltaEvent,
         ResponseReasoningSummaryTextDoneEvent,
+        ResponseImageGenCallCompletedEvent,
+        ResponseImageGenCallInProgressEvent,
+        ResponseImageGenCallGeneratingEvent,
+        ResponseImageGenCallPartialImageEvent,
+        ResponseMcpCallCompletedEvent,
+        ResponseMcpCallArgumentsDeltaEvent,
+        ResponseMcpCallArgumentsDoneEvent,
+        ResponseMcpCallFailedEvent,
+        ResponseMcpCallInProgressEvent,
+        ResponseMcpListToolsCompletedEvent,
+        ResponseMcpListToolsFailedEvent,
+        ResponseMcpListToolsInProgressEvent,
+        ResponseOutputTextAnnotationAddedEvent,
+        ResponseQueuedEvent,
+        ResponseReasoningDeltaEvent,
+        ResponseReasoningSummaryDeltaEvent,
+        ResponseReasoningSummaryDoneEvent,
+        ResponseReasoningDoneEvent,
     ],
     PropertyInfo(discriminator="type"),
 ]
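The new classes are added to the discriminated ResponseStreamEvent union, so they can show up while iterating a streamed response. A hedged usage sketch (model name and prompt are illustrative; assumes OPENAI_API_KEY is set and the account can use these features):

from openai import OpenAI
from openai.types.responses import ResponseMcpCallCompletedEvent, ResponseQueuedEvent

client = OpenAI()

with client.responses.stream(
    model="o4-mini",  # illustrative model name
    input="Summarize the main changes in this release.",
) as stream:
    for event in stream:
        # The union is discriminated on `type`, so isinstance checks work per event class.
        if isinstance(event, ResponseQueuedEvent):
            print("response queued")
        elif isinstance(event, ResponseMcpCallCompletedEvent):
            print("MCP call completed")
    final = stream.get_final_response()
    print(final.output_text)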

src/openai/resources/audio/transcriptions.py

Lines changed: 1 addition & 1 deletion
@@ -449,7 +449,7 @@ async def create(
           extra_headers: Send extra headers
 
           extra_query: Add additional query parameters to the request
-        """
+        """
 
     @overload
     async def create(

src/openai/resources/responses/responses.py

Lines changed: 28 additions & 0 deletions
@@ -77,6 +77,7 @@ def create(
         *,
         input: Union[str, ResponseInputParam],
         model: ResponsesModel,
+        background: Optional[bool] | NotGiven = NOT_GIVEN,
         include: Optional[List[ResponseIncludable]] | NotGiven = NOT_GIVEN,
         instructions: Optional[str] | NotGiven = NOT_GIVEN,
         max_output_tokens: Optional[int] | NotGiven = NOT_GIVEN,
@@ -132,6 +133,9 @@ def create(
               [model guide](https://platform.openai.com/docs/models) to browse and compare
               available models.
 
+          background: Whether to run the model response in the background.
+              [Learn more](https://platform.openai.com/docs/guides/background).
+
           include: Specify additional output data to include in the model response. Currently
               supported values are:
 
@@ -267,6 +271,7 @@ def create(
         input: Union[str, ResponseInputParam],
         model: ResponsesModel,
         stream: Literal[True],
+        background: Optional[bool] | NotGiven = NOT_GIVEN,
         include: Optional[List[ResponseIncludable]] | NotGiven = NOT_GIVEN,
         instructions: Optional[str] | NotGiven = NOT_GIVEN,
         max_output_tokens: Optional[int] | NotGiven = NOT_GIVEN,
@@ -328,6 +333,9 @@ def create(
               [Streaming section below](https://platform.openai.com/docs/api-reference/responses-streaming)
               for more information.
 
+          background: Whether to run the model response in the background.
+              [Learn more](https://platform.openai.com/docs/guides/background).
+
           include: Specify additional output data to include in the model response. Currently
               supported values are:
 
@@ -456,6 +464,7 @@ def create(
         input: Union[str, ResponseInputParam],
         model: ResponsesModel,
         stream: bool,
+        background: Optional[bool] | NotGiven = NOT_GIVEN,
         include: Optional[List[ResponseIncludable]] | NotGiven = NOT_GIVEN,
         instructions: Optional[str] | NotGiven = NOT_GIVEN,
         max_output_tokens: Optional[int] | NotGiven = NOT_GIVEN,
@@ -517,6 +526,9 @@ def create(
               [Streaming section below](https://platform.openai.com/docs/api-reference/responses-streaming)
               for more information.
 
+          background: Whether to run the model response in the background.
+              [Learn more](https://platform.openai.com/docs/guides/background).
+
           include: Specify additional output data to include in the model response. Currently
               supported values are:
 
@@ -644,6 +656,7 @@ def create(
         *,
         input: Union[str, ResponseInputParam],
         model: ResponsesModel,
+        background: Optional[bool] | NotGiven = NOT_GIVEN,
         include: Optional[List[ResponseIncludable]] | NotGiven = NOT_GIVEN,
         instructions: Optional[str] | NotGiven = NOT_GIVEN,
         max_output_tokens: Optional[int] | NotGiven = NOT_GIVEN,
@@ -674,6 +687,7 @@ def create(
                 {
                     "input": input,
                     "model": model,
+                    "background": background,
                     "include": include,
                     "instructions": instructions,
                     "max_output_tokens": max_output_tokens,
@@ -965,6 +979,7 @@ async def create(
         *,
         input: Union[str, ResponseInputParam],
         model: ResponsesModel,
+        background: Optional[bool] | NotGiven = NOT_GIVEN,
         include: Optional[List[ResponseIncludable]] | NotGiven = NOT_GIVEN,
         instructions: Optional[str] | NotGiven = NOT_GIVEN,
         max_output_tokens: Optional[int] | NotGiven = NOT_GIVEN,
@@ -1020,6 +1035,9 @@ async def create(
               [model guide](https://platform.openai.com/docs/models) to browse and compare
               available models.
 
+          background: Whether to run the model response in the background.
+              [Learn more](https://platform.openai.com/docs/guides/background).
+
           include: Specify additional output data to include in the model response. Currently
               supported values are:
 
@@ -1155,6 +1173,7 @@ async def create(
         input: Union[str, ResponseInputParam],
         model: ResponsesModel,
         stream: Literal[True],
+        background: Optional[bool] | NotGiven = NOT_GIVEN,
         include: Optional[List[ResponseIncludable]] | NotGiven = NOT_GIVEN,
         instructions: Optional[str] | NotGiven = NOT_GIVEN,
         max_output_tokens: Optional[int] | NotGiven = NOT_GIVEN,
@@ -1216,6 +1235,9 @@ async def create(
               [Streaming section below](https://platform.openai.com/docs/api-reference/responses-streaming)
               for more information.
 
+          background: Whether to run the model response in the background.
+              [Learn more](https://platform.openai.com/docs/guides/background).
+
           include: Specify additional output data to include in the model response. Currently
               supported values are:
 
@@ -1344,6 +1366,7 @@ async def create(
         input: Union[str, ResponseInputParam],
         model: ResponsesModel,
         stream: bool,
+        background: Optional[bool] | NotGiven = NOT_GIVEN,
         include: Optional[List[ResponseIncludable]] | NotGiven = NOT_GIVEN,
         instructions: Optional[str] | NotGiven = NOT_GIVEN,
         max_output_tokens: Optional[int] | NotGiven = NOT_GIVEN,
@@ -1405,6 +1428,9 @@ async def create(
               [Streaming section below](https://platform.openai.com/docs/api-reference/responses-streaming)
               for more information.
 
+          background: Whether to run the model response in the background.
+              [Learn more](https://platform.openai.com/docs/guides/background).
+
           include: Specify additional output data to include in the model response. Currently
               supported values are:
 
@@ -1532,6 +1558,7 @@ async def create(
         *,
         input: Union[str, ResponseInputParam],
         model: ResponsesModel,
+        background: Optional[bool] | NotGiven = NOT_GIVEN,
         include: Optional[List[ResponseIncludable]] | NotGiven = NOT_GIVEN,
         instructions: Optional[str] | NotGiven = NOT_GIVEN,
         max_output_tokens: Optional[int] | NotGiven = NOT_GIVEN,
@@ -1562,6 +1589,7 @@ async def create(
                 {
                     "input": input,
                     "model": model,
+                    "background": background,
                     "include": include,
                     "instructions": instructions,
                     "max_output_tokens": max_output_tokens,
