Connect to OpenAI™ Chat Completion API

`chat = openAIChat(___,Name=Value)`

---

## Description

Connect to the OpenAI Chat Completion API to generate text using large language models developed by OpenAI.

To connect to the OpenAI API, you need a valid API key. For information on how to obtain an API key, see [OpenAI API](../OpenAI.md).

`chat = openAIChat(___,Name=Value)` specifies additional options using one or more name\-value arguments.

`chat = openAIChat(___,PropertyName=PropertyValue)` specifies properties that are settable at construction using one or more name\-value arguments.

---

## Input Arguments

---

### `systemPrompt` – System prompt

character vector | string scalar

Specify the system prompt and set the `SystemPrompt` property. The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communication style, language, and so on.

**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."

---

## Name\-Value Arguments

---

### `APIKey` – OpenAI API key

character vector | string scalar

OpenAI API key to access OpenAI APIs such as ChatGPT.

Instead of using the `APIKey` name\-value argument, you can also set the environment variable OPENAI\_API\_KEY. For more information, see [OpenAI API](../OpenAI.md).
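If you do store the key in an environment variable, you can still retrieve it and pass it explicitly. A minimal sketch (assuming the variable has been set outside MATLAB):

```matlab
% Read the key from the environment and pass it to the constructor.
% Assumes OPENAI_API_KEY has been set beforehand.
key = getenv("OPENAI_API_KEY");
chat = openAIChat("You are a helpful assistant.",APIKey=key);
```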

---

### `Tools` – OpenAI functions to use during output generation

`openAIFunction` object | array of `openAIFunction` objects

Custom functions used by the model to collect or generate additional data.

For an example, see [Analyze Scientific Papers Using ChatGPT Function Calls](../../examples/AnalyzeScientificPapersUsingFunctionCalls.md).
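As a sketch of how a custom function is wired in at construction (the function name and parameter here are hypothetical):

```matlab
% Describe a hypothetical weather lookup so the model can request it by name.
f = openAIFunction("getCurrentWeather","Get the current weather for a city");
f = addParameter(f,"city",type="string",description="Name of the city");

% Register the function description at construction via the Tools argument.
chat = openAIChat("You are a weather assistant.",Tools=f);
```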

---

## Properties Settable at Construction

Optionally specify these properties at construction using name\-value arguments. Specify `PropertyName1=PropertyValue1,...,PropertyNameN=PropertyValueN`, where `PropertyName` is the property name and `PropertyValue` is the corresponding value.

---

### `ModelName` – Model name

Name of the OpenAI model to use for text or image generation.

For a list of currently supported models, see [OpenAI API](../OpenAI.md).

---

### `Temperature` – Temperature

`1` (default) | numeric scalar between `0` and `2`

Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output. Setting the temperature to `0` results in fully deterministic output.

---

### `TopP` – Top probability mass

`1` (default) | numeric scalar between `0` and `1`

Top probability mass for controlling the diversity of the generated output using top-p sampling. Higher top probability mass corresponds to higher diversity.
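Both sampling controls can be set at construction. A sketch (typically you adjust one of the two, not both):

```matlab
% Near-deterministic output via low temperature.
chat = openAIChat("You are a concise assistant.",Temperature=0);

% Or restrict sampling to the top 20% of the probability mass.
chat = openAIChat("You are a concise assistant.",TopP=0.2);
```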

---

### `StopSequences` – Stop sequences

`""` (default) | string array with between `1` and `4` elements

Sequences that stop generation of tokens.

**Example:** `["The end.","And that is all she wrote."]`
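A sketch of setting the stop sequences at construction:

```matlab
% Token generation halts as soon as the model emits either sequence.
chat = openAIChat(StopSequences=["The end.","And that is all she wrote."]);
```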

---

### `PresencePenalty` – Presence penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for using a token that has already been used at least once in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The presence penalty is independent of the number of incidents of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name\-value argument.

---

### `FrequencyPenalty` – Frequency penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The frequency penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances that token is generated, use the `PresencePenalty` name\-value argument.
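The two penalties can be combined. A sketch (the values are arbitrary):

```matlab
% Constant penalty once a token has appeared (PresencePenalty), plus a
% penalty that grows with each additional repetition (FrequencyPenalty).
chat = openAIChat(PresencePenalty=0.5,FrequencyPenalty=0.8);
```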

---

### `TimeOut` – Connection timeout in seconds

`10` (default) | nonnegative numeric scalar

After construction, this property is read\-only.

If the OpenAI server does not respond within the timeout, then the function throws an error.
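For slow connections or long generations, you can raise the limit at construction. A sketch:

```matlab
% Allow the server up to 60 seconds to respond before erroring.
chat = openAIChat(TimeOut=60);
```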

---

### `StreamFun` – Custom streaming function

function handle

Specify a custom streaming function to process the generated output token by token as it is being generated, rather than having to wait for the end of the generation. For example, you can use this function to print the output as it is generated.

For an example, see [Process Generated Text in Real Time by Using ChatGPT™ in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md).

**Example:** `@(token) fprintf("%s",token)`
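Beyond printing, the streaming function can route tokens anywhere. A sketch that appends them to a log file (the file name is arbitrary):

```matlab
% Write each token to a log file as it arrives.
fid = fopen("stream_log.txt","w");
sf = @(token) fprintf(fid,"%s",token);
chat = openAIChat(StreamFun=sf);
% ... call generate(chat,...) here, then close the file:
fclose(fid);
```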

---

### `ResponseFormat` – Response format

`"text"` (default) | `"json"`

After construction, this property is read\-only.

Format of generated output.

If the response format is `"text"`, then the generated output is a string.

If the response format is `"json"`, then the generated output is a string containing JSON\-encoded data.

To configure the format of the generated JSON, describe the format using natural language and provide it to the model either in the system prompt or as a user message. The prompt or message describing the format must contain the word `"json"` or `"JSON"`.

For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md).

The JSON response format is not supported for these models:

- `ModelName="gpt-4"`
- `ModelName="gpt-4-0613"`
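A sketch of JSON mode; note that the prompt itself must mention JSON, and the field names here are arbitrary:

```matlab
% The system prompt both requests JSON and describes the expected fields.
chat = openAIChat("Reply in JSON with fields ""sentiment"" and ""confidence"".", ...
    ResponseFormat="json");
txt = generate(chat,"I love sunny days.");
data = jsondecode(txt);   % decode the JSON-encoded string into a struct
```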

## Other Properties

---

### `SystemPrompt` – System prompt

character vector | string scalar

This property is read\-only.

The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communication style, language, and so on.

Specify the `SystemPrompt` property at construction using the `systemPrompt` input argument.

**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."

---

### `FunctionNames` – Names of OpenAI functions to use during output generation

string array

This property is read\-only.

Names of the custom functions specified in the `Tools` name\-value argument.

# Object Functions

`generate` – Generate text

# Examples

## Create OpenAI Chat
```matlab
modelName = "gpt-4o-mini";
chat = openAIChat("You are a helpful assistant awaiting further instructions.",ModelName=modelName);
```

---

## Generate and Stream Text
```matlab
sf = @(x) fprintf("%s",x);
chat = openAIChat(StreamFun=sf);
generate(chat,"Why is a raven like a writing desk?");
```