Commit b272344 (parent 8426bb7)

Technical feedback and cosmetic changes for openAIChat reference page

1 file changed: doc/functions/openAIChat.md (90 additions, 117 deletions)
@@ -17,6 +17,8 @@ Connect to OpenAI™ Chat Completion API

`chat = openAIChat(___,Name=Value)`

---

## Description

Connect to the OpenAI Chat Completion API to generate text using large language models developed by OpenAI.
@@ -36,19 +38,32 @@ To connect to the OpenAI API, you need a valid API key. For information on how t

`chat = openAIChat(___,Name=Value)` specifies additional options using one or more name\-value arguments.

`chat = openAIChat(___,PropertyName=PropertyValue)` specifies properties that are settable at construction using one or more name\-value arguments.

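The two syntaxes can be combined in a single call. A minimal sketch using only arguments and properties documented below (it assumes a valid OpenAI API key is configured in your environment):

```matlab
% Create a chat object, combining the systemPrompt input argument with
% properties that are settable at construction.
chat = openAIChat("You are a concise assistant.", ...
    ModelName="gpt-4o-mini", ...   % property settable at construction
    Temperature=0, ...             % fully deterministic output
    TimeOut=30);                   % wait up to 30 seconds for the server
```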
---

## Input Arguments

---

### `systemPrompt` – System prompt

character vector | string scalar

Specify the system prompt and set the `SystemPrompt` property. The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communication style, language, etc.

**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."

---

## Name\-Value Arguments

---

### `APIKey` – OpenAI API key

character vector | string scalar

@@ -58,7 +73,25 @@ OpenAI API key to access OpenAI APIs such as ChatGPT.

Instead of using the `APIKey` name\-value argument, you can also set the environment variable OPENAI\_API\_KEY. For more information, see [OpenAI API](../OpenAI.md).

---

### `Tools` – OpenAI functions to use during output generation

`openAIFunction` object | array of `openAIFunction` objects

Custom functions used by the model to collect or generate additional data.

For an example, see [Analyze Scientific Papers Using ChatGPT Function Calls](../../examples/AnalyzeScientificPapersUsingFunctionCalls.md).

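A short sketch of passing a tool to the constructor. The `openAIFunction` constructor and `addParameter` call shown here are assumptions based on the function-calls example linked above; see the `openAIFunction` reference page for the exact interface, and note that `getCurrentWeather` is a hypothetical function you would have to implement and dispatch yourself:

```matlab
% Describe a hypothetical weather-lookup function the model may request.
f = openAIFunction("getCurrentWeather", ...
    "Get the current weather for a given city.");
f = addParameter(f,"city",type="string", ...
    description="Name of the city.");

% Make the function available to the model during generation.
chat = openAIChat("You are a weather assistant.",Tools=f);
```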
---

## Properties Settable at Construction

Optionally specify these properties at construction using name-value arguments. Specify `PropertyName1=PropertyValue1,...,PropertyNameN=PropertyValueN`, where `PropertyName` is the property name and `PropertyValue` is the corresponding value.

---

### `ModelName` – Model name

`"gpt-4o-mini"` (default) | `"gpt-4"` | `"gpt-3.5-turbo"` | `"dall-e-2"` | ...

@@ -68,28 +101,27 @@ Name of the OpenAI model to use for text or image generation.

For a list of currently supported models, see [OpenAI API](../OpenAI.md).

---

### `Temperature` – Temperature

`1` (default) | numeric scalar between `0` and `2`

Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output. Setting the temperature to `0` results in fully deterministic output.

---

### `TopP` – Top probability mass

`1` (default) | numeric scalar between `0` and `1`

Top probability mass for controlling the diversity of the generated output using top-p sampling. Higher top probability mass corresponds to higher diversity.

---

### `StopSequences` – Stop sequences

`""` (default) | string array with between `1` and `4` elements

@@ -99,7 +131,9 @@ Sequences that stop generation of tokens.

Sequences that stop generation of tokens.

**Example:** `["The end.","And that is all she wrote."]`

---

### `PresencePenalty` – Presence penalty

`0` (default) | numeric scalar between `-2` and `2`

@@ -109,7 +143,9 @@ Penalty value for using a token that has already been used at least once in the

Penalty value for using a token that has already been used at least once in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The presence penalty is independent of the number of occurrences of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name\-value argument.

---

### `FrequencyPenalty` – Frequency penalty

`0` (default) | numeric scalar between `-2` and `2`

@@ -119,150 +155,83 @@ Penalty value for repeatedly using the same token in the generated output. Highe

Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The frequency penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances that token is generated, use the `PresencePenalty` name\-value argument.

---

### `TimeOut` – Connection timeout in seconds

`10` (default) | nonnegative numeric scalar

After construction, this property is read\-only.

If the OpenAI server does not respond within the timeout, then the function throws an error.

---

### `StreamFun` – Custom streaming function

function handle

Specify a custom streaming function to process the generated output token by token as it is being generated, rather than having to wait for the end of the generation. For example, you can use this function to print the output as it is generated.

For an example, see [Process Generated Text in Real Time by Using ChatGPT™ in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md).

**Example:** `@(token) fprintf("%s",token)`
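Because the streaming function is an arbitrary function handle, it can do more than print. A sketch that logs each token to a file as it arrives (assumes write permission in the current folder; `stream.log` is an illustrative file name):

```matlab
% Open a log file and stream each generated token into it.
fid = fopen("stream.log","w");
logToken = @(token) fprintf(fid,"%s",token);

chat = openAIChat(StreamFun=logToken);
generate(chat,"Tell me a short story.");
fclose(fid);
```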

---

### `ResponseFormat` – Response format

`"text"` (default) | `"json"`

After construction, this property is read\-only.

Format of generated output.

If the response format is `"text"`, then the generated output is a string.

If the response format is `"json"`, then the generated output is a string containing JSON\-encoded data.

To configure the format of the generated JSON output, describe the format using natural language and provide it to the model either in the system prompt or as a user message. The prompt or message describing the format must contain the word `"json"` or `"JSON"`.

For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md).

The JSON response format is not supported for these models:

- `ModelName="gpt-4"`
- `ModelName="gpt-4-0613"`
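Putting the JSON rules together, a minimal sketch (assumes a valid API key and a supported model; the field name `sentiment` is only illustrative, not part of the API):

```matlab
% Request JSON output; note the system prompt mentions "JSON".
chat = openAIChat("Reply in JSON with a single field ""sentiment"".", ...
    ResponseFormat="json");

% The generated output is a string containing JSON-encoded data,
% which jsondecode converts into a MATLAB struct.
txt = generate(chat,"I love sunny days.");
result = jsondecode(txt);
```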


## Other Properties

---

### `SystemPrompt` – System prompt

character vector | string scalar

This property is read\-only.

The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communication style, language, etc.

Specify the `SystemPrompt` property at construction using the `systemPrompt` input argument.

**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."

---

### `FunctionNames` – Names of OpenAI functions to use during output generation

string array

@@ -272,26 +241,30 @@ This property is read\-only.

Names of the custom functions specified in the `Tools` name\-value argument.

# Object Functions

`generate` – Generate text

# Examples

## Create OpenAI Chat

```matlab
modelName = "gpt-4o-mini";
chat = openAIChat("You are a helpful assistant awaiting further instructions.",ModelName=modelName);
```

---

## Generate and Stream Text

```matlab
sf = @(x) fprintf("%s",x);
chat = openAIChat(StreamFun=sf);
generate(chat,"Why is a raven like a writing desk?");
```

# See Also

- [Create Simple Chat Bot](../../examples/CreateSimpleChatBot.md)
- [Process Generated Text in Real Time Using ChatGPT in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md)
- [Analyze Scientific Papers Using Function Calls](../../examples/AnalyzeScientificPapersUsingFunctionCalls.md)
- [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md)

*Copyright 2024 The MathWorks, Inc.*
