Commit a7b0627: Create functions directory in the doc folder and add openAIChat documentation.

doc/functions/openAIChat.md (289 additions)

# openAIChat

Connect to OpenAI Chat Completion API

# Creation

## Syntax

`chat = openAIChat`

`chat = openAIChat(systemPrompt)`

`chat = openAIChat(___,APIKey=key)`

`chat = openAIChat(___,Name=Value)`

## Description

Connect to the OpenAI™ Chat Completion API to generate text using large language models developed by OpenAI.

To connect to the OpenAI API, you need a valid API key. For information on how to obtain an API key, see [https://platform.openai.com/docs/quickstart](https://platform.openai.com/docs/quickstart).

`chat = openAIChat` creates an `openAIChat` object. Connecting to the OpenAI API requires a valid API key. Either set the environment variable `OPENAI_API_KEY` or specify the `APIKey` name-value argument.

`chat = openAIChat(systemPrompt)` creates an `openAIChat` object with the specified system prompt.

`chat = openAIChat(___,APIKey=key)` uses the specified API key.

`chat = openAIChat(___,Name=Value)` specifies additional options using one or more name-value arguments.

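For example, either of the following creates a connected chat object. These are minimal sketches; the key shown is a placeholder, not a real credential.

```matlab
% Option 1: rely on the OPENAI_API_KEY environment variable.
chat = openAIChat("You are a helpful assistant.");

% Option 2: pass the key explicitly. "sk-..." is a placeholder key.
chat = openAIChat("You are a helpful assistant.",APIKey="sk-...");
```
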
## Input Arguments

### `systemPrompt` - System prompt

character vector | string scalar

The system prompt is a natural-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communication style, language, and so on.

**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."

## Name-Value Arguments

### `APIKey` - OpenAI API key

character vector | string scalar

OpenAI API key to access OpenAI APIs such as ChatGPT.

Instead of using the `APIKey` name-value argument, you can also set the environment variable `OPENAI_API_KEY`. For more information, see [https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md).

### `ModelName` - Model name

`"gpt-4o-mini"` (default) | `"gpt-4"` | `"gpt-3.5-turbo"` | `"dall-e-2"` | ...

Name of the OpenAI model to use for text or image generation.

For a list of currently supported models, see [https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md).

### `Temperature` - Temperature

`1` (default) | numeric scalar between `0` and `2`

Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output. Setting the temperature to `0` results in no randomness.

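For example, the following sketch (assuming a valid API key in the `OPENAI_API_KEY` environment variable) configures a chat for more reproducible output:

```matlab
% Low temperature for more reproducible, less random output.
chat = openAIChat("You are a terse assistant.",Temperature=0);
```
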
### `TopP` - Top probability mass

`1` (default) | numeric scalar between `0` and `1`

Top probability mass for controlling the diversity of the generated output. Higher top probability mass corresponds to higher diversity.

### `Tools` - OpenAI functions to use during output generation

`openAIFunction` object | array of `openAIFunction` objects

Custom functions used by the model to process its input and output.

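For example, the following minimal sketch follows the pattern of the repository's `openAIFunction` examples; the function name and parameter here are illustrative, not part of the library.

```matlab
% Describe a callable function to the model (illustrative names).
f = openAIFunction("getCurrentWeather","Get the current weather for a city");
f = addParameter(f,"city",type="string",description="Name of the city");
chat = openAIChat("You are a weather assistant.",Tools=f);
```
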
### `StopSequences` - Stop sequences

`""` (default) | string array with between `1` and `4` elements

Sequences that stop generation of tokens.

**Example:** `["The end.","And that is all she wrote."]`

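For instance, the following sketch stops generation as soon as the model produces either sequence (assumes a valid API key in `OPENAI_API_KEY`):

```matlab
% Generation halts when either stop sequence appears in the output.
chat = openAIChat(StopSequences=["The end.","And that is all she wrote."]);
```
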
### `PresencePenalty` - Presence penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for using a token that has already been used at least once in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The presence penalty is independent of the number of instances of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name-value argument.

### `FrequencyPenalty` - Frequency penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The frequency penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of times that token is generated, use the `PresencePenalty` name-value argument.

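For example, the following sketch combines both penalties; the particular values are illustrative:

```matlab
% PresencePenalty applies once per distinct repeated token;
% FrequencyPenalty grows with every additional repetition.
chat = openAIChat("You are a helpful assistant.", ...
    PresencePenalty=0.5,FrequencyPenalty=0.8);
```
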
### `TimeOut` - Connection timeout in seconds

`10` (default) | nonnegative numeric scalar

If the OpenAI server does not respond within the timeout, then the function throws an error.

### `StreamFun` - Custom streaming function

function handle

Specify a custom streaming function to process the generated output token by token as it is being generated, rather than having to wait for the end of the generation. For example, you can use this function to print the output as it is generated.

**Example:** `@(token) fprintf("%s",token)`

### `ResponseFormat` - Response format

`"text"` (default) | `"json"`

Format of generated output.

If you set the response format to `"text"`, then the generated output is a string.

If you set the response format to `"json"`, then the generated output is a JSON (*.json) file. This option is not supported for these models:

- `ModelName="gpt-4"`
- `ModelName="gpt-4-0613"`

To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md).

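For example, the following sketch requests JSON output; the schema described in the prompt is illustrative:

```matlab
% Describe the desired JSON fields in the system prompt.
chat = openAIChat("Reply with JSON of the form {""sentiment"": ""positive"" or ""negative""}", ...
    ResponseFormat="json");
txt = generate(chat,"I love this restaurant!")
```
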
# Properties

### `SystemPrompt` - System prompt

character vector | string scalar

This property is read-only.

The system prompt is a natural-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communication style, language, and so on.

**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."

### `ModelName` - Model name

`"gpt-4o-mini"` (default) | `"gpt-4"` | `"gpt-3.5-turbo"` | `"dall-e-2"` | ...

Name of the OpenAI model to use for text or image generation.

For a list of currently supported models, see [https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md).

### `Temperature` - Temperature

`1` (default) | numeric scalar between `0` and `2`

Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output. Setting the temperature to `0` results in no randomness.

### `TopP` - Top probability mass

`1` (default) | numeric scalar between `0` and `1`

Top probability mass for controlling the diversity of the generated output. Higher top probability mass corresponds to higher diversity.

### `StopSequences` - Stop sequences

`""` (default) | string array with between `1` and `4` elements

Sequences that stop generation of tokens.

**Example:** `["The end.","And that is all she wrote."]`

### `PresencePenalty` - Presence penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for using a token that has already been used at least once in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The presence penalty is independent of the number of instances of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name-value argument.

### `FrequencyPenalty` - Frequency penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The frequency penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of times that token is generated, use the `PresencePenalty` name-value argument.

### `TimeOut` - Connection timeout in seconds

`10` (default) | nonnegative numeric scalar

This property is read-only.

If the OpenAI server does not respond within the timeout, then the function throws an error.

### `ResponseFormat` - Response format

`"text"` (default) | `"json"`

This property is read-only.

Format of generated output.

If the response format is `"text"`, then the generated output is a string.

If the response format is `"json"`, then the generated output is a JSON (*.json) file. This option is not supported for these models:

- `ModelName="gpt-4"`
- `ModelName="gpt-4-0613"`

To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md).

### `FunctionNames` - Names of OpenAI functions to use during output generation

string array

This property is read-only.

Names of the custom functions specified in the `Tools` name-value argument.

# Object Functions

`generate` - Generate text

# Examples
## Create OpenAI Chat
```matlab
modelName = "gpt-3.5-turbo";
chat = openAIChat("You are a helpful assistant awaiting further instructions.",ModelName=modelName)
```

## Generate and Stream Text
```matlab
sf = @(x) fprintf("%s",x);
chat = openAIChat(StreamFun=sf);
generate(chat,"Why is a raven like a writing desk?")
```

# See Also
- [Create Simple Chat Bot](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/CreateSimpleChatBot.md)
- [Process Generated Text in Real Time Using ChatGPT in Streaming Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md)
- [Analyze Scientific Papers Using Function Calls](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeScientificPapersUsingFunctionCalls.md)
- [Analyze Sentiment in Text Using ChatGPT in JSON Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md)

Copyright 2024 The MathWorks, Inc.
