fix(broken-link): changed link from beta.openai.com to platform.opena… #1834

Open: wants to merge 2 commits into main
2 changes: 1 addition & 1 deletion articles/text_comparison_examples.md
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
# Text comparison examples

The [OpenAI API embeddings endpoint](https://beta.openai.com/docs/guides/embeddings) can be used to measure relatedness or similarity between pieces of text.
The [OpenAI API embeddings endpoint](https://platform.openai.com/docs/guides/embeddings) can be used to measure relatedness or similarity between pieces of text.

By leveraging GPT-3's understanding of text, these embeddings [achieved state-of-the-art results](https://arxiv.org/abs/2201.10005) on benchmarks in unsupervised learning and transfer learning settings.
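Relatedness between two embedding vectors is typically scored with cosine similarity. A minimal sketch in plain Python, with no API call; the toy 3-dimensional vectors below stand in for real embedding output, which has hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings of related vs. unrelated text.
v_cat = [0.1, 0.2, 0.3]
v_kitten = [0.12, 0.19, 0.29]
v_car = [-0.3, 0.1, -0.2]

# Related texts score higher than unrelated ones.
print(cosine_similarity(v_cat, v_kitten) > cosine_similarity(v_cat, v_car))  # True
```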

2 changes: 1 addition & 1 deletion examples/Embedding_long_inputs.ipynb
@@ -9,7 +9,7 @@
"\n",
"OpenAI's embedding models cannot embed text that exceeds a maximum length. The maximum length varies by model, and is measured by _tokens_, not string length. If you are unfamiliar with tokenization, check out [How to count tokens with tiktoken](How_to_count_tokens_with_tiktoken.ipynb).\n",
"\n",
"This notebook shows how to handle texts that are longer than a model's maximum context length. We'll demonstrate using embeddings from `text-embedding-3-small`, but the same ideas can be applied to other models and tasks. To learn more about embeddings, check out the [OpenAI Embeddings Guide](https://beta.openai.com/docs/guides/embeddings).\n"
"This notebook shows how to handle texts that are longer than a model's maximum context length. We'll demonstrate using embeddings from `text-embedding-3-small`, but the same ideas can be applied to other models and tasks. To learn more about embeddings, check out the [OpenAI Embeddings Guide](https://platform.openai.com/docs/guides/embeddings).\n"
]
},
{
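The chunking approach the notebook describes can be sketched without calling the API: split the token sequence into pieces no longer than the model's limit and embed each piece separately. The token IDs and chunk length below are illustrative; in the notebook the tokens come from tiktoken and the limit is the model's maximum context length.

```python
def chunked_tokens(tokens, chunk_length):
    """Yield successive chunks of at most chunk_length tokens."""
    for i in range(0, len(tokens), chunk_length):
        yield tokens[i:i + chunk_length]

# Stand-in token IDs; a real token list would come from a tokenizer.
tokens = list(range(10))
print(list(chunked_tokens(tokens, 4)))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```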
2 changes: 1 addition & 1 deletion examples/How_to_count_tokens_with_tiktoken.ipynb
@@ -57,7 +57,7 @@
"\n",
"## How strings are typically tokenized\n",
"\n",
"In English, tokens commonly range in length from one character to one word (e.g., `\"t\"` or `\" great\"`), though in some languages tokens can be shorter than one character or longer than one word. Spaces are usually grouped with the starts of words (e.g., `\" is\"` instead of `\"is \"` or `\" \"`+`\"is\"`). You can quickly check how a string is tokenized at the [OpenAI Tokenizer](https://beta.openai.com/tokenizer), or the third-party [Tiktokenizer](https://tiktokenizer.vercel.app/) webapp."
"In English, tokens commonly range in length from one character to one word (e.g., `\"t\"` or `\" great\"`), though in some languages tokens can be shorter than one character or longer than one word. Spaces are usually grouped with the starts of words (e.g., `\" is\"` instead of `\"is \"` or `\" \"`+`\"is\"`). You can quickly check how a string is tokenized at the [OpenAI Tokenizer](https://platform.openai.com/tokenizer), or the third-party [Tiktokenizer](https://tiktokenizer.vercel.app/) webapp."
]
},
{
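The space-grouping behaviour described above can be mimicked with a toy regex "tokenizer". This is an illustration only; real BPE tokenizers such as tiktoken also split words into subword pieces.

```python
import re

def toy_tokenize(text):
    """Toy splitter mimicking one visible property of BPE tokenizers:
    a space attaches to the start of the word that follows it."""
    return re.findall(r" ?\S+", text)

print(toy_tokenize("tiktoken is great"))  # ['tiktoken', ' is', ' great']
```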
2 changes: 1 addition & 1 deletion examples/How_to_stream_completions.ipynb
@@ -17,7 +17,7 @@
"\n",
"## Downsides\n",
"\n",
"Note that using `stream=True` in a production application makes it more difficult to moderate the content of the completions, as partial completions may be more difficult to evaluate. This may have implications for [approved usage](https://beta.openai.com/docs/usage-guidelines).\n",
"Note that using `stream=True` in a production application makes it more difficult to moderate the content of the completions, as partial completions may be more difficult to evaluate. This may have implications for [approved usage](https://platform.openai.com/docs/usage-guidelines).\n",
"\n",
"## Example code\n",
"\n",
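The moderation difficulty above stems from the completion arriving as partial deltas. A minimal sketch of reassembling a stream; the list of strings stands in for the chunk objects a real `stream=True` response yields:

```python
def accumulate_stream(deltas):
    """Join streamed text deltas into the full completion, the way a
    consumer of a stream=True response reassembles partial output."""
    collected = []
    for delta in deltas:
        # With a real stream you would extract the text delta from each
        # chunk object before collecting it.
        collected.append(delta)
    return "".join(collected)

print(accumulate_stream(["Hel", "lo", ", world"]))  # Hello, world
```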
@@ -86,7 +86,7 @@
"- `response_format` (str): The format in which the generated images are returned. Must be one of \"url\" or \"b64_json\". Defaults to \"url\".\n",
"- `size` (str): The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models. Defaults to \"1024x1024\".\n",
"- `style`(str | null): The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for dall-e-3.\n",
"- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://beta.openai.com/docs/usage-policies/end-user-ids)"
"- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://platform.openai.com/docs/usage-policies/end-user-ids)"
]
},
{
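The parameter constraints documented above can be sanity-checked locally before making a request. A hedged sketch; the helper name is an invention here, and the validation rules simply restate the documented size and format constraints:

```python
# Allowed sizes per the documented constraints for each model.
VALID_SIZES = {
    "dall-e-2": {"256x256", "512x512", "1024x1024"},
    "dall-e-3": {"1024x1024", "1792x1024", "1024x1792"},
}

def validate_image_request(model, size, response_format="url"):
    """Check image request fields against the documented constraints."""
    assert size in VALID_SIZES[model], f"bad size for {model}: {size}"
    assert response_format in ("url", "b64_json")
    return {"model": model, "size": size, "response_format": response_format}

print(validate_image_request("dall-e-3", "1792x1024"))
```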
@@ -166,7 +166,7 @@
"- `n` (int): The number of images to generate. Must be between 1 and 10. Defaults to 1.\n",
"- `size` (str): The size of the generated images. Must be one of \"256x256\", \"512x512\", or \"1024x1024\". Smaller images are faster. Defaults to \"1024x1024\".\n",
"- `response_format` (str): The format in which the generated images are returned. Must be one of \"url\" or \"b64_json\". Defaults to \"url\".\n",
"- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://beta.openai.com/docs/usage-policies/end-user-ids)\n"
"- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://platform.openai.com/docs/usage-policies/end-user-ids)\n"
]
},
{
@@ -248,7 +248,7 @@
"- `n` (int): The number of images to generate. Must be between 1 and 10. Defaults to 1.\n",
"- `size` (str): The size of the generated images. Must be one of \"256x256\", \"512x512\", or \"1024x1024\". Smaller images are faster. Defaults to \"1024x1024\".\n",
"- `response_format` (str): The format in which the generated images are returned. Must be one of \"url\" or \"b64_json\". Defaults to \"url\".\n",
"- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://beta.openai.com/docs/usage-policies/end-user-ids)\n"
"- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://platform.openai.com/docs/usage-policies/end-user-ids)\n"
]
},
{
8 changes: 4 additions & 4 deletions examples/fine-tuned_qa/olympics-2-create-qa.ipynb
@@ -12,7 +12,7 @@
"metadata": {},
"source": [
"# 2. Creating a synthetic Q&A dataset\n",
"We use [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta), a model specialized in following instructions, to create questions based on the given context. Then we also use [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta) to answer those questions, given the same context. \n",
"We use [`davinci-instruct-beta-v3`](https://platform.openai.com/docs/engines/instruct-series-beta), a model specialized in following instructions, to create questions based on the given context. Then we also use [`davinci-instruct-beta-v3`](https://platform.openai.com/docs/engines/instruct-series-beta) to answer those questions, given the same context. \n",
"\n",
"This is expensive, and will also take a long time, as we call the davinci engine for each section. You can simply download the final dataset instead.\n",
"\n",
@@ -306,7 +306,7 @@
"metadata": {},
"source": [
"## 2.5 Search file (DEPRECATED)\n",
"We create a search file ([API reference](https://beta.openai.com/docs/api-reference/files/list)), which can be used to retrieve the relevant context when a question is asked.\n",
"We create a search file ([API reference](https://platform.openai.com/docs/api-reference/files/list)), which can be used to retrieve the relevant context when a question is asked.\n",
"\n",
"<span style=\"color:orange; font-weight:bold\">DEPRECATED: The /search endpoint is deprecated in favour of using embeddings. Embeddings are cheaper, faster and can support a better search experience. See <a href=\"https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb\">Question Answering Guide</a> for a search implementation using the embeddings</span>\n"
]
@@ -333,7 +333,7 @@
"source": [
"## 2.6 Answer questions based on the context provided\n",
"\n",
"We will use a simple implementation of the answers endpoint. This works by using the [/search endpoint](https://beta.openai.com/docs/api-reference/searches), which searches over an indexed file to obtain the relevant sections that can be included in the context, followed by a question answering prompt given a specified model."
"We will use a simple implementation of the answers endpoint. This works by using the [/search endpoint](https://platform.openai.com/docs/api-reference/searches), which searches over an indexed file to obtain the relevant sections that can be included in the context, followed by a question answering prompt given a specified model."
]
},
{
@@ -393,7 +393,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"After we fine-tune the model for Q&A we'll be able to use it instead of [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta), to obtain better answers when the question can't be answered based on the context. We see a downside of [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta), which always attempts to answer the question, regardless of whether the relevant context is present. (Note the second question is asking about a future event, set in 2024.)"
"After we fine-tune the model for Q&A we'll be able to use it instead of [`davinci-instruct-beta-v3`](https://platform.openai.com/docs/engines/instruct-series-beta), to obtain better answers when the question can't be answered based on the context. We see a downside of [`davinci-instruct-beta-v3`](https://platform.openai.com/docs/engines/instruct-series-beta), which always attempts to answer the question, regardless of whether the relevant context is present. (Note the second question is asking about a future event, set in 2024.)"
]
},
{
2 changes: 1 addition & 1 deletion examples/fine-tuned_qa/olympics-3-train-qa.ipynb
@@ -593,7 +593,7 @@
"metadata": {},
"source": [
"## 3.4 Answering the question based on a knowledge base\n",
"Finally, we can use logic similar to the [/answers](https://beta.openai.com/docs/api-reference/answers) endpoint, where we first search for the relevant context, and then ask a Q&A model to answer the question given that context. If you'd like to see the implementation details, check out the [`answers_with_ft.py`](answers_with_ft.py) file."
"Finally, we can use logic similar to the [/answers](https://platform.openai.com/docs/api-reference/answers) endpoint, where we first search for the relevant context, and then ask a Q&A model to answer the question given that context. If you'd like to see the implementation details, check out the [`answers_with_ft.py`](answers_with_ft.py) file."
]
},
{
@@ -41,7 +41,7 @@
"\n",
"1. PolarDB-PG cloud server instance.\n",
"2. The 'psycopg2' library to interact with the vector database. Any other postgresql client library is ok.\n",
"3. An [OpenAI API key](https://beta.openai.com/account/api-keys)."
"3. An [OpenAI API key](https://platform.openai.com/account/api-keys)."
]
},
{
@@ -79,7 +79,7 @@
"Prepare your OpenAI API key\n",
"The OpenAI API key is used for vectorization of the documents and queries.\n",
"\n",
"If you don't have an OpenAI API key, you can get one from https://beta.openai.com/account/api-keys.\n",
"If you don't have an OpenAI API key, you can get one from https://platform.openai.com/account/api-keys.\n",
"\n",
"Once you get your key, please add it to your environment variables as OPENAI_API_KEY.\n",
"\n",
@@ -33,7 +33,7 @@
"\n",
"1. AnalyticDB cloud server instance.\n",
"2. The 'psycopg2' library to interact with the vector database. Any other postgresql client library is ok.\n",
"3. An [OpenAI API key](https://beta.openai.com/account/api-keys).\n",
"3. An [OpenAI API key](https://platform.openai.com/account/api-keys).\n",
"\n"
]
},
@@ -78,7 +78,7 @@
"\n",
"The OpenAI API key is used for vectorization of the documents and queries.\n",
"\n",
"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
"\n",
"Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`."
]
@@ -53,7 +53,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We use OpenAI's APIs throughout this notebook. You can get an API key from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys)\n",
"We use OpenAI's APIs throughout this notebook. You can get an API key from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys)\n",
"\n",
"You can add your API key as an environment variable by executing the command `export OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` in a terminal. Note that you will need to reload the notebook if the environment variable wasn't set yet. Alternatively, you can set it in the notebook, see below. "
]
@@ -38,7 +38,7 @@
"\n",
"1. Hologres cloud server instance.\n",
"2. The 'psycopg2-binary' library to interact with the vector database. Any other postgresql client library is ok.\n",
"3. An [OpenAI API key](https://beta.openai.com/account/api-keys).\n",
"3. An [OpenAI API key](https://platform.openai.com/account/api-keys).\n",
"\n"
]
},
@@ -83,7 +83,7 @@
"\n",
"The OpenAI API key is used for vectorization of the documents and queries.\n",
"\n",
"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
"\n",
"Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`."
]
@@ -33,7 +33,7 @@
"\n",
"1. A MyScale cluster deployed by following the [quickstart guide](https://docs.myscale.com/en/quickstart/).\n",
"2. The 'clickhouse-connect' library to interact with MyScale.\n",
"3. An [OpenAI API key](https://beta.openai.com/account/api-keys) for vectorization of queries."
"3. An [OpenAI API key](https://platform.openai.com/account/api-keys) for vectorization of queries."
]
},
{
@@ -132,7 +132,7 @@
"\n",
"The OpenAI API key is used for vectorization of the documents and queries.\n",
"\n",
"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
"\n",
"Once you get your key, please add it to your environment variables as `OPENAI_API_KEY` by running the following command:"
]
@@ -29,7 +29,7 @@
"1. Qdrant server instance. In our case a local Docker container.\n",
"2. The [qdrant-client](https://github.com/qdrant/qdrant_client) library to interact with the vector database.\n",
"3. [Langchain](https://github.com/hwchase17/langchain) as a framework.\n",
"4. An [OpenAI API key](https://beta.openai.com/account/api-keys).\n",
"4. An [OpenAI API key](https://platform.openai.com/account/api-keys).\n",
"\n",
"### Start Qdrant server\n",
"\n",
@@ -120,7 +120,7 @@
"\n",
"The OpenAI API key is used for vectorization of the documents and queries.\n",
"\n",
"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
"\n",
"Once you get your key, please add it to your environment variables as `OPENAI_API_KEY` by running the following command:"
]
@@ -43,7 +43,7 @@
"* start a Redis database with RediSearch (redis-stack)\n",
"* install libraries\n",
" * [Redis-py](https://github.com/redis/redis-py)\n",
"* get your [OpenAI API key](https://beta.openai.com/account/api-keys)\n",
"* get your [OpenAI API key](https://platform.openai.com/account/api-keys)\n",
"\n",
"===========================================================\n",
"\n",
@@ -92,7 +92,7 @@
"\n",
"The `OpenAI API key` is used for vectorization of query data.\n",
"\n",
"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
"\n",
"Once you get your key, please add it to your environment variables as `OPENAI_API_KEY` by using the following command:"
]
@@ -24,7 +24,7 @@
"* start a Redis database with RediSearch (redis-stack)\n",
"* install libraries\n",
" * [Redis-py](https://github.com/redis/redis-py)\n",
"* get your [OpenAI API key](https://beta.openai.com/account/api-keys)\n",
"* get your [OpenAI API key](https://platform.openai.com/account/api-keys)\n",
"\n",
"===========================================================\n",
"\n",
@@ -100,7 +100,7 @@
"\n",
"The `OpenAI API key` is used for vectorization of query data.\n",
"\n",
"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
"\n",
"Once you get your key, please add it to your environment variables as `OPENAI_API_KEY` by using the following command:"
]
@@ -38,7 +38,7 @@
"\n",
"1. Tair cloud server instance.\n",
"2. The 'tair' library to interact with the tair database.\n",
"3. An [OpenAI API key](https://beta.openai.com/account/api-keys).\n",
"3. An [OpenAI API key](https://platform.openai.com/account/api-keys).\n",
"\n"
]
},
@@ -109,7 +109,7 @@
"\n",
"The OpenAI API key is used for vectorization of the documents and queries.\n",
"\n",
"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
"\n",
"Once you get your key, please provide it via getpass."
]
@@ -30,7 +30,7 @@
"* completed [Getting Started cookbook](./getting-started-with-weaviate-and-openai.ipynb),\n",
"* created a `Weaviate` instance,\n",
"* imported data into your `Weaviate` instance,\n",
"* you have an [OpenAI API key](https://beta.openai.com/account/api-keys)"
"* you have an [OpenAI API key](https://platform.openai.com/account/api-keys)"
]
},
{
@@ -43,7 +43,7 @@
"\n",
"The `OpenAI API key` is used for vectorization of your data at import, and for running queries.\n",
"\n",
"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
"\n",
"Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`."
]
@@ -95,7 +95,7 @@
" * `weaviate-client`\n",
" * `datasets`\n",
" * `apache-beam`\n",
"* get your [OpenAI API key](https://beta.openai.com/account/api-keys)\n",
"* get your [OpenAI API key](https://platform.openai.com/account/api-keys)\n",
"\n",
"===========================================================\n",
"### Create a Weaviate instance\n",
@@ -172,7 +172,7 @@
"\n",
"The `OpenAI API key` is used for vectorization of your data at import, and for running queries.\n",
"\n",
"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
"\n",
"Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`."
]
@@ -95,7 +95,7 @@
" * `weaviate-client`\n",
" * `datasets`\n",
" * `apache-beam`\n",
"* get your [OpenAI API key](https://beta.openai.com/account/api-keys)\n",
"* get your [OpenAI API key](https://platform.openai.com/account/api-keys)\n",
"\n",
"===========================================================\n",
"### Create a Weaviate instance\n",
@@ -172,7 +172,7 @@
"\n",
"The `OpenAI API key` is used for vectorization of your data at import, and for running queries.\n",
"\n",
"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
"\n",
"Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`."
]
@@ -9,7 +9,7 @@
"\n",
"This notebook is prepared for a scenario where:\n",
"* Your data is not vectorized\n",
"* You want to run Q&A ([learn more](https://weaviate.io/developers/weaviate/modules/reader-generator-modules/qna-openai)) on your data based on the [OpenAI completions](https://beta.openai.com/docs/api-reference/completions) endpoint.\n",
"* You want to run Q&A ([learn more](https://weaviate.io/developers/weaviate/modules/reader-generator-modules/qna-openai)) on your data based on the [OpenAI completions](https://platform.openai.com/docs/api-reference/completions) endpoint.\n",
"* You want to use Weaviate with the OpenAI module ([text2vec-openai](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-openai)), to generate vector embeddings for you.\n",
"\n",
"This notebook takes you through a simple flow to set up a Weaviate instance, connect to it (with OpenAI API key), configure data schema, import data (which will automatically generate vector embeddings for your data), and run question answering.\n",
@@ -94,7 +94,7 @@
" * `weaviate-client`\n",
" * `datasets`\n",
" * `apache-beam`\n",
"* get your [OpenAI API key](https://beta.openai.com/account/api-keys)\n",
"* get your [OpenAI API key](https://platform.openai.com/account/api-keys)\n",
"\n",
"===========================================================\n",
"### Create a Weaviate instance\n",
@@ -171,7 +171,7 @@
"\n",
"The `OpenAI API key` is used for vectorization of your data at import, and for queries.\n",
"\n",
"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
"\n",
"Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`."
]