-- This helps AI Studio understand and compare things in a way that's similar to how humans do. When you're working on something, AI Studio can automatically identify related documents and data by comparing their digital fingerprints. For instance, if you're writing about customer service, AI Studio can instantly find other documents in your data that discuss similar topics or experiences, even if they use different words.
UI_TEXT_CONTENT["AISTUDIO::COMPONENTS::SETTINGS::SETTINGSPANELEMBEDDINGS::T3251217940"] ="This helps AI Studio understand and compare things in a way that's similar to how humans do. When you're working on something, AI Studio can automatically identify related documents and data by comparing their digital fingerprints. For instance, if you're writing about customer service, AI Studio can instantly find other documents in your data that discuss similar topics or experiences, even if they use different words."
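The "digital fingerprint" comparison described in this string can be sketched as a cosine-similarity ranking over embedding vectors. Everything below is a toy illustration: the 3-dimensional vectors, the document names, and the `cosine_similarity` helper are assumptions for the sketch, not AI Studio's actual API or data.

```python
import math

def cosine_similarity(a, b):
    # Two vectors have a cosine near 1.0 when they point in a similar
    # direction, i.e. when the content they represent is similar.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "fingerprints"; a real embedding model emits
# hundreds or thousands of dimensions.
documents = {
    "refund policy":       [0.90, 0.10, 0.00],
    "support ticket flow": [0.80, 0.30, 0.10],
    "office party photos": [0.00, 0.20, 0.90],
}

# Hypothetical embedding of a draft about customer service.
query = [0.85, 0.20, 0.05]

# Rank documents by similarity to the query, most similar first.
ranked = sorted(documents,
                key=lambda name: cosine_similarity(query, documents[name]),
                reverse=True)
```

Even though "refund policy" and "support ticket flow" use different words than the query, their vectors point in a similar direction, so both rank above the unrelated document.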
-- Are you sure you want to delete the transcription provider '{0}'?
UI_TEXT_CONTENT["AISTUDIO::COMPONENTS::SETTINGS::SETTINGSPANELTRANSCRIPTION::T789660305"] ="Are you sure you want to delete the transcription provider '{0}'?"
-- With the support of transcription models, MindWork AI Studio can convert human speech into text. This is useful, for example, when you need to dictate text. You can choose from dedicated transcription models, but not multimodal LLMs (large language models) that can handle both speech and text. The configuration of multimodal models is done in the \"Configure providers\" section.
UI_TEXT_CONTENT["AISTUDIO::COMPONENTS::SETTINGS::SETTINGSPANELTRANSCRIPTION::T799338148"] ="With the support of transcription models, MindWork AI Studio can convert human speech into text. This is useful, for example, when you need to dictate text. You can choose from dedicated transcription models, but not multimodal LLMs (large language models) that can handle both speech and text. The configuration of multimodal models is done in the \\\"Configure providers\\\" section."
UI_TEXT_CONTENT["AISTUDIO::DIALOGS::EMBEDDINGPROVIDERDIALOG::T2331453405"] ="(Optional) API Key"
-- Currently, we cannot query the embedding models of self-hosted systems. Therefore, enter the model name manually.
UI_TEXT_CONTENT["AISTUDIO::DIALOGS::EMBEDDINGPROVIDERDIALOG::T2615586687"] ="Currently, we cannot query the embedding models of self-hosted systems. Therefore, enter the model name manually."
-- Currently, we cannot query the embedding models for the selected provider and/or host. Therefore, please enter the model name manually.
UI_TEXT_CONTENT["AISTUDIO::DIALOGS::EMBEDDINGPROVIDERDIALOG::T290547799"] ="Currently, we cannot query the embedding models for the selected provider and/or host. Therefore, please enter the model name manually."
UI_TEXT_CONTENT["AISTUDIO::DIALOGS::PROVIDERDIALOG::T3763891899"] ="Show available models"
-- Currently, we cannot query the models for the selected provider and/or host. Therefore, please enter the model name manually.
UI_TEXT_CONTENT["AISTUDIO::DIALOGS::PROVIDERDIALOG::T4116737656"] ="Currently, we cannot query the models for the selected provider and/or host. Therefore, please enter the model name manually."
-- Failed to store the API key in the operating system. The message was: {0}. Please try again.
UI_TEXT_CONTENT["AISTUDIO::DIALOGS::TRANSCRIPTIONPROVIDERDIALOG::T1122745046"] ="Failed to store the API key in the operating system. The message was: {0}. Please try again."
-- Currently, we cannot query the transcription models for the selected provider and/or host. Therefore, please enter the model name manually.
UI_TEXT_CONTENT["AISTUDIO::DIALOGS::TRANSCRIPTIONPROVIDERDIALOG::T1381635232"] ="Currently, we cannot query the transcription models for the selected provider and/or host. Therefore, please enter the model name manually."
-- Failed to load the API key from the operating system. The message was: {0}. You might ignore this message and provide the API key again.
UI_TEXT_CONTENT["AISTUDIO::DIALOGS::TRANSCRIPTIONPROVIDERDIALOG::T1870831108"] ="Failed to load the API key from the operating system. The message was: {0}. You might ignore this message and provide the API key again."
-- Plugins: Preview of our plugin system where you can extend the functionality of the app
UI_TEXT_CONTENT["AISTUDIO::SETTINGS::DATAMODEL::PREVIEWFEATURESEXTENSIONS::T2056842933"] ="Plugins: Preview of our plugin system where you can extend the functionality of the app"
-- Speech to Text: Preview of our speech to text system where you can transcribe recordings and audio files into text
UI_TEXT_CONTENT["AISTUDIO::SETTINGS::DATAMODEL::PREVIEWFEATURESEXTENSIONS::T221133923"] ="Speech to Text: Preview of our speech to text system where you can transcribe recordings and audio files into text"
-- RAG: Preview of our RAG implementation where you can reference your files or integrate enterprise data within your company
UI_TEXT_CONTENT["AISTUDIO::SETTINGS::DATAMODEL::PREVIEWFEATURESEXTENSIONS::T2708939138"] ="RAG: Preview of our RAG implementation where you can reference your files or integrate enterprise data within your company"
-- Transcription: Preview of our speech to text system where you can transcribe recordings and audio files into text
UI_TEXT_CONTENT["AISTUDIO::SETTINGS::DATAMODEL::PREVIEWFEATURESEXTENSIONS::T714355911"] ="Transcription: Preview of our speech to text system where you can transcribe recordings and audio files into text"
-- Use no data sources when sending an assistant result to a chat
UI_TEXT_CONTENT["AISTUDIO::SETTINGS::DATAMODEL::SENDTOCHATDATASOURCEBEHAVIOREXTENSIONS::T1223925477"] ="Use no data sources when sending an assistant result to a chat"
@T("Embeddings are a way to represent words, sentences, entire documents, or even images and videos as digital fingerprints. Just like each person has a unique fingerprint, embedding models create unique digital patterns that capture the meaning and characteristics of the content they analyze. When two things are similar in meaning or content, their digital fingerprints will look very similar. For example, the fingerprints for 'happy' and 'joyful' would be more alike than those for 'happy' and 'sad'.")
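The 'happy'/'joyful' versus 'happy'/'sad' comparison in this string can be made concrete with a small sketch. The 3-dimensional vectors below are made up for illustration; a real embedding model produces far higher-dimensional vectors and learns these relationships from data rather than having them hand-assigned.

```python
import math

def cosine_similarity(a, b):
    # Similarity of two "fingerprints": 1.0 means same direction,
    # 0 means unrelated, negative means opposing meaning.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-crafted toy vectors standing in for real embedding output.
vectors = {
    "happy":  [ 0.90,  0.80, 0.10],
    "joyful": [ 0.85,  0.75, 0.20],
    "sad":    [-0.80, -0.70, 0.10],
}

sim_happy_joyful = cosine_similarity(vectors["happy"], vectors["joyful"])
sim_happy_sad = cosine_similarity(vectors["happy"], vectors["sad"])
```

As the string describes, the fingerprints for 'happy' and 'joyful' come out far more alike than those for 'happy' and 'sad'.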