
Improve chat API documentation clarity based on customer feedback #3280

Draft · wants to merge 3 commits into base: main

87 changes: 75 additions & 12 deletions guides/ai/getting_started_with_chat.mdx
@@ -13,13 +13,24 @@ The chat completions feature is experimental and must be enabled before use. See
## Prerequisites

Before starting, ensure you have:

- A Meilisearch instance running (v1.15.1 or later)
- An API key from an LLM provider (OpenAI, Azure OpenAI, Mistral, Gemini, or access to a vLLM server)
- At least one index with searchable content
- The chat completions experimental feature enabled

## Quick start

### Understanding workspaces

Think of workspaces as different "assistants" you can create for various purposes. Each workspace can have its own personality (system prompt) and capabilities. The best part? **Workspaces are created automatically** when you configure them – no separate creation step needed!

For example:

- `customer-support` - A helpful assistant for customer queries
- `product-search` - An expert at finding the perfect product
- `docs-helper` - A technical assistant for documentation
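
For instance, pushing settings to a workspace name that doesn't exist yet both creates and configures it in a single call. Below is a minimal sketch, assuming a local instance, the `PATCH /chats/{workspace}/settings` route described in the API reference, and placeholder keys; the exact shape of the settings object is covered in that reference.

```python
import requests

MEILI_URL = "http://localhost:7700"   # assumed local instance
MEILI_KEY = "MEILISEARCH_API_KEY"     # placeholder Meilisearch API key
WORKSPACE = "customer-support"        # created automatically on first configuration

# Sending settings to a workspace that doesn't exist yet creates it on the fly.
response = requests.patch(
    f"{MEILI_URL}/chats/{WORKSPACE}/settings",
    headers={"Authorization": f"Bearer {MEILI_KEY}"},
    json={
        "source": "openAi",                 # LLM provider
        "apiKey": "OPENAI_API_KEY",         # placeholder provider key
        "prompts": {
            "system": "You are a helpful assistant for customer queries."
        },
    },
)
response.raise_for_status()
print(response.json())
```

The same pattern applies to `product-search` and `docs-helper`: each workspace keeps its own provider configuration and system prompt.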

### Enable the chat completions feature

First, enable the chat completions experimental feature:
@@ -134,18 +145,6 @@ curl \
}'
```

## Understanding workspaces

Workspaces allow you to create isolated chat configurations for different use cases:

- **Customer support**: Configure with support-focused prompts
- **Product search**: Optimize for e-commerce queries
- **Documentation**: Tune for technical Q&A

Each workspace maintains its own:
- LLM provider configuration
- System prompt

## Building a chat interface with OpenAI SDK

Since Meilisearch's chat endpoint is OpenAI-compatible, you can use the official OpenAI SDK:
@@ -267,6 +266,70 @@ except Exception as error:

</CodeGroup>

## Managing conversations

Since Meilisearch keeps your data private and doesn't store conversations, you'll need to manage conversation history in your application. Here's a simple approach:

<CodeGroup>

```javascript JavaScript
// Store conversation history in your app
const conversation = [];

// Add user message
conversation.push({ role: 'user', content: 'What is Meilisearch?' });

// Get response and add to history
const response = await client.chat.completions.create({
model: 'gpt-3.5-turbo',
messages: conversation,
stream: true,
});

// Add assistant response to history
let assistantMessage = '';
for await (const chunk of response) {
assistantMessage += chunk.choices[0]?.delta?.content || '';
}
conversation.push({ role: 'assistant', content: assistantMessage });

// Use the full conversation for follow-up questions
conversation.push({ role: 'user', content: 'Can it handle typos?' });
// ... continue the conversation
```

```python Python
# Store conversation history in your app
conversation = []

# Add user message
conversation.append({"role": "user", "content": "What is Meilisearch?"})

# Get response and add to history
response = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=conversation,
stream=True,
)

# Add assistant response to history
assistant_message = ""
for chunk in response:
if chunk.choices[0].delta.content is not None:
assistant_message += chunk.choices[0].delta.content
conversation.append({"role": "assistant", "content": assistant_message})

# Use the full conversation for follow-up questions
conversation.append({"role": "user", "content": "Can it handle typos?"})
# ... continue the conversation
```

</CodeGroup>

<Tip>
Remember: Each request is independent, so always send the full conversation history if you want the AI to remember previous exchanges.
</Tip>

## Next steps

- Explore [advanced chat API features](/reference/api/chats)
29 changes: 28 additions & 1 deletion reference/api/chats.mdx
@@ -8,6 +8,10 @@ import { RouteHighlighter } from '/snippets/route_highlighter.mdx';

The `/chats` route enables AI-powered conversational search by integrating Large Language Models (LLMs) with your Meilisearch data. This feature allows users to ask questions in natural language and receive contextual answers based on your indexed content.

<Tip>
To optimize how your content is presented to the LLM, configure the [conversation settings for each index](/reference/api/settings#conversation). This allows you to customize descriptions, document templates, and search parameters for better AI responses.
</Tip>
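
For illustration, here is a sketch of what such a per-index configuration could look like. The route and field names below are assumptions inferred from the description above (a description, a document template, and search parameters); confirm both against the linked settings reference before relying on them.

```python
import requests

MEILI_URL = "http://localhost:7700"   # assumed local instance
MEILI_KEY = "MEILISEARCH_API_KEY"     # placeholder key with settings permissions
INDEX_UID = "products"                # hypothetical index

# Hypothetical payload: how the index is described to the LLM, how documents
# are rendered into the prompt, and which search parameters are applied.
conversation_settings = {
    "description": "E-commerce catalog with product names, descriptions, and prices",
    "documentTemplate": "Name: {{ doc.name }}\nDescription: {{ doc.description }}",
    "searchParameters": {"limit": 5},
}

# Assumed sub-route; check the settings reference for the exact path.
response = requests.patch(
    f"{MEILI_URL}/indexes/{INDEX_UID}/settings/chat",
    headers={"Authorization": f"Bearer {MEILI_KEY}"},
    json=conversation_settings,
)
response.raise_for_status()
```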

<Note>
This is an experimental feature. Use the Meilisearch Cloud UI or the experimental features endpoint to activate it:

@@ -19,6 +23,7 @@ curl \
"chatCompletions": true
}'
```

</Note>

## Chat completions workspace object
@@ -39,6 +44,10 @@

Configure the LLM provider and settings for a chat workspace.

<Note>
If the specified workspace doesn't exist, this endpoint will automatically create it for you. No need to explicitly create workspaces beforehand!
</Note>

```json
{
"source": "openAi",
@@ -82,7 +91,6 @@ Configure the LLM provider and settings for a chat workspace.
| **`searchQParam`** | String | A prompt to explain what the `q` parameter of the search function does and how to use it |
| **`searchIndexUidParam`** | String | A prompt to explain what the `indexUid` parameter of the search function does and how to use it |


### Request body

```json
@@ -391,6 +399,19 @@ curl \
}
```

## Privacy and data storage

<Note>
🔒 **Your conversations are private**: Meilisearch does not store any conversation history or context between requests. Each chat completion request is stateless and independent. Any conversation continuity must be managed by your application.
</Note>

This design ensures:

- Complete privacy of user conversations
- No data retention of questions or answers
- Full control over conversation history in your application
- Compliance with data protection regulations
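
In practice this means each request must carry whatever history the model should see. A minimal sketch with the OpenAI SDK, assuming a local instance, a workspace named `docs-helper`, and a Meilisearch API key passed as the bearer token:

```python
from openai import OpenAI

# Point the OpenAI SDK at a chat workspace; the SDK appends /chat/completions
# to this base URL. Address, workspace name, and key are placeholders.
client = OpenAI(
    base_url="http://localhost:7700/chats/docs-helper",
    api_key="MEILISEARCH_API_KEY",
)

def ask(history):
    """Stream one completion and return the assistant's full reply."""
    stream = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,
        stream=True,
    )
    return "".join(chunk.choices[0].delta.content or "" for chunk in stream)

# The application, not Meilisearch, keeps the transcript between requests.
history = [{"role": "user", "content": "What is Meilisearch?"}]
history.append({"role": "assistant", "content": ask(history)})

# Follow-up: resend the full history, since the server retained nothing.
history.append({"role": "user", "content": "Does it handle typos?"})
print(ask(history))
```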

## Authentication

The chat feature integrates with Meilisearch's authentication system:
@@ -549,11 +570,13 @@ This tool reports real-time progress of internal search operations. When declare
**Purpose**: Provides transparency about search operations and reduces perceived latency by showing users what's happening behind the scenes.

**Arguments**:

- `call_id`: Unique identifier to track the search operation
- `function_name`: Name of the internal function being executed (e.g., "_meiliSearchInIndex")
- `function_parameters`: JSON-encoded string containing search parameters like `q` (query) and `index_uid`

**Example Response**:

```json
{
"function": {
@@ -570,12 +593,14 @@ Since the `/chats/{workspace}/chat/completions` endpoint is stateless, this tool
**Purpose**: Maintains conversation context for better response quality in subsequent requests by preserving tool calls and results.

**Arguments**:

- `role`: Message author role ("user" or "assistant")
- `content`: Message content (for tool results)
Comment on lines 597 to 598

⚠️ Potential issue

Incorrect value in role description

The description says either `role` or `assistant` – the first option should be `user`.

- "description": "The role of the messages author, either `role` or `assistant`"
+ "description": "The role of the message author, either `user` or `assistant`"

Committable suggestion skipped: line range outside the PR's diff.


🤖 Prompt for AI Agents
In reference/api/chats.mdx at lines 597 to 598, the description of the `role`
field incorrectly states the possible values as "role" or "assistant". Update
the description to correctly specify that `role` can be either "user" or
"assistant" to accurately reflect the message author roles.

- `tool_calls`: Array of tool calls made by the assistant
- `tool_call_id`: ID of the tool call this message responds to

**Example Response**:

```json
{
"function": {
@@ -592,10 +617,12 @@ This tool provides the source documents that were used by the LLM to generate re
**Purpose**: Shows users which documents were used to generate responses, improving trust and enabling source verification.

**Arguments**:

- `call_id`: Matches the `call_id` from `_meiliSearchProgress` to associate queries with results
- `documents`: JSON object containing the source documents, limited to their displayed attributes

**Example Response**:

```json
{
"function": {
6 changes: 6 additions & 0 deletions snippets/samples/code_samples_typo_tolerance_guide_5.mdx
@@ -15,6 +15,12 @@ client.index('movies').updateTypoTolerance({
})
```

```python Python
client.index('movies').update_typo_tolerance({
'disableOnNumbers': True
})
```

```php PHP
$client->index('movies')->updateTypoTolerance([
'disableOnNumbers' => true