diff --git a/content/llm-full.txt b/content/llm-full.txt new file mode 100644 index 0000000000..ad838ee14f --- /dev/null +++ b/content/llm-full.txt @@ -0,0 +1,418284 @@ +--- +description: An overview of Redis APIs for developers and operators +hideListLinks: true +linkTitle: APIs +title: APIs +type: develop +--- + +Redis provides a number of APIs for developers and operators. The following sections provide you easy access to the client API, the several programmability APIs, the RESTFul management APIs and the Kubernetes resource defintions. + +## APIs for Developers + +### Client API + +Redis comes with a wide range of commands that help you to develop real-time applications. You can find a complete overview of the Redis commands here: + +- [Redis commands]({{< relref "/commands/" >}}) + +As a developer, you will likely use one of our supported client libraries for connecting and executing commands. + +- [Connect with Redis clients introduction]({{< relref "/develop/clients" >}}) + +### Programmability APIs + +The existing Redis commands cover most use cases, but if low latency is a critical requirement, you might need to extend Redis' server-side functionality. + +Lua scripts have been available since early versions of Redis. With Lua, the script is provided by the client and cached on the server side, which implies the risk that different clients might use a different script version. + +- [Redis Lua API reference]({{< relref "/develop/interact/programmability/lua-api" >}}) +- [Scripting with Lua introduction]({{< relref "/develop/interact/programmability/eval-intro" >}}) + +The Redis functions feature, which became available in Redis 7, supersedes the use of Lua in prior versions of Redis. The client is still responsible for invoking the execution, but unlike the previous Lua scripts, functions can now be replicated and persisted. + +- [Functions and scripting in Redis 7 and beyond]({{< relref "/develop/interact/programmability/functions-intro" >}}) + +If none of the previous methods fulfills your needs, then you can extend the functionality of Redis with new commands using the Redis Modules API. + +- [Redis Modules API introduction]({{< relref "/develop/reference/modules/" >}}) +- [Redis Modules API reference]({{< relref "/develop/reference/modules/modules-api-ref" >}}) + +## APIs for Operators + +### Redis Cloud API +Redis Cloud is a fully managed Database as a Service offering and the fastest way to deploy Redis at scale. You can programmatically manage your databases, accounts, access, and credentials using the Redis Cloud REST API. + +- [Redis Cloud REST API introduction]({{< relref "/operate/rc/api/" >}}) +- [Redis Cloud REST API examples]({{< relref "/operate/rc/api/examples/" >}}) +- [Redis Cloud REST API reference]({{< relref "/operate/rc/api/api-reference" >}}) + + +### Redis Enterprise Software API +If you have installed Redis Enterprise Software, you can automate operations with the Redis Enterprise REST API. + +- [Redis Enterprise Software REST API introduction]({{< relref "/operate/rs/references/rest-api/" >}}) +- [Redis Enterprise Software REST API requests]({{< relref "/operate/rs/references/rest-api/requests/" >}}) +- [Redis Enterprise Software REST API objects]({{< relref "/operate/rs/references/rest-api/objects/" >}}) + + +### Redis Enterprise for Kubernetes API + +If you need to install Redis Enterprise on Kubernetes, then you can use the [Redis Enterprise for Kubernetes Operators]({{< relref "/operate/Kubernetes/" >}}). 
You can find the resource definitions here: + +- [Redis Enterprise Cluster API]({{}}) +- [Redis Enterprise Database API]({{}}) +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Discover the differences between `ioredis` and `node-redis`. +linkTitle: Migrate from ioredis +title: Migrate from ioredis +weight: 6 +--- + +Redis previously recommended the [`ioredis`](https://github.com/redis/ioredis) +client library for development with [Node.js](https://nodejs.org/en), +but this library is now deprecated in favor of +[`node-redis`]({{< relref "/develop/clients/nodejs" >}}). This guide +outlines the main similarities and differences between the two libraries. +You may find this information useful if you are an `ioredis` user and you want to +start a new Node.js project or migrate an existing `ioredis` project to `node-redis`. + +## Comparison of `ioredis` and `node-redis` + +The tables below summarize how `ioredis` and `node-redis` implement some +key features of Redis. See the following sections for more information about +each feature. + +### Connection + +| Feature | `ioredis` | `node-redis` | +| :-- | :-- | :-- | +| [Initial connection](#initial-connection) | Happens when you create a client instance | Requires you to call a method on the client instance | +| [Reconnection after a connection is lost](#reconnection) | Automatic by default | Manual by default | +| [Connection events](#connection-events) | Emits `connect`, `ready`, `error`, and `close` events | Emits `connect`, `ready`, `error`, `end`, and `reconnecting` events | + +### Command handling + +| Feature | `ioredis` | `node-redis` | +| :-- | :-- | :-- | +| [Command case](#command-case) | Lowercase only (eg, `hset`) | Uppercase or camel case (eg, `HSET` or `hSet`) | +| [Command argument handling](#command-argument-handling) | Argument objects flattened and items passed directly | Argument objects parsed to generate correct argument list | +| [Asynchronous command result handling](#async-result) | Callbacks and Promises | Promises only | +| [Arbitrary command execution](#arbitrary-command-execution) | Uses the `call()` method | Uses the `sendCommand()` method | + +### Techniques + +| Feature | `ioredis` | `node-redis` | +| :-- | :-- | :-- | +| [Pipelining](#pipelining) | Automatic, or with `pipeline()` command | Automatic, or with `multi()` command | +| [Scan iteration](#scan-iteration) | Uses `scanStream()`, etc | Uses `scanIterator()`, etc | +| [Subscribing to channels](#subscribing-to-channels) | Uses `client.on('message', ...)` event | Uses `subscribe(...)` command | + +### Specific commands + +| Command | `ioredis` | `node-redis` | +| :-- | :-- | :-- | +| [`SETNX`](#setnx-command) | Supported explicitly | Supported as an option for `SET` | +| [`HMSET`](#hmset-command) | Supported explicitly | Supported with standard `HSET` functionality | +| [`CONFIG`](#config-command) | Supported explicitly | Supported with separate `configGet()`, `configSet()`, etc |co + +## Details + +The sections below explain the points of comparison between `ioredis` and +`node-redis` in more detail. + +### Initial connection + +`ioredis` makes the connection to the Redis server when you create an instance +of the client object: + +```js +const client = require('ioredis'); + +// Connects to localhost:6379 on instantiation. 
+const client = new Redis(); +``` + +`node-redis` requires you to call the `connect()` method on the client object +to make the connection: + +```js +import { createClient } from 'redis'; + +const client = await createClient(); +await client.connect(); // Requires explicit connection. +``` + +### Reconnection after a connection is lost {#reconnection} + +`ioredis` automatically attempts to reconnect if the connection +was lost due to an error. By default, `node-redis` doesn't attempt +to reconnect, but you can enable a custom reconnection strategy +when you create the client object. See +[Reconnect after disconnection]({{< relref "/develop/clients/nodejs/connect#reconnect-after-disconnection" >}}) +for more information. + +### Connection events + +The `connect`, `ready`, `error`, and `close` events that `ioredis` emits +are equivalent to the `connect`, `ready`, `error`, and `end` events +in `node-redis`, but `node-redis` also emits a `reconnecting` event. +See [Connection events]({{< relref "/develop/clients/nodejs/connect#connection-events" >}}) +for more information. + +### Command case + +Command methods in `ioredis` are always lowercase. With `node-redis`, you can +use uppercase or camel case versions of the method names. + +```js +// ioredis +client.hset('key', 'field', 'value'); + +// node-redis +client.HSET('key', 'field', 'value'); + +// ...or +client.hSet('key', 'field', 'value'); +``` + +### Command argument handling + +`ioredis` parses command arguments to strings and then passes them to +the server, in a similar way to [`redis-cli`]({{< relref "/develop/tools/cli" >}}). + +```js +// Equivalent to the command line `SET key 100 EX 10`. +client.set('key', 100, 'EX', 10); +``` + +Arrays passed as arguments are flattened into individual elements and +objects are flattened into sequential key-value pairs: + +```js +// These commands are all equivalent. +client.hset('user' { + name: 'Bob', + age: 20, + description: 'I am a programmer', +}); + +client.hset('user', ['name', 'Bob', 'age', 20, 'description', 'I am a programmer']); + +client.hset('user', 'name', 'Bob', 'age', 20, 'description', 'I am a programmer'); +``` + +`node-redis` uses predefined formats for command arguments. These include specific +classes for commmand options that generally don't correspond to the syntax +of the CLI command. Internally, `node-redis` constructs the correct command using +the method arguments you pass: + +```js +// Equivalent to the command line `SET bike:5 bike EX 10`. +client.set('bike:5', 'bike', {EX: 10}); +``` + +### Asynchronous command result handling {#async-result} + +All commands for both `ioredis` and `node-redis` are executed +asynchronously. `ioredis` supports both callbacks and +[`Promise`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) +return values to respond to command results: + +```js +// Callback +client.get('mykey', (err, result) => { + if (err) { + console.error(err); + } else { + console.log(result); + } +}); + +// Promise +client.get('mykey').then( + (result) => { + console.log(result); + }, + (err) => { + console.error(err); + } +); +``` + +`node-redis` supports only `Promise` objects for results, so +you must always use a `then()` handler or the +[`await`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await) +operator to receive them. 
+ +### Arbitrary command execution + +`ioredis` lets you issue arbitrary commands in a similar format to +[`redis-cli`]({{< relref "/develop/tools/cli" >}}) using the `call()` +command: + +```js +await client.call('JSON.SET', 'doc', "$", '{"f1": {"a":1}, "f2":{"a":2}}'); +``` + +In `node-redis`, you can get the same effect outside a transaction using `sendCommand()`: + +```js +await client.sendCommand(['hset', 'hash2', 'number', '3']); +``` + +Within a transaction, use `addCommand()` to include arbitrary commands. Note that +you can freely mix `addCommand()` calls with standard commands in the same +transaction: + +```js +const responses = await client.multi() + .addCommand(['hset', 'hash3', 'number', '4']) + .hGet('hash3', 'number') + .exec(); +``` + +### Pipelining + +Both `ioredis` and `node-redis` will pipeline commands automatically if +they are executed in the same "tick" of the +[event loop](https://nodejs.org/en/learn/asynchronous-work/event-loop-timers-and-nexttick#what-is-the-event-loop) +(see +[Execute a pipeline]({{< relref "/develop/clients/nodejs/transpipe#execute-a-pipeline" >}}) +for more information). + +You can also create a pipeline with explicit commands in both clients. +With `ioredis`, you use the `pipeline()` command with a chain of +commands, ending with `exec()` to run the pipeline: + +```js +// ioredis example +client.pipeline() + .set('foo', '1') + .get('foo') + .set('foo', '2') + .incr('foo') + .get('foo') + .exec(function (err, results) { + // Handle results or errors. + }); +``` + +For `node-redis`, the approach is similar, except that you call the `multi()` +command to start the pipeline and `execAsPipeline()` to run it: + +```js +client.multi() + .set('seat:3', '#3') + .set('seat:4', '#4') + .set('seat:5', '#5') + .execAsPipeline() + .then((results) => { + // Handle array of results. + }, + (err) => { + // Handle errors. + }); +``` + +### Scan iteration + +`ioredis` supports the `scanStream()` method to create a readable stream +from the set of keys returned by the [`SCAN`]({{< relref "/commands/scan" >}}) +command: + +```js +const client = new Redis(); +// Create a readable stream (object mode) +const stream = client.scanStream(); +stream.on('data', (resultKeys) => { + // `resultKeys` is an array of strings representing key names. + // Note that resultKeys may contain 0 keys, and that it will sometimes + // contain duplicates due to SCAN's implementation in Redis. + for (let i = 0; i < resultKeys.length; i++) { + console.log(resultKeys[i]); + } +}); +stream.on('end', () => { + console.log('all keys have been visited'); +}); +``` + +You can also use the similar `hscanStream()`, `sscanStream()`, and +`zscanStream()` to iterate over the items of a hash, set, or sorted set, +respectively. + +`node-redis` handles scan iteration using the `scanIterator()` method +(and the corresponding `hscanIterator()`, `sscanIterator()`, and +`zscanIterator()` methods). These return a collection object for +each page scanned by the cursor (this can be helpful to improve +efficiency using [`MGET`]({{< relref "/commands/mget" >}}) and +other multi-key commands): + +```js +for await (const keys of client.scanIterator()) { + const values = await client.mGet(keys); + // Process values... 
+} +``` + +### Subscribing to channels + +`ioredis` reports incoming pub/sub messages with a `message` +event on the client object (see +[Publish/subscribe]({{< relref "/develop/interact/pubsub" >}}) for more +information about messages): + +```js +client.on('message', (channel, message) => { + console.log(Received message from ${channel}: ${message}); +}); +``` + +With `node-redis`, you use the `subscribe()` command to register the +message callback. Also, when you use a connection to subscribe, that +connection can't issue any other commands, so you must create a +dedicated connection for the subscription. Use the `client.duplicate()` +method to create a new connection with the same settings as the original: + +```js +const subscriber = client.duplicate(); +await subscriber.connect(); + +await subscriber.subscribe('channel', (message) => { + console.log(Received message: ${message}); +}); +``` + +### `SETNX` command + +`ioredis` implements the [`SETNX`]({{< relref "/commands/setnx" >}}) +command with an explicit method: + +```js +client.setnx('bike:1', 'bike'); +``` + +`node-redis` doesn't provide a `SETNX` method but implements the same +functionality with the `NX` option to the [`SET`]({{< relref "/commands/set" >}}) +command: + +```js +await client.set('bike:1', 'bike', {'NX': true}); +``` + +### `HMSET` command + +The [`HMSET`]({{< relref "/commands/hmset" >}}) command has been deprecated +since Redis v4.0.0, but it is still supported by `ioredis`. With `node-redis` +you should use the [`HSET`]({{< relref "/commands/hset" >}}) command with +multiple key-value pairs. See the [`HSET`]({{< relref "/commands/hset" >}}) +command page for more information. + +### `CONFIG` command + +`ioredis` supports a `config()` method to set or get server configuration +options: + +```js +client.config('SET', 'notify-keyspace-events', 'KEA'); +``` + +`node-redis` doesn't have a `config()` method, but instead supports the +standard commands [`configSet()`]({{< relref "/commands/config-set" >}}), +[`configGet()`]({{< relref "/commands/config-get" >}}), +[`configResetStat()`]({{< relref "/commands/config-resetstat" >}}), and +[`configRewrite`]({{< relref "/commands/config-rewrite" >}}): + +```js +await client.configSet('maxclients', '2000'); +``` +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use Redis pipelines and transactions +linkTitle: Pipelines/transactions +title: Pipelines and transactions +weight: 4 +--- + +Redis lets you send a sequence of commands to the server together in a batch. +There are two types of batch that you can use: + +- **Pipelines** avoid network and processing overhead by sending several commands + to the server together in a single communication. The server then sends back + a single communication with all the responses. See the + [Pipelining]({{< relref "/develop/use/pipelining" >}}) page for more + information. +- **Transactions** guarantee that all the included commands will execute + to completion without being interrupted by commands from other clients. + See the [Transactions]({{< relref "/develop/interact/transactions" >}}) + page for more information. + +## Execute a pipeline + +There are two ways to execute commands in a pipeline. Firstly, `node-redis` will +automatically pipeline commands that execute within the same "tick" of the +[event loop](https://nodejs.org/en/learn/asynchronous-work/event-loop-timers-and-nexttick#what-is-the-event-loop). 
+You can ensure that commands happen in the same tick very easily by including them in a +[`Promise.all()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all) +call, as shown in the following example. The chained `then(...)` callback is optional +and you can often omit it for commands that write data and only return a +status result. + +```js +await Promise.all([ + client.set('seat:0', '#0'), + client.set('seat:1', '#1'), + client.set('seat:2', '#2'), +]).then((results) =>{ + console.log(results); + // >>> ['OK', 'OK', 'OK'] +}); + +await Promise.all([ + client.get('seat:0'), + client.get('seat:1'), + client.get('seat:2'), +]).then((results) =>{ + console.log(results); + // >>> ['#0', '#1', '#2'] +}); +``` + +You can also create a pipeline object using the +[`multi()`]({{< relref "/commands/multi" >}}) method +and then add commands to it using methods that resemble the standard +command methods (for example, `set()` and `get()`). The commands are +buffered in the pipeline and only execute when you call the +`execAsPipeline()` method on the pipeline object. Again, the +`then(...)` callback is optional. + +```js +await client.multi() + .set('seat:3', '#3') + .set('seat:4', '#4') + .set('seat:5', '#5') + .execAsPipeline() + .then((results) => { + console.log(results); + // >>> ['OK', 'OK', 'OK'] + }); +``` + +The two approaches are almost equivalent, but they have different behavior +when the connection is lost during the execution of the pipeline. After +the connection is re-established, a `Promise.all()` pipeline will +continue execution from the point where the interruption happened, +but a `multi()` pipeline will discard any remaining commands that +didn't execute. + +## Execute a transaction + +A transaction works in a similar way to a pipeline. Create a +transaction object with the `multi()` command, call command methods +on that object, and then call the transaction object's +`exec()` method to execute it. + +```js +const [res1, res2, res3] = await client.multi() + .incrBy("counter:1", 1) + .incrBy("counter:2", 2) + .incrBy("counter:3", 3) + .exec(); + +console.log(res1); // >>> 1 +console.log(res2); // >>> 2 +console.log(res3); // >>> 3 +``` + +## Watch keys for changes + +Redis supports *optimistic locking* to avoid inconsistent updates +to different keys. The basic idea is to watch for changes to any +keys that you use in a transaction while you are are processing the +updates. If the watched keys do change, you must restart the updates +with the latest data from the keys. See +[Transactions]({{< relref "/develop/interact/transactions" >}}) +for more information about optimistic locking. + +The code below reads a string +that represents a `PATH` variable for a command shell, then appends a new +command path to the string before attempting to write it back. If the watched +key is modified by another client before writing, the transaction aborts. +Note that you should call read-only commands for the watched keys synchronously on +the usual `client` object but you still call commands for the transaction on the +transaction object created with `multi()`. + +For production usage, you would generally call code like the following in +a loop to retry it until it succeeds or else report or log the failure. + +```js +// Set initial value of `shellpath`. +client.set('shellpath', '/usr/syscmds/'); + +// Watch the key we are about to update. 
+await client.watch('shellpath'); + +const currentPath = await client.get('shellpath'); +const newPath = currentPath + ':/usr/mycmds/'; + +// Attempt to write the watched key. +await client.multi() + .set('shellpath', newPath) + .exec() + .then((result) => { + // This is called when the pipeline executes + // successfully. + console.log(result); + }, (err) => { + // This is called when a watched key was changed. + // Handle the error here. + console.log(err); + }); + +const updatedPath = await client.get('shellpath'); +console.log(updatedPath); +// >>> /usr/syscmds/:/usr/mycmds/ +``` + +In an environment where multiple concurrent requests are sharing a connection +(such as a web server), you must use a connection pool to get an isolated connection, +as shown below: + +```js +import { createClientPool } from 'redis'; + +const pool = await createClientPool() + .on('error', err => console.error('Redis Client Pool Error', err)); + +try { + await pool.execute(async client => { + await client.watch('key'); + + const multi = client.multi() + .ping() + .get('key'); + + if (Math.random() > 0.5) { + await client.watch('another-key'); + multi.set('another-key', await client.get('another-key') / 2); + } + + return multi.exec(); + }); +} catch (err) { + if (err instanceof WatchError) { + // the transaction aborted + } +} +``` + +This is important because the server tracks the state of the WATCH on a +per-connection basis, and concurrent WATCH and MULTI/EXEC calls on the same +connection will interfere with one another. See +[`RedisClientPool`](https://github.com/redis/node-redis/blob/master/docs/pool.md) +for more information. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to index and query vector embeddings with Redis +linkTitle: Index and query vectors +title: Index and query vectors +weight: 3 +--- + +[Redis Query Engine]({{< relref "/develop/interact/search-and-query" >}}) +lets you index vector fields in [hash]({{< relref "/develop/data-types/hashes" >}}) +or [JSON]({{< relref "/develop/data-types/json" >}}) objects (see the +[Vectors]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}) +reference page for more information). + +Vector fields can store *text embeddings*, which are AI-generated vector +representations of text content. The +[vector distance]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#distance-metrics" >}}) +between two embeddings measures their semantic similarity. When you compare the +similarity of a query embedding with stored embeddings, Redis can retrieve documents +that closely match the query's meaning. + +In the example below, we use the +[`@xenova/transformers`](https://www.npmjs.com/package/@xenova/transformers) +library to generate vector embeddings to store and index with +Redis Query Engine. The code is first demonstrated for hash documents with a +separate section to explain the +[differences with JSON documents](#differences-with-json-documents). + +## Initialize + +Install the required dependencies: + +1. Install [`node-redis`]({{< relref "/develop/clients/nodejs" >}}) if you haven't already. +2. 
Install `@xenova/transformers`: + +```bash +npm install @xenova/transformers +``` + +In your JavaScript source file, import the required classes: + +```js +import * as transformers from '@xenova/transformers'; +import { + VectorAlgorithms, + createClient, + SCHEMA_FIELD_TYPE, +} from 'redis'; +``` + +The `@xenova/transformers` module handles embedding models. This example uses the +[`all-distilroberta-v1`](https://huggingface.co/sentence-transformers/all-distilroberta-v1) +model, which: +- Generates 768-dimensional vectors +- Truncates input to 128 tokens +- Uses word piece tokenization (see [Word piece tokenization](https://huggingface.co/learn/nlp-course/en/chapter6/6) + at the [Hugging Face](https://huggingface.co/) docs for details) + +The `pipe` function generates embeddings. The `pipeOptions` object specifies how to generate sentence embeddings from token embeddings (see the +[`all-distilroberta-v1`](https://huggingface.co/sentence-transformers/all-distilroberta-v1) +documentation for details): + +```js +let pipe = await transformers.pipeline( + 'feature-extraction', 'Xenova/all-distilroberta-v1' +); + +const pipeOptions = { + pooling: 'mean', + normalize: true, +}; +``` + +## Create the index + +First, connect to Redis and remove any existing index named `vector_idx`: + +```js +const client = createClient({url: 'redis://localhost:6379'}); +await client.connect(); + +try { + await client.ft.dropIndex('vector_idx'); +} catch (e) { + // Index doesn't exist, which is fine +} +``` + +Next, create the index with the following schema: +- `content`: Text field for the content to index +- `genre`: [Tag]({{< relref "/develop/interact/search-and-query/advanced-concepts/tags" >}}) + field representing the text's genre +- `embedding`: [Vector]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}) + field with: + - [HNSW]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#hnsw-index" >}}) + indexing + - [L2]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#distance-metrics" >}}) + distance metric + - Float32 values + - 768 dimensions (matching the embedding model) + +```js +await client.ft.create('vector_idx', { + 'content': { + type: SchemaFieldTypes.TEXT, + }, + 'genre': { + type: SchemaFieldTypes.TAG, + }, + 'embedding': { + type: SchemaFieldTypes.VECTOR, + TYPE: 'FLOAT32', + ALGORITHM: VectorAlgorithms.HNSW, + DISTANCE_METRIC: 'L2', + DIM: 768, + } +}, { + ON: 'HASH', + PREFIX: 'doc:' +}); +``` + +## Add data + +Add data objects to the index using `hSet()`. The index automatically processes objects with the `doc:` prefix. + +For each document: +1. Generate an embedding using the `pipe()` function and `pipeOptions` +2. Convert the embedding to a binary string using `Buffer.from()` +3. 
Store the document with `hSet()` + +Use `Promise.all()` to batch the commands and reduce network round trips: + +```js +const sentence1 = 'That is a very happy person'; +const doc1 = { + 'content': sentence1, + 'genre': 'persons', + 'embedding': Buffer.from( + (await pipe(sentence1, pipeOptions)).data.buffer + ), +}; + +const sentence2 = 'That is a happy dog'; +const doc2 = { + 'content': sentence2, + 'genre': 'pets', + 'embedding': Buffer.from( + (await pipe(sentence2, pipeOptions)).data.buffer + ) +}; + +const sentence3 = 'Today is a sunny day'; +const doc3 = { + 'content': sentence3, + 'genre': 'weather', + 'embedding': Buffer.from( + (await pipe(sentence3, pipeOptions)).data.buffer + ) +}; + +await Promise.all([ + client.hSet('doc:1', doc1), + client.hSet('doc:2', doc2), + client.hSet('doc:3', doc3) +]); +``` + +## Run a query + +To query the index: +1. Generate an embedding for your query text +2. Pass the embedding as a parameter to the search +3. Redis calculates vector distances and ranks results + +The query returns an array of document objects. Each object contains: +- `id`: The document's key +- `value`: An object with fields specified in the `RETURN` option + +```js +const similar = await client.ft.search( + 'vector_idx', + '*=>[KNN 3 @embedding $B AS score]', + { + 'PARAMS': { + B: Buffer.from( + (await pipe('That is a happy person', pipeOptions)).data.buffer + ), + }, + 'RETURN': ['score', 'content'], + 'DIALECT': '2' + }, +); + +for (const doc of similar.documents) { + console.log(`${doc.id}: '${doc.value.content}', Score: ${doc.value.score}`); +} + +await client.quit(); +``` + +The first run may take longer as it downloads the model data. The output shows results ordered by score (vector distance), with lower scores indicating greater similarity: + +``` +doc:1: 'That is a very happy person', Score: 0.127055495977 +doc:2: 'That is a happy dog', Score: 0.836842417717 +doc:3: 'Today is a sunny day', Score: 1.50889515877 +``` + +## Differences with JSON documents + +JSON documents support richer data modeling with nested fields. Key differences from hash documents: + +1. Use paths in the schema to identify fields +2. Declare aliases for paths using the `AS` option +3. Set `ON` to `JSON` when creating the index +4. Use arrays instead of binary strings for vectors +5. Use `json.set()` instead of `hSet()` + +Create the index with path aliases: + +```js +await client.ft.create('vector_json_idx', { + '$.content': { + type: SchemaFieldTypes.TEXT, + AS: 'content', + }, + '$.genre': { + type: SchemaFieldTypes.TAG, + AS: 'genre', + }, + '$.embedding': { + type: SchemaFieldTypes.VECTOR, + TYPE: 'FLOAT32', + ALGORITHM: VectorAlgorithms.HNSW, + DISTANCE_METRIC: 'L2', + DIM: 768, + AS: 'embedding', + } +}, { + ON: 'JSON', + PREFIX: 'jdoc:' +}); +``` + +Add data using `json.set()`. 
Convert the `Float32Array` to a standard JavaScript array using the spread operator: + +```js +const jSentence1 = 'That is a very happy person'; +const jdoc1 = { + 'content': jSentence1, + 'genre': 'persons', + 'embedding': [...(await pipe(jSentence1, pipeOptions)).data], +}; + +const jSentence2 = 'That is a happy dog'; +const jdoc2 = { + 'content': jSentence2, + 'genre': 'pets', + 'embedding': [...(await pipe(jSentence2, pipeOptions)).data], +}; + +const jSentence3 = 'Today is a sunny day'; +const jdoc3 = { + 'content': jSentence3, + 'genre': 'weather', + 'embedding': [...(await pipe(jSentence3, pipeOptions)).data], +}; + +await Promise.all([ + client.json.set('jdoc:1', '$', jdoc1), + client.json.set('jdoc:2', '$', jdoc2), + client.json.set('jdoc:3', '$', jdoc3) +]); +``` + +Query JSON documents using the same syntax, but note that the vector parameter must still be a binary string: + +```js +const jsons = await client.ft.search( + 'vector_json_idx', + '*=>[KNN 3 @embedding $B AS score]', + { + "PARAMS": { + B: Buffer.from( + (await pipe('That is a happy person', pipeOptions)).data.buffer + ), + }, + 'RETURN': ['score', 'content'], + 'DIALECT': '2' + }, +); +``` + +The results are identical to the hash document query, except for the `jdoc:` prefix: + +``` +jdoc:1: 'That is a very happy person', Score: 0.127055495977 +jdoc:2: 'That is a happy dog', Score: 0.836842417717 +jdoc:3: 'Today is a sunny day', Score: 1.50889515877 +``` + +## Learn more + +See +[Vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) +for more information about indexing options, distance metrics, and query format. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your Node.js application to a Redis database +linkTitle: Connect +title: Connect to the server +weight: 2 +--- + +## Basic connection + +Connect to localhost on port 6379. + +```js +import { createClient } from 'redis'; + +const client = createClient(); + +client.on('error', err => console.log('Redis Client Error', err)); + +await client.connect(); +``` + +Store and retrieve a simple string. + +```js +await client.set('key', 'value'); +const value = await client.get('key'); +``` + +Store and retrieve a map. + +```js +await client.hSet('user-session:123', { + name: 'John', + surname: 'Smith', + company: 'Redis', + age: 29 +}) + +let userSession = await client.hGetAll('user-session:123'); +console.log(JSON.stringify(userSession, null, 2)); +/* +{ + "surname": "Smith", + "name": "John", + "company": "Redis", + "age": "29" +} + */ +``` + +To connect to a different host or port, use a connection string in the format `redis[s]://[[username][:password]@][host][:port][/db-number]`: + +```js +createClient({ + url: 'redis://alice:foobared@awesome.redis.server:6380' +}); +``` +To check if the client is connected and ready to send commands, use `client.isReady`, which returns a Boolean. `client.isOpen` is also available. This returns `true` when the client's underlying socket is open, and `false` when it isn't (for example, when the client is still connecting or reconnecting after a network error). + +## Connect to a Redis cluster + +To connect to a Redis cluster, use `createCluster`. + +```js +import { createCluster } from 'redis'; + +const cluster = createCluster({ + rootNodes: [ + { + url: 'redis://127.0.0.1:16379' + }, + { + url: 'redis://127.0.0.1:16380' + }, + // ... 
+ ] +}); + +cluster.on('error', (err) => console.log('Redis Cluster Error', err)); + +await cluster.connect(); + +await cluster.set('foo', 'bar'); +const value = await cluster.get('foo'); +console.log(value); // returns 'bar' + +await cluster.close(); +``` + +## Connect to your production Redis with TLS + +When you deploy your application, use TLS and follow the [Redis security]({{< relref "/operate/oss_and_stack/management/security/" >}}) guidelines. + +```js +const client = createClient({ + username: 'default', // use your Redis user. More info https://redis.io/docs/latest/operate/oss_and_stack/management/security/acl/ + password: 'secret', // use your password here + socket: { + host: 'my-redis.cloud.redislabs.com', + port: 6379, + tls: true, + key: readFileSync('./redis_user_private.key'), + cert: readFileSync('./redis_user.crt'), + ca: [readFileSync('./redis_ca.pem')] + } +}); + +client.on('error', (err) => console.log('Redis Client Error', err)); + +await client.connect(); + +await client.set('foo', 'bar'); +const value = await client.get('foo'); +console.log(value) // returns 'bar' + +await client.destroy(); +``` + +You can also use discrete parameters and UNIX sockets. Details can be found in the [client configuration guide](https://github.com/redis/node-redis/blob/master/docs/client-configuration.md). + +## Reconnect after disconnection + +`node-redis` can attempt to reconnect automatically when +the connection to the server is lost. By default, it will retry +the connection using an +[exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) +strategy with some random "jitter" added to avoid multiple +clients retrying in sync with each other. + +You can also set the +`socket.reconnectionStrategy` field in the configuration to decide +whether to try to reconnect and how to approach it. Choose one of the following values for +`socket.reconnectionStrategy`: + +- `false`: (Default) Don't attempt to reconnect. +- `number`: Wait for this number of milliseconds and then attempt to reconnect. +- ``: Use a custom + function to decide how to handle reconnection. + +The custom function has the following signature: + +```js +(retries: number, cause: Error) => false | number | Error +``` + +It is called before each attempt to reconnect, with the `retries` +indicating how many attempts have been made so far. The `cause` parameter is an +[`Error`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error) +object with information about how the connection was lost. The return value +from the function can be any of the following: + +- `false`: Don't attempt to reconnect. +- `number`: Wait this number of milliseconds and then try again. +- `Error`: Same as `false`, but lets you supply extra information about why + no attempt was made to reconnect. 
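For instance, a minimal sketch of a custom function that waits a fixed 500 ms between attempts and gives up after 10 retries might look like the following (the delay and retry limit are purely illustrative):

```js
createClient({
  socket: {
    reconnectStrategy: (retries, cause) => {
      // Give up after 10 attempts and report why.
      if (retries > 10) {
        return new Error(`Could not reconnect after ${retries} attempts: ${cause.message}`);
      }
      // Otherwise, wait 500 ms before the next attempt.
      return 500;
    }
  }
});
```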
+ +The example below shows a `reconnectionStrategy` function that implements a +custom exponential backoff strategy: + +```js +createClient({ + socket: { + reconnectStrategy: retries => { + // Generate a random jitter between 0 – 100 ms: + const jitter = Math.floor(Math.random() * 100); + + // Delay is an exponential backoff, (2^retries) * 50 ms, with a + // maximum value of 3000 ms: + const delay = Math.min(Math.pow(2, retries) * 50, 3000); + + return delay + jitter; + } + } +}); +``` + +## Connection events + +The client object emits the following +[events](https://developer.mozilla.org/en-US/docs/Web/API/Event) that are +related to connection: + +- `connect`: (No parameters) The client is about to start connecting to the server. +- `ready`: (No parameters) The client has connected and is ready to use. +- `end`: (No parameters) The client has been intentionally closed using `client.quit()`. +- `error`: An error has occurred, which is described by the + [`Error`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error) + parameter. This is usually a network issue such as "Socket closed unexpectedly". +- `reconnecting`: (No parameters) The client is about to try reconnecting after the + connection was lost due to an error. +- `sharded-channel-moved`: The cluster slot of a subscribed + [sharded pub/sub channel]({{< relref "/develop/interact/pubsub#sharded-pubsub" >}}) + has been moved to another shard. Note that when you use a + [`RedisCluster`](#connect-to-a-redis-cluster) connection, this event is automatically + handled for you. See + [`sharded-channel-moved` event](https://github.com/redis/node-redis/blob/master/docs/pub-sub.md#sharded-channel-moved-event) for more information. + +Use code like the following to respond to these events: + +```js +client.on('error', error => { + console.error(`Redis client error:`, error); +}); +``` +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use the Redis Query Engine with JSON and hash documents. +linkTitle: Index and query documents +title: Index and query documents +weight: 2 +--- + +This example shows how to create a +[search index]({{< relref "/develop/interact/search-and-query/indexing" >}}) +for [JSON]({{< relref "/develop/data-types/json" >}}) documents and +run queries against the index. It then goes on to show the slight differences +in the equivalent code for [hash]({{< relref "/develop/data-types/hashes" >}}) +documents. + +## Initialize + +Make sure that you have [Redis Open Source]({{< relref "/operate/oss_and_stack/" >}}) +or another Redis server available. Also install the +[`node-redis`]({{< relref "/develop/clients/nodejs" >}}) client library if you +haven't already done so. + +Add the following dependencies: + +```js +import { + createClient, + SCHEMA_FIELD_TYPE, + FT_AGGREGATE_GROUP_BY_REDUCERS, + FT_AGGREGATE_STEPS, +} from 'redis'; +``` + +## Create data + +Create some test data to add to your database. The example data shown +below is compatible with both JSON and hash objects. + +```js +const user1 = { + name: 'Paul John', + email: 'paul.john@example.com', + age: 42, + city: 'London' +}; + +const user2 = { + name: 'Eden Zamir', + email: 'eden.zamir@example.com', + age: 29, + city: 'Tel Aviv' +}; + +const user3 = { + name: 'Paul Zamir', + email: 'paul.zamir@example.com', + age: 35, + city: 'Tel Aviv' +}; +``` + +## Add the index + +Connect to your Redis database. 
The code below shows the most +basic connection but see +[Connect to the server]({{< relref "/develop/clients/nodejs/connect" >}}) +to learn more about the available connection options. + +```js +const client = await createClient(); +await client.connect(); +``` + +Create an index. In this example, only JSON documents with the key prefix `user:` are indexed. For more information, see [Query syntax]({{< relref "/develop/interact/search-and-query/query/" >}}). + +```js +await client.ft.create('idx:users', { + '$.name': { + type: SchemaFieldTypes.TEXT, + AS: 'name' + }, + '$.city': { + type: SchemaFieldTypes.TEXT, + AS: 'city' + }, + '$.age': { + type: SchemaFieldTypes.NUMERIC, + AS: 'age' + } +}, { + ON: 'JSON', + PREFIX: 'user:' +}); +``` + +## Add the data + +Add the three sets of user data to the database as +[JSON]({{< relref "/develop/data-types/json" >}}) objects. +If you use keys with the `user:` prefix then Redis will index the +objects automatically as you add them. Note that placing +the commands in a `Promise.all()` call is an easy way to create a +[pipeline]({{< relref "/develop/clients/nodejs/transpipe" >}}), +which is more efficient than sending the commands individually. + +```js +const [user1Reply, user2Reply, user3Reply] = await Promise.all([ + client.json.set('user:1', '$', user1), + client.json.set('user:2', '$', user2), + client.json.set('user:3', '$', user3) +]); +``` + +## Query the data + +You can now use the index to search the JSON objects. The +[query]({{< relref "/develop/interact/search-and-query/query" >}}) +below searches for objects that have the text "Paul" in any field +and have an `age` value in the range 30 to 40: + +```js +let findPaulResult = await client.ft.search('idx:users', 'Paul @age:[30 40]'); + +console.log(findPaulResult.total); // >>> 1 + +findPaulResult.documents.forEach(doc => { + console.log(`ID: ${doc.id}, name: ${doc.value.name}, age: ${doc.value.age}`); +}); +``` + +Specify query options to return only the `city` field: + +```js +let citiesResult = await client.ft.search('idx:users', '*',{ + RETURN: 'city' +}); + +console.log(citiesResult.total); // >>> 3 + +citiesResult.documents.forEach(cityDoc => { + console.log(cityDoc.value); +}); +``` + +Use an +[aggregation query]({{< relref "/develop/interact/search-and-query/query/aggregation" >}}) +to count all users in each city. + +```js +let aggResult = await client.ft.aggregate('idx:users', '*', { + STEPS: [{ + type: AggregateSteps.GROUPBY, + properties: '@city', + REDUCE: [{ + type: AggregateGroupByReducers.COUNT, + AS: 'count' + }] + }] +}); + +console.log(aggResult.total); // >>> 2 + +aggResult.results.forEach(result => { + console.log(`${result.city} - ${result.count}`); +}); +``` + +Finally, close the connection to Redis. + +```js +await client.quit(); +``` + +## Differences with hash documents + +Indexing for hash documents is very similar to JSON indexing but you +need to specify some slightly different options. + +When you create the schema for a hash index, you don't need to +add aliases for the fields, since you use the basic names to access +the fields anyway. Also, you must use `HASH` for the `ON` option +when you create the index. The code below shows these changes with +a new index called `hash-idx:users`, which is otherwise the same as +the `idx:users` index used for JSON documents in the previous examples. 
+ +```js +await client.ft.create('hash-idx:users', { + 'name': { + type: SchemaFieldTypes.TEXT + }, + 'city': { + type: SchemaFieldTypes.TEXT + }, + 'age': { + type: SchemaFieldTypes.NUMERIC + } +}, { + ON: 'HASH', + PREFIX: 'huser:' +}); +``` + +You use [`hSet()`]({{< relref "/commands/hset" >}}) to add the hash +documents instead of [`json.set()`]({{< relref "/commands/json.set" >}}), +but the same flat `userX` objects work equally well with either +hash or JSON: + +```js +const [huser1Reply, huser2Reply, huser3Reply] = await Promise.all([ + client.hSet('huser:1', user1), + client.hSet('huser:2', user2), + client.hSet('huser:3', user3) +]); +``` + +The query commands work the same here for hash as they do for JSON (but +the name of the hash index is different). The format of the result is +also the same: + +```js +let findPaulHashResult = await client.ft.search( + 'hash-idx:users', 'Paul @age:[30 40]' +); + +console.log(findPaulHashResult.total); // >>> 1 + +findPaulHashResult.documents.forEach(doc => { + console.log(`ID: ${doc.id}, name: ${doc.value.name}, age: ${doc.value.age}`); +}); +// >>> ID: huser:3, name: Paul Zamir, age: 35 +``` + +## More information + +See the [Redis Query Engine]({{< relref "/develop/interact/search-and-query" >}}) docs +for a full description of all query features with examples. +--- +aliases: /develop/connect/clients/nodejs +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your Node.js/JavaScript application to a Redis database +linkTitle: node-redis (JavaScript) +title: node-redis guide (JavaScript) +weight: 4 +--- + +[node-redis](https://github.com/redis/node-redis) is the Redis client for Node.js/JavaScript. +The sections below explain how to install `node-redis` and connect your application +to a Redis database. + +`node-redis` requires a running Redis server. See [here]({{< relref "/operate/oss_and_stack/install/" >}}) for Redis Open Source installation instructions. + +You can also access Redis with an object-mapping client interface. See +[RedisOM for Node.js]({{< relref "/integrate/redisom-for-node-js" >}}) +for more information. + +## Install + +To install node-redis, run: + +```bash +npm install redis +``` + +## Connect and test + +Connect to localhost on port 6379. + +```js +import { createClient } from 'redis'; + +const client = createClient(); + +client.on('error', err => console.log('Redis Client Error', err)); + +await client.connect(); +``` + +Store and retrieve a simple string. + +```js +await client.set('key', 'value'); +const value = await client.get('key'); +``` + +Store and retrieve a map. + +```js +await client.hSet('user-session:123', { + name: 'John', + surname: 'Smith', + company: 'Redis', + age: 29 +}) + +let userSession = await client.hGetAll('user-session:123'); +console.log(JSON.stringify(userSession, null, 2)); +/* +{ + "surname": "Smith", + "name": "John", + "company": "Redis", + "age": "29" +} + */ +``` + +To connect to a different host or port, use a connection string in the format `redis[s]://[[username][:password]@][host][:port][/db-number]`: + +```js +createClient({ + url: 'redis://alice:foobared@awesome.redis.server:6380' +}); +``` +To check if the client is connected and ready to send commands, use `client.isReady`, which returns a Boolean. `client.isOpen` is also available. This returns `true` when the client's underlying socket is open, and `false` when it isn't (for example, when the client is still connecting or reconnecting after a network error). 
+ +## More information + +The [`node-redis` website](https://redis.js.org/) has more examples. +The [Github repository](https://github.com/redis/node-redis) also has useful +information, including a guide to the +[connection configuration options](https://github.com/redis/node-redis/blob/master/docs/client-configuration.md) you can use. + +See also the other pages in this section for more information and examples: +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Get your Node.js app ready for production +linkTitle: Production usage +title: Production usage +weight: 5 +--- + +This guide offers recommendations to get the best reliability and +performance in your production environment. + +## Checklist + +Each item in the checklist below links to the section +for a recommendation. Use the checklist icons to record your +progress in implementing the recommendations. + +{{< checklist "nodeprodlist" >}} + {{< checklist-item "#handling-errors" >}}Handling errors{{< /checklist-item >}} + {{< checklist-item "#handling-reconnections" >}}Handling reconnections{{< /checklist-item >}} + {{< checklist-item "#timeouts" >}}Timeouts{{< /checklist-item >}} +{{< /checklist >}} + +## Recommendations + +### Handling errors + +Node-Redis provides [multiple events to handle various scenarios](https://github.com/redis/node-redis?tab=readme-ov-file#events), among which the most critical is the `error` event. + +This event is triggered whenever an error occurs within the client. + +It is crucial to listen for error events. + +If a client does not register at least one error listener and an error occurs, the system will throw that error, potentially causing the Node.js process to exit unexpectedly. +See [the EventEmitter docs](https://nodejs.org/api/events.html#events_error_events) for more details. + +```typescript +const client = createClient({ + // ... client options +}); +// Always ensure there's a listener for errors in the client to prevent process crashes due to unhandled errors +client.on('error', error => { + console.error(`Redis client error:`, error); +}); +``` + +### Handling reconnections + +When the socket closes unexpectedly (without calling the `quit()` or `disconnect()` methods), +the client can automatically restore the connection. A simple +[exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) strategy +for reconnection is enabled by default, but you can replace this with your +own custom strategy. See +[Reconnect after disconnection]({{< relref "/develop/clients/nodejs/connect#reconnect-after-disconnection" >}}) +for more information. + +### Timeouts + +To set a timeout for a connection, use the `connectTimeout` option: +```typescript +const client = createClient({ + socket: { + // setting a 10-second timeout + connectTimeout: 10000 // in milliseconds + } +}); +client.on('error', error => console.error('Redis client error:', error)); +```--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use Redis pipelines and transactions +linkTitle: Pipelines/transactions +title: Pipelines and transactions +weight: 4 +--- + +Redis lets you send a sequence of commands to the server together in a batch. +There are two types of batch that you can use: + +- **Pipelines** avoid network and processing overhead by sending several commands + to the server together in a single communication. The server then sends back + a single communication with all the responses. 
See the + [Pipelining]({{< relref "/develop/use/pipelining" >}}) page for more + information. +- **Transactions** guarantee that all the included commands will execute + to completion without being interrupted by commands from other clients. + See the [Transactions]({{< relref "/develop/interact/transactions" >}}) + page for more information. + +## Execute a pipeline + +To execute commands in a pipeline, you first create a pipeline object +and then add commands to it using methods that resemble the standard +command methods (for example, `Set()` and `Get()`). The commands are +buffered in the pipeline and only execute when you call the `Exec()` +method on the pipeline object. + +The main difference with the pipeline commands is that their return +values contain a valid result only after the pipeline has finished executing. +You can access the result using the `Val()` method instead of +`Result()` (note that errors are reported by the `Exec()` method rather +than by the individual commands). + +{{< clients-example pipe_trans_tutorial basic_pipe Go >}} +{{< /clients-example >}} + +You can also create a pipeline using the `Pipelined()` method. +This executes pipeline commands in a callback function that you +provide and calls `Exec()` automatically after it returns: + +{{< clients-example pipe_trans_tutorial basic_pipe_pipelined Go >}} +{{< /clients-example >}} + +## Execute a transaction + +A transaction works in a similar way to a pipeline. Create a +transaction object with the `TxPipeline()` method, call command methods +on that object, and then call the transaction object's +`Exec()` method to execute it. You can access the results +from commands in the transaction after it completes using the +`Val()` method. + +{{< clients-example pipe_trans_tutorial basic_trans Go >}} +{{< /clients-example >}} + +There is also a `TxPipelined()` method that works in a similar way +to `Pipelined()`, described above: + +{{< clients-example pipe_trans_tutorial basic_trans_txpipelined Go >}} +{{< /clients-example >}} + +## Watch keys for changes + +Redis supports *optimistic locking* to avoid inconsistent updates +to different keys. The basic idea is to watch for changes to any +keys that you use in a transaction while you are are processing the +updates. If the watched keys do change, you must restart the updates +with the latest data from the keys. See +[Transactions]({{< relref "/develop/interact/transactions" >}}) +for more information about optimistic locking. + +The code below reads a string +that represents a `PATH` variable for a command shell, then appends a new +command path to the string before attempting to write it back. If the watched +key is modified by another client before writing, the transaction aborts. +The `Watch()` method receives a callback function where you execute the +commands you want to watch. In the body of this callback, you can execute +read-only commands before the transaction using the usual client object +(called `rdb` in our examples) and receive an immediate result. Start the +transaction itself by calling `TxPipeline()` or `TxPipelined()` on the +`Tx` object passed to the callback. `Watch()` also receives one or more +`string` parameters after the callback that represent the keys you want +to watch. 
+ +For production usage, you would generally call code like the following in +a loop to retry it until it succeeds or else report or log the failure: + +{{< clients-example pipe_trans_tutorial trans_watch Go >}} +{{< /clients-example >}} +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to index and query vector embeddings with Redis +linkTitle: Index and query vectors +title: Index and query vectors +weight: 3 +--- + +[Redis Query Engine]({{< relref "/develop/interact/search-and-query" >}}) +lets you index vector fields in [hash]({{< relref "/develop/data-types/hashes" >}}) +or [JSON]({{< relref "/develop/data-types/json" >}}) objects (see the +[Vectors]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}) +reference page for more information). +Among other things, vector fields can store *text embeddings*, which are AI-generated vector +representations of the semantic information in pieces of text. The +[vector distance]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#distance-metrics" >}}) +between two embeddings indicates how similar they are semantically. By comparing the +similarity of an embedding generated from some query text with embeddings stored in hash +or JSON fields, Redis can retrieve documents that closely match the query in terms +of their meaning. + +In the example below, we use the +[`huggingfaceembedder`](https://pkg.go.dev/github.com/henomis/lingoose@v0.3.0/embedder/huggingface) +package from the [`LinGoose`](https://pkg.go.dev/github.com/henomis/lingoose@v0.3.0) +framework to generate vector embeddings to store and index with +Redis Query Engine. The code is first demonstrated for hash documents with a +separate section to explain the +[differences with JSON documents](#differences-with-json-documents). + +## Initialize + +Start a new Go module with the following command: + +```bash +go mod init vecexample +``` + +Then, in your module folder, install +[`go-redis`]({{< relref "/develop/clients/go" >}}) +and the +[`huggingfaceembedder`](https://pkg.go.dev/github.com/henomis/lingoose@v0.3.0/embedder/huggingface) +package: + +```bash +go get github.com/redis/go-redis/v9 +go get github.com/henomis/lingoose/embedder/huggingface +``` + +Add the following imports to your module's main program file: + +```go +package main + +import ( + "context" + "encoding/binary" + "fmt" + "math" + + huggingfaceembedder "github.com/henomis/lingoose/embedder/huggingface" + "github.com/redis/go-redis/v9" +) +``` + +You must also create a [HuggingFace account](https://huggingface.co/join) +and add a new access token to use the embedding model. See the +[HuggingFace](https://huggingface.co/docs/hub/en/security-tokens) +docs to learn how to create and manage access tokens. Note that the +account and the `all-MiniLM-L6-v2` model that we will use to produce +the embeddings for this example are both available for free. + +## Add a helper function + +The `huggingfaceembedder` model outputs the embeddings as a +`[]float32` array. If you are storing your documents as +[hash]({{< relref "/develop/data-types/hashes" >}}) objects, then you +must convert this array to a `byte` string before adding it as a hash field. 
+The function shown below uses Go's [`binary`](https://pkg.go.dev/encoding/binary) +package to produce the `byte` string: + +```go +func floatsToBytes(fs []float32) []byte { + buf := make([]byte, len(fs)*4) + + for i, f := range fs { + u := math.Float32bits(f) + binary.NativeEndian.PutUint32(buf[i*4:], u) + } + + return buf +} +``` + +Note that if you are using [JSON]({{< relref "/develop/data-types/json" >}}) +objects to store your documents instead of hashes, then you should store +the `[]float32` array directly without first converting it to a `byte` +string (see [Differences with JSON documents](#differences-with-json-documents) +below). + +## Create the index + +In the `main()` function, connect to Redis and delete any index previously +created with the name `vector_idx`: + +```go +ctx := context.Background() +rdb := redis.NewClient(&redis.Options{ + Addr: "localhost:6379", + Password: "", // no password docs + DB: 0, // use default DB + Protocol: 2, +}) + +rdb.FTDropIndexWithArgs(ctx, + "vector_idx", + &redis.FTDropIndexOptions{ + DeleteDocs: true, + }, +) +``` + +Next, create the index. +The schema in the example below specifies hash objects for storage and includes +three fields: the text content to index, a +[tag]({{< relref "/develop/interact/search-and-query/advanced-concepts/tags" >}}) +field to represent the "genre" of the text, and the embedding vector generated from +the original text content. The `embedding` field specifies +[HNSW]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#hnsw-index" >}}) +indexing, the +[L2]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#distance-metrics" >}}) +vector distance metric, `Float32` values to represent the vector's components, +and 384 dimensions, as required by the `all-MiniLM-L6-v2` embedding model. + +```go +_, err := rdb.FTCreate(ctx, + "vector_idx", + &redis.FTCreateOptions{ + OnHash: true, + Prefix: []any{"doc:"}, + }, + &redis.FieldSchema{ + FieldName: "content", + FieldType: redis.SearchFieldTypeText, + }, + &redis.FieldSchema{ + FieldName: "genre", + FieldType: redis.SearchFieldTypeTag, + }, + &redis.FieldSchema{ + FieldName: "embedding", + FieldType: redis.SearchFieldTypeVector, + VectorArgs: &redis.FTVectorArgs{ + HNSWOptions: &redis.FTHNSWOptions{ + Dim: 384, + DistanceMetric: "L2", + Type: "FLOAT32", + }, + }, + }, +).Result() + +if err != nil { + panic(err) +} +``` + +## Create an embedder instance + +You need an instance of the `huggingfaceembedder` class to +generate the embeddings. Use the code below to create an +instance that uses the `sentence-transformers/all-MiniLM-L6-v2` +model, passing your HuggingFace access token to the `WithToken()` +method. + +```go +hf := huggingfaceembedder.New(). + WithToken(""). + WithModel("sentence-transformers/all-MiniLM-L6-v2") +``` + +## Add data + +You can now supply the data objects, which will be indexed automatically +when you add them with [`HSet()`]({{< relref "/commands/hset" >}}), as long as +you use the `doc:` prefix specified in the index definition. + +Use the `Embed()` method of `huggingfacetransformer` +as shown below to create the embeddings that represent the `content` fields. +This method takes an array of strings and outputs a corresponding +array of `Embedding` objects. +Use the `ToFloat32()` method of `Embedding` to produce the array of float +values that we need, and use the `floatsToBytes()` function we defined +above to convert this array to a `byte` string. 
+ +```go +sentences := []string{ + "That is a very happy person", + "That is a happy dog", + "Today is a sunny day", +} + +tags := []string{ + "persons", "pets", "weather", +} + +embeddings, err := hf.Embed(ctx, sentences) + +if err != nil { + panic(err) +} + +for i, emb := range embeddings { + buffer := floatsToBytes(emb.ToFloat32()) + + if err != nil { + panic(err) + } + + _, err = rdb.HSet(ctx, + fmt.Sprintf("doc:%v", i), + map[string]any{ + "content": sentences[i], + "genre": tags[i], + "embedding": buffer, + }, + ).Result() + + if err != nil { + panic(err) + } +} +``` + +## Run a query + +After you have created the index and added the data, you are ready to run a query. +To do this, you must create another embedding vector from your chosen query +text. Redis calculates the similarity between the query vector and each +embedding vector in the index as it runs the query. It then ranks the +results in order of this numeric similarity value. + +The code below creates the query embedding using `Embed()`, as with +the indexing, and passes it as a parameter when the query executes +(see +[Vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) +for more information about using query parameters with embeddings). + +```go +queryEmbedding, err := hf.Embed(ctx, []string{ + "That is a happy person", +}) + +if err != nil { + panic(err) +} + +buffer := floatsToBytes(queryEmbedding[0].ToFloat32()) + +if err != nil { + panic(err) +} + +results, err := rdb.FTSearchWithArgs(ctx, + "vector_idx", + "*=>[KNN 3 @embedding $vec AS vector_distance]", + &redis.FTSearchOptions{ + Return: []redis.FTSearchReturn{ + {FieldName: "vector_distance"}, + {FieldName: "content"}, + }, + DialectVersion: 2, + Params: map[string]any{ + "vec": buffer, + }, + }, +).Result() + +if err != nil { + panic(err) +} + +for _, doc := range results.Docs { + fmt.Printf( + "ID: %v, Distance:%v, Content:'%v'\n", + doc.ID, doc.Fields["vector_distance"], doc.Fields["content"], + ) +} +``` + +The code is now ready to run, but note that it may take a while to complete when +you run it for the first time (which happens because `huggingfacetransformer` +must download the `all-MiniLM-L6-v2` model data before it can +generate the embeddings). When you run the code, it outputs the following text: + +``` +ID: doc:0, Distance:0.114169843495, Content:'That is a very happy person' +ID: doc:1, Distance:0.610845327377, Content:'That is a happy dog' +ID: doc:2, Distance:1.48624765873, Content:'Today is a sunny day' +``` + +The results are ordered according to the value of the `vector_distance` +field, with the lowest distance indicating the greatest similarity to the query. +As you would expect, the result for `doc:0` with the content text *"That is a very happy person"* +is the result that is most similar in meaning to the query text +*"That is a happy person"*. + +## Differences with JSON documents + +Indexing JSON documents is similar to hash indexing, but there are some +important differences. JSON allows much richer data modelling with nested fields, so +you must supply a [path]({{< relref "/develop/data-types/json/path" >}}) in the schema +to identify each field you want to index. However, you can declare a short alias for each +of these paths (using the `As` option) to avoid typing it in full for +every query. Also, you must set `OnJSON` to `true` when you create the index. 
+ +The code below shows these differences, but the index is otherwise very similar to +the one created previously for hashes: + +```go +_, err = rdb.FTCreate(ctx, + "vector_json_idx", + &redis.FTCreateOptions{ + OnJSON: true, + Prefix: []any{"jdoc:"}, + }, + &redis.FieldSchema{ + FieldName: "$.content", + As: "content", + FieldType: redis.SearchFieldTypeText, + }, + &redis.FieldSchema{ + FieldName: "$.genre", + As: "genre", + FieldType: redis.SearchFieldTypeTag, + }, + &redis.FieldSchema{ + FieldName: "$.embedding", + As: "embedding", + FieldType: redis.SearchFieldTypeVector, + VectorArgs: &redis.FTVectorArgs{ + HNSWOptions: &redis.FTHNSWOptions{ + Dim: 384, + DistanceMetric: "L2", + Type: "FLOAT32", + }, + }, + }, +).Result() +``` + +Use [`JSONSet()`]({{< relref "/commands/json.set" >}}) to add the data +instead of [`HSet()`]({{< relref "/commands/hset" >}}). The maps +that specify the fields have the same structure as the ones used for `HSet()`. + +An important difference with JSON indexing is that the vectors are +specified using lists instead of binary strings. The loop below is similar +to the one used previously to add the hash data, but it doesn't use the +`floatsToBytes()` function to encode the `float32` array. + +```go +for i, emb := range embeddings { + _, err = rdb.JSONSet(ctx, + fmt.Sprintf("jdoc:%v", i), + "$", + map[string]any{ + "content": sentences[i], + "genre": tags[i], + "embedding": emb.ToFloat32(), + }, + ).Result() + + if err != nil { + panic(err) + } +} +``` + +The query is almost identical to the one for the hash documents. This +demonstrates how the right choice of aliases for the JSON paths can +save you having to write complex queries. An important thing to notice +is that the vector parameter for the query is still specified as a +binary string (using the `floatsToBytes()` method), even though the data for +the `embedding` field of the JSON was specified as an array. + +```go +jsonQueryEmbedding, err := hf.Embed(ctx, []string{ + "That is a happy person", +}) + +if err != nil { + panic(err) +} + +jsonBuffer := floatsToBytes(jsonQueryEmbedding[0].ToFloat32()) + +jsonResults, err := rdb.FTSearchWithArgs(ctx, + "vector_json_idx", + "*=>[KNN 3 @embedding $vec AS vector_distance]", + &redis.FTSearchOptions{ + Return: []redis.FTSearchReturn{ + {FieldName: "vector_distance"}, + {FieldName: "content"}, + }, + DialectVersion: 2, + Params: map[string]any{ + "vec": jsonBuffer, + }, + }, +).Result() +``` + +Apart from the `jdoc:` prefixes for the keys, the result from the JSON +query is the same as for hash: + +``` +ID: jdoc:0, Distance:0.114169843495, Content:'That is a very happy person' +ID: jdoc:1, Distance:0.610845327377, Content:'That is a happy dog' +ID: jdoc:2, Distance:1.48624765873, Content:'Today is a sunny day' +``` + +## Learn more + +See +[Vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) +for more information about the indexing options, distance metrics, and query format +for vectors. 
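+
+As a supplementary note, if your application later needs to read an embedding back
+from a hash field (for example, with `HGet()`) and work with it as numbers, you can
+reverse the conversion performed by `floatsToBytes()`. The helper below is an
+illustrative sketch rather than part of the tutorial code above; it assumes the same
+`binary.NativeEndian`, four-bytes-per-component layout used when the embedding was stored:
+
+```go
+// bytesToFloats decodes a byte string produced by floatsToBytes()
+// back into a []float32 slice.
+func bytesToFloats(bs []byte) []float32 {
+	fs := make([]float32, len(bs)/4)
+
+	for i := range fs {
+		u := binary.NativeEndian.Uint32(bs[i*4:])
+		fs[i] = math.Float32frombits(u)
+	}
+
+	return fs
+}
+```
+
+Note that `go-redis` returns hash field values as strings, so you would pass
+`[]byte(val)` to this function after retrieving the field.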
+--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your Go application to a Redis database +linkTitle: Connect +title: Connect to the server +weight: 1 +--- + +## Basic connection + +The following example shows the simplest way to connect to a Redis server: + +```go +import ( + "context" + "fmt" + "github.com/redis/go-redis/v9" +) + +func main() { + client := redis.NewClient(&redis.Options{ + Addr: "localhost:6379", + Password: "", // No password set + DB: 0, // Use default DB + Protocol: 2, // Connection protocol + }) +} +``` + +You can also connect using a connection string: + +```go +opt, err := redis.ParseURL("redis://:@localhost:6379/") +if err != nil { + panic(err) +} + +client := redis.NewClient(opt) +``` + +After connecting, you can test the connection by storing and retrieving +a simple [string]({{< relref "/develop/data-types/strings" >}}): + +```go +ctx := context.Background() + +err := client.Set(ctx, "foo", "bar", 0).Err() +if err != nil { + panic(err) +} + +val, err := client.Get(ctx, "foo").Result() +if err != nil { + panic(err) +} +fmt.Println("foo", val) +``` + +## Connect to a Redis cluster + +To connect to a Redis cluster, use `NewClusterClient()`. You can specify +one or more cluster endpoints with the `Addrs` option: + +```go +client := redis.NewClusterClient(&redis.ClusterOptions{ + Addrs: []string{":16379", ":16380", ":16381", ":16382", ":16383", ":16384"}, + + // To route commands by latency or randomly, enable one of the following. + //RouteByLatency: true, + //RouteRandomly: true, +}) +``` + +## Connect to your production Redis with TLS + +When you deploy your application, use TLS and follow the +[Redis security]({{< relref "/operate/oss_and_stack/management/security/" >}}) guidelines. + +Establish a secure connection with your Redis database: + +```go +// Load client cert +cert, err := tls.LoadX509KeyPair("redis_user.crt", "redis_user_private.key") +if err != nil { + log.Fatal(err) +} + +// Load CA cert +caCert, err := os.ReadFile("redis_ca.pem") +if err != nil { + log.Fatal(err) +} +caCertPool := x509.NewCertPool() +caCertPool.AppendCertsFromPEM(caCert) + +client := redis.NewClient(&redis.Options{ + Addr: "my-redis.cloud.redislabs.com:6379", + Username: "default", // use your Redis user. More info https://redis.io/docs/latest/operate/oss_and_stack/management/security/acl/ + Password: "secret", // use your Redis password + TLSConfig: &tls.Config{ + MinVersion: tls.VersionTLS12, + Certificates: []tls.Certificate{cert}, + RootCAs: caCertPool, + }, +}) + +//send SET command +err = client.Set(ctx, "foo", "bar", 0).Err() +if err != nil { + panic(err) +} + +//send GET command and print the value +val, err := client.Get(ctx, "foo").Result() +if err != nil { + panic(err) +} +fmt.Println("foo", val) +``` +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use the Redis query engine with JSON and hash documents. +linkTitle: Index and query documents +title: Index and query documents +weight: 2 +--- + +This example shows how to create a +[search index]({{< relref "/develop/interact/search-and-query/indexing" >}}) +for [JSON]({{< relref "/develop/data-types/json" >}}) documents and +run queries against the index. It then goes on to show the slight differences +in the equivalent code for [hash]({{< relref "/develop/data-types/hashes" >}}) +documents. 
+ +## Initialize + +Make sure that you have [Redis Open Source]({{< relref "/operate/oss_and_stack/" >}}) +or another Redis server available. Also install the +[`go-redis`]({{< relref "/develop/clients/go" >}}) client library if you +haven't already done so. + +Add the following dependencies: + +{{< clients-example go_home_json import >}} +{{< /clients-example >}} + +## Create data + +Create some test data to add to your database. The example data shown +below is compatible with both JSON and hash objects. + +{{< clients-example go_home_json create_data >}} +{{< /clients-example >}} + +## Add the index + +Connect to your Redis database. The code below shows the most +basic connection but see +[Connect to the server]({{< relref "/develop/clients/go/connect" >}}) +to learn more about the available connection options. + +{{< clients-example go_home_json connect >}} +{{< /clients-example >}} + +{{< note >}}The connection options in the example specify +[RESP2]({{< relref "/develop/reference/protocol-spec" >}}) in the `Protocol` +field. We recommend that you use RESP2 for Redis query engine operations in `go-redis` +because some of the response structures for the default RESP3 are currently +incomplete and so you must handle the "raw" responses in your own code. + +If you do want to use RESP3, you should set the `UnstableResp3` option when +you connect: + +```go +rdb := redis.NewClient(&redis.Options{ + UnstableResp3: true, + // Other options... +}) +``` + +You must also access command results using the `RawResult()` and `RawVal()` methods +rather than the usual `Result()` and `Val()`: + +```go +res1, err := client.FTSearchWithArgs( + ctx, "txt", "foo bar", &redis.FTSearchOptions{}, +).RawResult() +val1 := client.FTSearchWithArgs( + ctx, "txt", "foo bar", &redis.FTSearchOptions{}, +).RawVal() +``` +{{< /note >}} + +Use the code below to create a search index. The `FTCreateOptions` parameter enables +indexing only for JSON objects where the key has a `user:` prefix. +The +[schema]({{< relref "/develop/interact/search-and-query/indexing" >}}) +for the index has three fields for the user's name, age, and city. +The `FieldName` field of the `FieldSchema` struct specifies a +[JSON path]({{< relref "/develop/data-types/json/path" >}}) +that identifies which data field to index. Use the `As` struct field +to provide an alias for the JSON path expression. You can use +the alias in queries as a short and intuitive way to refer to the +expression, instead of typing it in full: + +{{< clients-example go_home_json make_index >}} +{{< /clients-example >}} + +## Add the data + +Add the three sets of user data to the database as +[JSON]({{< relref "/develop/data-types/json" >}}) objects. +If you use keys with the `user:` prefix then Redis will index the +objects automatically as you add them: + +{{< clients-example go_home_json add_data >}} +{{< /clients-example >}} + +## Query the data + +You can now use the index to search the JSON objects. The +[query]({{< relref "/develop/interact/search-and-query/query" >}}) +below searches for objects that have the text "Paul" in any field +and have an `age` value in the range 30 to 40: + +{{< clients-example go_home_json query1 >}} +{{< /clients-example >}} + +Specify query options to return only the `city` field: + +{{< clients-example go_home_json query2 >}} +{{< /clients-example >}} + +You can also use the same query with the `CountOnly` option +enabled to get the number of documents found without +returning the documents themselves. 
+ +{{< clients-example go_home_json query2count_only >}} +{{< /clients-example >}} + +Use an +[aggregation query]({{< relref "/develop/interact/search-and-query/query/aggregation" >}}) +to count all users in each city. + +{{< clients-example go_home_json query3 >}} +{{< /clients-example >}} + +## Differences with hash documents + +Indexing for hash documents is very similar to JSON indexing but you +need to specify some slightly different options. + +When you create the schema for a hash index, you don't need to +add aliases for the fields, since you use the basic names to access +the fields anyway. Also, you must set `OnHash` to `true` in the `FTCreateOptions` +object when you create the index. The code below shows these changes with +a new index called `hash-idx:users`, which is otherwise the same as +the `idx:users` index used for JSON documents in the previous examples. + +{{< clients-example go_home_json make_hash_index >}} +{{< /clients-example >}} + +You use [`HSet()`]({{< relref "/commands/hset" >}}) to add the hash +documents instead of [`JSONSet()`]({{< relref "/commands/json.set" >}}), +but the same flat `userX` maps work equally well with either +hash or JSON: + +{{< clients-example go_home_json add_hash_data >}} +{{< /clients-example >}} + +The query commands work the same here for hash as they do for JSON (but +the name of the hash index is different). The format of the result is +almost the same except that the fields are returned directly in the +`Document` object map of the result (for JSON, the fields are all enclosed +in a string under the key "$"): + +{{< clients-example go_home_json query1_hash >}} +{{< /clients-example >}} + +## More information + +See the [Redis query engine]({{< relref "/develop/interact/search-and-query" >}}) docs +for a full description of all query features with examples. +--- +aliases: /develop/connect/clients/go +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your Go application to a Redis database +linkTitle: go-redis (Go) +title: go-redis guide (Go) +weight: 7 +--- + +[`go-redis`](https://github.com/redis/go-redis) is the [Go](https://go.dev/) client for Redis. +The sections below explain how to install `go-redis` and connect your application to a Redis database. + +`go-redis` requires a running Redis server. See [here]({{< relref "/operate/oss_and_stack/install/" >}}) for Redis Open Source installation instructions. + +## Install + +`go-redis` supports the last two Go versions. 
You can only use it from within +a Go module, so you must initialize a Go module before you start, or add your code to +an existing module: + +``` +go mod init github.com/my/repo +``` + +Use the `go get` command to install `go-redis/v9`: + +``` +go get github.com/redis/go-redis/v9 +``` + +## Connect + +The following example shows the simplest way to connect to a Redis server: + +```go +import ( + "context" + "fmt" + "github.com/redis/go-redis/v9" +) + +func main() { + client := redis.NewClient(&redis.Options{ + Addr: "localhost:6379", + Password: "", // No password set + DB: 0, // Use default DB + Protocol: 2, // Connection protocol + }) +} +``` + +You can also connect using a connection string: + +```go +opt, err := redis.ParseURL("redis://:@localhost:6379/") +if err != nil { + panic(err) +} + +client := redis.NewClient(opt) +``` + +After connecting, you can test the connection by storing and retrieving +a simple [string]({{< relref "/develop/data-types/strings" >}}): + +```go +ctx := context.Background() + +err := client.Set(ctx, "foo", "bar", 0).Err() +if err != nil { + panic(err) +} + +val, err := client.Get(ctx, "foo").Result() +if err != nil { + panic(err) +} +fmt.Println("foo", val) +``` + +You can also easily store and retrieve a [hash]({{< relref "/develop/data-types/hashes" >}}): + +```go +hashFields := []string{ + "model", "Deimos", + "brand", "Ergonom", + "type", "Enduro bikes", + "price", "4972", +} + +res1, err := client.HSet(ctx, "bike:1", hashFields).Result() + +if err != nil { + panic(err) +} + +fmt.Println(res1) // >>> 4 + +res2, err := client.HGet(ctx, "bike:1", "model").Result() + +if err != nil { + panic(err) +} + +fmt.Println(res2) // >>> Deimos + +res3, err := client.HGet(ctx, "bike:1", "price").Result() + +if err != nil { + panic(err) +} + +fmt.Println(res3) // >>> 4972 + +res4, err := client.HGetAll(ctx, "bike:1").Result() + +if err != nil { + panic(err) +} + +fmt.Println(res4) +// >>> map[brand:Ergonom model:Deimos price:4972 type:Enduro bikes] + ``` + + Use + [struct tags](https://stackoverflow.com/questions/10858787/what-are-the-uses-for-struct-tags-in-go) + of the form `redis:""` with the `Scan()` method to parse fields from + a hash directly into corresponding struct fields: + + ```go +type BikeInfo struct { + Model string `redis:"model"` + Brand string `redis:"brand"` + Type string `redis:"type"` + Price int `redis:"price"` +} + +var res4a BikeInfo +err = client.HGetAll(ctx, "bike:1").Scan(&res4a) + +if err != nil { + panic(err) +} + +fmt.Printf("Model: %v, Brand: %v, Type: %v, Price: $%v\n", + res4a.Model, res4a.Brand, res4a.Type, res4a.Price) +// >>> Model: Deimos, Brand: Ergonom, Type: Enduro bikes, Price: $4972 + ``` + +## Observability + +`go-redis` supports [OpenTelemetry](https://opentelemetry.io/) instrumentation. +to monitor performance and trace the execution of Redis commands. +For example, the following code instruments Redis commands to collect traces, logs, and metrics: + +```go +import ( + "github.com/redis/go-redis/v9" + "github.com/redis/go-redis/extra/redisotel/v9" +) + +client := redis.NewClient(&redis.Options{...}) + +// Enable tracing instrumentation. +if err := redisotel.InstrumentTracing(client); err != nil { + panic(err) +} + +// Enable metrics instrumentation. +if err := redisotel.InstrumentMetrics(client); err != nil { + panic(err) +} +``` + +See the `go-redis` [GitHub repo](https://github.com/redis/go-redis/blob/master/example/otel/README.md). +for more OpenTelemetry examples. 
+ +## More information + +See the other pages in this section for more information and examples. +Further examples are available at the [`go-redis`](https://redis.uptrace.dev/guide/) website +and the [GitHub repository](https://github.com/redis/go-redis). +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your Java application to a Redis database +linkTitle: Connect +title: Connect to the server +weight: 2 +--- + +Start by creating a connection to your Redis server. There are many ways to achieve this using Lettuce. Here are a few. + +## Basic connection + +```java +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; + +public class ConnectBasicTest { + + public void connectBasic() { + RedisURI uri = RedisURI.Builder + .redis("localhost", 6379) + .withAuthentication("default", "yourPassword") + .build(); + RedisClient client = RedisClient.create(uri); + StatefulRedisConnection connection = client.connect(); + RedisCommands commands = connection.sync(); + + commands.set("foo", "bar"); + String result = commands.get("foo"); + System.out.println(result); // >>> bar + + connection.close(); + + client.shutdown(); + } +} +``` + +## Connect to a Redis cluster + +To connect to a Redis cluster, use `RedisClusterClient`. + +```java +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; + +//... +try (RedisClusterClient clusterClient = RedisClusterClient.create(redisURI)) { + StatefulRedisClusterConnection connection = clusterClient.connect(); + + //... + + connection.close(); +} +``` + +Learn more about Cluster connections and how to configure them in [the reference guide](https://redis.github.io/lettuce/ha-sharding/#redis-cluster). + +## Asynchronous connection + +```java +package org.example; +import java.util.*; +import java.util.concurrent.ExecutionException; + +import io.lettuce.core.*; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.StatefulRedisConnection; + +public class Async { + public static void main(String[] args) { + RedisClient redisClient = RedisClient.create("redis://localhost:6379"); + + try (StatefulRedisConnection connection = redisClient.connect()) { + RedisAsyncCommands asyncCommands = connection.async(); + + // Asynchronously store & retrieve a simple string + asyncCommands.set("foo", "bar").get(); + System.out.println(asyncCommands.get("foo").get()); // prints bar + + // Asynchronously store key-value pairs in a hash directly + Map hash = new HashMap<>(); + hash.put("name", "John"); + hash.put("surname", "Smith"); + hash.put("company", "Redis"); + hash.put("age", "29"); + asyncCommands.hset("user-session:123", hash).get(); + + System.out.println(asyncCommands.hgetall("user-session:123").get()); + // Prints: {name=John, surname=Smith, company=Redis, age=29} + } catch (ExecutionException | InterruptedException e) { + throw new RuntimeException(e); + } finally { + redisClient.shutdown(); + } + } +} +``` + +Learn more about asynchronous Lettuce API in [the reference guide](https://redis.github.io/lettuce/#asynchronous-api). 
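+
+Note that calling `get()` on each future, as in the example above, blocks the calling
+thread until the result arrives. For a fully non-blocking flow you can compose the
+returned futures instead (each `RedisFuture` implements `CompletionStage`). The snippet
+below is a minimal sketch that reuses the `asyncCommands` object from the example above:
+
+```java
+// Chain the SET and GET calls without blocking the calling thread.
+asyncCommands.set("foo", "bar")
+        .thenCompose(ok -> asyncCommands.get("foo"))
+        .thenAccept(value -> System.out.println(value)) // prints bar
+        .toCompletableFuture()
+        .join(); // join() only so this short demo waits before shutting down
+```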
+ +## Reactive connection + +```java +package org.example; +import java.util.*; +import io.lettuce.core.*; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.api.StatefulRedisConnection; + +public class Main { + public static void main(String[] args) { + RedisClient redisClient = RedisClient.create("redis://localhost:6379"); + + try (StatefulRedisConnection connection = redisClient.connect()) { + RedisReactiveCommands reactiveCommands = connection.reactive(); + + // Reactively store & retrieve a simple string + reactiveCommands.set("foo", "bar").block(); + reactiveCommands.get("foo").doOnNext(System.out::println).block(); // prints bar + + // Reactively store key-value pairs in a hash directly + Map hash = new HashMap<>(); + hash.put("name", "John"); + hash.put("surname", "Smith"); + hash.put("company", "Redis"); + hash.put("age", "29"); + + reactiveCommands.hset("user-session:124", hash).then( + reactiveCommands.hgetall("user-session:124") + .collectMap(KeyValue::getKey, KeyValue::getValue).doOnNext(System.out::println)) + .block(); + // Prints: {surname=Smith, name=John, company=Redis, age=29} + + } finally { + redisClient.shutdown(); + } + } +} +``` + +Learn more about reactive Lettuce API in [the reference guide](https://redis.github.io/lettuce/#reactive-api). + +## Connect to a Redis cluster + +```java +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands; + +// ... + +RedisURI redisUri = RedisURI.Builder.redis("localhost").withPassword("authentication").build(); + +RedisClusterClient clusterClient = RedisClusterClient.create(redisUri); +StatefulRedisClusterConnection connection = clusterClient.connect(); +RedisAdvancedClusterAsyncCommands commands = connection.async(); + +// ... + +connection.close(); +clusterClient.shutdown(); +``` + +### TLS connection + +When you deploy your application, use TLS and follow the [Redis security guidelines]({{< relref "/operate/oss_and_stack/management/security/" >}}). + +```java +RedisURI redisUri = RedisURI.Builder.redis("localhost") + .withSsl(true) + .withPassword("secret!") // use your Redis password + .build(); + +RedisClient client = RedisClient.create(redisUri); +``` + +## Connection Management in Lettuce + +Lettuce uses `ClientResources` for efficient management of shared resources like event loop groups and thread pools. +For connection pooling, Lettuce leverages `RedisClient` or `RedisClusterClient`, which can handle multiple concurrent connections efficiently. + +## Connection pooling + +A typical approach with Lettuce is to create a single `RedisClient` instance and reuse it to establish connections to your Redis server(s). +These connections are multiplexed; that is, multiple commands can be run concurrently over a single or a small set of connections, making explicit pooling less practical. +See +[Connection pools and multiplexing]({{< relref "/develop/clients/pools-and-muxing" >}}) +for more information. + +Lettuce provides pool config to be used with Lettuce asynchronous connection methods. 
+
+```java
+package org.example;
+import io.lettuce.core.RedisClient;
+import io.lettuce.core.RedisURI;
+import io.lettuce.core.TransactionResult;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.async.RedisAsyncCommands;
+import io.lettuce.core.codec.StringCodec;
+import io.lettuce.core.support.*;
+
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CompletionStage;
+
+public class Pool {
+    public static void main(String[] args) {
+        RedisClient client = RedisClient.create();
+
+        String host = "localhost";
+        int port = 6379;
+
+        CompletionStage<AsyncPool<StatefulRedisConnection<String, String>>> poolFuture
+                = AsyncConnectionPoolSupport.createBoundedObjectPoolAsync(
+                        () -> client.connectAsync(StringCodec.UTF8, RedisURI.create(host, port)),
+                        BoundedPoolConfig.create());
+
+        // await poolFuture initialization to avoid NoSuchElementException: Pool exhausted when starting your application
+        AsyncPool<StatefulRedisConnection<String, String>> pool = poolFuture.toCompletableFuture()
+                .join();
+
+        // execute work
+        CompletableFuture<TransactionResult> transactionResult = pool.acquire()
+                .thenCompose(connection -> {
+
+                    RedisAsyncCommands<String, String> async = connection.async();
+
+                    async.multi();
+                    async.set("key", "value");
+                    async.set("key2", "value2");
+                    System.out.println("Executed commands in pipeline");
+                    return async.exec().whenComplete((s, throwable) -> pool.release(connection));
+                });
+        transactionResult.join();
+
+        // terminating
+        pool.closeAsync();
+
+        // after pool completion
+        client.shutdownAsync();
+    }
+}
+```
+
+In this setup, `LettuceConnectionFactory` is a custom class you would need to implement, adhering to Apache Commons Pool's `PooledObjectFactory` interface, to manage lifecycle events of pooled `StatefulRedisConnection` objects.
+---
+aliases: /develop/connect/clients/java/lettuce
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Connect your Lettuce application to a Redis database
+linkTitle: Lettuce (Java)
+title: Lettuce guide (Java)
+weight: 6
+---
+
+[Lettuce](https://github.com/redis/lettuce/tree/main/src/main) is an advanced Java client for Redis
+that supports synchronous, asynchronous, and reactive connections.
+If you only need synchronous connections then you may find the other Java client
+[Jedis]({{< relref "/develop/clients/jedis" >}}) easier to use.
+
+The sections below explain how to install `Lettuce` and connect your application
+to a Redis database.
+
+`Lettuce` requires a running Redis server. See [here]({{< relref "/operate/oss_and_stack/install/" >}}) for Redis Open Source installation instructions.
+
+## Install
+
+To include Lettuce as a dependency in your application, edit the appropriate dependency file as shown below.
+
+If you use Maven, add the following dependency to your `pom.xml`:
+
+```xml
+<dependency>
+    <groupId>io.lettuce</groupId>
+    <artifactId>lettuce-core</artifactId>
+    <version>6.3.2.RELEASE</version>
+</dependency>
+```
+
+If you use Gradle, include this line in your `build.gradle` file:
+
+```
+dependencies {
+    compileOnly 'io.lettuce:lettuce-core:6.3.2.RELEASE'
+}
+```
+
+If you wish to use the JAR files directly, download the latest Lettuce and, optionally, Apache Commons Pool2 JAR files from Maven Central or any other Maven repository.
+
+To build from source, see the instructions on the [Lettuce source code GitHub repo](https://github.com/lettuce-io/lettuce-core).
+
+## Connect and test
+
+Connect to a local server using the following code. This example
+also stores and retrieves a simple string value to test the connection.
+ +```java +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; + +public class ConnectBasicTest { + + public void connectBasic() { + RedisURI uri = RedisURI.Builder + .redis("localhost", 6379) + .build(); + + RedisClient client = RedisClient.create(uri); + StatefulRedisConnection connection = client.connect(); + RedisCommands commands = connection.sync(); + + commands.set("foo", "bar"); + String result = commands.get("foo"); + System.out.println(result); // >>> bar + + connection.close(); + + client.shutdown(); + } +} +``` + +## More information + +The [Lettuce reference guide](https://redis.github.io/lettuce/) has more examples +and an API reference. You may also be interested in the +[Project Reactor](https://projectreactor.io/) library that Lettuce uses. + +See also the other pages in this section for more information and examples: +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Get your Lettuce app ready for production +linkTitle: Production usage +title: Production usage +weight: 3 +--- + +This guide offers recommendations to get the best reliability and +performance in your production environment. + +## Checklist + +Each item in the checklist below links to the section +for a recommendation. Use the checklist icons to record your +progress in implementing the recommendations. + +{{< checklist "lettuceprodlist" >}} + {{< checklist-item "#timeouts" >}}Timeouts{{< /checklist-item >}} + {{< checklist-item "#cluster-topology-refresh">}}Cluster topology refresh{{< /checklist-item >}} + {{< checklist-item "#dns-cache-and-redis" >}}DNS cache and Redis{{< /checklist-item >}} + {{< checklist-item "#exception-handling" >}}Exception handling{{< /checklist-item >}} +{{< /checklist >}} + +## Recommendations + +The sections below offer recommendations for your production environment. Some +of them may not apply to your particular use case. + +## Timeouts + +Lettuce provides timeouts for many operations, such as command execution, SSL handshake, and Sentinel discovery. By default, Lettuce uses a global timeout value of 60 seconds for these operations, but you can override the global timeout value with individual timeout values for each operation. + +{{% alert title="Tip" color="warning" %}} +Choosing suitable timeout values is crucial for your application's performance and stability and is specific to each environment. +Configuring timeouts is only necessary if you have issues with the default values. +In some cases, the defaults are based on environment-specific settings (e.g., operating system settings), while in other cases, they are built into the Lettuce driver. +For more details on setting specific timeouts, see the [Lettuce reference guide](https://redis.github.io/lettuce/). +{{% /alert %}} + +### Prerequisites + +To set TCP-level timeouts, you need to ensure you have one of [Netty Native Transports](https://netty.io/wiki/native-transports.html) installed. The most common one is `netty-transport-native-epoll`, which is used for Linux systems. 
You can add it to your project by including the following dependency in your `pom.xml` file:
+
+```xml
+<dependency>
+    <groupId>io.netty</groupId>
+    <artifactId>netty-transport-native-epoll</artifactId>
+    <version>${netty.version}</version>
+    <classifier>linux-x86_64</classifier>
+</dependency>
+```
+
+Once you have the native transport dependency, you can verify that it is available using the following code:
+
+```java
+logger.info("Lettuce epoll is available: {}", EpollProvider.isAvailable());
+```
+
+If the snippet above reports `false`, you need to enable debug logging for `io.lettuce.core` and `io.netty` to see why the native transport is not available.
+
+For more information on using Netty Native Transport, see the [Lettuce reference guide](https://redis.github.io/lettuce/advanced-usage/#native-transports).
+
+### Setting timeouts
+
+Below is an example of setting socket-level timeouts. The `TCP_USER_TIMEOUT` setting is useful for scenarios where the server stops responding without acknowledging the last request, while the `KEEPALIVE` setting is good for detecting dead connections where there is no traffic between the client and the server.
+
+```java
+RedisURI redisURI = RedisURI.Builder
+        .redis("localhost")
+        // set the global default from the default 60 seconds to 30 seconds
+        .withTimeout(Duration.ofSeconds(30))
+        .build();
+
+try (RedisClient client = RedisClient.create(redisURI)) {
+    // or set specific timeouts for things such as the TCP_USER_TIMEOUT and TCP_KEEPALIVE
+
+    // A good general rule of thumb is to follow the rule
+    // TCP_USER_TIMEOUT = TCP_KEEP_IDLE+TCP_KEEPINTVL * TCP_KEEPCNT
+    // in this case, 20 = 5 + 5 * 3
+
+    SocketOptions.TcpUserTimeoutOptions tcpUserTimeout = SocketOptions.TcpUserTimeoutOptions.builder()
+            .tcpUserTimeout(Duration.ofSeconds(20))
+            .enable().build();
+
+    SocketOptions.KeepAliveOptions keepAliveOptions = SocketOptions.KeepAliveOptions.builder()
+            .interval(Duration.ofSeconds(5))
+            .idle(Duration.ofSeconds(5))
+            .count(3).enable().build();
+
+    SocketOptions socketOptions = SocketOptions.builder()
+            .tcpUserTimeout(tcpUserTimeout)
+            .keepAlive(keepAliveOptions)
+            .build();
+
+    client.setOptions(ClientOptions.builder()
+            .socketOptions(socketOptions)
+            .build());
+
+    StatefulRedisConnection<String, String> connection = client.connect();
+    System.out.println(connection.sync().ping());
+}
+```
+
+## Cluster topology refresh
+
+The Redis Cluster configuration is dynamic and can change at runtime.
+New nodes may be added, and the primary node for a specific slot can shift.
+Lettuce automatically handles [MOVED]({{< relref "/operate/oss_and_stack/reference/cluster-spec#moved-redirection" >}}) and [ASK]({{< relref "/operate/oss_and_stack/reference/cluster-spec#ask-redirection" >}}) redirects, but to enhance your application's resilience, you should enable adaptive topology refreshing: + +```java +RedisURI redisURI = RedisURI.Builder + .redis("localhost") + // set the global default from the default 60 seconds to 30 seconds + .withTimeout(Duration.ofSeconds(30)) + .build(); + +// Create a RedisClusterClient with adaptive topology refresh +try (RedisClusterClient clusterClient = RedisClusterClient.create(redisURI)) { + // Enable TCP keep-alive and TCP user timeout just like in the standalone example + SocketOptions.TcpUserTimeoutOptions tcpUserTimeout = SocketOptions.TcpUserTimeoutOptions.builder() + .tcpUserTimeout(Duration.ofSeconds(20)) + .enable() + .build(); + + SocketOptions.KeepAliveOptions keepAliveOptions = SocketOptions.KeepAliveOptions.builder() + .interval(Duration.ofSeconds(5)) + .idle(Duration.ofSeconds(5)) + .count(3) + .enable() + .build(); + + SocketOptions socketOptions = SocketOptions.builder() + .tcpUserTimeout(tcpUserTimeout) + .keepAlive(keepAliveOptions) + .build(); + + // Enable adaptive topology refresh + // Configure adaptive topology refresh options + ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder() + .enableAllAdaptiveRefreshTriggers() + .adaptiveRefreshTriggersTimeout(Duration.ofSeconds(30)) + .build(); + + ClusterClientOptions options = ClusterClientOptions.builder() + .topologyRefreshOptions(topologyRefreshOptions) + .socketOptions(socketOptions).build(); + + clusterClient.setOptions(options); + + StatefulRedisClusterConnection connection = clusterClient.connect(); + System.out.println(connection.sync().ping()); + connection.close(); +} +``` +Learn more about topology refresh configuration settings in [the reference guide](https://redis.github.io/lettuce/ha-sharding/#redis-cluster). + + +## DNS cache and Redis + +When you connect to a Redis server with multiple endpoints, such as [Redis Enterprise Active-Active](https://redis.com/redis-enterprise/technology/active-active-geo-distribution/), you *must* +disable the JVM's DNS cache. If a server node or proxy fails, the IP address for any database +affected by the failure will change. When this happens, your app will keep +trying to use the stale IP address if DNS caching is enabled. + +Use the following code to disable the DNS cache: + +```java +java.security.Security.setProperty("networkaddress.cache.ttl","0"); +java.security.Security.setProperty("networkaddress.cache.negative.ttl", "0"); +``` + +## Exception handling + +Redis handles many errors using return values from commands, but there +are also situations where exceptions can be thrown. In production code, +you should handle exceptions as they occur. + +See the Error handling sections of the +[Lettuce async](https://redis.github.io/lettuce/user-guide/async-api/#error-handling) and +[Lettuce reactive](https://redis.github.io/lettuce/user-guide/reactive-api/#error-handling) +API guides to learn more about handling exceptions. 
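+
+For illustration, the sketch below shows the general pattern for the synchronous API,
+assuming an existing `StatefulRedisConnection<String, String> connection`. The exception
+types shown are part of the `io.lettuce.core` package; adapt the handling to your own
+retry and logging strategy:
+
+```java
+try {
+    String value = connection.sync().get("key");
+    // Use the value...
+} catch (RedisCommandTimeoutException e) {
+    // The command did not complete within the configured timeout.
+    System.err.println("Command timed out: " + e.getMessage());
+} catch (RedisCommandExecutionException e) {
+    // The server returned an error reply (for example, WRONGTYPE).
+    System.err.println("Redis error reply: " + e.getMessage());
+} catch (RedisConnectionException e) {
+    // The connection could not be established or was lost.
+    System.err.println("Connection problem: " + e.getMessage());
+}
+```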
+--- +aliases: /develop/connect/clients/pools-and-muxing +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Manage Redis connections efficiently +linkTitle: Pooling/multiplexing +title: Connection pools and multiplexing +weight: 40 +--- + +Redis example code generally opens a connection, demonstrates +a command or feature, and then closes. Real-world code typically +has short bursts of communication with the server and periods of +inactivity in between. Opening and closing connections +involves some overhead and leads to inefficiency if you do +it frequently. This means that you can improve the performance of production +code by making as few separate connections as possible. + +Managing connections in your own code can be tricky, so the Redis +client libraries give you some help. The two basic approaches to +connection management are called *connection pooling* and *multiplexing*. +The [`redis-py`]({{< relref "/develop/clients/redis-py" >}}), +[`jedis`]({{< relref "/develop/clients/jedis" >}}), and +[`go-redis`]({{< relref "/develop/clients/go" >}}) clients support +connection pooling, while +[`NRedisStack`]({{< relref "/develop/clients/dotnet" >}}) +supports multiplexing. +[`Lettuce`]({{< relref "/develop/clients/lettuce" >}}) +supports both approaches. + +## Connection pooling + +When you initialize a connection pool, the client opens a small number +of connections and adds them to the pool. + +{{< image filename="/images/dev/connect/pool-and-mux/ConnPoolInit.drawio.svg" >}} + +Each time you "open" a connection +from the pool, the client returns one of these existing +connections and notes the fact that it is in use. + +{{< image filename="/images/dev/connect/pool-and-mux/ConnPoolInUse.drawio.svg" >}} + +When you later "close" +the connection, the client puts it back into the pool of available +connections without actually closing it. + +{{< image filename="/images/dev/connect/pool-and-mux/ConnPoolDiscon.drawio.svg" >}} + +If all connections in the pool are in use but the app needs more, then +the client can simply open new connections as necessary. In this way, the client +eventually finds the right number of connections to satisfy your +app's demands. + +## Multiplexing + +Instead of pooling several connections, a multiplexer keeps a +single connection open and uses it for all traffic between the +client and the server. The "connections" returned to your code are +used to identify where to send the response data from your commands. + +{{< image filename="/images/dev/connect/pool-and-mux/ConnMux.drawio.svg" >}} + +Note that it is not a problem if the multiplexer receives several commands close +together in time. When this happens, the multiplexer can often combine the commands into a +[pipeline]({{< relref "/develop/use/pipelining" >}}), which +improves efficiency. + +Multiplexing offers high efficiency but works transparently without requiring +any special code to enable it in your app. The main disadvantage of multiplexing compared to +connection pooling is that it can't support the blocking "pop" commands (such as +[`BLPOP`]({{< relref "/commands/blpop" >}})) since these would stall the +connection for all callers. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Construct commands and send them to the Redis server. 
+linkTitle: Issue commands +title: Issue commands +weight: 5 +--- + +Unlike the other [client libraries]({{< relref "/develop/clients" >}}), +`hiredis` doesn't provide an extensive API to construct the many different +Redis [commands]({{< relref "/commands" >}}). However, it does provide a lightweight and +flexible API to help you construct commands and parse their replies from +your own code. + +The sections below describe the available functions in +detail. + +## Construct synchronous commands + +Use the `redisCommand()` function to send commands to the server: + +```c +void *redisCommand(redisContext *c, const char *format, ...); +``` + +This function receives a `redisContext` pointer and a pointer +to a string containing the command (see +[Connect]({{< relref "/develop/clients/hiredis/connect" >}}) +to learn how to obtain the context pointer). The command text is the +same as the equivalent [`redis-cli`]({{< relref "/develop/tools/cli" >}}) +command. For example, to issue the command: + +``` +SET foo bar +``` + +you would use the following command with an existing `redisContext* c`: + +```c +redisReply *reply = redisCommand(c, "SET foo bar"); +``` + +See the [Command reference]({{< relref "/commands" >}}) for examples +of CLI commands that you can use with `hiredis`. Most code examples +in other sections of the docs also have a CLI tab showing +command sequences that are equivalent to the code. + +The command string is interpreted in a similar way to the format +string for `printf()`, so you can easily interpolate string values from +your code into the command with the `%s` format specifier: + +```c +char *myKeyNumber = "1"; +char *myValue = "Hello"; + +// This issues the command 'SET key:1 Hello'. +redisReply *reply = redisCommand(c, "SET key:%s %s", myKeyNumber, myValue); +``` + +You may need to include binary data in the command (for example, to store +[vector embeddings]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}) +in fields of a [hash]({{< relref "/develop/data-types/hashes" >}})) object. +To do this, use the `%b` format specifier and pass a pointer to the +data buffer, followed by a `size_t` value indicating its length in bytes. +As the example below shows, you can freely mix `%s` and `%b` specifiers +in the same format string. Also, you can use the sequence `%%` to +denote a literal percent sign, but the other `printf()` specifiers, +such as `%d`, are not supported. + +```c +char *entryNumber = "1"; +char *embedding = ""; +char *url = "https://redis.io/"; +size_t embLength = 13; + +redisReply *reply = redisCommand(c, + "HSET entry:%s embedding %b url %s", + entryNumber, + embedding, embLength, + url +); +``` + +The `redisCommand()` function has a variant called `redisCommandArgv()`: + +```c +void *redisCommandArgv(redisContext *c, int argc, const char **argv, const size_t *argvlen); +``` + +This doesn't take a format string but instead builds the command from an array +of strings passed in the `argv` parameter. + +Use the `argc` value to +specify the length of this array and the `argvlen` array to specify +the lengths of each of the strings in the array. If you pass `NULL` +for `argvlen` then the function will attempt to use `strlen()` to +get the length of each string. However, this will not work if any of +the strings contains binary data, so you should pass `argvlen` +explicitly in this case. 
The example below shows how to use +`redisCommandArgv()` with a simple command: + +```c +const char *argv[3] = { "SET", "greeting", "hello"}; +int argc = 3; +const size_t argvlen[] = {3, 8, 5}; + +redisReply *reply = redisCommandArgv(c, argc, argv, argvlen); +``` + +## Construct asynchronous commands + +Use the `redisAsyncCommand()` and `redisAsyncCommandArgv()` +functions to send commands to the server asynchronously: + +```c +#include + . + . + . +int redisAsyncCommand( + redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, + const char *format, ...); +int redisAsyncCommandArgv( + redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, + int argc, const char **argv, const size_t *argvlen); +``` + +These work the same way as `redisCommand()` and `redisCommandArgv()` +(see [Construct synchronous commands](#construct-synchronous-commands) +above) but they have two extra parameters. The first is a pointer to +a optional callback function and the second is a pointer to your own +custom data, which will be passed to the callback when it +executes. Pass `NULL` for both of these pointers if you don't need +to use them. + +The callback has the following signature: + +```c +void(redisAsyncContext *c, void *reply, void *privdata); +``` + +The first parameter is the asynchronous connection context and +the second is a pointer to the reply object. Use a cast to +`(redisReply *)` to access the reply in the usual way (see +[Handle command replies]({{< relref "/develop/clients/hiredis/handle-replies" >}}) +for a full description of `redisReply`). The last parameter +is the custom data pointer that you supplied during the +`redisAsyncCommand()` call. This is passed to your function +without any modification. + +The example below shows how you can use `redisAsyncCommand()` with +or without a reply callback: + +```c +// The callback expects the key for the data in the `privdata` +// custom data parameter. +void getCallback(redisAsyncContext *c, void *r, void *privdata) { + redisReply *reply = r; + char *key = privdata; + + if (reply == NULL) { + if (c->errstr) { + printf("errstr: %s\n", c->errstr); + } + return; + } + + printf("Key: %s, value: %s\n", key, reply->str); + + /* Disconnect after receiving the reply to GET */ + redisAsyncDisconnect(c); +} + . + . + . + +// Key and string value to pass to `SET`. +char *key = "testkey"; +char *value = "testvalue"; + +// We aren't interested in the simple status reply for +// `SET`, so use NULL for the callback and custom data +// pointers. +redisAsyncCommand(c, NULL, NULL, "SET %s %s", key, value); + +// The reply from `GET` is essential, so set a callback +// to retrieve it. Also, pass the key to the callback +// as the custom data. +redisAsyncCommand(c, getCallback, key, "GET %s", key); +``` + +Note that you should normally disconnect asynchronously from a +callback when you have finished using the connection. +Use `redisAsyncDisconnect()` to disconnect gracefully, letting +pending commands execute and activate their callbacks. +Use `redisAsyncFree()` to disconnect immediately. If you do this then +any pending callbacks from commands that have already executed will be +called with a `NULL` reply pointer. + +## Command replies + +The information in the `redisReply` object has several formats, +and the format for a particular reply depends on the command that generated it. +See +[Handle replies]({{< relref "/develop/clients/hiredis/handle-replies" >}}) +to learn about the different reply formats and how to use them. 
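+
+As a final example, the argv-based form works the same way for asynchronous commands.
+The sketch below is illustrative only; it assumes an existing asynchronous context
+`redisAsyncContext *ac` and checks the integer status that `redisAsyncCommandArgv()`
+returns (`REDIS_OK` on success, `REDIS_ERR` otherwise):
+
+```c
+// Issue 'HSET user:1 name Alice' asynchronously. Pass NULL for the
+// callback and custom data because the integer reply isn't needed here.
+const char *hsetArgv[4] = {"HSET", "user:1", "name", "Alice"};
+const size_t hsetArgvlen[4] = {4, 6, 4, 5};
+
+int status = redisAsyncCommandArgv(ac, NULL, NULL, 4, hsetArgv, hsetArgvlen);
+
+if (status != REDIS_OK) {
+    printf("Failed to queue command: %s\n", ac->errstr);
+}
+```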
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Handle command replies with `hiredis`.
+linkTitle: Handle command replies
+title: Handle command replies
+weight: 10
+---
+
+The `redisCommand()` and `redisCommandArgv()` functions return
+a pointer to a `redisReply` object when you issue a command (see
+[Issue commands]({{< relref "/develop/clients/hiredis/issue-commands" >}})
+for more information). This type supports all
+reply formats defined in the
+[RESP2 and RESP3]({{< relref "/develop/reference/protocol-spec#resp-protocol-description" >}})
+protocols, so its content varies greatly between calls.
+
+A simple example is the status response returned by the [`SET`]({{< relref "/commands/set" >}})
+command. The code below shows how to get this from the `redisReply`
+object:
+
+```c
+redisReply *reply = redisCommand(c, "SET greeting Hello");
+
+// Check and free the reply.
+if (reply != NULL) {
+    printf("Reply: %s\n", reply->str);
+    freeReplyObject(reply);
+    reply = NULL;
+}
+```
+
+A null reply indicates an error, so you should always check for this.
+If an error does occur, then the `redisContext` object will have a
+non-zero error number in its integer `err` field and a textual
+description of the error in its `errstr` field.
+
+For `SET`, a successful call will simply return an "OK" string that you
+can access with the `reply->str` field. The code in the example prints
+this to the console, but you should check for the specific value to ensure
+the command executed correctly.
+
+The `redisCommand()` call allocates memory for the reply, so you should
+always free it using `freeReplyObject()` when you have finished using
+the reply. If you want to reuse the reply variable then it is wise to
+set it to `NULL` after you free it, so that you don't accidentally use
+the stale pointer later.
+
+## Reply formats
+
+The Redis
+[`RESP`]({{< relref "/develop/reference/protocol-spec#resp-protocol-description" >}})
+protocols support several different reply formats for commands.
+
+You can find the reply format for a command at the end of its
+reference page in the RESP2/RESP3 Reply section (for example, the
+[`INCRBY`]({{< relref "/commands/incrby" >}}) page shows that the
+command has an integer result). You can also determine the format
+using the `type` field of the reply object. This contains a
+different integer value for each type. The `hiredis.h` header file
+defines constants for all of these integer values (for example `REDIS_REPLY_STRING`).
+
+The `redisReply` struct has several fields to contain different
+types of replies, with different fields being set depending on
+the value of the `type` field. The table below shows the type
+constants, the corresponding reply type, and the fields you can
+use to access the reply value:
+
+| Constant | Type | Relevant fields of `redisReply` | RESP protocol |
+| :- | :- | :- | :- |
+| `REDIS_REPLY_STATUS` | [Simple string]({{< relref "/develop/reference/protocol-spec#simple-strings" >}}) | `reply->str`: the string value (`char*`)<br /> `reply->len`: the string length (`size_t`) | 2, 3 |
+| `REDIS_REPLY_ERROR` | [Simple error]({{< relref "/develop/reference/protocol-spec#simple-errors" >}}) | `reply->str`: the string value (`char*`)<br /> `reply->len`: the string length (`size_t`) | 2, 3 |
+| `REDIS_REPLY_INTEGER` | [Integer]({{< relref "/develop/reference/protocol-spec#integers" >}}) | `reply->integer`: the integer value (`long long`) | 2, 3 |
+| `REDIS_REPLY_NIL` | [Null]({{< relref "/develop/reference/protocol-spec#nulls" >}}) | No data | 2, 3 |
+| `REDIS_REPLY_STRING` | [Bulk string]({{< relref "/develop/reference/protocol-spec#bulk-strings" >}}) | `reply->str`: the string value (`char*`)<br /> `reply->len`: the string length (`size_t`) | 2, 3 |
+| `REDIS_REPLY_ARRAY` | [Array]({{< relref "/develop/reference/protocol-spec#arrays" >}}) | `reply->elements`: number of elements (`size_t`)<br /> `reply->element`: array elements (`redisReply`) | 2, 3 |
+| `REDIS_REPLY_DOUBLE` | [Double]({{< relref "/develop/reference/protocol-spec#doubles" >}}) | `reply->str`: double value as string (`char*`)<br /> `reply->len`: the string length (`size_t`) | 3 |
+| `REDIS_REPLY_BOOL` | [Boolean]({{< relref "/develop/reference/protocol-spec#booleans" >}}) | `reply->integer`: the boolean value, 0 or 1 (`long long`) | 3 |
+| `REDIS_REPLY_MAP` | [Map]({{< relref "/develop/reference/protocol-spec#maps" >}}) | `reply->elements`: number of elements (`size_t`)<br /> `reply->element`: array elements (`redisReply`) | 3 |
+| `REDIS_REPLY_SET` | [Set]({{< relref "/develop/reference/protocol-spec#sets" >}}) | `reply->elements`: number of elements (`size_t`)<br /> `reply->element`: array elements (`redisReply`) | 3 |
+| `REDIS_REPLY_PUSH` | [Push]({{< relref "/develop/reference/protocol-spec#pushes" >}}) | `reply->elements`: number of elements (`size_t`)<br /> `reply->element`: array elements (`redisReply`) | 3 |
+| `REDIS_REPLY_BIGNUM` | [Big number]({{< relref "/develop/reference/protocol-spec#big-numbers" >}}) | `reply->str`: number value as string (`char*`)<br /> `reply->len`: the string length (`size_t`) | 3 |
+| `REDIS_REPLY_VERB` | [Verbatim string]({{< relref "/develop/reference/protocol-spec#verbatim-strings" >}}) | `reply->str`: the string value (`char*`)<br /> `reply->len`: the string length (`size_t`)<br /> `reply->vtype`: content type (`char[3]`) | 3 |
+
+## Reply format processing examples
+
+The sections below explain how to process specific reply types in
+more detail.
+
+### Integers
+
+The `REDIS_REPLY_INTEGER` and `REDIS_REPLY_BOOL` reply types both
+contain values in `reply->integer`. However, `REDIS_REPLY_BOOL` is
+rarely used. Even when the command essentially returns a boolean value,
+the reply is usually reported as an integer.
+
+```c
+// Add some values to a set.
+redisReply *reply = redisCommand(c, "SADD items bread milk peas");
+
+if (reply->type == REDIS_REPLY_INTEGER) {
+    // Report status.
+    printf("Integer reply\n");
+    printf("Number added: %lld\n", reply->integer);
+    // >>> Number added: 3
+}
+
+freeReplyObject(reply);
+reply = NULL;
+
+
+reply = redisCommand(c, "SISMEMBER items bread");
+
+// This also gives an integer reply but you should interpret
+// it as a boolean value.
+if (reply->type == REDIS_REPLY_INTEGER) {
+    // Respond to boolean integer value.
+    printf("Integer reply\n");
+
+    if (reply->integer == 0) {
+        printf("Items set has no member 'bread'\n");
+    } else {
+        printf("'Bread' is a member of items set\n");
+    }
+    // >>> 'Bread' is a member of items set
+}
+
+freeReplyObject(reply);
+reply = NULL;
+```
+
+### Strings
+
+The `REDIS_REPLY_STATUS`, `REDIS_REPLY_ERROR`, `REDIS_REPLY_STRING`,
+`REDIS_REPLY_DOUBLE`, `REDIS_REPLY_BIGNUM`, and `REDIS_REPLY_VERB`
+are all returned as strings, with the main difference lying in how
+you interpret them. For all these types, the string value is
+returned in `reply->str` and the length of the string is in
+`reply->len`. The example below shows some of the possibilities.
+
+```c
+// Set a numeric value in a string.
+reply = redisCommand(c, "SET number 1.5");
+
+// This gives a status reply.
+if (reply->type == REDIS_REPLY_STATUS) {
+    // Report status.
+    printf("Status reply\n");
+    printf("Reply: %s\n", reply->str); // >>> Reply: OK
+}
+
+freeReplyObject(reply);
+reply = NULL;
+
+
+// Attempt to interpret the key as a hash.
+reply = redisCommand(c, "HGET number field1");
+
+// This gives an error reply.
+if (reply->type == REDIS_REPLY_ERROR) {
+    // Report the error.
+    printf("Error reply\n");
+    printf("Reply: %s\n", reply->str);
+    // >>> Reply: WRONGTYPE Operation against a key holding the wrong kind of value
+}
+
+freeReplyObject(reply);
+reply = NULL;
+
+
+reply = redisCommand(c, "GET number");
+
+// This gives a simple string reply.
+if (reply->type == REDIS_REPLY_STRING) {
+    // Display the string.
+    printf("Simple string reply\n");
+    printf("Reply: %s\n", reply->str); // >>> Reply: 1.5
+}
+
+freeReplyObject(reply);
+reply = NULL;
+
+
+reply = redisCommand(c, "ZADD prices 1.75 bread 5.99 beer");
+
+// This gives an integer reply.
+if (reply->type == REDIS_REPLY_INTEGER) {
+    // Display the integer.
+    printf("Integer reply\n");
+    printf("Number added: %lld\n", reply->integer);
+    // >>> Number added: 2
+}
+
+freeReplyObject(reply);
+reply = NULL;
+
+
+reply = redisCommand(c, "ZSCORE prices bread");
+
+// This gives a string reply with RESP2 and a double reply
+// with RESP3, but you handle it the same way in either case.
+if (reply->type == REDIS_REPLY_STRING) {
+    printf("String reply\n");
+
+    char *endptr; // Not used.
+    double price = strtod(reply->str, &endptr);
+    double discounted = price * 0.75;
+    printf("Discounted price: %.2f\n", discounted);
+    // >>> Discounted price: 1.31
+}
+
+freeReplyObject(reply);
+reply = NULL;
+```
+
+### Arrays and maps
+
+Arrays (reply type `REDIS_REPLY_ARRAY`) and maps (reply type `REDIS_REPLY_MAP`)
+are returned by commands that retrieve several values at the
+same time. For both types, the number of elements in the reply is contained in
+`reply->elements` and the pointer to the array itself is in `reply->element`.
+Each item in the array is of type `redisReply`. The array elements
+are typically simple types rather than arrays or maps.
+
+The example below shows how to get the items from a
+[list]({{< relref "/develop/data-types/lists" >}}):
+
+```c
+reply = redisCommand(c, "RPUSH things thing0 thing1 thing2 thing3");
+
+printf("Added %lld items\n", reply->integer);
+// >>> Added 4 items
+freeReplyObject(reply);
+reply = NULL;
+
+
+reply = redisCommand(c, "LRANGE things 0 -1");
+
+for (int i = 0; i < reply->elements; ++i) {
+    if (reply->element[i]->type == REDIS_REPLY_STRING) {
+        printf("List item %d: %s\n", i, reply->element[i]->str);
+    }
+}
+// >>> List item 0: thing0
+// >>> List item 1: thing1
+// >>> List item 2: thing2
+// >>> List item 3: thing3
+```
+
+A map is essentially the same as an array but it has the extra
+guarantee that the items will be listed in key-value pairs.
+The example below shows how to get all the fields from a
+[hash]({{< relref "/develop/data-types/hashes" >}}) using
+[`HGETALL`]({{< relref "/commands/hgetall" >}}):
+
+```c
+const char *hashCommand[] = {
+    "HSET", "details",
+    "name", "Mr Benn",
+    "address", "52 Festive Road",
+    "hobbies", "Cosplay"
+};
+
+reply = redisCommandArgv(c, 8, hashCommand, NULL);
+
+printf("Added %lld fields\n", reply->integer);
+// >>> Added 3 fields
+
+freeReplyObject(reply);
+reply = NULL;
+
+
+reply = redisCommand(c, "HGETALL details");
+
+// This gives an array reply with RESP2 and a map reply with
+// RESP3, but you handle it the same way in either case.
+if (reply->type == REDIS_REPLY_ARRAY) {
+    for (int i = 0; i < reply->elements; i += 2) {
+        char *key = reply->element[i]->str;
+        char *value = reply->element[i + 1]->str;
+        printf("Key: %s, value: %s\n", key, value);
+    }
+    // >>> Key: name, value: Mr Benn
+    // >>> Key: address, value: 52 Festive Road
+    // >>> Key: hobbies, value: Cosplay
+}
+```
+
+## Handling errors
+
+When a command executes successfully, the `err` field of the context
+object will be set to zero. If a command fails, it will return either
+`NULL` or `REDIS_ERR`, depending on which function you used. When
+this happens, `context->err` will contain one of the following error codes:
+
+- `REDIS_ERR_IO`: There was an I/O error while creating the connection,
+  or while trying to write or read data. Whenever `context->err` contains
+  `REDIS_ERR_IO`, you can use the features of the standard library file
+  [`errno.h`](https://en.wikipedia.org/wiki/Errno.h) to find out more
+  information about the error.
+- `REDIS_ERR_EOF`: The server closed the connection, which resulted in an empty read.
+- `REDIS_ERR_PROTOCOL`: There was an error while parsing the
+  [RESP protocol]({{< relref "/develop/reference/protocol-spec" >}}).
+- `REDIS_ERR_OTHER`: Any other error. Currently, it is only used when the connection
+  hostname can't be resolved.
+
+The context object also has an `errstr` field that contains a descriptive error message.
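+
+As a minimal sketch, assuming a `redisContext *c` created with `redisConnect()`
+as in the earlier examples, you could check for errors at both the connection
+level and the reply level like this:
+
+```c
+redisReply *reply = redisCommand(c, "GET foo");
+
+if (reply == NULL) {
+    // The command failed at the connection level, so the context
+    // fields report the reason.
+    printf("Error code: %d\n", c->err);
+    printf("Error message: %s\n", c->errstr);
+
+    if (c->err == REDIS_ERR_IO) {
+        // For I/O errors, the standard errno facilities give more detail.
+        perror("I/O error");
+    }
+} else {
+    if (reply->type == REDIS_REPLY_ERROR) {
+        // The server executed the command but reported an error,
+        // which arrives as an error reply.
+        printf("Server error: %s\n", reply->str);
+    }
+
+    freeReplyObject(reply);
+}
+```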
+--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use Redis pipelines and transactions +linkTitle: Pipelines/transactions +title: Pipelines and transactions +weight: 20 +--- + +Redis lets you send a sequence of commands to the server together in a batch. +There are two types of batch that you can use: + +- **Pipelines** avoid network and processing overhead by sending several commands + to the server together in a single communication. The server then sends back + a single communication with all the responses. See the + [Pipelining]({{< relref "/develop/use/pipelining" >}}) page for more + information. +- **Transactions** guarantee that all the included commands will execute + to completion without being interrupted by commands from other clients. + See the [Transactions]({{< relref "/develop/interact/transactions" >}}) + page for more information. + +## Execute a pipeline + +There is no command to explicitly start a pipeline with `hiredis`, +but if you issue a command with the `redisAppendCommand()` function, +it will be added to an output buffer without being sent +immediately to the server. + +There is also an input buffer that receives replies from +commands. If you call `redisGetReply()` when the input buffer is empty, +it will first send any commands that are queued in the output buffer and +then wait for replies to arrive in the input buffer. It will then return +the first reply only. + +If you then make subsequent `redisGetReply()` calls, they will +find the input buffer is not empty, but still has replies +queued from previous commands. In this case, `redisGetReply()` +will just remove and return replies from the input buffer +until it is empty again. + +The example below shows how to use `redisAppendCommand()` +and `redisGetReply()` together: + +```c +redisAppendCommand(c, "SET fruit:0 Apple"); +redisAppendCommand(c, "SET fruit:1 Banana"); +redisAppendCommand(c, "SET fruit:2 Cherry"); + +redisAppendCommand(c, "GET fruit:0"); +redisAppendCommand(c, "GET fruit:1"); +redisAppendCommand(c, "GET fruit:2"); + + +redisReply *reply; + +// Iterate once for each of the six commands in the +// pipeline. +for (int i = 0; i < 6; ++i) { + redisGetReply(c, (void**) &reply); + + // If an error occurs, the context object will + // contain an error code and/or an error string. + if (reply->type == REDIS_REPLY_ERROR) { + printf("Error: %s", c->errstr); + } else { + printf("%s\n", reply->str); + } + + freeReplyObject(reply); +} +// >>> OK +// >>> OK +// >>> OK +// >>> Apple +// >>> Banana +// >>> Cherry +``` + +`redisAppendCommand()` has the same call signature as `redisCommand()` except that +it doesn't return a `redisReply`. There is also a `redisAppendCommandArgv()` +function that is analogous to `redisCommandArgv()` (see +[Issue commands]({{< relref "/develop/clients/hiredis/issue-commands" >}}) +for more information). + +`redisGetReply()` receives the usual +context pointer and a pointer to a `redisReply` pointer (which you +must cast to `void**`). After `redisGetReply()` returns, +the reply pointer will point to the `redisReply` object returned by +the queued command (see +[Handle command replies]({{< relref "/develop/clients/hiredis/handle-replies" >}}) +for more information). + +Call `redisGetReply()` once for each command that you added to the pipeline. +You should check for errors after each call and free each reply object +when you have finished processing it, as in the example above. 
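+
+For example, a minimal sketch of `redisAppendCommandArgv()`, assuming the same
+connection context `c` as above and an arbitrary example key, might look like this:
+
+```c
+// Queue a command built from an array of arguments. The `argvlen`
+// array holds the length of each argument string.
+const char *argv[] = {"SET", "fruit:3", "Damson"};
+size_t argvlen[] = {3, 7, 6};
+
+redisAppendCommandArgv(c, 3, argv, argvlen);
+
+// Send the buffered command and read back its reply.
+redisReply *reply;
+redisGetReply(c, (void**) &reply);
+
+printf("%s\n", reply->str); // >>> OK
+freeReplyObject(reply);
+```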
+ +## Transactions + +`hiredis` doesn't provide any special API to handle transactions, but +you can implement them yourself using the [`MULTI`]({{< relref "/commands/multi" >}}), +[`EXEC`]({{< relref "/commands/exec" >}}), and [`WATCH`]({{< relref "/commands/watch" >}}) +commands as you would from [`redis-cli`]({{< relref "/develop/tools/cli" >}}). +See [Transactions]({{< relref "/develop/interact/transactions" >}}) +for more information. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect to the server with `hiredis`. +linkTitle: Connect +title: Connect +weight: 1 +--- + +## Basic synchronous connection + +The example below creates a simple synchronous connection to a local +Redis server and tests the connection, before closing it with +`redisFree()`. The `redisConnect()` function takes just a hostname +and port as its arguments, and returns a context object. + +```c +#include + +#include + . + . + . + +// The `redisContext` type represents the connection +// to the Redis server. Here, we connect to the +// default host and port. +redisContext *c = redisConnect("127.0.0.1", 6379); + +// Check if the context is null or if a specific +// error occurred. +if (c == NULL || c->err) { + if (c != NULL) { + printf("Error: %s\n", c->errstr); + // handle error + } else { + printf("Can't allocate redis context\n"); + } + + exit(1); +} + +// Set a string key. +redisReply *reply = redisCommand(c, "SET foo bar"); +printf("Reply: %s\n", reply->str); // >>> Reply: OK +freeReplyObject(reply); + +// Get the key we have just stored. +reply = redisCommand(c, "GET foo"); +printf("Reply: %s\n", reply->str); // >>> Reply: bar +freeReplyObject(reply); + +// Close the connection. +redisFree(c); +``` + +## Asynchronous connection + +You can also connect to Redis using an asynchronous API. +The `redisAsyncConnect()` call that creates the context is +similar to the synchronous function `redisConnect()`, but it returns the +context object immediately before the connection is complete. +It lets you supply callbacks to respond when a connection is successful +or to handle any errors that may occur. + +The following code creates an asynchronous connection and +sets the context callbacks. Note that you must also include the +`async.h` header to access the asynchronous API. + +```c +#include + +#include +#include + . + . + . + +redisAsyncContext *c = redisAsyncConnect("127.0.0.1", 6379); + +if (c->err) { + printf("Error: %s\n", c->errstr); + return 1; +} + +// Set callbacks to respond to successful or unsuccessful +// connection and disconnection. +redisAsyncSetConnectCallback(c, connectCallback); +redisAsyncSetDisconnectCallback(c, disconnectCallback); + +char *key = "testkey"; +char *value = "testvalue"; + +// Status reply is ignored. +redisAsyncCommand(c, NULL, NULL, "SET %s %s", key, value); + +// Reply handled by `getCallback()` function. +redisAsyncCommand(c, getCallback, key, "GET %s", key); +``` + +The callback functions have a simple signature that receives +the context object and a status code. See +[Handling errors]({{< relref "/develop/clients/hiredis/handle-replies#handling-errors" >}}) +for a list of the possible status codes. 
+ +```c +void connectCallback(const redisAsyncContext *c, int status) { + if (status != REDIS_OK) { + printf("Error: %s\n", c->errstr); + return; + } + printf("Connected...\n"); +} + +void disconnectCallback(const redisAsyncContext *c, int status) { + if (status != REDIS_OK) { + printf("Error: %s\n", c->errstr); + return; + } + printf("Disconnected...\n"); +} +``` + +Use the `redisAsyncCommand()` function to issue Redis commands +with an asynchronous connection. This is similar to the equivalent +synchronous function `redisCommand()` but also lets you supply a callback +and a custom data pointer to process the response to the command. See +[Construct asynchronous commands]({{< relref "/develop/clients/hiredis/issue-commands#construct-asynchronous-commands" >}}) for more +information. + +Note that you should normally disconnect asynchronously from a +callback when you have finished using the connection. +Use `redisAsyncDisconnect()` to disconnect gracefully, letting +pending commands execute and activate their callbacks. +Use `redisAsyncFree()` to disconnect immediately. If you do this then +any pending callbacks from commands that have already executed will be +called with a `NULL` reply pointer. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Use `hiredis` in conjunction with the `libevent` framework. +linkTitle: libevent integration +title: Integrate hiredis with a libevent app +weight: 60 +--- + +The [`libevent`](https://libevent.org/) library provides an +implementation of an event loop that lets you call functions +asynchronously in response to events. This guide explains +how to use `hiredis` to connect to a Redis server from a +`libevent` app. + +## Install `libevent` + +The [`libevent` home page](https://libevent.org/) has links to download +all versions of the library, but you should use the latest version +unless there is a specific version you need to target. + +When you have downloaded `libevent`, follow the instructions in the +`README` file to compile and install the library. + +## Create a simple app + +For a real project, you would build your code with a makefile, but for +this simple test, you can just place it in a file called `main.c` and +build it with the following command (assuming you used `make install` to +install the `libhiredis` and `libevent` libraries): + +```bash +cc main.c -L/usr/local/lib -lhiredis -levent +``` + +See [Build and install]({{< relref "/develop/clients/hiredis#build-and-install" >}}) +to learn how to build `hiredis`, if you have not already done so. + +Now, add the following code in `main.c`. An explanation follows the +code example: + +```c +#include +#include +#include +#include + +#include +#include +#include + +// Callback for the `GET` command. +void getCallback(redisAsyncContext *c, void *r, void *privdata) { + redisReply *reply = r; + char *key = privdata; + + if (reply == NULL) { + if (c->errstr) { + printf("errstr: %s\n", c->errstr); + } + return; + } + + printf("Key: %s, value: %s\n", key, reply->str); + + /* Disconnect after receiving the reply to GET */ + redisAsyncDisconnect(c); +} + +// Callback to respond to successful or unsuccessful connection. +void connectCallback(const redisAsyncContext *c, int status) { + if (status != REDIS_OK) { + printf("Error: %s\n", c->errstr); + return; + } + printf("Connected...\n"); +} + +// Callback to respond to intentional or unexpected disconnection. 
+void disconnectCallback(const redisAsyncContext *c, int status) { + if (status != REDIS_OK) { + printf("Error: %s\n", c->errstr); + return; + } + printf("Disconnected...\n"); +} + + +int main (int argc, char **argv) { +#ifndef _WIN32 + signal(SIGPIPE, SIG_IGN); +#endif + + // Create the libevent `event_base` object to track all + // events. + struct event_base *base = event_base_new(); + + redisAsyncContext *c = redisAsyncConnect("127.0.0.1", 6379); + + if (c->err) { + printf("Error: %s\n", c->errstr); + return 1; + } + + // Use the Redis libevent adapter to attach the Redis connection + // to the libevent main loop. + redisLibeventAttach(c,base); + + redisAsyncSetConnectCallback(c, connectCallback); + redisAsyncSetDisconnectCallback(c, disconnectCallback); + + char *key = "testkey"; + char *value = "testvalue"; + + redisAsyncCommand(c, NULL, NULL, "SET %s %s", key, value); + redisAsyncCommand(c, getCallback, key, "GET %s", key); + + // Run the event loop. + event_base_dispatch(base); + + return 0; +} +``` + +The code calls +[`event_base_new()`](https://libevent.org/doc/event_8h.html#af34c025430d445427a2a5661082405c3) +to initialize the core +[`event_base`](https://libevent.org/doc/structevent__base.html) +object that manages the event loop. It then creates a standard +[asynchronous connection]({{< relref "/develop/clients/hiredis/connect#asynchronous-connection" >}}) +to Redis and uses the `libevent` adapter function `redisLibeventAttach()` to +attach the connection to the event loop. + +After setting the [connection callbacks]({{< relref "/develop/clients/hiredis/connect#asynchronous-connection" >}}), the code issues two asynchronous +Redis commands (see +[Construct asynchronous commands]({{< relref "/develop/clients/hiredis/issue-commands#construct-asynchronous-commands" >}}) +for more information). +The final step is to call +[`event_base_dispatch()`](https://libevent.org/doc/event_8h.html#a19d60cb72a1af398247f40e92cf07056) +to start the event loop. This will wait for the commands to be processed and +then exit when the Redis connection is closed in the `getCallback()` function. + +## Run the code + +If you compile and run the code, you will see the following output, +showing that the callbacks executed correctly: + +``` +Connected... +Key: testkey, value: testvalue +Disconnected... +``` + +You can use the +[`KEYS`]({{< relref "/commands/keys" >}}) command from +[`redis-cli`]({{< relref "/develop/tools/cli" >}}) or +[Redis Insight]({{< relref "/develop/tools/insight" >}}) to check +that the "testkey" string key was added to the Redis database. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Integrate hiredis with C++ and external frameworks. +linkTitle: Integration guides +title: Integration guides +weight: 50 +--- + +`hiredis` is compatible with C++ and the library source includes a set of +[adapters](https://github.com/redis/hiredis/tree/master/adapters) +to help you use it in conjunction with C and C++ libraries and frameworks. +The pages in this section explain how to integrate `hiredis` into +your app. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Use `hiredis` in conjunction with the Qt app framework. +linkTitle: Qt integration +title: Integrate hiredis with a Qt app +weight: 50 +--- + +[Qt](https://www.qt.io/) is a popular cross-platform C++ framework that +you can use to build command line and GUI apps. 
This guide explains how +to use `hiredis` to connect to a Redis server from a Qt app. + +## Install Qt + +You should first download and install the +[Qt development environment](https://www.qt.io/download-dev) for your +development platform, if you have not already done so. The example +below briefly explains how to use Qt Creator +to manage your project, but see the [Qt Creator](https://doc.qt.io/qtcreator/) +docs for an extensive set of examples and tutorials. + +## Create a simple app + +We will use a simple console app to demonstrate how to connect +to Redis from Qt. Create the app project in Qt Creator using the +**File > New Project** command. The generated source code is a +single C++ file, called `main.cpp`, that uses a +[`QCoreApplication`](https://doc.qt.io/qt-6/qcoreapplication.html) +object to handle the main event loop. Although it will compile and run, +it doesn't do anything useful at this stage. + +## Add `hiredis` files + +Build `hiredis` if you have not already done so (see +[Build and install]({{< relref "/develop/clients/hiredis#build-and-install" >}}) +for more information). + +You should also make the `libhiredis` library available to the project. For example, +if you have used the default option of [`cmake`](https://cmake.org/) as the project +build tool and you have installed the `.dylib` or `.so` file for `hiredis` in `/usr/local/lib`, +you should add the following lines to the `CMakeLists.txt` file: + +``` +add_library(hiredis SHARED IMPORTED) +set_property(TARGET hiredis PROPERTY + IMPORTED_LOCATION "/usr/local/lib/libhiredis.dylib") +``` + +You should also modify the `target_link_libraries` directive to include +`hiredis`: + +``` +target_link_libraries(ConsoleTest Qt${QT_VERSION_MAJOR}::Core hiredis) +``` + +## Add code to access Redis + +You can add a class using the **Add new** context menu +on the project folder in Qt Creator. The sections below give +examples of the code you should add to this class to +connect to Redis. The code is separated into header and +implementation files. + +### Header file + +The header file for a class called `RedisExample` is shown below. +An explanation follows the code. + +```c++ +// redisexample.h + +#ifndef REDISEXAMPLE_H +#define REDISEXAMPLE_H + +#include + +#include +#include +#include + + +class RedisExample : public QObject +{ + Q_OBJECT + +public: + // Constructor + RedisExample(const char *keyForRedis, const char *valueForRedis, QObject *parent = 0) + :QObject(parent), m_key(keyForRedis), m_value(valueForRedis) {} + +public slots: + // Slot method to hold the code that connects to Redis and issues + // commands. + void run(); + +signals: + // Signal to indicate that our code has finished executing. + void finished(); + +public: + // Method to close the Redis connection and signal that we've + // finished. + void finish(); + +private: + const char *m_key; // Key for Redis string. + const char *m_value; // Value for Redis string. + redisAsyncContext *m_ctx; // Redis connection context. + RedisQtAdapter m_adapter; // Adapter to let `hiredis` work with Qt. +}; + +#endif // REDISEXAMPLE_H +``` + +[`QObject`](https://doc.qt.io/qt-6/qobject.html) is a key Qt class that +implements the [Object model](https://doc.qt.io/qt-6/object.html) for +communication between objects. When you create your class in Qt Creator, +you can specify that you want it to be a subclass of `QObject` (this will +add the appropriate header files and include the `Q_OBJECT` macro in the +class declaration). 
+ +The `QObject` communication model uses some instance methods as *signals* +to report events and others as *slots* to act as callbacks that process the +events (see [Signals and slots](https://doc.qt.io/qt-6/signalsandslots.html) +for an introduction). The Qt [meta-object compiler](https://doc.qt.io/qt-6/moc.html) +recognizes the non-standard C++ access specifiers `signals:` and `slots:` in the +class declaration and adds extra code for them during compilation to enable +the communication mechanism. + +In our class, there is a `run()` slot that will implement the code to access Redis. +The code eventually emits a `finished()` signal when it is complete to indicate that +the app should exit. + +Our simple example code just sets and gets a Redis +[string]({{< relref "/develop/data-types/strings" >}}) key. The class contains +private attributes for the key and value (following the Qt `m_xxx` naming convention +for class members). These are set by the constructor along with a call to the +`QObject` constructor. The other attributes represent the connection context for +Redis (which should generally be +[asynchronous]({{< relref "/develop/clients/hiredis/connect#asynchronous-connection" >}}) +for a Qt app) and an adapter object that `hiredis` uses to integrate with Qt. + +### Implementation file + +The file that implements the methods declared in the header is shown +below. A full explanation follows the code. + +```c++ +// redisexample.cpp + +#include + +#include "redisexample.h" + + +void RedisExample::finish() { + // Disconnect gracefully. + redisAsyncDisconnect(m_ctx); + + // Emit the `finished()` signal to indicate that the + // execution is complete. + emit finished(); +} + + +// Callback used by our `GET` command in the `run()` method. +void getCallback(redisAsyncContext *, void * r, void * privdata) { + + // Cast data pointers to their appropriate types. + redisReply *reply = static_cast(r); + RedisExample *ex = static_cast(privdata); + + if (reply == nullptr || ex == nullptr) { + return; + } + + std::cout << "Value: " << reply->str << std::endl; + + // Close the Redis connection and quit the app. + ex->finish(); +} + + +void RedisExample::run() { + // Open the connection to Redis. + m_ctx = redisAsyncConnect("localhost", 6379); + + if (m_ctx->err) { + std::cout << "Error: " << m_ctx->errstr << std::endl; + finish(); + } + + // Configure the connection to work with Qt. + m_adapter.setContext(m_ctx); + + // Issue some simple commands. For the `GET` command, pass a + // callback function and a pointer to this object instance + // so that we can access the object's members from the callback. + redisAsyncCommand(m_ctx, NULL, NULL, "SET %s %s", m_key, m_value); + redisAsyncCommand(m_ctx, getCallback, this, "GET %s", m_key); +} +``` + +The code that accesses Redis is in the `run()` method (recall that this +implements a Qt slot that will be called in response to a signal). The +code connects to Redis and stores the connection context pointer in the +`m_ctx` attribute of the class instance. The call to `m_adapter.setContext()` +initializes the Qt support for the context. Note that we need an +asynchronous connection for Qt. See +[Asynchronous connection]({{< relref "/develop/clients/hiredis/connect#asynchronous-connection" >}}) +for more information. + +The code then issues two Redis commands to [`SET`]({{< relref "/commands/set" >}}) +the string key and value that were supplied using the class's constructor. 
We are +not interested in the response returned by this command, but we are interested in the +response from the [`GET`]({{< relref "/commands/get" >}}) command that follows it. +Because the commands are asynchronous, we need to set a callback to handle +the `GET` response when it arrives. In the `redisAsyncCommand()` call, we pass +a pointer to our `getCallback()` function and also pass a pointer to the +`RedisExample` instance. This is a custom data field that will simply +be passed on to the callback when it executes (see +[Construct asynchronous commands]({{< relref "/develop/clients/hiredis/issue-commands#construct-asynchronous-commands" >}}) +for more information). + +The code in the `getCallback()` function starts by casting the reply pointer +parameter to [`redisReply`]({{< relref "/develop/clients/hiredis/handle-replies" >}}) +and the custom data pointer to `RedisExample`. Here, the example just prints +the reply string to the console, but you can process it in any way you like. +You can add methods to your class and call them within the callback using the +custom data pointer passed during the `redisAsyncCommand()` call. Here, we +simply use the pointer to call the `finish()` method. + +The `finish()` method calls +`redisAsyncDisconnect()` to close the connection and then uses the +Qt signalling mechanism to emit the `finished()` signal. You may need to +process several commands with a particular connection context, but you should +close it from a callback when you have finished using it. + +### Main program + +To access the `RedisExample` class, you should use code like the +following in the `main()` function defined in `main.cpp`: + +```c++ +#include +#include + +#include "redisexample.h" + + +int main(int argc, char *argv[]) +{ + QCoreApplication app(argc, argv); + + // Instance of our object. + RedisExample r("url", "https://redis.io/"); + + // Call the `run()` slot on our `RedisExample` instance to + // run our Redis commands. + QTimer::singleShot(0, &r, SLOT(run())); + + // Set up a communication connection between our `finished()` + // signal and the application's `quit()` slot. + QObject::connect(&r, SIGNAL(finished()), &app, SLOT(quit())); + + // Start the app's main event loop. + return app.exec(); +} +``` + +This creates the [`QCoreApplication`](https://doc.qt.io/qt-6/qcoreapplication.html) +instance that manages the main event loop for a console app. It +then creates the instance of `RedisExample` with the key ("url") and +value ("https://redis.io/") for our Redis string. + +The two lines below set up the `QObject` communication mechanism +for the app. The call to +[`QTimer::singleShot()`](https://doc.qt.io/qt-6/qtimer.html#singleShot-2) +activates the `run()` +slot method on our `RedisExample` instance. The +[`QObject::connect()`](https://doc.qt.io/qt-6/qobject.html#connect-5) +call creates a communication link between the `finished()` signal of +out `RedisExample` instance and the `quit()` slot of our +`QCoreApplication` instance. This quits the application event loop and +exits the app when the `finished()` signal is emitted by the +`RedisExample` object. This happens when the `finish()` method is called +at the end of the `GET` command callback. + +## Run the code + +When you have added the code, you can run it from the **Build** menu of +Qt Creator or from the toolbar at the left hand side of the window. +Assuming the connection to Redis succeeds, it will print the message +`Value: https://redis.io/` and quit. 
You can use the +[`KEYS`]({{< relref "/commands/keys" >}}) command from +[`redis-cli`]({{< relref "/develop/tools/cli" >}}) or +[Redis Insight]({{< relref "/develop/tools/insight" >}}) to check +that the "url" string key was added to the Redis database. + +## Key information + +There are many ways you could use Redis with a Qt app, but our example +demonstrates some techniques that are broadly useful: + +- Use the `QObject` communication mechanism to simplify your code. +- Use the `hiredis` asynchronous API. Add a `RedisQtAdapter` instance + to your code and ensure you call its `setContext()` method to + initialize it before issuing Redis commands. +- Place all code and data you need to interact with Redis + (including the connection context) in a single + class or ensure it is available from a class via pointers and + Qt signals. Pass a pointer to an instance of your class in the + custom data parameter when you issue a Redis command with + `redisAsyncCommand()` and use this to process the reply or + issue more commands from the callback. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your C application to a Redis database. +linkTitle: hiredis (C) +title: hiredis guide (C) +weight: 9 +--- + +[`hiredis`](https://github.com/redis/hiredis) is the +[C language](https://en.wikipedia.org/wiki/C_(programming_language)) +client for Redis. +The sections below explain how to install `hiredis` and connect your application +to a Redis database. + +`hiredis` requires a running Redis or [Redis Stack]({{< relref "/operate/oss_and_stack/install/install-stack/" >}}) server. See [Getting started]({{< relref "/operate/oss_and_stack/install/" >}}) for Redis installation instructions. + +## Build and install + +Clone or download the `hiredis` source from the [Github repository](https://github.com/redis/hiredis). +Then, in a terminal, go into the `hiredis` folder and run the `make` command to build +the dynamically-loaded library for `hiredis` (this has the name `libhiredis.dylib` on +MacOS and `libhiredis.so` on Linux). You can copy this library to your +project folder or run `sudo make install` to install it to `/usr/local/lib`. + +## Connect and test + +The code in the example below connects to the server, stores and retrieves +a string key using [`SET`]({{< relref "/commands/set" >}}) and +[`GET`]({{< relref "/commands/get" >}}), and then finally closes the +connection. An explanation of the code follows the example. + +```c +#include + +#include + +int main() { + // The `redisContext` type represents the connection + // to the Redis server. Here, we connect to the + // default host and port. + redisContext *c = redisConnect("127.0.0.1", 6379); + + // Check if the context is null or if a specific + // error occurred. + if (c == NULL || c->err) { + if (c != NULL) { + printf("Error: %s\n", c->errstr); + // handle error + } else { + printf("Can't allocate redis context\n"); + } + + exit(1); + } + + // Set a string key. + redisReply *reply = redisCommand(c, "SET foo bar"); + printf("Reply: %s\n", reply->str); // >>> Reply: OK + freeReplyObject(reply); + + // Get the key we have just stored. + reply = redisCommand(c, "GET foo"); + printf("Reply: %s\n", reply->str); // >>> Reply: bar + freeReplyObject(reply); + + // Close the connection. + redisFree(c); +} +``` + +For a real project, you would build your code with a makefile, but for +this simple test, you can just place it in a file called `main.c` and +build it with the following command. 
(If you didn't install `hiredis` +using `make install`, then you should also use the `-I` option to +specify the folder that contains the `hiredis` headers.) + +```bash +cc main.c -L/usr/local/lib -lhiredis +``` + +The default executable filename is `a.out`. If you run this file from +the terminal, you should see the following output: + +``` +% ./a.out +Reply: OK +Reply: bar +``` + +The code first uses `redisConnect()` to open the connection for +all subsequent commands to use. See +[Connect]({{< relref "/develop/clients/hiredis/connect" >}}) for +more information about connecting to Redis. + +The `redisCommand()` function +issues commands to the server, each of which returns a +`redisReply` pointer. Here, the reply is a string, which you can +access using the `str` field of the reply. The `redisCommand()` +call allocates memory for the reply, so you should free this +with `freeReplyObject()` when you have finished using it. +See [Issue commands]({{< relref "/develop/clients/hiredis/issue-commands" >}}) +and [Handle replies]({{< relref "/develop/clients/hiredis/handle-replies" >}}) +for more information. + +Finally, you should close the connection to Redis with a +call to `redisFree()`. This is not strictly necessary +for this short test program, but real-world code will typically +open and use many connections. You must free them after using them +to prevent errors. + +## More information + +The [`hiredis`](https://github.com/redis/hiredis) Github repository contains +examples and details that may be useful if you are using `hiredis` to +implement a higher-level client for another programming language. There are +also examples showing how to use `hiredis` adapter headers to integrate with +various event handling frameworks. + +See the other pages in this section for more information and examples. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to index and query vector embeddings with Redis +linkTitle: Index and query vectors +title: Index and query vectors +weight: 30 +--- + +[Redis Query Engine]({{< relref "/develop/interact/search-and-query" >}}) +lets you index vector fields in [hash]({{< relref "/develop/data-types/hashes" >}}) +or [JSON]({{< relref "/develop/data-types/json" >}}) objects (see the +[Vectors]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}) +reference page for more information). +Among other things, vector fields can store *text embeddings*, which are AI-generated vector +representations of the semantic information in pieces of text. The +[vector distance]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#distance-metrics" >}}) +between two embeddings indicates how similar they are semantically. By comparing the +similarity of an embedding generated from some query text with embeddings stored in hash +or JSON fields, Redis can retrieve documents that closely match the query in terms +of their meaning. + +The example below uses the [HuggingFace](https://huggingface.co/) model +[`all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) +to generate the vector embeddings to store and index with Redis Query Engine. +The code is first demonstrated for hash documents with a +separate section to explain the +[differences with JSON documents](#differences-with-json-documents). + +## Initialize + +You can use the [TransformersPHP](https://transformers.codewithkyrian.com/) +library to create the vector embeddings. 
Install the library with the following +command: + +```bash +composer require codewithkyrian/transformers +``` + +## Import dependencies + +Import the following classes and function in your source file: + +```php +}}) +call throws an exception if the index doesn't already exist, which is +why you need the `try...catch` block.) + +```php + $client = new Predis\Client([ + 'host' => 'localhost', + 'port' => 6379, +]); + +try { + $client->ftdropindex("vector_idx"); +} catch (Exception $e){} +``` + +Next, create the index. +The schema in the example below includes three fields: the text content to index, a +[tag]({{< relref "/develop/interact/search-and-query/advanced-concepts/tags" >}}) +field to represent the "genre" of the text, and the embedding vector generated from +the original text content. The `embedding` field specifies +[HNSW]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#hnsw-index" >}}) +indexing, the +[L2]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#distance-metrics" >}}) +vector distance metric, `Float32` values to represent the vector's components, +and 384 dimensions, as required by the `all-MiniLM-L6-v2` embedding model. + +The `CreateArguments` parameter to [`ftcreate()`]({{< relref "/commands/ft.create" >}}) +specifies hash objects for storage and a prefix `doc:` that identifies the hash objects +to index. + +```php +$schema = [ + new TextField("content"), + new TagField("genre"), + new VectorField( + "embedding", + "HNSW", + [ + "TYPE", "FLOAT32", + "DIM", 384, + "DISTANCE_METRIC", "L2" + ] + ) +]; + +$client->ftcreate("vector_idx", $schema, + (new CreateArguments()) + ->on('HASH') + ->prefix(["doc:"]) +); +``` + +## Add data + +You can now supply the data objects, which will be indexed automatically +when you add them with [`hmset()`]({{< relref "/commands/hset" >}}), as long as +you use the `doc:` prefix specified in the index definition. + +Use the `$extractor()` function as shown below to create the embedding that +represents the `content` field. Note that `$extractor()` can generate multiple +embeddings from multiple strings parameters at once, so it returns an array of +embedding vectors. Here, there is only one embedding in the returned array. +The `normalize:` and `pooling:` named parameters relate to details +of the embedding model (see the +[`all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) +page for more information). + +To add an embedding as a field of a hash object, you must encode the +vector array as a binary string. The built-in +[`pack()`](https://www.php.net/manual/en/function.pack.php) function is a convenient +way to do this in PHP, using the `g*` format specifier to denote a packed +array of `float` values. Note that if you are using +[JSON]({{< relref "/develop/data-types/json" >}}) +objects to store your documents instead of hashes, then you should store +the `float` array directly without first converting it to a binary +string (see [Differences with JSON documents](#differences-with-json-documents) +below). 
+ +```php +$content = "That is a very happy person"; +$emb = $extractor($content, normalize: true, pooling: 'mean'); + +$client->hmset("doc:0",[ + "content" => $content, + "genre" => "persons", + "embedding" => pack('g*', ...$emb[0]) +]); + +$content = "That is a happy dog"; +$emb = $extractor($content, normalize: true, pooling: 'mean'); + +$client->hmset("doc:1",[ + "content" => $content, + "genre" => "pets", + "embedding" => pack('g*', ...$emb[0]) +]); + +$content = "Today is a sunny day"; +$emb = $extractor($content, normalize: true, pooling: 'mean'); + +$client->hmset("doc:2",[ + "content" => $content, + "genre" => "weather", + "embedding" => pack('g*', ...$emb[0]) +]); +``` + +## Run a query + +After you have created the index and added the data, you are ready to run a query. +To do this, you must create another embedding vector from your chosen query +text. Redis calculates the vector distance between the query vector and each +embedding vector in the index as it runs the query. You can request the results to be +sorted to rank them in order of ascending distance. + +The code below creates the query embedding using the `$extractor()` function, as with +the indexing, and passes it as a parameter when the query executes (see +[Vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) +for more information about using query parameters with embeddings). +The query is a +[K nearest neighbors (KNN)]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#knn-vector-search" >}}) +search that sorts the results in order of vector distance from the query vector. + +The results are returned as an array with the number of results in the +first element. The remaining elements are alternating pairs with the +key of the returned document (for example, `doc:0`) first, followed by an array containing +the fields you requested (again as alternating key-value pairs). + +```php +$queryText = "That is a happy person"; +$queryEmb = $extractor($queryText, normalize: true, pooling: 'mean'); + +$result = $client->ftsearch( + "vector_idx", + '*=>[KNN 3 @embedding $vec AS vector_distance]', + new SearchArguments() + ->addReturn(1, "vector_distance") + ->dialect("2") + ->params([ + "vec", pack('g*', ...$queryEmb[0]) + ]) + ->sortBy("vector_distance") +); + +$numResults = $result[0]; +echo "Number of results: $numResults" . PHP_EOL; +// >>> Number of results: 3 + +for ($i = 1; $i < ($numResults * 2 + 1); $i += 2) { + $key = $result[$i]; + echo "Key: $key" . PHP_EOL; + $fields = $result[$i + 1]; + echo "Field: {$fields[0]}, Value: {$fields[1]}" . PHP_EOL; +} +// >>> Key: doc:0 +// >>> Field: vector_distance, Value: 3.76152896881 +// >>> Key: doc:1 +// >>> Field: vector_distance, Value: 18.6544265747 +// >>> Key: doc:2 +// >>> Field: vector_distance, Value: 44.6189727783 +``` + +Assuming you have added the code from the steps above to your source file, +it is now ready to run, but note that it may take a while to complete when +you run it for the first time (which happens because the tokenizer must download the +`all-MiniLM-L6-v2` model data before it can +generate the embeddings). 
When you run the code, it outputs the following result text: + +``` +Number of results: 3 +Key: doc:0 +Field: vector_distance, Value: 3.76152896881 +Key: doc:1 +Field: vector_distance, Value: 18.6544265747 +Key: doc:2 +Field: vector_distance, Value: 44.6189727783 +``` + +Note that the results are ordered according to the value of the `distance` +field, with the lowest distance indicating the greatest similarity to the query. +As you would expect, the text *"That is a very happy person"* (from the `doc:0` +document) +is the result judged to be most similar in meaning to the query text +*"That is a happy person"*. + +## Differences with JSON documents + +Indexing JSON documents is similar to hash indexing, but there are some +important differences. JSON allows much richer data modeling with nested fields, so +you must supply a [path]({{< relref "/develop/data-types/json/path" >}}) in the schema +to identify each field you want to index. However, you can declare a short alias for each +of these paths to avoid typing it in full for +every query. Also, you must specify `JSON` with the `on()` option when you create the index. + +The code below shows these differences, but the index is otherwise very similar to +the one created previously for hashes: + +```php +$jsonSchema = [ + new TextField("$.content", "content"), + new TagField("$.genre", "genre"), + new VectorField( + "$.embedding", + "HNSW", + [ + "TYPE", "FLOAT32", + "DIM", 384, + "DISTANCE_METRIC", "L2" + ], + "embedding", + ) +]; + +$client->ftcreate("vector_json_idx", $jsonSchema, + (new CreateArguments()) + ->on('JSON') + ->prefix(["jdoc:"]) +); +``` + +Use [`jsonset()`]({{< relref "/commands/json.set" >}}) to add the data +instead of [`hmset()`]({{< relref "/commands/hset" >}}). The arrays +that specify the fields have roughly the same structure as the ones used for +`hmset()` but you should use the standard library function +[`json_encode()`](https://www.php.net/manual/en/function.json-encode.php) +to generate a JSON string representation of the array. + +An important difference with JSON indexing is that the vectors are +specified using arrays instead of binary strings. Simply add the +embedding as an array field without using the `pack()` function as you +would with a hash. + +```php +$content = "That is a very happy person"; +$emb = $extractor($content, normalize: true, pooling: 'mean'); + +$client->jsonset("jdoc:0", "$", + json_encode( + [ + "content" => $content, + "genre" => "persons", + "embedding" => $emb[0] + ], + JSON_THROW_ON_ERROR + ) +); + +$content = "That is a happy dog"; +$emb = $extractor($content, normalize: true, pooling: 'mean'); + +$client->jsonset("jdoc:1","$", + json_encode( + [ + "content" => $content, + "genre" => "pets", + "embedding" => $emb[0] + ], + JSON_THROW_ON_ERROR + ) +); + +$content = "Today is a sunny day"; +$emb = $extractor($content, normalize: true, pooling: 'mean'); + +$client->jsonset("jdoc:2", "$", + json_encode( + [ + "content" => $content, + "genre" => "weather", + "embedding" => $emb[0] + ], + JSON_THROW_ON_ERROR + ) +); +``` + +The query is almost identical to the one for the hash documents. This +demonstrates how the right choice of aliases for the JSON paths can +save you having to write complex queries. An important thing to notice +is that the vector parameter for the query is still specified as a +binary string (using the `pack()` function), even though the data for +the `embedding` field of the JSON was specified as an array. 
+ +```php +$queryText = "That is a happy person"; +$queryEmb = $extractor($queryText, normalize: true, pooling: 'mean'); + +$result = $client->ftsearch( + "vector_json_idx", + '*=>[KNN 3 @embedding $vec AS vector_distance]', + new SearchArguments() + ->addReturn(1, "vector_distance") + ->dialect("2") + ->params([ + "vec", pack('g*', ...$queryEmb[0]) + ]) + ->sortBy("vector_distance") +); +``` + +Apart from the `jdoc:` prefixes for the keys, the result from the JSON +query is the same as for hash: + +``` +Number of results: 3 +Key: jdoc:0 +Field: vector_distance, Value: 3.76152896881 +Key: jdoc:1 +Field: vector_distance, Value: 18.6544265747 +Key: jdoc:2 +Field: vector_distance, Value: 44.6189727783 +``` + +## Learn more + +See +[Vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) +for more information about the indexing options, distance metrics, and query format +for vectors. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your PHP application to a Redis database +linkTitle: Connect +title: Connect to the server +weight: 10 +--- + +## Basic connection + +Connect to a locally-running server on the standard port (6379) +with the following code: + +```php + 'tcp', + 'host' => '127.0.0.1', + 'port' => 6379, + 'password' => '', + 'database' => 0, + ]); +``` + +Store and retrieve a simple string to test the connection: + +```php +echo $r->set('foo', 'bar'), PHP_EOL; +// >>> OK + +echo $r->get('foo'), PHP_EOL; +// >>> bar +``` + +Store and retrieve a [hash]({{< relref "/develop/data-types/hashes" >}}) +object: + +```php +$r->hset('user-session:123', 'name', 'John'); +$r->hset('user-session:123', 'surname', 'Smith'); +$r->hset('user-session:123', 'company', 'Redis'); +$r->hset('user-session:123', 'age', 29); + +echo var_export($r->hgetall('user-session:123')), PHP_EOL; +/* >>> +array ( + 'name' => 'John', + 'surname' => 'Smith', + 'company' => 'Redis', + 'age' => '29', +) +*/ +``` + +## Connect to a Redis cluster + +To connect to a Redis cluster, specify one or more of the nodes in +the `clusterNodes` parameter and set `'cluster'=>'redis'` in +`options`: + +```php +$clusterNodes = [ + 'tcp://127.0.0.1:30001', // Node 1 + 'tcp://127.0.0.1:30002', // Node 2 + 'tcp://127.0.0.1:30003', // Node 3 +]; +$options = ['cluster' => 'redis']; + +// Create a Predis client for the cluster +$rc = new PredisClient($clusterNodes, $options); + +echo $rc->cluster('nodes'), PHP_EOL; +/* >>> +d8773e888e92d015b7c52fc66798fd6815afefec 127.0.0.1:30004@40004 slave cde97d1f7dce13e9253ace5cafd3fb0aa67cda63 0 1730713764217 1 connected +58fe1346de4c425d60db24e9b153926fbde0d174 127.0.0.1:30002@40002 master - 0 1730713763361 2 connected 5461-10922 +015ecc8148a05377dda22f19921d16efcdd6d678 127.0.0.1:30006@40006 slave c019b75d8b52e83e7e52724eccc716ac553f71d6 0 1730713764218 3 connected +aca365963a72642e6ae0c9503aabf3be5c260806 127.0.0.1:30005@40005 slave 58fe1346de4c425d60db24e9b153926fbde0d174 0 1730713763363 2 connected +c019b75d8b52e83e7e52724eccc716ac553f71d6 127.0.0.1:30003@40003 myself,master - 0 1730713764000 3 connected 10923-16383 +cde97d1f7dce13e9253ace5cafd3fb0aa67cda63 127.0.0.1:30001@40001 master - 0 1730713764113 1 connected 0-5460 +*/ + +echo $rc->set('foo', 'bar'), PHP_EOL; +// >>> OK +echo $rc->get('foo'), PHP_EOL; +// >>> bar +``` + +## Connect to your production Redis with TLS + +When you deploy your application, use TLS and follow the +[Redis security]({{< relref 
"/operate/oss_and_stack/management/security/" >}}) +guidelines. + +Use the following commands to generate the client certificate and private key: + +```bash +openssl genrsa -out redis_user_private.key 2048 +openssl req -new -key redis_user_private.key -out redis_user.csr +openssl x509 -req -days 365 -in redis_user.csr -signkey redis_user_private.key -out redis_user.crt +``` + +If you have the [Redis source folder](https://github.com/redis/redis) available, +you can also generate the certificate and private key with these commands: + +```bash +./utils/gen-test-certs.sh +./src/redis-server --tls-port 6380 --port 0 --tls-cert-file ./tests/tls/redis.crt --tls-key-file ./tests/tls/redis.key --tls-ca-cert-file ./tests/tls/ca.crt +``` + +Pass this information during connection using the `ssl` section of `options`: + +```php +$options = [ + 'scheme' => 'tls', // Use 'tls' for SSL connections + 'host' => '127.0.0.1', // Redis server hostname + 'port' => 6379, // Redis server port + 'username' => 'default', // Redis username + 'password' => '', // Redis password + 'options' => [ + 'ssl' => [ + 'verify_peer' => true, // Verify the server's SSL certificate + 'cafile' => './redis_ca.pem', // Path to CA certificate + 'local_cert' => './redis_user.crt', // Path to client certificate + 'local_pk' => './redis_user_private.key', // Path to client private key + ], + ], +]; + +$tlsConnection = new PredisClient($options); + +echo $tlsConnection->set('foo', 'bar'), PHP_EOL; +// >>> OK +echo $tlsConnection->get('foo'), PHP_EOL; +// >>> bar +```--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use the Redis query engine with JSON and hash documents. +linkTitle: Index and query documents +title: Index and query documents +weight: 20 +--- + +This example shows how to create a +[search index]({{< relref "/develop/interact/search-and-query/indexing" >}}) +for [JSON]({{< relref "/develop/data-types/json" >}}) documents and +run queries against the index. It then goes on to show the slight differences +in the equivalent code for [hash]({{< relref "/develop/data-types/hashes" >}}) +documents. + +## Initialize + +Make sure that you have [Redis Open Source]({{< relref "/operate/oss_and_stack/" >}}) +or another Redis server available. Also install the +[`Predis`]({{< relref "/develop/clients/php" >}}) client library if you +haven't already done so. + +Add the following dependencies: + +```php + 'Paul John', + 'email' => 'paul.john@example.com', + 'age' => 42, + 'city' => 'London', +], JSON_THROW_ON_ERROR); + +$user2 = json_encode([ + 'name' => 'Eden Zamir', + 'email' => 'eden.zamir@example.com', + 'age' => 29, + 'city' => 'Tel Aviv', +], JSON_THROW_ON_ERROR); + +$user3 = json_encode([ + 'name' => 'Paul Zamir', + 'email' => 'paul.zamir@example.com', + 'age' => 35, + 'city' => 'Tel Aviv', +], JSON_THROW_ON_ERROR); +``` + +## Add the index + +Connect to your Redis database. The code below shows the most +basic connection but see +[Connect to the server]({{< relref "/develop/clients/php/connect" >}}) +to learn more about the available connection options. + +```php +$r = new PredisClient([ + 'scheme' => 'tcp', + 'host' => '127.0.0.1', + 'port' => 6379, + 'password' => '', + 'database' => 0, + ]); +``` + +Create an +[index]({{< relref "/develop/interact/search-and-query/indexing" >}}). +In this example, only JSON documents with the key prefix `user:` are indexed. 
+For more information, see +[Query syntax]({{< relref "/develop/interact/search-and-query/query/" >}}). + +```php +$schema = [ + new TextField('$.name', 'name'), + new TagField('$.city', 'city'), + new NumericField('$.age', "age"), +]; + +try { +$r->ftCreate("idx:users", $schema, + (new CreateArguments()) + ->on('JSON') + ->prefix(["user:"])); +} +catch (Exception $e) { + echo $e->getMessage(), PHP_EOL; +} +``` + +## Add the data + +Add the three sets of user data to the database as +[JSON]({{< relref "/develop/data-types/json" >}}) objects. +If you use keys with the `user:` prefix then Redis will index the +objects automatically as you add them: + +```php +$r->jsonset('user:1', '$', $user1); +$r->jsonset('user:2', '$', $user2); +$r->jsonset('user:3', '$', $user3); +``` + +## Query the data + +You can now use the index to search the JSON objects. The +[query]({{< relref "/develop/interact/search-and-query/query" >}}) +below searches for objects that have the text "Paul" in any field +and have an `age` value in the range 30 to 40: + +```php +$res = $r->ftSearch("idx:users", "Paul @age:[30 40]"); +echo json_encode($res), PHP_EOL; +// >>> [1,"user:3",["$","{\"name\":\"Paul Zamir\",\"email\":\"paul.zamir@example.com\",\"age\":35,\"city\":\"London\"}"]] +``` + +Specify query options to return only the `city` field: + +```php +$arguments = new SearchArguments(); +$arguments->addReturn(3, '$.city', true, 'thecity'); +$arguments->dialect(2); +$arguments->limit(0, 5); + +$res = $r->ftSearch("idx:users", "Paul", $arguments); + +echo json_encode($res), PHP_EOL; +// >>> [2,"user:1",["thecity","London"],"user:3",["thecity","Tel Aviv"]] +``` + +Use an +[aggregation query]({{< relref "/develop/interact/search-and-query/query/aggregation" >}}) +to count all users in each city. + +```php +$ftAggregateArguments = (new AggregateArguments()) +->groupBy('@city') +->reduce('COUNT', true, 'count'); + +$res = $r->ftAggregate('idx:users', '*', $ftAggregateArguments); +echo json_encode($res), PHP_EOL; +// >>> [2,["city","London","count","1"],["city","Tel Aviv","count","2"]] +``` + +## Differences with hash documents + +Indexing for hash documents is very similar to JSON indexing but you +need to specify some slightly different options. + +When you create the schema for a hash index, you don't need to +add aliases for the fields, since you use the basic names to access +the fields anyway. Also, you must use `HASH` for the `On()` option +when you create the index. The code below shows these changes with +a new index called `hash-idx:users`, which is otherwise the same as +the `idx:users` index used for JSON documents in the previous examples. + +```php +$hashSchema = [ + new TextField('name'), + new TagField('city'), + new NumericField('age'), +]; + +try { +$r->ftCreate("hash-idx:users", $hashSchema, + (new CreateArguments()) + ->on('HASH') + ->prefix(["huser:"])); +} +catch (Exception $e) { + echo $e->getMessage(), PHP_EOL; +} +``` + +You use [`hmset()`]({{< relref "/commands/hset" >}}) to add the hash +documents instead of [`jsonset()`]({{< relref "/commands/json.set" >}}). +Supply the fields as an array directly, without using +[`json_encode()`](https://www.php.net/manual/en/function.json-encode.php). 
+ +```php +$r->hmset('huser:1', [ + 'name' => 'Paul John', + 'email' => 'paul.john@example.com', + 'age' => 42, + 'city' => 'London', +]); + +$r->hmset('huser:2', [ + 'name' => 'Eden Zamir', + 'email' => 'eden.zamir@example.com', + 'age' => 29, + 'city' => 'Tel Aviv', +]); + +$r->hmset('huser:3', [ + 'name' => 'Paul Zamir', + 'email' => 'paul.zamir@example.com', + 'age' => 35, + 'city' => 'Tel Aviv', +]); +``` + +The query commands work the same here for hash as they do for JSON (but +the name of the hash index is different). The format of the result is +almost the same except that the fields are returned directly in the +result array rather than in a JSON string with `$` as its key: + +```php +$res = $r->ftSearch("hash-idx:users", "Paul @age:[30 40]"); +echo json_encode($res), PHP_EOL; +// >>> [1,"huser:3",["age","35","city","Tel Aviv","email","paul.zamir@example.com","name","Paul Zamir"]] +``` + +## More information + +See the [Redis query engine]({{< relref "/develop/interact/search-and-query" >}}) docs +for a full description of all query features with examples. +--- +aliases: /develop/connect/clients/php +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your PHP application to a Redis database +linkTitle: Predis (PHP) +title: Predis guide (PHP) +weight: 8 +--- + +[`Predis`](https://github.com/predis/predis) is the recommended [PHP](https://php.net/) +client for Redis. +The sections below explain how to install `Predis` and connect your application to a Redis database. + +{{< note >}}Although we provide basic documentation for `Predis`, it is a third-party +client library and is not developed or supported directly by Redis. +{{< /note >}} + +`Predis` requires a running Redis server. See [here]({{< relref "/operate/oss_and_stack/install/" >}}) for Redis Open Source installation instructions. + +## Install + +Use [Composer](https://getcomposer.org/) to install the `Predis` library +with the following command line: + +```bash +composer require predis/predis +``` + +## Connect and test + +Connect to a locally-running server on the standard port (6379) +with the following code: + +```php + 'tcp', + 'host' => '127.0.0.1', + 'port' => 6379, + 'password' => '', + 'database' => 0, + ]); +``` + +Store and retrieve a simple string to test the connection: + +```php +echo $r->set('foo', 'bar'), PHP_EOL; +// >>> OK + +echo $r->get('foo'), PHP_EOL; +// >>> bar +``` + +Store and retrieve a [hash]({{< relref "/develop/data-types/hashes" >}}) +object: + +```php +$r->hset('user-session:123', 'name', 'John'); +$r->hset('user-session:123', 'surname', 'Smith'); +$r->hset('user-session:123', 'company', 'Redis'); +$r->hset('user-session:123', 'age', 29); + +echo var_export($r->hgetall('user-session:123')), PHP_EOL; +/* >>> +array ( + 'name' => 'John', + 'surname' => 'Smith', + 'company' => 'Redis', + 'age' => '29', +) +*/ +``` + +## More information + +The [Predis wiki on Github](https://github.com/predis/predis/wiki) has +information about the different connection options you can use. + +See also the pages in this section for more information and examples: +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use Redis pipelines and transactions +linkTitle: Pipelines/transactions +title: Pipelines and transactions +weight: 50 +--- + +Redis lets you send a sequence of commands to the server together in a batch. 
+There are two types of batch that you can use: + +- **Pipelines** avoid network and processing overhead by sending several commands + to the server together in a single communication. The server then sends back + a single communication with all the responses. See the + [Pipelining]({{< relref "/develop/use/pipelining" >}}) page for more + information. +- **Transactions** guarantee that all the included commands will execute + to completion without being interrupted by commands from other clients. + See the [Transactions]({{< relref "/develop/interact/transactions" >}}) + page for more information. + +## Execute a pipeline + +To execute commands in a pipeline, you first create a pipeline object +and then add commands to it using methods that resemble the *asynchronous* +versions of the standard command methods +(for example, `StringSetAsync()` and `StringGetAsync()`). The commands are +buffered in the pipeline and only execute when you call the `Execute()` +method on the pipeline object. + +{{< clients-example pipe_trans_tutorial basic_pipe "C#" >}} +{{< /clients-example >}} + +## Execute a transaction + +A transaction works in a similar way to a pipeline. Create an +instance of the `Transaction` class, call async command methods +on that object, and then call the transaction object's +`Execute()` method to execute it. + +{{< clients-example pipe_trans_tutorial basic_trans "C#" >}} +{{< /clients-example >}} + +## Watch keys for changes + +Redis supports *optimistic locking* to avoid inconsistent updates +to different keys. The basic idea is to watch for changes to any +keys that you use in a transaction while you are are processing the +updates. If the watched keys do change, you must restart the updates +with the latest data from the keys. See +[Transactions]({{< relref "/develop/interact/transactions" >}}) +for more information about optimistic locking. + +The approach to optimistic locking that other clients use +(adding the [`WATCH`]({{< relref "/commands/watch" >}}) command +explicitly to a transaction) doesn't work well with the +[multiplexing]({{< relref "/develop/clients/pools-and-muxing" >}}) +system that `NRedisStack` uses. +Instead, `NRedisStack` relies on conditional execution of commands +to get a similar effect. + +Use the `AddCondition()` method to abort a transaction if a particular +condition doesn't hold throughout its execution. If the transaction +does abort then the `Execute()` method returns a `false` value, +but otherwise returns `true`. + +For example, the `KeyNotExists` condition aborts the transaction +if a specified key exists or is added by another client while the +transaction executes: + +{{< clients-example pipe_trans_tutorial trans_watch "C#" >}} +{{< /clients-example >}} + +You can also use a `When` condition on certain individual commands to +specify that they only execute when a certain condition holds +(for example, the command does not change an existing key). +See +[Conditional execution]({{< relref "/develop/clients/dotnet/condexec" >}}) +for a full description of transaction and command conditions. 
+--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to index and query vector embeddings with Redis +linkTitle: Index and query vectors +title: Index and query vectors +weight: 40 +--- + +[Redis Query Engine]({{< relref "/develop/interact/search-and-query" >}}) +lets you index vector fields in [hash]({{< relref "/develop/data-types/hashes" >}}) +or [JSON]({{< relref "/develop/data-types/json" >}}) objects (see the +[Vectors]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}) +reference page for more information). +Among other things, vector fields can store *text embeddings*, which are AI-generated vector +representations of the semantic information in pieces of text. The +[vector distance]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#distance-metrics" >}}) +between two embeddings indicates how similar they are semantically. By comparing the +similarity of an embedding generated from some query text with embeddings stored in hash +or JSON fields, Redis can retrieve documents that closely match the query in terms +of their meaning. + +In the example below, we use [Microsoft.ML](https://dotnet.microsoft.com/en-us/apps/ai/ml-dotnet) +to generate the vector embeddings to store and index with Redis Query Engine. +We also show how to adapt the code to use +[Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/embeddings?tabs=csharp) +for the embeddings. The code is first demonstrated for hash documents with a +separate section to explain the +[differences with JSON documents](#differences-with-json-documents). + +## Initialize + +The example is probably easiest to follow if you start with a new +console app, which you can create using the following command: + +```bash +dotnet new console -n VecQueryExample +``` + +In the app's project folder, add +[`NRedisStack`]({{< relref "/develop/clients/dotnet" >}}`): + +```bash +dotnet add package NRedisStack +``` + +Then, add the `Microsoft.ML` package. + +```bash +dotnet add package Microsoft.ML +``` + +If you want to try the optional +[Azure embedding](#generate-an-embedding-from-azure-openai) +described below, you should also add `Azure.AI.OpenAI`: + +``` +dotnet add package Azure.AI.OpenAI --prerelease +``` + +## Import dependencies + +Add the following imports to your source file: + +```csharp +// Redis connection and Query Engine. +using NRedisStack.RedisStackCommands; +using StackExchange.Redis; +using NRedisStack.Search; +using static NRedisStack.Search.Schema; +using NRedisStack.Search.Literals.Enums; + +// Text embeddings. +using Microsoft.ML; +using Microsoft.ML.Transforms.Text; +``` + +If you are using the Azure embeddings, also add: + +```csharp +// Azure embeddings. +using Azure; +using Azure.AI.OpenAI; +``` + +## Define a function to obtain the embedding model + +{{< note >}}Ignore this step if you are using an Azure OpenAI +embedding model. +{{< /note >}} + +A few steps are involved in initializing the embedding model +(known as a `PredictionEngine`, in Microsoft terminology), so +we declare a function to contain those steps together. +(See the Microsoft.ML docs for more information about the +[`ApplyWordEmbedding`](https://learn.microsoft.com/en-us/dotnet/api/microsoft.ml.textcatalog.applywordembedding?view=ml-dotnet) +method, including example code.) + +Note that we use two classes, `TextData` and `TransformedTextData`, to +specify the `PredictionEngine` model. 
C# syntax requires us to place these
+classes after the main code in a console app source file. The section
+[Declare `TextData` and `TransformedTextData`](#declare-textdata-and-transformedtextdata)
+below shows how to declare them.
+
+```csharp
+static PredictionEngine<TextData, TransformedTextData> GetPredictionEngine(){
+    // Create a new ML context, for ML.NET operations. It can be used for
+    // exception tracking and logging, as well as the source of randomness.
+    var mlContext = new MLContext();
+
+    // Create an empty list as the dataset
+    var emptySamples = new List<TextData>();
+
+    // Convert sample list to an empty IDataView.
+    var emptyDataView = mlContext.Data.LoadFromEnumerable(emptySamples);
+
+    // A pipeline for converting text into a 150-dimension embedding vector
+    var textPipeline = mlContext.Transforms.Text.NormalizeText("Text")
+        .Append(mlContext.Transforms.Text.TokenizeIntoWords("Tokens",
+            "Text"))
+        .Append(mlContext.Transforms.Text.ApplyWordEmbedding("Features",
+            "Tokens", WordEmbeddingEstimator.PretrainedModelKind
+            .SentimentSpecificWordEmbedding));
+
+    // Fit to data.
+    var textTransformer = textPipeline.Fit(emptyDataView);
+
+    // Create the prediction engine to get the embedding vector from the input text/string.
+    var predictionEngine = mlContext.Model.CreatePredictionEngine<TextData, TransformedTextData>(textTransformer);
+
+    return predictionEngine;
+}
+```
+
+## Define a function to generate an embedding
+
+{{< note >}}Ignore this step if you are using an Azure OpenAI
+embedding model.
+{{< /note >}}
+
+Our embedding model represents the vectors as an array of `float` values,
+but when you store vectors in a Redis hash object, you must encode the vector
+array as a `byte` string. To simplify this, we declare a
+`GetEmbedding()` function that applies the `PredictionEngine` model described
+[above](#define-a-function-to-obtain-the-embedding-model), and
+then encodes the returned `float` array as a `byte` string. If you are
+storing your documents as JSON objects instead of hashes, then you should
+use the `float` array for the embedding directly, without first converting
+it to a `byte` string (see [Differences with JSON documents](#differences-with-json-documents)
+below).
+
+```csharp
+static byte[] GetEmbedding(
+    PredictionEngine<TextData, TransformedTextData> model, string sentence
+)
+{
+    // Call the prediction API to convert the text into embedding vector.
+    var data = new TextData()
+    {
+        Text = sentence
+    };
+
+    var prediction = model.Predict(data);
+
+    // Convert prediction.Features to a binary blob
+    float[] floatArray = Array.ConvertAll(prediction.Features, x => (float)x);
+    byte[] byteArray = new byte[floatArray.Length * sizeof(float)];
+    Buffer.BlockCopy(floatArray, 0, byteArray, 0, byteArray.Length);
+
+    return byteArray;
+}
+```
+
+## Generate an embedding from Azure OpenAI
+
+{{< note >}}Ignore this step if you are using a Microsoft.ML
+embedding model.
+{{< /note >}}
+
+Azure OpenAI can be a convenient way to access an embedding model, because
+you don't need to manage and scale the server infrastructure yourself.
+
+You can create an Azure OpenAI service and deployment to serve embeddings of
+whatever type you need. Select your region, note the service endpoint and key,
+and add them where you see placeholders in the function below.
+See
+[Learn how to generate embeddings with Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/embeddings?tabs=csharp)
+for more information.
+
+```csharp
+private static byte[] GetEmbeddingFromAzure(string sentence){
+    Uri oaiEndpoint = new ("your-azure-openai-endpoint");
+    string oaiKey = "your-openai-key";
+
+    AzureKeyCredential credentials = new (oaiKey);
+    OpenAIClient openAIClient = new (oaiEndpoint, credentials);
+
+    EmbeddingsOptions embeddingOptions = new() {
+        DeploymentName = "your-deployment-name",
+        Input = { sentence },
+    };
+
+    // Generate the vector embedding
+    var returnValue = openAIClient.GetEmbeddings(embeddingOptions);
+
+    // Convert the array of floats to binary blob
+    float[] floatArray = Array.ConvertAll(returnValue.Value.Data[0].Embedding.ToArray(), x => (float)x);
+    byte[] byteArray = new byte[floatArray.Length * sizeof(float)];
+    Buffer.BlockCopy(floatArray, 0, byteArray, 0, byteArray.Length);
+    return byteArray;
+}
+```
+
+## Create the index
+
+Connect to Redis and delete any index previously created with the
+name `vector_idx`. (The `DropIndex()` call throws an exception if
+the index doesn't already exist, which is why you need the
+`try...catch` block.)
+
+```csharp
+var muxer = ConnectionMultiplexer.Connect("localhost:6379");
+var db = muxer.GetDatabase();
+
+try { db.FT().DropIndex("vector_idx");} catch {}
+```
+
+Next, create the index.
+The schema in the example below includes three fields: the text content to index, a
+[tag]({{< relref "/develop/interact/search-and-query/advanced-concepts/tags" >}})
+field to represent the "genre" of the text, and the embedding vector generated from
+the original text content. The `embedding` field specifies
+[HNSW]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#hnsw-index" >}})
+indexing, the
+[L2]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#distance-metrics" >}})
+vector distance metric, `Float32` values to represent the vector's components,
+and 150 dimensions, as required by our embedding model.
+
+The `FTCreateParams` object specifies hash objects for storage and a
+prefix `doc:` that identifies the hash objects we want to index.
+
+```csharp
+var schema = new Schema()
+    .AddTextField(new FieldName("content", "content"))
+    .AddTagField(new FieldName("genre", "genre"))
+    .AddVectorField("embedding", VectorField.VectorAlgo.HNSW,
+        new Dictionary<string, object>()
+        {
+            ["TYPE"] = "FLOAT32",
+            ["DIM"] = "150",
+            ["DISTANCE_METRIC"] = "L2"
+        }
+    );
+
+db.FT().Create(
+    "vector_idx",
+    new FTCreateParams()
+        .On(IndexDataType.HASH)
+        .Prefix("doc:"),
+    schema
+);
+```
+
+## Add data
+
+You can now supply the data objects, which will be indexed automatically
+when you add them with [`HashSet()`]({{< relref "/commands/hset" >}}), as long as
+you use the `doc:` prefix specified in the index definition.
+
+Firstly, create an instance of the `PredictionEngine` model using our
+`GetPredictionEngine()` function.
+You can then pass this to the `GetEmbedding()` function
+to create the embedding that represents the `content` field, as shown below.
+
+(If you are using an Azure OpenAI model for the embeddings, then
+use `GetEmbeddingFromAzure()` instead of `GetEmbedding()`, and note that
+the `PredictionModel` is managed by the server, so you don't need to create
+an instance yourself.)
+ +```csharp +var predEngine = GetPredictionEngine(); + +var sentence1 = "That is a very happy person"; + +HashEntry[] doc1 = { + new("content", sentence1), + new("genre", "persons"), + new("embedding", GetEmbedding(predEngine, sentence1)) +}; + +db.HashSet("doc:1", doc1); + +var sentence2 = "That is a happy dog"; + +HashEntry[] doc2 = { + new("content", sentence2), + new("genre", "pets"), + new("embedding", GetEmbedding(predEngine, sentence2)) +}; + +db.HashSet("doc:2", doc2); + +var sentence3 = "Today is a sunny day"; + +HashEntry[] doc3 = { + new("content", sentence3), + new("genre", "weather"), + new("embedding", GetEmbedding(predEngine, sentence3)) +}; + +db.HashSet("doc:3", doc3); +``` + +## Run a query + +After you have created the index and added the data, you are ready to run a query. +To do this, you must create another embedding vector from your chosen query +text. Redis calculates the vector distance between the query vector and each +embedding vector in the index as it runs the query. We can request the results to be +sorted to rank them in order of ascending distance. + +The code below creates the query embedding using the `GetEmbedding()` method, as with +the indexing, and passes it as a parameter when the query executes (see +[Vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) +for more information about using query parameters with embeddings). +The query is a +[K nearest neighbors (KNN)]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#knn-vector-search" >}}) +search that sorts the results in order of vector distance from the query vector. + +(As before, replace `GetEmbedding()` with `GetEmbeddingFromAzure()` if you are using +Azure OpenAI.) + +```csharp +var res = db.FT().Search("vector_idx", + new Query("*=>[KNN 3 @embedding $query_vec AS score]") + .AddParam("query_vec", GetEmbedding(predEngine, "That is a happy person")) + .ReturnFields( + new FieldName("content", "content"), + new FieldName("score", "score") + ) + .SetSortBy("score") + .Dialect(2)); + +foreach (var doc in res.Documents) { + var props = doc.GetProperties(); + var propText = string.Join( + ", ", + props.Select(p => $"{p.Key}: '{p.Value}'") + ); + + Console.WriteLine( + $"ID: {doc.Id}, Properties: [\n {propText}\n]" + ); +} +``` + +## Declare `TextData` and `TransformedTextData` + +{{< note >}}Ignore this step if you are using an Azure OpenAI +embedding model. +{{< /note >}} + +As we noted in the section above about the +[embedding model](#define-a-function-to-obtain-the-embedding-model), +we must declare two very simple classes at the end of the source +file. These are required because the API that generates the model +expects classes with named fields for the input `string` and output +`float` array. + +```csharp +class TextData +{ + public string Text { get; set; } +} + +class TransformedTextData : TextData +{ + public float[] Features { get; set; } +} +``` + +## Run the code + +Assuming you have added the code from the steps above to your source file, +it is now ready to run, but note that it may take a while to complete when +you run it for the first time (which happens because the tokenizer must download the +embedding model data before it can generate the embeddings). 
When you run the code,
+it outputs the following result text:
+
+```
+ID: doc:1, Properties: [
+  score: '4.30777168274', content: 'That is a very happy person'
+]
+ID: doc:2, Properties: [
+  score: '25.9752807617', content: 'That is a happy dog'
+]
+ID: doc:3, Properties: [
+  score: '68.8638000488', content: 'Today is a sunny day'
+]
+```
+
+The results are ordered according to the value of the `score`
+field, which represents the vector distance here. The lowest distance indicates
+the greatest similarity to the query.
+As you would expect, the result for `doc:1` with the content text
+*"That is a very happy person"*
+is the result that is most similar in meaning to the query text
+*"That is a happy person"*.
+
+## Differences with JSON documents
+
+Indexing JSON documents is similar to hash indexing, but there are some
+important differences. JSON allows much richer data modeling with nested fields, so
+you must supply a [path]({{< relref "/develop/data-types/json/path" >}}) in the schema
+to identify each field you want to index. However, you can declare a short alias for each
+of these paths to avoid typing it in full for
+every query. Also, you must specify `IndexDataType.JSON` with the `On()` option when you
+create the index.
+
+The code below shows these differences, but the index is otherwise very similar to
+the one created previously for hashes:
+
+```cs
+var jsonSchema = new Schema()
+    .AddTextField(new FieldName("$.content", "content"))
+    .AddTagField(new FieldName("$.genre", "genre"))
+    .AddVectorField(
+        new FieldName("$.embedding", "embedding"),
+        VectorField.VectorAlgo.HNSW,
+        new Dictionary<string, object>()
+        {
+            ["TYPE"] = "FLOAT32",
+            ["DIM"] = "150",
+            ["DISTANCE_METRIC"] = "L2"
+        }
+    );
+
+db.FT().Create(
+    "vector_json_idx",
+    new FTCreateParams()
+        .On(IndexDataType.JSON)
+        .Prefix("jdoc:"),
+    jsonSchema
+);
+```
+
+An important difference with JSON indexing is that the vectors are
+specified using arrays of `float` instead of binary strings. This requires a modification
+to the `GetEmbedding()` function declared in
+[Define a function to generate an embedding](#define-a-function-to-generate-an-embedding)
+above:
+
+```cs
+static float[] GetFloatEmbedding(
+    PredictionEngine<TextData, TransformedTextData> model, string sentence
+)
+{
+    // Call the prediction API to convert the text into embedding vector.
+    var data = new TextData()
+    {
+        Text = sentence
+    };
+
+    var prediction = model.Predict(data);
+
+    float[] floatArray = Array.ConvertAll(prediction.Features, x => (float)x);
+    return floatArray;
+}
+```
+
+You should make a similar modification to the `GetEmbeddingFromAzure()` function
+if you are using Azure OpenAI with JSON.
+
+Use [`JSON().Set()`]({{< relref "/commands/json.set" >}}) to add the data
+instead of [`HashSet()`]({{< relref "/commands/hset" >}}):
+
+```cs
+var jSentence1 = "That is a very happy person";
+
+var jdoc1 = new {
+    content = jSentence1,
+    genre = "persons",
+    embedding = GetFloatEmbedding(predEngine, jSentence1),
+};
+
+db.JSON().Set("jdoc:1", "$", jdoc1);
+
+var jSentence2 = "That is a happy dog";
+
+var jdoc2 = new {
+    content = jSentence2,
+    genre = "pets",
+    embedding = GetFloatEmbedding(predEngine, jSentence2),
+};
+
+db.JSON().Set("jdoc:2", "$", jdoc2);
+
+var jSentence3 = "Today is a sunny day";
+
+var jdoc3 = new {
+    content = jSentence3,
+    genre = "weather",
+    embedding = GetFloatEmbedding(predEngine, jSentence3),
+};
+
+db.JSON().Set("jdoc:3", "$", jdoc3);
+```
+
+The query is almost identical to the one for the hash documents. 
This +demonstrates how the right choice of aliases for the JSON paths can +save you having to write complex queries. The only significant difference is +that the `FieldName` objects created for the `ReturnFields()` option must +include the JSON path for the field. + +An important thing to notice +is that the vector parameter for the query is still specified as a +binary string (using the `GetEmbedding()` method), even though the data for +the `embedding` field of the JSON was specified as a `float` array. + +```cs +var jRes = db.FT().Search("vector_json_idx", + new Query("*=>[KNN 3 @embedding $query_vec AS score]") + .AddParam("query_vec", GetEmbedding(predEngine, "That is a happy person")) + .ReturnFields( + new FieldName("$.content", "content"), + new FieldName("$.score", "score") + ) + .SetSortBy("score") + .Dialect(2)); + +foreach (var doc in jRes.Documents) { + var props = doc.GetProperties(); + var propText = string.Join( + ", ", + props.Select(p => $"{p.Key}: '{p.Value}'") + ); + + Console.WriteLine( + $"ID: {doc.Id}, Properties: [\n {propText}\n]" + ); +} +``` + +Apart from the `jdoc:` prefixes for the keys, the result from the JSON +query is the same as for hash: + +``` +ID: jdoc:1, Properties: [ + score: '4.30777168274', content: 'That is a very happy person' +] +ID: jdoc:2, Properties: [ + score: '25.9752807617', content: 'That is a happy dog' +] +ID: jdoc:3, Properties: [ + score: '68.8638000488', content: 'Today is a sunny day' +] +``` + +## Learn more + +See +[Vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) +for more information about the indexing options, distance metrics, and query format +for vectors. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your .NET application to a Redis database +linkTitle: Connect +title: Connect to the server +weight: 20 +--- + +## Basic connection + +You can connect to the server simply by passing a string of the +form "hostname:port" to the `Connect()` method (for example, +"localhost:6379"). However, you can also connect using a +`ConfigurationOptions` parameter. Use this to specify a +username, password, and many other options: + +```csharp +using NRedisStack; +using NRedisStack.RedisStackCommands; +using StackExchange.Redis; + +ConfigurationOptions conf = new ConfigurationOptions { + EndPoints = { "localhost:6379" }, + User = "yourUsername", + Password = "yourPassword" +}; + +ConnectionMultiplexer redis = ConnectionMultiplexer.Connect(conf); +IDatabase db = redis.GetDatabase(); + +db.StringSet("foo", "bar"); +Console.WriteLine(db.StringGet("foo")); // prints bar +``` + +## Connect to a Redis cluster + +The basic connection will use the +[Cluster API]({{< relref "/operate/rs/clusters/optimize/oss-cluster-api" >}}) +if it is available without any special configuration. However, if you know +the addresses and ports of several cluster nodes, you can specify them all +during connection in the `Endpoints` parameter: + +```csharp +ConfigurationOptions options = new ConfigurationOptions +{ + //list of available nodes of the cluster along with the endpoint port. + EndPoints = { + { "localhost", 16379 }, + { "localhost", 16380 }, + // ... 
+ }, +}; + +ConnectionMultiplexer cluster = ConnectionMultiplexer.Connect(options); +IDatabase db = cluster.GetDatabase(); + +db.StringSet("foo", "bar"); +Console.WriteLine(db.StringGet("foo")); // prints bar +``` + +## Connect to your production Redis with TLS + +When you deploy your application, use TLS and follow the [Redis security]({{< relref "/operate/oss_and_stack/management/security/" >}}) guidelines. + +Before connecting your application to the TLS-enabled Redis server, ensure that your certificates and private keys are in the correct format. + +To convert user certificate and private key from the PEM format to `pfx`, use this command: + +```bash +openssl pkcs12 -inkey redis_user_private.key -in redis_user.crt -export -out redis.pfx +``` + +Enter password to protect your `pfx` file. + +Establish a secure connection with your Redis database using this snippet. + +```csharp +ConfigurationOptions options = new ConfigurationOptions +{ + EndPoints = { { "my-redis.cloud.redislabs.com", 6379 } }, + User = "default", // use your Redis user. More info https://redis.io/docs/latest/operate/oss_and_stack/management/security/acl/ + Password = "secret", // use your Redis password + Ssl = true, + SslProtocols = System.Security.Authentication.SslProtocols.Tls12 +}; + +options.CertificateSelection += delegate +{ + return new X509Certificate2("redis.pfx", "secret"); // use the password you specified for pfx file +}; +options.CertificateValidation += ValidateServerCertificate; + +bool ValidateServerCertificate( + object sender, + X509Certificate? certificate, + X509Chain? chain, + SslPolicyErrors sslPolicyErrors) +{ + if (certificate == null) { + return false; + } + + var ca = new X509Certificate2("redis_ca.pem"); + bool verdict = (certificate.Issuer == ca.Subject); + if (verdict) { + return true; + } + Console.WriteLine("Certificate error: {0}", sslPolicyErrors); + return false; +} + +ConnectionMultiplexer muxer = ConnectionMultiplexer.Connect(options); + +//Creation of the connection to the DB +IDatabase conn = muxer.GetDatabase(); + +//send SET command +conn.StringSet("foo", "bar"); + +//send GET command and print the value +Console.WriteLine(conn.StringGet("foo")); +``` + +## Multiplexing + +Although example code typically works with a single connection, +real-world code often uses multiple connections at the same time. +Opening and closing connections repeatedly is inefficient, so it is best +to manage open connections carefully to avoid this. + +Several other +Redis client libraries use *connection pools* to reuse a set of open +connections efficiently. NRedisStack uses a different approach called +*multiplexing*, which sends all client commands and responses over a +single connection. NRedisStack manages multiplexing for you automatically. +This gives high performance without requiring any extra coding. +See +[Connection pools and multiplexing]({{< relref "/develop/clients/pools-and-muxing" >}}) +for more information. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use the Redis query engine with JSON and hash documents. +linkTitle: Index and query documents +title: Index and query documents +weight: 30 +--- + +This example shows how to create a +[search index]({{< relref "/develop/interact/search-and-query/indexing" >}}) +for [JSON]({{< relref "/develop/data-types/json" >}}) documents and +run queries against the index. 
It then goes on to show the slight differences +in the equivalent code for [hash]({{< relref "/develop/data-types/hashes" >}}) +documents. + +## Initialize + +Make sure that you have [Redis Open Source]({{< relref "/operate/oss_and_stack" >}}) +or another Redis server available. Also install the +[`NRedisStack`]({{< relref "/develop/clients/dotnet" >}}) client library if you +haven't already done so. + +Add the following dependencies: + +{{< clients-example cs_home_json import >}} +{{< /clients-example >}} + +## Create data + +Create some test data to add to the database: + +{{< clients-example cs_home_json create_data >}} +{{< /clients-example >}} + +## Add the index + +Connect to your Redis database. The code below shows the most +basic connection but see +[Connect to the server]({{< relref "/develop/clients/dotnet/connect" >}}) +to learn more about the available connection options. + +{{< clients-example cs_home_json connect >}} +{{< /clients-example >}} + +Create an index. In this example, only JSON documents with the key prefix `user:` are indexed. For more information, see [Query syntax]({{< relref "/develop/interact/search-and-query/query/" >}}). + +{{< clients-example cs_home_json make_index >}} +{{< /clients-example >}} + +## Add the data + +Add the three sets of user data to the database as +[JSON]({{< relref "/develop/data-types/json" >}}) objects. +If you use keys with the `user:` prefix then Redis will index the +objects automatically as you add them: + +{{< clients-example cs_home_json add_data >}} +{{< /clients-example >}} + +## Query the data + +You can now use the index to search the JSON objects. The +[query]({{< relref "/develop/interact/search-and-query/query" >}}) +below searches for objects that have the text "Paul" in any field +and have an `age` value in the range 30 to 40: + +{{< clients-example cs_home_json query1 >}} +{{< /clients-example >}} + +Specify query options to return only the `city` field: + +{{< clients-example cs_home_json query2 >}} +{{< /clients-example >}} + +Use an +[aggregation query]({{< relref "/develop/interact/search-and-query/query/aggregation" >}}) +to count all users in each city. + +{{< clients-example cs_home_json query3 >}} +{{< /clients-example >}} + +## Differences with hash documents + +Indexing for hash documents is very similar to JSON indexing but you +need to specify some slightly different options. + +When you create the schema for a hash index, you don't need to +add aliases for the fields, since you use the basic names to access +the fields anyway. Also, you must set the `On` option to `IndexDataType.HASH` +in the `FTCreateParams` object when you create the index. The code below shows +these changes with a new index called `hash-idx:users`, which is otherwise the +same as the `idx:users` index used for JSON documents in the previous examples. + +{{< clients-example cs_home_json make_hash_index >}} +{{< /clients-example >}} + +You use [`HashSet()`]({{< relref "/commands/hset" >}}) to add the hash +documents instead of [`JSON.Set()`]({{< relref "/commands/json.set" >}}). +Also, you must add the fields as key-value pairs instead of combining them +into a single object. + +{{< clients-example cs_home_json add_hash_data >}} +{{< /clients-example >}} + +The query commands work the same here for hash as they do for JSON (but +the name of the hash index is different). 
The format of the result is +almost the same except that the fields are returned directly in the +`Document` object of the result (for JSON, the fields are all enclosed +in a string under the key `json`): + +{{< clients-example cs_home_json query1_hash >}} +{{< /clients-example >}} + +## More information + +See the [Redis query engine]({{< relref "/develop/interact/search-and-query" >}}) docs +for a full description of all query features with examples. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Understand how `NRedisStack` uses conditional execution +linkTitle: Conditional execution +title: Conditional execution +weight: 60 +--- + +Most Redis client libraries use transactions with the +[`WATCH`]({{< relref "/commands/watch" >}}) command as the main way to prevent +two clients writing to the same key at once (see [Transactions]({{< relref "/develop/interact/transactions" >}}) for more information). Unfortunately, this approach is +difficult to use explicitly in `NRedisStack`. Its +[multiplexing]({{< relref "/develop/clients/pools-and-muxing" >}}) system +is highly efficient and convenient but can also cause bad interactions +when different connections use watched transactions at the same time. + +Instead, `NRedisStack` relies more heavily on conditional execution. This comes +in two basic forms, `When` conditions and transaction conditions, both of which +are explained in the sections below. + +## `When` conditions + +Several commands have variants that only execute if the key they change +already exists (or alternatively, if it doesn't already exist). For +example, the [`SET`]({{< relref "/commands/set" >}}) command has the +variants [`SETEX`]({{< relref "/commands/setex" >}}) (set when the key exists), +and [`SETNX`]({{< relref "/commands/setnx" >}}) (set when the key doesn't exist). + +Instead of providing the different variants of these commands, `NRedisStack` +lets you add a `When` condition to the basic command to access its variants. +The following example demonstrates this for the +[`HashSet()`]({{< relref "/commands/hset" >}}) command. + + + +```csharp +bool resp7 = db.HashSet("Details", "SerialNumber", "12345"); +Console.WriteLine(resp7); // >>> true + +db.HashSet("Details", "SerialNumber", "12345A", When.NotExists); +string resp8 = db.HashGet("Details", "SerialNumber"); +Console.WriteLine(resp8); // >>> 12345 + +db.HashSet("Details", "SerialNumber", "12345A"); +string resp9 = db.HashGet("Details", "SerialNumber"); +Console.WriteLine(resp9); // >>> 12345A +``` + +The available conditions are `When.Exists`, `When.NotExists`, and the default +`When.Always`. + +## Transaction conditions + +`NRedisStack` also supports a more extensive set of conditions that you +can add to transactions. They are implemented internally using +[`WATCH`]({{< relref "/commands/watch" >}}) commands in a way that is +guaranteed to be safe, without interactions between different clients. +Although conditions don't provide exactly the same behavior as +explicit `WATCH` commands, they are convenient to use and execute +efficiently. + +The example below shows how to use the `AddCondition()` method on +a transaction to let it run only if a specified hash key does not +already exist. See +[Pipelines and transactions]({{< relref "/develop/clients/dotnet/transpipe" >}}) +for more information about transactions. 
+ + + +```csharp +var watchedTrans = new Transaction(db); + +watchedTrans.AddCondition(Condition.KeyNotExists("customer:39182")); + +watchedTrans.Db.HashSetAsync( + "customer:39182", + new HashEntry[]{ + new HashEntry("name", "David"), + new HashEntry("age", "27") + } +); + +bool succeeded = watchedTrans.Execute(); +Console.WriteLine(succeeded); // >>> true +``` + +The table below describes the full set of conditions you can add to +a transaction. Note that you can add more than one condition to the +same transaction if necessary. + +| Condition | Description | +| :-- | :-- | +| `HashEqual` | Enforces that the given hash-field must have the specified value. | +| `HashExists` | Enforces that the given hash-field must exist. | +| `HashNotEqual` | Enforces that the given hash-field must not have the specified value. | +| `HashNotExists` | Enforces that the given hash-field must not exist. | +| `KeyExists` | Enforces that the given key must exist. | +| `KeyNotExists` | Enforces that the given key must not exist. | +| `ListIndexEqual` | Enforces that the given list index must have the specified value. | +| `ListIndexExists` | Enforces that the given list index must exist. | +| `ListIndexNotEqual` | Enforces that the given list index must not have the specified value. | +| `ListIndexNotExists` | Enforces that the given list index must not exist. | +| `StringEqual` | Enforces that the given key must have the specified value. | +| `StringNotEqual` | Enforces that the given key must not have the specified value. | +| `HashLengthEqual` | Enforces that the given hash length is a certain value. | +| `HashLengthLessThan` | Enforces that the given hash length is less than a certain value. | +| `HashLengthGreaterThan` | Enforces that the given hash length is greater than a certain value. | +| `StringLengthEqual` | Enforces that the given string length is a certain value. | +| `StringLengthLessThan` | Enforces that the given string length is less than a certain value. | +| `StringLengthGreaterThan` | Enforces that the given string length is greater than a certain value. | +| `ListLengthEqual` | Enforces that the given list length is a certain value. | +| `ListLengthLessThan` | Enforces that the given list length is less than a certain value. | +| `ListLengthGreaterThan` | Enforces that the given list length is greater than a certain value. | +| `SetLengthEqual` | Enforces that the given set cardinality is a certain value. | +| `SetLengthLessThan` | Enforces that the given set cardinality is less than a certain value. | +| `SetLengthGreaterThan` | Enforces that the given set cardinality is greater than a certain value. | +| `SetContains` | Enforces that the given set contains a certain member. | +| `SetNotContains` | Enforces that the given set does not contain a certain member. | +| `SortedSetLengthEqual` | Enforces that the given sorted set cardinality is a certain value. | +| `SortedSetLengthEqual` | Enforces that the given sorted set contains a certain number of members with scores in the given range. | +| `SortedSetLengthLessThan` | Enforces that the given sorted set cardinality is less than a certain value. | +| `SortedSetLengthLessThan` | Enforces that the given sorted set contains less than a certain number of members with scores in the given range. | +| `SortedSetLengthGreaterThan` | Enforces that the given sorted set cardinality is greater than a certain value. 
| +| `SortedSetLengthGreaterThan` | Enforces that the given sorted set contains more than a certain number of members with scores in the given range. | +| `SortedSetContains` | Enforces that the given sorted set contains a certain member. | +| `SortedSetNotContains` | Enforces that the given sorted set does not contain a certain member. | +| `SortedSetEqual` | Enforces that the given sorted set member must have the specified score. | +| `SortedSetNotEqual` | Enforces that the given sorted set member must not have the specified score. | +| `SortedSetScoreExists` | Enforces that the given sorted set must have the given score. | +| `SortedSetScoreNotExists` | Enforces that the given sorted set must not have the given score. | +| `SortedSetScoreExists` | Enforces that the given sorted set must have the specified count of the given score. | +| `SortedSetScoreNotExists` | Enforces that the given sorted set must not have the specified count of the given score. | +| `StreamLengthEqual` | Enforces that the given stream length is a certain value. | +| `StreamLengthLessThan` | Enforces that the given stream length is less than a certain value. | +| `StreamLengthGreaterThan` | Enforces that the given stream length is greater than a certain value. | +--- +aliases: /develop/connect/clients/dotnet +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your .NET application to a Redis database +linkTitle: NRedisStack (C#/.NET) +title: NRedisStack guide (C#/.NET) +weight: 3 +--- + +[NRedisStack](https://github.com/redis/NRedisStack) is the .NET client for Redis. +The sections below explain how to install `NRedisStack` and connect your application +to a Redis database. + +`NRedisStack` requires a running Redis server. See [here]({{< relref "/operate/oss_and_stack/install/" >}}) for Redis Open Source installation instructions. + +You can also access Redis with an object-mapping client interface. See +[Redis OM for .NET]({{< relref "/integrate/redisom-for-net" >}}) +for more information. + +## Install + +Using the `dotnet` CLI, run: + +```bash +dotnet add package NRedisStack +``` + +## Connect and test + +Connect to localhost on port 6379. + +```csharp +using NRedisStack; +using NRedisStack.RedisStackCommands; +using StackExchange.Redis; +//... +ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost"); +IDatabase db = redis.GetDatabase(); +``` + +You can test the connection by storing and retrieving a simple string. + +```csharp +db.StringSet("foo", "bar"); +Console.WriteLine(db.StringGet("foo")); // prints bar +``` + +Store and retrieve a HashMap. 
+ +```csharp +var hash = new HashEntry[] { + new HashEntry("name", "John"), + new HashEntry("surname", "Smith"), + new HashEntry("company", "Redis"), + new HashEntry("age", "29"), + }; +db.HashSet("user-session:123", hash); + +var hashFields = db.HashGetAll("user-session:123"); +Console.WriteLine(String.Join("; ", hashFields)); +// Prints: +// name: John; surname: Smith; company: Redis; age: 29 +``` +## Redis Open Source modules + +To access Redis Open Source capabilities, use the appropriate interface like this: + +``` +IBloomCommands bf = db.BF(); +ICuckooCommands cf = db.CF(); +ICmsCommands cms = db.CMS(); +IGraphCommands graph = db.GRAPH(); +ITopKCommands topk = db.TOPK(); +ITdigestCommands tdigest = db.TDIGEST(); +ISearchCommands ft = db.FT(); +IJsonCommands json = db.JSON(); +ITimeSeriesCommands ts = db.TS(); +``` + +## More information + +See the other pages in this section for more information and examples. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Get your NRedisStack app ready for production +linkTitle: Production usage +title: Production usage +weight: 70 +--- + +This guide offers recommendations to get the best reliability and +performance in your production environment. + +## Checklist + +Each item in the checklist below links to the section +for a recommendation. Use the checklist icons to record your +progress in implementing the recommendations. + +{{< checklist "dotnetprodlist" >}} + {{< checklist-item "#event-handling" >}}Event handling{{< /checklist-item >}} + {{< checklist-item "#timeouts" >}}Timeouts{{< /checklist-item >}} + {{< checklist-item "#exception-handling" >}}Exception handling{{< /checklist-item >}} +{{< /checklist >}} + +## Recommendations + +The sections below offer recommendations for your production environment. Some +of them may not apply to your particular use case. + +### Event handling + +The `ConnectionMultiplexer` class publishes several different types of +[events](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/events/) +for situations such as configuration changes and connection failures. +Use these events to record server activity in a log, which you can then use +to monitor performance and diagnose problems when they occur. +See +the StackExchange.Redis +[Events](https://stackexchange.github.io/StackExchange.Redis/Events) +page for the full list of events. + +#### Server notification events + +Some servers (such as Azure Cache for Redis) send notification events shortly +before scheduled maintenance is due to happen. You can use code like the +following to respond to these events (see the +[StackExchange.Redis](https://stackexchange.github.io/StackExchange.Redis/ServerMaintenanceEvent) +docs for the full list of supported events). For example, you could +inform users who try to connect that service is temporarily unavailable +rather than letting them run into errors. + +```cs +using NRedisStack; +using StackExchange.Redis; + +ConnectionMultiplexer muxer = ConnectionMultiplexer.Connect("localhost:6379"); + +muxer.ServerMaintenanceEvent += (object sender, ServerMaintenanceEvent e) => { + // Identify the event and respond to it here. + Console.WriteLine($"Maintenance event: {e.RawMessage}"); +}; +``` + +### Timeouts + +If a network or server error occurs while your code is opening a +connection or issuing a command, it can end up hanging indefinitely. +To prevent this, `NRedisStack` sets timeouts for socket +reads and writes and for opening connections. 
+ +By default, the timeout is five seconds for all operations, but +you can set the time (in milliseconds) separately for connections +and commands using the `ConnectTimeout`, `SyncTimeout`, and +`AsyncTimeout` configuration options: + +```cs +var muxer = ConnectionMultiplexer.Connect(new ConfigurationOptions { + ConnectTimeout = 1000, // 1 second timeout for connections. + SyncTimeout = 2000, // 2 seconds for synchronous commands. + AsyncTimeout = 3000 // 3 seconds for asynchronous commands. + . + . +}); + +var db = muxer.GetDatabase(); +``` + +The default timeouts are a good starting point, but you may be able +to improve performance by adjusting the values to suit your use case. + +### Exception handling + +Redis handles many errors using return values from commands, but there +are also situations where exceptions can be thrown. In production code, +you should handle exceptions as they occur. The list below describes some +the most common Redis exceptions: + +- `RedisConnectionException`: Thrown when a connection attempt fails. +- `RedisTimeoutException`: Thrown when a command times out. +- `RedisCommandException`: Thrown when you issue an invalid command. +- `RedisServerException`: Thrown when you attempt an invalid operation + (for example, trying to access a + [stream entry]({{< relref "/develop/data-types/streams#entry-ids" >}}) + using an invalid ID). +--- +aliases: /develop/connect/om-clients +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Object-Mapper libraries for Redis Open Source +linkTitle: Object mapping +stack: true +title: Object-Mapper libraries +weight: 20 +--- + +Redis OM (pronounced *REDiss OHM*) is a library that provides object mapping for Redis. With the help of Redis OM, you can map Redis data types, specifically Hashes and JSON documents, to objects of your preferred programming language or framework. Redis OM relies on the JSON and Redis Query Engine features of Redis Open Source, allowing you to search and/or query for objects. + +You can use Redis OM with the following four programming languages: + +* [Python]({{< relref "/integrate/redisom-for-python" >}}) +* [C#/.NET]({{< relref "/integrate/redisom-for-net" >}}) +* [Node.js]({{< relref "/integrate/redisom-for-node-js" >}}) +* [Java/Spring]({{< relref "/integrate/redisom-for-java" >}}) +--- +aliases: /develop/connect/clients/client-side-caching +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Server-assisted, client-side caching in Redis +linkTitle: Client-side caching +title: Client-side caching introduction +weight: 30 +--- + +*Client-side caching* reduces network traffic between +a Redis client and the server, which generally improves performance. + +By default, an [application server](https://en.wikipedia.org/wiki/Application_server) +(which sits between the user app and the database) contacts the +Redis database server through the client library for every read request. +The diagram below shows the flow of communication from the user app, +through the application server to the database and back again: + +{{< image filename="images/csc/CSCNoCache.drawio.svg" >}} + +When you use client-side caching, the client library +maintains a local cache of data items as it retrieves them +from the database. 
When the same items are needed again, the client +can satisfy the read requests from the cache instead of the database: + +{{< image filename="images/csc/CSCWithCache.drawio.svg" >}} + +Accessing the cache is much faster than communicating with the database over the +network and it reduces network traffic. Client-side caching reduces +the load on the database server, so you may be able to run it using less hardware +resources. + +As with other forms of [caching](https://en.wikipedia.org/wiki/Cache_(computing)), +client-side caching works well in the very common use case where a small subset of the data +is accessed much more frequently than the rest of the data (according +to the [Pareto principle](https://en.wikipedia.org/wiki/Pareto_principle)). + +## Updating the cache when the data changes {#tracking} + +All caching systems must implement a scheme to update data in the cache +when the corresponding data changes in the main database. Redis uses an +approach called *tracking*. + +When client-side caching is enabled, the Redis server remembers or *tracks* the set of keys +that each client connection has previously read. This includes cases where the client +reads data directly, as with the [`GET`]({{< relref "/commands/get" >}}) +command, and also where the server calculates values from the stored data, +as with [`STRLEN`]({{< relref "/commands/strlen" >}}). When any client +writes new data to a tracked key, the server sends an invalidation message +to all clients that have accessed that key previously. This message warns +the clients that their cached copies of the data are no longer valid and the clients +will evict the stale data in response. Next time a client reads from +the same key, it will access the database directly and refresh its cache +with the updated data. + +{{< note >}}If any connection from a client gets disconnected (including +one from a connection pool), then the client will flush all keys from the +client-side cache. Caching then resumes for subsequent reads from the +connections that are still active. +{{< /note >}} + +The sequence diagram below shows how two clients might interact as they +access and update the same key: + +{{< image filename="images/csc/CSCSeqDiagram.drawio.svg" >}} + +## Which client libraries support client-side caching? + +The following client libraries support CSC from the stated version onwards: + +| Client | Version | +| :-- | :-- | +| [`redis-py`]({{< relref "/develop/clients/redis-py/connect#connect-using-client-side-caching" >}}) | v5.1.0 | +| [`Jedis`]({{< relref "/develop/clients/jedis/connect#connect-using-client-side-caching" >}}) | v5.2.0 | + +## Which commands can cache data? + +All read-only commands (with the `@read` +[ACL category]({{< relref "/operate/oss_and_stack/management/security/acl" >}})) +will use cached data, except for the following: + +- Any commands for the + [probabilistic]({{< relref "/develop/data-types/probabilistic" >}}) and + [time series]({{< relref "/develop/data-types/timeseries" >}}) data types. + These types are designed to be updated frequently, which means that caching + has little or no benefit. +- Non-deterministic commands such as [`HRANDFIELD`]({{< relref "/commands/hrandfield" >}}), + [`HSCAN`]({{< relref "/commands/hscan" >}}), + and [`ZRANDMEMBER`]({{< relref "/commands/zrandmember" >}}). By design, these commands + give different results each time they are called. +- Redis Query Engine commands (with the `FT.*` prefix), such as + [`FT.SEARCH`]({{< relref "commands/ft.search" >}}). 
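+
+As an illustration, the sketch below enables client-side caching when the
+client connects. This is a minimal example rather than a complete reference:
+it assumes `redis-py` v5.1.0 or later, a server that supports the RESP3
+protocol, and the `CacheConfig` option described in the `redis-py` connection
+guide linked in the table above:
+
+```python
+import redis
+from redis.cache import CacheConfig
+
+# Client-side caching requires the RESP3 protocol (protocol=3).
+r = redis.Redis(
+    host="localhost", port=6379,
+    protocol=3,
+    cache_config=CacheConfig(),  # Enable the local cache with default settings.
+    decode_responses=True,
+)
+
+r.set("city", "London")
+print(r.get("city"))  # First read comes from the server and is cached locally.
+print(r.get("city"))  # Second read is served from the local cache.
+```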
+ +You can use the [`MONITOR`]({{< relref "/commands/monitor" >}}) command to +check the server's behavior when you are using client-side caching. Because `MONITOR` only +reports activity from the server, you should find the first cacheable +access to a key causes a response from the server. However, subsequent +accesses are satisfied by the cache, and so `MONITOR` should report no +server activity if client-side caching is working correctly. + +## What data gets cached for a command? + +Broadly speaking, the data from the specific response to a command invocation +gets cached after it is used for the first time. Subsets of that data +or values calculated from it are retrieved from the server as usual and +then cached separately. For example: + +- The whole string retrieved by [`GET`]({{< relref "/commands/get" >}}) + is added to the cache. Parts of the same string retrieved by + [`SUBSTR`]({{< relref "/commands/substr" >}}) are calculated on the + server the first time and then cached separately from the original + string. +- Using [`GETBIT`]({{< relref "/commands/getbit" >}}) or + [`BITFIELD`]({{< relref "/commands/bitfield" >}}) on a string + caches the returned values separately from the original string. +- For composite data types accessed by keys + ([hash]({{< relref "/develop/data-types/hashes" >}}), + [JSON]({{< relref "/develop/data-types/json" >}}), + [set]({{< relref "/develop/data-types/sets" >}}), and + [sorted set]({{< relref "/develop/data-types/sorted-sets" >}})), + the whole object is cached separately from the individual fields. + So the results of `JSON.GET mykey $` and `JSON.GET mykey $.myfield` create + separate entries in the cache. +- Ranges from [lists]({{< relref "/develop/data-types/lists" >}}), + [streams]({{< relref "/develop/data-types/streams" >}}), + and [sorted sets]({{< relref "/develop/data-types/sorted-sets" >}}) + are cached separately from the object they form a part of. Likewise, + subsets returned by [`SINTER`]({{< relref "/commands/sinter" >}}) and + [`SDIFF`]({{< relref "/commands/sdiff" >}}) create separate cache entries. +- For multi-key read commands such as [`MGET`]({{< relref "/commands/mget" >}}), + the ordering of the keys is significant. For example `MGET name:1 name:2` is + cached separately from `MGET name:2 name:1` because the server returns the + values in the order you specify. +- Boolean or numeric values calculated from data types (for example + [`SISMEMBER`]({{< relref "/commands/sismember" >}})) and + [`LLEN`]({{< relref "/commands/llen" >}}) are cached separately from the + object they refer to. + +## Usage recommendations + +Like any caching system, client-side caching has some limitations: + +- The cache has only a limited amount of memory available. When the limit + is reached, the client must *evict* potentially useful items from the + cache to make room for new ones. +- Cache misses, tracking, and invalidation messages always add a slight + performance penalty. + +Below are some guidelines to help you use client-side caching efficiently, within these +limitations: + +- **Use a separate connection for data that is not cache-friendly**: + Caching gives the most benefit + for keys that are read frequently and updated infrequently. However, you + may also have data, such as counters and scoreboards, that receives frequent + updates. In cases like this, the performance overhead of the invalidation + messages can be greater than the savings made by caching. 
Avoid this problem + by using a separate connection *without* client-side caching for any data that is + not cache-friendly. +- **Estimate how many items you can cache**: The client libraries let you + specify the maximum number of items you want to hold in the cache. You + can calculate an estimate for this number by dividing the + maximum desired size of the + cache in memory by the average size of the items you want to store + (use the [`MEMORY USAGE`]({{< relref "/commands/memory-usage" >}}) + command to get the memory footprint of a key). For example, if you had + 10MB (or 10485760 bytes) available for the cache, and the average + size of an item was 80 bytes, you could fit approximately + 10485760 / 80 = 131072 items in the cache. Monitor memory usage + on your server with a realistic test load to adjust your estimate + up or down. + + ## Reference + + The Redis server implements extra features for client-side caching that are not used by + the main Redis clients, but may be useful for custom clients and other + advanced applications. See + [Client-side caching reference]({{< relref "/develop/reference/client-side-caching" >}}) + for a full technical guide to all the options available for client-side caching. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use Redis pipelines and transactions +linkTitle: Pipelines/transactions +title: Pipelines and transactions +weight: 50 +--- + +Redis lets you send a sequence of commands to the server together in a batch. +There are two types of batch that you can use: + +- **Pipelines** avoid network and processing overhead by sending several commands + to the server together in a single communication. The server then sends back + a single communication with all the responses. See the + [Pipelining]({{< relref "/develop/use/pipelining" >}}) page for more + information. +- **Transactions** guarantee that all the included commands will execute + to completion without being interrupted by commands from other clients. + See the [Transactions]({{< relref "/develop/interact/transactions" >}}) + page for more information. + +## Execute a pipeline + +To execute commands in a pipeline, you first create a pipeline object +and then add commands to it using methods that resemble the standard +command methods (for example, `set()` and `get()`). The commands are +buffered in the pipeline and only execute when you call the `execute()` +method on the pipeline object. This method returns a list that contains +the results from all the commands in order. + +Note that the command methods for a pipeline always return the original +pipeline object, so you can "chain" several commands together, as the +example below shows: + +{{< clients-example pipe_trans_tutorial basic_pipe Python >}} +{{< /clients-example >}} + +## Execute a transaction + +A pipeline actually executes as a transaction by default (that is to say, +all commands are executed in an uninterrupted sequence). However, if you +need to switch this behavior off, you can set the `transaction` parameter +to `False` when you create the pipeline: + +```python +pipe = r.pipeline(transaction=False) +``` + +## Watch keys for changes + +Redis supports *optimistic locking* to avoid inconsistent updates +to different keys. The basic idea is to watch for changes to any +keys that you use in a transaction while you are are processing the +updates. If the watched keys do change, you must restart the updates +with the latest data from the keys. 
See
+[Transactions]({{< relref "/develop/interact/transactions" >}})
+for more information about optimistic locking.
+
+The example below shows how to repeatedly attempt a transaction with a watched
+key until it succeeds. The code reads a string
+that represents a `PATH` variable for a command shell, then appends a new
+command path to the string before attempting to write it back. If the watched
+key is modified by another client before writing, the transaction aborts
+with a `WatchError` exception, and the loop executes again for another attempt.
+Otherwise, the loop terminates successfully.
+
+{{< clients-example pipe_trans_tutorial trans_watch Python >}}
+{{< /clients-example >}}
+
+Because this is a common pattern, the library includes a convenience
+method called `transaction()` that handles the code to watch keys,
+execute the transaction, and retry if necessary. Pass
+`transaction()` a function that implements your main transaction code,
+and also pass the keys you want to watch. The example below implements
+the same basic transaction as the previous example but this time
+using `transaction()`. Note that `transaction()` can't add the `multi()`
+call automatically, so you must still place this correctly in your
+transaction function.
+
+{{< clients-example pipe_trans_tutorial watch_conv_method Python >}}
+{{< /clients-example >}}
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Iterate through results from `SCAN`, `HSCAN`, etc.
+linkTitle: Scan iteration
+title: Scan iteration
+weight: 60
+---
+
+Redis has a small family of related commands that retrieve
+keys and, in some cases, their associated values:
+
+- [`SCAN`]({{< relref "/commands/scan" >}}) retrieves keys
+  from the main Redis keyspace.
+- [`HSCAN`]({{< relref "/commands/hscan" >}}) retrieves keys and, optionally,
+  their values from a
+  [hash]({{< relref "/develop/data-types/hashes" >}}) object.
+- [`SSCAN`]({{< relref "/commands/sscan" >}}) retrieves keys from a
+  [set]({{< relref "/develop/data-types/sets" >}}) object.
+- [`ZSCAN`]({{< relref "/commands/zscan" >}}) retrieves keys and their score values from a
+  [sorted set]({{< relref "/develop/data-types/sorted-sets" >}}) object.
+
+These commands can potentially return large numbers of results, so Redis
+provides a paging mechanism to access the results in small, separate batches.
+With the basic commands, you must maintain a cursor value in your code
+to keep track of the current page. As a convenient alternative, `redis-py`
+also lets you access the results using an
+[iterator](https://docs.python.org/3/glossary.html#term-iterable).
+This handles the paging transparently, so you simply need to process
+the items it returns one-by-one in a `for` loop or pass the iterator
+object itself in place of a
+[sequence](https://docs.python.org/3/glossary.html#term-sequence).
+
+Each of the commands has its own equivalent iterator. The following example shows
+how to use a `SCAN` iterator on the Redis keyspace. Note that, as with the `SCAN`
+command, the results are not sorted into any particular order. Also, you
+can pass `match`, `count`, and `_type` parameters to `scan_iter()` to constrain
+the set of keys it returns (see the [`SCAN`]({{< relref "/commands/scan" >}})
+command page for examples). 
+ +```py +import redis + +r = redis.Redis(decode_responses=True) + +r.set("key:1", "a") +r.set("key:2", "b") +r.set("key:3", "c") +r.set("key:4", "d") +r.set("key:5", "e") + +for key in r.scan_iter(): + print(f"Key: {key}, value: {r.get(key)}") +# >>> Key: key:1, value: a +# >>> Key: key:4, value: d +# >>> Key: key:3, value: c +# >>> Key: key:2, value: b +# >>> Key: key:5, value: e +``` + +The iterators for the other commands are also named with `_iter()` after +the name of the basic command (`hscan_iter()`, `sscan_iter()`, and `zscan_iter()`). +They work in a similar way to `scan_iter()` except that you must pass a +key to identify the object you want to scan. The example below shows how to +iterate through the items in a sorted set using `zscan_iter()`. + +```py +r.zadd("battles", mapping={ + "hastings": 1066, + "agincourt": 1415, + "trafalgar": 1805, + "somme": 1916, +}) + +for item in r.zscan_iter("battles"): + print(f"Key: {item[0]}, value: {int(item[1])}") +# >>> Key: hastings, value: 1066 +# >>> Key: agincourt, value: 1415 +# >>> Key: trafalgar, value: 1805 +# >>> Key: somme, value: 1916 +``` + +Note that in this case, the item returned by the iterator is a +[tuple](https://docs.python.org/3/tutorial/datastructures.html#tuples-and-sequences) +with two elements for the key and score. By default, `hscan_iter()` +also returns a 2-tuple for the key and value, but you can +pass a value of `True` for the `no_values` parameter to retrieve just +the keys: + +```py +r.hset("details", mapping={ + "name": "Mr Benn", + "address": "52 Festive Road", + "hobbies": "Cosplay" +}) + +for key in r.hscan_iter("details", no_values=True): + print(f"Key: {key}, value: {r.hget("details", key)}") +# >>> Key: name, value: Mr Benn +# >>> Key: address, value: 52 Festive Road +# >>> Key: hobbies, value: Cosplay +``` +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to index and query vector embeddings with Redis +linkTitle: Index and query vectors +title: Index and query vectors +weight: 40 +--- + +[Redis Query Engine]({{< relref "/develop/interact/search-and-query" >}}) +lets you index vector fields in [hash]({{< relref "/develop/data-types/hashes" >}}) +or [JSON]({{< relref "/develop/data-types/json" >}}) objects (see the +[Vectors]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}) +reference page for more information). +Among other things, vector fields can store *text embeddings*, which are AI-generated vector +representations of the semantic information in pieces of text. The +[vector distance]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#distance-metrics" >}}) +between two embeddings indicates how similar they are semantically. By comparing the +similarity of an embedding generated from some query text with embeddings stored in hash +or JSON fields, Redis can retrieve documents that closely match the query in terms +of their meaning. + +The example below uses the +[`sentence-transformers`](https://pypi.org/project/sentence-transformers/) +library to generate vector embeddings to store and index with +Redis Query Engine. The code is first demonstrated for hash documents with a +separate section to explain the +[differences with JSON documents](#differences-with-json-documents). + +## Initialize + +Install [`redis-py`]({{< relref "/develop/clients/redis-py" >}}) if you +have not already done so. 
Also, install `sentence-transformers` with the +following command: + +```bash +pip install sentence-transformers +``` + +In a new Python source file, start by importing the required classes: + +```python +from sentence_transformers import SentenceTransformer +from redis.commands.search.query import Query +from redis.commands.search.field import TextField, TagField, VectorField +from redis.commands.search.indexDefinition import IndexDefinition, IndexType +from redis.commands.json.path import Path + +import numpy as np +import redis +``` + +The first of these imports is the +`SentenceTransformer` class, which generates an embedding from a section of text. +Here, we create an instance of `SentenceTransformer` that uses the +[`all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) +model for the embeddings. This model generates vectors with 384 dimensions, regardless +of the length of the input text, but note that the input is truncated to 256 +tokens (see +[Word piece tokenization](https://huggingface.co/learn/nlp-course/en/chapter6/6) +at the [Hugging Face](https://huggingface.co/) docs to learn more about the way tokens +are related to the original text). + +```python +model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2") +``` + +## Create the index + +Connect to Redis and delete any index previously created with the +name `vector_idx`. (The `dropindex()` call throws an exception if +the index doesn't already exist, which is why you need the +`try: except:` block.) + +```python +r = redis.Redis(decode_responses=True) + +try: + r.ft("vector_idx").dropindex(True) +except redis.exceptions.ResponseError: + pass +``` + +Next, create the index. +The schema in the example below specifies hash objects for storage and includes +three fields: the text content to index, a +[tag]({{< relref "/develop/interact/search-and-query/advanced-concepts/tags" >}}) +field to represent the "genre" of the text, and the embedding vector generated from +the original text content. The `embedding` field specifies +[HNSW]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#hnsw-index" >}}) +indexing, the +[L2]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#distance-metrics" >}}) +vector distance metric, `Float32` values to represent the vector's components, +and 384 dimensions, as required by the `all-MiniLM-L6-v2` embedding model. + +```python +schema = ( + TextField("content"), + TagField("genre"), + VectorField("embedding", "HNSW", { + "TYPE": "FLOAT32", + "DIM": 384, + "DISTANCE_METRIC":"L2" + }) +) + +r.ft("vector_idx").create_index( + schema, + definition=IndexDefinition( + prefix=["doc:"], index_type=IndexType.HASH + ) +) +``` + +## Add data + +You can now supply the data objects, which will be indexed automatically +when you add them with [`hset()`]({{< relref "/commands/hset" >}}), as long as +you use the `doc:` prefix specified in the index definition. + +Use the `model.encode()` method of `SentenceTransformer` +as shown below to create the embedding that represents the `content` field. +The `astype()` option that follows the `model.encode()` call specifies that +we want a vector of `float32` values. The `tobytes()` option encodes the +vector components together as a single binary string. +Use the binary string representation when you are indexing hashes +or running a query (but use a list of `float` for +[JSON documents](#differences-with-json-documents)). 
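+
+If you find yourself repeating this conversion, you could wrap it in a couple of
+small helper functions. This is only a sketch (the helper names are our own, not
+part of `redis-py`), assuming the `model` object created earlier:
+
+```python
+def embedding_bytes(text):
+    # Binary string form, used when indexing hashes or passing a query parameter.
+    return model.encode(text).astype(np.float32).tobytes()
+
+def embedding_list(text):
+    # List-of-floats form, used when storing the vector in a JSON document.
+    return model.encode(text).astype(np.float32).tolist()
+```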
+
+The code below adds three example documents to the database:
+
+```python
+content = "That is a very happy person"
+
+r.hset("doc:0", mapping={
+    "content": content,
+    "genre": "persons",
+    "embedding": model.encode(content).astype(np.float32).tobytes(),
+})
+
+content = "That is a happy dog"
+
+r.hset("doc:1", mapping={
+    "content": content,
+    "genre": "pets",
+    "embedding": model.encode(content).astype(np.float32).tobytes(),
+})
+
+content = "Today is a sunny day"
+
+r.hset("doc:2", mapping={
+    "content": content,
+    "genre": "weather",
+    "embedding": model.encode(content).astype(np.float32).tobytes(),
+})
+```
+
+## Run a query
+
+After you have created the index and added the data, you are ready to run a query.
+To do this, you must create another embedding vector from your chosen query
+text. Redis calculates the similarity between the query vector and each
+embedding vector in the index as it runs the query. It then ranks the
+results in order of this numeric similarity value.
+
+The code below creates the query embedding using `model.encode()`, as with
+the indexing, and passes it as a parameter when the query executes
+(see
+[Vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}})
+for more information about using query parameters with embeddings).
+
+```python
+q = Query(
+    "*=>[KNN 3 @embedding $vec AS vector_distance]"
+).return_field("vector_distance").return_field("content").dialect(2)
+
+query_text = "That is a happy person"
+
+res = r.ft("vector_idx").search(
+    q, query_params={
+        "vec": model.encode(query_text).astype(np.float32).tobytes()
+    }
+)
+
+print(res)
+```
+
+The code is now ready to run, but note that it may take a while to complete when
+you run it for the first time (which happens because the `sentence-transformers`
+library must download the `all-MiniLM-L6-v2` model data before it can
+generate the embeddings). When you run the code, it outputs the following result
+object (slightly formatted here for clarity):
+
+```Python
+Result{
+    3 total,
+    docs: [
+        Document {
+            'id': 'doc:0',
+            'payload': None,
+            'vector_distance': '0.114169985056',
+            'content': 'That is a very happy person'
+        },
+        Document {
+            'id': 'doc:1',
+            'payload': None,
+            'vector_distance': '0.610845386982',
+            'content': 'That is a happy dog'
+        },
+        Document {
+            'id': 'doc:2',
+            'payload': None,
+            'vector_distance': '1.48624813557',
+            'content': 'Today is a sunny day'
+        }
+    ]
+}
+```
+
+Note that the results are ordered according to the value of the `vector_distance`
+field, with the lowest distance indicating the greatest similarity to the query.
+As you would expect, the result for `doc:0` with the content text *"That is a very happy person"*
+is the result that is most similar in meaning to the query text
+*"That is a happy person"*.
+
+## Differences with JSON documents
+
+Indexing JSON documents is similar to hash indexing, but there are some
+important differences. JSON allows much richer data modelling with nested fields, so
+you must supply a [path]({{< relref "/develop/data-types/json/path" >}}) in the schema
+to identify each field you want to index. However, you can declare a short alias for each
+of these paths (using the `as_name` keyword argument) to avoid typing it in full for
+every query. Also, you must specify `IndexType.JSON` when you create the index. 
+ +The code below shows these differences, but the index is otherwise very similar to +the one created previously for hashes: + +```py +schema = ( + TextField("$.content", as_name="content"), + TagField("$.genre", as_name="genre"), + VectorField( + "$.embedding", "HNSW", { + "TYPE": "FLOAT32", + "DIM": 384, + "DISTANCE_METRIC": "L2" + }, + as_name="embedding" + ) +) + +r.ft("vector_json_idx").create_index( + schema, + definition=IndexDefinition( + prefix=["jdoc:"], index_type=IndexType.JSON + ) +) +``` + +Use [`json().set()`]({{< relref "/commands/json.set" >}}) to add the data +instead of [`hset()`]({{< relref "/commands/hset" >}}). The dictionaries +that specify the fields have the same structure as the ones used for `hset()` +but `json().set()` receives them in a positional argument instead of +the `mapping` keyword argument. + +An important difference with JSON indexing is that the vectors are +specified using lists instead of binary strings. Generate the list +using the `tolist()` method instead of `tobytes()` as you would with a +hash. + +```py +content = "That is a very happy person" + +r.json().set("jdoc:0", Path.root_path(), { + "content": content, + "genre": "persons", + "embedding": model.encode(content).astype(np.float32).tolist(), +}) + +content = "That is a happy dog" + +r.json().set("jdoc:1", Path.root_path(), { + "content": content, + "genre": "pets", + "embedding": model.encode(content).astype(np.float32).tolist(), +}) + +content = "Today is a sunny day" + +r.json().set("jdoc:2", Path.root_path(), { + "content": content, + "genre": "weather", + "embedding": model.encode(content).astype(np.float32).tolist(), +}) +``` + +The query is almost identical to the one for the hash documents. This +demonstrates how the right choice of aliases for the JSON paths can +save you having to write complex queries. An important thing to notice +is that the vector parameter for the query is still specified as a +binary string (using the `tobytes()` method), even though the data for +the `embedding` field of the JSON was specified as a list. + +```py +q = Query( + "*=>[KNN 3 @embedding $vec AS vector_distance]" +).return_field("vector_distance").return_field("content").dialect(2) + +query_text = "That is a happy person" + +res = r.ft("vector_json_idx").search( + q, query_params={ + "vec": model.encode(query_text).astype(np.float32).tobytes() + } +) +``` + +Apart from the `jdoc:` prefixes for the keys, the result from the JSON +query is the same as for hash: + +``` +Result{ + 3 total, + docs: [ + Document { + 'id': 'jdoc:0', + 'payload': None, + 'vector_distance': '0.114169985056', + 'content': 'That is a very happy person' + }, + . + . + . +``` + +## Learn more + +See +[Vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) +for more information about the indexing options, distance metrics, and query format +for vectors. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your Python application to a Redis database +linkTitle: Connect +title: Connect to the server +weight: 20 +--- + +## Basic connection + +Connect to localhost on port 6379, set a value in Redis, and retrieve it. All responses are returned as bytes in Python. To receive decoded strings, set `decode_responses=True`. For more connection options, see [these examples](https://redis.readthedocs.io/en/stable/examples.html). 
+ +```python +r = redis.Redis(host='localhost', port=6379, decode_responses=True) +``` + +Store and retrieve a simple string. + +```python +r.set('foo', 'bar') +# True +r.get('foo') +# bar +``` + +Store and retrieve a dict. + +```python +r.hset('user-session:123', mapping={ + 'name': 'John', + "surname": 'Smith', + "company": 'Redis', + "age": 29 +}) +# True + +r.hgetall('user-session:123') +# {'surname': 'Smith', 'name': 'John', 'company': 'Redis', 'age': '29'} +``` + +## Connect to a Redis cluster + +To connect to a Redis cluster, use `RedisCluster`. + +```python +from redis.cluster import RedisCluster + +rc = RedisCluster(host='localhost', port=16379) + +print(rc.get_nodes()) +# [[host=127.0.0.1,port=16379,name=127.0.0.1:16379,server_type=primary,redis_connection=Redis>>], ... + +rc.set('foo', 'bar') +# True + +rc.get('foo') +# b'bar' +``` +For more information, see [redis-py Clustering](https://redis-py.readthedocs.io/en/stable/clustering.html). + +## Connect to your production Redis with TLS + +When you deploy your application, use TLS and follow the [Redis security]({{< relref "/operate/oss_and_stack/management/security/" >}}) guidelines. + +```python +import redis + +r = redis.Redis( + host="my-redis.cloud.redislabs.com", port=6379, + username="default", # use your Redis user. More info https://redis.io/docs/latest/operate/oss_and_stack/management/security/acl/ + password="secret", # use your Redis password + ssl=True, + ssl_certfile="./redis_user.crt", + ssl_keyfile="./redis_user_private.key", + ssl_ca_certs="./redis_ca.pem", +) +r.set('foo', 'bar') +# True + +r.get('foo') +# b'bar' +``` +For more information, see [redis-py TLS examples](https://redis-py.readthedocs.io/en/stable/examples/ssl_connection_examples.html). + +## Connect using client-side caching + +Client-side caching is a technique to reduce network traffic between +the client and server, resulting in better performance. See +[Client-side caching introduction]({{< relref "/develop/clients/client-side-caching" >}}) +for more information about how client-side caching works and how to use it effectively. + +To enable client-side caching, add some extra parameters when you connect +to the server: + +- `protocol`: (Required) You must pass a value of `3` here because + client-side caching requires the [RESP3]({{< relref "/develop/reference/protocol-spec#resp-versions" >}}) + protocol. +- `cache_config`: (Required) Pass `cache_config=CacheConfig()` here to enable client-side caching. + +The example below shows the simplest client-side caching connection to the default host and port, +`localhost:6379`. +All of the connection variants described above accept these parameters, so you can +use client-side caching with a connection pool or a cluster connection in exactly the same way. + +{{< note >}}Client-side caching requires redis-py v5.1.0 or later. +To maximize compatibility with all Redis products, client-side caching +is supported by Redis v7.4 or later. + +The [Redis server products]({{< relref "/operate" >}}) support +[opt-in/opt-out]({{< relref "/develop/reference/client-side-caching#opt-in-and-opt-out-caching" >}}) mode +and [broadcasting mode]({{< relref "/develop/reference/client-side-caching#broadcasting-mode" >}}) +for CSC, but these modes are not currently implemented by `redis-py`. 
+{{< /note >}} + +```python +import redis +from redis.cache import CacheConfig + +r = redis.Redis( + protocol=3, + cache_config=CacheConfig(), + decode_responses=True +) + +r.set("city", "New York") +cityNameAttempt1 = r.get("city") # Retrieved from Redis server and cached +cityNameAttempt2 = r.get("city") # Retrieved from cache +``` + +You can see the cache working if you connect to the same Redis database +with [`redis-cli`]({{< relref "/develop/tools/cli" >}}) and run the +[`MONITOR`]({{< relref "/commands/monitor" >}}) command. If you run the +code above with the `cache_config` line commented out, you should see +the following in the CLI among the output from `MONITOR`: + +``` +1723109720.268903 [...] "SET" "city" "New York" +1723109720.269681 [...] "GET" "city" +1723109720.270205 [...] "GET" "city" +``` + +The server responds to both `get("city")` calls. +If you run the code again with `cache_config` uncommented, you will see + +``` +1723110248.712663 [...] "SET" "city" "New York" +1723110248.713607 [...] "GET" "city" +``` + +The first `get("city")` call contacted the server but the second +call was satisfied by the cache. + +### Removing items from the cache + +You can remove individual keys from the cache with the +`delete_by_redis_keys()` method. This removes all cached items associated +with the keys, so all results from multi-key commands (such as +[`MGET`]({{< relref "/commands/mget" >}})) and composite data structures +(such as [hashes]({{< relref "/develop/data-types/hashes" >}})) will be +cleared at once. The example below shows the effect of removing a single +key from the cache: + +```python +r.hget("person:1", "name") # Read from the server +r.hget("person:1", "name") # Read from the cache + +r.hget("person:2", "name") # Read from the server +r.hget("person:2", "name") # Read from the cache + +cache = r.get_cache() +cache.delete_by_redis_keys(["person:1"]) + +r.hget("person:1", "name") # Read from the server +r.hget("person:1", "name") # Read from the cache + +r.hget("person:2", "name") # Still read from the cache +``` + +You can also clear all cached items using the `flush()` +method: + +```python +r.hget("person:1", "name") # Read from the server +r.hget("person:1", "name") # Read from the cache + +r.hget("person:2", "name") # Read from the cache +r.hget("person:2", "name") # Read from the cache + +cache = r.get_cache() +cache.flush() + +r.hget("person:1", "name") # Read from the server +r.hget("person:1", "name") # Read from the cache + +r.hget("person:2", "name") # Read from the server +r.hget("person:2", "name") # Read from the cache +``` + +The client will also flush the cache automatically +if any connection (including one from a connection pool) +is disconnected. + +## Connect with a connection pool + +For production usage, you should use a connection pool to manage +connections rather than opening and closing connections individually. +A connection pool maintains several open connections and reuses them +efficiently. When you open a connection from a pool, the pool allocates +one of its open connections. When you subsequently close the same connection, +it is not actually closed but simply returned to the pool for reuse. +This avoids the overhead of repeated connecting and disconnecting. +See +[Connection pools and multiplexing]({{< relref "/develop/clients/pools-and-muxing" >}}) +for more information. 
+ +Use the following code to connect with a connection pool: + +```python +import redis + +pool = redis.ConnectionPool().from_url("redis://localhost") +r1 = redis.Redis().from_pool(pool) +r2 = redis.Redis().from_pool(pool) +r3 = redis.Redis().from_pool(pool) + +r1.set("wind:1", "Hurricane") +r2.set("wind:2", "Tornado") +r3.set("wind:3", "Mistral") + +r1.close() +r2.close() +r3.close() + +pool.close() +``` +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use the Redis query engine with JSON and hash documents. +linkTitle: Index and query documents +title: Index and query documents +weight: 30 +--- + +This example shows how to create a +[search index]({{< relref "/develop/interact/search-and-query/indexing" >}}) +for [JSON]({{< relref "/develop/data-types/json" >}}) documents and +run queries against the index. It then goes on to show the slight differences +in the equivalent code for [hash]({{< relref "/develop/data-types/hashes" >}}) +documents. + +## Initialize + +Make sure that you have [Redis Open Source]({{< relref "/operate/oss_and_stack/" >}}) +or another Redis server available. Also install the +[`redis-py`]({{< relref "/develop/clients/redis-py" >}}) client library if you +haven't already done so. + +Add the following dependencies. All of them are applicable to both JSON and hash, +except for the `Path` class, which is specific to JSON (see +[Path]({{< relref "/develop/data-types/json/path" >}}) for a description of the +JSON path syntax). + +{{< clients-example py_home_json import >}} +{{< /clients-example >}} + +## Create data + +Create some test data to add to your database. The example data shown +below is compatible with both JSON and hash objects. + +{{< clients-example py_home_json create_data >}} +{{< /clients-example >}} + +## Add the index + +Connect to your Redis database. The code below shows the most +basic connection but see +[Connect to the server]({{< relref "/develop/clients/redis-py/connect" >}}) +to learn more about the available connection options. + +{{< clients-example py_home_json connect >}} +{{< /clients-example >}} + +Create an index for the JSON data. The code below specifies that only JSON documents with +the key prefix `user:` are indexed. For more information, see +[Query syntax]({{< relref "/develop/interact/search-and-query/query/" >}}). + +{{< clients-example py_home_json make_index >}} +{{< /clients-example >}} + +## Add the data + +Add the three sets of user data to the database as +[JSON]({{< relref "/develop/data-types/json" >}}) objects. +If you use keys with the `user:` prefix then Redis will index the +objects automatically as you add them: + +{{< clients-example py_home_json add_data >}} +{{< /clients-example >}} + +## Query the data + +You can now use the index to search the JSON objects. The +[query]({{< relref "/develop/interact/search-and-query/query" >}}) +below searches for objects that have the text "Paul" in any field +and have an `age` value in the range 30 to 40: + +{{< clients-example py_home_json query1 >}} +{{< /clients-example >}} + +Specify query options to return only the `city` field: + +{{< clients-example py_home_json query2 >}} +{{< /clients-example >}} + +Use an +[aggregation query]({{< relref "/develop/interact/search-and-query/query/aggregation" >}}) +to count all users in each city. 
+ +{{< clients-example py_home_json query3 >}} +{{< /clients-example >}} + +## Differences with hash documents + +Indexing for hash documents is very similar to JSON indexing but you +need to specify some slightly different options. + +When you create the schema for a hash index, you don't need to +add aliases for the fields, since you use the basic names to access +the fields anyway. Also, you must use `HASH` for the `IndexType` +when you create the index. The code below shows these changes with +a new index called `hash-idx:users`, which is otherwise the same as +the `idx:users` index used for JSON documents in the previous examples. + +{{< clients-example py_home_json make_hash_index >}} +{{< /clients-example >}} + +You use [`hset()`]({{< relref "/commands/hset" >}}) to add the hash +documents instead of [`json().set()`]({{< relref "/commands/json.set" >}}), +but the same flat `userX` dictionaries work equally well with either +hash or JSON: + +{{< clients-example py_home_json add_hash_data >}} +{{< /clients-example >}} + +The query commands work the same here for hash as they do for JSON (but +the name of the hash index is different). The format of the result is +almost the same except that the fields are returned directly in the +result `Document` object instead of in an enclosing `json` dictionary: + +{{< clients-example py_home_json query1_hash >}} +{{< /clients-example >}} + +## More information + +See the [Redis query engine]({{< relref "/develop/interact/search-and-query" >}}) docs +for a full description of all query features with examples. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Index and query embeddings with Redis vector sets +linkTitle: Vector set embeddings +title: Vector set embeddings +weight: 40 +bannerText: Vector set is a new data type that is currently in preview and may be subject to change. +bannerChildren: true +--- + +A Redis [vector set]({{< relref "/develop/data-types/vector-sets" >}}) lets +you store a set of unique keys, each with its own associated vector. +You can then retrieve keys from the set according to the similarity between +their stored vectors and a query vector that you specify. + +You can use vector sets to store any type of numeric vector but they are +particularly optimized to work with text embedding vectors (see +[Redis for AI]({{< relref "/develop/ai" >}}) to learn more about text +embeddings). The example below shows how to use the +[`sentence-transformers`](https://pypi.org/project/sentence-transformers/) +library to generate vector embeddings and then +store and retrieve them using a vector set with `redis-py`. + +## Initialize + +Start by installing the preview version of `redis-py` with the following +command: + +```bash +pip install redis==6.0.0b2 +``` + +Also, install `sentence-transformers`: + +```bash +pip install sentence-transformers +``` + +In a new Python file, import the required classes: + +```python +from sentence_transformers import SentenceTransformer + +import redis +import numpy as np +``` + +The first of these imports is the +`SentenceTransformer` class, which generates an embedding from a section of text. +This example uses an instance of `SentenceTransformer` with the +[`all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) +model for the embeddings. 
This model generates vectors with 384 dimensions, regardless +of the length of the input text, but note that the input is truncated to 256 +tokens (see +[Word piece tokenization](https://huggingface.co/learn/nlp-course/en/chapter6/6) +at the [Hugging Face](https://huggingface.co/) docs to learn more about the way tokens +are related to the original text). + +```python +model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2") +``` + +## Create the data + +The example data is contained a dictionary with some brief +descriptions of famous people: + +```python +peopleData = { + "Marie Curie": { + "born": 1867, "died": 1934, + "description": """ + Polish-French chemist and physicist. The only person ever to win + two Nobel prizes for two different sciences. + """ + }, + "Linus Pauling": { + "born": 1901, "died": 1994, + "description": """ + American chemist and peace activist. One of only two people to win two + Nobel prizes in different fields (chemistry and peace). + """ + }, + "Freddie Mercury": { + "born": 1946, "died": 1991, + "description": """ + British musician, best known as the lead singer of the rock band + Queen. + """ + }, + "Marie Fredriksson": { + "born": 1958, "died": 2019, + "description": """ + Swedish multi-instrumentalist, mainly known as the lead singer and + keyboardist of the band Roxette. + """ + }, + "Paul Erdos": { + "born": 1913, "died": 1996, + "description": """ + Hungarian mathematician, known for his eccentric personality almost + as much as his contributions to many different fields of mathematics. + """ + }, + "Maryam Mirzakhani": { + "born": 1977, "died": 2017, + "description": """ + Iranian mathematician. The first woman ever to win the Fields medal + for her contributions to mathematics. + """ + }, + "Masako Natsume": { + "born": 1957, "died": 1985, + "description": """ + Japanese actress. She was very famous in Japan but was primarily + known elsewhere in the world for her portrayal of Tripitaka in the + TV series Monkey. + """ + }, + "Chaim Topol": { + "born": 1935, "died": 2023, + "description": """ + Israeli actor and singer, usually credited simply as 'Topol'. He was + best known for his many appearances as Tevye in the musical Fiddler + on the Roof. + """ + } +} +``` + +## Add the data to a vector set + +The next step is to connect to Redis and add the data to a new vector set. + +The code below uses the dictionary's +[`items()`](https://docs.python.org/3/library/stdtypes.html#dict.items) +view to iterate through all the key-value pairs and add corresponding +elements to a vector set called `famousPeople`. + +Use the +[`encode()`](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode) +method of `SentenceTransformer` to generate the +embedding as an array of `float32` values. The `tobytes()` method converts +the array to a byte string that you can pass to the +[`vadd()`]({{< relref "/commands/vadd" >}}) command to set the embedding. +Note that `vadd()` can also accept a list of `float` values to set the +vector, but the byte string format is more compact and saves a little +transmission time. If you later use +[`vemb()`]({{< relref "/commands/vemb" >}}) to retrieve the embedding, +it will return the vector as an array rather than the original byte +string (note that this is different from the behavior of byte strings in +[hash vector indexing]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}})). 
+ +The call to `vadd()` also adds the `born` and `died` values from the +original dictionary as attribute data. You can access this during a query +or by using the [`vgetattr()`]({{< relref "/commands/vgetattr" >}}) method. + +```py +r = redis.Redis(decode_responses=True) + +for name, details in peopleData.items(): + emb = model.encode(details["description"]).astype(np.float32).tobytes() + + r.vset().vadd( + "famousPeople", + emb, + name, + attributes={ + "born": details["born"], + "died": details["died"] + } + ) +``` + +## Query the vector set + +You can now query the data in the set. The basic approach is to use the +`encode()` method to generate another embedding vector for the query text. +(This is the same method used to add the elements to the set.) Then, pass +the query vector to [`vsim()`]({{< relref "/commands/vsim" >}}) to return elements +of the set, ranked in order of similarity to the query. + +Start with a simple query for "actors": + +```py +query_value = "actors" + +actors_results = r.vset().vsim( + "famousPeople", + model.encode(query_value).astype(np.float32).tobytes(), +) + +print(f"'actors': {actors_results}") +``` + +This returns the following list of elements (formatted slightly for clarity): + +``` +'actors': ['Masako Natsume', 'Chaim Topol', 'Linus Pauling', +'Marie Fredriksson', 'Maryam Mirzakhani', 'Marie Curie', +'Freddie Mercury', 'Paul Erdos'] +``` + +The first two people in the list are the two actors, as expected, but none of the +people from Linus Pauling onward was especially well-known for acting (and there certainly +isn't any information about that in the short description text). +As it stands, the search attempts to rank all the elements in the set, based +on the information contained in the embedding model. +You can use the `count` parameter of `vsim()` to limit the list of elements +to just the most relevant few items: + +```py +query_value = "actors" + +two_actors_results = r.vset().vsim( + "famousPeople", + model.encode(query_value).astype(np.float32).tobytes(), + count=2 +) + +print(f"'actors (2)': {two_actors_results}") +# >>> 'actors (2)': ['Masako Natsume', 'Chaim Topol'] +``` + +The reason for using text embeddings rather than simple text search +is that the embeddings represent semantic information. This allows a query +to find elements with a similar meaning even if the text is +different. For example, the word "entertainer" doesn't appear in any of the +descriptions but if you use it as a query, the actors and musicians are ranked +highest in the results list: + +```py +query_value = "entertainer" + +entertainer_results = r.vset().vsim( + "famousPeople", + model.encode(query_value).astype(np.float32).tobytes() +) + +print(f"'entertainer': {entertainer_results}") +# >>> 'entertainer': ['Chaim Topol', 'Freddie Mercury', +# >>> 'Marie Fredriksson', 'Masako Natsume', 'Linus Pauling', +# 'Paul Erdos', 'Maryam Mirzakhani', 'Marie Curie'] +``` + +Similarly, if you use "science" as a query, you get the following results: + +``` +'science': ['Marie Curie', 'Linus Pauling', 'Maryam Mirzakhani', +'Paul Erdos', 'Marie Fredriksson', 'Freddie Mercury', 'Masako Natsume', +'Chaim Topol'] +``` + +The scientists are ranked highest but they are then followed by the +mathematicians. This seems reasonable given the connection between mathematics +and science. + +You can also use +[filter expressions]({{< relref "/develop/data-types/vector-sets/filtered-search" >}}) +with `vsim()` to restrict the search further. 
For example, +repeat the "science" query, but this time limit the results to people +who died before the year 2000: + +```py +query_value = "science" + +science2000_results = r.vset().vsim( + "famousPeople", + model.encode(query_value).astype(np.float32).tobytes(), + filter=".died < 2000" +) + +print(f"'science2000': {science2000_results}") +# >>> 'science2000': ['Marie Curie', 'Linus Pauling', +# 'Paul Erdos', 'Freddie Mercury', 'Masako Natsume'] +``` + +Note that the boolean filter expression is applied to items in the list +before the vector distance calculation is performed. Items that don't +pass the filter test are removed from the results completely, rather +than just reduced in rank. This can help to improve the performance of the +search because there is no need to calculate the vector distance for +elements that have already been filtered out of the search. + +## More information + +See the [vector sets]({{< relref "/develop/data-types/vector-sets" >}}) +docs for more information and code examples. See the +[Redis for AI]({{< relref "/develop/ai" >}}) section for more details +about text embeddings and other AI techniques you can use with Redis. + +You may also be interested in +[vector search]({{< relref "/develop/clients/redis-py/vecsearch" >}}). +This is a feature of the +[Redis query engine]({{< relref "/develop/interact/search-and-query" >}}) +that lets you retrieve +[JSON]({{< relref "/develop/data-types/json" >}}) and +[hash]({{< relref "/develop/data-types/hashes" >}}) documents based on +vector data stored in their fields. +--- +aliases: /develop/connect/clients/python/redis-py +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your Python application to a Redis database +linkTitle: redis-py (Python) +title: redis-py guide (Python) +weight: 1 +--- + +[redis-py](https://github.com/redis/redis-py) is the Python client for Redis. +The sections below explain how to install `redis-py` and connect your application +to a Redis database. + +`redis-py` requires a running Redis server. See [here]({{< relref "/operate/oss_and_stack/install/" >}}) for Redis Open Source installation instructions. + +You can also access Redis with an object-mapping client interface. See +[RedisOM for Python]({{< relref "/integrate/redisom-for-python" >}}) +for more information. + +## Install + +To install `redis-py`, enter: + +```bash +pip install redis +``` + +For faster performance, install Redis with [`hiredis`](https://github.com/redis/hiredis) support. This provides a compiled response parser, and for most cases requires zero code changes. By default, if `hiredis` >= 1.0 is available, `redis-py` attempts to use it for response parsing. + +{{% alert title="Note" %}} +The Python `distutils` packaging scheme is no longer part of Python 3.12 and greater. If you're having difficulties getting `redis-py` installed in a Python 3.12 environment, consider updating to a recent release of `redis-py`. +{{% /alert %}} + +```bash +pip install redis[hiredis] +``` + +## Connect and test + +Connect to localhost on port 6379, set a value in Redis, and retrieve it. All responses are returned as bytes in Python. To receive decoded strings, set `decode_responses=True`. For more connection options, see [these examples](https://redis.readthedocs.io/en/stable/examples.html). + +```python +r = redis.Redis(host='localhost', port=6379, decode_responses=True) +``` + +Store and retrieve a simple string. 
+ +```python +r.set('foo', 'bar') +# True +r.get('foo') +# bar +``` + +Store and retrieve a dict. + +```python +r.hset('user-session:123', mapping={ + 'name': 'John', + "surname": 'Smith', + "company": 'Redis', + "age": 29 +}) +# True + +r.hgetall('user-session:123') +# {'surname': 'Smith', 'name': 'John', 'company': 'Redis', 'age': '29'} +``` + + + +## More information + +The [`redis-py`](https://redis.readthedocs.io/en/stable/index.html) website +has a [command reference](https://redis.readthedocs.io/en/stable/commands.html) +and some [tutorials](https://redis.readthedocs.io/en/stable/examples.html) for +various tasks. There are also some examples in the +[GitHub repository](https://github.com/redis/redis-py) for `redis-py`. + +See also the other pages in this section for more information and examples:--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Get your `redis-py` app ready for production +linkTitle: Production usage +title: Production usage +weight: 70 +--- + +This guide offers recommendations to get the best reliability and +performance in your production environment. + +## Checklist + +Each item in the checklist below links to the section +for a recommendation. Use the checklist icons to record your +progress in implementing the recommendations. + +{{< checklist "pyprodlist" >}} + {{< checklist-item "#client-side-caching" >}}Client-side caching{{< /checklist-item >}} + {{< checklist-item "#retries" >}}Retries{{< /checklist-item >}} + {{< checklist-item "#health-checks" >}}Health checks{{< /checklist-item >}} + {{< checklist-item "#exception-handling" >}}Exception handling{{< /checklist-item >}} +{{< /checklist >}} + +## Recommendations + +The sections below offer recommendations for your production environment. Some +of them may not apply to your particular use case. + +### Client-side caching + +[Client-side caching]({{< relref "/develop/clients/client-side-caching" >}}) +involves storing the results from read-only commands in a local cache. If the +same command is executed again later, the results can be obtained from the cache, +without contacting the server. This improves command execution time on the client, +while also reducing network traffic and server load. See +[Connect using client-side caching]({{< relref "/develop/clients/redis-py/connect#connect-using-client-side-caching" >}}) +for more information and example code. + +### Retries + +Redis connections and commands can often fail due to transient problems, +such as temporary network outages or timeouts. When this happens, +the operation will generally succeed after a few attempts, despite +failing the first time. + +`redis-py` can retry commands automatically when +errors occur. From version 6.0.0 onwards, the default behavior is to +attempt a failed command three times. +The timing between successive attempts is calculated using +[exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) +with some random "jitter" added to avoid two or more connections retrying +commands in sync with each other. + +You can override the default behavior using an instance of the `Retry` class to +specify the number of times to retry after a failure along with your +own choice of backoff strategy. +Pass the `Retry` object in the `retry` parameter when you connect. 
+For example, the connection in the code below uses an exponential backoff strategy +(without jitter) that will make eight repeated attempts after a failure: + +```py +from redis.backoff import ExponentialBackoff +from redis.retry import Retry + +# Run 8 retries with exponential backoff strategy. +retry = Retry(ExponentialBackoff(), 8) + +r = Redis( + retry=retry, + . + . +) +``` + +A retry is triggered when a command throws any exception +listed in the `supported_errors` attribute of the `Retry` class. +By default, the list only includes `ConnectionError` and `TimeoutError`, +but you can set your own choice of exceptions when you create the +instance: + +```py +# Only retry after a `TimeoutError`. +retry = Retry(ExponentialBackoff(), 3, supported_errors=(TimeoutError,)) +``` + +You can also add extra exceptions to the default list using the `retry_on_error` +parameter when you connect: + +```py +# Add `BusyLoadingError` to the default list of exceptions. +from redis.exceptions import ( + BusyLoadingError, +) + . + . + +r = Redis( + retry=retry, + retry_on_error=[BusyLoadingError], + . + . +) +``` + +For a connection to a Redis cluster, you can specify a `retry` instance, +but the list of exceptions is not configurable and is always set +to `TimeoutError`, `ConnectionError`, and `ClusterDownError`. + +### Health checks + +If your code doesn't access the Redis server continuously then it +might be useful to make a "health check" periodically (perhaps once +every few seconds) to verify that the connection is working. +Set the `health_check_interval` parameter during +a connection (with either `Redis` or `ConnectionPool`) to specify +an integer number of seconds. If the connection remains idle for +longer than this interval, it will automatically issue a +[`PING`]({{< relref "/commands/ping" >}}) command and check the +response before continuing with any client commands. + +```py +# Issue a health check if the connection is idle for longer +# than three seconds. +r = Redis( + health_check_interval = 3, + . + . +) +``` + +Health checks help to detect problems as soon as possible without +waiting for a user to report them. Note that health checks, like +other commands, will be [retried](#retries) using the strategy +that you specified for the connection. + +### Exception handling + +Redis handles many errors using return values from commands, but there +are also situations where exceptions can be thrown. In production code, +you should handle exceptions wherever they can occur. + +Import the exceptions you need to check from the `redis.exceptions` +module. The list below describes some of the most common exceptions. + +- `ConnectionError`: Thrown when a connection attempt fails + (for example, when connection parameters are invalid or the server + is unavailable) and sometimes when a [health check](#health-checks) + fails. There is also a subclass, `AuthenticationError`, specifically + for authentication failures. +- `ResponseError`: Thrown when you attempt an operation that has no valid + response. Examples include executing a command on the wrong type of key + (as when you try an + ['LPUSH']({{< relref "/develop/data-types/lists#automatic-creation-and-removal-of-keys" >}}) + command on a string key), creating an + [index]({{< relref "/develop/interact/search-and-query/indexing" >}}) + with a name that already exists, and using an invalid ID for a + [stream entry]({{< relref "/develop/data-types/streams/#entry-ids" >}}). 
+- `TimeoutError`: Thrown when a timeout persistently happens for a command,
+  despite any [retries](#retries).
+- `WatchError`: Thrown when a
+  [watched key]({{< relref "/develop/clients/redis-py/transpipe#watch-keys-for-changes" >}}) is
+  modified during a transaction.
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Learn how to use Redis pipelines and transactions
+linkTitle: Pipelines/transactions
+title: Pipelines and transactions
+weight: 5
+---
+
+Redis lets you send a sequence of commands to the server together in a batch.
+There are two types of batch that you can use:
+
+- **Pipelines** avoid network and processing overhead by sending several commands
+  to the server together in a single communication. The server then sends back
+  a single communication with all the responses. See the
+  [Pipelining]({{< relref "/develop/use/pipelining" >}}) page for more
+  information.
+- **Transactions** guarantee that all the included commands will execute
+  to completion without being interrupted by commands from other clients.
+  See the [Transactions]({{< relref "/develop/interact/transactions" >}})
+  page for more information.
+
+## Execute a pipeline
+
+To execute commands in a pipeline, you first create a pipeline object
+and then add commands to it using methods that resemble the standard
+command methods (for example, `set()` and `get()`). The commands are
+buffered in the pipeline and only execute when you call the `sync()`
+method on the pipeline object.
+
+The main difference with the pipeline commands is that they return
+`Response<Type>` objects, where `Type` is the return type of the
+standard command method. A `Response` object contains a valid result
+only after the pipeline has finished executing. You can access the
+result using the `Response` object's `get()` method.
+
+{{< clients-example pipe_trans_tutorial basic_pipe Java-Sync >}}
+{{< /clients-example >}}
+
+## Execute a transaction
+
+A transaction works in a similar way to a pipeline. Create a
+transaction object with the `multi()` command, call command methods
+on that object, and then call the transaction object's
+`exec()` method to execute it. You can access the results
+from commands in the transaction using `Response` objects, as
+you would with a pipeline. However, the `exec()` method also
+returns a `List<Object>` value that contains all the result
+values in the order the commands were executed (see
+[Watch keys for changes](#watch-keys-for-changes) below for
+an example that uses the results list).
+
+{{< clients-example pipe_trans_tutorial basic_trans Java-Sync >}}
+{{< /clients-example >}}
+
+## Watch keys for changes
+
+Redis supports *optimistic locking* to avoid inconsistent updates
+to different keys. The basic idea is to watch for changes to any
+keys that you use in a transaction while you are processing the
+updates. If the watched keys do change, you must restart the updates
+with the latest data from the keys. See
+[Transactions]({{< relref "/develop/interact/transactions" >}})
+for more information about optimistic locking.
+
+The code below reads a string
+that represents a `PATH` variable for a command shell, then appends a new
+command path to the string before attempting to write it back. If the watched
+key is modified by another client before writing, the transaction aborts. 
+Note that you should call read-only commands for the watched keys synchronously on
+the usual client object (called `jedis` in our examples) but you still call commands
+for the transaction on the transaction object.
+
+For production usage, you would generally call code like the following in
+a loop to retry it until it succeeds or else report or log the failure.
+
+{{< clients-example pipe_trans_tutorial trans_watch Java-Sync >}}
+{{< /clients-example >}}
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Learn how to index and query vector embeddings with Redis
+linkTitle: Index and query vectors
+title: Index and query vectors
+weight: 3
+---
+
+[Redis Query Engine]({{< relref "/develop/interact/search-and-query" >}})
+lets you index vector fields in [hash]({{< relref "/develop/data-types/hashes" >}})
+or [JSON]({{< relref "/develop/data-types/json" >}}) objects (see the
+[Vectors]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}})
+reference page for more information).
+Among other things, vector fields can store *text embeddings*, which are AI-generated vector
+representations of the semantic information in pieces of text. The
+[vector distance]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#distance-metrics" >}})
+between two embeddings indicates how similar they are semantically. By comparing the
+similarity of an embedding generated from some query text with embeddings stored in hash
+or JSON fields, Redis can retrieve documents that closely match the query in terms
+of their meaning.
+
+In the example below, we use the [HuggingFace](https://huggingface.co/) model
+[`all-mpnet-base-v2`](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
+to generate the vector embeddings to store and index with Redis Query Engine.
+The code is first demonstrated for hash documents with a
+separate section to explain the
+[differences with JSON documents](#differences-with-json-documents).
+
+## Initialize
+
+If you are using [Maven](https://maven.apache.org/), add the following
+dependencies to your `pom.xml` file:
+
+```xml
+<dependency>
+    <groupId>redis.clients</groupId>
+    <artifactId>jedis</artifactId>
+    <version>5.2.0</version>
+</dependency>
+<dependency>
+    <groupId>ai.djl.huggingface</groupId>
+    <artifactId>tokenizers</artifactId>
+    <version>0.24.0</version>
+</dependency>
+```
+
+If you are using [Gradle](https://gradle.org/), add the following
+dependencies to your `build.gradle` file:
+
+```bash
+implementation 'redis.clients:jedis:5.2.0'
+implementation 'ai.djl.huggingface:tokenizers:0.24.0'
+```
+
+## Import dependencies
+
+Import the following classes in your source file:
+
+```java
+// Jedis client and query engine classes.
+import redis.clients.jedis.UnifiedJedis;
+import redis.clients.jedis.search.*;
+import redis.clients.jedis.search.schemafields.*;
+import redis.clients.jedis.search.schemafields.VectorField.VectorAlgorithm;
+import redis.clients.jedis.exceptions.JedisDataException;
+
+// Data manipulation.
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.Map;
+import java.util.List;
+import org.json.JSONObject;
+
+// Tokenizer to generate the vector embeddings.
+import ai.djl.huggingface.tokenizers.HuggingFaceTokenizer;
+```
+
+## Define a helper method
+
+Our embedding model represents the vectors as an array of `long` integer values,
+but Redis Query Engine expects the vector components to be `float` values.
+Also, when you store vectors in a hash object, you must encode the vector
+array as a `byte` string. 
To simplify this situation, we declare a helper +method `longsToFloatsByteString()` that takes the `long` array that the +embedding model returns, converts it to an array of `float` values, and +then encodes the `float` array as a `byte` string: + +```java +public static byte[] longsToFloatsByteString(long[] input) { + float[] floats = new float[input.length]; + for (int i = 0; i < input.length; i++) { + floats[i] = input[i]; + } + + byte[] bytes = new byte[Float.BYTES * floats.length]; + ByteBuffer + .wrap(bytes) + .order(ByteOrder.LITTLE_ENDIAN) + .asFloatBuffer() + .put(floats); + return bytes; +} +``` + +## Create a tokenizer instance + +We will use the +[`all-mpnet-base-v2`](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) +tokenizer to generate the embeddings. The vectors that represent the +embeddings have 768 components, regardless of the length of the input +text. + +```java +HuggingFaceTokenizer sentenceTokenizer = HuggingFaceTokenizer.newInstance( + "sentence-transformers/all-mpnet-base-v2", + Map.of("maxLength", "768", "modelMaxLength", "768") +); +``` + +## Create the index + +Connect to Redis and delete any index previously created with the +name `vector_idx`. (The `ftDropIndex()` call throws an exception if +the index doesn't already exist, which is why you need the +`try...catch` block.) + +```java +UnifiedJedis jedis = new UnifiedJedis("redis://localhost:6379"); + +try {jedis.ftDropIndex("vector_idx");} catch (JedisDataException j){} +``` + +Next, we create the index. +The schema in the example below includes three fields: the text content to index, a +[tag]({{< relref "/develop/interact/search-and-query/advanced-concepts/tags" >}}) +field to represent the "genre" of the text, and the embedding vector generated from +the original text content. The `embedding` field specifies +[HNSW]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#hnsw-index" >}}) +indexing, the +[L2]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#distance-metrics" >}}) +vector distance metric, `Float32` values to represent the vector's components, +and 768 dimensions, as required by the `all-mpnet-base-v2` embedding model. + +The `FTCreateParams` object specifies hash objects for storage and a +prefix `doc:` that identifies the hash objects we want to index. + +```java +SchemaField[] schema = { + TextField.of("content"), + TagField.of("genre"), + VectorField.builder() + .fieldName("embedding") + .algorithm(VectorAlgorithm.HNSW) + .attributes( + Map.of( + "TYPE", "FLOAT32", + "DIM", 768, + "DISTANCE_METRIC", "L2" + ) + ) + .build() +}; + +jedis.ftCreate("vector_idx", + FTCreateParams.createParams() + .addPrefix("doc:") + .on(IndexDataType.HASH), + schema +); +``` + +## Add data + +You can now supply the data objects, which will be indexed automatically +when you add them with [`hset()`]({{< relref "/commands/hset" >}}), as long as +you use the `doc:` prefix specified in the index definition. + +Use the `encode()` method of the `sentenceTokenizer` object +as shown below to create the embedding that represents the `content` field. +The `getIds()` method that follows `encode()` obtains the vector +of `long` values which we then convert to a `float` array stored as a `byte` +string using our helper method. Use the `byte` string representation when you are +indexing hash objects (as we are here), but use an array of `float` for +JSON objects (see [Differences with JSON objects](#differences-with-json-documents) +below). 
Note that when we set the `embedding` field, we must use an overload
of `hset()` that requires `byte` arrays for each of the key, the field name, and
the value, which is why we include the `getBytes()` calls on the strings.

```java
String sentence1 = "That is a very happy person";
jedis.hset("doc:1", Map.of("content", sentence1, "genre", "persons"));
jedis.hset(
    "doc:1".getBytes(),
    "embedding".getBytes(),
    longsToFloatsByteString(sentenceTokenizer.encode(sentence1).getIds())
);

String sentence2 = "That is a happy dog";
jedis.hset("doc:2", Map.of("content", sentence2, "genre", "pets"));
jedis.hset(
    "doc:2".getBytes(),
    "embedding".getBytes(),
    longsToFloatsByteString(sentenceTokenizer.encode(sentence2).getIds())
);

String sentence3 = "Today is a sunny day";
jedis.hset("doc:3", Map.of("content", sentence3, "genre", "weather"));
jedis.hset(
    "doc:3".getBytes(),
    "embedding".getBytes(),
    longsToFloatsByteString(sentenceTokenizer.encode(sentence3).getIds())
);
```

## Run a query

After you have created the index and added the data, you are ready to run a query.
To do this, you must create another embedding vector from your chosen query
text. Redis calculates the vector distance between the query vector and each
embedding vector in the index as it runs the query. We can request the results to be
sorted to rank them in order of ascending distance.

The code below creates the query embedding using the `encode()` method, as with
the indexing, and passes it as a parameter when the query executes (see
[Vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}})
for more information about using query parameters with embeddings).
The query is a
[K nearest neighbors (KNN)]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors#knn-vector-search" >}})
search that sorts the results in order of vector distance from the query vector.

```java
String sentence = "That is a happy person";

int K = 3;
Query q = new Query("*=>[KNN $K @embedding $BLOB AS distance]")
    .returnFields("content", "distance")
    .addParam("K", K)
    .addParam(
        "BLOB",
        longsToFloatsByteString(
            sentenceTokenizer.encode(sentence).getIds()
        )
    )
    .setSortBy("distance", true)
    .dialect(2);

List<Document> docs = jedis.ftSearch("vector_idx", q).getDocuments();

for (Document doc: docs) {
    System.out.println(
        String.format(
            "ID: %s, Distance: %s, Content: %s",
            doc.getId(),
            doc.get("distance"),
            doc.get("content")
        )
    );
}
```

Assuming you have added the code from the steps above to your source file,
it is now ready to run, but note that it may take a while to complete when
you run it for the first time (which happens because the tokenizer must download the
`all-mpnet-base-v2` model data before it can
generate the embeddings). When you run the code, it outputs the following result text:

```
Results:
ID: doc:2, Distance: 1411344, Content: That is a happy dog
ID: doc:1, Distance: 9301635, Content: That is a very happy person
ID: doc:3, Distance: 67178800, Content: Today is a sunny day
```

Note that the results are ordered according to the value of the `distance`
field, with the lowest distance indicating the greatest similarity to the query.
For this model, the text *"That is a happy dog"*
is the result judged to be most similar in meaning to the query text
*"That is a happy person"*.
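
You can also combine vector similarity with other predicates in the same query.
The snippet below is a minimal sketch (not part of the original example) that reuses
the index, tokenizer, and `longsToFloatsByteString()` helper defined above; the variable
names are chosen here purely for illustration. It replaces the `*` prefilter in the query
string with a `genre` tag predicate, so only matching documents are considered for the
KNN ranking:

```java
String petSentence = "That is a happy person";

// Hybrid query: filter on the `genre` tag field first, then rank the
// remaining documents by vector distance from the query embedding.
Query petQuery = new Query("(@genre:{pets})=>[KNN $K @embedding $BLOB AS distance]")
    .returnFields("content", "distance")
    .addParam("K", 3)
    .addParam(
        "BLOB",
        longsToFloatsByteString(
            sentenceTokenizer.encode(petSentence).getIds()
        )
    )
    .setSortBy("distance", true)
    .dialect(2);

// Only documents tagged `pets` (here, just doc:2) are ranked and returned.
for (Document doc : jedis.ftSearch("vector_idx", petQuery).getDocuments()) {
    System.out.println(doc.getId() + ": " + doc.get("content"));
}
```

The filter expression uses the same query syntax as any other Redis Query Engine
query, so you can combine tag, text, and numeric predicates with KNN search in this way.
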
+ +## Differences with JSON documents + +Indexing JSON documents is similar to hash indexing, but there are some +important differences. JSON allows much richer data modeling with nested fields, so +you must supply a [path]({{< relref "/develop/data-types/json/path" >}}) in the schema +to identify each field you want to index. However, you can declare a short alias for each +of these paths (using the `as()` option) to avoid typing it in full for +every query. Also, you must specify `IndexDataType.JSON` when you create the index. + +The code below shows these differences, but the index is otherwise very similar to +the one created previously for hashes: + +```java +SchemaField[] jsonSchema = { + TextField.of("$.content").as("content"), + TagField.of("$.genre").as("genre"), + VectorField.builder() + .fieldName("$.embedding").as("embedding") + .algorithm(VectorAlgorithm.HNSW) + .attributes( + Map.of( + "TYPE", "FLOAT32", + "DIM", 768, + "DISTANCE_METRIC", "L2" + ) + ) + .build() +}; + +jedis.ftCreate("vector_json_idx", + FTCreateParams.createParams() + .addPrefix("jdoc:") + .on(IndexDataType.JSON), + jsonSchema +); +``` + +An important difference with JSON indexing is that the vectors are +specified using arrays of `float` instead of binary strings. This requires +a modified version of the `longsToFloatsByteString()` method +used previously: + +```java +public static float[] longArrayToFloatArray(long[] input) { + float[] floats = new float[input.length]; + for (int i = 0; i < input.length; i++) { + floats[i] = input[i]; + } + return floats; +} +``` + +Use [`jsonSet()`]({{< relref "/commands/json.set" >}}) to add the data +instead of [`hset()`]({{< relref "/commands/hset" >}}). Use instances +of `JSONObject` to supply the data instead of `Map`, as you would for +hash objects. + +```java +String jSentence1 = "That is a very happy person"; + +JSONObject jdoc1 = new JSONObject() + .put("content", jSentence1) + .put("genre", "persons") + .put( + "embedding", + longArrayToFloatArray( + sentenceTokenizer.encode(jSentence1).getIds() + ) + ); + +jedis.jsonSet("jdoc:1", Path2.ROOT_PATH, jdoc1); + +String jSentence2 = "That is a happy dog"; + +JSONObject jdoc2 = new JSONObject() + .put("content", jSentence2) + .put("genre", "pets") + .put( + "embedding", + longArrayToFloatArray( + sentenceTokenizer.encode(jSentence2).getIds() + ) + ); + +jedis.jsonSet("jdoc:2", Path2.ROOT_PATH, jdoc2); + +String jSentence3 = "Today is a sunny day"; + +JSONObject jdoc3 = new JSONObject() + .put("content", jSentence3) + .put("genre", "weather") + .put( + "embedding", + longArrayToFloatArray( + sentenceTokenizer.encode(jSentence3).getIds() + ) + ); + +jedis.jsonSet("jdoc:3", Path2.ROOT_PATH, jdoc3); +``` + +The query is almost identical to the one for the hash documents. This +demonstrates how the right choice of aliases for the JSON paths can +save you having to write complex queries. An important thing to notice +is that the vector parameter for the query is still specified as a +binary string (using the `longsToFloatsByteString()` method), even though +the data for the `embedding` field of the JSON was specified as an array. + +```java +String jSentence = "That is a happy person"; + +int jK = 3; +Query jq = new Query("*=>[KNN $K @embedding $BLOB AS distance]"). + returnFields("content", "distance"). + addParam("K", jK). 
+ addParam( + "BLOB", + longsToFloatsByteString( + sentenceTokenizer.encode(jSentence).getIds() + ) + ) + .setSortBy("distance", true) + .dialect(2); + +// Execute the query +List jDocs = jedis + .ftSearch("vector_json_idx", jq) + .getDocuments(); + +``` + +Apart from the `jdoc:` prefixes for the keys, the result from the JSON +query is the same as for hash: + +``` +Results: +ID: jdoc:2, Distance: 1411344, Content: That is a happy dog +ID: jdoc:1, Distance: 9301635, Content: That is a very happy person +ID: jdoc:3, Distance: 67178800, Content: Today is a sunny day +``` + +## Learn more + +See +[Vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) +for more information about the indexing options, distance metrics, and query format +for vectors. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your Java application to a Redis database +linkTitle: Connect +title: Connect to the server +weight: 2 +--- + +## Basic connection + +The following code opens a basic connection to a local Redis server: + +```java +package org.example; +import redis.clients.jedis.UnifiedJedis; + +public class Main { + public static void main(String[] args) { + UnifiedJedis jedis = new UnifiedJedis("redis://localhost:6379"); + + // Code that interacts with Redis... + + jedis.close(); + } +} +``` + +After you have connected, you can check the connection by storing and +retrieving a simple string value: + +```java +... + +String res1 = jedis.set("bike:1", "Deimos"); +System.out.println(res1); // OK + +String res2 = jedis.get("bike:1"); +System.out.println(res2); // Deimos + +... +``` + +### Connect to a Redis cluster + +To connect to a Redis cluster, use `JedisCluster`. + +```java +import redis.clients.jedis.JedisCluster; +import redis.clients.jedis.HostAndPort; + +//... + +Set jedisClusterNodes = new HashSet(); +jedisClusterNodes.add(new HostAndPort("127.0.0.1", 7379)); +jedisClusterNodes.add(new HostAndPort("127.0.0.1", 7380)); +JedisCluster jedis = new JedisCluster(jedisClusterNodes); +``` + +### Connect to your production Redis with TLS + +When you deploy your application, use TLS and follow the [Redis security]({{< relref "/operate/oss_and_stack/management/security/" >}}) guidelines. + +Before connecting your application to the TLS-enabled Redis server, ensure that your certificates and private keys are in the correct format. + +To convert user certificate and private key from the PEM format to `pkcs12`, use this command: + +``` +openssl pkcs12 -export -in ./redis_user.crt -inkey ./redis_user_private.key -out redis-user-keystore.p12 -name "redis" +``` + +Enter password to protect your `pkcs12` file. + +Convert the server (CA) certificate to the JKS format using the [keytool](https://docs.oracle.com/en/java/javase/12/tools/keytool.html) shipped with JDK. + +``` +keytool -importcert -keystore truststore.jks \ + -storepass REPLACE_WITH_YOUR_PASSWORD \ + -file redis_ca.pem +``` + +Establish a secure connection with your Redis database using this snippet. 
+ +```java +package org.example; + +import redis.clients.jedis.*; + +import javax.net.ssl.*; +import java.io.FileInputStream; +import java.io.IOException; +import java.security.GeneralSecurityException; +import java.security.KeyStore; + +public class Main { + + public static void main(String[] args) throws GeneralSecurityException, IOException { + HostAndPort address = new HostAndPort("my-redis-instance.cloud.redislabs.com", 6379); + + SSLSocketFactory sslFactory = createSslSocketFactory( + "./truststore.jks", + "secret!", // use the password you specified for keytool command + "./redis-user-keystore.p12", + "secret!" // use the password you specified for openssl command + ); + + JedisClientConfig config = DefaultJedisClientConfig.builder() + .ssl(true).sslSocketFactory(sslFactory) + .user("default") // use your Redis user. More info https://redis.io/docs/latest/operate/oss_and_stack/management/security/acl/ + .password("secret!") // use your Redis password + .build(); + + JedisPooled jedis = new JedisPooled(address, config); + jedis.set("foo", "bar"); + System.out.println(jedis.get("foo")); // prints bar + } + + private static SSLSocketFactory createSslSocketFactory( + String caCertPath, String caCertPassword, String userCertPath, String userCertPassword) + throws IOException, GeneralSecurityException { + + KeyStore keyStore = KeyStore.getInstance("pkcs12"); + keyStore.load(new FileInputStream(userCertPath), userCertPassword.toCharArray()); + + KeyStore trustStore = KeyStore.getInstance("jks"); + trustStore.load(new FileInputStream(caCertPath), caCertPassword.toCharArray()); + + TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance("X509"); + trustManagerFactory.init(trustStore); + + KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance("PKIX"); + keyManagerFactory.init(keyStore, userCertPassword.toCharArray()); + + SSLContext sslContext = SSLContext.getInstance("TLS"); + sslContext.init(keyManagerFactory.getKeyManagers(), trustManagerFactory.getTrustManagers(), null); + + return sslContext.getSocketFactory(); + } +} +``` + +## Connect using client-side caching + +Client-side caching is a technique to reduce network traffic between +the client and server, resulting in better performance. See +[Client-side caching introduction]({{< relref "/develop/clients/client-side-caching" >}}) +for more information about how client-side caching works and how to use it effectively. + +To enable client-side caching, specify the +[RESP3]({{< relref "/develop/reference/protocol-spec#resp-versions" >}}) +protocol and pass a cache configuration object during the connection. + +The example below shows the simplest client-side caching connection to the default host and port, +`localhost:6379`. +All of the connection variants described above accept these parameters, so you can +use client-side caching with a connection pool or a cluster connection in exactly the same way. + +{{< note >}}Client-side caching requires Jedis v5.2.0 or later. +To maximize compatibility with all Redis products, client-side caching +is supported by Redis v7.4 or later. + +The [Redis server products]({{< relref "/operate" >}}) support +[opt-in/opt-out]({{< relref "/develop/reference/client-side-caching#opt-in-and-opt-out-caching" >}}) mode +and [broadcasting mode]({{< relref "/develop/reference/client-side-caching#broadcasting-mode" >}}) +for CSC, but these modes are not currently implemented by Jedis. 
+{{< /note >}} + +```java +HostAndPort endpoint = new HostAndPort("localhost", 6379); + +DefaultJedisClientConfig config = DefaultJedisClientConfig + .builder() + .password("secretPassword") + .protocol(RedisProtocol.RESP3) + .build(); + +CacheConfig cacheConfig = CacheConfig.builder().maxSize(1000).build(); + +UnifiedJedis client = new UnifiedJedis(endpoint, config, cacheConfig); +``` + +Once you have connected, the usual Redis commands will work transparently +with the cache: + +```java +client.set("city", "New York"); +client.get("city"); // Retrieved from Redis server and cached +client.get("city"); // Retrieved from cache +``` + +You can see the cache working if you connect to the same Redis database +with [`redis-cli`]({{< relref "/develop/tools/cli" >}}) and run the +[`MONITOR`]({{< relref "/commands/monitor" >}}) command. If you run the +code above but without passing `cacheConfig` during the connection, +you should see the following in the CLI among the output from `MONITOR`: + +``` +1723109720.268903 [...] "SET" "city" "New York" +1723109720.269681 [...] "GET" "city" +1723109720.270205 [...] "GET" "city" +``` + +The server responds to both `get("city")` calls. +If you run the code with `cacheConfig` added in again, you will see + +``` +1723110248.712663 [...] "SET" "city" "New York" +1723110248.713607 [...] "GET" "city" +``` + +The first `get("city")` call contacted the server, but the second +call was satisfied by the cache. + +### Removing items from the cache + +You can remove individual keys from the cache with the +`deleteByRedisKey()` method of the cache object. This removes all cached items associated +with each specified key, so all results from multi-key commands (such as +[`MGET`]({{< relref "/commands/mget" >}})) and composite data structures +(such as [hashes]({{< relref "/develop/data-types/hashes" >}})) will be +cleared at once. The example below shows the effect of removing a single +key from the cache: + +```java +client.hget("person:1", "name"); // Read from the server +client.hget("person:1", "name"); // Read from the cache + +client.hget("person:2", "name"); // Read from the server +client.hget("person:2", "name"); // Read from the cache + +Cache myCache = client.getCache(); +myCache.deleteByRedisKey("person:1"); + +client.hget("person:1", "name"); // Read from the server +client.hget("person:1", "name"); // Read from the cache + +client.hget("person:2", "name"); // Still read from the cache +``` + +You can also clear all cached items using the `flush()` +method: + +```java +client.hget("person:1", "name"); // Read from the server +client.hget("person:1", "name"); // Read from the cache + +client.hget("person:2", "name"); // Read from the server +client.hget("person:2", "name"); // Read from the cache + +Cache myCache = client.getCache(); +myCache.flush(); + +client.hget("person:1", "name"); // Read from the server +client.hget("person:1", "name"); // Read from the cache + +client.hget("person:2", "name"); // Read from the server +client.hget("person:2", "name"); // Read from the cache +``` + +The client will also flush the cache automatically +if any connection (including one from a connection pool) +is disconnected. + +## Connect with a connection pool + +For production usage, you should use a connection pool to manage +connections rather than opening and closing connections individually. +A connection pool maintains several open connections and reuses them +efficiently. When you open a connection from a pool, the pool allocates +one of its open connections. 
When you subsequently close the same connection,
it is not actually closed but simply returned to the pool for reuse.
This avoids the overhead of repeated connecting and disconnecting.
See
[Connection pools and multiplexing]({{< relref "/develop/clients/pools-and-muxing" >}})
for more information.

Use the following code to connect with a connection pool:

```java
package org.example;

import java.util.HashMap;
import java.util.Map;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class Main {
    public static void main(String[] args) {
        JedisPool pool = new JedisPool("localhost", 6379);

        try (Jedis jedis = pool.getResource()) {
            // Store & Retrieve a simple string
            jedis.set("foo", "bar");
            System.out.println(jedis.get("foo")); // prints bar

            // Store & Retrieve a HashMap
            Map<String, String> hash = new HashMap<>();
            hash.put("name", "John");
            hash.put("surname", "Smith");
            hash.put("company", "Redis");
            hash.put("age", "29");
            jedis.hset("user-session:123", hash);
            System.out.println(jedis.hgetAll("user-session:123"));
            // Prints: {name=John, surname=Smith, company=Redis, age=29}
        }
    }
}
```

Because adding a `try-with-resources` block for each command can be cumbersome, consider using `JedisPooled` as an easier way to pool connections. `JedisPooled`, added in Jedis version 4.0.0, provides capabilities similar to `JedisPool` but with a more straightforward API.

```java
import redis.clients.jedis.JedisPooled;

//...

JedisPooled jedis = new JedisPooled("localhost", 6379);
jedis.set("foo", "bar");
System.out.println(jedis.get("foo")); // prints "bar"
```

A connection pool holds a specified number of connections, creates more connections when necessary, and terminates them when they are no longer needed.

Here is a simplified connection lifecycle in a pool:

1. A connection is requested from the pool.
2. A connection is served:
   - An idle connection is served when non-active connections are available, or
   - A new connection is created when the number of connections is under `maxTotal`.
3. The connection becomes active.
4. The connection is released back to the pool.
5. The connection is marked as stale.
6. The connection is kept idle for `minEvictableIdleTime`.
7. The connection becomes evictable if the number of connections is greater than `minIdle`.
8. The connection is ready to be closed.

It's important to configure the connection pool correctly. Use
`ConnectionPoolConfig`, which extends `GenericObjectPoolConfig` from
[Apache Commons Pool2](https://commons.apache.org/proper/commons-pool/apidocs/org/apache/commons/pool2/impl/GenericObjectPoolConfig.html).

```java
ConnectionPoolConfig poolConfig = new ConnectionPoolConfig();
// maximum active connections in the pool,
// tune this according to your needs and application type
// default is 8
poolConfig.setMaxTotal(8);

// maximum idle connections in the pool, default is 8
poolConfig.setMaxIdle(8);
// minimum idle connections in the pool, default 0
poolConfig.setMinIdle(0);

// Enables waiting for a connection to become available.
poolConfig.setBlockWhenExhausted(true);
// The maximum amount of time to wait for a connection to become available
poolConfig.setMaxWait(Duration.ofSeconds(1));

// Enables sending a PING command periodically while the connection is idle.
+poolConfig.setTestWhileIdle(true); +// controls the period between checks for idle connections in the pool +poolConfig.setTimeBetweenEvictionRuns(Duration.ofSeconds(1)); + +// JedisPooled does all hard work on fetching and releasing connection to the pool +// to prevent connection starvation +JedisPooled jedis = new JedisPooled(poolConfig, "localhost", 6379); +``` +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use the Redis query engine with JSON and hash documents. +linkTitle: Index and query documents +title: Index and query documents +weight: 2 +--- + +This example shows how to create a +[search index]({{< relref "/develop/interact/search-and-query/indexing" >}}) +for [JSON]({{< relref "/develop/data-types/json" >}}) documents and +run queries against the index. It then goes on to show the slight differences +in the equivalent code for [hash]({{< relref "/develop/data-types/hashes" >}}) +documents. + +## Initialize + +Make sure that you have [Redis Open Source]({{< relref "/operate/oss_and_stack/" >}}) +or another Redis server available. Also install the +[Jedis]({{< relref "/develop/clients/jedis" >}}) client library if you +haven't already done so. + +Add the following dependencies. All of them are applicable to both JSON and hash, +except for the `Path` and `JSONObject` classes, which are specific to JSON (see +[Path]({{< relref "/develop/data-types/json/path" >}}) for a description of the +JSON path syntax). + +{{< clients-example java_home_json import >}} +{{< /clients-example >}} + +## Create data + +Create some test data to add to the database: + +{{< clients-example java_home_json create_data >}} +{{< /clients-example >}} + +## Add the index + +Connect to your Redis database. The code below shows the most +basic connection but see +[Connect to the server]({{< relref "/develop/clients/jedis/connect" >}}) +to learn more about the available connection options. + +{{< clients-example java_home_json connect >}} +{{< /clients-example >}} + +Create an index. In this example, only JSON documents with the key prefix `user:` are indexed. For more information, see [Query syntax]({{< relref "/develop/interact/search-and-query/query/" >}}). + +{{< clients-example java_home_json make_index >}} +{{< /clients-example >}} + +## Add the data + +Add the three sets of user data to the database as +[JSON]({{< relref "/develop/data-types/json" >}}) objects. +If you use keys with the `user:` prefix then Redis will index the +objects automatically as you add them: + +{{< clients-example java_home_json add_data >}} +{{< /clients-example >}} + +## Query the data + +You can now use the index to search the JSON objects. The +[query]({{< relref "/develop/interact/search-and-query/query" >}}) +below searches for objects that have the text "Paul" in any field +and have an `age` value in the range 30 to 40: + +{{< clients-example java_home_json query1 >}} +{{< /clients-example >}} + +Specify query options to return only the `city` field: + +{{< clients-example java_home_json query2 >}} +{{< /clients-example >}} + +Use an +[aggregation query]({{< relref "/develop/interact/search-and-query/query/aggregation" >}}) +to count all users in each city. + +{{< clients-example java_home_json query3 >}} +{{< /clients-example >}} + +## Differences with hash documents + +Indexing for hash documents is very similar to JSON indexing but you +need to specify some slightly different options. 

When you create the schema for a hash index, you don't need to
add aliases for the fields, since you use the basic names to access
the fields anyway. Also, you must use `IndexDataType.HASH` for the `on()`
option of `FTCreateParams` when you create the index. The code below shows these
changes with a new index called `hash-idx:users`, which is otherwise the same as
the `idx:users` index used for JSON documents in the previous examples.

{{< clients-example java_home_json make_hash_index >}}
{{< /clients-example >}}

Use [`hset()`]({{< relref "/commands/hset" >}}) to add the hash
documents instead of [`jsonSet()`]({{< relref "/commands/json.set" >}}).

{{< clients-example java_home_json add_hash_data >}}
{{< /clients-example >}}

The query commands work the same here for hash as they do for JSON (but
the name of the hash index is different). The results are returned in
a `List` of `Document` objects, as with JSON:

{{< clients-example java_home_json query1_hash >}}
{{< /clients-example >}}

## More information

See the [Redis query engine]({{< relref "/develop/interact/search-and-query" >}}) docs
for a full description of all query features with examples.
---
aliases: /develop/connect/clients/java/jedis
categories:
- docs
- develop
- stack
- oss
- rs
- rc
- oss
- kubernetes
- clients
description: Connect your Java application to a Redis database
linkTitle: Jedis (Java)
title: Jedis guide (Java)
weight: 5
---

[Jedis](https://github.com/redis/jedis) is a synchronous Java client for Redis.
Use [Lettuce]({{< relref "/develop/clients/lettuce" >}}) if you need
a more advanced Java client that also supports asynchronous and reactive connections.
The sections below explain how to install `Jedis` and connect your application
to a Redis database.

`Jedis` requires a running Redis server. See [here]({{< relref "/operate/oss_and_stack/install/" >}}) for Redis Open Source installation instructions.

## Install

To include `Jedis` as a dependency in your application, edit the dependency file, as follows.

* If you use **Maven**:

    ```xml
    <dependency>
        <groupId>redis.clients</groupId>
        <artifactId>jedis</artifactId>
        <version>5.2.0</version>
    </dependency>
    ```

* If you use **Gradle**:

    ```
    repositories {
        mavenCentral()
    }
    //...
    dependencies {
        implementation 'redis.clients:jedis:5.2.0'
        //...
    }
    ```

* If you use the JAR files, download the latest Jedis and Apache Commons Pool2 JAR files from [Maven Central](https://central.sonatype.com/) or any other Maven repository.

* Build from [source](https://github.com/redis/jedis).


## Connect and test

The following code opens a basic connection to a local Redis server:

```java
package org.example;
import redis.clients.jedis.UnifiedJedis;

public class Main {
    public static void main(String[] args) {
        UnifiedJedis jedis = new UnifiedJedis("redis://localhost:6379");

        // Code that interacts with Redis...

        jedis.close();
    }
}
```

After you have connected, you can check the connection by storing and
retrieving a simple string value:

```java
...

String res1 = jedis.set("bike:1", "Deimos");
System.out.println(res1); // OK

String res2 = jedis.get("bike:1");
System.out.println(res2); // Deimos

...
```

## More information

`Jedis` has a complete [API reference](https://www.javadoc.io/doc/redis.clients/jedis/latest/index.html) available on [javadoc.io/](https://javadoc.io/).
+The `Jedis` [GitHub repository](https://github.com/redis/jedis) also has useful docs +and examples including a page about handling +[failover with Jedis](https://github.com/redis/jedis/blob/master/docs/failover.md) + +See also the other pages in this section for more information and examples: +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Get your Jedis app ready for production +linkTitle: Production usage +title: Production usage +weight: 6 +--- + +This guide offers recommendations to get the best reliability and +performance in your production environment. + +## Checklist + +Each item in the checklist below links to the section +for a recommendation. Use the checklist icons to record your +progress in implementing the recommendations. + +{{< checklist "prodlist" >}} + {{< checklist-item "#connection-pooling" >}}Connection pooling{{< /checklist-item >}} + {{< checklist-item "#client-side-caching" >}}Client-side caching{{< /checklist-item >}} + {{< checklist-item "#timeouts" >}}Timeouts{{< /checklist-item >}} + {{< checklist-item "#health-checks" >}}Health checks{{< /checklist-item >}} + {{< checklist-item "#exception-handling" >}}Exception handling{{< /checklist-item >}} + {{< checklist-item "#dns-cache-and-redis" >}}DNS cache and Redis{{< /checklist-item >}} +{{< /checklist >}} + +## Recommendations + +The sections below offer recommendations for your production environment. Some +of them may not apply to your particular use case. + +### Connection pooling + +Example code often opens a connection at the start, demonstrates a feature, +and then closes the connection at the end. However, production code +typically uses connections many times intermittently. Repeatedly opening +and closing connections has a performance overhead. + +Use [connection pooling]({{< relref "/develop/clients/pools-and-muxing" >}}) +to avoid the overhead of opening and closing connections without having to +write your own code to cache and reuse open connections. See +[Connect with a connection pool]({{< relref "/develop/clients/jedis/connect#connect-with-a-connection-pool" >}}) +to learn how to use this technique with Jedis. + +### Client-side caching + +[Client-side caching]({{< relref "/develop/clients/client-side-caching" >}}) +involves storing the results from read-only commands in a local cache. If the +same command is executed again later, the results can be obtained from the cache, +without contacting the server. This improves command execution time on the client, +while also reducing network traffic and server load. See +[Connect using client-side caching]({{< relref "/develop/clients/jedis/connect#connect-using-client-side-caching" >}}) +for more information and example code. + +### Timeouts + +If a network or server error occurs while your code is opening a +connection or issuing a command, it can end up hanging indefinitely. +You can prevent this from happening by setting timeouts for socket +reads and writes and for opening connections. + +To set a timeout for a connection, use the `JedisPooled` or `JedisPool` constructor with the `timeout` parameter, or use `JedisClientConfig` with the `socketTimeout` and `connectionTimeout` parameters. +(The socket timeout is the maximum time allowed for reading or writing data while executing a +command. The connection timeout is the maximum time allowed for establishing a new connection.) 
+ +```java +HostAndPort hostAndPort = new HostAndPort("localhost", 6379); + +JedisPooled jedisWithTimeout = new JedisPooled(hostAndPort, + DefaultJedisClientConfig.builder() + .socketTimeoutMillis(5000) // set timeout to 5 seconds + .connectionTimeoutMillis(5000) // set connection timeout to 5 seconds + .build(), + poolConfig +); +``` + +### Health checks + +If your code doesn't access the Redis server continuously then it +might be useful to make a "health check" periodically (perhaps once +every few seconds). You can do this using a simple +[`PING`]({{< relref "/commands/ping" >}}) command: + +```java +try (Jedis jedis = jedisPool.getResource()) { + if (! "PONG".equals(jedis.ping())) { + // Report problem. + } +} +``` + +Health checks help to detect problems as soon as possible without +waiting for a user to report them. + +### Exception handling + +Redis handles many errors using return values from commands, but there +are also situations where exceptions can be thrown. In production code, +you should handle exceptions as they occur. + +The Jedis exception hierarchy is rooted on `JedisException`, which implements +`RuntimeException`. All exceptions in the hierarchy are therefore unchecked +exceptions. + +``` +JedisException +├── JedisDataException +│ ├── JedisRedirectionException +│ │ ├── JedisMovedDataException +│ │ └── JedisAskDataException +│ ├── AbortedTransactionException +│ ├── JedisAccessControlException +│ └── JedisNoScriptException +├── JedisClusterException +│ ├── JedisClusterOperationException +│ ├── JedisConnectionException +│ └── JedisValidationException +└── InvalidURIException +``` + +#### General exceptions + +In general, Jedis can throw the following exceptions while executing commands: + +- `JedisConnectionException` - when the connection to Redis is lost or closed unexpectedly. Configure failover to handle this exception automatically with Resilience4J and the built-in Jedis failover mechanism. +- `JedisAccessControlException` - when the user does not have the permission to execute the command or the user ID and/or password are incorrect. +- `JedisDataException` - when there is a problem with the data being sent to or received from the Redis server. Usually, the error message will contain more information about the failed command. +- `JedisException` - this exception is a catch-all exception that can be thrown for any other unexpected errors. + +Conditions when `JedisException` can be thrown: +- Bad return from a health check with the [`PING`]({{< relref "/commands/ping" >}}) command +- Failure during SHUTDOWN +- Pub/Sub failure when issuing commands (disconnect) +- Any unknown server messages +- Sentinel: can connect to sentinel but master is not monitored or all Sentinels are down. +- MULTI or DISCARD command failed +- Shard commands key hash check failed or no Reachable Shards +- Retry deadline exceeded/number of attempts (Retry Command Executor) +- POOL - pool exhausted, error adding idle objects, returning broken resources to the pool + +All the Jedis exceptions are runtime exceptions and in most cases irrecoverable, so in general bubble up to the API capturing the error message. + +### DNS cache and Redis + +When you connect to a Redis server with multiple endpoints, such as [Redis Enterprise Active-Active](https://redis.com/redis-enterprise/technology/active-active-geo-distribution/), you *must* +disable the JVM's DNS cache. If a server node or proxy fails, the IP address for any database +affected by the failure will change. 
When this happens, your app will keep +trying to use the stale IP address if DNS caching is enabled. + +Use the following code to disable the DNS cache: + +```java +java.security.Security.setProperty("networkaddress.cache.ttl","0"); +java.security.Security.setProperty("networkaddress.cache.negative.ttl", "0"); +``` +--- +aliases: /develop/connect/clients/python/redis-vl +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect your Python vector application to a Redis vector database +linkTitle: RedisVL (Python) +title: Redis vector library guide (Python) +weight: 2 +--- + +See the [RedisVL Guide]({{< relref "/integrate/redisvl" >}}) for more information.--- +aliases: /develop/connect/clients +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +hideListLinks: true +description: Connect your application to a Redis database and try an example +linkTitle: Client APIs +title: Connect with Redis client API libraries +weight: 30 +--- + +Use the Redis client libraries to connect to Redis servers from +your own code. We document the following client libraries +for seven main languages: + +| Language | Client name | Docs | Supported | +| :-- | :-- | :-- | :-- | +| [Python](https://www.python.org/) | [`redis-py`](https://github.com/redis/redis-py) |[`redis-py` guide]({{< relref "/develop/clients/redis-py" >}}) | Yes | +| [Python](https://www.python.org/) | [`RedisVL`](https://github.com/redis/redis-vl-python) |[RedisVL guide]({{< relref "/integrate/redisvl" >}}) | Yes +| [C#/.NET](https://learn.microsoft.com/en-us/dotnet/csharp/) | [`NRedisStack`](https://github.com/redis/NRedisStack) |[`NRedisStack` guide]({{< relref "/develop/clients/dotnet" >}}) | Yes | +| [JavaScript](https://nodejs.org/en) | [`node-redis`](https://github.com/redis/node-redis) | [`node-redis` guide]({{< relref "/develop/clients/nodejs" >}}) | Yes | +| [Java](https://www.java.com/en/) | [`Jedis`](https://github.com/redis/jedis) | [`Jedis` guide]({{< relref "/develop/clients/jedis" >}}) | Yes | +| [Java](https://www.java.com/en/) | [`Lettuce`](https://github.com/redis/lettuce) | [`Lettuce` guide]({{< relref "/develop/clients/lettuce" >}}) | Yes | +| [Go](https://go.dev/) | [`go-redis`](https://github.com/redis/go-redis) | [`go-redis` guide]({{< relref "/develop/clients/go" >}}) | Yes | +| [PHP](https://www.php.net/)| [`Predis`](https://github.com/predis/predis) | [`Predis` guide]({{< relref "/develop/clients/php" >}}) | No | +| [C](https://en.wikipedia.org/wiki/C_(programming_language)) | [`hiredis`](https://github.com/redis/hiredis) | [`hiredis` guide]({{< relref "/develop/clients/hiredis" >}}) | Yes | + +We also provide several higher-level +[object mapping (OM)]({{< relref "/develop/clients/om-clients" >}}) +libraries for [Python]({{< relref "/integrate/redisom-for-python" >}}), +[C#/.NET]({{< relref "/integrate/redisom-for-net" >}}), +[Node.js]({{< relref "/integrate/redisom-for-node-js" >}}), and +[Java/Spring]({{< relref "/integrate/redisom-for-java" >}}). 
+ +## Community-supported clients + +The table below shows the recommended third-party client libraries for languages that +Redis does not document directly: + +| Language | Client name | Github | Docs | +| :-- | :-- | :-- | :-- | +| [C++](https://en.wikipedia.org/wiki/C%2B%2B) | Boost.Redis | https://github.com/boostorg/redis | https://www.boost.org/doc/libs/develop/libs/redis/doc/html/index.html | +| [Dart](https://dart.dev/) | redis_dart_link | https://github.com/toolsetlink/redis_dart_link | https://github.com/toolsetlink/redis_dart_link | +| [PHP](https://www.php.net/) | PhpRedis extension | https://github.com/phpredis/phpredis | https://github.com/phpredis/phpredis/blob/develop/README.md | +| [Ruby](https://www.ruby-lang.org/en/) | redis-rb | https://github.com/redis/redis-rb | https://rubydoc.info/gems/redis | +| [Rust](https://www.rust-lang.org/) | redis-rs | https://github.com/redis-rs/redis-rs | https://docs.rs/redis/latest/redis/ | + + +## Requirements + +You will need access to a Redis server to use these libraries. +You can experiment with a local installation of Redis Open Source +(see [Install Redis Open Source]({{< relref "/operate/oss_and_stack/install/install-stack/" >}})) or with a free trial of [Redis Cloud]({{< relref "/operate/rc" >}}). +To interact with a Redis server without writing code, use the +[Redis CLI]({{< relref "/develop/tools/cli" >}}) and +[Redis Insight]({{< relref "/develop/tools/insight" >}}) tools. +--- +title: Redis 7.2 +alwaysopen: false +categories: +- docs +- operate +- rs +- rcw +description: What's new in Redis 7.2 +linkTitle: What's new in Redis 7.2 +weight: 20 +--- + +Redis version 7.2 introduces new capabilities, including improved geospatial queries, and streamlined JSON data manipulation. Performance optimizations, client-side enhancements, and behavioral refinements further improve the efficiency, security, and usability of Redis. +Below is a detailed breakdown of these updates. + +## New features + +### Geospatial queries with polygon search +Redis Query Engine now supports querying geospatial data using polygon search, enabling developers to efficiently filter and retrieve data within complex geographic boundaries. + +### Streamlined data manipulation in JSON +JSON now includes two new commands for improved data handling: + +- `JSON.MERGE`: Merges a given JSON value into matching paths, allowing more flexible updates. +- `JSON.MSET`: Sets or updates multiple JSON values simultaneously based on specified key-path-value triplets, improving efficiency when handling structured data. + +## Improvements + +### Existing data structures +Significant performance improvements have been made across Redis data types. Sorted sets, commonly used for gaming leaderboards, now see performance improvements ranging from [30% to 100%](https://redis.io/blog/introducing-redis-7-2/#:~:text=We%20made%20Redis%20more%20powerful%20for%20developers). + +Additionally, Redis stream consumer tracking has been enhanced to provide better visibility into consumer activity, and blocked stream commands now return a distinct error when the target key no longer exists. + +### Redis Query Engine improvements +The Redis Query Engine has received several updates, including optimized `SORT BY` operations and the addition of a new `FORMAT` response in RESP3, improving both efficiency and readability. + +### Script execution enhancemets +Client-side tracking now monitors actual keys read during script execution, improving key usage tracking accuracy. 
Additionally, blocked commands will re-evaluate security checks before execution, ensuring compliance with updated permissions. Standardized ACL failure messages and error codes now provide clearer error handling. + +### Client and replication enhancements +TLS-based replication now supports Server Name Indication (SNI) to improve compatibility with secure deployments. The `HELLO` command behavior has also been refined to modify client state only upon successful execution, ensuring more predictable client behavior. + +## Changes + +### Breaking changes +Redis 7.2 introduces several backward-incompatible changes. Lua scripts no longer support the `print()` function, blocking of `PFCOUNT` and `PUBLISH` in read-only scripts, and time sampling freezing during command execution. Error handling updates include case changes in error responses, new behavior for `ZPOPMIN/ZPOPMAX` with `count 0`, and adjustments to `XCLAIM/XAUTOCLAIM`. ACL changes affect command categorization and key access permissions, while command introspection now includes per-subcommand statistics. Redis now allows certain `CONFIG` commands during loading and tracks statistics only when commands are executed. + +For more details, see [Redis 7.2 Breaking Changes](https://redis.io/docs/latest/embeds/r7.2-breaking-changes/). + +### Expired keys are now deleted from replica indexes +Expired keys are now deleted from Redis Query Engine replica indexes, ensuring that queries return an empty array rather than `nil` when the data no longer exists. + +### Other changes +Redis Stack 7.2 no longer includes Graph capabilities. For more details, refer to the [RedisGraph End-of-Life Announcement](https://redis.io/blog/redisgraph-eol/#:~:text=After%20January%2031%2C%202025%2C%20RedisGraph,subscriptions%20until%20January%2031%2C%202024.). + +## Component versions +The Redis version 7.2 includes the following components: + +- [Redis 7.2](https://github.com/redis/redis/blob/7.2/00-RELEASENOTES) +- [Search 2.8](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redisearch/redisearch-2.8-release-notes/) +- [JSON 2.6](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redisjson/redisjson-2.6-release-notes/) +- [Time series 1.10](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redistimeseries/redistimeseries-1.10-release-notes/) +- [Bloom 2.6](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redisbloom/redisbloom-2.6-release-notes/) +--- +title: Redis 6.2 +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: What's new in Redis 6.2 +linkTitle: What's new in Redis 6.2 +weight: 30 +--- + +Redis version 6.2 introduces new capabilities designed to improve data indexing, querying, and analytics. This update brings multi-value indexing, expanded wildcard query support, and a new probabilistic data structure for quantile estimation. Additionally, significant enhancements to Redis streams and time series data processing offer greater flexibility for developers working with real-time and historical datasets. Over 25 new commands have been added to Redis that address key feature requests and further extending its capabilities. +Below is a detailed breakdown of these improvements. + +## New features + +### Multi-value indexing and querying +Redis now supports indexing and querying multi-value attributes across all field types, including `TEXT`, `TAG`, `NUMERIC`, `GEO`, and `VECTOR`. 
Developers can define JSONPath expressions leading to arrays or multiple scalar values, overcoming the previous limitation of indexing only single scalar attributes. + +### Wildcard query support +The Redis Query Engine now enables suffix and infix wildcard searches for `TEXT` and `TAG` fields. This enhancement provides greater flexibility in data retrieval and filtering. + +### t-digest: a new probabilistic data structure for quantile estimation +Redis introduces t-digest, an advanced probabilistic data structure that efficiently estimates quantiles in large datasets or continuous data streams. This is particularly beneficial for analytics and monitoring applications where quantile calculations are required. + +### Retrieve aggregation results for ongoing time series buckets +A new feature allows users to retrieve the latest, still-open time series buckets during compaction. + +### Time-weighted average aggregator for time series +Redis now includes a time-weighted average aggregator, improving accuracy in average-over-time calculations. This feature is especially valuable for time series data with irregular sampling intervals. + +### Gap-filling for time series data +To improve time series analytics, Redis introduces gap-filling capabilities. This feature allows interpolation of missing values or repetition of the last known value for empty time buckets, ensuring continuity in time series analysis. + +## Improvements +### Existing data structures +Redis 6.2 introduces over 25 new commands, fulfilling long-standing community requests. Notably: + +- The long-awaited `ZUNION` and `ZINTER` commands now allow direct retrieval of results, unlike `ZUNIONSTORE` and `ZINTERSTORE`, which store results in a key. +- Redis streams enhancements include: + - Support for exclusive range queries, providing finer control over data retrieval. + - The ability to filter pending messages based on idle time, improving message management. + - A new mechanism to automatically claim pending messages from a stream consumer group, transferring ownership of messages that have exceeded their idle timeout to a new consumer without requiring manual acknowledgment. 
+ +## Component versions +The Redis version 6.2 is built from the following component versions: + +- [Redis 6.2](https://github.com/redis/redis/blob/6.2/00-RELEASENOTES) +- [Search 2.6](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redisearch/redisearch-2.6-release-notes/) +- [JSON 2.4](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redisjson/redisjson-2.4-release-notes/) +- [Time series 1.8](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redistimeseries/redistimeseries-1.8-release-notes/) +- [Bloom 2.4](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redisbloom/redisbloom-2.4-release-notes/) +- [Graph 2.10](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redisgraph/redisgraph-2.10-release-notes/) +--- +title: Redis 8.0 +alwaysopen: false +aliases: +- /develop/whats-new/8-0-rc-1/ +categories: +- docs +- operate +- rs +- rc +description: What's new in Redis 8 in Redis Open Source +linkTitle: What's new in Redis 8.0 +weight: 5 +--- + +## Highlights + +- **Name change**: Redis Community Edition is now **Redis Open Source** +- **License options**: + - Redis Source Available License 2.0 (RSALv2) + - Server Side Public License v1 (SSPLv1) + - GNU Affero General Public License (AGPLv3) + +- **Integrated modules** now part of core: + - JSON + - Probabilistic: Bloom, Cuckoo, Count-min sketch, Top-K, and t-digest + - Time Series + - [Vector sets (preview)]({{< relref "/develop/data-types/vector-sets/" >}}) + - [Redis Query Engine]({{< relref "/develop/interact/#search-and-query" >}}) with horizontal & vertical scaling + - All components available in Redis binary distributions + - New config file: `redis-full.conf` for full component loading + +## New Commands + +- **Hash with expiration support**: + - `HGETDEL` – get and delete hash field + - `HGETEX`, `HSETEX` – get/set hash fields with expiration +- **Field TTL & expiration (7.4+)**: + - `HEXPIRE`, `HPEXPIRE`, `HEXPIREAT`, `HPEXPIREAT` + - `HPERSIST`, `HEXPIRETIME`, `HPEXPIRETIME`, `HTTL`, `HPTTL` +- **Other command additions**: + - `XREAD +` – read latest stream entry + - `HSCAN NOVALUES` – scan hash field names only + - `SORT` in cluster mode with `BY` and `GET` + - `CLIENT KILL MAXAGE` + - Lua: `os.clock()` now available + - `SPUBLISH` in `MULTI/EXEC` transactions on replicas + - [Vector set command group (preview)]({{< relref "/commands/?group=vector_set" >}}) + +## Internal Architecture + +- **I/O threading overhaul**: read+write threading for higher throughput +- **Replication**: improved mechanism with AOF offset support +- **Over 30 performance optimizations**: + - Optimized: `GET`, `EXISTS`, `LRANGE`, `HSET`, `XREAD`, `SCAN`, `ZADD`, `ZUNION`, `PFCOUNT`, `HSCAN`, and more + - Improved latency, memory, and CPU utilization + +## Security + +- CVE-2024-46981: Lua RCE +- CVE-2024-51741: ACL DoS +- CVE-2024-31449, 31227, 31228: DoS in Lua/ACLs + +## Packaging + +Redis 8 in Redis Open Source is available in the following distributions: + +- [Docker](https://hub.docker.com/_/redis) +- APT +- RPM +- Snap +- Homebrew +- Pre-built binaries +- [Source code](https://github.com/redis/redis/releases/tag/8.0-rc1) + +## Observability + +- New `INFO` sections: + - `KEYSIZES`, `Threads` + - Hash expiration stats + - Client buffer disconnection counters + - Dictionary memory rehashing + - Script eviction stats + +## Upgrades & Support + +- Supports upgrade from: 
+ - Redis 7.x with or without modules + - Redis Stack 7.2 and 7.4 +- Supported operating systems: + - Ubuntu 20.04 / 22.04 / 24.04 + - Debian 11 / 12 + - macOS 13–15 + - Rocky/Alma Linux 8.10 / 9.5 +--- +title: Redis feature sets +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Highlights of what's new for Redis feature sets +linkTitle: Redis feature sets +weight: 40 +--- + +A Redis feature set includes a specific Redis database version along with the advanced capabilities and data structures provided by compatible module versions. + +To use a new feature introduced in a later feature set, you must upgrade the corresponding components according to the following table. + +| Redis feature set | What's new | +|-------------------|------------| +| **Feature set version:** 8.0| See [here]({{< relref "/develop/whats-new/8-0" >}})| +| **Feature set version:** 7.4

**Component versions:**
[Redis 7.4]({{}})
[Search 2.10]({{< relref "/operate/oss_and_stack/stack-with-enterprise/release-notes/redisearch/redisearch-2.10-release-notes.md" >}})
[JSON 2.8]({{}})
[Time series 1.12]({{}})
[Bloom 2.8]({{}}) | **Hash**:
- [Expiration of individual hash fields]({{}}).
**Streams**:
- To start reading from the last stream message, use [`XREAD`]({{}}) with the new ID value `+`.
**Time series**:
- Insertion filter for close samples.
**JSON**:
- A fix to not duplicate `AOF` commands multiple times in [`JSON.MSET`]({{< relref "commands/json.mset/" >}}).
**Probabilistic**:
- Returns an error if [`CMS.MERGE`]({{< relref "commands/cms.merge/" >}}) results in an overflow or underflow.
**Redis Query Engine**:
- New `BFLOAT16` and `FLOAT16` vector data types, reducing memory consumed by vectors while preserving accuracy.
- Support for indexing empty and missing values and enhanced developer experience for queries with exact matching capabilities.
- You can match `TAG` fields without needing to escape special characters.
- Expanded geospatial search with new `INTERSECT` and `DISJOINT` operators, improved reporting of the memory consumed by the index, and exposed full-text scoring in aggregation pipelines. | +| **Feature set version:** 7.2

**Component versions:**
[Redis 7.2](https://raw.githubusercontent.com/redis/redis/7.2/00-RELEASENOTES)
[Search 2.8]({{< relref "/operate/oss_and_stack/stack-with-enterprise/release-notes/redisearch/redisearch-2.8-release-notes.md" >}})
[JSON 2.6]({{}})
[Time series 1.10]({{}})
[Bloom 2.6]({{}})
[Gears 2.0](https://github.com/RedisGears/RedisGears/releases) | - Performance and resource utilization improvements, including significant memory and speed optimizations for lists, sets, and sorted sets.
**JSON**:
- New JSON commands: [`JSON.MERGE`]({{< relref "commands/json.merge/" >}}) and [`JSON.MSET`]({{< relref "commands/json.mset/" >}}).
**Redis Query Engine:**
- [Geo polygon search]({{< relref "commands/ft.search/#examples" >}}).
**Compatibility changes**:
- Redis 7.2 uses a new format (version 11) for RDB files, which is incompatible with older versions.
- Redis feature set 7.2 does not include [graph capabilities](https://redis.io/blog/redisgraph-eol/). | +| **Feature set version:** 6.2

**Component versions:**
[Redis 6.2](https://raw.githubusercontent.com/redis/redis/6.2/00-RELEASENOTES)
[Search 2.6]({{< relref "/operate/oss_and_stack/stack-with-enterprise/release-notes/redisearch/redisearch-2.6-release-notes.md" >}})
[JSON 2.4]({{}})
[Time series 1.8]({{}})
[Bloom 2.4]({{}})
[Graph 2.10]({{}}) | **Time series**:
- Time series gap filling.
**JSON**:
- Improved JSON path parser.
**Probabilistic:**
- New probabilistic data structure t-digest.
**Redis Query Engine:**
- Wildcard queries for `TEXT` and `TAG`.
- Suffix search.
- Multi-value indexing and queries.
**Graph**:
- New pathfinding algorithms for graphs. | + +--- +title: Redis 7.4 +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: What's new in Redis 7.4 +linkTitle: What's new in Redis 7.4 +weight: 10 +--- + +Redis 7.4 introduces several new features and improvements aimed at enhancing memory efficiency, performance, and ease of use for various applications. These updates include support for hash field expiration, new memory-efficient data types for AI workloads, simplified secondary indexing, and time series optimizations. Additionally, Redis 7.4 brings several behavior and component changes. +Below is a detailed breakdown of these updates. + +## New features + +### Hash field expiration support + +Redis 7.4 adds the ability to set expiration times for individual hash fields or adjust their remaining TTL. This feature, long-requested by users, improves memory efficiency and performance, especially in caching and session storage scenarios. + +### New memory-efficient data types for AI workloads +With the growing demand for AI applications, Redis 7.4 introduces `BFLOAT16` and `FLOAT16` data types. These new types reduce memory usage by up to 47% and lower latency by as much as 59% under load, making them ideal for storing and processing vector embeddings in AI-powered applications, including vector databases and Retrieval Augmented Generation (RAG) systems. + +### Time series optimization with insertion filters +Redis 7.4 introduces insertion filters for time series data, allowing sensors to ignore new measurements when the differences in time or value are minimal. This feature helps reduce the size of time series data and boosts efficiency. + +## Improvements + +### Simplified secondary indexing +The Redis Query Engine now offers a more straightforward approach to secondary indexing with the addition of the `TAG` index type. Querying tags with special characters (like `@` and `.`) is easier, as it no longer requires escaping; simply wrap query terms in double quotes. The update also includes improved handling of empty and missing fields, making the data model more flexible. Geospatial search has been enhanced with new operators, such as `INTERSECT` and `DISJOIN`, and memory usage reporting for indexes has been improved. + +## Changes + +### Behavior changes +Redis 7.4 includes behavior changes such as using jemalloc instead of libc for allocating Lua VM code. This adjustment reduces memory fragmentation and improves performance. Additionally, the `ACL LOAD` command has been modified to ensure that only clients with affected user configurations are disconnected, reducing unnecessary disruptions. + +## Component versions +The Redis version 7.4 includes the following components: + +- [Redis 7.4](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redisce/redisce-7.4-release-notes/) +- [Search 2.10](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redisearch/redisearch-2.10-release-notes/) +- [JSON 2.8](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redisjson/redisjson-2.8-release-notes/) +- [Time series 1.12](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redistimeseries/redistimeseries-1.12-release-notes/) +- [Bloom 2.8](https://redis.io/docs/latest/operate/oss_and_stack/stack-with-enterprise/release-notes/redisbloom/redisbloom-2.8-release-notes/) +--- +title: What's new? 
+alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: High-level description of important updates to the Develop section +linkTitle: What's new? +hideListLinks: true +weight: 10 +--- +## Q1 2025 (January - March) Updates + +### Tools + +- Redis Insight [v2.66 release notes]({{< relref "/develop/tools/insight/release-notes/v.2.66.0.md" >}}) +- Updated CLI output samples for [`bigkeys`, `memkeys`, `keystats`]({{< relref "/develop/tools/cli.md" >}}) + +--- + +### Redis AI & Vectors + +- Expanded vector examples: + - [Python]({{< relref "/develop/clients/redis-py/vecsearch.md" >}}) + - [Node.js]({{< relref "/develop/clients/nodejs/vecsearch.md" >}}) + - [Java (Jedis)]({{< relref "/develop/clients/jedis/vecsearch.md" >}}) + - [Go]({{< relref "/develop/clients/go/vecsearch.md" >}}) + - [.NET]({{< relref "/develop/clients/dotnet/vecsearch.md" >}}) +- Updated AI integrations: + - [AI overview]({{< relref "/develop/ai/index.md" >}}) + - [RAG intro]({{< relref "/develop/get-started/rag.md" >}}) + - [Redis in AI]({{< relref "/develop/get-started/redis-in-ai.md" >}}) + +--- + +### Data Types + +- TimeSeries: + - [`COMPACTION_POLICY`]({{< relref "/develop/data-types/timeseries/configuration.md" >}}) + - [Client-side caching update]({{< relref "/develop/clients/client-side-caching.md" >}}) +- JSON: + - [Active memory defragmentation]({{< relref "/operate/oss_and_stack/stack-with-enterprise/json/commands.md" >}}) +- Probabilistic: + - [Bloom filter]({{< relref "/develop/data-types/probabilistic/bloom-filter.md" >}}) + - [Count-min sketch]({{< relref "/develop/data-types/probabilistic/count-min-sketch.md" >}}) + - [Top-K]({{< relref "/develop/data-types/probabilistic/top-k.md" >}}) + - [Cuckoo filter]({{< relref "/develop/data-types/probabilistic/cuckoo-filter.md" >}}) + +--- + +### Commands & API Docs + +- Pages updated for format and accuracy: + - [ACL SETUSER]({{< relref "/commands/acl-setuser/index.md" >}}) + - [JSON.GET]({{< relref "/commands/json.get/index.md" >}}) + - [TS.ADD]({{< relref "/commands/ts.add/index.md" >}}) + - [SCAN]({{< relref "/commands/scan/index.md" >}}) + - [SORT]({{< relref "/commands/sort/index.md" >}}) +- RESP3 reply types documented in [Hiredis command page]({{< relref "/develop/clients/hiredis/issue-commands.md" >}}) +- [CSC behavior clarified]({{< relref "/develop/clients/client-side-caching.md" >}}) + +--- + +### Search & Query + +- Best practices: + - [Dev-to-prod guide]({{< relref "/develop/interact/search-and-query/best-practices/dev-to-prod-best-practices.md" >}}) + - [Scalable queries]({{< relref "/develop/interact/search-and-query/best-practices/scalable-query-best-practices.md" >}}) + - [Index lifecycle]({{< relref "/develop/interact/search-and-query/best-practices/index-mgmt-best-practices.md" >}}) +- New/updated topics: + - [Autocomplete]({{< relref "/develop/interact/search-and-query/advanced-concepts/autocomplete.md" >}}) + - [Escaping & tokenization]({{< relref "/develop/interact/search-and-query/advanced-concepts/escaping.md" >}}) + - [Geo indexing]({{< relref "/develop/interact/search-and-query/indexing/geoindex.md" >}}) + - [Sorting, scoring, stemming]({{< relref "/develop/interact/search-and-query/advanced-concepts/sorting.md" >}}) + +--- + +### Client Libraries + +#### Go +- [Trans/pipe examples]({{< relref "/develop/clients/go/transpipe.md" >}}) +- [JSON queries]({{< relref "/develop/clients/go/queryjson.md" >}}) + +#### .NET +- [Vector search]({{< relref "/develop/clients/dotnet/vecsearch.md" >}}) +- [Trans/pipe usage]({{< relref 
"/develop/clients/dotnet/transpipe.md" >}}) +- [JSON queries]({{< relref "/develop/clients/dotnet/queryjson.md" >}}) + +#### Java (Jedis) +- [Vector search]({{< relref "/develop/clients/jedis/vecsearch.md" >}}) +- [Trans/pipe usage]({{< relref "/develop/clients/jedis/transpipe.md" >}}) + +#### Node.js +- [Vector queries]({{< relref "/develop/clients/nodejs/vecsearch.md" >}}) +- [Trans/pipe examples]({{< relref "/develop/clients/nodejs/transpipe.md" >}}) +- [JSON queries]({{< relref "/develop/clients/nodejs/queryjson.md" >}}) + +#### Redis-py +- [ScanIter usage]({{< relref "/develop/clients/redis-py/scaniter.md" >}}) +- [Vector search]({{< relref "/develop/clients/redis-py/vecsearch.md" >}}) +- [Trans/pipe usage]({{< relref "/develop/clients/redis-py/transpipe.md" >}}) +- [JSON queries]({{< relref "/develop/clients/redis-py/queryjson.md" >}}) + +#### Lettuce +- [Cluster connection]({{< relref "/develop/clients/lettuce/connect.md" >}}) +- [Production usage]({{< relref "/develop/clients/lettuce/produsage.md" >}}) + +#### Hiredis +- Full client guide: + - [Overview]({{< relref "/develop/clients/hiredis/_index.md" >}}) + - [Connect]({{< relref "/develop/clients/hiredis/connect.md" >}}) + - [Issue commands]({{< relref "/develop/clients/hiredis/issue-commands.md" >}}) + - [Handle replies]({{< relref "/develop/clients/hiredis/handle-replies.md" >}}) + - [Transactions and pipelines]({{< relref "/develop/clients/hiredis/transpipe.md" >}}) + + + +## Q4 2024 (October - December) Updates + +* Updated the RESP3 specification document to include the [attribute type]({{< relref "/develop/reference/protocol-spec#attributes" >}}). +* Updates to the [key eviction]({{< relref "/develop/reference/eviction" >}}) page. +* Updates to the Redis Insight page related to its new Redis Query Engine auto-completion [feature]({{< relref "/develop/tools/insight#workbench">}}). +* Restructured and added testable connection examples to the [client pages]({{< relref "/develop/clients" >}}). +* Added [Redis Open Source]({{< relref "/operate/oss_and_stack/stack-with-enterprise/release-notes/redisce" >}}) and [Redis Stack]({{< relref "/operate/oss_and_stack/stack-with-enterprise/release-notes/redisstack" >}}) release notes. +* Added new [Redis for AI]({{< relref "/develop/ai" >}}) page. +* Added new [Predis (PHP client library)]({{< relref "/develop/clients/php" >}}) page. + +## Q3 2024 (July - September) Updates + +* Updated the [RAG with Redis quick start guide]({{< relref "/develop/get-started/rag" >}}). +* Updates for [Redis Open Source version 7.4]({{< relref "/operate/oss_and_stack/stack-with-enterprise/release-notes/redisce" >}}). +* Added new [Redis Insight debugging]({{< relref "/develop/tools/insight/debugging" >}}) page. +* Completed a major re-write/restructuring of the [vector indexing page]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}). +* Added new [client-side caching page]({{< relref "/develop/clients/client-side-caching" >}}). +* Added new documentation for the [RDI in Redis Insight feature]({{< relref "/develop/tools/insight/rdi-connector" >}}). +* Added new documentation for the [Redis for VS Code feature]({{< relref "/develop/tools/redis-for-vscode/" >}}). +* Added multi-language code examples to the Redis Query Engine [query]({{< relref "/develop/interact/search-and-query/query">}}) pages. +* Added client-side caching information to the [supported clients]({{< relref "/develop/clients/client-side-caching#which-client-libraries-support-client-side-caching" >}}) pages. 
+* Numerous changes to the [Redis client content]({{< relref "/develop/clients" >}}). +--- +aliases: /develop/connect/cli +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Overview of redis-cli, the Redis command line interface' +linkTitle: CLI +title: Redis CLI +weight: 1 +--- + +In interactive mode, `redis-cli` has basic line editing capabilities to provide a familiar typing experience. + +To launch the program in special modes, you can use several options, including: + +* Simulate a replica and print the replication stream it receives from the primary. +* Check the latency of a Redis server and display statistics. +* Request ASCII-art spectrogram of latency samples and frequencies. + +This topic covers the different aspects of `redis-cli`, starting from the simplest and ending with the more advanced features. + +## Command line usage + +To run a Redis command and return a standard output at the terminal, include the command to execute as separate arguments of `redis-cli`: + + $ redis-cli INCR mycounter + (integer) 7 + +The reply of the command is "7". Since Redis replies are typed (strings, arrays, integers, nil, errors, etc.), you see the type of the reply between parentheses. This additional information may not be ideal when the output of `redis-cli` must be used as input of another command or redirected into a file. + +`redis-cli` only shows additional information for human readability when it detects the standard output is a tty, or terminal. For all other outputs it will auto-enable the *raw output mode*, as in the following example: + + $ redis-cli INCR mycounter > /tmp/output.txt + $ cat /tmp/output.txt + 8 + +Note that `(integer)` is omitted from the output because `redis-cli` detects +the output is no longer written to the terminal. You can force raw output +even on the terminal with the `--raw` option: + + $ redis-cli --raw INCR mycounter + 9 + +You can force human readable output when writing to a file or in +pipe to other commands by using `--no-raw`. + +## String quoting and escaping + +When `redis-cli` parses a command, whitespace characters automatically delimit the arguments. +In interactive mode, a newline sends the command for parsing and execution. +To input string values that contain whitespaces or non-printable characters, you can use quoted and escaped strings. + +Quoted string values are enclosed in double (`"`) or single (`'`) quotation marks. +Escape sequences are used to put nonprintable characters in character and string literals. + +An escape sequence contains a backslash (`\`) symbol followed by one of the escape sequence characters. 
+ +Doubly-quoted strings support the following escape sequences: + +* `\"` - double-quote +* `\n` - newline +* `\r` - carriage return +* `\t` - horizontal tab +* `\b` - backspace +* `\a` - alert +* `\\` - backslash +* `\xhh` - any ASCII character represented by a hexadecimal number (_hh_) + +Single quotes assume the string is literal, and allow only the following escape sequences: +* `\'` - single quote +* `\\` - backslash + +For example, to return `Hello World` on two lines: + +``` +127.0.0.1:6379> SET mykey "Hello\nWorld" +OK +127.0.0.1:6379> GET mykey +Hello +World +``` + +When you input strings that contain single or double quotes, as you might in passwords, for example, escape the string, like so: + +``` +127.0.0.1:6379> AUTH some_admin_user ">^8T>6Na{u|jp>+v\"55\@_;OU(OR]7mbAYGqsfyu48(j'%hQH7;v*f1H${*gD(Se'" + ``` + +## Host, port, password, and database + +By default, `redis-cli` connects to the server at the address 127.0.0.1 with port 6379. +You can change the port using several command line options. To specify a different host name or an IP address, use the `-h` option. In order to set a different port, use `-p`. + + $ redis-cli -h redis15.localnet.org -p 6390 PING + PONG + +If your instance is password protected, the `-a ` option will +perform authentication saving the need of explicitly using the [`AUTH`]({{< relref "/commands/auth" >}}) command: + + $ redis-cli -a myUnguessablePazzzzzword123 PING + PONG + +**NOTE:** For security reasons, provide the password to `redis-cli` automatically via the +`REDISCLI_AUTH` environment variable. + +Finally, it's possible to send a command that operates on a database number +other than the default number zero by using the `-n ` option: + + $ redis-cli FLUSHALL + OK + $ redis-cli -n 1 INCR a + (integer) 1 + $ redis-cli -n 1 INCR a + (integer) 2 + $ redis-cli -n 2 INCR a + (integer) 1 + +Some or all of this information can also be provided by using the `-u ` +option and the URI pattern `redis://user:password@host:port/dbnum`: + + $ redis-cli -u redis://LJenkins:p%40ssw0rd@redis-16379.hosted.com:16379/0 PING + PONG + +**NOTE:** +User, password and dbnum are optional. +For authentication without a username, use username `default`. +For TLS, use the scheme `rediss`. + +You can use the `-4` or `-6` argument to set a preference for IPv4 or IPv6, respectively, for DNS lookups. + +## SSL/TLS + +By default, `redis-cli` uses a plain TCP connection to connect to Redis. +You may enable SSL/TLS using the `--tls` option, along with `--cacert` or +`--cacertdir` to configure a trusted root certificate bundle or directory. + +If the target server requires authentication using a client side certificate, +you can specify a certificate and a corresponding private key using `--cert` and +`--key`. + +## Get input from other programs + +There are two ways you can use `redis-cli` in order to receive input from other +commands via the standard input. One is to use the target payload as the last argument +from *stdin*. For example, in order to set the Redis key `net_services` +to the content of the file `/etc/services` from a local file system, use the `-x` +option: + + $ redis-cli -x SET net_services < /etc/services + OK + $ redis-cli GETRANGE net_services 0 50 + "#\n# Network services, Internet style\n#\n# Note that " + +In the first line of the above session, `redis-cli` was executed with the `-x` option and a file was redirected to the CLI's +standard input as the value to satisfy the `SET net_services` command phrase. This is useful for scripting. 
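+ +As a further illustration (the key name here is hypothetical), the output of any program can be piped into a key in the same way: + +    $ date | redis-cli -x SET system:last_check +    OK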
+ +A different approach is to feed `redis-cli` a sequence of commands written in a +text file: + + $ cat /tmp/commands.txt + SET item:3374 100 + INCR item:3374 + APPEND item:3374 xxx + GET item:3374 + $ cat /tmp/commands.txt | redis-cli + OK + (integer) 101 + (integer) 6 + "101xxx" + +All the commands in `commands.txt` are executed consecutively by +`redis-cli` as if they were typed by the user in interactive mode. Strings can be +quoted inside the file if needed, so that it's possible to have single +arguments with spaces, newlines, or other special characters: + + $ cat /tmp/commands.txt + SET arg_example "This is a single argument" + STRLEN arg_example + $ cat /tmp/commands.txt | redis-cli + OK + (integer) 25 + +## Continuously run the same command + +It is possible to execute a single command a specified number of times +with a user-selected pause between executions. This is useful in +different contexts - for example when we want to continuously monitor some +key content or [`INFO`]({{< relref "/commands/info" >}}) field output, or when we want to simulate some +recurring write event, such as pushing a new item into a list every 5 seconds. + +This feature is controlled by two options: `-r ` and `-i `. +The `-r` option states how many times to run a command and `-i` sets +the delay between the different command calls in seconds (with the ability +to specify values such as 0.1 to represent 100 milliseconds). + +By default the interval (or delay) is set to 0, so commands are just executed +ASAP: + + $ redis-cli -r 5 INCR counter_value + (integer) 1 + (integer) 2 + (integer) 3 + (integer) 4 + (integer) 5 + +To run the same command indefinitely, use `-1` as the count value. +To monitor over time the RSS memory size it's possible to use the following command: + + $ redis-cli -r -1 -i 1 INFO | grep rss_human + used_memory_rss_human:2.71M + used_memory_rss_human:2.73M + used_memory_rss_human:2.73M + used_memory_rss_human:2.73M + ... a new line will be printed each second ... + +## Mass insertion of data using `redis-cli` + +Mass insertion using `redis-cli` is covered in a separate page as it is a +worthwhile topic itself. Please refer to our [mass insertion guide]({{< relref "/develop/use/patterns/bulk-loading" >}}). + +## CSV output + +A CSV (Comma Separated Values) output feature exists within `redis-cli` to export data from Redis to an external program. + + $ redis-cli LPUSH mylist a b c d + (integer) 4 + $ redis-cli --csv LRANGE mylist 0 -1 + "d","c","b","a" + +Note that the `--csv` flag will only work on a single command, not the entirety of a DB as an export. + +## Run Lua scripts + +The `redis-cli` has extensive support for using the debugging facility +of Lua scripting, available with Redis 3.2 onwards. For this feature, refer to the [Redis Lua debugger documentation]({{< relref "/develop/interact/programmability/lua-debugging" >}}). + +Even without using the debugger, `redis-cli` can be used to +run scripts from a file as an argument: + + $ cat /tmp/script.lua + return redis.call('SET',KEYS[1],ARGV[1]) + $ redis-cli --eval /tmp/script.lua location:hastings:temp , 23 + OK + +The Redis [`EVAL`]({{< relref "/commands/eval" >}}) command takes the list of keys the script uses, and the +other non key arguments, as different arrays. When calling [`EVAL`]({{< relref "/commands/eval" >}}) you +provide the number of keys as a number. + +When calling `redis-cli` with the `--eval` option above, there is no need to specify the number of keys +explicitly. 
Instead it uses the convention of separating keys and arguments +with a comma. This is why in the above call you see `location:hastings:temp , 23` as arguments. + +So `location:hastings:temp` will populate the [`KEYS`]({{< relref "/commands/keys" >}}) array, and `23` the `ARGV` array. + +The `--eval` option is useful when writing simple scripts. For more +complex work, the Lua debugger is recommended. It is possible to mix the two approaches, since the debugger can also execute scripts from an external file. + +## Interactive mode + +We have explored how to use the Redis CLI as a command line program. +This is useful for scripts and certain types of testing, however most +people will spend the majority of time in `redis-cli` using its interactive +mode. + +In interactive mode the user types Redis commands at the prompt. The command +is sent to the server, processed, and the reply is parsed back and rendered +into a simpler form to read. + +Nothing special is needed for running the `redis-cli` in interactive mode - +just execute it without any arguments + + $ redis-cli + 127.0.0.1:6379> PING + PONG + +The string `127.0.0.1:6379>` is the prompt. It displays the connected Redis server instance's hostname and port. + +The prompt updates as the connected server changes or when operating on a database different from the database number zero: + + 127.0.0.1:6379> SELECT 2 + OK + 127.0.0.1:6379[2]> DBSIZE + (integer) 1 + 127.0.0.1:6379[2]> SELECT 0 + OK + 127.0.0.1:6379> DBSIZE + (integer) 503 + +### Handle connections and reconnections + +Using the `CONNECT` command in interactive mode makes it possible to connect +to a different instance, by specifying the *hostname* and *port* we want +to connect to: + + 127.0.0.1:6379> CONNECT metal 6379 + metal:6379> PING + PONG + +As you can see the prompt changes accordingly when connecting to a different server instance. +If a connection is attempted to an instance that is unreachable, the `redis-cli` goes into disconnected +mode and attempts to reconnect with each new command: + + 127.0.0.1:6379> CONNECT 127.0.0.1 9999 + Could not connect to Redis at 127.0.0.1:9999: Connection refused + not connected> PING + Could not connect to Redis at 127.0.0.1:9999: Connection refused + not connected> PING + Could not connect to Redis at 127.0.0.1:9999: Connection refused + +Generally after a disconnection is detected, `redis-cli` always attempts to +reconnect transparently; if the attempt fails, it shows the error and +enters the disconnected state. The following is an example of disconnection +and reconnection: + + 127.0.0.1:6379> INFO SERVER + Could not connect to Redis at 127.0.0.1:6379: Connection refused + not connected> PING + PONG + 127.0.0.1:6379> + (now we are connected again) + +When a reconnection is performed, `redis-cli` automatically re-selects the +last database number selected. However, all other states about the +connection is lost, such as within a MULTI/EXEC transaction: + + $ redis-cli + 127.0.0.1:6379> MULTI + OK + 127.0.0.1:6379> PING + QUEUED + + ( here the server is manually restarted ) + + 127.0.0.1:6379> EXEC + (error) ERR EXEC without MULTI + +This is usually not an issue when using the `redis-cli` in interactive mode for +testing, but this limitation should be known. + +Use the `-t ` option to specify server timeout in seconds. 
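+ +For example, reusing the hypothetical host shown earlier, the following gives up if the connection cannot be established within two seconds: + +    $ redis-cli -t 2 -h redis15.localnet.org PING +    PONG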
+ +### Editing, history, completion and hints + +Because `redis-cli` uses the +[linenoise line editing library](http://github.com/antirez/linenoise), it +always has line editing capabilities, without depending on `libreadline` or +other optional libraries. + +Command execution history can be accessed in order to avoid retyping commands by pressing the arrow keys (up and down). +The history is preserved between restarts of the CLI, in a file named +`.rediscli_history` inside the user home directory, as specified +by the `HOME` environment variable. It is possible to use a different +history filename by setting the `REDISCLI_HISTFILE` environment variable, +and disable it by setting it to `/dev/null`. + +The `redis-cli` client is also able to perform command-name completion by pressing the TAB +key, as in the following example: + +    127.0.0.1:6379> Z +    127.0.0.1:6379> ZADD +    127.0.0.1:6379> ZCARD + +Once a Redis command name has been entered at the prompt, `redis-cli` will display +syntax hints. Like command history, this behavior can be turned on and off via the `redis-cli` preferences. + +Reverse history searches, such as `CTRL-R` in terminals, are supported. + +### Preferences + +There are two ways to customize `redis-cli` behavior. The file `.redisclirc` +in the home directory is loaded by the CLI on startup. You can override the +file's default location by setting the `REDISCLI_RCFILE` environment variable to +an alternative path. Preferences can also be set during a CLI session, in which +case they will last only the duration of the session. + +To set preferences, use the special `:set` command. The following preferences +can be set, either by typing the command in the CLI or adding it to the +`.redisclirc` file: + +* `:set hints` - enables syntax hints +* `:set nohints` - disables syntax hints + +### Run the same command N times + +It is possible to run the same command multiple times in interactive mode by prefixing the command +name by a number: + +    127.0.0.1:6379> 5 INCR mycounter +    (integer) 1 +    (integer) 2 +    (integer) 3 +    (integer) 4 +    (integer) 5 + +### Show online help for Redis commands + +`redis-cli` provides online help for most Redis [commands]({{< relref "/commands" >}}), using the `HELP` command. The command can be used +in two forms: + +* `HELP @<category>` shows all the commands about a given category. The +categories are: + - `@generic` + - `@string` + - `@list` + - `@set` + - `@sorted_set` + - `@hash` + - `@pubsub` + - `@transactions` + - `@connection` + - `@server` + - `@scripting` + - `@hyperloglog` + - `@cluster` + - `@geo` + - `@stream` +* `HELP <commandname>` shows specific help for the command given as argument. + +For example, in order to show help for the [`PFADD`]({{< relref "/commands/pfadd" >}}) command, use: + +    127.0.0.1:6379> HELP PFADD + +    PFADD key element [element ...] +    summary: Adds the specified elements to the specified HyperLogLog. +    since: 2.8.9 + +Note that `HELP` supports TAB completion as well. + +### Clear the terminal screen + +Using the `CLEAR` command in interactive mode clears the terminal's screen. + +## Special modes of operation + +So far we saw two main modes of `redis-cli`. + +* Command line execution of Redis commands. +* Interactive "REPL" usage. + +The CLI performs other auxiliary tasks related to Redis that +are explained in the next sections: + +* Monitoring tool to show continuous stats about a Redis server. +* Scanning a Redis database for very large keys. +* Key space scanner with pattern matching. 
+* Acting as a [Pub/Sub]({{< relref "/develop/interact/pubsub" >}}) client to subscribe to channels. +* Monitoring the commands executed into a Redis instance. +* Checking the [latency]({{< relref "/operate/oss_and_stack/management/optimization/latency" >}}) of a Redis server in different ways. +* Checking the scheduler latency of the local computer. +* Transferring RDB backups from a remote Redis server locally. +* Acting as a Redis replica for showing what a replica receives. +* Simulating [LRU]({{< relref "/develop/reference/eviction" >}}) workloads for showing stats about keys hits. +* A client for the Lua debugger. + +### Continuous stats mode + +Continuous stats mode is probably one of the lesser known yet very useful features of `redis-cli` to monitor Redis instances in real time. To enable this mode, the `--stat` option is used. +The output is very clear about the behavior of the CLI in this mode: + + $ redis-cli --stat + ------- data ------ --------------------- load -------------------- - child - + keys mem clients blocked requests connections + 506 1015.00K 1 0 24 (+0) 7 + 506 1015.00K 1 0 25 (+1) 7 + 506 3.40M 51 0 60461 (+60436) 57 + 506 3.40M 51 0 146425 (+85964) 107 + 507 3.40M 51 0 233844 (+87419) 157 + 507 3.40M 51 0 321715 (+87871) 207 + 508 3.40M 51 0 408642 (+86927) 257 + 508 3.40M 51 0 497038 (+88396) 257 + +In this mode a new line is printed every second with useful information and differences of request values between old data points. Memory usage, client connection counts, and various other statistics about the connected Redis database can be easily understood with this auxiliary `redis-cli` tool. + +The `-i ` option in this case works as a modifier in order to +change the frequency at which new lines are emitted. The default is one +second. + +## Scan for big keys and memory usage + +### Big keys + +In this special mode, `redis-cli` works as a key space analyzer. It scans the +dataset for big keys, but also provides information about the data types +that the data set consists of. This mode is enabled with the `--bigkeys` option, +and produces verbose output: + +``` +$ redis-cli --bigkeys + +# Scanning the entire keyspace to find biggest keys as well as +# average sizes per key type. You can use -i 0.1 to sleep 0.1 sec +# per 100 SCAN commands (not usually needed). + +100.00% |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| +Keys sampled: 55 + +-------- summary ------- + +Total key length in bytes is 495 (avg len 9.00) + +Biggest list found "bikes:finished" has 1 items +Biggest string found "all_bikes" has 36 bytes +Biggest hash found "bike:1:stats" has 3 fields +Biggest stream found "race:france" has 4 entries +Biggest set found "bikes:racing:france" has 3 members +Biggest zset found "racer_scores" has 8 members + +1 lists with 1 items (01.82% of keys, avg size 1.00) +16 strings with 149 bytes (29.09% of keys, avg size 9.31) +1 MBbloomCFs with 0 ? (01.82% of keys, avg size 0.00) +1 hashs with 3 fields (01.82% of keys, avg size 3.00) +3 streams with 8 entries (05.45% of keys, avg size 2.67) +2 TDIS-TYPEs with 0 ? (03.64% of keys, avg size 0.00) +1 TopK-TYPEs with 0 ? (01.82% of keys, avg size 0.00) +2 sets with 5 members (03.64% of keys, avg size 2.50) +1 CMSk-TYPEs with 0 ? (01.82% of keys, avg size 0.00) +2 zsets with 11 members (03.64% of keys, avg size 5.50) +25 ReJSON-RLs with 0 ? (45.45% of keys, avg size 0.00) +``` + +In the first part of the output, each new key larger than the previous larger +key (of the same type) encountered is reported. 
The summary section +provides general stats about the data inside the Redis instance. + +The program uses the [`SCAN`]({{< relref "/commands/scan" >}}) command, so it can be executed against a busy +server without impacting the operations, however the `-i` option can be +used in order to throttle the scanning process of the specified fraction +of second for each [`SCAN`]({{< relref "/commands/scan" >}}) command. + +For example, `-i 0.01` will slow down the program execution considerably, but will also reduce the load on the server +to a negligible amount. + +Note that the summary also reports in a cleaner form the biggest keys found +for each time. The initial output is just to provide some interesting info +ASAP if running against a very large data set. + +The `--bigkeys` option now works on cluster replicas. + +### Memory usage + +Similar to the `--bigkeys` option, `--memkeys` allows you to scan the entire keyspace to find biggest keys as well as +the average sizes per key type. + +``` +$ redis-cli --memkeys + +# Scanning the entire keyspace to find biggest keys as well as +# average sizes per key type. You can use -i 0.1 to sleep 0.1 sec +# per 100 SCAN commands (not usually needed). + +100.00% |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| +Keys sampled: 55 + +-------- summary ------- + +Total key length in bytes is 495 (avg len 9.00) + +Biggest list found "bikes:finished" has 104 bytes +Biggest string found "all_bikes" has 120 bytes +Biggest MBbloomCF found "bikes:models" has 1048680 bytes +Biggest hash found "bike:1:stats" has 104 bytes +Biggest stream found "race:italy" has 7172 bytes +Biggest TDIS-TYPE found "bikes:sales" has 9832 bytes +Biggest TopK-TYPE found "bikes:keywords" has 114256 bytes +Biggest set found "bikes:racing:france" has 120 bytes +Biggest CMSk-TYPE found "bikes:profit" has 144056 bytes +Biggest zset found "racer_scores" has 168 bytes +Biggest ReJSON-RL found "bikes:inventory" has 4865 bytes + +1 lists with 104 bytes (01.82% of keys, avg size 104.00) +16 strings with 1360 bytes (29.09% of keys, avg size 85.00) +1 MBbloomCFs with 1048680 bytes (01.82% of keys, avg size 1048680.00) +1 hashs with 104 bytes (01.82% of keys, avg size 104.00) +3 streams with 16960 bytes (05.45% of keys, avg size 5653.33) +2 TDIS-TYPEs with 19648 bytes (03.64% of keys, avg size 9824.00) +1 TopK-TYPEs with 114256 bytes (01.82% of keys, avg size 114256.00) +2 sets with 208 bytes (03.64% of keys, avg size 104.00) +1 CMSk-TYPEs with 144056 bytes (01.82% of keys, avg size 144056.00) +2 zsets with 304 bytes (03.64% of keys, avg size 152.00) +25 ReJSON-RLs with 15748 bytes (45.45% of keys, avg size 629.92) +``` + +The `--memkeys` option now works on cluster replicas. + +### Combine `--bigkeys` and `--memkeys` + +You can use the `--keystats` and `--keystats-samples` options to combine `--memkeys` and `--bigkeys` with additional distribution data. + +``` +$ redis-cli --keystats + +# Scanning the entire keyspace to find the biggest keys and distribution information. +# Use -i 0.1 to sleep 0.1 sec per 100 SCAN commands (not usually needed). +# Use --cursor to start the scan at the cursor (usually after a Ctrl-C). +# Use --top to display top key sizes (default is 10). +# Ctrl-C to stop the scan. 
+ +100.00% |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| +Keys sampled: 55 +Keys size: 1.30M + +--- Top 10 key sizes --- + 1 1.00M MBbloomCF "bikes:models" + 2 140.68K CMSk-TYPE "bikes:profit" + 3 111.58K TopK-TYPE "bikes:keywords" + 4 9.60K TDIS-TYPE "bikes:sales" + 5 9.59K TDIS-TYPE "racer_ages" + 6 7.00K stream "race:italy" + 7 4.92K stream "race:france" + 8 4.75K ReJSON-RL "bikes:inventory" + 9 4.64K stream "race:usa" + 10 1.26K ReJSON-RL "bicycle:7" + +--- Top size per type --- +list bikes:finished is 104B +string all_bikes is 120B +MBbloomCF bikes:models is 1.00M +hash bike:1:stats is 104B +stream race:italy is 7.00K +TDIS-TYPE bikes:sales is 9.60K +TopK-TYPE bikes:keywords is 111.58K +set bikes:racing:france is 120B +CMSk-TYPE bikes:profit is 140.68K +zset racer_scores is 168B +ReJSON-RL bikes:inventory is 4.75K + +--- Top length and cardinality per type --- +list bikes:finished has 1 items +string all_bikes has 36B +hash bike:1:stats has 3 fields +stream race:france has 4 entries +set bikes:racing:france has 3 members +zset racer_scores has 8 members + +Key size Percentile Total keys +-------- ---------- ----------- + 64B 0.0000% 3 + 239B 50.0000% 28 + 763B 75.0000% 42 + 4.92K 87.5000% 49 + 9.60K 93.7500% 52 + 140.69K 96.8750% 54 + 1.00M 100.0000% 55 +Note: 0.01% size precision, Mean: 24.17K, StdDeviation: 138.12K + +Key name length Percentile Total keys +--------------- ---------- ----------- + 19B 100.0000% 55 +Total key length is 495B (9B avg) + +Type Total keys Keys % Tot size Avg size Total length/card Avg ln/card +--------- ------------ ------- -------- -------- ------------------ ----------- +list 1 1.82% 104B 104B 1 items 1.00 +string 16 29.09% 1.33K 85B 149B 9B +MBbloomCF 1 1.82% 1.00M 1.00M - - +hash 1 1.82% 104B 104B 3 fields 3.00 +stream 3 5.45% 16.56K 5.52K 8 entries 2.67 +TDIS-TYPE 2 3.64% 19.19K 9.59K - - +TopK-TYPE 1 1.82% 111.58K 111.58K - - +set 2 3.64% 208B 104B 5 members 2.50 +CMSk-TYPE 1 1.82% 140.68K 140.68K - - +zset 2 3.64% 304B 152B 11 members 5.50 +ReJSON-RL 25 45.45% 15.38K 629B - - +``` + +## Get a list of keys + +It is also possible to scan the key space, again in a way that does not +block the Redis server (which does happen when you use a command +like `KEYS *`), and print all the key names, or filter them for specific +patterns. This mode, like the `--bigkeys` option, uses the [`SCAN`]({{< relref "/commands/scan" >}}) command, +so keys may be reported multiple times if the dataset is changing, but no +key would ever be missing, if that key was present since the start of the +iteration. Because of the command that it uses this option is called `--scan`. + + $ redis-cli --scan | head -10 + key-419 + key-71 + key-236 + key-50 + key-38 + key-458 + key-453 + key-499 + key-446 + key-371 + +Note that `head -10` is used in order to print only the first ten lines of the +output. + +Scanning is able to use the underlying pattern matching capability of +the [`SCAN`]({{< relref "/commands/scan" >}}) command with the `--pattern` option. + + $ redis-cli --scan --pattern '*-11*' + key-114 + key-117 + key-118 + key-113 + key-115 + key-112 + key-119 + key-11 + key-111 + key-110 + key-116 + +Piping the output through the `wc` command can be used to count specific +kind of objects, by key name: + + $ redis-cli --scan --pattern 'user:*' | wc -l + 3829433 + +You can use `-i 0.01` to add a delay between calls to the [`SCAN`]({{< relref "/commands/scan" >}}) command. +This will make the command slower but will significantly reduce load on the server. 
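+ +As an illustration only (this is destructive, uses a hypothetical key pattern, and assumes key names contain no spaces or newlines), the scan output can be piped into another `redis-cli` invocation to delete matching keys in batches of 100: + +    $ redis-cli --scan --pattern 'cache:tmp:*' -i 0.01 | xargs -L 100 redis-cli DEL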
+ +## Pub/sub mode + +The CLI is able to publish messages in Redis Pub/Sub channels using +the [`PUBLISH`]({{< relref "/commands/publish" >}}) command. Subscribing to channels in order to receive +messages is different - the terminal is blocked and waits for +messages, so this is implemented as a special mode in `redis-cli`. Unlike +other special modes this mode is not enabled by using a special option, +but simply by using the [`SUBSCRIBE`]({{< relref "/commands/subscribe" >}}) or [`PSUBSCRIBE`]({{< relref "/commands/psubscribe" >}}) commands, which are available in +interactive or command mode: + +    $ redis-cli PSUBSCRIBE '*' +    Reading messages... (press Ctrl-C to quit) +    1) "PSUBSCRIBE" +    2) "*" +    3) (integer) 1 + +The *reading messages* message shows that we entered Pub/Sub mode. +When another client publishes some message in some channel, such as with the command `redis-cli PUBLISH mychannel mymessage`, the CLI in Pub/Sub mode will show something such as: + +    1) "pmessage" +    2) "*" +    3) "mychannel" +    4) "mymessage" + +This is very useful for debugging Pub/Sub issues. +To exit the Pub/Sub mode just press `CTRL-C`. + +## Monitor commands executed in Redis + +Similarly to the Pub/Sub mode, the monitoring mode is entered automatically +once you use the [`MONITOR`]({{< relref "/commands/monitor" >}}) command. All commands received by the active Redis instance will be printed to the standard output: + +    $ redis-cli MONITOR +    OK +    1460100081.165665 [0 127.0.0.1:51706] "set" "shipment:8000736522714:status" "sorting" +    1460100083.053365 [0 127.0.0.1:51707] "get" "shipment:8000736522714:status" + +Note that it is possible to pipe the output, so you can monitor +for specific patterns using tools such as `grep`. + +## Monitor the latency of Redis instances + +Redis is often used in contexts where latency is very critical. Latency +involves multiple moving parts within the application, from the client library +to the network stack, to the Redis instance itself. + +The `redis-cli` has multiple facilities for studying the latency of a Redis +instance and understanding the latency's maximum, average and distribution. + +The basic latency-checking tool is the `--latency` option. Using this +option the CLI runs a loop where the [`PING`]({{< relref "/commands/ping" >}}) command is sent to the Redis +instance and the time to receive a reply is measured. This happens 100 +times per second, and stats are updated in real time in the console: + +    $ redis-cli --latency +    min: 0, max: 1, avg: 0.19 (427 samples) + +The stats are provided in milliseconds. Usually, the average latency of +a very fast instance tends to be overestimated a bit because of the +latency due to the kernel scheduler of the system running `redis-cli` +itself, so the average latency of 0.19 above may easily be 0.01 or less. +However this is usually not a big problem, since most developers are interested in +events of a few milliseconds or more. + +Sometimes it is useful to study how the maximum and average latencies +evolve over time. The `--latency-history` option is used for that +purpose: it works exactly like `--latency`, but every 15 seconds (by +default) a new sampling session is started from scratch: + +    $ redis-cli --latency-history +    min: 0, max: 1, avg: 0.14 (1314 samples) -- 15.01 seconds range +    min: 0, max: 1, avg: 0.18 (1299 samples) -- 15.00 seconds range +    min: 0, max: 1, avg: 0.20 (113 samples)^C + +The sampling session length can be changed with the `-i <interval>` option. 
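+ +For example, to start a new sampling session every 5 seconds instead of the default 15: + +    $ redis-cli --latency-history -i 5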
+ +The most advanced latency study tool, but also the most complex to +interpret for non-experienced users, is the ability to use color terminals +to show a spectrum of latencies. You'll see a colored output that indicates the +different percentages of samples, and different ASCII characters that indicate +different latency figures. This mode is enabled using the `--latency-dist` +option: + +    $ redis-cli --latency-dist +    (output not displayed, requires a color terminal, try it!) + +There is another pretty unusual latency tool implemented inside `redis-cli`. +It does not check the latency of a Redis instance, but the latency of the +computer running `redis-cli`. This latency is intrinsic to the kernel scheduler, +the hypervisor in the case of virtualized instances, and so forth. + +Redis calls it *intrinsic latency* because it's mostly opaque to the programmer. +If the Redis instance has high latency regardless of all the obvious things +that may be the cause, it's worth checking the best your system +can do by running `redis-cli` in this special mode directly on the system you +are running Redis servers on. + +By measuring the intrinsic latency, you know that this is the baseline, +and Redis cannot outdo your system. In order to run the CLI +in this mode, use the `--intrinsic-latency <test-time>` option. Note that the test time is in seconds and dictates how long the test should run. + +    $ ./redis-cli --intrinsic-latency 5 +    Max latency so far: 1 microseconds. +    Max latency so far: 7 microseconds. +    Max latency so far: 9 microseconds. +    Max latency so far: 11 microseconds. +    Max latency so far: 13 microseconds. +    Max latency so far: 15 microseconds. +    Max latency so far: 34 microseconds. +    Max latency so far: 82 microseconds. +    Max latency so far: 586 microseconds. +    Max latency so far: 739 microseconds. + +    65433042 total runs (avg latency: 0.0764 microseconds / 764.14 nanoseconds per run). +    Worst run took 9671x longer than the average latency. + +IMPORTANT: this command must be executed on the computer that runs the Redis server instance, not on a different host. It does not connect to a Redis instance and performs the test locally. + +In the above case, the system cannot do better than 739 microseconds of worst +case latency, so one can expect certain queries to occasionally take a little less than 1 millisecond. + +## Remote backups of RDB files + +During a Redis replication's first synchronization, the primary and the replica +exchange the whole data set in the form of an RDB file. This feature is exploited +by `redis-cli` in order to provide a remote backup facility that allows a +transfer of an RDB file from any Redis instance to the local computer running +`redis-cli`. To use this mode, call the CLI with the `--rdb <dest-filename>` +option: + +    $ redis-cli --rdb /tmp/dump.rdb +    SYNC sent to master, writing 13256 bytes to '/tmp/dump.rdb' +    Transfer finished with success. + +This is a simple but effective way to make sure disaster recovery +RDB backups of your Redis instance exist. When using this option in +scripts or `cron` jobs, make sure to check the return value of the command. +If it is non-zero, an error occurred, as in the following example: + +    $ redis-cli --rdb /tmp/dump.rdb +    SYNC with master failed: -ERR Can't SYNC while not connected with my master +    $ echo $? +    1 + +## Replica mode + +The replica mode of the CLI is an advanced feature useful for +Redis developers and for debugging operations. 
+It allows for the inspection of the content a primary sends to its replicas in the replication +stream in order to propagate writes. The option +name is simply `--replica`. The following is a working example: + +    $ redis-cli --replica +    SYNC with master, discarding 13256 bytes of bulk transfer... +    SYNC done. Logging commands from master. +    "PING" +    "SELECT","0" +    "SET","last_name","Enigk" +    "PING" +    "INCR","mycounter" + +The command begins by discarding the RDB file of the first synchronization +and then logs each command received in CSV format. + +If you think some of the commands are not replicated correctly in your replicas, +this is a good way to check what's happening, and it also provides useful information +for improving a bug report. + +## Perform an LRU simulation + +Redis is often used as a cache with [LRU eviction]({{< relref "/develop/reference/eviction" >}}). +Depending on the number of keys and the amount of memory allocated for the +cache (specified via the `maxmemory` directive), the amount of cache hits +and misses will change. Sometimes, simulating the rate of hits is very +useful to correctly provision your cache. + +The `redis-cli` has a special mode where it performs a simulation of GET and SET +operations, using an 80-20% power law distribution in the requests pattern. +This means that 20% of keys will be requested 80% of the time, which is a +common distribution in caching scenarios. + +Theoretically, given the distribution of the requests and the Redis memory +overhead, it should be possible to compute the hit rate analytically +with a mathematical formula. However, Redis can be configured with +different LRU settings (number of samples), and the LRU implementation, which +is approximated in Redis, changes a lot between different versions. Similarly, +the amount of memory per key may change between versions. That is why this +tool was built: its main motivation was for testing the quality of Redis' LRU +implementation, but it is now also useful for testing how a given version +behaves with the settings originally intended for deployment. + +To use this mode, specify the number of keys in the test and configure a sensible `maxmemory` setting as a first attempt. + +IMPORTANT NOTE: Configuring the `maxmemory` setting in the Redis configuration +is crucial: if there is no cap to the maximum memory usage, the hit rate will +eventually be 100% since all the keys can be stored in memory. If too many keys are specified with no maximum memory limit, eventually all of the computer RAM will be used. You also need to configure an appropriate +*maxmemory policy*; most of the time `allkeys-lru` is selected. + +In the following example, a memory limit of 100MB is configured and an LRU +simulation is run using 10 million keys. + +WARNING: the test uses pipelining and will stress the server; don't use it +with production instances. + +    $ ./redis-cli --lru-test 10000000 +    156000 Gets/sec | Hits: 4552 (2.92%) | Misses: 151448 (97.08%) +    153750 Gets/sec | Hits: 12906 (8.39%) | Misses: 140844 (91.61%) +    159250 Gets/sec | Hits: 21811 (13.70%) | Misses: 137439 (86.30%) +    151000 Gets/sec | Hits: 27615 (18.29%) | Misses: 123385 (81.71%) +    145000 Gets/sec | Hits: 32791 (22.61%) | Misses: 112209 (77.39%) +    157750 Gets/sec | Hits: 42178 (26.74%) | Misses: 115572 (73.26%) +    154500 Gets/sec | Hits: 47418 (30.69%) | Misses: 107082 (69.31%) +    151250 Gets/sec | Hits: 51636 (34.14%) | Misses: 99614 (65.86%) + +The program shows stats every second. In the first seconds the cache starts to be populated. 
The misses rate later stabilizes into the actual figure that can be expected: + + 120750 Gets/sec | Hits: 48774 (40.39%) | Misses: 71976 (59.61%) + 122500 Gets/sec | Hits: 49052 (40.04%) | Misses: 73448 (59.96%) + 127000 Gets/sec | Hits: 50870 (40.06%) | Misses: 76130 (59.94%) + 124250 Gets/sec | Hits: 50147 (40.36%) | Misses: 74103 (59.64%) + +A miss rate of 59% may not be acceptable for certain use cases therefor +100MB of memory is not enough. Observe an example using a half gigabyte of memory. After several +minutes the output stabilizes to the following figures: + + 140000 Gets/sec | Hits: 135376 (96.70%) | Misses: 4624 (3.30%) + 141250 Gets/sec | Hits: 136523 (96.65%) | Misses: 4727 (3.35%) + 140250 Gets/sec | Hits: 135457 (96.58%) | Misses: 4793 (3.42%) + 140500 Gets/sec | Hits: 135947 (96.76%) | Misses: 4553 (3.24%) + +With 500MB there is sufficient space for the key quantity (10 million) and distribution (80-20 style). +--- +aliases: /develop/connect/insight/copilot-faq +categories: +- docs +- operate +- redisinsight +linkTitle: Redis Copilot FAQ +title: Redis Copilot FAQ +weight: 3 +--- + +## General questions + +### What is Redis Copilot? +Redis Copilot is an AI-powered developer assistant that helps you learn about Redis, explore your Redis data, and build search queries in a conversational manner. It is available in our flagship visual GUI developer tool, Redis Insight, as well as within the Redis public documentation (general chatbot). + +### How does Redis Copilot work? +Redis Copilot is powered by the [OpenAI API](https://platform.openai.com/docs/overview) platform. When it needs to provide context-aware assistance, such as within the **my data** chat in Redis Insight, Redis Copilot will use data from your connected database. Some of that data, such as indexing schemas and sample keys, may be transmitted to the Redis Copilot backend and OpenAI for processing. + +### What kind of tasks can Redis Copilot perform? + +Currently, Redis Copilot provides two primary features: a general chatbot and a context-aware data chatbot. + +**General chatbot**: the knowledge-based chatbot serves as an interactive and dynamic documentation interface to simplify the learning process. You can ask specific questions about Redis commands, concepts, and products, and get responses on the fly. The general chatbot is also available in our public docs. + +**My data chatbot**: the context-aware chatbot available in Redis Insight lets you construct search queries using everyday language rather than requiring specific programming syntax. This feature lets you query and explore data easily and interactively without extensive technical knowledge. + +### How do I get started with Redis Copilot in Redis Insight? + +The Redis Copilot instance within our public documentation is free for anyone to use and is available now to answer general questions about Redis. + +The Redis Copilot instance that is embedded in Redis Insight is gradually being rolled out to the user base. Once you are granted access to Redis Copilot in the app, you need to sign in with your Redis Cloud account before you can start using it. If you don’t have an account with Redis Cloud yet, it will be automatically created when you sign in at no extra cost to you. + +## Data and Privacy + +### What data does Redis Copilot have access to in my database? + +Redis Copilot will have access to any relevant data stored in your database to provide context-aware assistance. +However, this only impacts the **my data** chat in Redis Insight. 
+ +### Will OpenAI use my data to train their models? + +OpenAI states that the data provided via OpenAI API is not used for training. Please see the [OpenAI API data privacy page](https://openai.com/api-data-privacy) for the latest information. + +### What are the Redis Copilot terms? + +Redis Copilot terms apply to your use of or access to Redis Copilot. They set out what you can expect from Redis Copilot concerning its capabilities and limitations and how Redis Copilot handles your data. + +## Feedback + +### How can I provide feedback or report a bug? + +Redis Copilot is released as Beta in Redis Insight. We welcome your feedback and bug reports. You can submit them through the feedback form available in the [Redis Insight GitHub repository](https://github.com/RedisInsight/RedisInsight). + + +--- +Title: RedisInsight v1.5, May 2020 +linkTitle: v1.5 (May 2020) +date: 2020-05-12 00:00:00 +0000 +description: New tool for RedisGears, Multi-line query builder and improved suppport of Redis 6 ACLs +weight: 95 +--- + +This is the General Availability Release of RedisInsight 1.5 (v1.5.0)! + +### Headlines + +- Added beta support for [RedisGears module](https://oss.redislabs.com/redisgears/) +- Added multi-line query editing for RediSearch, RedisGraph and Timeseries +- Improved support of Redis 6 ACLs + +### Full details: + +- Features + - Core: + - Improved support for Redis 6 managing ACL permissions for each different capabilities + - Gears: + - Beta support for [Redis Gears module](https://oss.redislabs.com/redisgears/) + - Explore the latest executed functions and analyze the results or errors + - Manage registered functions and get execution summary + - Code, build and execute functions + - RediSearch: + - Multi-line for building queries + - RedisGraph: + - Multi-line for building queries + - Timeseries: + - Multi-line for building queries + +- Bug Fixes: + - Configuration: + - Fixed issue not showing the list of modules + - Search: + - Fixed issue preventing users to see all documents matching a search query + - Fixed issue with retrieving the search indexes in case of large database +--- +Title: Redis Insight v2.60.0, October 2024 +linkTitle: v2.60.0 (October 2024) +date: 2024-10-30 00:00:00 +0000 +description: Redis Insight v2.60 +weight: 1 + +--- +## 2.60 (October 2024) +This is the General Availability (GA) release of Redis Insight 2.60. + +### Highlights +- Advanced and schema-aware command auto-complete for [Redis Query Engine](https://redis.io/docs/latest/develop/interact/search-and-query/?utm_source=redisinsight&utm_medium=main&utm_campaign=release_notes) is now available in Workbench, enabling faster and more accurate query building with smart suggestions for indexes, schemas, and expressions. +- Support for adding multiple elements to the head or tail of lists, for both new or existing keys. +- Multiple UI enhancements for clarity and ease of use when editing Redis Data Integration (RDI) jobs. + +### Details + +**Features and improvements** +- [#3553](https://github.com/RedisInsight/RedisInsight/pull/3553), [#3647](https://github.com/RedisInsight/RedisInsight/pull/3647), [#3669](https://github.com/RedisInsight/RedisInsight/pull/3669) Advanced, schema-aware auto-complete for [Redis Query Engine](https://redis.io/docs/latest/develop/interact/search-and-query/?utm_source=redisinsight&utm_medium=main&utm_campaign=release_notes) in Workbench. Enjoy faster query building with context-sensitive suggestions that recognize indexes, schemas, and fields based on your current query. 
Start typing any [Redis Query Engine](https://redis.io/docs/latest/commands/?group=search) command in Workbench to try this feature. +- [#3891](https://github.com/RedisInsight/RedisInsight/pull/3891) Allows to easily push multiple elements to the head or tail of list data types, whether creating new or updating existing lists. +- [#3891](https://github.com/RedisInsight/RedisInsight/pull/3891) UX/UI enhancements to provide more details about Redis Data Integration (RDI) job transformation and output results in the dry-run section. +- [#3981](https://github.com/RedisInsight/RedisInsight/pull/3981) Removes confirmation prompts for template insertions in Redis Data Integration jobs, simplifying a workflow. +- [#3827](https://github.com/RedisInsight/RedisInsight/pull/3827) Provides easy-to-understand metrics of network input/output by automatically converting units in Browser Overview. +- [#3982](https://github.com/RedisInsight/RedisInsight/pull/3982), [#3975](https://github.com/RedisInsight/RedisInsight/pull/3975), [#3941](https://github.com/RedisInsight/RedisInsight/pull/3941) Various vulnerabilities have been fixed. +--- +Title: Redis Insight v2.56.0, September 2024 +linkTitle: v2.56.0 (September 2024) +date: 2024-09-09 00:00:00 +0000 +description: Redis Insight v2.56 +weight: 1 + +--- +## 2.56 (September 2024) +This is the General Availability (GA) release of Redis Insight 2.56. + +### Highlights +- Seamlessly sign in to your Redis Cloud account using the new [SAML single sign-on](https://redis.io/docs/latest/operate/rc/security/access-control/saml-sso/) feature, now available alongside existing social logins via Google and GitHub. This integration lets you connect to all your Redis Cloud databases in several clicks. +- Start your Redis journey faster with a sample data set automatically loaded for new free Redis Cloud databases created directly within Redis Insight. +- Focus on what matters most: + - Hide or show [TTL for individual hash fields](https://redis.io/docs/latest/develop/data-types/hashes/?utm_source=redisinsight&utm_medium=release_notes&utm_campaign=2.52#field-expiration) to create a cleaner, more efficient workspace. + - Enhanced vector data representation with updated 32-bit and 64-bit vector formatters in the Browser. + - UX optimizations to make it easier and more intuitive to connect to your [Redis Data Integration (RDI)](https://redis.io/data-integration/?utm_source=redisinsight&utm_medium=repository&utm_campaign=release_notes) instance within Redis Insight. + + +### Details + +**Features and improvements** +- [#3727](https://github.com/RedisInsight/RedisInsight/pull/3727) Seamlessly sign in to your Redis Cloud account using the new [SAML single sign-on](https://redis.io/docs/latest/operate/rc/security/access-control/saml-sso/) feature, now available alongside existing social logins via Google and GitHub. This integration lets you connect to all your Redis Cloud databases in several clicks. Before setting up SAML in Redis Cloud, you must first [verify domain ownership](https://redis.io/docs/latest/operate/rc/security/access-control/saml-sso/?utm_source=redisinsight&utm_medium=repository&utm_campaign=release_notes) for any domains associated with your SAML setup. Note that integration with Redis Cloud is currently available only in the desktop version of Redis Insight. 
+- [#3659](https://github.com/RedisInsight/RedisInsight/pull/3659) Start your Redis journey faster with a sample data set automatically loaded for new free Redis Cloud databases created directly within Redis Insight. This feature ensures a smoother setup process, allowing you to dive into your data immediately. +- [#3624](https://github.com/RedisInsight/RedisInsight/pull/3624) The ability to hide or show [TTL for individual hash fields](https://redis.io/docs/latest/develop/data-types/hashes/?utm_source=redisinsight&utm_medium=release_notes&utm_campaign=2.52#field-expiration) to create a cleaner, more efficient workspace. This optimization complements the highly requested hash field expiration feature introduced in the [first release candidate of Redis 7.4](https://github.com/redis-stack/redis-stack/releases/tag/v7.4.0-v0). +- [#3701](https://github.com/RedisInsight/RedisInsight/pull/3701) Enhanced vector data representation with updated 32-bit and 64-bit vector formatters in the Browser. These changes ensure that vector formatters are applied only to data containing unprintable values when converted to UTF-8, providing a clearer and more accurate view of your data. +- [#3714](https://github.com/RedisInsight/RedisInsight/pull/3714) UX optimizations to make it easier and more intuitive to connect to your [Redis Data Integration (RDI)](https://redis.io/data-integration/?utm_source=redisinsight&utm_medium=repository&utm_campaign=release_notes) instance within Redis Insight. +- [#3665](https://github.com/RedisInsight/RedisInsight/pull/3665) A new timestamp formatter in the Browser to improve data readability. This formatter converts timestamps in hash fields to a human-readable format, making it easier to interpret results, validate and optimize queries, and inspect indexed data when using the [Redis Query Engine](https://redis.io/docs/latest/develop/interact/search-and-query/?utm_source=redisinsight&utm_medium=repository&utm_campaign=release_notes). +- [#3730](https://github.com/RedisInsight/RedisInsight/pull/3730) Date and time format customization to make the data more intuitive in Redis Insight. This flexibility helps match your local time zone or standardize it to UTC for better alignment in time-critical operations across global teams. +--- +Title: RedisInsight v1.11, Oct 2021 +linkTitle: v1.11 (Oct 2021) +date: 2021-10-17 00:00:00 +0000 +description: RedisInsight v1.11.0 +weight: 89 +--- + +## 1.11.1 (January 2022) + +This is the maintenance release of RedisInsight 1.11 (v1.11.1)! + +### Fixes: + +- Core: + - Fixed a warning about `urllib` version deprecation. + - ACL errors are now show in pretty format while in edit database screen. + - Fixed unnecessary warning about segment when there's no internet connection. +- RediSearch: + - Fix index tool support for `v2.2.5`. +- Bulk actions: + - Added support for cross-slot bulk action execution. + - Fixed a bug where there's a failure when a malformed UTF-8 characters are present in the key. +- Memory Analysis: + - Added support for `quicklist2` data type. +- Cluster Management: + - Generic errors are also displayed in the tool. This is helpful when connected to a vendor provided redis with custom exceptions. + +## 1.11.0 + +This is the General Availability Release of RedisInsight 1.11 (v1.11.0)! + +### Headlines: +- Added beta support for [RedisAI](https://oss.redis.com/redisai/) +- Fixed the issue with empty fields for Hash objects. + +### Full Details: +- Core: + - Fixed a bug where editing cluster returns error. 
+ - Fixed broken Redis links. +- Browser: + - Check Hash values for `emptiness`. +- RedisGraph: + - Added support for the point datatype. + - Fixed a bug where returning relationships without their respective nodes led to infinite loading. +- RediSearch: + - Fixed a bug where a malformed Unicode string in RediSearch didn't produce results. +- RedisTimeseries: + - Added support for `TS.REVRANGE` and `TS.MREVRANGE` commands. +--- +Title: RedisInsight v1.9, January 2021 +linkTitle: v1.9 (Jan 2021) +date: 2020-01-15 00:00:00 +0000 +description: RedisInsight v1.9.0 +weight: 91 +--- + +## 1.9.0 (January 2021) + +This is the General Availability Release of RedisInsight 1.9 (v1.9.0)! + +### Headlines: + +- The RedisGraph tool has been improved with better UX and new interaction capabilities +- Added the ability to configure a database by just using its direct URL to auto-fill all required fields +- The CLI now provides the ability to configure your favorite key bindings: Emacs or Vim + +### Full Details: + +- Core: + - Support for Redis configurations where the number of databases exceeds the default of 16. + - Ability to add a Redis database using a shareable URL. +- RedisGraph: + - UX improvements for large queries: fixed long pause while results are being rendered. + - Made various improvements to interactions with the graph visualization: + - The selected node's size is increased to make it easier to distinguish. + - Zoom via mouse wheel. + - Double click to zoom in. + - Double right-click to zoom out. + - Keyboard shortcuts to zoom. + - Center on a node when fetching its direct neighbours. + - Halo masking indirect edges on the selected node. + - Button to reset view: center entire graph. + - Button to center on the selected node. + - New zoom buttons. +- CLI: + - Basic navigation key-bindings for Emacs and Vim. + - UX improvements: the inputs and other controls are now disabled and a message is shown while the command is executing. +--- +Title: RedisInsight v2.40.0, December 2023 +linkTitle: v2.40.0 (December 2023) +date: 2023-12-27 00:00:00 +0000 +description: RedisInsight v2.40 +weight: 1 +--- +## 2.40 (December 2023) +This is the General Availability (GA) release of RedisInsight 2.40. + +### Highlights +- To simplify in-app provisioning of a free [Redis Cloud](https://redis.com/comparisons/oss-vs-enterprise/?utm_source=redisinsight&utm_medium=rel_notes&utm_campaign=2_40) database to use with RedisInsight interactive tutorials, the recommended cloud vendor and region are now preselected +- Optimizations when uploading large text files with the list of Redis commands, available under bulk actions in Browser + +### Details + +**Features and improvements** +- [#2879](https://github.com/RedisInsight/RedisInsight/pull/2879) UX improvements to simplify in-app provisioning of a free [Redis Cloud](https://redis.com/comparisons/oss-vs-enterprise/?utm_source=redisinsight&utm_medium=rel_notes&utm_campaign=2_40) database. Create a new database with a preselected cloud vendor and region by using the recommended sign-up settings.
You can manage your database by signing in to the [Redis Cloud console](https://cloud.redis.io/#/databases?utm_source=redisinsight&utm_medium=rel_notes&utm_campaign=2_40) +- [#2851](https://github.com/RedisInsight/RedisInsight/pull/2851) See plan, cloud vendor, and region details after successfully provisioning your free [Redis Cloud](https://redis.com/comparisons/oss-vs-enterprise/?utm_source=redisinsight&utm_medium=rel_notes&utm_campaign=2_40) database +- [#2882](https://github.com/RedisInsight/RedisInsight/pull/2882) Optimizations when uploading large text files with the list of Redis commands, available under bulk actions in Browser +- [#2808](https://github.com/RedisInsight/RedisInsight/pull/2808) Enhanced security measurement to no longer display existing passwords for Redis Sentinel in plain text +- [#2875](https://github.com/RedisInsight/RedisInsight/pull/2875) Increased performance when resizing the key list and key details in the Tree view, ensuring a smoother user experience +- [#2866](https://github.com/RedisInsight/RedisInsight/pull/2866) Support for hyphens in [node host names](https://github.com/RedisInsight/RedisInsight/issues/2865) + +**Bugs** +- [#2868](https://github.com/RedisInsight/RedisInsight/pull/2868) Prevent [unintentional data overwrites](https://github.com/RedisInsight/RedisInsight/issues/2791) by disabling both manual and automatic refreshing of key values while editing in the Browser +--- +Title: RedisInsight v1.8, November 2020 +linkTitle: v1.8 (Nov 2020) +date: 2020-11-11 00:00:00 +0000 +description: RedisInsight v1.8.0 +weight: 92 +--- + +## 1.8.3 (January 2021) + +This is a maintenance release of RedisInsight 1.8 (v1.8.3)! + +This fixes the crash on MacOS Big Sur (11.1) for the MacOS package. +RedisInsight is supported on Mac hardware with Intel chips, but not for Mac hardware with the Apple M1 (ARM) chip. + +## 1.8.2 (24 December 2020) + +This is the maintenance release of RedisInsight 1.8 (v1.8.2)! + +### Fixes: + +- Browser: + - Improved handling of large strings, lists, hashes, sets and sorted sets. + - Better error message when loading a key's value fails. + - More robust handling of Java serialized objects - malformed Java serialized objects are now displayed as binary strings. + - Increased the default width of the key selector. +- Memory Analysis: + - Fixed crash on databases with modules that store auxiliary module data in the RDB (RediSearch 2, RedisGears, etc.). + - Fix integer overflow which results in the size of large keys not being reported properly. + - Add Streams statistics to the charts shown in the Memory Analysis Overview tool. + - Better error message when online analysis fails due to both the `SYNC` and `DUMP` commands being unavailable. + +## 1.8.1 (17 December 2020) + +Maintenance release for RedisInsight 1.8 including bug fixes and enhancements. + +### Important Note: + +We'd love to learn more how you are using RedisInsight. Now we have a user survey in the application, but you can also get to it [here](https://www.surveymonkey.com/r/ZZVR2ZG). + +### Fixes: + +- Core: + - Fixed placeholder page for modules which was not appearing on desktop packagings of RedisInsight. + - Fixed error in handling disconnection to databases on desktop packaging of RedisInsight. + - Fixed auto-fill-on-URL-paste when adding Redis databases. + - Fixed add database form: scroll to error message if adding a database fails. + - Fixed typo in Settings page. 
+- CLI: + - Fixed the repeat command option, which was not properly cleared when the command was changed. + +## 1.8.0 (November 2020) + +This is the General Availability Release of RedisInsight 1.8 (v1.8.0)! + +### Important Note: +We'd love to learn more about how you are using RedisInsight. We've introduced a user survey: you'll see it in the application, and you can also take it [here](https://www.surveymonkey.com/r/ZZVR2ZG). + + +### Headlines: + +- Guided experience to get started with RedisInsight, add a database and experiment with modules +- CLI now supports "help" commands and lets you repeat commands +- Ability to provide your CA Certificate and skip-verify option for TLS authentication +- Ability to display RediSearch indices summary +- Adding a database is simpler thanks to auto-filling all database information from the Redis connection URL +- New environment variables for configuring HOST, PORT and application LOG LEVEL +- Support for RedisJSON on Redis Cluster + + +### Full Details: + +- Core: + - New welcome page when no databases are configured + - New information pages for each module when it is not configured + - New environment variables for configuring HOST, PORT and application LOG LEVEL + - Ability to add a CA Certificate and skip-verify for TLS authentication + - Added auto-filling of database details from the Redis connection URL +- Browser: + - Added an error message when trying to visualize large keys + - Fixed an issue when copying a key containing the `"` character +- RedisGraph, RediSearch, RedisTimeseries: + - Added the ability to copy commands with a button click +- RediSearch: + - Display the selected index's summary +- RedisGraph: + - Added the ability to persist node display choices between queries +- ReJSON: + - Support for RedisJSON on Redis Cluster +- ClientList: + - Sort clients by type +- CLI: + - Added support for the `help` command in CLI + - Added the ability to repeat commands multiple times with increments + - Added the ability to close the Hint window + - Added stream command assists +- Profiler: + - Fixed an issue with TLS databases +--- +Title: RedisInsight v1.10, March 2021 +linkTitle: v1.10 (Mar 2021) +date: 2021-03-08 00:00:00 +0000 +description: RedisInsight v1.10.0 +weight: 90 +--- + +## 1.10.1 (April 2021) + +This is the maintenance release of RedisInsight 1.10 (v1.10.1)! + +### Fixes: + +- Core: + - Fixed a bug where launching RedisInsight on macOS Mojave (10.14.6) would log out the user. + - Fixed major container vulnerabilities (CVE-2021-24031, CVE-2021-24032, and CVE-2020-36242). + - Select the existing installation path on upgrades on Windows. +- CLI: + - Added support for RAW mode (`--raw` in `redis-cli`). +- Browser: + - Fixed a bug where a number in a Redis string was treated as JSON. + - RedisJSON - Distinguish between empty and collapsed objects/arrays. +- Streams: + - Added the ability to configure the auto-refresh interval. +- RedisTimeseries: + - Charts now support milliseconds. + - Added the ability to configure the auto-refresh interval. +- RedisGraph: + - Properly detect the module in Redis Enterprise. + - Large queries that are truncated in the query card are now provided with a tooltip that displays the full query on hover. + - Properly render boolean data types in the objects. +- Bulk actions: + - Fixed a bug where the preview returned duplicate dry-run commands. + + +## 1.10.0 (March 2021) + +This is the General Availability (GA) Release of RedisInsight 1.10 (v1.10.0)! + +### Headlines: +- Improvements to the way the Browser tool displays "special" strings.
+- UX improvements to the RedisGraph tool. +- Ability to configure the slowlog threshold from within the Slowlog tool. + +### Full Details: +- Overview: + - The connection details of the Redis database are now displayed. + - A message is displayed to indicate a cluster with no replicas instead of an empty table. + - Fixed a bug where the memory usage chart would display an incorrect graph when the memory usage changes rapidly. +- Browser: + - Pretty-print "special" strings (like JSON, Java serialized object, Python pickle objects, etc.) once the entire value is loaded. + - Pretty-print "special" strings (like JSON, Java serialized object, Python pickle objects, etc.) inside container types like Hashes, Sets and Sorted Sets. + - Allow sorting the members of a sorted set by score. + - Refresh button for the key list. + - Delete a key by pressing the "Delete" key. + - All keys are now visible by default, i.e, the data type filters are disabled by default. + - Fixed bug where switching logical databases did not work correctly sometimes. + - Fixed bug where adding a field to a hash with an empty value crashes the UI. + - Fixed bug where setting TTL to -1 does not effectively delete the key. + - Added tooltip explaining how to use the logical database selector along with a submit button. +- Streams: + - Refresh button for the list of streams. +- RedisGraph: + - Node size is now dependent on the number of direct relationships. + - Added support for pasting the full `GRAPH.QUERY` command into the query textbox. +- RediSearch: + - Fixed bug where the application fails to execute queries on indices starting with/enclosed within single-quotes. +- Bulk Actions: + - Improved support for operations on a large number of keys. +- Slowlog: + - Allow configuring the slowlog threshold from within the tool for non-cluster databases. +--- +Title: RedisInsight v2.16.0, December 2022 +linkTitle: v2.16.0 (Dec 2022) +date: 2022-12-28 00:00:00 +0000 +description: RedisInsight v2.16.0 +weight: 2 +--- +## 2.16.0 (December 2022) +This is the General Availability (GA) release of RedisInsight 2.16. + +### Highlights +- Bulk import database connections from a file +- Navigation enhancements for the Tree view +- Pre-populated host, port, and database alias in the form when adding a new Redis database + + +### Details +**Features and improvements** +- [#1492](https://github.com/RedisInsight/RedisInsight/pull/1492), [#1497](https://github.com/RedisInsight/RedisInsight/pull/1497), [#1500](https://github.com/RedisInsight/RedisInsight/pull/1500), [#1502](https://github.com/RedisInsight/RedisInsight/pull/1502) Migrate your database connections from other Redis GUIs, including RESP.app, with the new feature to bulk import database connections from a file. 
+- [#1506](https://github.com/RedisInsight/RedisInsight/pull/1506) Pre-populated host (127.0.0.1), port (6379, or 26379 for [Sentinel](https://redis.io/docs/management/sentinel/) connection type), and database alias in the form when adding a new Redis database +- [#1473](https://github.com/RedisInsight/RedisInsight/pull/1473) **Browser** view is renamed **List** view to avoid confusion with the Browser tool +- [#1464](https://github.com/RedisInsight/RedisInsight/pull/1464) Navigation enhancements for the Tree view, covering cases when filters are applied, the list of keys is refreshed or the view is switched to the Tree view +- [#1481](https://github.com/RedisInsight/RedisInsight/pull/1481), [#1482](https://github.com/RedisInsight/RedisInsight/pull/1482), [#1489](https://github.com/RedisInsight/RedisInsight/pull/1489) Indication of new database connections that have been manually added, auto-discovered or imported, but not opened yet +- [#1499](https://github.com/RedisInsight/RedisInsight/pull/1499) Display values of [JSON](https://redis.io/docs/stack/json/) keys when [JSON.DEBUG MEMORY](https://redis.io/commands/json.debug-memory/) is not available + +**Bugs** +- [#1514](https://github.com/RedisInsight/RedisInsight/pull/1514) Scan the database even when the [DBSIZE](https://redis.io/commands/dbsize/) returns 0 +--- +Title: RedisInsight v2.20.0, February 2023 +linkTitle: v2.20.0 (Feb 2023) +date: 2023-02-28 00:00:00 +0000 +description: RedisInsight v2.20.0 +weight: 1 +--- +## 2.20.0 (February 2023) +This is the General Availability (GA) release of RedisInsight 2.20. + +### Highlights +- Visualizations of [search](https://redis.io/docs/stack/search/) and [graph](https://redis.io/docs/stack/graph/) execution plans in Workbench +- Guided walkthrough of RedisInsight tools and capabilities for new users +- Bulk export database connections to a file +- Upload values of [RedisJSON](https://redis.io/docs/stack/json/) from a file for new keys in Browser +- Visualizations of [CLIENT LIST](https://redis.io/commands/client-list/) in Workbench +- See filters previously used in Browser + +### Details +**Features and improvements** +- [#1629](https://github.com/RedisInsight/RedisInsight/pull/1629), [#1739](https://github.com/RedisInsight/RedisInsight/pull/1739), [#1740](https://github.com/RedisInsight/RedisInsight/pull/1740), [#1781](https://github.com/RedisInsight/RedisInsight/pull/1781) Investigate and optimize your [search](https://redis.io/docs/stack/search/) and [graph](https://redis.io/docs/stack/graph/) queries with new visualizations of execution plans in Workbench. Visualizations are supported for [FT.EXPLAIN](https://redis.io/commands/ft.explain/),[FT.PROFILE](https://redis.io/commands/ft.profile/), [GRAPH.EXPLAIN](https://redis.io/commands/graph.explain/), and [GRAPH.PROFILE](https://redis.io/commands/graph.profile/). +- [#1698](https://github.com/RedisInsight/RedisInsight/pull/1698) Explore RedisInsight's tools and capabilities with a new walkthrough when you start RedisInsight for the first time. +- [#1631](https://github.com/RedisInsight/RedisInsight/pull/1631), [#1632](https://github.com/RedisInsight/RedisInsight/pull/1632) Migrate your database connections to another RedisInsight instance by performing a bulk export of database connections to a file. +- [#1741](https://github.com/RedisInsight/RedisInsight/pull/1741) Upload [RedisJSON](https://redis.io/docs/stack/json/) values from a file for new keys in Browser. 
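+
+As a rough sketch only (this is an assumption about what the upload amounts to, not a description of the feature's internals, and the key name and JSON value below are made up), uploading a JSON file for a new key is broadly equivalent to creating the key yourself with `JSON.SET` at the root path:
+
+```
+JSON.SET bicycle:1 $ '{"brand": "Velorim", "price": 270}'
+```
+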
+- [#1653](https://github.com/RedisInsight/RedisInsight/pull/1653) Analyze client connections using new Workbench visualizations for [CLIENT LIST](https://redis.io/commands/client-list/). +- [#1625](https://github.com/RedisInsight/RedisInsight/pull/1625) Quickly set filters previously used in Browser by selecting them from the list of recently used filters. +- [#1713](https://github.com/RedisInsight/RedisInsight/pull/1713) See your new Redis keys added in Browser without a need to refresh the list of keys. +- [#1681](https://github.com/RedisInsight/RedisInsight/pull/1681), [#1692](https://github.com/RedisInsight/RedisInsight/pull/1692), [#1693](https://github.com/RedisInsight/RedisInsight/pull/1693) Avoid the timeout connection errors by configuring the connection timeout for databases added manually via host and port. +- [#1696](https://github.com/RedisInsight/RedisInsight/pull/1696), [#1703](https://github.com/RedisInsight/RedisInsight/pull/1703) Test the database connection before adding the database. +- [#1689](https://github.com/RedisInsight/RedisInsight/pull/1689) Update the port of an existing database connection instead of adding a new one. +- [#1731](https://github.com/RedisInsight/RedisInsight/pull/1731) Use database indexes based on [INFO keyspace](https://redis.io/commands/info/). + +**Bugs** +- [#1678](https://github.com/RedisInsight/RedisInsight/pull/1678) Prevent crashes when SSH is set up on Linux. +- [#1697](https://github.com/RedisInsight/RedisInsight/pull/1697) Prevent crashes when working with [Redis streams](https://redis.io/docs/data-types/streams) with large IDs. +--- +Title: RedisInsight v2.36.0, October 2023 +linkTitle: v2.36.0 (October 2023) +date: 2023-10-26 00:00:00 +0000 +description: RedisInsight v2.36 +weight: 1 +--- +## 2.36 (October 2023) +This is the General Availability (GA) release of RedisInsight 2.36. + +### Highlights + +- Optimizations for efficient handling of big [Redis strings](https://redis.io/docs/data-types/strings/): choose to either view the string value for up to a maximum of 5,000 characters or download the data fully as a file if it exceeds the limit +- Improved security measurement to no longer display in plain text existing database passwords, SSH passwords, passphrases, and private keys + +### Details + +**Features and improvements** +- [#2685](https://github.com/RedisInsight/RedisInsight/pull/2685), [#2686](https://github.com/RedisInsight/RedisInsight/pull/2686) Added optimizations for working with big [Redis strings](https://redis.io/docs/data-types/strings/). Users can now choose to either view the data up to a maximum of 5,000 characters or download it in a file fully if it exceeds the limit. 
+- [#2647](https://github.com/RedisInsight/RedisInsight/pull/2647) Improved security measurement to no longer expose the existing database passwords, SSH passwords, passphrases, and private keys in plain text +- [#2631](https://github.com/RedisInsight/RedisInsight/pull/2631) Added proactive notification to restart the application when a new version becomes available +- [#2705](https://github.com/RedisInsight/RedisInsight/pull/2705) Basic support in the [search index](https://redis.io/docs/interact/search-and-query/) creation form (in Browser) to enable [geo polygon](https://redis.io/commands/ft.create/#:~:text=Vector%20Fields.-,GEOSHAPE,-%2D%20Allows%20polygon%20queries) search +- [#2681](https://github.com/RedisInsight/RedisInsight/pull/2681) Updated the Pickle formatter to [support](https://github.com/RedisInsight/RedisInsight/issues/2260) Pickle protocol 5 + +**Bugs** +- [#2675](https://github.com/RedisInsight/RedisInsight/pull/2675) Show the "Scan more" control until the cursor returned by the server is 0 to fix [cases](https://github.com/RedisInsight/RedisInsight/issues/2618) when not all keys are displayed. +--- +Title: RedisInsight v1.4, April 2020 +linkTitle: v1.4 (Apr 2020) +date: 2020-04-29 00:00:00 +0000 +description: Redis 6 ACLs support, improved CLI and full screen support in Graph, TimeSeries and RedisSearch +weight: 96 +--- + +This is the General Availability Release of RedisInsight 1.4 (v1.4.0)! + +### Headlines + +- Support for Redis 6, Redis Enterprise 6 and ACLs +- Improve CLI capabilities with removed command restrictions +- Full screen support in Graph, TimeSeries and RediSearch + +### Full details: + +- Features + - Core: + - Added support for Redis 6 + RE6 and authentication using ACL + - Added Full screen support for Graph, TimeSeries and RediSearch. + - Improved UI consistency (colors and styles) in Graph and Timeseries + - CLI: + - Removed the command restrictions, unless a command is specifically blacklisted. + - Command responses are displayed in exactly the same way as in `redis-cli` + - RedisGraph: + - Optimized performances when getting nodes relationships (edges) from user's queries + - Stream: + - Improved UX when defining the timing range of events to be filtered + +- Bug Fixes: + - Core: + - Fixed issue when connecting to Redis Enterprise with a password using a special character + - Stream: + - Fixed ability to properly visualize all events + +### Known issues + +- Core: + - Authentication to Redis 6 OSS in cluster mode is not supported yet +- CLI: + - Blocking commands are not supported + - Commands which return non-standard streaming responses are not supported: `MONITOR, SUBSCRIBE, PSUBSCRIBE, SYNC, PSYNC, SCRIPT DEBUG` +--- +Title: Redis Insight v2.50.0, May 2024 +linkTitle: v2.50.0 (May 2024) +date: 2024-05-30 00:00:00 +0000 +description: Redis Insight v2.50 +weight: 1 + +--- +## 2.50 (May 2024) +This is the General Availability (GA) release of Redis Insight 2.50. + +### Highlights +- New tutorial exploring several common Redis use cases with paired-up sample data that will get you started quicker with your empty database. +- Performance and UX enhancements for the JSON data structure for smoother data rendering and interaction in the Browser. + +### Details + +**Features and improvements** +- [#3402](https://github.com/RedisInsight/RedisInsight/pull/3402) New tutorial exploring several common Redis use cases with paired-up sample data that will get you started quicker with your empty database. 
+- [#3251](https://github.com/RedisInsight/RedisInsight/pull/3251) UX enhancements for the JSON data structure in the Browser to prevent collapsing the entire structure when updating a JSON value. Includes performance optimizations for loading JSON documents containing numerous objects. +- [#3161](https://github.com/RedisInsight/RedisInsight/pull/3161), [#3171](https://github.com/RedisInsight/RedisInsight/pull/3171) Added a quick access button to sign in to your Redis Cloud account from anywhere within Redis Insight, to import existing databases or create a new account with a free database. Integration with your Redis Cloud account is currently available only in the desktop Redis Insight version. +- [#3349](https://github.com/RedisInsight/RedisInsight/pull/3349) Changed the sorting order in the Tree view to lexicographical. +--- +Title: RedisInsight v2.46.0, March 2024 +linkTitle: v2.46.0 (March 2024) +date: 2024-03-28 00:00:00 +0000 +description: RedisInsight v2.46 +weight: 1 +--- +## 2.46 (March 2024) +This is the General Availability (GA) release of RedisInsight 2.46. + +### Highlights +- New formatters for 32-bit and 64-bit vector embeddings for a more human-readable representation in the Browser +- Cleaner layout on the main page with quick access to JSON and search & query tutorials and Redis Cloud in-app sign-up + + +### Details + +**Features and improvements** +- [#2843](https://github.com/RedisInsight/RedisInsight/pull/2843), [#3185](https://github.com/RedisInsight/RedisInsight/pull/3185) Adding new formatters for 32-bit and 64-bit vector embeddings to visualize them as arrays in Browser for a simpler and more intuitive representation. +- [#3069](https://github.com/RedisInsight/RedisInsight/pull/3069) UX enhancements in the database list page for an improved user experience, leading to a cleaner layout and easier navigation. +- [#3151](https://github.com/RedisInsight/RedisInsight/pull/3151) Launch RedisInsight with the previously used window size. + +**Bugs** +- [#3152](https://github.com/RedisInsight/RedisInsight/pull/3152), [#3156](https://github.com/RedisInsight/RedisInsight/pull/3156) A fix to [support the * wildcard](https://github.com/RedisInsight/RedisInsight/issues/3146) in Stream IDs. +- [#3174](https://github.com/RedisInsight/RedisInsight/pull/3174) Display invalid JSONs as unformatted values when a JSON view is set in Workbench results. +--- +Title: RedisInsight v1.3, March 2020 +linkTitle: v1.3 (Mar 2020) +date: 2020-03-30 00:00:00 +0000 +description: Auto-discovery of Redis Cloud databases, visualising paths in RedisGraph +weight: 97 +--- + +## RedisInsight v1.3.1 release notes (April 2020) + +This is the maintenance release of RedisInsight 1.3 (v1.3.1). + +Update urgency: Medium + +### Headlines + +- Fixed support for connecting to Redis database on TLS-enabled hosts with SNI enforcement. + +### Details + +- Bug Fixes: + - Core: + - Fixed support for connecting to Redis database on TLS-enabled hosts with SNI enforcement. + - Memory Analysis + - Fixed wrong display of table columns in Memory Analyzer view. + - Browser + - Fixed bug where the TTL on string and RedisJSON keys was being reset on edit. + - Configuration: + - Fixed freezing/flashing on refreshing configuration. + - CLI + - Fixed minor visual bug in inline command documentation. + - Security + - Updated frontend dependencies that had developed security vulnerabilities. + +## RedisInsight v1.3.0 release notes (March 2020) + +This is the General Availability release of RedisInsight 1.3 (v1.3.0)! 
+ +### Headlines + +- The Windows installer is now signed with a Microsoft Authenticode certificate. +- Auto-Discovery of databases for Redis Cloud Pro. +- Visualising paths in RedisGraph + +### Details + +- Features: + - Security: + - The Windows installer now signed with a Microsoft Authenticode certificate + - Core: + - Auto-Discovery for Redis Cloud Pro: Redis Cloud Pro subscribers can automatically add + their cloud databases with just a few clicks + - Support for editing the connection details of an added database + - Better support for Sentinel-monitored databases with different passwords for the sentinel instance(s) and database + - UI improvements to the add database form + - Added a button in the top-right menu to reach the online documentation with one click + - RedisGraph: + - Added support for visualising queries that use [path functions](https://oss.redislabs.com/redisgraph/commands/#path-functions) + - Memory Analysis: + - Added support for [virtual hosted-style](https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html#virtual-hosted-style-access) S3 paths for Offline Analysis + - Browser: + - Added tooltip to make it easier to view the name of long keys + +- Bug Fixes: + - Core: + - Fixed fonts that were being loaded from the Internet, causing jarring visual changes on slow connections + - RedisGraph: + - Improved rendering of Array records + - Removed `GRAPH.EXPLAIN` calls for now until we have execution plan visualisation +--- +Title: Redis Insight v2.66.0, January 2025 +linkTitle: v2.66.0 (January 2025) +date: 2025-01-30 00:00:00 +0000 +description: Redis Insight v2.66 +weight: 1 + +--- +## 2.66 (January 2025) +This is the General Availability (GA) release of Redis Insight 2.66. + +### Highlights +- Switch between Redis databases and [Redis Data Integration](https://redis.io/data-integration/?utm_source=redisinsight&utm_medium=repository&utm_campaign=release_notes) (RDI) instances without returning to the database or RDI endpoint list. +- Improved performance in Browser when handling nested JSON data, along with the option to hide key size and TTL for a more efficient navigation. + +### Details + +**Features and improvements** +- [#4258](https://github.com/RedisInsight/RedisInsight/pull/4258) Improved navigation allows seamless switching between Redis databases and [Redis Data Integration](https://redis.io/data-integration/?utm_source=redisinsight&utm_medium=repository&utm_campaign=release_notes) (RDI) instances without returning to the database or endpoint list. +- [#4315](https://github.com/RedisInsight/RedisInsight/pull/4315) Improved performance when working with nested JSON data types in Browser. +- [#4290](https://github.com/RedisInsight/RedisInsight/pull/4290) Added an option to hide key size and TTL in Browser to optimize space. Hiding key size can also help avoid performance issues when working with large keys. +- [#4268](https://github.com/RedisInsight/RedisInsight/pull/4268) Enhanced UX for adding Redis databases, now displaying information in multiple tabs. +- [#4228](https://github.com/RedisInsight/RedisInsight/pull/4228) Added the ability to customize the refresh interval or stop refreshing database overview metrics, allowing to change the frequency or avoid seeing the `INFO` command in Profiler. +- [#4255](https://github.com/RedisInsight/RedisInsight/pull/4255) Updated Brotli decompression to use brotli-wasm. 
+ +**Bugs** +- [#4304](https://github.com/RedisInsight/RedisInsight/pull/4304) Resolved the [application startup error](https://github.com/RedisInsight/RedisInsight/issues/3871) on Ubuntu 24.04 caused by a space in the application name. + +**SHA-256 Checksums** +| Package | SHA-256 | +|--|--| +| Windows | Bjxu9UFPpWhz29prFqRsKDNlF4LZaTUJgAvBBI/FNQ9rBncFGGOb5m59wY3dXIYAG6+VB6F9U9ylffv31IDszw== | +| Linux AppImage | T6y4xd4BVs1skNAOWkFpWkcov0288qIh2dXHo7gofDw99ow6phV3LzcaasHLT5F+TdlbfcjB8aGVJMx1qEIaBw== | +| Linux Debian| WaMsSd6qKvw5x6ALLLPTnFoWMX/qVZafVJ3SJAUr8IYoGksnNlU1huUr9q/ftlwP00y2zYac7EZBbl2Z/bOppQ== | +| MacOS Intel | 4x7vG7nTt3s4+kQ6WSuhrtigRa2XZ9Q6UiR1WCb/vROAL/X5GjmFvv4jBPIxZC1w6Z46pNzS+IhLcI4oHVSyAw== | +| MacOS Apple silicon | x8wlgNy4dadKV3tcn8uJh/ksvEMnBYaCPvlxlfwcMjNwXNoSBQ+tK6kgJSONGgLVUqtQe1pfTbzqBsqkEAxquw== | +--- +Title: RedisInsight v2.26.0, May 2023 +linkTitle: v2.26.0 (May 2023) +date: 2023-05-31 00:00:00 +0000 +description: RedisInsight v2.26 +weight: 1 +--- +## 2.26 (May 2023) +This is the General Availability (GA) release of RedisInsight 2.26. + +### Highlights +- Introducing Insights (Beta): a new right-side panel that displays contextualised database recommendations for optimizing performance and memory usage. The list of recommendations gets updated as you interact with your database. Check out the paired-up tutorials to learn about the recommended feature and vote to provide feedback. This functionality is being rolled out gradually to the user base. +- Support for bulk data upload in custom tutorials: quickly upload sample datasets from your custom RedisInsight tutorials to share your Redis expertise with your team and the wider community. + +### Details + +**Features and improvements** +- [#1847](https://github.com/RedisInsight/RedisInsight/pull/1847), [#1901](https://github.com/RedisInsight/RedisInsight/pull/1901), [#1957](https://github.com/RedisInsight/RedisInsight/pull/1957), [#1972](https://github.com/RedisInsight/RedisInsight/pull/1972) Launching Insights (Beta): a new right-side panel that displays contextualised database recommendations for optimizing performance and memory usage. The list of recommendations gets updated in real-time as you interact with your database taking into account database configuration, user actions and accessed data. Consult the paired-up tutorials to learn more about the recommended feature. This functionality is being rolled out gradually to the user base in order to allow the RedisInsight team to learn and adjust the recommendations. Provide feedback directly in the app or the [GitHub repository](https://github.com/RedisInsight/RedisInsight/issues). +- [#2019](https://github.com/RedisInsight/RedisInsight/pull/2019) Quickly upload sample data sets in bulk from your custom RedisInsight tutorials to share your Redis expertise with your team and the wider community. 
Use a text file with the list of Redis commands and follow our simple [instructions](https://github.com/RedisInsight/Tutorials) to include example data sets in your custom RedisInsight tutorials +- [#2010](https://github.com/RedisInsight/RedisInsight/pull/2010), [#2012](https://github.com/RedisInsight/RedisInsight/pull/2012), [#2013](https://github.com/RedisInsight/RedisInsight/pull/2013) Optimized the logic when filtering per data type in Browser to avoid unnecessary [TYPE](https://redis.io/commands/type/) commands + +**Bugs** +- [#2014](https://github.com/RedisInsight/RedisInsight/pull/2014) Display the actual command processing time in Workbench when results are grouped +--- +Title: RedisInsight v2.6.0, July 2022 +linkTitle: v2.6.0 (July 2022) +date: 2022-07-25 00:00:00 +0000 +description: RedisInsight v2.6.0 +weight: 8 +--- + +## 2.6.0 (July 2022) +This is the General Availability (GA) release of RedisInsight 2.6.0 + +### Headlines: +- Bulk actions: Delete the keys in bulk based on the filters set in Browser or Tree view +- Multiline support for key values in Browser and Tree View: Click the key value to see it in full +- [Pipeline](https://redis.io/docs/manual/pipelining/) support in Workbench: Batch Redis commands in Workbench to optimize round-trip times +- In-app notifications: Receive messages about important changes, updates, or announcements inside the application. Notifications are always available in the Notification center, and can be displayed with or without preview. + +### Details +**Features and improvements:** +- [#890](https://github.com/RedisInsight/RedisInsight/pull/890), [#883](https://github.com/RedisInsight/RedisInsight/pull/883), [#875](https://github.com/RedisInsight/RedisInsight/pull/875) Delete keys in bulk from your Redis database in Browser and Tree view based on filters you set by key name or data type. +- [#878](https://github.com/RedisInsight/RedisInsight/pull/878) Multiline support for key values in Browser and Tree View: Select the truncated value to expand the row and see the full value, select again to collapse it. +- [#837](https://github.com/RedisInsight/RedisInsight/pull/837), [#838](https://github.com/RedisInsight/RedisInsight/pull/838) Added [pipeline](https://redis.io/docs/manual/pipelining/) support for commands run in Workbench to optimize round-trip times. Default number of commands sent in a pipeline is 5, and is configurable in Settings > Advanced. +- [#862](https://github.com/RedisInsight/RedisInsight/pull/862), [#840](https://github.com/RedisInsight/RedisInsight/pull/840) Added in-app notifications to inform you about any important changes, updates, or announcements. Notifications are always available in the Notification center, and can be displayed with or without preview. +- [#830](https://github.com/RedisInsight/RedisInsight/pull/830) To more easily explore and work with stream data, always display stream entry ID and controls to remove the Stream entry regardless of the number of fields. +- [#928](https://github.com/RedisInsight/RedisInsight/pull/928) Remember the sorting on the list of databases. + +**Bugs fixed:** +- [#932](https://github.com/RedisInsight/RedisInsight/pull/932) Refresh the JSON value in Browser/Tree view. +--- +Title: RedisInsight v2.30.0, July 2023 +linkTitle: v2.30.0 (July 2023) +date: 2023-07-27 00:00:00 +0000 +description: RedisInsight v2.30 +weight: 1 +--- +## 2.30 (July 2023) +This is the General Availability (GA) release of RedisInsight 2.30. 
+ +### Highlights +Introducing support for [triggers and functions](https://github.com/RedisGears/RedisGears/) that bring application logic closer to your data and give Redis powerful features for event-driven data processing + +### Details + +**Features and improvements** + +[#2247](https://github.com/RedisInsight/RedisInsight/pull/2247), [#2249](https://github.com/RedisInsight/RedisInsight/pull/2249), [#2273](https://github.com/RedisInsight/RedisInsight/pull/2273), [#2279](https://github.com/RedisInsight/RedisInsight/pull/2279) Support for [triggers and functions](https://github.com/RedisGears/RedisGears/) that add the capability to execute server-side functions triggered by events or data operations to: + - Speed up applications by running the application logic where the data lives + - Eliminate the need to maintain the same code across different applications by moving application functionality inside the Redis database + - Maintain consistent data when applications react to any keyspace change + - Improve code resiliency by backing up and replicating triggers and functions along with the database + +Triggers and functions work with a JavaScript engine, which lets you take advantage of JavaScript’s vast ecosystem of libraries and frameworks and modern, expressive syntax. +--- +Title: RedisInsight v1.2, January 2020 +linkTitle: v1.2 (Jan 2020) +date: 2020-01-27 00:00:00 +0000 +description: TLS Client side authentication support and stability improvements +weight: 98 +--- +## RedisInsight v1.2.2 release notes + +Update urgency: Medium + +This is a maintenance release for version 1.2. + +### Details + +- Bug Fixes: + - Core: + - This release fixes the possible __false positive__ malware issues flagged by certain antivirus vendors, introduced by [pyinstaller](https://github.com/pyinstaller/pyinstaller/issues/4633), as reported on [reddit](https://www.reddit.com/r/redis/comments/f1qapz/redisinsight_cotains_malware/). + +## RedisInsight v1.2.1 release notes + +Update urgency: Medium + +This is a maintenance release for version 1.2. + +### Details + +- Enhancements: + - Core: + - Upgrade notifications: When you open RedisInsight, a notification is shown if a new version is available. + - RediSearch: + - Support for [RediSearch 1.6](https://github.com/RediSearch/RediSearch/releases/tag/v1.6.7). +- Minor Bug Fixes: + - RedisTimeSeries: + - Time was interpreted as seconds instead of milliseconds ([Issue 332](https://github.com/RedisTimeSeries/RedisTimeSeries/issues/332)). + +## RedisInsight v1.2.0 release notes + +### Headlines + +- This release improves overall stability and provides fixes for issues found after the previous release. +- Added support for client-side TLS authentication. +- Resolved a bug which caused blank pages at startup. + +### Details + +- New features: + - Core: + - Added support for Redis databases that require TLS client authentication (as in Redis Enterprise) + - RedisTimeseries: + - Initial `auto-updating` functionality when the query's end timestamp is `+` +- Minor Enhancements: + - Core: + - Check whether the port is available before starting. + - Made `localhost` the default host instead of `0.0.0.0`. + - Improved logging during startup. + - RedisGraph: + - Fixed the height of query cards. + - RediSearch: + - Added support for zero-length and whitespace-only index names. +- Bug Fixes: + - Core: + - Moved the server to another thread instead of a separate process. + In certain situations, the server process was being orphaned after the main process died.
This resulted in a several issues, of which the "blank page issue" was the most common. Now that the server process is in a thread instead of a process, the server is not left running when the process exits. + - RediSearch: + - Fixed several bugs in the display of summarized results in the table view. + - Browser: + - Better handling of unsupported values - link to other tools that support it or show better error message. + - Fixed UI issues when the screen size is varied to provide better responsiveness. +--- +Title: RedisInsight v2.0, Nov 2021 +linkTitle: v2.0 (Nov 2021) +date: 2021-11-23 00:00:00 +0000 +description: RedisInsight v2.0.2 +weight: 12 +--- + +## 2.0.6 (April 2022) +This is the General Availability (GA) release of RedisInsight 2.0.6 + +### Headlines: +- SNI support - added SNI support to indicate a hostname in the TLS handshake +- Save Profiler logs into a file - now you can save and download Profiler logs into a .TXT file +- Customize delimiters in Tree view - added support for custom delimiters in Tree view +- Support for node grouping and pulsing in RedisGraph visualizations in Workbench + +### Details +**Features and improvements:** +- [#548](https://github.com/RedisInsight/RedisInsight/pull/548), [#542](https://github.com/RedisInsight/RedisInsight/pull/542) Added SNI support - use the "Add Database Manually" form to see the new "Use SNI" option under the TLS section to specify the server name and connect to your Redis Database +- [#521](https://github.com/RedisInsight/RedisInsight/pull/521/files) Added an option to save Profiler logs. Enable saving before starting the Profiler to save the logs into a .TXT file and download it to analyze them outside of the application +- [#496](https://github.com/RedisInsight/RedisInsight/pull/496) Now you can specify your own delimiters to work with namespaces in the Tree view, default delimiter is colon (':') +- [#473](https://github.com/RedisInsight/RedisInsight/pull/473) Added a link to GitHub repository of RedisInsight to quickly find and access it - you can see the icon below the "Settings" +- [#586](https://github.com/RedisInsight/RedisInsight/pull/586) Added support for node grouping and pulsing in the visualisations for RedisGraph in Workbench +- [#455](https://github.com/RedisInsight/RedisInsight/pull/455) Limited the movement of the special editor with Cypher highlights to Workbench Editor area (to work with it, just type in RedisGraph commands in Workbench) +- [#462](https://github.com/RedisInsight/RedisInsight/pull/462) Provided additional information about database indexes in the form to add a database using host and port +- [#489](https://github.com/RedisInsight/RedisInsight/pull/489) Reworked user experience with filters per key type and key name in Browser +- [#535](https://github.com/RedisInsight/RedisInsight/pull/535) Added highlights of timestamps and improved text wrapping in Profiler + + +### Bug fixes: +- [#581](https://github.com/RedisInsight/RedisInsight/pull/581) Fixed the issue with displaying keys in multi-shard databases +- [#576](https://github.com/RedisInsight/RedisInsight/pull/576) Fixed encoding in Workbench + + + +## 2.0.5 (March 2022, GA) + +This is the General Availability (GA) release of RedisInsight 2.0. + +### Headlines + +* **Tree view** - A new view of the keys in Browser, which automatically groups keys scanned in your database into folders based on key namespaces. Now you can navigate through and analyze your list of keys quicker by opening only folders with namespaces you want. 
+* **Support for Apple M1 (arm64)** - You can download it [here](https://redis.com/redis-enterprise/redis-insight/#insight-form). +* **Added auto-discovery of local databases** - RedisInsight will automatically find and add your local databases when you open the application for the first time. +* **A dedicated Editor for Cypher syntax** - Workbench supports autocomplete and highlighting of Cypher syntax for RedisGraph queries. + +### Details + +- You can switch to the Tree view in Browser to see all the keys grouped into folders according to their namespaces. Note that we use the colon (:) as a default separator, and it is not customizable yet. +- Added support for Apple M1 (arm64). +- Added a mechanism to auto-discover local databases based on the following parameters: + - The mechanism only triggers when you open the application for the first time. + - The database has standalone connection type. + - The database uses the default username and requires no password or TLS certificates. +- Added new built-in guides in Workbench for additional capabilities. +- Added tutorials in Workbench for Redis Stack databases that describe common use cases for Redis capabilities. +- Added a new dedicated Editor to Workbench with support for Cypher syntax autocomplete and highlighting. Use the "Shift+Space" shortcut inside of the quotes for your query to open the dedicated Editor. +- Show modules uploaded to databases in the list of databases. +- Added support for returning to the previous command in Workbench Editor. Use arrow up when your cursor is at the beginning of the first row to return to the previous command. Note: there is no support for the reverse direction yet, so use it with caution. + +If you installed RedisInsight-preview before, this folder will still exist at the following path: +* For MacOs: /.redisinsight-preview +* For Windows: C:/Users/{Username}/.redisinsight-preview +* For Linux: /.redisinsight-preview + +## 2.0.4 (February 2022) + +This is the maintenance release of RedisInsight Preview 2.0 (v2.0.4)! + +### Headlines + +- Fixes to the issues found +- Profiler + - Added RedisInsight Profiler, which uses the MONITOR command to analyze every command sent to the redis instance in real-time. +- Workbench: + - Added support for RedisGears and RedisBloom on the intelligent Redis command auto-complete. + - Keep command results previously received in the Workbench. + - Support for repeating commands. +- CLI: + - Added support for RedisGears and RedisBloom on the intelligent Redis command auto-complete. + - Support for repeating commands. +- Command Helper: + - Added information about RedisGears and RedisBloom Redis commands. + +### Details + +- Profiler + - Added RedisInsight Profiler, which uses the MONITOR command to analyze every command sent to the redis instance in real-time. Note: Running the MONITOR command is dangerous to the performance of your production server, so run it reasonably and remember to stop the Profiler. +- Workbench: + - Added support for RedisGears and RedisBloom on the intelligent Redis command auto-complete, so the list of similar commands and their arguments are displayed when you start typing any RedisGears or RedisBloom commands. + - Keep command results (up to 1MB) previously received in the Workbench, so they are available even after you restart the application. + - Connect Workbench to the database index selected when adding a database. + - To repeat any command in Workbench, just enter any integer and then a Redis command with arguments. 
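+
+For illustration only (a minimal sketch with a made-up key name), the repeat prefix described above means that entering the following line in Workbench runs `INCR` five times:
+
+```
+5 INCR app:counter
+```
+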
+- CLI: + - Added support for RedisGears and RedisBloom on the intelligent Redis command auto-complete, so hints with arguments are displayed when you enter any RedisGears or RedisBloom commands + - CLI is by default connected to the database index selected when adding a database. Added displaying of the database index connected. + - To repeat any command in CLI, just enter any integer and then a Redis command with arguments. +- Command Helper: + - Added information about RedisGears and RedisBloom Redis commands. +- Core: + - Fixed an issue with displaying parameter values in the Overview when no information is received for these parameters. + + +## 2.0.3 (December 2021) + +This is the maintenance release of RedisInsight Preview 2.0 (v2.0.3). + +### Headlines + +- Workbench: + - Added indications of commands + - New hints with the list of command arguments + - Reworked navigation for the built-in guides +- Help Center: + - Added a page with list of supported keyboard shortcuts +- Core: + - Uncoupled Command Helper from CLI + - Renamed `ZSET` to Sorted Set + +### Details + +- Browser: + - Changed the format of TTL in the list of keys +- CLI: + - Fixed a bug with `FT.CREATE` command that rendered the window blank +- Workbench: + - Fixed a bug to avoid executing the Redis command one more time when the view of results is changed + - Added a new information message when there are no results to display + - Added indications of commands (currently, not clickable) in Editor area to point out the lines where commands start + - Added new hints in Editor to display the list of command arguments with the following keyboard shortcuts: + - Ctrl+Shift+Space for Windows and Linux + - ⌘ ⇧ Space for Mac + - Added support for remembering the state (expanded or collapsed) for left side panel in Workbench + - Reworked navigation for the built-in guides + - Changed icons for default and custom plugins +- Command Helper: + - Changed titles of command groups to make them consistent with redis.io +- Help Center: + - Added a page with supported keyboard shortcuts +- Core: + - Reworked logic to open CLI and Command Helper, added an option to open Command Helper without a need to open CLI + - Changed fonts and colors across the application to enhance readability + - Renamed `ZSET` to Sorted Set + - Added description of RedisGears and RedisBloom commands to hints in CLI, Command Helper, and Workbench + - Added support for automatic updates to the list of commands and their description in CLI, Command Helper, and Workbench + +## 2.0.2 (November 2021) + +This is the public preview release of RedisInsight 2.0 (v2.0.2). + +RedisInsight 2.0 is a complete product rewrite based on a new tech stack. This version contains a number of must-have and most-used capabilities from previous releases, plus a number of differentiators and delights. + +RedisInsight-preview 2.0 can be installed along with the current GA (1.11) version of RedisInsight with no issues. 
+ +### Headlines + +- Developed using a new tech stack based on Electron, Elastic UI, and Monaco Editor +- Introducing Workbench - advanced command line interface with intelligent command auto-complete and complex data visualizations +- Ability to write and render your own data visualizations within Workbench +- Built-in click-through Redis guides available +- Support for Light and Dark themes +- Enhanced user experience with Browser + +### Details + +- Core: + - Enhanced user experience with the list of databases: + - View, sort and edit databases added + - Multiple deletion of databases + - Ability to connect to Redis Standalone, Redis Cluster and Redis Sentinel + - Auto discovery of databases managed by Redis Enterprise, Redis Cloud (Flexible), and Redis Sentinel + - Support for Redis OSS Cluster API + - Support for TLS connection + - Works with Microsoft Azure (official support upcoming) +- Workbench: + - Advanced command-line interface that lets you run commands against your Redis server + - Workbench editor allows comments, multi-line formatting and multi-command execution + - Intelligent Redis command auto-complete and syntax highlighting with support for [RediSearch](https://oss.redis.com/redisearch/), [RedisJSON](https://oss.redis.com/redisjson/), [RedisGraph](https://oss.redis.com/redisgraph/), [RedisTimeSeries](https://oss.redis.com/redistimeseries/), [RedisGears]({{< relref "/operate/oss_and_stack/stack-with-enterprise/deprecated-features/triggers-and-functions" >}}), [RedisAI](https://oss.redis.com/redisai/), [RedisBloom](https://oss.redis.com/redisbloom/) + - Allows rendering custom data visualization per Redis command using externally developed plugins +- Browser: + - Browse, filter and visualize key-value Redis data structures + - Visual cues per data type + - Quick view of size and ttl in the main browser view + - Ability to filter by pattern and/or data type + - Ability to change the number of keys to scan through during filtering + - CRUD support for Lists, Hashes, Strings, Sets, Sorted Sets + - Search within the data structure (except for Strings) + - CRUD support for [RedisJSON](https://oss.redis.com/redisjson/) +- Database overview: + - A number of metrics always on display within the database workspace + - Metrics updated every 5 second + - CPU, number of keys, commands/sec, network input, network output, total memory, number of connected clients + - Enabled modules per Redis server listed +- CLI: + - Command-line interface with enhanced type-ahead command help + - Embedded command helper where you can filter and search for Redis commands +- RediSearch: + - Tabular visualizations within Workbench of [RediSearch](https://oss.redis.com/redisearch/) index queries and aggregations (support for FT.INFO, FT.SEARCH and FT.AGGREGATE) +- Custom plugins: + - Ability to build your own visualization plugins to be rendered within Workbench + - [Documentation](https://github.com/redisinsight/redisinsight) on how to develop custom plugins and a reference example are provided +- Built-in guides: + - Built-in click-through guides for Redis capabilities + - Added a guide on Document Capabilities within Redis +- User interface (UI): + - Light/dark themes available + - Colour palette adjusted to the highest level of [Web content accessibility guidelines](https://www.w3.org/WAI/standards-guidelines/wcag/) +- Data encryption: + - Optional ability to encrypt sensitive data such as connection certificates and passwords +--- +Title: RedisInsight v2.10.0, September 2022 +linkTitle: v2.10.0 
(Sept 2022) +date: 2022-09-29 00:00:00 +0000 +description: RedisInsight v2.10.0 +weight: 5 +--- +## 2.10.0 (September 2022) +This is the General Availability (GA) release of RedisInsight 2.10. + +### Highlights +- Formatters: Additional support for key values in Protobuf, Binary, PHP unserialize (view and edit serialized PHP values as JSON), and Java serialized objects; the selected formatter is saved when viewing other keys +- New overview for cluster databases displays memory and key allocation as well as database information per shard +- Configure Workbench to persist the Editor after commands have been run and group the results +- Complete an optional user survey + +### Details +**Features and improvements** +- [#1159](https://github.com/RedisInsight/RedisInsight/pull/1159), [#1160](https://github.com/RedisInsight/RedisInsight/pull/1160), [#1068](https://github.com/RedisInsight/RedisInsight/pull/1068), [#1071](https://github.com/RedisInsight/RedisInsight/pull/1071), [#1095](https://github.com/RedisInsight/RedisInsight/pull/1095), [#1097](https://github.com/RedisInsight/RedisInsight/pull/1097), [#1098](https://github.com/RedisInsight/RedisInsight/pull/1098) A dedicated **Analysis Tools** page displays memory and key allocation in cluster databases as well as database information per shard +- [#1017](https://github.com/RedisInsight/RedisInsight/pull/1017), [#1025](https://github.com/RedisInsight/RedisInsight/pull/1025), [#1029](https://github.com/RedisInsight/RedisInsight/pull/1029), [#1059](https://github.com/RedisInsight/RedisInsight/pull/1059), [#1092](https://github.com/RedisInsight/RedisInsight/pull/1092) Added support for additional data formats in Browser/Tree view, including Protobuf, Binary, Pickle, PHP unserialize (view and edit serialized PHP values as JSON), and Java serialized objects +- [#1130](https://github.com/RedisInsight/RedisInsight/pull/1130) Save the selected formatter when viewing other keys +- [#1177](https://github.com/RedisInsight/RedisInsight/pull/1177) Added validation when an edited value is not valid in the selected format in Browser/Tree view +- [#1048](https://github.com/RedisInsight/RedisInsight/pull/1048) Configure Workbench to persist the Editor after commands have been run +- [#1119](https://github.com/RedisInsight/RedisInsight/pull/1119) Pipeline mode configuration for Workbench moved to Settings > Workbench +- [#1149](https://github.com/RedisInsight/RedisInsight/pull/1149) Save Workbench space by grouping results +- [#1162](https://github.com/RedisInsight/RedisInsight/pull/1162) Complete an optional user survey +- [#1037](https://github.com/RedisInsight/RedisInsight/pull/1037) Added a tooltip to display long fields in [Redis Streams](https://redis.io/docs/data-types/streams/) +- [#1202](https://github.com/RedisInsight/RedisInsight/pull/1202) Removed format validations from the admin username in the Redis Enterprise Cluster autodiscovery process + +**Bugs** +- [#1180](https://github.com/RedisInsight/RedisInsight/pull/1180) Fix to display full values for truncated TTL in minutes +- [#1197](https://github.com/RedisInsight/RedisInsight/pull/1197) Workbench is now available even when encryption fails +- [#1176](https://github.com/RedisInsight/RedisInsight/pull/1176) Save the refresh value in Browser/Tree view +- [#1101](https://github.com/RedisInsight/RedisInsight/pull/1101) Fixed an issue where key names were not displayed +--- +Title: Redis Insight v2.68.0, April 2025 +linkTitle: v2.68.0 (April 2025) +date: 2025-04-01 00:00:00 +0000 +description: Redis Insight
v2.68 +weight: 1 + +--- +## 2.68 (April 2025) +This is the General Availability (GA) release of Redis Insight 2.68. + +### Highlights +- You can now test the connectivity to your source database when setting up a [Redis Data Integration](https://redis.io/docs/latest/integrate/redis-data-integration/) (RDI) data pipeline in Redis Insight. This will help ensure that RDI can connect to the source database and keep your Redis cache updated with changes from the source database. +- Configure database connections via environment variables or a JSON file, allowing for centralized and efficient configuration management. This is specifically useful for automated deployments. + +### Details + +**Features and improvements** +- [#4368](https://github.com/RedisInsight/RedisInsight/pull/4368), [#4389](https://github.com/RedisInsight/RedisInsight/pull/4389) You can now test the connectivity to your source database when setting up a [Redis Data Integration](https://redis.io/docs/latest/integrate/redis-data-integration/) (RDI) data pipeline in Redis Insight. This will help ensure that RDI can connect to the source database and keep your Redis cache updated with changes from the source database. +- [#308](https://github.com/redislabsdev/RedisInsight-Cloud/pull/308) Configure database connections via environment variables or a JSON file, allowing for centralized and efficient configuration management. This is specifically useful for automated deployments. See [here](https://redis.io/docs/latest/operate/redisinsight/configuration/) for more details. +- [#4428](https://github.com/RedisInsight/RedisInsight/pull/4428) Added an environment variable to disable the ability to manage database connections (adding, editing, or deleting) in Redis Insight. This provides enhanced security and configuration control in scenarios where preventing changes to database connections is necessary. See [here](https://redis.io/docs/latest/operate/redisinsight/configuration/) for more details. +- [#4377](https://github.com/RedisInsight/RedisInsight/pull/4377), [#4383](https://github.com/RedisInsight/RedisInsight/pull/4383) Allows connecting to databases without requiring credentials for dangerous commands. In this mode, certain features, such as database statistics, are hidden. +- [#4427](https://github.com/RedisInsight/RedisInsight/pull/4427) Added the ability to download a file containing all keys deleted through bulk actions, which helps in tracking changes. +- [#4335](https://github.com/RedisInsight/RedisInsight/pull/4335) [Redis Data Integration](https://redis.io/docs/latest/integrate/redis-data-integration/) deployment errors are now stored in a file instead of being displayed in error messages, improving space efficiency. +- [#4374](https://github.com/RedisInsight/RedisInsight/pull/4374) Improved connection errors for clustered databases by adding detailed information to help with troubleshooting. +- [#4358](https://github.com/RedisInsight/RedisInsight/pull/4358) Added a setting to manually enforce standalone mode for clustered database connections instead of automatic clustered mode. +- [#4418](https://github.com/RedisInsight/RedisInsight/pull/4418) An ability to see key names in HEX format, useful for non-ASCII characters or debugging. To switch from Unicode to HEX, open the "Decompression & Formatters" tab while adding or editing a database connection. +- [#4401](https://github.com/RedisInsight/RedisInsight/pull/4401) Added an option to close key details for unsupported data types in the Browser to free up space. 
+- [#4296](https://github.com/RedisInsight/RedisInsight/pull/4296) When working with JSON data types, Redis Insight now uses [JSONPath ($) syntax](https://redis.io/docs/latest/develop/data-types/json/path/). + +**SHA-256 Checksums** +| Package | SHA-256 | +|--|--| +| Windows | 50YAPT59n2cLQu+P7kvc+kT+FxnW67pV53F1xz/C1IfgjmycgWpemycosbooQdLvXWPK4GLgk/NOnoZMI/15Lg== | +| Linux AppImage | QbI7V8jCCVPum4jdd1W8CEOqT+iFzwUTrt9tVbu9Kpv81Pub27aIJve3kWDdXWyvxHPUlUOsBHIo/uHIzdFJPw== | +| Linux Debian| V0/W8RclF6q0uT6uBR/lDNMt+OXqm7xmkSYf9vd8xCe4mGWUQBHiACX/aIgWs8l3Na5AQCNSJLrHnDXWiDD9Fw== | +| MacOS Intel | j3bdEX0rvxPGBUMZ6hD9aD+C/WTR1bOZT+lekJqujkDnRSPMZS5syGfkd1cQfx8QSnM10qYBO4RCH0Ew0m3g0A== | +| MacOS Apple silicon | iKOsvtOLOMcAvlbxL1LJI+45DgJxc+VIe9mVdoJZaNtMPCTAdxBX07GcvVVGfJOE8MdomsKrN8S2yYek7L6YLQ== | +--- +Title: RedisInsight v2.8.0, August 2022 +linkTitle: v2.8.0 (Aug 2022) +date: 2022-08-23 00:00:00 +0000 +description: RedisInsight v2.8.0 +weight: 6 +--- +## 2.8.0 (August 2022) +This is the General Availability (GA) release of RedisInsight 2.8.0 + +### Headlines +- Formatters: See formatted key values in JSON, MessagePack, Hex, Unicode, or ASCII in Browser/Tree view for an improved experience of working with different data formats. +- Clone existing database connection or connection details: Clone the database connection you previously added to have the same one, but with a different database index, username, or password. +- Raw mode in Workbench: Added support for the raw mode in Workbench results + +### Details +**Features and improvements** +- [#978](https://github.com/RedisInsight/RedisInsight/pull/978), [#959](https://github.com/RedisInsight/RedisInsight/pull/959), [#984](https://github.com/RedisInsight/RedisInsight/pull/984), [#996](https://github.com/RedisInsight/RedisInsight/pull/996), [#992](https://github.com/RedisInsight/RedisInsight/pull/992), [#1030](https://github.com/RedisInsight/RedisInsight/pull/1030) Added support for different data formats in Browser/Tree view, including JSON, MessagePack, Hex, and ASCII. If selected, formatter is applied to the entire key value and is available for any data type supported in Browser/Tree view. If any non-printable characters are detected, data editing is available only in Workbench and CLI to avoid data loss. +- [#955](https://github.com/RedisInsight/RedisInsight/pull/965) Quickly clone a database connection to have the same one, but with a different database index, username, or password. Open a database connection in the edit mode, request to clone it, and make the changes needed. The original database connection remains the same. +- [#1012](https://github.com/RedisInsight/RedisInsight/pull/1012) Added support for the raw mode (--raw) in Workbench. Enable it to see the results of commands executed in the raw mode. +- [#987](https://github.com/RedisInsight/RedisInsight/pull/987) [DBSIZE](https://redis.io/commands/dbsize/) command is no longer required to connect to a database. +- [#971](https://github.com/RedisInsight/RedisInsight/pull/971) Updated icon for the [RediSearch](https://redis.io/docs/stack/search/) Light module. +- [#1011](https://github.com/RedisInsight/RedisInsight/pull/1011) Enhanced navigation in the Command Helper allowing you to return to previous actions. + +**Bugs fixed** +- [#1009](https://github.com/RedisInsight/RedisInsight/pull/1009) Fixed an [error](https://github.com/RedisInsight/RedisInsight/issues/804) on automatic discovery of Sentinel databases. 
+- [#978](https://github.com/RedisInsight/RedisInsight/pull/978) Work with [non-printable characters](https://github.com/RedisInsight/RedisInsight/issues/873). To avoid data loss, when non-printable characters are detected, key and value are editable in Workbench and CLI only. +--- +Title: RedisInsight v2.44.0, February 2024 +linkTitle: v2.44.0 (February 2024) +date: 2024-02-29 00:00:00 +0000 +description: RedisInsight v2.44 +weight: 1 +--- +## 2.44 (February 2024) +This is the General Availability (GA) release of RedisInsight 2.44. + +### Highlights +- Added support for SSH tunneling for clustered databases, unblocking some users who want to migrate from RESP.app to RedisInsight. +- UX optimizations in the Browser layout to make it easier to leverage [search and query](https://redis.io/docs/interact/search-and-query/?utm_source=redisinsight&utm_medium=main&utm_campaign=redisinsight_release_notes) indexes. + +### Details + +**Features and improvements** +- [#2711](https://github.com/RedisInsight/RedisInsight/pull/2711), [#3040](https://github.com/RedisInsight/RedisInsight/pull/3040) Connect to your clustered Redis database via SSH tunnel using a password or private key in PEM format. +- [#3030](https://github.com/RedisInsight/RedisInsight/pull/3030), [#3070](https://github.com/RedisInsight/RedisInsight/pull/3070) UX optimizations in the Browser layout to enlarge the "Filter by Key" input field in the Browser and optimize the display of long [search and query](https://redis.io/docs/interact/search-and-query/?utm_source=redisinsight&utm_medium=main&utm_campaign=redisinsight_release_notes) indexes. +- [#3033](https://github.com/RedisInsight/RedisInsight/pull/3033), [#3036](https://github.com/RedisInsight/RedisInsight/pull/3036) Various improvements for custom [tutorials](https://github.com/RedisInsight/Tutorials), including visual highlighting of Redis code blocks and strengthening security measures for bulk data uploads by providing an option to download and preview the list of commands for upload. +- [#3010](https://github.com/RedisInsight/RedisInsight/pull/3010) Enhancements to prevent authentication errors caused by [certain special characters](https://github.com/RedisInsight/RedisInsight/issues/3019) in database passwords. + +**Bugs** +- [#3029](https://github.com/RedisInsight/RedisInsight/pull/3029) A fix for cases when autofill [prevents](https://github.com/RedisInsight/RedisInsight/issues/3026) the form to auto-discover Redis Enterprise Cluster database from being submitted. +--- +Title: Redis Insight v2.52.0, June 2024 +linkTitle: v2.52.0 (June 2024) +date: 2024-06-26 00:00:00 +0000 +description: Redis Insight v2.52 +weight: 1 + +--- +## 2.52 (June 2024) +This is the General Availability (GA) release of Redis Insight 2.52. 
+ +### Highlights +- Redis Insight now supports [setting expiration for individual hash fields](https://redis.io/docs/latest/develop/data-types/hashes/?utm_source=redisinsight&utm_medium=release_notes&utm_campaign=2.52#field-expiration), a highly requested feature available in the [first release candidate of Redis 7.4](https://github.com/redis-stack/redis-stack/releases/tag/v7.4.0-rc1) +- Learn how to leverage Redis for Retrieval Augmented Generation (RAG) use cases via a new built-in Redis Insight tutorial + +### Details + +**Features and improvements** +- [#3470](https://github.com/RedisInsight/RedisInsight/pull/3470) Redis Insight now supports [setting expiration for individual hash fields](https://redis.io/docs/latest/develop/data-types/hashes/?utm_source=redisinsight&utm_medium=release_notes&utm_campaign=2.52#field-expiration) through intuitive Browser controls. The hash field expiration is a highly requested feature available in the [first release candidate of Redis 7.4](https://github.com/redis-stack/redis-stack/releases/tag/v7.4.0-rc1). +- [#60](https://github.com/RedisInsight/Tutorials/pull/60) Redis, with its high performance and versatile data structures, is an excellent choice for implementing Retrieval Augmented Generation (RAG). Our new built-in tutorial provides an overview of how Redis can be leveraged in a RAG use case. To get started, open the "Insights" panel in the top right corner and try the new tutorial. +- [#3447](https://github.com/RedisInsight/RedisInsight/pull/3447), [#3483](https://github.com/RedisInsight/RedisInsight/pull/3483) UX optimizations for displaying the values of keys in the Browser. The new layout includes controls for editing key values that appear only when you hover over them, optimizing the use of space and providing a cleaner interface. +- [#3231](https://github.com/RedisInsight/RedisInsight/pull/3231) Support for applying the JSON formatting in Browser for values of keys with float numbers that contain 10 or more decimal places. +- [#3492](https://github.com/RedisInsight/RedisInsight/pull/3492) Increased the slot refresh timeout to 5000 milliseconds to enhance connection stability for clustered databases. This adjustment helps avoid scenarios where a connection is terminated before the acknowledgment of a successful connection establishment is received. + +**Bugs** +- [#3490](https://github.com/RedisInsight/RedisInsight/pull/3490) Fix for an issue related to adding a JSON field to a key that already contains many fields. +--- +Title: Redis Insight v2.64.0, December 2024 +linkTitle: v2.64.0 (December 2024) +date: 2024-12-19 00:00:00 +0000 +description: Redis Insight v2.64 +weight: 1 + +--- +## 2.64 (December 2024) +This is the General Availability (GA) release of Redis Insight 2.64. + +### Highlights +- Improved the database connections list and simplified the connection form, delivering a cleaner, more intuitive UI for improved focus. +- New in-app reminders to prevent your free Redis Cloud database from being deleted due to inactivity, ensuring you can make the most of it to test your ideas. + +### Details + +**Features and improvements** +- [#4088](https://github.com/RedisInsight/RedisInsight/pull/4088), [#4078](https://github.com/RedisInsight/RedisInsight/pull/4078), [#4094](https://github.com/RedisInsight/RedisInsight/pull/4094) Improved the database connections list and simplified the connection form, delivering a cleaner, more intuitive UI for improved focus. 
+- [#4189](https://github.com/RedisInsight/RedisInsight/pull/4189), [#4191](https://github.com/RedisInsight/RedisInsight/pull/4191) New in-app reminders notify you before your free Redis Cloud database is deleted due to inactivity, ensuring you can make the most of it to test your ideas. +- [#4204](https://github.com/RedisInsight/RedisInsight/pull/4204), [#4196](https://github.com/RedisInsight/RedisInsight/pull/4196), [#4202](https://github.com/RedisInsight/RedisInsight/pull/4202) Various vulnerabilities have been fixed. + +**Bugs** +- [#4194](https://github.com/RedisInsight/RedisInsight/pull/4194) Resolved an [issue](https://github.com/RedisInsight/RedisInsight/issues/4186) where modifying a JSON value inadvertently converted strings into numbers. + +**SHA-256 Checksums** +| Package | SHA-256 | +|--|--| +| Windows | iYZbKsFtz/Ua4qeBdeHIRtZRiA1I50R3yY1t3VUD2cn94EpZLR5Xz3lK3yRxA85PxJaHjrWljyGliZv0OX0KBg== | +| Linux AppImage | ToEFW8wVLI8oFoc/puzf2Cwoff8gBTsIxEsGjQRZq5D5BgrE3muxtuEQto3J2RiRbadGAZx6AZPh75WVJ0DKRw== | +| Linux Debian| /k6jgfzDSRJ0yWmbtxpD5WG2i9wGUZ4r2AexDz6rUOLyZMqQPJUKEKuonprFvHZp+PUW/EtSWc436IFykBVmsQ== | +| MacOS Intel | PrbRc+ju0UKxr4huP7Xl9Sq0fH0XaxUtydW86rAYepEAADUADsAYV2lB8gO7Ohs9ukJ7mXBEU7OJWRqJGLhxHg== | +| MacOS Apple silicon | E6kTbnkoW3eji/v7WVrnwqlEKk444+hxiFqt56r8J+zAHhmX9dlNd7y37xdJlQ82FZ9QOIIMsN5Z0N+bgRisuw== | +--- +Title: Redis Insight v2.48.0, April 2024 +linkTitle: v2.48.0 (April 2024) +date: 2024-04-10 00:00:00 +0000 +description: Redis Insight v2.48 +weight: 1 + +--- +## 2.48 (April 2024) +This is the General Availability (GA) release of Redis Insight 2.48. + +### Highlights +- New look, equally fast. +- Learn Redis faster by uploading sample data and a concise tutorial for empty databases. +- Enhance the security and scalability when running Redis Insight on Docker behind a proxy by adding support for the static proxy subpath. + + +### Details + +**Features and improvements** +- [#3233](https://github.com/RedisInsight/RedisInsight/pull/3233) New look, equally fast. We've refreshed our Redis Insight app to align with our new brand look. +- [#3224](https://github.com/RedisInsight/RedisInsight/pull/3224) Jumpstart your Redis journey by uploading sample data with JSON and basic data structures for empty databases. To upload the sample data, navigate to the Browser screen for your empty database and initiate the upload process with just a click. +- [#2711](https://github.com/RedisInsight/RedisInsight/pull/2711) Enhance the security and scalability by running Redis Insight on Docker [behind a proxy](https://github.com/RedisInsight/RedisInsight-reverse-proxy) using the newly added support for the static proxy subpath. Use the `RIPROXYPATH` environment variable to configure the subpath proxy path. +--- +Title: RedisInsight v1.1, December 2019 +linkTitle: v1.1 (Dec 2019) +date: 2019-12-27 03:49:29 +0000 +description: Stability improvements and other fixes +weight: 99 +--- +## RedisInsight v1.1.0 release notes (December 2019) + +### Headlines + +- This release improves overall stability and provides fixes for issues found after the previous release. + +### Details + +- Minor Enhancements: + - Core: + - Enable mouse wheel support inside the `querycard`. + - Browser: + - Enable enter key press for adding keys in browser + - RediSearch: + - Disable HIGHLIGHT markup in JSON view. 
+ - Browser: + - Improve error message when database is unreachable + - Add a reload/refresh button to refresh the value of a key + - Enable enter key press for adding keys in browser + - Improve error message for unsupported value types +- Bug Fixes: + - RedisGraph: + - Fix initial node placement in the view. + - Fix initial zoom with respect to the number of nodes in the result. + - Other minor fixes. +--- +Title: Redis Insight v2.64.1, December 2024 +linkTitle: v2.64.1 (December 2024) +date: 2024-12-27 00:00:00 +0000 +description: Redis Insight v2.64.1 +weight: 1 + +--- +## 2.64.1 (December 2024) +This is a maintenance release for Redis Insight 2.64. + +Update urgency: `HIGH`: There is a critical bug that may affect a subset of users. Upgrade! + +### Details + +- [#4236](https://github.com/RedisInsight/RedisInsight/pull/4236) Reverts the change to use JSONPath ($) by default rather than (.). These changes could cause issues with shards in Redis Enterprise Active-Active databases. + +**SHA-256 Checksums** +| Package | SHA-256 | +|--|--| +| Windows | hIK4qrC50Gd4jZnpHnwRIIVyDWtOfvfFID9nv8xfdcDgf4LvJcGLa9zVYkbfvwUv+aEaaBCohJJZMIGFC6iYHQ== | +| Linux AppImage | ll999oWjvKppawlYBPN6phGNa+mDiWmefIvkbQNAd7JPZFbHTYuLFWMWo4F1NrnZlr6vnPF6awbu7ubbiZL0HA== | +| Linux Debian| 4MKHfmmapfhxXUln0X+rpFXzm2dH6IPj2BIwlNRPQDGhpQ5flzOtLlV1iNGm9xqennZUv+hx+cVQodzPIj8FTw== | +| MacOS Intel | 5FkllEVCbD9M1fYww7N6XT3Qknl5tWrkHKWQWGhjkUiR/nZ89u+A84UzynB5H/lzBCFwUWJidfGJ4akrX2J7Hg== | +| MacOS Apple silicon | 2gWxZqGlAo0RyQKa0h8puyXMkIg1vF/Gobd9vS9DNWZMr3aYJojALx6f7pfknBoL7MDmZI29Mohtx4mnQPbjGQ== | +--- +Title: RedisInsight v1.0, November 2019 +linkTitle: v1.0 (Nov 2019) +date: 2019-10-01 03:49:29 +0000 +description: The initial release after Redis acquired RDBTools +weight: 100 +--- + +## RedisInsight v1.0.0 release notes (November 2019) + +This is the initial release after [Redis acquired RDBTools](https://www.redislabs.com/blog/redisinsight-gui/). +--- +Title: RedisInsight v2.32.0, August 2023 +linkTitle: v2.32.0 (August 2023) +date: 2023-08-31 00:00:00 +0000 +description: RedisInsight v2.32 +weight: 1 +--- +## 2.32 (August 2023) +This is the General Availability (GA) release of RedisInsight 2.32. + +### Highlights +- Easily provision a free database to use with the RedisInsight interactive tutorials to learn, among others, how to leverage Vector Similarity Search for your AI use cases or discover the power of the native JSON data structure supporting structured querying and full-text search. Take advantage of the in-app social sign-up to [Redis Cloud](https://redis.com/comparisons/oss-vs-enterprise/?utm_source=redisinsight&utm_medium=rel_notes&utm_campaign=2_32) to quickly provision a free database with [Redis Stack’s capabilities](https://redis.io/docs/about/about-stack/?utm_source=redisinsight&utm_medium=rel_notes&utm_campaign=2_32). Try the [latest 7.2](https://redis.com/blog/introducing-redis-7-2/?utm_source=redisinsight&utm_medium=rel_notes&utm_campaign=2_32) release which delivers the new [Triggers and Functions](https://redis.com/blog/introducing-triggers-and-functions/?utm_source=redisinsight&utm_medium=rel_notes&utm_campaign=2_32) feature, allowing you to execute server-side functions written in JavaScript that are either triggered by a keyspace change, by a stream entry arrival, or by explicitly calling them, empowering developers to build and maintain real-time applications by moving business logic closer to the data, ensuring a lower latency whilst delivering the best developer experience. 
+- Select a custom installation directory on Windows OS for when multi-user access to the app is required. + + +### Details + +**Features and improvements** + +- [#2270](https://github.com/RedisInsight/RedisInsight/pull/2270), [#2271](https://github.com/RedisInsight/RedisInsight/pull/2271), [#2437](https://github.com/RedisInsight/RedisInsight/pull/2437) Added the ability to quickly provision a free [Redis Cloud](https://redis.com/comparisons/oss-vs-enterprise/?utm_source=redisinsight&utm_medium=rel_notes&utm_campaign=2_32) database via in-app social signup (Google or GitHub). Use the database with the RedisInsight interactive tutorials or try the [latest 7.2](https://redis.com/blog/introducing-redis-7-2/?utm_source=redisinsight&utm_medium=rel_notes&utm_campaign=2_32) release which delivers the new [Triggers and Functions](https://redis.com/blog/introducing-triggers-and-functions/?utm_source=redisinsight&utm_medium=rel_notes&utm_campaign=2_32) feature. To quickly create and automatically add a free Redis Cloud database to RedisInsight, click the "Try Redis Cloud" banner in the list of database connections page and follow the steps. +- [#2455](https://github.com/RedisInsight/RedisInsight/pull/2455) Select a custom installation directory on Windows OS +- [#2373](https://github.com/RedisInsight/RedisInsight/pull/2373), [#2387](https://github.com/RedisInsight/RedisInsight/pull/2387) Delete all command results in Workbench at once +- [#2458](https://github.com/RedisInsight/RedisInsight/pull/2458) Added in-app hints in Browser and Workbench to get started with RedisInsight interactive tutorials +- [#2422](https://github.com/RedisInsight/RedisInsight/pull/2422) Ignore the empty lines in files when uploading data in bulk +- [#2470](https://github.com/RedisInsight/RedisInsight/pull/2470) Preset the header containing the engine, the API version and the library name in the JavaScript file when creating a new library within the [Triggers and Functions](https://redis.com/blog/introducing-triggers-and-functions/?utm_source=redisinsight&utm_medium=main&utm_campaign=main) tool +--- +Title: RedisInsight v2.4.0, June 2022 +linkTitle: v2.4.0 (June 2022) +date: 2022-06-27 00:00:00 +0000 +description: RedisInsight v2.4.0 +weight: 9 +--- + +## 2.4.0 (June 2022) +This is the General Availability (GA) release of RedisInsight 2.4.0 + +### Headlines: +- Pub/Sub: Added support for [Redis pub/sub](https://redis.io/docs/manual/pubsub/) enabling subscription to channels and posting messages to channels. +- Consumer groups: Added support for [streams consumer groups](https://redis.io/docs/manual/data-types/streams/#consumer-groups) enabling provision of different subsets of messages from the same stream to many clients for inspection and processing. +- Database search: Search the list of databases added to RedisInsight to quickly find the required database. + + +### Details +**Features and improvements:** +- [#760](https://github.com/RedisInsight/RedisInsight/pull/760), [#737](https://github.com/RedisInsight/RedisInsight/pull/737), [#773](https://github.com/RedisInsight/RedisInsight/pull/773) Added support for [Redis pub/sub](https://redis.io/docs/manual/pubsub/) enabling subscription to channels and posting messages to channels. Currently does not support sharded channels. 
+- [#717](https://github.com/RedisInsight/RedisInsight/pull/717), [#683](https://github.com/RedisInsight/RedisInsight/pull/683), [#684](https://github.com/RedisInsight/RedisInsight/pull/684), [#688](https://github.com/RedisInsight/RedisInsight/pull/688), [#720](https://github.com/RedisInsight/RedisInsight/pull/720), Added support for [streams consumer groups](https://redis.io/docs/manual/data-types/streams/#consumer-groups) to manage different groups and consumers for the same stream, explicit acknowledgment of processed items, ability to inspect the pending items, claiming of unprocessed messages, and coherent history visibility for each single client. +- [#754](https://github.com/RedisInsight/RedisInsight/pull/754) New **All Relationship** toggle for RedisGraph visualizations in **Workbench**. Enable it to see all relationships between your nodes. +- [#788](https://github.com/RedisInsight/RedisInsight/pull/788) Quickly search the list of databases added to RedisInsight per database alias, host:port, or the last connection to find the database needed. +- [#788](https://github.com/RedisInsight/RedisInsight/pull/788) Overview displays the number of keys per the logical database connected if this number is not equal to the total number in the database. + +**Bugs Fixed:** +- [#774](https://github.com/RedisInsight/RedisInsight/pull/774) Fixed cases when not all parameters are received in Overview. +- [#810](https://github.com/RedisInsight/RedisInsight/pull/810) Display several streams values with the same timestamp. +--- +Title: RedisInsight v2.24.0, April 2023 +linkTitle: v2.24.0 (Apr 2023) +date: 2023-04-27 00:00:00 +0000 +description: RedisInsight v2.24 +weight: 1 +--- +## 2.24 (April 2023) +This is the General Availability (GA) release of RedisInsight 2.24. + +### Highlights +- Bulk data upload: Upload the list of Redis commands from a text file using the new bulk action in the Browser tool. Use the bulk data upload with custom RedisInsight tutorials to quickly load your sample dataset. +- Support for images in custom tutorials: showcase your Redis expertise with your team and the wider community by building shareable RedisInsight tutorials. +- JSON formatter support for the [JSON.GET](https://redis.io/commands/json.get/), [JSON.MGET](https://redis.io/commands/json.mget/), and [GET](https://redis.io/commands/get/) command output in Workbench. +- Added Brotli and PHP GZcompress to the list of supported decompression formats to view your data in a human-readable format. + +### Details + +**Features and improvements** +- [#1930](https://github.com/RedisInsight/RedisInsight/pull/1930), [#1961](https://github.com/RedisInsight/RedisInsight/pull/1961) Upload the list of Redis commands from a text file using the new bulk action in the Browser tool. Use the bulk data upload with custom RedisInsight tutorials to quickly load your sample dataset. +- [#1936](https://github.com/RedisInsight/RedisInsight/pull/1936), [#1939](https://github.com/RedisInsight/RedisInsight/pull/1939) Added support for images in custom tutorials, available in Workbench. Showcase your Redis expertise with your team and the wider community by building shareable tutorials. Use markdown syntax described in our [instructions](https://github.com/RedisInsight/Tutorials) to build tutorials. 
+- [#1946](https://github.com/RedisInsight/RedisInsight/pull/1946) See the output of [JSON.GET](https://redis.io/commands/json.get/), [JSON.MGET](https://redis.io/commands/json.mget/), and [GET](https://redis.io/commands/get/) formatted as JSON in Workbench. +- [#1876](https://github.com/RedisInsight/RedisInsight/pull/1876) Ability to directly delete a key in the Browser list of keys without having to view its values. +- [#1889](https://github.com/RedisInsight/RedisInsight/pull/1889), [#1900](https://github.com/RedisInsight/RedisInsight/pull/1900) Added Brotli and PHP GZcompress to the list of supported decompression formats to view your data in a human-readable format. Decompression format is configurable when adding a database connection. +- [#1886](https://github.com/RedisInsight/RedisInsight/pull/1886) Enhanced command syntax in CLI, Workbench, and Command Helper to align with [command documentation](https://redis.io/commands/). +- [#1975](https://github.com/RedisInsight/RedisInsight/pull/1975) [Renamed](https://github.com/RedisInsight/RedisInsight/issues/1902) the "Display On System Tray" to "Show in Menu Bar" on macOS. + +**Bugs** +- [#1990](https://github.com/RedisInsight/RedisInsight/pull/1990) Keep the previously specified SNI parameters when a database connection is edited. +- [#1999](https://github.com/RedisInsight/RedisInsight/pull/1999) Keep the previously set database index when a database connection is edited. +--- +Title: RedisInsight v2.12.0, October 2022 +linkTitle: v2.12.0 (Oct 2022) +date: 2022-09-29 00:00:00 +0000 +description: RedisInsight v2.12.0 +weight: 4 +--- +## 2.12.0 (October 2022) +This is the General Availability (GA) release of RedisInsight 2.12. + +### Highlights +- Database Analysis: Get insights and optimize the usage and performance of your Redis or Redis Stack based on the overview of the memory and data type distribution, big or complicated keys, and namespaces used +- Faster initial loading of the list of keys in Browser and Tree views +- Performance optimizations for large results in Workbench + +### Details +**Features and improvements** +- [#1207](https://github.com/RedisInsight/RedisInsight/pull/1207), [#1222](https://github.com/RedisInsight/RedisInsight/pull/1222), [#1295](https://github.com/RedisInsight/RedisInsight/pull/1295), [#1159](https://github.com/RedisInsight/RedisInsight/pull/1159), [#1231](https://github.com/RedisInsight/RedisInsight/pull/1231), [#1155](https://github.com/RedisInsight/RedisInsight/pull/1155) Get insights and optimize the usage and performance of your Redis or Redis Stack with Database Analysis. Navigate to Analysis Tools and scan up to 10,000 keys in the database to see the summary of memory allocation and the number of keys per Redis data type, memory likely to be freed over time, top 15 key namespaces, and the biggest keys found. You can extrapolate results based on the total number of keys or see the exact results for the number of keys scanned. 
+- [#1280](https://github.com/RedisInsight/RedisInsight/pull/1280) Speed up the initial load of the key list in Browser and Tree views +- [#1285](https://github.com/RedisInsight/RedisInsight/pull/1285) Support for infinite floating point numbers in sorted sets +- [#1290](https://github.com/RedisInsight/RedisInsight/pull/1290) Performance optimizations to process large results of Redis commands in Workbench + +**Bugs** +- [#1293](https://github.com/RedisInsight/RedisInsight/pull/1293) Fixed Workbench visualizations in Redis Stack +--- +Title: RedisInsight v1.14, may 2023 +linkTitle: v1.14 (May 2023) +date: 2023-05-02 00:00:00 +0000 +description: RedisInsight v1.14.0 +weight: 5 +--- + +## 1.14.0 (May 2023) + +RedisInsight version 1.X was retired on April 30, 2023, and will no longer be supported. +To continue using the best RedisInsight features and capabilities, download the latest RedisInsight version 2.Y from our [website](https://redis.com/redis-enterprise/redis-insight/) or install it from an app store. + +This is the maintenance release of RedisInsight 1.14 (v1.14.0). + +## Headlines +- Export connections to RedisInsight v2. + +## Details + +### Core + - Added support for exporting database connections to easily migrate them to RedisInsight v2 by bulk exporting to a file. + - Fixed Prompt verification bug for Sentinel instances. + +### Memory analysis + - Added support for `setlistpack` and `streamlistpack3` Redis 7 encoding types parsing. +--- +Title: RedisInsight v2.28.0, June 2023 +linkTitle: v2.28.0 (June 2023) +date: 2023-06-28 00:00:00 +0000 +description: RedisInsight v2.28 +weight: 1 +--- +## 2.28 (June 2023) +This is the General Availability (GA) release of RedisInsight 2.28. + +### Highlights +- Quickly and conveniently add [Redis Cloud](https://redis.com/redis-enterprise-cloud/overview/) databases that belong to fixed subscriptions using the auto-discovery tool +- UX optimizations in Browser for an improved experience when performing [full-text search and queries](https://redis.io/docs/stack/search/), filtering, and bulk actions. +- Support for a monospaced font in [JSON](https://redis.io/docs/stack/json/) key types + +### Details + +**Features and improvements** +- [#2198](https://github.com/RedisInsight/RedisInsight/pull/2198), [#2207](https://github.com/RedisInsight/RedisInsight/pull/2207) Automatically discover and add [Redis Cloud](https://redis.com/redis-enterprise-cloud/overview/) databases that belong to fixed subscriptions using the `Redis Cloud` option on the form to auto-discover databases. +- [#2146](https://github.com/RedisInsight/RedisInsight/pull/2146),[#2161](https://github.com/RedisInsight/RedisInsight/pull/2161) Added UX optimizations in the Browser layout to improve the experience when performing [full-text search and queries](https://redis.io/docs/stack/search/), filtering, and bulk actions. +- [#2200](https://github.com/RedisInsight/RedisInsight/pull/2200) Changed to a monospaced font in [JSON](https://redis.io/docs/stack/json/) key types, and JSON formatters in Browser and Workbench. +- [#2204](https://github.com/RedisInsight/RedisInsight/pull/2204) Re-bind default RedisInsight port from 5001 to 5530 to avoid conflicts with other applications +- [#2120](https://github.com/RedisInsight/RedisInsight/pull/2120) Added the ability to investigate the commands processed by Redis or Redis Stack server without the need to stop Profiler when a new record appears. 
+- [#2186](https://github.com/RedisInsight/RedisInsight/pull/2186) Unified the RedisInsight icon with other macOS applications. + + +**Bugs** +- [#2154](https://github.com/RedisInsight/RedisInsight/pull/2154) Display `(integer) 0` instead of `nil` in [ZRANK](https://redis.io/commands/zrank/) results in Workbench +--- +Title: RedisInsight v2.2.0, May 2022 +linkTitle: v2.2.0 (May 2022) +date: 2022-05-26 00:00:00 +0000 +description: RedisInsight v2.2.0 +weight: 10 +--- + +## 2.2.0 (May 2022) +This is the General Availability (GA) release of RedisInsight 2.2.0 + +### Headlines: +- SlowLog: New tool based on results of the [Slowlog](https://redis.io/commands/slowlog/) command to analyze slow operations in Redis instances +- Streams: Added support for [Redis Streams](https://redis.io/docs/manual/data-types/streams/) +- Open the list of keys or key details in full screen +- Automatically refresh the list of keys and key values with a timer + + +### Details +**Features and improvements:** +- [#621](https://github.com/RedisInsight/RedisInsight/pull/621) , [#645](https://github.com/RedisInsight/RedisInsight/pull/645) , [#649](https://github.com/RedisInsight/RedisInsight/pull/649) Added SlowLog, a tool that displays the list of logs captured by the [Slowlog](https://redis.io/commands/slowlog/) command to analyze all commands that exceed a specified runtime, which helps in troubleshooting performance issues. Specify both the runtime and the maximum length of SlowLog (which are server configurations) to configure the list of commands logged and set the auto-refresh interval to automatically update the list of commands displayed. +- [#597](https://github.com/RedisInsight/RedisInsight/pull/597) , [#598](https://github.com/RedisInsight/RedisInsight/pull/598), [#601](https://github.com/RedisInsight/RedisInsight/pull/601) , [#603](https://github.com/RedisInsight/RedisInsight/pull/603) , [#608](https://github.com/RedisInsight/RedisInsight/pull/608) , [#613](https://github.com/RedisInsight/RedisInsight/pull/613) , [#614](https://github.com/RedisInsight/RedisInsight/pull/614) , [#632](https://github.com/RedisInsight/RedisInsight/pull/632) Support for [Redis Streams](https://redis.io/docs/manual/data-types/streams/), including creation and deletion of Streams, addition and deletion of entries, and filtration of entries per timestamp. [Consumer groups](https://redis.io/docs/manual/data-types/streams/#consumer-groups) will be added in a future release. +- [#643](https://github.com/RedisInsight/RedisInsight/pull/643) List of keys or key details are supported in full screen mode. To open the key list in full screen, close the key details. To open key details in full screen, use the new **Full Screen** control in key details section. +- [#633](https://github.com/RedisInsight/RedisInsight/pull/633) Automatically refresh the list of keys and key values with a timer. To do so, enable the **Auto Refresh** mode by clicking the control next to the **Refresh** button and set the refresh rate. +- [#634](https://github.com/RedisInsight/RedisInsight/pull/634) Removed the max value limitation in the **Database Index** field of the form for adding a new database. + +**Bugs Fixed:** +- [#656](https://github.com/RedisInsight/RedisInsight/pull/656) Binary key names will not trigger errors in databases with enabled OSS Cluster API. Data type, TTL, and size of such keys are displayed in the list of keys in all Redis instances. Key details are currently not available. 
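The SlowLog tool introduced in 2.2.0 above is a front end for the server-side [Slowlog](https://redis.io/commands/slowlog/) command and its two configuration values. For reference, here is a minimal redis-cli sketch of the same workflow; the threshold and length values are only illustrative settings, not values recommended by RedisInsight:

```bash
# Log every command slower than 10,000 microseconds (illustrative value)
redis-cli CONFIG SET slowlog-log-slower-than 10000

# Keep at most 128 entries in the slow log (illustrative value)
redis-cli CONFIG SET slowlog-max-len 128

# Inspect the ten most recent slow entries and the current log length
redis-cli SLOWLOG GET 10
redis-cli SLOWLOG LEN

# Clear the log once the slow operations have been investigated
redis-cli SLOWLOG RESET
```

Each entry returned by `SLOWLOG GET` includes an entry ID, a Unix timestamp, the execution time in microseconds, and the command that was logged.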
+--- +Title: Redis Insight v2.62.0, November 2024 +linkTitle: v2.62.0 (November 2024) +date: 2024-11-27 00:00:00 +0000 +description: Redis Insight v2.62 +weight: 1 + +--- +## 2.62 (November 2024) +This is the General Availability (GA) release of Redis Insight 2.62. + +### Highlights +- Support for multiple key name delimiters in Tree View, allowing more flexible browsing for databases with diverse key structures. +- Remain authenticated to [Redis Copilot](https://redis.io/docs/latest/develop/tools/insight/?utm_source=redisinsight&utm_medium=main&utm_campaign=tutorials#:~:text=for%20more%20information.-,Redis%20Copilot,-Redis%20Copilot%20is), even after reopening Redis Insight, for seamless and uninterrupted access with daily use. + +### Details + +**Features and improvements** +- [#4090](https://github.com/RedisInsight/RedisInsight/pull/4090) Added support for multiple key name delimiters in Tree View, enabling more flexible browsing of databases with varied key name patterns. +- [#3957](https://github.com/RedisInsight/RedisInsight/pull/3957) Remain authenticated to [Redis Copilot](https://redis.io/docs/latest/develop/tools/insight/?utm_source=redisinsight&utm_medium=main&utm_campaign=tutorials#:~:text=for%20more%20information.-,Redis%20Copilot,-Redis%20Copilot%20is), even after reopening Redis Insight, for seamless and uninterrupted access with daily use. +- [#3988](https://github.com/RedisInsight/RedisInsight/pull/3988), [#4059](https://github.com/RedisInsight/RedisInsight/pull/4059) Enhanced both the Java and PHP serialized formatters: the Java formatter now supports date and time data, while the PHP formatter includes UTF-8 encoding for better handling of special characters and multi-language data. +- [#4081](https://github.com/RedisInsight/RedisInsight/pull/4081) Introduced a unique theme key name with a proxy path prefix to prevent conflicts when multiple instances run on the same origin. +- [#2970](https://github.com/RedisInsight/RedisInsight/pull/4107) Upgraded to Electron 33.2.0 for enhanced security and compatibility with modern web standards. + +**Bugs** +- [#4089](https://github.com/RedisInsight/RedisInsight/pull/4089) Resolved an issue where large integers in JSON keys were being rounded, ensuring data integrity. + +**SHA-256 Checksums** +| Package | SHA-256 | +|--|--| +| Windows | ibZ5kn0GSdrbnfHRWC1lDdKozn6YllcGIrDhmLEnt2K1rjgjL2kGKvbtfq9QEkumgGwk2a9zTjr0u5zztGHriQ== | +| Linux AppImage | bM6lbyeAHFX/f0sBehu9a9ifHsDvX8o/2qn91sdtyiRcIU+f31+Ch7K4NI4v226rgj6LvkFDWDNq6VQ4pyLAPA== | +| Linux Debian| ilD86T/+8gEgrZg8MS8Niv/8g54FPeEn1nZrUI6DA7KTl3owqzqD0npb8fdAdL6YtSRbSBUK2fXPQ6GRXWZ/GA== | +| MacOS Intel | pSy3CvRfVIT3O7BXUPMUoONRaZCOA1965tF9T19gZ1NnUn9YkjWlNXdniQHZ4ALKbpC2q62ygt39xF6O52LxAw== | +| MacOS Apple silicon | uoz6I6MO4/j8UJo7eNje3dz4rx1KKj6mum/vXb2882fYPD/lK1cG0Q0OZu/lbxuk0xgzXfWv0MhMTIVVV+EADg== | +--- +Title: RedisInsight v1.7, September 2020 +linkTitle: v1.7 (Sep 2020) +date: 2020-09-10 00:00:00 +0000 +description: RediSearch 2.0 support and stability improvements +weight: 93 +--- + +## 1.7.1 (October 2020) + +Maintenance release for RedisInsight 1.7 including bug fixes and enhancements. + +### Headlines: + +- Core: + - New public health-check API to make monitoring deployments easier. + - Display progress information during memory analysis. + +### Full details: + +Enhancements and bug fixes +- Core: + - Fixed support for TLS in Redis Cluster databases. + - Application name is properly capitalized on MacOSX. 
+ - Fixed update notifications on Docker - Now links to Docker Hub page and provides instructions for updating. +- Memory Analysis: + - Information about the current stage of analysis is now displayed while the analysis runs. + - Fixed issue with running Memory Analysis on MacOSX (related to system OpenSSL libraries). +- Browser: + - Visual improvements to key details view to improve the experience working with long key names. +- CLI: + - Improvements for Redis Cluster databases - Controls to target specific nodes, all nodes, only masters/replicas, etc. +- Streams: + - Fixed consumer groups functionality on Redis Cluster databases. +- Telemetry: + - Report specific modules even when the `MODULE LIST` command is not available. + +## 1.7.0 (September 2020) + +### Headlines: + +- Support for [RediSearch 2.0](https://redislabs.com/blog/introducing-redisearch-2-0/) + +### Full Details: + +- Core: + - Added explanation of the supported subscription types for Redis Cloud database auto-discovery. + - Fixed a bug where upgrading from some previous versions would give an error on startup. + - Use a non-root group by default for the RedisInsight Docker container. +- Memory Analysis: + - Improved UI for offline analysis via RDB file stored in S3. + - Fixed bug where using RDB stored in S3 sub-folder would fail. +- Browser: + - Improved support for searching members of large collections (hashes, sets and sorted sets). +- Streams: + - Improved UX for the handle to resize key selector. +- RediSearch: + - Fixed support for Redis Cloud Essentials databases. +- RedisGraph: + - Fixed an issue where localstorage is filled with unnecessary data. +- Analytics: + - Reporting the subscription type for auto-discovered Redis Cloud databases. +--- +Title: RedisInsight v2.22.1, March 2023 +linkTitle: v2.22.1 (Mar 2023) +date: 2023-03-30 00:00:00 +0000 +description: RedisInsight v2.22.1 +weight: 1 +--- +## 2.22.1 (March 2023) +This is the General Availability (GA) release of RedisInsight 2.22. + +### Highlights +- Share your Redis expertise with your team and the wider community by building custom RedisInsight tutorials. Use our [instructions](https://github.com/RedisInsight/Tutorials) to describe your implementations of Redis for other users to follow and interact with in the context of a connected Redis database +- Take a quick tour of RedisInsight to discover how it can enhance your development experience when building with Redis or Redis Stack +- Select from a list of supported decompression formats to view your data in a human-readable format + + +### Details +**Features and improvements** +- [#1782](https://github.com/RedisInsight/RedisInsight/pull/1782), [#1813](https://github.com/RedisInsight/RedisInsight/pull/1813) Share your Redis expertise with your team and the wider community by building custom RedisInsight tutorials. The tutorials use markdown and are easy to write. They are an ideal way to describe practical implementations of Redis so users can follow and interact with commands in the context of an already connected Redis database. Check out these [instructions](https://github.com/RedisInsight/Tutorials) to start creating your own tutorials. Let the community discover your content by labeling your GitHub repository with [redis-tutorials](https://github.com/topics/redis-tutorials) +- [#1834](https://github.com/RedisInsight/RedisInsight/pull/1834) Take a quick tour of RedisInsight to discover how it can enhance your development experience. 
To start the tour, in the left-side navigation, open the Help Center (above the Settings icon), reset the onboarding and open the Browser page +- [#1742](https://github.com/RedisInsight/RedisInsight/pull/1742), [#1753](https://github.com/RedisInsight/RedisInsight/pull/1753), [#1755](https://github.com/RedisInsight/RedisInsight/pull/1755), [#1762](https://github.com/RedisInsight/RedisInsight/pull/1762) Configure one of the following data decompression formats when adding a database connection to view your data in a human-readable format: GZIP, LZ4, ZSTD, SNAPPY +- [#1787](https://github.com/RedisInsight/RedisInsight/pull/1787) Added UX improvements to the search by values of keys feature in Browser: Enable the search box after the index is selected + +**Bugs** +- [#1808](https://github.com/RedisInsight/RedisInsight/pull/1808) Prevent errors when running Docker RedisInsight on Safari Version 16.2 +- [#1835](https://github.com/RedisInsight/RedisInsight/pull/1835) Display total memory and total keys for replicas in Sentinel +--- +Title: Redis Insight v2.58.0, October 2024 +linkTitle: v2.58.0 (October 2024) +date: 2024-10-01 00:00:00 +0000 +description: Redis Insight v2.58 +weight: 1 + +--- +## 2.58 (October 2024) +This is the General Availability (GA) release of Redis Insight 2.58. + +### Highlights +- Added functionality to start, stop, and reset [Redis Data Integration](https://redis.io/data-integration/?utm_source=redisinsight&utm_medium=repository&utm_campaign=release_notes) pipelines directly in the app, simplifying management and enhancing control +- Introduced support for subscribing to specific Pub/Sub channel - a [highly requested feature](https://github.com/RedisInsight/RedisInsight/issues/1671) +- Ability to delete previously added CA and Client certificates to keep them updated + +### Details + +**Features and improvements** +- [#3843](https://github.com/RedisInsight/RedisInsight/pull/3843) Redis Insight now supports starting, stopping, and resetting [Redis Data Integration](https://redis.io/data-integration/?utm_source=redisinsight&utm_medium=repository&utm_campaign=release_notes) (RDI) pipelines. Use RDI version 1.2.9 or later to seamlessly stop or resume processing new data. You can also reset an RDI pipeline to take a new snapshot of the data, process it, and continue tracking changes. To get started, navigate to the "Redis Data Integration" tab on the database list page and add or connect to your RDI endpoint. +- [#3832](https://github.com/RedisInsight/RedisInsight/pull/3832) Added support for a [highly requested feature](https://github.com/RedisInsight/RedisInsight/issues/1671) to subscribe to specific Pub/Sub channels. On the Pub/Sub page, you can now subscribe to multiple channels or patterns by entering them as a space delimited list. +- [#3796](https://github.com/RedisInsight/RedisInsight/pull/3796) Ability to delete previously added CA and Client certificates to keep them up-to-date. + +**Bugs** +- [#3840](https://github.com/RedisInsight/RedisInsight/pull/3840) [Saved](https://github.com/RedisInsight/RedisInsight/issues/3833) SNI and SSH connection information for newly added database connections. +- [#3828](https://github.com/RedisInsight/RedisInsight/pull/3828) Fixed an issue to [display multiple hash fields](https://github.com/RedisInsight/RedisInsight/issues/3826) when expanding a hash value. 
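The per-channel Pub/Sub subscription added in 2.58 above maps onto the core Redis Pub/Sub commands. As a rough redis-cli equivalent (the channel names below are placeholders):

```bash
# Subscribe to two specific channels, space-delimited as on the Pub/Sub page
redis-cli SUBSCRIBE news.sports news.weather

# Or subscribe by pattern to cover every channel under "news."
redis-cli PSUBSCRIBE "news.*"

# From another terminal, publish a test message to one of the channels
redis-cli PUBLISH news.sports "kick-off at 20:00"
```

Note that Pub/Sub delivers messages only to clients that are subscribed at the time of publishing; messages are not persisted for later subscribers.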
+--- +Title: RedisInsight v2.42.0, January 2024 +linkTitle: v2.42.0 (January 2024) +date: 2024-01-30 00:00:00 +0000 +description: RedisInsight v2.42 +weight: 1 +--- +## 2.42 (January 2024) +This is the General Availability (GA) release of RedisInsight 2.42. + +### Highlights +- Introducing a new dedicated developer enablement area! Explore Redis capabilities and learn how to use the native JSON data structure for structured querying and full-text search, including vector similarity search for AI use cases, and more. Browse the tutorials offline or use the in-app provisioning of a free Redis Cloud database to try them interactively. +- RedisInsight is now available on Docker. Check out our [Docker repository](https://hub.docker.com/repository/docker/redis/redisinsight/general) if that’s your preferred platform. + + +### Details + +**Features and improvements** +- [#2724](https://github.com/RedisInsight/RedisInsight/pull/2724), [#2752](https://github.com/RedisInsight/RedisInsight/pull/2752), [#2965](https://github.com/RedisInsight/RedisInsight/pull/2965) Introducing a dedicated developer enablement area. Dive into interactive tutorials and level up your Redis game even without a database connected. Start exploring tutorials by clicking on the "Insights" button located in the top-right corner. Because interactive tutorials can alter data in your database, avoid running them in a production environment. For an optimal tutorial experience, create a free [Redis Cloud](https://redis.com/try-free/?utm_source=redisinsight&utm_medium=main&utm_campaign=redisinsight_release_notes) database. +- [#2972](https://github.com/RedisInsight/RedisInsight/pull/2972), [#2811](https://github.com/RedisInsight/RedisInsight/pull/2811) The long-awaited Docker build is now available. Check out our [Docker repository](https://hub.docker.com/repository/docker/redis/redisinsight/general) if that’s your preferred platform. +- [#2857](https://github.com/RedisInsight/RedisInsight/pull/2857) Empty Browser and Workbench pages are aligned with the new interactive tutorials. +- [#2940](https://github.com/RedisInsight/RedisInsight/pull/2940) Recommendations have been renamed to Tips. +- [#2970](https://github.com/RedisInsight/RedisInsight/pull/2970) A critical vulnerability has been fixed. +--- +Title: RedisInsight v1.13, Aug 2022 +linkTitle: v1.13 (Aug 2022) +date: 2022-08-24 00:00:00 +0000 +description: RedisInsight v1.13.0 +weight: 7 +--- + +## 1.13.1 (November 2022) + +This is the maintenance release of RedisInsight 1.13 (v1.13.1). + +### Fixes: +- Core: + - Fixed container vulnerabilities. + - Prevented healthcheck API from overloading RedisInsight DB. Earlier, a separate session was created for each healthcheck hit, which overloaded the database with too many session tokens. Now, the healthcheck API doesn't create any session tokens. + - Get Sentinel host using IP field. +- Memory Analysis: + - Added support for `hashlistpack`, `zsetlistpack`, `quicklist2` and `streamlistpack2` encoding types. + + +## 1.13.0 (August 2022) + +This is the General Availability Release of RedisInsight 1.13 (v1.13.0).
+ + +## Headlines +- Subpath Proxy Support + +## Details + +### Core +- Subpath Proxy support: RedisInsight can now be proxied behind a subpath +- Added trusted origins environment variable to set trusted origins +- Fixed major container vulnerabilities +- Added proxy notification that displays when such an environment is found +### RediSearch +- Fixed index information +### Profiler +- Added support for IPv6 clients +### Memory Analyzer +- Fixed Lua recommendation + + +--- +Title: Redis Insight v2.54.0, August 2024 +linkTitle: v2.54.0 (August 2024) +date: 2024-08-06 00:00:00 +0000 +description: Redis Insight v2.54 +weight: 1 + +--- +## 2.54 (August 2024) +This is the General Availability (GA) release of Redis Insight 2.54. + +### Highlights +Support for [Redis Data Integration (RDI)](https://redis.io/data-integration/?utm_source=redisinsight&utm_medium=repository&utm_campaign=release_notes) - a powerful tool designed to seamlessly synchronize data from your existing database to Redis in near real-time. RDI establishes a data streaming pipeline that mirrors data from your existing database to Redis Software, so if a record is added or updated, those changes automatically flow into Redis. This no-code solution enables seamless data integration and faster data access so you can build real-time apps at any scale. And now you can seamlessly create, validate, deploy, and monitor your data pipelines directly from Redis Insight. + +### Details + +**Features and improvements** +- [#2839](https://github.com/RedisInsight/RedisInsight/pull/2839), [#2853](https://github.com/RedisInsight/RedisInsight/pull/2853), [#3101](https://github.com/RedisInsight/RedisInsight/pull/3101) Redis Insight now comes with the support for [Redis Data Integration (RDI)](https://redis.io/data-integration/?utm_source=redisinsight&utm_medium=repository&utm_campaign=release_notes) - a powerful tool designed to seamlessly synchronize data from your existing database to Redis in near real-time. RDI establishes a data streaming pipeline that mirrors data from your existing database to Redis Software, so if a record is added or updated, those changes automatically flow into Redis. This no-code solution enables seamless data integration and faster data access so you can build real-time apps at any scale. Use RDI version 1.2.7 or later to seamlessly create, validate, deploy, and monitor your data pipelines within Redis Insight. To get started, switch to the "Redis Data Integration" tab on the page with the list of Redis databases and add your RDI endpoint to Redis Insight. + +**Bugs** +- [#3577](https://github.com/RedisInsight/RedisInsight/pull/3577) Show [information about OSS cluster](https://github.com/RedisInsight/RedisInsight/issues/3157) when connected using TLS. +- [#3575](https://github.com/RedisInsight/RedisInsight/pull/3575) Return [results instead of an empty list](https://github.com/RedisInsight/RedisInsight/issues/3465) for commands written in lowercase. +- [#3613](https://github.com/RedisInsight/RedisInsight/pull/3613) Prevent repetitive buffer overflow by avoiding the resending of unfulfilled commands. +--- +Title: RedisInsight v2.14.0, November 2022 +linkTitle: v2.14.0 (Nov 2022) +date: 2022-11-28 00:00:00 +0000 +description: RedisInsight v2.14.0 +weight: 3 +--- +## 2.14.0 (November 2022) +This is the General Availability (GA) release of RedisInsight 2.14. 
+ +### Highlights +- Support for [search capabilities](https://redis.io/docs/stack/search/) in Browser: Create secondary index via dedicated form, run queries and full-text search in Browser or Tree views +- Ability to resize the column width of key values when displaying hashes, lists, and sorted sets +- Command processing time displayed as part of the result in Workbench + + +### Details +**Features and improvements** +- [#1345](https://github.com/RedisInsight/RedisInsight/pull/1345), [#1346](https://github.com/RedisInsight/RedisInsight/pull/1346), [#1376](https://github.com/RedisInsight/RedisInsight/pull/1376) Added support for [search capabilities](https://redis.io/docs/stack/search/) in Browser tool. Create secondary index of your data using a dedicated form. Conveniently run your queries and full-text search against the preselected index and display results in Browser or Tree views. +- [#1385](https://github.com/RedisInsight/RedisInsight/pull/1385) Resize the column width of key values when displaying hashes, lists, and sorted sets +- [#1354](https://github.com/RedisInsight/RedisInsight/pull/1407) Do not scroll to the end of results when double-clicking a command output in CLI +- [#1347](https://github.com/RedisInsight/RedisInsight/pull/1347) Display command processing time as part of the result in Workbench (time taken to process the command by both RedisInsight backend and Redis) +- [#1351](https://github.com/RedisInsight/RedisInsight/pull/1351) Display the namespaces section in the Database analysis report when no namespaces were found +--- +Title: RedisInsight v1.12, May 2022 +linkTitle: v1.12 (May 2022) +date: 2022-05-24 00:00:00 +0000 +description: RedisInsight v1.12.0 +weight: 11 +--- + +## 1.12.1 (July 2022) + +This is the maintenance release of RedisInsight 1.12 (v1.12.1)! + +### Critical Bug Fix: +- Core: + - When you add or remove a Redis Enterprise Software or Redis Cloud database in RedisInsight v1 that has the RediSearch module loaded, all hashes within that database are deleted. + +### Fixes: +- Core: + - Added curl command to container. + - Fixed container vulnerabilities (CVE-2022-1292, CVE-2022-2068). +- RediSearch: + - Fixed index info to report correct information for RediSearch v2. +- Profiling: + - Added support for profiling module commands. + - Added support for viewing information of clients that use IPv6 addresses. +- RedisGraph: + - Added support for `RO_QUERY` only mode. RedisGraph now responds to the `RO_QUERY` command. + +## 1.12.0 (May 2022) + +This is the General Availability Release of RedisInsight 1.12 (v1.12.0)! + +## Headlines: +- [Authenticate database users](https://docs.redis.com/latest/ri/using-redisinsight/auth-database/): Ask for database username and password +- Support for `GRAPH.RO_QUERY` command in RedisGraph tool. +- Support for variable CPU in RedisAI tool. + +## Full Details + +### Core +- [Authenticate database users](https://docs.redis.com/latest/ri/using-redisinsight/auth-database/): Ask for database username and password + - If enabled, each time a user attempts to open a database previously added to RedisInsight, a form to enter username and password is displayed. This form displays also if a user is idle for a configurable amount of time. +- Fix major container vulnerabilities. +- Decrease Docker image size by discarding unnecessary contents. +- Streams + - Fix slowdown and crash while loading large streams data. + - Use UTC time for stream id timestamp. +- Graph + - Allow scanning for more keys. 
+ - Add support for `GRAPH.RO_QUERY` command. +- Browser + - Fix **Delete key** dialog box that displays when no key is selected. +- RedisAI + - Add support for variable CPU number. +--- +Title: RedisInsight v2.38.0, November 2023 +linkTitle: v2.38.0 (November 2023) +date: 2023-11-29 00:00:00 +0000 +description: RedisInsight v2.38 +weight: 1 +--- +## 2.38 (November 2023) +This is the General Availability (GA) release of RedisInsight 2.38. + +### Highlights +- Major UX improvements and space optimization for a cleaner and more organized Tree view, ensuring easier namespace navigation and faster key browsing. Additionally, in Tree view, you can now sort your Redis key names alphabetically. +- Renamed the application from RedisInsight v2 to simply RedisInsight + +### Details + +**Features and improvements** + +- [#2706](https://github.com/RedisInsight/RedisInsight/pull/2706), [#2783](https://github.com/RedisInsight/RedisInsight/pull/2783) Major UX improvements and space optimization for a cleaner and more organized Tree view. This includes consolidating the display of namespaces and keys in a dedicated section and omitting namespace information from key names in the list of keys. In addition, the Tree view introduces a new option to alphabetically sort Redis key names. +- [#2751](https://github.com/RedisInsight/RedisInsight/pull/2751) Renamed the application from RedisInsight v2 to simply RedisInsight +- [#2799](https://github.com/RedisInsight/RedisInsight/pull/2799) Automatically make three retries to establish or re-establish a database connection if an error occurs + +**Bugs** +- [#2793](https://github.com/RedisInsight/RedisInsight/pull/2793) [Do not require](https://github.com/RedisInsight/RedisInsight/issues/2765) an SSH password or passphrase +- [#2794](https://github.com/RedisInsight/RedisInsight/pull/2794) Prevent [potential crashes](https://github.com/RedisInsight/RedisInsight/issues/2763) caused by using parentheses in usernames on the Windows operating system +- [#2797](https://github.com/RedisInsight/RedisInsight/pull/2797) Avoid initiating a bulk deletion or Profiler after the operating system resumes from sleep mode +--- +Title: RedisInsight v2.18.0, January 2023 +linkTitle: v2.18.0 (Jan 2023) +date: 2023-01-31 00:00:00 +0000 +description: RedisInsight v2.18.0 +weight: 1 +--- +## 2.18.0 (January 2023) +This is the General Availability (GA) release of RedisInsight 2.18. + +### Highlights +- Support for SSH tunnel to connect to your Redis database +- Ability to switch between database indexes while connected to your database +- Recommendations on how to optimize the usage of your database + +### Details +**Features and improvements** +- [#1567](https://github.com/RedisInsight/RedisInsight/pull/1567), [#1576](https://github.com/RedisInsight/RedisInsight/pull/1576), [#1577](https://github.com/RedisInsight/RedisInsight/pull/1577) Connect to your Redis database via SSH tunnel using a password or private key in PEM format. +- [#1540](https://github.com/RedisInsight/RedisInsight/pull/1540), [#1608](https://github.com/RedisInsight/RedisInsight/pull/1608) Switch between database indexes while connected to your database in Browser, Workbench, and Database Analysis. +- [#1457](https://github.com/RedisInsight/RedisInsight/pull/1457), [#1465](https://github.com/RedisInsight/RedisInsight/pull/1465), [#1590](https://github.com/RedisInsight/RedisInsight/pull/1590) Run Database Analysis to generate recommendations on how to save memory and optimize the usage of your database. 
These recommendations are based on industry standards and Redis best practices. Upvote or downvote recommendations to indicate how useful they are.
- [#1598](https://github.com/RedisInsight/RedisInsight/pull/1598) Check and highlight the [JSON](https://redis.io/docs/stack/json/) syntax using the new [Monaco Editor](https://microsoft.github.io/monaco-editor/).
- [#1583](https://github.com/RedisInsight/RedisInsight/pull/1583) Click a pencil icon to make changes to database aliases.
- [#1579](https://github.com/RedisInsight/RedisInsight/pull/1579) Increase the database password length limit to 10,000.
---
aliases: /develop/connect/insight/release-notes
categories:
- docs
- develop
- stack
- oss
- rs
- rc
- oss
- kubernetes
- clients
description:
linkTitle: Release notes
title: Redis Insight release notes
weight: 7
hideListLinks: true
---

Here are the most recent changes for Redis Insight:

{{< table-children columnNames="Version (Date),Release notes" columnSources="LinkTitle,Title" enableLinks="Title" >}}
---
Title: RedisInsight v1.6, June 2020
linkTitle: v1.6 (June 2020)
date: 2020-06-10 00:00:00 +0000
description: Rootless Docker container, Copy keys in Browser and Stream UX improvements
weight: 94
---

## 1.6.3 (July 2020)

Maintenance release for RedisInsight 1.6 including bug fixes and enhancements.

### Headlines:

- The Mac application is now notarized, which simplifies the installation process on OS X
- Fixed key explorer resizing so that long key names are visible
- Fixed key filtering in the Browser

### Full details:

Enhancements and bug fixes
- Core:
  - The Mac application is properly signed and notarized on Apple services.
- Browser:
  - Fixed key explorer resizing so that more characters of long key names are visible.
  - Asynchronous loading of keys in large databases (discovered keys are actionable while the search continues).
  - Improved performance for exact key searches (when the search pattern does not use `*`).
  - Improved the UI that shows the progress of scanning the keys in the database.
  - Fixed a bug where filtering keys by data structure could return wrong results.
  - Fixed the behavior of the EXISTS and TYPE commands when they are subject to ACL restrictions.
  - Fixed the behavior when keys matching filters have ACL restrictions.
  - Improved the "Stop Scan" button behavior to respond immediately.
  - Added a visual indicator to show that, by default, the browser filters out inner keys created by modules.

## 1.6.2 (30 June 2020)

Maintenance release for RedisInsight 1.6 including bug fixes and enhancements.

### Headlines:

- Performance improvements to the Profiler tool for TLS-enabled databases.
- Bug fix: the Feedback button was not visible.

### Full details:

- Enhancements and bug fixes
  - Core:
    - Bug fix: the Feedback button was not visible.
  - Profiler:
    - The native code implementation of the profiling logic was updated to add full support for TLS connections to Redis.
  - Graph:
    - Updated to use a newer version of the Ogma graph visualization library.
  - Analytics:
    - Bug fix: report the OS/platform correctly.

## 1.6.1 (24 June 2020)

Maintenance release for RedisInsight 1.6 including bug fixes and enhancements.

### Headlines:

- Improved support for Redis 6 ACLs with Cluster and Sentinel databases
- Added support for Redis Cluster in the RedisGraph tool
- UX improvements for the RedisGraph tool
- Enriched captured usage events

### Full details:

- Enhancements and bug fixes
  - Core:
    - Improved support for Redis 6 ACLs with Cluster and Sentinel databases
    - Added events that capture RedisInsight usage
  - Graph:
    - Added support for Redis Cluster
    - Added an option to configure the labels displayed in graph nodes (right-click a node)
    - Added the ability to submit a query with 'Ctrl + Enter' in single-line mode
  - TimeSeries:
    - Added the ability to submit a query with 'Ctrl + Enter' in single-line mode
  - RediSearch:
    - Added the ability to submit a query with 'Ctrl + Enter' in single-line mode
    - Better handling of long index names in the index selector dropdown
    - Fixed a bug with pagination on queries that contain whitespace in the query string
  - Gears:
    - Added a button to remove an execution from the executed functions list
    - Added an option to dismiss the warning message when executing a function
    - Fixed the error message shown when a graph cannot be visualized
  - Streams:
    - Added persistence of the user's selected stream columns when switching pages

## 1.6.0 (11 June 2020)

This is the General Availability Release of RedisInsight 1.6!

### Headlines:

- The RedisInsight Docker container is now rootless, in line with best practices for containers
- The Browser now lets you quickly copy keys and resize the key explorer
- The Streams tool now lets you sort entries by timestamp, activate or deactivate live streaming of entries, and persist your selection of fields to keep context when switching between streams or other RedisInsight tools
- New telemetry system to capture tool usage, along with updated privacy settings

### Full details:

- Features
  - Core:
    - Improved the Docker container by making it rootless
    - Added a visual indicator to show the configured user when connecting to Redis 6 using ACLs
    - Improved navigation to the application's settings
  - Browser:
    - Added the ability to resize the Key explorer panel
    - Added options to easily copy keys
    - Added the ability to filter out inner keys
  - Streams:
    - Selected fields to display are now persisted to keep context when switching to another stream or RedisInsight tool
    - Added the ability to sort entries "ascending" or "descending" based on the timestamp
    - Added the ability to activate or deactivate live streaming of events
    - Updated the timestamp font family for consistency
  - CLI:
    - Added ACL command hints and summary info in the CLI

- Bug Fixes:
  - Core:
    - Fixed an issue fetching data from Redis Cloud databases with a replica enabled
  - Browser:
    - Fixed an issue where a field named "key" was not shown in hash keys
    - Fixed the wrong number of database keys being displayed
    - Fixed an error when trying to view a Java serialized object
  - Stream:
    - Fixed an issue with live streaming of entries
    - Fixed the UI when no entries are present in a stream
  - Bulk Actions:
    - Fixed the responsiveness of the UI
  - RedisGears:
    - Fixed focus on the editor and the display of requirements
---
Title: RedisInsight v2.34.0, September 2023
linkTitle: v2.34.0 (September 2023)
date: 2023-09-28 00:00:00 +0000
description: RedisInsight v2.34
weight: 1
---
## 2.34 (September 2023)
This is the General Availability (GA) release of RedisInsight 2.34.
+ +### Highlights +- UX improvements to simplify the in-app provisioning and usage of a free [Redis Cloud](https://redis.com/comparisons/oss-vs-enterprise/?utm_source=redisinsight&utm_medium=rel_notes&utm_campaign=2_34) database with RedisInsight interactive tutorials. This will allow you to learn easily, among other things, how to leverage the native JSON data structure supporting structured querying and full-text search, including vector similarity search for your AI use cases +- Ability to refresh the list of [search indexes](https://redis.io/docs/interact/search-and-query/?utm_source=redisinsight&utm_medium=main&utm_campaign=main) displayed in Browser +- Set the color theme to follow the local system preferences + +### Details + +**Features and improvements** +- [#2585](https://github.com/RedisInsight/RedisInsight/pull/2585) UX improvements to simplify the in-app provisioning and usage of a free [Redis Cloud](https://redis.com/comparisons/oss-vs-enterprise/?utm_source=redisinsight&utm_medium=rel_notes&utm_campaign=2_34) database with RedisInsight interactive tutorials. To provision a new database, click the "Try Redis Cloud" banner in the list of database connections page and follow the steps +- [#2606](https://github.com/RedisInsight/RedisInsight/pull/2606) Ability to refresh the list of [search indexes](https://redis.io/docs/interact/search-and-query/?utm_source=redisinsight&utm_medium=main&utm_campaign=main) displayed in Browser +- [#2593](https://github.com/RedisInsight/RedisInsight/pull/2593) UX optimizations to improve the back navigation to the list of databases, including for small resolutions +- [#2599](https://github.com/RedisInsight/RedisInsight/pull/2599) Added an option to set the color theme to follow the local system preferences +- [#2563](https://github.com/RedisInsight/RedisInsight/pull/2563) Load a new library from the Functions tab within the [Triggers and Functions](https://redis.com/blog/introducing-triggers-and-functions/?utm_source=redisinsight&utm_medium=main&utm_campaign=main) tool +- [#2496](https://github.com/RedisInsight/RedisInsight/pull/2496) Set milliseconds as a default unit in Slow Log + +**Bugs** +- [#2587](https://github.com/RedisInsight/RedisInsight/pull/2587) Display detailed [errors](https://github.com/RedisInsight/RedisInsight/issues/2562) in transactions run via CLI or Workbench +--- +aliases: /develop/connect/insight/tutorials/insight-stream-consumer +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to manage streams and consumer groups in Redis Insight +linkTitle: Streams +title: Manage streams and consumer groups in Redis Insight +weight: 5 +--- + +A _stream_ is an append-only log file. +When you add data to it, you cannot change it. +That may seem like a disadvantage; however, a stream serves as a log or single source of truth. +It can also be used as a buffer between processes that work at different speeds and do not need to know about each other. +For more conceptual information about streams, see [Redis Streams]({{< relref "/develop/data-types/streams" >}}). + +In this topic, you will learn how to add and work with streams as well as consumer groups in Redis Insight. + +Here's a stream that models temperature and humidity sensors. Processes interacting with the stream perform one of two roles: _consumer_ and _producer_. +The point of a stream is that it's not going to end, so you cannot capture whole datasets and do some processing on them. 

In this stream, sensors are considered _producers_, which broadcast data.
A _consumer_ reads from the stream and does some work on it.
For example, if the temperature is above a certain threshold, a consumer puts a message out to turn on the air conditioning in that unit or to notify the maintenance team.

It is possible to have multiple consumers doing different jobs, one measuring humidity and another taking temperature measurements over periods of time.
Redis stores a copy of the entire dataset in memory, which is a finite resource.
To avoid runaway data growth, streams can be trimmed when you add something to them.
When adding to a stream with [`XADD`]({{< relref "/commands/xadd" >}}), you can optionally specify that the stream should be trimmed to a specific or approximate number of the newest entries, or to only include entries whose IDs are higher than an ID you specify.
You can also manage the storage required for streaming data using key expiry: for example, write each day's data to its own stream in Redis and expire each stream's key after a period of time, say a week.
An ID can be any number, but each new entry in the stream must have an ID whose value is higher than the last ID added to the stream.

## Adding new entries

Use [`XADD`]({{< relref "/commands/xadd" >}}) with `*` for the ID to have Redis automatically generate a new ID for you, consisting of a millisecond-precision timestamp, a dash, and a sequence number. For example, `1656416957625-0`. Then supply the field names and values to store in the new stream entry.

There are a couple of ways of retrieving entries. You can retrieve entries by time range, or you can ask for everything that has happened since a timestamp or ID that you specify. With a single command you can ask for anything from, say, 10:30 until 11:15 am on a given day.

## Consumer groups

A more realistic use case would be a system with many temperature sensors whose data Redis puts in a stream, recording the time each reading arrives and keeping the readings in order.

On the right side we have two consumers that read the stream. One of them raises an alert if the temperature is over a certain value, texting the maintenance crew that they need to do something; the other is a data warehouse process that takes the data and puts it into a database.

They run independently of each other.
In the upper right there is another sort of task.
Let's assume that alerting and the data warehouse are really fast: deciding whether the temperature is larger than a specific value might take a millisecond, so alerting can keep up with the data flow.
One way you can scale consumers is with _consumer groups_, which allow multiple instances of the same consumer (the same code) to work as a team to process the stream.

## Managing streams in Redis Insight

You can add a stream in Redis Insight in two ways: create a new stream or add to an existing stream.

To create a stream, start by selecting the key type (stream).
You cannot set a time to live (TTL) on an individual message in a stream; a TTL can only be set on a whole Redis key. Name the stream _mystream_.
Then, set the *Entry ID* to `*` to default to a timestamp-based ID.
If you have your own ID generation strategy, enter the next ID from your sequence. Remember that the ID must be higher than the ID of any other entry in the stream.

Then, enter fields and values, using the **+** button to add more than one (for example, name and location).
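
If you prefer the command line, the same operations map onto a few Redis commands. The following is a minimal sketch using the `mystream` key and the name and location fields from above; the field values, the `MAXLEN ~ 1000` trim threshold, and the timestamps are only examples:

{{< highlight bash >}}
# Add an entry with an auto-generated ID (*) and two fields.
redis-cli XADD mystream '*' name "sensor-1" location "warehouse-4"

# Add another entry, trimming the stream to roughly the 1000 newest entries.
redis-cli XADD mystream MAXLEN '~' 1000 '*' name "sensor-2" location "warehouse-4"

# Retrieve every entry between two millisecond timestamps.
redis-cli XRANGE mystream 1656416957625 1656420000000
{{< / highlight >}}

Entries added this way appear in the **Streams** view just like entries added through the form.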

Now you have a stream that appears in the **Streams** view and you can continue adding fields and values to it.

Redis Insight runs read commands for you so you can see the stream entries in the **Streams** view.
The **Consumer Groups** view shows each consumer in a given consumer group, the last time Redis allocated a message to it, the ID of that message, how many times that has happened, and whether the consumer has told Redis (using the [`XACK`]({{< relref "/commands/xack" >}}) command) that it has finished working with that entry.

## Monitor temperature and humidity from sensors in Redis Insight

This example shows how to bring an existing stream into Redis Insight and work with it.

### Setup

1. Install [Redis Insight](https://redis.com/redis-enterprise/redis-insight/?_ga=2.48624486.1318387955.1655817244-1963545967.1655260674#insight-form).
2. Download and install [Node.js](https://nodejs.org/en/download/) (LTS version).
3. Install [Redis]({{< relref "/operate/oss_and_stack/install" >}}). In Docker, check that Redis is running locally on the default port 6379 (with no password set).
4. Clone the [code repository](https://github.com/redis-developer/introducing-redis-talk) for this example.
See the [README](https://github.com/redis-developer/introducing-redis-talk/tree/main/streams) for more information about this example and installation tips.
5. On your command line, navigate to the folder containing the code repository and use the Node.js package manager (npm) to install the project dependencies.

    {{< highlight bash >}}
    npm install
    {{< / highlight >}}

### Run the producer

To start the producer, which will add a new entry to the stream every few seconds, enter:

{{< highlight bash >}}
npm run producer

> streams@1.0.0 producer
> node producer.js

Starting producer...
Adding reading for location: 62, temperature: 40.3, humidity: 36.5
Added as 1632771056648-0
Adding reading for location: 96, temperature: 15.4, humidity: 70
Added as 1632771059039-0
...
{{< / highlight >}}

The producer runs indefinitely.
Press `Ctrl+C` to stop it.
You can start multiple instances of the producer if you want to add entries to the stream faster.

### Run the consumer

To start the consumer, which reads from the stream every few seconds, enter:

{{< highlight bash >}}
npm run consumer

> streams@1.0.0 consumer
> node consumer.js

Starting consumer...
Resuming from ID 1632744741693-0
Reading stream...
Received entry 1632771056648-0:
[ 'location', '62', 'temp', '40.3', 'humidity', '36.5' ]
Finished working with entry 1632771056648-0
Reading stream...
Received entry 1632771059039-0:
[ 'location', '96', 'temp', '15.4', 'humidity', '70' ]
{{< / highlight >}}

The consumer stores the last entry ID that it read in a Redis string at the key `consumer:lastid`. It uses this string to pick up from where it left off after it is restarted. Try this out by stopping it with `Ctrl+C` and restarting it.

Once the consumer has processed every entry in the stream, it will wait indefinitely for instances of the producer to add more:

{{< highlight bash >}}
Reading stream...
No new entries since entry 1632771060229-0.
Reading stream...
No new entries since entry 1632771060229-0.
Reading stream...
{{< / highlight >}}

Stop it using `Ctrl+C`.

### Run a consumer group

A consumer group consists of multiple consumer instances working together. Redis manages the allocation of entries read from the stream to the members of a consumer group.
A consumer in a group will receive a subset of the entries, with the group as a whole receiving all of them. When working in a consumer group, a consumer process must acknowledge receipt/processing of each entry. + +Using multiple terminal windows, start three instances of the consumer group consumer, giving each a unique name: + +{{< highlight bash >}} +npm run consumergroup consumer1 + +> streams@1.0.0 consumergroup +> node consumer_group.js -- "consumer1" + +Starting consumer consumer1... +Consumer group temphumidity_consumers exists, not created. +Reading stream... +Received entry 1632771059039-0: +[ 'location', '96', 'temp', '15.4', 'humidity', '70' ] +Acknowledged processing of entry 1632771059039-0. +Reading stream... +{{< / highlight >}} + +In a second terminal: + +{{< highlight bash >}} +npm run consumergroup consumer2 +{{< / highlight >}} + +And in a third: + +{{< highlight bash >}} +npm run consumergroup consumer3 +{{< / highlight >}} + +The consumers will run indefinitely, waiting for new messages to be added to the stream by a producer instance when they have collectively consumed the entire stream. +Note that in this model, each consumer instance does not receive all of the entries from the stream, but the three members of the group each receive a subset. + +### View the stream in Redis Insight + +1. Launch Redis Insight. +2. Select `localhost:6379` +3. Select **STREAM**. Optionally, select full screen from the upper right corner to expand the view. + + + + +You can now toggle between **Stream** and **Consumer Groups** views to see your data. +As mentioned earlier in this topic, a stream is an append-only log so you can't modify the contents of an entry, but you can delete an entire entry. +A case when that's useful is in the event of a so-called _poison-pill message_ that can cause consumers to crash. You can physically remove such messages in the **Streams** view or use the [`XDEL`]({{< relref "/commands/xdel" >}}) command at the command-line interface (CLI). + +You can continue interacting with your stream at the CLI. For example, to get the current length of a stream, use the [`XLEN`]({{< relref "/commands/xlen" >}}) command: + +{{< highlight bash >}} +XLEN ingest:temphumidity +{{< / highlight >}} + +Use streams for auditing and processing events in banking, gaming, supply chain, IoT, social media, and so on. + +## Related topics + +- [Redis Streams]({{< relref "/develop/data-types/streams" >}}) +- [Introducing Redis Streams with Redis Insight, node.js, and Python](https://www.youtube.com/watch?v=q2UOkQmIo9Q) (video)--- +aliases: /develop/connect/insight/debugging +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Redis Insight debugging information +linkTitle: Debugging information +stack: true +title: Redis Insight debugging information +weight: 6 +--- + +If you are experiencing errors or other issues when using Redis Insight, follow the steps below to learn more about the errors and to identify root cause. + +## Connection issues + +If you experience connection issues, try these steps. + +### 1. Launch Redis Insight in debug mode + +Run the following command to launch Redis Insight in debug mode to investigate connection issues: + +* **Windows**: + + `cmd /C “set DEBUG=ioredis* && ".\Redis Insight.exe"”` + +* **macOS** (from the Applications folder): + + `DEBUG=ioredis* open "Redis Insight.app"` + +* **Linux**: + + `DEBUG=ioredis* "redis insight"` + +### 2. 
Investigate logs + +You can review the Redis Insight log files (files with a `.log` extension) to get detailed information about system issues. +These are the locations on supported platforms: + +- **Docker**: In the `/data/logs` directory *inside the container*. +- **macOS**: In the `/Users//.redis-insight` directory. +- **Windows**: In the `C:\Users\\.redis-insight` directory. +- **Linux**: In the `/home//.redis-insight` directory. + +## Other issues +### To debug issues other than connectivity + +* **Windows**: + + `cmd /C “set DEBUG=* && ".\Redis Insight.exe"”` + +* **macOS** (from the Applications folder): + + `DEBUG=* open "Redis Insight.app"` + +* **Linux**: + + `DEBUG=* "redis insight"` + +### Get detailed Redis Insight logs + +* **Windows**: + + `cmd /C “set STDOUT_LOGGER=true && set LOG_LEVEL=debug && set LOGGER_OMIT_DATA=false && ".\Redis Insight.exe"”` + +* **macOS** (from the Applications folder): + + `LOG_LEVEL=debug LOGGER_OMIT_DATA=false open "Redis Insight.app"` + +* **Linux**: + + `LOG_LEVEL=debug LOGGER_OMIT_DATA=false "redis insight"` + + Note: if you use LOGGER_OMIT_DATA=false, logs may contain sensitive data. + +### To log everything +* **Windows**: + + `cmd /C “set STDOUT_LOGGER=true && set LOG_LEVEL=debug && set LOGGER_OMIT_DATA=false && set DEBUG=* && ".\Redis Insight.exe"”` + +* **macOS** (from the Applications folder): + + `LOG_LEVEL=debug LOGGER_OMIT_DATA=false DEBUG=* open "Redis Insight.app"` + +* **Linux**: + + `LOG_LEVEL=debug LOGGER_OMIT_DATA=false DEBUG=* "redis insight"` + + Note: if you use LOGGER_OMIT_DATA=false or DEBUG=*, logs may contain sensitive data.--- +aliases: /develop/connect/insight/rdi-connector +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect to RDI from Redis Insight, configure pipelines, and more. +linkTitle: RDI in Redis Insight +stack: true +title: RDI in Redis Insight +weight: 4 +--- + +Redis Data Integration (RDI) and its [ingest pipeline capability]({{< relref "/integrate/redis-data-integration" >}}) is an end-to-end solution for mirroring your application's primary database in Redis. RDI employs a capture data change mechanism and a stream processor to map and transform source data such as relational tables into fast Redis data structures that match your use cases. +You can read more about RDI's ingest architecture [on these pages]({{< relref "/integrate/redis-data-integration/architecture" >}}). + +As of version `2.54.0`, Redis Insight includes RDI connectivity, which allows you to connect to [RDI management planes]({{< relref "/integrate/redis-data-integration/architecture" >}}#how-rdi-is-deployed), create, test, and deploy [RDI pipelines]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines" >}}), and view RDI statistics. + +## Connect + +Open Redis Insight, click on the **Redis Data Integration** tab, and then click on one of the two **+ Add RDI Endpoint** buttons as shown below. + + + +Enter your RDI server details in the provided dialog. The **RDI Alias** field can be any name you choose and it will be used as the primary name in the **RDI Instances** list view. + + + +You'll receive notification if your connection is successful. + + + +## Create, test, and deploy RDI pipelines + +Begin by clicking the alias of your newly configured RDI endpoint in the **RDI Instances** view (for example, **Test connection** in the above image). You'll see the following dialog in the center of the screen. 
+ + + +Choose from the following options: + +- **Download from server** - Download an existing pipeline from your RDI configuration. +- **Upload from file** - Upload YAML pipeline files from your local computer in zip format. +- **Create new pipeline** - Use Redis Insight's built-in editors to create a new pipeline either from scratch or using one of the built-in templates. + +Each of these menu options will be described in more detail in subsequent sections. + +There are also equivalent buttons at the top of the editor pane for the first two of these functions. + + + +If you'd rather start with an empty configuration, exit the dialog, which will leave you in the **Configuration file** editor where you can begin editing the configuration component of your pipeline; the `config.yaml` file. + +### Download a pipeline from your RDI configuration + +Click the **Download from server** button in the **Start with your pipeline** dialog to download a previously defined pipeline from your RDI configuration. The downloaded pipeline will be displayed in the **Pipeline management** pane. As shown below, each pipeline consists of a configuration file (`config.yaml`) and zero or more `job` YAML files. The configuration file will be displayed in the center editor panel. + + + +### Upload a pipeline from your local machine + +Click the **Upload from file** button in the **Start with your pipeline** dialog to upload your configuration and job YAML files from your local machine. The files must be stored in a zip file that has the following structure. + +``` +├── config.yaml +└── jobs + └── job1.yaml +``` + +The `config.yaml` file, your configuration YAML file, is required. The `jobs` directory can be empty, as job pipelines are not required, but the empty directory must exist in the zip file. Otherwise, the `jobs` folder might contain one or more job YAML files. + +### Create a new configuration file using the built-in editor + +Click the **Create new pipeline** button in the **Start with your pipeline** dialog to create a new pipeline using the built-in editors. After doing so, you'll enter the **Configuration file** editor and you'll see an open **Select a template** dialog in the upper right-hand corner of the editor. + + + +Make your selections in the provided fields: + +- **Pipeline type** is set to **Ingest** by default. +- **Database type** has six options: + - mongodb + - cassandra + - mysql + - oracle + - postgresql + - sqlserver + +{{< note >}} +The options listed in the above menus depend on the capabilities of your RDI configuration. +{{< /note >}} + +After you make your selections and click **Apply**, Redis Insight will populate the editor window with an appropriate template. To start from scratch, click **Cancel**. + +See the [RDI documentation]({{< relref "/integrate/redis-data-integration/reference/config-yaml-reference" >}}) for information about required fields. + + + +### Test your target database connection + +After you've created your **Target database configuration**, you can test the connection using the **Test Connection** button in the bottom right of the editor pane. A new panel will open to the right containing the test results as shown below. + + + +### Create a new transformation job file using the built-in editor + +In the **Pipeline Management** pane, click the `+` next to the **Jobs** folder and enter a name for the new transformation job. +Next, click the job name you just created. +This will take you to the job editor with the template selection menu open. 
Make your selection and click **Apply**. Redis Insight will populate the editor window with an appropriate template. To start from scratch, click **Cancel**. + +{{< note >}} +The options listed in the above menu depend on the capabilities of your RDI configuration. +{{< /note >}} + +The [RDI documentation]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples" >}}) has several examples of transformation jobs that can help get you started. Note: RDI uses a very specific YAML format for job files. See [here]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines" >}}#job-files) for more information. + + + +## Use the built-in editors + +The Redis Insight pipeline file editors are context-aware. They provide auto-completion, syntax highlighting, and error detection for: + +- YAML files in the configuration and job file editors +- JMESPath and SQL function snippets in a dedicated editor. To open the JMESPath and SQL editor, click the **SQL and JMESPathEditor** button as shown above. A new editor window will open in the lower half of the screen. + +If you decided to write your own configuration pipeline without using a template, you would see auto-completion prompts such as the one shown below. + + + +While this isn't a replacement for the RDI documentation, it can help speed you along after you have basic familiarity with the building blocks of RDI pipeline files. + +Redis Insight will also highlight any errors as shown below. + + + +Here's an example showing the SQL and JMESPath editor pane. Note the toggle in the bottom left corner of this editor pane. Clicking it allows you to select from: + +- SQLite functions +- JMESPath + +After constructing your SQLite or JMESPath code, copy it to the main editor window. Here's a [reference]({{< relref "/integrate/redis-data-integration/reference/jmespath-custom-functions" >}}) to the supported JMESPath extension functions and expressions that you can use in your job files. + + + +{{< warning >}} +Any changes you make in the editors will be lost if you exit Redis Insight without saving your work. To save any changes you made to your pipeline files, deploy them to your RDI server (see below) or download the modified files as a zip file to your local disk using the **Download** button in the top right of the RDI window. Redis Insight will prepend a green circle on unsaved/undeployed files. + + +{{< /warning >}} + +## Dry run transformation job pipelines + +After you've created a transformation job pipeline, you can execute a dry run on the RDI server. To do that, click on **Dry Run** in the lower right side of the editor pane. A new **Test transformation logic** panel will open to the side. There are two vertically-stacked panes: **Input** and **Results**. In the **Input** section, enter JSON data that will trigger the transformation. Any results will be displayed in the **Results** section. + +There are two tabs in the **Results** section: + +1. **Transformations** - this is where you'll see JSON output from your dry run. +1. **Output** - (not shown) this is where you'll see the Redis commands that would have been run in a real scenario. + +Here's an example. + + + +## Deploy pipelines and add target DB to Redis Insight + +If you're satisfied with your configuration and transformation job pipelines, you can deploy them to the RDI management plane. Click the **Deploy Pipeline** button to proceed. 
+ +After your pipelines have been deployed, you can add the RDI target Redis database defined in your `config.yaml` file to Redis Insight. +Doing so will allow you to monitor key creation from your RDI pipeline over time. + +## View RDI statistics + +You can view various statistics for your RDI deployment. To do so, click the **Pipeline Status** menu button in the left side menu panel. + + + +Each statistics section is either static or refreshed automatically at a particular interval that you set. +The first section, **Processing performance information** is set by default to refresh every 5 seconds. +The other sections are static and need to be refreshed manually by pressing the refresh button at the top right of each section. +You can also set up automatic refresh for the other sections. + +To set up automatic refresh for one or more statistics sections, click on the downward arrow at the end of the **Last refresh** line. +Then enable the **Auto Refresh** setting and set your desired refresh interval in seconds. This is shown in the previous image. +--- +aliases: /develop/connect/insight +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Visualize and optimize Redis data, connect to RDI, and more. +hideListLinks: true +linkTitle: Redis Insight +stack: true +title: Redis Insight +weight: 1 +--- + +[![Discord](https://img.shields.io/discord/697882427875393627?style=flat-square)](https://discord.gg/QUkjSsk) +[![Github](https://img.shields.io/static/v1?label=&message=repository&color=5961FF&logo=github)](https://github.com/redisinsight/redisinsight/) + +Redis Insight is a powerful tool for visualizing and optimizing data in Redis, making real-time application development easier and more fun than ever before. Redis Insight lets you do both GUI- and CLI-based interactions in a fully-featured desktop GUI client. + +### Installation and release notes + +* See [these pages]({{< relref "/operate/redisinsight/install" >}}) for installation information. + +* [Redis Insight Release Notes](https://github.com/Redis-Insight/Redis-Insight/releases) + +## Overview + +### Connection management + +* Automatically discover and add your local Redis databases (that use standalone connection type and do not require authentication). +* Discover your databases in Redis Enterprise Cluster and databases with Flexible plans in Redis Cloud. +* Use a form to enter your connection details and add any Redis database running anywhere (including Redis Open Source cluster or sentinel). +* Connect to a Redis Data Integration (RDI) management plane, create, test, and deploy RDI pipelines, and view RDI statistics. + + + +{{< note >}} +When you add a Redis database for a particular user using the `username` and `password` fields, that user must be able to run the `INFO` command. See the [access control list (ACL) documentation]({{< relref "/operate/oss_and_stack/management/security/acl" >}}) for more information. +{{< /note >}} + +### Redis Copilot + +Redis Copilot is an AI-powered developer assistant that helps you learn about Redis, explore your Redis data, and build search queries in a conversational manner. It is available in Redis Insight as well as within the Redis public documentation. + +Currently, Redis Copilot provides two primary features: a general chatbot and a context-aware data chatbot. + +**General chatbot**: the knowledge-based chatbot serves as an interactive and dynamic documentation interface to simplify the learning process. 
You can ask specific questions about Redis commands, concepts, and products, and get responses on the fly. The general chatbot is also available in our public docs. + +**My data chatbot**: the context-aware chatbot available in Redis Insight lets you construct search queries using everyday language rather than requiring specific programming syntax. This feature lets you query and explore data easily and interactively without extensive technical knowledge. + +Here's an example of using Redis Copilot to search data using a simple, natural language prompt. + + + +See the [Redis Insight Copilot FAQ]({{< relref "/develop/tools/insight/copilot-faq" >}}) for more information. + +### RDI in Redis Insight + +Redis Insight includes Redis Data Integration (RDI) connectivity, which allows you to connect to an RDI management plane, and create, test, and deploy RDI pipelines. Read more about this feature [here]({{< relref "/develop/tools/insight/rdi-connector" >}}). + +### Browser + +Browse, filter and visualize your key-value Redis data structures. +* [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) support for lists, hashes, strings, sets, sorted sets, and streams +* CRUD support for [JSON]({{< relref "/develop/data-types/json/" >}}) +* Group keys according to their namespaces + + + +* View, validate, and manage your key values in a human-readable format using formatters that prettify and highlight data in different formats (for example, Unicode, JSON, MessagePack, HEX, and ASCII) in the Browser tool. + + + +### Profiler + +Analyze every command sent to Redis in real time. + + + +### CLI + +The CLI is accessible at any time within the application. +* Employs integrated help to deliver intuitive assistance +* Use together with a convenient command helper that lets you search and read on Redis commands. + + + +### Workbench + +Workbench is an advanced command line interface with intelligent command auto-complete and complex data visualization support. +* Built-in guides: you can conveniently discover Redis and Redis Open Source features using the built-in guides. +* Command auto-complete support for all features in Redis and Redis Open Source. +* Advanced, schema-aware auto-complete for Redis Query Engine, which provides for faster query building with context-sensitive suggestions that recognize indexes, schemas, and fields based on your current query. Start typing any Redis Query Engine command in to try this feature. See below for an example of an in-progress `FT.SEARCH` command. + + + +Workbench also includes: + +* Visualizations of your indexes, queries, and aggregations. +* Visualizations of your [time series]({{< relref "/develop/data-types/timeseries/" >}}) data. + + + +## Tools + +### Database analysis + +Use the database analysis tool to optimize the performance and memory usage of your Redis database. Check data type distribution and memory allocation and review the summary of key expiration time and memory to be freed over time. Inspect the top keys and namespaces sorted by consumed memory or key length and count of keys, respectively. Capture and track the changes in your database by viewing historical analysis reports. Next figure shows a sample database analysis report. + +{{< note >}} +The database analysis tool will only analyze up to 10,000 keys. If more than 10,000 keys are present, the tool will attempt to use extrapolation in its analysis. 
{{< /note >}}

### Redis Streams support

Create and manage streams by adding, removing, and filtering entries per timestamp. To see and work with new entries, enable and customize the automatic refresh rate.

View and manage the list of consumer groups. See the existing consumers in a given consumer group as well as the last messages delivered to them. Inspect the list of pending messages, explicitly acknowledge the processed items, or claim unprocessed messages via Redis Insight.

### Search features

If you're using the indexing, querying, or full-text search features of Redis Open Source, Redis Insight provides UI controls to quickly and conveniently run search queries against a preselected index. You can also create a secondary index of your data in a dedicated pane.

### Bulk actions

Easily and quickly delete multiple keys of the same type and/or with the same key name pattern in bulk. To do so, in the List or Tree view, set filters per key type or key names and open the Bulk Actions section. The section displays a summary of all the keys with the expected number of keys that will be deleted based on the set filters.

When the bulk deletion is completed, Redis Insight displays the results of this operation with the number of keys processed and the time taken to delete the keys in bulk.
Use bulk deletion to optimize the usage of your database based on the results from the Redis database analysis.

### Slow Log

The Slow Log tool displays the list of logs captured by the SLOWLOG command to analyze all commands that exceed a specified runtime, which helps with troubleshooting performance issues. Specify both the runtime and the maximum length of the Slow Log (which are server configurations) to configure the list of commands logged, and set the auto-refresh interval to automatically update the list of commands displayed.

## Plugins

With Redis Insight you can now also extend the core functionality by building your own data visualizations. See our [plugin documentation](https://github.com/Redis-Insight/Redis-Insight/wiki/Plugin-Documentation) for more information.

## Telemetry

Redis Insight includes an opt-in telemetry system. This helps us improve the developer experience of the app. We value your privacy; all collected data is anonymized.

## Log files

You can review the Redis Insight log files (files with a `.log` extension) to get detailed information about system issues.
These are the locations on supported platforms:

- **Docker**: In the `/data/logs` directory *inside the container*.
- **Mac**: In the `/Users/<username>/.redis-insight` directory.
- **Windows**: In the `C:\Users\<username>\.redis-insight` directory.
- **Linux**: In the `/home/<username>/.redis-insight` directory.

{{< note >}}
You can install Redis Insight on operating systems that are not officially supported, but it may not behave as expected.
{{< /note >}}

## Redis Insight API (only for Docker)

If you are running Redis Insight from [Docker]({{< relref "/operate/redisinsight/install/install-on-docker" >}}),
you can access the API from `http://localhost:5540/api/docs`.

## Feedback

To provide your feedback, [open a ticket in our Redis Insight repository](https://github.com/Redis-Insight/Redis-Insight/issues/new).

## License

Redis Insight is licensed under the [SSPL](https://github.com/Redis-Insight/Redis-Insight/blob/main/LICENSE) license.
+--- +Title: Redis for VS Code v1.2.0, December 2024 +linkTitle: v1.2.0 (December 2024) +date: 2024-12-19 00:00:00 +0000 +description: Redis for VS Code v1.2 +weight: 99 +--- + +## 1.2.0 (December 2024) + +This is the General Availability (GA) release of Redis for VS Code 1.2. + +### Headlines +* Work with keys across multiple database indexes, which are automatically discovered and displayed in the database list. +* Support for adding multiple elements to the head or tail of Redis lists, for both new and existing keys. +* Auto-refresh the list of keys and key values with a customizable timer. +* Delete and update previously added CA and client certificates to keep them updated. + +### Details + +- [#223](https://github.com/RedisInsight/Redis-for-VS-Code/pull/223) Work with keys across multiple database indexes. Database indexes with keys are automatically discovered and displayed in the database list. +- [#207](https://github.com/RedisInsight/Redis-for-VS-Code/pull/207) Support for adding multiple elements to the head or tail of Redis lists for new and existing key. +- [#226](https://github.com/RedisInsight/Redis-for-VS-Code/pull/226) Auto-refresh the list of keys and key values with a customizable timer. To do so, enable the Auto-refresh mode by clicking the control next to the Refresh button and setting the refresh rate. +- [#224](https://github.com/RedisInsight/Redis-for-VS-Code/pull/224) Ability to delete previously added CA and Client certificates to keep them up-to-date. +- [#224](https://github.com/RedisInsight/Redis-for-VS-Code/pull/224) Enhanced both the Java and PHP serialized formatters: the Java formatter now supports date and time data, while the PHP formatter includes UTF-8 encoding for better handling of special characters and multi-language data. +- [#224](https://github.com/RedisInsight/Redis-for-VS-Code/pull/224) Keep databases and the list of keys [expanded](https://github.com/RedisInsight/Redis-for-VS-Code/issues/217) after navigating away. +- [#226](https://github.com/RedisInsight/Redis-for-VS-Code/pull/226) New users can optionally encrypt sensitive data, such as connection certificates and passwords. Existing users can enable encryption by deleting the ~/.redis-for-vscode/redisinsight.db file and re-adding their database connections. + +**Bugs** +- [#224](https://github.com/RedisInsight/Redis-for-VS-Code/pull/224) Resolved an issue where large integers in JSON keys were being rounded, ensuring data integrity. +- [#224](https://github.com/RedisInsight/Redis-for-VS-Code/pull/224) Saved SNI and SSH connection information for newly added database connections. +- [#224](https://github.com/RedisInsight/Redis-for-VS-Code/pull/224) Fixed an issue to display multiple hash fields when expanding a hash value. + +### Get started with Redis for VS Code +Install the extension from the [Visual Studio Code Marketplace](https://marketplace.visualstudio.com/items?itemName=redis.redis-for-vscode) to use it. +--- +Title: Redis for VS Code v1.0.0, September 2024 +linkTitle: v1.0.0 (September 2024) +date: 2024-09-06 00:00:00 +0000 +description: Redis for VS Code v1.0 +weight: 99 +--- + +## 1.0.0 (September 2024) + +This is the first release of Redis for VS Code. + +Redis for VS Code is the official Visual Studio Code extension that provides an intuitive and efficient GUI for Redis databases, developed by Redis. + +### Headlines + +* Universal Redis Support: Connect to any Redis instance, including Redis Open Source, Redis Cloud, Redis Software, and Redis on Azure Cache. 
+ +* Advanced Connectivity: Supports TLS certificates and SSH tunnels, with an option for automatic data decompression for GZIP, SNAPPY, Brotli, and more. + +* Data types: Supports strings, hashes, lists, sets, sorted sets, and JSON. + +* Human-readable data representation: Offers formatters like ASCII, JSON, Binary, Hex, 32-bit, and 64-bit vectors, and others. + +* Integrated Redis CLI: Leverage Redis CLI with syntax preview as you type commands. + +### Details + +- Database connections: + +  - Connect to any Redis instance, including Redis Open Source, Redis Cloud, Redis Software, and Redis on Azure Cache. + +  - View, edit, and manage your Redis database connections. + +  - Supports TLS connections and SSH tunnels for secure access. + +  - Automatically handle data compressed with GZIP, LZ4, SNAPPY, ZSTD, Brotli, or PHP GZCompress. + +  - Choose and work with a specific logical database within your Redis instance. + +- Redis data structures: + +  - Use an intuitive tree view interface to browse, filter, and visualize Redis key-value data structures. + +  -  Perform create, read, update, and delete operations on the following Redis data types: + +    - Strings + +    - Hashes + +    - Lists + +    - Sets + +    - Sorted sets + +    - JSON + +- View your data in multiple human-readable formats, including Unicode, ASCII, Binary, HEX, JSON, Msgpack, Pickle, Protobuf, PHP serialized, Java serialized, and Vector (32 and 64-bit). + + - Sort by key names and apply filters by pattern or data type for quick and precise data access. + + - Conduct detailed searches within fields in hashes, indexes in lists, and members in sets and sorted sets. + +- Redis CLI: + +  - Access a built-in Redis CLI with improved type-ahead command suggestions, helping you execute commands accurately and efficiently. + +### Get started with Redis for VS Code + +This repository contains the source code for the Redis for VS Code extension. + +Install the extension from the [Visual Studio Code Marketplace](https://marketplace.visualstudio.com/items?itemName=redis.redis-for-vscode) to use it. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: +linkTitle: Release notes +title: Redis for VS Code release notes +weight: 7 +hideListLinks: true +--- + +Here are the most recent changes for Redis for VS Code: + +{{< table-children columnNames="Version (Date),Release notes" columnSources="LinkTitle,Title" enableLinks="Title" >}}--- +aliases: /develop/connect/redis-for-vscode +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Connect to Redis from Visual Studio Code. +hideListLinks: true +linkTitle: Redis for VS Code +stack: true +title: Redis for VS Code +weight: 5 +--- + +Redis for VS Code is an extension that allows you to connect to your Redis databases from within Microsoft Visual Studio Code. +After connecting to a database, you can view, add, modify, and delete keys, and interact with your Redis databases using a Redis Insight like UI and also a built-in CLI interface. 
+The following data types are supported: + +- [Hash]({{< relref "/develop/data-types/hashes" >}}) +- [List]({{< relref "/develop/data-types/lists" >}}) +- [Set]({{< relref "/develop/data-types/sets" >}}) +- [Sorted Set]({{< relref "/develop/data-types/sorted-sets" >}}) +- [String]({{< relref "/develop/data-types/strings" >}}) +- [JSON]({{< relref "/develop/data-types/json" >}}) + +## Install the Redis for VS Code extension + +Open VS Code and click on the **Extensions** menu button. In the **Search Extensions in Marketplace** field, type "Redis for VS Code" and press the `enter` or `return` key. There may be more than one option shown, so be sure to click on the extension published by Redis. The correct extension is shown below. Click on the **Install** to install the extension. + +{{< image filename="images/dev/connect/vscode/vscode-install1.png" >}} + +Once installed, check the **Auto Update** button to allow VS Code to install future revisions of the extension automatically. + +{{< image filename="images/dev/connect/vscode/vscode-install2.png" >}} + +After installing the extension, your VS Code menu will look similar to the following. + +{{< image filename="images/dev/connect/vscode/vscode-menu.png" >}} + +## Connect to Redis databases {#connect-db} + +Click on the Redis mark (the cursive **R**) in the VS Code menu to begin connecting a Redis database to VS Code. If you do not currently have access to a Redis database, consider giving Redis Cloud a try. [It's free](https://redis.io/try-free/). + +{{< image filename="images/dev/connect/vscode/vscode-initial.png" >}} + +Click on the **+ Connect database** button. A dialog will display in the main pane. In the image shown below, all the options have been checked to show the available details for each connection. These connection details are similar to those accessible from [`redis-cli`]({{< relref "/develop/tools/cli" >}}). + +{{< note >}} +In the first release of Redis for VS Code, there is no way to change the logical database after you have selected it. If you need to connect to a different logical database, you need to add a separate database connection. +{{< /note >}} + +{{< image filename="images/dev/connect/vscode/vscode-add-menu.png" >}} + +After filling out the necessary fields, click on the **Add Redis database** button. The pane on the left side, where you would normally see the Explorer view, shows your database connections. + +{{< image filename="images/dev/connect/vscode/vscode-cnx-view.png" >}} + +{{< note >}} +Local databases, excluding OSS cluster databases, with default usernames and no passwords will automatically be added to your list of database connections. +{{< /note >}} + +### Connection tools + +Several tools are displayed for each open connection. + +{{< image filename="images/dev/connect/vscode/vscode-cnx-tools.png" >}} + +Left to right, they are: + +- Refresh connection, which retrieves fresh data from the connected Redis database. +- Edit connection, which shows a dialog similar to the one described in [Connect to Redis Databases](#connect-db) above. +- Delete connection. +- Open CLI. See [CLI tool](#cli) below for more information. +- Sort keys, either ascending or descending. +- Filter keys by key name or pattern, and by key type. +- Add a new key by type: Hash, List, Set, Sorted Set, String, or JSON. + +## Key view + +Here's what you'll see when there are no keys in your database (the image on the left) and when keys are present (the image on the right). 
+ +{{< image filename="images/dev/connect/vscode/vscode-key-view-w-wo-keys.png" >}} + +Redis for VS Code will automatically group the keys based on the one available setting, **Delimiter to separate namespaces**, which you can view by clicking on the gear icon in the top-right of the left side pane. Click on the current value to change it. The default setting is the colon (`:`) character. + +{{< image filename="images/dev/connect/vscode/vscode-settings.png" >}} + +Click on a key to display its contents. + +{{< image filename="images/dev/connect/vscode/vscode-key-view.png" >}} + +### Key editing tools + +There are several editing tools that you can use to edit key data. Each data type has its own editing capabilities. The following examples show edits to JSON data. Note that changes to keys are immediately written to the server. + +- **Rename**. Click on the key name field to change the name. + +{{< image filename="images/dev/connect/vscode/vscode-edit-name.png" >}} + +- **Set time-to-live (TTL)**. Click on the **TTL** field to set the duration in seconds. + +{{< image filename="images/dev/connect/vscode/vscode-edit-ttl.png" >}} + +- **Delete**. Click on the trash can icons to delete the entire key (highlighted in red) or portions of a key (highlighted in yellow). + +{{< image filename="images/dev/connect/vscode/vscode-edit-del.png" >}} + +- **Add to key**. Click on the `+` button next to the closing bracket (shown highlighted in green above) to add a new component to a key. + +{{< image filename="images/dev/connect/vscode/vscode-edit-add.png" >}} + +- **Refresh**. Click on the refresh icon (the circular arrow) to retrieve fresh data from the server. In the examples below, refresh was clicked (the image on the left) and the key now has a new field called "test" that was added by another Redis client (the image on the right). + +{{< image filename="images/dev/connect/vscode/vscode-recycle-before-after.png" >}} + +For strings, hashes, lists, sets, and sorted sets, the extension supports numerous value formatters (highlighted in red in the image below). They are: + +- Unicode +- ASCII +- Binary (blob) +- HEX +- JSON +- Msgpack +- Pickle +- Protobuf +- PHP serialized +- Java serialized +- 32-bit vector +- 64-bit vector + +{{< image filename="images/dev/connect/vscode/vscode-edit-value-formatters.png" >}} + +Also for Hash keys, you can set per-field TTLs (highlighted in yellow in the image above), a new feature added to Redis Open Source 7.4. + +## CLI tool {#cli} + +The connection tool with the boxed `>_` icon opens a Redis CLI window in the **REDIS CLI** tab at the bottom of the primary pane. + +{{< image filename="images/dev/connect/vscode/vscode-cli.png" >}} + +The CLI interface works just like the [`redis-cli`]({{< relref "/develop/tools/cli" >}}) command. 
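
For example, to confirm that the connection works, you can type a few standard Redis commands into the **REDIS CLI** tab. This is only an illustrative sketch; the key name `vscode:test` is an arbitrary example:

{{< highlight bash >}}
PING
SET vscode:test "hello"
GET vscode:test
DEL vscode:test
{{< / highlight >}}

Any command supported by your Redis database can be run here, exactly as you would with `redis-cli`.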
+--- +categories: +aliases: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Tools to interact with a Redis server +linkTitle: Client tools +hideListLinks: true +title: Client tools +weight: 25 +--- + +You can use several tools to connect to a Redis server, to +manage it and interact with the data: + +* The [`redis-cli`](#redis-command-line-interface-cli) command line tool +* [Redis Insight](#redis-insight) (a graphical user interface tool) +* The Redis [VSCode extension](#redis-vscode-extension) + +## Redis command line interface (CLI) + +The [Redis command line interface]({{< relref "/develop/tools/cli" >}}) (also known as `redis-cli`) is a terminal program that sends commands to and reads replies from the Redis server. It has the following two main modes: + +1. An interactive Read Eval Print Loop (REPL) mode where the user types Redis commands and receives replies. +2. A command mode where `redis-cli` is executed with additional arguments, and the reply is printed to the standard output. + +## Redis Insight + +[Redis Insight]({{< relref "/develop/tools/insight" >}}) combines a graphical user interface with Redis CLI to let you work with any Redis deployment. You can visually browse and interact with data, take advantage of diagnostic tools, learn by example, and much more. Best of all, Redis Insight is free. + +## Redis VSCode extension + +[Redis for VS Code]({{< relref "/develop/tools/redis-for-vscode" >}}) +is an extension that allows you to connect to your Redis databases from within Microsoft Visual Studio Code. After connecting to a database, you can view, add, modify, and delete keys, and interact with your Redis databases using a Redis Insight like UI and also a built-in CLI interface. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Introduction to the Redis Geospatial data type + + ' +linkTitle: Geospatial +title: Redis geospatial +weight: 80 +--- + +Redis geospatial indexes let you store coordinates and search for them. +This data structure is useful for finding nearby points within a given radius or bounding box. + +{{< note >}}Take care not to confuse the Geospatial data type with the +[Geospatial]({{< relref "/develop/interact/search-and-query/advanced-concepts/geo" >}}) +features in [Redis Query Engine]({{< relref "/develop/interact/search-and-query" >}}). +Although there are some similarities between these two features, the data type is intended +for simpler use cases and doesn't have the range of format options and queries +available in Redis Query Engine. +{{< /note >}} + +## Basic commands + +* [`GEOADD`]({{< relref "/commands/geoadd" >}}) adds a location to a given geospatial index (note that longitude comes before latitude with this command). +* [`GEOSEARCH`]({{< relref "/commands/geosearch" >}}) returns locations with a given radius or a bounding box. + +See the [complete list of geospatial index commands]({{< relref "/commands/" >}}?group=geo). + + +## Examples + +Suppose you're building a mobile app that lets you find all of the bike rental stations closest to your current location. 
+ +Add several locations to a geospatial index: +{{< clients-example geo_tutorial geoadd >}} +> GEOADD bikes:rentable -122.27652 37.805186 station:1 +(integer) 1 +> GEOADD bikes:rentable -122.2674626 37.8062344 station:2 +(integer) 1 +> GEOADD bikes:rentable -122.2469854 37.8104049 station:3 +(integer) 1 +{{< /clients-example >}} + +Find all locations within a 5 kilometer radius of a given location, and return the distance to each location: +{{< clients-example geo_tutorial geosearch >}} +> GEOSEARCH bikes:rentable FROMLONLAT -122.2612767 37.7936847 BYRADIUS 5 km WITHDIST +1) 1) "station:1" + 2) "1.8523" +2) 1) "station:2" + 2) "1.4979" +3) 1) "station:3" + 2) "2.2441" +{{< /clients-example >}} + +## Learn more + +* [Redis Geospatial Explained](https://www.youtube.com/watch?v=qftiVQraxmI) introduces geospatial indexes by showing you how to build a map of local park attractions. +* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis geospatial indexes in detail. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Introduction to Redis sorted sets + + ' +linkTitle: Sorted sets +title: Redis sorted sets +weight: 50 +--- + +A Redis sorted set is a collection of unique strings (members) ordered by an associated score. +When more than one string has the same score, the strings are ordered lexicographically. +Some use cases for sorted sets include: + +* Leaderboards. For example, you can use sorted sets to easily maintain ordered lists of the highest scores in a massive online game. +* Rate limiters. In particular, you can use a sorted set to build a sliding-window rate limiter to prevent excessive API requests. + +You can think of sorted sets as a mix between a Set and +a Hash. Like sets, sorted sets are composed of unique, non-repeating +string elements, so in some sense a sorted set is a set as well. + +However while elements inside sets are not ordered, every element in +a sorted set is associated with a floating point value, called *the score* +(this is why the type is also similar to a hash, since every element +is mapped to a value). + +Moreover, elements in a sorted set are *taken in order* (so they are not +ordered on request, order is a peculiarity of the data structure used to +represent sorted sets). They are ordered according to the following rule: + +* If B and A are two elements with a different score, then A > B if A.score is > B.score. +* If B and A have exactly the same score, then A > B if the A string is lexicographically greater than the B string. B and A strings can't be equal since sorted sets only have unique elements. + +Let's start with a simple example, we'll add all our racers and the score they got in the first race: + +{{< clients-example ss_tutorial zadd >}} +> ZADD racer_scores 10 "Norem" +(integer) 1 +> ZADD racer_scores 12 "Castilla" +(integer) 1 +> ZADD racer_scores 8 "Sam-Bodden" 10 "Royce" 6 "Ford" 14 "Prickett" +(integer) 4 +{{< /clients-example >}} + + +As you can see [`ZADD`]({{< relref "/commands/zadd" >}}) is similar to [`SADD`]({{< relref "/commands/sadd" >}}), but takes one additional argument +(placed before the element to be added) which is the score. +[`ZADD`]({{< relref "/commands/zadd" >}}) is also variadic, so you are free to specify multiple score-value +pairs, as shown in the example above. + +With sorted sets it is trivial to return a list of racers sorted by their +score because actually *they are already sorted*. 
+ +Implementation note: Sorted sets are implemented via a +dual-ported data structure containing both a skip list and a hash table, so +every time we add an element Redis performs an O(log(N)) operation. That's +good, so when we ask for sorted elements, Redis does not have to do any work at +all, it's already sorted. Note that the [`ZRANGE`]({{< relref "/commands/zrange" >}}) order is low to high, while the [`ZREVRANGE`]({{< relref "/commands/zrevrange" >}}) order is high to low: + +{{< clients-example ss_tutorial zrange >}} +> ZRANGE racer_scores 0 -1 +1) "Ford" +2) "Sam-Bodden" +3) "Norem" +4) "Royce" +5) "Castilla" +6) "Prickett" +> ZREVRANGE racer_scores 0 -1 +1) "Prickett" +2) "Castilla" +3) "Royce" +4) "Norem" +5) "Sam-Bodden" +6) "Ford" +{{< /clients-example >}} + +Note: 0 and -1 means from element index 0 to the last element (-1 works +here just as it does in the case of the [`LRANGE`]({{< relref "/commands/lrange" >}}) command). + +It is possible to return scores as well, using the `WITHSCORES` argument: + +{{< clients-example ss_tutorial zrange_withscores >}} +> ZRANGE racer_scores 0 -1 withscores + 1) "Ford" + 2) "6" + 3) "Sam-Bodden" + 4) "8" + 5) "Norem" + 6) "10" + 7) "Royce" + 8) "10" + 9) "Castilla" +10) "12" +11) "Prickett" +12) "14" +{{< /clients-example >}} + +### Operating on ranges + +Sorted sets are more powerful than this. They can operate on ranges. +Let's get all the racers with 10 or fewer points. We +use the [`ZRANGEBYSCORE`]({{< relref "/commands/zrangebyscore" >}}) command to do it: + +{{< clients-example ss_tutorial zrangebyscore >}} +> ZRANGEBYSCORE racer_scores -inf 10 +1) "Ford" +2) "Sam-Bodden" +3) "Norem" +4) "Royce" +{{< /clients-example >}} + +We asked Redis to return all the elements with a score between negative +infinity and 10 (both extremes are included). + +To remove an element we'd simply call [`ZREM`]({{< relref "/commands/zrem" >}}) with the racer's name. +It's also possible to remove ranges of elements. Let's remove racer Castilla along with all +the racers with strictly fewer than 10 points: + +{{< clients-example ss_tutorial zremrangebyscore >}} +> ZREM racer_scores "Castilla" +(integer) 1 +> ZREMRANGEBYSCORE racer_scores -inf 9 +(integer) 2 +> ZRANGE racer_scores 0 -1 +1) "Norem" +2) "Royce" +3) "Prickett" +{{< /clients-example >}} + +[`ZREMRANGEBYSCORE`]({{< relref "/commands/zremrangebyscore" >}}) is perhaps not the best command name, +but it can be very useful, and returns the number of removed elements. + +Another extremely useful operation defined for sorted set elements +is the get-rank operation. It is possible to ask what is the +position of an element in the set of ordered elements. +The [`ZREVRANK`]({{< relref "/commands/zrevrank" >}}) command is also available in order to get the rank, considering +the elements sorted in a descending way. + +{{< clients-example ss_tutorial zrank >}} +> ZRANK racer_scores "Norem" +(integer) 0 +> ZREVRANK racer_scores "Norem" +(integer) 3 +{{< /clients-example >}} + +### Lexicographical scores + +In version Redis 2.8, a new feature was introduced that allows +getting ranges lexicographically, assuming elements in a sorted set are all +inserted with the same identical score (elements are compared with the C +`memcmp` function, so it is guaranteed that there is no collation, and every +Redis instance will reply with the same output). 
+ +The main commands to operate with lexicographical ranges are [`ZRANGEBYLEX`]({{< relref "/commands/zrangebylex" >}}), +[`ZREVRANGEBYLEX`]({{< relref "/commands/zrevrangebylex" >}}), [`ZREMRANGEBYLEX`]({{< relref "/commands/zremrangebylex" >}}) and [`ZLEXCOUNT`]({{< relref "/commands/zlexcount" >}}). + +For example, let's add again our list of famous racers, but this time +using a score of zero for all the elements. We'll see that because of the sorted sets ordering rules, they are already sorted lexicographically. Using [`ZRANGEBYLEX`]({{< relref "/commands/zrangebylex" >}}) we can ask for lexicographical ranges: + +{{< clients-example ss_tutorial zadd_lex >}} +> ZADD racer_scores 0 "Norem" 0 "Sam-Bodden" 0 "Royce" 0 "Castilla" 0 "Prickett" 0 "Ford" +(integer) 3 +> ZRANGE racer_scores 0 -1 +1) "Castilla" +2) "Ford" +3) "Norem" +4) "Prickett" +5) "Royce" +6) "Sam-Bodden" +> ZRANGEBYLEX racer_scores [A [L +1) "Castilla" +2) "Ford" +{{< /clients-example >}} + +Ranges can be inclusive or exclusive (depending on the first character), +also string infinite and minus infinite are specified respectively with +the `+` and `-` strings. See the documentation for more information. + +This feature is important because it allows us to use sorted sets as a generic +index. For example, if you want to index elements by a 128-bit unsigned +integer argument, all you need to do is to add elements into a sorted +set with the same score (for example 0) but with a 16 byte prefix +consisting of **the 128 bit number in big endian**. Since numbers in big +endian, when ordered lexicographically (in raw bytes order) are actually +ordered numerically as well, you can ask for ranges in the 128 bit space, +and get the element's value discarding the prefix + +Updating the score: leaderboards +--- + +Just a final note about sorted sets before switching to the next topic. +Sorted sets' scores can be updated at any time. Just calling [`ZADD`]({{< relref "/commands/zadd" >}}) against +an element already included in the sorted set will update its score +(and position) with O(log(N)) time complexity. As such, sorted sets are suitable +when there are tons of updates. + +Because of this characteristic a common use case is leaderboards. +The typical application is a Facebook game where you combine the ability to +take users sorted by their high score, plus the get-rank operation, in order +to show the top-N users, and the user rank in the leader board (e.g., "you are +the #4932 best score here"). + +## Examples + +* There are two ways we can use a sorted set to represent a leaderboard. If we know a racer's new score, we can update it directly via the [`ZADD`]({{< relref "/commands/zadd" >}}) command. However, if we want to add points to an existing score, we can use the [`ZINCRBY`]({{< relref "/commands/zincrby" >}}) command. +{{< clients-example ss_tutorial leaderboard >}} +> ZADD racer_scores 100 "Wood" +(integer) 1 +> ZADD racer_scores 100 "Henshaw" +(integer) 1 +> ZADD racer_scores 150 "Henshaw" +(integer) 0 +> ZINCRBY racer_scores 50 "Wood" +"150" +> ZINCRBY racer_scores 50 "Henshaw" +"200" +{{< /clients-example >}} + +You'll see that [`ZADD`]({{< relref "/commands/zadd" >}}) returns 0 when the member already exists (the score is updated), while [`ZINCRBY`]({{< relref "/commands/zincrby" >}}) returns the new score. The score for racer Henshaw went from 100, was changed to 150 with no regard for what score was there before, and then was incremented by 50 to 200. 
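+
+To round out the leaderboard pattern, you can combine a reverse range with the get-rank operation. The following is a minimal sketch, assuming the `racer_scores` data added in the examples above:
+
+```
+> ZREVRANGE racer_scores 0 1 WITHSCORES
+1) "Henshaw"
+2) "200"
+3) "Wood"
+4) "150"
+> ZREVRANK racer_scores "Wood"
+(integer) 1
+```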
+ +## Basic commands + +* [`ZADD`]({{< relref "/commands/zadd" >}}) adds a new member and associated score to a sorted set. If the member already exists, the score is updated. +* [`ZRANGE`]({{< relref "/commands/zrange" >}}) returns members of a sorted set, sorted within a given range. +* [`ZRANK`]({{< relref "/commands/zrank" >}}) returns the rank of the provided member, assuming the sorted is in ascending order. +* [`ZREVRANK`]({{< relref "/commands/zrevrank" >}}) returns the rank of the provided member, assuming the sorted set is in descending order. + +See the [complete list of sorted set commands]({{< relref "/commands/" >}}?group=sorted-set). + +## Performance + +Most sorted set operations are O(log(n)), where _n_ is the number of members. + +Exercise some caution when running the [`ZRANGE`]({{< relref "/commands/zrange" >}}) command with large returns values (e.g., in the tens of thousands or more). +This command's time complexity is O(log(n) + m), where _m_ is the number of results returned. + +## Alternatives + +Redis sorted sets are sometimes used for indexing other Redis data structures. +If you need to index and query your data, consider the [JSON]({{< relref "/develop/data-types/json/" >}}) data type and the [Redis Query Engine]({{< relref "/develop/interact/search-and-query/" >}}) features. + +## Learn more + +* [Redis Sorted Sets Explained](https://www.youtube.com/watch?v=MUKlxdBQZ7g) is an entertaining introduction to sorted sets in Redis. +* [Redis University's RU101](https://university.redis.com/courses/ru101/) explores Redis sorted sets in detail. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Introduction to Redis bitmaps + + ' +linkTitle: Bitmaps +title: Redis bitmaps +weight: 120 +--- + +Bitmaps are not an actual data type, but a set of bit-oriented operations +defined on the String type which is treated like a bit vector. +Since strings are binary safe blobs and their maximum length is 512 MB, +they are suitable to set up to 2^32 different bits. + +You can perform bitwise operations on one or more strings. +Some examples of bitmap use cases include: + +* Efficient set representations for cases where the members of a set correspond to the integers 0-N. +* Object permissions, where each bit represents a particular permission, similar to the way that file systems store permissions. + +## Basic commands + +* [`SETBIT`]({{< relref "/commands/setbit" >}}) sets a bit at the provided offset to 0 or 1. +* [`GETBIT`]({{< relref "/commands/getbit" >}}) returns the value of a bit at a given offset. + +See the [complete list of bitmap commands]({{< relref "/commands/" >}}?group=bitmap). + + +## Example + +Suppose you have 1000 cyclists racing through the country-side, with sensors on their bikes labeled 0-999. +You want to quickly determine whether a given sensor has pinged a tracking server within the hour to check in on a rider. + +You can represent this scenario using a bitmap whose key references the current hour. + +* Rider 123 pings the server on January 1, 2024 within the 00:00 hour. You can then confirm that rider 123 pinged the server. You can also check to see if rider 456 has pinged the server for that same hour. 
+ +{{< clients-example bitmap_tutorial ping >}} +> SETBIT pings:2024-01-01-00:00 123 1 +(integer) 0 +> GETBIT pings:2024-01-01-00:00 123 +1 +> GETBIT pings:2024-01-01-00:00 456 +0 +{{< /clients-example >}} + + +## Bit Operations + +Bit operations are divided into two groups: constant-time single bit +operations, like setting a bit to 1 or 0, or getting its value, and +operations on groups of bits, for example counting the number of set +bits in a given range of bits (e.g., population counting). + +One of the biggest advantages of bitmaps is that they often provide +extreme space savings when storing information. For example in a system +where different users are represented by incremental user IDs, it is possible +to remember a single bit information (for example, knowing whether +a user wants to receive a newsletter) of 4 billion users using just 512 MB of memory. + +The [`SETBIT`]({{< relref "/commands/setbit" >}}) command takes as its first argument the bit number, and as its second +argument the value to set the bit to, which is 1 or 0. The command +automatically enlarges the string if the addressed bit is outside the +current string length. + +[`GETBIT`]({{< relref "/commands/getbit" >}}) just returns the value of the bit at the specified index. +Out of range bits (addressing a bit that is outside the length of the string +stored into the target key) are always considered to be zero. + +There are three commands operating on group of bits: + +1. [`BITOP`]({{< relref "/commands/bitop" >}}) performs bit-wise operations between different strings. The provided operations are AND, OR, XOR and NOT. +2. [`BITCOUNT`]({{< relref "/commands/bitcount" >}}) performs population counting, reporting the number of bits set to 1. +3. [`BITPOS`]({{< relref "/commands/bitpos" >}}) finds the first bit having the specified value of 0 or 1. + +Both [`BITPOS`]({{< relref "/commands/bitpos" >}}) and [`BITCOUNT`]({{< relref "/commands/bitcount" >}}) are able to operate with byte ranges of the +string, instead of running for the whole length of the string. We can trivially see the number of bits that have been set in a bitmap. + +{{< clients-example bitmap_tutorial bitcount >}} +> BITCOUNT pings:2024-01-01-00:00 +(integer) 1 +{{< /clients-example >}} + +For example imagine you want to know the longest streak of daily visits of +your web site users. You start counting days starting from zero, that is the +day you made your web site public, and set a bit with [`SETBIT`]({{< relref "/commands/setbit" >}}) every time +the user visits the web site. As a bit index you simply take the current unix +time, subtract the initial offset, and divide by the number of seconds in a day +(normally, 3600\*24). + +This way for each user you have a small string containing the visit +information for each day. With [`BITCOUNT`]({{< relref "/commands/bitcount" >}}) it is possible to easily get +the number of days a given user visited the web site, while with +a few [`BITPOS`]({{< relref "/commands/bitpos" >}}) calls, or simply fetching and analyzing the bitmap client-side, +it is possible to easily compute the longest streak. + +Bitmaps are trivial to split into multiple keys, for example for +the sake of sharding the data set and because in general it is better to +avoid working with huge keys. 
To split a bitmap across different keys +instead of setting all the bits into a key, a trivial strategy is just +to store M bits per key and obtain the key name with `bit-number/M` and +the Nth bit to address inside the key with `bit-number MOD M`. + + + +## Performance + +[`SETBIT`]({{< relref "/commands/setbit" >}}) and [`GETBIT`]({{< relref "/commands/getbit" >}}) are O(1). +[`BITOP`]({{< relref "/commands/bitop" >}}) is O(n), where _n_ is the length of the longest string in the comparison. + +## Learn more + +* [Redis Bitmaps Explained](https://www.youtube.com/watch?v=oj8LdJQjhJo) teaches you how to use bitmaps for map exploration in an online game. +* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis bitmaps in detail. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Time Series Client Libraries + + ' +linkTitle: Clients +title: Clients +weight: 5 +--- + +The table below shows the client libraries that support Redis time series: + +| Language | Client | +| :-- | :-- | +| Python | [redis-py]({{< relref "/develop/clients/redis-py" >}}) | +| JavaScript | [node-redis]({{< relref "/develop/clients/nodejs" >}}) | +| Java | [Jedis]({{< relref "/develop/clients/jedis" >}}) | +| C#/.NET | [NRedisStack]({{< relref "/develop/clients/dotnet" >}}) | +| Go | [redistimeseries-go](https://github.com/RedisTimeSeries/redistimeseries-go/) +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Quick Start Guide to Time Series +linkTitle: Quickstart +title: Quickstart +weight: 2 +--- + +## Setup + +You can get Redis Time Series setup in the cloud, in a Docker container, or on your own machine. + +### Redis Cloud + +Redis Time Series are available on all Redis Cloud managed services, including a completely free managed database up to 30MB. + +[Get started here](https://redislabs.com/try-free/) + + +### Docker +To quickly try out Redis Time Series, launch an instance of Redis Open Source using docker: +```sh +docker run -p 6379:6379 -it --rm redis/redis:latest +``` + +### Download and running binaries + +First download the pre-compiled version from the [Redis download center](https://redis.io/downloads). + +Next, run Redis with RedisTimeSeries: + +``` +$ redis-server --loadmodule /path/to/module/redistimeseries.so +``` + +### Build and Run it yourself + +You can also build and run RedisTimeSeries on your own machine. + +Major Linux distributions as well as macOS are supported. + +#### Requirements + +First, clone the RedisTimeSeries repository from git: + +``` +git clone --recursive https://github.com/RedisTimeSeries/RedisTimeSeries.git +``` + +Then, to install required build artifacts, invoke the following: + +``` +cd RedisTimeSeries +make setup +``` +Or you can install required dependencies manually listed in [system-setup.py](https://github.com/RedisTimeSeries/RedisTimeSeries/blob/master/sbin/system-setup.py). + +If ```make``` is not yet available, the following commands are equivalent: + +``` +./deps/readies/bin/getpy3 +./system-setup.py +``` + +Note that ```system-setup.py``` **will install various packages on your system** using the native package manager and pip. This requires root permissions (i.e. sudo) on Linux. + +If you prefer to avoid that, you can: + +* Review system-setup.py and install packages manually, +* Utilize a Python virtual environment, +* Use Docker with the ```--volume``` option to create an isolated build environment. 
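+
+For instance, the Docker option from the last bullet above might look like the following minimal sketch (the image name and mount path are illustrative):
+
+```sh
+ts=$(docker run -d -it -v $PWD:/build debian:bullseye bash)
+docker exec -it $ts bash
+# then, inside the container:
+cd /build
+make setup
+```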
+ +#### Build + +```bash +make build +``` + +Binary artifacts are placed under the ```bin``` directory. + +#### Run + +In your redis-server run: `loadmodule bin/redistimeseries.so` + +For more information about modules, go to the [redis official documentation]({{< relref "/develop/reference/modules/" >}}). + +## Give it a try with `redis-cli` + +After you setup RedisTimeSeries, you can interact with it using redis-cli. + +```sh +$ redis-cli +127.0.0.1:6379> TS.CREATE sensor1 +OK +``` + + +## Creating a timeseries +A new timeseries can be created with the [`TS.CREATE`]({{< relref "commands/ts.create/" >}}) command; for example, to create a timeseries named `sensor1` run the following: + +``` +TS.CREATE sensor1 +``` + +You can prevent your timeseries growing indefinitely by setting a maximum age for samples compared to the last event time (in milliseconds) with the `RETENTION` option. The default value for retention is `0`, which means the series will not be trimmed. + +``` +TS.CREATE sensor1 RETENTION 2678400000 +``` +This will create a timeseries called `sensor1` and trim it to values of up to one month. + + +## Adding data points +For adding new data points to a timeseries we use the [`TS.ADD`]({{< relref "commands/ts.add/" >}}) command: + +``` +TS.ADD key timestamp value +``` + +The `timestamp` argument is the UNIX timestamp of the sample in milliseconds and `value` is the numeric data value of the sample. + +Example: +``` +TS.ADD sensor1 1626434637914 26 +``` + +To **add a datapoint with the current timestamp** you can use a `*` instead of a specific timestamp: + +``` +TS.ADD sensor1 * 26 +``` + +You can **append data points to multiple timeseries** at the same time with the [`TS.MADD`]({{< relref "commands/ts.madd/" >}}) command: +``` +TS.MADD key timestamp value [key timestamp value ...] +``` + + +## Deleting data points +Data points between two timestamps (inclusive) can be deleted with the [`TS.DEL`]({{< relref "commands/ts.del/" >}}) command: +``` +TS.DEL key fromTimestamp toTimestamp +``` +Example: +``` +TS.DEL sensor1 1000 2000 +``` + +To delete a single timestamp, use it as both the "from" and "to" timestamp: +``` +TS.DEL sensor1 1000 1000 +``` + +**Note:** When a sample is deleted, the data in all downsampled timeseries will be recalculated for the specific bucket. If part of the bucket has already been removed though, because it's outside of the retention period, we won't be able to recalculate the full bucket, so in those cases we will refuse the delete operation. + + +## Labels +Labels are key-value metadata we attach to data points, allowing us to group and filter. They can be either string or numeric values and are added to a timeseries on creation: + +``` +TS.CREATE sensor1 LABELS region east +``` + + + +## Compaction +Another useful feature of Redis Time Series is compacting data by creating a rule for compaction ([`TS.CREATERULE`]({{< relref "commands/ts.createrule/" >}})). For example, if you have collected more than one billion data points in a day, you could aggregate the data by every minute in order to downsample it, thereby reducing the dataset size to 24 * 60 = 1,440 data points. You can choose one of the many available aggregation types in order to aggregate multiple data points from a certain minute into a single one. The currently supported aggregation types are: `avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s and twa`. 
+ +It's important to point out that there is no data rewriting on the original timeseries; the compaction happens in a new series, while the original one stays the same. In order to prevent the original timeseries from growing indefinitely, you can use the retention option, which will trim it down to a certain period of time. + +**NOTE:** You need to create the destination (the compacted) timeseries before creating the rule. + +``` +TS.CREATERULE sourceKey destKey AGGREGATION aggregationType bucketDuration +``` + +Example: + +``` +TS.CREATE sensor1_compacted # Create the destination timeseries first +TS.CREATERULE sensor1 sensor1_compacted AGGREGATION avg 60000 # Create the rule +``` + +With this creation rule, datapoints added to the `sensor1` timeseries will be grouped into buckets of 60 seconds (60000ms), averaged, and saved in the `sensor1_compacted` timeseries. + + +## Filtering +You can filter your time series by value, timestamp and labels: + +### Filtering by label +You can retrieve datapoints from multiple timeseries in the same query, and the way to do this is by using label filters. For example: + +``` +TS.MRANGE - + FILTER area_id=32 +``` + +This query will show data from all sensors (timeseries) that have a label of `area_id` with a value of `32`. The results will be grouped by timeseries. + +Or we can also use the [`TS.MGET`]({{< relref "commands/ts.mget/" >}}) command to get the last sample that matches the specific filter: + +``` +TS.MGET FILTER area_id=32 +``` + +### Filtering by value +We can filter by value across a single or multiple timeseries: + +``` +TS.RANGE sensor1 - + FILTER_BY_VALUE 25 30 +``` +This command will return all data points whose value sits between 25 and 30, inclusive. + +To achieve the same filtering on multiple series we have to combine the filtering by value with filtering by label: + +``` +TS.MRANGE - + FILTER_BY_VALUE 20 30 FILTER region=east +``` + +### Filtering by timestamp +To retrieve the datapoints for specific timestamps on one or multiple timeseries we can use the `FILTER_BY_TS` argument: + +Filter on one timeseries: +``` +TS.RANGE sensor1 - + FILTER_BY_TS 1626435230501 1626443276598 +``` + +Filter on multiple timeseries: +``` +TS.MRANGE - + FILTER_BY_TS 1626435230501 1626443276598 FILTER region=east +``` + + +## Aggregation +It's possible to combine values of one or more timeseries by leveraging aggregation functions: +``` +TS.RANGE ... AGGREGATION aggType bucketDuration... +``` + +For example, to find the average temperature per hour in our `sensor1` series we could run: +``` +TS.RANGE sensor1 - + + AGGREGATION avg 3600000 +``` + +To achieve the same across multiple sensors from the area with id of 32 we would run: +``` +TS.MRANGE - + AGGREGATION avg 3600000 FILTER area_id=32 +``` + +### Aggregation bucket alignment +When doing aggregations, the aggregation buckets will be aligned to 0 as so: +``` +TS.RANGE sensor3 10 70 + AGGREGATION min 25 +``` + +``` +Value: | (1000) (2000) (3000) (4000) (5000) (6000) (7000) +Timestamp: |-------|10|-------|20|-------|30|-------|40|-------|50|-------|60|-------|70|---> + +Bucket(25ms): |_________________________||_________________________||___________________________| + V V V + min(1000, 2000)=1000 min(3000, 4000)=3000 min(5000, 6000, 7000)=5000 +``` + +And we will get the following datapoints: 1000, 3000, 5000. 
+ +You can choose to align the buckets to the start or end of the queried interval as so: +``` +TS.RANGE sensor3 10 70 + AGGREGATION min 25 ALIGN start +``` + +``` +Value: | (1000) (2000) (3000) (4000) (5000) (6000) (7000) +Timestamp: |-------|10|-------|20|-------|30|-------|40|-------|50|-------|60|-------|70|---> + +Bucket(25ms): |__________________________||_________________________||___________________________| + V V V + min(1000, 2000, 3000)=1000 min(4000, 5000)=4000 min(6000, 7000)=6000 +``` +The result array will contain the following datapoints: 1000, 4000 and 6000 + + +### Aggregation across timeseries + +By default, results of multiple timeseries will be grouped by timeseries, but (since v1.6) you can use the `GROUPBY` and `REDUCE` options to group them by label and apply an additional aggregation. + +To find minimum temperature per region, for example, we can run: + +``` +TS.MRANGE - + FILTER region=(east,west) GROUPBY region REDUCE min +``` +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Time series use cases + + ' +linkTitle: Use cases +title: Use cases +weight: 5 +--- + +**Monitoring (data center)** + +Modern data centers have a lot of moving pieces, such as infrastructure (servers and networks) and software systems (applications and services) that need to be monitored around the clock. + +Redis Time Series allows you to plan for new resources upfront, optimize the utilization of existing resources, reconstruct the circumstances that led to outages, and identify application performance issues by analyzing and reporting on the following metrics: + +- Maximum CPU utilization per server +- Maximum network latency between two services +- Average IO bandwidth utilization of a storage system +- 99th percentile of the response time of a specific application outages + +**Weather analysis (environment)** + +Redis Time Series can be used to track environmental measurements such as the number of daily sunshine hours and hourly rainfall depth, over a period of many years. Seasonally, you can measure average rainfall depth, average daily temperature, and the maximum number of sunny hours per day, for example. Watch the increase of the maximum daily temperature over the years. Predict the expected temperature and rainfall depth in a specific location for a particular week of the year. + +Multiple time series can be collected, each for a different location. By utilizing secondary indexes, measurements can be aggregated over given geographical regions (e.g., minimal and maximal daily temperature in Europe) or over locations with specific attributes (e.g., average rainfall depth in mountainous regions). + +Example metrics include: + +- Rain (cm) +- Temperature (C) +- Sunny periods (h) + +**Analysis of the atmosphere (environment)** + +The atmospheric concentration of CO2 is more important than ever before. Use TimeSeries to track average, maximum and minimum CO2 level per season and average yearly CO2 over the last decades. Example metrics include: + +- Concentration of CO2 (ppm) +- Location + +**Flight data recording (sensor data and IoT)** + +Planes have a multitude of sensors. This sensor data is stored in a black box and also shared with external systems. TimeSeries can help you reconstruct the sequence of events over time, optimize operations and maintenance intervals, improve safety, and provide feedback to the equipment manufacturers about the part quality. 
Example metrics include: + +- Altitude +- Flight path +- Engine temperature +- Level of vibrations +- Pressure + +**Ship logbooks (sensor data and IoT)** + +It's very common to keep track of ship voyages via (digital) logbooks. Use TimeSeries to calculate optimal routes using these metrics: + +- Wind (km/h) +- Ocean conditions (classes) +- Speed (knots) +- Location (long, lat) + +**Connected car (sensor data and IoT)** + +Modern cars are exposing several metrics via a standard interface. Use TimeSeries to correlate average fuel consumption with the tire pressure, figure out how long to keep a car in the fleet, determine optimal maintenance intervals, and calculate tax savings by type of the road (taxable vs. nontaxable roads). Example metrics include: + +- Acceleration +- Location (long, lat) +- Fuel level (liter) +- Distances (km) +- Speed (km/h) +- Tire pressure +- Distance until next maintenance check + +**Smart metering (sensor data and IoT)** + +Modern houses and facilities gather details about energy consumption/production. Use Redis Time Series to aggregate billing based on monthly consumption. Optimize the network by redirecting the energy delivery relative to the fluctuations in need. Provide recommendations on how to improve the energy consumption behavior. Example metrics include: + +- Consumption per location +- Produced amount of electrical energy per location + +**Quality of service (telecom)** + +Mobile phone usage is increasing, producing a natural growth that just correlates to the increasing number of cellphones. However, there might also be spikes that correlate with specific events (for example, more messages around world championships). + +Telecom providers need to ensure that they are providing the necessary infrastructure to deliver the right quality of service. This includes using mini towers for short-term peaks. Use TimeSeries to correlate traffic peaks to specific events, load balance traffic over several towers or mini towers, and predictively plan the infrastructure. Metrics include the amount of traffic per tower. + +**Stock trading (finance)** + +Stock trading is highly automated today. Algorithms, and not just human beings, are trading, from the amount of bids and asks for the trading of a stock to the extreme volumes of trades per second (millions of ops per second). Computer-driven trading requires millisecond response times. It's necessary to keep a lot of data points within a very short period of time (for example, price fluctuations per second within a minute). In addition, the long-term history needs to be kept to make statements about trends or for regulatory purposes. + +Use Redis Time Series to identify correlations between the trading behavior and other events (for example, social network posts). Discover a developing market. Detect anomalies to discover insider trades. Example metrics include: + +- Exact time and order of a trade by itself +- Type of the event (trade/bid) +- The stock price--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Redis time series support multiple configuration parameters. +linkTitle: Configuration +title: Configuration Parameters +weight: 3 +--- +## Redis Open Source - set configuration parameters + +Before Redis 8 in Redis Open Source (version 8.0), all time series configuration parameters are load-time parameters. 
+
+Use one of the following methods to set the values of load-time configuration parameters:
+
+- Pass them as command-line arguments following the `loadmodule` argument when starting `redis-server`:
+
+  `redis-server --loadmodule ./{modulename}.so [OPT VAL]...`
+
+- Add them as arguments to the `loadmodule` directive in your configuration file (for example, `redis.conf`):
+
+  `loadmodule ./{modulename}.so [OPT VAL]...`
+
+- Use the `MODULE LOAD path [arg [arg ...]]` command.
+
+- Use the `MODULE LOADEX path [CONFIG name value [CONFIG name value ...]] [ARGS args [args ...]]` command.
+
+Starting with Redis 8.0, most time series configuration parameters are runtime parameters.
+While you can set runtime parameters at load time, using the Redis `CONFIG` command is easier and works the same way as with Redis runtime configuration parameters.
+
+This means:
+
+- `CONFIG SET parameter value [parameter value ...]`
+
+  Set one or more configuration parameters.
+
+- `CONFIG GET parameter [parameter ...]`
+
+  Read the current value of one or more parameters.
+
+- `CONFIG REWRITE`
+
+  Rewrite your Redis configuration file (for example, the `redis.conf` file) to reflect the configuration changes.
+
+Starting with Redis 8.0, you can specify time series configuration parameters directly in your Redis configuration file the same way you would for Redis configuration parameters.
+
+Once a value is set with `CONFIG SET` or added manually to your configuration file, it overwrites values set with `--loadmodule`, `loadmodule`, `MODULE LOAD`, or `MODULE LOADEX`.
+
+In a cluster, you must run `CONFIG SET` and `CONFIG REWRITE` on each node separately.
+
+In Redis 8.0, new names for the time series configuration parameters were introduced to align the naming with the Redis configuration parameters.
+You must use the new names when using the `CONFIG` command.
+
+## Time series configuration parameters
+
+| Parameter name (version < 8.0) | Parameter name (version ≥ 8.0) | Run-time | Redis Software | Redis Cloud |
+| :------- | :------- | :------- | :------- | :------- |
+| CHUNK_SIZE_BYTES | [ts-chunk-size-bytes](#chunk_size_bytes--ts-chunk-size-bytes) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual / ❌ Free & Fixed |
+| COMPACTION_POLICY | [ts-compaction-policy](#compaction_policy--ts-compaction-policy) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual / ❌ Free & Fixed |
+| DUPLICATE_POLICY | [ts-duplicate-policy](#duplicate_policy--ts-duplicate-policy) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual / ❌ Free & Fixed |
+| RETENTION_POLICY | [ts-retention-policy](#retention_policy--ts-retention-policy) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual / ❌ Free & Fixed |
+| ENCODING | [ts-encoding](#encoding--ts-encoding) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual / ❌ Free & Fixed |
+| IGNORE_MAX_TIME_DIFF | [ts-ignore-max-time-diff](#ignore_max_time_diff--ts-ignore-max-time-diff-and-ignore_max_val_diff--ts-ignore-max-val-diff) | :white_check_mark: |||
+| IGNORE_MAX_VAL_DIFF | [ts-ignore-max-val-diff](#ignore_max_time_diff--ts-ignore-max-time-diff-and-ignore_max_val_diff--ts-ignore-max-val-diff) | :white_check_mark: |||
+| NUM_THREADS | [ts-num-threads](#num_threads--ts-num-threads) | :white_large_square: | ✅ Supported | ❌ Flexible & Annual / ❌ Free & Fixed |
+| [OSS_GLOBAL_PASSWORD](#oss_global_password) | Deprecated in v8.0 | :white_check_mark: |||
+
+---
+
+### CHUNK_SIZE_BYTES / ts-chunk-size-bytes
+
+The initial allocation size, in bytes, for the data part of each new chunk. Actual chunks may consume more memory.
+Changing this value does not affect existing chunks.
+
+Type: integer
+
+Valid range: `[48 .. 1048576]`; must be a multiple of 8
+
+#### Precedence order
+
+Because the chunk size can be provided at different levels, the actual precedence of the chunk size is:
+
+1. Key-level policy, as set with [`TS.CREATE`]({{< relref "/commands/ts.create/" >}})'s and [`TS.ALTER`]({{< relref "/commands/ts.alter/" >}})'s `CHUNK_SIZE` optional argument.
+1. The `ts-chunk-size-bytes` configuration parameter.
+1. The hard-coded default: `4096`
+
+#### Example
+
+Set the default chunk size to 1024 bytes:
+
+Version < 8.0:
+
+```
+$ redis-server --loadmodule ./redistimeseries.so CHUNK_SIZE_BYTES 1024
+```
+
+Version >= 8.0:
+
+```
+redis> CONFIG SET ts-chunk-size-bytes 1024
+```
+
+### COMPACTION_POLICY / ts-compaction-policy
+
+Default compaction rules for newly created keys with [`TS.ADD`]({{< relref "/commands/ts.add/" >}}), [`TS.INCRBY`]({{< relref "/commands/ts.incrby/" >}}), and [`TS.DECRBY`]({{< relref "/commands/ts.decrby/" >}}).
+
+Type: string
+
+Note that this configuration parameter does not affect keys you create with [`TS.CREATE`]({{< relref "commands/ts.create/" >}}). To understand why, consider the following scenario: Suppose you define a default compaction policy but then want to manually create an additional compaction rule (using [`TS.CREATERULE`]({{< relref "commands/ts.createrule/" >}})), which requires you to first create an empty destination key (using `TS.CREATE`). This approach creates a problem: the default compaction policy would cause Redis to automatically create undesired compactions for the destination key.
+
+Rules are separated by semicolons (`;`), and each rule consists of multiple fields separated by colons (`:`):
+
+* Aggregation type: One of the following:
+
+  | Aggregator | Description |
+  | ---------- | ---------------------------------------------------------------- |
+  | `avg` | Arithmetic mean of all values |
+  | `sum` | Sum of all values |
+  | `min` | Minimum value |
+  | `max` | Maximum value |
+  | `range` | Difference between the highest and the lowest value |
+  | `count` | Number of values |
+  | `first` | The value with the lowest timestamp in the bucket |
+  | `last` | The value with the highest timestamp in the bucket |
+  | `std.p` | Population standard deviation of the values |
+  | `std.s` | Sample standard deviation of the values |
+  | `var.p` | Population variance of the values |
+  | `var.s` | Sample variance of the values |
+  | `twa` | Time-weighted average of all values (since v1.8) |
+
+* Duration of each time bucket - a number and a time unit (for example, one minute is `1M`, `60s`, or `60000m`):
+
+  * m - millisecond
+  * s - seconds
+  * M - minute
+  * h - hour
+  * d - day
+
+* Retention time - a number and a time unit (for example, one minute is `1M`, `60s`, or `60000m`):
+
+  * m - millisecond
+  * s - seconds
+  * M - minute
+  * h - hour
+  * d - day
+
+  `0m`, `0s`, `0M`, `0h`, or `0d` means no expiration.
+ +* (Since v1.8): + + Optional: Time bucket alignment - number and the time representation (Example for one minute: `1M`, `60s`, or `60000m`) + + * m - millisecond + * s - seconds + * M - minute + * h - hour + * d - day + + Ensure that there is a bucket that starts at exactly _alignTimestamp_ after the Epoch and align all other buckets accordingly. Default value: 0 (aligned with the Epoch). Example: if _bucketDuration_ is 24 hours, setting _alignTimestamp_ to `6h` (6 hours after the Epoch) will ensure that each bucket’s timeframe is [06:00 .. 06:00). + +{{% warning %}} +In a clustered environment, if you set this configuration parameter, you must use [hash tags]({{< relref "/operate/oss_and_stack/reference/cluster-spec" >}}#hash-tags) for all time series key names. This ensures that Redis will create each compaction in the same hash slot as its source key. If you don't, the system may fail to compact the data without displaying any error messages. +{{% /warning %}} + +When a compaction policy is defined, compaction rules are created automatically for newly created time series, and the compaction key name would be: + +* If the time bucket alignment is 0: + + _key_agg_dur_ where _key_ is the key of the source time series, _agg_ is the aggregator (in uppercase), and _dur_ is the bucket duration in milliseconds. Example: `key_SUM_60000`. + +* If the time bucket alignment is not 0: + + _key_agg_dur_aln_ where _key_ is the key of the source time series, _agg_ is the aggregator (in uppercase), _dur_ is the bucket duration in milliseconds, and _aln_ is the time bucket alignment in milliseconds. Example: `key_SUM_60000_1000`. + +#### Precedence order + +1. The `ts-compaction-policy` configuration parameter. +1. No compaction rules. + +#### Example rules + +- `max:1M:1h` - Aggregate using `max` over one-minute windows and retain the last hour +- `twa:1d:0m:360M` - Aggregate daily [06:00 .. 06:00) using `twa`; no expiration + +#### Example + +Set a compaction policy composed of 5 compaction rules: + +Version < 8.0: + +``` +$ redis-server --loadmodule ./redistimeseries.so COMPACTION_POLICY max:1m:1h;min:10s:5d:10d;last:5M:10m;avg:2h:10d;avg:3d:100d +``` + +Version >= 8.0: + +``` +redis> CONFIG SET ts-compaction-policy max:1m:1h;min:10s:5d:10d;last:5M:10m;avg:2h:10d;avg:3d:100d +``` + +### DUPLICATE_POLICY / ts-duplicate-policy + +The default policy for handling insertion ([`TS.ADD`]({{< relref "/commands/ts.add/" >}}) and [`TS.MADD`]({{< relref "/commands/ts.madd/" >}})) of multiple samples with identical timestamps, with one of the following values: + + | policy | description | + | ---------- | ---------------------------------------------------------------- | + | `BLOCK` | Ignore any newly reported value and reply with an error | + | `FIRST` | Ignore any newly reported value | + | `LAST` | Override with the newly reported value | + | `MIN` | Only override if the value is lower than the existing value | + | `MAX` | Only override if the value is higher than the existing value | + | `SUM` | If a previous sample exists, add the new sample to it so that the updated value is equal to (previous + new). If no previous sample exists, set the updated value equal to the new value. | + +The default value is applied to each new time series upon its creation. + +Type: string + +#### Precedence order + +Because the duplication policy can be provided at different levels, the actual precedence of the duplication policy is: + +1. 
[`TS.ADD`]({{< relref "/commands/ts.add/" >}})'s `ON_DUPLICATE_POLICY` optional argument. +1. Key-level policy, as set with [`TS.CREATE`]({{< relref "/commands/ts.create/" >}})'s and [`TS.ALTER`]({{< relref "/commands/ts.alter/" >}})'s `DUPLICATE_POLICY` optional argument. +1. The `ts-duplicate-policy` configuration parameter. +1. The hard-coded default: `BLOCK` + +### RETENTION_POLICY / ts-retention-policy + +The default retention period, in milliseconds, for newly created keys. + +The retention period is the maximum age of samples compared to the highest reported timestamp, per key. Samples are expired based solely on the difference between their timestamps and the timestamps passed to subsequent [`TS.ADD`]({{< relref "commands/ts.add/" >}}), [`TS.MADD`]({{< relref "commands/ts.madd/" >}}), [`TS.INCRBY`]({{< relref "commands/ts.incrby/" >}}), and [`TS.DECRBY`]({{< relref "commands/ts.decrby/" >}}) calls. + +Type: integer + +Valid range: `[0 .. 9,223,372,036,854,775,807]` + +The value `0` means no expiration. + +When both `COMPACTION_POLICY` / `ts-compaction-policy` and `RETENTION_POLICY` / `ts-retention-policy` are specified, the retention of newly created compactions is according to the retention time specified in `COMPACTION_POLICY` / `ts-compaction-policy`. + +#### Precedence order + +Because the retention can be provided at different levels, the actual precedence of the retention is: + +1. Key-level retention, as set with [`TS.CREATE`]({{< relref "/commands/ts.create/" >}})'s and [`TS.ALTER`]({{< relref "/commands/ts.alter/" >}})'s `RETENTION` optional argument. +1. The `ts-retention-policy` configuration parameter. +1. No retention. + +#### Example + +Set the default retention to 300 days: + +Version < 8.0: + +``` +$ redis-server --loadmodule ./redistimeseries.so RETENTION_POLICY 25920000000 +``` + +Version >= 8.0: + +``` +redis> CONFIG SET ts-retention-policy 25920000000 +``` + +### ENCODING / ts-encoding + +Note: Before v1.6 this configuration parameter was named `CHUNK_TYPE`. + +Default chunk encoding for automatically created compactions when [ts-compaction-policy](#ts-compaction-policy) is configured. + +Type: string + +Valid values: `COMPRESSED`, `UNCOMPRESSED` + +#### Precedence order + +1. The `ts-encoding` configuration parameter. +1. The hard-coded default: `COMPRESSED` + +#### Example + +Set the default encoding to `UNCOMPRESSED`: + +Version < 8.0: + +``` +$ redis-server --loadmodule ./redistimeseries.so ENCODING UNCOMPRESSED +``` + +Version >= 8.0: + +``` +redis> CONFIG SET ts-encoding UNCOMPRESSED +``` + +### IGNORE_MAX_TIME_DIFF / ts-ignore-max-time-diff and IGNORE_MAX_VAL_DIFF / ts-ignore-max-val-diff + +Default values for newly created keys. + +Types: +- `ts-ignore-max-time-diff`: integer +- `ts-ignore-max-val-diff`: double + +Valid ranges: +- `ts-ignore-max-time-diff`: `[0 .. 9,223,372,036,854,775,807]` +- `ts-ignore-max-val-diff`: `[0 .. 1.7976931348623157e+308]` + +Many sensors report data periodically. Often, the difference between the measured value and the previous measured value is negligible and related to random noise or to measurement accuracy limitations. In such situations it may be preferable not to add the new measurement to the time series. + +A new sample is considered a duplicate and is ignored if the following conditions are met: + +1. The time series is not a compaction. +1. The time series' `ts-duplicate-policy` is `LAST`. +1. The sample is added in-order (`timestamp ≥ max_timestamp`). +1. 
The difference of the current timestamp from the previous timestamp (`timestamp - max_timestamp`) is less than or equal to `ts-ignore-max-time-diff`. +1. The absolute value difference of the current value from the value at the previous maximum timestamp (`abs(value - value_at_max_timestamp`) is less than or equal to `ts-ignore-max-val-diff`. + +where `max_timestamp` is the timestamp of the sample with the largest timestamp in the time series, and `value_at_max_timestamp` is the value at `max_timestamp`. + +#### Precedence order + +1. The `ts-ignore-max-time-diff` and `ts-ignore-max-val-diff` configuration parameters. +1. The hard-coded defaults: `0` and `0.0`. + +#### Example + +Version < 8.0: + +``` +$ redis-server --loadmodule ./redistimeseries.so IGNORE_MAX_TIME_DIFF 10 IGNORE_MAX_VAL_DIFF 0.1 +``` + +Version >= 8.0: + +``` +redis> CONFIG SET ts-ignore-max-time-diff 10 ts-ignore-max-val-diff 0.1 +``` + +### NUM_THREADS / ts-num-threads + +The maximum number of per-shard threads for cross-key queries when using cluster mode ([`TS.MRANGE`]({{< relref "/commands/ts.mrange/" >}}), [`TS.MREVRANGE`]({{< relref "/commands/ts.mrevrange/" >}}), [`TS.MGET`]({{< relref "/commands/ts.mget/" >}}), and [`TS.QUERYINDEX`]({{< relref "/commands/ts.queryindex/" >}})). The value must be equal to or greater than `1`. Note that increasing this value may either increase or decrease the performance! + +Type: integer + +Valid range: `[1..16]` + +Redis Open Source default: `3` + +Redis Software default: Set by plan, and automatically updates when you change your plan. + +Redis Cloud defaults: +- Flexible & Annual: Set by plan +- Free & Fixed: `1` + +#### Example + +Version < 8.0: + +``` +$ redis-server --loadmodule ./redistimeseries.so NUM_THREADS 3 +``` + +Version >= 8.0: + +``` +redis> redis-server --loadmodule ./redistimeseries.so ts-num-threads 3 +``` + +### OSS_GLOBAL_PASSWORD + +Prior to version 8.0, when using time series in a cluster, you had to set the `OSS_GLOBAL_PASSWORD` configuration parameter on all cluster nodes. As of version 8.0, Redis no longer uses this parameter and ignores it if present. Redis now uses a new shared secret mechanism to send internal commands between cluster nodes. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Developing RedisTimeSeries + + ' +linkTitle: Development +title: Development +weight: 4 +--- + +Developing RedisTimeSeries involves setting up the development environment (which can be either Linux-based or macOS-based), building the RedisTimeSeries module, running tests and benchmarks, and debugging both the RedisTimeSeries module and its tests. + +## Cloning the git repository +By invoking the following command, the RedisTimeSeries module and its submodules are cloned: +```sh +git clone --recursive https://github.com/RedisTimeSeries/RedisTimeSeries.git +``` +## Working in an isolated environment +There are several reasons to develop in an isolated environment, like keeping your workstation clean, and developing for a different Linux distribution. +The most general option for an isolated environment is a virtual machine (it's very easy to set one up using [Vagrant](https://www.vagrantup.com)). +Docker is even a more agile solution, as it offers an almost instant solution: +``` +ts=$(docker run -d -it -v $PWD:/build debian:bullseye bash) +docker exec -it $ts bash +``` +Then, from within the container, `cd /build` and go on as usual. 
+In this mode, all installations remain in the scope of the Docker container. +Upon exiting the container, you can either re-invoke the container with the above `docker exec` or commit the state of the container to an image and re-invoke it on a later stage: + +``` +docker commit $ts ts1 +docker stop $ts +ts=$(docker run -d -it -v $PWD:/build ts1 bash) +docker exec -it $ts bash +``` + +## Installing prerequisites +To build and test RedisTimeSeries you needs to install several packages, depending on the underlying OS. Currently, we support the Ubuntu/Debian, CentOS, Fedora, and macOS. + +If you have `gnu make` installed, you can execute +``` +cd RedisTimeSeries +make setup +``` +Alternatively, just invoke the following: +``` +cd RedisTimeSeries +git submodule update --init --recursive +./deps/readies/bin/getpy3 +./system-setup.py +``` +Note that `system-setup.py` **will install various packages on your system** using the native package manager and pip. This requires root permissions (i.e. `sudo`) on Linux. + +If you prefer to avoid that, you can: + +* Review `system-setup.py` and install packages manually, +* Use an isolated environment like explained above, +* Utilize a Python virtual environment, as Python installations known to be sensitive when not used in isolation. + +## Installing Redis +As a rule of thumb, you're better off running the latest Redis version. + +If your OS has a Redis package, you can install it using the OS package manager. + +Otherwise, you can invoke `./deps/readies/bin/getredis`. + +## Getting help +`make help` provides a quick summary of the development features. + +## Building from source +`make` will build RedisTimeSeries. + +Build artifacts are placed into `bin/linux-x64-release` (or similar, according to your platform and build options). + +Use `make clean` to remove built artifacts. `make clean ALL=1` will remove the entire binary artifacts directory. + +## Running Redis with RedisTimeSeries +The following will run `redis` and load the RedisTimeSeries module. +``` +make run +``` +You can open `redis-cli` in another terminal to interact with it. + +## Running tests +The module includes a basic set of unit tests and integration tests: +* C unit tests, located in `src/tests`, run by `make unit_tests`. +* Python integration tests (enabled by RLTest), located in `tests/flow`, run by `make flow_tests`. + +One can run all tests by invoking `make test`. +A single test can be run using the `TEST` parameter, e.g. `make flow_test TEST=file:name`. + +## Debugging +To build for debugging (enabling symbolic information and disabling optimization), run `make DEBUG=1`. +You can the use `make run DEBUG=1` to invoke `gdb`. +In addition to the usual way to set breakpoints in `gdb`, it is possible to use the `BB` macro to set a breakpoint inside the RedisTimeSeries code. It will only have an effect when running under `gdb`. + +Similarly, Python tests in a single-test mode, one can set a breakpoint by using the `BB()` function inside a test. This will invoke `pudb`. + +The two methods can be combined: one can set a breakpoint within a flow test, and when reached, connect `gdb` to a `redis-server` process to debug the module. 
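+
+As a quick reference, a typical debug cycle using the targets described above might look like this (the test file and test name are placeholders):
+
+```
+make DEBUG=1                     # build with symbolic information, optimization disabled
+make run DEBUG=1                 # run redis-server with the module under gdb
+make flow_test TEST=file:name    # run a single flow test; call BB() inside it to break into pudb
+```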
+ +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Out-of-order / backfilled ingestion performance considerations + + ' +linkTitle: Out-of-order / backfilled ingestion performance considerations +title: Out-of-order / backfilled ingestion performance considerations +weight: 5 +--- + +When an older timestamp is inserted into a time series, the chunk of memory corresponding to the new sample’s time frame will potentially have to be retrieved from the main memory (you can read more about these chunks [here](https://redislabs.com/blog/redistimeseries-ga-making-4th-dimension-truly-immersive/)). When this chunk is a compressed chunk, it will also have to be decoded before we can insert/update to it. These are memory-intensive—and in the case of decoding, compute-intensive—operations that will influence the overall achievable ingestion rate. + + +Ingest performance is critical for us, which pushed us to assess and be transparent about the impact of the out-of-order backfilled ratio on our overall high-performance TSDB. + + +To do so, we created a Go benchmark client that enabled us to control key factors that dictate overall system performance, like the out-of-order ratio, the compression of the series, the number of concurrent clients used, and command pipelining. For the full benchmark-driver configuration details and parameters, please refer to this [GitHub link](https://github.com/RedisTimeSeries/redistimeseries-ooo-benchmark). + + +Furthermore, all benchmark variations were run on Amazon Web Services instances, provisioned through our benchmark-testing infrastructure. Both the benchmarking client and database servers were running on separate c5.9xlarge instances. The tests were executed on a single-shard setup, with RedisTimeSeries version 1.4. + + +Below you can see the correlation between achievable ops/sec and out-of-order ratio for both compressed and uncompressed chunks. + + +## Compressed chunks out-of-order/backfilled impact analysis + +With compressed chunks, given that a single out-of-order datapoint implies the full decompression from double delta of the entire chunk, you should expect higher overheads in out-of-order writes. + +As a rule of thumb, to increase out-of-order compressed performance, reduce the chunk size as much as possible. Smaller chunks imply less computation on double-delta decompression and thus less overall impact, with the drawback of smaller compression ratio. + +The graphs and tables below make these key points: + +- If the database receives 1% of out-of-order samples with our current default chunk size in bytes (4096) the overall impact on the ingestion rate should be 10%. + +- At larger out-of-order percentages, like 5%, 10%, or even 25%, the overall impact should be between 35% to 75% fewer ops/sec. At this level of out-of-order percentages, you should really consider reducing the chunk size. + +- We've observed a maximum 95% drop in the achievable ops/sec even at 99% out-of-order ingestion. (Again, reducing the chunk size can cut the impact in half.) 
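+
+For example, if you expect a series to receive a large share of out-of-order writes, you might create it with a smaller chunk size. This is only a sketch; the key name and the 256-byte value are illustrative (the chunk size must be a multiple of 8):
+
+```
+TS.CREATE sensor_ooo CHUNK_SIZE 256
+```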
+
+[Chart: compressed chunks - overall ops/sec vs. out-of-order percentage]
+
+[Chart: compressed chunks - overall p50 latency vs. out-of-order percentage]
+
+[Table: compressed chunks - out-of-order overhead]
+
+## Uncompressed chunks out-of-order/backfilled impact analysis
+
+As visible in the charts and tables below, the chunk size does not affect the overall out-of-order impact on ingestion (meaning that whether the chunk size is 256 bytes or 4096 bytes, the expected impact of out-of-order ingestion is the same, as it should be).
+Apart from that, we can observe the following key take-aways:
+
+- If the database receives 1% of out-of-order samples, the overall impact on the ingestion rate should be low or even unmeasurable.
+
+- At higher out-of-order percentages, like 5%, 10%, or even 25%, the overall impact should be 5% to 19% fewer ops/sec.
+
+- We've observed a maximum 45% drop in the achievable ops/sec, even at 99% out-of-order ingestion.
+
+[Chart: uncompressed chunks - overall ops/sec vs. out-of-order percentage]
+
+[Chart: uncompressed chunks - overall p50 latency vs. out-of-order percentage]
+
+[Table: uncompressed chunks - out-of-order overhead]
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: 'Reference
+
+  '
+linkTitle: Reference
+title: Reference
+weight: 5
+---
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Ingest and query time series data with Redis
+linkTitle: Time series
+stack: true
+title: Time series
+weight: 150
+---
+
+[![Discord](https://img.shields.io/discord/697882427875393627?style=flat-square)](https://discord.gg/KExRgMb)
+[![Github](https://img.shields.io/static/v1?label=&message=repository&color=5961FF&logo=github)](https://github.com/RedisTimeSeries/RedisTimeSeries/)
+
+The Redis time series structure lets you store and query timestamped data points.
+
+Redis time series is available in Redis Open Source, Redis Software, and Redis Cloud.
+See
+[Install Redis Open Source]({{< relref "/operate/oss_and_stack/install/install-stack" >}}) or
+[Install Redis Enterprise]({{< relref "/operate/rs/installing-upgrading/install" >}})
+for full installation instructions.
+
+## Features
+* High volume inserts, low latency reads
+* Query by start time and end time
+* Aggregated queries (min, max, avg, sum, range, count, first, last, STD.P, STD.S, Var.P, Var.S, twa) for any time bucket
+* Configurable maximum retention period
+* Compaction for automatically updated aggregated time series
+* Secondary indexing for time series entries. Each time series has labels (field-value pairs), which allow you to query by labels
+
+## Client libraries
+
+Official and community client libraries in Python, Java, JavaScript, Ruby, Go, C#, Rust, and PHP.
+
+See the [clients page](clients) for the full list.
+
+## Using with other metrics tools
+
+In the [RedisTimeSeries](https://github.com/RedisTimeSeries) GitHub organization you can
+find projects that help you integrate RedisTimeSeries with other tools, including:
+
+1. [Prometheus](https://github.com/RedisTimeSeries/prometheus-redistimeseries-adapter), a read/write adapter to use RedisTimeSeries as a backend db.
+2. [Grafana 7.1+](https://github.com/RedisTimeSeries/grafana-redis-datasource), using the [Redis Data Source](https://redislabs.com/blog/introducing-the-redis-data-source-plug-in-for-grafana/).
+3. [Telegraf](https://github.com/influxdata/telegraf). Download the plugin from [InfluxData](https://portal.influxdata.com/downloads/).
+4. StatsD and Graphite exports using the Graphite protocol.
+ +## Memory model + +A time series is a linked list of memory chunks. Each chunk has a predefined size of samples. Each sample is a 128-bit tuple: 64 bits for the timestamp and 64 bits for the value. + +## Forum + +Got questions? Feel free to ask at the [RedisTimeSeries mailing list](https://forum.redislabs.com/c/modules/redistimeseries). + +## License +RedisTimeSeries is licensed under the [Redis Source Available License 2.0 (RSALv2)](https://redis.com/legal/rsalv2-agreement) or the [Server Side Public License v1 (SSPLv1)](https://www.mongodb.com/licensing/server-side-public-license). +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Introduction to Redis strings + + ' +linkTitle: Strings +title: Redis Strings +weight: 10 +--- + +Redis strings store sequences of bytes, including text, serialized objects, and binary arrays. +As such, strings are the simplest type of value you can associate with +a Redis key. +They're often used for caching, but they support additional functionality that lets you implement counters and perform bitwise operations, too. + +Since Redis keys are strings, when we use the string type as a value too, +we are mapping a string to another string. The string data type is useful +for a number of use cases, like caching HTML fragments or pages. + +{{< clients-example set_tutorial set_get >}} + > SET bike:1 Deimos + OK + > GET bike:1 + "Deimos" +{{< /clients-example >}} + +As you can see using the [`SET`]({{< relref "/commands/set" >}}) and the [`GET`]({{< relref "/commands/get" >}}) commands are the way we set +and retrieve a string value. Note that [`SET`]({{< relref "/commands/set" >}}) will replace any existing value +already stored into the key, in the case that the key already exists, even if +the key is associated with a non-string value. So [`SET`]({{< relref "/commands/set" >}}) performs an assignment. + +Values can be strings (including binary data) of every kind, for instance you +can store a jpeg image inside a value. A value can't be bigger than 512 MB. + +The [`SET`]({{< relref "/commands/set" >}}) command has interesting options, that are provided as additional +arguments. For example, I may ask [`SET`]({{< relref "/commands/set" >}}) to fail if the key already exists, +or the opposite, that it only succeed if the key already exists: + +{{< clients-example set_tutorial setnx_xx >}} + > set bike:1 bike nx + (nil) + > set bike:1 bike xx + OK +{{< /clients-example >}} + +There are a number of other commands for operating on strings. For example +the [`GETSET`]({{< relref "/commands/getset" >}}) command sets a key to a new value, returning the old value as the +result. You can use this command, for example, if you have a +system that increments a Redis key using [`INCR`]({{< relref "/commands/incr" >}}) +every time your web site receives a new visitor. You may want to collect this +information once every hour, without losing a single increment. +You can [`GETSET`]({{< relref "/commands/getset" >}}) the key, assigning it the new value of "0" and reading the +old value back. + +The ability to set or retrieve the value of multiple keys in a single +command is also useful for reduced latency. 
For this reason there are +the [`MSET`]({{< relref "/commands/mset" >}}) and [`MGET`]({{< relref "/commands/mget" >}}) commands: + +{{< clients-example set_tutorial mset >}} + > mset bike:1 "Deimos" bike:2 "Ares" bike:3 "Vanth" + OK + > mget bike:1 bike:2 bike:3 + 1) "Deimos" + 2) "Ares" + 3) "Vanth" +{{< /clients-example >}} + +When [`MGET`]({{< relref "/commands/mget" >}}) is used, Redis returns an array of values. + +### Strings as counters +Even if strings are the basic values of Redis, there are interesting operations +you can perform with them. For instance, one is atomic increment: + +{{< clients-example set_tutorial incr >}} + > set total_crashes 0 + OK + > incr total_crashes + (integer) 1 + > incrby total_crashes 10 + (integer) 11 +{{< /clients-example >}} + +The [`INCR`]({{< relref "/commands/incr" >}}) command parses the string value as an integer, +increments it by one, and finally sets the obtained value as the new value. +There are other similar commands like [`INCRBY`]({{< relref "/commands/incrby" >}}), +[`DECR`]({{< relref "/commands/decr" >}}) and [`DECRBY`]({{< relref "/commands/decrby" >}}). Internally it's +always the same command, acting in a slightly different way. + +What does it mean that INCR is atomic? +That even multiple clients issuing INCR against +the same key will never enter into a race condition. For instance, it will never +happen that client 1 reads "10", client 2 reads "10" at the same time, both +increment to 11, and set the new value to 11. The final value will always be +12 and the read-increment-set operation is performed while all the other +clients are not executing a command at the same time. + + +## Limits + +By default, a single Redis string can be a maximum of 512 MB. + +## Basic commands + +### Getting and setting Strings + +* [`SET`]({{< relref "/commands/set" >}}) stores a string value. +* [`SETNX`]({{< relref "/commands/setnx" >}}) stores a string value only if the key doesn't already exist. Useful for implementing locks. +* [`GET`]({{< relref "/commands/get" >}}) retrieves a string value. +* [`MGET`]({{< relref "/commands/mget" >}}) retrieves multiple string values in a single operation. + +### Managing counters + +* [`INCR`]({{< relref "/commands/incr" >}}) atomically increments counters stored at a given key by 1. +* [`INCRBY`]({{< relref "/commands/incrby" >}}) atomically increments (and decrements when passing a negative number) counters stored at a given key. +* Another command exists for floating point counters: [`INCRBYFLOAT`]({{< relref "/commands/incrbyfloat" >}}). + +### Bitwise operations + +To perform bitwise operations on a string, see the [bitmaps data type]({{< relref "/develop/data-types/bitmaps" >}}) docs. + +See the [complete list of string commands]({{< relref "/commands/" >}}?group=string). + +## Performance + +Most string operations are O(1), which means they're highly efficient. +However, be careful with the [`SUBSTR`]({{< relref "/commands/substr" >}}), [`GETRANGE`]({{< relref "/commands/getrange" >}}), and [`SETRANGE`]({{< relref "/commands/setrange" >}}) commands, which can be O(n). +These random-access string commands may cause performance issues when dealing with large strings. + +## Alternatives + +If you're storing structured data as a serialized string, you may also want to consider Redis [hashes]({{< relref "/develop/data-types/hashes" >}}) or [JSON]({{< relref "/develop/data-types/json/" >}}). 
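+
+If you are using a client library rather than `redis-cli`, the same string commands map directly onto client methods. Here is a brief sketch using the Python `redis` client; the key names are illustrative.
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+
+# Plain assignment and retrieval.
+r.set("bike:1", "Deimos")
+print(r.get("bike:1"))                    # Deimos
+
+# NX: only set the key if it does not already exist.
+print(r.set("bike:1", "Ares", nx=True))   # None, because bike:1 already exists
+
+# XX: only set the key if it already exists.
+print(r.set("bike:1", "Ares", xx=True))   # True
+
+# Atomic counters.
+r.set("total_crashes", 0)
+print(r.incr("total_crashes"))            # 1
+print(r.incrby("total_crashes", 10))      # 11
+```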
+ +## Learn more + +* [Redis Strings Explained](https://www.youtube.com/watch?v=7CUt4yWeRQE) is a short, comprehensive video explainer on Redis strings. +* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis strings in detail. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Introduction to Redis hashes + + ' +linkTitle: Hashes +title: Redis hashes +weight: 40 +--- + +Redis hashes are record types structured as collections of field-value pairs. +You can use hashes to represent basic objects and to store groupings of counters, among other things. + +{{< clients-example hash_tutorial set_get_all >}} +> HSET bike:1 model Deimos brand Ergonom type 'Enduro bikes' price 4972 +(integer) 4 +> HGET bike:1 model +"Deimos" +> HGET bike:1 price +"4972" +> HGETALL bike:1 +1) "model" +2) "Deimos" +3) "brand" +4) "Ergonom" +5) "type" +6) "Enduro bikes" +7) "price" +8) "4972" + +{{< /clients-example >}} + +While hashes are handy to represent *objects*, actually the number of fields you can +put inside a hash has no practical limits (other than available memory), so you can use +hashes in many different ways inside your application. + +The command [`HSET`]({{< relref "/commands/hset" >}}) sets multiple fields of the hash, while [`HGET`]({{< relref "/commands/hget" >}}) retrieves +a single field. [`HMGET`]({{< relref "/commands/hmget" >}}) is similar to [`HGET`]({{< relref "/commands/hget" >}}) but returns an array of values: + +{{< clients-example hash_tutorial hmget >}} +> HMGET bike:1 model price no-such-field +1) "Deimos" +2) "4972" +3) (nil) +{{< /clients-example >}} + +There are commands that are able to perform operations on individual fields +as well, like [`HINCRBY`]({{< relref "/commands/hincrby" >}}): + +{{< clients-example hash_tutorial hincrby >}} +> HINCRBY bike:1 price 100 +(integer) 5072 +> HINCRBY bike:1 price -100 +(integer) 4972 +{{< /clients-example >}} + +You can find the [full list of hash commands in the documentation]({{< relref "/commands#hash" >}}). + +It is worth noting that small hashes (i.e., a few elements with small values) are +encoded in special way in memory that make them very memory efficient. + +## Basic commands + +* [`HSET`]({{< relref "/commands/hset" >}}): sets the value of one or more fields on a hash. +* [`HGET`]({{< relref "/commands/hget" >}}): returns the value at a given field. +* [`HMGET`]({{< relref "/commands/hmget" >}}): returns the values at one or more given fields. +* [`HINCRBY`]({{< relref "/commands/hincrby" >}}): increments the value at a given field by the integer provided. + +See the [complete list of hash commands]({{< relref "/commands/" >}}?group=hash). + +## Examples + +* Store counters for the number of times bike:1 has been ridden, has crashed, or has changed owners: +{{< clients-example hash_tutorial incrby_get_mget >}} +> HINCRBY bike:1:stats rides 1 +(integer) 1 +> HINCRBY bike:1:stats rides 1 +(integer) 2 +> HINCRBY bike:1:stats rides 1 +(integer) 3 +> HINCRBY bike:1:stats crashes 1 +(integer) 1 +> HINCRBY bike:1:stats owners 1 +(integer) 1 +> HGET bike:1:stats rides +"3" +> HMGET bike:1:stats owners crashes +1) "1" +2) "1" +{{< /clients-example >}} + +## Field expiration + +New in Redis Open Source 7.4 is the ability to specify an expiration time or a time-to-live (TTL) value for individual hash fields. 
+This capability is comparable to [key expiration]({{< relref "/develop/use/keyspace#key-expiration" >}}) and includes a number of similar commands. + +Use the following commands to set either an exact expiration time or a TTL value for specific fields: + +* [`HEXPIRE`]({{< relref "/commands/hexpire" >}}): set the remaining TTL in seconds. +* [`HPEXPIRE`]({{< relref "/commands/hpexpire" >}}): set the remaining TTL in milliseconds. +* [`HEXPIREAT`]({{< relref "/commands/hexpireat" >}}): set the expiration time to a timestamp[^1] specified in seconds. +* [`HPEXPIREAT`]({{< relref "/commands/hpexpireat" >}}): set the expiration time to a timestamp specified in milliseconds. + +[^1]: all timestamps are specified in seconds or milliseconds since the [Unix epoch](https://en.wikipedia.org/wiki/Unix_time). + +Use the following commands to retrieve either the exact time when or the remaining TTL until specific fields will expire: + +* [`HEXPIRETIME`]({{< relref "/commands/hexpiretime" >}}): get the expiration time as a timestamp in seconds. +* [`HPEXPIRETIME`]({{< relref "/commands/hpexpiretime" >}}): get the expiration time as a timestamp in milliseconds. +* [`HTTL`]({{< relref "/commands/httl" >}}): get the remaining TTL in seconds. +* [`HPTTL`]({{< relref "/commands/hpttl" >}}): get the remaining TTL in milliseconds. + +Use the following command to remove the expiration of specific fields: + +* [`HPERSIST`]({{< relref "/commands/hpersist" >}}): remove the expiration. + +### Common field expiration use cases + +1. **Event Tracking**: Use a hash key to store events from the last hour. Set each event's TTL to one hour. Use `HLEN` to count events from the past hour. + +1. **Fraud Detection**: Create a hash with hourly counters for events. Set each field's TTL to 48 hours. Query the hash to get the number of events per hour for the last 48 hours. + +1. **Customer Session Management**: Store customer data in hash keys. Create a new hash key for each session and add a session field to the customer’s hash key. Expire both the session key and the session field in the customer’s hash key automatically when the session expires. + +1. **Active Session Tracking**: Store all active sessions in a hash key. Set each session's TTL to expire automatically after inactivity. Use `HLEN` to count active sessions. + +### Field expiration examples + +Support for hash field expiration in the official client libraries is not yet available, but you can test hash field expiration now with beta versions of the [Python (redis-py)](https://github.com/redis/redis-py) and [Java (Jedis)](https://github.com/redis/jedis) client libraries. + +Following are some Python examples that demonstrate how to use field expiration. + +Consider a hash data set for storing sensor data that has the following structure: + +```python +event = { + 'air_quality': 256, + 'battery_level':89 +} + +r.hset('sensor:sensor1', mapping=event) +``` + +In the examples below, you will likely need to refresh the `sensor:sensor1` key after its fields expire. 
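+
+The Python snippets in this section assume that a client connection `r` and the `datetime` helpers have already been set up, along the lines of the following sketch (connection details are illustrative):
+
+```python
+from datetime import datetime, timedelta
+
+import redis
+
+r = redis.Redis(host="localhost", port=6379, decode_responses=True)
+```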
+ +Set and retrieve the TTL for multiple fields in a hash: + +```python +# set the TTL for two hash fields to 60 seconds +r.hexpire('sensor:sensor1', 60, 'air_quality', 'battery_level') +ttl = r.httl('sensor:sensor1', 'air_quality', 'battery_level') +print(ttl) +# prints [60, 60] +``` + +Set and retrieve a hash field's TTL in milliseconds: + +```python +# set the TTL of the 'air_quality' field in milliseconds +r.hpexpire('sensor:sensor1', 60000, 'air_quality') +# and retrieve it +pttl = r.hpttl('sensor:sensor1', 'air_quality') +print(pttl) +# prints [59994] # your actual value may vary +``` + +Set and retrieve a hash field’s expiration timestamp: + +```python +# set the expiration of 'air_quality' to now + 24 hours +# (similar to setting the TTL to 24 hours) +r.hexpireat('sensor:sensor1', + datetime.now() + timedelta(hours=24), + 'air_quality') +# and retrieve it +expire_time = r.hexpiretime('sensor:sensor1', 'air_quality') +print(expire_time) +# prints [1717668041] # your actual value may vary +``` + +## Performance + +Most Redis hash commands are O(1). + +A few commands, such as [`HKEYS`]({{< relref "/commands/hkeys" >}}), [`HVALS`]({{< relref "/commands/hvals" >}}), [`HGETALL`]({{< relref "/commands/hgetall" >}}), and most of the expiration-related commands, are O(n), where _n_ is the number of field-value pairs. + +## Limits + +Every hash can store up to 4,294,967,295 (2^32 - 1) field-value pairs. +In practice, your hashes are limited only by the overall memory on the VMs hosting your Redis deployment. + +## Learn more + +* [Redis Hashes Explained](https://www.youtube.com/watch?v=-KdITaRkQ-U) is a short, comprehensive video explainer covering Redis hashes. +* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis hashes in detail.--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'HyperLogLog is a probabilistic data structure that estimates the cardinality + of a set. + + ' +linkTitle: HyperLogLog +title: HyperLogLog +weight: 1 +--- + +HyperLogLog is a probabilistic data structure that estimates the cardinality of a set. As a probabilistic data structure, HyperLogLog trades perfect accuracy for efficient space utilization. + +The Redis HyperLogLog implementation uses up to 12 KB and provides a standard error of 0.81%. + +Counting unique items usually requires an amount of memory +proportional to the number of items you want to count, because you need +to remember the elements you have already seen in the past in order to avoid +counting them multiple times. However, a set of algorithms exist that trade +memory for precision: they return an estimated measure with a standard error, +which, in the case of the Redis implementation for HyperLogLog, is less than 1%. +The magic of this algorithm is that you no longer need to use an amount of memory +proportional to the number of items counted, and instead can use a +constant amount of memory; 12k bytes in the worst case, or a lot less if your +HyperLogLog (We'll just call them HLL from now) has seen very few elements. + +HLLs in Redis, while technically a different data structure, are encoded +as a Redis string, so you can call [`GET`]({{< relref "/commands/get" >}}) to serialize a HLL, and [`SET`]({{< relref "/commands/set" >}}) +to deserialize it back to the server. + +Conceptually the HLL API is like using Sets to do the same task. 
You would +[`SADD`]({{< relref "/commands/sadd" >}}) every observed element into a set, and would use [`SCARD`]({{< relref "/commands/scard" >}}) to check the +number of elements inside the set, which are unique since [`SADD`]({{< relref "/commands/sadd" >}}) will not +re-add an existing element. + +While you don't really *add items* into an HLL, because the data structure +only contains a state that does not include actual elements, the API is the +same: + +* Every time you see a new element, you add it to the count with [`PFADD`]({{< relref "/commands/pfadd" >}}). +* When you want to retrieve the current approximation of unique elements added using the [`PFADD`]({{< relref "/commands/pfadd" >}}) command, you can use the [`PFCOUNT`]({{< relref "/commands/pfcount" >}}) command. If you need to merge two different HLLs, the [`PFMERGE`]({{< relref "/commands/pfmerge" >}}) command is available. Since HLLs provide approximate counts of unique elements, the result of the merge will give you an approximation of the number of unique elements across both source HLLs. + +{{< clients-example hll_tutorial pfadd >}} +> PFADD bikes Hyperion Deimos Phoebe Quaoar +(integer) 1 +> PFCOUNT bikes +(integer) 4 +> PFADD commuter_bikes Salacia Mimas Quaoar +(integer) 1 +> PFMERGE all_bikes bikes commuter_bikes +OK +> PFCOUNT all_bikes +(integer) 6 +{{< /clients-example >}} + +Some examples of use cases for this data structure is counting unique queries +performed by users in a search form every day, number of unique visitors to a web page and other similar cases. + +Redis is also able to perform the union of HLLs, please check the +[full documentation]({{< relref "/commands#hyperloglog" >}}) for more information. + +## Use cases + +**Anonymous unique visits of a web page (SaaS, analytics tools)** + +This application answers these questions: + +- How many unique visits has this page had on this day? +- How many unique users have played this song? +- How many unique users have viewed this video? + +{{% alert title="Note" color="warning" %}} + +Storing the IP address or any other kind of personal identifier is against the law in some countries, which makes it impossible to get unique visitor statistics on your website. + +{{% /alert %}} + +One HyperLogLog is created per page (video/song) per period, and every IP/identifier is added to it on every visit. + +## Basic commands + +* [`PFADD`]({{< relref "/commands/pfadd" >}}) adds an item to a HyperLogLog. +* [`PFCOUNT`]({{< relref "/commands/pfcount" >}}) returns an estimate of the number of items in the set. +* [`PFMERGE`]({{< relref "/commands/pfmerge" >}}) combines two or more HyperLogLogs into one. + +See the [complete list of HyperLogLog commands]({{< relref "/commands/" >}}?group=hyperloglog). + +## Performance + +Writing ([`PFADD`]({{< relref "/commands/pfadd" >}})) to and reading from ([`PFCOUNT`]({{< relref "/commands/pfcount" >}})) the HyperLogLog is done in constant time and space. +Merging HLLs is O(n), where _n_ is the number of sketches. + +## Limits + +The HyperLogLog can estimate the cardinality of sets with up to 18,446,744,073,709,551,616 (2^64) members. + +## Learn more + +* [Redis new data structure: the HyperLogLog](http://antirez.com/news/75) has a lot of details about the data structure and its implementation in Redis. +* [Redis HyperLogLog Explained](https://www.youtube.com/watch?v=MunL8nnwscQ) shows you how to use Redis HyperLogLog data structures to build a traffic heat map. 
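+
+To make the unique-visitors pattern above concrete, here is a short sketch using the Python `redis` client; the key naming scheme is illustrative.
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+
+def record_visit(page: str, day: str, visitor_id: str) -> None:
+    # PFADD only updates the HLL registers; the visitor id itself is not stored.
+    r.pfadd(f"hll:visits:{page}:{day}", visitor_id)
+
+def unique_visits(page: str, day: str) -> int:
+    return r.pfcount(f"hll:visits:{page}:{day}")
+
+record_visit("home", "2024-05-01", "user:42")
+record_visit("home", "2024-05-01", "user:42")   # duplicate, not counted twice
+record_visit("home", "2024-05-01", "user:99")
+print(unique_visits("home", "2024-05-01"))      # 2 (estimated)
+
+# Merge several daily counters into a weekly estimate.
+r.pfmerge("hll:visits:home:2024-w18",
+          "hll:visits:home:2024-05-01", "hll:visits:home:2024-05-02")
+```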
+ +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Bloom filters are a probabilistic data structure that checks for presence + of an item in a set +linkTitle: Bloom filter +stack: true +title: Bloom filter +weight: 10 +--- + +A Bloom filter is a probabilistic data structure in Redis Open Source that enables you to check if an element is present in a set using a very small memory space of a fixed size. + +Instead of storing all the items in a set, a Bloom Filter stores only the items' hashed representations, thus sacrificing some precision. The trade-off is that Bloom Filters are very space-efficient and fast. + +A Bloom filter can guarantee the absence of an item from a set, but it can only give an estimation about its presence. So when it responds that an item is not present in a set (a negative answer), you can be sure that indeed is the case. But one out of every N positive answers will be wrong. Even though it looks unusual at first glance, this kind of uncertainty still has its place in computer science. There are many cases out there where a negative answer will prevent more costly operations, for example checking if a username has been taken, if a credit card has been reported as stolen, if a user has already seen an ad and much more. + +## Use cases + +**Financial fraud detection (finance)** + +This application answers the question, "Has the user paid from this location before?", thus checking for suspicious activity in their users' shopping habits. + +Use one Bloom filter per user, checked for every transaction. Provide an extremely fast response (local latency). Replicate in different regions in case the user moves. Prevent decreasing performance with scale. + +Using the Redis Bloom filter for this type of application provides these benefits: + +- Fast transaction completion +- Decreased possibility for transaction to break in case of network partitions (connection needs to be kept open for a shorter time) +- Extra layer of security for both credit card owners and retailers + +Other questions a Bloom filter can help answer in the finance industry are: + +- Has the user ever made purchases in this category of products/services? +- Do I need to skip some security steps when the user is buying with a vetted online shop (big retailers like Amazon, Apple app store...)? +- Has this credit card been reported as lost/stolen? An additional benefit of using a Bloom filter in the last case is that financial organizations can exchange their lists of stolen/blocked credit card numbers without revealing the numbers themselves. + +**Ad placement (retail, advertising)** + +This application answers these questions: + +- Has the user already seen this ad? +- Has the user already bought this product? + +Use a Bloom filter for every user, storing all bought products. The recommendation engine suggests a new product and checks if the product is in the user's Bloom filter. + +- If no, the ad is shown to the user and is added to the Bloom filter. +- If yes, the process restarts and repeats until it finds a product that is not present in the filter. + +Using the Redis Bloom filter for this type of application provides these benefits: + +- Cost efficient way to a customized near real-time experience +- No need to invest in expensive infrastructure + +**Check if a username is taken (SaaS, content publishing platforms)** + +This application answers this question: Has this username/email/domain name/slug already been used? 
+ +Use a Bloom filter for every username that has signed up. A new user types in the desired username. The app checks if the username exists in the Bloom filter. + +- If no, the user is created and the username is added to the Bloom filter. +- If yes, the app can decide to either check the main database or reject the username. + +The query time stays the same at scale. + +Using the Redis Bloom filter for this type of application provides these benefits: + +- Very fast and efficient way to do a common operation +- No need to invest in expensive infrastructure + +## Example + +Consider a bike manufacturer that makes a million different kinds of bikes and you'd like to avoid using a duplicate model name in new models. A Bloom filter can be used to detect duplicates. In the example that follows, you'll create a filter with space for a million entries and with a 0.1% error rate. Add one model name and check if it exists. Then add multiple model names and check if they exist. + + +{{< clients-example bf_tutorial bloom >}} +> BF.RESERVE bikes:models 0.001 1000000 +OK +> BF.ADD bikes:models "Smoky Mountain Striker" +(integer) 1 +> BF.EXISTS bikes:models "Smoky Mountain Striker" +(integer) 1 +> BF.MADD bikes:models "Rocky Mountain Racer" "Cloudy City Cruiser" "Windy City Wippet" +1) (integer) 1 +2) (integer) 1 +3) (integer) 1 +> BF.MEXISTS bikes:models "Rocky Mountain Racer" "Cloudy City Cruiser" "Windy City Wippet" +1) (integer) 1 +2) (integer) 1 +3) (integer) 1 +{{< /clients-example >}} + +Note: there is always a chance that even with just a few items, there could be a false positive, meaning an item could "exist" even though it has not been explicitly added to the Bloom filter. For a more in depth understanding of the probabilistic nature of a Bloom filter, check out the blog posts linked at the bottom of this page. + +## Reserving Bloom filters +With the Redis Bloom filter, most of the sizing work is done for you: + +``` +BF.RESERVE {key} {error_rate} {capacity} [EXPANSION expansion] [NONSCALING] +``` + +#### 1. False positives rate (`error_rate`) +The rate is a decimal value between 0 and 1. For example, for a desired false positive rate of 0.1% (1 in 1000), error_rate should be set to 0.001. + +#### 2. Expected capacity (`capacity`) +This is the number of items you expect having in your filter in total and is trivial when you have a static set but it becomes more challenging when your set grows over time. It's important to get the number right because if you **oversize** - you'll end up wasting memory. If you **undersize**, the filter will fill up and a new one will have to be stacked on top of it (sub-filter stacking). In the cases when a filter consists of multiple sub-filters stacked on top of each other latency for adds stays the same, but the latency for presence checks increases. The reason for this is the way the checks work: a regular check would first be performed on the top (latest) filter and if a negative answer is returned the next one is checked and so on. That's where the added latency comes from. + +#### 3. Scaling (`EXPANSION`) +Adding an item to a Bloom filter never fails due to the data structure "filling up". Instead, the error rate starts to grow. To keep the error close to the one set on filter initialization, the Bloom filter will auto-scale, meaning, when capacity is reached, an additional sub-filter will be created. + The size of the new sub-filter is the size of the last sub-filter multiplied by `EXPANSION`. 
If the number of items to be stored in the filter is unknown, we recommend that you use an expansion of 2 or more to reduce the number of sub-filters. Otherwise, we recommend that you use an expansion of 1 to reduce memory consumption. The default expansion value is 2. + + The filter will keep adding more hash functions for every new sub-filter in order to keep your desired error rate. + +Maybe you're wondering "Why would I create a smaller filter with a high expansion rate if I know I'm going to scale anyway?"; the answer is: for cases where you need to keep many filters (let's say a filter per user, or per product) and most of them will stay small, but some with more activity will have to scale. + +#### 4. `NONSCALING` +If you know you're not going to scale use the `NONSCALING` flag because that way the filter will use one hash function less. Just remember that if you ever do reach the initially assigned capacity - your error rate will start to grow. + + +### Total size of a Bloom filter +The actual memory used by a Bloom filter is a function of the chosen error rate: + +The optimal number of hash functions is `ceil(-ln(error_rate) / ln(2))`. + +The required number of bits per item, given the desired `error_rate` and the optimal number of hash functions, is `-ln(error_rate) / ln(2)^2`. Hence, the required number of bits in the filter is `capacity * -ln(error_rate) / ln(2)^2`. + +* **1%** error rate requires 7 hash functions and 9.585 bits per item. +* **0.1%** error rate requires 10 hash functions and 14.378 bits per item. +* **0.01%** error rate requires 14 hash functions and 19.170 bits per item. + +Just as a comparison, when using a Redis set for membership testing the memory needed is: + +``` +memory_with_sets = capacity*(192b + value) +``` + +For a set of IP addresses, for example, we would have around 40 bytes (320 bits) per item - considerably higher than the 19.170 bits we need for a Bloom filter with a 0.01% false positives rate. + + +## Bloom vs. Cuckoo filters +Bloom filters typically exhibit better performance and scalability when inserting +items (so if you're often adding items to your dataset, then a Bloom filter may be ideal). +Cuckoo filters are quicker on check operations and also allow deletions. + + +## Performance + +Insertion in a Bloom filter is O(K), where `k` is the number of hash functions. + +Checking for an item is O(K) or O(K*n) for stacked filters, where n is the number of stacked filters. + + +## Academic sources +- [Space/Time Trade-offs in Hash Coding with Allowable Errors](http://www.dragonwins.com/domains/getteched/bbc/literature/Bloom70.pdf) by Burton H. Bloom. +- [Scalable Bloom Filters](https://gsd.di.uminho.pt/members/cbm/ps/dbloom.pdf) + +## References +### Webinars +1. [Probabilistic Data Structures - The most useful thing in Redis you probably aren't using](https://youtu.be/dq-0xagF7v8?t=102) + +### Blog posts +1. [RedisBloom Quick Start Tutorial](https://docs.redis.com/latest/modules/redisbloom/redisbloom-quickstart/) +1. [Developing with Bloom Filters](https://redis.io/blog/bloom-filter/) +1. [RedisBloom on Redis Enterprise](https://redis.com/redis-enterprise/redis-bloom/) +1. [Probably and No: Redis, RedisBloom, and Bloom Filters](https://redis.com/blog/redis-redisbloom-bloom-filters/) +1. 
[RedisBloom – Bloom Filter Datatype for Redis](https://redis.com/blog/rebloom-bloom-filter-datatype-redis/) +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Count-min sketch is a probabilistic data structure that estimates the + frequency of an element in a data stream. +linkTitle: Count-min sketch +stack: true +title: Count-min sketch +weight: 60 +--- + +Count-Min Sketch is a probabilistic data structure in Redis Open Source that can be used to estimate the frequency of events/elements in a stream of data. + +It uses a sub-linear space at the expense of over-counting some events due to collisions. It consumes a stream of events/elements and keeps estimated counters of their frequency. + +It is very important to know that the results coming from a Count-Min sketch lower than a certain threshold (determined by the error_rate) should be ignored and often even approximated to zero. So Count-Min sketch is indeed a data-structure for counting frequencies of elements in a stream, but it's only useful for higher counts. Very low counts should be ignored as noise. + +## Use cases + +**Products (retail, online shops)** + +This application answers this question: What was the sales volume (on a certain day) for a product? + +Use one Count-Min sketch created per day (period). Every product sale goes into the CMS. The CMS give reasonably accurate results for the products that contribute the most toward the sales. Products with low percentage of the total sales are ignored. + +## Examples +Assume you select an error rate of 0.1% (0.001) with a certainty of 99.8% (0.998). This means you have an error probability of 0.02% (0.002). Your sketch strives to keep the error within 0.1% of the total count of all elements you've added. There's a 0.02% chance the error might exceed this—like when an element below the threshold overlaps with one above it. When you add a few items to the CMS and evaluate their frequency, remember that in such a small sample, collisions are rare, as seen with other probabilistic data structures. + +{{< clients-example cms_tutorial cms >}} +> CMS.INITBYPROB bikes:profit 0.001 0.002 +OK +> CMS.INCRBY bikes:profit "Smokey Mountain Striker" 100 +(integer) 100 +> CMS.INCRBY bikes:profit "Rocky Mountain Racer" 200 "Cloudy City Cruiser" 150 +1) (integer) 200 +2) (integer) 150 +> CMS.QUERY bikes:profit "Smokey Mountain Striker" "Rocky Mountain Racer" "Cloudy City Cruiser" "Terrible Bike Name" +1) (integer) 100 +2) (integer) 200 +3) (integer) 150 +4) (integer) 0 +> CMS.INFO bikes:profit +1) width +2) (integer) 2000 +3) depth +4) (integer) 9 +5) count +6) (integer) 450 +{{< /clients-example >}} + +##### Example 1: +If we had a uniform distribution of 1000 elements where each has a count of around 500 the threshold would be 500: + +``` +threshold = error * total_count = 0.001 * (1000*500) = 500 +``` + +This shows that a CMS is maybe not the best data structure to count frequency of a uniformly distributed stream. +Let's try decreasing the error to 0.01%: + +``` +threshold = error * total_count = 0.0001 * (1000*500) = 100 +``` +This threshold looks more acceptable already, but it means we will need a bigger sketch width `w = 2/error = 20 000` and consequently - more memory. 
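+
+These sizing calculations are easy to script while you experiment with parameters. Below is a small plain-Python sketch that reproduces the figures used in the example above (the element counts are illustrative):
+
+```python
+import math
+
+def cms_threshold(error: float, total_count: int) -> float:
+    # Counts below this value should be treated as noise.
+    return error * total_count
+
+def cms_width(error: float) -> int:
+    # Sketch width for a given error rate: w = 2 / error.
+    return math.ceil(2 / error)
+
+# 1000 elements with an average count of about 500 each.
+total_count = 1000 * 500
+
+print(cms_threshold(0.001, total_count))   # 500.0
+print(cms_width(0.001))                    # 2000
+print(cms_width(0.0001))                   # 20000 -> a wider sketch, more memory
+```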
+ +##### Example 2: +In another example let's imagine a normal (gaussian) distribution where we have 1000 elements, out of which 800 will have a summed count of 400K (with an average count of 500) and 200 elements will have a much higher summed count of 1.6M (with an average count of 8000), making them the heavy hitters (elephant flow). The threshold after "populating" the sketch with all the 1000 elements would be: + +``` +threshold = error * total_count = 0.001 * 2M = 2000 +``` + +This threshold seems to be sitting comfortably between the 2 average counts 500 and 8000 so the initial chosen error rate should be working well for this case. + + +## Sizing + +Even though the Count-Min sketch is similar to Bloom filter in many ways, its sizing is considerably more complex. The initialisation command receives only two sizing parameters, but you have to understand them thoroughly if you want to have a usable sketch. + +``` +CMS.INITBYPROB key error probability +``` + +### 1. Error + +The `error` parameter will determine the width `w` of your sketch and the probability will determine the number of hash functions (depth `d`). The error rate we choose will determine the threshold above which we can trust the result from the sketch. The correlation is: +``` +threshold = error * total_count +``` +or +``` +error = threshold/total_count +``` + +where `total_count` is the sum of the count of all elements that can be obtained from the `count` key of the result of the [`CMS.INFO`]({{< relref "commands/cms.info/" >}}) command and is of course dynamic - it changes with every new increment in the sketch. At creation time you can approximate the `total_count` ratio as a product of the average count you'll be expecting in the sketch and the average number of elements. + +Since the threshold is a function of the total count in the filter it's very important to note that it will grow as the count grows, but knowing the total count we can always dynamically calculate the threshold. If a result is below it - it can be discarded. + + +### 2. Probability + +`probability` in this data structure represents the chance of an element that has a count below the threshold to collide with elements that had a count above the threshold on all sketches/depths thus returning a min-count of a frequently occurring element instead of its own. + + + +## Performance +Adding, updating and querying for elements in a CMS has a time complexity O(1). + + +## Academic sources +- [An Improved Data Stream Summary: The Count-Min Sketch and its Applications](http://dimacs.rutgers.edu/~graham/pubs/papers/cm-full.pdf) + +## References +- [Count-Min Sketch: The Art and Science of Estimating Stuff](https://redis.com/blog/count-min-sketch-the-art-and-science-of-estimating-stuff/) +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: t-digest is a probabilistic data structure that allows you to estimate + the percentile of a data stream. +linkTitle: t-digest +stack: true +title: t-digest +weight: 40 +--- + +The t-digest is a sketch data structure in Redis Open Source for estimating percentiles from a data stream or a large dataset using a compact sketch. + +It can answer questions like: +- Which fraction of the values in the data stream are smaller than a given value? +- How many values in the data stream are smaller than a given value? +- What's the highest value that's smaller than *p* percent of the values in the data stream? (what is the p-percentile value)? + + +### What is t-digest? 
+t-digest is a data structure that will estimate a percentile point without having to store and order all the data points in a set. For example: to answer the question "What's the average latency for 99% of my database operations" we would have to store the average latency for every user, order the values, cut out the last 1% and only then find the average value of all the rest. This kind of process is costly not just in terms of the processing needed to order those values but also in terms of the space needed to store them. Those are precisely the problems t-digest solves. + +t-digest can also be used to estimate other values related to percentiles, like trimmed means. + +> A **trimmed mean** is the mean value from the sketch, excluding observation values outside the low and high cutoff percentiles. For example, a 0.1 trimmed mean is the mean value of the sketch, excluding the lowest 10% and the highest 10% of the values. + +## Use cases + +**Hardware/software monitoring** + +You measure your online server response latency, and you like to query: + +- What are the 50th, 90th, and 99th percentiles of the measured latencies? + +- Which fraction of the measured latencies are less than 25 milliseconds? + +- What is the mean latency, ignoring outliers? or What is the mean latency between the 10th and the 90th percentile? + +**Online gaming** + +Millions of people are playing a game on your online gaming platform, and you want to give the following information to each player? + +- Your score is better than x percent of the game sessions played. + +- There were about y game sessions where people scored larger than you. + +- To have a better score than 90% of the games played, your score should be z. + +**Network traffic monitoring** + +You measure the IP packets transferred over your network each second and try to detect denial-of-service attacks by asking: + +- Does the number of packets in the last second exceed 99% of previously observed values? + +- How many packets do I expect to see under _normal_ network conditions? +(Answer: between x and y, where x represents the 1st percentile and y represents the 99th percentile). + +**Predictive maintenance** + +- Was the measured parameter (noise level, current consumption, etc.) irregular? (not within the [1st percentile...99th percentile] range)? + +- To which values should I set my alerts? + + +## Examples + +In the following example, you'll create a t-digest with a compression of 100 and add items to it. The `COMPRESSION` argument is used to specify the tradeoff between accuracy and memory consumption. The default value is 100. Higher values mean more accuracy. Note: unlike some of the other probabilistic data structures, the [`TDIGEST.ADD`]({{< relref "commands/tdigest.add/" >}}) command will not create a new structure if the key does not exist. + +{{< clients-example tdigest_tutorial tdig_start >}} +> TDIGEST.CREATE bikes:sales COMPRESSION 100 +OK +> TDIGEST.ADD bikes:sales 21 +OK +> TDIGEST.ADD bikes:sales 150 95 75 34 +OK +{{< /clients-example >}} + + +You can repeat calling [TDIGEST.ADD]({{< relref "commands/tdigest.add" >}}) whenever new observations are available + +#### Estimating fractions or ranks by values + +Another helpful feature in t-digest is CDF (definition of rank) which gives us the fraction of observations smaller or equal to a certain value. This command is very useful to answer questions like "*What's the percentage of observations with a value lower or equal to X*". 
+ +>More precisely, [`TDIGEST.CDF`]({{< relref "commands/tdigest.cdf/" >}}) will return the estimated fraction of observations in the sketch that are smaller than X plus half the number of observations that are equal to X. We can also use the [`TDIGEST.RANK`]({{< relref "commands/tdigest.rank/" >}}) command, which is very similar. Instead of returning a fraction, it returns the ----estimated---- rank of a value. The [`TDIGEST.RANK`]({{< relref "commands/tdigest.rank/" >}}) command is also variadic, meaning you can use a single command to retrieve estimations for one or more values. + +Here's an example. Given a set of biker's ages, you can ask a question like "What's the percentage of bike racers that are younger than 50 years?" + +{{< clients-example tdigest_tutorial tdig_cdf >}} +> TDIGEST.CREATE racer_ages +OK +> TDIGEST.ADD racer_ages 45.88 44.2 58.03 19.76 39.84 69.28 50.97 25.41 19.27 85.71 42.63 +OK +> TDIGEST.CDF racer_ages 50 +1) "0.63636363636363635" +> TDIGEST.RANK racer_ages 50 +1) (integer) 7 +> TDIGEST.RANK racer_ages 50 40 +1) (integer) 7 +2) (integer) 4 +{{< /clients-example >}} + + +And lastly, `TDIGEST.REVRANK key value...` is similar to [TDIGEST.RANK]({{< relref "commands/tdigest.rank" >}}), but returns, for each input value, an estimation of the number of (observations larger than a given value + half the observations equal to the given value). + + +#### Estimating values by fractions or ranks + +`TDIGEST.QUANTILE key fraction...` returns, for each input fraction, an estimation of the value (floating point) that is smaller than the given fraction of observations. `TDIGEST.BYRANK key rank...` returns, for each input rank, an estimation of the value (floating point) with that rank. + +{{< clients-example tdigest_tutorial tdig_quant >}} +> TDIGEST.QUANTILE racer_ages .5 +1) "44.200000000000003" +> TDIGEST.BYRANK racer_ages 4 +1) "42.630000000000003" +{{< /clients-example >}} + +`TDIGEST.BYREVRANK key rank...` returns, for each input **reverse rank**, an estimation of the **value** (floating point) with that reverse rank. + +#### Estimating trimmed mean + +Use `TDIGEST.TRIMMED_MEAN key lowFraction highFraction` to retrieve an estimation of the mean value between the specified fractions. + +This is especially useful for calculating the average value ignoring outliers. For example - calculating the average value between the 20th percentile and the 80th percentile. + +#### Merging sketches + +Sometimes it is useful to merge sketches. For example, suppose we measure latencies for 3 servers, and we want to calculate the 90%, 95%, and 99% latencies for all the servers combined. + +`TDIGEST.MERGE destKey numKeys sourceKey... [COMPRESSION compression] [OVERRIDE]` merges multiple sketches into a single sketch. + +If `destKey` does not exist - a new sketch is created. + +If `destKey` is an existing sketch, its values are merged with the values of the source keys. To override the destination key contents, use `OVERRIDE`. + +#### Retrieving sketch information + +Use [`TDIGEST.MIN`]({{< relref "commands/tdigest.min/" >}}) and [`TDIGEST.MAX`]({{< relref "commands/tdigest.max/" >}}) to retrieve the minimal and maximal values in the sketch, respectively. + +{{< clients-example tdigest_tutorial tdig_min >}} +> TDIGEST.MIN racer_ages +"19.27" +> TDIGEST.MAX racer_ages +"85.709999999999994" +{{< /clients-example >}} + +Both return `nan` when the sketch is empty. 
+ +Both commands return accurate results and are equivalent to `TDIGEST.BYRANK racer_ages 0` and `TDIGEST.BYREVRANK racer_ages 0`, respectively. + +Use `TDIGEST.INFO racer_ages` to retrieve some additional information about the sketch. + +#### Resetting a sketch + +{{< clients-example tdigest_tutorial tdig_reset >}} +> TDIGEST.RESET racer_ages +OK +{{< /clients-example >}} + +## Academic sources +- [The _t_-digest: Efficient estimates of distributions](https://www.sciencedirect.com/science/article/pii/S2665963820300403) + +## References +- [t-digest: A New Probabilistic Data Structure in Redis Stack](https://redis.com/blog/t-digest-in-redis-stack/) +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Redis probabilistic data structures support multiple configuration parameters. +linkTitle: Configuration +title: Configuration Parameters +weight: 100 +--- +{{< note >}} +As of Redis 8 in Redis Open Source (Redis 8), configuration parameters for the probabilistic data structures are now set in the following ways: +* At load time via your `redis.conf` file. +* At run time (where applicable) using the [`CONFIG SET`]({{< relref "/commands/config-set" >}}) command. + +Also, Redis 8 persists probabilistic configuration parameters just like any other configuration parameters (e.g., using the [`CONFIG REWRITE`]({{< relref "/commands/config-rewrite/" >}}) command). +{{< /note >}} + + +## Redis probabilistic data structure configuration parameters + +The following table summarizes which Bloom filter configuration parameters can be set at run-time, and compatibility with Redis Software and Redis Cloud. + +| Parameter name
(version < 8.0) | Parameter name (version ≥ 8.0) | Run-time | Redis Software | Redis Cloud |
+| :------- | :------- | :------- | :------- | :------- |
+| ERROR_RATE | [bf-error-rate](#bf-error-rate) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual, ❌ Free & Fixed |
+| | [bf-expansion-factor](#bf-expansion-factor) | :white_check_mark: |||
+| INITIAL_SIZE | [bf-initial-size](#bf-initial-size) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual, ❌ Free & Fixed |
+
+The following table summarizes which Cuckoo filter configuration parameters can be set at run-time, and compatibility with Redis Software and Redis Cloud.
+
+| Parameter name (version < 8.0) | Parameter name (version ≥ 8.0) | Run-time | Redis Software | Redis Cloud |
+| :------- | :------- | :------- | :------- | :------- |
+| | [cf-bucket-size](#cf-bucket-size) | :white_check_mark: |||
+| | [cf-initial-size](#cf-initial-size) | :white_check_mark: |||
+| | [cf-expansion-factor](#cf-expansion-factor) | :white_check_mark: |||
+| CF_MAX_EXPANSIONS | [cf-max-expansions](#cf-max-expansions) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual,
❌ Free & Fixed | +| | [cf-max-iterations](#cf-max-iterations) | :white_check_mark: ||| + +{{< note >}} +Parameter names for Redis Open Source versions < 8.0, while deprecated, will still be supported in Redis 8. +{{< /note >}} + +--- + +{{< warning >}} +A filter should always be sized for the expected capacity and the desired error rate. +Using the `INSERT` family commands with the default values should be used in cases where many small filters exist and the expectation is most will remain at around the default sizes. +Not optimizing a filter for its intended use will result in degradation of performance and memory efficiency. +{{< /warning >}} + +## Default parameters for Bloom filters + +### bf-error-rate + +Default false positive rate for Bloom filters. + +Type: double + +Valid range: `(0 .. 1)`. Though the valid range is `(0 .. 1)` (corresponding to `> 0%` to `< 100%` false positive rate), any value greater than `0.25` is treated as `0.25`. + +Default: `0.01` + +### bf-expansion-factor + +Added in v8.0.0. + +Expansion factor for Bloom filters. + +Type: integer + +Valid range: `[0 .. 32768]`. + +Default: `2` + +### bf-initial-size + +Initial capacity for Bloom filters. + +Type: integer + +Valid range: `[1 .. 1048576]` + +Default: `100` + +## Default parameters for Cuckoo filters + +### cf-bucket-size + +Added in v8.0.0. + +The number of items in each Cuckoo filter bucket. + +Type: integer + +Valid range: `[1 .. 255]` + +Default: `2` + +### cf-initial-size + +Added in v8.0.0. + +Cuckoo filter initial capacity. + +Type: integer + +Valid range: `[2*cf-bucket-size .. 1048576]` + +Default: `1024` + +### cf-expansion-factor + +Added in v8.0.0. + +Expansion factor for Cuckoo filters. + +Type: integer + +Valid range: `[0 .. 32768]` + +Default: `1` + +### cf-max-expansions + +The maximum number of expansions for Cuckoo filters. + +Type: integer + +Valid range: `[1 .. 65535]` + +Default: `32` + +### cf-max-iterations + +Added in v8.0.0 + +The maximum number of iterations for Cuckoo filters. + +Type: integer + +Valid range: `[1 .. 65535]` + +Default: `20` + +## Setting configuration parameters on module load (deprecated) + +These methods are deprecated beginning with Redis 8. + +Setting configuration parameters at load-time is done by appending arguments after the `--loadmodule` argument when starting a server from the command line or after the `loadmodule` directive in a Redis config file. For example: + +In [redis.conf]({{< relref "/operate/oss_and_stack/management/config" >}}): + +```sh +loadmodule ./redisbloom.so [OPT VAL]... +``` + +From the [Redis CLI]({{< relref "/develop/tools/cli" >}}), using the [MODULE LOAD]({{< relref "/commands/module-load" >}}) command: + +``` +127.0.0.6379> MODULE LOAD redisbloom.so [OPT VAL]... +``` + +From the command line: + +```sh +$ redis-server --loadmodule ./redisbloom.so [OPT VAL]... +```--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Top-K is a probabilistic data structure that allows you to find the most + frequent items in a data stream. +linkTitle: Top-K +stack: true +title: Top-K +weight: 50 +--- + +Top K is a probabilistic data structure in Redis Open Source used to estimate the `K` highest-rank elements from a stream. 
+ +"Highest-rank" in this case means "elements with a highest number or score attached to them", where the score can be a count of how many times the element has appeared in the stream - thus making the data structure perfect for finding the elements with the highest frequency in a stream. +One very common application is detecting network anomalies and DDoS attacks where Top K can answer the question: Is there a sudden increase in the flux of requests to the same address or from the same IP? + +There is, indeed, some overlap with the functionality of Count-Min Sketch, but the two data structures have their differences and should be applied for different use cases. + +The Redis Open Source implementation of Top-K is based on the [HeavyKeepers](https://www.usenix.org/conference/atc18/presentation/gong) algorithm presented by Junzhi Gong et al. It discards some older approaches like "count-all" and "admit-all-count-some" in favour of a "**count-with-exponential-decay**" strategy which is biased against mouse (small) flows and has a limited impact on elephant (large) flows. This implementation uses two data structures in tandem: a hash table that holds the probabilistic counts (much like the Count-Min Sketch), and a min heap that holds the `K` items with the highest counts. This ensures high accuracy with shorter execution times than previous probabilistic algorithms allowed, while keeping memory utilization to a fraction of what is typically required by a Sorted Set. It has the additional benefit of being able to get real time notifications when elements are added or removed from the Top K list. + +## Use case + +**Trending hashtags (social media platforms, news distribution networks)** + +This application answers these questions: + +- What are the K hashtags people have mentioned the most in the last X hours? +- What are the K news with highest read/view count today? + +Data flow is the incoming social media posts from which you parse out the different hashtags. + +The [`TOPK.LIST`]({{< relref "commands/topk.list/" >}}) command has a time complexity of `O(K*log(k))` so if `K` is small, there is no need to keep a separate set or sorted set of all the hashtags. You can query directly from the Top K itself. + +## Example + +This example will show you how to track key words used "bike" when shopping online; e.g., "bike store" and "bike handlebars". Proceed as follows. +​ +* Use [`TOPK.RESERVE`]({{< relref "commands/topk.reserve/" >}}) to initialize a top K sketch with specific parameters. Note: the `width`, `depth`, and `decay_constant` parameters can be omitted, as they will be set to the default values 7, 8, and 0.9, respectively, if not present. +​ + ``` + > TOPK.RESERVE key k width depth decay_constant + ``` + + * Use [`TOPK.ADD`]({{< relref "commands/topk.add/" >}}) to add items to the sketch. As you can see, multiple items can be added at the same time. If an item is returned when adding additional items, it means that item was demoted out of the min heap of the top items, below it will mean the returned item is no longer in the top 5, otherwise `nil` is returned. This allows dynamic heavy-hitter detection of items being entered or expelled from top K list. +​ +In the example below, "pedals" displaces "handlebars", which is returned after "pedals" is added. Also note that the addition of both "store" and "seat" a second time don't return anything, as they're already in the top K. + + * Use [`TOPK.LIST`]({{< relref "commands/topk.list/" >}}) to list the items entered thus far. 
+​ + * Use [`TOPK.QUERY`]({{< relref "commands/topk.query/" >}}) to see if an item is on the top K list. Just like [`TOPK.ADD`]({{< relref "commands/topk.add/" >}}) multiple items can be queried at the same time. +{{< clients-example topk_tutorial topk >}} +> TOPK.RESERVE bikes:keywords 5 2000 7 0.925 +OK +> TOPK.ADD bikes:keywords store seat handlebars handles pedals tires store seat +1) (nil) +2) (nil) +3) (nil) +4) (nil) +5) (nil) +6) handlebars +7) (nil) +8) (nil) +> TOPK.LIST bikes:keywords +1) store +2) seat +3) pedals +4) tires +5) handles +> TOPK.QUERY bikes:keywords store handlebars +1) (integer) 1 +2) (integer) 0 +{{< /clients-example >}} + +## Sizing + +Choosing the size for a Top K sketch is relatively easy, because the only two parameters you need to set are a direct function of the number of elements (K) you want to keep in your list. + +If you start by knowing your desired `k` you can easily derive the width and depth: + +``` +width = k*log(k) +depth = log(k) # but a minimum of 5 +``` + +For the `decay_constant` you can use the value `0.9` which has been found as optimal in many cases, but you can experiment with different values and find what works best for your use case. + +## Performance +Insertion in a top-k has time complexity of O(K + depth) ≈ O(K) and lookup has time complexity of O(K), where K is the number of top elements to be kept in the list and depth is the number of hash functions used. + + +## Academic sources +- [HeavyKeeper: An Accurate Algorithm for Finding Top-k Elephant Flows.](https://yangtonghome.github.io/uploads/HeavyKeeper_ToN.pdf) + +## References +- [Meet Top-K: an Awesome Probabilistic Addition to RedisBloom](https://redis.com/blog/meet-top-k-awesome-probabilistic-addition-redisbloom/) +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Cuckoo filters are a probabilistic data structure that checks for presence + of an element in a set +linkTitle: Cuckoo filter +stack: true +title: Cuckoo filter +weight: 20 +--- + +A Cuckoo filter, just like a Bloom filter, is a probabilistic data structure in Redis Open Source that enables you to check if an element is present in a set in a very fast and space efficient way, while also allowing for deletions and showing better performance than Bloom in some scenarios. + +While the Bloom filter is a bit array with flipped bits at positions decided by the hash function, a Cuckoo filter is an array of buckets, storing fingerprints of the values in one of the buckets at positions decided by the two hash functions. A membership query for item `x` searches the possible buckets for the fingerprint of `x`, and returns true if an identical fingerprint is found. A cuckoo filter's fingerprint size will directly determine the false positive rate. + + +## Use cases + +**Targeted ad campaigns (advertising, retail)** + +This application answers this question: Has the user signed up for this campaign yet? + +Use a Cuckoo filter for every campaign, populated with targeted users' ids. On every visit, the user id is checked against one of the Cuckoo filters. + +- If yes, the user has not signed up for campaign. Show the ad. +- If the user clicks ad and signs up, remove the user id from that Cuckoo filter. +- If no, the user has signed up for that campaign. Try the next ad/Cuckoo filter. + +**Discount code/coupon validation (retail, online shops)** + +This application answers this question: Has this discount code/coupon been used yet? 
Use a Cuckoo filter populated with all discount codes/coupons. On every try, the entered code is checked against the filter.

- If no, the coupon is not valid.
- If yes, the coupon can be valid. Check the main database. If valid, remove it from the Cuckoo filter as `used`.

{{< note >}}
In addition to these two cases, Cuckoo filters serve very well all the Bloom filter use cases.
{{< /note >}}

## Examples

You'll learn how to create an empty cuckoo filter with an initial capacity of 1,000,000 items, add items, check their existence, and remove them. Even though the [`CF.ADD`]({{< relref "commands/cf.add/" >}}) command can create a new filter if one isn't present, it might not be optimally sized for your needs. It's better to use the [`CF.RESERVE`]({{< relref "commands/cf.reserve/" >}}) command to set up a filter with your preferred capacity.

{{< clients-example cuckoo_tutorial cuckoo >}}
> CF.RESERVE bikes:models 1000000
OK
> CF.ADD bikes:models "Smoky Mountain Striker"
(integer) 1
> CF.EXISTS bikes:models "Smoky Mountain Striker"
(integer) 1
> CF.EXISTS bikes:models "Terrible Bike Name"
(integer) 0
> CF.DEL bikes:models "Smoky Mountain Striker"
(integer) 1
{{< /clients-example >}}

## Bloom vs. Cuckoo filters
Bloom filters typically exhibit better performance and scalability when inserting
items (so if you're often adding items to your dataset, then a Bloom filter may be ideal).
Cuckoo filters are quicker on check operations and also allow deletions.

## Sizing Cuckoo filters

These are the main parameters and features of a cuckoo filter:

- `p` target false positive rate
- `f` fingerprint length in bits
- `α` fill rate or load factor (0≤α≤1)
- `b` number of entries per bucket
- `m` number of buckets
- `n` number of items
- `C` average bits per item

Let's start by remembering that a cuckoo filter bucket can have multiple entries (where each entry stores one fingerprint). If all entries end up occupied by a fingerprint, there are no empty slots left for new elements and the filter is declared full; that's why you should always keep a certain percentage of your cuckoo filter free.
As a result, the "real" memory cost of an item should include that overhead in addition to the fingerprint size. If `α` is the load factor (the fraction of occupied entries, 0≤α≤1) and `f` is the number of bits in an entry, the amortised space cost is `f/α` bits.

When you initialise a new filter, you are asked to choose its capacity and bucket size.

```
CF.RESERVE {key} {capacity} [BUCKETSIZE bucketSize] [MAXITERATIONS maxIterations]
[EXPANSION expansion]
```

### Choosing the capacity (`capacity`)

The capacity of a Cuckoo filter is calculated as

```
capacity = n*f/α
```

where `n` is the number of elements you expect to have in your filter, `f` is the fingerprint length in bits, which is set to `8`, and `α` is the fill factor. So in order to get your filter capacity you must first choose a fill factor. The fill factor will determine the density of your data and, of course, the memory use.
The capacity will be rounded up to the next power of two (2^n).

> Please note that inserting repeated items in a cuckoo filter will try to add them multiple times, causing your filter to fill up.

Because of how Cuckoo filters work, the filter is likely to declare itself full before capacity is reached; therefore the fill rate will likely never reach 100%.


### Choosing the bucket size (`BUCKETSIZE`)
Number of items in each bucket.
A higher bucket size value improves the fill rate but also causes a higher error rate and slightly slower performance. + +``` +error_rate = (buckets * hash_functions)/2^fingerprint_size = (buckets*2)/256 +``` + +When bucket size of 1 is used the fill rate is 55% and false positive error rate is 2/256 ≈ 0.78% **which is the minimal false positive rate you can achieve**. Larger buckets increase the error rate linearly but improve the fill rate of the filter. For example, a bucket size of 3 yields a 2.34% error rate and an 80% fill rate. Bucket size of 4 yields a 3.12% error rate and a 95% fill rate. + +### Choosing the scaling factor (`EXPANSION`) + +When the filter self-declares itself full, it will auto-expand by generating additional sub-filters at the cost of reduced performance and increased error rate. The new sub-filter is created with size of the previous sub-filter multiplied by `EXPANSION` (chosen on filter creation). Like bucket size, additional sub-filters grow the error rate linearly (the compound error is a sum of all subfilters' errors). The size of the new sub-filter is the size of the last sub-filter multiplied by expansion and this is something very important to keep in mind. If you know you'll have to scale at some point it's better to choose a higher expansion value. The default is 1. + +Maybe you're wondering "Why would I create a smaller filter with a high expansion rate if I know I'm going to scale anyway?"; the answer is: for cases where you need to keep many filters (let's say a filter per user, or per product) and most of them will stay small, but some with more activity will have to scale. + +The expansion factor will be rounded up to the next "power of two (2n)" number. + +### Choosing the maximum number of iterations (`MAXITERATIONS`) +`MAXITERATIONS` dictates the number of attempts to find a slot for the incoming fingerprint. Once the filter gets full, a high MAXITERATIONS value will slow down insertions. The default value is 20. + +### Interesting facts: +- Unused capacity in prior sub-filters is automatically used when possible. +- The filter can grow up to 32 times. +- You can delete items to stay within filter limits instead of rebuilding +- Adding the same element multiple times will create multiple entries, thus filling up your filter. + + +## Performance +Adding an element to a Cuckoo filter has a time complexity of O(1). + +Similarly, checking for an element and deleting an element also has a time complexity of O(1). + + + +## Academic sources +- [Cuckoo Filter: Practically Better Than Bloom](https://www.cs.cmu.edu/~dga/papers/cuckoo-conext2014.pdf) +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Probabilistic data structures in Redis +linkTitle: Probabilistic +title: Probabilistic +weight: 140 +--- + +*Probabilistic data structures* give approximations of statistics such as +counts, frequencies, and rankings rather than precise values. +The advantage of using approximations is that they are adequate for +many common purposes but are much more efficient to calculate. They +sometimes have other advantages too, such as obfuscating times, locations, +and other sensitive data. + +Probabilistic data structures are available as part of Redis Open Source and they are available in Redis Software and Redis Cloud. 
+See +[Install Redis Open Source]({{< relref "/operate/oss_and_stack/install/install-stack" >}}) or +[Install Redis Enterprise]({{< relref "/operate/rs/installing-upgrading/install" >}}) +for full installation instructions.--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Introduction to Redis sets + + ' +linkTitle: Sets +title: Redis sets +weight: 30 +--- + +A Redis set is an unordered collection of unique strings (members). +You can use Redis sets to efficiently: + +* Track unique items (e.g., track all unique IP addresses accessing a given blog post). +* Represent relations (e.g., the set of all users with a given role). +* Perform common set operations such as intersection, unions, and differences. + +## Basic commands + +* [`SADD`]({{< relref "/commands/sadd" >}}) adds a new member to a set. +* [`SREM`]({{< relref "/commands/srem" >}}) removes the specified member from the set. +* [`SISMEMBER`]({{< relref "/commands/sismember" >}}) tests a string for set membership. +* [`SINTER`]({{< relref "/commands/sinter" >}}) returns the set of members that two or more sets have in common (i.e., the intersection). +* [`SCARD`]({{< relref "/commands/scard" >}}) returns the size (a.k.a. cardinality) of a set. + +See the [complete list of set commands]({{< relref "/commands/" >}}?group=set). + +## Examples + +* Store the sets of bikes racing in France and the USA. Note that +if you add a member that already exists, it will be ignored. +{{< clients-example sets_tutorial sadd >}} +> SADD bikes:racing:france bike:1 +(integer) 1 +> SADD bikes:racing:france bike:1 +(integer) 0 +> SADD bikes:racing:france bike:2 bike:3 +(integer) 2 +> SADD bikes:racing:usa bike:1 bike:4 +(integer) 2 +{{< /clients-example >}} + +* Check whether bike:1 or bike:2 are racing in the US. +{{< clients-example sets_tutorial sismember >}} +> SISMEMBER bikes:racing:usa bike:1 +(integer) 1 +> SISMEMBER bikes:racing:usa bike:2 +(integer) 0 +{{< /clients-example >}} + +* Which bikes are competing in both races? +{{< clients-example sets_tutorial sinter >}} +> SINTER bikes:racing:france bikes:racing:usa +1) "bike:1" +{{< /clients-example >}} + +* How many bikes are racing in France? +{{< clients-example sets_tutorial scard >}} +> SCARD bikes:racing:france +(integer) 3 +{{< /clients-example >}} +## Tutorial + +The [`SADD`]({{< relref "/commands/sadd" >}}) command adds new elements to a set. It's also possible +to do a number of other operations against sets like testing if a given element +already exists, performing the intersection, union or difference between +multiple sets, and so forth. + +{{< clients-example sets_tutorial sadd_smembers >}} +> SADD bikes:racing:france bike:1 bike:2 bike:3 +(integer) 3 +> SMEMBERS bikes:racing:france +1) bike:3 +2) bike:1 +3) bike:2 +{{< /clients-example >}} + +Here I've added three elements to my set and told Redis to return all the +elements. There is no order guarantee with a set. Redis is free to return the +elements in any order at every call. + +Redis has commands to test for set membership. These commands can be used on single as well as multiple items: + +{{< clients-example sets_tutorial smismember >}} +> SISMEMBER bikes:racing:france bike:1 +(integer) 1 +> SMISMEMBER bikes:racing:france bike:2 bike:3 bike:4 +1) (integer) 1 +2) (integer) 1 +3) (integer) 0 +{{< /clients-example >}} + +We can also find the difference between two sets. 
For instance, we may want +to know which bikes are racing in France but not in the USA: + +{{< clients-example sets_tutorial sdiff >}} +> SADD bikes:racing:usa bike:1 bike:4 +(integer) 2 +> SDIFF bikes:racing:france bikes:racing:usa +1) "bike:3" +2) "bike:2" +{{< /clients-example >}} + +There are other non trivial operations that are still easy to implement +using the right Redis commands. For instance we may want a list of all the +bikes racing in France, the USA, and some other races. We can do this using +the [`SINTER`]({{< relref "/commands/sinter" >}}) command, which performs the intersection between different +sets. In addition to intersection you can also perform +unions, difference, and more. For example +if we add a third race we can see some of these commands in action: + +{{< clients-example sets_tutorial multisets >}} +> SADD bikes:racing:france bike:1 bike:2 bike:3 +(integer) 3 +> SADD bikes:racing:usa bike:1 bike:4 +(integer) 2 +> SADD bikes:racing:italy bike:1 bike:2 bike:3 bike:4 +(integer) 4 +> SINTER bikes:racing:france bikes:racing:usa bikes:racing:italy +1) "bike:1" +> SUNION bikes:racing:france bikes:racing:usa bikes:racing:italy +1) "bike:2" +2) "bike:1" +3) "bike:4" +4) "bike:3" +> SDIFF bikes:racing:france bikes:racing:usa bikes:racing:italy +(empty array) +> SDIFF bikes:racing:france bikes:racing:usa +1) "bike:3" +2) "bike:2" +> SDIFF bikes:racing:usa bikes:racing:france +1) "bike:4" +{{< /clients-example >}} + +You'll note that the [`SDIFF`]({{< relref "/commands/sdiff" >}}) command returns an empty array when the +difference between all sets is empty. You'll also note that the order of sets +passed to [`SDIFF`]({{< relref "/commands/sdiff" >}}) matters, since the difference is not commutative. + +When you want to remove items from a set, you can use the [`SREM`]({{< relref "/commands/srem" >}}) command to +remove one or more items from a set, or you can use the [`SPOP`]({{< relref "/commands/spop" >}}) command to +remove a random item from a set. You can also _return_ a random item from a +set without removing it using the [`SRANDMEMBER`]({{< relref "/commands/srandmember" >}}) command: + +{{< clients-example sets_tutorial srem >}} +> SADD bikes:racing:france bike:1 bike:2 bike:3 bike:4 bike:5 +(integer) 5 +> SREM bikes:racing:france bike:1 +(integer) 1 +> SPOP bikes:racing:france +"bike:3" +> SMEMBERS bikes:racing:france +1) "bike:2" +2) "bike:4" +3) "bike:5" +> SRANDMEMBER bikes:racing:france +"bike:2" +{{< /clients-example >}} + +## Limits + +The max size of a Redis set is 2^32 - 1 (4,294,967,295) members. + +## Performance + +Most set operations, including adding, removing, and checking whether an item is a set member, are O(1). +This means that they're highly efficient. +However, for large sets with hundreds of thousands of members or more, you should exercise caution when running the [`SMEMBERS`]({{< relref "/commands/smembers" >}}) command. +This command is O(n) and returns the entire set in a single response. +As an alternative, consider the [`SSCAN`]({{< relref "/commands/sscan" >}}), which lets you retrieve all members of a set iteratively. + +## Alternatives + +Sets membership checks on large datasets (or on streaming data) can use a lot of memory. +If you're concerned about memory usage and don't need perfect precision, consider a [Bloom filter or Cuckoo filter]({{< relref "/develop/data-types/probabilistic/bloom-filter" >}}) as an alternative to a set. + +Redis sets are frequently used as a kind of index. 
+If you need to index and query your data, consider the [JSON]({{< relref "/develop/data-types/json/" >}}) data type and the [Redis Query Engine]({{< relref "/develop/interact/search-and-query/" >}}) features. + +## Learn more + +* [Redis Sets Explained](https://www.youtube.com/watch?v=PKdCppSNTGQ) and [Redis Sets Elaborated](https://www.youtube.com/watch?v=aRw5ME_5kMY) are two short but thorough video explainers covering Redis sets. +* [Redis University's RU101](https://university.redis.com/courses/ru101/) explores Redis sets in detail. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Introduction to Redis streams + + ' +linkTitle: Streams +title: Redis Streams +weight: 60 +--- + +A Redis stream is a data structure that acts like an append-only log but also implements several operations to overcome some of the limits of a typical append-only log. These include random access in O(1) time and complex consumption strategies, such as consumer groups. +You can use streams to record and simultaneously syndicate events in real time. +Examples of Redis stream use cases include: + +* Event sourcing (e.g., tracking user actions, clicks, etc.) +* Sensor monitoring (e.g., readings from devices in the field) +* Notifications (e.g., storing a record of each user's notifications in a separate stream) + +Redis generates a unique ID for each stream entry. +You can use these IDs to retrieve their associated entries later or to read and process all subsequent entries in the stream. Note that because these IDs are related to time, the ones shown here may vary and will be different from the IDs you see in your own Redis instance. + +Redis streams support several trimming strategies (to prevent streams from growing unbounded) and more than one consumption strategy (see [`XREAD`]({{< relref "/commands/xread" >}}), [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}), and [`XRANGE`]({{< relref "/commands/xrange" >}})). + +## Basic commands +* [`XADD`]({{< relref "/commands/xadd" >}}) adds a new entry to a stream. +* [`XREAD`]({{< relref "/commands/xread" >}}) reads one or more entries, starting at a given position and moving forward in time. +* [`XRANGE`]({{< relref "/commands/xrange" >}}) returns a range of entries between two supplied entry IDs. +* [`XLEN`]({{< relref "/commands/xlen" >}}) returns the length of a stream. + +See the [complete list of stream commands]({{< relref "/commands/" >}}?group=stream). 
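If you want to keep a stream from growing without bound (one of the trimming strategies mentioned above), you can cap it as you write to it. The sketch below uses the `redis-py` client against a local server; the key name `race:log` and the length of 1000 are illustrative choices, not part of this page's examples.

```python
# A minimal redis-py sketch: cap a stream with approximate MAXLEN trimming.
import redis

r = redis.Redis(decode_responses=True)

# Add entries, asking Redis to keep roughly the latest 1000 entries.
# The approximate form is cheaper because Redis trims whole radix-tree
# nodes instead of enforcing an exact length on every write.
for n in range(5):
    r.xadd("race:log", {"lap": str(n)}, maxlen=1000, approximate=True)

# An existing stream can also be trimmed explicitly.
r.xtrim("race:log", maxlen=1000, approximate=True)

print(r.xlen("race:log"))  # number of entries currently in the stream
```

The approximate form is usually preferred in practice, since exact trimming is more expensive for the server.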
+ + +## Examples + +* When our racers pass a checkpoint, we add a stream entry for each racer that includes the racer's name, speed, position, and location ID: +{{< clients-example stream_tutorial xadd >}} +> XADD race:france * rider Castilla speed 30.2 position 1 location_id 1 +"1692632086370-0" +> XADD race:france * rider Norem speed 28.8 position 3 location_id 1 +"1692632094485-0" +> XADD race:france * rider Prickett speed 29.7 position 2 location_id 1 +"1692632102976-0" +{{< /clients-example >}} + +* Read two stream entries starting at ID `1692632086370-0`: +{{< clients-example stream_tutorial xrange >}} +> XRANGE race:france 1692632086370-0 + COUNT 2 +1) 1) "1692632086370-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "30.2" + 5) "position" + 6) "1" + 7) "location_id" + 8) "1" +2) 1) "1692632094485-0" + 2) 1) "rider" + 2) "Norem" + 3) "speed" + 4) "28.8" + 5) "position" + 6) "3" + 7) "location_id" + 8) "1" +{{< /clients-example >}} + +* Read up to 100 new stream entries, starting at the end of the stream, and block for up to 300 ms if no entries are being written: +{{< clients-example stream_tutorial xread_block >}} +> XREAD COUNT 100 BLOCK 300 STREAMS race:france $ +(nil) +{{< /clients-example >}} + +## Performance + +Adding an entry to a stream is O(1). +Accessing any single entry is O(n), where _n_ is the length of the ID. +Since stream IDs are typically short and of a fixed length, this effectively reduces to a constant time lookup. +For details on why, note that streams are implemented as [radix trees](https://en.wikipedia.org/wiki/Radix_tree). + +Simply put, Redis streams provide highly efficient inserts and reads. +See each command's time complexity for the details. + + +## Streams basics + +Streams are an append-only data structure. The fundamental write command, called [`XADD`]({{< relref "/commands/xadd" >}}), appends a new entry to the specified stream. + +Each stream entry consists of one or more field-value pairs, somewhat like a dictionary or a Redis hash: + +{{< clients-example stream_tutorial xadd_2 >}} +> XADD race:france * rider Castilla speed 29.9 position 1 location_id 2 +"1692632147973-0" +{{< /clients-example >}} + +The above call to the [`XADD`]({{< relref "/commands/xadd" >}}) command adds an entry `rider: Castilla, speed: 29.9, position: 1, location_id: 2` to the stream at key `race:france`, using an auto-generated entry ID, which is the one returned by the command, specifically `1692632147973-0`. It gets as its first argument the key name `race:france`, the second argument is the entry ID that identifies every entry inside a stream. However, in this case, we passed `*` because we want the server to generate a new ID for us. Every new ID will be monotonically increasing, so in more simple terms, every new entry added will have a higher ID compared to all the past entries. Auto-generation of IDs by the server is almost always what you want, and the reasons for specifying an ID explicitly are very rare. We'll talk more about this later. The fact that each Stream entry has an ID is another similarity with log files, where line numbers, or the byte offset inside the file, can be used in order to identify a given entry. Returning back at our [`XADD`]({{< relref "/commands/xadd" >}}) example, after the key name and ID, the next arguments are the field-value pairs composing our stream entry. 
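For comparison, here is roughly how the same `XADD` call looks from a client library. This is a minimal sketch using `redis-py` against a local server; the field values mirror the CLI example above, and the printed ID is only an example of what the server might return.

```python
import redis

r = redis.Redis(decode_responses=True)

# '*' (the default) asks the server to auto-generate a monotonically
# increasing ID; the call returns that ID.
entry_id = r.xadd("race:france",
                  {"rider": "Castilla", "speed": "29.9",
                   "position": "1", "location_id": "2"})
print(entry_id)  # e.g. "1692632147973-0"

# The returned ID can later be used to fetch exactly that entry.
print(r.xrange("race:france", min=entry_id, max=entry_id))
```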
It is possible to get the number of items inside a Stream just using the [`XLEN`]({{< relref "/commands/xlen" >}}) command:

{{< clients-example stream_tutorial xlen >}}
> XLEN race:france
(integer) 4
{{< /clients-example >}}

### Entry IDs

The entry ID returned by the [`XADD`]({{< relref "/commands/xadd" >}}) command, which uniquely identifies each entry inside a given stream, is composed of two parts:

```
<millisecondsTime>-<sequenceNumber>
```

The milliseconds time part is actually the local time in the local Redis node generating the stream ID; however, if the current milliseconds time happens to be smaller than the previous entry time, then the previous entry time is used instead, so if a clock jumps backward the monotonically incrementing ID property still holds. The sequence number is used for entries created in the same millisecond. Since the sequence number is 64 bit wide, in practical terms there is no limit to the number of entries that can be generated within the same millisecond.

The format of such IDs may look strange at first, and the gentle reader may wonder why the time is part of the ID. The reason is that Redis streams support range queries by ID. Because the ID is related to the time the entry is generated, this gives the ability to query for time ranges basically for free. We will see this soon while covering the [`XRANGE`]({{< relref "/commands/xrange" >}}) command.

If for some reason the user needs incremental IDs that are not related to time but are actually associated to another external system ID, as previously mentioned, the [`XADD`]({{< relref "/commands/xadd" >}}) command can take an explicit ID instead of the `*` wildcard ID that triggers auto-generation, like in the following examples:

{{< clients-example stream_tutorial xadd_id >}}
> XADD race:usa 0-1 racer Castilla
0-1
> XADD race:usa 0-2 racer Norem
0-2
{{< /clients-example >}}

Note that in this case, the minimum ID is 0-1 and that the command will not accept an ID equal or smaller than a previous one:

{{< clients-example stream_tutorial xadd_bad_id >}}
> XADD race:usa 0-1 racer Prickett
(error) ERR The ID specified in XADD is equal or smaller than the target stream top item
{{< /clients-example >}}

If you're running Redis 7 or later, you can also provide an explicit ID consisting of the milliseconds part only. In this case, the sequence portion of the ID will be automatically generated. To do this, use the syntax below:

{{< clients-example stream_tutorial xadd_7 >}}
> XADD race:usa 0-* racer Prickett
0-3
{{< /clients-example >}}

## Getting data from Streams

Now we are finally able to append entries in our stream via [`XADD`]({{< relref "/commands/xadd" >}}). However, while appending data to a stream is quite obvious, the way streams can be queried in order to extract data is not so obvious. If we continue with the analogy of the log file, one obvious way is to mimic what we normally do with the Unix command `tail -f`, that is, we may start to listen in order to get the new messages that are appended to the stream. Note that unlike the blocking list operations of Redis, where a given element will reach a single client which is blocking in a *pop style* operation like [`BLPOP`]({{< relref "/commands/blpop" >}}), with streams we want multiple consumers to see the new messages appended to the stream (the same way many `tail -f` processes can see what is added to a log). Using the traditional terminology we want the streams to be able to *fan out* messages to multiple clients.
+ +However, this is just one potential access mode. We could also see a stream in quite a different way: not as a messaging system, but as a *time series store*. In this case, maybe it's also useful to get the new messages appended, but another natural query mode is to get messages by ranges of time, or alternatively to iterate the messages using a cursor to incrementally check all the history. This is definitely another useful access mode. + +Finally, if we see a stream from the point of view of consumers, we may want to access the stream in yet another way, that is, as a stream of messages that can be partitioned to multiple consumers that are processing such messages, so that groups of consumers can only see a subset of the messages arriving in a single stream. In this way, it is possible to scale the message processing across different consumers, without single consumers having to process all the messages: each consumer will just get different messages to process. This is basically what Kafka (TM) does with consumer groups. Reading messages via consumer groups is yet another interesting mode of reading from a Redis Stream. + +Redis Streams support all three of the query modes described above via different commands. The next sections will show them all, starting from the simplest and most direct to use: range queries. + +### Querying by range: XRANGE and XREVRANGE + +To query the stream by range we are only required to specify two IDs, *start* and *end*. The range returned will include the elements having start or end as ID, so the range is inclusive. The two special IDs `-` and `+` respectively mean the smallest and the greatest ID possible. + +{{< clients-example stream_tutorial xrange_all >}} +> XRANGE race:france - + +1) 1) "1692632086370-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "30.2" + 5) "position" + 6) "1" + 7) "location_id" + 8) "1" +2) 1) "1692632094485-0" + 2) 1) "rider" + 2) "Norem" + 3) "speed" + 4) "28.8" + 5) "position" + 6) "3" + 7) "location_id" + 8) "1" +3) 1) "1692632102976-0" + 2) 1) "rider" + 2) "Prickett" + 3) "speed" + 4) "29.7" + 5) "position" + 6) "2" + 7) "location_id" + 8) "1" +4) 1) "1692632147973-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "29.9" + 5) "position" + 6) "1" + 7) "location_id" + 8) "2" +{{< /clients-example >}} + +Each entry returned is an array of two items: the ID and the list of field-value pairs. We already said that the entry IDs have a relation with the time, because the part at the left of the `-` character is the Unix time in milliseconds of the local node that created the stream entry, at the moment the entry was created (however note that streams are replicated with fully specified [`XADD`]({{< relref "/commands/xadd" >}}) commands, so the replicas will have identical IDs to the master). This means that I could query a range of time using [`XRANGE`]({{< relref "/commands/xrange" >}}). In order to do so, however, I may want to omit the sequence part of the ID: if omitted, in the start of the range it will be assumed to be 0, while in the end part it will be assumed to be the maximum sequence number available. This way, querying using just two milliseconds Unix times, we get all the entries that were generated in that range of time, in an inclusive way. 
For instance, if I want to query a two milliseconds period I could use: + +{{< clients-example stream_tutorial xrange_time >}} +> XRANGE race:france 1692632086369 1692632086371 +1) 1) "1692632086370-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "30.2" + 5) "position" + 6) "1" + 7) "location_id" + 8) "1" +{{< /clients-example >}} + +I have only a single entry in this range. However in real data sets, I could query for ranges of hours, or there could be many items in just two milliseconds, and the result returned could be huge. For this reason, [`XRANGE`]({{< relref "/commands/xrange" >}}) supports an optional **COUNT** option at the end. By specifying a count, I can just get the first *N* items. If I want more, I can get the last ID returned, increment the sequence part by one, and query again. Let's see this in the following example. Let's assume that the stream `race:france` was populated with 4 items. To start my iteration, getting 2 items per command, I start with the full range, but with a count of 2. + +{{< clients-example stream_tutorial xrange_step_1 >}} +> XRANGE race:france - + COUNT 2 +1) 1) "1692632086370-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "30.2" + 5) "position" + 6) "1" + 7) "location_id" + 8) "1" +2) 1) "1692632094485-0" + 2) 1) "rider" + 2) "Norem" + 3) "speed" + 4) "28.8" + 5) "position" + 6) "3" + 7) "location_id" + 8) "1" +{{< /clients-example >}} + +To continue the iteration with the next two items, I have to pick the last ID returned, that is `1692632094485-0`, and add the prefix `(` to it. The resulting exclusive range interval, that is `(1692632094485-0` in this case, can now be used as the new *start* argument for the next [`XRANGE`]({{< relref "/commands/xrange" >}}) call: + +{{< clients-example stream_tutorial xrange_step_2 >}} +> XRANGE race:france (1692632094485-0 + COUNT 2 +1) 1) "1692632102976-0" + 2) 1) "rider" + 2) "Prickett" + 3) "speed" + 4) "29.7" + 5) "position" + 6) "2" + 7) "location_id" + 8) "1" +2) 1) "1692632147973-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "29.9" + 5) "position" + 6) "1" + 7) "location_id" + 8) "2" +{{< /clients-example >}} + +Now that we've retrieved 4 items out of a stream that only had 4 entries in it, if we try to retrieve more items, we'll get an empty array: + +{{< clients-example stream_tutorial xrange_empty >}} +> XRANGE race:france (1692632147973-0 + COUNT 2 +(empty array) +{{< /clients-example >}} + +Since [`XRANGE`]({{< relref "/commands/xrange" >}}) complexity is *O(log(N))* to seek, and then *O(M)* to return M elements, with a small count the command has a logarithmic time complexity, which means that each step of the iteration is fast. So [`XRANGE`]({{< relref "/commands/xrange" >}}) is also the de facto *streams iterator* and does not require an **XSCAN** command. + +The command [`XREVRANGE`]({{< relref "/commands/xrevrange" >}}) is the equivalent of [`XRANGE`]({{< relref "/commands/xrange" >}}) but returning the elements in inverted order, so a practical use for [`XREVRANGE`]({{< relref "/commands/xrevrange" >}}) is to check what is the last item in a Stream: + +{{< clients-example stream_tutorial xrevrange >}} +> XREVRANGE race:france + - COUNT 1 +1) 1) "1692632147973-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "29.9" + 5) "position" + 6) "1" + 7) "location_id" + 8) "2" +{{< /clients-example >}} + +Note that the [`XREVRANGE`]({{< relref "/commands/xrevrange" >}}) command takes the *start* and *stop* arguments in reverse order. 
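The iteration pattern just described (full range, a `COUNT`, then an exclusive `(` start) is easy to wrap in a small loop. Below is a sketch using `redis-py`; the batch size of 2 simply mirrors the CLI example, and exclusive ranges require Redis 6.2 or later.

```python
import redis

r = redis.Redis(decode_responses=True)

start = "-"          # smallest possible ID
while True:
    batch = r.xrange("race:france", min=start, max="+", count=2)
    if not batch:
        break        # no more entries
    for entry_id, fields in batch:
        print(entry_id, fields)
    # Prefix the last returned ID with '(' to make the next range exclusive
    # (on servers older than 6.2, increment the sequence part instead).
    start = "(" + batch[-1][0]

# XREVRANGE with COUNT 1 returns just the newest entry.
print(r.xrevrange("race:france", max="+", min="-", count=1))
```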
+ +## Listening for new items with XREAD + +When we do not want to access items by a range in a stream, usually what we want instead is to *subscribe* to new items arriving to the stream. This concept may appear related to Redis Pub/Sub, where you subscribe to a channel, or to Redis blocking lists, where you wait for a key to get new elements to fetch, but there are fundamental differences in the way you consume a stream: + +1. A stream can have multiple clients (consumers) waiting for data. Every new item, by default, will be delivered to *every consumer* that is waiting for data in a given stream. This behavior is different than blocking lists, where each consumer will get a different element. However, the ability to *fan out* to multiple consumers is similar to Pub/Sub. +2. While in Pub/Sub messages are *fire and forget* and are never stored anyway, and while when using blocking lists, when a message is received by the client it is *popped* (effectively removed) from the list, streams work in a fundamentally different way. All the messages are appended in the stream indefinitely (unless the user explicitly asks to delete entries): different consumers will know what is a new message from its point of view by remembering the ID of the last message received. +3. Streams Consumer Groups provide a level of control that Pub/Sub or blocking lists cannot achieve, with different groups for the same stream, explicit acknowledgment of processed items, ability to inspect the pending items, claiming of unprocessed messages, and coherent history visibility for each single client, that is only able to see its private past history of messages. + +The command that provides the ability to listen for new messages arriving into a stream is called [`XREAD`]({{< relref "/commands/xread" >}}). It's a bit more complex than [`XRANGE`]({{< relref "/commands/xrange" >}}), so we'll start showing simple forms, and later the whole command layout will be provided. + +{{< clients-example stream_tutorial xread >}} +> XREAD COUNT 2 STREAMS race:france 0 +1) 1) "race:france" + 2) 1) 1) "1692632086370-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "30.2" + 5) "position" + 6) "1" + 7) "location_id" + 8) "1" + 2) 1) "1692632094485-0" + 2) 1) "rider" + 2) "Norem" + 3) "speed" + 4) "28.8" + 5) "position" + 6) "3" + 7) "location_id" + 8) "1" +{{< /clients-example >}} + +The above is the non-blocking form of [`XREAD`]({{< relref "/commands/xread" >}}). Note that the **COUNT** option is not mandatory, in fact the only mandatory option of the command is the **STREAMS** option, that specifies a list of keys together with the corresponding maximum ID already seen for each stream by the calling consumer, so that the command will provide the client only with messages with an ID greater than the one we specified. + +In the above command we wrote `STREAMS race:france 0` so we want all the messages in the Stream `race:france` having an ID greater than `0-0`. As you can see in the example above, the command returns the key name, because actually it is possible to call this command with more than one key to read from different streams at the same time. I could write, for instance: `STREAMS race:france race:italy 0 0`. Note how after the **STREAMS** option we need to provide the key names, and later the IDs. For this reason, the **STREAMS** option must always be the last option. +Any other options must come before the **STREAMS** option. 
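As a client-side illustration of the same idea, the sketch below uses `redis-py` to read everything after a last-seen ID and remember where it stopped. The key name follows this page's examples; the count of 100 is arbitrary.

```python
import redis

r = redis.Redis(decode_responses=True)

last_id = "0"   # "0" means: give me everything from the beginning
reply = r.xread({"race:france": last_id}, count=100)

# The reply is grouped per stream, because XREAD can read from several
# streams at once (e.g. {"race:france": "0", "race:italy": "0"}).
for stream_name, entries in reply:
    for entry_id, fields in entries:
        print(stream_name, entry_id, fields)
        last_id = entry_id   # remember the last ID for the next call
```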
+ +Apart from the fact that [`XREAD`]({{< relref "/commands/xread" >}}) can access multiple streams at once, and that we are able to specify the last ID we own to just get newer messages, in this simple form the command is not doing something so different compared to [`XRANGE`]({{< relref "/commands/xrange" >}}). However, the interesting part is that we can turn [`XREAD`]({{< relref "/commands/xread" >}}) into a *blocking command* easily, by specifying the **BLOCK** argument: + +``` +> XREAD BLOCK 0 STREAMS race:france $ +``` + +Note that in the example above, other than removing **COUNT**, I specified the new **BLOCK** option with a timeout of 0 milliseconds (that means to never timeout). Moreover, instead of passing a normal ID for the stream `mystream` I passed the special ID `$`. This special ID means that [`XREAD`]({{< relref "/commands/xread" >}}) should use as last ID the maximum ID already stored in the stream `mystream`, so that we will receive only *new* messages, starting from the time we started listening. This is similar to the `tail -f` Unix command in some way. + +Note that when the **BLOCK** option is used, we do not have to use the special ID `$`. We can use any valid ID. If the command is able to serve our request immediately without blocking, it will do so, otherwise it will block. Normally if we want to consume the stream starting from new entries, we start with the ID `$`, and after that we continue using the ID of the last message received to make the next call, and so forth. + +The blocking form of [`XREAD`]({{< relref "/commands/xread" >}}) is also able to listen to multiple Streams, just by specifying multiple key names. If the request can be served synchronously because there is at least one stream with elements greater than the corresponding ID we specified, it returns with the results. Otherwise, the command will block and will return the items of the first stream which gets new data (according to the specified ID). + +Similarly to blocking list operations, blocking stream reads are *fair* from the point of view of clients waiting for data, since the semantics is FIFO style. The first client that blocked for a given stream will be the first to be unblocked when new items are available. + +[`XREAD`]({{< relref "/commands/xread" >}}) has no other options than **COUNT** and **BLOCK**, so it's a pretty basic command with a specific purpose to attach consumers to one or multiple streams. More powerful features to consume streams are available using the consumer groups API, however reading via consumer groups is implemented by a different command called [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}), covered in the next section of this guide. + +## Consumer groups + +When the task at hand is to consume the same stream from different clients, then [`XREAD`]({{< relref "/commands/xread" >}}) already offers a way to *fan-out* to N clients, potentially also using replicas in order to provide more read scalability. However in certain problems what we want to do is not to provide the same stream of messages to many clients, but to provide a *different subset* of messages from the same stream to many clients. An obvious case where this is useful is that of messages which are slow to process: the ability to have N different workers that will receive different parts of the stream allows us to scale message processing, by routing different messages to different workers that are ready to do more work. 
+ +In practical terms, if we imagine having three consumers C1, C2, C3, and a stream that contains the messages 1, 2, 3, 4, 5, 6, 7 then what we want is to serve the messages according to the following diagram: + +``` +1 -> C1 +2 -> C2 +3 -> C3 +4 -> C1 +5 -> C2 +6 -> C3 +7 -> C1 +``` + +In order to achieve this, Redis uses a concept called *consumer groups*. It is very important to understand that Redis consumer groups have nothing to do, from an implementation standpoint, with Kafka (TM) consumer groups. Yet they are similar in functionality, so I decided to keep Kafka's (TM) terminology, as it originally popularized this idea. + +A consumer group is like a *pseudo consumer* that gets data from a stream, and actually serves multiple consumers, providing certain guarantees: + +1. Each message is served to a different consumer so that it is not possible that the same message will be delivered to multiple consumers. +2. Consumers are identified, within a consumer group, by a name, which is a case-sensitive string that the clients implementing consumers must choose. This means that even after a disconnect, the stream consumer group retains all the state, since the client will claim again to be the same consumer. However, this also means that it is up to the client to provide a unique identifier. +3. Each consumer group has the concept of the *first ID never consumed* so that, when a consumer asks for new messages, it can provide just messages that were not previously delivered. +4. Consuming a message, however, requires an explicit acknowledgment using a specific command. Redis interprets the acknowledgment as: this message was correctly processed so it can be evicted from the consumer group. +5. A consumer group tracks all the messages that are currently pending, that is, messages that were delivered to some consumer of the consumer group, but are yet to be acknowledged as processed. Thanks to this feature, when accessing the message history of a stream, each consumer *will only see messages that were delivered to it*. + +In a way, a consumer group can be imagined as some *amount of state* about a stream: + +``` ++----------------------------------------+ +| consumer_group_name: mygroup | +| consumer_group_stream: somekey | +| last_delivered_id: 1292309234234-92 | +| | +| consumers: | +| "consumer-1" with pending messages | +| 1292309234234-4 | +| 1292309234232-8 | +| "consumer-42" with pending messages | +| ... (and so forth) | ++----------------------------------------+ +``` + +If you see this from this point of view, it is very simple to understand what a consumer group can do, how it is able to just provide consumers with their history of pending messages, and how consumers asking for new messages will just be served with message IDs greater than `last_delivered_id`. At the same time, if you look at the consumer group as an auxiliary data structure for Redis streams, it is obvious that a single stream can have multiple consumer groups, that have a different set of consumers. Actually, it is even possible for the same stream to have clients reading without consumer groups via [`XREAD`]({{< relref "/commands/xread" >}}), and clients reading via [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) in different consumer groups. + +Now it's time to zoom in to see the fundamental consumer group commands. They are the following: + +* [`XGROUP`]({{< relref "/commands/xgroup" >}}) is used in order to create, destroy and manage consumer groups. 
+* [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) is used to read from a stream via a consumer group. +* [`XACK`]({{< relref "/commands/xack" >}}) is the command that allows a consumer to mark a pending message as correctly processed. + +## Creating a consumer group + +Assuming I have a key `race:france` of type stream already existing, in order to create a consumer group I just need to do the following: + +{{< clients-example stream_tutorial xgroup_create >}} +> XGROUP CREATE race:france france_riders $ +OK +{{< /clients-example >}} + +As you can see in the command above when creating the consumer group we have to specify an ID, which in the example is just `$`. This is needed because the consumer group, among the other states, must have an idea about what message to serve next at the first consumer connecting, that is, what was the *last message ID* when the group was just created. If we provide `$` as we did, then only new messages arriving in the stream from now on will be provided to the consumers in the group. If we specify `0` instead the consumer group will consume *all* the messages in the stream history to start with. Of course, you can specify any other valid ID. What you know is that the consumer group will start delivering messages that are greater than the ID you specify. Because `$` means the current greatest ID in the stream, specifying `$` will have the effect of consuming only new messages. + +[`XGROUP CREATE`]({{< relref "/commands/xgroup-create" >}}) also supports creating the stream automatically, if it doesn't exist, using the optional `MKSTREAM` subcommand as the last argument: + +{{< clients-example stream_tutorial xgroup_create_mkstream >}} +> XGROUP CREATE race:italy italy_riders $ MKSTREAM +OK +{{< /clients-example >}} + +Now that the consumer group is created we can immediately try to read messages via the consumer group using the [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) command. We'll read from consumers, that we will call Alice and Bob, to see how the system will return different messages to Alice or Bob. + +[`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) is very similar to [`XREAD`]({{< relref "/commands/xread" >}}) and provides the same **BLOCK** option, otherwise it is a synchronous command. However there is a *mandatory* option that must be always specified, which is **GROUP** and has two arguments: the name of the consumer group, and the name of the consumer that is attempting to read. The option **COUNT** is also supported and is identical to the one in [`XREAD`]({{< relref "/commands/xread" >}}). + +We'll add riders to the race:italy stream and try reading something using the consumer group: +Note: *here rider is the field name, and the name is the associated value. Remember that stream items are small dictionaries.* + +{{< clients-example stream_tutorial xgroup_read >}} +> XADD race:italy * rider Castilla +"1692632639151-0" +> XADD race:italy * rider Royce +"1692632647899-0" +> XADD race:italy * rider Sam-Bodden +"1692632662819-0" +> XADD race:italy * rider Prickett +"1692632670501-0" +> XADD race:italy * rider Norem +"1692632678249-0" +> XREADGROUP GROUP italy_riders Alice COUNT 1 STREAMS race:italy > +1) 1) "race:italy" + 2) 1) 1) "1692632639151-0" + 2) 1) "rider" + 2) "Castilla" +{{< /clients-example >}} + +[`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) replies are just like [`XREAD`]({{< relref "/commands/xread" >}}) replies. Note however the `GROUP ` provided above. 
It states that I want to read from the stream using the consumer group `mygroup` and I'm the consumer `Alice`. Every time a consumer performs an operation with a consumer group, it must specify its name, uniquely identifying this consumer inside the group. + +There is another very important detail in the command line above, after the mandatory **STREAMS** option the ID requested for the key `mystream` is the special ID `>`. This special ID is only valid in the context of consumer groups, and it means: **messages never delivered to other consumers so far**. + +This is almost always what you want, however it is also possible to specify a real ID, such as `0` or any other valid ID, in this case, however, what happens is that we request from [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) to just provide us with the **history of pending messages**, and in such case, will never see new messages in the group. So basically [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) has the following behavior based on the ID we specify: + +* If the ID is the special ID `>` then the command will return only new messages never delivered to other consumers so far, and as a side effect, will update the consumer group's *last ID*. +* If the ID is any other valid numerical ID, then the command will let us access our *history of pending messages*. That is, the set of messages that were delivered to this specified consumer (identified by the provided name), and never acknowledged so far with [`XACK`]({{< relref "/commands/xack" >}}). + +We can test this behavior immediately specifying an ID of 0, without any **COUNT** option: we'll just see the only pending message, that is, the one about Castilla: + +{{< clients-example stream_tutorial xgroup_read_id >}} +> XREADGROUP GROUP italy_riders Alice STREAMS race:italy 0 +1) 1) "race:italy" + 2) 1) 1) "1692632639151-0" + 2) 1) "rider" + 2) "Castilla" +{{< /clients-example >}} + +However, if we acknowledge the message as processed, it will no longer be part of the pending messages history, so the system will no longer report anything: + +{{< clients-example stream_tutorial xack >}} +> XACK race:italy italy_riders 1692632639151-0 +(integer) 1 +> XREADGROUP GROUP italy_riders Alice STREAMS race:italy 0 +1) 1) "race:italy" + 2) (empty array) +{{< /clients-example >}} + +Don't worry if you yet don't know how [`XACK`]({{< relref "/commands/xack" >}}) works, the idea is just that processed messages are no longer part of the history that we can access. + +Now it's Bob's turn to read something: + +{{< clients-example stream_tutorial xgroup_read_bob >}} +> XREADGROUP GROUP italy_riders Bob COUNT 2 STREAMS race:italy > +1) 1) "race:italy" + 2) 1) 1) "1692632647899-0" + 2) 1) "rider" + 2) "Royce" + 2) 1) "1692632662819-0" + 2) 1) "rider" + 2) "Sam-Bodden" +{{< /clients-example >}} + +Bob asked for a maximum of two messages and is reading via the same group `mygroup`. So what happens is that Redis reports just *new* messages. As you can see the "Castilla" message is not delivered, since it was already delivered to Alice, so Bob gets Royce and Sam-Bodden and so forth. + +This way Alice, Bob, and any other consumer in the group, are able to read different messages from the same stream, to read their history of yet to process messages, or to mark messages as processed. This allows creating different topologies and semantics for consuming messages from a stream. 
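Before looking at the caveats, here is a compact `redis-py` sketch of the same flow: create the group, add an entry, read as the consumer Alice with the special `>` ID, and acknowledge what was processed. The `try/except` around group creation only tolerates the BUSYGROUP error if the group already exists; a fuller consumer loop (in Ruby) appears later in this section.

```python
import redis

r = redis.Redis(decode_responses=True)

# Create the group (MKSTREAM also creates the stream if it doesn't exist).
try:
    r.xgroup_create("race:italy", "italy_riders", id="$", mkstream=True)
except redis.ResponseError:
    pass  # BUSYGROUP: the group was already created

r.xadd("race:italy", {"rider": "Castilla"})

# '>' asks for messages never delivered to any consumer of this group.
reply = r.xreadgroup("italy_riders", "Alice", {"race:italy": ">"}, count=1)
for stream_name, entries in reply:
    for entry_id, fields in entries:
        print("processing", entry_id, fields)
        # Acknowledge, so the entry leaves Alice's pending list.
        r.xack("race:italy", "italy_riders", entry_id)
```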
+ +There are a few things to keep in mind: + +* Consumers are auto-created the first time they are mentioned, no need for explicit creation. +* Even with [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) you can read from multiple keys at the same time, however for this to work, you need to create a consumer group with the same name in every stream. This is not a common need, but it is worth mentioning that the feature is technically available. +* [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) is a *write command* because even if it reads from the stream, the consumer group is modified as a side effect of reading, so it can only be called on master instances. + +An example of a consumer implementation, using consumer groups, written in the Ruby language could be the following. The Ruby code is aimed to be readable by virtually any experienced programmer, even if they do not know Ruby: + +```ruby +require 'redis' + +if ARGV.length == 0 + puts "Please specify a consumer name" + exit 1 +end + +ConsumerName = ARGV[0] +GroupName = "mygroup" +r = Redis.new + +def process_message(id,msg) + puts "[#{ConsumerName}] #{id} = #{msg.inspect}" +end + +$lastid = '0-0' + +puts "Consumer #{ConsumerName} starting..." +check_backlog = true +while true + # Pick the ID based on the iteration: the first time we want to + # read our pending messages, in case we crashed and are recovering. + # Once we consumed our history, we can start getting new messages. + if check_backlog + myid = $lastid + else + myid = '>' + end + + items = r.xreadgroup('GROUP',GroupName,ConsumerName,'BLOCK','2000','COUNT','10','STREAMS',:my_stream_key,myid) + + if items == nil + puts "Timeout!" + next + end + + # If we receive an empty reply, it means we were consuming our history + # and that the history is now empty. Let's start to consume new messages. + check_backlog = false if items[0][1].length == 0 + + items[0][1].each{|i| + id,fields = i + + # Process the message + process_message(id,fields) + + # Acknowledge the message as processed + r.xack(:my_stream_key,GroupName,id) + + $lastid = id + } +end +``` + +As you can see the idea here is to start by consuming the history, that is, our list of pending messages. This is useful because the consumer may have crashed before, so in the event of a restart we want to re-read messages that were delivered to us without getting acknowledged. Note that we might process a message multiple times or one time (at least in the case of consumer failures, but there are also the limits of Redis persistence and replication involved, see the specific section about this topic). + +Once the history was consumed, and we get an empty list of messages, we can switch to using the `>` special ID in order to consume new messages. + +## Recovering from permanent failures + +The example above allows us to write consumers that participate in the same consumer group, each taking a subset of messages to process, and when recovering from failures re-reading the pending messages that were delivered just to them. However in the real world consumers may permanently fail and never recover. What happens to the pending messages of the consumer that never recovers after stopping for any reason? + +Redis consumer groups offer a feature that is used in these situations in order to *claim* the pending messages of a given consumer so that such messages will change ownership and will be re-assigned to a different consumer. The feature is very explicit. 
A consumer has to inspect the list of pending messages, and will have to claim specific messages using a special command, otherwise the server will leave the messages pending forever and assigned to the old consumer. In this way different applications can choose whether to use such a feature or not, and exactly how to use it.

The first step of this process is just a command that provides observability of pending entries in the consumer group and is called [`XPENDING`]({{< relref "/commands/xpending" >}}).
This is a read-only command which is always safe to call and will not change ownership of any message.
In its simplest form, the command is called with two arguments, which are the name of the stream and the name of the consumer group.

{{< clients-example stream_tutorial xpending >}}
> XPENDING race:italy italy_riders
1) (integer) 2
2) "1692632647899-0"
3) "1692632662819-0"
4) 1) 1) "Bob"
      2) "2"
{{< /clients-example >}}

When called in this way, the command outputs the total number of pending messages in the consumer group (two in this case), the lower and higher message ID among the pending messages, and finally a list of consumers and the number of pending messages they have.
We have only Bob with two pending messages because the single message that Alice requested was acknowledged using [`XACK`]({{< relref "/commands/xack" >}}).

We can ask for more information by giving more arguments to [`XPENDING`]({{< relref "/commands/xpending" >}}), because the full command signature is the following:

```
XPENDING <key> <groupname> [[IDLE <min-idle-time>] <start-id> <end-id> <count> [<consumer-name>]]
```

By providing a start and end ID (that can be just `-` and `+` as in [`XRANGE`]({{< relref "/commands/xrange" >}})) and a count to control the amount of information returned by the command, we are able to know more about the pending messages. The optional final argument, the consumer name, is used if we want to limit the output to just messages pending for a given consumer, but we won't use this feature in the following example.

{{< clients-example stream_tutorial xpending_plus_minus >}}
> XPENDING race:italy italy_riders - + 10
1) 1) "1692632647899-0"
   2) "Bob"
   3) (integer) 74642
   4) (integer) 1
2) 1) "1692632662819-0"
   2) "Bob"
   3) (integer) 74642
   4) (integer) 1
{{< /clients-example >}}

Now we have the details for each message: the ID, the consumer name, the *idle time* in milliseconds, which is how many milliseconds have passed since the last time the message was delivered to some consumer, and finally the number of times that a given message was delivered.
We have two messages from Bob, and they are idle for 60000+ milliseconds, about a minute.

Note that nobody prevents us from checking what the first message content was by just using [`XRANGE`]({{< relref "/commands/xrange" >}}).

{{< clients-example stream_tutorial xrange_pending >}}
> XRANGE race:italy 1692632647899-0 1692632647899-0
1) 1) "1692632647899-0"
   2) 1) "rider"
      2) "Royce"
{{< /clients-example >}}

We just have to repeat the same ID twice in the arguments. Now that we have some ideas, Alice may decide that after 1 minute of not processing messages, Bob will probably not recover quickly, and it's time to *claim* such messages and resume the processing in place of Bob. To do so, we use the [`XCLAIM`]({{< relref "/commands/xclaim" >}}) command.

This command is very complex and full of options in its full form, since it is used for replication of consumer group changes, but we'll use just the arguments that we need normally.
In this case it is as simple as:
+
+```
+XCLAIM <key> <group> <consumer> <min-idle-time> <ID-1> <ID-2> ... <ID-N>
+```
+
+Basically we say, for this specific key and group, I want the specified message IDs to change ownership and be assigned to the specified consumer name `<consumer>`. However, we also provide a minimum idle time, so that the operation will only work if the idle time of the mentioned messages is greater than the specified idle time. This is useful because maybe two clients are retrying to claim a message at the same time:
+
+```
+Client 1: XCLAIM race:italy italy_riders Alice 60000 1692632647899-0
+Client 2: XCLAIM race:italy italy_riders Lora 60000 1692632647899-0
+```
+
+However, as a side effect, claiming a message will reset its idle time and will increment its number of deliveries counter, so the second client will fail claiming it. In this way we avoid trivial re-processing of messages (even if in the general case you cannot obtain exactly once processing).
+
+This is the result of the command execution:
+
+{{< clients-example stream_tutorial xclaim >}}
+> XCLAIM race:italy italy_riders Alice 60000 1692632647899-0
+1) 1) "1692632647899-0"
+ 2) 1) "rider"
+ 2) "Royce"
+{{< /clients-example >}}
+
+The message was successfully claimed by Alice, who can now process the message and acknowledge it, and move things forward even if the original consumer is not recovering.
+
+It is clear from the example above that as a side effect of successfully claiming a given message, the [`XCLAIM`]({{< relref "/commands/xclaim" >}}) command also returns it. However this is not mandatory. The **JUSTID** option can be used in order to return just the IDs of the messages successfully claimed. This is useful if you want to reduce the bandwidth used between the client and the server (and improve the performance of the command) and you are not interested in the message because your consumer is implemented in a way that it will rescan the history of pending messages from time to time.
+
+Claiming may also be implemented by a separate process: one that just checks the list of pending messages, and assigns idle messages to consumers that appear to be active. Active consumers can be obtained using one of the observability features of Redis streams. This is the topic of the next section.
+
+## Automatic claiming
+
+The [`XAUTOCLAIM`]({{< relref "/commands/xautoclaim" >}}) command, added in Redis 6.2, implements the claiming process that we've described above.
+[`XPENDING`]({{< relref "/commands/xpending" >}}) and [`XCLAIM`]({{< relref "/commands/xclaim" >}}) provide the basic building blocks for different types of recovery mechanisms.
+This command optimizes the generic process by having Redis manage it and offers a simple solution for most recovery needs.
+
+[`XAUTOCLAIM`]({{< relref "/commands/xautoclaim" >}}) identifies idle pending messages and transfers ownership of them to a consumer.
+The command's signature looks like this:
+
+```
+XAUTOCLAIM <key> <group> <consumer> <min-idle-time> <start> [COUNT count] [JUSTID]
+```
+
+So, in the example above, I could have used automatic claiming to claim a single message like this:
+
+{{< clients-example stream_tutorial xautoclaim >}}
+> XAUTOCLAIM race:italy italy_riders Alice 60000 0-0 COUNT 1
+1) "0-0"
+2) 1) 1) "1692632662819-0"
+ 2) 1) "rider"
+ 2) "Sam-Bodden"
+{{< /clients-example >}}
+
+Like [`XCLAIM`]({{< relref "/commands/xclaim" >}}), the command replies with an array of the claimed messages, but it also returns a stream ID that allows iterating the pending entries. 
+
+The stream ID is a cursor, and I can use it in my next call to continue claiming idle pending messages:
+
+{{< clients-example stream_tutorial xautoclaim_cursor >}}
+> XAUTOCLAIM race:italy italy_riders Lora 60000 (1692632662819-0 COUNT 1
+1) "1692632662819-0"
+2) 1) 1) "1692632647899-0"
+ 2) 1) "rider"
+ 2) "Royce"
+{{< /clients-example >}}
+
+When [`XAUTOCLAIM`]({{< relref "/commands/xautoclaim" >}}) returns the "0-0" stream ID as a cursor, that means that it reached the end of the consumer group pending entries list.
+That doesn't mean that there are no new idle pending messages, so the process continues by calling [`XAUTOCLAIM`]({{< relref "/commands/xautoclaim" >}}) from the beginning of the stream.
+
+## Claiming and the delivery counter
+
+The counter that you observe in the [`XPENDING`]({{< relref "/commands/xpending" >}}) output is the number of deliveries of each message. The counter is incremented in two ways: when a message is successfully claimed via [`XCLAIM`]({{< relref "/commands/xclaim" >}}) or when an [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) call is used in order to access the history of pending messages.
+
+When there are failures, it is normal that messages will be delivered multiple times, but eventually they usually get processed and acknowledged. However there might be a problem processing some specific message, because it is corrupted or crafted in a way that triggers a bug in the processing code. In such a case what happens is that consumers will continuously fail to process this particular message. Because we have the counter of the delivery attempts, we can use that counter to detect messages that for some reason are not processable. So once the deliveries counter reaches a given large number that you chose, it is probably wiser to put such messages in another stream and send a notification to the system administrator. This is basically the way that Redis Streams implements the *dead letter* concept.
+
+## Streams observability
+
+Messaging systems that lack observability are very hard to work with. Not knowing who is consuming messages, what messages are pending, the set of consumer groups active in a given stream, makes everything opaque. For this reason, Redis Streams and consumer groups have different ways to observe what is happening. We already covered [`XPENDING`]({{< relref "/commands/xpending" >}}), which allows us to inspect the list of messages that are under processing at a given moment, together with their idle time and number of deliveries.
+
+However we may want to do more than that, and the [`XINFO`]({{< relref "/commands/xinfo" >}}) command is an observability interface that can be used with sub-commands in order to get information about streams or consumer groups.
+
+This command uses subcommands in order to show different information about the status of the stream and its consumer groups. For instance **XINFO STREAM <key>** reports information about the stream itself.
+
+{{< clients-example stream_tutorial xinfo >}}
+> XINFO STREAM race:italy
+ 1) "length"
+ 2) (integer) 5
+ 3) "radix-tree-keys"
+ 4) (integer) 1
+ 5) "radix-tree-nodes"
+ 6) (integer) 2
+ 7) "last-generated-id"
+ 8) "1692632678249-0"
+ 9) "groups"
+10) (integer) 1
+11) "first-entry"
+12) 1) "1692632639151-0"
+ 2) 1) "rider"
+ 2) "Castilla"
+13) "last-entry"
+14) 1) "1692632678249-0"
+ 2) 1) "rider"
+ 2) "Norem"
+{{< /clients-example >}}
+
+The output shows information about how the stream is encoded internally, and also shows the first and last message in the stream. 
Another piece of information available is the number of consumer groups associated with this stream. We can dig further by asking for more information about the consumer groups.
+
+{{< clients-example stream_tutorial xinfo_groups >}}
+> XINFO GROUPS race:italy
+1) 1) "name"
+ 2) "italy_riders"
+ 3) "consumers"
+ 4) (integer) 3
+ 5) "pending"
+ 6) (integer) 2
+ 7) "last-delivered-id"
+ 8) "1692632662819-0"
+{{< /clients-example >}}
+
+As you can see in this and in the previous output, the [`XINFO`]({{< relref "/commands/xinfo" >}}) command outputs a sequence of field-value items. Because it is an observability command this allows the human user to immediately understand what information is reported, and allows the command to report more information in the future by adding more fields without breaking compatibility with older clients. Other commands that must be more bandwidth efficient, like [`XPENDING`]({{< relref "/commands/xpending" >}}), just report the information without the field names.
+
+The output of the example above, where the **GROUPS** subcommand is used, should be clear when observing the field names. We can check in more detail the state of a specific consumer group by checking the consumers that are registered in the group.
+
+{{< clients-example stream_tutorial xinfo_consumers >}}
+> XINFO CONSUMERS race:italy italy_riders
+1) 1) "name"
+ 2) "Alice"
+ 3) "pending"
+ 4) (integer) 1
+ 5) "idle"
+ 6) (integer) 177546
+2) 1) "name"
+ 2) "Bob"
+ 3) "pending"
+ 4) (integer) 0
+ 5) "idle"
+ 6) (integer) 424686
+3) 1) "name"
+ 2) "Lora"
+ 3) "pending"
+ 4) (integer) 1
+ 5) "idle"
+ 6) (integer) 72241
+{{< /clients-example >}}
+
+In case you do not remember the syntax of the command, just ask the command itself for help:
+
+```
+> XINFO HELP
+1) XINFO <subcommand> [<arg> [value] [opt] ...]. Subcommands are:
+2) CONSUMERS <key> <groupname>
+3) Show consumers of <groupname>.
+4) GROUPS <key>
+5) Show the stream consumer groups.
+6) STREAM <key> [FULL [COUNT <count>]
+7) Show information about the stream.
+8) HELP
+9) Prints this help.
+```
+
+## Differences with Kafka (TM) partitions
+
+Consumer groups in Redis streams may resemble in some way Kafka (TM) partitioning-based consumer groups, however note that Redis streams are, in practical terms, very different. The partitions are only *logical* and the messages are just put into a single Redis key, so the way the different clients are served is based on who is ready to process new messages, and not from which partition clients are reading. For instance, if the consumer C3 at some point fails permanently, Redis will continue to serve C1 and C2 all the new messages arriving, as if now there are only two *logical* partitions.
+
+Similarly, if a given consumer is much faster at processing messages than the other consumers, this consumer will receive proportionally more messages in the same unit of time. This is possible since Redis tracks all the unacknowledged messages explicitly, and remembers who received which message and the ID of the first message never delivered to any consumer.
+
+However, this also means that in Redis if you really want to partition messages in the same stream into multiple Redis instances, you have to use multiple keys and some sharding system such as Redis Cluster or some other application-specific sharding system. A single Redis stream is not automatically partitioned to multiple instances.
+
+We could say that schematically the following is true:
+
+* If you use 1 stream -> 1 consumer, you are processing messages in order. 
+* If you use N streams with N consumers, so that only a given consumer hits a subset of the N streams, you can scale the above model of 1 stream -> 1 consumer. +* If you use 1 stream -> N consumers, you are load balancing to N consumers, however in that case, messages about the same logical item may be consumed out of order, because a given consumer may process message 3 faster than another consumer is processing message 4. + +So basically Kafka partitions are more similar to using N different Redis keys, while Redis consumer groups are a server-side load balancing system of messages from a given stream to N different consumers. + +## Capped Streams + +Many applications do not want to collect data into a stream forever. Sometimes it is useful to have at maximum a given number of items inside a stream, other times once a given size is reached, it is useful to move data from Redis to a storage which is not in memory and not as fast but suited to store the history for, potentially, decades to come. Redis streams have some support for this. One is the **MAXLEN** option of the [`XADD`]({{< relref "/commands/xadd" >}}) command. This option is very simple to use: + +{{< clients-example stream_tutorial maxlen >}} +> XADD race:italy MAXLEN 2 * rider Jones +"1692633189161-0" +> XADD race:italy MAXLEN 2 * rider Wood +"1692633198206-0" +> XADD race:italy MAXLEN 2 * rider Henshaw +"1692633208557-0" +> XLEN race:italy +(integer) 2 +> XRANGE race:italy - + +1) 1) "1692633198206-0" + 2) 1) "rider" + 2) "Wood" +2) 1) "1692633208557-0" + 2) 1) "rider" + 2) "Henshaw" +{{< /clients-example >}} + +Using **MAXLEN** the old entries are automatically evicted when the specified length is reached, so that the stream is left at a constant size. There is currently no option to tell the stream to just retain items that are not older than a given period, because such command, in order to run consistently, would potentially block for a long time in order to evict items. Imagine for example what happens if there is an insertion spike, then a long pause, and another insertion, all with the same maximum time. The stream would block to evict the data that became too old during the pause. So it is up to the user to do some planning and understand what is the maximum stream length desired. Moreover, while the length of the stream is proportional to the memory used, trimming by time is less simple to control and anticipate: it depends on the insertion rate which often changes over time (and when it does not change, then to just trim by size is trivial). + +However trimming with **MAXLEN** can be expensive: streams are represented by macro nodes into a radix tree, in order to be very memory efficient. Altering the single macro node, consisting of a few tens of elements, is not optimal. So it's possible to use the command in the following special form: + +``` +XADD race:italy MAXLEN ~ 1000 * ... entry fields here ... +``` + +The `~` argument between the **MAXLEN** option and the actual count means, I don't really need this to be exactly 1000 items. It can be 1000 or 1010 or 1030, just make sure to save at least 1000 items. With this argument, the trimming is performed only when we can remove a whole node. This makes it much more efficient, and it is usually what you want. You'll note here that the client libraries have various implementations of this. For example, the Python client defaults to approximate and has to be explicitly set to a true length. 
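+
+For example, with the `redis-py` client this behavior is controlled by the `approximate` flag of `xadd()`, which defaults to approximate trimming as mentioned above. The following minimal sketch assumes a local Redis server and reuses the `race:italy` stream purely for illustration:
+
+```python
+import redis
+
+r = redis.Redis(host="localhost", port=6379, decode_responses=True)
+
+# Approximate trimming (the redis-py default): equivalent to MAXLEN ~ 1000,
+# so Redis only evicts whole macro nodes and the stream may temporarily
+# hold slightly more than 1000 entries.
+r.xadd("race:italy", {"rider": "Jones"}, maxlen=1000, approximate=True)
+
+# Exact trimming: equivalent to MAXLEN 1000 without the ~ modifier, which
+# can be more expensive because single macro nodes must be modified.
+r.xadd("race:italy", {"rider": "Wood"}, maxlen=1000, approximate=False)
+```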
+ +There is also the [`XTRIM`]({{< relref "/commands/xtrim" >}}) command, which performs something very similar to what the **MAXLEN** option does above, except that it can be run by itself: + +{{< clients-example stream_tutorial xtrim >}} +> XTRIM race:italy MAXLEN 10 +(integer) 0 +{{< /clients-example >}} + +Or, as for the [`XADD`]({{< relref "/commands/xadd" >}}) option: + +{{< clients-example stream_tutorial xtrim2 >}} +> XTRIM mystream MAXLEN ~ 10 +(integer) 0 +{{< /clients-example >}} + +However, [`XTRIM`]({{< relref "/commands/xtrim" >}}) is designed to accept different trimming strategies. Another trimming strategy is **MINID**, that evicts entries with IDs lower than the one specified. + +As [`XTRIM`]({{< relref "/commands/xtrim" >}}) is an explicit command, the user is expected to know about the possible shortcomings of different trimming strategies. + +Another useful eviction strategy that may be added to [`XTRIM`]({{< relref "/commands/xtrim" >}}) in the future, is to remove by a range of IDs to ease use of [`XRANGE`]({{< relref "/commands/xrange" >}}) and [`XTRIM`]({{< relref "/commands/xtrim" >}}) to move data from Redis to other storage systems if needed. + +## Special IDs in the streams API + +You may have noticed that there are several special IDs that can be used in the Redis API. Here is a short recap, so that they can make more sense in the future. + +The first two special IDs are `-` and `+`, and are used in range queries with the [`XRANGE`]({{< relref "/commands/xrange" >}}) command. Those two IDs respectively mean the smallest ID possible (that is basically `0-1`) and the greatest ID possible (that is `18446744073709551615-18446744073709551615`). As you can see it is a lot cleaner to write `-` and `+` instead of those numbers. + +Then there are APIs where we want to say, the ID of the item with the greatest ID inside the stream. This is what `$` means. So for instance if I want only new entries with [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) I use this ID to signify I already have all the existing entries, but not the new ones that will be inserted in the future. Similarly when I create or set the ID of a consumer group, I can set the last delivered item to `$` in order to just deliver new entries to the consumers in the group. + +As you can see `$` does not mean `+`, they are two different things, as `+` is the greatest ID possible in every possible stream, while `$` is the greatest ID in a given stream containing given entries. Moreover APIs will usually only understand `+` or `$`, yet it was useful to avoid loading a given symbol with multiple meanings. + +Another special ID is `>`, that is a special meaning only related to consumer groups and only when the [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) command is used. This special ID means that we want only entries that were never delivered to other consumers so far. So basically the `>` ID is the *last delivered ID* of a consumer group. + +Finally the special ID `*`, that can be used only with the [`XADD`]({{< relref "/commands/xadd" >}}) command, means to auto select an ID for us for the new entry. + +So we have `-`, `+`, `$`, `>` and `*`, and all have a different meaning, and most of the time, can be used in different contexts. + +## Persistence, replication and message safety + +A Stream, like any other Redis data structure, is asynchronously replicated to replicas and persisted into AOF and RDB files. 
However what may not be so obvious is that also the consumer groups full state is propagated to AOF, RDB and replicas, so if a message is pending in the master, also the replica will have the same information. Similarly, after a restart, the AOF will restore the consumer groups' state. + +However note that Redis streams and consumer groups are persisted and replicated using the Redis default replication, so: + +* AOF must be used with a strong fsync policy if persistence of messages is important in your application. +* By default the asynchronous replication will not guarantee that [`XADD`]({{< relref "/commands/xadd" >}}) commands or consumer groups state changes are replicated: after a failover something can be missing depending on the ability of replicas to receive the data from the master. +* The [`WAIT`]({{< relref "/commands/wait" >}}) command may be used in order to force the propagation of the changes to a set of replicas. However note that while this makes it very unlikely that data is lost, the Redis failover process as operated by Sentinel or Redis Cluster performs only a *best effort* check to failover to the replica which is the most updated, and under certain specific failure conditions may promote a replica that lacks some data. + +So when designing an application using Redis streams and consumer groups, make sure to understand the semantical properties your application should have during failures, and configure things accordingly, evaluating whether it is safe enough for your use case. + +## Removing single items from a stream + +Streams also have a special command for removing items from the middle of a stream, just by ID. Normally for an append only data structure this may look like an odd feature, but it is actually useful for applications involving, for instance, privacy regulations. The command is called [`XDEL`]({{< relref "/commands/xdel" >}}) and receives the name of the stream followed by the IDs to delete: + +{{< clients-example stream_tutorial xdel >}} +> XRANGE race:italy - + COUNT 2 +1) 1) "1692633198206-0" + 2) 1) "rider" + 2) "Wood" +2) 1) "1692633208557-0" + 2) 1) "rider" + 2) "Henshaw" +> XDEL race:italy 1692633208557-0 +(integer) 1 +> XRANGE race:italy - + COUNT 2 +1) 1) "1692633198206-0" + 2) 1) "rider" + 2) "Wood" +{{< /clients-example >}} + +However in the current implementation, memory is not really reclaimed until a macro node is completely empty, so you should not abuse this feature. + +## Zero length streams + +A difference between streams and other Redis data structures is that when the other data structures no longer have any elements, as a side effect of calling commands that remove elements, the key itself will be removed. So for instance, a sorted set will be completely removed when a call to [`ZREM`]({{< relref "/commands/zrem" >}}) will remove the last element in the sorted set. Streams, on the other hand, are allowed to stay at zero elements, both as a result of using a **MAXLEN** option with a count of zero ([`XADD`]({{< relref "/commands/xadd" >}}) and [`XTRIM`]({{< relref "/commands/xtrim" >}}) commands), or because [`XDEL`]({{< relref "/commands/xdel" >}}) was called. + +The reason why such an asymmetry exists is because Streams may have associated consumer groups, and we do not want to lose the state that the consumer groups defined just because there are no longer any items in the stream. Currently the stream is not deleted even when it has no associated consumer groups. 
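+
+To see this behavior from a client, here is a minimal `redis-py` sketch (the `temp:stream` key name and the local connection are assumptions made for this example): even after its only entry is deleted, the stream key still exists at length zero, so any consumer groups attached to it keep their state.
+
+```python
+import redis
+
+r = redis.Redis(host="localhost", port=6379, decode_responses=True)
+
+# Add a single entry, then delete it by ID.
+entry_id = r.xadd("temp:stream", {"rider": "Castilla"})
+r.xdel("temp:stream", entry_id)
+
+# Unlike most aggregate data types, the empty stream is not removed.
+print(r.xlen("temp:stream"))    # 0
+print(r.exists("temp:stream"))  # 1
+```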
+ +## Total latency of consuming a message + +Non blocking stream commands like [`XRANGE`]({{< relref "/commands/xrange" >}}) and [`XREAD`]({{< relref "/commands/xread" >}}) or [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) without the BLOCK option are served synchronously like any other Redis command, so to discuss latency of such commands is meaningless: it is more interesting to check the time complexity of the commands in the Redis documentation. It should be enough to say that stream commands are at least as fast as sorted set commands when extracting ranges, and that [`XADD`]({{< relref "/commands/xadd" >}}) is very fast and can easily insert from half a million to one million items per second in an average machine if pipelining is used. + +However latency becomes an interesting parameter if we want to understand the delay of processing a message, in the context of blocking consumers in a consumer group, from the moment the message is produced via [`XADD`]({{< relref "/commands/xadd" >}}), to the moment the message is obtained by the consumer because [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) returned with the message. + +## How serving blocked consumers works + +Before providing the results of performed tests, it is interesting to understand what model Redis uses in order to route stream messages (and in general actually how any blocking operation waiting for data is managed). + +* The blocked client is referenced in a hash table that maps keys for which there is at least one blocking consumer, to a list of consumers that are waiting for such key. This way, given a key that received data, we can resolve all the clients that are waiting for such data. +* When a write happens, in this case when the [`XADD`]({{< relref "/commands/xadd" >}}) command is called, it calls the `signalKeyAsReady()` function. This function will put the key into a list of keys that need to be processed, because such keys may have new data for blocked consumers. Note that such *ready keys* will be processed later, so in the course of the same event loop cycle, it is possible that the key will receive other writes. +* Finally, before returning into the event loop, the *ready keys* are finally processed. For each key the list of clients waiting for data is scanned, and if applicable, such clients will receive the new data that arrived. In the case of streams the data is the messages in the applicable range requested by the consumer. + +As you can see, basically, before returning to the event loop both the client calling [`XADD`]({{< relref "/commands/xadd" >}}) and the clients blocked to consume messages, will have their reply in the output buffers, so the caller of [`XADD`]({{< relref "/commands/xadd" >}}) should receive the reply from Redis at about the same time the consumers will receive the new messages. + +This model is *push-based*, since adding data to the consumers buffers will be performed directly by the action of calling [`XADD`]({{< relref "/commands/xadd" >}}), so the latency tends to be quite predictable. + +## Latency tests results + +In order to check these latency characteristics a test was performed using multiple instances of Ruby programs pushing messages having as an additional field the computer millisecond time, and Ruby programs reading the messages from the consumer group and processing them. The message processing step consisted of comparing the current computer time with the message timestamp, in order to understand the total latency. 
+ +Results obtained: + +``` +Processed between 0 and 1 ms -> 74.11% +Processed between 1 and 2 ms -> 25.80% +Processed between 2 and 3 ms -> 0.06% +Processed between 3 and 4 ms -> 0.01% +Processed between 4 and 5 ms -> 0.02% +``` + +So 99.9% of requests have a latency <= 2 milliseconds, with the outliers that remain still very close to the average. + +Adding a few million unacknowledged messages to the stream does not change the gist of the benchmark, with most queries still processed with very short latency. + +A few remarks: + +* Here we processed up to 10k messages per iteration, this means that the `COUNT` parameter of [`XREADGROUP`]({{< relref "/commands/xreadgroup" >}}) was set to 10000. This adds a lot of latency but is needed in order to allow the slow consumers to be able to keep with the message flow. So you can expect a real world latency that is a lot smaller. +* The system used for this benchmark is very slow compared to today's standards. + + + + +## Learn more + +* The [Redis Streams Tutorial]({{< relref "/develop/data-types/streams" >}}) explains Redis streams with many examples. +* [Redis Streams Explained](https://www.youtube.com/watch?v=Z8qcpXyMAiA) is an entertaining introduction to streams in Redis. +* [Redis University's RU202](https://university.redis.com/courses/ru202/) is a free, online course dedicated to Redis Streams. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Introduction to Redis lists + + ' +linkTitle: Lists +title: Redis lists +weight: 20 +--- + +Redis lists are linked lists of string values. +Redis lists are frequently used to: + +* Implement stacks and queues. +* Build queue management for background worker systems. + +## Basic commands + +* [`LPUSH`]({{< relref "/commands/lpush" >}}) adds a new element to the head of a list; [`RPUSH`]({{< relref "/commands/rpush" >}}) adds to the tail. +* [`LPOP`]({{< relref "/commands/lpop" >}}) removes and returns an element from the head of a list; [`RPOP`]({{< relref "/commands/rpop" >}}) does the same but from the tails of a list. +* [`LLEN`]({{< relref "/commands/llen" >}}) returns the length of a list. +* [`LMOVE`]({{< relref "/commands/lmove" >}}) atomically moves elements from one list to another. +* [`LRANGE`]({{< relref "/commands/lrange" >}}) extracts a range of elements from a list. +* [`LTRIM`]({{< relref "/commands/ltrim" >}}) reduces a list to the specified range of elements. + +### Blocking commands + +Lists support several blocking commands. +For example: + +* [`BLPOP`]({{< relref "/commands/blpop" >}}) removes and returns an element from the head of a list. + If the list is empty, the command blocks until an element becomes available or until the specified timeout is reached. +* [`BLMOVE`]({{< relref "/commands/blmove" >}}) atomically moves elements from a source list to a target list. + If the source list is empty, the command will block until a new element becomes available. + +See the [complete series of list commands]({{< relref "/commands/" >}}?group=list). 
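+
+As a quick illustration of the blocking commands, here is a minimal producer/worker sketch using the `redis-py` client. The connection settings, the `bikes:repairs` queue name, and the five-second timeout are assumptions made for this example:
+
+```python
+import redis
+
+r = redis.Redis(host="localhost", port=6379, decode_responses=True)
+
+# Producer: push work items onto the head of the list.
+r.lpush("bikes:repairs", "bike:1", "bike:2")
+
+# Worker: BRPOP blocks for up to 5 seconds waiting for an item and
+# returns a (key, element) pair, or None if the timeout is reached.
+while True:
+    item = r.brpop(["bikes:repairs"], timeout=5)
+    if item is None:
+        break  # queue drained; a long-running worker would keep waiting
+    queue, element = item
+    print(f"processing {element} from {queue}")
+```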
+ +## Examples + +* Treat a list like a queue (first in, first out): +{{< clients-example list_tutorial queue >}} +> LPUSH bikes:repairs bike:1 +(integer) 1 +> LPUSH bikes:repairs bike:2 +(integer) 2 +> RPOP bikes:repairs +"bike:1" +> RPOP bikes:repairs +"bike:2" +{{< /clients-example >}} + +* Treat a list like a stack (first in, last out): +{{< clients-example list_tutorial stack >}} +> LPUSH bikes:repairs bike:1 +(integer) 1 +> LPUSH bikes:repairs bike:2 +(integer) 2 +> LPOP bikes:repairs +"bike:2" +> LPOP bikes:repairs +"bike:1" +{{< /clients-example >}} + +* Check the length of a list: +{{< clients-example list_tutorial llen >}} +> LLEN bikes:repairs +(integer) 0 +{{< /clients-example >}} + +* Atomically pop an element from one list and push to another: +{{< clients-example list_tutorial lmove_lrange >}} +> LPUSH bikes:repairs bike:1 +(integer) 1 +> LPUSH bikes:repairs bike:2 +(integer) 2 +> LMOVE bikes:repairs bikes:finished LEFT LEFT +"bike:2" +> LRANGE bikes:repairs 0 -1 +1) "bike:1" +> LRANGE bikes:finished 0 -1 +1) "bike:2" +{{< /clients-example >}} + +* To limit the length of a list you can call [`LTRIM`]({{< relref "/commands/ltrim" >}}): +{{< clients-example list_tutorial ltrim.1 >}} +> RPUSH bikes:repairs bike:1 bike:2 bike:3 bike:4 bike:5 +(integer) 5 +> LTRIM bikes:repairs 0 2 +OK +> LRANGE bikes:repairs 0 -1 +1) "bike:1" +2) "bike:2" +3) "bike:3" +{{< /clients-example >}} + +### What are Lists? +To explain the List data type it's better to start with a little bit of theory, +as the term *List* is often used in an improper way by information technology +folks. For instance "Python Lists" are not what the name may suggest (Linked +Lists), but rather Arrays (the same data type is called Array in +Ruby actually). + +From a very general point of view a List is just a sequence of ordered +elements: 10,20,1,2,3 is a list. But the properties of a List implemented using +an Array are very different from the properties of a List implemented using a +*Linked List*. + +Redis lists are implemented via Linked Lists. This means that even if you have +millions of elements inside a list, the operation of adding a new element in +the head or in the tail of the list is performed *in constant time*. The speed of adding a +new element with the [`LPUSH`]({{< relref "/commands/lpush" >}}) command to the head of a list with ten +elements is the same as adding an element to the head of list with 10 +million elements. + +What's the downside? Accessing an element *by index* is very fast in lists +implemented with an Array (constant time indexed access) and not so fast in +lists implemented by linked lists (where the operation requires an amount of +work proportional to the index of the accessed element). + +Redis Lists are implemented with linked lists because for a database system it +is crucial to be able to add elements to a very long list in a very fast way. +Another strong advantage, as you'll see in a moment, is that Redis Lists can be +taken at constant length in constant time. + +When fast access to the middle of a large collection of elements is important, +there is a different data structure that can be used, called sorted sets. +Sorted sets are covered in the [Sorted sets]({{< relref "/develop/data-types/sorted-sets" >}}) tutorial page. 
+ +### First steps with Redis Lists + +The [`LPUSH`]({{< relref "/commands/lpush" >}}) command adds a new element into a list, on the +left (at the head), while the [`RPUSH`]({{< relref "/commands/rpush" >}}) command adds a new +element into a list, on the right (at the tail). Finally the +[`LRANGE`]({{< relref "/commands/lrange" >}}) command extracts ranges of elements from lists: + +{{< clients-example list_tutorial lpush_rpush >}} +> RPUSH bikes:repairs bike:1 +(integer) 1 +> RPUSH bikes:repairs bike:2 +(integer) 2 +> LPUSH bikes:repairs bike:important_bike +(integer) 3 +> LRANGE bikes:repairs 0 -1 +1) "bike:important_bike" +2) "bike:1" +3) "bike:2" +{{< /clients-example >}} + +Note that [`LRANGE`]({{< relref "/commands/lrange" >}}) takes two indexes, the first and the last +element of the range to return. Both the indexes can be negative, telling Redis +to start counting from the end: so -1 is the last element, -2 is the +penultimate element of the list, and so forth. + +As you can see [`RPUSH`]({{< relref "/commands/rpush" >}}) appended the elements on the right of the list, while +the final [`LPUSH`]({{< relref "/commands/lpush" >}}) appended the element on the left. + +Both commands are *variadic commands*, meaning that you are free to push +multiple elements into a list in a single call: + +{{< clients-example list_tutorial variadic >}} +> RPUSH bikes:repairs bike:1 bike:2 bike:3 +(integer) 3 +> LPUSH bikes:repairs bike:important_bike bike:very_important_bike +> LRANGE bikes:repairs 0 -1 +1) "bike:very_important_bike" +2) "bike:important_bike" +3) "bike:1" +4) "bike:2" +5) "bike:3" +{{< /clients-example >}} + +An important operation defined on Redis lists is the ability to *pop elements*. +Popping elements is the operation of both retrieving the element from the list, +and eliminating it from the list, at the same time. You can pop elements +from left and right, similarly to how you can push elements in both sides +of the list. We'll add three elements and pop three elements, so at the end of this +sequence of commands the list is empty and there are no more elements to +pop: + +{{< clients-example list_tutorial lpop_rpop >}} +> RPUSH bikes:repairs bike:1 bike:2 bike:3 +(integer) 3 +> RPOP bikes:repairs +"bike:3" +> LPOP bikes:repairs +"bike:1" +> RPOP bikes:repairs +"bike:2" +> RPOP bikes:repairs +(nil) +{{< /clients-example >}} + +Redis returned a NULL value to signal that there are no elements in the +list. + +### Common use cases for lists + +Lists are useful for a number of tasks, two very representative use cases +are the following: + +* Remember the latest updates posted by users into a social network. +* Communication between processes, using a consumer-producer pattern where the producer pushes items into a list, and a consumer (usually a *worker*) consumes those items and executes actions. Redis has special list commands to make this use case both more reliable and efficient. + +For example both the popular Ruby libraries [resque](https://github.com/resque/resque) and +[sidekiq](https://github.com/mperham/sidekiq) use Redis lists under the hood in order to +implement background jobs. + +The popular Twitter social network [takes the latest tweets](http://www.infoq.com/presentations/Real-Time-Delivery-Twitter) +posted by users into Redis lists. + +To describe a common use case step by step, imagine your home page shows the latest +photos published in a photo sharing social network and you want to speedup access. 
+ +* Every time a user posts a new photo, we add its ID into a list with [`LPUSH`]({{< relref "/commands/lpush" >}}). +* When users visit the home page, we use `LRANGE 0 9` in order to get the latest 10 posted items. + +### Capped lists + +In many use cases we just want to use lists to store the *latest items*, +whatever they are: social network updates, logs, or anything else. + +Redis allows us to use lists as a capped collection, only remembering the latest +N items and discarding all the oldest items using the [`LTRIM`]({{< relref "/commands/ltrim" >}}) command. + +The [`LTRIM`]({{< relref "/commands/ltrim" >}}) command is similar to [`LRANGE`]({{< relref "/commands/lrange" >}}), but **instead of displaying the +specified range of elements** it sets this range as the new list value. All +the elements outside the given range are removed. + +For example, if you're adding bikes on the end of a list of repairs, but only +want to worry about the 3 that have been on the list the longest: + +{{< clients-example list_tutorial ltrim >}} +> RPUSH bikes:repairs bike:1 bike:2 bike:3 bike:4 bike:5 +(integer) 5 +> LTRIM bikes:repairs 0 2 +OK +> LRANGE bikes:repairs 0 -1 +1) "bike:1" +2) "bike:2" +3) "bike:3" +{{< /clients-example >}} + +The above [`LTRIM`]({{< relref "/commands/ltrim" >}}) command tells Redis to keep just list elements from index +0 to 2, everything else will be discarded. This allows for a very simple but +useful pattern: doing a List push operation + a List trim operation together +to add a new element and discard elements exceeding a limit. Using +[`LTRIM`]({{< relref "/commands/ltrim" >}}) with negative indexes can then be used to keep only the 3 most recently added: + +{{< clients-example list_tutorial ltrim_end_of_list >}} +> RPUSH bikes:repairs bike:1 bike:2 bike:3 bike:4 bike:5 +(integer) 5 +> LTRIM bikes:repairs -3 -1 +OK +> LRANGE bikes:repairs 0 -1 +1) "bike:3" +2) "bike:4" +3) "bike:5" +{{< /clients-example >}} + +The above combination adds new elements and keeps only the 3 +newest elements into the list. With [`LRANGE`]({{< relref "/commands/lrange" >}}) you can access the top items +without any need to remember very old data. + +Note: while [`LRANGE`]({{< relref "/commands/lrange" >}}) is technically an O(N) command, accessing small ranges +towards the head or the tail of the list is a constant time operation. + +Blocking operations on lists +--- + +Lists have a special feature that make them suitable to implement queues, +and in general as a building block for inter process communication systems: +blocking operations. + +Imagine you want to push items into a list with one process, and use +a different process in order to actually do some kind of work with those +items. This is the usual producer / consumer setup, and can be implemented +in the following simple way: + +* To push items into the list, producers call [`LPUSH`]({{< relref "/commands/lpush" >}}). +* To extract / process items from the list, consumers call [`RPOP`]({{< relref "/commands/rpop" >}}). + +However it is possible that sometimes the list is empty and there is nothing +to process, so [`RPOP`]({{< relref "/commands/rpop" >}}) just returns NULL. In this case a consumer is forced to wait +some time and retry again with [`RPOP`]({{< relref "/commands/rpop" >}}). This is called *polling*, and is not +a good idea in this context because it has several drawbacks: + +1. 
Forces Redis and clients to process useless commands (all the requests when the list is empty will get no actual work done, they'll just return NULL). +2. Adds a delay to the processing of items, since after a worker receives a NULL, it waits some time. To make the delay smaller, we could wait less between calls to [`RPOP`]({{< relref "/commands/rpop" >}}), with the effect of amplifying problem number 1, i.e. more useless calls to Redis. + +So Redis implements commands called [`BRPOP`]({{< relref "/commands/brpop" >}}) and [`BLPOP`]({{< relref "/commands/blpop" >}}) which are versions +of [`RPOP`]({{< relref "/commands/rpop" >}}) and [`LPOP`]({{< relref "/commands/lpop" >}}) able to block if the list is empty: they'll return to +the caller only when a new element is added to the list, or when a user-specified +timeout is reached. + +This is an example of a [`BRPOP`]({{< relref "/commands/brpop" >}}) call we could use in the worker: + +{{< clients-example list_tutorial brpop >}} +> RPUSH bikes:repairs bike:1 bike:2 +(integer) 2 +> BRPOP bikes:repairs 1 +1) "bikes:repairs" +2) "bike:2" +> BRPOP bikes:repairs 1 +1) "bikes:repairs" +2) "bike:1" +> BRPOP bikes:repairs 1 +(nil) +(2.01s) +{{< /clients-example >}} + +It means: "wait for elements in the list `bikes:repairs`, but return if after 1 second +no element is available". + +Note that you can use 0 as timeout to wait for elements forever, and you can +also specify multiple lists and not just one, in order to wait on multiple +lists at the same time, and get notified when the first list receives an +element. + +A few things to note about [`BRPOP`]({{< relref "/commands/brpop" >}}): + +1. Clients are served in an ordered way: the first client that blocked waiting for a list, is served first when an element is pushed by some other client, and so forth. +2. The return value is different compared to [`RPOP`]({{< relref "/commands/rpop" >}}): it is a two-element array since it also includes the name of the key, because [`BRPOP`]({{< relref "/commands/brpop" >}}) and [`BLPOP`]({{< relref "/commands/blpop" >}}) are able to block waiting for elements from multiple lists. +3. If the timeout is reached, NULL is returned. + +There are more things you should know about lists and blocking ops. We +suggest that you read more on the following: + +* It is possible to build safer queues or rotating queues using [`LMOVE`]({{< relref "/commands/lmove" >}}). +* There is also a blocking variant of the command, called [`BLMOVE`]({{< relref "/commands/blmove" >}}). + +## Automatic creation and removal of keys + +So far in our examples we never had to create empty lists before pushing +elements, or removing empty lists when they no longer have elements inside. +It is Redis' responsibility to delete keys when lists are left empty, or to create +an empty list if the key does not exist and we are trying to add elements +to it, for example, with [`LPUSH`]({{< relref "/commands/lpush" >}}). + +This is not specific to lists, it applies to all the Redis data types +composed of multiple elements -- Streams, Sets, Sorted Sets and Hashes. + +Basically we can summarize the behavior with three rules: + +1. When we add an element to an aggregate data type, if the target key does not exist, an empty aggregate data type is created before adding the element. +2. When we remove elements from an aggregate data type, if the value remains empty, the key is automatically destroyed. The Stream data type is the only exception to this rule. +3. 
Calling a read-only command such as [`LLEN`]({{< relref "/commands/llen" >}}) (which returns the length of the list), or a write command removing elements, with an empty key, always produces the same result as if the key is holding an empty aggregate type of the type the command expects to find. + +Examples of rule 1: + +{{< clients-example list_tutorial rule_1 >}} +> DEL new_bikes +(integer) 0 +> LPUSH new_bikes bike:1 bike:2 bike:3 +(integer) 3 +{{< /clients-example >}} + +However we can't perform operations against the wrong type if the key exists: + +{{< clients-example list_tutorial rule_1.1 >}} +> SET new_bikes bike:1 +OK +> TYPE new_bikes +string +> LPUSH new_bikes bike:2 bike:3 +(error) WRONGTYPE Operation against a key holding the wrong kind of value +{{< /clients-example >}} + +Example of rule 2: + +{{< clients-example list_tutorial rule_2 >}} +> RPUSH bikes:repairs bike:1 bike:2 bike:3 +(integer) 3 +> EXISTS bikes:repairs +(integer) 1 +> LPOP bikes:repairs +"bike:3" +> LPOP bikes:repairs +"bike:2" +> LPOP bikes:repairs +"bike:1" +> EXISTS bikes:repairs +(integer) 0 +{{< /clients-example >}} + +The key no longer exists after all the elements are popped. + +Example of rule 3: + +{{< clients-example list_tutorial rule_3 >}} +> DEL bikes:repairs +(integer) 0 +> LLEN bikes:repairs +(integer) 0 +> LPOP bikes:repairs +(nil) +{{< /clients-example >}} + + +## Limits + +The max length of a Redis list is 2^32 - 1 (4,294,967,295) elements. + + +## Performance + +List operations that access its head or tail are O(1), which means they're highly efficient. +However, commands that manipulate elements within a list are usually O(n). +Examples of these include [`LINDEX`]({{< relref "/commands/lindex" >}}), [`LINSERT`]({{< relref "/commands/linsert" >}}), and [`LSET`]({{< relref "/commands/lset" >}}). +Exercise caution when running these commands, mainly when operating on large lists. + +## Alternatives + +Consider [Redis streams]({{< relref "/develop/data-types/streams" >}}) as an alternative to lists when you need to store and process an indeterminate series of events. + +## Learn more + +* [Redis Lists Explained](https://www.youtube.com/watch?v=PB5SeOkkxQc) is a short, comprehensive video explainer on Redis lists. +* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis lists in detail. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Debugging memory consumption +linkTitle: Memory Usage +title: Redis JSON RAM Usage +weight: 6 +--- + +{{< note >}} +Because of ongoing feature additions, improvements, and optimizations, JSON memory consumption may vary depending on the Redis version. +Redis 8 in Redis Open Source was used for the examples on this page. +{{< /note >}} + +Every key in Redis takes memory and requires at least the amount of RAM to store the key name, as +well as some per-key overhead that Redis uses. On top of that, the value in the key also requires +RAM. + +Redis JSON stores JSON values as binary data after deserialization. This representation is often more +expensive, size-wise, than the serialized form. All JSON values occupy at least 8 bytes (on 64-bit architectures) because each is represented as a thin wrapper around a pointer. The type information is stored in the lower bits of the pointer, which are guaranteed to be zero due to alignment restrictions. This allows those bits to be repurposed to store some auxiliary data. + +For some types of JSON values, 8 bytes is all that’s needed. 
Nulls and booleans don’t require any additional storage. Small integers are stored in static memory because they’re frequently used, so they also use only the initial 8 bytes. Similarly, empty strings, arrays, and objects don’t require any bookkeeping. Instead, they point to static instances of a _null_ string, array, or object. Here are some examples that use the [JSON.DEBUG MEMORY]({{< relref "/commands/json.debug-memory" >}}) command to report on memory consumption: + +``` +127.0.0.1:6379> JSON.SET boolean . 'true' +OK +127.0.0.1:6379> JSON.DEBUG MEMORY boolean +(integer) 8 + +127.0.0.1:6379> JSON.SET null . null +OK +127.0.0.1:6379> JSON.DEBUG MEMORY null +(integer) 8 + +127.0.0.1:6379> JSON.SET emptystring . '""' +OK +127.0.0.1:6379> JSON.DEBUG MEMORY emptystring +(integer) 8 + +127.0.0.1:6379> JSON.SET emptyarr . '[]' +OK +127.0.0.1:6379> JSON.DEBUG MEMORY emptyarr +(integer) 8 + +127.0.0.1:6379> JSON.SET emptyobj . '{}' +OK +127.0.0.1:6379> JSON.DEBUG MEMORY emptyobj +(integer) 8 +``` + +This RAM requirement is the same for all scalar values, but strings require additional space +depending on their length. For example, a 3-character string will use 3 additional bytes: + +``` +127.0.0.1:6379> JSON.SET foo . '"bar"' +OK +127.0.0.1:6379> JSON.DEBUG MEMORY foo +(integer) 11 +``` + +In the following four examples, each array requires 56 bytes. This breaks down as: +- 8 bytes for the initial array value pointer +- 16 bytes of metadata: 8 bytes for the allocated capacity and 8 bytes for the point-in-time size of the array +- 32 bytes for the array. The initial capacity of an array is 4. Therefore, the calculation is `4 * 8` bytes + +``` +127.0.0.1:6379> JSON.SET arr . '[""]' +OK +127.0.0.1:6379> JSON.DEBUG MEMORY arr +(integer) 56 +``` + +``` +127.0.0.1:6379> JSON.SET arr . '["", ""]' +OK +127.0.0.1:6379> JSON.DEBUG MEMORY arr +(integer) 56 +``` + +``` +127.0.0.1:6379> JSON.SET arr . '["", "", ""]' +OK +127.0.0.1:6379> JSON.DEBUG MEMORY arr +(integer) 56 +``` + +``` +127.0.0.1:6379> JSON.SET arr . '["", "", "", ""]' +OK +127.0.0.1:6379> JSON.DEBUG MEMORY arr +(integer) 56 +``` + +Once the current capacity is insufficient to fit a new value, the array reallocates to double its capacity. An array with 5 elements will have a capacity of 8, therefore consuming `8 + 16 + 8 * 8 = 88` bytes. + +``` +127.0.0.1:6379> JSON.SET arr . '["", "", "", "", ""]' +OK +127.0.0.1:6379> JSON.DEBUG MEMORY arr +(integer) 88 +``` + +Because reallocation operations can be expensive, Redis grows JSON arrays geometrically rather than linearly. This approach spreads the cost across many insertions. + +This table gives the size (in bytes) of a few of the test files from the [module repo](https://github.com/RedisJSON/RedisJSON/tree/master/tests/files), stored using +JSON. The _MessagePack_ column is for reference purposes and reflects the length of the value when stored using [MessagePack](https://msgpack.org/index.html). + +| File | File size | Redis JSON | MessagePack | +| --------------------------------------- | --------- | ---------- | ----------- | +| /tests/files/pass-100.json | 381 | 1069 | 140 | +| /tests/files/pass-jsonsl-1.json | 1387 | 2190 | 757 | +| /tests/files/pass-json-parser-0000.json | 3718 | 5469 | 2393 | +| /tests/files/pass-jsonsl-yahoo2.json | 22466 | 26901 | 16869 | +| /tests/files/pass-jsonsl-yelp.json | 46333 | 57513 | 35529 | + +> Note: In the current version, deleting values from containers **does not** free the container's +allocated memory. 
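+
+If you want to observe this growth pattern from a client, the following minimal `redis-py` sketch issues the same commands as the examples above. The `arr` key and the local connection are assumptions, and `JSON.DEBUG MEMORY` is sent as a raw command so the sketch does not depend on a particular client helper:
+
+```python
+import json
+import redis
+
+r = redis.Redis(host="localhost", port=6379, decode_responses=True)
+
+# Report the memory used by arrays of increasing length.
+for n in range(1, 7):
+    r.execute_command("JSON.SET", "arr", ".", json.dumps([""] * n))
+    used = r.execute_command("JSON.DEBUG", "MEMORY", "arr")
+    print(f"{n} elements -> {used} bytes")
+
+# Expected, per the examples above: 56 bytes up to 4 elements,
+# then 88 bytes once the capacity doubles to 8.
+```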
+ +## JSON string reuse mechanism + +Redis uses a global string reuse mechanism to reduce memory usage. When a string value appears multiple times, either within the same JSON document +or across different documents on the same node, Redis stores only a single copy of that string and uses references to it. +This approach is especially efficient when many documents share similar structures. + +However, the `JSON.DEBUG MEMORY` command reports memory usage as if each string instance is stored independently, even when it's actually reused. +For example, the document `{"foo": ["foo", "foo"]}` reuses the string `"foo"` internally, but the reported memory usage counts the string three times: once for the key and once for each array element.--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Access specific elements within a JSON document +linkTitle: Path +title: Path +weight: 3 +--- + +Paths let you access specific elements within a JSON document. Since no standard for JSON path syntax exists, Redis JSON implements its own. JSON's syntax is based on common best practices and intentionally resembles [JSONPath](http://goessner.net/articles/JsonPath/). + +JSON supports two query syntaxes: [JSONPath syntax](#jsonpath-syntax) and the [legacy path syntax](#legacy-path-syntax) from the first version of JSON. + +JSON knows which syntax to use depending on the first character of the path query. If the query starts with the character `$`, it uses JSONPath syntax. Otherwise, it defaults to the legacy path syntax. + +The returned value is a JSON string with a top-level array of JSON serialized strings. +And if multi-paths are used, the return value is a JSON string with a top-level object with values that are arrays of serialized JSON values. + +## JSONPath support + +RedisJSON v2.0 introduced [JSONPath](http://goessner.net/articles/JsonPath/) support. It follows the syntax described by Goessner in his [article](http://goessner.net/articles/JsonPath/). + +A JSONPath query can resolve to several locations in a JSON document. In this case, the JSON commands apply the operation to every possible location. This is a major improvement over [legacy path](#legacy-path-syntax) queries, which only operate on the first path. + +Notice that the structure of the command response often differs when using JSONPath. See the [Commands]({{< relref "/commands/" >}}?group=json) page for more details. + +The new syntax supports bracket notation, which allows the use of special characters like colon ":" or whitespace in key names. + +If you want to include double quotes in a query from the CLI, enclose the JSONPath within single quotes. For example: + +```bash +JSON.GET store '$.inventory["mountain_bikes"]' +``` + +## JSONPath syntax + +The following JSONPath syntax table was adapted from Goessner's [path syntax comparison](https://goessner.net/articles/JsonPath/index.html#e2). + +| Syntax element | Description | +|----------------|-------------| +| $ | The root (outermost JSON element), starts the path. | +| . or [] | Selects a child element. | +| .. | Recursively descends through the JSON document. | +| * | Wildcard, returns all elements. | +| [] | Subscript operator, accesses an array element. | +| [,] | Union, selects multiple elements. | +| [start\:end\:step] | Array slice where *start*, *end*, and *step* are index values. 
You can omit values from the slice (for example, `[3:]`, `[:8:2]`) to use the default values: *start* defaults to the first index, *end* defaults to the last index, and *step* defaults to `1`. Use `[*]` or `[:]` to select all elements. | +| ?() | Filters a JSON object or array. Supports comparison operators (`==`, `!=`, `<`, `<=`, `>`, `>=`, `=~`), logical operators (`&&`, `\|\|`), and parenthesis (`(`, `)`). | +| () | Script expression. | +| @ | The current element, used in filter or script expressions. | + +## JSONPath examples + +The following JSONPath examples use this JSON document, which stores details about items in a store's inventory: + +```json +{ + "inventory": { + "mountain_bikes": [ + { + "id": "bike:1", + "model": "Phoebe", + "description": "This is a mid-travel trail slayer that is a fantastic daily driver or one bike quiver. The Shimano Claris 8-speed groupset gives plenty of gear range to tackle hills and there\u2019s room for mudguards and a rack too. This is the bike for the rider who wants trail manners with low fuss ownership.", + "price": 1920, + "specs": {"material": "carbon", "weight": 13.1}, + "colors": ["black", "silver"], + }, + { + "id": "bike:2", + "model": "Quaoar", + "description": "Redesigned for the 2020 model year, this bike impressed our testers and is the best all-around trail bike we've ever tested. The Shimano gear system effectively does away with an external cassette, so is super low maintenance in terms of wear and tear. All in all it's an impressive package for the price, making it very competitive.", + "price": 2072, + "specs": {"material": "aluminium", "weight": 7.9}, + "colors": ["black", "white"], + }, + { + "id": "bike:3", + "model": "Weywot", + "description": "This bike gives kids aged six years and older a durable and uberlight mountain bike for their first experience on tracks and easy cruising through forests and fields. A set of powerful Shimano hydraulic disc brakes provide ample stopping ability. If you're after a budget option, this is one of the best bikes you could get.", + "price": 3264, + "specs": {"material": "alloy", "weight": 13.8}, + }, + ], + "commuter_bikes": [ + { + "id": "bike:4", + "model": "Salacia", + "description": "This bike is a great option for anyone who just wants a bike to get about on With a slick-shifting Claris gears from Shimano\u2019s, this is a bike which doesn\u2019t break the bank and delivers craved performance. It\u2019s for the rider who wants both efficiency and capability.", + "price": 1475, + "specs": {"material": "aluminium", "weight": 16.6}, + "colors": ["black", "silver"], + }, + { + "id": "bike:5", + "model": "Mimas", + "description": "A real joy to ride, this bike got very high scores in last years Bike of the year report. The carefully crafted 50-34 tooth chainset and 11-32 tooth cassette give an easy-on-the-legs bottom gear for climbing, and the high-quality Vittoria Zaffiro tires give balance and grip.It includes a low-step frame , our memory foam seat, bump-resistant shocks and conveniently placed thumb throttle. 
Put it all together and you get a bike that helps redefine what can be done for this price.", + "price": 3941, + "specs": {"material": "alloy", "weight": 11.6}, + }, + ], + } +} +``` + +First, create the JSON document in your database: + +{{< clients-example json_tutorial set_bikes >}} +JSON.SET bikes:inventory $ '{ "inventory": { "mountain_bikes": [ { "id": "bike:1", "model": "Phoebe", "description": "This is a mid-travel trail slayer that is a fantastic daily driver or one bike quiver. The Shimano Claris 8-speed groupset gives plenty of gear range to tackle hills and there\'s room for mudguards and a rack too. This is the bike for the rider who wants trail manners with low fuss ownership.", "price": 1920, "specs": {"material": "carbon", "weight": 13.1}, "colors": ["black", "silver"] }, { "id": "bike:2", "model": "Quaoar", "description": "Redesigned for the 2020 model year, this bike impressed our testers and is the best all-around trail bike we\'ve ever tested. The Shimano gear system effectively does away with an external cassette, so is super low maintenance in terms of wear and tear. All in all it\'s an impressive package for the price, making it very competitive.", "price": 2072, "specs": {"material": "aluminium", "weight": 7.9}, "colors": ["black", "white"] }, { "id": "bike:3", "model": "Weywot", "description": "This bike gives kids aged six years and older a durable and uberlight mountain bike for their first experience on tracks and easy cruising through forests and fields. A set of powerful Shimano hydraulic disc brakes provide ample stopping ability. If you\'re after a budget option, this is one of the best bikes you could get.", "price": 3264, "specs": {"material": "alloy", "weight": 13.8} } ], "commuter_bikes": [ { "id": "bike:4", "model": "Salacia", "description": "This bike is a great option for anyone who just wants a bike to get about on With a slick-shifting Claris gears from Shimano\'s, this is a bike which doesn\'t break the bank and delivers craved performance. It\'s for the rider who wants both efficiency and capability.", "price": 1475, "specs": {"material": "aluminium", "weight": 16.6}, "colors": ["black", "silver"] }, { "id": "bike:5", "model": "Mimas", "description": "A real joy to ride, this bike got very high scores in last years Bike of the year report. The carefully crafted 50-34 tooth chainset and 11-32 tooth cassette give an easy-on-the-legs bottom gear for climbing, and the high-quality Vittoria Zaffiro tires give balance and grip.It includes a low-step frame , our memory foam seat, bump-resistant shocks and conveniently placed thumb throttle. Put it all together and you get a bike that helps redefine what can be done for this price.", "price": 3941, "specs": {"material": "alloy", "weight": 11.6} } ] }}' +{{< /clients-example >}} + +### Access examples + +The following examples use the [`JSON.GET`]({{< relref "commands/json.get/" >}}) command to retrieve data from various paths in the JSON document. + +You can use the wildcard operator `*` to return a list of all items in the inventory: + +{{< clients-example json_tutorial get_bikes >}} +JSON.GET bikes:inventory $.inventory.* +"[[{\"id\":\"bike:1\",\"model\":\"Phoebe\",\"description\":\"This is a mid-travel trail slayer... +{{< /clients-example >}} + +For some queries, multiple paths can produce the same results. 
For example, the following paths return the names of all mountain bikes: + +{{< clients-example json_tutorial get_mtnbikes >}} +> JSON.GET bikes:inventory $.inventory.mountain_bikes[*].model +"[\"Phoebe\",\"Quaoar\",\"Weywot\"]" +> JSON.GET bikes:inventory '$.inventory["mountain_bikes"][*].model' +"[\"Phoebe\",\"Quaoar\",\"Weywot\"]" +> JSON.GET bikes:inventory '$..mountain_bikes[*].model' +"[\"Phoebe\",\"Quaoar\",\"Weywot\"]" +{{< /clients-example >}} + +The recursive descent operator `..` can retrieve a field from multiple sections of a JSON document. The following example returns the names of all inventory items: + +{{< clients-example json_tutorial get_models >}} +> JSON.GET bikes:inventory $..model +"[\"Phoebe\",\"Quaoar\",\"Weywot\",\"Salacia\",\"Mimas\"]" +{{< /clients-example >}} + +You can use an array slice to select a range of elements from an array. This example returns the names of the first 2 mountain bikes: + +{{< clients-example json_tutorial get2mtnbikes >}} +> JSON.GET bikes:inventory $..mountain_bikes[0:2].model +"[\"Phoebe\",\"Quaoar\"]" +{{< /clients-example >}} + +Filter expressions `?()` let you select JSON elements based on certain conditions. You can use comparison operators (`==`, `!=`, `<`, `<=`, `>`, `>=`, and starting with version v2.4.2, also `=~`), logical operators (`&&`, `||`), and parenthesis (`(`, `)`) within these expressions. A filter expression can be applied on an array or on an object, iterating over all the **elements** in the array or all the **values** in the object, retrieving only the ones that match the filter condition. + +Paths within the filter condition use the dot notation with either `@` to denote the current array element or the current object value, or `$` to denote the top-level element. For example, use `@.key_name` to refer to a nested value and `$.top_level_key_name` to refer to a top-level value. + +From version v2.4.2 onward, you can use the comparison operator `=~` to match a path of a string value on the left side against a regular expression pattern on the right side. For more information, see the [supported regular expression syntax docs](https://docs.rs/regex/latest/regex/#syntax). + +Non-string values do not match. A match can only occur when the left side is a path of a string value and the right side is either a hard-coded string, or a path of a string value. See [examples](#json-filter-examples) below. + +The regex match is partial, meaning a regex pattern like `"foo"` matches a string such as `"barefoots"`. +To make the match exact, use the regex pattern `"^foo$"`. + +Other JSONPath engines may use regex patterns between slashes (for example, `/foo/`), +and their match is exact. They can perform partial matches using a regex pattern such +as `/.*foo.*/`. + +### Filter examples + +In the following example, the filter only returns mountain bikes with a price less than 3000 and +a weight less than 10: + +{{< clients-example json_tutorial filter1 >}} +> JSON.GET bikes:inventory '$..mountain_bikes[?(@.price < 3000 && @.specs.weight < 10)]' +"[{\"id\":\"bike:2\",\"model\":\"Quaoar\",\"description\":\"Redesigned for the 2020 model year... 
+
+{{< /clients-example >}}
+
+This example filters the inventory for the model names of bikes made from alloy:
+
+{{< clients-example json_tutorial filter2 >}}
+> JSON.GET bikes:inventory '$..[?(@.specs.material == "alloy")].model'
+"[\"Weywot\",\"Mimas\"]"
+{{< /clients-example >}}
+
+This example, valid from version v2.4.2 onwards, filters only bikes whose material begins with
+"al" using a regex match. Note that this match is case-insensitive because of the prefix `(?i)` in
+the regular expression pattern `"(?i)al"`:
+
+{{< clients-example json_tutorial filter3 >}}
+JSON.GET bikes:inventory '$..[?(@.specs.material =~ "(?i)al")].model'
+"[\"Quaoar\",\"Weywot\",\"Salacia\",\"Mimas\"]"
+{{< /clients-example >}}
+
+You can also specify a regex pattern using a property from the JSON object itself.
+For example, we can add a string property named `regex_pat` to each mountain bike,
+with the value `"(?i)al"` to match the material, as in the previous example. We
+can then match `regex_pat` against the bike's material:
+
+{{< clients-example json_tutorial filter4 >}}
+> JSON.SET bikes:inventory $.inventory.mountain_bikes[0].regex_pat '"(?i)al"'
+OK
+> JSON.SET bikes:inventory $.inventory.mountain_bikes[1].regex_pat '"(?i)al"'
+OK
+> JSON.SET bikes:inventory $.inventory.mountain_bikes[2].regex_pat '"(?i)al"'
+OK
+> JSON.GET bikes:inventory '$.inventory.mountain_bikes[?(@.specs.material =~ @.regex_pat)].model'
+"[\"Quaoar\",\"Weywot\"]"
+{{< /clients-example >}}
+
+### Update examples
+
+You can also use JSONPath queries when you want to update specific sections of a JSON document.
+
+For example, you can pass a JSONPath to the [`JSON.NUMINCRBY`]({{< relref "commands/json.numincrby/" >}}) command to update numeric values. This example uses the recursive descent operator to adjust the price of every bike in the inventory:
+
+{{< clients-example json_tutorial update_bikes >}}
+> JSON.GET bikes:inventory $..price
+"[1920,2072,3264,1475,3941]"
+> JSON.NUMINCRBY bikes:inventory $..price -100
+"[1820,1972,3164,1375,3841]"
+> JSON.NUMINCRBY bikes:inventory $..price 100
+"[1920,2072,3264,1475,3941]"
+{{< /clients-example >}}
+
+You can use filter expressions to update only JSON elements that match certain conditions. The following example sets the price of any bike to 1500 if its price is already less than 2000:
+
+{{< clients-example json_tutorial update_filters1 >}}
+> JSON.SET bikes:inventory '$.inventory.*[?(@.price<2000)].price' 1500
+OK
+> JSON.GET bikes:inventory $..price
+"[1500,2072,3264,1500,3941]"
+{{< /clients-example >}}
+
+JSONPath queries also work with other JSON commands that accept a path as an argument. For example, you can use [`JSON.ARRAPPEND`]({{< relref "commands/json.arrappend/" >}}) to add a new color option to the bikes that currently cost less than 2000:
+
+{{< clients-example json_tutorial update_filters2 >}}
+> JSON.ARRAPPEND bikes:inventory '$.inventory.*[?(@.price<2000)].colors' '"pink"'
+1) (integer) 3
+2) (integer) 3
+> JSON.GET bikes:inventory $..[*].colors
+"[[\"black\",\"silver\",\"pink\"],[\"black\",\"white\"],[\"black\",\"silver\",\"pink\"]]"
+{{< /clients-example >}}
+
+## Legacy path syntax
+
+RedisJSON v1 had the following path implementation. RedisJSON v2 still supports this legacy path in addition to JSONPath.
+
+Paths always begin at the root of a Redis JSON value. The root is denoted by a period character (`.`). For paths that reference the root's children, it is optional to prefix the path with the root.
+
+Redis JSON supports both dot notation and bracket notation for object key access. 
The following paths refer to _headphones_, which is a child of _inventory_ under the root: + +* `.inventory.headphones` +* `inventory["headphones"]` +* `['inventory']["headphones"]` + +To access an array element, enclose its index within a pair of square brackets. The index is 0-based, with 0 being the first element of the array, 1 being the next element, and so on. You can use negative offsets to access elements starting from the end of the array. For example, -1 is the last element in the array, -2 is the second to last element, and so on. + +### JSON key names and path compatibility + +By definition, a JSON key can be any valid JSON string. Paths, on the other hand, are traditionally based on JavaScript's (and Java's) variable naming conventions. + +Although JSON can store objects that contain arbitrary key names, you can only use a legacy path to access these keys if they conform to these naming syntax rules: + +1. Names must begin with a letter, a dollar sign (`$`), or an underscore (`_`) character +2. Names can contain letters, digits, dollar signs, and underscores +3. Names are case-sensitive + +## Time complexity of path evaluation + +The time complexity of searching (navigating to) an element in the path is calculated from: + +1. Child level - every level along the path adds an additional search +2. Key search - O(N), where N is the number of keys in the parent object +3. Array search - O(1) + +This means that the overall time complexity of searching a path is _O(N*M)_, where N is the depth and M is the number of parent object keys. + + While this is acceptable for objects where N is small, access can be optimized for larger objects. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'JSON use cases + + ' +linkTitle: Use cases +title: Use cases +weight: 4 +--- + +You can of course use Redis native data structures to store JSON objects, and that's a common practice. For example, you can serialize JSON and save it in a Redis String. + +However, Redis JSON provides several benefits over this approach. + +**Access and retrieval of subvalues** + +With JSON, you can get nested values without having to transmit the entire object over the network. Being able to access sub-objects can lead to greater efficiencies when you're storing large JSON objects in Redis. + +**Atomic partial updates** + +JSON allows you to atomically run operations like incrementing a value, adding, or removing elements from an array, append strings, and so on. To do the same with a serialized object, you have to retrieve and then reserialize the entire object, which can be expensive and also lack atomicity. + +**Indexing and querying** + +When you store JSON objects as Redis strings, there's no good way to query those objects. On the other hand, storing these objects as JSON using Redis Open Source lets you index and query them. This capability is provided by the Redis Query Engine. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Notes on JSON debugging, testing and documentation.' +linkTitle: Developer notes +title: Developer notes +weight: 7 +--- + +Developing Redis JSON involves setting up the development environment (which can be either Linux-based or macOS-based), building RedisJSON (the Redis module providing JSON), running tests and benchmarks, and debugging both the JSON module and its tests. 
+ +## Cloning the git repository +To clone the RedisJSON module and its submodules, run: +```sh +git clone --recursive https://github.com/RedisJSON/RedisJSON.git +``` +## Working in an isolated environment +There are several reasons to use an isolated environment for development, like keeping your workstation clean and developing for a different Linux distribution. + +You can use a virtual machine as an isolated development environment. To set one up, you can use [Vagrant](https://www.vagrantup.com) or Docker. + +To set up a virtual machine with Docker: + +``` +rejson=$(docker run -d -it -v $PWD:/build debian:bullseye bash) +docker exec -it $rejson bash +``` +Then run ```cd /build``` from within the container. + +In this mode, all installations remain in the scope of the Docker container. +After you exit the container, you can either restart it with the previous ```docker exec``` command or save the state of the container to an image and resume it at a later time: + +``` +docker commit $rejson redisjson1 +docker stop $rejson +rejson=$(docker run -d -it -v $PWD:/build redisjson1 bash) +docker exec -it $rejson bash +``` + +You can replace `debian:bullseye` with your OS of choice. If you use the same OS as your host machine, you can run the RedisJSON binary on your host after it is built. + +## Installing prerequisites + +To build and test RedisJSON one needs to install several packages, depending on the underlying OS. Currently, we support the Ubuntu/Debian, CentOS, Fedora, and macOS. + +Enter the `RedisJSON` directory and run: + +```sh +$ ./sbin/setup +``` + +**This will install various packages on your system** using the native package manager and pip. It will invoke `sudo` on its own, prompting for permission. + +If you prefer to avoid that, you can: + +* Review `system-setup.py` and install packages manually, +* Use `system-setup.py --nop` to display installation commands without executing them, +* Use an isolated environment like explained above, +* Use a Python virtual environment, as Python installations are known to be sensitive when not used in isolation: `python -m virtualenv venv; . ./venv/bin/activate` + +## Installing Redis +Generally, it is best to run the latest Redis version. + +If your OS has a Redis 6.x package, you can install it using the OS package manager. + +Otherwise, you can invoke +```sh +$ ./deps/readies/bin/getredis +``` + +## Getting help +```make help``` provides a quick summary of the development features: + +``` +make setup # install prerequisites + +make build + DEBUG=1 # build debug variant + SAN=type # build with LLVM sanitizer (type=address|memory|leak|thread) + VALGRIND|VG=1 # build for testing with Valgrind +make clean # remove binary files + ALL=1 # remove binary directories + +make all # build all libraries and packages + +make test # run both cargo and python tests +make cargo_test # run inbuilt rust unit tests +make pytest # run flow tests using RLTest + TEST=file:name # run test matching `name` from `file` + TEST_ARGS="..." 
# RLTest arguments + QUICK=1 # run only general tests + GEN=1 # run general tests on a standalone Redis topology + AOF=1 # run AOF persistency tests on a standalone Redis topology + SLAVES=1 # run replication tests on standalone Redis topology + CLUSTER=1 # run general tests on a Redis Open Source Cluster topology + VALGRIND|VG=1 # run specified tests with Valgrind + VERBOSE=1 # display more RLTest-related information + +make pack # build package (RAMP file) +make upload-artifacts # copy snapshot packages to S3 + OSNICK=nick # copy snapshots for specific OSNICK +make upload-release # copy release packages to S3 + +common options for upload operations: + STAGING=1 # copy to staging lab area (for validation) + FORCE=1 # allow operation outside CI environment + VERBOSE=1 # show more details + NOP=1 # do not copy, just print commands + +make coverage # perform coverage analysis +make show-cov # show coverage analysis results (implies COV=1) +make upload-cov # upload coverage analysis results to codecov.io (implies COV=1) + +make docker # build for specific Linux distribution + OSNICK=nick # Linux distribution to build for + REDIS_VER=ver # use Redis version `ver` + TEST=1 # test after build + PACK=1 # create packages + ARTIFACTS=1 # copy artifacts from docker image + PUBLISH=1 # publish (i.e. docker push) after build + +make sanbox # create container for CLang Sanitizer tests +``` + +## Building from source +Run ```make build``` to build RedisJSON. + +Notes: + +* Binary files are placed under `target/release/`, according to platform and build variant. + +* RedisJSON uses [Cargo](https://github.com/rust-lang/cargo) as its build system. ```make build``` will invoke both Cargo and the subsequent `make` command that's required to complete the build. + +Use ```make clean``` to remove built artifacts. ```make clean ALL=1``` will remove the entire bin subdirectory. + +## Running tests +There are several sets of unit tests: +* Rust tests, integrated in the source code, run by ```make cargo_test```. +* Python tests (enabled by RLTest), located in ```tests/pytests```, run by ```make pytest```. + +You can run all tests with ```make test```. +To run only specific tests, use the ```TEST``` parameter. For example, run ```make test TEST=regex```. + +You can run the module's tests against an "embedded" disposable Redis instance or against an instance +you provide. To use the "embedded" mode, you must include the `redis-server` executable in your `PATH`. + +You can override the spawning of the embedded server by specifying a Redis port via the `REDIS_PORT` +environment variable, e.g.: + +```bash +$ # use an existing local Redis instance for testing the module +$ REDIS_PORT=6379 make test +``` + +## Debugging +To include debugging information, you need to set the [`DEBUG`]({{< relref "/commands/debug" >}}) environment variable before you compile RedisJSON. For example, run `export DEBUG=1`. + +You can add breakpoints to Python tests in single-test mode. To set a breakpoint, call the ```BB()``` function inside a test. + +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: JSON RESP2 to RESP3 replies reference for client developers +linkTitle: RESP3 migration guide +title: Guide for migrating from RESP2 to RESP3 replies +weight: 6 +--- + +In RESP3, the default value of the optional path argument was changed from `.` to `$`. +Due to this change, the replies of some commands have slightly changed. 
+
+This page provides a brief comparison between RESP2 and RESP3 responses for JSON commands to help developers in migrating their clients from RESP2 to RESP3.
+
+### JSON command replies comparison
+
+The types are described using a [“TypeScript-like” syntax](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html). `Array` denotes an [array](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#arrays) where the type of elements is known, but the number of elements is not.
+
+| Command | RESP2 | RESP3 |
+|---------|-------|-------|
+| All JSON commands | **Default value of optional `path` argument**: `.` | **Default value of optional `path` argument:** `$` |
+| JSON.ARRAPPEND<br>JSON.ARRINDEX<br>JSON.ARRINSERT<br>JSON.ARRLEN<br>JSON.ARRTRIM<br>JSON.OBJLEN<br>JSON.STRAPPEND<br>JSON.STRLEN<br>JSON.TOGGLE | *`$`-based path argument:*<br>Reply: Array\<BulkString \| null\><br><br>*`.`-based path argument:*<br>Reply: BulkString | *`$`-based path argument:*<br>Reply: Array\<number \| null\><br><br>*`.`-based path argument:*<br>Reply: number |
+| JSON.GET | Reply: JSON encoded string<br>Example:<br>```> JSON.SET k $ "[1,2,3]"```<br>```> JSON.GET k```<br>```"[1,2,3]"``` | Reply: JSON encoded string with a top-level array<br>Example:<br>```> JSON.SET k $ "[1,2,3]"```<br>```> JSON.GET k```<br>```"[[1,2,3]]"``` |
+| JSON.NUMINCRBY<br>JSON.NUMMULTBY | *`$`-based path argument:*<br>Reply: JSON-encoded BulkString \| null<br><br>*`.`-based path argument:*<br>Reply: BulkString \| null \| error | *`$`-based path argument:*<br>Reply: Array\<number \| null\> \| error<br><br>*`.`-based path argument:*<br>Reply: number \| null \| error |
+| JSON.OBJKEYS | *`$`-based path argument:*<br>Reply: Array\<Array\<BulkString\>\><br><br>*`.`-based path argument:*<br>Reply: Array\<BulkString\> | *`$`-based path argument:*<br>Reply: Array\<Array\<BulkString\>\><br><br>*`.`-based path argument:*<br>Reply: Array\<BulkString\> |
+| JSON.TYPE | *`$`-based path argument:*<br>Reply: Array\<BulkString\><br>Example:<br>```> JSON.TYPE k $```<br>```1) "array"```<br><br>*`.`-based path argument:*<br>Reply: BulkString | *`$`-based path argument:*<br>Reply: Array\<Array\<BulkString\>\><br>Example:<br>```> JSON.TYPE k $```<br>```1) 1) "array"```<br><br>*`.`-based path argument:*<br>
Reply: Array\ | +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Performance benchmarks + + ' +linkTitle: Performance +title: Performance +weight: 5 +--- + +To get an early sense of what Redis JSON is capable of, you can test it with `redis-benchmark` just like +any other Redis command. However, in order to have more control over the tests, we'll use a +a tool written in Go called _ReJSONBenchmark_ that we expect to release in the near future. + +The following figures were obtained from an AWS EC2 c4.8xlarge instance that ran both the Redis +server as well the as the benchmarking tool. Connections to the server are via the networking stack. +All tests are non-pipelined. + +> NOTE: The results below are measured using the preview version of Redis JSON, which is still very much unoptimized. + +## Redis JSON baseline + +### A smallish object + +We test a JSON value that, while purely synthetic, is interesting. The test subject is +[/tests/files/pass-100.json](https://github.com/RedisLabsModules/redisjson/blob/master/tests/files/pass-100.json), +who weighs in at 380 bytes and is nested. We first test SETting it, then GETting it using several +different paths: + +![ReJSONBenchmark pass-100.json](images/bench_pass_100.png) + +![ReJSONBenchmark pass-100.json percentiles](images/bench_pass_100_p.png) + +### A bigger array + +Moving on to bigger values, we use the 1.4 kB array in +[/tests/files/pass-jsonsl-1.json](https://github.com/RedisLabsModules/redisjson/blob/master/tests/files/pass-jsonsl-1.json): + + +![ReJSONBenchmark pass-jsonsl-1.json](images/bench_pass_jsonsl_1.png) + +![ReJSONBenchmark pass-jsonsl-1.json percentiles](images/bench_pass_jsonsl_1_p.png) + +### A largish object + +More of the same to wrap up, now we'll take on a behemoth of no less than 3.5 kB as given by +[/tests/files/pass-json-parser-0000.json](https://github.com/RedisLabsModules/redisjson/blob/master/tests/files/pass-json-parser-0000.json): + +![ReJSONBenchmark pass-json-parser-0000.json](images/bench_pass_json_parser_0000.png) + +![ReJSONBenchmark pass-json-parser-0000.json percentiles](images/bench_pass_json_parser_0000_p.png) + +### Number operations + +Last but not least, some adding and multiplying: + +![ReJSONBenchmark number operations](images/bench_numbers.png) + +![ReJSONBenchmark number operations percentiles](images/bench_numbers_p.png) + +### Baseline + +To establish a baseline, we'll use the Redis [`PING`]({{< relref "/commands/ping" >}}) command. +First, lets see what `redis-benchmark` reports: + +``` +~$ redis/src/redis-benchmark -n 1000000 ping +====== ping ====== + 1000000 requests completed in 7.11 seconds + 50 parallel clients + 3 bytes payload + keep alive: 1 + +99.99% <= 1 milliseconds +100.00% <= 1 milliseconds +140587.66 requests per second +``` + +ReJSONBenchmark's concurrency is configurable, so we'll test a few settings to find a good one. Here +are the results, which indicate that 16 workers yield the best throughput: + +![ReJSONBenchmark PING](images/bench_ping.png) + +![ReJSONBenchmark PING percentiles](images/bench_ping_p.png) + +Note how our benchmarking tool does slightly worse in PINGing - producing only 116K ops, compared to +`redis-cli`'s 140K. + +### The empty string + +Another JSON benchmark is that of setting and getting an empty string - a value that's only two +bytes long (i.e. `""`). 
Granted, that's not very useful, but it teaches us something about the basic +performance of the module: + +![ReJSONBenchmark empty string](images/bench_empty_string.png) + +![ReJSONBenchmark empty string percentiles](images/bench_empty_string_p.png) + +## Comparison vs. server-side Lua scripting + +We compare the JSON performance of Redis Open Source with the Redis embedded Lua engine. For this purpose, we use the Lua +scripts at [/benchmarks/lua](https://github.com/RedisLabsModules/redisjson/tree/master/benchmarks/lua). +These scripts provide JSON's GET and SET functionality on values stored in JSON or MessagePack +formats. Each of the different operations (set root, get root, set path and get path) is executed +with each "engine" on objects of varying sizes. + +### Setting and getting the root + +Storing raw JSON performs best in this test, but that isn't really surprising as all it does is +serve unprocessed strings. While you can and should use Redis for caching opaque data, and JSON +"blobs" are just one example, this does not allow any updates other than these of the entire value. + +A more meaningful comparison therefore is between JSON and the MessagePack variant, since both +process the incoming JSON value before actually storing it. While the rates and latencies of these +two behave in a very similar way, the absolute measurements suggest that Redis JSON's performance may be +further improved. + +![VS. Lua set root](images/bench_lua_set_root.png) + +![VS. Lua set root latency](images/bench_lua_set_root_l.png) + +![VS. Lua get root](images/bench_lua_get_root.png) + +![VS. Lua get root latency](images/bench_lua_get_root_l.png) + +### Setting and getting parts of objects + +This test shows why Redis JSON exists. Not only does it outperform the Lua variants, it retains constant +rates and latencies regardless the object's overall size. There's no magic here - JSON keeps the +value deserialized so that accessing parts of it is a relatively inexpensive operation. In deep contrast +are both raw JSON as well as MessagePack, which require decoding the entire object before anything can +be done with it (a process that becomes more expensive the larger the object is). + +![VS. Lua set path to scalar](images/bench_lua_set_path.png) + +![VS. Lua set path to scalar latency](images/bench_lua_set_path_l.png) + +![VS. Lua get scalar from path](images/bench_lua_get_path.png) + +![VS. Lua get scalar from path latency](images/bench_lua_get_path_l.png) + +### Even more charts + +These charts are more of the same but independent for each file (value): + +![VS. Lua pass-100.json rate](images/bench_lua_pass_100.png) + +![VS. Lua pass-100.json average latency](images/bench_lua_pass_100_l.png) + +![VS. Lua pass-jsonsl-1.json rate](images/bench_lua_pass_jsonsl_1.png) + +![VS. Lua pass-jsonsl-1.json average latency](images/bench_lua_pass_jsonsl_1_l.png) + +![VS. Lua pass-json-parser-0000.json rate](images/bench_lua_pass_json_parser_0000.png) + +![VS. Lua pass-json-parser-0000.json latency](images/bench_lua_pass_json_parser_0000_l.png) + +![VS. Lua pass-jsonsl-yahoo2.json rate](images/bench_lua_pass_jsonsl_yahoo2.png) + +![VS. Lua pass-jsonsl-yahoo2.json latency](images/bench_lua_pass_jsonsl_yahoo2_l.png) + +![VS. Lua pass-jsonsl-yelp.json rate](images/bench_lua_pass_jsonsl_yelp.png) + +![VS. Lua pass-jsonsl-yelp.json latency](images/bench_lua_pass_jsonsl_yelp_l.png) + +## Raw results + +The following are the raw results from the benchmark in CSV format. 
+ +### JSON results + +``` +title,concurrency,rate,average latency,50.00%-tile,90.00%-tile,95.00%-tile,99.00%-tile,99.50%-tile,100.00%-tile +[ping],1,22128.12,0.04,0.04,0.04,0.05,0.05,0.05,1.83 +[ping],2,54641.13,0.04,0.03,0.05,0.05,0.06,0.07,2.14 +[ping],4,76000.18,0.05,0.05,0.07,0.07,0.09,0.10,2.10 +[ping],8,106750.99,0.07,0.07,0.10,0.11,0.14,0.16,2.99 +[ping],12,111297.33,0.11,0.10,0.15,0.16,0.20,0.22,6.81 +[ping],16,116292.19,0.14,0.13,0.19,0.21,0.27,0.33,7.50 +[ping],20,110622.82,0.18,0.17,0.24,0.27,0.38,0.47,12.21 +[ping],24,107468.51,0.22,0.20,0.31,0.38,0.58,0.71,13.86 +[ping],28,102827.35,0.27,0.25,0.38,0.44,0.66,0.79,12.87 +[ping],32,105733.51,0.30,0.28,0.42,0.50,0.79,0.97,10.56 +[ping],36,102046.43,0.35,0.33,0.48,0.56,0.90,1.13,14.66 +JSON.SET {key} . {empty string size: 2 B},16,80276.63,0.20,0.18,0.28,0.32,0.41,0.45,6.48 +JSON.GET {key} .,16,92191.23,0.17,0.16,0.24,0.27,0.34,0.38,9.80 +JSON.SET {key} . {pass-100.json size: 380 B},16,41512.77,0.38,0.35,0.50,0.62,0.81,0.86,9.56 +JSON.GET {key} .,16,48374.10,0.33,0.29,0.47,0.56,0.72,0.79,9.36 +JSON.GET {key} sclr,16,94801.23,0.17,0.15,0.24,0.27,0.35,0.39,13.21 +JSON.SET {key} sclr 1,16,82032.08,0.19,0.18,0.27,0.31,0.40,0.44,8.97 +JSON.GET {key} sub_doc,16,81633.51,0.19,0.18,0.27,0.32,0.43,0.49,9.88 +JSON.GET {key} sub_doc.sclr,16,95052.35,0.17,0.15,0.24,0.27,0.35,0.39,7.39 +JSON.GET {key} array_of_docs,16,68223.05,0.23,0.22,0.29,0.31,0.44,0.50,8.84 +JSON.GET {key} array_of_docs[1],16,76390.57,0.21,0.19,0.30,0.34,0.44,0.49,9.99 +JSON.GET {key} array_of_docs[1].sclr,16,90202.13,0.18,0.16,0.25,0.29,0.36,0.39,7.87 +JSON.SET {key} . {pass-jsonsl-1.json size: 1.4 kB},16,16117.11,0.99,0.91,1.22,1.55,2.17,2.35,9.27 +JSON.GET {key} .,16,15193.51,1.05,0.94,1.41,1.75,2.33,2.42,7.19 +JSON.GET {key} [0],16,78198.90,0.20,0.19,0.29,0.33,0.42,0.47,10.87 +"JSON.SET {key} [0] ""foo""",16,80156.90,0.20,0.18,0.28,0.32,0.40,0.44,12.03 +JSON.GET {key} [7],16,99013.98,0.16,0.15,0.23,0.26,0.34,0.38,7.67 +JSON.GET {key} [8].zero,16,90562.19,0.17,0.16,0.25,0.28,0.35,0.38,7.03 +JSON.SET {key} . {pass-json-parser-0000.json size: 3.5 kB},16,14239.25,1.12,1.06,1.21,1.48,2.35,2.59,11.91 +JSON.GET {key} .,16,8366.31,1.91,1.86,2.00,2.04,2.92,3.51,12.92 +"JSON.GET {key} [""web-app""].servlet",16,9339.90,1.71,1.68,1.74,1.78,2.68,3.26,10.47 +"JSON.GET {key} [""web-app""].servlet[0]",16,13374.88,1.19,1.07,1.54,1.95,2.69,2.82,12.15 +"JSON.GET {key} [""web-app""].servlet[0][""servlet-name""]",16,81267.36,0.20,0.18,0.28,0.31,0.38,0.42,9.67 +"JSON.SET {key} [""web-app""].servlet[0][""servlet-name""] ""bar""",16,79955.04,0.20,0.18,0.27,0.33,0.42,0.46,6.72 +JSON.SET {key} . {pass-jsonsl.yahoo2-json size: 18 kB},16,3394.07,4.71,4.62,4.72,4.79,7.35,9.03,17.78 +JSON.GET {key} .,16,891.46,17.92,17.33,17.56,20.12,31.77,42.87,66.64 +JSON.SET {key} ResultSet.totalResultsAvailable 1,16,75513.03,0.21,0.19,0.30,0.34,0.42,0.46,9.21 +JSON.GET {key} ResultSet.totalResultsAvailable,16,91202.84,0.17,0.16,0.24,0.28,0.35,0.38,5.30 +JSON.SET {key} . {pass-jsonsl-yelp.json size: 40 kB},16,1624.86,9.84,9.67,9.86,9.94,15.86,19.36,31.94 +JSON.GET {key} .,16,442.55,36.08,35.62,37.78,38.14,55.23,81.33,88.40 +JSON.SET {key} message.code 1,16,77677.25,0.20,0.19,0.28,0.33,0.42,0.45,11.07 +JSON.GET {key} message.code,16,89206.61,0.18,0.16,0.25,0.28,0.36,0.39,8.60 +[JSON.SET num . 0],16,84498.21,0.19,0.17,0.26,0.30,0.39,0.43,8.08 +[JSON.NUMINCRBY num . 1],16,78640.20,0.20,0.18,0.28,0.33,0.44,0.48,11.05 +[JSON.NUMMULTBY num . 
2],16,77170.85,0.21,0.19,0.28,0.33,0.43,0.47,6.85 +``` + +### Lua using cjson + +``` +json-set-root.lua empty string,16,86817.84,0.18,0.17,0.26,0.31,0.39,0.42,9.36 +json-get-root.lua,16,90795.08,0.17,0.16,0.25,0.28,0.36,0.39,8.75 +json-set-root.lua pass-100.json,16,84190.26,0.19,0.17,0.27,0.30,0.38,0.41,12.00 +json-get-root.lua,16,87170.45,0.18,0.17,0.26,0.29,0.38,0.45,9.81 +json-get-path.lua sclr,16,54556.80,0.29,0.28,0.35,0.38,0.57,0.64,7.53 +json-set-path.lua sclr 1,16,35907.30,0.44,0.42,0.53,0.67,0.93,1.00,8.57 +json-get-path.lua sub_doc,16,51158.84,0.31,0.30,0.36,0.39,0.50,0.62,7.22 +json-get-path.lua sub_doc sclr,16,51054.47,0.31,0.29,0.39,0.47,0.66,0.74,7.43 +json-get-path.lua array_of_docs,16,39103.77,0.41,0.37,0.57,0.68,0.87,0.94,8.02 +json-get-path.lua array_of_docs 1,16,45811.31,0.35,0.32,0.45,0.56,0.77,0.83,8.17 +json-get-path.lua array_of_docs 1 sclr,16,47346.83,0.34,0.31,0.44,0.54,0.72,0.79,8.07 +json-set-root.lua pass-jsonsl-1.json,16,82100.90,0.19,0.18,0.28,0.31,0.39,0.43,12.43 +json-get-root.lua,16,77922.14,0.20,0.18,0.30,0.34,0.66,0.86,8.71 +json-get-path.lua 0,16,38162.83,0.42,0.40,0.49,0.59,0.88,0.96,6.16 +"json-set-path.lua 0 ""foo""",16,21205.52,0.75,0.70,0.84,1.07,1.60,1.74,5.77 +json-get-path.lua 7,16,37254.89,0.43,0.39,0.55,0.69,0.92,0.98,10.24 +json-get-path.lua 8 zero,16,33772.43,0.47,0.43,0.63,0.77,1.01,1.09,7.89 +json-set-root.lua pass-json-parser-0000.json,16,76314.18,0.21,0.19,0.29,0.33,0.41,0.44,8.16 +json-get-root.lua,16,65177.87,0.24,0.21,0.35,0.42,0.89,1.01,9.02 +json-get-path.lua web-app servlet,16,15938.62,1.00,0.88,1.45,1.71,2.11,2.20,8.07 +json-get-path.lua web-app servlet 0,16,19469.27,0.82,0.78,0.90,1.07,1.67,1.84,7.59 +json-get-path.lua web-app servlet 0 servlet-name,16,24694.26,0.65,0.63,0.71,0.74,1.07,1.31,8.60 +"json-set-path.lua web-app servlet 0 servlet-name ""bar""",16,16555.74,0.96,0.92,1.05,1.25,1.98,2.20,9.08 +json-set-root.lua pass-jsonsl-yahoo2.json,16,47544.65,0.33,0.31,0.41,0.47,0.59,0.64,10.52 +json-get-root.lua,16,25369.92,0.63,0.57,0.91,1.05,1.37,1.56,9.95 +json-set-path.lua ResultSet totalResultsAvailable 1,16,5077.32,3.15,3.09,3.20,3.24,5.12,6.26,14.98 +json-get-path.lua ResultSet totalResultsAvailable,16,7652.56,2.09,2.05,2.13,2.17,3.23,3.95,9.65 +json-set-root.lua pass-jsonsl-yelp.json,16,29575.20,0.54,0.52,0.64,0.75,0.94,1.00,12.66 +json-get-root.lua,16,18424.29,0.87,0.84,1.25,1.40,1.82,1.95,7.35 +json-set-path.lua message code 1,16,2251.07,7.10,6.98,7.14,7.22,11.00,12.79,21.14 +json-get-path.lua message code,16,3380.72,4.73,4.44,5.03,6.82,10.28,11.06,14.93 +``` + +### Lua using cmsgpack + +``` +msgpack-set-root.lua empty string,16,82592.66,0.19,0.18,0.27,0.31,0.38,0.42,10.18 +msgpack-get-root.lua,16,89561.41,0.18,0.16,0.25,0.29,0.37,0.40,9.52 +msgpack-set-root.lua pass-100.json,16,44326.47,0.36,0.34,0.43,0.54,0.78,0.86,6.45 +msgpack-get-root.lua,16,41036.58,0.39,0.36,0.51,0.62,0.84,0.91,7.21 +msgpack-get-path.lua sclr,16,55845.56,0.28,0.26,0.36,0.44,0.64,0.70,11.29 +msgpack-set-path.lua sclr 1,16,43608.26,0.37,0.34,0.47,0.58,0.78,0.85,10.27 +msgpack-get-path.lua sub_doc,16,50153.07,0.32,0.29,0.41,0.50,0.69,0.75,8.56 +msgpack-get-path.lua sub_doc sclr,16,54016.35,0.29,0.27,0.38,0.46,0.62,0.67,6.38 +msgpack-get-path.lua array_of_docs,16,45394.79,0.35,0.32,0.45,0.56,0.78,0.85,11.88 +msgpack-get-path.lua array_of_docs 1,16,48336.48,0.33,0.30,0.42,0.52,0.71,0.76,7.69 +msgpack-get-path.lua array_of_docs 1 sclr,16,53689.41,0.30,0.27,0.38,0.46,0.64,0.69,11.16 +msgpack-set-root.lua 
pass-jsonsl-1.json,16,28956.94,0.55,0.51,0.65,0.82,1.17,1.26,8.39 +msgpack-get-root.lua,16,26045.44,0.61,0.58,0.68,0.83,1.28,1.42,8.56 +"msgpack-set-path.lua 0 ""foo""",16,29813.56,0.53,0.49,0.67,0.83,1.15,1.22,6.82 +msgpack-get-path.lua 0,16,44827.58,0.36,0.32,0.48,0.58,0.76,0.81,9.19 +msgpack-get-path.lua 7,16,47529.14,0.33,0.31,0.42,0.53,0.73,0.79,7.47 +msgpack-get-path.lua 8 zero,16,44442.72,0.36,0.33,0.45,0.56,0.77,0.85,8.11 +msgpack-set-root.lua pass-json-parser-0000.json,16,19585.82,0.81,0.78,0.85,1.05,1.66,1.86,4.33 +msgpack-get-root.lua,16,19014.08,0.84,0.73,1.23,1.45,1.76,1.84,13.52 +msgpack-get-path.lua web-app servlet,16,18992.61,0.84,0.73,1.23,1.45,1.75,1.82,8.19 +msgpack-get-path.lua web-app servlet 0,16,24328.78,0.66,0.64,0.73,0.77,1.15,1.34,8.81 +msgpack-get-path.lua web-app servlet 0 servlet-name,16,31012.81,0.51,0.49,0.57,0.65,1.02,1.13,8.11 +"msgpack-set-path.lua web-app servlet 0 servlet-name ""bar""",16,20388.54,0.78,0.73,0.88,1.08,1.63,1.78,7.22 +msgpack-set-root.lua pass-jsonsl-yahoo2.json,16,5597.60,2.85,2.81,2.89,2.94,4.57,5.59,10.19 +msgpack-get-root.lua,16,6585.01,2.43,2.39,2.52,2.66,3.76,4.80,10.59 +msgpack-set-path.lua ResultSet totalResultsAvailable 1,16,6666.95,2.40,2.35,2.43,2.47,3.78,4.59,12.08 +msgpack-get-path.lua ResultSet totalResultsAvailable,16,10733.03,1.49,1.45,1.60,1.66,2.36,2.93,13.15 +msgpack-set-root-lua pass-jsonsl-yelp.json,16,2291.53,6.97,6.87,7.01,7.12,10.54,12.89,21.75 +msgpack-get-root.lua,16,2889.59,5.53,5.45,5.71,5.86,8.80,10.48,25.55 +msgpack-set-path.lua message code 1,16,2847.85,5.61,5.44,5.56,6.01,10.58,11.90,16.91 +msgpack-get-path.lua message code,16,5030.95,3.18,3.07,3.24,3.57,6.08,6.92,12.44 +``` +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: JSON support for Redis +linkTitle: JSON +stack: true +title: JSON +weight: 11 +--- + +[![Discord](https://img.shields.io/discord/697882427875393627?style=flat-square)](https://discord.gg/QUkjSsk) +[![Github](https://img.shields.io/static/v1?label=&message=repository&color=5961FF&logo=github)](https://github.com/RedisJSON/RedisJSON/) + +The JSON capability of Redis Open Source provides JavaScript Object Notation (JSON) support for Redis. It lets you store, update, and retrieve JSON values in a Redis database, similar to any other Redis data type. Redis JSON also works seamlessly with the [Redis Query Engine]({{< relref "/develop/interact/search-and-query/" >}}) to let you [index and query JSON documents]({{< relref "/develop/interact/search-and-query/indexing/" >}}). + +## Primary features + +* Full support for the JSON standard +* A [JSONPath](http://goessner.net/articles/JsonPath/) syntax for selecting/updating elements inside documents (see [JSONPath syntax]({{< relref "/develop/data-types/json/path#jsonpath-syntax" >}})) +* Documents stored as binary data in a tree structure, allowing fast access to sub-elements +* Typed atomic operations for all JSON value types + +## Use Redis with JSON + +The first JSON command to try is [`JSON.SET`]({{< relref "commands/json.set/" >}}), which sets a Redis key with a JSON value. [`JSON.SET`]({{< relref "commands/json.set/" >}}) accepts all JSON value types. This example creates a JSON string: + +{{< clients-example json_tutorial set_get >}} +> JSON.SET bike $ '"Hyperion"' +OK +> JSON.GET bike $ +"[\"Hyperion\"]" +> JSON.TYPE bike $ +1) "string" +{{< /clients-example >}} + +Note how the commands include the dollar sign character `$`. 
This is the [path]({{< relref "/develop/data-types/json/path" >}}) to the value in the JSON document (in this case it just means the root). + +Here are a few more string operations. [`JSON.STRLEN`]({{< relref "commands/json.strlen/" >}}) tells you the length of the string, and you can append another string to it with [`JSON.STRAPPEND`]({{< relref "commands/json.strappend/" >}}). + +{{< clients-example json_tutorial str>}} +> JSON.STRLEN bike $ +1) (integer) 8 +> JSON.STRAPPEND bike $ '" (Enduro bikes)"' +1) (integer) 23 +> JSON.GET bike $ +"[\"Hyperion (Enduro bikes)\"]" +{{< /clients-example >}} + +Numbers can be [incremented]({{< relref "commands/json.numincrby/" >}}) and [multiplied]({{< relref "commands/json.nummultby/" >}}): + +{{< clients-example json_tutorial num >}} +> JSON.SET crashes $ 0 +OK +> JSON.NUMINCRBY crashes $ 1 +"[1]" +> JSON.NUMINCRBY crashes $ 1.5 +"[2.5]" +> JSON.NUMINCRBY crashes $ -0.75 +"[1.75]" +> JSON.NUMMULTBY crashes $ 24 +"[42]" +{{< /clients-example >}} + +Here's a more interesting example that includes JSON arrays and objects: + +{{< clients-example json_tutorial arr >}} +> JSON.SET newbike $ '["Deimos", {"crashes": 0}, null]' +OK +> JSON.GET newbike $ +"[[\"Deimos\",{\"crashes\":0},null]]" +> JSON.GET newbike $[1].crashes +"[0]" +> JSON.DEL newbike $[-1] +(integer) 1 +> JSON.GET newbike $ +"[[\"Deimos\",{\"crashes\":0}]]" +{{< /clients-example >}} + +The [`JSON.DEL`]({{< relref "commands/json.del/" >}}) command deletes any JSON value you specify with the `path` parameter. + +You can manipulate arrays with a dedicated subset of JSON commands: + +{{< clients-example json_tutorial arr2 >}} +> JSON.SET riders $ [] +OK +> JSON.ARRAPPEND riders $ '"Norem"' +1) (integer) 1 +> JSON.GET riders $ +"[[\"Norem\"]]" +> JSON.ARRINSERT riders $ 1 '"Prickett"' '"Royce"' '"Castilla"' +1) (integer) 4 +> JSON.GET riders $ +"[[\"Norem\",\"Prickett\",\"Royce\",\"Castilla\"]]" +> JSON.ARRTRIM riders $ 1 1 +1) (integer) 1 +> JSON.GET riders $ +"[[\"Prickett\"]]" +> JSON.ARRPOP riders $ +1) "\"Prickett\"" +> JSON.ARRPOP riders $ +1) (nil) +{{< /clients-example >}} + +JSON objects also have their own commands: + +{{< clients-example json_tutorial obj >}} +> JSON.SET bike:1 $ '{"model": "Deimos", "brand": "Ergonom", "price": 4972}' +OK +> JSON.OBJLEN bike:1 $ +1) (integer) 3 +> JSON.OBJKEYS bike:1 $ +1) 1) "model" + 2) "brand" + 3) "price" +{{< /clients-example >}} + +## Format CLI output + +The CLI has a raw output mode that lets you add formatting to the output from +[`JSON.GET`]({{< relref "commands/json.get/" >}}) to make +it more readable. To use this, run `redis-cli` with the `--raw` option +and include formatting keywords such as `INDENT`, `NEWLINE`, and `SPACE` +with [`JSON.GET`]({{< relref "commands/json.get/" >}}): + +```bash +$ redis-cli --raw +> JSON.GET obj INDENT "\t" NEWLINE "\n" SPACE " " $ +[ + { + "name": "Leonard Cohen", + "lastSeen": 1478476800, + "loggedOut": true + } +] +``` + +## Enable Redis JSON + +The Redis JSON data type is part of Redis Open Source and it is also available in Redis Software and Redis Cloud. +See +[Install Redis Open Source]({{< relref "/operate/oss_and_stack/install/install-stack" >}}) or +[Install Redis Enterprise]({{< relref "/operate/rs/installing-upgrading/install" >}}) +for full installation instructions. + +## Limitation + +A JSON value passed to a command can have a depth of up to 128. If you pass to a command a JSON value that contains an object or an array with a nesting level of more than 128, the command returns an error. 
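+
+As an illustration, the following sketch (a hypothetical example using the `redis-py` client against a local server, with the made-up key name `deep:doc`) builds a value nested 200 levels deep and attempts to store it with [`JSON.SET`]({{< relref "commands/json.set/" >}}). Because the nesting level exceeds 128, the command is expected to return an error:
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+
+# Build a value nested 200 levels deep: {"a": {"a": { ... 0 ... }}}
+value = 0
+for _ in range(200):
+    value = {"a": value}
+
+try:
+    # Nesting deeper than 128 levels exceeds the documented limit,
+    # so the server should reject this command.
+    r.json().set("deep:doc", "$", value)
+except redis.exceptions.ResponseError as err:
+    print("Rejected as expected:", err)
+```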
+ +## Further information + +Read the other pages in this section to learn more about Redis JSON +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Combine Redis JSON and the Redis Query Engine to index and search JSON documents +linkTitle: Index/Search +title: Index/Search JSON documents +weight: 2 +--- + +In addition to storing JSON documents, you can also index them using the [Redis Query Engine]({{< relref "/develop/interact/search-and-query/" >}}) feature. This enables full-text search capabilities and document retrieval based on their content. + +To use these features, install [Redis Open Source]({{< relref "/operate/oss_and_stack/" >}}). + +See the [tutorial]({{< relref "/develop/interact/search-and-query/indexing/" >}}) to learn how to search and query your JSON.--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Introduction to Redis bitfields + + ' +linkTitle: Bitfields +title: Redis bitfields +weight: 130 +--- + +Redis bitfields let you set, increment, and get integer values of arbitrary bit length. +For example, you can operate on anything from unsigned 1-bit integers to signed 63-bit integers. + +These values are stored using binary-encoded Redis strings. +Bitfields support atomic read, write and increment operations, making them a good choice for managing counters and similar numerical values. + + +## Basic commands + +* [`BITFIELD`]({{< relref "/commands/bitfield" >}}) atomically sets, increments and reads one or more values. +* [`BITFIELD_RO`]({{< relref "/commands/bitfield_ro" >}}) is a read-only variant of [`BITFIELD`]({{< relref "/commands/bitfield" >}}). + +## Example + +Suppose you want to maintain two metrics for various bicycles: the current price and the number of owners over time. You can represent these counters with a 32-bit wide bitfield for each bike. + +* Bike 1 initially costs 1,000 (counter in offset 0) and has never had an owner. After being sold, it's now considered used and the price instantly drops to reflect its new condition, and it now has an owner (offset 1). After quite some time, the bike becomes a classic. The original owner sells it for a profit, so the price goes up and the number of owners does as well.Finally, you can look at the bike's current price and number of owners. + +{{< clients-example bitfield_tutorial bf >}} +> BITFIELD bike:1:stats SET u32 #0 1000 +1) (integer) 0 +> BITFIELD bike:1:stats INCRBY u32 #0 -50 INCRBY u32 #1 1 +1) (integer) 950 +2) (integer) 1 +> BITFIELD bike:1:stats INCRBY u32 #0 500 INCRBY u32 #1 1 +1) (integer) 1450 +2) (integer) 2 +> BITFIELD bike:1:stats GET u32 #0 GET u32 #1 +1) (integer) 1450 +2) (integer) 2 +{{< /clients-example >}} + + +## Performance + +[`BITFIELD`]({{< relref "/commands/bitfield" >}}) is O(n), where _n_ is the number of counters accessed. +--- +categories: +- docs +- develop +- stack +- rs +- rc +- oss +- kubernetes +- clients +description: Scale Redis vector sets to handle larger data sets and workloads +linkTitle: Scalability +title: Scalability +weight: 20 +--- + +## Multi-instance scalability + +Vector sets can scale horizontally by sharding your data across multiple Redis instances. This is done by partitioning the dataset manually across keys and nodes. 
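+
+The sections below walk through this approach step by step. As a compact end-to-end sketch of the idea (hypothetical host names, using the `redis-py` client's generic `execute_command()` helper), writes are routed to one shard by hashing the element name, and reads fan out to every shard before being merged on the client:
+
+```python
+import zlib
+import redis
+
+# Hypothetical setup: three independent Redis instances, one vector set key per instance.
+shards = [
+    redis.Redis(host="redis-0", decode_responses=True),
+    redis.Redis(host="redis-1", decode_responses=True),
+    redis.Redis(host="redis-2", decode_responses=True),
+]
+
+def shard_for(item: str) -> int:
+    return zlib.crc32(item.encode()) % len(shards)
+
+def add(item: str, vector: list) -> None:
+    i = shard_for(item)
+    shards[i].execute_command("VADD", f"vset:{i}", "VALUES", len(vector), *vector, item)
+
+def search(vector: list, count: int = 10) -> list:
+    merged = []
+    for i, node in enumerate(shards):
+        # For clarity this queries the shards sequentially; in practice, send the
+        # queries in parallel (see "Latency considerations" below).
+        reply = node.execute_command("VSIM", f"vset:{i}", "VALUES", len(vector),
+                                     *vector, "WITHSCORES", "COUNT", count)
+        # Assumes the flat [element, score, element, score, ...] reply shape.
+        merged += [(reply[j], float(reply[j + 1])) for j in range(0, len(reply), 2)]
+    # Higher scores mean greater similarity, so sort in descending order.
+    merged.sort(key=lambda pair: pair[1], reverse=True)
+    return merged[:count]
+```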
+ +### Example strategy + +You can shard data using a consistent hash: + +```python +key_index = crc32(item) % 3 +key = f"vset:{key_index}" +``` + +Then add elements into different keys: + +```bash +VADD vset:0 VALUES 3 0.1 0.2 0.3 item1 +VADD vset:1 VALUES 3 0.4 0.5 0.6 item2 +``` + +To run a similarity search across all shards, send [`VSIM`]({{< relref "/commands/vsim" >}}) commands to each key and then merge the results client-side: + +```bash +VSIM vset:0 VALUES ... WITHSCORES +VSIM vset:1 VALUES ... WITHSCORES +VSIM vset:2 VALUES ... WITHSCORES +``` + +Then combine and sort the results by score. + +## Key properties + +- Write operations ([`VADD`]({{< relref "/commands/vadd" >}}), [`VREM`]({{< relref "/commands/vrem" >}})) scale linearly—you can insert in parallel across instances. +- Read operations ([`VSIM`]({{< relref "/commands/vsim" >}})) do not scale linearly—you must query all shards for a full result set. +- Smaller vector sets yield faster queries, so distributing them helps reduce query time per node. +- Merging results client-side keeps logic simple and doesn't add server-side overhead. + +## Availability benefits + +This sharding model also improves fault tolerance: + +- If one instance is down, you can still retrieve partial results from others. +- Use timeouts and partial fallbacks to increase resilience. + +## Latency considerations + +To avoid additive latency across N instances: + +- Send queries to all shards in parallel. +- Wait for the slowest response. + +This makes total latency close to the worst-case shard time, not the sum of all times. + +## Summary + +| Goal | Approach | +|---------------------------|---------------------------------------------------| +| Scale inserts | Split data across keys and instances | +| Scale reads | Query all shards and merge results | +| High availability | Accept partial results when some shards fail | +| Maintain performance | Use smaller shards for faster per-node traversal | + +## See also + +- [Performance]({{< relref "/develop/data-types/vector-sets/performance" >}}) +- [Filtered search]({{< relref "/develop/data-types/vector-sets/filtered-search" >}}) +- [Memory usage]({{< relref "/develop/data-types/vector-sets/memory" >}}) +--- +categories: +- docs +- develop +- stack +- rs +- rc +- oss +- kubernetes +- clients +description: Diagnose and debug issues when working with Redis vector sets +linkTitle: Troubleshooting +title: Troubleshooting +weight: 30 +--- + +## Common challenges + +Vector sets are approximate by design. That makes debugging trickier than with exact match queries. This section helps you understand issues with recall, filtering, and graph structure. + +## Low recall or missing results + +If [`VSIM`]({{< relref "/commands/vsim" >}}) doesn't return expected items: + +- Increase the `EF` parameter: + + ```bash + VSIM myset VALUES 3 ... COUNT 10 EF 1000 + ``` + +- Check quantization mode. Binary quantization (`BIN`) trades accuracy for speed. +- Use `TRUTH` to compare results against a linear scan: + + ```bash + VSIM myset VALUES 3 ... COUNT 10 TRUTH + ``` + + This gives you the most accurate results for validation, but it's slow. 
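+
+To quantify how much recall you are actually getting, you can compare the approximate results against the exact `TRUTH` results for the same query. The following is a small, hypothetical sketch using the `redis-py` client's generic `execute_command()` helper:
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+
+def recall_at_k(key: str, query_vec: list, k: int = 10, ef: int = 200) -> float:
+    """Fraction of the exact top-k (TRUTH) that the approximate search also returns."""
+    base = ["VSIM", key, "VALUES", len(query_vec), *query_vec, "COUNT", k]
+    approx = r.execute_command(*base, "EF", ef)
+    exact = r.execute_command(*base, "TRUTH")
+    return len(set(approx) & set(exact)) / max(len(exact), 1)
+
+# If recall is low, re-run with a larger EF to see whether it recovers:
+# recall_at_k("myset", [0.1, 0.2, 0.3], k=10, ef=1000)
+```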
+ +## Filtering issues + +Filters silently exclude items if: + +- A field is missing from the element’s attributes +- The JSON is invalid +- A type doesn’t match the expression (for example, `.rating > 8` when `.rating` is a string) + +Try retrieving the attributes with [`VGETATTR`]({{< relref "/commands/vgetattr" >}}): + +```bash +VGETATTR myset myelement +``` + +Double-check field names, JSON validity, and value types. + +## Unexpected memory usage + +Memory issues may arise from: + +- Large vectors (use `REDUCE` to project down) +- High `M` values inflating link graphs +- Large or deeply nested JSON attributes +- Storing raw `FP32` vectors (`NOQUANT`) + +Use default `Q8` quantization and compact attributes to save space. + +## Inspecting the graph + +Use [`VLINKS`]({{< relref "/commands/vlinks" >}}) to examine a node’s connections: + +```bash +VLINKS myset myelement WITHSCORES +``` + +- Helps you verify whether isolated or weakly connected nodes exist. +- Useful for explaining poor recall. + +## Deletion spikes + +Large sets deleted using the `DEL` command can briefly spike latency as Redis reclaims memory and rebuilds HNSW linkages. + +## Replication quirks + +- `VADD` with `REDUCE` does not replicate the random projection matrix. +- Replicas will produce different projected vectors for the same inputs. + +This doesn't affect similarity searches but does affect [`VEMB`]({{< relref "/commands/vemb" >}}) output. + +## Summary + +| Symptom | Try this | +|----------------------------------|-----------------------------------------------------------| +| Poor recall | Use higher `EF`, check quantization, use `TRUTH` | +| Filters exclude too much | Validate attributes with `VGETATTR`, simplify expressions | +| Memory spikes | Use `REDUCE`, `Q8`, smaller `M`, compact JSON | +| Replication mismatch with REDUCE | Avoid relying on projected vectors from replicas | + +## See also + +- [Filtered Search]({{< relref "/develop/data-types/vector-sets/filtered-search" >}}) +- [Memory Usage]({{< relref "/develop/data-types/vector-sets/memory" >}}) +- [Performance]({{< relref "/develop/data-types/vector-sets/performance" >}}) +--- +categories: +- docs +- develop +- stack +- rs +- rc +- oss +- kubernetes +- clients +description: Use filter expressions to refine vector similarity results with Redis vector sets +linkTitle: Filter expressions +title: Filter expressions +weight: 10 +--- + +## Overview + +Filtered search lets you combine vector similarity search with scalar filtering. You can associate JSON attributes with elements in a vector set, and then filter results using those attributes during [`VSIM`]({{< relref "/commands/vsim" >}}) queries. 
+ +This allows queries such as: + +```bash +VSIM movies VALUES 3 0.5 0.8 0.2 FILTER '.year >= 1980 and .rating > 7' +``` + +## Assigning attributes + +You can associate attributes when adding a new vector using the `SETATTR` argument: + +```bash +VADD vset VALUES 3 1 1 1 a SETATTR '{"year": 1950}' +``` + +Or update them later with the [`VSETATTR`]({{< relref "/commands/vsetattr" >}}) command: + +```bash +VSETATTR vset a '{"year": 1960}' +``` + +You can retrieve attributes with the [`VGETATTR`]({{< relref "/commands/vgetattr" >}}) command: + +```bash +VGETATTR vset a +``` + +## Filtering during similarity search + +To filter by attributes, pass the `FILTER` option to the [`VSIM`]({{< relref "/commands/vsim" >}}) command: + +```bash +VSIM vset VALUES 3 0 0 0 FILTER '.year > 1950' +``` + +This returns only elements that match both the vector similarity and the filter expression. + +## Expression syntax + +Expressions support familiar JavaScript-like syntax: + +- Arithmetic: `+`, `-`, `*`, `/`, `%`, `**` +- Comparison: `==`, `!=`, `>`, `<`, `>=`, `<=` +- Logical: `and`, `or`, `not` (or `&&`, `||`, `!`) +- Containment: `in` +- Grouping: Parentheses `()` + +Use dot notation to access attribute fields, for example, `.year`, `.rating`. + +> Only top-level fields are supported (for example, `.genre`, but not `.movie.genre`). + +## Supported data types + +- Numbers +- Strings +- Booleans (converted to 1 or 0) +- Arrays (for `in`) + +If a field is missing or invalid, the element is skipped without error. + +## FILTER-EF + +The `FILTER-EF` option controls how many candidate nodes the engine inspects to find enough filtered results. The defaults is `COUNT * 100`. + +```bash +VSIM vset VALUES 3 0 0 0 COUNT 10 FILTER '.year > 2000' FILTER-EF 500 +``` + +- Use a higher value for rare filters. +- Use `FILTER-EF 0` to scan as many as needed to fulfill the request. +- The engine will stop early if enough high-quality results are found. + +## Examples + +```bash +# Filter by year range +VSIM movies VALUES 3 0.5 0.8 0.2 FILTER '.year >= 1980 and .year < 1990' + +# Filter by genre and rating +VSIM movies VALUES 3 0.5 0.8 0.2 FILTER '.genre == "action" and .rating > 8.0' + +# Use IN with array +VSIM movies VALUES 3 0.5 0.8 0.2 FILTER '.director in ["Spielberg", "Nolan"]' + +# Math and logic +VSIM movies VALUES 3 0.5 0.8 0.2 FILTER '(.year - 2000) ** 2 < 100 and .rating / 2 > 4' +``` + +## Tips + +- Missing attributes are treated as non-matching. +- Use `FILTER-EF` to tune recall vs performance. +- Combine multiple attributes for fine-grained filtering. + +## See also + +- [VSIM]({{< relref "/commands/vsim" >}}) +- [VADD]({{< relref "/commands/vadd" >}}) +- [VSETATTR]({{< relref "/commands/vsetattr" >}}) +- [VGETATTR]({{< relref "/commands/vgetattr" >}})--- +categories: +- docs +- develop +- stack +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how Redis vector sets behave under load and how to optimize for speed and recall +linkTitle: Performance +title: Performance +weight: 15 +--- + +## Query performance + +Vector similarity queries using the [`VSIM`]({{< relref "/commands/vsim" >}}) are threaded by default. Redis uses up to 32 threads to process these queries in parallel. + +- `VSIM` performance scales nearly linearly with available CPU cores. +- Expect ~50,000 similarity queries per second for a 3M-item set with 300-dim vectors using int8 quantization. +- Performance depends heavily on the `EF` parameter: + - Higher `EF` improves recall, but slows down search. 
+ - Lower `EF` returns faster results with reduced accuracy. + +## Insertion performance + +Inserting vectors with the [`VADD`]({{< relref "/commands/vadd" >}}) command is more computationally expensive than querying: + +- Insertion is single-threaded by default. +- Use the `CAS` option to offload candidate graph search to a background thread. +- Expect a few thousand insertions per second on a single node. + +## Quantization effects + +Quantization greatly impacts both speed and memory: + +- `Q8` (default): 4x smaller than `FP32`, high recall, high speed +- `BIN` (binary): 32x smaller than `FP32`, lower recall, fastest search +- `NOQUANT` (`FP32`): Full precision, slower performance, highest memory use + +Use the quantization mode that best fits your tradeoff between precision and efficiency. +The examples below show how the different modes affect a simple vector. +Note that even with `NOQUANT` mode, the values change slightly, +due to floating point rounding. + +{{< clients-example vecset_tutorial add_quant >}} +> VADD quantSetQ8 VALUES 2 1.262185 1.958231 quantElement Q8 +(integer) 1 +> VEMB quantSetQ8 quantElement +1) "1.2643694877624512" +2) "1.958230972290039" + +> VADD quantSetNoQ VALUES 2 1.262185 1.958231 quantElement NOQUANT +(integer) 1 +> VEMB quantSetNoQ quantElement +1) "1.262184977531433" +2) "1.958230972290039" + +> VADD quantSetBin VALUES 2 1.262185 1.958231 quantElement BIN +(integer) 1 +> VEMB quantSetBin quantElement +1) "1" +2) "1" +{{< /clients-example >}} + +## Deletion performance + +Deleting large vector sets using the [`DEL`]({{< relref "/commands/del" >}}) can cause latency spikes: + +- Redis must unlink and restructure many graph nodes. +- Latency is most noticeable when deleting millions of elements. + +## Save and load performance + +Vector sets save and load the full HNSW graph structure: + +- When reloading from disk is fast and there's no need to rebuild the graph. + +Example: A 3M vector set with 300 components loads in ~15 seconds. + +## Summary of tuning tips + +| Factor | Effect on performance | Tip | +|------------|-------------------------------------|------------------------------------------------| +| `EF` | Slower queries but higher recall | Start low (for example, 200) and tune upward | +| `M` | More memory per node, better recall | Use defaults unless recall is too low | +| Quant type | Binary is fastest, `FP32` is slowest| Use `Q8` or `BIN` unless full precision needed | +| `CAS` | Faster insertions with threading | Use when high write throughput is needed | + +## See also + +- [Memory usage]({{< relref "/develop/data-types/vector-sets/memory" >}}) +- [Scalability]({{< relref "/develop/data-types/vector-sets/scalability" >}}) +- [Filtered search]({{< relref "/develop/data-types/vector-sets/filtered-search" >}}) +--- +categories: +- docs +- develop +- stack +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to optimize memory consumption in Redis vector sets +linkTitle: Memory optimization +title: Memory optimization +weight: 25 +--- + +## Overview + +Redis vector sets are efficient, but vector similarity indexing and graph traversal require memory tradeoffs. This guide helps you manage memory use through quantization, graph tuning, and attribute choices. 
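+
+The sections below cover each of these factors in turn. As a rough, illustrative back-of-envelope calculation (the figures are the approximations quoted on this page, not an exact accounting of allocator overhead), you can combine them into a per-element estimate:
+
+```python
+def estimate_bytes_per_element(dims: int, m: int = 16,
+                               quant: str = "Q8", attr_bytes: int = 0) -> int:
+    """Very rough per-element memory estimate for a vector set."""
+    # Bytes per vector component: FP32 = 4 bytes, Q8 = 1 byte, BIN = 1 bit.
+    bytes_per_component = {"NOQUANT": 4.0, "Q8": 1.0, "BIN": 1.0 / 8}[quant]
+    vector_bytes = dims * bytes_per_component
+    # Roughly M * 2 + M * 0.33 links per node on average, 8 bytes per pointer.
+    link_bytes = (m * 2 + m * 0.33) * 8
+    return round(vector_bytes + link_bytes + attr_bytes)
+
+# 300-dimensional vectors with default M = 16, Q8 quantization, and a small
+# JSON attribute of ~30 bytes: roughly 630 bytes per element.
+print(estimate_bytes_per_element(300, m=16, quant="Q8", attr_bytes=30))
+```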
+ +## Quantization modes + +Vector sets support three quantization levels: + +| Mode | Memory usage | Recall | Notes | +|------------|---------------|--------|---------------------------------| +| `Q8` | 4x smaller | High | Default, fast and accurate | +| `BIN` | 32x smaller | Lower | Fastest, best for coarse search | +| `NOQUANT` | Full size | Highest| Best precision, slowest | + +Use `Q8` unless your use case demands either ultra-precision (use `NOQUANT`) or ultra-efficiency (use `BIN`). + +## Graph structure memory + +HNSW graphs store multiple connections per node. Each node: + +- Has an average of `M * 2 + M * 0.33` pointers (default M = 16). +- Stores pointers using 8 bytes each. +- Allocates ~1.33 layers per node. + +> A single node with M = 64 may consume ~1 KB in links alone. + +To reduce memory: + +- Lower `M` to shrink per-node connections. +- Avoid unnecessarily large values for `M` unless recall needs to be improved. + +## Attribute and label size + +Each node stores: + +- A string label (element name) +- Optional JSON attribute string + +Tips: + +- Use short, fixed-length strings for labels. +- Keep attribute JSON minimal and flat. For example, use `{"year":2020}` instead of nested data. + +## Vector dimension + +High-dimensional vectors increase storage: + +- 300 components at `FP32` = 1200 bytes/vector +- 300 components at `Q8` = 300 bytes/vector + +You can reduce this using the `REDUCE` option during [`VADD`]({{< relref "/commands/vadd" >}}), which applies [random projection](https://en.wikipedia.org/wiki/Random_projection): + +{{< clients-example vecset_tutorial add_reduce >}} +>VADD setNotReduced VALUES 300 ... element +(integer) 1 +> VDIM setNotReduced +(integer) 300 + +>VADD setReduced REDUCE 100 VALUES 300 ... element +(integer) 1 +> VDIM setReduced +(integer) 100 +{{< /clients-example >}} + +This projects a 300-dimensional vector into 100 dimensions, reducing size and improving speed at the cost of some recall. + +## Summary + +| Strategy | Effect | +|---------------------|------------------------------------------| +| Use `Q8` | Best tradeoff for most use cases | +| Use `BIN` | Minimal memory, fastest search | +| Lower `M` | Shrinks HNSW link graph size | +| Reduce dimensions | Cuts memory per vector | +| Minimize JSON | Smaller attributes, less memory per node | + +## See also + +- [Performance]({{< relref "/develop/data-types/vector-sets/performance" >}}) +- [Scalability]({{< relref "/develop/data-types/vector-sets/scalability" >}}) +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Introduction to Redis vector sets +linkTitle: Vector sets +title: Redis vector sets +weight: 55 +bannerText: Vector set is a new data type that is currently in preview and may be subject to change. +bannerChildren: true +--- + +Vector sets are a data type similar to sorted sets, but instead of a score, vector set elements have a string representation of a vector. +Vector sets allow you to add items to a set, and then either: + +* retrieve a subset of items that are the most similar to a specified vector, or +* retrieve a subset of items that are the most similar to the vector of an element that is already part of the vector set. + +Vector sets also provide for optional [filtered search]({{< relref "/develop/data-types/vector-sets/filtered-search" >}}). 
You can associate attributes with all or some elements in a vector set, and then use the `FILTER` option of the [`VSIM`]({{< relref "/commands/vsim" >}}) command to retrieve items similar to a given vector while applying simple mathematical filters to those attributes. Here's a sample filter: `".year > 1950"`. + +The following commands are available for vector sets: + +- [VADD]({{< relref "/commands/vadd" >}}) - add an element to a vector set, creating a new set if it didn't already exist. +- [VCARD]({{< relref "/commands/vcard" >}}) - retrieve the number of elements in a vector set. +- [VDIM]({{< relref "/commands/vdim" >}}) - retrieve the dimension of the vectors in a vector set. +- [VEMB]({{< relref "/commands/vemb" >}}) - retrieve the approximate vector associated with a vector set element. +- [VGETATTR]({{< relref "/commands/vgetattr" >}}) - retrieve the attributes of a vector set element. +- [VINFO]({{< relref "/commands/vinfo" >}}) - retrieve metadata and internal details about a vector set, including size, dimensions, quantization type, and graph structure. +- [VLINKS]({{< relref "/commands/vlinks" >}}) - retrieve the neighbors of a specified element in a vector set; the connections for each layer of the HNSW graph. +- [VRANDMEMBER]({{< relref "/commands/vrandmember" >}}) - retrieve random elements of a vector set. +- [VREM]({{< relref "/commands/vrem" >}}) - remove an element from a vector set. +- [VSETATTR]({{< relref "/commands/vsetattr" >}}) - set or replace attributes on a vector set element. +- [VSIM]({{< relref "/commands/vsim" >}}) - retrieve elements similar to a given vector or element with optional filtering. + +## Examples + +The following examples give an overview of how to use vector sets. For clarity, +we will use a set of two-dimensional vectors that represent points in the +Cartesian coordinate plane. However, in real use cases, the vectors will typically +represent *text embeddings* and have hundreds of dimensions. See +[Redis for AI]({{< relref "/develop/ai" >}}) for more information about using text +embeddings. + +The points we will use are A: (1.0, 1.0), B: (-1.0, -1.0), C: (-1.0, 1.0), D: (1.0. -1.0), and +E: (1.0, 0), shown in the diagram below. + +{{Example points on the coordinate plane.}} + +### Basic operations + +Start by adding the point vectors to a set called `points` using +[`VADD`]({{< relref "/commands/vadd" >}}). This also creates the vector set object. +The [`TYPE`]({{< relref "/commands/type" >}}) command returns a type of `vectorset` +for this object. + +{{< clients-example vecset_tutorial vadd >}} +> VADD points VALUES 2 1.0 1.0 pt:A +(integer) 1 +> VADD points VALUES 2 -1.0 -1.0 pt:B +(integer) 1 +> VADD points VALUES 2 -1.0 1.0 pt:C +(integer) 1 +> VADD points VALUES 2 1.0 -1.0 pt:D +(integer) 1 +> VADD points VALUES 2 1.0 0 pt:E +(integer) 1 +> TYPE points +vectorset +{{< /clients-example >}} + + +Get the number of elements in the set (also known as the *cardinality* of the set) +using [`VCARD`]({{< relref "/commands/vcard" >}}) and the number of dimensions of +the vectors using [`VDIM`]({{< relref "/commands/vdim" >}}): + +{{< clients-example vecset_tutorial vcardvdim >}} +> VCARD points +(integer) 5 +> VDIM points +(integer) 2 +{{< /clients-example >}} + +Get the coordinate values from the elements using [`VEMB`]({{< relref "/commands/vemb" >}}). 
+Note that the values will not typically be the exact values you supplied when you added +the vector because +[quantization]({{< relref "/develop/data-types/vector-sets/performance#quantization-effects" >}}) +is applied to improve performance. + +{{< clients-example vecset_tutorial vemb >}} +> VEMB points pt:A +1) "0.9999999403953552" +2) "0.9999999403953552" +9> VEMB points pt:B +1) "-0.9999999403953552" +2) "-0.9999999403953552" +> VEMB points pt:C +1) "-0.9999999403953552" +2) "0.9999999403953552" +> VEMB points pt:D +1) "0.9999999403953552" +2) "-0.9999999403953552" +> VEMB points pt:E +1) "1" +2) "0" +{{< /clients-example >}} + +Set and retrieve an element's JSON attribute data using +[`VSETATTR`]({{< relref "/commands/vsetattr" >}}) +and [`VGETATTR`]({{< relref "/commands/vgetattr" >}}). You can also pass an empty string +to `VSETATTR` to delete the attribute data: + +{{< clients-example vecset_tutorial attr >}} +> VSETATTR points pt:A "{\"name\": \"Point A\", \"description\": \"First point added\"}" +(integer) 1 +> VGETATTR points pt:A +"{\"name\": \"Point A\", \"description\": \"First point added\"}" +> VSETATTR points pt:A "" +(integer) 1 +> VGETATTR points pt:A +(nil) +{{< /clients-example >}} + +Remove an unwanted element with [`VREM`]({{< relref "/commands/vrem" >}}) + +{{< clients-example vecset_tutorial vrem >}} +> VADD points VALUES 2 0 0 pt:F +(integer) 1 +127.0.0.1:6379> VCARD points +(integer) 6 +127.0.0.1:6379> VREM points pt:F +(integer) 1 +127.0.0.1:6379> VCARD points +(integer) 5 +{{< /clients-example >}} + +### Vector similarity search + +Use [`VSIM`]({{< relref "/commands/vsim" >}}) to rank the points in order of their vector distance from a sample point: + +{{< clients-example vecset_tutorial vsim_basic >}} +> VSIM points values 2 0.9 0.1 +1) "pt:E" +2) "pt:A" +3) "pt:D" +4) "pt:C" +5) "pt:B" +{{< /clients-example >}} + +Find the four elements that are closest to point A and show their distance "scores": + +{{< clients-example vecset_tutorial vsim_options >}} +> VSIM points ELE pt:A WITHSCORES COUNT 4 +1) "pt:A" +2) "1" +3) "pt:E" +4) "0.8535534143447876" +5) "pt:C" +6) "0.5" +7) "pt:D" +8) "0.5" +{{< /clients-example >}} + +Add some JSON attributes and use +[filter expressions]({{< relref "/develop/data-types/vector-sets/filtered-search" >}}) +to include them in the search: + +{{< clients-example vecset_tutorial vsim_filter >}} +> VSETATTR points pt:A "{\"size\":\"large\",\"price\": 18.99}" +(integer) 1 +> VSETATTR points pt:B "{\"size\":\"large\",\"price\": 35.99}" +(integer) 1 +> VSETATTR points pt:C "{\"size\":\"large\",\"price\": 25.99}" +(integer) 1 +> VSETATTR points pt:D "{\"size\":\"small\",\"price\": 21.00}" +(integer) 1 +> VSETATTR points pt:E "{\"size\":\"small\",\"price\": 17.75}" +(integer) 1 + +# Return elements in order of distance from point A whose +# `size` attribute is `large`. +> VSIM points ELE pt:A FILTER '.size == "large"' +1) "pt:A" +2) "pt:C" +3) "pt:B" + +# Return elements in order of distance from point A whose size is +# `large` and whose price is greater than 20.00. +> VSIM points ELE pt:A FILTER '.size == "large" && .price > 20.00' +1) "pt:C" +2) "pt:B" +{{< /clients-example >}} + +## More information + +See the other pages in this section to learn more about the features +and performance parameters of vector sets. 
+--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Overview of data types supported by Redis +linkTitle: Understand data types +title: Understand Redis data types +hideListLinks: true +weight: 35 +--- + +Redis is a data structure server. +At its core, Redis provides a collection of native data types that help you solve a wide variety of problems, from [caching]({{< relref "/develop/data-types/strings" >}}) to +[queuing]({{< relref "/develop/data-types/lists" >}}) to +[event processing]({{< relref "/develop/data-types/streams" >}}). +Below is a short description of each data type, with links to broader overviews and command references. +Each overview includes a comprehensive tutorial with code samples. + +## Data types + +[Redis Open Source]({{< relref "/operate/oss_and_stack" >}}) +implements the following data types: + +- [String](#strings) +- [Hash](#hashes) +- [List](#lists) +- [Set](#sets) +- [Sorted set](#sorted-sets) +- [Vector set](#vector-sets) +- [Stream](#streams) +- [Bitmap](#bitmaps) +- [Bitfield](#bitfields) +- [Geospatial](#geospatial-indexes) +- [JSON](#json) +- [Probabilistic data types](#probabilistic-data-types) +- [Time series](#time-series) + +### Strings + +[Redis strings]({{< relref "/develop/data-types/strings" >}}) are the most basic Redis data type, representing a sequence of bytes. +For more information, see: + +* [Overview of Redis strings]({{< relref "/develop/data-types/strings" >}}) +* [Redis string command reference]({{< relref "/commands/" >}}?group=string) + +### Lists + +[Redis lists]({{< relref "/develop/data-types/lists" >}}) are lists of strings sorted by insertion order. +For more information, see: + +* [Overview of Redis lists]({{< relref "/develop/data-types/lists" >}}) +* [Redis list command reference]({{< relref "/commands/" >}}?group=list) + +### Sets + +[Redis sets]({{< relref "/develop/data-types/sets" >}}) are unordered collections of unique strings that act like the sets from your favorite programming language (for example, [Java HashSets](https://docs.oracle.com/javase/7/docs/api/java/util/HashSet.html), [Python sets](https://docs.python.org/3.10/library/stdtypes.html#set-types-set-frozenset), and so on). +With a Redis set, you can add, remove, and test for existence in O(1) time (in other words, regardless of the number of set elements). +For more information, see: + +* [Overview of Redis sets]({{< relref "/develop/data-types/sets" >}}) +* [Redis set command reference]({{< relref "/commands/" >}}?group=set) + +### Hashes + +[Redis hashes]({{< relref "/develop/data-types/hashes" >}}) are record types modeled as collections of field-value pairs. +As such, Redis hashes resemble [Python dictionaries](https://docs.python.org/3/tutorial/datastructures.html#dictionaries), [Java HashMaps](https://docs.oracle.com/javase/8/docs/api/java/util/HashMap.html), and [Ruby hashes](https://ruby-doc.org/core-3.1.2/Hash.html). +For more information, see: + +* [Overview of Redis hashes]({{< relref "/develop/data-types/hashes" >}}) +* [Redis hashes command reference]({{< relref "/commands/" >}}?group=hash) + +### Sorted sets + +[Redis sorted sets]({{< relref "/develop/data-types/sorted-sets" >}}) are collections of unique strings that maintain order by each string's associated score. 
+For more information, see: + +* [Overview of Redis sorted sets]({{< relref "/develop/data-types/sorted-sets" >}}) +* [Redis sorted set command reference]({{< relref "/commands/" >}}?group=sorted-set) + +### Vector sets + +[Redis vector sets]({{< relref "/develop/data-types/vector-sets" >}}) are a specialized data type designed for managing high-dimensional vector data, enabling fast and efficient vector similarity search within Redis. Vector sets are optimized for use cases involving machine learning, recommendation systems, and semantic search, where each vector represents a data point in multi-dimensional space. Vector sets supports the [HNSW](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world) (hierarchical navigable small world) algorithm, allowing you to store, index, and query vectors based on the cosine similarity metric. With vector sets, Redis provides native support for hybrid search, combining vector similarity with structured [filters]({{< relref "/develop/data-types/vector-sets/filtered-search" >}}). +For more information, see: + +* [Overview of Redis vector sets]({{< relref "/develop/data-types/vector-sets" >}}) +* [Redis vector set command reference]({{< relref "/commands/" >}}?group=vector_set) + +### Streams + +A [Redis stream]({{< relref "/develop/data-types/streams" >}}) is a data structure that acts like an append-only log. +Streams help record events in the order they occur and then syndicate them for processing. +For more information, see: + +* [Overview of Redis Streams]({{< relref "/develop/data-types/streams" >}}) +* [Redis Streams command reference]({{< relref "/commands/" >}}?group=stream) + +### Geospatial indexes + +[Redis geospatial indexes]({{< relref "/develop/data-types/geospatial" >}}) are useful for finding locations within a given geographic radius or bounding box. +For more information, see: + +* [Overview of Redis geospatial indexes]({{< relref "/develop/data-types/geospatial" >}}) +* [Redis geospatial indexes command reference]({{< relref "/commands/" >}}?group=geo) + +### Bitmaps + +[Redis bitmaps]({{< relref "/develop/data-types/bitmaps" >}}) let you perform bitwise operations on strings. +For more information, see: + +* [Overview of Redis bitmaps]({{< relref "/develop/data-types/bitmaps" >}}) +* [Redis bitmap command reference]({{< relref "/commands/" >}}?group=bitmap) + +### Bitfields + +[Redis bitfields]({{< relref "/develop/data-types/bitfields" >}}) efficiently encode multiple counters in a string value. +Bitfields provide atomic get, set, and increment operations and support different overflow policies. +For more information, see: + +* [Overview of Redis bitfields]({{< relref "/develop/data-types/bitfields" >}}) +* The [`BITFIELD`]({{< relref "/commands/bitfield" >}}) command. + +### JSON + +[Redis JSON]({{< relref "/develop/data-types/json" >}}) provides +structured, hierarchical arrays and key-value objects that match +the popular [JSON](https://www.json.org/json-en.html) text file +format. You can import JSON text into Redis objects and access, +modify, and query individual data elements. +For more information, see: + +- [Overview of Redis JSON]({{< relref "/develop/data-types/json" >}}) +- [JSON command reference]({{< relref "/commands" >}}?group=json) + +### Probabilistic data types + +These data types let you gather and calculate statistics in a way +that is approximate but highly efficient. 
The following types are
+available:
+
+- [HyperLogLog](#hyperloglog)
+- [Bloom filter](#bloom-filter)
+- [Cuckoo filter](#cuckoo-filter)
+- [t-digest](#t-digest)
+- [Top-K](#top-k)
+- [Count-min sketch](#count-min-sketch)
+
+### HyperLogLog
+
+The [Redis HyperLogLog]({{< relref "/develop/data-types/probabilistic/hyperloglogs" >}}) data structures provide probabilistic estimates of the cardinality (i.e., number of elements) of large sets. For more information, see:
+
+* [Overview of Redis HyperLogLog]({{< relref "/develop/data-types/probabilistic/hyperloglogs" >}})
+* [Redis HyperLogLog command reference]({{< relref "/commands/" >}}?group=hyperloglog)
+
+### Bloom filter
+
+[Redis Bloom filters]({{< relref "/develop/data-types/probabilistic/bloom-filter" >}})
+let you check for the presence or absence of an element in a set. For more
+information, see:
+
+- [Overview of Redis Bloom filters]({{< relref "/develop/data-types/probabilistic/bloom-filter" >}})
+- [Bloom filter command reference]({{< relref "/commands" >}}?group=bf)
+
+### Cuckoo filter
+
+[Redis Cuckoo filters]({{< relref "/develop/data-types/probabilistic/cuckoo-filter" >}})
+let you check for the presence or absence of an element in a set. They are similar to
+[Bloom filters](#bloom-filter) but with slightly different trade-offs between features
+and performance. For more information, see:
+
+- [Overview of Redis Cuckoo filters]({{< relref "/develop/data-types/probabilistic/cuckoo-filter" >}})
+- [Cuckoo filter command reference]({{< relref "/commands" >}}?group=cf)
+
+### t-digest
+
+[Redis t-digest]({{< relref "/develop/data-types/probabilistic/t-digest" >}})
+structures estimate percentiles from a stream of data values. For more
+information, see:
+
+- [Redis t-digest overview]({{< relref "/develop/data-types/probabilistic/t-digest" >}})
+- [t-digest command reference]({{< relref "/commands" >}}?group=tdigest)
+
+### Top-K
+
+[Redis Top-K]({{< relref "/develop/data-types/probabilistic/top-k" >}})
+structures estimate the ranking of a data point within a stream of values.
+For more information, see:
+
+- [Redis Top-K overview]({{< relref "/develop/data-types/probabilistic/top-k" >}})
+- [Top-K command reference]({{< relref "/commands" >}}?group=topk)
+
+### Count-min sketch
+
+[Redis Count-min sketch]({{< relref "/develop/data-types/probabilistic/count-min-sketch" >}})
+structures estimate the frequency of a data point within a stream of values.
+For more information, see:
+
+- [Redis Count-min sketch overview]({{< relref "/develop/data-types/probabilistic/count-min-sketch" >}})
+- [Count-min sketch command reference]({{< relref "/commands" >}}?group=cms)
+
+### Time series
+
+[Redis time series]({{< relref "/develop/data-types/timeseries" >}})
+structures let you store and query timestamped data points.
+For more information, see:
+
+- [Redis time series overview]({{< relref "/develop/data-types/timeseries" >}})
+- [Time series command reference]({{< relref "/commands" >}}?group=timeseries)
+
+## Adding extensions
+
+To extend the features provided by the included data types, use one of these options:
+
+1. Write your own custom [server-side functions in Lua]({{< relref "/develop/interact/programmability/" >}}), as shown in the brief example below.
+1. Write your own Redis module using the [modules API]({{< relref "/develop/reference/modules/" >}}) or check out the [community-supported modules]({{< relref "/operate/oss_and_stack/stack-with-enterprise/" >}}).
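+
+To give a flavor of the first option, here is a minimal, hypothetical sketch of a server-side Lua script run with [`EVAL`]({{< relref "/commands/eval" >}}): it sets a key and returns the stored string's length in a single atomic step. The key and value are placeholders.
+
+    > EVAL "redis.call('SET', KEYS[1], ARGV[1]) return redis.call('STRLEN', KEYS[1])" 1 greeting "hello"
+    (integer) 5
+
+Production extensions normally live in a reusable function library or a module rather than an inline script, but the calling pattern is the same.
+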
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Learn several Redis patterns by building a Twitter clone
+linkTitle: Patterns example
+title: Redis patterns example
+weight: 20
+---
+
+This article describes the design and implementation of a [very simple Twitter clone](https://github.com/antirez/retwis) written using PHP with Redis as the only database. The programming community has traditionally considered key-value stores as a special purpose database that couldn't be used as a drop-in replacement for a relational database for the development of web applications. This article will try to show that Redis data structures on top of a key-value layer are an effective data model to implement many kinds of applications.
+
+Note: the original version of this article was written in 2009 when Redis was
+released. It was not exactly clear at that time that the Redis data model was
+suitable to write entire applications. Now after 5 years there are many cases of
+applications using Redis as their main store, so the goal of the article today
+is to be a tutorial for Redis newcomers. You'll learn how to design a simple
+data layout using Redis, and how to apply different data structures.
+
+Our Twitter clone, called [Retwis](https://github.com/antirez/retwis), is structurally simple, has very good performance, and can be distributed among any number of web and Redis servers with little effort. [View the Retwis source code](https://github.com/antirez/retwis).
+
+I used PHP for the example because of its universal readability. The same (or better) results can be obtained using Ruby, Python, Erlang, and so on.
+A few clones exist (however not all the clones use the same data layout as the
+current version of this tutorial, so please, stick with the official PHP
+implementation for the sake of following the article better).
+
+* [Retwis-RB](https://github.com/danlucraft/retwis-rb) is a port of Retwis to Ruby and Sinatra written by Daniel Lucraft.
+* [Retwis-J](https://docs.spring.io/spring-data/data-keyvalue/examples/retwisj/current/) is a port of Retwis to Java, using the Spring Data Framework, written by [Costin Leau](http://twitter.com/costinl). Its source code can be found on [GitHub](https://github.com/SpringSource/spring-data-keyvalue-examples), and there is comprehensive documentation available at [springsource.org](http://j.mp/eo6z6I).
+
+What is a key-value store?
+---
+The essence of a key-value store is the ability to store some data, called a _value_, inside a key. The value can be retrieved later only if we know the specific key it was stored in. There is no direct way to search for a key by value. In some sense, it is like a very large hash/dictionary, but it is persistent, i.e. when your application ends, the data doesn't go away.
So, for example, I can use the command [`SET`]({{< relref "/commands/set" >}}) to store the value *bar* in the key *foo*: + + SET foo bar + +Redis stores data permanently, so if I later ask "_What is the value stored in key foo?_" Redis will reply with *bar*: + + GET foo => bar + +Other common operations provided by key-value stores are [`DEL`]({{< relref "/commands/del" >}}), to delete a given key and its associated value, SET-if-not-exists (called [`SETNX`]({{< relref "/commands/setnx" >}}) on Redis), to assign a value to a key only if the key does not already exist, and [`INCR`]({{< relref "/commands/incr" >}}), to atomically increment a number stored in a given key: + + SET foo 10 + INCR foo => 11 + INCR foo => 12 + INCR foo => 13 + +Atomic operations +--- + +There is something special about [`INCR`]({{< relref "/commands/incr" >}}). You may wonder why Redis provides such an operation if we can do it ourselves with a bit of code? After all, it is as simple as: + + x = GET foo + x = x + 1 + SET foo x + +The problem is that incrementing this way will work as long as there is only one client working with the key _foo_ at one time. See what happens if two clients are accessing this key at the same time: + + x = GET foo (yields 10) + y = GET foo (yields 10) + x = x + 1 (x is now 11) + y = y + 1 (y is now 11) + SET foo x (foo is now 11) + SET foo y (foo is now 11) + +Something is wrong! We incremented the value two times, but instead of going from 10 to 12, our key holds 11. This is because the increment done with `GET / increment / SET` *is not an atomic operation*. Instead the INCR provided by Redis, Memcached, ..., are atomic implementations, and the server will take care of protecting the key during the time needed to complete the increment in order to prevent simultaneous accesses. + +What makes Redis different from other key-value stores is that it provides other operations similar to INCR that can be used to model complex problems. This is why you can use Redis to write whole web applications without using another database like an SQL database, and without going crazy. + +Beyond key-value stores: lists +--- + +In this section we will see which Redis features we need to build our Twitter clone. The first thing to know is that Redis values can be more than strings. Redis supports Lists, Sets, Hashes, Sorted Sets, Bitmaps, and HyperLogLog types as values, and there are atomic operations to operate on them so we are safe even with multiple accesses to the same key. Let's start with Lists: + + LPUSH mylist a (now mylist holds 'a') + LPUSH mylist b (now mylist holds 'b','a') + LPUSH mylist c (now mylist holds 'c','b','a') + +[`LPUSH`]({{< relref "/commands/lpush" >}}) means _Left Push_, that is, add an element to the left (or to the head) of the list stored in _mylist_. If the key _mylist_ does not exist it is automatically created as an empty list before the PUSH operation. As you can imagine, there is also an [`RPUSH`]({{< relref "/commands/rpush" >}}) operation that adds the element to the right of the list (on the tail). This is very useful for our Twitter clone. User updates can be added to a list stored in `username:updates`, for instance. + +There are operations to get data from Lists, of course. For instance, LRANGE returns a range from the list, or the whole list. + + LRANGE mylist 0 1 => c,b + +LRANGE uses zero-based indexes - that is the first element is 0, the second 1, and so on. The command arguments are `LRANGE key first-index last-index`. 
The _last-index_ argument can be negative, with a special meaning: -1 is the last element of the list, -2 the penultimate, and so on. So, to get the whole list use: + + LRANGE mylist 0 -1 => c,b,a + +Other important operations are LLEN that returns the number of elements in the list, and LTRIM that is like LRANGE but instead of returning the specified range *trims* the list, so it is like _Get range from mylist, Set this range as new value_ but does so atomically. + +The Set data type +--- + +Currently we don't use the Set type in this tutorial, but since we use +Sorted Sets, which are kind of a more capable version of Sets, it is better +to start introducing Sets first (which are a very useful data structure +per se), and later Sorted Sets. + +There are more data types than just Lists. Redis also supports Sets, which are unsorted collections of elements. It is possible to add, remove, and test for existence of members, and perform the intersection between different Sets. Of course it is possible to get the elements of a Set. Some examples will make it more clear. Keep in mind that [`SADD`]({{< relref "/commands/sadd" >}}) is the _add to set_ operation, [`SREM`]({{< relref "/commands/srem" >}}) is the _remove from set_ operation, [`SISMEMBER`]({{< relref "/commands/sismember" >}}) is the _test if member_ operation, and [`SINTER`]({{< relref "/commands/sinter" >}}) is the _perform intersection_ operation. Other operations are [`SCARD`]({{< relref "/commands/scard" >}}) to get the cardinality (the number of elements) of a Set, and [`SMEMBERS`]({{< relref "/commands/smembers" >}}) to return all the members of a Set. + + SADD myset a + SADD myset b + SADD myset foo + SADD myset bar + SCARD myset => 4 + SMEMBERS myset => bar,a,foo,b + +Note that [`SMEMBERS`]({{< relref "/commands/smembers" >}}) does not return the elements in the same order we added them since Sets are *unsorted* collections of elements. When you want to store in order it is better to use Lists instead. Some more operations against Sets: + + SADD mynewset b + SADD mynewset foo + SADD mynewset hello + SINTER myset mynewset => foo,b + +[`SINTER`]({{< relref "/commands/sinter" >}}) can return the intersection between Sets but it is not limited to two Sets. You may ask for the intersection of 4,5, or 10000 Sets. Finally let's check how [`SISMEMBER`]({{< relref "/commands/sismember" >}}) works: + + SISMEMBER myset foo => 1 + SISMEMBER myset notamember => 0 + +The Sorted Set data type +--- + +Sorted Sets are similar to Sets: collection of elements. However in Sorted +Sets each element is associated with a floating point value, called the +*element score*. Because of the score, elements inside a Sorted Set are +ordered, since we can always compare two elements by score (and if the score +happens to be the same, we compare the two elements as strings). + +Like Sets in Sorted Sets it is not possible to add repeated elements, every +element is unique. However it is possible to update an element's score. + +Sorted Set commands are prefixed with `Z`. The following is an example +of Sorted Sets usage: + + ZADD zset 10 a + ZADD zset 5 b + ZADD zset 12.55 c + ZRANGE zset 0 -1 => b,a,c + +In the above example we added a few elements with [`ZADD`]({{< relref "/commands/zadd" >}}), and later retrieved +the elements with [`ZRANGE`]({{< relref "/commands/zrange" >}}). As you can see the elements are returned in order +according to their score. 
In order to check if a given element exists, and +also to retrieve its score if it exists, we use the [`ZSCORE`]({{< relref "/commands/zscore" >}}) command: + + ZSCORE zset a => 10 + ZSCORE zset non_existing_element => NULL + +Sorted Sets are a very powerful data structure, you can query elements by +score range, lexicographically, in reverse order, and so forth. +To know more [please check the Sorted Set sections in the official Redis commands documentation]({{< relref "/commands/#sorted_set" >}}). + +The Hash data type +--- + +This is the last data structure we use in our program, and is extremely easy +to grasp since there is an equivalent in almost every programming language out +there: Hashes. Redis Hashes are basically like Ruby or Python hashes, a +collection of fields associated with values: + + HMSET myuser name Salvatore surname Sanfilippo country Italy + HGET myuser surname => Sanfilippo + +[`HMSET`]({{< relref "/commands/hmset" >}}) can be used to set fields in the hash, that can be retrieved with +[`HGET`]({{< relref "/commands/hget" >}}) later. It is possible to check if a field exists with [`HEXISTS`]({{< relref "/commands/hexists" >}}), or +to increment a hash field with [`HINCRBY`]({{< relref "/commands/hincrby" >}}) and so forth. + +Hashes are the ideal data structure to represent *objects*. For example we +use Hashes in order to represent Users and Updates in our Twitter clone. + +Okay, we just exposed the basics of the Redis main data structures, +we are ready to start coding! + +Prerequisites +--- + +If you haven't downloaded the [Retwis source code](https://github.com/antirez/retwis) already please grab it now. It contains a few PHP files, and also a copy of [Predis](https://github.com/nrk/predis), the PHP client library we use in this example. + +Another thing you probably want is a working Redis server. Just get the source, build with `make`, run with `./redis-server`, and you're ready to go. No configuration is required at all in order to play with or run Retwis on your computer. + +Data layout +--- + +When working with a relational database, a database schema must be designed so that we'd know the tables, indexes, and so on that the database will contain. We don't have tables in Redis, so what do we need to design? We need to identify what keys are needed to represent our objects and what kind of values these keys need to hold. + +Let's start with Users. We need to represent users, of course, with their username, userid, password, the set of users following a given user, the set of users a given user follows, and so on. The first question is, how should we identify a user? Like in a relational DB, a good solution is to identify different users with different numbers, so we can associate a unique ID with every user. Every other reference to this user will be done by id. Creating unique IDs is very simple to do by using our atomic [`INCR`]({{< relref "/commands/incr" >}}) operation. When we create a new user we can do something like this, assuming the user is called "antirez": + + INCR next_user_id => 1000 + HMSET user:1000 username antirez password p1pp0 + +*Note: you should use a hashed password in a real application, for simplicity +we store the password in clear text.* + +We use the `next_user_id` key in order to always get a unique ID for every new user. Then we use this unique ID to name the key holding a Hash with user's data. *This is a common design pattern* with key-values stores! Keep it in mind. 
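+
+Just to make the pattern concrete, here is how registering a second, purely hypothetical user would look (the username and password below are placeholders): the same [`INCR`]({{< relref "/commands/incr" >}}) allocates the next ID, and that ID names the new user's Hash.
+
+    INCR next_user_id => 1001
+    HMSET user:1001 username jdoe password s3cret
+    HGET user:1001 username => jdoe
+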
+Besides the fields already defined, we need some more stuff in order to fully define a User. For example, sometimes it can be useful to be able to get the user ID from the username, so every time we add a user, we also populate the `users` key, which is a Hash, with the username as field, and its ID as value.
+
+    HSET users antirez 1000
+
+This may appear strange at first, but remember that we are only able to access data in a direct way, without secondary indexes. It's not possible to tell Redis to return the key that holds a specific value. This is also *our strength*. This new paradigm is forcing us to organize data so that everything is accessible by _primary key_, speaking in relational DB terms.
+
+Followers, following, and updates
+---
+
+There is another central need in our system. A user might have users who follow them, which we'll call their followers. A user might follow other users, which we'll call a following. We have a perfect data structure for this. That is... Sets.
+The uniqueness of Sets elements, and the fact we can test in constant time for
+existence, are two interesting features. However what about also remembering
+the time at which a given user started following another one? In an enhanced
+version of our simple Twitter clone this may be useful, so instead of using
+a simple Set, we use a Sorted Set, using the user ID of the following or follower
+user as element, and the unix time at which the relation between the users
+was created, as our score.
+
+So let's define our keys:
+
+    followers:1000 => Sorted Set of uids of all the follower users
+    following:1000 => Sorted Set of uids of all the following users
+
+We can add new followers with:
+
+    ZADD followers:1000 1401267618 1234 => Add user 1234 with time 1401267618
+
+Another important thing we need is a place where we can add the updates to display in the user's home page. We'll need to access this data in chronological order later, from the most recent update to the oldest, so the perfect kind of data structure for this is a List. Basically every new update will be [`LPUSH`]({{< relref "/commands/lpush" >}})ed in the user updates key, and thanks to [`LRANGE`]({{< relref "/commands/lrange" >}}), we can implement pagination and so on. Note that we use the words _updates_ and _posts_ interchangeably, since updates are actually "little posts" in some way.
+
+    posts:1000 => a List of post ids - every new post is LPUSHed here.
+
+This list is basically the User timeline. We'll push the IDs of her/his own
+posts, and the IDs of all the posts created by the following users.
+Basically, we'll implement a write fanout.
+
+Authentication
+---
+
+OK, we have more or less everything about the user except for authentication. We'll handle authentication in a simple but robust way: we don't want to use PHP sessions, as our system must be ready to be distributed among different web servers easily, so we'll keep the whole state in our Redis database. All we need is a random **unguessable** string to set as the cookie of an authenticated user, and a key that will contain the user ID of the client holding the string.
+
+We need two things in order to make this thing work in a robust way.
+First: the current authentication *secret* (the random unguessable string) +should be part of the User object, so when the user is created we also set +an `auth` field in its Hash: + + HSET user:1000 auth fea5e81ac8ca77622bed1c2132a021f9 + +Moreover, we need a way to map authentication secrets to user IDs, so +we also take an `auths` key, which has as value a Hash type mapping +authentication secrets to user IDs. + + HSET auths fea5e81ac8ca77622bed1c2132a021f9 1000 + +In order to authenticate a user we'll do these simple steps (see the `login.php` file in the Retwis source code): + + * Get the username and password via the login form. + * Check if the `username` field actually exists in the `users` Hash. + * If it exists we have the user id, (i.e. 1000). + * Check if user:1000 password matches, if not, return an error message. + * Ok authenticated! Set "fea5e81ac8ca77622bed1c2132a021f9" (the value of user:1000 `auth` field) as the "auth" cookie. + +This is the actual code: + + include("retwis.php"); + + # Form sanity checks + if (!gt("username") || !gt("password")) + goback("You need to enter both username and password to login."); + + # The form is ok, check if the username is available + $username = gt("username"); + $password = gt("password"); + $r = redisLink(); + $userid = $r->hget("users",$username); + if (!$userid) + goback("Wrong username or password"); + $realpassword = $r->hget("user:$userid","password"); + if ($realpassword != $password) + goback("Wrong username or password"); + + # Username / password OK, set the cookie and redirect to index.php + $authsecret = $r->hget("user:$userid","auth"); + setcookie("auth",$authsecret,time()+3600*24*365); + header("Location: index.php"); + +This happens every time a user logs in, but we also need a function `isLoggedIn` in order to check if a given user is already authenticated or not. These are the logical steps preformed by the `isLoggedIn` function: + + * Get the "auth" cookie from the user. If there is no cookie, the user is not logged in, of course. Let's call the value of the cookie ``. + * Check if `` field in the `auths` Hash exists, and what the value (the user ID) is (1000 in the example). + * In order for the system to be more robust, also verify that user:1000 auth field also matches. + * OK the user is authenticated, and we loaded a bit of information in the `$User` global variable. + +The code is simpler than the description, possibly: + + function isLoggedIn() { + global $User, $_COOKIE; + + if (isset($User)) return true; + + if (isset($_COOKIE['auth'])) { + $r = redisLink(); + $authcookie = $_COOKIE['auth']; + if ($userid = $r->hget("auths",$authcookie)) { + if ($r->hget("user:$userid","auth") != $authcookie) return false; + loadUserInfo($userid); + return true; + } + } + return false; + } + + function loadUserInfo($userid) { + global $User; + + $r = redisLink(); + $User['id'] = $userid; + $User['username'] = $r->hget("user:$userid","username"); + return true; + } + +Having `loadUserInfo` as a separate function is overkill for our application, but it's a good approach in a complex application. The only thing that's missing from all the authentication is the logout. What do we do on logout? That's simple, we'll just change the random string in user:1000 `auth` field, remove the old authentication secret from the `auths` Hash, and add the new one. 
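+
+Expressed as plain Redis commands, and using a made-up replacement secret purely for illustration, a logout therefore boils down to three writes:
+
+    HSET user:1000 auth 2f0cbd1a3a1f9ab4e4d5efae31bc2ffe
+    HSET auths 2f0cbd1a3a1f9ab4e4d5efae31bc2ffe 1000
+    HDEL auths fea5e81ac8ca77622bed1c2132a021f9
+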
+ +*Important:* the logout procedure explains why we don't just authenticate the user after looking up the authentication secret in the `auths` Hash, but double check it against user:1000 `auth` field. The true authentication string is the latter, while the `auths` Hash is just an authentication field that may even be volatile, or, if there are bugs in the program or a script gets interrupted, we may even end with multiple entries in the `auths` key pointing to the same user ID. The logout code is the following (`logout.php`): + + include("retwis.php"); + + if (!isLoggedIn()) { + header("Location: index.php"); + exit; + } + + $r = redisLink(); + $newauthsecret = getrand(); + $userid = $User['id']; + $oldauthsecret = $r->hget("user:$userid","auth"); + + $r->hset("user:$userid","auth",$newauthsecret); + $r->hset("auths",$newauthsecret,$userid); + $r->hdel("auths",$oldauthsecret); + + header("Location: index.php"); + +That is just what we described and should be simple to understand. + +Updates +--- + +Updates, also known as posts, are even simpler. In order to create a new post in the database we do something like this: + + INCR next_post_id => 10343 + HMSET post:10343 user_id $owner_id time $time body "I'm having fun with Retwis" + +As you can see each post is just represented by a Hash with three fields. The ID of the user owning the post, the time at which the post was published, and finally, the body of the post, which is, the actual status message. + +After we create a post and we obtain the post ID, we need to LPUSH the ID in the timeline of every user that is following the author of the post, and of course in the list of posts of the author itself (everybody is virtually following herself/himself). This is the file `post.php` that shows how this is performed: + + include("retwis.php"); + + if (!isLoggedIn() || !gt("status")) { + header("Location:index.php"); + exit; + } + + $r = redisLink(); + $postid = $r->incr("next_post_id"); + $status = str_replace("\n"," ",gt("status")); + $r->hmset("post:$postid","user_id",$User['id'],"time",time(),"body",$status); + $followers = $r->zrange("followers:".$User['id'],0,-1); + $followers[] = $User['id']; /* Add the post to our own posts too */ + + foreach($followers as $fid) { + $r->lpush("posts:$fid",$postid); + } + # Push the post on the timeline, and trim the timeline to the + # newest 1000 elements. + $r->lpush("timeline",$postid); + $r->ltrim("timeline",0,1000); + + header("Location: index.php"); + +The core of the function is the `foreach` loop. We use [`ZRANGE`]({{< relref "/commands/zrange" >}}) to get all the followers of the current user, then the loop will [`LPUSH`]({{< relref "/commands/lpush" >}}) the push the post in every follower timeline List. + +Note that we also maintain a global timeline for all the posts, so that in the Retwis home page we can show everybody's updates easily. This requires just doing an [`LPUSH`]({{< relref "/commands/lpush" >}}) to the `timeline` List. Let's face it, aren't you starting to think it was a bit strange to have to sort things added in chronological order using `ORDER BY` with SQL? I think so. + +There is an interesting thing to notice in the code above: we used a new +command called [`LTRIM`]({{< relref "/commands/ltrim" >}}) after we perform the [`LPUSH`]({{< relref "/commands/lpush" >}}) operation in the global +timeline. This is used in order to trim the list to just 1000 elements. 
The
+global timeline is actually only used in order to show a few posts in the
+home page, there is no need to have the full history of all the posts.
+
+Basically [`LTRIM`]({{< relref "/commands/ltrim" >}}) + [`LPUSH`]({{< relref "/commands/lpush" >}}) is a way to create a *capped collection* in Redis.
+
+Paginating updates
+---
+
+Now it should be pretty clear how we can use [`LRANGE`]({{< relref "/commands/lrange" >}}) in order to get ranges of posts, and render these posts on the screen. The code is simple:
+
+    function showPost($id) {
+        $r = redisLink();
+        $post = $r->hgetall("post:$id");
+        if (empty($post)) return false;
+
+        $userid = $post['user_id'];
+        $username = $r->hget("user:$userid","username");
+        $elapsed = strElapsed($post['time']);
+        $userlink = "<a class=\"username\" href=\"profile.php?u=".urlencode($username)."\">".utf8entities($username)."</a>";
+
+        echo('<div class="post">'.$userlink.' '.utf8entities($post['body'])."<br>");
+        echo('posted '.$elapsed.' ago via web</div>');
+        return true;
+    }
+
+    function showUserPosts($userid,$start,$count) {
+        $r = redisLink();
+        $key = ($userid == -1) ? "timeline" : "posts:$userid";
+        $posts = $r->lrange($key,$start,$start+$count);
+        $c = 0;
+        foreach($posts as $p) {
+            if (showPost($p)) $c++;
+            if ($c == $count) break;
+        }
+        return count($posts) == $count+1;
+    }
+
+`showPost` will simply convert and print a Post in HTML while `showUserPosts` gets a range of posts and then passes them to `showPost`.
+
+*Note: [`LRANGE`]({{< relref "/commands/lrange" >}}) is not very efficient if the list of posts starts to be very
+big, and we want to access elements which are in the middle of the list, since Redis Lists are backed by linked lists. If a system is designed for
+deep pagination of millions of items, it is better to resort to Sorted Sets
+instead.*
+
+Following users
+---
+
+It is not hard, but we did not yet check how we create following / follower relationships. If user ID 1000 (antirez) wants to follow user ID 5000 (pippo), we need to create both a following and a follower relationship. We just need two [`ZADD`]({{< relref "/commands/zadd" >}}) calls:
+
+    ZADD following:1000 1401267618 5000
+    ZADD followers:5000 1401267618 1000
+
+Note the same pattern again and again. In theory with a relational database, the list of following and followers would be contained in a single table with fields like `following_id` and `follower_id`. You can extract the followers or following of every user using an SQL query. With a key-value DB things are a bit different since we need to set both the `1000 is following 5000` and `5000 is followed by 1000` relations. This is the price to pay, but on the other hand accessing the data is simpler and extremely fast. Having these things as separate sets allows us to do interesting stuff. For example, using [`ZINTERSTORE`]({{< relref "/commands/zinterstore" >}}) we can have the intersection of `following` of two different users, so we may add a feature to our Twitter clone so that it is able to tell you very quickly when you visit somebody else's profile, "you and Alice have 34 followers in common", and things like that.
+
+You can find the code that sets or removes a following / follower relation in the `follow.php` file.
+
+Making it horizontally scalable
+---
+
+Gentle reader, if you read till this point you are already a hero. Thank you. Before talking about scaling horizontally it is worth checking performance on a single server. Retwis is *extremely fast*, without any kind of cache. On a very slow and loaded server, an Apache benchmark with 100 parallel clients issuing 100000 requests measured the average pageview to take 5 milliseconds. This means you can serve millions of users every day with just a single Linux box, and this one was monkey ass slow... Imagine the results with more recent hardware.
+
+However you can't go with a single server forever, how do you scale a key-value
+store?
+
+Retwis does not perform any multi-key operation, so making it scalable is
+simple: you may use client-side sharding, or something like a sharding proxy
+like Twemproxy, or the upcoming Redis Cluster.
+
+To know more about those topics please read
+[our documentation about sharding]({{< relref "/operate/oss_and_stack/management/scaling" >}}). However, the point here
+to stress is that in a key-value store, if you design with care, the data set
+is split among **many independent small keys**.
To distribute those keys +to multiple nodes is more straightforward and predictable compared to using +a semantically more complex database system. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'A distributed lock pattern with Redis + + ' +linkTitle: Distributed locks +title: Distributed Locks with Redis +weight: 1 +--- +Distributed locks are a very useful primitive in many environments where +different processes must operate with shared resources in a mutually +exclusive way. + +There are a number of libraries and blog posts describing how to implement +a DLM (Distributed Lock Manager) with Redis, but every library uses a different +approach, and many use a simple approach with lower guarantees compared to +what can be achieved with slightly more complex designs. + +This page describes a more canonical algorithm to implement +distributed locks with Redis. We propose an algorithm, called **Redlock**, +which implements a DLM which we believe to be safer than the vanilla single +instance approach. We hope that the community will analyze it, provide +feedback, and use it as a starting point for the implementations or more +complex or alternative designs. + +## Implementations + +Before describing the algorithm, here are a few links to implementations +already available that can be used for reference. + +* [Redlock-rb](https://github.com/antirez/redlock-rb) (Ruby implementation). There is also a [fork of Redlock-rb](https://github.com/leandromoreira/redlock-rb) that adds a gem for easy distribution. +* [RedisQueuedLocks](https://github.com/0exp/redis_queued_locks) (Ruby implementation). +* [Redlock-py](https://github.com/SPSCommerce/redlock-py) (Python implementation). +* [Pottery](https://github.com/brainix/pottery#redlock) (Python implementation). +* [Aioredlock](https://github.com/joanvila/aioredlock) (Asyncio Python implementation). +* [RedisMutex](https://github.com/malkusch/lock#redismutex) (PHP implementation with both [Redis extension](https://github.com/phpredis/phpredis) and [Predis library](https://github.com/predis/predis) clients support). +* [Redlock-php](https://github.com/ronnylt/redlock-php) (PHP implementation). +* [cheprasov/php-redis-lock](https://github.com/cheprasov/php-redis-lock) (PHP library for locks). +* [rtckit/react-redlock](https://github.com/rtckit/reactphp-redlock) (Async PHP implementation). +* [Redsync](https://github.com/go-redsync/redsync) (Go implementation). +* [Redisson](https://github.com/mrniko/redisson) (Java implementation). +* [Redis::DistLock](https://github.com/sbertrang/redis-distlock) (Perl implementation). +* [Redlock-cpp](https://github.com/jacket-code/redlock-cpp) (C++ implementation). +* [Redis-plus-plus](https://github.com/sewenew/redis-plus-plus/#redlock) (C++ implementation). +* [Redlock-cs](https://github.com/kidfashion/redlock-cs) (C#/.NET implementation). +* [RedLock.net](https://github.com/samcook/RedLock.net) (C#/.NET implementation). Includes async and lock extension support. +* [ScarletLock](https://github.com/psibernetic/scarletlock) (C# .NET implementation with configurable datastore). +* [Redlock4Net](https://github.com/LiZhenNet/Redlock4Net) (C# .NET implementation). +* [node-redlock](https://github.com/mike-marcacci/node-redlock) (NodeJS implementation). Includes support for lock extension. +* [Deno DLM](https://github.com/oslabs-beta/Deno-Redlock) (Deno implementation) +* [Rslock](https://github.com/hexcowboy/rslock) (Rust implementation). 
Includes async and lock extension support. + +## Safety and Liveness Guarantees + +We are going to model our design with just three properties that, from our point of view, are the minimum guarantees needed to use distributed locks in an effective way. + +1. Safety property: Mutual exclusion. At any given moment, only one client can hold a lock. +2. Liveness property A: Deadlock free. Eventually it is always possible to acquire a lock, even if the client that locked a resource crashes or gets partitioned. +3. Liveness property B: Fault tolerance. As long as the majority of Redis nodes are up, clients are able to acquire and release locks. + +## Why Failover-based Implementations Are Not Enough + +To understand what we want to improve, let’s analyze the current state of affairs with most Redis-based distributed lock libraries. + +The simplest way to use Redis to lock a resource is to create a key in an instance. The key is usually created with a limited time to live, using the Redis expires feature, so that eventually it will get released (property 2 in our list). When the client needs to release the resource, it deletes the key. + +Superficially this works well, but there is a problem: this is a single point of failure in our architecture. What happens if the Redis master goes down? +Well, let’s add a replica! And use it if the master is unavailable. This is unfortunately not viable. By doing so we can’t implement our safety property of mutual exclusion, because Redis replication is asynchronous. + +There is a race condition with this model: + +1. Client A acquires the lock in the master. +2. The master crashes before the write to the key is transmitted to the replica. +3. The replica gets promoted to master. +4. Client B acquires the lock to the same resource A already holds a lock for. **SAFETY VIOLATION!** + +Sometimes it is perfectly fine that, under special circumstances, for example during a failure, multiple clients can hold the lock at the same time. +If this is the case, you can use your replication based solution. Otherwise we suggest to implement the solution described in this document. + +## Correct Implementation with a Single Instance + +Before trying to overcome the limitation of the single instance setup described above, let’s check how to do it correctly in this simple case, since this is actually a viable solution in applications where a race condition from time to time is acceptable, and because locking into a single instance is the foundation we’ll use for the distributed algorithm described here. + +To acquire the lock, the way to go is the following: + + SET resource_name my_random_value NX PX 30000 + +The command will set the key only if it does not already exist (`NX` option), with an expire of 30000 milliseconds (`PX` option). +The key is set to a value “my\_random\_value”. This value must be unique across all clients and all lock requests. + +Basically the random value is used in order to release the lock in a safe way, with a script that tells Redis: remove the key only if it exists and the value stored at the key is exactly the one I expect to be. This is accomplished by the following Lua script: + + if redis.call("get",KEYS[1]) == ARGV[1] then + return redis.call("del",KEYS[1]) + else + return 0 + end + +This is important in order to avoid removing a lock that was created by another client. 
For example a client may acquire the lock, get blocked performing some operation for longer than the lock validity time (the time at which the key will expire), and later remove the lock, that was already acquired by some other client. +Using just [`DEL`]({{< relref "/commands/del" >}}) is not safe as a client may remove another client's lock. With the above script instead every lock is “signed” with a random string, so the lock will be removed only if it is still the one that was set by the client trying to remove it. + +What should this random string be? We assume it’s 20 bytes from `/dev/urandom`, but you can find cheaper ways to make it unique enough for your tasks. +For example a safe pick is to seed RC4 with `/dev/urandom`, and generate a pseudo random stream from that. +A simpler solution is to use a UNIX timestamp with microsecond precision, concatenating the timestamp with a client ID. It is not as safe, but probably sufficient for most environments. + +The "lock validity time" is the time we use as the key's time to live. It is both the auto release time, and the time the client has in order to perform the operation required before another client may be able to acquire the lock again, without technically violating the mutual exclusion guarantee, which is only limited to a given window of time from the moment the lock is acquired. + +So now we have a good way to acquire and release the lock. With this system, reasoning about a non-distributed system composed of a single, always available, instance, is safe. Let’s extend the concept to a distributed system where we don’t have such guarantees. + +## The Redlock Algorithm + +In the distributed version of the algorithm we assume we have N Redis masters. Those nodes are totally independent, so we don’t use replication or any other implicit coordination system. We already described how to acquire and release the lock safely in a single instance. We take for granted that the algorithm will use this method to acquire and release the lock in a single instance. In our examples we set N=5, which is a reasonable value, so we need to run 5 Redis masters on different computers or virtual machines in order to ensure that they’ll fail in a mostly independent way. + +In order to acquire the lock, the client performs the following operations: + +1. It gets the current time in milliseconds. +2. It tries to acquire the lock in all the N instances sequentially, using the same key name and random value in all the instances. During step 2, when setting the lock in each instance, the client uses a timeout which is small compared to the total lock auto-release time in order to acquire it. For example if the auto-release time is 10 seconds, the timeout could be in the ~ 5-50 milliseconds range. This prevents the client from remaining blocked for a long time trying to talk with a Redis node which is down: if an instance is not available, we should try to talk with the next instance ASAP. +3. The client computes how much time elapsed in order to acquire the lock, by subtracting from the current time the timestamp obtained in step 1. If and only if the client was able to acquire the lock in the majority of the instances (at least 3), and the total time elapsed to acquire the lock is less than lock validity time, the lock is considered to be acquired. +4. If the lock was acquired, its validity time is considered to be the initial validity time minus the time elapsed, as computed in step 3. +5. 
If the client failed to acquire the lock for some reason (either it was not able to lock N/2+1 instances or the validity time is negative), it will try to unlock all the instances (even the instances it believed it was not able to lock). + +### Is the Algorithm Asynchronous? + +The algorithm relies on the assumption that while there is no synchronized clock across the processes, the local time in every process updates at approximately at the same rate, with a small margin of error compared to the auto-release time of the lock. This assumption closely resembles a real-world computer: every computer has a local clock and we can usually rely on different computers to have a clock drift which is small. + +At this point we need to better specify our mutual exclusion rule: it is guaranteed only as long as the client holding the lock terminates its work within the lock validity time (as obtained in step 3), minus some time (just a few milliseconds in order to compensate for clock drift between processes). + +This paper contains more information about similar systems requiring a bound *clock drift*: [Leases: an efficient fault-tolerant mechanism for distributed file cache consistency](http://dl.acm.org/citation.cfm?id=74870). + +### Retry on Failure + +When a client is unable to acquire the lock, it should try again after a random delay in order to try to desynchronize multiple clients trying to acquire the lock for the same resource at the same time (this may result in a split brain condition where nobody wins). Also the faster a client tries to acquire the lock in the majority of Redis instances, the smaller the window for a split brain condition (and the need for a retry), so ideally the client should try to send the [`SET`]({{< relref "/commands/set" >}}) commands to the N instances at the same time using multiplexing. + +It is worth stressing how important it is for clients that fail to acquire the majority of locks, to release the (partially) acquired locks ASAP, so that there is no need to wait for key expiry in order for the lock to be acquired again (however if a network partition happens and the client is no longer able to communicate with the Redis instances, there is an availability penalty to pay as it waits for key expiration). + +### Releasing the Lock + +Releasing the lock is simple, and can be performed whether or not the client believes it was able to successfully lock a given instance. + +### Safety Arguments + +Is the algorithm safe? Let's examine what happens in different scenarios. + +To start let’s assume that a client is able to acquire the lock in the majority of instances. All the instances will contain a key with the same time to live. However, the key was set at different times, so the keys will also expire at different times. But if the first key was set at worst at time T1 (the time we sample before contacting the first server) and the last key was set at worst at time T2 (the time we obtained the reply from the last server), we are sure that the first key to expire in the set will exist for at least `MIN_VALIDITY=TTL-(T2-T1)-CLOCK_DRIFT`. All the other keys will expire later, so we are sure that the keys will be simultaneously set for at least this time. + +During the time that the majority of keys are set, another client will not be able to acquire the lock, since N/2+1 SET NX operations can’t succeed if N/2+1 keys already exist. So if a lock was acquired, it is not possible to re-acquire it at the same time (violating the mutual exclusion property). 
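+
+As a purely illustrative example with made-up numbers: suppose the lock TTL is 10 seconds, the client spent 220 milliseconds collecting the majority of replies, and we allow 10 milliseconds for clock drift. Then:
+
+    MIN_VALIDITY = TTL - (T2-T1) - CLOCK_DRIFT
+                 = 10000 ms - 220 ms - 10 ms
+                 = 9770 ms
+
+For roughly 9.77 seconds all the keys involved are guaranteed to exist at the same time, and during that window no competing client can obtain N/2+1 locks of its own.
+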
+ +However we want to also make sure that multiple clients trying to acquire the lock at the same time can’t simultaneously succeed. + +If a client locked the majority of instances using a time near, or greater, than the lock maximum validity time (the TTL we use for SET basically), it will consider the lock invalid and will unlock the instances, so we only need to consider the case where a client was able to lock the majority of instances in a time which is less than the validity time. In this case for the argument already expressed above, for `MIN_VALIDITY` no client should be able to re-acquire the lock. So multiple clients will be able to lock N/2+1 instances at the same time (with "time" being the end of Step 2) only when the time to lock the majority was greater than the TTL time, making the lock invalid. + +### Liveness Arguments + +The system liveness is based on three main features: + +1. The auto release of the lock (since keys expire): eventually keys are available again to be locked. +2. The fact that clients, usually, will cooperate removing the locks when the lock was not acquired, or when the lock was acquired and the work terminated, making it likely that we don’t have to wait for keys to expire to re-acquire the lock. +3. The fact that when a client needs to retry a lock, it waits a time which is comparably greater than the time needed to acquire the majority of locks, in order to probabilistically make split brain conditions during resource contention unlikely. + +However, we pay an availability penalty equal to [`TTL`]({{< relref "/commands/ttl" >}}) time on network partitions, so if there are continuous partitions, we can pay this penalty indefinitely. +This happens every time a client acquires a lock and gets partitioned away before being able to remove the lock. + +Basically if there are infinite continuous network partitions, the system may become not available for an infinite amount of time. + +### Performance, Crash Recovery and fsync + +Many users using Redis as a lock server need high performance in terms of both latency to acquire and release a lock, and number of acquire / release operations that it is possible to perform per second. In order to meet this requirement, the strategy to talk with the N Redis servers to reduce latency is definitely multiplexing (putting the socket in non-blocking mode, send all the commands, and read all the commands later, assuming that the RTT between the client and each instance is similar). + +However there is another consideration around persistence if we want to target a crash-recovery system model. + +Basically to see the problem here, let’s assume we configure Redis without persistence at all. A client acquires the lock in 3 of 5 instances. One of the instances where the client was able to acquire the lock is restarted, at this point there are again 3 instances that we can lock for the same resource, and another client can lock it again, violating the safety property of exclusivity of lock. + +If we enable AOF persistence, things will improve quite a bit. For example we can upgrade a server by sending it a [`SHUTDOWN`]({{< relref "/commands/shutdown" >}}) command and restarting it. Because Redis expires are semantically implemented so that time still elapses when the server is off, all our requirements are fine. +However everything is fine as long as it is a clean shutdown. What about a power outage? If Redis is configured, as by default, to fsync on disk every second, it is possible that after a restart our key is missing. 
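For reference, these are the append-only file directives in `redis.conf` that control this trade-off (shown here only to illustrate the options discussed):

```
appendonly yes
appendfsync everysec    # default policy: a power outage can lose up to ~1s of writes
# appendfsync always    # fsync on every write: safest for locks, slowest
# appendfsync no        # let the operating system decide when to flush
```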
In theory, if we want to guarantee the lock safety in the face of any kind of instance restart, we need to enable `fsync=always` in the persistence settings. This will affect performance due to the additional sync overhead. + +However things are better than they look like at a first glance. Basically, +the algorithm safety is retained as long as when an instance restarts after a +crash, it no longer participates to any **currently active** lock. This means that the +set of currently active locks when the instance restarts were all obtained +by locking instances other than the one which is rejoining the system. + +To guarantee this we just need to make an instance, after a crash, unavailable +for at least a bit more than the max [`TTL`]({{< relref "/commands/ttl" >}}) we use. This is the time needed +for all the keys about the locks that existed when the instance crashed to +become invalid and be automatically released. + +Using *delayed restarts* it is basically possible to achieve safety even +without any kind of Redis persistence available, however note that this may +translate into an availability penalty. For example if a majority of instances +crash, the system will become globally unavailable for [`TTL`]({{< relref "/commands/ttl" >}}) (here globally means +that no resource at all will be lockable during this time). + +### Making the algorithm more reliable: Extending the lock + +If the work performed by clients consists of small steps, it is possible to +use smaller lock validity times by default, and extend the algorithm implementing +a lock extension mechanism. Basically the client, if in the middle of the +computation while the lock validity is approaching a low value, may extend the +lock by sending a Lua script to all the instances that extends the TTL of the key +if the key exists and its value is still the random value the client assigned +when the lock was acquired. + +The client should only consider the lock re-acquired if it was able to extend +the lock into the majority of instances, and within the validity time +(basically the algorithm to use is very similar to the one used when acquiring +the lock). + +However this does not technically change the algorithm, so the maximum number +of lock reacquisition attempts should be limited, otherwise one of the liveness +properties is violated. + +### Disclaimer about consistency + +Please consider thoroughly reviewing the [Analysis of Redlock](#analysis-of-redlock) section at the end of this page. +Martin Kleppman's article and antirez's answer to it are very relevant. +If you are concerned about consistency and correctness, you should pay attention to the following topics: + +1. You should implement fencing tokens. + This is especially important for processes that can take significant time and applies to any distributed locking system. + Extending locks' lifetime is also an option, but don´t assume that a lock is retained as long as the process that had acquired it is alive. +2. Redis is not using monotonic clock for TTL expiration mechanism. + That means that a wall-clock shift may result in a lock being acquired by more than one process. + Even though the problem can be mitigated by preventing admins from manually setting the server's time and setting up NTP properly, there's still a chance of this issue occurring in real life and compromising consistency. + +## Want to help? + +If you are into distributed systems, it would be great to have your opinion / analysis. 
Also reference implementations in other languages could be great.

Thanks in advance!

## Analysis of Redlock
---

1. Martin Kleppmann [analyzed Redlock here](http://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html). A counterpoint to this analysis can be [found here](http://antirez.com/news/101).
---
categories:
- docs
- develop
- stack
- oss
- rs
- rc
- oss
- kubernetes
- clients
description: 'Writing data in bulk using the Redis protocol

  '
linkTitle: Bulk loading
title: Bulk loading
weight: 1
---

Bulk loading is the process of loading Redis with a large amount of pre-existing data. Ideally, you want to perform this operation quickly and efficiently. This document describes some strategies for bulk loading data in Redis.

## Bulk loading using the Redis protocol

Using a normal Redis client to perform bulk loading is not a good idea for a few reasons: the naive approach of sending one command after the other is slow because you have to pay the round trip time for every command. It is possible to use pipelining, but for bulk loading of many records you need to write new commands while you read replies at the same time to make sure you are inserting as fast as possible.

Only a small percentage of clients support non-blocking I/O, and not all clients are able to parse the replies efficiently enough to maximize throughput. For all of these reasons the preferred way to mass import data into Redis is to generate a text file containing the Redis protocol, in raw format, with the commands needed to insert the required data.

For instance, if I need to generate a large data set with billions of keys in the form `keyN -> ValueN`, I will create a file containing the following commands in the Redis protocol format:

    SET Key0 Value0
    SET Key1 Value1
    ...
    SET KeyN ValueN

Once this file is created, the remaining action is to feed it to Redis as fast as possible. In the past the way to do this was to use `netcat` with the following command:

    (cat data.txt; sleep 10) | nc localhost 6379 > /dev/null

However this is not a very reliable way to perform mass import because netcat does not really know when all the data was transferred and can't check for errors. In Redis 2.6 or later the `redis-cli` utility supports a new mode called **pipe mode** that was designed to perform bulk loading.

Using the pipe mode, the command to run looks like the following:

    cat data.txt | redis-cli --pipe

That will produce an output similar to this:

    All data transferred. Waiting for the last reply...
    Last reply received from server.
    errors: 0, replies: 1000000

The redis-cli utility will also make sure to only redirect errors received from the Redis instance to the standard output.

### Generating Redis Protocol

The Redis protocol is extremely simple to generate and parse, and is [documented here]({{< relref "/develop/reference/protocol-spec" >}}). However, in order to generate protocol for the purpose of bulk loading you don't need to understand every detail of the protocol, just that every command is represented in the following way:

    *<args><cr><lf>
    $<len><cr><lf>
    <arg0><cr><lf>
    <arg1><cr><lf>
    ...
    <argN><cr><lf>

Where `<cr>` means "\r" (or ASCII character 13) and `<lf>` means "\n" (or ASCII character 10).
+ +For instance the command **SET key value** is represented by the following protocol: + + *3 + $3 + SET + $3 + key + $5 + value + +Or represented as a quoted string: + + "*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n" + +The file you need to generate for bulk loading is just composed of commands +represented in the above way, one after the other. + +The following Ruby function generates valid protocol: + + def gen_redis_proto(*cmd) + proto = "" + proto << "*"+cmd.length.to_s+"\r\n" + cmd.each{|arg| + proto << "$"+arg.to_s.bytesize.to_s+"\r\n" + proto << arg.to_s+"\r\n" + } + proto + end + + puts gen_redis_proto("SET","mykey","Hello World!").inspect + +Using the above function it is possible to easily generate the key value pairs +in the above example, with this program: + + (0...1000).each{|n| + STDOUT.write(gen_redis_proto("SET","Key#{n}","Value#{n}")) + } + +We can run the program directly in pipe to redis-cli in order to perform our +first mass import session. + + $ ruby proto.rb | redis-cli --pipe + All data transferred. Waiting for the last reply... + Last reply received from server. + errors: 0, replies: 1000 + +### How the pipe mode works under the hood + +The magic needed inside the pipe mode of redis-cli is to be as fast as netcat +and still be able to understand when the last reply was sent by the server +at the same time. + +This is obtained in the following way: + ++ redis-cli --pipe tries to send data as fast as possible to the server. ++ At the same time it reads data when available, trying to parse it. ++ Once there is no more data to read from stdin, it sends a special **ECHO** +command with a random 20 byte string: we are sure this is the latest command +sent, and we are sure we can match the reply checking if we receive the same +20 bytes as a bulk reply. ++ Once this special final command is sent, the code receiving replies starts +to match replies with these 20 bytes. When the matching reply is reached it +can exit with success. + +Using this trick we don't need to parse the protocol we send to the server +in order to understand how many commands we are sending, but just the replies. + +However while parsing the replies we take a counter of all the replies parsed +so that at the end we are able to tell the user the amount of commands +transferred to the server by the mass insert session. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Novel patterns for working with Redis data structures +linkTitle: Patterns +title: Redis programming patterns +weight: 6 +--- + +The following documents describe some novel development patterns you can use with Redis. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Building secondary indexes in Redis +linkTitle: Secondary indexing +title: Secondary indexing +weight: 1 +--- + +Redis is not exactly a key-value store, since values can be complex data structures. However it has an external key-value shell: at API level data is addressed by the key name. It is fair to say that, natively, Redis only offers *primary key access*. However since Redis is a data structures server, its capabilities can be used for indexing, in order to create secondary indexes of different kinds, including composite (multi-column) indexes. 
+ +This document explains how it is possible to create indexes in Redis using the following data structures: + +* Hashes and JSON documents, using a variety of field types; used in conjunction with the Redis query engine. +* Sorted sets to create secondary indexes by ID or other numerical fields. +* Sorted sets with lexicographical ranges for creating more advanced secondary indexes, composite indexes and graph traversal indexes. +* Sets for creating random indexes. +* Lists for creating simple iterable indexes and last N items indexes. +* Time series with labels. + +Implementing and maintaining indexes with Redis is an advanced topic, so most +users that need to perform complex queries on data should understand if they +are better served by a relational store. However often, especially in caching +scenarios, there is the explicit need to store indexed data into Redis in order to speedup common queries which require some form of indexing in order to be executed. + +## Hashes and JSON indexes + +The Redis query engine provides capabilities to index and query both hash and JSON keys using a variety of field types: + +* `TEXT` +* `TAG` +* `NUMERIC` +* `GEO` +* `VECTOR` +* `GEOSHAPE` + +Once hash or JSON keys have been indexed using the [`FT.CREATE`]({{< relref "commands/ft.create" >}}) command, all keys that use the prefix defined in the index can be queried using the [`FT.SEARCH`]({{< relref "commands/ft.search" >}}) and [`FT.AGGREGATE`]({{< relref "commands/ft.aggregate" >}}) commands. + +For more information on creating hash and JSON indexes, see the following pages. + +* [Hash indexes]({{< relref "/develop/interact/search-and-query/basic-constructs/schema-definition" >}}) +* [JSON indexes]({{< relref "/develop/interact/search-and-query/indexing" >}}) + +## Simple numerical indexes with sorted sets + +The simplest secondary index you can create with Redis is by using the +sorted set data type, which is a data structure representing a set of +elements ordered by a floating point number which is the *score* of +each element. Elements are ordered from the smallest to the highest score. + +Since the score is a double precision float, indexes you can build with +vanilla sorted sets are limited to things where the indexing field is a number +within a given range. + +The two commands to build these kind of indexes are [`ZADD`]({{< relref "/commands/zadd" >}}) and +[`ZRANGE`]({{< relref "/commands/zrange" >}}) with the `BYSCORE` argument to respectively add items and retrieve items within a +specified range. + +For instance, it is possible to index a set of person names by their +age by adding element to a sorted set. The element will be the name of the +person and the score will be the age. + + ZADD myindex 25 Manuel + ZADD myindex 18 Anna + ZADD myindex 35 Jon + ZADD myindex 67 Helen + +In order to retrieve all persons with an age between 20 and 40, the following +command can be used: + + ZRANGE myindex 20 40 BYSCORE + 1) "Manuel" + 2) "Jon" + +By using the **WITHSCORES** option of [`ZRANGE`]({{< relref "/commands/zrange" >}}) it is also possible +to obtain the scores associated with the returned elements. + +The [`ZCOUNT`]({{< relref "/commands/zcount" >}}) command can be used in order to retrieve the number of elements +within a given range, without actually fetching the elements, which is also +useful, especially given the fact the operation is executed in logarithmic +time regardless of the size of the range. 
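The same index can be built and queried from application code as well. Here is a brief sketch with the Ruby client (assuming a Redis server on the default local port; the key and sample data follow the example above):

    require 'redis'

    r = Redis.new                                # assumes localhost:6379

    # Build the index: the score is the age, the member is the name.
    { 'Manuel' => 25, 'Anna' => 18, 'Jon' => 35, 'Helen' => 67 }.each do |name, age|
      r.zadd('myindex', age, name)
    end

    r.zrangebyscore('myindex', 20, 40)           #=> ["Manuel", "Jon"]
    r.zcount('myindex', 20, 40)                  #=> 2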
+ +Ranges can be inclusive or exclusive, please refer to the [`ZRANGE`]({{< relref "/commands/zrange" >}}) +command documentation for more information. + +**Note**: Using the [`ZRANGE`]({{< relref "/commands/zrange" >}}) with the `BYSCORE` and `REV` arguments, it is possible to query a range in +reversed order, which is often useful when data is indexed in a given +direction (ascending or descending) but we want to retrieve information +the other way around. + +### Use object IDs as associated values + +In the above example we associated names to ages. However in general we +may want to index some field of an object which is stored elsewhere. +Instead of using the sorted set value directly to store the data associated +with the indexed field, it is possible to store just the ID of the object. + +For example I may have Redis hashes representing users. Each user is +represented by a single key, directly accessible by ID: + + HMSET user:1 id 1 username antirez ctime 1444809424 age 38 + HMSET user:2 id 2 username maria ctime 1444808132 age 42 + HMSET user:3 id 3 username jballard ctime 1443246218 age 33 + +If I want to create an index in order to query users by their age, I +could do: + + ZADD user.age.index 38 1 + ZADD user.age.index 42 2 + ZADD user.age.index 33 3 + +This time the value associated with the score in the sorted set is the +ID of the object. So once I query the index with [`ZRANGE`]({{< relref "/commands/zrange" >}}) with the `BYSCORE` argument, I'll +also have to retrieve the information I need with [`HGETALL`]({{< relref "/commands/hgetall" >}}) or similar +commands. The obvious advantage is that objects can change without touching +the index, as long as we don't change the indexed field. + +In the next examples we'll almost always use IDs as values associated with +the index, since this is usually the more sounding design, with a few +exceptions. + +### Update simple sorted set indexes + +Often we index things which change over time. In the above +example, the age of the user changes every year. In such a case it would +make sense to use the birth date as index instead of the age itself, +but there are other cases where we simply want some field to change from +time to time, and the index to reflect this change. + +The [`ZADD`]({{< relref "/commands/zadd" >}}) command makes updating simple indexes a very trivial operation +since re-adding back an element with a different score and the same value +will simply update the score and move the element at the right position, +so if the user `antirez` turned 39 years old, in order to update the +data in the hash representing the user, and in the index as well, we need +to execute the following two commands: + + HSET user:1 age 39 + ZADD user.age.index 39 1 + +The operation may be wrapped in a [`MULTI`]({{< relref "/commands/multi" >}})/[`EXEC`]({{< relref "/commands/exec" >}}) transaction in order to +make sure both fields are updated or none. + +### Turn multi-dimensional data into linear data + +Indexes created with sorted sets are able to index only a single numerical +value. Because of this you may think it is impossible to index something +which has multiple dimensions using this kind of indexes, but actually this +is not always true. If you can efficiently represent something +multi-dimensional in a linear way, they it is often possible to use a simple +sorted set for indexing. 
+ +For example the [Redis geo indexing API]({{< relref "/commands/geoadd" >}}) uses a sorted +set to index places by latitude and longitude using a technique called +[Geo hash](https://en.wikipedia.org/wiki/Geohash). The sorted set score +represents alternating bits of longitude and latitude, so that we map the +linear score of a sorted set to many small *squares* in the earth surface. +By doing an 8+1 style center plus neighborhoods search it is possible to +retrieve elements by radius. + +### Limits of the score + +Sorted set elements scores are double precision floats. It means that +they can represent different decimal or integer values with different +errors, because they use an exponential representation internally. +However what is interesting for indexing purposes is that the score is +always able to represent without any error numbers between -9007199254740992 +and 9007199254740992, which is `-/+ 2^53`. + +When representing much larger numbers, you need a different form of indexing +that is able to index numbers at any precision, called a lexicographical +index. + +## Time series indexes + +When you create a new time series using the [`TS.CREATE`]({{< relref "commands/ts.create" >}}) command, you can associate one or more `LABELS` with it. Each label is a name-value pair, where the both name and value are text. Labels serve as a secondary index that allows you to execute queries on groups of time series keys using various time series commands. + +See the [time series quickstart guide]({{< relref "/develop/data-types/timeseries/quickstart#labels" >}}) for an example of creating a time series with a label. + +The [`TS.MGET`]({{< relref "commands/ts.mget" >}}), [`TS.MRANGE`]({{< relref "commands/ts.mrange" >}}), and [`TS.MREVRANGE`]({{< relref "commands/ts.mrevrange" >}}) commands operate on multiple time series based on specified labels or using a label-related filter expression. The [`TS.QUERYINDEX`]({{< relref "commands/ts.queryindex" >}}) command returns all time series keys matching a given label-related filter expression. + +## Lexicographical indexes + +Redis sorted sets have an interesting property. When elements are added +with the same score, they are sorted lexicographically, comparing the +strings as binary data with the `memcmp()` function. + +For people that don't know the C language nor the `memcmp` function, what +it means is that elements with the same score are sorted comparing the +raw values of their bytes, byte after byte. If the first byte is the same, +the second is checked and so forth. If the common prefix of two strings is +the same then the longer string is considered the greater of the two, +so "foobar" is greater than "foo". + +There are commands such as [`ZRANGE`]({{< relref "/commands/zrange" >}}) and [`ZLEXCOUNT`]({{< relref "/commands/zlexcount" >}}) that +are able to query and count ranges in a lexicographically fashion, assuming +they are used with sorted sets where all the elements have the same score. + +This Redis feature is basically equivalent to a `b-tree` data structure which +is often used in order to implement indexes with traditional databases. +As you can guess, because of this, it is possible to use this Redis data +structure in order to implement pretty fancy indexes. + +Before we dive into using lexicographical indexes, let's check how +sorted sets behave in this special mode of operation. Since we need to +add elements with the same score, we'll always use the special score of +zero. 
+ + ZADD myindex 0 baaa + ZADD myindex 0 abbb + ZADD myindex 0 aaaa + ZADD myindex 0 bbbb + +Fetching all the elements from the sorted set immediately reveals that they +are ordered lexicographically. + + ZRANGE myindex 0 -1 + 1) "aaaa" + 2) "abbb" + 3) "baaa" + 4) "bbbb" + +Now we can use [`ZRANGE`]({{< relref "/commands/zrange" >}}) with the `BYLEX` argument in order to perform range queries. + + ZRANGE myindex [a (b BYLEX + 1) "aaaa" + 2) "abbb" + +Note that in the range queries we prefixed the `min` and `max` elements +identifying the range with the special characters `[` and `(`. +This prefixes are mandatory, and they specify if the elements +of the range are inclusive or exclusive. So the range `[a (b` means give me +all the elements lexicographically between `a` inclusive and `b` exclusive, +which are all the elements starting with `a`. + +There are also two more special characters indicating the infinitely negative +string and the infinitely positive string, which are `-` and `+`. + + ZRANGE myindex [b + BYLEX + 1) "baaa" + 2) "bbbb" + +That's it basically. Let's see how to use these features to build indexes. + +### A first example: completion + +An interesting application of indexing is completion. Completion is what +happens when you start typing your query into a search engine: the user +interface will anticipate what you are likely typing, providing common +queries that start with the same characters. + +A naive approach to completion is to just add every single query we +get from the user into the index. For example if the user searches `banana` +we'll just do: + + ZADD myindex 0 banana + +And so forth for each search query ever encountered. Then when we want to +complete the user input, we execute a range query using [`ZRANGE`]({{< relref "/commands/zrange" >}}) with the `BYLEX` argument. +Imagine the user is typing "bit" inside the search form, and we want to +offer possible search keywords starting for "bit". We send Redis a command +like that: + + ZRANGE myindex "[bit" "[bit\xff" BYLEX + +Basically we create a range using the string the user is typing right now +as start, and the same string plus a trailing byte set to 255, which is `\xff` in the example, as the end of the range. This way we get all the strings that start for the string the user is typing. + +Note that we don't want too many items returned, so we may use the **LIMIT** option in order to reduce the number of results. + +### Add frequency into the mix + +The above approach is a bit naive, because all the user searches are the same +in this way. In a real system we want to complete strings according to their +frequency: very popular searches will be proposed with a higher probability +compared to search strings typed very rarely. + +In order to implement something which depends on the frequency, and at the +same time automatically adapts to future inputs, by purging searches that +are no longer popular, we can use a very simple *streaming algorithm*. + +To start, we modify our index in order to store not just the search term, +but also the frequency the term is associated with. So instead of just adding +`banana` we add `banana:1`, where 1 is the frequency. + + ZADD myindex 0 banana:1 + +We also need logic in order to increment the index if the search term +already exists in the index, so what we'll actually do is something like +that: + + ZRANGE myindex "[banana:" + BYLEX LIMIT 0 1 + 1) "banana:1" + +This will return the single entry of `banana` if it exists. 
Then we can increment the associated frequency and send the following two commands:

    ZREM myindex banana:1
    ZADD myindex 0 banana:2

Note that because concurrent updates are possible, the whole read/remove/re-add sequence should be sent via a [Lua script]({{< relref "/commands/eval" >}}) instead, so that the script atomically gets the old count and re-adds the item with the incremented score (a sketch of such a script is shown further below).

So the result is that every time a user searches for `banana` our entry gets updated.

There is more: our goal is to keep only items that are actually searched frequently, so we need some form of purging. When we query the index in order to complete the user input, we may see something like this:

    ZRANGE myindex "[banana:" + BYLEX LIMIT 0 10
    1) "banana:123"
    2) "banaooo:1"
    3) "banned user:49"
    4) "banning:89"

Apparently nobody searches for "banaooo", for example, but the query was performed a single time, so we end up presenting it to the user.

This is what we can do. Out of the returned items, we pick a random one, decrement its score by one, and re-add it with the new score. However if the score reaches 0, we simply remove the item from the list. You can use much more advanced systems, but the idea is that in the long run the index will contain the top searches, and if the top searches change over time it will adapt automatically.

A refinement to this algorithm is to pick entries according to their weight: the higher the score, the less likely an entry is to be picked for decrementing or eviction.

### Normalize strings for case and accents

In the completion examples we always used lowercase strings. However reality is much more complex than that: languages have capitalized names, accents, and so forth.

One simple way to deal with these issues is to normalize the string the user searches for. Whether the user types "Banana", "BANANA" or "Ba'nana", we always turn it into "banana".

However sometimes we may like to present the user with the original item typed, even if we normalize the string for indexing. In order to do this, we change the format of the index so that instead of just storing `term:frequency` we store `normalized:frequency:original`, like in the following example:

    ZADD myindex 0 banana:273:Banana

Basically we add another field that we'll extract and use only for visualization. Ranges will always be computed using the normalized strings. This is a common trick which has multiple applications.

### Add auxiliary information in the index

When using a sorted set in a direct way, we have two different attributes for each object: the score, which we use as an index, and an associated value. When using lexicographical indexes instead, the score is always set to 0 and basically not used at all. We are left with a single string, which is the element itself.

Like we did in the previous completion examples, we are still able to store associated data using separators. For example we used the colon in order to add the frequency and the original word for completion.

In general we can add any kind of associated value to our indexing key.
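Coming back to the completion example, the read, remove and re-add cycle used to increment a term's frequency can be wrapped into a single server-side script so that concurrent updates do not race. The following is only a sketch: it follows the `term:frequency` format used above, assumes the term does not contain the `:` separator, and uses illustrative key and variable names together with the Ruby client's `EVAL` helper.

    require 'redis'

    INCREMENT_TERM = <<~'LUA'
      local prefix = ARGV[1] .. ':'
      local found = redis.call('ZRANGE', KEYS[1],
                               '[' .. prefix, '[' .. prefix .. '\255',
                               'BYLEX', 'LIMIT', 0, 1)
      local count = 1
      if found[1] then
        count = tonumber(string.sub(found[1], #prefix + 1)) + 1
        redis.call('ZREM', KEYS[1], found[1])
      end
      redis.call('ZADD', KEYS[1], 0, prefix .. count)
      return count
    LUA

    r = Redis.new
    r.eval(INCREMENT_TERM, keys: ['myindex'], argv: ['banana'])   #=> 1, then 2, ...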
+In order to use a lexicographical index to implement a simple key-value store +we just store the entry as `key:value`: + + ZADD myindex 0 mykey:myvalue + +And search for the key with: + + ZRANGE myindex [mykey: + BYLEX LIMIT 0 1 + 1) "mykey:myvalue" + +Then we extract the part after the colon to retrieve the value. +However a problem to solve in this case is collisions. The colon character +may be part of the key itself, so it must be chosen in order to never +collide with the key we add. + +Since lexicographical ranges in Redis are binary safe you can use any +byte or any sequence of bytes. However if you receive untrusted user +input, it is better to use some form of escaping in order to guarantee +that the separator will never happen to be part of the key. + +For example if you use two null bytes as separator `"\0\0"`, you may +want to always escape null bytes into two bytes sequences in your strings. + +### Numerical padding + +Lexicographical indexes may look like good only when the problem at hand +is to index strings. Actually it is very simple to use this kind of index +in order to perform indexing of arbitrary precision numbers. + +In the ASCII character set, digits appear in the order from 0 to 9, so +if we left-pad numbers with leading zeroes, the result is that comparing +them as strings will order them by their numerical value. + + ZADD myindex 0 00324823481:foo + ZADD myindex 0 12838349234:bar + ZADD myindex 0 00000000111:zap + + ZRANGE myindex 0 -1 + 1) "00000000111:zap" + 2) "00324823481:foo" + 3) "12838349234:bar" + +We effectively created an index using a numerical field which can be as +big as we want. This also works with floating point numbers of any precision +by making sure we left pad the numerical part with leading zeroes and the +decimal part with trailing zeroes like in the following list of numbers: + + 01000000000000.11000000000000 + 01000000000000.02200000000000 + 00000002121241.34893482930000 + 00999999999999.00000000000000 + +### Use numbers in binary form + +Storing numbers in decimal may use too much memory. An alternative approach +is just to store numbers, for example 128 bit integers, directly in their +binary form. However for this to work, you need to store the numbers in +*big endian format*, so that the most significant bytes are stored before +the least significant bytes. This way when Redis compares the strings with +`memcmp()`, it will effectively sort the numbers by their value. + +Keep in mind that data stored in binary format is less observable for +debugging, harder to parse and export. So it is definitely a trade off. + +## Composite indexes + +So far we explored ways to index single fields. However we all know that +SQL stores are able to create indexes using multiple fields. For example +I may index products in a very large store by room number and price. + +I need to run queries in order to retrieve all the products in a given +room having a given price range. What I can do is to index each product +in the following way: + + ZADD myindex 0 0056:0028.44:90 + ZADD myindex 0 0034:0011.00:832 + +Here the fields are `room:price:product_id`. I used just four digits padding +in the example for simplicity. The auxiliary data (the product ID) does not +need any padding. + +With an index like that, to get all the products in room 56 having a price +between 10 and 30 dollars is very easy. We can just run the following +command: + + ZRANGE myindex [0056:0010.00 [0056:0030.00 BYLEX + +The above is called a composed index. 
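As a small illustration, the padded entries used above can be produced with an ordinary format string; the helper is hypothetical and the field widths simply match the example:

    # room padded to 4 digits, price to 7 characters with 2 decimals,
    # product ID appended unpadded.
    def product_entry(room, price, product_id)
      format('%04d:%07.2f:%s', room, price, product_id)
    end

    product_entry(56, 28.44, 90)    #=> "0056:0028.44:90"
    product_entry(34, 11.00, 832)   #=> "0034:0011.00:832"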
The effectiveness of such an index depends on the order of the fields and the queries I want to run. For example the above index cannot be used efficiently to get all the products within a given price range regardless of the room number. However I can use the leading field, the room, to run queries regardless of the price, like *give me all the products in room 44*.

Composite indexes are very powerful, and are used in traditional stores in order to optimize complex queries. In Redis they could be useful both to implement a very fast in-memory Redis index of something stored in a traditional data store, or to directly index Redis data.

## Update lexicographical indexes

The value of the index in a lexicographical index can get pretty fancy and hard or slow to rebuild from what we store about the object. So one approach to simplify the handling of the index, at the cost of using more memory, is to also keep, alongside the sorted set representing the index, a hash mapping the object ID to the current index value.

So for example, when we index we also add to a hash:

    MULTI
    ZADD myindex 0 0056:0028.44:90
    HSET index.content 90 0056:0028.44:90
    EXEC

This is not always needed, but simplifies the operations of updating the index. In order to remove the old information we indexed for the object ID 90, regardless of the *current* field values of the object, we just have to retrieve the hash value by object ID and [`ZREM`]({{< relref "/commands/zrem" >}}) it in the sorted set view.

## Represent and query graphs using a hexastore

One cool thing about composite indexes is that they are handy in order to represent graphs, using a data structure which is called a [Hexastore](http://www.vldb.org/pvldb/vol1/1453965.pdf).

The hexastore provides a representation for relations between objects, formed by a *subject*, a *predicate* and an *object*. A simple relation between objects could be:

    antirez is-friend-of matteocollina

In order to represent this relation I can store the following element in my lexicographical index:

    ZADD myindex 0 spo:antirez:is-friend-of:matteocollina

Note that I prefixed my item with the string **spo**. It means that the item represents a subject, predicate, object relation.

I can add 5 more entries for the same relation, but in a different order:

    ZADD myindex 0 sop:antirez:matteocollina:is-friend-of
    ZADD myindex 0 ops:matteocollina:is-friend-of:antirez
    ZADD myindex 0 osp:matteocollina:antirez:is-friend-of
    ZADD myindex 0 pso:is-friend-of:antirez:matteocollina
    ZADD myindex 0 pos:is-friend-of:matteocollina:antirez

Now things start to get interesting, and I can query the graph in many different ways. For example, who are all the people `antirez` *is friend of*?

    ZRANGE myindex "[spo:antirez:is-friend-of:" "[spo:antirez:is-friend-of:\xff" BYLEX
    1) "spo:antirez:is-friend-of:matteocollina"
    2) "spo:antirez:is-friend-of:wonderwoman"
    3) "spo:antirez:is-friend-of:spiderman"

Or, what are all the relationships `antirez` and `matteocollina` have where the first is the subject and the second is the object?

    ZRANGE myindex "[sop:antirez:matteocollina:" "[sop:antirez:matteocollina:\xff" BYLEX
    1) "sop:antirez:matteocollina:is-friend-of"
    2) "sop:antirez:matteocollina:was-at-conference-with"
    3) "sop:antirez:matteocollina:talked-with"

By combining different queries, I can ask fancy questions.
For example: +*Who are all my friends that, like beer, live in Barcelona, and matteocollina consider friends as well?* +To get this information I start with an `spo` query to find all the people +I'm friend with. Then for each result I get I perform an `spo` query +to check if they like beer, removing the ones for which I can't find +this relation. I do it again to filter by city. Finally I perform an `ops` +query to find, of the list I obtained, who is considered friend by +matteocollina. + +Make sure to check [Matteo Collina's slides about Levelgraph](http://nodejsconfit.levelgraph.io/) in order to better understand these ideas. + +## Multi-dimensional indexes + +A more complex type of index is an index that allows you to perform queries +where two or more variables are queried at the same time for specific +ranges. For example I may have a data set representing persons age and +salary, and I want to retrieve all the people between 50 and 55 years old +having a salary between 70000 and 85000. + +This query may be performed with a multi column index, but this requires +us to select the first variable and then scan the second, which means we +may do a lot more work than needed. It is possible to perform these kinds of +queries involving multiple variables using different data structures. +For example, multi-dimensional trees such as *k-d trees* or *r-trees* are +sometimes used. Here we'll describe a different way to index data into +multiple dimensions, using a representation trick that allows us to perform +the query in a very efficient way using Redis lexicographical ranges. + +Let's say we have points in the space, which represent our data samples, where `x` and `y` are our coordinates. The max value of both variables is 400. + +In the next figure, the blue box represents our query. We want all the points where `x` is between 50 and 100, and where `y` is between 100 and 300. + +![Points in the space](2idx_0.png) + +In order to represent data that makes these kinds of queries fast to perform, +we start by padding our numbers with 0. So for example imagine we want to +add the point 10,25 (x,y) to our index. Given that the maximum range in the +example is 400 we can just pad to three digits, so we obtain: + + x = 010 + y = 025 + +Now what we do is to interleave the digits, taking the leftmost digit +in x, and the leftmost digit in y, and so forth, in order to create a single +number: + + 001205 + +This is our index, however in order to more easily reconstruct the original +representation, if we want (at the cost of space), we may also add the +original values as additional columns: + + 001205:10:25 + +Now, let's reason about this representation and why it is useful in the +context of range queries. For example let's take the center of our blue +box, which is at `x=75` and `y=200`. We can encode this number as we did +earlier by interleaving the digits, obtaining: + + 027050 + +What happens if we substitute the last two digits respectively with 00 and 99? +We obtain a range which is lexicographically continuous: + + 027000 to 027099 + +What this maps to is to a square representing all values where the `x` +variable is between 70 and 79, and the `y` variable is between 200 and 209. +To identify this specific area, we can write random points in that interval. + +![Small area](2idx_1.png) + +So the above lexicographic query allows us to easily query for points in +a specific square in the picture. 
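The interleaving step itself is mechanical. Here is a tiny sketch of the decimal variant just described (three digit padding; the helper name is illustrative):

    # Interleave the digits of x and y after zero-padding both to 3 digits.
    def interleave_decimal(x, y)
      xs = x.to_s.rjust(3, '0')
      ys = y.to_s.rjust(3, '0')
      xs.chars.zip(ys.chars).flatten.join
    end

    interleave_decimal(10, 25)    #=> "001205"
    interleave_decimal(75, 200)   #=> "027050"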
However the square may be too small for +the box we are searching, so that too many queries are needed. +So we can do the same but instead of replacing the last two digits with 00 +and 99, we can do it for the last four digits, obtaining the following +range: + + 020000 029999 + +This time the range represents all the points where `x` is between 0 and 99 +and `y` is between 200 and 299. Drawing random points in this interval +shows us this larger area. + +![Large area](2idx_2.png) + +So now our area is too big for our query, and still our search box is +not completely included. We need more granularity, but we can easily obtain +it by representing our numbers in binary form. This time, when we replace +digits instead of getting squares which are ten times bigger, we get squares +which are just two times bigger. + +Our numbers in binary form, assuming we need just 9 bits for each variable +(in order to represent numbers up to 400 in value) would be: + + x = 75 -> 001001011 + y = 200 -> 011001000 + +So by interleaving digits, our representation in the index would be: + + 000111000011001010:75:200 + +Let's see what are our ranges as we substitute the last 2, 4, 6, 8, ... +bits with 0s ad 1s in the interleaved representation: + + 2 bits: x between 74 and 75, y between 200 and 201 (range=2) + 4 bits: x between 72 and 75, y between 200 and 203 (range=4) + 6 bits: x between 72 and 79, y between 200 and 207 (range=8) + 8 bits: x between 64 and 79, y between 192 and 207 (range=16) + +And so forth. Now we have definitely better granularity! +As you can see substituting N bits from the index gives us +search boxes of side `2^(N/2)`. + +So what we do is check the dimension where our search box is smaller, +and check the nearest power of two to this number. Our search box +was 50,100 to 100,300, so it has a width of 50 and a height of 200. +We take the smaller of the two, 50, and check the nearest power of two +which is 64. 64 is 2^6, so we would work with indexes obtained replacing +the latest 12 bits from the interleaved representation (so that we end +replacing just 6 bits of each variable). + +However single squares may not cover all our search, so we may need more. +What we do is to start with the left bottom corner of our search box, +which is 50,100, and find the first range by substituting the last 6 bits +in each number with 0. Then we do the same with the right top corner. + +With two trivial nested for loops where we increment only the significant +bits, we can find all the squares between these two. For each square we +convert the two numbers into our interleaved representation, and create +the range using the converted representation as our start, and the same +representation but with the latest 12 bits turned on as end range. + +For each square found we perform our query and get the elements inside, +removing the elements which are outside our search box. + +Turning this into code is simple. Here is a Ruby example: + + def spacequery(x0,y0,x1,y1,exp) + bits=exp*2 + x_start = x0/(2**exp) + x_end = x1/(2**exp) + y_start = y0/(2**exp) + y_end = y1/(2**exp) + (x_start..x_end).each{|x| + (y_start..y_end).each{|y| + x_range_start = x*(2**exp) + x_range_end = x_range_start | ((2**exp)-1) + y_range_start = y*(2**exp) + y_range_end = y_range_start | ((2**exp)-1) + puts "#{x},#{y} x from #{x_range_start} to #{x_range_end}, y from #{y_range_start} to #{y_range_end}" + + # Turn it into interleaved form for ZRANGE query. 
+ # We assume we need 9 bits for each integer, so the final + # interleaved representation will be 18 bits. + xbin = x_range_start.to_s(2).rjust(9,'0') + ybin = y_range_start.to_s(2).rjust(9,'0') + s = xbin.split("").zip(ybin.split("")).flatten.compact.join("") + # Now that we have the start of the range, calculate the end + # by replacing the specified number of bits from 0 to 1. + e = s[0..-(bits+1)]+("1"*bits) + puts "ZRANGE myindex [#{s} [#{e} BYLEX" + } + } + end + + spacequery(50,100,100,300,6) + +While non immediately trivial this is a very useful indexing strategy that +in the future may be implemented in Redis in a native way. +For now, the good thing is that the complexity may be easily encapsulated +inside a library that can be used in order to perform indexing and queries. +One example of such library is [Redimension](https://github.com/antirez/redimension), a proof of concept Ruby library which indexes N-dimensional data inside Redis using the technique described here. + +## Multi-dimensional indexes with negative or floating point numbers + +The simplest way to represent negative values is just to work with unsigned +integers and represent them using an offset, so that when you index, before +translating numbers in the indexed representation, you add the absolute value +of your smaller negative integer. + +For floating point numbers, the simplest approach is probably to convert them +to integers by multiplying the integer for a power of ten proportional to the +number of digits after the dot you want to retain. + +## Non-range indexes + +So far we checked indexes which are useful to query by range or by single +item. However other Redis data structures such as Sets or Lists can be used +in order to build other kind of indexes. They are very commonly used but +maybe we don't always realize they are actually a form of indexing. + +For instance I can index object IDs into a Set data type in order to use +the *get random elements* operation via [`SRANDMEMBER`]({{< relref "/commands/srandmember" >}}) in order to retrieve +a set of random objects. Sets can also be used to check for existence when +all I need is to test if a given item exists or not or has a single boolean +property or not. + +Similarly lists can be used in order to index items into a fixed order. +I can add all my items into a Redis list and rotate the list with +[`RPOPLPUSH`]({{< relref "/commands/rpoplpush" >}}) using the same key name as source and destination. This is useful +when I want to process a given set of items again and again forever in the +same order. Think of an RSS feed system that needs to refresh the local copy +periodically. + +Another popular index often used with Redis is a **capped list**, where items +are added with [`LPUSH`]({{< relref "/commands/lpush" >}}) and trimmed with [`LTRIM`]({{< relref "/commands/ltrim" >}}), in order to create a view +with just the latest N items encountered, in the same order they were +seen. + +## Index inconsistency + +Keeping the index updated may be challenging, in the course of months +or years it is possible that inconsistencies are added because of software +bugs, network partitions or other events. + +Different strategies could be used. If the index data is outside Redis +*read repair* can be a solution, where data is fixed in a lazy way when +it is requested. 
When we index data which is stored in Redis itself +the [`SCAN`]({{< relref "/commands/scan" >}}) family of commands can be used in order to verify, update or +rebuild the index from scratch, incrementally. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: How to optimize round-trip times by batching Redis commands +linkTitle: Pipelining +title: Redis pipelining +weight: 2 +--- + +Redis pipelining is a technique for improving performance by issuing multiple commands at once without waiting for the response to each individual command. Pipelining is supported by most Redis clients. This document describes the problem that pipelining is designed to solve and how pipelining works in Redis. + +## Request/Response protocols and round-trip time (RTT) + +Redis is a TCP server using the client-server model and what is called a *Request/Response* protocol. + +This means that usually a request is accomplished with the following steps: + +* The client sends a query to the server, and reads from the socket, usually in a blocking way, for the server response. +* The server processes the command and sends the response back to the client. + +So for instance a four commands sequence is something like this: + + * *Client:* INCR X + * *Server:* 1 + * *Client:* INCR X + * *Server:* 2 + * *Client:* INCR X + * *Server:* 3 + * *Client:* INCR X + * *Server:* 4 + +Clients and Servers are connected via a network link. +Such a link can be very fast (a loopback interface) or very slow (a connection established over the Internet with many hops between the two hosts). +Whatever the network latency is, it takes time for the packets to travel from the client to the server, and back from the server to the client to carry the reply. + +This time is called RTT (Round Trip Time). +It's easy to see how this can affect performance when a client needs to perform many requests in a row (for instance adding many elements to the same list, or populating a database with many keys). +For instance if the RTT time is 250 milliseconds (in the case of a very slow link over the Internet), even if the server is able to process 100k requests per second, we'll be able to process at max four requests per second. + +If the interface used is a loopback interface, the RTT is much shorter, typically sub-millisecond, but even this will add up to a lot if you need to perform many writes in a row. + +Fortunately there is a way to improve this use case. + +## Redis Pipelining + +A Request/Response server can be implemented so that it is able to process new requests even if the client hasn't already read the old responses. +This way it is possible to send *multiple commands* to the server without waiting for the replies at all, and finally read the replies in a single step. + +This is called pipelining, and is a technique widely in use for many decades. +For instance many POP3 protocol implementations already support this feature, dramatically speeding up the process of downloading new emails from the server. + +Redis has supported pipelining since its early days, so whatever version you are running, you can use pipelining with Redis. +This is an example using the raw netcat utility: + +```bash +$ (printf "PING\r\nPING\r\nPING\r\n"; sleep 1) | nc localhost 6379 ++PONG ++PONG ++PONG +``` + +This time we don't pay the cost of RTT for every call, but just once for the three commands. 
+ +To be explicit, with pipelining the order of operations of our very first example will be the following: + + * *Client:* INCR X + * *Client:* INCR X + * *Client:* INCR X + * *Client:* INCR X + * *Server:* 1 + * *Server:* 2 + * *Server:* 3 + * *Server:* 4 + +> **IMPORTANT NOTE**: While the client sends commands using pipelining, the server will be forced to queue the replies, using memory. So if you need to send a lot of commands with pipelining, it is better to send them as batches each containing a reasonable number, for instance 10k commands, read the replies, and then send another 10k commands again, and so forth. The speed will be nearly the same, but the additional memory used will be at most the amount needed to queue the replies for these 10k commands. + +## It's not just a matter of RTT + +Pipelining is not just a way to reduce the latency cost associated with the +round trip time, it actually greatly improves the number of operations +you can perform per second in a given Redis server. +This is because without using pipelining, serving each command is very cheap from +the point of view of accessing the data structures and producing the reply, +but it is very costly from the point of view of doing the socket I/O. This +involves calling the `read()` and `write()` syscall, that means going from user +land to kernel land. +The context switch is a huge speed penalty. + +When pipelining is used, many commands are usually read with a single `read()` +system call, and multiple replies are delivered with a single `write()` system +call. Consequently, the number of total queries performed per second +initially increases almost linearly with longer pipelines, and eventually +reaches 10 times the baseline obtained without pipelining, as shown in this figure. + +![Pipeline size and IOPs](pipeline_iops.png) + +## A real world code example + + +In the following benchmark we'll use the Redis Ruby client, supporting pipelining, to test the speed improvement due to pipelining: + +```ruby +require 'rubygems' +require 'redis' + +def bench(descr) + start = Time.now + yield + puts "#{descr} #{Time.now - start} seconds" +end + +def without_pipelining + r = Redis.new + 10_000.times do + r.ping + end +end + +def with_pipelining + r = Redis.new + r.pipelined do |rp| + 10_000.times do + rp.ping + end + end +end + +bench('without pipelining') do + without_pipelining +end +bench('with pipelining') do + with_pipelining +end +``` + +Running the above simple script yields the following figures on my Mac OS X system, running over the loopback interface, where pipelining will provide the smallest improvement as the RTT is already pretty low: + +``` +without pipelining 1.185238 seconds +with pipelining 0.250783 seconds +``` +As you can see, using pipelining, we improved the transfer by a factor of five. + +## Pipelining vs Scripting + +Using [Redis scripting]({{< relref "/commands/eval" >}}), available since Redis 2.6, a number of use cases for pipelining can be addressed more efficiently using scripts that perform a lot of the work needed at the server side. +A big advantage of scripting is that it is able to both read and write data with minimal latency, making operations like *read, compute, write* very fast (pipelining can't help in this scenario since the client needs the reply of the read command before it can call the write command). + +Sometimes the application may also want to send [`EVAL`]({{< relref "/commands/eval" >}}) or [`EVALSHA`]({{< relref "/commands/evalsha" >}}) commands in a pipeline. 
+This is entirely possible and Redis explicitly supports it with the [SCRIPT LOAD]({{< relref "/commands/script-load" >}}) command (it guarantees that [`EVALSHA`]({{< relref "/commands/evalsha" >}}) can be called without the risk of failing). + +## Appendix: Why are busy loops slow even on the loopback interface? + +Even with all the background covered in this page, you may still wonder why +a Redis benchmark like the following (in pseudo code), is slow even when +executed in the loopback interface, when the server and the client are running +in the same physical machine: + +```sh +FOR-ONE-SECOND: + Redis.SET("foo","bar") +END +``` + +After all, if both the Redis process and the benchmark are running in the same +box, isn't it just copying messages in memory from one place to another without +any actual latency or networking involved? + +The reason is that processes in a system are not always running, actually it is +the kernel scheduler that lets the process run. +So, for instance, when the benchmark is allowed to run, it reads the reply from the Redis server (related to the last command executed), and writes a new command. +The command is now in the loopback interface buffer, but in order to be read by the server, the kernel should schedule the server process (currently blocked in a system call) +to run, and so forth. +So in practical terms the loopback interface still involves network-like latency, because of how the kernel scheduler works. + +Basically a busy loop benchmark is the silliest thing that can be done when +metering performances on a networked server. The wise thing is just avoiding +benchmarking in this way. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Managing keys in Redis: Key expiration, scanning, altering and querying + the key space + + ' +linkTitle: Keyspace +title: Keyspace +weight: 1 +--- + +Redis keys are binary safe; this means that you can use any binary sequence as a +key, from a string like "foo" to the content of a JPEG file. +The empty string is also a valid key. + +A few other rules about keys: + +* Very long keys are not a good idea. For instance a key of 1024 bytes is a bad + idea not only memory-wise, but also because the lookup of the key in the + dataset may require several costly key-comparisons. Even when the task at hand + is to match the existence of a large value, hashing it (for example + with SHA1) is a better idea, especially from the perspective of memory + and bandwidth. +* Very short keys are often not a good idea. There is little point in writing + "u1000flw" as a key if you can instead write "user:1000:followers". The latter + is more readable and the added space is minor compared to the space used by + the key object itself and the value object. While short keys will obviously + consume a bit less memory, your job is to find the right balance. +* Try to stick with a schema. For instance "object-type:id" is a good + idea, as in "user:1000". Dots or dashes are often used for multi-word + fields, as in "comment:4321:reply.to" or "comment:4321:reply-to". +* The maximum allowed key size is 512 MB. + +## Altering and querying the key space + +There are commands that are not defined on particular types, but are useful +in order to interact with the space of keys, and thus, can be used with +keys of any type. 
+ +For example the [`EXISTS`]({{< relref "/commands/exists" >}}) command returns 1 or 0 to signal if a given key +exists or not in the database, while the [`DEL`]({{< relref "/commands/del" >}}) command deletes a key +and associated value, whatever the value is. + + > set mykey hello + OK + > exists mykey + (integer) 1 + > del mykey + (integer) 1 + > exists mykey + (integer) 0 + +From the examples you can also see how [`DEL`]({{< relref "/commands/del" >}}) itself returns 1 or 0 depending on whether +the key was removed (it existed) or not (there was no such key with that +name). + +There are many key space related commands, but the above two are the +essential ones together with the [`TYPE`]({{< relref "/commands/type" >}}) command, which returns the kind +of value stored at the specified key: + + > set mykey x + OK + > type mykey + string + > del mykey + (integer) 1 + > type mykey + none + +## Key expiration + +Before moving on, we should look at an important Redis feature that works regardless of the type of value you're storing: key expiration. Key expiration lets you set a timeout for a key, also known as a "time to live", or "TTL". When the time to live elapses, the key is automatically destroyed. + +A few important notes about key expiration: + +* They can be set both using seconds or milliseconds precision. +* However the expire time resolution is always 1 millisecond. +* Information about expires are replicated and persisted on disk, the time virtually passes when your Redis server remains stopped (this means that Redis saves the date at which a key will expire). + +Use the [`EXPIRE`]({{< relref "/commands/expire" >}}) command to set a key's expiration: + + > set key some-value + OK + > expire key 5 + (integer) 1 + > get key (immediately) + "some-value" + > get key (after some time) + (nil) + +The key vanished between the two [`GET`]({{< relref "/commands/get" >}}) calls, since the second call was +delayed more than 5 seconds. In the example above we used [`EXPIRE`]({{< relref "/commands/expire" >}}) in +order to set the expire (it can also be used in order to set a different +expire to a key already having one, like [`PERSIST`]({{< relref "/commands/persist" >}}) can be used in order +to remove the expire and make the key persistent forever). However we +can also create keys with expires using other Redis commands. For example +using [`SET`]({{< relref "/commands/set" >}}) options: + + > set key 100 ex 10 + OK + > ttl key + (integer) 9 + +The example above sets a key with the string value `100`, having an expire +of ten seconds. Later the [`TTL`]({{< relref "/commands/ttl" >}}) command is called in order to check the +remaining time to live for the key. + +In order to set and check expires in milliseconds, check the [`PEXPIRE`]({{< relref "/commands/pexpire" >}}) and +the [`PTTL`]({{< relref "/commands/pttl" >}}) commands, and the full list of [`SET`]({{< relref "/commands/set" >}}) options. + +## Navigating the keyspace + +### Scan +To incrementally iterate over the keys in a Redis database in an efficient manner, you can use the [`SCAN`]({{< relref "/commands/scan" >}}) command. 
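+
+As a brief illustration (this sketch is not part of the original text and assumes a local Redis instance; the key pattern is just an example), redis-py's `scan_iter()` helper wraps [`SCAN`]({{< relref "/commands/scan" >}}) and manages the cursor for you:
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+
+# scan_iter() issues repeated SCAN calls under the hood and hides the cursor.
+for key in r.scan_iter(match="user:*", count=100):
+    print(key)
+
+# The raw command returns a cursor plus a batch of keys; iteration is complete
+# when the server returns a cursor of 0 again.
+cursor, keys = r.scan(cursor=0, match="user:*", count=100)
+```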
+
+Since [`SCAN`]({{< relref "/commands/scan" >}}) allows for incremental iteration, returning only a small number of elements per call, it can be used in production without the downside of commands like [`KEYS`]({{< relref "/commands/keys" >}}) or [`SMEMBERS`]({{< relref "/commands/smembers" >}}) that may block the server for a long time (even several seconds) when called against big collections of keys or elements.
+
+However, while blocking commands like [`SMEMBERS`]({{< relref "/commands/smembers" >}}) are able to provide all the elements that are part of a Set at a given moment,
+the [`SCAN`]({{< relref "/commands/scan" >}}) family of commands only offers limited guarantees about the returned elements, since the collection that we incrementally iterate over can change during the iteration process.
+
+### Keys
+
+Another way to iterate over the keyspace is to use the [`KEYS`]({{< relref "/commands/keys" >}}) command, but this approach should be used with care, since [`KEYS`]({{< relref "/commands/keys" >}}) will block the Redis server until all keys are returned.
+
+**Warning**: consider [`KEYS`]({{< relref "/commands/keys" >}}) as a command that should only be used in production
+environments with extreme care.
+
+[`KEYS`]({{< relref "/commands/keys" >}}) may ruin performance when it is executed against large databases.
+This command is intended for debugging and special operations, such as changing
+your keyspace layout.
+Don't use [`KEYS`]({{< relref "/commands/keys" >}}) in your regular application code.
+If you're looking for a way to find keys in a subset of your keyspace, consider
+using [`SCAN`]({{< relref "/commands/scan" >}}) or [sets][tdts].
+
+[tdts]: /develop/data-types#sets
+
+Supported glob-style patterns:
+
+* `h?llo` matches `hello`, `hallo` and `hxllo`
+* `h*llo` matches `hllo` and `heeeello`
+* `h[ae]llo` matches `hello` and `hallo`, but not `hillo`
+* `h[^e]llo` matches `hallo`, `hbllo`, ... but not `hello`
+* `h[a-b]llo` matches `hallo` and `hbllo`
+
+Use `\` to escape special characters if you want to match them verbatim.
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: 'Monitor changes to Redis keys and values in real time
+
+  '
+linkTitle: Keyspace notifications
+title: Redis keyspace notifications
+weight: 4
+---
+
+Keyspace notifications allow clients to subscribe to Pub/Sub channels in order
+to receive events affecting the Redis data set in some way.
+
+Examples of events that can be received are:
+
+* All the commands affecting a given key.
+* All the keys receiving an LPUSH operation.
+* All the keys expiring in database 0.
+
+Note: Redis Pub/Sub is *fire and forget*; that is, if your Pub/Sub client disconnects,
+and reconnects later, all the events delivered during the time the client was
+disconnected are lost.
+
+### Type of events
+
+Keyspace notifications are implemented by sending two distinct types of events
+for every operation affecting the Redis data space. 
For instance a [`DEL`]({{< relref "/commands/del" >}})
+operation targeting the key named `mykey` in database `0` will trigger
+the delivery of two messages, exactly equivalent to the following two
+[`PUBLISH`]({{< relref "/commands/publish" >}}) commands:
+
+    PUBLISH __keyspace@0__:mykey del
+    PUBLISH __keyevent@0__:del mykey
+
+The first channel carries all the events targeting
+the key `mykey`, while the second channel carries only `del` operation
+events for the key `mykey`.
+
+The first kind of event, with the `keyspace` prefix in the channel name, is called
+a **Key-space notification**, while the second, with the `keyevent` prefix,
+is called a **Key-event notification**.
+
+In the previous example a `del` event was generated for the key `mykey` resulting
+in two messages:
+
+* The Key-space channel receives as message the name of the event.
+* The Key-event channel receives as message the name of the key.
+
+It is possible to enable only one kind of notification in order to deliver
+just the subset of events we are interested in.
+
+### Configuration
+
+By default keyspace event notifications are disabled because the feature,
+while lightweight, uses some CPU power. Notifications are enabled
+using the `notify-keyspace-events` parameter of redis.conf, or at runtime via **CONFIG SET**.
+
+Setting the parameter to the empty string disables notifications.
+In order to enable the feature a non-empty string is used, composed of multiple
+characters, where every character has a special meaning according to the
+following table:
+
+    K     Keyspace events, published with __keyspace@__ prefix.
+    E     Keyevent events, published with __keyevent@__ prefix.
+    g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
+    $     String commands
+    l     List commands
+    s     Set commands
+    h     Hash commands
+    z     Sorted set commands
+    t     Stream commands
+    d     Module key type events
+    x     Expired events (events generated every time a key expires)
+    e     Evicted events (events generated when a key is evicted for maxmemory)
+    m     Key miss events (events generated when a key that doesn't exist is accessed)
+    n     New key events (Note: not included in the 'A' class)
+    A     Alias for "g$lshztxed", so that the "AKE" string means all the events except "m" and "n".
+
+At least `K` or `E` should be present in the string, otherwise no event
+will be delivered regardless of the rest of the string.
+
+For instance to enable just Key-space events for lists, the configuration
+parameter must be set to `Kl`, and so forth.
+
+You can use the string `KEA` to enable most types of events.
+
+### Events generated by different commands
+
+Different commands generate different kinds of events according to the following list.
+* [`APPEND`]({{< relref "/commands/append" >}}) generates an `append` event.
+* [`COPY`]({{< relref "/commands/copy" >}}) generates a `copy_to` event.
+* [`DEL`]({{< relref "/commands/del" >}}) generates a `del` event for every deleted key.
+* [`EXPIRE`]({{< relref "/commands/expire" >}}) and all its variants ([`PEXPIRE`]({{< relref "/commands/pexpire" >}}), [`EXPIREAT`]({{< relref "/commands/expireat" >}}), [`PEXPIREAT`]({{< relref "/commands/pexpireat" >}})) generate an `expire` event when called with a positive timeout (or a future timestamp). Note that when these commands are called with a negative timeout value or timestamp in the past, the key is deleted and only a `del` event is generated instead. 
+* [`HDEL`]({{< relref "/commands/hdel" >}}) generates a single `hdel` event, and an additional `del` event if the resulting hash is empty and the key is removed. +* [`HEXPIRE`]({{< relref "/commands/hexpire" >}}) and all its variants ([`HEXPIREAT`]({{< relref "/commands/hpexpireat" >}}), [`HPEXPIRE`]({{< relref "/commands/hpexpire" >}}), [`HPEXPIREAT`]({{< relref "/commands/hpexpireat" >}})) generate `hexpire` events. Furthermore, `hexpired` events are generated when fields expire. +* [`HINCRBYFLOAT`]({{< relref "/commands/hincrbyfloat" >}}) generates an `hincrbyfloat` event. +* [`HINCRBY`]({{< relref "/commands/hincrby" >}}) generates an `hincrby` event. +* [`HPERSIST`]({{< relref "/commands/hpersist" >}}) generates an `hpersist` event. +* [`HSET`]({{< relref "/commands/hset" >}}), [`HSETNX`]({{< relref "/commands/hsetnx" >}}) and [`HMSET`]({{< relref "/commands/hmset" >}}) all generate a single `hset` event. +* [`INCRBYFLOAT`]({{< relref "/commands/incrbyfloat" >}}) generates an `incrbyfloat` events. +* [`INCR`]({{< relref "/commands/incr" >}}), [`DECR`]({{< relref "/commands/decr" >}}), [`INCRBY`]({{< relref "/commands/incrby" >}}), [`DECRBY`]({{< relref "/commands/decrby" >}}) commands all generate `incrby` events. +* [`LINSERT`]({{< relref "/commands/linsert" >}}) generates an `linsert` event. +* [`LMOVE`]({{< relref "/commands/lmove" >}}) and [`BLMOVE`]({{< relref "/commands/blmove" >}}) generate an `lpop`/`rpop` event (depending on the wherefrom argument) and an `lpush`/`rpush` event (depending on the whereto argument). In both cases the order is guaranteed (the `lpush`/`rpush` event will always be delivered after the `lpop`/`rpop` event). Additionally a `del` event will be generated if the resulting list is zero length and the key is removed. +* [`LPOP`]({{< relref "/commands/lpop" >}}) generates an `lpop` event. Additionally a `del` event is generated if the key is removed because the last element from the list was popped. +* [`LPUSH`]({{< relref "/commands/lpush" >}}) and [`LPUSHX`]({{< relref "/commands/lpushx" >}}) generates a single `lpush` event, even in the variadic case. +* [`LREM`]({{< relref "/commands/lrem" >}}) generates an `lrem` event, and additionally a `del` event if the resulting list is empty and the key is removed. +* [`LSET`]({{< relref "/commands/lset" >}}) generates an `lset` event. +* [`LTRIM`]({{< relref "/commands/ltrim" >}}) generates an `ltrim` event, and additionally a `del` event if the resulting list is empty and the key is removed. +* [`MIGRATE`]({{< relref "/commands/migrate" >}}) generates a `del` event if the source key is removed. +* [`MOVE`]({{< relref "/commands/move" >}}) generates two events, a `move_from` event for the source key, and a `move_to` event for the destination key. +* [`MSET`]({{< relref "/commands/mset" >}}) generates a separate `set` event for every key. +* [`PERSIST`]({{< relref "/commands/persist" >}}) generates a `persist` event if the expiry time associated with key has been successfully deleted. +* [`RENAME`]({{< relref "/commands/rename" >}}) generates two events, a `rename_from` event for the source key, and a `rename_to` event for the destination key. +* [`RESTORE`]({{< relref "/commands/restore" >}}) generates a `restore` event for the key. +* [`RPOPLPUSH`]({{< relref "/commands/rpoplpush" >}}) and [`BRPOPLPUSH`]({{< relref "/commands/brpoplpush" >}}) generate an `rpop` event and an `lpush` event. In both cases the order is guaranteed (the `lpush` event will always be delivered after the `rpop` event). 
Additionally a `del` event will be generated if the resulting list is zero length and the key is removed. +* [`RPOP`]({{< relref "/commands/rpop" >}}) generates an `rpop` event. Additionally a `del` event is generated if the key is removed because the last element from the list was popped. +* [`RPUSH`]({{< relref "/commands/rpush" >}}) and [`RPUSHX`]({{< relref "/commands/rpushx" >}}) generates a single `rpush` event, even in the variadic case. +* [`SADD`]({{< relref "/commands/sadd" >}}) generates a single `sadd` event, even in the variadic case. +* [`SETRANGE`]({{< relref "/commands/setrange" >}}) generates a `setrange` event. +* [`SET`]({{< relref "/commands/set" >}}) and all its variants ([`SETEX`]({{< relref "/commands/setex" >}}), [`SETNX`]({{< relref "/commands/setnx" >}}),[`GETSET`]({{< relref "/commands/getset" >}})) generate `set` events. However [`SETEX`]({{< relref "/commands/setex" >}}) will also generate an `expire` events. +* [`SINTERSTORE`]({{< relref "/commands/sinterstore" >}}), [`SUNIONSTORE`]({{< relref "/commands/sunionstore" >}}), [`SDIFFSTORE`]({{< relref "/commands/sdiffstore" >}}) generate `sinterstore`, `sunionstore`, `sdiffstore` events respectively. In the special case the resulting set is empty, and the key where the result is stored already exists, a `del` event is generated since the key is removed. +* [`SMOVE`]({{< relref "/commands/smove" >}}) generates an `srem` event for the source key, and an `sadd` event for the destination key. +* [`SORT`]({{< relref "/commands/sort" >}}) generates a `sortstore` event when `STORE` is used to set a new key. If the resulting list is empty, and the `STORE` option is used, and there was already an existing key with that name, the result is that the key is deleted, so a `del` event is generated in this condition. +* [`SPOP`]({{< relref "/commands/spop" >}}) generates an `spop` event, and an additional `del` event if the resulting set is empty and the key is removed. +* [`SREM`]({{< relref "/commands/srem" >}}) generates a single `srem` event, and an additional `del` event if the resulting set is empty and the key is removed. +* [`XADD`]({{< relref "/commands/xadd" >}}) generates an `xadd` event, possibly followed an `xtrim` event when used with the `MAXLEN` subcommand. +* [`XDEL`]({{< relref "/commands/xdel" >}}) generates a single `xdel` event even when multiple entries are deleted. +* [`XGROUP CREATECONSUMER`]({{< relref "/commands/xgroup-createconsumer" >}}) generates an `xgroup-createconsumer` event. +* [`XGROUP CREATE`]({{< relref "/commands/xgroup-create" >}}) generates an `xgroup-create` event. +* [`XGROUP DELCONSUMER`]({{< relref "/commands/xgroup-delconsumer" >}}) generates an `xgroup-delconsumer` event. +* [`XGROUP DESTROY`]({{< relref "/commands/xgroup-destroy" >}}) generates an `xgroup-destroy` event. +* [`XGROUP SETID`]({{< relref "/commands/xgroup-setid" >}}) generates an `xgroup-setid` event. +* [`XSETID`]({{< relref "/commands/xsetid" >}}) generates an `xsetid` event. +* [`XTRIM`]({{< relref "/commands/xtrim" >}}) generates an `xtrim` event. +* [`ZADD`]({{< relref "/commands/zadd" >}}) generates a single `zadd` event even when multiple elements are added. +* [`ZDIFFSTORE`]({{< relref "/commands/zdiffstore" >}}), [`ZINTERSTORE`]({{< relref "/commands/zinterstore" >}}) and [`ZUNIONSTORE`]({{< relref "/commands/zunionstore" >}}) respectively generate `zdiffstore`, `zinterstore` and `zunionstore` events. 
In the special case the resulting sorted set is empty, and the key where the result is stored already exists, a `del` event is generated since the key is removed.
+* [`ZINCRBY`]({{< relref "/commands/zincrby" >}}) generates a `zincr` event.
+* [`ZREMRANGEBYRANK`]({{< relref "/commands/zremrangebyrank" >}}) generates a single `zrembyrank` event. When the resulting sorted set is empty and the key is removed, an additional `del` event is generated.
+* [`ZREMRANGEBYSCORE`]({{< relref "/commands/zremrangebyscore" >}}) generates a single `zrembyscore` event. When the resulting sorted set is empty and the key is removed, an additional `del` event is generated.
+* [`ZREM`]({{< relref "/commands/zrem" >}}) generates a single `zrem` event even when multiple elements are deleted. When the resulting sorted set is empty and the key is removed, an additional `del` event is generated.
+* Every time a key with a time to live associated is removed from the data set because it expired, an `expired` event is generated.
+* Every time a key is evicted from the data set in order to free memory as a result of the `maxmemory` policy, an `evicted` event is generated.
+* Every time a new key is added to the data set, a `new` event is generated.
+
+**IMPORTANT**: all the commands generate events only if the target key is really modified. For instance an [`SREM`]({{< relref "/commands/srem" >}}) deleting a non-existing element from a Set will not actually change the value of the key, so no event will be generated.
+
+If in doubt about how events are generated for a given command, the simplest
+thing to do is to watch for yourself:
+
+    $ redis-cli config set notify-keyspace-events KEA
+    $ redis-cli --csv psubscribe '__key*__:*'
+    Reading messages... (press Ctrl-C to quit)
+    "psubscribe","__key*__:*",1
+
+At this point use `redis-cli` in another terminal to send commands to the
+Redis server and watch the events generated:
+
+    "pmessage","__key*__:*","__keyspace@0__:foo","set"
+    "pmessage","__key*__:*","__keyevent@0__:set","foo"
+    ...
+
+### Timing of expired events
+
+Keys with a time to live associated are expired by Redis in two ways:
+
+* When the key is accessed by a command and is found to be expired.
+* Via a background system that incrementally looks for expired keys, in order to be able to also collect keys that are never accessed.
+
+The `expired` events are generated when a key is accessed and is found to be expired by one of the above systems. As a result, there are no guarantees that the Redis server will be able to generate the `expired` event at the time the key time to live reaches the value of zero.
+
+If no command targets the key constantly, and there are many keys with a TTL associated, there can be a significant delay between the time the key time to live drops to zero, and the time the `expired` event is generated.
+
+Expired (`expired`) events are generated when the Redis server deletes the key and not when the time to live theoretically reaches the value of zero.
+
+### Events in a cluster
+
+Every node of a Redis cluster generates events about its own subset of the keyspace as described above. However, unlike regular Pub/Sub communication in a cluster, event notifications **are not** broadcast to all nodes. Put differently, keyspace events are node-specific. This means that to receive all keyspace events of a cluster, clients need to subscribe to each of the nodes.
+
+@history
+
+* `>= 6.0`: Key miss events were added. 
+
+* `>= 7.0`: Event type `new` added.
+
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: A developer's guide to Redis
+linkTitle: Use Redis
+title: Use Redis
+weight: 50
+---
+---
+Title: Redis for AI documentation
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+- rc
+description: An overview of Redis for AI documentation
+linkTitle: Redis for AI
+weight: 40
+---
+Redis stores and indexes vector embeddings that semantically represent unstructured data including text passages, images, videos, or audio. Store vectors and the associated metadata within [hashes]({{< relref "/develop/data-types/hashes" >}}) or [JSON]({{< relref "/develop/data-types/json" >}}) documents for [indexing]({{< relref "/develop/interact/search-and-query/indexing" >}}) and [querying]({{< relref "/develop/interact/search-and-query/query" >}}).
+
+| Vector | RAG | RedisVL |
+| :-- | :-- | :-- |
+| {{AI Redis icon.}}[Redis vector database quick start guide]({{< relref "/develop/get-started/vector-database" >}}) | {{AI Redis icon.}} [Retrieval-Augmented Generation quick start guide]({{< relref "/develop/get-started/rag" >}}) | {{AI Redis icon.}}[Redis vector Python client library documentation]({{< relref "/integrate/redisvl/" >}}) |
+
+#### Overview
+
+This page is organized into a few sections depending on what you’re trying to do:
+* **How to's** - The comprehensive reference section for every feature, API, and setting. It’s your source for detailed, technical information to support any level of development.
+* **Concepts** - Explanations of foundational ideas and core principles to help you understand the reason behind the product’s features and design.
+* **Quickstarts** - Short, focused guides to get you started with key features or workflows in minutes.
+* **Tutorials** - In-depth walkthroughs that dive deeper into specific use cases or processes. These step-by-step guides help you master essential tasks and workflows.
+* **Integrations** - Guides and resources to help you connect and use the product with popular tools, frameworks, or platforms.
+* **Benchmarks** - Performance comparisons and metrics to demonstrate how the product performs under various scenarios. This helps you understand its efficiency and capabilities.
+* **Best practices** - Recommendations and guidelines for maximizing effectiveness and avoiding common pitfalls. This section equips you to use the product effectively and efficiently.
+
+## How to's
+
+1. [**Create a vector index**]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#create-a-vector-index" >}}): Redis maintains a secondary index over your data with a defined schema (including vector fields and metadata). Redis supports [`FLAT`]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#flat-index" >}}) and [`HNSW`]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#hnsw-index" >}}) vector index types.
+1. [**Store and update vectors**]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#store-and-update-vectors" >}}): Redis stores vectors and metadata in hashes or JSON objects.
+1. 
[**Search with vectors**]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#search-with-vectors" >}}): Redis supports several advanced querying strategies with vector fields including k-nearest neighbor ([KNN]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#knn-vector-search" >}})), [vector range queries]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#vector-range-queries" >}}), and [metadata filters]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#filters" >}}). +1. [**Configure vector queries at runtime**]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#runtime-query-parameters" >}}). Select the best filter mode to optimize query execution. + +#### Learn how to index and query vector embeddings +* [redis-py (Python)]({{< relref "/develop/clients/redis-py/vecsearch" >}}) +* [NRedisStack (C#/.NET)]({{< relref "/develop/clients/dotnet/vecsearch" >}}) +* [node-redis (JavaScript)]({{< relref "/develop/clients/nodejs/vecsearch" >}}) +* [Jedis (Java)]({{< relref "/develop/clients/jedis/vecsearch" >}}) +* [go-redis (Go)]({{< relref "/develop/clients/go/vecsearch" >}}) + +## Concepts + +Learn to perform vector search and use gateways and semantic caching in your AI/ML projects. + +| Search | LLM memory | Semantic caching | Semantic routing | AI Gateways | +| :-- | :-- | :-- | :-- | :-- | +| {{AI Redis icon.}}[Vector search guide]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) | {{LLM memory icon.}}[Store memory for LLMs](https://redis.io/blog/level-up-rag-apps-with-redis-vector-library/) | {{AI Redis icon.}}[Semantic caching for faster, smarter LLM apps](https://redis.io/blog/what-is-semantic-caching) | {{Semantic routing icon.}}[Semantic routing chooses the best tool](https://redis.io/blog/level-up-rag-apps-with-redis-vector-library/) | {{AI Redis icon.}}[Deploy an enhanced gateway with Redis](https://redis.io/blog/ai-gateways-what-are-they-how-can-you-deploy-an-enhanced-gateway-with-redis/) | {{AI Redis icon.}}[Semantic caching for faster, smarter LLM apps](https://redis.io/blog/what-is-semantic-caching) | + +## Quickstarts + +Quickstarts or recipes are useful when you are trying to build specific functionality. For example, you might want to do RAG with LangChain or set up LLM memory for your AI agent. Get started with the following Redis Python notebooks. + +* [The place to start if you are brand new to Redis](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/redis-intro/00_redis_intro.ipynb) + +#### Hybrid and vector search +Vector search retrieves results based on the similarity of high-dimensional numerical embeddings, while hybrid search combines this with traditional keyword or metadata-based filtering for more comprehensive results. 
+* [Implementing hybrid search with Redis](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/02_hybrid_search.ipynb) +* [Vector search with Redis Python client](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/00_redispy.ipynb) +* [Vector search with Redis Vector Library](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/01_redisvl.ipynb) +* [Shows how to convert a float 32 index to float16 or integer data types](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/03_dtype_support.ipynb) + +#### RAG +Retrieval Augmented Generation (aka RAG) is a technique to enhance the ability of an LLM to respond to user queries. The retrieval part of RAG is supported by a vector database, which can return semantically relevant results to a user’s query, serving as contextual information to augment the generative capabilities of an LLM. +* [RAG from scratch with the Redis Vector Library](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/RAG/01_redisvl.ipynb) +* [RAG using Redis and LangChain](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/RAG/02_langchain.ipynb) +* [RAG using Redis and LlamaIndex](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/RAG/03_llamaindex.ipynb) +* [Advanced RAG with redisvl](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/RAG/04_advanced_redisvl.ipynb) +* [RAG using Redis and Nvidia](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/RAG/05_nvidia_ai_rag_redis.ipynb) +* [Utilize RAGAS framework to evaluate RAG performance](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/RAG/06_ragas_evaluation.ipynb) +* [Vector search with Azure](https://techcommunity.microsoft.com/blog/azuredevcommunityblog/vector-similarity-search-with-azure-cache-for-redis-enterprise/3822059) +* [RAG with Spring AI](https://redis.io/blog/building-a-rag-application-with-redis-and-spring-ai/) +* [RAG with Vertex AI](https://github.com/redis-developer/gcp-redis-llm-stack/tree/main) +* [Notebook for additional tips and techniques to improve RAG quality](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/RAG/04_advanced_redisvl.ipynb) +* [Implement a simple RBAC policy with vector search using Redis](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/RAG/07_user_role_based_rag.ipynb) + +#### Agents +AI agents can act autonomously to plan and execute tasks for the user. 
+* [Redis Notebooks for LangGraph](https://github.com/redis-developer/langgraph-redis/tree/main/examples) +* [Notebook to get started with LangGraph and agents](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/agents/00_langgraph_redis_agentic_rag.ipynb) +* [Build a collaborative movie recommendation system using Redis for data storage, CrewAI for agent-based task execution, and LangGraph for workflow management.](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/agents/01_crewai_langgraph_redis.ipynb) +* [Full-Featured Agent Architecture](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/agents/02_full_featured_agent.ipynb) + +#### LLM memory +LLMs are stateless. To maintain context within a conversation chat sessions must be stored and resent to the LLM. Redis manages the storage and retrieval of chat sessions to maintain context and conversational relevance. +* [LLM session manager with semantic similarity](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/llm-session-manager/00_llm_session_manager.ipynb) +* [Handle multiple simultaneous chats with one instance](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/llm-session-manager/01_multiple_sessions.ipynb) + +#### Semantic caching +An estimated 31% of LLM queries are potentially redundant. Redis enables semantic caching to help cut down on LLM costs quickly. +* [Build a semantic cache using the Doc2Cache framework and Llama3.1](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/semantic-cache/doc2cache_llama3_1.ipynb) +* [Build a semantic cache with Redis and Google Gemini](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/semantic-cache/semantic_caching_gemini.ipynb) +* [Optimize semantic cache threshold with RedisVL](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/semantic-cache/02_semantic_cache_optimization.ipynb) + +#### Semantic routing +Routing is a simple and effective way of preventing misuses with your AI application or for creating branching logic between data sources etc. +* [Simple examples of how to build an allow/block list router in addition to a multi-topic router](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/semantic-router/00_semantic_routing.ipynb) +* [Use RouterThresholdOptimizer from redisvl to setup best router config](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/semantic-router/01_routing_optimization.ipynb) + +#### Computer vision +Build a facial recognition system using the Facenet embedding model and RedisVL. 
+* [Facial recognition](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/computer-vision/00_facial_recognition_facenet.ipynb) + +#### Recommendation systems +* [Intro content filtering example with redisvl](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/recommendation-systems/00_content_filtering.ipynb) +* [Intro collaborative filtering example with redisvl](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/recommendation-systems/01_collaborative_filtering.ipynb) +* [Intro deep learning two tower example with redisvl](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/recommendation-systems/02_two_towers.ipynb) + +#### Feature store +* [Credit scoring system using Feast with Redis as the online store](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/feature-store/00_feast_credit_score.ipynb) + +## Tutorials +Need a deeper-dive through different use cases and topics? + +#### RAG +* [Agentic RAG](https://github.com/redis-developer/agentic-rag) - A tutorial focused on agentic RAG with LlamaIndex and Amazon Bedrock +* [RAG on Vertex AI](https://github.com/redis-developer/gcp-redis-llm-stack/tree/main) - A RAG tutorial featuring Redis with Vertex AI +* [RAG workbench](https://github.com/redis-developer/redis-rag-workbench) - A development playground for exploring RAG techniques with Redis +* [ArXiv Chat](https://github.com/redis-developer/ArxivChatGuru) - Streamlit demo of RAG over ArXiv documents with Redis & OpenAI + +#### Recommendations and search +* [Recommendation systems w/ NVIDIA Merlin & Redis](https://github.com/redis-developer/redis-nvidia-recsys) - Three examples, each escalating in complexity, showcasing the process of building a realtime recsys with NVIDIA and Redis +* [Redis product search](https://github.com/redis-developer/redis-product-search) - Build a real-time product search engine using features like full-text search, vector similarity, and real-time data updates +* [ArXiv Search](https://github.com/redis-developer/redis-arxiv-search) - Full stack implementation of Redis with React FE + +## Ecosystem integrations + +* [LangGraph & Redis: Build smarter AI agents with memory & persistence](https://redis.io/blog/langgraph-redis-build-smarter-ai-agents-with-memory-persistence/) +* [Amazon Bedrock setup guide]({{< relref "/integrate/amazon-bedrock/set-up-redis" >}}) +* [LangChain Redis Package: Smarter AI apps with advanced vector storage and faster caching](https://redis.io/blog/langchain-redis-partner-package/) +* [LlamaIndex integration for Redis as a vector store](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/RedisIndexDemo.html) +* [Redis Cloud available on Vercel](https://redis.io/blog/redis-cloud-now-available-on-vercel-marketplace/) +* [Create a Redis Cloud database with the Vercel integration]({{< relref "/operate/rc/cloud-integrations/vercel" >}}) +* [Building a RAG application with Redis and Spring AI](https://redis.io/blog/building-a-rag-application-with-redis-and-spring-ai/) +* [Deploy GenAI apps faster with Redis and NVIDIA NIM](https://redis.io/blog/use-redis-with-nvidia-nim-to-deploy-genai-apps-faster/) +* [Building LLM Applications with Kernel Memory and Redis](https://redis.io/blog/building-llm-applications-with-kernel-memory-and-redis/) +* [DocArray integration of Redis as a vector database by Jina AI](https://docs.docarray.org/user_guide/storing/index_redis/) +* [Semantic Kernel: A popular 
library by Microsoft to integrate LLMs with plugins](https://learn.microsoft.com/en-us/semantic-kernel/concepts/vector-store-connectors/out-of-the-box-connectors/redis-connector?pivots=programming-language-csharp) +* [LiteLLM integration](https://docs.litellm.ai/docs/caching/all_caches#initialize-cache---in-memory-redis-s3-bucket-redis-semantic-disk-cache-qdrant-semantic) + +## Benchmarks +See how we stack up against the competition. +* [Redis vector benchmark results](https://redis.io/blog/benchmarking-results-for-vector-databases/) +* [1 billion vectors](https://redis.io/blog/redis-8-0-m02-the-fastest-redis-ever/) + +## Best practices +See how leaders in the industry are building their RAG apps. +* [Advanced RAG example](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/RAG/04_advanced_redisvl.ipynb) +* [Get better RAG responses with Ragas](https://redis.io/blog/get-better-rag-responses-with-ragas/) +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: How to use pub/sub channels in Redis +linkTitle: Pub/sub +title: Redis Pub/Sub +weight: 40 +--- + +[`SUBSCRIBE`]({{< relref "/commands/subscribe" >}}), [`UNSUBSCRIBE`]({{< relref "/commands/unsubscribe" >}}) and [`PUBLISH`]({{< relref "/commands/publish" >}}) implement the [Publish/Subscribe messaging paradigm](http://en.wikipedia.org/wiki/Publish/subscribe) where (citing Wikipedia) senders (publishers) are not programmed to send their messages to specific receivers (subscribers). +Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be. +Subscribers express interest in one or more channels and only receive messages that are of interest, without knowledge of what (if any) publishers there are. +This decoupling of publishers and subscribers allows for greater scalability and a more dynamic network topology. + +For instance, to subscribe to channels "channel11" and "ch:00" the client issues a [`SUBSCRIBE`]({{< relref "/commands/subscribe" >}}) providing the names of the channels: + +```bash +SUBSCRIBE channel11 ch:00 +``` + +Messages sent by other clients to these channels will be pushed by Redis to all the subscribed clients. +Subscribers receive the messages in the order that the messages are published. + +A client subscribed to one or more channels shouldn't issue commands, although it can [`SUBSCRIBE`]({{< relref "/commands/subscribe" >}}) and [`UNSUBSCRIBE`]({{< relref "/commands/unsubscribe" >}}) to and from other channels. +The replies to subscription and unsubscribing operations are sent in the form of messages so that the client can just read a coherent stream of messages where the first element indicates the type of message. +The commands that are allowed in the context of a subscribed RESP2 client are: + +* [`PING`]({{< relref "/commands/ping" >}}) +* [`PSUBSCRIBE`]({{< relref "/commands/psubscribe" >}}) +* [`PUNSUBSCRIBE`]({{< relref "/commands/punsubscribe" >}}) +* [`QUIT`]({{< relref "/commands/quit" >}}) +* [`RESET`]({{< relref "/commands/reset" >}}) +* [`SSUBSCRIBE`]({{< relref "/commands/ssubscribe" >}}) +* [`SUBSCRIBE`]({{< relref "/commands/subscribe" >}}) +* [`SUNSUBSCRIBE`]({{< relref "/commands/sunsubscribe" >}}) +* [`UNSUBSCRIBE`]({{< relref "/commands/unsubscribe" >}}) + +However, if RESP3 is used (see [`HELLO`]({{< relref "/commands/hello" >}})), a client can issue any commands while in the subscribed state. 
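+
+For illustration (this sketch is not part of the original document and assumes a local Redis instance), here is how the subscribe flow described above might look with the redis-py client, using the same example channel names:
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+p = r.pubsub()
+p.subscribe("channel11", "ch:00")
+
+# The first items returned are 'subscribe' confirmations; afterwards,
+# 'message' items carry payloads published by other clients.
+# Loop forever, handling messages as they arrive (Ctrl-C to stop).
+while True:
+    msg = p.get_message(timeout=1.0)
+    if msg and msg["type"] == "message":
+        print(msg["channel"], msg["data"])
+```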
+ +Please note that when using `redis-cli`, in subscribed mode commands such as [`UNSUBSCRIBE`]({{< relref "/commands/unsubscribe" >}}) and [`PUNSUBSCRIBE`]({{< relref "/commands/punsubscribe" >}}) cannot be used because `redis-cli` will not accept any commands and can only quit the mode with `Ctrl-C`. + +## Delivery semantics + +Redis' Pub/Sub exhibits _at-most-once_ message delivery semantics. +As the name suggests, it means that a message will be delivered once if at all. +Once the message is sent by the Redis server, there's no chance of it being sent again. +If the subscriber is unable to handle the message (for example, due to an error or a network disconnect) the message is forever lost. + +If your application requires stronger delivery guarantees, you may want to learn about [Redis Streams]({{< relref "/develop/data-types/streams" >}}). +Messages in streams are persisted, and support both _at-most-once_ as well as _at-least-once_ delivery semantics. + +## Format of pushed messages + +A message is an [array-reply]({{< relref "/develop/reference/protocol-spec#array-reply" >}}) with three elements. + +The first element is the kind of message: + +* `subscribe`: means that we successfully subscribed to the channel given as the second element in the reply. + The third argument represents the number of channels we are currently subscribed to. + +* `unsubscribe`: means that we successfully unsubscribed from the channel given as second element in the reply. + The third argument represents the number of channels we are currently subscribed to. + When the last argument is zero, we are no longer subscribed to any channel, and the client can issue any kind of Redis command as we are outside the Pub/Sub state. + +* `message`: it is a message received as a result of a [`PUBLISH`]({{< relref "/commands/publish" >}}) command issued by another client. + The second element is the name of the originating channel, and the third argument is the actual message payload. + +## Database & Scoping + +Pub/Sub has no relation to the key space. +It was made to not interfere with it on any level, including database numbers. + +Publishing on db 10, will be heard by a subscriber on db 1. + +If you need scoping of some kind, prefix the channels with the name of the environment (test, staging, production...). + +## Wire protocol example + +``` +SUBSCRIBE first second +*3 +$9 +subscribe +$5 +first +:1 +*3 +$9 +subscribe +$6 +second +:2 +``` + +At this point, from another client we issue a [`PUBLISH`]({{< relref "/commands/publish" >}}) operation against the channel named `second`: + +``` +> PUBLISH second Hello +``` + +This is what the first client receives: + +``` +*3 +$7 +message +$6 +second +$5 +Hello +``` + +Now the client unsubscribes itself from all the channels using the [`UNSUBSCRIBE`]({{< relref "/commands/unsubscribe" >}}) command without additional arguments: + +``` +UNSUBSCRIBE +*3 +$11 +unsubscribe +$6 +second +:1 +*3 +$11 +unsubscribe +$5 +first +:0 +``` + +## Pattern-matching subscriptions + +The Redis Pub/Sub implementation supports pattern matching. +Clients may subscribe to glob-style patterns to receive all the messages sent to channel names matching a given pattern. + +For instance: + +``` +PSUBSCRIBE news.* +``` + +Will receive all the messages sent to the channel `news.art.figurative`, `news.music.jazz`, etc. +All the glob-style patterns are valid, so multiple wildcards are supported. + +``` +PUNSUBSCRIBE news.* +``` + +Will then unsubscribe the client from that pattern. 
+No other subscriptions will be affected by this call. + +Messages received as a result of pattern matching are sent in a different format: + +* The type of the message is `pmessage`: it is a message received as a result from a [`PUBLISH`]({{< relref "/commands/publish" >}}) command issued by another client, matching a pattern-matching subscription. + The second element is the original pattern matched, the third element is the name of the originating channel, and the last element is the actual message payload. + +Similarly to [`SUBSCRIBE`]({{< relref "/commands/subscribe" >}}) and [`UNSUBSCRIBE`]({{< relref "/commands/unsubscribe" >}}), [`PSUBSCRIBE`]({{< relref "/commands/psubscribe" >}}) and [`PUNSUBSCRIBE`]({{< relref "/commands/punsubscribe" >}}) commands are acknowledged by the system sending a message of type `psubscribe` and `punsubscribe` using the same format as the `subscribe` and `unsubscribe` message format. + +## Messages matching both a pattern and a channel subscription + +A client may receive a single message multiple times if it's subscribed to multiple patterns matching a published message, or if it is subscribed to both patterns and channels matching the message. +This is shown by the following example: + +``` +SUBSCRIBE foo +PSUBSCRIBE f* +``` + +In the above example, if a message is sent to channel `foo`, the client will receive two messages: one of type `message` and one of type `pmessage`. + +## The meaning of the subscription count with pattern matching + +In `subscribe`, `unsubscribe`, `psubscribe` and `punsubscribe` message types, the last argument is the count of subscriptions still active. +This number is the total number of channels and patterns the client is still subscribed to. +So the client will exit the Pub/Sub state only when this count drops to zero as a result of unsubscribing from all the channels and patterns. + +## Sharded Pub/Sub + +From Redis 7.0, sharded Pub/Sub is introduced in which shard channels are assigned to slots by the same algorithm used to assign keys to slots. +A shard message must be sent to a node that owns the slot the shard channel is hashed to. +The cluster makes sure the published shard messages are forwarded to all nodes in the shard, so clients can subscribe to a shard channel by connecting to either the master responsible for the slot, or to any of its replicas. +[`SSUBSCRIBE`]({{< relref "/commands/ssubscribe" >}}), [`SUNSUBSCRIBE`]({{< relref "/commands/sunsubscribe" >}}) and [`SPUBLISH`]({{< relref "/commands/spublish" >}}) are used to implement sharded Pub/Sub. + +Sharded Pub/Sub helps to scale the usage of Pub/Sub in cluster mode. +It restricts the propagation of messages to be within the shard of a cluster. +Hence, the amount of data passing through the cluster bus is limited in comparison to global Pub/Sub where each message propagates to each node in the cluster. +This allows users to horizontally scale the Pub/Sub usage by adding more shards. + +## Programming example + +Pieter Noordhuis provided a great example using EventMachine and Redis to create [a multi user high performance web chat](https://gist.github.com/pietern/348262). + +## Client library implementation hints + +Because all the messages received contain the original subscription causing the message delivery (the channel in the case of message type, and the original pattern in the case of pmessage type) client libraries may bind the original subscription to callbacks (that can be anonymous functions, blocks, function pointers), using a hash table. 
+ +When a message is received an O(1) lookup can be done to deliver the message to the registered callback. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Redis Query Engine use cases +linkTitle: Use cases +title: Use cases +weight: 5 +--- + +**Application search and external secondary index** + +Redis Open Source supports application search, whether the source of record is Redis or another database. In the latter case, you can use Redis as an external secondary index for numeric or full-text data. + +**Secondary index for Redis data** + +You can represent your data model using Redis hashes and JSON documents. You can then declare secondary indexes to support various queries on your data set. Redis updates indexes automatically whenever a hash or JSON document that matches the indexes is added or updated. + +**Geo-distributed search** + +In geo-distributed search, hashes and JSON documents are handled in the usual [active-active manner](https://docs.redis.com/latest/rs/databases/active-active/). The index follows whatever is written in the documents in the database. Create an index on each database, then add synonyms (if used) to each database. + +**Unified search** + +You can use Redis to search across several source systems, like file servers, content management systems (CMS), or customer relationship management (CRM) systems. You can process source data in batches using, for example, ETL tools, or as live streams (for example, Kafka or Redis streams). + +**Analytics** + +Data often originates from several source systems. Redis can provide a unified view of dimensions and facts. You can query data based on dimensions, group by dimension, and apply aggregations to facts. + +{{% alert title="Redis for faceted search" color="warning" %}} + +Facets are multiple explicit dimensions implemented as tags in the Redis Query Engine. You can query data based on facets using aggregations (`COUNT`, `TOLIST`, `FIRST_VALUE`, and `RANDOM_SAMPLE`). + +{{% /alert %}} + +**Ephemeral search (retail)** + +When the user logs on to the site, the purchase-search history is populated into an index from another datastore. This requires lightweight index creation, index expiry, and quick document indexing. + +The application/service creates a temporary and user-specific, full-text index when a user logs in. The application/service has direct access to the user-specific index and the primary datastore. When the user logs out of the service, the index is explicitly removed. Otherwise, the index expires after a while (for example, after the user's session expires). + +Using Redis for this type of application provides these benefits: + +- Search index is only populated when needed. +- Only a small portion (for example, 2%) of users are active at the same time. +- Users are only active for a short period of time. +- A small number of documents are indexed, which is very cost effective in comparison to a persistent search index. + +**Real-time inventory (retail)** + +In real-time inventory retail, the key question is product availability: "What is available where?" The challenges with such projects are performance and accuracy. Redis allows for real-time searching and aggregations over millions of store/SKU combinations. + +You can establish real-time event capture from a legacy inventory system to Redis and then have several inventory services query it. Then, you can use combined queries such as item counts, price ranges, categories, and locations. 
Take advantage of geo-distributed search (Active-Active) for your remote store locations. + +Using Redis for this type of application provides these benefits: + +- Low-latency queries for downstream consumers like marketing, stores/e-commerce, and fulfillment +- Immediate and higher consistency between stores and data-centers +- Improved customer experience +- Real-time pricing decisions +- Less shopping cart abandonment +- Less remediation (refund, cancellation) + +**Real-time conversation analysis (telecom)** + +Collect, access, store, and utilize communication data in real time. Capture network traffic and store it in a full-text index for the purposes of getting insights into the data. + +Gather data using connection information gathering (source IPs, DNS) and conversation data gathering (Wireshark/TShark live capture). Then filter, transform, and store the conversation data in Redis to perform search queries and create custom dashboards for your analyses. + +Using Redis for this type of application provides these benefits: + +- Insights into performance issues, security threats, and network faults +- Improved service uptime and security + +**Research portal (academia)** + +Research portals let users search for articles, research, specifications, past solutions, and data to answer specific questions and take advantage of existing knowledge and history. + +To build such a system, you can use indexes supporting tag queries, numeric range queries, geo-location queries, and full-text search. + +Using Redis for this type of application provides these benefits: + +- Create relevant, personalized search experiences while enforcing internal and regulatory data governance policies +- Increased productivity, security, and compliance --- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use query dialects +linkTitle: Query dialects +title: Query dialects +weight: 5 +--- + +Redis Open Source currently supports four query dialects for use with the [`FT.SEARCH`]({{< relref "/commands/ft.search/" >}}), [`FT.AGGREGATE`]({{< relref "/commands/ft.aggregate/" >}}), and other Redis Query Engine commands. +Dialects provide for enhancing the query API incrementally, introducing innovative behaviors and new features that support new use cases in a way that does not break the API for existing applications. + +{{< note >}}Dialects 1, 3, and 4 are deprecated in Redis 8 in Redis Open Source. However, DIALECT 1 remains the default. +{{< /note >}} + +## `DIALECT 1` (Deprecated) + +Dialect version 1 was the default query syntax dialect from the first release of search and query until dialect version 2 was introduced with version [2.4](https://github.com/RediSearch/RediSearch/releases/tag/v2.4.3). +This dialect is also the default dialect. See below for information about changing the default dialect. + +## `DIALECT 2` + +Dialect version 2 was introduced in the [2.4](https://github.com/RediSearch/RediSearch/releases/tag/v2.4.3) release to address query parser inconsistencies found in previous versions of Redis. Dialect version 1 remains the default dialect. To use dialect version 2, append `DIALECT 2` to your query command. +Support for vector search also was introduced in the 2.4 release and requires `DIALECT 2`. See [here]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) for more details. +`FT.SEARCH ... 
DIALECT 2` + +It was determined that under certain conditions some query parsing rules did not behave as originally intended. +Particularly, some queries containing the operators below could return unexpected results. + +1. AND, multi-word phrases that imply intersection +1. `"..."` (exact), `~` (optional), `-` (negation), and `%` (fuzzy) +1. OR, words separated by the `|` (pipe) character that imply union +1. wildcard characters + +Existing queries that used dialect 1 may behave differently using dialect 2 if they fall into any of the following categories: + +1. Your query has a field modifier followed by multiple words. Consider the sample query: + + `@name:James Brown` + + Here, the field modifier `@name` is followed by two words, `James` and `Brown`. + + In `DIALECT 1`, this query would be interpreted as find `James Brown` in the `@name` field. + In `DIALECT 2`, this query would be interpreted as find `James` in the `@name` field, and `Brown` in any text field. In other words, it would be interpreted as `(@name:James) Brown`. + In `DIALECT 2`, you could achieve the dialect 1 behavior by updating your query to `@name:(James Brown)`. + +1. Your query uses `"..."`, `~`, `-`, and/or `%`. Consider a simple query with negation: + + `-hello world` + + In `DIALECT 1`, this query is interpreted as find values in any field that do not contain `hello` and do not contain `world`; the equivalent of `-(hello world)` or `-hello -world`. + In `DIALECT 2`, this query is interpreted as `-hello` and `world` (only `hello` is negated). + In `DIALECT 2`, you could achieve the dialect 1 behavior by updating your query to `-(hello world)`. + +1. Your query used `|`. Consider the simple query: + + `hello world | "goodbye" moon` + + In `DIALECT 1`, this query is interpreted as searching for `(hello world | "goodbye") moon`. + In `DIALECT 2`, this query is interpreted as searching for either `hello world` `"goodbye" moon`. + +1. Your query uses a wildcard pattern. Consider the simple query: + + `"w'foo*bar?'"` + + As shown above, you must use double quotes to contain the `w` pattern. + +With `DIALECT 2` you can use un-escaped spaces in tag queries, even with stopwords. + +{{% alert title=Note %}} +`DIALECT 2` is required with vector searches. +{{% /alert %}} + +`DIALECT 2` functionality was enhanced in the 2.10 release. +It introduces support for new comparison operators for `NUMERIC` fields: + +* `==` (equal). + + `FT.SEARCH idx "@numeric==3456" DIALECT 2` + + and + + `FT.SEARCH idx "@numeric:[3456]" DIALECT 2` +* `!=` (not equal). + + `FT.SEARCH idx "@numeric!=3456" DIALECT 2` +* `>` (greater than). + + `FT.SEARCH idx "@numeric>3456" DIALECT 2` +* `>=` (greater than or equal). + + `FT.SEARCH idx "@numeric>=3456" DIALECT 2` +* `<` (less than). + + `FT.SEARCH idx "@numeric<3456" DIALECT 2` +* `<=` (less than or equal). + + `FT.SEARCH idx "@numeric<=3456" DIALECT 2` + +The Dialect version 2 enhancements also introduce simplified syntax for logical operations: + +* `|` (or). + + `FT.SEARCH idx "@tag:{3d3586fe-0416-4572-8ce1 | 3d3586fe-0416-6758-4ri8}" DIALECT 2` + + which is equivalent to + + `FT.SEARCH idx "(@tag:{3d3586fe-0416-4572-8ce1} | @tag{3d3586fe-0416-6758-4ri8})" DIALECT 2` + +* `` (and). + + `FT.SEARCH idx "(@tag:{3d3586fe-0416-4572-8ce1} @tag{3d3586fe-0416-6758-4ri8})" DIALECT 2` + +* `-` (negation). + + `FT.SEARCH idx "(@tag:{3d3586fe-0416-4572-8ce1} -@tag{3d3586fe-0416-6758-4ri8})" DIALECT 2` + +* `~` (optional/proximity). 
+ + `FT.SEARCH idx "(@tag:{3d3586fe-0416-4572-8ce1} ~@tag{3d3586fe-0416-6758-4ri8})" DIALECT 2` + +## `DIALECT 3` (Deprecated) + +Dialect version 3 was introduced in the [2.6](https://github.com/RediSearch/RediSearch/releases/tag/v2.6.3) release. This version introduced support for multi-value indexing and querying of attributes for any attribute type ( [TEXT]({{< relref "develop/interact/search-and-query/indexing/#index-json-arrays-as-text" >}}), [TAG]({{< relref "develop/interact/search-and-query/indexing/#index-json-arrays-as-tag" >}}), [NUMERIC]({{< relref "develop/interact/search-and-query/indexing/#index-json-arrays-as-numeric" >}}), [GEO]({{< relref "develop/interact/search-and-query/indexing/#index-json-arrays-as-geo" >}}) and [VECTOR]({{< relref "develop/interact/search-and-query/indexing/#index-json-arrays-as-vector" >}})) defined by a [JSONPath]({{< relref "/develop/data-types/json/path" >}}) leading to an array or multiple scalar values. Support for [GEOSHAPE]({{< relref "/develop/interact/search-and-query/query/geo-spatial" >}}) queries was also introduced in this dialect. + +The primary difference between dialects version 2 and version 3 is that JSON is returned rather than scalars for multi-value attributes. Apart from specifying `DIALECT 3` at the end of a [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) command, there are no other syntactic changes. Dialect version 1 remains the default dialect. To use dialect version 3, append `DIALECT 3` to your query command. + +`FT.SEARCH ... DIALECT 3` + +**Example** + +Sample JSON: + +``` +{ + "id": 123, + "underlyings": [ + { + "currency": "USD", + "spot": 99, + "underlier": "AAPL UW" + }, + { + "currency": "USD", + "spot": 100, + "underlier": "NFLX UW" + } + ] +} +``` + +Create an index: + +``` +FT.CREATE js_idx ON JSON PREFIX 1 js: SCHEMA $.underlyings[*].underlier AS und TAG +``` + +Now search, with and without `DIALECT 3`. + +- With dialect 1 (default): + + ``` + ft.search js_idx * return 1 und + 1) (integer) 1 + 2) "js:1" + 3) 1) "und" + 2) "AAPL UW" + ``` + + Only the first element of the expected two elements is returned. + +- With dialect 3: + + ``` + ft.search js_idx * return 1 und DIALECT 3 + 1) (integer) 1 + 2) "js:1" + 3) 1) "und" + 2) "[\"AAPL UW\",\"NFLX UW\"]" + ``` + + Both elements are returned. + +{{% alert title=Note %}} +DIALECT 3 is required for shape-based (`POINT` or `POLYGON`) geospatial queries. +{{% /alert %}} + +## `DIALECT 4` (Deprecated) + +Dialect version 4 was introduced in the [2.8](https://github.com/RediSearch/RediSearch/releases/tag/v2.8.4) release. It introduces performance optimizations for sorting operations on [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) and [`FT.AGGREGATE`]({{< relref "commands/ft.aggregate/" >}}). Apart from specifying `DIALECT 4` at the end of a [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) command, there are no other syntactic changes. Dialect version 1 remains the default dialect. To use dialect version 4, append `DIALECT 4` to your query command. + +`FT.SEARCH ... DIALECT 4` + +Dialect version 4 will improve performance in four different scenarios: + +1. **Skip sorter** - applied when there is no sorting to be done. The query can return once it reaches the `LIMIT` of requested results. +1. **Partial range** - applied when there is a `SORTBY` on a numeric field, either with no filter or with a filter by the same numeric field. Such queries will iterate on a range large enough to satisfy the `LIMIT` of requested results. +1. 
**Hybrid** - applied when there is a `SORTBY` on a numeric field in addition to another non-numeric filter. It could be the case that some results will get filtered, leaving too small a range to satisfy any specified `LIMIT`. In such cases, the iterator is then rewound and additional iterations occur to collect results up to the requested `LIMIT`.
+1. **No optimization** - If there is a sort by score or by a non-numeric field, there is no other option but to retrieve all results and compare their values to the search parameters.
+
+## Use `FT.EXPLAINCLI` to compare dialects
+
+The [`FT.EXPLAINCLI`]({{< relref "commands/ft.explaincli/" >}}) command is a powerful tool that provides a window into the inner workings of your queries. It's like a roadmap that details your query's journey from start to finish.
+
+When you run [`FT.EXPLAINCLI`]({{< relref "commands/ft.explaincli/" >}}), it returns an array representing the execution plan of a complex query. This plan is a step-by-step guide of how Redis interprets your query and how it plans to fetch results. It's a behind-the-scenes look at the process, giving you insights into how the search engine works.
+
+The [`FT.EXPLAINCLI`]({{< relref "commands/ft.explaincli/" >}}) command accepts a `DIALECT` argument, allowing you to execute the query using different dialect versions and compare the resulting query plans.
+
+To use [`FT.EXPLAINCLI`]({{< relref "commands/ft.explaincli/" >}}), you need to provide an index and a query predicate. The index is the name of the index you created using [`FT.CREATE`]({{< relref "commands/ft.create/" >}}), and the query predicate is the same as if you were sending it to [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) or [`FT.AGGREGATE`]({{< relref "commands/ft.aggregate/" >}}).
+
+Here's an example of how to use [`FT.EXPLAINCLI`]({{< relref "commands/ft.explaincli/" >}}) to understand differences in dialect versions 1 and 2.
+
+Negation of the intersection between tokens `hello` and `world`:
+
+```
+FT.EXPLAINCLI idx:dialects "-hello world" DIALECT 1
+1) NOT {
+2) INTERSECT {
+3) hello
+4) world
+5) }
+6) }
+7)
+```
+
+Intersection of the negation of the token `hello` together with token `world`:
+
+```
+FT.EXPLAINCLI idx:dialects "-hello world" DIALECT 2
+ 1) INTERSECT {
+ 2) NOT {
+ 3) hello
+ 4) }
+ 5) UNION {
+ 6) world
+ 7) +world(expanded)
+ 8) }
+ 9) }
+10)
+```
+
+Same result as `DIALECT 1`:
+
+```
+FT.EXPLAINCLI idx:dialects "-(hello world)" DIALECT 2
+1) NOT {
+2) INTERSECT {
+3) hello
+4) world
+5) }
+6) }
+7)
+```
+
+{{% alert title=Note %}}
+[`FT.EXPLAIN`]({{< relref "commands/ft.explain/" >}}) doesn't execute the query. It only explains the plan. It's a way to understand how your query is interpreted by the query engine, which can be invaluable when you're trying to optimize your searches.
+{{% /alert %}}
+
+## Change the default dialect
+
+The default dialect is `DIALECT 1`.
If you wish to change that, you can do so by using the `DEFAULT_DIALECT` parameter when loading the RediSearch module: + +``` +$ redis-server --loadmodule ./redisearch.so DEFAULT_DIALECT 2 +``` + +You can also change the query dialect on an already running server using the `FT.CONFIG` command: + +``` +FT.CONFIG SET DEFAULT_DIALECT 2 +```--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Chinese support for searching and querying in Redis Open Source +linkTitle: Chinese +title: Chinese support +weight: 15 +--- + +Support for adding documents in Chinese is available starting at version 0.99.0. + +Chinese support allows Chinese documents to be added and tokenized using segmentation +rather than simple tokenization using whitespace and/or punctuation. + +Indexing a Chinese document is different than indexing a document in most other +languages because of how tokens are extracted. While most languages can have +their tokens distinguished by separation characters and whitespace, this +is not common in Chinese. + +Chinese tokenization is done by scanning the input text and checking every +character or sequence of characters against a dictionary of predefined terms, +and determining the most likely match based on the surrounding terms and characters. + +Redis makes use of the [Friso](https://github.com/lionsoul2014/friso) +Chinese tokenization library for this purpose. This is largely transparent to +the user and often no additional configuration is required. + +## Example: using chinese in queries + +In pseudo-code: + +``` +FT.CREATE idx ON HASH SCHEMA txt TEXT +HSET docCn txt "Redis支持主从同步。数据可以从主服务器向任意数量的从服务器上同步,从服务器可以是关联其他从服务器的主服务器。这使得Redis可执行单层树复制。从盘可以有意无意的对数据进行写操作。由于完全实现了发布/订阅机制,使得从数据库在任何地方同步树时,可订阅一个频道并接收主服务器完整的消息发布记录。同步对读取操作的可扩展性和数据冗余很有帮助。[8]" +FT.SEARCH idx "数据" LANGUAGE chinese HIGHLIGHT SUMMARIZE +# Outputs: +# 数据?... 数据进行写操作。由于完全实现了发布... 数据冗余很有帮助。[8... +``` + +Using the Python client: + +``` +# -*- coding: utf-8 -*- + +from redisearch.client import Client, Query +from redisearch import TextField + +client = Client('idx') +try: + client.drop_index() +except: + pass + +client.create_index([TextField('txt')]) + +# Add a document +client.add_document('docCn1', + txt='Redis支持主从同步。数据可以从主服务器向任意数量的从服务器上同步从服务器可以是关联其他从服务器的主服务器。这使得Redis可执行单层树复制。从盘可以有意无意的对数据进行写操作。由于完全实现了发布/订阅机制,使得从数据库在任何地方同步树时,可订阅一个频道并接收主服务器完整的消息发布记录。同步对读取操作的可扩展性和数据冗余很有帮助。[8]', + language='chinese') +print client.search(Query('数据').summarize().highlight().language('chinese')).docs[0].txt +# Outputs: +# 数据?... 数据进行写操作。由于完全实现了发布... 数据冗余很有帮助。[8... +``` + +## Using custom dictionaries + +If you wish to use a custom dictionary, you can do so at the module level when +loading the module. The `FRISOINI` setting can point to the location of a +`friso.ini` file that contains the relevant settings and paths to the dictionary +files. + +Note that there is no default `friso.ini` file location. RediSearch comes with +its own `friso.ini` and dictionary files, which are compiled into the module +binary at build-time. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Groupings, projections, and aggregation functions +linkTitle: Aggregations +title: Aggregations +weight: 1 +--- + +Aggregations are a way to process the results of a search query. Aggregation allows you to group, sort, and transform your result data, and to extract analytic insights from it. 
Much like aggregation queries in other databases and search engines, they can be used to create analytics reports, or perform [faceted search](https://en.wikipedia.org/wiki/Faceted_search) style queries. + +For example, indexing a web-server's logs, you can create a report for unique users by hour, country, or any other breakdown. Or you can create different reports for errors, warnings, etc. + +## Core concepts + +The basic idea of an aggregate query is this: + +* Perform a search query, filtering for records you wish to process. +* Build a pipeline of operations that transform the results by zero or more sequences of: + * **Group and reduce**: grouping by fields in the results, and applying reducer functions to each group. + * **Sort**: sort the results based on one or more fields. + * **Apply transformations**: apply mathematical and string functions on fields in the pipeline, optionally creating new fields or replacing existing ones. + * **Limit**: limit the result, regardless of how the result is sorted. + * **Filter**: filter the results (post-query) based on predicates relating to its values. + +The pipeline is dynamic and re-entrant, and every operation can be repeated. For example, you can group by property X, sort the top 100 results by group size, then group by property Y and sort the results by some other property, then apply a transformation on the output. + +Figure 1: Aggregation Pipeline Example + + + +## Aggregate request format + +The aggregate request's syntax is defined as follows: + +```sql +FT.AGGREGATE + {index_name:string} + {query_string:string} + [VERBATIM] + [LOAD {nargs:integer} {property:string} ...] + [GROUPBY + {nargs:integer} {property:string} ... + REDUCE + {FUNC:string} + {nargs:integer} {arg:string} ... + [AS {name:string}] + ... + ] ... + [SORTBY + {nargs:integer} {string} ... + [MAX {num:integer}] ... + ] ... + [APPLY + {EXPR:string} + AS {name:string} + ] ... + [FILTER {EXPR:string}] ... + [LIMIT {offset:integer} {num:integer} ] ... + [PARAMS {nargs} {name} {value} ... ] +``` + +### Parameters in detail + +Parameters that can take a variable number of arguments are expressed in the +form of `param {nargs} {property_1... property_N}`. The first argument to the +parameter is the number of arguments following the parameter. This allows +Redis to avoid a parsing ambiguity in case one of your arguments has the +name of another parameter. For example, to sort by first name, last name, and +country, one would specify `SORTBY 6 firstName ASC lastName DESC country ASC`. + +* **index_name**: The index the query is executed against. + +* **query_string**: The base filtering query that retrieves the documents. It follows the exact same syntax as the search query, including filters, unions, not, optional, etc. + +* **LOAD {nargs} {property} ...** : Load document fields from the document HASH objects. This should be avoided as a general rule of thumb. Fields needed for aggregations should be stored as SORTABLE (and optionally UNF to avoid any normalization), where they are available to the aggregation pipeline with very low latency. LOAD hurts the performance of aggregate queries considerably since every processed record needs to execute the equivalent of HMGET against a Redis key, which when executed over millions of keys, amounts to very high processing times. +The document ID can be loaded using `@__key`. + +* **GROUPBY {nargs} {property} ...** : Group the results in the pipeline based on one or more properties. 
Each group should have at least one reducer (see below), a function that handles the group entries, either counting them or performing multiple aggregate operations (see below).
+
+* **REDUCE {func} {nargs} {arg} ... [AS {name}]**: Reduce the matching results in each group into a single record, using a reduction function. For example, COUNT will count the number of records in the group. See the Reducers section below for more details on available reducers.
+
+    The reducers can have their own property names using the `AS {name}` optional argument. If a name is not given, the resulting name will be the name of the reduce function and the group properties. For example, if a name is not given to COUNT_DISTINCT by property `@foo`, the resulting name will be `count_distinct(@foo)`.
+
+* **SORTBY {nargs} {property} {ASC|DESC} [MAX {num}]**: Sort the pipeline up until the point of SORTBY, using a list of properties. By default, sorting is ascending, but `ASC` or `DESC` can be added for each property. `nargs` is the number of sorting parameters, including `ASC` and `DESC`. For example: `SORTBY 4 @foo ASC @bar DESC`.
+
+    `MAX` is used to optimize sorting by sorting only for the n-largest elements. Although it is not connected to `LIMIT`, you usually need just `SORTBY … MAX` for common queries.
+
+* **APPLY {expr} AS {name}**: Apply a one-to-one transformation on one or more properties, and either store the result as a new property down the pipeline, or replace any property using this transformation. `expr` is an expression that can be used to perform arithmetic operations on numeric properties, or functions that can be applied on properties depending on their types (see below), or any combination thereof. For example, `APPLY "sqrt(@foo)/log(@bar) + 5" AS baz` will evaluate this expression dynamically for each record in the pipeline and store the result as a new property called `baz`, which can be referenced by further APPLY / SORTBY / GROUPBY / REDUCE operations down the pipeline.
+
+* **LIMIT {offset} {num}**. Limit the number of results to return just `num` results starting at index `offset` (zero-based). As mentioned above, it is much more efficient to use `SORTBY … MAX` if you are interested in just limiting the output of a sort operation.
+
+    However, limit can be used to limit results without sorting, or for paging the n-largest results as determined by `SORTBY MAX`. For example, getting results 50-100 of the top 100 results is most efficiently expressed as `SORTBY 1 @foo MAX 100 LIMIT 50 50`. Removing the MAX from SORTBY will result in the pipeline sorting all the records and then paging over results 50-100.
+
+* **FILTER {expr}**. Filter the results using predicate expressions relating to values in each result. The expressions are applied post-query and relate to the current state of the pipeline. See FILTER Expressions below for full details.
+
+* **PARAMS {nargs} {name} {value}**. Define one or more value parameters. Each parameter has a name and a value. Parameters can be referenced in the query string by a `$`, followed by the parameter name, e.g., `$user`, and each such reference in the search query to a parameter name is substituted by the corresponding parameter value. For example, with parameter definition `PARAMS 4 lon 29.69465 lat 34.95126`, the expression `@loc:[$lon $lat 10 km]` would be evaluated to `@loc:[29.69465 34.95126 10 km]`.
Parameters cannot be referenced in the query string where concrete values are not allowed, such as in field names, e.g., `@loc` +* +## Example + +A log of visits to a website might look like the following, each record of which has the following fields/properties: + +* **url** (text, sortable) +* **timestamp** (numeric, sortable) - Unix timestamp of visit entry. +* **country** (tag, sortable) +* **user_id** (text, sortable, not indexed) + +### Example 1: unique users by hour, ordered chronologically. + +The first step is to determine the index name and the filtering query. A filter query of `*` means "get all records": + +``` +FT.AGGREGATE myIndex "*" +``` + +Next, group the results by hour. The data contains visit times as unix timestamps in second resolution, so you'll need to extract the hour component of the timestamp. To do so, add an APPLY step that strips the sub-hour information from the timestamp and stores is as a new property, `hour`: + +``` +FT.AGGREGATE myIndex "*" + APPLY "@timestamp - (@timestamp % 3600)" AS hour +``` + +Next, group the results by hour and count the distinct user ids in each hour. This is done by a GROUPBY/REDUCE step: + +``` +FT.AGGREGATE myIndex "*" + APPLY "@timestamp - (@timestamp % 3600)" AS hour + + GROUPBY 1 @hour + REDUCE COUNT_DISTINCT 1 @user_id AS num_users +``` + +Next, sort the results by hour, ascending: + +``` +FT.AGGREGATE myIndex "*" + APPLY "@timestamp - (@timestamp % 3600)" AS hour + + GROUPBY 1 @hour + REDUCE COUNT_DISTINCT 1 @user_id AS num_users + + SORTBY 2 @hour ASC +``` + +And as a final step, format the hour as a human readable timestamp. This is done by calling the transformation function `timefmt` that formats Unix timestamps. You can specify a format to be passed to the system's `strftime` function ([see documentation](https://pubs.opengroup.org/onlinepubs/9699919799/functions/strftime.html)), but not specifying one is equivalent to specifying `%FT%TZ` to `strftime`. + +``` +FT.AGGREGATE myIndex "*" + APPLY "@timestamp - (@timestamp % 3600)" AS hour + + GROUPBY 1 @hour + REDUCE COUNT_DISTINCT 1 @user_id AS num_users + + SORTBY 2 @hour ASC + + APPLY timefmt(@hour) AS hour +``` + +### Example 2: Sort visits to a specific URL by day and country: + +The next example filters by the url, transforms the timestamp to its day part, and groups by the day and country, counting the number of visits per group, sorting by day ascending and country descending. + +``` +FT.AGGREGATE myIndex "@url:\"about.html\"" + APPLY "@timestamp - (@timestamp % 86400)" AS day + GROUPBY 2 @day @country + REDUCE count 0 AS num_visits + SORTBY 4 @day ASC @country DESC +``` + +## GROUPBY reducers + +`GROUPBY` works similarly to SQL `GROUP BY` clauses, and creates groups of results based on one or more properties in each record. For each group, Redis returns the group keys, or the values common to all records in the group, and the results of zero or more `REDUCE` clauses. + +Each `GROUPBY` step in the pipeline may be accompanied by zero or more `REDUCE` clauses. Reducers apply an accumulation function to each record in the group and reduces them into a single record representing the group. When the processing is complete, all the records upstream of the `GROUPBY` step emit their reduced record. + +For example, the simplest reducer is COUNT, which simply counts the number of records in each group. + +If multiple `REDUCE` clauses exist for a single `GROUPBY` step, each reducer works independently on each result and writes its final output once. 
Each reducer may have its own alias determined using the `AS` optional parameter. If `AS` is not specified, the alias is the reduce function and its parameters, e.g. `count_distinct(foo,bar)`. + +### Supported GROUPBY reducers + +#### COUNT + +**Format** + +``` +REDUCE COUNT 0 +``` + +**Description** + +Count the number of records in each group + +#### COUNT_DISTINCT + +**Format** + +```` +REDUCE COUNT_DISTINCT 1 {property} +```` + +**Description** + +Count the number of distinct values for `property`. + +{{% alert title="Note" color="info" %}} +The reducer creates a hash-set per group, and hashes each record. This can be memory heavy if the groups are big. +{{% /alert %}} + +#### COUNT_DISTINCTISH + +**Format** + +``` +REDUCE COUNT_DISTINCTISH 1 {property} +``` + +**Description** + +Same as COUNT_DISTINCT, provides an approximation instead of an exact count, which consumes less memory and CPU for big groups. + +{{% alert title="Note" color="info" %}} +The reducer uses [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) counters per group, at ~3% error rate, and 1024 bytes of constant space per group. This means it is ideal for a few huge groups and not ideal for many small groups. In the former case, it can be an order of magnitude faster and consume much less memory than COUNT_DISTINCT, but again, it does not fit every use case. +{{% /alert %}} + +#### SUM + +**Format** + +``` +REDUCE SUM 1 {property} +``` + +**Description** + +Return the sum of all numeric values of a given property in a group. Non-numeric values in the group are counted as 0. + +#### MIN + +**Format** + +``` +REDUCE MIN 1 {property} +``` + +**Description** + +Return the minimal value of a property, whether it is a string, number, or NULL. + +#### MAX + +**Format** + +``` +REDUCE MAX 1 {property} +``` + +**Description** + +Return the maximal value of a property, whether it is a string, number or NULL. + +#### AVG + +**Format** + +``` +REDUCE AVG 1 {property} +``` + +**Description** + +Return the average value of a numeric property. This is equivalent to reducing by sum and count, and later, applying the ratio of them as an APPLY step. + +#### STDDEV + +**Format** + +``` +REDUCE STDDEV 1 {property} +``` + +**Description** + +Return the [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) of a numeric property in the group. + +#### QUANTILE + +**Format** + +``` +REDUCE QUANTILE 2 {property} {quantile} +``` + +**Description** + +Return the value of a numeric property at a given quantile of the results. Quantile is expressed as a number between 0 and 1. For example, the median can be expressed as the quantile at 0.5, e.g. `REDUCE QUANTILE 2 @foo 0.5 AS median` . + +If multiple quantiles are required, just repeat the QUANTILE reducer for each quantile. For example, `REDUCE QUANTILE 2 @foo 0.5 AS median REDUCE QUANTILE 2 @foo 0.99 AS p99`. + +#### TOLIST + +**Format** + +``` +REDUCE TOLIST 1 {property} +``` + +**Description** + +Merge all distinct values of a given property into a single array. + +#### FIRST_VALUE + +**Format** + +``` +REDUCE FIRST_VALUE {nargs} {property} [BY {property} [ASC|DESC]] +``` + +**Description** + +Return the first or top value of a given property in the group, optionally by comparing it to another property. For example, you can extract the name of the oldest user in the group: + +``` +REDUCE FIRST_VALUE 4 @name BY @age DESC +``` + +If no `BY` is specified, the first value encountered in the group is returned. 
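+
+As an illustration, a complete aggregation that combines `FIRST_VALUE` with `COUNT` might look like the following sketch; the `myIndex` index and the `@country`, `@name`, and `@age` properties are hypothetical:
+
+```
+FT.AGGREGATE myIndex "*"
+  GROUPBY 1 @country
+    REDUCE FIRST_VALUE 4 @name BY @age DESC AS oldest_user
+    REDUCE COUNT 0 AS num_users
+```
+
+Each resulting group then carries the country, the name of its oldest user, and the number of users in that country.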
+ +If you wish to get the top or bottom value in the group sorted by the same value, you are better off using the `MIN/MAX` reducers, but the same effect will be achieved by doing `REDUCE FIRST_VALUE 4 @foo BY @foo DESC`. + +#### RANDOM_SAMPLE + +**Format** + +``` +REDUCE RANDOM_SAMPLE {nargs} {property} {sample_size} +``` + +**Description** + +Perform a reservoir sampling of the group elements with a given size, and return an array of the sampled items with an even distribution. + +## APPLY expressions + +`APPLY` performs a one-to-one transformation on one or more properties in each record. It either stores the result as a new property down the pipeline, or replaces any property using this transformation. + +The transformations are expressed as a combination of arithmetic expressions and built in functions. Evaluating functions and expressions is recursively nested and can be composed without limit. For example: `sqrt(log(foo) * floor(@bar/baz)) + (3^@qaz % 6)` or simply `@foo/@bar`. + +If an expression or a function is applied to values that do not match the expected types, no error is emitted and a NULL value is set as the result. + +APPLY steps must have an explicit alias determined by the `AS` parameter. + +### Literals inside expressions + +* Numbers are expressed as integers or floating point numbers, e.g., `2`, `3.141`, and `-34`. `inf` and `-inf` are acceptable as well. +* Strings are quoted with either single or double quotes. Single quotes are acceptable inside strings quoted with double quotes and vice versa. Punctuation marks can be escaped with backslashes. e.g. `"foo's bar"` ,`'foo\'s bar'`, `"foo \"bar\""` . +* Any literal or sub-expression can be wrapped in parentheses to resolve ambiguities of operator precedence. + +### Arithmetic operations + +For numeric expressions and properties, addition (`+`), subtraction (`-`), multiplication (`*`), division (`/`), modulo (`%`), and power (`^`) are supported. Bitwise logical operators are not supported. + +Note that these operators apply only to numeric values and numeric sub-expressions. Any attempt to multiply a string by a number, for instance, will result in a NULL output. + +### List of field APPLY functions + +| Function | Description | Example | +| -------- | ------------------------------------------------------------ | ------------------ | +| exists(s)| Checks whether a field exists in a document. 
| `exists(@field)` | + +### List of numeric APPLY functions + +| Function | Description | Example | +| -------- | ------------------------------------------------------------ | ------------------ | +| log(x) | Return the logarithm of a number, property or subexpression | `log(@foo)` | +| abs(x) | Return the absolute number of a numeric expression | `abs(@foo-@bar)` | +| ceil(x) | Round to the smallest value not less than x | `ceil(@foo/3.14)` | +| floor(x) | Round to largest value not greater than x | `floor(@foo/3.14)` | +| log2(x) | Return the logarithm of x to base 2 | `log2(2^@foo)` | +| exp(x) | Return the exponent of x, e.g., `e^x` | `exp(@foo)` | +| sqrt(x) | Return the square root of x | `sqrt(@foo)` | + +### List of string APPLY functions + +| Function | | | +| -------------------------------- | ------------------------------------------------------------ | -------------------------------------------------------- | +| upper(s) | Return the uppercase conversion of s | `upper('hello world')` | +| lower(s) | Return the lowercase conversion of s | `lower("HELLO WORLD")` | +| startswith(s1,s2) | Return `1` if s2 is the prefix of s1, `0` otherwise. | `startswith(@field, "company")` | +| contains(s1,s2) | Return the number of occurrences of s2 in s1, `0` otherwise. If s2 is an empty string, return `length(s1) + 1`. | `contains(@field, "pa")` | +| strlen(s) | Return the length of s | `strlen(@t)` | +| substr(s, offset, count) | Return the substring of s, starting at _offset_ and having _count_ characters.
If offset is negative, it represents the distance from the end of the string.
If count is -1, it means "the rest of the string starting at offset". | `substr("hello", 0, 3)`
`substr("hello", -2, -1)` | +| format( fmt, ...) | Use the arguments following `fmt` to format a string.
Currently the only format argument supported is `%s` and it applies to all types of arguments. | `format("Hello, %s, you are %s years old", @name, @age)` | +| matched_terms([max_terms=100]) | Return the query terms that matched for each record (up to 100), as a list. If a limit is specified, Redis will return the first N matches found, based on query order. | `matched_terms()` | +| split(s, [sep=","], [strip=" "]) | Split a string by any character in the string sep, and strip any characters in strip. If only s is specified, it is split by commas and spaces are stripped. The output is an array. | split("foo,bar") | + +### List of date/time APPLY functions + +| Function | Description | +| ------------------- | ------------------------------------------------------------ | +| timefmt(x, [fmt]) | Return a formatted time string based on a numeric timestamp value x.
See [strftime](https://pubs.opengroup.org/onlinepubs/9699919799/functions/strftime.html) for formatting options.
Not specifying `fmt` is equivalent to `%FT%TZ`. | +| parsetime(timesharing, [fmt]) | The opposite of timefmt() - parse a time format using a given format string | +| day(timestamp) | Round a Unix timestamp to midnight (00:00) start of the current day. | +| hour(timestamp) | Round a Unix timestamp to the beginning of the current hour. | +| minute(timestamp) | Round a Unix timestamp to the beginning of the current minute. | +| month(timestamp) | Round a unix timestamp to the beginning of the current month. | +| dayofweek(timestamp) | Convert a Unix timestamp to the day number (Sunday = 0). | +| dayofmonth(timestamp) | Convert a Unix timestamp to the day of month number (1 .. 31). | +| dayofyear(timestamp) | Convert a Unix timestamp to the day of year number (0 .. 365). | +| year(timestamp) | Convert a Unix timestamp to the current year (e.g. 2018). | +| monthofyear(timestamp) | Convert a Unix timestamp to the current month (0 .. 11). | + +### List of geo APPLY functions + +| Function | Description | Example | +| -------- | ------------------------------------------------------------ | ------------------ | +| geodistance(field,field) | Return distance in meters. | `geodistance(@field1,@field2)` | +| geodistance(field,"lon,lat") | Return distance in meters. | `geodistance(@field,"1.2,-3.4")` | +| geodistance(field,lon,lat) | Return distance in meters. | `geodistance(@field,1.2,-3.4)` | +| geodistance("lon,lat",field) | Return distance in meters. | `geodistance("1.2,-3.4",@field)` | +| geodistance("lon,lat","lon,lat")| Return distance in meters. | `geodistance("1.2,-3.4","5.6,-7.8")` | +| geodistance("lon,lat",lon,lat) | Return distance in meters. | `geodistance("1.2,-3.4",5.6,-7.8)` | +| geodistance(lon,lat,field) | Return distance in meters. | `geodistance(1.2,-3.4,@field)` | +| geodistance(lon,lat,"lon,lat") | Return distance in meters. | `geodistance(1.2,-3.4,"5.6,-7.8")` | +| geodistance(lon,lat,lon,lat) | Return distance in meters. | `geodistance(1.2,-3.4,5.6,-7.8)` | + +``` +FT.AGGREGATE myIdx "*" LOAD 1 location APPLY "geodistance(@location,\"-1.1,2.2\")" AS dist +``` + +To retrieve the distance: + +``` +FT.AGGREGATE myIdx "*" LOAD 1 location APPLY "geodistance(@location,\"-1.1,2.2\")" AS dist +``` + +Note: the geo field must be preloaded using `LOAD`. + +Results can also be sorted by distance: + +``` +FT.AGGREGATE idx "*" LOAD 1 @location FILTER "exists(@location)" APPLY "geodistance(@location,-117.824722,33.68590)" AS dist SORTBY 2 @dist DESC +``` + +Note: Make sure no location is missing, otherwise the SORTBY will not return any results. +Use FILTER to make sure you do the sorting on all valid locations. + +## FILTER expressions + +FILTER expressions filter the results using predicates relating to values in the result set. + +The FILTER expressions are evaluated post-query and relate to the current state of the pipeline. Thus they can be useful to prune the results based on group calculations. Note that the filters are not indexed and will not speed up processing. + +Filter expressions follow the syntax of APPLY expressions, with the addition of the conditions `==`, `!=`, `<`, `<=`, `>`, `>=`. Two or more predicates can be combined with logical AND (`&&`) and OR (`||`). A single predicate can be negated with a NOT prefix (`!`). + +For example, filtering all results where the user name is 'foo' and the age is less than 20 is expressed as: + +``` +FT.AGGREGATE + ... + FILTER "@name=='foo' && @age < 20" + ... 
+``` + +Several filter steps can be added, although at the same stage in the pipeline, it is more efficient to combine several predicates into a single filter step. + +## Cursor API + +``` +FT.AGGREGATE ... WITHCURSOR [COUNT {read size} MAXIDLE {idle timeout}] +FT.CURSOR READ {idx} {cid} [COUNT {read size}] +FT.CURSOR DEL {idx} {cid} +``` + +You can use cursors with [`FT.AGGREGATE`]({{< relref "commands/ft.aggregate/" >}}), with the `WITHCURSOR` keyword. Cursors allow you to +consume only part of the response, allowing you to fetch additional results as needed. +This is much quicker than using `LIMIT` with offset, since the query is executed only +once, and its state is stored on the server. + +To use cursors, specify the `WITHCURSOR` keyword in [`FT.AGGREGATE`]({{< relref "commands/ft.aggregate/" >}}). For example: + +``` +FT.AGGREGATE idx * WITHCURSOR +``` + +This will return a response of an array with two elements. The first element is +the actual (partial) result, and the second is the cursor ID. The cursor ID +can then be fed to [`FT.CURSOR READ`]({{< relref "commands/ft.cursor-read/" >}}) repeatedly until the cursor ID is 0, in +which case all results have been returned. + +To read from an existing cursor, use [`FT.CURSOR READ`]({{< relref "commands/ft.cursor-read/" >}}). For example: + +``` +FT.CURSOR READ idx 342459320 +``` + +Assuming `342459320` is the cursor ID returned from the [`FT.AGGREGATE`]({{< relref "commands/ft.aggregate/" >}}) request, here is an example in pseudo-code: + +``` +response, cursor = FT.AGGREGATE "idx" "redis" "WITHCURSOR"; +while (1) { + processResponse(response) + if (!cursor) { + break; + } + response, cursor = FT.CURSOR read "idx" cursor +} +``` + +Note that even if the cursor is 0, a partial result may still be returned. + +### Cursor settings + +#### Read size + +You can control how many rows are read for each cursor fetch by using the +`COUNT` parameter. This parameter can be specified both in [`FT.AGGREGATE`]({{< relref "commands/ft.aggregate/" >}}) +(immediately after `WITHCURSOR`) or in [`FT.CURSOR READ`]({{< relref "commands/ft.cursor-read/" >}}). + +The following example will read 10 rows at a time: +``` +FT.AGGREGATE idx query WITHCURSOR COUNT 10 +``` + +You can override this setting by also specifying `COUNT` in `CURSOR READ`. The following example will return at most 50 results: + +``` +FT.CURSOR READ idx 342459320 COUNT 50 +``` + +The default read size is 1000. + +#### Timeouts and limits + +Because cursors are stateful resources that occupy memory on the server, they +have a limited lifetime. To safeguard against orphaned/stale cursors, +cursors have an idle timeout value. If no activity occurs on the cursor before +the idle timeout, the cursor is deleted. The idle timer resets to 0 whenever +the cursor is read from using `CURSOR READ`. + +The default idle timeout is 300000 milliseconds (or 300 seconds). You can modify +the idle timeout using the `MAXIDLE` keyword when creating the cursor. Note that +the value cannot exceed the default 300s. + +For example, to set a limit of ten seconds: + +``` +FT.AGGREGATE idx query WITHCURSOR MAXIDLE 10000 +``` + +### Other cursor commands + +Cursors can be explicitly deleted using the `CURSOR DEL` command. For example: + +``` +FT.CURSOR DEL idx 342459320 +``` + +Note that cursors are automatically deleted if all their results have been +returned, or if they have timed out. + +All idle cursors can be forcefully purged at the same time using `FT.CURSOR GC idx 0` command. 
+By default, Redis uses a lazy throttled approach to garbage collection, which +collects idle cursors every 500 operations, or every second, whichever is later. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Stop words support +linkTitle: Stop words +title: Stop words +weight: 1 +--- + +Redis Open Source has a default list of [stop words](https://en.wikipedia.org/wiki/Stop_words). These are words that are usually so common that they do not add much information to search, but take up a lot of space and CPU time in the index. + +When indexing, stop words are discarded and not indexed. When searching, they are also ignored and treated as if they were not sent to the query processor. This is done when parsing the query. + +At the moment, the default stop word list applies to all full-text indexes in all languages and can be overridden manually at index creation time. + +## Default stop word list + +The following words are treated as stop words by default: + +``` + a, is, the, an, and, are, as, at, be, but, by, for, + if, in, into, it, no, not, of, on, or, such, that, their, + then, there, these, they, this, to, was, will, with +``` + +## Overriding the default stop word list + +Stop words for an index can be defined (or disabled completely) on index creation using the `STOPWORDS` argument with the [[`FT.CREATE`]({{< relref "commands/ft.create/" >}}) command. + +The format is `STOPWORDS {number} {stopword} ...` where number is the number of stop words given. The `STOPWORDS` argument must come before the `SCHEMA` argument. For example: + +``` +FT.CREATE myIndex STOPWORDS 3 foo bar baz SCHEMA title TEXT body TEXT +``` + +## Disable the use of stop words + +Disabling stop words completely can be done by passing `STOPWORDS 0` to [`FT.CREATE`]({{< relref "commands/ft.create/" >}}). + + +## Avoiding stop word detection in search queries + +In rare use cases, where queries are very long and are guaranteed by the client application not to contain stop words, it is possible to avoid checking for them when parsing the query. This saves some CPU time and is only worth it if the query has dozens or more terms in it. Using this without verifying that the query doesn't contain stop words might result in empty queries. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Details about phonetic matching capabilities +linkTitle: Phonetic +title: Phonetic matching +weight: 14 +--- + +Phonetic matching, for example "Jon" vs. "John", allows searching for terms based on their pronunciation. This capability can be a useful tool when searching for names of people. + +Phonetic matching is based on the use of a phonetic algorithm. A phonetic algorithm transforms the input term to an approximate representation of its pronunciation. This allows terms to be indexed and searched by their pronunciation. + +As of v1.4, Redis Query Engine, which is included in Redis Open Source, provides phonetic matching of text fields specified with the `PHONETIC` attribute. This causes the terms in such fields to be indexed both by their textual value as well as their phonetic approximation. + +Performing a search on `PHONETIC` fields will, by default, also return results for phonetically similar terms. This behavior can be controlled with the [`$phonetic` query attribute]({{< relref "/develop/interact/search-and-query/query/#query-attributes" >}}). 
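+
+For example, a text field can be declared with the `PHONETIC dm:en` attribute at index creation time and then queried with and without phonetic expansion. The index and key names below are purely illustrative:
+
+```
+FT.CREATE names_idx ON HASH PREFIX 1 person: SCHEMA first_name TEXT PHONETIC dm:en
+HSET person:1 first_name "Jon"
+HSET person:2 first_name "John"
+
+# Matches both documents, because "Jon" and "John" share a phonetic encoding
+FT.SEARCH names_idx "@first_name:Jon"
+
+# Disables phonetic expansion for this clause, so only the literal term matches
+FT.SEARCH names_idx "@first_name:(Jon)=>{$phonetic:false}"
+```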
+
+## Phonetic algorithms support
+
+Redis currently supports a single phonetic algorithm, the [Double Metaphone](https://en.wikipedia.org/wiki/Metaphone#Double_Metaphone) (DM). It uses the implementation at the [slacy/double-metaphone GitHub site](https://github.com/slacy/double-metaphone), which provides general support for Latin languages.
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Highlighting full-text results
+linkTitle: Highlighting
+title: Highlighting
+weight: 7
+---
+
+Redis Open Source uses advanced algorithms for highlighting and summarizing, which enable only the relevant portions of a document to appear in response to a search query. This feature allows users to immediately understand the relevance of a document to their search criteria, typically highlighting the matching terms in bold text.
+
+## Command syntax
+
+```
+FT.SEARCH ...
+    SUMMARIZE [FIELDS {num} {field}] [FRAGS {numFrags}] [LEN {fragLen}] [SEPARATOR {sepstr}]
+    HIGHLIGHT [FIELDS {num} {field}] [TAGS {openTag} {closeTag}]
+```
+
+There are two sub-commands used for highlighting. The first is `HIGHLIGHT`, which surrounds matching text with an open and/or close tag. The second is `SUMMARIZE`, which splits a field into contextual fragments surrounding the found terms. It is possible to summarize a field, highlight a field, or perform both actions in the same query.
+
+### Summarization
+
+```
+FT.SEARCH ...
+    SUMMARIZE [FIELDS {num} {field}] [FRAGS {numFrags}] [LEN {fragLen}] [SEPARATOR {sepStr}]
+```
+
+Summarization fragments the text into smaller snippets, each of which contains the found term(s) and some additional surrounding context.
+
+Redis can perform summarization using the `SUMMARIZE` keyword. If no additional arguments are passed, all returned fields are summarized using built-in defaults.
+
+The `SUMMARIZE` keyword accepts the following arguments:
+
+* **`FIELDS`**: If present, it must be the first argument. This should be followed
+  by the number of fields to summarize, which itself is followed by a list of
+  fields. Each field is summarized. If no `FIELDS` directive is passed,
+  then all returned fields are summarized.
+
+* **`FRAGS`**: The number of fragments to be returned. If not specified, the default is 3.
+
+* **`LEN`**: The number of context words each fragment should contain. Context
+  words surround the found term. A higher value will return a larger block of
+  text. If not specified, the default value is 20.
+
+* **`SEPARATOR`**: The string used to divide individual summary snippets.
+  The default is `... `, which is common among search engines, but you may
+  override this with any other string if you desire to programmatically divide the snippets
+  later on. You may also use a newline sequence, as newlines are stripped from the
+  result body during processing.
+
+### Highlighting
+
+```
+FT.SEARCH ... HIGHLIGHT [FIELDS {num} {field}] [TAGS {openTag} {closeTag}]
+```
+
+Highlighting will surround the found term (and its variants) with a user-defined pair of tags. This may be used to display the matched text in a different typeface using a markup language, or to otherwise make the text appear differently.
+
+Redis performs highlighting using the `HIGHLIGHT` keyword. If no additional arguments are passed, all returned fields are highlighted using built-in defaults.
+
+The `HIGHLIGHT` keyword accepts the following arguments:
+
+* **`FIELDS`**: If present, it must be the first argument.
This should be followed + by the number of fields to highlight, which itself is followed by a list of + fields. Each field present is highlighted. If no `FIELDS` directive is passed, + then all returned fields are highlighted. + +* **`TAGS`**: If present, it must be followed by two strings. The first string is prepended + to each matched term. The second string is appended to each matched term. If no `TAGS` are + specified, a built-in tag pair is prepended and appended to each matched term. + + +#### Field selection + +If no specific fields are passed to the `RETURN`, `SUMMARIZE`, or `HIGHLIGHT` keywords, then all of a document's fields are returned. However, if any of these keywords contain a `FIELD` directive, then the `SEARCH` command will only return the sum total of all fields enumerated in any of those directives. + +The `RETURN` keyword is treated specially, as it overrides any fields specified in `SUMMARIZE` or `HIGHLIGHT`. + +In the command `RETURN 1 foo SUMMARIZE FIELDS 1 bar HIGHLIGHT FIELDS 1 baz`, the fields `foo` is returned as-is, while `bar` and `baz` are not returned, because `RETURN` was specified, but did not include those fields. + +In the command `SUMMARIZE FIELDS 1 bar HIGHLIGHT FIELDS 1 baz`, `bar` is returned summarized and `baz` is returned highlighted. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use the autocomplete feature of Redis for efficient prefix-based suggestion retrieval. +linkTitle: Autocomplete +title: Autocomplete with Redis +weight: 1 +--- + +## Overview + +Redis Query Engine provides an autocomplete feature using suggestions that are stored in a [trie-based](https://en.wikipedia.org/wiki/Trie) data structure. +This feature allows you to store and retrieve ranked suggestions based on user input prefixes, making it useful for applications like search boxes, command completion, and chatbot responses. + +This guide covers how to use the [`FT.SUGADD`]({{< relref "/commands/ft.sugadd" >}}), [`FT.SUGGET`]({{< relref "/commands/ft.sugget" >}}), [`FT.SUGDEL`]({{< relref "/commands/ft.sugdel" >}}), and [`FT.SUGLEN`]({{< relref "/commands/ft.suglen" >}}) commands to implement autocomplete, and some examples of how you can use these commands with [`FT.SEARCH`]({{< relref "/commands/ft.search" >}}). + +## Add autocomplete suggestions + +To add phrases or words to a suggestions dictionary, use the [`FT.SUGADD`]({{< relref "/commands/ft.sugadd" >}}) command. +You will assign a score to each entry, which determines its ranking in the results. + +``` +FT.SUGADD autocomplete "hello world" 100 +FT.SUGADD autocomplete "hello there" 90 +FT.SUGADD autocomplete "help me" 80 +FT.SUGADD autocomplete "hero" 70 +``` + +Integer scores were used in the above examples, but the scores argument is described as being floating point. +Integer scores are converted to floating point internally. +Also, "`autocomplete`" in the above examples is just the name of the key; you can use any key name you wish. + +### Optional arguments + +The `FT.SUGADD` command can take two optional arguments: + +* `INCR`: increments the existing entry of the suggestion by the given score instead of replacing the score. This is useful for updating the dictionary based on user queries in real time. +* `PAYLOAD`: saves a string with the suggestion, which can be fetched by adding the `WITHPAYLOADS` argument to `FT.SUGGET`. 
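+
+For example, `INCR` bumps the score of an existing suggestion rather than replacing it, and `PAYLOAD` stores an arbitrary string alongside an entry. The `queries` dictionary and the payload string below are hypothetical:
+
+```
+# The score of "redis streams" becomes 1, then is incremented to 2
+FT.SUGADD queries "redis streams" 1
+FT.SUGADD queries "redis streams" 1 INCR
+
+# Store a payload that FT.SUGGET ... WITHPAYLOADS can return later
+FT.SUGADD queries "redis cluster" 1 PAYLOAD "docs:cluster-tutorial"
+```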
+ +## Retrieve suggestions + +To get autocomplete suggestions for a given prefix, use the [`FT.SUGGET`]({{< relref "/commands/ft.sugget" >}}) command. + +``` +redis> FT.SUGGET autocomplete "he" +1) "hero" +2) "help me" +3) "hello world" +4) "hello there" +``` + +If you wish to see the scores, use the `WITHSCORES` option: + +``` +redis> FT.SUGGET autocomplete "he" WITHSCORES +1) "hero" +2) "40.414520263671875" +3) "help me" +4) "32.65986251831055" +5) "hello world" +6) "31.62277603149414" +7) "hello there" +8) "28.460498809814453" +``` + +### Enable fuzzy matching + +If you want to allow for small spelling mistakes or typos, use the `FUZZY` option. This option performs a fuzzy prefix search, including prefixes at a [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance) of 1 from the provided prefix. + +``` +redis> FT.SUGGET autocomplete hell FUZZY +1) "hello world" +2) "hello there" +3) "help me" +``` + +### Optional arguments + +There are three additional arguments you can use wit `FT.SUGGET`: + +* `MAX num`: limits the results to a maximum of `num`. The default for `MAX` is 5. +* `WITHSCORES`: returns the score of each suggestion. +* `WITHPAYLOADS`: returns optional payloads saved with the suggestions. If no payload is present for an entry, a `nil` reply is returned. + ``` + redis> FT.SUGADD autocomplete hero 70 PAYLOAD "you're no hero" + (integer) 4 + redis> FT.SUGGET autocomplete "hr" FUZZY WITHPAYLOADS + 1) "hero" + 2) "you're no hero" + 3) "help me" + 4) (nil) + 5) "hello world" + 6) (nil) + 7) "hello there" + 8) (nil) + ``` + +## Delete suggestions + +To remove a specific suggestion from the dictionary, use the `FT.SUGDEL` command. + +``` +redis> FT.SUGDEL autocomplete "help me" +(integer 1) +``` + +After deletion, running `FT.SUGGET autocomplete hell FUZZY` will no longer return "help me". + +## Check the number of suggestions + +To get a count of the number of entries in a given suggestions dictionary, use the `FT.SUGLEN` command. + +``` +redis> FT.SUGLEN autocomplete +(integer) 3 +``` + +## Use autocomplete with search + +A common approach is to: + +1. Use FT.SUGGET to suggest query completions as users type in a text field. +1. Once the user selects a suggestion, run FT.SEARCH using the selected term to get full search results. + +Example workflow + +1. Get suggestions for a given user input. + + ``` + FT.SUGGET autocomplete "hel" + ``` +1. Capture the user's selection. +1. Use the selected suggestion in a full-text search. + + ``` + FT.SEARCH index "hello world" + ``` + +### When to use autocomplete versus full-text search + +* Use `FT.SUGGET` when you need fast, real-time prefix-based suggestion retrieval. +* Use `FT.SEARCH` when you need document retrieval, filtering, and ranking based on relevance. + +## Autocomplete use cases + +The autocomplete feature in Redis Query Engine is useful for: + +- **Search box suggestions**: providing live suggestions as users type. +- **Command completion**: offering autocompletion for CLI tools. +- **Product search**: suggesting product names in e-commerce applications. +- **Chatbot responses**: recommending common phrases dynamically. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Controlling text tokenization and escaping +linkTitle: Tokenization +title: Tokenization +weight: 4 +--- + +Full-text search works by comparing words, URLs, numbers, and other elements of the query +against the text in the searchable fields of each document. 
However, +it would be very inefficient to compare the entire text of the query against the +entire text of each field over and over again, so the search system doesn't do this. +Instead, it splits the document text into short, significant sections +called *tokens* during the indexing process and stores the tokens as part of the document's +index data. + +During a search, the query system also tokenizes the +query text and then simply compares the tokens from the query against the tokens stored +for each document. Finding a match like this is much more efficient than pattern-matching on +the whole text and also lets you use +[stemming]({{< relref "/develop/interact/search-and-query/advanced-concepts/stemming" >}}) and +[stop words]({{< relref "/develop/interact/search-and-query/advanced-concepts/stopwords" >}}) +to improve the search even further. See this article about +[Tokenization](https://queryunderstanding.com/tokenization-c8cdd6aef7ff) +for a general introduction to the concepts. + +Redis uses a very simple tokenizer for documents and a slightly more sophisticated tokenizer for queries. Both allow a degree of control over string escaping and tokenization. + +The sections below describe the rules for tokenizing text fields and queries. +Note that +[Tag fields]({{< relref "/develop/interact/search-and-query/advanced-concepts/tags" >}}) +are essentially text fields but they use a simpler form of tokenization, as described +separately in the +[Tokenization rules for tag fields](#tokenization-rules-for-tag-fields) section. + +## Tokenization rules for text fields + +1. All punctuation marks and whitespace (besides underscores) separate the document and queries into tokens. For example, any character of `,.<>{}[]"':;!@#$%^&*()-+=~` will break the text into terms, so the text `foo-bar.baz...bag` will be tokenized into `[foo, bar, baz, bag]` + +2. Escaping separators in both queries and documents is done by prepending a backslash to any separator. For example, the text `hello\-world hello-world` will be tokenized as `[hello-world, hello, world]`. In most languages you will need an extra backslash to signify an actual backslash when formatting the document or query, so the actual text entered into redis-cli will be `hello\\-world`. + +3. Underscores (`_`) are not used as separators in either document or query, so the text `hello_world` will remain as is after tokenization. + +4. Repeating spaces or punctuation marks are stripped. + +5. Latin characters are converted to lowercase. + +6. A backslash before the first digit will tokenize it as a term. This will translate the `-` sign as NOT, which otherwise would make the number negative. Add a backslash before `.` if you are searching for a float. For example, `-20 -> {-20} vs -\20 -> {NOT{20}}`. + +## Tokenization rules for tag fields + +[Tag fields]({{< relref "/develop/interact/search-and-query/advanced-concepts/tags" >}}) interpret +a text field as a list of *tags* delimited by a +[separator]({{< relref "/develop/interact/search-and-query/advanced-concepts/tags#creating-a-tag-field" >}}) +character (which is a comma "," by +default). The tokenizer simply splits the text wherever it finds the separator and so most +punctuation marks and whitespace are valid characters within each tag token. The only +changes that the tokenizer makes to the tags are: + +- Trimming whitespace at the start and end of the tag. Other whitespace in the tag text is left intact. +- Converting Latin alphabet characters to lowercase. 
You can override this by adding the + [`CASESENSITIVE`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#tag-fields" >}}) option in the indexing schema for the tag field. + +This means that when you define a tag field, you don't need to escape any characters, except +in the unusual case where you want leading or trailing spaces to be part of the tag text. +However, you do need to escape certain characters in a *query* against a tag field. See the +[Query syntax]({{< relref "/develop/interact/search-and-query/advanced-concepts/query_syntax#tag-filters" >}}) and +[Exact match]({{< relref "/develop/interact/search-and-query/query/exact-match" >}}) pages for more information about escaping +and how to use [DIALECT 2]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects#dialect-2" >}}), which is required for +exact match queries involving tags. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Learn how to use query syntax + + ' +linkTitle: Query syntax +title: Query syntax +weight: 5 +--- + +{{< note >}}The query syntax that RediSearch uses has improved over time, +adding new features and making queries simpler to write. However, +changing the syntax like this could potentially break existing queries that rely on +an older version of the syntax. To avoid this problem, RediSearch supports +different query syntax *dialects* to ensure backward compatibility. +Any breaking changes to the syntax are introduced in a new dialect, while +RediSearch continues to support older dialects. This means you can always choose +the correct dialect to support the query you are using. +See +[Query dialects]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects" >}}) +for full details of the dialects and the RediSearch versions that introduced them. +{{< /note >}} + +## Basic syntax + +You can use simple syntax for complex queries using these rules: + +* Exact phrases are wrapped in quotes, for example, `"hello world"`. +* Multiword phrases are lists of tokens, for example, `foo bar baz`, and imply intersection (AND) of the terms. +* `OR` unions are expressed with a pipe (`|`) character, for example, `hello|hallo|shalom|hola`. + + **Notes**: + + Consider the differences in parser behavior in example `hello world | "goodbye" moon`: + * In DIALECT 1, this query is interpreted as searching for `(hello world | "goodbye") moon`. + * In DIALECT 2 or greater, this query is interpreted as searching for either `hello world` **OR** `"goodbye" moon`. + +* `NOT` negation of expressions or subqueries is expressed with a subtraction symbol (`-`), for example, `hello -world`. Purely negative queries such as `-foo` and `-@title:(foo|bar)` are also supported. + + **Notes**: + + Consider a simple query with negation `-hello world`: + * In DIALECT 1, this query is interpreted as "find values in any field that does not contain `hello` **AND** does not contain `world`". The equivalent is `-(hello world)` or `-hello -world`. + * In DIALECT 2 or greater, this query is interpreted `as -hello` **AND** `world` (only `hello` is negated). + * In DIALECT 2 or greater, to achieve the default behavior of DIALECT 1, update your query to `-(hello world)`. + +* Prefix/infix/suffix matches (all terms starting/containing/ending with a term) are expressed with an asterisk `*`. For performance reasons, a minimum term length is enforced. The default is 2, but it's configurable. 
+* In DIALECT 2 or greater, wildcard pattern matches are expressed as `"w'foo*bar?'"`. Note the use of double quotes to contain the _w_ pattern. +* A special wildcard query that returns all results in the index is just the asterisk `*`. This cannot be combined with other options. +* As of v2.6.1, `DIALECT 3` returns JSON rather than scalars from multivalue attributes. +* Selection of specific fields using the syntax `hello @field:world`. +* Numeric range matches on numeric fields with the syntax `@field:[{min} {max}]`. +* Georadius matches on geo fields with the syntax `@field:[{lon} {lat} {radius} {m|km|mi|ft}]`. +* As of 2.6, range queries on vector fields with the syntax `@field:[VECTOR_RANGE {radius} $query_vec]`, where `query_vec` is given as a query parameter. +* As of v2.4, k-nearest neighbors (KNN) queries on vector fields with or without pre-filtering with the syntax `{filter_query}=>[KNN {num} @field $query_vec]`. +* Tag field filters with the syntax `@field:{tag | tag | ...}`. See the full documentation on [tags]({{< relref "/develop/interact/search-and-query/advanced-concepts/tags" >}}). +* Optional terms or clauses: `foo ~bar` means bar is optional but documents containing `bar` will rank higher. +* Fuzzy matching on terms: `%hello%` means all terms with Levenshtein distance of 1 from it. Use multiple pairs of '%' brackets, up to three deep, to increase the Levenshtein distance. +* An expression in a query can be wrapped in parentheses to disambiguate, for example, `(hello|hella) (world|werld)`. +* Query attributes can be applied to individual clauses, for example, `(foo bar) => { $weight: 2.0; $slop: 1; $inorder: false; }`. +* Combinations of the above can be used together, for example, `hello (world|foo) "bar baz" bbbb`. + +## Pure negative queries + +As of v0.19.3, it is possible to have a query consisting of just a negative expression. For example `-hello` or `-(@title:(foo|bar))`. The results are all the documents not containing the query terms. + +{{% alert title="Warning" color="warning" %}} +Any complex expression can be negated this way, however, caution should be taken here: if a negative expression has little or no results, this is equivalent to traversing and ranking all the documents in the index, which can be slow and cause high CPU consumption. +{{% /alert %}} + +## Field modifiers + +You can specify field modifiers in a query, and not just by using the `INFIELDS` global keyword. + +To specify which fields the query matches, prepend the expression with the `@` symbol, the field name, and a `:` (colon) symbol, for each expression or subexpression. + +If a field modifier precedes multiple words or expressions, it applies only to the adjacent expression with DIALECT 1. With DIALECT 2 or greater, you extend the query to other fields. + +Consider this simple query: `@name:James Brown`. Here, the field modifier `@name` is followed by two words: `James` and `Brown`. + +* In DIALECT 1, this query would be interpreted as "find `James Brown` in the `@name` field". +* In DIALECT 2 or greater, this query will be interpreted as "find `James` in the `@name` field **AND** `Brown` in **ANY** text field. In other words, it would be interpreted as `(@name:James) Brown`. +* In DIALECT 2 or greater, to achieve the default behavior of DIALECT 1, update your query to `@name:(James Brown)`. + +If a field modifier precedes an expression in parentheses, it applies only to the expression inside the parentheses. 
The expression should be valid for the specified field, otherwise it is skipped. + +To create complex filtering on several fields, you can combine multiple modifiers. For example, if you have an index of car models, with a vehicle class, country of origin, and engine type, you can search for SUVs made in Korea with hybrid or diesel engines using the following query: + +``` +FT.SEARCH cars "@country:korea @engine:(diesel|hybrid) @class:suv" +``` + +You can apply multiple modifiers to the same term or grouped terms: + +``` +FT.SEARCH idx "@title|body:(hello world) @url|image:mydomain" +``` + +Now, you search for documents that have `"hello"` and `"world"` either in the body or the title and the term `mydomain` in their `url` or `image` fields. + +## Numeric filters in query + +If a field in the schema is defined as NUMERIC, it is possible to use the FILTER argument in the Redis request or filter with it by specifying filtering rules in the query. The syntax is `@field:[{min} {max}]`, for example, `@price:[100 200]`. + +### A few notes on numeric predicates + +1. It is possible to specify a numeric predicate as the entire query, whereas it is impossible to do it with the `FILTER` argument. + +2. It is possible to intersect or union multiple numeric filters in the same query, be it for the same field or different ones. + +3. `-inf`, `inf` and `+inf` are acceptable numbers in a range. Therefore, _greater than 100_ is expressed as `[(100 inf]`. + +4. Numeric filters are inclusive. Exclusive min or max are expressed with `(` prepended to the number, for example, `[(100 (200]`. + +5. It is possible to negate a numeric filter by prepending a `-` sign to the filter. For example, returning a result where price differs from 100 is expressed as: `@title:foo -@price:[100 100]`. + +## Tag filters + +As of v0.91, you can use a special field type called a +[_tag field_]({{< relref "/develop/interact/search-and-query/advanced-concepts/tags" >}}), with simpler +[tokenization]({{< relref "/develop/interact/search-and-query/advanced-concepts/escaping#tokenization-rules-for-tag-fields" >}}) +and encoding in the index. You can't access the values in these fields using a general fieldless search. Instead, you use special syntax: + +``` +@field:{ tag | tag | ...} +``` + +Example: + +``` +@cities:{ New York | Los Angeles | Barcelona } +``` + +Tags can have multiple words or include other punctuation marks other than the field's separator (`,` by default). The following characters in tags should be escaped with a backslash (`\`): `$`, `{`, `}`, `\`, and `|`. + +{{% alert title="Note" color="warning" %}} +Before RediSearch 2.4, it was also recommended to escape spaces. The reason was that, if a multiword tag included stopwords, a syntax error was returned. So tags, like "to be or not to be" needed be escaped as "to\ be\ or\ not\ to\ be". For good measure, you also could escape all spaces within tags. Starting with RediSearch 2.4, using [`DIALECT 2`]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects#dialect-2" >}}) or greater you can use spaces in a `tag` query, even with stopwords. +{{% /alert %}} + +Notice that multiple tags in the same clause create a union of documents containing either tags. To create an intersection of documents containing all tags, you should repeat the tag filter several times. 
For example: + +``` +# Return all documents containing all three cities as tags +@cities:{ New York } @cities:{Los Angeles} @cities:{ Barcelona } + +# Now, return all documents containing either city +@cities:{ New York | Los Angeles | Barcelona } +``` + +Tag clauses can be combined into any subclause, used as negative expressions, optional expressions, and so on. + +## Geo filters + +As of v0.21, it is possible to add geo radius queries directly into the query language with the syntax `@field:[{lon} {lat} {radius} {m|km|mi|ft}]`. This filters the result to a given radius from a lon,lat point, defined in meters, kilometers, miles or feet. See Redis's own [`GEORADIUS`]({{< relref "/commands/georadius" >}}) command for more details. + +Radius filters can be added into the query just like numeric filters. For example, in a database of businesses, looking for Chinese restaurants near San Francisco (within a 5km radius) would be expressed as: `chinese restaurant @location:[-122.41 37.77 5 km]`. + +## Polygon search + +Geospatial databases are essential for managing and analyzing location-based data in a variety of industries. They help organizations make data-driven decisions, optimize operations, and achieve their strategic goals more efficiently. Polygon search extends Redis's geospatial search capabilities to be able to query against a value in a `GEOSHAPE` attribute. This value must follow a ["well-known text"](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry) (WKT) representation of geometry. Two such geometries are supported: + +- `POINT`, for example `POINT(2 4)`. +- `POLYGON`, for example `POLYGON((2 2, 2 8, 6 11, 10 8, 10 2, 2 2))`. + +There is a new schema field type called `GEOSHAPE`, which can be specified as either: + +- `FLAT` for Cartesian X Y coordinates +- `SPHERICAL` for geographic longitude and latitude coordinates. This is the default coordinate system. + +Finally, there's new [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) syntax that allows you to query for polygons that either contain or are within a given geoshape. + +`@field:[{WITHIN|CONTAINS} $geometry] PARAMS 2 geometry {geometry}` + +Here's an example using two stacked polygons that represent a box contained within a house. + +{{< image filename="develop/interact/search-and-query/img/polygons.png" >}} + +First, create an index using a `FLAT` `GEOSHAPE`, representing a 2D X Y coordinate system. + +`FT.CREATE polygon_idx PREFIX 1 shape: SCHEMA g GEOSHAPE FLAT t TEXT` + +Next, create the data structures that represent the geometries in the picture. + +```bash +HSET shape:1 t "this is my house" g "POLYGON((2 2, 2 8, 6 11, 10 8, 10 2, 2 2))" +HSET shape:2 t "this is a square in my house" g "POLYGON((4 4, 4 6, 6 6, 6 4, 4 4))" +``` +Finally, use [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) to query the geometries. Note the use of `DIALECT 3`, which is required. Here are a few examples. 
+ +Search for a polygon that contains a specified point: + +```bash +FT.SEARCH polygon_idx "@g:[CONTAINS $point]" PARAMS 2 point 'POINT(8 8)' DIALECT 3 +1) (integer) 1 +2) "shape:1" +3) 1) "t" + 2) "this is my house" + 3) "g" + 4) "POLYGON((2 2, 2 8, 6 11, 10 8, 10 2, 2 2))" +``` + +Search for geometries contained in a specified polygon: + +```bash +FT.SEARCH polygon_idx "@g:[WITHIN $poly]" PARAMS 2 poly 'POLYGON((0 0, 0 100, 100 100, 100 0, 0 0))' DIALECT 3 +1) (integer) 2 +2) "shape:2" +3) 1) "t" + 2) "this is a square in my house" + 3) "g" + 4) "POLYGON((4 4, 4 6, 6 6, 6 4, 4 4))" +4) "shape:1" +5) 1) "t" + 2) "this is my house" + 3) "g" + 4) "POLYGON((2 2, 2 8, 6 11, 10 8, 10 2, 2 2))" +``` + +Search for a polygon that is not contained in the indexed geometries: + +```bash +FT.SEARCH polygon_idx "@g:[CONTAINS $poly]" PARAMS 2 poly 'POLYGON((14 4, 14 6, 16 6, 16 4, 14 4))' DIALECT 3 +1) (integer) 0 +``` + +Search for a polygon that is known to be contained within the geometries (the box): + +```bash +FT.SEARCH polygon_idx "@g:[CONTAINS $poly]" PARAMS 2 poly 'POLYGON((4 4, 4 6, 6 6, 6 4, 4 4))' DIALECT 3 +1) (integer) 2 +2) "shape:1" +3) 1) "t" + 2) "this is my house" + 3) "g" + 4) "POLYGON((2 2, 2 8, 6 11, 10 8, 10 2, 2 2))" +4) "shape:2" +5) 1) "t" + 2) "this is a square in my house" + 3) "g" + 4) "POLYGON((4 4, 4 6, 6 6, 6 4, 4 4))" +``` + +Note that both the house and box shapes were returned. + +{{< alert title="Note" >}} +GEOSHAPE does not support JSON multi-value or SORTABLE options. +{{< /alert >}} + +For more examples, see the [`FT.CREATE`]({{< relref "commands/ft.create/" >}}) and [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) command pages. + +## Vector search + +You can add vector similarity queries directly into the query language by: + +1. Using a **range** query with the syntax of `@vector:[VECTOR_RANGE {radius} $query_vec]`, which filters the results to a given radius from a given query vector. The distance metric derives from the definition of a @vector field in the index schema, for example, Cosine or L2 (as of v2.6.1). + +2. Running a k nearest neighbors (KNN) query on a @vector field. The basic syntax is `"*=>[ KNN {num|$num} @vector $query_vec ]"`. +It is also possible to run a hybrid query on filtered results. A hybrid query allows the user to specify a filter criteria that all results in a KNN query must satisfy. The filter criteria can include any type of field (i.e., indexes created on both vectors and other values, such as TEXT, PHONETIC, NUMERIC, GEO, etc.). +The general syntax for hybrid query is `{some filter query}=>[ KNN {num|$num} @vector $query_vec]`, where `=>` separates the filter query from the vector KNN query. + +**Examples:** + +* Return 10 nearest neighbors entities in which `query_vec` is closest to the vector stored in `@vector_field`: + + `*=>[KNN 10 @vector_field $query_vec]` + +* Among entities published between 2020 and 2022, return 10 nearest neighbors entities in which `query_vec` is closest to the vector stored in `@vector_field`: + + `@published_year:[2020 2022]=>[KNN 10 @vector_field $query_vec]` + +* Return every entity for which the distance between the vector stored under its @vector_field and `query_vec` is at most 0.5, in terms of the @vector_field distance metric: + + `@vector_field:[VECTOR_RANGE 0.5 $query_vec]` + +As of v2.4, the KNN vector search can be used at most once in a query, while, as of v2.6, the vector range filter can be used multiple times in a query. 
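+
+To make the syntax concrete, here is a hedged example of a complete hybrid KNN command. It assumes a hypothetical index named `articles` with a `NUMERIC` field `published_year` and a `VECTOR` field `vector_field` whose dimension and type match the supplied blob (shown here as a placeholder byte string); the query vector is passed through `PARAMS` and the command requires `DIALECT 2`:
+
+```
+FT.SEARCH articles "@published_year:[2020 2022]=>[KNN 10 @vector_field $query_vec AS dist]" PARAMS 2 query_vec "\x12\xa9\xf5\x6c" SORTBY dist DIALECT 2
+```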
For more information on vector similarity syntax, see [Querying vector fields]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}), and [Vector search examples]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#vector-search-examples" >}}) sections. + +## Prefix matching + +When indexes are updated, Redis maintains a dictionary of all terms in the index. This can be used to match all terms starting with a given prefix. Selecting prefix matches is done by appending `*` to a prefix token. For example: + +``` +hel* world +``` + +Will be expanded to cover `(hello|help|helm|...) world`. + +### A few notes on prefix searches + +1. As prefixes can be expanded into many terms, use them with caution. The expansion will create a Union operation of all suffixes. + +2. As a protective measure to avoid selecting too many terms, blocking Redis, which is single threaded, there are two limitations on prefix matching: + + * Prefixes are limited to 2 letters or more. You can change this number by using the `MINPREFIX` setting on the module command line. + + * The minimum word length to stem is 4 letters or more. You can change this number by using the `MINSTEMLEN` setting on the module command line. + + * Expansion is limited to 200 terms or less. You can change this number by using the `MAXEXPANSIONS` setting on the module command line. + +3. Prefix matching fully supports Unicode and is case insensitive. + +4. Currently, there is no sorting or bias based on suffix popularity. + +## Infix/suffix matching + +As of v2.6.0, the dictionary can be used for infix (contains) or suffix queries by appending `*` to the token. For example: + +``` +*sun* *ing +``` + +These queries are CPU intensive because they require iteration over the whole dictionary. + +{{% alert title="Note" color="warning" %}} +All notes about prefix searches also apply to infix/suffix queries. +{{% /alert %}} + +### Using a suffix trie + +A suffix trie maintains a list of terms that match the suffix. If you add a suffix trie to a field using the `WITHSUFFIXTRIE` keyword, you can create more efficient infix and suffix queries because it eliminates the need to iterate over the whole dictionary. However, the iteration on the union does not change. + +Suffix queries create a union of the list of terms from the suffix term node. Infix queries use the suffix terms as prefixes to the trie and create a union of all terms from all matching nodes. + +## Wildcard matching + +As of v2.6.0, you can use the dictionary for wildcard matching queries with these parameters. + +* `?` - for any single character +* `*` - for any character repeating zero or more times +* `\` - for escaping; other special characters are ignored + +An example of the syntax is `"w'foo*bar?'"`. + +### Using a suffix trie + +A suffix trie maintains a list of terms which match the suffix. If you add a suffix trie to a field using the `WITHSUFFIXTRIE` keyword, you can create more efficient wildcard matching queries because it eliminates the need to iterate over the whole dictionary. However, the iteration on the union does not change. + +With a suffix trie, the wildcard pattern is broken into tokens at every `*` character. A heuristic is used to choose the token with the least terms, and each term is matched with the wildcard pattern. + +## Fuzzy matching + +As of v1.2.0, the dictionary of all terms in the index can also be used to perform [fuzzy matching](https://en.wikipedia.org/wiki/Approximate_string_matching). 
+Fuzzy matches are performed based on [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance) (LD). +Fuzzy matching on a term is performed by surrounding the term with '%', for example: + +``` +%hello% world +``` + +This performs fuzzy matching on `hello` for all terms where LD is 1. + +As of v1.4.0, the LD of the fuzzy match can be set by the number of '%' characters surrounding it, so that `%%hello%%` will perform fuzzy matching on 'hello' for all terms where LD is 2. + +The maximum LD for fuzzy matching is 3. + +## Wildcard queries + +As of v1.1.0, you can use a special query to retrieve all the documents in an index. This is meant mostly for the aggregation engine. You can call it by specifying only a single star sign as the query string, in other words, `FT.SEARCH myIndex *`. + +You can't combine this with any other filters, field modifiers, or anything inside the query. It is technically possible to use the deprecated `FILTER` and `GEOFILTER` request parameters outside the query string in conjunction with a wildcard, but this makes the wildcard meaningless and only hurts performance. + +## Query attributes + +As of v1.2.0, you can apply specific query modifying attributes to specific clauses of the query. + +The syntax is `(foo bar) => { $attribute: value; $attribute:value; ...}`: + +``` +(foo bar) => { $weight: 2.0; $slop: 1; $inorder: true; } +~(bar baz) => { $weight: 0.5; } +``` + +The supported attributes are: + +1. **$weight**: determines the weight of the sub-query or token in the overall ranking on the result (default: 1.0). +2. **$slop**: determines the maximum allowed slop (space between terms) in the query clause (default: 0). +3. **$inorder**: whether or not the terms in a query clause must appear in the same order as in the query. This is usually set alongside with `$slop` (default: false). +4. **$phonetic**: whether or not to perform phonetic matching (default: true). Note: setting this attribute to true for fields which were not created as `PHONETIC` will produce an error. + +As of v2.6.1, the query attributes syntax supports these additional attributes: + +* **$yield_distance_as**: specifies the distance field name, used for later sorting and/or returning, for clauses that yield some distance metric. It is currently supported for vector queries only (both KNN and range). +* **vector query params**: pass optional parameters for [vector queries]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#querying-vector-fields" >}}) in key-value format. 
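+
+To see how these attributes fit into a complete command, here is a hedged example against a hypothetical index `idx` with `TEXT` fields `title` and `body`. The attribute block applies only to the clause it follows, so the `@body` term is ranked normally:
+
+```
+FT.SEARCH idx "(@title:(hello world)) => { $weight: 2.0; $slop: 1; $inorder: false; } @body:redis" DIALECT 2
+```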
+ +## A few query examples + +* Simple phrase query - `hello` _AND_ `world`: + + hello world + +* Exact phrase query - `hello` _FOLLOWED BY_ `world`: + + "hello world" + +* Union - documents containing either `hello` _OR_ `world`: + + hello|world + +* Not - documents containing `hello` _BUT NOT_ `world`: + + hello -world + +* Intersection of unions: + + (hello|halo) (world|werld) + +* Negation of union: + + hello -(world|werld) + +* Union inside phrase: + + (barack|barrack) obama + +* Optional terms with higher priority to ones containing more matches: + + obama ~barack ~michelle + +* Exact phrase in one field, one word in another field: + + @title:"barack obama" @job:president + +* Combined _AND_, _OR_ with field specifiers: + + @title:"hello world" @body:(foo bar) @category:(articles|biographies) + +* Prefix/infix/suffix queries: + + hello worl* + + hel* *worl + + hello -*worl* + +* Wildcard matching queries: + + "w'foo??bar??baz'" + + "w'???????'" + + "w'hello*world'" + +* Numeric filtering - products named `tv` with a price range of 200 to 500: + + @name:tv @price:[200 500] + +* Numeric filtering - users with age greater than 18: + + @age:[(18 +inf] + +## Mapping common SQL predicates to Redis Query Engine + +| SQL Condition | Redis Query Engine Equivalent | Comments | +|---------------|-----------------------|----------| +| WHERE x='foo' AND y='bar' | @x:foo @y:bar | for less ambiguity use (@x:foo) (@y:bar) | +| WHERE x='foo' AND y!='bar' | @x:foo -@y:bar | +| WHERE x='foo' OR y='bar' | (@x:foo)\|(@y:bar) | +| WHERE x IN ('foo', 'bar','hello world') | @x:(foo\|bar\|"hello world") | quotes mean exact phrase | +| WHERE y='foo' AND x NOT IN ('foo','bar') | @y:foo (-@x:foo) (-@x:bar) | +| WHERE x NOT IN ('foo','bar') | -@x:(foo\|bar) | +| WHERE num BETWEEN 10 AND 20 | @num:[10 20] | +| WHERE num >= 10 | @num:[10 +inf] | +| WHERE num > 10 | @num:[(10 +inf] | +| WHERE num < 10 | @num:[-inf (10] | +| WHERE num <= 10 | @num:[-inf 10] | +| WHERE num < 10 OR num > 20 | @num:[-inf (10] \| @num:[(20 +inf] | +| WHERE name LIKE 'john%' | @name:john* | + +## Technical notes + +The query parser is built using the Lemon Parser Generator and a Ragel based lexer. You can see the `DIALECT 2` grammar definition [at this git repo](https://github.com/RediSearch/RediSearch/blob/master/src/query_parser/v2/parser.y). + +You can also see the [search-default-dialect]({{< relref "/develop/interact/search-and-query/administration/configuration#search-default-dialect" >}}) configuration parameter. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Stemming support +linkTitle: Stemming +title: Stemming +weight: 10 +--- + +RediSearch supports stemming - that is adding the base form of a word to the index. This allows the query for "`hiring`" to also return results for "`hire`" and "`hired`", for example. + +The current stemming support is based on the Snowball stemmer library, which supports most European languages, as well as Arabic and other. See the "[Supported languages](#supported-languages)" section below. We hope to include more languages soon (if you need a specific language support, please open an issue). + +For further details see the [Snowball Stemmer website](https://snowballstem.org/). + + +## How it works? + +Stemming maps different forms of the same word to a common root - "stem" - for example, the English stemmer maps *studied* ,*studies* and *study* to *studi* . 
So searching for *studied* would also find documents that contain only the other forms.
+
+
+To define which language the stemmer should use when building the index, specify the `LANGUAGE` parameter for the entire index or for a specific field. For more details, see the [FT.CREATE]({{< relref "commands/ft.create" >}}) syntax.
+
+**Create an index with a language definition**
+
+Create an index for words in German, using the key prefix "`wort:`" and a single `TEXT` field "`wort`":
+
+{{< highlight bash >}}
+redis> FT.CREATE idx:german ON HASH PREFIX 1 "wort:" LANGUAGE GERMAN SCHEMA wort TEXT
+{{< / highlight >}}
+
+**Adding words**
+
+Add some words that share the same stem in German, all variations of the word `stück` (`piece` in English): `stück stücke stuck stucke` => `stuck`
+
+{{< highlight bash >}}
+redis> HSET wort:1 wort stück
+(integer) 1
+redis> HSET wort:2 wort stücke
+(integer) 1
+redis> HSET wort:3 wort stuck
+(integer) 1
+redis> HSET wort:4 wort stucke
+(integer) 1
+{{< / highlight >}}
+
+**Searching for a common stem**
+
+Search for "stuck" (German for "piece"). As of v2.10, you only need to specify the `LANGUAGE` argument in the query if it wasn't specified when the index was created, so it is omitted here.
+Note that the results for words containing "`ü`" are encoded in UTF-8.
+
+{{< highlight bash >}}
+redis> FT.SEARCH idx:german '@wort:(stuck)'
+1) (integer) 4
+2) "wort:3"
+3) 1) "wort"
+   2) "stuck"
+4) "wort:4"
+5) 1) "wort"
+   2) "stucke"
+6) "wort:1"
+7) 1) "wort"
+   2) "st\xc3\xbcck"
+8) "wort:2"
+9) 1) "wort"
+   2) "st\xc3\xbccke"
+{{< / highlight >}}
+
+## Supported languages
+
+The following languages are supported and can be passed to the engine when indexing or querying using lowercase:
+
+* arabic
+* armenian
+* danish
+* dutch
+* english
+* finnish
+* french
+* german
+* hungarian
+* italian
+* norwegian
+* portuguese
+* romanian
+* russian
+* serbian
+* spanish
+* swedish
+* tamil
+* turkish
+* yiddish
+* chinese (see below)
+
+## Chinese support
+
+Indexing a Chinese document is different from indexing a document in most other languages because of how tokens are extracted. While most languages can have their tokens distinguished by separation characters and whitespace, this is not common in Chinese.
+
+Chinese tokenization is done by scanning the input text and checking every character or sequence of characters against a dictionary of predefined terms and determining the most likely match based on the surrounding terms and characters.
+
+Redis Open Source makes use of the [Friso](https://github.com/lionsoul2014/friso) Chinese tokenization library for this purpose. This is largely transparent to the user and often no additional configuration is required.
+
+## Using custom dictionaries
+
+If you wish to use a custom dictionary, you can do so at the module level when loading the module. The `FRISOINI` setting can point to the location of a `friso.ini` file which contains the relevant settings and paths to the dictionary files.
+
+Note that there is no default `friso.ini` file location. RediSearch comes with its own `friso.ini` and dictionary files that are compiled into the module binary at build time.
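+
+As a hedged sketch (the file paths below are placeholders, not defaults), you might point the module at a custom `friso.ini` when loading it, for example from `redis.conf`:
+
+{{< highlight bash >}}
+# Hypothetical paths; adjust to your installation
+loadmodule /path/to/redisearch.so FRISOINI /path/to/friso.ini
+{{< / highlight >}}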
+
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Learn how to use vector fields and perform vector searches in Redis
+linkTitle: Vectors
+math: true
+title: Vectors
+weight: 14
+---
+
+Redis includes a [high-performance vector database](https://redis.io/blog/benchmarking-results-for-vector-databases/) that lets you perform semantic searches over vector embeddings. You can augment these searches with filtering over text, numerical, geospatial, and tag metadata.
+
+To quickly get started, check out the [Redis vector quickstart guide]({{< relref "develop/get-started/vector-database" >}}) and the [Redis AI Resources](https://github.com/redis-developer/redis-ai-resources) Github repo.
+
+
+## Overview
+
+1. [**Create a vector index**]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#create-a-vector-index" >}}): Redis maintains a secondary index over your data with a defined schema (including vector fields and metadata). Redis supports [`FLAT`]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#flat-index" >}}) and [`HNSW`]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#hnsw-index" >}}) vector index types.
+1. [**Store and update vectors**]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#store-and-update-vectors" >}}): Redis stores vectors and metadata in hashes or JSON objects.
+1. [**Search with vectors**]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#search-with-vectors" >}}): Redis supports several advanced querying strategies with vector fields including k-nearest neighbor ([KNN]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#knn-vector-search" >}})), [vector range queries]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#vector-range-queries" >}}), and [metadata filters]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#filters" >}}).
+1. [**Configure vector queries at runtime**]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#runtime-query-params" >}}).
+1. [**Vector search examples**]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#vector-search-examples" >}}): Explore several vector search examples that cover different use cases and techniques.
+
+## Create a vector index
+
+When you define the schema for an index, you can include one or more vector fields as shown below.
+
+**Syntax**
+
+```
+FT.CREATE <index_name>
+  ON <storage_type>
+  PREFIX 1 <prefix>
+  SCHEMA ... <field_name> VECTOR <algorithm> <index_attribute_count>
+  [<index_attribute_name> <index_attribute_value> ...]
+```
+
+Refer to the full [indexing]({{< relref "develop/interact/search-and-query/indexing/" >}}) documentation for additional fields, options, and noted limitations.
+
+**Parameters**
+
+| Parameter | Description |
+|:----------|:------------|
+| `index_name` | Name of the index. |
+| `storage_type` | Storage option (`HASH` or `JSON`). |
+| `prefix` (optional) | Key prefix used to select which keys should be indexed. Defaults to all keys if omitted. |
+| `field_name` | Name of the vector field. |
+| `algorithm` | Vector index algorithm (`FLAT` or `HNSW`). |
+| `index_attribute_count` | Number of vector field attributes. |
+| `index_attribute_name` | Vector field attribute name. |
+| `index_attribute_value` | Vector field attribute value. |
+
+### FLAT index
+
+Choose the `FLAT` index when you have small datasets (< 1M vectors) or when perfect search accuracy is more important than search latency.
+ +**Required attributes** + +| Attribute | Description | +|:-------------------|:-----------------------------------------| +| `TYPE` | Vector type (`BFLOAT16`, `FLOAT16`, `FLOAT32`, `FLOAT64`). `BFLOAT16` and `FLOAT16` require v2.10 or later. | +| `DIM` | The width, or number of dimensions, of the vector embeddings stored in this field. In other words, the number of floating point elements comprising the vector. `DIM` must be a positive integer. The vector used to query this field must have the exact dimensions as the field itself. | +| `DISTANCE_METRIC` | Distance metric (`L2`, `IP`, `COSINE`). | + +**Example** + +``` +FT.CREATE documents + ON HASH + PREFIX 1 docs: + SCHEMA doc_embedding VECTOR FLAT 6 + TYPE FLOAT32 + DIM 1536 + DISTANCE_METRIC COSINE +``` +In the example above, an index named `documents` is created over hashes with the key prefix `docs:` and a `FLAT` vector field named `doc_embedding` with three index attributes: `TYPE`, `DIM`, and `DISTANCE_METRIC`. + +### HNSW index + +`HNSW`, or hierarchical navigable small world, is an approximate nearest neighbors algorithm that uses a multi-layered graph to make vector search more scalable. +- The lowest layer contains all data points, and each higher layer contains a subset, forming a hierarchy. +- At runtime, the search traverses the graph on each layer from top to bottom, finding the local minima before dropping to the subsequent layer. + +Choose the `HNSW` index type when you have larger datasets (> 1M documents) or when search performance and scalability are more important than perfect search accuracy. + +**Required attributes** + +| Attribute | Description | +|:-------------------|:-----------------------------------------| +| `TYPE` | Vector type (`BFLOAT16`, `FLOAT16`, `FLOAT32`, `FLOAT64`). `BFLOAT16` and `FLOAT16` require v2.10 or later. | +| `DIM` | The width, or number of dimensions, of the vector embeddings stored in this field. In other words, the number of floating point elements comprising the vector. `DIM` must be a positive integer. The vector used to query this field must have the exact dimensions as the field itself. | +| `DISTANCE_METRIC` | Distance metric (`L2`, `IP`, `COSINE`). | + +**Optional attributes** + +[`HNSW`](https://arxiv.org/ftp/arxiv/papers/1603/1603.09320.pdf) supports a number of additional parameters to tune +the accuracy of the queries, while trading off performance. + +| Attribute | Description | +|:-------------------|:--------------------------------------------------------------------------------------------| +| `M` | Max number of outgoing edges (connections) for each node in a graph layer. On layer zero, the max number of connections will be `2 * M`. Higher values increase accuracy, but also increase memory usage and index build time. The default is 16. | +| `EF_CONSTRUCTION` | Max number of connected neighbors to consider during graph building. Higher values increase accuracy, but also increase index build time. The default is 200. | +| `EF_RUNTIME` | Max top candidates during KNN search. Higher values increase accuracy, but also increase search latency. The default is 10. | +| `EPSILON` | Relative factor that sets the boundaries in which a range query may search for candidates. That is, vector candidates whose distance from the query vector is `radius * (1 + EPSILON)` are potentially scanned, allowing more extensive search and more accurate results, at the expense of run time. The default is 0.01. 
| + +**Example** + +``` +FT.CREATE documents + ON HASH + PREFIX 1 docs: + SCHEMA doc_embedding VECTOR HNSW 10 + TYPE FLOAT64 + DIM 1536 + DISTANCE_METRIC COSINE + M 40 + EF_CONSTRUCTION 250 +``` + +In the example above, an index named `documents` is created over hashes with the key prefix `docs:` and an `HNSW` vector field named `doc_embedding` with five index attributes: `TYPE`, `DIM`, `DISTANCE_METRIC`, `M`, and `EF_CONSTRUCTION`. + +### Distance metrics + +Redis supports three popular distance metrics to measure the degree of similarity between two vectors $u$, $v$ $\in \mathbb{R}^n$, where $n$ is the length of the vectors: + +| Distance metric | Description | Mathematical representation | +|:--------------- |:----------- |:--------------------------- | +| `L2` | Euclidean distance between two vectors. | $d(u, v) = \sqrt{ \displaystyle\sum_{i=1}^n{(u_i - v_i)^2}}$ | +| `IP` | Inner product of two vectors. | $d(u, v) = 1 -u\cdot v$ | +| `COSINE` | Cosine distance of two vectors. | $d(u, v) = 1 -\frac{u \cdot v}{\lVert u \rVert \lVert v \rVert}$ | + +The above metrics calculate distance between two vectors, where the smaller the value is, the closer the two vectors are in the vector space. + +## Store and update vectors + +On index creation, the `` dictates how vector and metadata are structured and loaded into Redis. + +### Hash + +Store or update vectors and any metadata in [hashes]({{< relref "develop/data-types/hashes/" >}}) using the [`HSET`]({{< relref "commands/hset/" >}}) command. + +**Example** + +``` +HSET docs:01 doc_embedding category sports +``` + +{{% alert title="Tip" color="warning" %}} +Hash values are stored as binary-safe strings. The value `` represents the vector's underlying memory buffer. +{{% /alert %}} + +A common method for converting vectors to bytes uses the [redis-py](https://redis-py.readthedocs.io/en/stable/examples/search_vector_similarity_examples.html) client library and the Python [NumPy](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.tobytes.html) library. + +**Example** + +```py +import numpy as np +from redis import Redis + +redis_client = Redis(host='localhost', port=6379) + +# Create a FLOAT32 vector +vector = np.array([0.34, 0.63, -0.54, -0.69, 0.98, 0.61], dtype=np.float32) + +# Convert vector to bytes +vector_bytes = vector.tobytes() + +# Use the Redis client to store the vector bytes and metadata at a specified key +redis_client.hset('docs:01', mapping = {"vector": vector_bytes, "category": "sports"}) +``` + +{{% alert title="Tip" color="warning" %}} +The vector blob size must match the dimension and float type of the vector field specified in the index's schema; otherwise, indexing will fail. +{{% /alert %}} + +### JSON +You can store or update vectors and any associated metadata in [JSON]({{< relref "develop/data-types/json/" >}}) using the [`JSON.SET`]({{< relref "commands/json.set/" >}}) command. + +To store vectors in Redis as JSON, you store the vector as a JSON array of floats. Note that this differs from vector storage in Redis hashes, which are instead stored as raw bytes. + +**Example** + +``` +JSON.SET docs:01 $ '{"doc_embedding":[0.34,0.63,-0.54,-0.69,0.98,0.61], "category": "sports"}' +``` + +One of the benefits of JSON is schema flexibility. As of v2.6.1, JSON supports multi-value indexing. +This allows you to index multiple vectors under the same [JSONPath]({{< relref "/develop/data-types/json/path" >}}). 
+
+Here are some examples of multi-value indexing with vectors:
+
+**Multi-value indexing example**
+
+```
+JSON.SET docs:01 $ '{"doc_embedding":[[1,2,3,4], [5,6,7,8]]}'
+JSON.SET docs:01 $ '{"chunk1":{"doc_embedding":[1,2,3,4]}, "chunk2":{"doc_embedding":[5,6,7,8]}}'
+```
+
+Additional information and examples are available in the [Indexing JSON documents]({{< relref "develop/interact/search-and-query/indexing/#index-json-arrays-as-vector" >}}) section.
+
+## Search with vectors
+
+You can run vector search queries with the [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) or [`FT.AGGREGATE`]({{< relref "commands/ft.aggregate/" >}}) commands.
+
+To issue a vector search query with `FT.SEARCH`, you must set the `DIALECT` option to >= `2`. See the [dialects documentation]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects" >}}) for more information.
+
+### KNN vector search
+
+KNN vector search finds the top k nearest neighbors to a query vector. It has the following syntax:
+
+**Syntax**
+
+```
+FT.SEARCH <index_name>
+  <primary_filter_query>=>[KNN <top_k> @<vector_field> $<vector_blob_param> <vector_query_params> AS <distance_field>]
+  PARAMS <vector_query_params_count> [<vector_query_param_name> <vector_query_param_value> ...]
+  SORTBY <distance_field>
+  DIALECT 2
+```
+**Parameters**
+
+| Parameter | Description |
+|:------------------|:--------------------------------------------------------------------------------------------------|
+| `index_name` | Name of the index. |
+| `primary_filter_query` | [Filter]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#filters" >}}) criteria. Use `*` when no filters are required. |
+| `top_k` | Number of nearest neighbors to fetch from the index. |
+| `vector_field` | Name of the vector field to search against. |
+| `vector_blob_param` | The query vector, passed in as a blob of raw bytes. The blob's byte size must match the vector field's dimensions and type. |
+| `vector_query_params` (optional) | An optional section for marking one or more vector query parameters passed through the `PARAMS` section. Valid parameters should be provided as key-value pairs. See which [runtime query params]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#runtime-query-params" >}}) are supported for each vector index type. |
+| `distance_field` (optional) | The optional distance field name used in the response and/or for sorting. By default, the distance field name is `__<vector_field>_score` and it can be used for sorting without using `AS <distance_field>` in the query. |
+| `vector_query_params_count` | The number of vector query parameters. |
+| `vector_query_param_name` | The name of the vector query parameter. |
+| `vector_query_param_value` | The value of the vector query parameter. |
+
+**Example**
+
+```
+FT.SEARCH documents "*=>[KNN 10 @doc_embedding $BLOB]" PARAMS 2 BLOB "\x12\xa9\xf5\x6c" DIALECT 2
+```
+
+**Use query attributes**
+
+Alternatively, as of v2.6, `<vector_query_params>` and the `<distance_field>` name can be specified in runtime
+[query attributes]({{< relref "/develop/interact/search-and-query/advanced-concepts/query_syntax" >}}#query-attributes) as shown below.
+
+```
+[KNN <top_k> @<vector_field> $<vector_blob_param>]=>{$yield_distance_as: <distance_field>}
+```
+
+### Vector range queries
+
+Vector range queries allow you to filter the index using a `radius` parameter representing the semantic distance between an input query vector and indexed vector fields. This is useful in scenarios when you don't know exactly how many nearest (`top_k`) neighbors to fetch, but you do know how similar the results should be.
+
+For example, imagine a fraud or anomaly detection scenario where you aren't sure if there are any matches in the vector index.
You can issue a vector range query to quickly check if there are any records of interest in the index within the specified radius.
+
+Vector range queries operate slightly differently than KNN vector queries:
+- Vector range queries can appear multiple times in a query as filter criteria.
+- Vector range queries can be a part of the `<primary_filter_query>` in KNN vector search.
+
+**Syntax**
+
+```
+FT.SEARCH <index_name>
+  @<vector_field>:[VECTOR_RANGE (<radius> | $<radius_param>) $<vector_blob_param> <vector_query_params>]
+  PARAMS <vector_query_params_count> [<vector_query_param_name> <vector_query_param_value> ...]
+  SORTBY <distance_field>
+  DIALECT 2
+```
+
+| Parameter | Description |
+|:------------------|:--------------------------------------------------------------------------------------------------|
+| `index_name` | Name of the index. |
+| `vector_field` | Name of the vector field in the index. |
+| `radius` or `radius_param` | The maximum semantic distance allowed between the query vector and indexed vectors. You can provide the value directly in the query, passed to the `PARAMS` section, or as a query attribute. |
+| `vector_blob_param` | The query vector, passed in as a blob of raw bytes. The blob's byte size must match the vector field's dimensions and type. |
+| `vector_query_params` (optional) | An optional section for marking one or more vector query parameters passed through the `PARAMS` section. Valid parameters should be provided as key-value pairs. See which [runtime query params]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#runtime-query-params" >}}) are supported for each vector index type. |
+| `vector_query_params_count` | The number of vector query parameters. |
+| `vector_query_param_name` | The name of the vector query parameter. |
+| `vector_query_param_value` | The value of the vector query parameter. |
+
+
+**Use query attributes**
+
+A vector range query clause can be followed by a query attributes section as follows:
+
+```
+@<vector_field>: [VECTOR_RANGE (<radius> | $<radius_param>) $<vector_blob_param>]=>{$<attribute_name>: (<value> |
+  $<value_attribute>); ... }
+```
+
+where the relevant parameters in that case are `$yield_distance_as` and `$epsilon`. Note that there is no default distance field name in range queries.
+
+### Filters
+
+Redis supports vector searches that include filters to narrow the search space based on defined criteria. If your index contains searchable fields (for example, `TEXT`, `TAG`, `NUMERIC`, `GEO`, `GEOSHAPE`, and `VECTOR`), you can perform vector searches with filters.
+
+**Supported filter types**
+
+- [Exact match](https://redis.io/docs/develop/interact/search-and-query/query/exact-match/)
+- [Numeric range](https://redis.io/docs/develop/interact/search-and-query/query/range/)
+- [Full-text](https://redis.io/docs/develop/interact/search-and-query/query/full-text/)
+- [Geospatial](https://redis.io/docs/develop/interact/search-and-query/query/geo-spatial/)
+
+You can also [combine multiple queries](https://redis.io/docs/develop/interact/search-and-query/query/combined/) as a filter.
+
+**Syntax**
+
+Vector search queries with filters follow this basic structure:
+
+```
+FT.SEARCH <index_name> <primary_filter_query>=>[...]
+```
+
+where `<primary_filter_query>` defines document selection and filtering.
+
+**Example**
+
+```
+FT.SEARCH documents "(@title:Sports @year:[2020 2022])=>[KNN 10 @doc_embedding $BLOB]" PARAMS 2 BLOB "\x12\xa9\xf5\x6c" DIALECT 2
+```
+
+### How filtering works
+
+Redis uses internal algorithms to optimize the filtering computation for vector search.
+The runtime algorithm is determined by heuristics that aim to minimize query latency based on several factors derived from the query and the index.
+ +**Batches mode** + +Batches mode works by paginating through small batches of nearest neighbors from the index: +- A batch of high-scoring documents from the vector index is retrieved. These documents are yielded only if the `` is satisfied. In other words, the document must contain a similar vector and meet the filter criteria. +- The iterative procedure terminates when `` documents that pass the filter criteria are yielded, or after every vector in the index has been processed. +- The batch size is determined automatically by heuristics based on `` and the ratio between the expected number of documents in the index that pass the `` and the vector index size. +- The goal is to minimize the total number of batches required to get the `` results while preserving the smallest batch size possible. Note that the batch size may change dynamically in each iteration based on the number of results that pass the filter in previous batches. + +**Ad-hoc brute force mode** + +- The score of every vector corresponding to a document that passes the filter is computed, and the `` results are selected and returned. +- This approach is preferable when the number of documents passing the `` is relatively small. +- The results of the KNN query will always be accurate in this mode, even if the underlying vector index algorithm is an approximate one. + +The execution mode may switch from batch mode to ad-hoc brute-force mode during the run, based on updated estimations of relevant factors from one batch to another. + + +## Runtime query parameters + +### Filter mode + +By default, Redis selects the best filter mode to optimize query execution. You can override the auto-selected policy using these optional parameters: + +| Parameter | Description | Options | +|:-----------------|:------------|:--------| +| `HYBRID_POLICY` | Specifies the filter mode to use during vector search with filters (hybrid). | `BATCHES` or `ADHOC_BF` | +| `BATCH_SIZE` | A fixed batch size to use in every iteration when the `BATCHES` policy is auto-selected or requested. | Positive integer. | + + +### Index-specific query parameters + +**FLAT** + +Currently, there are no runtime parameters available for FLAT indexes. + +**HNSW** + +Optional runtime parameters for HNSW indexes are: + +| Parameter | Description | Default value | +|:----------------|:----------------------------------------------------------------------------------------------------------|:--------------------| +| `EF_RUNTIME` | The maximum number of top candidates to hold during the KNN search. Higher values lead to more accurate results at the expense of a longer query runtime. | The value passed during index creation. The default is 10. | +| `EPSILON` | The relative factor that sets the boundaries for a vector range query. Vector candidates whose distance from the query vector is `radius * (1 + EPSILON)` are potentially scanned, allowing a more extensive search and more accurate results at the expense of runtime. | The value passed during index creation. The default is 0.01. | + + +### Important notes + +{{% alert title="Important notes" color="info" %}} + +1. When performing a KNN vector search, you specify `` nearest neighbors. However, the default Redis query `LIMIT` parameter (used for pagination) is 10. In order to get `` returned results, you must also specify `LIMIT 0 ` in your search command. See examples below. + +2. By default, the results are sorted by their document's score. To sort by vector similarity score, use `SORTBY `. See examples below. + +3. 
Depending on your chosen distance metric, the calculated distance between vectors in an index have different bounds. For example, `Cosine` distance is bounded by `2`, while `L2` distance is not bounded. When performing a vector range query, the best practice is to adjust the `` parameter based on your use case and required recall or precision metrics. + +{{% /alert %}} + + +## Vector search examples + +Below are a number of examples to help you get started. For more comprehensive walkthroughs, see the [Redis vector quickstart guide]({{< relref "develop/get-started/vector-database" >}}) and the [Redis AI Resources](https://github.com/redis-developer/redis-ai-resources) Github repo. + +### KNN vector search examples + +Return the 10 nearest neighbor documents for which the `doc_embedding` vector field is the closest to the query vector represented by the following 4-byte blob: + +``` +FT.SEARCH documents "*=>[KNN 10 @doc_embedding $BLOB]" PARAMS 2 BLOB "\x12\xa9\xf5\x6c" SORTBY __vector_score DIALECT 2 +``` + +Return the top 10 nearest neighbors and customize the `K` and `EF_RUNTIME` parameters using query parameters. See the "Optional arguments" section in [FT.SEARCH command]({{< relref "commands/ft.search" >}}). Set the `EF_RUNTIME` value to 150, assuming `doc_embedding` is an `HNSW` index: + +``` +FT.SEARCH documents "*=>[KNN $K @doc_embedding $BLOB EF_RUNTIME $EF]" PARAMS 6 BLOB "\x12\xa9\xf5\x6c" K 10 EF 150 DIALECT 2 +``` + +Assign a custom name to the distance field (`vector_distance`) and then sort using that name: + +``` +FT.SEARCH documents "*=>[KNN 10 @doc_embedding $BLOB AS vector_distance]" PARAMS 2 BLOB "\x12\xa9\xf5\x6c" SORTBY vector_distance DIALECT 2 +``` + +Use [query attributes]({{< relref "develop/interact/search-and-query/advanced-concepts/query_syntax#query-attributes" >}}) syntax to specify optional parameters and the distance field name: + +``` +FT.SEARCH documents "*=>[KNN 10 @doc_embedding $BLOB]=>{$EF_RUNTIME: $EF; $YIELD_DISTANCE_AS: vector_distance}" PARAMS 4 EF 150 BLOB "\x12\xa9\xf5\x6c" SORTBY vector_distance DIALECT 2 +``` + +To explore additional Python vector search examples, review recipes for the [`Redis Python`](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/00_redispy.ipynb) client library and the [`Redis Vector Library`](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/01_redisvl.ipynb). + +### Filter examples + +For these examples, assume you created an index named `movies` with records of different movies and their metadata. 
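+
+For instance, such an index might have been created with a schema along these lines. This is an illustrative sketch only: the field names, vector algorithm, dimension, and type are assumptions and must match the data you actually store:
+
+```
+FT.CREATE movies
+  ON HASH
+  PREFIX 1 movies:
+  SCHEMA title TEXT year NUMERIC category TAG movie_embedding VECTOR HNSW 6
+    TYPE FLOAT32
+    DIM 1536
+    DISTANCE_METRIC COSINE
+```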
+ +Among the movies that have `'Dune'` in the `title` field and `year` between `[2020, 2022]`, return the top 10 nearest neighbors, sorted by `movie_distance`: + +``` +FT.SEARCH movies "(@title:Dune @year:[2020 2022])=>[KNN 10 @movie_embedding $BLOB AS movie_distance]" PARAMS 2 BLOB "\x12\xa9\xf5\x6c" SORTBY movie_distance DIALECT 2 +``` + +Among the movies that have `action` as a category tag, but not `drama`, return the top 10 nearest neighbors, sorted by `movie_distance`: + +``` +FT.SEARCH movies "(@category:{action} ~@category:{drama})=>[KNN 10 @doc_embedding $BLOB AS movie_distance]" PARAMS 2 BLOB "\x12\xa9\xf5\x6c" SORTBY movie_distance DIALECT 2 +``` + +Among the movies that have `drama` or `action` as a category tag, return the top 10 nearest neighbors and explicitly set the filter mode (hybrid policy) to "ad-hoc brute force" rather than it being auto-selected: + +``` +FT.SEARCH movies "(@category:{drama | action})=>[KNN 10 @doc_embedding $BLOB HYBRID_POLICY ADHOC_BF]" PARAMS 2 BLOB "\x12\xa9\xf5\x6c" SORTBY __vec_score DIALECT 2 +``` + +Among the movies that have `action` as a category tag, return the top 10 nearest neighbors and explicitly set the filter mode (hybrid policy) to "batches" and batch size 50 using a query parameter: + +``` +FT.SEARCH movies "(@category:{action})=>[KNN 10 @doc_embedding $BLOB HYBRID_POLICY BATCHES BATCH_SIZE $BATCH_SIZE]" PARAMS 4 BLOB "\x12\xa9\xf5\x6c" BATCH_SIZE 50 DIALECT 2 +``` + +Run the same query as above and use the query attributes syntax to specify optional parameters: + +``` +FT.SEARCH movies "(@category:{action})=>[KNN 10 @doc_embedding $BLOB]=>{$HYBRID_POLICY: BATCHES; $BATCH_SIZE: 50}" PARAMS 2 BLOB "\x12\xa9\xf5\x6c" DIALECT 2 +``` + +To explore additional Python vector search examples, review recipes for the [`Redis Python`](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/00_redispy.ipynb) client library and the [`Redis Vector Library`](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/01_redisvl.ipynb). + + +### Range query examples + +For these examples, assume you created an index named `products` with records of different products and metadata from an ecommerce site. + +Return 100 products for which the distance between the `description_vector` field and the specified query vector blob is at most 5: + +``` +FT.SEARCH products "@description_vector:[VECTOR_RANGE 5 $BLOB]" PARAMS 2 BLOB "\x12\xa9\xf5\x6c" LIMIT 0 100 DIALECT 2 +``` + +Run the same query as above and set the `EPSILON` parameter to `0.5`, assuming `description_vector` is HNSW index, yield the vector distance between `description_vector` and the query result in a field named `vector_distance`, and sort the results by that distance. 
+ +``` +FT.SEARCH products "@description_vector:[VECTOR_RANGE 5 $BLOB]=>{$EPSILON:0.5; $YIELD_DISTANCE_AS: vector_distance}" PARAMS 2 BLOB "\x12\xa9\xf5\x6c" SORTBY vector_distance LIMIT 0 100 DIALECT 2 +``` + +Use the vector range query as a filter: return all the documents that contain either `'shirt'` in their `type` tag with their `year` value in the range `[2020, 2022]` or a vector stored in `description_vector` whose distance from the query vector is no more than `0.8`, then sort the results by their vector distance, if it is in the range: + +``` +FT.SEARCH products "(@type:{shirt} @year:[2020 2022]) | @description_vector:[VECTOR_RANGE 0.8 $BLOB]=>{$YIELD_DISTANCE_AS: vector_distance}" PARAMS 2 BLOB "\x12\xa9\xf5\x6c" SORTBY vector_distance DIALECT 2 +``` + +To explore additional Python vector search examples, review recipes for the [`Redis Python`](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/00_redispy.ipynb) client library and the [`Redis Vector Library`](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/01_redisvl.ipynb). + +## Memory consumption comparison + +Following is a Python+NumPy example of vector sizes for the supported vector types; `BFLOAT16`, `FLOAT16`, `FLOAT32`, and `FLOAT64`. + +```python +import numpy as np + +#install ml_dtypes from pip install ml-dtypes +from ml_dtypes import bfloat16 + +# random float64 100 dimensions +double_precision_vec = np.random.rand(100) + +# for float64 and float32 +print(f'length of float64 vector: {len(double_precision_vec.tobytes())}') # >>> 800 +print(f'length of float32 vector: {len(double_precision_vec.astype(np.float32).tobytes())}') # >>> 400 + +# for float16 +np_data_type = np.float16 +half_precision_vec_float16 = double_precision_vec.astype(np_data_type) +print(f'length of float16 vector: {len(half_precision_vec_float16.tobytes())}') # >>> 200 + +# for bfloat16 +bfloat_dtype = bfloat16 +half_precision_vec_bfloat16 = double_precision_vec.astype(bfloat_dtype) +print(f'length of bfloat16 vector: {len(half_precision_vec_bfloat16.tobytes())}') # >>> 200 +``` + +## Next steps + +Vector embeddings and vector search are not new concepts. Many of the largest companies have used +vectors to represent products in ecommerce catalogs or content in advertising pipelines for well over a decade. + +With the emergence of Large Language Models (LLMs) and the proliferation of applications that require advanced information +retrieval techniques, Redis is well positioned to serve as your high performance query engine for semantic search and more. + +Here are some additonal resources that apply vector search for different use cases: + +- [Retrieval augmented generation from scratch](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/RAG/01_redisvl.ipynb) +- [Semantic caching](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/semantic-cache/semantic_caching_gemini.ipynb) + +## Continue learning with Redis University + +{{< university-links >}} +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Details about tag fields +linkTitle: Tags +title: Tags +weight: 6 +--- + +Tag fields are similar to full-text fields but they interpret the text as a simple +list of *tags* delimited by a +[separator](#creating-a-tag-field) character (which is a comma "," by default). 
+This limitation means that tag fields can use simpler
+[tokenization]({{< relref "/develop/interact/search-and-query/advanced-concepts/escaping" >}})
+and encoding in the index, which is more efficient than full-text indexing.
+
+The values in tag fields cannot be accessed by general field-less search and can be used only with a special syntax.
+
+The main differences between tag and full-text fields are:
+
+1. [Tokenization]({{< relref "/develop/interact/search-and-query/advanced-concepts/escaping#tokenization-rules-for-tag-fields" >}}) is very simple for tags.
+
+1. Stemming is not performed on tag indexes.
+
+1. Tags cannot be found from a general full-text search. If a document has a field called "tags" with the values "foo" and "bar", searching for foo or bar without a special tag modifier (see below) will not return this document.
+
+1. The index is much simpler and more compressed: frequencies or offset vectors of field flags are not stored. The index contains only document IDs encoded as deltas. This means that an entry in a tag index is usually one or two bytes long. This makes them very memory-efficient and fast.
+
+1. You can create up to 1024 tag fields per index.
+
+## Creating a tag field
+
+Tag fields can be added to the schema with the following syntax:
+
+```
+FT.CREATE ... SCHEMA ... {field_name} TAG [SEPARATOR {sep}] [CASESENSITIVE]
+```
+
+For hashes, SEPARATOR can be any printable ASCII character; the default is a comma (`,`). For JSON, there is no default separator; you must declare one explicitly if needed.
+
+For example:
+
+```
+JSON.SET key:1 $ '{"colors": "red, orange, yellow"}'
+FT.CREATE idx ON JSON PREFIX 1 key: SCHEMA $.colors AS colors TAG SEPARATOR ","
+
+> FT.SEARCH idx '@colors:{orange}'
+1) "1"
+2) "key:1"
+3) 1) "$"
+   2) "{\"colors\":\"red, orange, yellow\"}"
+```
+
+You can specify `CASESENSITIVE` to preserve the original letter case of the tags.
+
+## Querying tag fields
+
+As mentioned above, just searching for a tag without any modifiers will not retrieve documents containing it.
+
+The syntax for matching tags in a query is as follows (the curly braces are part of the syntax):
+
+```
+@<field_name>:{ <tag> | <tag> | ...}
+```
+
+For example, this query finds documents with either the tag `hello world` or `foo bar`:
+
+```
+FT.SEARCH idx "@tags:{ hello world | foo bar }"
+```
+
+Tag clauses can be combined into any sub-clause, used as negative expressions, optional expressions, etc. For example, given the following index:
+
+```
+FT.CREATE idx ON HASH PREFIX 1 test: SCHEMA title TEXT price NUMERIC tags TAG SEPARATOR ";"
+```
+
+You can combine a full-text search on the title field, a numerical range on price, and match either the `foo bar` or `hello world` tag like this:
+
+```
+FT.SEARCH idx "@title:hello @price:[0 100] @tags:{ foo bar | hello world }"
+```
+
+Tags support prefix matching with the regular `*` character:
+
+```
+FT.SEARCH idx "@tags:{ hell* }"
+FT.SEARCH idx "@tags:{ hello\\ w* }"
+```
+
+## Multiple tags in a single filter
+
+Notice that including multiple tags in the same clause creates a union of all documents that contain any of the included tags. To create an intersection of documents containing all of the given tags, you should repeat the tag filter several times.
+ +For example, imagine an index of travelers, with a tag field for the cities each traveler has visited: + +``` +FT.CREATE myIndex ON HASH PREFIX 1 traveler: SCHEMA name TEXT cities TAG + +HSET traveler:1 name "John Doe" cities "New York, Barcelona, San Francisco" +``` + +For this index, the following query will return all the people who visited at least one of the following cities: + +``` +FT.SEARCH myIndex "@cities:{ New York | Los Angeles | Barcelona }" +``` + +But the next query will return all people who have visited all three cities: + +``` +FT.SEARCH myIndex "@cities:{ New York } @cities:{Los Angeles} @cities:{ Barcelona }" +``` + +## Including punctuation and spaces in tags + +A tag field can contain any punctuation characters except for the field separator. +You can use punctuation without escaping when you *define* a tag field, +but you typically need to escape certain characters when you *query* the field +because the query syntax itself uses the same characters. +(See [Query syntax]({{< relref "/develop/interact/search-and-query/advanced-concepts/query_syntax#tag-filters" >}}) +for the full set of characters that require escaping.) + +For example, given the following index: + +``` +FT.CREATE punctuation ON HASH PREFIX 1 test: SCHEMA tags TAG +``` + +You can add tags that contain punctuation like this: + +``` +HSET test:1 tags "Andrew's Top 5,Justin's Top 5" +``` + +However, when you query for those tags, you must escape the punctuation characters +with a backslash (`\`). So, querying for the tag `Andrew's Top 5` in +[`redis-cli`]({{< relref "/develop/tools/cli" >}}) looks like this: + +``` +FT.SEARCH punctuation "@tags:{ Andrew\\'s Top 5 }" +``` + +(Note that you need the double backslash here because the terminal app itself +uses the backslash as an escape character. +Programming languages commonly use this convention also.) + +You can include spaces in a tag filter without escaping *unless* you are +using a version of RediSearch earlier than v2.4 or you are using +[query dialect 1]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects#dialect-1" >}}). +See +[Query syntax]({{< relref "/develop/interact/search-and-query/advanced-concepts/query_syntax#tag-filters" >}}) +for a full explanation. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Learn how to use geospatial fields and perform geospatial queries in Redis +linkTitle: Geospatial +math: true +title: Geospatial +weight: 14 +--- + +Redis Query Engine supports geospatial data. This feature +lets you store geographical locations and geometric shapes +in the fields of JSON objects. + +{{< note >}}Take care not to confuse the geospatial indexing +features in Redis Query Engine with the +[Geospatial data type]({{< relref "/develop/data-types/geospatial" >}}) +that Redis also supports. Although there are some similarities between +these two features, the data type is intended for simpler use +cases and doesn't have the range of format options and queries +available in Redis Query Engine. +{{< /note >}} + +You can index these fields and use queries to find the objects +by their location or the relationship of their shape to other shapes. +For example, if you add the locations of a set of shops, you can +find all the shops within 5km of a user's position or determine +which ones are within the boundary of a particular town. + +Redis uses coordinate points to represent geospatial locations. 
+You can store individual points but you can also +use a set of points to define a polygon shape (the shape of a +town, for example). You can query several types of interactions +between points and shapes, such as whether a point lies within +a shape or whether two shapes overlap. + +Redis can interpret coordinates either as geographical longitude +and latitude or as Cartesian coordinates on a flat plane. +Geographical coordinates are ideal for large real-world locations +and areas (such as towns and countries). Cartesian coordinates +are more suitable for smaller areas (such as rooms in a building) +or for games, simulations, and other artificial scenarios. + +## Storing geospatial data + +Redis supports two different +[schema types]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options" >}}) +for geospatial data: + +- [`GEO`](#geo): This uses a simple format where individual geospatial + points are specified as numeric longitude-latitude pairs. + +- [`GEOSHAPE`](#geoshape): [Redis Open Source]({{< relref "/operate/oss_and_stack" >}}) also + supports `GEOSHAPE` indexing in v7.2 and later. + This uses a subset of the + [Well-Known Text (WKT)](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry) + format to specify both points and polygons using either geographical + coordinates or Cartesian coordinates. A + `GEOSHAPE` field supports more advanced queries than `GEO`, + such as checking if one shape overlaps or contains another. + +The sections below describe these schema types in more detail. + +## `GEO` + +A `GEO` index lets you represent geospatial data either as +a string containing a longitude-latitude pair (for example, +"-104.991531, 39.742043") or as a JSON array of these +strings. Note that the longitude value comes first in the +string. + +For example, you could index the `location` fields of the +the [JSON]({{< relref "/develop/data-types/json" >}}) objects +shown below as `GEO`: + +```json +{ + "description": "Navy Blue Slippers", + "price": 45.99, + "city": "Denver", + "location": "-104.991531, 39.742043" +} + +{ + "description": "Bright Red Boots", + "price": 185.75, + "city": "Various", + "location": [ + "-104.991531, 39.742043", + "-105.0618814,40.5150098" + ] +} +``` + +`GEO` fields allow only basic point and radius queries. +For example, the query below finds products within a 100 mile radius of Colorado Springs +(Longitude=-104.800644, Latitude=38.846127). + +```bash +FT.SEARCH productidx '@location:[-104.800644 38.846127 100 mi]' +``` + +See [Geospatial queries]({{< relref "/develop/interact/search-and-query/query/geo-spatial" >}}) +for more information about the available query options and see +[Geospatial indexing]({{< relref "/develop/interact/search-and-query/indexing/geoindex" >}}) +for examples of indexing `GEO` fields. + +## `GEOSHAPE` + +Fields indexed as `GEOSHAPE` support the `POINT` and `POLYGON` primitives from the +[Well-Known Text](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry) +representation of geometry. The `POINT` primitive defines a single point +in a similar way to a `GEO` field. +The `geom` field of the example JSON object shown below specifies a point +(in Cartesian coordinates, using the standard x,y order): + +```json +{ + "name": "Purple Point", + "geom": "POINT (2 2)" +} +``` + +The `POLYGON` primitive can approximate the outline of any shape using a +sequence of points. 
Specify the coordinates of the corners in the order they +occur around the shape (either clockwise or counter-clockwise) and ensure the +shape is "closed" by making the final coordinate exactly the same as the first. + +Note that `POLYGON` requires double parentheses around the coordinate list. +This is because you can specify additional shapes as a comma-separated list +that define "holes" within the enclosing polygon. The holes must have the opposite +winding order to the outer polygon (so, if the outer polygon uses a clockwise winding +order, the holes must use counter-clockwise). +The `geom` field of the example JSON object shown below specifies a +square using Cartesian coordinates in a clockwise winding order: + +```json +{ + "name": "Green Square", + "geom": "POLYGON ((1 1, 1 3, 3 3, 3 1, 1 1))" +} +``` + +The following examples define one `POINT` and three `POLYGON` primitives, +which are shown in the image below: + +``` +POINT (2 2) +POLYGON ((1 1, 1 3, 3 3, 3 1, 1 1)) +POLYGON ((2 2.5, 2 3.5, 3.5 3.5, 3.5 2.5, 2 2.5)) +POLYGON ((3.5 1, 3.75 2, 4 1, 3.5 1)) +``` + +{{< image filename="/images/dev/rqe/geoshapes.jpg" >}} + +You can run various types of queries against a geospatial index. For +example, the query below returns one primitive that lies within the boundary +of the green square (from the example above) but omits the square itself: + +```bash +> FT.SEARCH geomidx "(-@name:(Green Square) @geom:[WITHIN $qshape])" PARAMS 2 qshape "POLYGON ((1 1, 1 3, 3 3, 3 1, 1 1))" RETURN 1 name DIALECT 2 + +1) (integer) 1 +2) "shape:4" +3) 1) "name" + 2) "[\"Purple Point\"]" +``` + +There are four query operations that you can use with `GEOSHAPE` fields: + +- `WITHIN`: Find points or shapes that lie entirely within an + enclosing shape that you specify in the query. +- `CONTAINS`: Find shapes that completely contain the specified point + or shape. +- `INTERSECTS`: Find shapes whose boundary overlaps another specified + shape. +- `DISJOINT`: Find shapes whose boundary does not overlap another specified + shape. + +See +[Geospatial queries]({{< relref "/develop/interact/search-and-query/query/geo-spatial" >}}) +for more information about these query types and see +[Geospatial indexing]({{< relref "/develop/interact/search-and-query/indexing/geoindex" >}}) +for examples of indexing `GEOSHAPE` fields. + +## Limitations of geographical coordinates + +Planet Earth is actually shaped more like an +[ellipsoid](https://en.wikipedia.org/wiki/Earth_ellipsoid) than a perfect sphere. +The spherical coordinate system used by Redis Query Engine is a close +approximation to the shape of the Earth but not exact. For most practical +uses of geospatial queries, the approximation works very well, but you +shouldn't rely on it if you need very precise location data (for example, to track +the GPS locations of boats in an emergency response system). +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Support for sorting query results +linkTitle: Sorting +title: Sorting by indexed fields +weight: 5 +--- + +As of RediSearch 0.15, you can bypass the scoring function mechanism and order search results by the value of different document attributes (fields) directly, even if the sorting field is not used by the query. For example, you can search for first name and sort by last name. 
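For instance, with an index like the one below (a minimal sketch; the index name, prefix, and field names are purely illustrative), you could match on the first name and order the results by the last name:

```
FT.CREATE people-idx ON HASH PREFIX 1 person: SCHEMA first_name TEXT last_name TEXT SORTABLE

FT.SEARCH people-idx "@first_name:john" SORTBY last_name ASC
```

The sections below explain how to declare sortable fields and how `SORTBY` behaves.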
+ +## Declaring sortable fields + +When creating an index with [`FT.CREATE`]({{< relref "commands/ft.create/" >}}), you can declare `TEXT`, `TAG`, `NUMERIC`, and `GEO` attributes as `SORTABLE`. When an attribute is sortable, you can order the results by its values with relatively low latency. When an attribute is not sortable, it can still be sorted by its values, but with increased latency. For example, in the following schema: + +``` +FT.CREATE users SCHEMA first_name TEXT last_name TEXT SORTABLE age NUMERIC SORTABLE +``` + +The fields `last_name` and `age` are sortable, but `first_name` isn't. This means you can search by either first and/or last name, and sort by last name or age. + +### Note on sortable fields + +In the current implementation, when declaring a sortable field, its content gets copied into a special location in the index that provides for fast access during sorting. This means that making long fields sortable is very expensive and you should be careful with it. + +### Normalization (UNF option) + +By default, text fields get normalized and lowercased in a Unicode-safe way when stored for sorting. For example, `America` and `america` are considered equal in terms of sorting. + +Using the `UNF` (un-normalized form) argument, it is possible to disable the normalization and keep the original form of the value. Therefore, `America` will come before `america`. + +## Specifying SORTBY + +If an index includes sortable fields, you can add the `SORTBY` parameter to the search request (outside the query body) to order the results. This overrides the scoring function mechanism, and the two cannot be combined. If `WITHSCORES` is specified together with `SORTBY`, the scores returned are simply the relative position of each result in the result set. + +The syntax for `SORTBY` is: + +``` +SORTBY {field_name} [ASC|DESC] +``` + +* `field_name` must be a sortable field defined in the schema. + +* `ASC` means ascending order, `DESC` means descending order. + +* The default ordering is `ASC`. + +## Example + +``` +> FT.CREATE users ON HASH PREFIX 1 "user" SCHEMA first_name TEXT SORTABLE last_name TEXT age NUMERIC SORTABLE + +# Add some users +> HSET user1 first_name "alice" last_name "jones" age 35 +> HSET user2 first_name "bob" last_name "jones" age 36 + +# Searching while sorting + +# Searching by last name and sorting by first name +> FT.SEARCH users "@last_name:jones" SORTBY first_name DESC + +# Searching by both first and last name, and sorting by age +> FT.SEARCH users "jones" SORTBY age ASC +```--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Details on synonym support with Redis Open Source +linkTitle: Synonym +title: Synonym support +weight: 11 +--- + +Redis Open Source supports synonyms. That is, searching for synonym words defined by the synonym data structure. + +The synonym data structure is a set of groups, each of which contains synonym terms. For example, the following synonym data structure contains three groups, and each group contains three synonym terms: + +``` +{boy, child, baby} +{girl, child, baby} +{man, person, adult} +``` + +When these three groups are located inside the synonym data structure, it is possible to search for "child" and receive documents containing "boy", "girl", "child" and "baby". + +## The synonym search technique + +A simple HashMap is used to map between the terms and the group IDs. 
During index creation, a check is made to see if the current term appears in the synonym map, and if it does, all the group IDs that the term belongs to are taken.

For each group ID, another record is added to the inverted index called `~<group_id>` that contains the same information as the term itself. When performing a search, a check is made to see if the searched term appears in the synonym map, and if it does, all the group IDs the term belongs to are taken. For each group ID, the engine searches for `~<group_id>` and returns the combined results. This technique ensures that all the synonyms of a given term will be returned.

## Handling concurrency

Since the indexing is performed in a separate thread, the synonyms map may change during indexing, which in turn may cause data corruption or crashes during indexing or searching. To solve this issue, a read-only copy is created for indexing purposes. The read-only copy is maintained using a reference count.

As long as the synonyms map does not change, the original synonym map holds a reference to its read-only copy, so it will not be freed. After the data inside the synonyms map has changed, the synonyms map decreases the reference count of its read-only copy. This ensures that when all the indexers are done using the read-only copy, it will automatically be freed. It also ensures that the next time an indexer asks for a read-only copy, the synonyms map will create a new copy (containing the new data) and return it.

## Example

```
# Create an index
> FT.CREATE idx schema t text

# Create a synonym group
> FT.SYNUPDATE idx group1 hello world

# Insert documents
> HSET foo t hello
(integer) 1
> HSET bar t world
(integer) 1

# Search
> FT.SEARCH idx hello
1) (integer) 2
2) "foo"
3) 1) "t"
   2) "hello"
4) "bar"
5) 1) "t"
   2) "world"
```
---
categories:
- docs
- develop
- stack
- oss
- rs
- rc
- oss
- kubernetes
- clients
description: Query spelling correction support
linkTitle: Spellchecking
title: Spellchecking
weight: 13
---

Query spelling correction provides suggestions for misspelled search terms. For example, the term 'reids' may be a misspelled version of 'redis'.

In such cases, and as of v1.4, RediSearch can be used for generating alternatives to misspelled query terms. A misspelled term is a full text term (i.e., a word) that is:

 1. Not a stop word
 2. Not in the index
 3. At least 3 characters long

The alternatives for a misspelled term are generated from the corpus of already-indexed terms and, optionally, one or more custom dictionaries. Alternatives become spelling suggestions based on their respective Levenshtein distances from the misspelled term. Each spelling suggestion is given a normalized score based on its occurrences in the index.

To obtain the spelling corrections for a query, refer to the documentation of the [`FT.SPELLCHECK`]({{< relref "commands/ft.spellcheck/" >}}) command.

## Custom dictionaries

A dictionary is a set of terms. Dictionaries can be added with terms, have terms deleted from them, and have their entire contents dumped using the [`FT.DICTADD`]({{< relref "commands/ft.dictadd/" >}}), [`FT.DICTDEL`]({{< relref "commands/ft.dictdel/" >}}) and [`FT.DICTDUMP`]({{< relref "commands/ft.dictdump/" >}}) commands, respectively.

Dictionaries can be used to modify the behavior of spelling corrections by including or excluding their contents from potential spelling correction suggestions.
+ +When used for term inclusion, the terms in a dictionary can be provided as spelling suggestions regardless of their occurrence in the index. Scores of suggestions from inclusion dictionaries are always 0. + +Conversely, terms in an exclusion dictionary will never be returned as spelling alternatives. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Details about query syntax, aggregation, scoring, and other search and + query options +linkTitle: Advanced concepts +title: Advanced concepts +weight: 7 +--- + +Redis Open Source supports the following Redis Query Engine features. This article provides you an overview. + +## Indexing features + +* Secondary indexing +* Vector indexing +* Index on [JSON]({{< relref "/develop/data-types/json/" >}}) documents +* Full-text indexing of multiple fields in a document +* Incremental indexing without performance loss +* Document deletion and updating with index garbage collection + + +## Query features + +* Multi-field queries +* Query on [JSON]({{< relref "/develop/data-types/json/" >}}) documents +* [Aggregation]({{< relref "/develop/interact/search-and-query/advanced-concepts/aggregations" >}}) +* Boolean queries with AND, OR, and NOT operators between subqueries +* Optional query clauses +* Retrieval of full document contents or only their IDs +* Exact phrase search and slop-based search +* Numeric filters and ranges +* Geo-filtering using Redis [geo commands]({{< relref "/commands/" >}}?group=geo) +* [Vector search]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}) + + +## Full-text search features + +* [Prefix-based searches]({{< relref "/develop/interact/search-and-query/query/#prefix-matching" >}}) +* Field weights +* [Auto-complete]({{< relref "develop/interact/search-and-query/administration/overview#auto-complete" >}}) and fuzzy prefix suggestions +* [Stemming]({{< relref "/develop/interact/search-and-query/advanced-concepts/stemming" >}})-based query expansion for [many languages]({{< relref "develop/interact/search-and-query/advanced-concepts/stemming#supported-languages" >}}) using [Snowball](http://snowballstem.org/) +* Support for custom functions for query expansion and scoring (see [Extensions]({{< relref "/develop/interact/search-and-query/administration/extensions" >}})) +* Unicode support (UTF-8 input required) +* Document ranking + +## Cluster support + +The Redis Query Engine features of Redis Open Source are also available for distributed databases that can scale to billions of documents and hundreds of servers. + +## Supported platforms +Redis Open Source is developed and tested on Linux and macOS on x86_64 CPUs. + +Atom CPUs are not supported. + +
--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Full-text scoring functions +linkTitle: Scoring +title: Scoring documents +weight: 8 +--- + +When searching, documents are scored based on their relevance to the query. The score is a floating point number between 0.0 and 1.0, where 1.0 is the highest score. The score is returned as part of the search results and can be used to sort the results. + +Redis Open Source comes with a few very basic scoring functions to evaluate document relevance. They are all based on document scores and term frequency. This is regardless of the ability to use [sortable fields]({{< relref "/develop/interact/search-and-query/advanced-concepts/sorting" >}}). Scoring functions are specified by adding the `SCORER {scorer_name}` argument to a search query. + +If you prefer a custom scoring function, it is possible to add more functions using the [extension API]({{< relref "/develop/interact/search-and-query/administration/extensions" >}}). + +The following is a list of the pre-bundled scoring functions available in Redis and a short explanation about how they work. Each function is mentioned by registered name, which can be passed as a `SCORER` argument in [`FT.SEARCH`]({{< relref "/commands/ft.search/" >}}). + +## TFIDF (default) + +Basic [TF-IDF scoring](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) with a few extra features: + +1. For each term in each result, the TF-IDF score of that term is calculated to that document. Frequencies are weighted based on field weights that are pre-determined, and each term's frequency is normalized by the highest term frequency in each document. + +2. The total TF-IDF for the query term is multiplied by the presumptive document score given on `FT.CREATE` via `SCORE_FIELD`. + +3. A penalty is assigned to each result based on "slop" or cumulative distance between the search terms. Exact matches will get no penalty, but matches where the search terms are distant will have their score reduced significantly. For each bigram of consecutive terms, the minimal distance between them is determined. The penalty is the square root of the sum of the distances squared; e.g., `1/sqrt(d(t2-t1)^2 + d(t3-t2)^2 + ...)`. + +Given N terms in document D, `T1...Tn`, the resulting score could be described with this Python function: + +```py +def get_score(terms, doc): + # the sum of tf-idf + score = 0 + + # the distance penalty for all terms + dist_penalty = 0 + + for i, term in enumerate(terms): + # tf normalized by maximum frequency + tf = doc.freq(term) / doc.max_freq + + # idf is global for the index, and not calculated each time in real life + idf = log2(1 + total_docs / docs_with_term(term)) + + score += tf*idf + + # sum up the distance penalty + if i > 0: + dist_penalty += min_distance(term, terms[i-1])**2 + + # multiply the score by the document score + score *= doc.score + + # divide the score by the root of the cumulative distance + if len(terms) > 1: + score /= sqrt(dist_penalty) + + return score +``` + +## TFIDF.DOCNORM + +Identical to the default `TFIDF` scorer, with one important distinction: + +Term frequencies are normalized by the length of the document, expressed as the total number of terms. The length is weighted, so that if a document contains two terms, one in a field that has a weight 1 and one in a field with a weight of 5, the total frequency is 6, not 2. 
+ +``` +FT.SEARCH myIndex "foo" SCORER TFIDF.DOCNORM +``` + +## BM25 + +A variation on the basic `TFIDF` scorer, see [this Wikipedia article for more info](https://en.wikipedia.org/wiki/Okapi_BM25). + +The relevance score for each document is multiplied by the presumptive document score and a penalty is applied based on slop as in `TFIDF`. + +``` +FT.SEARCH myIndex "foo" SCORER BM25 +``` + +## DISMAX + +A simple scorer that sums up the frequencies of matched terms. In the case of union clauses, it will give the maximum value of those matches. No other penalties or factors are applied. + +It is not a one-to-one implementation of [Solr's DISMAX algorithm](https://wiki.apache.org/solr/DisMax), but it follows it in broad terms. + +``` +FT.SEARCH myIndex "foo" SCORER DISMAX +``` + +## DOCSCORE + +A scoring function that just returns the presumptive score of the document without applying any calculations to it. Since document scores can be updated, this can be useful if you'd like to use an external score and nothing further. + +``` +FT.SEARCH myIndex "foo" SCORER DOCSCORE +``` + +## HAMMING + +Scoring by the inverse Hamming distance between the document's payload and the query payload is performed. Since the nearest neighbors are of interest, the inverse Hamming distance (`1/(1+d)`) is used so that a distance of 0 gives a perfect score of 1 and is the highest rank. + +This only works if: + +1. The document has a payload. +2. The query has a payload. +3. Both are exactly the same length. + +Payloads are binary-safe, and having payloads with a length that is a multiple of 64 bits yields slightly faster results. + +Example: + +``` +> HSET key:1 foo hello payload aaaabbbb +(integer) 2 + +> HSET key:2 foo bar payload aaaacccc +(integer) 2 + +> FT.CREATE idx ON HASH PREFIX 1 key: PAYLOAD_FIELD payload SCHEMA foo TEXT +"OK" + +> FT.SEARCH idx "*" PAYLOAD "aaaabbbc" SCORER HAMMING WITHSCORES +1) "2" +2) "key:1" +3) "0.5" +4) 1) "foo" + 2) "hello" +5) "key:2" +6) "0.25" +7) 1) "foo" + 2) "bar" +``` +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Available field types and options. +linkTitle: Field and type options +title: Field and type options +weight: 2 +--- + + +Redis Open Source provides various field types that allow you to store and search different kinds of data in your indexes. This page explains the available field types, their characteristics, and how they can be used effectively. + +## Numeric fields + +Numeric fields are used to store non-textual, countable values. They can hold integer or floating-point values. Numeric fields are sortable, meaning you can perform range-based queries and retrieve documents based on specific numeric conditions. For example, you can search for documents with a price between a certain range or retrieve documents with a specific rating value. + +You can add number fields to a schema in [`FT.CREATE`]({{< relref "commands/ft.create/" >}}) using this syntax: + +``` +FT.CREATE ... SCHEMA ... {field_name} NUMERIC [SORTABLE] [NOINDEX] +``` + +where: + +- `SORTABLE` indicates that the field can be sorted. This is useful for performing range queries and sorting search results based on numeric values. +- `NOINDEX` indicates that the field is not indexed. This is useful for storing numeric values that you don't want to search for, but that you want to retrieve in search results. + +You can search for documents with specific numeric values using the `@:[ ]` query syntax. 
For example, this query finds documents with a price between 200 and 300:

```
FT.SEARCH products "@price:[200 300]"
```

You can also use the following query syntax to perform more complex numeric queries:

| **Comparison operator** | **Query string** | **Comment** |
|-------------------------|-------------------------------|--------------------------|
| min <= x <= max | @field:[min max] | Fully inclusive range |
| | "@field>=min @field<=max" | Fully inclusive range \* |
| min < x < max | @field:[(min (max] | Fully exclusive range |
| | "@field>min @field<max" | Fully exclusive range \* |
| x >= min | @field:[min +inf] | Upper open range |
| | @field>=min | Upper open range \* |
| x <= max | @field:[-inf max] | Lower open range |
| | @field<=max | Lower open range \* |
| x == val | @field:[val val] | Equal |
| | @field:[val] | Equal \* |
| | @field==val | Equal \* |
| x != val | -@field:[val val] | Not equal |
| | @field!=val | Not equal \* |
| x == val1 or x == val2 | "@field==val1 \| @field==val2" | Grouping with a bar denotes OR relationship \* |

\* New syntax as of RediSearch v2.10. Requires [`DIALECT 2`]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects" >}}#dialect-2).


## Geo fields

Geo fields are used to store geographical coordinates such as longitude and latitude. They enable geospatial radius queries, which allow you to implement location-based search functionality in your applications such as finding nearby restaurants, stores, or any other points of interest.

Redis Query Engine also supports [geoshape fields](#geoshape-fields) for more advanced
geospatial queries. See the
[Geospatial]({{< relref "/develop/interact/search-and-query/advanced-concepts/geo" >}})
reference page for an introduction to the format and usage of both schema types.

You can add geo fields to the schema in [`FT.CREATE`]({{< relref "commands/ft.create/" >}}) using this syntax:

```
FT.CREATE ... SCHEMA ... {field_name} GEO [SORTABLE] [NOINDEX]
```

Where:
- `SORTABLE` indicates that the field can be sorted. This is useful for performing range queries and sorting search results based on coordinates.
- `NOINDEX` indicates that the field is not indexed. This is useful for storing coordinates that you don't want to search for, but that you still want to retrieve in search results.

You can query geo fields using the `@<field_name>:[<lon> <lat> <radius> <unit>]` query syntax. For example, this query finds documents within 1000 kilometers from the point `2.34, 48.86`:

```
FT.SEARCH cities "@coords:[2.34 48.86 1000 km]"
```

See
[Geospatial queries]({{< relref "/develop/interact/search-and-query/query/geo-spatial" >}})
for more information and code examples.

## Geoshape fields

Geoshape fields provide more advanced functionality than [Geo](#geo-fields).
You can use them to represent locations as points but also to define
shapes and query the interactions between points and shapes (for example,
to find all points that are contained within an enclosing shape). You can
also choose between geographical coordinates (on the surface of a sphere)
or standard Cartesian coordinates. Use geoshape fields for spatial queries
such as finding all office locations in a specified region or finding
all rooms in a building that fall within range of a wi-fi router.

See the
[Geospatial]({{< relref "/develop/interact/search-and-query/advanced-concepts/geo" >}})
reference page for an introduction to the format and usage of both the
geoshape and geo schema types.
+
Add geoshape fields to the schema in
[`FT.CREATE`]({{< relref "commands/ft.create/" >}}) using the following syntax:

```
FT.CREATE ... SCHEMA ... {field_name} GEOSHAPE [FLAT|SPHERICAL] [NOINDEX]
```

Where:
- `FLAT` indicates Cartesian (planar) coordinates.
- `SPHERICAL` indicates spherical (geographical) coordinates. This is the
  default option if you don't specify one explicitly.
- `NOINDEX` indicates that the field is not indexed. This is useful for storing
  coordinates that you don't want to search for, but that you still want to retrieve
  in search results.

Note that unlike geo fields, geoshape fields don't support the `SORTABLE` option.

Query geoshape fields using the syntax `@<field_name>:[<operator> <shape>]`
where `<operator>` is one of `WITHIN`, `CONTAINS`, `INTERSECTS`, or `DISJOINT`,
and `<shape>` is the shape of interest, specified in the
[Well-known text](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry)
format. For example, the query below finds shapes that contain the point (2, 2):

```
FT.SEARCH idx "(@geom:[CONTAINS $qshape])" PARAMS 2 qshape "POINT (2 2)" RETURN 1 name DIALECT 2
```

See
[Geospatial queries]({{< relref "/develop/interact/search-and-query/query/geo-spatial" >}})
for more information and code examples.

## Vector fields

Vector fields are floating-point vectors that are typically generated by external machine learning models. These vectors represent unstructured data such as text, images, or other complex features. Redis allows you to search for similar vectors using vector search algorithms like cosine similarity, Euclidean distance, and inner product. This enables you to build advanced search applications, recommendation systems, or content similarity analysis.

You can add vector fields to the schema in [`FT.CREATE`]({{< relref "commands/ft.create/" >}}) using this syntax:

```
FT.CREATE ... SCHEMA ... {field_name} VECTOR {algorithm} {count} [{attribute_name} {attribute_value} ...]
```

Where:

* `{algorithm}` must be specified and be a supported vector similarity index algorithm. The supported algorithms are:

    - `FLAT`: brute force algorithm.
    - `HNSW`: hierarchical navigable small world algorithm.

    The `{algorithm}` attribute specifies the algorithm to use when searching for the `k` most similar vectors in the index or filtering vectors by range.

* `{count}` specifies the number of attributes for the index and it must be present.
Notice that `{count}` represents the total number of attribute names and values passed in the command. Algorithm parameters should be submitted as named arguments.

    For example:

    ```
    FT.CREATE my_idx SCHEMA vec_field VECTOR FLAT 6 TYPE FLOAT32 DIM 128 DISTANCE_METRIC L2
    ```

    Here, three parameters are passed for the index (`TYPE`, `DIM`, `DISTANCE_METRIC`), and `count` is the total number of attributes (6).

* `{attribute_name} {attribute_value}` are algorithm attributes for the creation of the vector index. Every algorithm has its own mandatory and optional attributes.

For more information about vector fields, see [vector fields]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}).

## Tag fields

Tag fields are used to store textual data that represents a collection of data tags or labels. Tag fields are characterized by their low cardinality, meaning they typically have a limited number of distinct values. Unlike text fields, tag fields are stored as-is without tokenization or stemming.
They are useful for organizing and categorizing data, making it easier to filter and retrieve documents based on specific tags.

Tag fields can be added to the schema with the following syntax:

```
FT.CREATE ... SCHEMA ... {field_name} TAG [SEPARATOR {sep}] [CASESENSITIVE]
```

where

- `SEPARATOR` defaults to a comma (`,`), and can be any printable ASCII character. It is used to separate tags in the field value. For example, if the field value is `hello,world`, the tags are `hello` and `world`.

- `CASESENSITIVE` indicates that the field is case-sensitive. By default, tag fields are case-insensitive.

You can search for documents with specific tags using the `@<field_name>:{<tag>}` query syntax. For example, this query finds documents with the tag `blue`:

```
FT.SEARCH idx "@tags:{blue}"
```

For more information about tag fields, see [Tag Fields]({{< relref "/develop/interact/search-and-query/advanced-concepts/tags" >}}).

## Text fields

Text fields are specifically designed for storing human language text. When indexing text fields, Redis performs several transformations to optimize search capabilities. The text is transformed to lowercase, allowing case-insensitive searches. The data is tokenized, meaning it is split into individual words or tokens, which enables efficient full-text search functionality. Text fields can be weighted to assign different levels of importance to specific fields during search operations. Additionally, text fields can be sorted based on their values, enabling the sorting of search results by relevance or other criteria.

Text fields can be added to the schema with the following syntax:

```
FT.CREATE ... SCHEMA ... {field_name} TEXT [WEIGHT {weight}] [NOSTEM] [PHONETIC {matcher}] [SORTABLE] [NOINDEX] [WITHSUFFIXTRIE]
```

where

- `WEIGHT {weight}` assigns a weight to the field. This is useful for assigning different levels of importance to specific fields during search operations.
- `NOSTEM` indicates that the field is not stemmed. This is useful for storing text that you don't want to be stemmed, such as URLs or email addresses.
- `PHONETIC {matcher}` Declaring a text attribute as `PHONETIC` will perform phonetic matching on it in searches by default. The obligatory matcher argument specifies the phonetic algorithm and language used. The following matchers are supported:

    - `dm:en` - double metaphone for English
    - `dm:fr` - double metaphone for French
    - `dm:pt` - double metaphone for Portuguese
    - `dm:es` - double metaphone for Spanish

    For more information, see [Phonetic Matching]({{< relref "/develop/interact/search-and-query/advanced-concepts/phonetic_matching" >}}).
- `SORTABLE` indicates that the field can be sorted. This is useful for performing range queries and sorting search results based on text values.
- `NOINDEX` indicates that the field is not indexed. This is useful for storing text that you don't want to search for, but that you still want to retrieve in search results.
- `WITHSUFFIXTRIE` indicates that the field will be indexed with a suffix trie. The index will keep a suffix trie with all terms which match the suffix. It is used to optimize `contains (*foo*)` and `suffix (*foo)` queries. Otherwise, a brute-force search on the trie is performed. If a suffix trie exists for some fields, these queries will be disabled for other fields.

You can search for documents with specific text values using the `<term>` or the `@<field_name>:<term>` query syntax.
Here are a couple of examples: + +- Search for a term in every text attribute: + ``` + FT.SEARCH books-idx "wizard" + ``` + +- Search for a term only in the `title` attribute + ``` + FT.SEARCH books-idx "@title:dogs" + ``` + +## Unicode considerations + +Redis Query Engine only supports Unicode characters in the [basic multilingual plane](https://en.wikipedia.org/wiki/Plane_(Unicode)#Basic_Multilingual_Plane); U+0000 to U+FFFF. Unicode characters beyond U+FFFF, such as Emojis, are not supported and would not be retrieved by queries including such characters in the following use cases: + +* Querying TEXT fields with Prefix/Suffix/Infix +* Querying TEXT fields with fuzzy + +Examples: + +``` +redis> FT.CREATE idx SCHEMA tag TAG text TEXT +OK +redis> HSET doc:1 tag '😀😁🙂' text '😀😁🙂' +(integer) 2 +redis> HSET doc:2 tag '😀😁🙂abc' text '😀😁🙂abc' +(integer) 2 +redis> FT.SEARCH idx '@text:(*😀😁🙂)' NOCONTENT +1) (integer) 0 +redis> FT.SEARCH idx '@text:(*😀😁🙂*)' NOCONTENT +1) (integer) 0 +redis> FT.SEARCH idx '@text:(😀😁🙂*)' NOCONTENT +1) (integer) 0 + +redis> FT.SEARCH idx '@text:(%😀😁🙃%)' NOCONTENT +1) (integer) 0 +```--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'How to define the schema of an index. + + ' +linkTitle: Schema definition +title: Schema definition +weight: 1 +--- + +An index structure is defined by a schema. The schema specifies the fields, their types, whether they should be indexed or stored, and other additional configuration options. By properly configuring the schema, you can optimize search performance and control the storage requirements of your index. + +``` +FT.CREATE idx + ON HASH + PREFIX 1 blog:post: +SCHEMA + title TEXT WEIGHT 5.0 + content TEXT + author TAG + created_date NUMERIC SORTABLE + views NUMERIC +``` + +In this example, a schema is defined for an index named `idx` that will index all hash documents whose keyname starts with `blog:post:`. +The schema includes the fields `title`, `content`, `author`, `created_date`, and `views`. The `TEXT` type indicates that the `title` and `content` fields are text-based, the `TAG` type is used for the `author` field, and the `NUMERIC` type is used for the `created_date` and `views` fields. Additionally, a weight of 5.0 is assigned to the `title` field to give it more relevance in search results, and `created_date` is marked as `SORTABLE` to enable sorting based on this field. + +You can learn more about the available field types and options on the [`FT.CREATE`]({{< relref "commands/ft.create/" >}}) page. + +## More schema definition examples + +##### Index tags with a separator + +Index books that have a `categories` attribute, where each category is separated by a `;` character. + +``` +FT.CREATE books-idx + ON HASH + PREFIX 1 book:details +SCHEMA + title TEXT + categories TAG SEPARATOR ";" +``` + +##### Index a single field in multiple ways + +Index the `sku` attribute from a hash as both a `TAG` and as `TEXT`: + +``` +FT.CREATE idx + ON HASH + PREFIX 1 blog:post: +SCHEMA + sku AS sku_text TEXT + sku AS sku_tag TAG SORTABLE +``` + +##### Index documents with multiple prefixes + +Index two different hashes, one containing author data and one containing book data: +``` +FT.CREATE author-books-idx + ON HASH + PREFIX 2 author:details: book:details: +SCHEMA + author_id TAG SORTABLE + author_ids TAG + title TEXT name TEXT +``` + +In this example, keys for author data use the key pattern `author:details:`, while keys for book data use the pattern `book:details:`. 
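As a rough sketch (the key names and field values here are purely illustrative), hashes stored under either prefix are picked up by this single index, so one query can return both author and book documents:

```
HSET author:details:1 author_id "1" name "Herman Melville"
HSET book:details:1 author_ids "1" title "Moby Dick"

FT.SEARCH author-books-idx "melville | moby"
```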
+ +##### Only index documents if a field specifies a certain value using `FILTER` + +Index authors whose names start with G: + +``` +FT.CREATE g-authors-idx + ON HASH + PREFIX 1 author:details + FILTER 'startswith(@name, "G")' +SCHEMA + name TEXT +``` + +Index only books that have a subtitle: + +``` +FT.CREATE subtitled-books-idx + ON HASH + PREFIX 1 book:details + FILTER '@subtitle != ""' +SCHEMA + title TEXT +``` + +##### Index a JSON document using a JSONPath expression + +Index a JSON document that has `title` and `categories` fields. The `title` field is indexed as `TEXT` and the `categories` field is indexed as `TAG`. + +``` +FT.CREATE idx + ON JSON +SCHEMA + $.title AS title TEXT + $.categories AS categories TAG +``` + + +You can learn more about the available field types and options on the [`FT.CREATE`]({{< relref "commands/ft.create/" >}}) page.--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Basic constructs for searching and querying Redis data +linkTitle: Basic constructs +title: Basic constructs +weight: 2 +--- + +You can use Redis Open Source as a powerful search and query engine. It allows you to create indexes and perform efficient queries on structured data, as well as text-based and vector searches on unstructured data. + +This section introduces the basic constructs of querying and searching, and explains how to use them to build powerful search capabilities into your applications. + +## Documents + +A document is the basic unit of information. It can be any hash or JSON data object you want to be able to index and search. Each document is uniquely identifiable by its key name. + +## Fields + +A document consists of multiple fields, where each field represents a specific attribute or property of the document. Fields can store different types of data, such as strings, numbers, geo-location or even more complex structures like vectors. By indexing these fields, you enable efficient querying and searching based on their values. + +Not all documents need to have the same fields. You can include or exclude fields based on the specific requirements of your application or data model. + +## Indexing fields + +Not all fields are relevant to perform search operations, and indexing all fields may lead to unnecessary overhead. That's why you have the flexibility to choose which fields should be indexed for efficient search operations. By indexing a field, you enable Redis to create an index structure that optimizes search performance on that field. + +Fields that are not indexed will not contribute to search results. However, they can still be retrieved as part of the document data when fetching search results. + +## Schema + +The index structure is defined by a schema. The schema defines how fields are stored and indexed. It specifies the type of each field and other important information. + +To create an index, you need to define the schema for your collection. Learn more about how to define the schema on the [schema definition]({{< relref "/develop/interact/search-and-query/basic-constructs/schema-definition" >}}) page. 
+ +## Learn more: +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Technical overview of the search and query features of Redis Open Source' +linkTitle: Technical overview +title: Technical overview +weight: 1 +--- + +## Abstract + +Redis Query Engine ("RQE") is a powerful text search and secondary indexing engine that is built on top of Redis Open Source. + +Unlike other Redis search libraries, it does not use the internal data structures of Redis such as sorted sets. Using its own highly optimized data structures and algorithms, it allows for advanced search features, high performance, and a low memory footprint. It can perform simple text searches, as well as complex structured queries, filtering by numeric properties and geographical distances. + +RQE supports continuous indexing with no performance degradation, maintaining concurrent loads of querying and indexing. This makes it ideal for searching frequently updated databases without the need for batch indexing and service interrupts. + +The Enterprise version of RQE supports scaling the search engine across many servers, allowing it to easily grow to billions of documents on hundreds of servers. + +All of this is done while taking advantage of Redis's robust architecture and infrastructure. Using Redis's protocol, replication, persistence, and clustering, RQE delivers a powerful yet simple to manage and maintain search and indexing engine that can be used as a standalone database, or to augment existing Redis databases with advanced powerful indexing capabilities. + +--- + +## Main features + +* Full-Text indexing of multiple fields in a document, including: + * Exact phrase matching. + * Stemming in many languages. + * Chinese tokenization support. + * Prefix queries. + * Optional, negative, and union queries. +* Distributed search on billions of documents. +* Numeric property indexing. +* Geographical indexing and radius filters. +* Incremental indexing without performance loss. +* A structured query language for advanced queries: + * Unions and intersections + * Optional and negative queries + * Tag filtering + * Prefix matching +* A powerful autocomplete engine with fuzzy matching. +* Multiple scoring models and sorting by values. +* Concurrent, low-latency insertion and updates of documents. +* Concurrent searches allowing long-running queries without blocking Redis. +* An extension mechanism allowing custom scoring models and query extension. +* Support for indexing existing Hash objects in Redis databases. + +--- + +## Indexing documents + +Redis needs to know how to index documents in order to search effectively. A document may have several fields, each with its own weight. For example, a title is usually more important than the text itself. The engine can also use numeric or geographical fields for filtering. Hence, the first step is to create the index definition, which tells Redis how to treat the documents that will be added. For example, to define an index of products, indexing their title, description, brand, and price fields, the index creation would look like: + +``` +FT.CREATE idx PREFIX 1 doc: SCHEMA + title TEXT WEIGHT 5 + description TEXT + brand TEXT + PRICE numeric +``` + +When a document is added to this index, as in the following example, each field of the document is broken into its terms (tokenization), and indexed by marking the index for each of the terms in the index. 
As a result, the product is added immediately to the index and can now be found in future searches.

```
HSET doc:1
    title "Acme 42 inch LCD TV"
    description "42 inch brand new Full-HD tv with smart tv capabilities"
    brand "Acme"
    price 300
```

## Searching

Now that the products have been added to our index, searching is very simple:

```
FT.SEARCH idx "full hd tv"
```

This will tell Redis to intersect the lists of documents for each term and return all documents containing the three terms. Of course, more complex queries can be performed, and the full syntax of the query language is detailed below.

## Data structures

Redis uses its own custom data structures and uses Redis' native structures only for storing the actual document content (using Hash objects).

Using specialized data structures allows faster searches and more memory-efficient storage of index records, utilizing compression techniques like delta encoding.

These are the data structures Redis uses under the hood:

### Index and document metadata

For each search _index_, there is a root data structure containing the schema, statistics, etc., but most importantly, compact metadata about each document indexed.

Internally, inside the index, Redis uses delta-encoded lists of numeric, incremental, 32-bit document IDs. This means that the user-given keys or IDs for documents need to be replaced with the internal IDs on indexing, and mapped back to the original IDs on search.

For that, Redis saves two tables, mapping the two kinds of IDs in two ways (one table uses a compact trie, the other is simply an array where the internal document ID is the array index). On top of that, for each document, its user-given presumptive score is stored, along with some status bits and any optional payload attached to the document by the user.

Accessing the document metadata table is an order of magnitude faster than accessing the hash object where the document is actually saved, so scoring functions that need to access metadata about the document can operate fast enough.

### Inverted index

For each term appearing in at least one document, an inverted index is kept, which is basically a list of all the documents where this term appears. The list is compressed using delta coding, and the document IDs are always incrementing.

For example, when a user indexes the documents "foo", "bar", and "baz", they are assigned incrementing IDs, e.g., `1025, 1045, 1080`. When encoding them into the index, only the first ID is encoded, followed by the deltas between each entry and the previous one, e.g., `1025, 20, 35`.

Using variable-width encoding, one byte is used to express numbers under 255, two bytes for numbers between 256 and 16,383, and so on. This can compress the index by up to 75%.

On top of the IDs, the frequency of each term in each document, a bit mask representing the fields in which the term appeared in the document, and a list of the positions in which the term appeared is saved.

The structure of the default search record is as follows.
Usually, all the entries are one byte long: + +``` ++----------------+------------+------------------+-------------+------------------------+ +| docId_delta | frequency | field mask | offsets len | offset, offset, .... | +| (1-4 bytes) | (1-2 bytes)| (1-16 bytes) | (1-2 bytes)| (1-2 bytes per offset) | ++----------------+------------+------------------+-------------+------------------------+ +``` + +Optionally, you can choose not to save any one of those attributes besides the ID, degrading the features available to the engine. + +### Numeric index + +Numeric properties are indexed in a special data structure that enables filtering by numeric ranges in an efficient way. One could view a numeric value as a term operating just like an inverted index. For example, all the products with the price $100 are in a specific list, which is intersected with the rest of the query. See [query execution engine]({{< relref "develop/interact/search-and-query/administration/design#query-execution-engine" >}}) for more information. + +However, in order to filter by a range of prices, you would have to intersect the query with all the distinct prices within that range, or perform a union query. If the range has many values in it, this becomes highly inefficient. + +To avoid this, numeric entries are grouped, with close values together, in a single range node. These nodes are stored in a binary range tree, which allows the engine to select the relevant nodes and union them together. Each entry in a range node contains a document Id and the actual numeric value for that document. To further optimize, the tree uses an adaptive algorithm to try to merge as many nodes as possible within the same range node. + +### Tag index + +Tag indexes are similar to full-text indexes, but use simpler tokenization and encoding in the index. The values in these fields cannot be accessed by general field-less search and can be used only with a special syntax. + +The main differences between tag fields and full-text fields are: + +1. The tokenization is simpler. The user can determine a separator (defaults to a comma) for multiple tags. Whitespace trimming is done only at the end of tags. Thus, tags can contain spaces, punctuation marks, accents, etc. The only two transformations that are performed are lower-casing (for latin languages only as of now) and whitespace trimming. + +2. Tags cannot be found from a general full-text search. If a document has a field called *tags* with the values *foo* and *bar*, searching for foo or bar without a special tag modifier (see below) will not return this document. + +3. The index is much simpler and more compressed. Only the document IDs are stored in the index, usually resulting in 1-2 bytes per index entry. + +### Geo index + +Geo indexes utilize Redis's own geo-indexing capabilities. At query time, the geographical part of the query (a radius filter) is sent to Redis, returning only the ids of documents that are within that radius. Longitude and latitude should be passed as a string `lon,lat`. For example, `1.23,4.56`. + +### Autocomplete + +The autocomplete engine (see below for a fuller description) uses a compact trie or prefix tree to encode terms and search them by prefix. + +## Query language + +Simple syntax is supported for complex queries that can be combined together to express complex filtering and matching rules. The query is a text string in the [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) request that is parsed using a complex query processor. 
+
* Multi-word phrases are lists of tokens, e.g., `foo bar baz`, and imply intersection (logical AND) of the terms.
* Exact phrases are wrapped in quotes, e.g., `"hello world"`.
* OR unions (e.g., `word1 OR word2`) are expressed with a pipe (`|`) character. For example, `hello|hallo|shalom|hola`.
* NOT negation (e.g., `word1 NOT word2`) of expressions or sub-queries uses the dash (`-`) character. For example, `hello -world`.
* Prefix matches (all terms starting with a prefix) are expressed with a `*` following a 2-letter or longer prefix.
* Selection of specific fields using the syntax `@field:hello world`.
* Numeric range matches on numeric fields with the syntax `@field:[{min} {max}]`.
* Geo radius matches on geo fields with the syntax `@field:[{lon} {lat} {radius} {m|km|mi|ft}]`.
* Tag field filters with the syntax `@field:{tag | tag | ...}`. See the [full documentation on tag fields]({{< relref "/develop/interact/search-and-query/query/#tag-filters" >}}).
* Optional terms or clauses: `foo ~bar` means bar is optional but documents with bar in them will rank higher.

### Complex query examples

Expressions can be combined together to express complex rules. For example, given a database of products, where each entity has the fields `title`, `brand`, `tags` and `price`, expressing a generic search would be simply:

```
lcd tv
```

This would return documents containing these terms in any field. Limiting the search to specific fields (title only in this case) is expressed as:

```
@title:(lcd tv)
```

Numeric filters can be combined to filter by price within a given price range:

```
    @title:(lcd tv)
    @price:[100 500.2]
```

Multiple text fields can be accessed in different query clauses. For example, to select products of multiple brands:

```
    @title:(lcd tv)
    @brand:(sony | samsung | lg)
    @price:[100 500.2]
```

Tag fields can be used to index multi-term properties without actual full-text tokenization:

```
    @title:(lcd tv)
    @brand:(sony | samsung | lg)
    @tags:{42 inch | smart tv}
    @price:[100 500.2]
```

And negative clauses can also be added to filter out plasma and CRT TVs:

```
    @title:(lcd tv)
    @brand:(sony | samsung | lg)
    @tags:{42 inch | smart tv}
    @price:[100 500.2]

    -@tags:{plasma | crt}
```

## Scoring model

Redis comes with a few very basic scoring functions to evaluate document relevance. They are all based on document scores and term frequency. This is regardless of the ability to use sortable fields (see below). Scoring functions are specified by adding the `SCORER {scorer_name}` argument to a search request.

If you prefer a custom scoring function, it is possible to add more functions using the [extension API]({{< relref "/develop/interact/search-and-query/administration/extensions" >}}).

These are the pre-bundled scoring functions available in Redis:

* **TFIDF** (default)

    Basic [TF-IDF scoring](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) with document score and proximity boosting factored in.

* **TFIDF.DOCNORM**

    Identical to the default TFIDF scorer, with one important distinction: term frequencies are normalized by the length of the document, expressed as the total number of terms.

* **BM25**

    A variation on the basic TF-IDF scorer. See [this Wikipedia article for more information](https://en.wikipedia.org/wiki/Okapi_BM25).

* **DISMAX**

    A simple scorer that sums up the frequencies of the matched terms. In the case of union clauses, it will give the maximum value of those matches.
+ +* **DOCSCORE** + + A scoring function that just returns the presumptive score of the document without applying any calculations to it. Since document scores can be updated, this can be useful if you'd like to use an external score and nothing further. + + +## Sortable fields + +It is possible to bypass the scoring function mechanism and order search results by the value of different document properties (fields) directly, even if the sorting field is not used by the query. For example, you can search for first name and sort by the last name. + +When creating the index with [`FT.CREATE`]({{< relref "commands/ft.create/" >}}), you can declare `TEXT`, `TAG`, `NUMERIC`, and `GEO` properties as `SORTABLE`. When a property is sortable, you can later decide to order the results by its values with relatively low latency. When a property is not sortable, it can still be sorted by its values, but may increase latency. For example, the following schema: + +``` +FT.CREATE users SCHEMA first_name TEXT last_name TEXT SORTABLE age NUMERIC SORTABLE +``` + +Would allow the following query: + +``` +FT.SEARCH users "john lennon" SORTBY age DESC +``` + +## Result highlighting and summarization + +Redis uses advanced algorithms for highlighting and summarizing, which enable only the relevant portions of a document to appear in response to a search query. This feature allows users to immediately understand the relevance of a document to their search criteria, typically highlighting the matching terms in bold text. The syntax is as follows: + +``` +FT.SEARCH ... + SUMMARIZE [FIELDS {num} {field}] [FRAGS {numFrags}] [LEN {fragLen}] [SEPARATOR {separator}] + HIGHLIGHT [FIELDS {num} {field}] [TAGS {openTag} {closeTag}] + +``` + +Summarization will fragment the text into smaller sized snippets. Each snippet will contain the found term(s) and some additional surrounding context. + +Highlighting will highlight the found term and its variants with a user-defined tag. This may be used to display the matched text in a different typeface using a markup language, or to otherwise make the text appear differently. + +## Autocomplete + +Another important feature for Redis Open Source is its autocomplete engine. This allows users to create dictionaries of weighted terms, and then query them for completion suggestions to a given user prefix. Completions can have payloads, which are user-provided pieces of data that can be used for display. For example, completing the names of users, it is possible to add extra metadata about users to be displayed. + +For example, if a user starts to put the term “lcd tv” into a dictionary, sending the prefix “lc” will return the full term as a result. The dictionary is modeled as a compact trie (prefix tree) with weights, which is traversed to find the top suffixes of a prefix. + +Redis also allows fuzzy suggestions, meaning you can get suggestions to prefixes even if the user makes a typo in their prefix. This is enabled using a Levenshtein automaton, allowing efficient searching of the dictionary for all terms within a maximal Levenshtein distance of a term or prefix. Suggestions are then weighted based on both their original score and their distance from the prefix typed by the user. + +However, searching for fuzzy prefixes (especially very short ones) will traverse an enormous number of suggestions. 
In fact, fuzzy suggestions for any single letter will traverse the entire dictionary, so the recommendation is to use this feature carefully and in full consideration of the performance penalty it incurs. + +The autocomplete engine supports Unicode, allowing for fuzzy matches in non-latin languages as well. + +See the [autocomplete page]({{< relref "/develop/interact/search-and-query/advanced-concepts/autocomplete" >}}) for more information and examples. + +## Search engine internals + +### The Redis module API + +RQE is implemented using the [Redis module API]({{< relref "/develop/reference/modules/" >}}) and is loaded into Redis as an extension module at start-up. + +Redis modules make it possible to extend Redis's core functionality, implementing new Redis commands, data structures, and capabilities with similar performance to native core Redis itself. Redis modules are dynamic libraries that can be loaded into Redis at start-up or loaded at run-time using the [`MODULE LOAD`]({{< relref "/commands/module-load" >}}) command. Redis exports a C API, in the form of a single C header file called `redismodule.h`. + +While the logic of RQE and its algorithms are mostly independent, and it could be ported quite easily to run as a stand-alone server, it still takes advantage of Redis as a robust infrastructure for a database server. Building on top of Redis means that, by default, modules are afforded: + +* A high performance network protocol server +* Robust replication +* Highly durable persistence as snapshots of transaction logs +* Cluster mode + +### Query execution engine + +Redis uses a high-performance flexible query processing engine that can evaluate very complex queries in real time. + +The above query language is compiled into an execution plan that consists of a tree of index iterators or filters. These can be any of: + +* Numeric filter +* Tag filter +* Text filter +* Geo filter +* Intersection operation (combining 2 or more filters) +* Union operation (combining 2 or more filters) +* NOT operation (negating the results of an underlying filter) +* Optional operation (wrapping an underlying filter in an optional matching filter) + +The query parser generates a tree of these filters. For example, a multi-word search would be resolved into an intersect operation of multiple text filters, each traversing an inverted index of a different term. Simple optimizations such as removing redundant layers in the tree are applied. + +Each of the filters in the resulting tree evaluates one match at a time. This means that at any given moment, the query processor is busy evaluating and scoring one matching document. This means that very little memory allocation is done at run-time, resulting in higher performance. + +The resulting matching documents are then fed to a post-processing chain of result processors that are responsible for scoring them, extracting the top-N results, loading the documents from storage, and sending them to the client. That chain is dynamic as well, which adapts based on the attributes of the query. For example, a query that only needs to return document ids will not include a stage for loading documents from storage. + +### Concurrent updates and searches + +While RQE is extremely fast and uses highly optimized data structures and algorithms, it was facing the same problem with regards to concurrency. 
Depending on the size of your data set and the cardinality of search queries, queries can take anywhere between a few microseconds to hundreds of milliseconds, or even seconds in extreme cases. When that happens, the entire Redis server process is blocked. + +Think, for example, of a full-text query intersecting the terms "hello" and "world", each with a million entries, and a half-million common intersection points. To perform that query in a millisecond, Redis would have to scan, intersect, and rank each result in one nanosecond, [which is impossible with current hardware](https://gist.github.com/jboner/2841832). The same goes for indexing a 1,000 word document. It blocks Redis entirely for the duration of the query. + +RQE uses the Redis Module API's concurrency features to avoid stalling the server for long periods of time. The idea is simple - while Redis itself is single-threaded, a module can run many threads, and any one of those threads can acquire the **Global Lock** when it needs to access Redis data, operate on it, and release it. + +Redis cannot be queried in parallel, as only one thread can acquire the lock, including the Redis main thread, but care is taken to make sure that a long-running query will give other queries time to run by yielding this lock from time to time. + +The following design principles were adopted to allow concurrency: + +1. RQE has a thread pool for running concurrent search queries. + +2. When a search request arrives, it is passed to the handler, parsed on the main thread, and then a request object is passed to the thread pool via a queue. + +3. The thread pool runs a query processing function in its own thread. + +4. The function locks the Redis Global lock and starts executing the query. + +5. Since the search execution is basically an iterator running in a cycle, the elapsed time is sampled every several iterations (sampling on each iteration would slow things down as it has a cost of its own). + +6. If enough time has elapsed, the query processor releases the Global Lock and immediately tries to acquire it again. When the lock is released, the kernel will schedule another thread to run - be it Redis's main thread, or another query thread. + +7. When the lock is acquired again, all Redis resources that were held before releasing the lock are re-opened (keys might have been deleted while the thread has been sleeping) and work resumes from the previous state. + +Thus the operating system's scheduler makes sure all query threads get CPU time to run. While one is running the rest wait idly, but since execution is yielded about 5,000 times a second, it creates the effect of concurrency. Fast queries will finish in one go without yielding execution, slow ones will take many iterations to finish, but will allow other queries to run concurrently. + +### Index garbage collection + +RQE is optimized for high write, update, and delete throughput. One of the main design choices dictated by this goal is that deleting and updating documents do not actually delete anything from the index: + +1. Deletion simply marks the document deleted in a global document metadata table using a single bit. +2. Updating, on the other hand, marks a document as deleted, assigns it a new incremental document ID, and re-indexes the document under a new ID, without performing a comparison of the change. + +What this means, is that index entries belonging to deleted documents are not removed from the index, and can be seen as garbage. 
Over time, an index with many deletes and updates will contain mostly garbage, both slowing things down and consuming unnecessary memory. + +To overcome this, RQE employs a background garbage collection (GC) mechanism. During normal operation of the index, a special thread randomly samples indexes, traverses them, and looks for garbage. Index sections containing garbage are cleaned and memory is reclaimed. This is done in a non- intrusive way, operating on very small amounts of data per scan, and utilizing Redis's concurrency mechanism (see above) to avoid interrupting searches and indexing. The algorithm also tries to adapt to the state of the index, increasing the GC's frequency if the index contains a lot of garbage, and decreasing it if it doesn't, to the point of hardly scanning if the index does not contain garbage. + +### Extension model + +RedisSearch supports an extension mechanism, much like Redis supports modules. The API is very minimal at the moment, and it does not yet support dynamic loading of extensions at run-time. Instead, extensions must be written in C (or a language that has an interface with C) and compiled into dynamic libraries that will be loaded at start-up. + +There are two kinds of extension APIs at the moment: + +1. **Query expanders**, whose role is to expand query tokens (i.e., stemmers). +2. **Scoring functions**, whose role is to rank search results at query time. + +Extensions are compiled into dynamic libraries and loaded into RQE on initialization of the module. The mechanism is based on the code of Redis's own module system, albeit far simpler. + +--- + +## Scalable distributed search + +While RQE is very fast and memory efficient, if an index is big enough, at some point it will be too slow or consume too much memory. It must then be scaled out and partitioned over several machines, each of which will hold a small part of the complete search index. + +Traditional clusters map different keys to different shards to achieve this. However, with search indexes this approach is not practical. If each word’s index was mapped to a different shard, it would be necessary to intersect records from different servers for multi-term queries. + +The way to address this challenge is to employ a technique called index partitioning, which is very simple at its core: + +* The index is split across many machines/partitions by document ID. +* Every partition has a complete index of all the documents mapped to it. +* All shards are queried concurrently and the results from each shard are merged into a single result. + +To facilitate this, a new component called a coordinator is added to the cluster. When searching for documents, the coordinator receives the query and sends it to N partitions, each holding a sub-index of 1/N documents. Since we’re only interested in the top K results of all partitions, each partition returns just its own top K results. Then, the N lists of K elements are merged and the top K elements are extracted from the merged list. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Details about extensions for query expanders and scoring functions +linkTitle: Extensions +title: Extend existing search and query features +weight: 9 +--- + +RediSearch supports an extension mechanism, much like Redis supports modules. The API is very minimal at the moment, and it does not yet support dynamic loading of extensions on a running server. 
Instead, extensions must be written in C (or a language that has an interface with C) and compiled into dynamic libraries that can be loaded at start up. + +There are two kinds of extension APIs at the moment: + +1. **Query expanders**, whose role is to expand query tokens (i.e., stemmers). +2. **Scoring functions**, whose role is to rank search results at query time. + +## Registering and loading extensions + +Extensions should be compiled into dynamic library files (e.g., `.so` files), and loaded into the RediSearch module during initialization. + +### Compiling + + Extensions should be compiled and linked as dynamic libraries. An example Makefile for an extension [can be found here](https://github.com/RediSearch/RediSearch/blob/master/tests/ctests/ext-example/Makefile). + + That folder also contains an example extension that is used for testing and can be taken as a skeleton for implementing your own extension. + +### Loading + + Loading an extension is done by appending `EXTLOAD {path/to/ext.so}` after the `loadmodule` configuration directive when loading the RediSearch module. For example: + + ```sh + $ redis-server --loadmodule ./redisearch.so EXTLOAD ./ext/my_extension.so + ``` + + This causes the RediSearch module to automatically load the extension and register its expanders and scorers. + +## Initializing an extension + +The entry point of an extension is a function with the signature: + +```c +int RS_ExtensionInit(RSExtensionCtx *ctx); +``` + +When loading an extension, RediSearch looks for this function and calls it. This function is responsible for registering and initializing the expanders and scorers. + +It should return REDISEARCH_ERR on error or REDISEARCH_OK on success. + +### Example init function + +```c +#include //must be in the include path + +int RS_ExtensionInit(RSExtensionCtx *ctx) { + + /* Register a scoring function with an alias my_scorer and no special private data and free function */ + if (ctx->RegisterScoringFunction("my_scorer", MyCustomScorer, NULL, NULL) == REDISEARCH_ERR) { + return REDISEARCH_ERR; + } + + /* Register a query expander */ + if (ctx->RegisterQueryExpander("my_expander", MyExpander, NULL, NULL) == + REDISEARCH_ERR) { + return REDISEARCH_ERR; + } + + return REDISEARCH_OK; +} +``` + +## Calling your custom functions + +When performing a query, you can use your scorers or expanders by specifying the SCORER or EXPANDER arguments with the given alias. For example: + +``` +FT.SEARCH my_index "foo bar" EXPANDER my_expander SCORER my_scorer +``` + +**NOTE**: Expander and scorer aliases are **case sensitive**. + +## The query expander API + +Only basic query expansion is supported, one token at a time. An expander can decide to expand any given token with as many tokens it wishes, which will be union-merged in query time. + +The API for an expander is the following: + +```c +#include //must be in the include path + +void MyQueryExpander(RSQueryExpanderCtx *ctx, RSToken *token) { + ... +} +``` + +### RSQueryExpanderCtx + +`RSQueryExpanderCtx` is a context that contains private data of the extension, and a callback method to expand the query. 
It is defined as: + +```c +typedef struct RSQueryExpanderCtx { + + /* Opaque query object used internally by the engine, and should not be accessed */ + struct RSQuery *query; + + /* Opaque query node object used internally by the engine, and should not be accessed */ + struct RSQueryNode **currentNode; + + /* Private data of the extension, set on extension initialization */ + void *privdata; + + /* The language of the query, defaults to "english" */ + const char *language; + + /* ExpandToken allows the user to add an expansion of the token in the query, that will be + * union-merged with the given token in query time. str is the expanded string, len is its length, + * and flags is a 32 bit flag mask that can be used by the extension to set private information on + * the token */ + void (*ExpandToken)(struct RSQueryExpanderCtx *ctx, const char *str, size_t len, + RSTokenFlags flags); + + /* SetPayload allows the query expander to set GLOBAL payload on the query (not unique per token) + */ + void (*SetPayload)(struct RSQueryExpanderCtx *ctx, RSPayload payload); + +} RSQueryExpanderCtx; +``` + +### RSToken + +`RSToken` represents a single query token to be expanded, and is defined as: + +```c +/* A token in the query. The expanders receive query tokens and can expand the query with more query + * tokens */ +typedef struct { + /* The token string - which may or may not be NULL terminated */ + const char *str; + /* The token length */ + size_t len; + + /* 1 if the token is the result of query expansion */ + uint8_t expanded:1; + + /* Extension specific token flags that can be examined later by the scoring function */ + RSTokenFlags flags; +} RSToken; +``` + +## The scoring function API + +For the final ranking, the scoring function analyzes each document retrieved by the query, taking into account not only the terms that triggered the document's retrieval but also metadata like its prior score, length, and so on. + +Since the scoring function is evaluated for each document, potentially millions of times, and since +redis is single threaded, it is important that it works as fast as possible and be heavily optimized. + +A scoring function is applied to each potential result for each document and is implemented with the following signature: + +```c +double MyScoringFunction(RSScoringFunctionCtx *ctx, RSIndexResult *res, + RSDocumentMetadata *dmd, double minScore); +``` + +`RSScoringFunctionCtx` is a context that implements some helper methods. + +`RSIndexResult` is the result information containing the document id, frequency, terms, and offsets. + +`RSDocumentMetadata` is an object holding global information about the document, such as its presumptive score. + +`minScore` is the minimal score that will yield a result that is relevant to the search. It can be used to stop processing midway or before or even before it starts. + +The return value of the function is a `double` representing the final score of the result. +Returning 0 causes the result to be counted, but if there are results with a score greater than 0, they will appear above it. +To completely filter out a result and not count it in the totals, the scorer should return the special value `RS_SCORE_FILTEROUT`, which is internally set to negative infinity, or -1/0. + +### RSScoringFunctionCtx + +This is an object containing the following members: + +* `void *privdata`: a pointer to an object set by the extension on initialization time. +* `RSPayload payload*`: A Payload object set either by the query expander or the client. 
+* `int GetSlop(RSIndexResult *res)*`: A callback method that yields the total minimal distance between the query terms. This can be used to prefer results where the slop is smaller and the terms are nearer to each other. + +### RSIndexResult + +This is an object holding the information about the current result in the index, which is an aggregate of all the terms that resulted in the current document being considered a valid result. See `redisearch.h` for details. + +### RSDocumentMetadata + +This is an object describing global information, unrelated to the current query, about the document being evaluated by the scoring function. + +## Example query expander + +This example query expander expands each token with the term foo: + +```c +#include //must be in the include path + +void DummyExpander(RSQueryExpanderCtx *ctx, RSToken *token) { + ctx->ExpandToken(ctx, strdup("foo"), strlen("foo"), 0x1337); +} +``` + +## Example scoring function + +This is an actual scoring function, which calculates TF-IDF for the document, multiplies it by the document score, and divides it by the slop: + +```c +#include //must be in the include path + +double TFIDFScorer(RSScoringFunctionCtx *ctx, RSIndexResult *h, RSDocumentMetadata *dmd, + double minScore) { + // no need to evaluate documents with score 0 + if (dmd->score == 0) return 0; + + // calculate sum(tf-idf) for each term in the result + double tfidf = 0; + for (int i = 0; i < h->numRecords; i++) { + // take the term frequency and multiply by the term IDF, add that to the total + tfidf += (float)h->records[i].freq * (h->records[i].term ? h->records[i].term->idf : 0); + } + // normalize by the maximal frequency of any term in the document + tfidf /= (double)dmd->maxFreq; + + // multiply by the document score (between 0 and 1) + tfidf *= dmd->score; + + // no need to factor the slop if tfidf is already below minimal score + if (tfidf < minScore) { + return 0; + } + + // get the slop and divide the result by it, making sure we prefer results with closer terms + tfidf /= (double)ctx->GetSlop(h); + + return tfidf; +} +```--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: This document describes how documents are added to the index. +linkTitle: Indexing +title: Document Indexing +--- + +## Components + +* `Document` - contains the actual document and its fields. +* `RSAddDocumentCtx` - the per-document state that is used while it + is being indexed. The state is discarded after the indexing is complete. +* `ForwardIndex` - contains terms found in the document. The forward index + is used to write the `InvertedIndex`. +* `InvertedIndex` - an index that maps terms to occurrences within applicable + documents. + +## Architecture + +The indexing process begins by creating a new `RSAddDocumentCtx` and adding a +document to it. Internally, this is divided into several steps. + + +1. Submission + + A `DocumentContext` is created, and is associated with a document (as received) + from input. The submission process will also perform some preliminary caching. + +2. Preprocessing + + After a document has been submitted, it is preprocessed. Preprocessing performs + stateless processing on all document input fields. For text fields, this + means tokenizing the document and creating a forward index. The preprocesors + will store this information in per-field variables within the `AddDocumentCtx`. + This computed result is then written to the persistent index later on during + the indexing phase. 
+ + If the document is sufficiently large, the preprocessing is done in a separate + thread, which allows concurrent preprocessing and also avoids blocking other + threads. If the document is smaller, the preprocessing is done within the main + thread, avoiding the overhead of additional context switching. + The `SELF_EXC_THRESHOLD` macro contains the threshold for 'sufficiently large'. + + After the document is preprocessed, it is submitted to be indexed. + +3. Indexing + + Indexing proper consists of committing the precomputed results of the + preprocessing phase. It is done in a single thread, and is in the form + of a queue. + + Because documents must be written to the index in the exact order of their + document ID assignment, and because the indexing process must also yield to other potential + indexing processes, you may end up in a situation where document IDs are written + to the index out-of-order. To solve that, the order in which documents + are actually written must be well-defined. If there is only one thread writing + documents, then this thread will not need to worry about out-of-order IDs + while writing. + + Having a single background thread also helps optimize in several areas, as + will be seen later on. The basic idea is that when there are a lot of + documents queued for the indexing thread, the indexing thread may treat them + as batch commands, greatly reducing the number of locks/unlocks of the GIL + and the number of times term keys need to be opened and closed. + +4. Skipping already indexed documents + + The phases below may operate on more than one document at a time. When a document + is fully indexed, it is marked as done. When the thread iterates over the queue + it will only perform processing/indexing on items not yet marked as done. + +5. Term merging + + Term merging, or forward index merging, is done when there is more than a + single document in the queue. The forward index of each document in the queue + is scanned, and a larger, master forward index is constructed in its place. + Each entry in the forward index contains a reference to the origin document + as well as the normal offset/score/frequency information. + + Creating a master forward index avoids opening common term keys more than once per + document. + + If there is only one document within the queue, a master forward index + is not created. + + Note that the internal type of the master forward index is not actually + `ForwardIndex`. + +6. Document ID assignment + + At this point, the GIL is locked and every document in the queue is assigned + a document ID. The assignment is done immediately before writing to the index + so as to reduce the number of times the GIL is locked; thus, the GIL is + locked only once, right before the index is written. + +7. Writing to Indexes + + With the GIL being locked, any pending index data is written to the indexes. + This usually involves opening one or more Redis keys, and writing/copying + computed data into those keys. + + After this is done, the reply for the given document is sent, and the + `AddDocumentCtx` freed. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Details about garbage collection +linkTitle: Garbage collection +title: Garbage collection +weight: 2 +--- + +## The need for garbage collection + +* When documents are deleted by the user, Redis only marks them as deleted in the global document table rather than deleting them outright. This is done for efficiency. 
Depending on the length of a document, deletion can be a long operation.
+* This means that it is no longer the case that an internal numeric id is assigned to a deleted document. When the index is traversed, a check is made for deletion.
+* All inverted index entries belonging to deleted document ids are garbage.
+* Updating a document is basically the same as deleting it and then adding it again with a new incremental internal ID. No diffing is performed, and the indexes are appended, so the IDs remain incremental and updates stay fast.
+
+All of the above means that if there are a lot of updates and deletes, a large portion of the inverted index will become garbage, both slowing things down and consuming unnecessary memory.
+
+You want to optimize the index, but you also don't want to disturb normal operation. This means that optimization or garbage collection should be a background process that's non-intrusive. It only needs to be faster than the deletion rate over a sufficient period of time so that you don't create more garbage than you can collect.
+
+## Garbage collecting a single-term index
+
+A single-term inverted index is an array of blocks, each of which contains an encoded list of records; e.g., a document id delta plus other data depending on the index encoding scheme. When some of these records refer to deleted documents, they are considered garbage.
+
+The algorithm is simple:
+
+0. Create a reader and writer for each block.
+1. Read each block's records one by one.
+2. If the record is valid and no garbage has been found so far, do nothing; the reader and writer simply advance past it.
+3. When a garbage record is found, the reader is advanced, but not the writer.
+4. Once at least one garbage record has been found, subsequent valid records are re-encoded at the writer's position, recalculating the deltas.
+
+Pseudocode:
+
+```
+foreach index_block as block:
+
+    reader = new_reader(block)
+    writer = new_writer(block)
+    garbage = 0
+    while not reader.end():
+        record = reader.decode_next()
+        if record.is_valid():
+            if garbage != 0:
+                # Write the record at the writer's tip with a newly calculated delta
+                writer.write_record(record)
+            else:
+                writer.advance(record.length)
+        else:
+            garbage += record.length
+```
+
+### GC on numeric indexes
+
+Numeric indexes are a tree of inverted indexes with a special encoding of (docId delta, value). This means the same algorithm can be applied to them, only traversing each inverted index object in the tree.
+
+## FORK GC
+
+Information about FORK GC can be found in this [blog](https://redislabs.com/blog/increased-garbage-collection-performance-redisearch-1-4-1/).
+
+Since v1.6, FORK GC has been the default GC policy. It has proven very efficient both at cleaning the index and at preserving query and indexing performance, even for very write-intensive use cases.
+---
+aliases: /develop/interact/search-and-query/basic-constructs/configuration-parameters
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Redis Query Engine can be tuned through multiple configuration parameters. Some of these parameters can only be set at load-time, while other parameters can be set either at load-time or at run-time.
+linkTitle: Configuration parameters
+title: Configuration parameters
+weight: 1
+---
+{{< note >}}
+As of Redis 8 in Redis Open Source (Redis 8), configuration parameters for the Redis Query Engine are now set in the following ways:
+* At load time via your `redis.conf` file.
+* At run time (where applicable) using the [`CONFIG SET`]({{< relref "/commands/config-set" >}}) command.
+
+Also, Redis 8 persists RQE configuration parameters just like any other configuration parameters (e.g., using the [`CONFIG REWRITE`]({{< relref "/commands/config-rewrite/" >}}) command).
+{{< /note >}}
+
+## RQE configuration parameters
+
+The following table summarizes which configuration parameters can be set at run-time, and their compatibility with Redis Software and Redis Cloud.
+
+| Parameter name<br>(version < 8.0) | Parameter name<br>(version ≥ 8.0) | Run-time | Redis<br>Software | Redis<br>Cloud |
+| :------- | :------- | :------- | :------- | :------- |
+| BG_INDEX_SLEEP_GAP | [search-bg-index-sleep-gap](#search-bg-index-sleep-gap) | :white_large_square: |||
+| CONCURRENT_WRITE_MODE | [search-concurrent-write-mode](#search-concurrent-write-mode) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| CONN_PER_SHARD | [search-conn-per-shard](#search-conn-per-shard) | :white_check_mark: |||
+| CURSOR_MAX_IDLE | [search-cursor-max-idle](#search-cursor-max-idle) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| CURSOR_READ_SIZE | [search-cursor-read-size](#search-cursor-read-size) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| CURSOR_REPLY_THRESHOLD | [search-cursor-reply-threshold](#search-cursor-reply-threshold) | :white_check_mark: |||
+| DEFAULT_DIALECT | [search-default-dialect](#search-default-dialect) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| EXTLOAD | [search-ext-load](#search-ext-load) | :white_large_square: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| FORK_GC_CLEAN_THRESHOLD | [search-fork-gc-clean-threshold](#search-fork-gc-clean-threshold) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| FORK_GC_RETRY_INTERVAL | [search-fork-gc-retry-interval](#search-fork-gc-retry-interval) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| FORK_GC_RUN_INTERVAL | [search-fork-gc-run-interval](#search-fork-gc-run-interval) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| FORKGC_SLEEP_BEFORE_EXIT | [search-fork-gc-sleep-before-exit](#search-fork-gc-sleep-before-exit) | :white_check_mark: |||
+| FRISOINI | [search-friso-ini](#search-friso-ini) | :white_large_square: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| [GC_POLICY](#gc_policy) | There is no matching `CONFIG` parameter. | :white_large_square: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| GCSCANSIZE | [search-gc-scan-size](#search-gc-scan-size) | :white_large_square: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| INDEX_CURSOR_LIMIT | [search-index-cursor-limit](#search-index-cursor-limit) | :white_large_square: |||
+| INDEX_THREADS | search-index-threads | :white_large_square: |||
+| MAXAGGREGATERESULTS | [search-max-aggregate-results](#search-max-aggregate-results) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| MAXDOCTABLESIZE | [search-max-doctablesize](#search-max-doctablesize) | :white_large_square: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| MAXEXPANSIONS | [search-max-expansions](#search-max-expansions) | :white_check_mark: |||
+| MAXPREFIXEXPANSIONS | [search-max-prefix-expansions](#search-max-prefix-expansions) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| MAXSEARCHRESULTS | [search-max-search-results](#search-max-search-results) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| MIN_OPERATION_WORKERS | [search-min-operation-workers](#search-min-operation-workers) | :white_check_mark: |||
+| MIN_PHONETIC_TERM_LEN | [search-min-phonetic-term-len](#search-min-phonetic-term-len) | :white_check_mark: |||
+| MINPREFIX | [search-min-prefix](#search-min-prefix) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| MINSTEMLEN | [search-min-stem-len](#search-min-stem-len) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| MULTI_TEXT_SLOP | [search-multi-text-slop](#search-multi-text-slop) | :white_large_square: |||
+| NO_MEM_POOLS | [search-no-mem-pools](#search-no-mem-pools) | :white_large_square: |||
+| NOGC | [search-no-gc](#search-no-gc) | :white_large_square: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| ON_TIMEOUT | [search-on-timeout](#search-on-timeout) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| PARTIAL_INDEXED_DOCS | [search-partial-indexed-docs](#search-partial-indexed-docs) | :white_large_square: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| RAW_DOCID_ENCODING | [search-raw-docid-encoding](#search-raw-docid-encoding) | :white_large_square: |||
+| SEARCH_THREADS | [search-threads](#search-threads) | :white_large_square: |||
+| TIERED_HNSW_BUFFER_LIMIT | [search-tiered-hnsw-buffer-limit](#search-tiered-hnsw-buffer-limit) | :white_large_square: |||
+| TIMEOUT | [search-timeout](#search-timeout) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| TOPOLOGY_VALIDATION_TIMEOUT | [search-topology-validation-timeout](#search-topology-validation-timeout) | :white_check_mark: |||
+| UNION_ITERATOR_HEAP | [search-union-iterator-heap](#search-union-iterator-heap) | :white_check_mark: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| UPGRADE_INDEX | [search-upgrade-index](#search-upgrade-index) | :white_large_square: | ✅ Supported | ✅ Flexible & Annual<br>❌ Free & Fixed |
+| VSS_MAX_RESIZE | [search-vss-max-resize](#search-vss-max-resize) | :white_check_mark: |||
+| WORKERS_PRIORITY_BIAS_THRESHOLD | [search-workers-priority-bias-threshold](#search-workers-priority-bias-threshold) | :white_large_square: |||
+| WORKERS | [search-workers](#search-workers) | :white_check_mark: |||
+| OSS_GLOBAL_PASSWORD | Deprecated in v8.0.0. Replace with the `masterauth` password. | :white_large_square: | ✅ Supported | ❌ Flexible & Annual<br>
❌ Free & Fixed | +| MT_MODE | Deprecated in v8.0.0. Use search-workers. | :white_large_square: ||| +| PRIVILEGED_THREADS_NUM | Deprecated in v8.0.0. Use search-workers-priority-bias-threshold.| :white_large_square: ||| +| WORKER_THREADS | Deprecated in v8.0.0. Use search-min-operation-workers. | :white_large_square: ||| +| SAFEMODE | Deprecated in v1.6.0. This is now the default setting. | :white_large_square: ||| +| FORK_GC_CLEAN_NUMERIC_EMPTY_NODES | Deprecated in v8.0.0. | :white_large_square: ||| + +{{< note >}} +Parameter names for Redis Open Source versions < 8.0, while deprecated, will still be supported in Redis 8. +{{< /note >}} + +--- + +### search-bg-index-sleep-gap + +The number of iterations to run while performing background indexing before `usleep(1)` (sleep for 1 microsecond) is called, ensuring that Redis can process other commands. + +Type: integer + +Valid range: `[1 .. 4294967295]` + +Default: `100` + +### search-concurrent-write-mode + +If enabled, the tokenization of write queries will be performed concurrently. + +Type: boolean + +Default: `FALSE` + +### search-conn-per-shard + +The number of connections to each shard in a cluster. +If `0`, the number of connections is set to `search-workers` + 1. + +Type: integer + +Valid range: `[0 .. 9,223,372,036,854,775,807]` + +Default: `0` + +### search-cursor-max-idle + +The maximum idle time (in ms) that can be set to the [cursor api]({{< relref "/develop/interact/search-and-query/advanced-concepts/aggregations#cursor-api" >}}). + +Type: integer + +Valid range: `[0 .. 9,223,372,036,854,775,807]` + +Default: `300000` + +### search-cursor-read-size + +Type: integer + +Default: `1000` + +### search-cursor-reply-threshold + +The maximum number of replies to accumulate before triggering `_FT.CURSOR READ` on the shards. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Default: `1` + +### search-default-dialect + +The default +[DIALECT]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects" >}}) +to be used by [`FT.CREATE`]({{< relref "/commands/ft.create/" >}}), [`FT.AGGREGATE`]({{< relref "/commands/ft.aggregate/" >}}), [`FT.EXPLAIN`]({{< relref "/commands/ft.explain/" >}}), [`FT.EXPLAINCLI`]({{< relref "/commands/ft.explaincli/" >}}), and [`FT.SPELLCHECK`]({{< relref "/commands/ft.spellcheck/" >}}). +See [Query dialects]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects" >}}) +for more information. + +Default: `1` + +### search-ext-load + +If present, Redis will try to load an extension dynamic library from the specified file path. +See [Extensions]({{< relref "/develop/interact/search-and-query/administration/extensions" >}}) for details. + +Type: string + +Default: not set + +### search-fork-gc-clean-numeric-empty-nodes + +Clean empty nodes from numeric tree. + +Type: boolean + +Default: `TRUE` + +### search-fork-gc-clean-threshold + +The fork GC will only start to clean when the number of uncleaned documents exceeds this threshold, otherwise it will skip this run. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Default: `100` + +### search-fork-gc-retry-interval + +Interval (in seconds) in which Redis will retry to run fork GC in case of a failure. +This setting can only be combined with the [`search-gc-policy`](#search-gc-policy) `FORK` setting. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Default: `5` + +### search-fork-gc-run-interval + +Interval (in seconds) between two consecutive fork GC runs. 
+This setting can only be combined with the [`search-gc-policy`](#search-gc-policy) `FORK` setting. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Default: `30` + +### search-fork-gc-sleep-before-exit + +The number of seconds for the fork GC to sleep before exit. This value should always be set to 0 except when testing. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Default: `0` + +### search-friso-ini + +If present, load the custom Chinese dictionary from the specified path. See [Using custom dictionaries]({{< relref "/develop/interact/search-and-query/advanced-concepts/chinese#using-custom-dictionaries" >}}) for more details. + +Type: string + +Default: not set + +### GC_POLICY + +The garbage collection policy. The two supported policies are: +* FORK: uses a forked thread for garbage collection (v1.4.1 and above). This is the default GC policy since v1.6.1 and is ideal for general purpose workloads. +* LEGACY: uses a synchronous, in-process fork. This is ideal for read-heavy and append-heavy workloads with very few updates/deletes. Deprecated in v2.6.0. + +Note: When `GC_POLICY` is set to `FORK`, it can be combined with the `search-fork-gc-run-interval` and `search-fork-gc-retry-interval` settings. + +Type: string + +Valid values: `FORK` or `DEFAULT` + +Default: `FORK` + +### search-gc-scan-size + +The bulk size of the internal GC used for cleaning up indexes. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Redis Open Source default: `100` + +Redis Software default: `-1` (unlimited) + +Redis Cloud defaults: +- Flexible & Annual: `-1` (unlimited) +- Free & Fixed: `10000` + +### search-index-cursor-limit + +Added in v2.10.8. + +The maximum number of cursors that can be opened, per shard, at any given time. Cursors can be opened by the user via [`FT.AGGREGATE WITHCURSOR`]({{< relref "/commands/ft.aggregate/" >}}). Cursors are also opened internally by the RQE for long-running queries. Once `INDEX_CURSOR_LIMIT` is reached, any further attempts to open a cursor will result in an error. + +{{% alert title="Notes" color="info" %}} +* Caution should be used in modifying this parameter. Every open cursor results in additional memory usage. +* Cursor usage should be regulated first by use of [`FT.CURSOR DEL`]({{< relref "/commands/ft.cursor-del/" >}}) and/or [`MAXIDLE`]({{< relref "/commands/ft.aggregate/" >}}) prior to modifying `INDEX_CURSOR_LIMIT` +* See [Cursor API]({{< relref "/develop/interact/search-and-query/advanced-concepts/aggregations#cursor-api" >}}) for more details. +{{% /alert %}} + +Type: integer + +Default: `128` + +### search-max-aggregate-results + +The maximum number of results to be returned by the `FT.AGGREGATE` command if `LIMIT` is used. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Redis Open Source default: `-1` (unlimited) + +Redis Software default: `-1` (unlimited) + +Redis Cloud defaults: +- Flexible & Annual: `-1` (unlimited) +- Free & Fixed: `10000` + +### search-max-doctablesize + +The maximum size of the internal hash table used for storing documents. +Note: this configuration option doesn't limit the number of documents that can be stored. It only affects the hash table internal array maximum size. +Decreasing this property can decrease the memory overhead in cases where the index holds a small number of documents that are constantly updated. + +Type: integer + +Valid range: `[1 .. 
18,446,744,073,709,551,615]` + +Default: `1000000` + +### search-max-expansions + +This parameter is an alias for [search-max-prefix-expansions](#search-max-prefix-expansions). + +### search-max-prefix-expansions + +The maximum number of expansions allowed for query prefixes. +The maximum number of expansions allowed for query prefixes. Setting it too high can cause performance issues. If `search-max-prefix-expansions` is reached, the query will continue with the first acquired results. The configuration is applicable for all affix queries including prefix, suffix, and infix (contains) queries. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Default: `200` + +### search-max-search-results + +The maximum number of results to be returned by the `FT.SEARCH` command if `LIMIT` is used. Set it to `-1` to remove the limit. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Redis Open Source default: `1000000` + +Redis Software default: `1000000` + +Redis Cloud defaults: +- Flexible & Annual: `1000000` +- Free & Fixed: `10000` + +### search-min-operation-workers + +The number of worker threads to use for background tasks when the server is in an operation event. + +Type: integer + +Valid range: `[0 .. 8192]` + +Default: `4` + +### search-min-phonetic-term-len + +The minimum length of term to be considered for phonetic matching. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Default: `3` + +### search-min-prefix + +The minimum number of characters allowed for prefix queries (for example, hel*). Setting it to `1` can reduce performance. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Default: `2` + +### search-min-stem-len + +The minimum word length to stem. Setting it lower than `4` can reduce performance. + +Type: integer + +Valid range: `[2 .. 4,294,967,295]` + +Redis Open Source default: `4` + +Redis Software and Redis Cloud default: `2` + +### search-multi-text-slop + +Set the delta that is used to increase positional offsets between array slots for multi text values. +This will allow you to control the level of separation between phrases in different array slots; related to the `SLOP` parameter of `FT.SEARCH` command. + +Type: integer + +Valid range: `[0 .. 4,294,967,295]` + +Default: `100` + +### search-no-mem-pools + +Set RQE to run without memory pools. + +Type: boolean + +Default: `FALSE` + +### search-no-gc + +If set to `TRUE`, garbage collection is disabled for all indexes. + +Type: boolean + +Default: `FALSE` + +### search-on-timeout + +The response policy for queries that exceed the [`search-timeout`](#search-timeout) setting can be one of the following: + +* `RETURN`: this policy will return the top results accumulated by the query until it timed out. +* `FAIL`: will return an error when the query exceeds the timeout value. + +Type: string + +Valid values: `RETURN`, `FAIL` + +Default: `RETURN` + +### search-partial-indexed-docs + +Added in v2.0.0. + +Enable/disable the Redis command filter. The filter optimizes partial updates of hashes +and may avoid re-indexing the hash if changed fields are not part of the schema. + +The Redis command filter will be executed upon each Redis command. Though the filter is +optimized, this will introduce a small increase in latency on all commands. +This configuration is best used with partially indexed documents where the non-indexed fields are updated frequently. 
+ +Type: integer + +Valid values: `0` (false), `1` (true) + +Default: `0` + +### search-raw-docid-encoding + +Disable compression for DocID inverted indexes to boost CPU performance. + +Type: boolean + +Default: `FALSE` + +### search-threads + +Sets the number of search threads in the coordinator thread pool. + +Type: integer + +### search-tiered-hnsw-buffer-limit + +Used for setting the buffer limit threshold for vector tiered HNSW indexes. If Redis is using `WORKERS` for indexing, and the number of vectors waiting in the buffer to be indexed exceeds this limit, new vectors are inserted directly into HNSW. + +Type: integer + +Valid range: `[0 .. 9,223,372,036,854,775,807]` + +Default: `1024` + +### search-timeout + +The maximum amount of time in milliseconds that a search query is allowed to run. If this time is exceeded, Redis returns the top results accumulated so far, or an error depending on the policy set with [`search-on-timeout`](#search-on-timeout). The timeout can be disabled by setting it to `0`. + +{{% alert title="Notes" color="info" %}} +* `search-timeout` refers to query time only. +* Parsing the query is not counted towards `search-timeout`. +* If `search-timeout` was not reached during the search, finalizing operations such as loading document content or reducers continue. +{{% /alert %}} + +Type: integer + +Value range: `[1 .. 9,223,372,036,854,775,807]` + +Redis Open Source default: `500` + +Redis Software default: `500` + +Redis Cloud defaults: +- Flexible & Annual: `500` +- Free & Fixed: `100` + +### search-topology-validation-timeout + +Sets the timeout in milliseconds for topology validation. After this timeout, any pending requests will be processed, even if the topology is not fully connected. A value of `0` means no timeout. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Default: `30000` + +### search-union-iterator-heap + +The minimum number of iterators in a union at which the iterator will switch to a heap based implementation. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Default: `20` + +### search-upgrade-index + +Relevant only when loading an v1.x RDB file. Specify the argument for upgrading the index. +This configuration setting is a special configuration option introduced to upgrade indexes from v1.x RQE versions, otherwise known as legacy indexes. This configuration option needs to be given for each legacy index, followed by the index name and all valid options for the index description (also referred to as the `ON` arguments for following hashes) as described on [FT.CREATE]({{< relref "/commands/ft.create/" >}}) command page. + +Type: string + +Default: there is no default for index name, and the other arguments have the same defaults as with the [`FT.CREATE`]({{< relref "/commands/ft.create/" >}}) command. + +**Example** + +``` +search-upgrade-index idx PREFIX 1 tt LANGUAGE french LANGUAGE_FIELD MyLang SCORE 0.5 SCORE_FIELD MyScore + PAYLOAD_FIELD MyPayload UPGRADE_INDEX idx1 +``` + +{{% alert title="Notes" color="info" %}} +* If the RDB file does not contain a legacy index that's specified in the configuration, a warning message will be added to the log file, and loading will continue. +* If the RDB file contains a legacy index that wasn't specified in the configuration, loading will fail and the server won't start. +{{% /alert %}} + +### search-vss-max-resize + +Added in v2.4.8. + +The maximum memory resize (in bytes) for vector indexes. +The maximum memory resize (in bytes) for vector indexes. 
This value will override default memory limits if you need to allow for a large [`BLOCK_SIZE`]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors/#creation-attributes-per-algorithm" >}}). + +Type: integer + +Valid range: `[0 .. 4,294,967,295]` + +Default: `0` + +### search-workers-priority-bias-threshold + +The number of high priority tasks to be executed at any given time by the worker thread pool before executing low priority tasks. After this number of high priority tasks are being executed, the worker thread pool will execute high and low priority tasks alternately. + +Type: integer + +Valid range: `[1 .. 9,223,372,036,854,775,807]` + +Default: `1` + +### search-workers + +The number of worker threads to use for query processing and background tasks. + +Type: integer + +Valid range: `[0 .. 8192]` + +Default: `0` + +## Set configuration parameters at module load-time (deprecated) + +These methods are deprecated beginning with Redis 8. + +Setting configuration parameters at load-time is done by appending arguments after the `--loadmodule` argument when starting a server from the command line, or after the `loadmodule` directive in a Redis config file. For example: + +In [redis.conf]({{< relref "/operate/oss_and_stack/management/config" >}}): + +``` +loadmodule ./redisearch.so [OPT VAL]... +``` + +From the [Redis CLI]({{< relref "/develop/tools/cli" >}}), using the [MODULE LOAD]({{< relref "/commands/module-load" >}}) command: + +``` +127.0.0.6379> MODULE LOAD redisearch.so [OPT VAL]... +``` + +From the command line: + +``` +$ redis-server --loadmodule ./redisearch.so [OPT VAL]... +``` + +## Set configuration parameters at run-time (for supported parameters, deprecated) + +These methods are deprecated beginning with Redis 8. + +RQE exposes the `FT.CONFIG` endpoint to allow for the setting and retrieval of configuration parameters at run-time. + +To set the value of a configuration parameter at run-time (for supported parameters), simply run: + +```sh +FT.CONFIG SET OPT1 VAL1 +``` + +Similarly, you can retrieve current configuration parameter values using: + +```sh +FT.CONFIG GET OPT1 +FT.CONFIG GET * +``` + +Values set using [`FT.CONFIG SET`]({{< relref "/commands/ft.config-set/" >}}) are not persisted after server restart.--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Details about design choices and implementations + + ' +linkTitle: Internal design +title: Internal design +weight: 1 +--- + +Redis Open Source implements inverted indexes on top of Redis, but unlike previous implementations of Redis inverted indexes, it uses a custom data encoding that allows more memory and CPU efficient searches, and more advanced search features. + +This document details some of the design choices and how these features are implemented. + +## Intro: Redis String DMA + +The main feature that this module takes advantage of is Redis Modules Strings Direct Memory Access (DMA). + +This feature is simple, yet very powerful. It allows modules to allocate data on Redis string keys, and then get direct pointers to the data allocated by the keys without copying or serializing. + +This allows very fast access to huge amounts of memory. From the module's perspective, the string value is exposed simply as `char *`, meaning it can be cast to any data structure. + +You simply call `RedisModule_StringTruncate` to resize a memory chunk to the size needed. 
Then you call `RedisModule_StringDMA` to get direct access to the memory in that key. See [https://github.com/RedisLabs/RedisModulesSDK/blob/master/FUNCTIONS.md#redismodule_stringdma](https://github.com/RedisLabs/RedisModulesSDK/blob/master/FUNCTIONS.md#redismodule_stringdma) + +This API is used in the module mainly to encode inverted indexes, and also for other auxiliary data structures. + +A generic "Buffer" implementation using DMA strings can be found in [buffer.c](https://github.com/RediSearch/RediSearch/blob/master/src/buffer.c). It automatically resizes the Redis string it uses as raw memory when the capacity needs to grow. + +## Inverted index encoding + +An [inverted index](https://en.wikipedia.org/wiki/Inverted_index) is the data structure at the heart of all search engines. The idea is simple. For each word or search term, a list of all the documents it appears in is kept. Other data is kept as well, such as term frequency, and the offsets where a term appeared in the document. Offsets are used for exact match type searches, or for ranking of results. + +When a search is performed, either a single index is traversed, or the intersection or union of two or more indexes is traversed. Classic Redis implementations of search engines use sorted sets as inverted indexes. This works but has significant memory overhead, and it also does not allow for encoding of offsets, as explained above. + +Redis Open Source uses string DMA (see above) to efficiently encode inverted indexes. It combines [delta encoding](https://en.wikipedia.org/wiki/Delta_encoding) and [varint encoding](https://developers.google.com/protocol-buffers/docs/encoding#varints) to encode entries, minimizing space used for indexes, while keeping decompression and traversal efficient. + +For each hit (document/word entry), the following items are encoded: + +* The document ID as a delta from the previous document. +* The term frequency, factored by the document's rank (see below). +* Flags, that can be used to filter only specific fields or other user-defined properties. +* An offset vector of all the document offsets of the word. + +{{% alert title="Note" color="info" %}} +Document IDs as entered by the user are converted to internal incremental document IDs, that allow delta encoding to be efficient and let the inverted indexes be sorted by document ID. +{{% /alert %}} + +This allows for a single index hit entry to be encoded in as little as 6 bytes. Note: this is the best case. Depending on the number of occurrences of the word in the document, this can get much higher. + +To optimize searches, two additional auxiliary data structures are kept in different DMA string keys: + +1. **Skip index**: a table of the index offset of 1/50th of the index entries. This allows faster lookup when intersecting inverted indexes, as the entire list doesn't need to be traversed. +2. **Score index**: In simple single-word searches, there is no real need to traverse all the results, just the top N results the user is interested in. So an auxiliary index of the top 20 or so entries is stored for each term, which are used when applicable. + +## Document and result ranking + +Each document entered to the engine has a [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) scoring of each word to rank the results. + +As an optimization, each inverted index hit is encoded with `TF * Document_rank` as its score, and only IDF is applied during searches. This may change in the future. 
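+
+If you want to see how these factors combine for a particular query, you can ask the engine to report its computed scores. The command below is a minimal sketch that assumes a hypothetical index called `books`; `WITHSCORES` adds each document's final score to the reply, and `EXPLAINSCORE` adds a textual breakdown of how that score was calculated:
+
+```
+FT.SEARCH books "hello world" WITHSCORES EXPLAINSCORE LIMIT 0 3
+```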
+
+On top of that, in the case of intersection queries, the minimal distance between the terms in the query is factored into the ranking. The closer the terms are to each other, the better the result.
+
+When searching, a priority queue of the top N requested results is maintained; these are eventually returned, sorted by rank.
+
+## Index specs and field weights
+
+When creating an "index" using [`FT.CREATE`]({{< relref "commands/ft.create/" >}}), the user specifies the fields to be indexed and their respective weights. This can be used to give some document fields, like a title, more weight in ranking results.
+
+For example:
+
+```
+FT.CREATE my_index SCHEMA title TEXT WEIGHT 10.0 body TEXT WEIGHT 1.0 url TEXT WEIGHT 2.0
+```
+
+will create an index on fields named title, body, and url, with weights of 10, 1, and 2 respectively.
+
+When documents are indexed, the weights are taken from the saved *index spec* that is stored in a special Redis key, and only fields that appear in this spec are indexed.
+
+## Query execution engine
+
+A chained-iterator based approach is used as part of query execution, which is similar to [Python generators](https://wiki.python.org/moin/Generators) in concept.
+
+Iterators that yield index hits are chained together. Those can be:
+
+1. **Read Iterators**, reading hits one by one from an inverted index. For example, `hello`.
+2. **Intersect Iterators**, aggregating two or more iterators, yielding only their intersection points. For example, `hello AND world`.
+3. **Exact Intersect Iterators** - same as above, but yielding results only if the intersection is an exact phrase. For example, `hello NEAR world`.
+4. **Union Iterators** - combining two or more iterators, and yielding a union of their hits. For example, `hello OR world`.
+
+These are combined based on the query as an execution plan that is evaluated lazily. For example:
+
+```
+hello ==> read("hello")
+
+hello world ==> intersect( read("hello"), read("world") )
+
+"hello world" ==> exact_intersect( read("hello"), read("world") )
+
+"hello world" foo ==> intersect(
+                          exact_intersect(
+                              read("hello"),
+                              read("world")
+                          ),
+                          read("foo")
+                      )
+```
+
+All these iterators are lazily evaluated, entry by entry, with constant memory overhead.
+
+The root iterator is read by the query execution engine and filtered for the top N results contained in it.
+
+## Numeric filters
+
+It's possible to define a field in the index schema as `NUMERIC`, meaning you will be able to limit search results only to those where the given value falls within a specific range. Filtering is done by adding `FILTER` predicates (more than one is supported) to your query. For example:
+
+```
+FT.SEARCH products "hd tv" FILTER price 100 (300
+```
+
+The filter syntax follows the ZRANGEBYSCORE semantics of Redis, meaning `-inf` and `+inf` are supported, and prepending `(` to a number means an exclusive range.
+
+As of release 0.6, the implementation uses a multi-level range tree, saving ranges at multiple resolutions to allow efficient range scanning. Adding numeric filters can accelerate slow queries if the numeric range is small relative to the entire span of the filtered field. For example, a filter on dates focusing on a few days out of years of data can speed a heavy query by an order of magnitude.
+
+## Auto-complete and fuzzy suggestions
+
+Another important feature for searching and querying is auto-completion or suggestion. It allows you to create dictionaries of weighted terms, and then query them for completion suggestions to a given user prefix.
For example, if you put the term “lcd tv” into a dictionary, sending the prefix “lc” will return it as a result. The dictionary is modeled as a compressed trie (prefix tree) with weights, which is traversed to find the top suffixes of a prefix.
+
+Redis Open Source also allows for fuzzy suggestions, meaning you can get suggestions for user prefixes even if the user has a typo in the prefix. This is enabled using a Levenshtein automaton, allowing efficient searching of a dictionary for all terms within a maximal Levenshtein distance of a term or prefix. Suggestions are weighted based on both their original score and their distance from a prefix typed by the user. Only suggestions where the prefix is up to one Levenshtein distance away from the typed prefix are supported for performance reasons.
+
+However, since searching for fuzzy prefixes, especially very short ones, will traverse an enormous number of suggestions (in fact, fuzzy suggestions for any single letter will traverse the entire dictionary!), it is recommended that you use this feature carefully, and only when considering the performance penalty it incurs. Since Redis is single-threaded, blocking it for any amount of time means no other queries can be processed at that time.
+
+To support Unicode fuzzy matching, 16-bit runes are used inside the trie and not bytes. This increases memory consumption if the text is purely ASCII, but allows completion with the same level of support for all modern languages. This is done in the following manner:
+
+1. Assume all input to `FT.SUG*` commands is valid UTF-8.
+2. Input strings are converted to 32-bit Unicode, optionally normalizing, case-folding, and removing accents on the way. If the conversion fails, it's because the input is not valid UTF-8.
+3. The 32-bit runes are trimmed to 16-bit runes using the lower 16 bits. These can be used for insertion, deletion, and search.
+4. The output of searches is converted back to UTF-8.
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Redis Query Engine Administration
+linkTitle: Administration
+title: Administration
+weight: 9
+---
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Options for indexing geospatial data
+linkTitle: Geospatial
+title: Geospatial indexing
+weight: 3
+---
+
+Redis supports two different
+[schema types]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options" >}})
+for geospatial data:
+
+- [`GEO`](#geo): This uses a simple format where individual geospatial
+  points are specified as numeric longitude-latitude pairs.
+- [`GEOSHAPE`](#geoshape): This uses a subset of the
+  [Well-Known Text (WKT)](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry)
+  format to specify both points and polygons using either geographical
+  coordinates or Cartesian coordinates.
+
+The sections below explain how to index these schema types. See the
+[Geospatial]({{< relref "/develop/interact/search-and-query/advanced-concepts/geo" >}})
+reference page for a full description of both types.
+ +## `GEO` + +The following command creates a `GEO` index for JSON objects that contain +the geospatial data in a field called `location`: + +{{< clients-example geoindex create_geo_idx >}} +> FT.CREATE productidx ON JSON PREFIX 1 product: SCHEMA $.location AS location GEO +OK +{{< /clients-example >}} + +If you now add JSON objects with the `product:` prefix and a `location` field, +they will be added to the index automatically: + +{{< clients-example geoindex add_geo_json >}} +> JSON.SET product:46885 $ '{"description": "Navy Blue Slippers","price": 45.99,"city": "Denver","location": "-104.991531, 39.742043"}' +OK +> JSON.SET product:46886 $ '{"description": "Bright Green Socks","price": 25.50,"city": "Fort Collins","location": "-105.0618814,40.5150098"}' +OK +{{< /clients-example >}} + +The query below finds products within a 100 mile radius of Colorado Springs +(Longitude=-104.800644, Latitude=38.846127). This returns only the location in +Denver, but a radius of 200 miles would also include the location in Fort Collins: + +{{< clients-example geoindex geo_query >}} +> FT.SEARCH productidx '@location:[-104.800644 38.846127 100 mi]' +1) "1" +2) "product:46885" +3) 1) "$" + 2) "{\"description\":\"Navy Blue Slippers\",\"price\":45.99,\"city\":\"Denver\",\"location\":\"-104.991531, 39.742043\"}" +{{< /clients-example >}} + +See [Geospatial queries]({{< relref "/develop/interact/search-and-query/query/geo-spatial" >}}) +for more information about the available options. + +## `GEOSHAPE` + +The following command creates an index for JSON objects that include +geospatial data in a field called `geom`. The `FLAT` option at the end +of the field definition specifies Cartesian coordinates instead of +the default spherical geographical coordinates. Use `SPHERICAL` in +place of `FLAT` to choose the coordinate space explicitly. + +{{< clients-example geoindex create_gshape_idx >}} +> FT.CREATE geomidx ON JSON PREFIX 1 shape: SCHEMA $.name AS name TEXT $.geom AS geom GEOSHAPE FLAT +OK +{{< /clients-example >}} + +Use the `shape:` prefix for the JSON objects to add them to the index: + +{{< clients-example geoindex add_gshape_json >}} +> JSON.SET shape:1 $ '{"name": "Green Square", "geom": "POLYGON ((1 1, 1 3, 3 3, 3 1, 1 1))"}' +OK +> JSON.SET shape:2 $ '{"name": "Red Rectangle", "geom": "POLYGON ((2 2.5, 2 3.5, 3.5 3.5, 3.5 2.5, 2 2.5))"}' +OK +> JSON.SET shape:3 $ '{"name": "Blue Triangle", "geom": "POLYGON ((3.5 1, 3.75 2, 4 1, 3.5 1))"}' +OK +> JSON.SET shape:4 $ '{"name": "Purple Point", "geom": "POINT (2 2)"}' +OK +{{< /clients-example >}} + +You can now run various geospatial queries against the index. For +example, the query below returns any shapes within the boundary +of the green square but omits the green square itself: + +{{< clients-example geoindex gshape_query >}} +> FT.SEARCH geomidx "(-@name:(Green Square) @geom:[WITHIN $qshape])" PARAMS 2 qshape "POLYGON ((1 1, 1 3, 3 3, 3 1, 1 1))" RETURN 1 name DIALECT 2 + +1) (integer) 1 +2) "shape:4" +3) 1) "name" + 2) "[\"Purple Point\"]" +{{< /clients-example >}} + +You can also run queries to find whether shapes in the index completely contain +or overlap each other. See +[Geospatial queries]({{< relref "/develop/interact/search-and-query/query/geo-spatial" >}}) +for more information. 
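+
+If you prefer to run these geospatial queries from application code rather than the CLI, the sketch below shows roughly how the same radius and polygon queries could look with the Python `redis-py` client. It is a sketch under assumptions: a local Redis instance on the default port, and the `productidx` and `geomidx` indexes and documents created above. Adapt the connection details to your environment.
+
+```python
+import redis
+from redis.commands.search.query import Query
+
+# Assumes Redis is reachable at localhost:6379.
+r = redis.Redis(host="localhost", port=6379, decode_responses=True)
+
+# GEO: products within 100 miles of Colorado Springs.
+geo_query = Query("@location:[-104.800644 38.846127 100 mi]")
+for doc in r.ft("productidx").search(geo_query).docs:
+    print(doc.id, doc.json)
+
+# GEOSHAPE: shapes within the green square, excluding the green square itself.
+shape_query = (
+    Query("(-@name:(Green Square) @geom:[WITHIN $qshape])")
+    .return_fields("name")
+    .dialect(2)
+)
+params = {"qshape": "POLYGON ((1 1, 1 3, 3 3, 3 1, 1 1))"}
+for doc in r.ft("geomidx").search(shape_query, query_params=params).docs:
+    print(doc.id, doc.name)
+```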
+--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: How to index and search JSON documents +linkTitle: Indexing +title: Indexing +weight: 3 +--- + +In addition to indexing Redis hashes, Redis Open Source can also index JSON documents. + +## Create index with JSON schema + +When you create an index with the [`FT.CREATE`]({{< relref "commands/ft.create/" >}}) command, include the `ON JSON` keyword to index any existing and future JSON documents stored in the database. + +To define the `SCHEMA`, you can provide [JSONPath]({{< relref "/develop/data-types/json/path" >}}) expressions. +The result of each JSONPath expression is indexed and associated with a logical name called an `attribute` (previously known as a `field`). +You can use these attributes in queries. + +{{% alert title="Note" color="info" %}} +Note: `attribute` is optional for [`FT.CREATE`]({{< relref "commands/ft.create/" >}}). +{{% /alert %}} + +Use the following syntax to create a JSON index: + +```sql +FT.CREATE {index_name} ON JSON SCHEMA {json_path} AS {attribute} {type} +``` + +For example, this command creates an index that indexes the name, description, price, and image vector embedding of each JSON document that represents an inventory item: + +```sql +127.0.0.1:6379> FT.CREATE itemIdx ON JSON PREFIX 1 item: SCHEMA $.name AS name TEXT $.description as description TEXT $.price AS price NUMERIC $.embedding AS embedding VECTOR FLAT 6 DIM 4 DISTANCE_METRIC L2 TYPE FLOAT32 +``` + +See [Index limitations](#index-limitations) for more details about JSON index `SCHEMA` restrictions. + +## Add JSON documents + +After you create an index, Redis automatically indexes any existing, modified, or newly created JSON documents stored in the database. For existing documents, indexing runs asynchronously in the background, so it can take some time before the document is available. Modified and newly created documents are indexed synchronously, so the document will be available by the time the add or modify command finishes. + +You can use any JSON write command, such as [`JSON.SET`]({{< relref "commands/json.set/" >}}) and [`JSON.ARRAPPEND`]({{< relref "commands/json.arrappend/" >}}), to create or modify JSON documents. + +The following examples use these JSON documents to represent individual inventory items. 
+ +Item 1 JSON document: + +```json +{ + "name": "Noise-cancelling Bluetooth headphones", + "description": "Wireless Bluetooth headphones with noise-cancelling technology", + "connection": { + "wireless": true, + "type": "Bluetooth" + }, + "price": 99.98, + "stock": 25, + "colors": [ + "black", + "silver" + ], + "embedding": [0.87, -0.15, 0.55, 0.03] +} +``` + +Item 2 JSON document: + +```json +{ + "name": "Wireless earbuds", + "description": "Wireless Bluetooth in-ear headphones", + "connection": { + "wireless": true, + "type": "Bluetooth" + }, + "price": 64.99, + "stock": 17, + "colors": [ + "black", + "white" + ], + "embedding": [-0.7, -0.51, 0.88, 0.14] +} +``` + +Use [`JSON.SET`]({{< relref "commands/json.set/" >}}) to store these documents in the database: + +```sql +127.0.0.1:6379> JSON.SET item:1 $ '{"name":"Noise-cancelling Bluetooth headphones","description":"Wireless Bluetooth headphones with noise-cancelling technology","connection":{"wireless":true,"type":"Bluetooth"},"price":99.98,"stock":25,"colors":["black","silver"],"embedding":[0.87,-0.15,0.55,0.03]}' +"OK" +127.0.0.1:6379> JSON.SET item:2 $ '{"name":"Wireless earbuds","description":"Wireless Bluetooth in-ear headphones","connection":{"wireless":true,"type":"Bluetooth"},"price":64.99,"stock":17,"colors":["black","white"],"embedding":[-0.7,-0.51,0.88,0.14]}' +"OK" +``` + +Because indexing is synchronous in this case, the documents will be available on the index as soon as the [`JSON.SET`]({{< relref "commands/json.set/" >}}) command returns. +Any subsequent queries that match the indexed content will return the document. + +## Search the index + +To search the index for JSON documents, use the [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) command. +You can search any attribute defined in the `SCHEMA`. 
+
+For example, use this query to search for items with the word "earbuds" in the name:
+
+```sql
+127.0.0.1:6379> FT.SEARCH itemIdx '@name:(earbuds)'
+1) "1"
+2) "item:2"
+3) 1) "$"
+   2) "{\"name\":\"Wireless earbuds\",\"description\":\"Wireless Bluetooth in-ear headphones\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":64.99,\"stock\":17,\"colors\":[\"black\",\"white\"],\"embedding\":[-0.7,-0.51,0.88,0.14]}"
+```
+
+This query searches for all items that include "bluetooth" and "headphones" in the description:
+
+```sql
+127.0.0.1:6379> FT.SEARCH itemIdx '@description:(bluetooth headphones)'
+1) "2"
+2) "item:1"
+3) 1) "$"
+   2) "{\"name\":\"Noise-cancelling Bluetooth headphones\",\"description\":\"Wireless Bluetooth headphones with noise-cancelling technology\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":99.98,\"stock\":25,\"colors\":[\"black\",\"silver\"],\"embedding\":[0.87,-0.15,0.55,0.03]}"
+4) "item:2"
+5) 1) "$"
+   2) "{\"name\":\"Wireless earbuds\",\"description\":\"Wireless Bluetooth in-ear headphones\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":64.99,\"stock\":17,\"colors\":[\"black\",\"white\"],\"embedding\":[-0.7,-0.51,0.88,0.14]}"
+```
+
+Now search for Bluetooth headphones with a price less than 70:
+
+```sql
+127.0.0.1:6379> FT.SEARCH itemIdx '@description:(bluetooth headphones) @price:[0 70]'
+1) "1"
+2) "item:2"
+3) 1) "$"
+   2) "{\"name\":\"Wireless earbuds\",\"description\":\"Wireless Bluetooth in-ear headphones\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":64.99,\"stock\":17,\"colors\":[\"black\",\"white\"],\"embedding\":[-0.7,-0.51,0.88,0.14]}"
+```
+
+And lastly, search for the Bluetooth headphones that are most similar to an image whose embedding is [1.0, 1.0, 1.0, 1.0]:
+
+```sql
+127.0.0.1:6379> FT.SEARCH itemIdx '@description:(bluetooth headphones)=>[KNN 2 @embedding $blob]' PARAMS 2 blob \x01\x01\x01\x01 DIALECT 2
+1) "2"
+2) "item:1"
+3) 1) "__embedding_score"
+   2) "1.08280003071"
+   3) "$"
+   4) "{\"name\":\"Noise-cancelling Bluetooth headphones\",\"description\":\"Wireless Bluetooth headphones with noise-cancelling technology\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":99.98,\"stock\":25,\"colors\":[\"black\",\"silver\"],\"embedding\":[0.87,-0.15,0.55,0.03]}"
+4) "item:2"
+5) 1) "__embedding_score"
+   2) "1.54409992695"
+   3) "$"
+   4) "{\"name\":\"Wireless earbuds\",\"description\":\"Wireless Bluetooth in-ear headphones\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":64.99,\"stock\":17,\"colors\":[\"black\",\"white\"],\"embedding\":[-0.7,-0.51,0.88,0.14]}"
+```
+
+For more information about search queries, see [Search query syntax]({{< relref "/develop/interact/search-and-query/advanced-concepts/query_syntax" >}}).
+
+{{% alert title="Note" color="info" %}}
+[`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) queries require `attribute` modifiers. Don't use JSONPath expressions in queries because the query parser doesn't fully support them.
+{{% /alert %}}
+
+## Index JSON arrays as TAG
+
+The preferred method for indexing a JSON field with multivalued terms is using JSON arrays. Each value of the array is indexed, and those values must be scalars. If you want to index string or boolean values as TAGs within a JSON array, use the [JSONPath]({{< relref "/develop/data-types/json/path" >}}) wildcard operator.
+ +To index an item's list of available colors, specify the JSONPath `$.colors.*` in the `SCHEMA` definition during index creation: + +```sql +127.0.0.1:6379> FT.CREATE itemIdx2 ON JSON PREFIX 1 item: SCHEMA $.colors.* AS colors TAG $.name AS name TEXT $.description as description TEXT +``` + +Now you can search for silver headphones: + +```sql +127.0.0.1:6379> FT.SEARCH itemIdx2 "@colors:{silver} (@name:(headphones)|@description:(headphones))" +1) "1" +2) "item:1" +3) 1) "$" + 2) "{\"name\":\"Noise-cancelling Bluetooth headphones\",\"description\":\"Wireless Bluetooth headphones with noise-cancelling technology\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":99.98,\"stock\":25,\"colors\":[\"black\",\"silver\"]}" +``` + +## Index JSON arrays as TEXT +Starting with RediSearch v2.6.0, full text search can be done on an array of strings or on a JSONPath leading to multiple strings. + +If you want to index multiple string values as TEXT, use either a JSONPath leading to a single array of strings, or a JSONPath leading to multiple string values, using JSONPath operators such as wildcard, filter, union, array slice, and/or recursive descent. + +To index an item's list of available colors, specify the JSONPath `$.colors` in the `SCHEMA` definition during index creation: + +```sql +127.0.0.1:6379> FT.CREATE itemIdx3 ON JSON PREFIX 1 item: SCHEMA $.colors AS colors TEXT $.name AS name TEXT $.description as description TEXT +``` + +```sql +127.0.0.1:6379> JSON.SET item:3 $ '{"name":"True Wireless earbuds","description":"True Wireless Bluetooth in-ear headphones","connection":{"wireless":true,"type":"Bluetooth"},"price":74.99,"stock":20,"colors":["red","light blue"]}' +"OK" +``` + +Now you can do full text search for light colored headphones: + +```sql +127.0.0.1:6379> FT.SEARCH itemIdx3 '@colors:(white|light) (@name|description:(headphones))' RETURN 1 $.colors +1) (integer) 2 +2) "item:2" +3) 1) "$.colors" + 2) "[\"black\",\"white\"]" +4) "item:3" +5) 1) "$.colors" + 2) "[\"red\",\"light blue\"]" +``` + +### Limitations +- When a JSONPath may lead to multiple values and not only to a single array, e.g., when a JSONPath contains wildcards, etc., specifying `SLOP` or `INORDER` in [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) will return an error, since the order of the values matching the JSONPath is not well defined, leading to potentially inconsistent results. + + For example, using a JSONPath such as `$..b[*]` on a JSON value such as + ```json + { + "a": [ + {"b": ["first first", "first second"]}, + {"c": + {"b": ["second first", "second second"]}}, + {"b": ["third first", "third second"]} + ] + } + ``` + may match values in various orderings, depending on the specific implementation of the JSONPath library being used. + + Since `SLOP` and `INORDER` consider relative ordering among the indexed values, and results may change in future releases, an error will be returned. 
+ +- When JSONPath leads to multiple values: + - String values are indexed + - `null` values are skipped + - Any other value type will cause an indexing failure + +- `SORTBY` only sorts by the first value +- No `HIGHLIGHT` and `SUMMARIZE` support +- `RETURN` of a Schema attribute, whose JSONPath leads to multiple values, returns only the first value (as a JSON String) +- If a JSONPath is specified by the `RETURN`, instead of a Schema attribute, all values are returned (as a JSON String) + +### Handling phrases in different array slots: + +When indexing, a predefined delta is used to increase positional offsets between array slots for multiple text values. This delta controls the level of separation between phrases in different array slots (related to the `SLOP` parameter of [`FT.SEARCH`]({{< relref "commands/ft.search/" >}})). +This predefined value is set by the configuration parameter `MULTI_TEXT_SLOP` (at module load-time). The default value is 100. + +## Index JSON arrays as NUMERIC + +Starting with RediSearch v2.6.1, search can be done on an array of numerical values or on a JSONPath leading to multiple numerical values. + +If you want to index multiple numerical values as NUMERIC, use either a JSONPath leading to a single array of numbers, or a JSONPath leading to multiple numbers, using JSONPath operators such as wildcard, filter, union, array slice, and/or recursive descent. + +For example, add to the item's list the available `max_level` of volume (in decibels): + +```sql +127.0.0.1:6379> JSON.SET item:1 $ '{"name":"Noise-cancelling Bluetooth headphones","description":"Wireless Bluetooth headphones with noise-cancelling technology","connection":{"wireless":true,"type":"Bluetooth"},"price":99.98,"stock":25,"colors":["black","silver"], "max_level":[60, 70, 80, 90, 100]}' +OK + +127.0.0.1:6379> JSON.SET item:2 $ '{"name":"Wireless earbuds","description":"Wireless Bluetooth in-ear headphones","connection":{"wireless":true,"type":"Bluetooth"},"price":64.99,"stock":17,"colors":["black","white"], "max_level":[80, 100, 120]}' +OK + +127.0.0.1:6379> JSON.SET item:3 $ '{"name":"True Wireless earbuds","description":"True Wireless Bluetooth in-ear headphones","connection":{"wireless":true,"type":"Bluetooth"},"price":74.99,"stock":20,"colors":["red","light blue"], "max_level":[90, 100, 110, 120]}' +OK +``` + +To index the `max_level` array, specify the JSONPath `$.max_level` in the `SCHEMA` definition during index creation: + +```sql +127.0.0.1:6379> FT.CREATE itemIdx4 ON JSON PREFIX 1 item: SCHEMA $.max_level AS dB NUMERIC +OK +``` + +You can now search for headphones with specific max volume levels, for example, between 70 and 80 (inclusive), returning items with at least one value in their `max_level` array, which is in the requested range: + +```sql +127.0.0.1:6379> FT.SEARCH itemIdx4 '@dB:[70 80]' +1) (integer) 2 +2) "item:1" +3) 1) "$" + 2) "{\"name\":\"Noise-cancelling Bluetooth headphones\",\"description\":\"Wireless Bluetooth headphones with noise-cancelling technology\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":99.98,\"stock\":25,\"colors\":[\"black\",\"silver\"],\"max_level\":[60,70,80,90,100]}" +4) "item:2" +5) 1) "$" + 2) "{\"name\":\"Wireless earbuds\",\"description\":\"Wireless Bluetooth in-ear headphones\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":64.99,\"stock\":17,\"colors\":[\"black\",\"white\"],\"max_level\":[80,100,120]}" +``` + +You can also search for items with all values in a specific range. 
For example, all values are in the range [90, 120] (inclusive): + +```sql +127.0.0.1:6379> FT.SEARCH itemIdx4 '-@dB:[-inf (90] -@dB:[(120 +inf]' +1) (integer) 1 +2) "item:3" +3) 1) "$" + 2) "{\"name\":\"True Wireless earbuds\",\"description\":\"True Wireless Bluetooth in-ear headphones\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":74.99,\"stock\":20,\"colors\":[\"red\",\"light blue\"],\"max_level\":[90,100,110,120]}" +``` + +### Limitations + +When JSONPath leads to multiple numerical values: + - Numerical values are indexed + - `null` values are skipped + - Any other value type will cause an indexing failure + +## Index JSON arrays as GEO and GEOSHAPE + +You can use `GEO` and `GEOSHAPE` fields to store geospatial data, +such as geographical locations and geometric shapes. See +[Geospatial indexing]({{< relref "/develop/interact/search-and-query/indexing/geoindex" >}}) +to learn how to use these schema types and see the +[Geospatial]({{< relref "/develop/interact/search-and-query/advanced-concepts/geo" >}}) +reference page for an introduction to their format and usage. + +## Index JSON arrays as VECTOR + +Starting with RediSearch 2.6.0, you can index a JSONPath leading to an array of numeric values as a VECTOR type in the index schema. + +For example, assume that your JSON items include an array of vector embeddings, where each vector represents an image of a product. To index these vectors, specify the JSONPath `$.embedding` in the schema definition during index creation: + +```sql +127.0.0.1:6379> FT.CREATE itemIdx5 ON JSON PREFIX 1 item: SCHEMA $.embedding AS embedding VECTOR FLAT 6 DIM 4 DISTANCE_METRIC L2 TYPE FLOAT32 +OK +127.0.0.1:6379> JSON.SET item:1 $ '{"name":"Noise-cancelling Bluetooth headphones","description":"Wireless Bluetooth headphones with noise-cancelling technology","price":99.98,"stock":25,"colors":["black","silver"],"embedding":[0.87,-0.15,0.55,0.03]}' +OK +127.0.0.1:6379> JSON.SET item:2 $ '{"name":"Wireless earbuds","description":"Wireless Bluetooth in-ear headphones","price":64.99,"stock":17,"colors":["black","white"],"embedding":[-0.7,-0.51,0.88,0.14]}' +OK +``` + +Now you can search for the two headphones that are most similar to the image embedding by using vector search KNN query. (Note that the vector queries are supported as of dialect 2.) For example: + +```sql +127.0.0.1:6379> FT.SEARCH itemIdx5 '*=>[KNN 2 @embedding $blob AS dist]' SORTBY dist PARAMS 2 blob \x01\x01\x01\x01 DIALECT 2 +1) (integer) 2 +2) "item:1" +3) 1) "dist" + 2) "1.08280003071" + 3) "$" + 4) "{\"name\":\"Noise-cancelling Bluetooth headphones\",\"description\":\"Wireless Bluetooth headphones with noise-cancelling technology\",\"price\":99.98,\"stock\":25,\"colors\":[\"black\",\"silver\"],\"embedding\":[0.87,-0.15,0.55,0.03]}" +4) "item:2" +5) 1) "dist" + 2) "1.54409992695" + 3) "$" + 4) "{\"name\":\"Wireless earbuds\",\"description\":\"Wireless Bluetooth in-ear headphones\",\"price\":64.99,\"stock\":17,\"colors\":[\"black\",\"white\"],\"embedding\":[-0.7,-0.51,0.88,0.14]}" +``` + +If you want to index multiple numeric arrays as VECTOR, use a [JSONPath]({{< relref "/develop/data-types/json/path" >}}) leading to multiple numeric arrays using JSONPath operators such as wildcard, filter, union, array slice, and/or recursive descent. + +For example, assume that your JSON items include an array of vector embeddings, where each vector represents a different image of the same product. 
To index these vectors, specify the JSONPath `$.embeddings[*]` in the schema definition during index creation:
+
+```sql
+127.0.0.1:6379> FT.CREATE itemIdx5 ON JSON PREFIX 1 item: SCHEMA $.embeddings[*] AS embeddings VECTOR FLAT 6 DIM 4 DISTANCE_METRIC L2 TYPE FLOAT32
+OK
+127.0.0.1:6379> JSON.SET item:1 $ '{"name":"Noise-cancelling Bluetooth headphones","description":"Wireless Bluetooth headphones with noise-cancelling technology","price":99.98,"stock":25,"colors":["black","silver"],"embeddings":[[0.87,-0.15,0.55,0.03]]}'
+OK
+127.0.0.1:6379> JSON.SET item:2 $ '{"name":"Wireless earbuds","description":"Wireless Bluetooth in-ear headphones","price":64.99,"stock":17,"colors":["black","white"],"embeddings":[[-0.7,-0.51,0.88,0.14],[-0.8,-0.15,0.33,-0.01]]}'
+OK
+```
+
+{{% alert title="Important note" color="info" %}}
+Unlike the case with the NUMERIC type, setting a static path such as `$.embedding` in the schema for the VECTOR type does not allow you to index multiple vectors stored under that field. Hence, if you set `$.embedding` as the path in the index schema, specifying an array of vectors in the `embedding` field in your JSON will cause an indexing failure.
+{{% /alert %}}
+
+Now you can search for the two headphones that are most similar to an image embedding by using a vector search KNN query. (Note that vector queries are supported as of dialect 2.) The distance between a document and the query vector is defined as the minimum distance between the query vector and any vector that matches the JSONPath specified in the schema. For example:
+
+```sql
+127.0.0.1:6379> FT.SEARCH itemIdx5 '*=>[KNN 2 @embeddings $blob AS dist]' SORTBY dist PARAMS 2 blob \x01\x01\x01\x01 DIALECT 2
+1) (integer) 2
+2) "item:2"
+3) 1) "dist"
+   2) "0.771500051022"
+   3) "$"
+   4) "{\"name\":\"Wireless earbuds\",\"description\":\"Wireless Bluetooth in-ear headphones\",\"price\":64.99,\"stock\":17,\"colors\":[\"black\",\"white\"],\"embeddings\":[[-0.7,-0.51,0.88,0.14],[-0.8,-0.15,0.33,-0.01]]}"
+4) "item:1"
+5) 1) "dist"
+   2) "1.08280003071"
+   3) "$"
+   4) "{\"name\":\"Noise-cancelling Bluetooth headphones\",\"description\":\"Wireless Bluetooth headphones with noise-cancelling technology\",\"price\":99.98,\"stock\":25,\"colors\":[\"black\",\"silver\"],\"embeddings\":[[0.87,-0.15,0.55,0.03]]}"
+```
+Note that `0.771500051022` is the L2 distance between the query vector and `[-0.8,-0.15,0.33,-0.01]`, which is the second element in the embedding array, and it is lower than the L2 distance between the query vector and `[-0.7,-0.51,0.88,0.14]`, which is the first element in the embedding array.
+
+For more information on vector similarity syntax, see [Vector fields]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}).
+
+## Index JSON objects
+
+You cannot index JSON objects. If the JSONPath expression returns an object, it will be ignored.
+
+To index the contents of a JSON object, you need to index the individual elements within the object in separate attributes.
+ +For example, to index the `connection` JSON object, define the `$.connection.wireless` and `$.connection.type` fields as separate attributes when you create the index: + +```sql +127.0.0.1:6379> FT.CREATE itemIdx3 ON JSON SCHEMA $.connection.wireless AS wireless TAG $.connection.type AS connectionType TEXT +"OK" +``` + +After you create the new index, you can search for items with the wireless TAG set to `true`: + +```sql +127.0.0.1:6379> FT.SEARCH itemIdx3 '@wireless:{true}' +1) "2" +2) "item:2" +3) 1) "$" + 2) "{\"name\":\"Wireless earbuds\",\"description\":\"Wireless Bluetooth in-ear headphones\",\"connection\":{\"wireless\":true,\"connection\":\"Bluetooth\"},\"price\":64.99,\"stock\":17,\"colors\":[\"black\",\"white\"]}" +4) "item:1" +5) 1) "$" + 2) "{\"name\":\"Noise-cancelling Bluetooth headphones\",\"description\":\"Wireless Bluetooth headphones with noise-cancelling technology\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":99.98,\"stock\":25,\"colors\":[\"black\",\"silver\"]}" +``` + +You can also search for items with a Bluetooth connection type: + +```sql +127.0.0.1:6379> FT.SEARCH itemIdx3 '@connectionType:(bluetooth)' +1) "2" +2) "item:1" +3) 1) "$" + 2) "{\"name\":\"Noise-cancelling Bluetooth headphones\",\"description\":\"Wireless Bluetooth headphones with noise-cancelling technology\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":99.98,\"stock\":25,\"colors\":[\"black\",\"silver\"]}" +4) "item:2" +5) 1) "$" + 2) "{\"name\":\"Wireless earbuds\",\"description\":\"Wireless Bluetooth in-ear headphones\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":64.99,\"stock\":17,\"colors\":[\"black\",\"white\"]}" +``` + +## Field projection + +[`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) returns the entire JSON document by default. If you want to limit the returned search results to specific attributes, you can use field projection. + +### Return specific attributes + +When you run a search query, you can use the `RETURN` keyword to specify which attributes you want to include in the search results. You also need to specify the number of fields to return. + +For example, this query only returns the `name` and `price` of each set of headphones: + +```sql +127.0.0.1:6379> FT.SEARCH itemIdx '@description:(headphones)' RETURN 2 name price +1) "2" +2) "item:1" +3) 1) "name" + 2) "Noise-cancelling Bluetooth headphones" + 3) "price" + 4) "99.98" +4) "item:2" +5) 1) "name" + 2) "Wireless earbuds" + 3) "price" + 4) "64.99" +``` + +### Project with JSONPath + +You can use [JSONPath]({{< relref "/develop/data-types/json/path" >}}) expressions in a `RETURN` statement to extract any part of the JSON document, even fields that were not defined in the index `SCHEMA`. + +For example, the following query uses the JSONPath expression `$.stock` to return each item's stock in addition to the name and price attributes. + +```sql +127.0.0.1:6379> FT.SEARCH itemIdx '@description:(headphones)' RETURN 3 name price $.stock +1) "2" +2) "item:1" +3) 1) "name" + 2) "Noise-cancelling Bluetooth headphones" + 3) "price" + 4) "99.98" + 5) "$.stock" + 6) "25" +4) "item:2" +5) 1) "name" + 2) "Wireless earbuds" + 3) "price" + 4) "64.99" + 5) "$.stock" + 6) "17" +``` + +Note that the returned property name is the JSONPath expression itself: `"$.stock"`. 
+
+You can use the `AS` option to specify an alias for the returned property:
+
+```sql
+127.0.0.1:6379> FT.SEARCH itemIdx '@description:(headphones)' RETURN 5 name price $.stock AS stock
+1) "2"
+2) "item:1"
+3) 1) "name"
+   2) "Noise-cancelling Bluetooth headphones"
+   3) "price"
+   4) "99.98"
+   5) "stock"
+   6) "25"
+4) "item:2"
+5) 1) "name"
+   2) "Wireless earbuds"
+   3) "price"
+   4) "64.99"
+   5) "stock"
+   6) "17"
+```
+
+This query returns the field as the alias `"stock"` instead of the JSONPath expression `"$.stock"`.
+
+### Highlight search terms
+
+You can [highlight]({{< relref "/develop/interact/search-and-query/advanced-concepts/highlight" >}}) relevant search terms in any indexed `TEXT` attribute.
+
+For [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}), you have to explicitly set which attributes you want highlighted after the `RETURN` and `HIGHLIGHT` parameters.
+
+Use the optional `TAGS` keyword to specify the strings that will surround (or highlight) the matching search terms.
+
+For example, highlight the word "bluetooth" with bold HTML tags in item names and descriptions:
+
+```sql
+127.0.0.1:6379> FT.SEARCH itemIdx '(@name:(bluetooth))|(@description:(bluetooth))' RETURN 3 name description price HIGHLIGHT FIELDS 2 name description TAGS '<b>' '</b>'
+1) "2"
+2) "item:1"
+3) 1) "name"
+   2) "Noise-cancelling <b>Bluetooth</b> headphones"
+   3) "description"
+   4) "Wireless <b>Bluetooth</b> headphones with noise-cancelling technology"
+   5) "price"
+   6) "99.98"
+4) "item:2"
+5) 1) "name"
+   2) "Wireless earbuds"
+   3) "description"
+   4) "Wireless <b>Bluetooth</b> in-ear headphones"
+   5) "price"
+   6) "64.99"
+```
+
+## Aggregate with JSONPath
+
+You can use [aggregation]({{< relref "/develop/interact/search-and-query/advanced-concepts/aggregations" >}}) to generate statistics or build facet queries.
+
+The `LOAD` option accepts [JSONPath]({{< relref "/develop/data-types/json/path" >}}) expressions. You can use any value in the pipeline, even if the value is not indexed.
+
+This example uses aggregation to calculate a 10% price discount for each item and sorts the items from least expensive to most expensive:
+
+```sql
+127.0.0.1:6379> FT.AGGREGATE itemIdx '*' LOAD 4 name $.price AS originalPrice APPLY '@originalPrice - (@originalPrice * 0.10)' AS salePrice SORTBY 2 @salePrice ASC
+1) "2"
+2) 1) "name"
+   2) "Wireless earbuds"
+   3) "originalPrice"
+   4) "64.99"
+   5) "salePrice"
+   6) "58.491"
+3) 1) "name"
+   2) "Noise-cancelling Bluetooth headphones"
+   3) "originalPrice"
+   4) "99.98"
+   5) "salePrice"
+   6) "89.982"
+```
+
+{{% alert title="Note" color="info" %}}
+[`FT.AGGREGATE`]({{< relref "commands/ft.aggregate/" >}}) queries require `attribute` modifiers. Don't use JSONPath expressions in queries, except with the `LOAD` option, because the query parser doesn't fully support them.
+{{% /alert %}}
+
+## Index missing or empty values
+As of v2.10, you can search for missing properties, that is, properties that do not exist in a given document, using the `INDEXMISSING` option with `FT.CREATE` in conjunction with the `ismissing` query function in `FT.SEARCH`. You can also search for existing properties with no value (i.e., empty) using the `INDEXEMPTY` option with `FT.CREATE`. Both query types require DIALECT 2.
Examples below:
+
+```
+JSON.SET key:1 $ '{"propA": "foo"}'
+JSON.SET key:2 $ '{"propA": "bar", "propB":"abc"}'
+FT.CREATE idx ON JSON PREFIX 1 key: SCHEMA $.propA AS propA TAG $.propB AS propB TAG INDEXMISSING
+
+> FT.SEARCH idx 'ismissing(@propB)' DIALECT 2
+1) "1"
+2) "key:1"
+3) 1) "$"
+   2) "{\"propA\":\"foo\"}"
+```
+
+```
+JSON.SET key:1 $ '{"propA": "foo", "propB":""}'
+JSON.SET key:2 $ '{"propA": "bar", "propB":"abc"}'
+FT.CREATE idx ON JSON PREFIX 1 key: SCHEMA $.propA AS propA TAG $.propB AS propB TAG INDEXEMPTY
+
+> FT.SEARCH idx '@propB:{""}' DIALECT 2
+1) "1"
+2) "key:1"
+3) 1) "$"
+   2) "{\"propA\":\"foo\",\"propB\":\"\"}"
+```
+
+## Index limitations
+
+### Schema mapping
+
+During index creation, you need to map the JSON elements to `SCHEMA` fields as follows:
+
+- Strings as `TEXT`, `TAG`, or `GEO`.
+- Numbers as `NUMERIC`.
+- Booleans as `TAG`.
+- JSON array
+  - Array of strings as `TAG` or `TEXT`.
+  - Array of numbers as `NUMERIC` or `VECTOR`.
+  - Array of geo coordinates as `GEO`.
+  - `null` values in such arrays are ignored.
+- You cannot index JSON objects. Index the individual elements as separate attributes instead.
+- `null` values are ignored.
+
+### Sortable tags
+
+If you create an index for JSON documents with a JSONPath leading to an array or to multiple values, only the first value is considered by the sort.
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Query for data based on vector embeddings
+linkTitle: Vector
+title: Vector search
+weight: 5
+---
+
+This article gives you a good overview of how to perform vector search queries with the Redis Query Engine, which is part of Redis Open Source. See the [Redis as a vector database quick start guide]({{< relref "/develop/get-started/vector-database" >}}) for more information about Redis as a vector database. You can also find more detailed information about all the parameters in the [vector reference documentation]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}).
+
+A vector search query on a vector field allows you to find all vectors in a vector space that are close to a given vector. You can query for the k-nearest neighbors or vectors within a given radius.
+
+The examples in this article use a schema with the following fields:
+
+| JSON field | Field alias | Field type | Description |
+| ------------------------ | ----------- | ----------- | ----------- |
+| `$.description` | `description` | `TEXT` | The description of a bicycle as unstructured text |
+| `$.description_embeddings` | `vector` | `VECTOR` | The vector that a machine learning model derived from the description text |
+
+## K-nearest neighbours (KNN)
+
+The Redis command [FT.SEARCH]({{< relref "commands/ft.search" >}}) takes the index name, the query string, and additional query parameters as arguments. You need to pass the number of nearest neighbors, the vector field name, and the vector's binary representation in the following way:
+
+```
+FT.SEARCH index "(*)=>[KNN num_neighbours @field $vector]" PARAMS 2 vector "binary_data" DIALECT 2
+```
+
+Here is a more detailed explanation of this query:
+
+1. **Pre-filter**: The first expression within the round brackets is a filter. It allows you to decide which vectors should be taken into account before the vector search is performed. The expression `(*)` means that all vectors are considered.
+2. **Next step**: The `=>` arrow indicates that the pre-filtering happens before the vector search.
+3. 
**KNN query**: The expression `[KNN num_neighbours @field $vector]` is a parameterized query expression. A parameter name is indicated by the `$` prefix within the query string. +4. **Vector binary data**: You need to use the `PARAMS` argument to substitute `$vector` with the binary representation of the vector. The value `2` indicates that `PARAMS` is followed by two arguments, the parameter name `vector` and the parameter value. +5. **Dialect**: The vector search feature has been available since version two of the query dialect. + +You can read more about the `PARAMS` argument in the [FT.SEARCH]({{< relref "commands/ft.search" >}}) command reference. + +The following example shows you how to query for three bikes based on their description embeddings, and by using the field alias `vector`. The result is returned in ascending order based on the distance. You can see that the query only returns the fields `__vector_score` and `description`. The field `__vector_score` is present by default. Because you can have multiple vector fields in your schema, the vector score field name depends on the name of the vector field. If you change the field name `@vector` to `@foo`, the score field name changes to `__foo_score`. + +{{< clients-example query_vector vector1 >}} +FT.SEARCH idx:bikes_vss "(*)=>[KNN 3 @vector $query_vector]" PARAMS 2 "query_vector" "Z\xf8\x15:\xf23\xa1\xbfZ\x1dI>\r\xca9..." SORTBY "__vector_score" ASC RETURN 2 "__vector_score" "description" DIALECT 2 +{{< /clients-example >}} + + + +{{% alert title="Note" color="warning" %}} +The binary value of the query vector is significantly shortened in the CLI example above. +{{% /alert %}} + + +## Radius + +Instead of the number of nearest neighbors, you need to pass the radius along with the index name, the vector field name, and the vector's binary value: + +``` +FT.SEARCH index "@field:[VECTOR_RANGE radius $vector]" PARAMS 2 vector "binary_data" DIALECT 2 +``` + +If you want to sort by distance, then you must yield the distance via the range query parameter `$YIELD_DISTANCE_AS`: + +``` +FT.SEARCH index "@field:[VECTOR_RANGE radius $vector]=>{$YIELD_DISTANCE_AS: dist_field}" PARAMS 2 vector "binary_data" SORTBY dist_field DIALECT 2 +``` + +Here is a more detailed explanation of this query: + +1. **Range query**: the syntax of a radius query is very similar to the regular range query, except for the keyword `VECTOR_RANGE`. You can also combine a vector radius query with other queries in the same way as regular range queries. See [combined queries article]({{< relref "/develop/interact/search-and-query/query/combined" >}}) for more details. +2. **Additional step**: the `=>` arrow means that the range query is followed by evaluating additional parameters. +3. **Range query parameters**: parameters such as `$YIELD_DISTANCE_AS` can be found in the [vectors reference documentation]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}). +4. **Vector binary data**: you need to use `PARAMS` to pass the binary representation of the vector. +5. **Dialect**: vector search has been available since version two of the query dialect. + + +{{% alert title="Note" color="warning" %}} +By default, [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) returns only the first ten results. The [range query article]({{< relref "/develop/interact/search-and-query/query/range" >}}) explains to you how to scroll through the result set. 
+{{% /alert %}} + +The example below shows a radius query that returns the description and the distance within a radius of `0.5`. The result is sorted by the distance. + +{{< clients-example query_vector vector2 >}} +FT.SEARCH idx:bikes_vss "@vector:[VECTOR_RANGE 0.5 $query_vector]=>{$YIELD_DISTANCE_AS: vector_dist}" PARAMS 2 "query_vector" "Z\xf8\x15:\xf23\xa1\xbfZ\x1dI>\r\xca9..." SORTBY vector_dist ASC RETURN 2 vector_dist description DIALECT 2 +{{< /clients-example >}} + +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Perform simple exact match queries +linkTitle: Exact match +title: Exact match queries +weight: 1 +--- + +An exact match query allows you to select all documents where a field matches a specific value. + +You can use exact match queries on several field types. The query syntax varies depending on the type. + +The examples in this article use a schema with the following fields: + +| Field name | Field type | +| ---------- | ---------- | +| `description`| `TEXT` | +| `condition` | `TAG` | +| `price` | `NUMERIC` | + +You can find more details about creating the index and loading the demo data in the [quick start guide]({{< relref "/develop/get-started/document-database" >}}). + +## Numeric field + +To perform an exact match query on a numeric field, you need to construct a range query with the same start and end value: + +``` +FT.SEARCH index "@field:[value value]" + +or + +FT.SEARCH index "@field:[value]" DIALECT 2 # requires v2.10 + +or + +FT.SEARCH index "@field==value" DIALECT 2 # requires v2.10 +``` + +As described in the [article about range queries]({{< relref "/develop/interact/search-and-query/query/range" >}}), you can also use the `FILTER` argument: + +``` +FT.SEARCH index "*" FILTER field start end +``` + +The following examples show you how to query for bicycles with a price of exactly 270 USD: + +{{< clients-example query_em em1 >}} +> FT.SEARCH idx:bicycle "@price:[270 270]" +1) (integer) 1 +2) "bicycle:0" +3) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((-74.0610 40.7578, ... + +> FT.SEARCH idx:bicycle "@price:[270]" # requires v2.10 +1) (integer) 1 +2) "bicycle:0" +3) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((-74.0610 40.7578, ... + +> FT.SEARCH idx:bicycle "@price==270" # requires v2.10 +1) (integer) 1 +2) "bicycle:0" +3) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((-74.0610 40.7578, ... + +> FT.SEARCH idx:bicycle "*" FILTER price 270 270 +1) (integer) 1 +2) "bicycle:0" +3) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((-74.0610 40.7578, ... +{{< /clients-example >}} + + +## Tag field + +A tag is a short sequence of text, for example, "new" or "Los Angeles". + +{{% alert title="Important" color="warning" %}} +If you need to query for short texts, use a tag query instead of a full-text query. Tag fields are more space-efficient for storing index entries and often lead to lower query complexity for exact match queries. +{{% /alert %}} + +You can construct a tag query for a single tag in the following way: + +``` +FT.SEARCH index "@field:{tag}" +``` + +{{% alert title="Note" color="warning" %}} +The curly brackets are mandatory for tag queries. 
+{{% /alert %}} + +This short example shows you how to query for new bicycles: + +{{< clients-example query_em em2 >}} +> FT.SEARCH idx:bicycle "@condition:{new}" + 1) (integer) 5 + 2) "bicycle:0" + 3) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((-74.0610 40.7578, -73.9510 40.7578, -73.9510 40.6678, -74.0610 40.6678, -74.0610 40.7578))\",\"store_location\":\"-74.0060,40.7128\",\"brand\":\"Velorim\",\"model\":\"Jigger\",\"price\":270,\"description\":\"Small and powerful, the Jigger is the best ride for the smallest of tikes! This is the tiniest kids\xe2\x80\x99 pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\",\"condition\":\"new\"}" + 4) "bicycle:5" + 5) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((-0.1778 51.5524, 0.0822 51.5524, 0.0822 51.4024, -0.1778 51.4024, -0.1778 51.5524))\",\"store_location\":\"-0.1278,51.5074\",\"brand\":\"Breakout\",\"model\":\"XBN 2.1 Alloy\",\"price\":810,\"description\":\"The XBN 2.1 Alloy is our entry-level road bike \xe2\x80\x93 but that\xe2\x80\x99s not to say that it\xe2\x80\x99s a basic machine. With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimano\xe2\x80\x99s, this is a bike which doesn\xe2\x80\x99t break the bank and delivers craved performance.\",\"condition\":\"new\"}" + 6) "bicycle:6" + 7) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((2.1767 48.9016, 2.5267 48.9016, 2.5267 48.5516, 2.1767 48.5516, 2.1767 48.9016))\",\"store_location\":\"2.3522,48.8566\",\"brand\":\"ScramBikes\",\"model\":\"WattBike\",\"price\":2300,\"description\":\"The WattBike is the best e-bike for people who still feel young at heart. It has a Bafang 1000W mid-drive system and a 48V 17.5AH Samsung Lithium-Ion battery, allowing you to ride for more than 60 miles on one charge. It\xe2\x80\x99s great for tackling hilly terrain or if you just fancy a more leisurely ride. With three working modes, you can choose between E-bike, assisted bicycle, and normal bike modes.\",\"condition\":\"new\"}" + 8) "bicycle:7" + 9) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((13.3260 52.5700, 13.6550 52.5700, 13.6550 52.2700, 13.3260 52.2700, 13.3260 52.5700))\",\"store_location\":\"13.4050,52.5200\",\"brand\":\"Peaknetic\",\"model\":\"Secto\",\"price\":430,\"description\":\"If you struggle with stiff fingers or a kinked neck or back after a few minutes on the road, this lightweight, aluminum bike alleviates those issues and allows you to enjoy the ride. From the ergonomic grips to the lumbar-supporting seat position, the Roll Low-Entry offers incredible comfort. The rear-inclined seat tube facilitates stability by allowing you to put a foot on the ground to balance at a stop, and the low step-over frame makes it accessible for all ability and mobility levels. The saddle is very soft, with a wide back to support your hip joints and a cutout in the center to redistribute that pressure. Rim brakes deliver satisfactory braking control, and the wide tires provide a smooth, stable ride on paved roads and gravel. 
Rack and fender mounts facilitate setting up the Roll Low-Entry as your preferred commuter, and the BMX-like handlebar offers space for mounting a flashlight, bell, or phone holder.\",\"condition\":\"new\"}" +10) "bicycle:8" +11) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((1.9450 41.4301, 2.4018 41.4301, 2.4018 41.1987, 1.9450 41.1987, 1.9450 41.4301))\",\"store_location\":\"2.1734, 41.3851\",\"brand\":\"nHill\",\"model\":\"Summit\",\"price\":1200,\"description\":\"This budget mountain bike from nHill performs well both on bike paths and on the trail. The fork with 100mm of travel absorbs rough terrain. Fat Kenda Booster tires give you grip in corners and on wet trails. The Shimano Tourney drivetrain offered enough gears for finding a comfortable pace to ride uphill, and the Tektro hydraulic disc brakes break smoothly. Whether you want an affordable bike that you can take to work, but also take trail in mountains on the weekends or you\xe2\x80\x99re just after a stable, comfortable ride for the bike path, the Summit gives a good value for money.\",\"condition\":\"new\"}" +{{< /clients-example >}} + +Use double quotes and [DIALECT 2]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects" >}}#dialect-2) for exact match queries involving tags that contain special characters. As of v2.10, the only character that needs escaping in queries involving double-quoted tags is the double-quote character. Here's an example of using double-quoted tags that contain special characters: + +{{< clients-example query_em em3 >}} +> FT.CREATE idx:email ON JSON PREFIX 1 key: SCHEMA $.email AS email TAG +OK +> JSON.SET key:1 $ '{"email": "test@redis.com"}' +OK +> FT.SEARCH idx:email '@email:{"test@redis.com"}' DIALECT 2 +1) (integer) 1 +2) "key:1" +3) 1) "$" + 2) "{\"email\":\"test@redis.com\"}" +{{< /clients-example>}} + +## Full-text field + +A detailed explanation of full-text queries is available in the [full-text queries documentation]({{< relref "/develop/interact/search-and-query/query/full-text" >}}). You can also query for an exact match of a phrase within a text field: + +``` +FT.SEARCH index "@field:\"phrase\"" +``` + +{{% alert title="Important" color="warning" %}} +The phrase must be wrapped by escaped double quotes for an exact match query. + +You can't use a phrase that starts with a [stop word]({{< relref "/develop/interact/search-and-query/advanced-concepts/stopwords" >}}). +{{% /alert %}} + +Here is an example for finding all bicycles that have a description containing the exact text 'rough terrain': + +{{< clients-example query_em em4 >}} +FT.SEARCH idx:bicycle "@description:\"rough terrain\"" +1) (integer) 1 +2) "bicycle:8" +3) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((1.9450 41.4301, 2.4018 41.4301, 2.4018 41.1987, 1.9450 41.1987, 1.9450 41.4301))\",\"store_location\":\"2.1734, 41.3851\",\"brand\":\"nHill\",\"model\":\"Summit\",\"price\":1200,\"description\":\"This budget mountain bike from nHill performs well both on bike paths and on the trail. The fork with 100mm of travel absorbs rough terrain. Fat Kenda Booster tires give you grip in corners and on wet trails. The Shimano Tourney drivetrain offered enough gears for finding a comfortable pace to ride uphill, and the Tektro hydraulic disc brakes break smoothly. 
Whether you want an affordable bike that you can take to work, but also take trail in mountains on the weekends or you\xe2\x80\x99re just after a stable, comfortable ride for the bike path, the Summit gives a good value for money.\",\"condition\":\"new\"}" +{{< /clients-example >}}--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Combine query expressions +linkTitle: Combined +title: Combined queries +weight: 9 +--- + +A combined query is a combination of several query types, such as: + +* [Exact match]({{< relref "/develop/interact/search-and-query/query/exact-match" >}}) +* [Range]({{< relref "/develop/interact/search-and-query/query/range" >}}) +* [Full-text]({{< relref "/develop/interact/search-and-query/query/full-text" >}}) +* [Geospatial]({{< relref "/develop/interact/search-and-query/query/geo-spatial" >}}) +* [Vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) + +You can use logical query operators to combine query expressions for numeric, tag, and text fields. For vector fields, you can combine a KNN query with a pre-filter. + +{{% alert title="Note" color="warning" %}} +The operators are interpreted slightly differently depending on the query dialect used. The default dialect is `DIALECT 1`; see [this article]({{< relref "/develop/interact/search-and-query/administration/configuration#search-default-dialect" >}}) for information on how to change the dialect version. This article uses the second version of the query dialect, `DIALECT 2`, and uses additional brackets (`(...)`) to help clarify the examples. Further details can be found in the [query syntax documentation]({{< relref "/develop/interact/search-and-query/advanced-concepts/query_syntax" >}}). +{{% /alert %}} + +The examples in this article use the following schema: + +| Field name | Field type | +| ----------- | ---------- | +| `description` | `TEXT` | +| `condition` | `TAG` | +| `price` | `NUMERIC` | +| `vector` | `VECTOR` | + +## AND + +The binary operator ` ` (space) is used to intersect the results of two or more expressions. + +``` +FT.SEARCH index "(expr1) (expr2)" +``` + +If you want to perform an intersection based on multiple values within a specific text field, then you should use the following simplified notion: + +``` +FT.SEARCH index "@text_field:( value1 value2 ... )" +``` + +The following example shows you a query that finds bicycles in new condition and in a price range from 500 USD to 1000 USD: + +{{< clients-example query_combined combined1 >}} +FT.SEARCH idx:bicycle "@price:[500 1000] @condition:{new}" +{{< /clients-example >}} + +You might also be interested in bicycles for kids. The query below shows you how to combine a full-text search with the criteria from the previous query: + +{{< clients-example query_combined combined2 >}} +FT.SEARCH idx:bicycle "kids (@price:[500 1000] @condition:{used})" +{{< /clients-example >}} + +## OR + +You can use the binary operator `|` (vertical bar) to perform a union. + +``` +FT.SEARCH index "(expr1) | (expr2)" +``` + +{{% alert title="Note" color="warning" %}} +The logical `AND` takes precedence over `OR` when using dialect version two. The expression `expr1 expr2 | expr3 expr4` means `(expr1 expr2) | (expr3 expr4)`. Version one of the query dialect behaves differently. Using parentheses in query strings is advised to ensure the order is clear. 
+ {{% /alert %}} + + +If you want to perform the union based on multiple values within a single tag or text field, then you should use the following simplified notion: + +``` +FT.SEARCH index "@text_field:( value1 | value2 | ... )" +``` + +``` +FT.SEARCH index "@tag_field:{ value1 | value2 | ... }" +``` + +The following query shows you how to find used bicycles that contain either the word 'kids' or 'small': + +{{< clients-example query_combined combined3 >}} +FT.SEARCH idx:bicycle "(kids | small) @condition:{used}" +{{< /clients-example >}} + +The previous query searches across all text fields. The following example shows you how to limit the search to the description field: + +{{< clients-example query_combined combined4 >}} +FT.SEARCH idx:bicycle "@description:(kids | small) @condition:{used}" +{{< /clients-example >}} + +If you want to extend the search to new bicycles, then the below example shows you how to do that: + +{{< clients-example query_combined combined5 >}} +FT.SEARCH idx:bicycle "@description:(kids | small) @condition:{new | used}" +{{< /clients-example >}} + +## NOT + +A minus (`-`) in front of a query expression negates the expression. + +``` +FT.SEARCH index "-(expr)" +``` + +If you want to exclude new bicycles from the search within the previous price range, you can use this query: + +{{< clients-example query_combined combined6 >}} +FT.SEARCH idx:bicycle "@price:[500 1000] -@condition:{new}" +{{< /clients-example >}} + +## Numeric filter + +The [FT.SEARCH]({{< relref "commands/ft.search" >}}) command allows you to combine any query expression with a numeric filter. + +``` +FT.SEARCH index "expr" FILTER numeric_field start end +``` + +Please see the [range query article]({{< relref "/develop/interact/search-and-query/query/range" >}}) to learn more about numeric range queries and such filters. + + +## Pre-filter for a KNN vector query + +You can use a simple or more complex query expression with logical operators as a pre-filter in a KNN vector query. + +``` +FT.SEARCH index "(filter_expr)=>[KNN num_neighbours @field $vector]" PARAMS 2 vector "binary_data" DIALECT 2 +``` + +Here is an example: + +{{< clients-example query_combined combined7 >}} +FT.SEARCH idx:bikes_vss "(@price:[500 1000] @condition:{new})=>[KNN 3 @vector $query_vector]" PARAMS 2 "query_vector" "Z\xf8\x15:\xf23\xa1\xbfZ\x1dI>\r\xca9..." DIALECT 2 +{{< /clients-example >}} + +The [vector search article]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) provides further details about vector queries in general. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Query based on geographic data +linkTitle: Geospatial +title: Geospatial queries +weight: 4 +--- + +The geospatial feature in Redis Open Source allows you to query for data associated with geographic locations. You can either query for locations within a specific radius or based on geometric shapes, such as polygons. A polygon shape could, for instance, represent a lake or the layout of a building. + +The examples in this article use the following schema: + +| Field name | Field type | +| -------------- | ---------- | +| `store_location` | `GEO` | +| `pickup_zone` | `GEOSHAPE` | + + +{{% alert title="Note" color="warning" %}} +Redis version 7.2.0 or higher is required to use the `GEOSHAPE` field type. 
+{{% /alert %}}
+
+## Radius
+
+You can construct a radius query by passing the center coordinates (longitude, latitude), the radius, and the distance unit to the [FT.SEARCH]({{< relref "commands/ft.search" >}}) command.
+
+```
+FT.SEARCH index "@geo_field:[lon lat radius unit]"
+```
+
+Allowed units are `m`, `km`, `mi`, and `ft`.
+
+The following query finds all bicycle stores within a radius of 20 miles around London:
+
+{{< clients-example query_geo geo1 >}}
+FT.SEARCH idx:bicycle "@store_location:[-0.1778 51.5524 20 mi]"
+{{< /clients-example >}}
+
+## Shape
+
+The only supported shapes are points and polygons. You can query for polygons or points that either contain or are within a given geometric shape.
+
+```
+FT.SEARCH index "@geo_shape_field:[{WITHIN|CONTAINS|INTERSECTS|DISJOINT} $shape]" PARAMS 2 shape "shape_as_wkt" DIALECT 3
+```
+
+Here is a more detailed explanation of this query:
+
+1. **Field name**: you need to replace `geo_shape_field` with the `GEOSHAPE` field's name on which you want to query.
+2. **Spatial operator**: spatial operators define the relationship between the shapes in the database and the shape you are searching for. You can either use `WITHIN`, `CONTAINS`, `INTERSECTS`, or `DISJOINT`. `WITHIN` finds any shape in the database that is inside the given shape. `CONTAINS` queries for any shape that surrounds the given shape. `INTERSECTS` finds any shape that has coordinates in common with the provided shape. `DISJOINT` finds any shapes that have nothing in common with the provided shape. `INTERSECTS` and `DISJOINT` were introduced in v2.10.
+3. **Parameter**: the query refers to a parameter named `shape`. You can use any parameter name here. You need to use the `PARAMS` clause to set the parameter value. The value follows the [well-known text representation of a geometry](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry). Supported types are `POINT(x y)` and `POLYGON((x1 y1, x2 y2, ...))`.
+4. **Dialect**: shape-based queries have been available since version three of the query dialect.
+
+The following example query verifies whether a bicycle is within a pickup zone:
+
+{{< clients-example query_geo geo2 >}}
+FT.SEARCH idx:bicycle "@pickup_zone:[CONTAINS $bike]" PARAMS 2 bike "POINT(-0.1278 51.5074)" DIALECT 3
+{{< /clients-example >}}
+
+If you want to find all pickup zones that are approximately within Europe, then you can use the following query:
+
+{{< clients-example query_geo geo3 >}}
+FT.SEARCH idx:bicycle "@pickup_zone:[WITHIN $europe]" PARAMS 2 europe "POLYGON((-25 35, 40 35, 40 70, -25 70, -25 35))" DIALECT 3
+{{< /clients-example >}}---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Group and aggregate query results
+linkTitle: Aggregation
+title: Aggregation queries
+weight: 10
+---
+
+An aggregation query allows you to perform the following actions:
+
+- Apply simple mapping functions.
+- Group data based on field values.
+- Apply aggregation functions on the grouped data.
+
+This article explains the basic usage of the [FT.AGGREGATE]({{< relref "commands/ft.aggregate" >}}) command. For further details, see the [command specification]({{< relref "commands/ft.aggregate" >}}) and the [aggregations reference documentation]({{< relref "/develop/interact/search-and-query/advanced-concepts/aggregations" >}}).
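+
+It can help to read an `FT.AGGREGATE` call as a pipeline: the query expression first selects a set of documents, and each subsequent clause transforms the intermediate results in the order in which it appears. The sketch below shows the general shape of such a pipeline. The index, field, and result names are placeholders rather than part of the examples that follow, and the optional `SORTBY` and `LIMIT` steps are covered in the aggregations reference documentation.
+
+```
+FT.AGGREGATE index "query_expr"
+  LOAD 1 "field"
+  APPLY "@field * 2" AS "mapped_field"
+  GROUPBY 1 "@group_field"
+    REDUCE SUM 1 "@mapped_field" AS "sum_mapped"
+  SORTBY 2 "@sum_mapped" DESC
+  LIMIT 0 10
+```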
+ +The examples in this article use a schema with the following fields: + +| Field name | Field type | +| ---------- | ---------- | +| `condition` | `TAG` | +| `price` | `NUMERIC` | + +## Simple mapping + +The `APPLY` clause allows you to apply a simple mapping function to a result set that is returned based on the query expression. + +``` +FT.AGGREGATE index "query_expr" LOAD n "field_1" .. "field_n" APPLY "function_expr" AS "result_field" +``` + +Here is a more detailed explanation of the query syntax: + +1. **Query expression**: you can use the same query expressions as you would use with the [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) command. You can substitute `query_expr` with any of the expressions explained in the articles of this [query topic]({{< relref "/develop/interact/search-and-query/query/" >}}). Vector search queries are an exception. You can't combine a vector search with an aggregation query. +2. **Loaded fields**: if field values weren't already loaded into the aggregation pipeline, you can force their presence via the `LOAD` clause. This clause takes the number of fields (`n`), followed by the field names (`"field_1" .. "field_n"`). +3. **Mapping function**: this mapping function operates on the field values. A specific field is referenced as `@field_name` within the function expression. The result is returned as `result_field`. + +The following example shows you how to calculate a discounted price for new bicycles: + +{{< clients-example query_agg agg1 >}} +FT.AGGREGATE idx:bicycle "@condition:{new}" LOAD 2 "__key" "price" APPLY "@price - (@price * 0.1)" AS "discounted" +{{< /clients-example >}} + +The field `__key` is a built-in field. + +The output of this query is: + +``` +1) "1" +2) 1) "__key" + 1) "bicycle:0" + 2) "price" + 3) "270" + 4) "discounted" + 5) "243" +3) 1) "__key" + 1) "bicycle:5" + 2) "price" + 3) "810" + 4) "discounted" + 5) "729" +4) 1) "__key" + 1) "bicycle:6" + 2) "price" + 3) "2300" + 4) "discounted" + 5) "2070" +... +``` + +## Grouping with aggregation + +The previous example did not group the data. You can group and aggregate data based on one or many criteria in the following way: + +``` +FT.AGGREGATE index "query_expr" ... GROUPBY n "field_1" .. "field_n" REDUCE AGG_FUNC m "@field_param_1" .. "@field_param_m" AS "aggregated_result_field" +``` + +Here is an explanation of the additional constructs: + +1. **Grouping**: you can group by one or many fields. Each ordered sequence of field values then defines one group. It's also possible to group by values that resulted from a previous `APPLY ... AS`. +2. **Aggregation**: you must replace `AGG_FUNC` with one of the supported aggregation functions (e.g., `SUM` or `COUNT`). A complete list of functions is available in the [aggregations reference documentation]({{< relref "/develop/interact/search-and-query/advanced-concepts/aggregations" >}}). Replace `aggregated_result_field` with a value of your choice. + +The following query shows you how to group by the field `condition` and apply a reduction based on the previously derived `price_category`. The expression `@price<1000` causes a bicycle to have the price category `1` if its price is lower than 1000 USD. Otherwise, it has the price category `0`. The output is the number of affordable bicycles grouped by price category. 
+ +{{< clients-example query_agg agg2 >}} +FT.AGGREGATE idx:bicycle "*" LOAD 1 price APPLY "@price<1000" AS price_category GROUPBY 1 @condition REDUCE SUM 1 "@price_category" AS "num_affordable" +{{< /clients-example >}} + +``` +1) "3" +2) 1) "condition" + 1) "refurbished" + 2) "num_affordable" + 3) "1" +3) 1) "condition" + 1) "used" + 2) "num_affordable" + 3) "1" +4) 1) "condition" + 1) "new" + 2) "num_affordable" + 3) "3" +``` + +{{% alert title="Note" color="warning" %}} +You can also create more complex aggregation pipelines with [FT.AGGREGATE]({{< relref "commands/ft.aggregate" >}}). Applying multiple reduction functions under one `GROUPBY` clause is possible. In addition, you can also chain groupings and mix in additional mapping steps (e.g., `GROUPBY ... REDUCE ... APPLY ... GROUPBY ... REDUCE`) +{{% /alert %}} + + +## Aggregating without grouping + +You can't use an aggregation function outside of a `GROUPBY` clause, but you can construct your pipeline in a way that the aggregation happens on a single group that spans all documents. If your documents don't share a common attribute, you can add it via an extra `APPLY` step. + +Here is an example that adds a type attribute `bicycle` to each document before counting all documents with that type: + +{{< clients-example query_agg agg3 >}} +FT.AGGREGATE idx:bicycle "*" APPLY "'bicycle'" AS type GROUPBY 1 @type REDUCE COUNT 0 AS num_total +{{< /clients-example >}} + +The result is: + +``` +1) "1" +2) 1) "type" + 1) "bicycle" + 2) "num_total" + 3) "10" +``` + +## Grouping without aggregation + +It's sometimes necessary to group your data without applying a mathematical aggregation function. If you need a grouped list of values, then the `TOLIST` function is helpful. + +The following example shows how to group all bicycles by `condition`: + +{{< clients-example query_agg agg4 >}} +FT.AGGREGATE idx:bicycle "*" LOAD 1 "__key" GROUPBY 1 "@condition" REDUCE TOLIST 1 "__key" AS bicycles +{{< /clients-example >}} + +The output of this query is: + +``` +1) "3" +2) 1) "condition" + 1) "refurbished" + 2) "bicycles" + 3) 1) "bicycle:9" +3) 1) "condition" + 1) "used" + 2) "bicycles" + 3) 1) "bicycle:1" + 1) "bicycle:2" + 2) "bicycle:3" + 3) "bicycle:4" +4) 1) "condition" + 1) "new" + 2) "bicycles" + 3) 1) "bicycle:0" + 1) "bicycle:5" + 2) "bicycle:6" + 3) "bicycle:8" + 4) "bicycle:7" +```--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Perform a full-text search +linkTitle: Full-text +title: Full-text search +weight: 3 +--- + +A full-text search finds words or phrases within larger texts. You can search within a specific text field or across all text fields. + +This article provides a good overview of the most relevant full-text search capabilities. Please find further details about all the full-text search features in the [reference documentation]({{< relref "/develop/interact/search-and-query/advanced-concepts/" >}}). + +The examples in this article use a schema with the following fields: + +| Field name | Field type | +| ---------- | ---------- | +| `brand` | `TEXT` | +| `model` | `TEXT` | +| `description`| `TEXT` | + + +## Single word + +To search for a word (or word stem) across all text fields, you can construct the following simple query: + +``` +FT.SEARCH index "word" +``` + +Instead of searching across all text fields, you might want to limit the search to a specific text field. 
+ +``` +FT.SEARCH index "@field: word" +``` + +Words that occur very often in natural language, such as `the` or `a` for the English language, aren't indexed and will not return a search result. You can find further details in the [stop words article]({{< relref "/develop/interact/search-and-query/advanced-concepts/stopwords" >}}). + +The following example searches for all bicycles that have the word 'kids' in the description: + +{{< clients-example query_ft ft1 >}} +FT.SEARCH idx:bicycle "@description: kids" +{{< /clients-example >}} + +## Phrase + +A phrase is a sentence, sentence fragment, or small group of words. You can find further details about how to find exact phrases in the [exact match article]({{< relref "/develop/interact/search-and-query/query/exact-match" >}}). + + +## Word prefix + +You can also search for words that match a given prefix. + +``` +FT.SEARCH index "prefix*" +``` + +``` +FT.SEARCH index "@field: prefix*" +``` + +{{% alert title="Important" color="warning" %}} +The prefix needs to be at least two characters long. +{{% /alert %}} + +Here is an example that shows you how to search for bicycles with a brand that starts with 'ka': + +{{< clients-example query_ft ft2 >}} +FT.SEARCH idx:bicycle "@model: ka*" +{{< /clients-example >}} + +## Word suffix + +Similar to the prefix, it is also possible to search for words that share the same suffix. + +``` +FT.SEARCH index "*suffix" +``` + +You can also combine prefix- and suffix-based searches within a query expression. + +``` +FT.SEARCH index "*infix*" +``` + +Here is an example that finds all brands that end with 'bikes': + +{{< clients-example query_ft ft3 >}} +FT.SEARCH idx:bicycle "@brand:*bikes" +{{< /clients-example >}} + +## Fuzzy search + +A fuzzy search allows you to find documents with words that approximately match your search term. To perform a fuzzy search, you wrap search terms with pairs of `%` characters. A single pair represents a (Levenshtein) distance of one, two pairs represent a distance of two, and three pairs, the maximum distance, represents a distance of three. + +Here is the command that searches across all text fields with a distance of one: + +``` +FT.SEARCH index "%word%" +``` + +The following example finds all documents that contain a word that has a distance of one to the incorrectly spelled word 'optamized'. You can see that this matches the word 'optimized'. + +{{< clients-example query_ft ft4 >}} +FT.SEARCH idx:bicycle "%optamized%" +{{< /clients-example >}} + +If you want to increase the maximum word distance to two, you can use the following query: + +{{< clients-example query_ft ft5 >}} +FT.SEARCH idx:bicycle "%%optamised%%" +{{< /clients-example >}} + +## Unicode considerations + +Redis Query Engine only supports Unicode characters in the [basic multilingual plane](https://en.wikipedia.org/wiki/Plane_(Unicode)#Basic_Multilingual_Plane); U+0000 to U+FFFF. 
Unicode characters beyond U+FFFF, such as Emojis, are not supported and would not be retrieved by queries including such characters in the following use cases: + +* Querying TEXT fields with Prefix/Suffix/Infix +* Querying TEXT fields with fuzzy + +Examples: + +``` +redis> FT.CREATE idx SCHEMA tag TAG text TEXT +OK +redis> HSET doc:1 tag '😀😁🙂' text '😀😁🙂' +(integer) 2 +redis> HSET doc:2 tag '😀😁🙂abc' text '😀😁🙂abc' +(integer) 2 +redis> FT.SEARCH idx '@text:(*😀😁🙂)' NOCONTENT +1) (integer) 0 +redis> FT.SEARCH idx '@text:(*😀😁🙂*)' NOCONTENT +1) (integer) 0 +redis> FT.SEARCH idx '@text:(😀😁🙂*)' NOCONTENT +1) (integer) 0 + +redis> FT.SEARCH idx '@text:(%😀😁🙃%)' NOCONTENT +1) (integer) 0 +```--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Understand how to query, search, and aggregate Redis data +hideListLinks: true +linkTitle: Query +title: Query data +weight: 5 +--- + +Redis Open Source distinguishes between the [FT.SEARCH]({{< relref "/commands/ft.search" >}}) and [FT.AGGREGATE]({{< relref "/commands/ft.aggregate" >}}) query commands. You should use [FT.SEARCH]({{< relref "/commands/ft.search" >}}) if you want to perform selections and projections only. If you also need to apply mapping functions, group, or aggregate data, use the [FT.AGGREGATE]({{< relref "/commands/ft.aggregate" >}}) command. + +* **Selection**: A selection allows you to return all documents that fulfill specific criteria. +* **Projection**: Projections are used to return specific fields of the result set. You can also map/project to calculated field values. +* **Aggregation**: Aggregations collect and summarize data across several fields. + +Here is a short SQL comparison using the [bicycle dataset](./data/bicycles.txt): + +|Type| SQL | Redis | +|----------| --- | ----------- | +| Selection | `SELECT * FROM bicycles WHERE price >= 1000` | `FT.SEARCH idx:bicycle "@price:[1000 +inf]"` | +| Simple projection | `SELECT id, price FROM bicycles` | `FT.SEARCH idx:bicycle "*" RETURN 2 __key, price` | +| Calculated projection| `SELECT id, price-price*0.1 AS discounted FROM bicycles`| `FT.AGGREGATE idx:bicycle "*" LOAD 2 __key price APPLY "@price-@price*0.1" AS discounted`| +| Aggregation | `SELECT condition, AVG(price) AS avg_price FROM bicycles GROUP BY condition` | `FT.AGGREGATE idx:bicycle "*" GROUPBY 1 @condition REDUCE AVG 1 @price AS avg_price` | + +The following articles provide an overview of how to query data with the [FT.SEARCH]({{< relref "commands/ft.search" >}}) command: + +* [Exact match queries]({{< relref "/develop/interact/search-and-query/query/exact-match" >}}) +* [Range queries]({{< relref "/develop/interact/search-and-query/query/range" >}}) +* [Full-text search ]({{< relref "/develop/interact/search-and-query/query/full-text" >}}) +* [Geospatial queries]({{< relref "/develop/interact/search-and-query/query/geo-spatial" >}}) +* [Vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) +* [Combined queries]({{< relref "/develop/interact/search-and-query/query/combined" >}}) + +You can find further details about aggregation queries with [FT.AGGREGATE]({{< relref "commands/ft.aggregate" >}}) in the following article: + +* [Aggregation queries]({{< relref "/develop/interact/search-and-query/query/aggregation" >}})--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Perform numeric range queries +linkTitle: Range +title: Range queries +weight: 2 +--- + +A range query on a 
numeric field returns the values that are in between a given start and end value: + +``` +FT.SEARCH index "@field:[start end]" +``` + +You can also use the `FILTER` argument, but you need to know that the query execution plan is different because the filter is applied after the query string (e.g., `*`) is evaluated: + +``` +FT.SEARCH index "*" FILTER field start end +``` + +## Start and end values + +Start and end values are by default inclusive, but you can prepend `(` to a value to exclude it from the range. + +The values `-inf`, `inf`, and `+inf` are valid values that allow you to define open ranges. + +## Result set + +An open-range query can lead to a large result set. + +By default, [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) returns only the first ten results. The `LIMIT` argument helps you to scroll through the result set. The `SORTBY` argument ensures that the documents in the result set are returned in the specified order. + +``` +FT.SEARCH index "@field:[start end]" SORTBY field LIMIT page_start page_end +``` + +You can find further details about using the `LIMIT` and `SORTBY` in the [[`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) command reference](/commands/ft.search/). + +## Examples + +The examples in this section use a schema with the following fields: + +| Field name | Field type | +| ---------- | ---------- | +| `price` | `NUMERIC` | + +The following query finds bicycles within a price range greater than or equal to 500 USD and smaller than or equal to 1000 USD (`500 <= price <= 1000`): + +{{< clients-example query_range range1 >}} +> FT.SEARCH idx:bicycle "@price:[500 1000]" +1) (integer) 3 +2) "bicycle:2" +3) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((-87.6848 41.9331, -87.5748 41.9331, -87.5748 41.8231, -87.6848 41.8231, -87.6848 41.9331))\",\"store_location\":\"-87.6298,41.8781\",\"brand\":\"Nord\",\"model\":\"Chook air 5\",\"price\":815,\"description\":\"The Chook Air 5 gives kids aged six years and older a durable and uberlight mountain bike for their first experience on tracks and easy cruising through forests and fields. The lower top tube makes it easy to mount and dismount in any situation, giving your kids greater safety on the trails.\",\"condition\":\"used\"}" +4) "bicycle:5" +5) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((-0.1778 51.5524, 0.0822 51.5524, 0.0822 51.4024, -0.1778 51.4024, -0.1778 51.5524))\",\"store_location\":\"-0.1278,51.5074\",\"brand\":\"Breakout\",\"model\":\"XBN 2.1 Alloy\",\"price\":810,\"description\":\"The XBN 2.1 Alloy is our entry-level road bike \xe2\x80\x93 but that\xe2\x80\x99s not to say that it\xe2\x80\x99s a basic machine. With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimano\xe2\x80\x99s, this is a bike which doesn\xe2\x80\x99t break the bank and delivers craved performance.\",\"condition\":\"new\"}" +6) "bicycle:9" +7) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((12.4464 42.1028, 12.5464 42.1028, 12.5464 41.7028, 12.4464 41.7028, 12.4464 42.1028))\",\"store_location\":\"12.4964,41.9028\",\"model\":\"ThrillCycle\",\"brand\":\"BikeShind\",\"price\":815,\"description\":\"An artsy, retro-inspired bicycle that\xe2\x80\x99s as functional as it is pretty: The ThrillCycle steel frame offers a smooth ride. A 9-speed drivetrain has enough gears for coasting in the city, but we wouldn\xe2\x80\x99t suggest taking it to the mountains. Fenders protect you from mud, and a rear basket lets you transport groceries, flowers and books. 
The ThrillCycle comes with a limited lifetime warranty, so this little guy will last you long past graduation.\",\"condition\":\"refurbished\"}" +{{< /clients-example >}} + +This is semantically equivalent to: + +{{< clients-example query_range range2 >}} +> FT.SEARCH idx:bicycle "*" FILTER price 500 1000 +1) (integer) 3 +2) "bicycle:2" +3) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((-87.6848 41.9331, -87.5748 41.9331, -87.5748 41.8231, -87.6848 41.8231, -87.6848 41.9331))\",\"store_location\":\"-87.6298,41.8781\",\"brand\":\"Nord\",\"model\":\"Chook air 5\",\"price\":815,\"description\":\"The Chook Air 5 gives kids aged six years and older a durable and uberlight mountain bike for their first experience on tracks and easy cruising through forests and fields. The lower top tube makes it easy to mount and dismount in any situation, giving your kids greater safety on the trails.\",\"condition\":\"used\"}" +4) "bicycle:5" +5) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((-0.1778 51.5524, 0.0822 51.5524, 0.0822 51.4024, -0.1778 51.4024, -0.1778 51.5524))\",\"store_location\":\"-0.1278,51.5074\",\"brand\":\"Breakout\",\"model\":\"XBN 2.1 Alloy\",\"price\":810,\"description\":\"The XBN 2.1 Alloy is our entry-level road bike \xe2\x80\x93 but that\xe2\x80\x99s not to say that it\xe2\x80\x99s a basic machine. With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimano\xe2\x80\x99s, this is a bike which doesn\xe2\x80\x99t break the bank and delivers craved performance.\",\"condition\":\"new\"}" +6) "bicycle:9" +7) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((12.4464 42.1028, 12.5464 42.1028, 12.5464 41.7028, 12.4464 41.7028, 12.4464 42.1028))\",\"store_location\":\"12.4964,41.9028\",\"model\":\"ThrillCycle\",\"brand\":\"BikeShind\",\"price\":815,\"description\":\"An artsy, retro-inspired bicycle that\xe2\x80\x99s as functional as it is pretty: The ThrillCycle steel frame offers a smooth ride. A 9-speed drivetrain has enough gears for coasting in the city, but we wouldn\xe2\x80\x99t suggest taking it to the mountains. Fenders protect you from mud, and a rear basket lets you transport groceries, flowers and books. The ThrillCycle comes with a limited lifetime warranty, so this little guy will last you long past graduation.\",\"condition\":\"refurbished\"}" +{{< /clients-example >}} + +For bicycles with a price greater than 1000 USD (`price > 1000`), you can use: + +{{< clients-example query_range range3 >}} +> FT.SEARCH idx:bicycle "@price:[(1000 +inf]" + 1) (integer) 5 + 2) "bicycle:1" + 3) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((-118.2887 34.0972, -118.1987 34.0972, -118.1987 33.9872, -118.2887 33.9872, -118.2887 34.0972))\",\"store_location\":\"-118.2437,34.0522\",\"brand\":\"Bicyk\",\"model\":\"Hillcraft\",\"price\":1200,\"description\":\"Kids want to ride with as little weight as possible. Especially on an incline! They may be at the age when a 27.5\\\" wheel bike is just too clumsy coming off a 24\\\" bike. 
The Hillcraft 26 is just the solution they need!\",\"condition\":\"used\"}" + 4) "bicycle:4" + 5) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((-122.4644 37.8199, -122.3544 37.8199, -122.3544 37.7099, -122.4644 37.7099, -122.4644 37.8199))\",\"store_location\":\"-122.4194,37.7749\",\"brand\":\"Noka Bikes\",\"model\":\"Kahuna\",\"price\":3200,\"description\":\"Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. Both the frames and components have been tweaked to include a women\xe2\x80\x99s saddle, different bars and unique colourway.\",\"condition\":\"used\"}" + 6) "bicycle:3" + 7) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((-80.2433 25.8067, -80.1333 25.8067, -80.1333 25.6967, -80.2433 25.6967, -80.2433 25.8067))\",\"store_location\":\"-80.1918,25.7617\",\"brand\":\"Eva\",\"model\":\"Eva 291\",\"price\":3400,\"description\":\"The sister company to Nord, Eva launched in 2005 as the first and only women-dedicated bicycle brand. Designed by women for women, allEva bikes are optimized for the feminine physique using analytics from a body metrics database. If you like 29ers, try the Eva 291. It\xe2\x80\x99s a brand new bike for 2022.. This full-suspension, cross-country ride has been designed for velocity. The 291 has 100mm of front and rear travel, a superlight aluminum frame and fast-rolling 29-inch wheels. Yippee!\",\"condition\":\"used\"}" + 8) "bicycle:6" + 9) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((2.1767 48.9016, 2.5267 48.9016, 2.5267 48.5516, 2.1767 48.5516, 2.1767 48.9016))\",\"store_location\":\"2.3522,48.8566\",\"brand\":\"ScramBikes\",\"model\":\"WattBike\",\"price\":2300,\"description\":\"The WattBike is the best e-bike for people who still feel young at heart. It has a Bafang 1000W mid-drive system and a 48V 17.5AH Samsung Lithium-Ion battery, allowing you to ride for more than 60 miles on one charge. It\xe2\x80\x99s great for tackling hilly terrain or if you just fancy a more leisurely ride. With three working modes, you can choose between E-bike, assisted bicycle, and normal bike modes.\",\"condition\":\"new\"}" +10) "bicycle:8" +11) 1) "$" + 2) "{\"pickup_zone\":\"POLYGON((1.9450 41.4301, 2.4018 41.4301, 2.4018 41.1987, 1.9450 41.1987, 1.9450 41.4301))\",\"store_location\":\"2.1734, 41.3851\",\"brand\":\"nHill\",\"model\":\"Summit\",\"price\":1200,\"description\":\"This budget mountain bike from nHill performs well both on bike paths and on the trail. The fork with 100mm of travel absorbs rough terrain. Fat Kenda Booster tires give you grip in corners and on wet trails. The Shimano Tourney drivetrain offered enough gears for finding a comfortable pace to ride uphill, and the Tektro hydraulic disc brakes break smoothly. 
Whether you want an affordable bike that you can take to work, but also take trail in mountains on the weekends or you\xe2\x80\x99re just after a stable, comfortable ride for the bike path, the Summit gives a good value for money.\",\"condition\":\"new\"}" +{{< /clients-example >}} + +The example below returns bicycles with a price lower than or equal to 2000 USD (`price <= 2000`) by returning the five cheapest bikes: + +{{< clients-example query_range range4 >}} +> FT.SEARCH idx:bicycle "@price:[-inf 2000]" SORTBY price LIMIT 0 5 + 1) (integer) 7 + 2) "bicycle:0" + 3) 1) "price" + 2) "270" + 3) "$" + 4) "{\"pickup_zone\":\"POLYGON((-74.0610 40.7578, -73.9510 40.7578, -73.9510 40.6678, -74.0610 40.6678, -74.0610 40.7578))\",\"store_location\":\"-74.0060,40.7128\",\"brand\":\"Velorim\",\"model\":\"Jigger\",\"price\":270,\"description\":\"Small and powerful, the Jigger is the best ride for the smallest of tikes! This is the tiniest kids\xe2\x80\x99 pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\",\"condition\":\"new\"}" + 4) "bicycle:7" + 5) 1) "price" + 2) "430" + 3) "$" + 4) "{\"pickup_zone\":\"POLYGON((13.3260 52.5700, 13.6550 52.5700, 13.6550 52.2700, 13.3260 52.2700, 13.3260 52.5700))\",\"store_location\":\"13.4050,52.5200\",\"brand\":\"Peaknetic\",\"model\":\"Secto\",\"price\":430,\"description\":\"If you struggle with stiff fingers or a kinked neck or back after a few minutes on the road, this lightweight, aluminum bike alleviates those issues and allows you to enjoy the ride. From the ergonomic grips to the lumbar-supporting seat position, the Roll Low-Entry offers incredible comfort. The rear-inclined seat tube facilitates stability by allowing you to put a foot on the ground to balance at a stop, and the low step-over frame makes it accessible for all ability and mobility levels. The saddle is very soft, with a wide back to support your hip joints and a cutout in the center to redistribute that pressure. Rim brakes deliver satisfactory braking control, and the wide tires provide a smooth, stable ride on paved roads and gravel. Rack and fender mounts facilitate setting up the Roll Low-Entry as your preferred commuter, and the BMX-like handlebar offers space for mounting a flashlight, bell, or phone holder.\",\"condition\":\"new\"}" + 6) "bicycle:5" + 7) 1) "price" + 2) "810" + 3) "$" + 4) "{\"pickup_zone\":\"POLYGON((-0.1778 51.5524, 0.0822 51.5524, 0.0822 51.4024, -0.1778 51.4024, -0.1778 51.5524))\",\"store_location\":\"-0.1278,51.5074\",\"brand\":\"Breakout\",\"model\":\"XBN 2.1 Alloy\",\"price\":810,\"description\":\"The XBN 2.1 Alloy is our entry-level road bike \xe2\x80\x93 but that\xe2\x80\x99s not to say that it\xe2\x80\x99s a basic machine. With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimano\xe2\x80\x99s, this is a bike which doesn\xe2\x80\x99t break the bank and delivers craved performance.\",\"condition\":\"new\"}" + 8) "bicycle:2" + 9) 1) "price" + 2) "815" + 3) "$" + 4) "{\"pickup_zone\":\"POLYGON((-87.6848 41.9331, -87.5748 41.9331, -87.5748 41.8231, -87.6848 41.8231, -87.6848 41.9331))\",\"store_location\":\"-87.6298,41.8781\",\"brand\":\"Nord\",\"model\":\"Chook air 5\",\"price\":815,\"description\":\"The Chook Air 5 gives kids aged six years and older a durable and uberlight mountain bike for their first experience on tracks and easy cruising through forests and fields. 
The lower top tube makes it easy to mount and dismount in any situation, giving your kids greater safety on the trails.\",\"condition\":\"used\"}" +10) "bicycle:9" +11) 1) "price" + 2) "815" + 3) "$" + 4) "{\"pickup_zone\":\"POLYGON((12.4464 42.1028, 12.5464 42.1028, 12.5464 41.7028, 12.4464 41.7028, 12.4464 42.1028))\",\"store_location\":\"12.4964,41.9028\",\"model\":\"ThrillCycle\",\"brand\":\"BikeShind\",\"price\":815,\"description\":\"An artsy, retro-inspired bicycle that\xe2\x80\x99s as functional as it is pretty: The ThrillCycle steel frame offers a smooth ride. A 9-speed drivetrain has enough gears for coasting in the city, but we wouldn\xe2\x80\x99t suggest taking it to the mountains. Fenders protect you from mud, and a rear basket lets you transport groceries, flowers and books. The ThrillCycle comes with a limited lifetime warranty, so this little guy will last you long past graduation.\",\"condition\":\"refurbished\"}" +{{< /clients-example >}} + +## Non-numeric range queries + +You can learn more about non-numeric range queries, such as [geospatial]({{< relref "/develop/interact/search-and-query/query/geo-spatial" >}}) or [vector search]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) queries, in their dedicated articles.--- +Title: Move from Development to Production with Redis Query Engine +alwaysopen: false +categories: +- docs +- develop +- stack +- oss +- kubernetes +- clients +linkTitle: RQE DEV to PROD +weight: 2 +--- + +Transitioning a Redis Open Source with Redis Query Engine (RQE) environment from development to production requires thoughtful consideration of configuration, performance tuning, and resource allocation. This guide outlines key practices to ensure your Redis deployment operates optimally under production workloads. + +## Configuration parameter considerations + +RQE offers several configurable parameters that influence query results and performance. While a full list of these parameters and their functions can be found [here]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects" >}}), this section highlights the most commonly adjusted parameters for production environments. + +### 1. `TIMEOUT` + +- Purpose: limits the duration a query is allowed to execute. +- Default: 500 milliseconds. +- Behavior: + - Ensures that queries do not monopolize the main Redis thread. + - If a query exceeds the `TIMEOUT` value, its outcome is determined by the `ON_TIMEOUT` setting: + - `FAIL`: the query will return an error. + - `PARTIAL`: this setting will return the top results accumulated by the query until it timed out. +- Recommendations: + - Caution: be mindful when increasing `TIMEOUT`, as long-running queries can degrade overall system performance. + + +### 2. `MINPREFIX` + +- Purpose: sets the minimum number of characters required for wildcard searches. +- Default: 2 characters. +- Behavior: + - Queries like `he*` are valid, while `h*` would not meet the threshold. +- Recommendations: + - Lowering this value to 1 can significantly increase result sets, which may lead to degraded performance. + - Keep the default unless there is a strong use case for single-character wildcards. + +### 3. `MAXPREFIXEXPANSIONS` + +- Purpose: Defines the maximum number of expansions for a wildcard query term. +- Default: 200 expansions. +- Behavior: + - Expansions: when a wildcard query term is processed, Redis generates a list of all possible matches from the index that satisfy the wildcard. 
For example, the query he* might expand to terms like hello, hero, and heat. Each of these matches is an "expansion." + - This parameter limits how many of these expansions Redis will generate and process. If the number of possible matches exceeds the limit, the query may return incomplete results or fail, depending on the query context. +- Recommendations: + - Avoid increasing this parameter excessively, as it can lead to performance bottlenecks during query execution. + - If wildcard searches are common, consider optimizing your index to reduce the reliance on large wildcard expansions. + +### 4. `DEFAULT_DIALECT` + +- Purpose: specifies the default query dialect used by [`FT.SEARCH`]({{< relref "commands/ft.search" >}}) and [`FT.AGGREGATE`]({{< relref "commands/ft.aggregate" >}}) commands. +- Default: [Dialect 1]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects" >}}). +- Recommendations: + - Update the default to [**Dialect 4**]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects#dialect-4" >}}) for better performance and access to advanced features. + - Individual commands can override this parameter if necessary, but setting a higher default ensures consistent performance across queries. + +## Testing + +### 1. Correctness +- Run a few test queries and check the results are what you expect. +- Use the following tools to validate and debug: + - Redis CLI: use the [`MONITOR`]({{< relref "commands/monitor" >}}) command or [profiling features]({{< relref "/develop/tools/insight#profiler" >}}) in Redis Insight to analyze commands. + - [`FT.PROFILE`]({{< relref "commands/ft.profile" >}}): Provides detailed insights into individual query execution paths, helping identify bottlenecks and inefficiencies. + +### 2. Performance +- Test query performance in a controlled test environment that mirrors production as closely as possible. +- Use tools like `memtier_benchmark` or custom test applications to simulate load. +- Network Considerations: + - Minimize latency during testing by locating test clients in the same network as the Redis instance. + - For Redis Cloud, ensure test machines are in a **VPC-peered environment** with the target Redis database. + +## Sizing requirements + +Redis Search has resource requirements distinct from general caching use cases. Proper sizing ensures that the system can handle production workloads efficiently. + +### Key considerations: +1. CPU: + - Adequate CPU resources are critical. + - Ensure CPUs are not over-subscribed with search threads and shard processes. +2. RAM: + - Plan for sufficient memory to store the dataset and indexes, plus overhead for operations. +3. Network: + - High throughput and low latency are essential, particularly for applications with demanding query patterns. + +### Tools: +- Use the [Redis Search Sizing Calculator](https://redis.io/redisearch-sizing-calculator/) to estimate resource requirements based on your dataset and workload. + +## Demand spikes + +Production environments must be sized for peak load scenarios to ensure performance remains acceptable under maximum stress. + +### Recommendations: +1. Plan for Spikes: + - If query workloads are expected to vary significantly, ensure the infrastructure can handle peak loads. + - Monitor real-world usage patterns and adjust capacity as needed. +2. Autoscaling: + - Consider using autoscaling strategies in cloud environments to dynamically adjust resources based on load. 
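+
+## Reviewing configuration before go-live
+
+Before going live, it is worth confirming that the parameters discussed at the beginning of this guide are set to the values you intend. The snippet below is a minimal sketch of how you might inspect and adjust them at runtime with `FT.CONFIG GET` and `FT.CONFIG SET`; the only value changed here is the default dialect, as recommended above. Depending on your Redis version, these settings may also be exposed as `search-*` parameters of the standard `CONFIG` command, and runtime changes may need to be repeated in your server startup configuration to survive restarts.
+
+```bash
+FT.CONFIG GET TIMEOUT
+FT.CONFIG GET MINPREFIX
+FT.CONFIG GET MAXPREFIXEXPANSIONS
+FT.CONFIG SET DEFAULT_DIALECT 4
+```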
+ +By following these best practices, you can ensure a smooth and efficient transition from development to production with Redis Open Source and RQE. Proper configuration, rigorous testing, and careful resource planning are critical to delivering a reliable and high-performance Redis deployment. +--- +Title: Index management best practices for Redis Query Engine +alwaysopen: false +categories: +- docs +- develop +- stack +- oss +- kubernetes +- clients +linkTitle: RQE index management +weight: 3 +--- +## Introduction to managing Redis Query Engine indexes + +The Redis Query Engine (RQE) is a powerful tool for executing complex search and query operations on structured, semi-structured, and unstructured data. Indexes are the backbone of this functionality, enabling fast and efficient data retrieval. +Proper management of these indexes is essential for optimal performance, scalability, and resource utilization. + +This guide outlines best practices for managing RQE indexes throughout their lifecycle. It provides recommendations on: + +- Planning and creating indexes to suit your query patterns. +- Using index aliasing to manage schema updates and minimize downtime. +- Monitoring and verifying index population to ensure query readiness. +- Optimizing performance through query profiling and memory management. +- Maintaining and scaling indexes in both standalone and clustered Redis environments. +- Versioning, testing, and automating index management. + +## Why index management matters + +Indexes directly impact query speed and resource consumption. +Poorly managed indexes can lead to increased memory usage, slower query times, and challenges in maintaining data consistency. +By following the strategies outlined in this guide, you can: + +- Reduce operational overhead. +- Improve application performance. +- Ensure smooth transitions during schema changes. +- Scale efficiently with your growing datasets. + +## Plan your indexes strategically + +Planning your indexes strategically requires understanding your application’s query patterns and tailoring indexes to match. +Begin by identifying the types of searches your application performs—such as full-text search, range queries, or geospatial lookups—and the fields involved. +Categorize fields based on their purpose: searchable fields (e.g., [`TEXT`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#text-fields" >}}) for full-text searches), filterable fields (e.g., [`TAG`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#tag-fields" >}}) for exact match searches), and sortable fields (e.g., [`NUMERIC`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#numeric-fields" >}}) for range queries or sorting). +Match field types to their intended use and avoid indexing fields that are rarely queried to conserve resources. Here's the list of index types: + +- [`TEXT`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#text-fields" >}}): use `TEXT` for free-text searches and set weights if some fields are more important. +- [`TAG`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#tag-fields" >}}): use `TAG` for categorical data (e.g., product categories) that benefit from exact matching and filtering. 
+- [`NUMERIC`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#numeric-fields" >}}): use `NUMERIC` for numeric ranges (e.g., prices, timestamps). +- [`GEO`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#geo-fields" >}}): use `GEO` for geospatial coordinates (e.g., latitude/longitude). +- [`GEOSHAPE`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#geoshape-fields" >}}): use `GEOSHAPE` to represent locations as points, but also to define shapes and query the interactions between points and shapes (e.g., to find all points that are contained within an enclosing shape). +- [`VECTOR`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#vector-fields" >}}): use `VECTOR` for high-dimensional similarity searches. + +See [these pages]({{< relref "/develop/interact/search-and-query/query" >}}) for discussions and examples on how best to use these index types. + +Next, simulate queries on a sample dataset to identify potential bottlenecks. +Use tools like [`FT.PROFILE`]({{< relref "commands/ft.profile" >}}) to analyze query execution and refine your schema if needed. +For example, assign weights to `TEXT` fields for prioritizing results or use the `PREFIX` option of [`FT.CREATE`]({{< relref "commands/ft.create" >}}) to limit indexing to specific key patterns. Note that you can use multiple `PREFIX` clauses when you create an index (see [below](#index-creation)) +After creating the index, validate its performance with real queries and monitor usage with the available tools: + +- [`FT.EXPLAIN`]({{< relref "commands/ft.explain" >}}) and [`FT.EXPLAINCLI`]({{< relref "commands/ft.explaincli" >}}) allow you to see how Redis Query Engine parses a given search query. `FT.EXPLAIN` returns a structured breakdown of the query execution plan, while `FT.EXPLAINCLI` presents a more readable, tree-like format for easier interpretation. These commands are useful for diagnosing query structure and ensuring it aligns with the intended logic. +- [`FT.INFO`]({{< relref "commands/ft.info" >}}) provides detailed statistics about an index, including the number of indexed documents, memory usage, and configuration settings. It helps in monitoring index growth, assessing memory consumption, and verifying index structure to detect potential inefficiencies. +- [`FT.PROFILE`]({{< relref "commands/ft.profile" >}}) runs a query while capturing execution details, which helps to reveal query performance bottlenecks. It provides insights into processing time, key accesses, and filter application, making it a crucial tool for fine-tuning complex queries and optimizing search efficiency. + +Avoid over-indexing. Indexing every field increases memory usage and can slow down updates. +Only index the fields that are essential for your planned queries. + +## Index creation {#index-creation} + - Use the [`FT.CREATE`]({{< relref "commands/ft.create" >}}) command to define an index schema. + - Assign weights to `TEXT` fields to prioritize certain fields in full-text search results. + - Use the `PREFIX` option to restrict indexing to keys with specific patterns. + Using multiple PREFIX clauses when creating an index allows you to index multiple key patterns under a single index. 
This is useful in several scenarios:
+    - If your Redis database stores different types of entities under distinct key prefixes (e.g., `user:123`, `order:456`), a single index can cover both by specifying multiple prefixes. For example:
+
+      ```bash
+      FT.CREATE my_index ON HASH PREFIX 2 "user:" "order:" SCHEMA name TEXT age NUMERIC status TAG
+      ```
+
+      This approach enables searching across multiple entity types without needing separate indexes.
+
+    - Instead of querying multiple indexes separately, you can search across related data structures using a single query. This is particularly helpful when data structures share common fields, such as searching both customer and vendor records under a unified contacts index.
+
+    - Maintaining multiple indexes for similar data types can be inefficient in terms of memory and query performance. By consolidating data under one index with multiple prefixes, you reduce overhead while still allowing for distinct key organization.
+
+    - If your data model evolves and new key patterns are introduced, using multiple `PREFIX` clauses from the start ensures future compatibility without requiring a full reindexing.
+ - Data loading strategy: load data into Redis before creating an index when working with large datasets. Use the `ON HASH` or `ON JSON` options to match the data structure.
+
+## Index aliasing
+
+Index aliases act as abstracted names for the underlying indexes, enabling applications to reference the alias instead of the actual index name. This approach simplifies schema updates and index management.
+
+There are several use cases for index aliasing, including:
+
+- Schema updates: when updating an index schema, create a new index and associate the same alias with it. This allows a seamless transition without requiring application-level changes.
+- Version control: use aliases to manage different versions of an index. For example, assign the alias `products` to `products_v1` initially and later to `products_v2` when the schema evolves.
+- Testing and rollback: assign an alias to a test index during staged deployments. If issues arise, quickly switch the alias back to the stable index.
+
+Best practices for aliasing:
+
+- Always create an alias for your indexes during initial setup, even if you don’t anticipate immediate schema changes.
+- Use clear and descriptive alias names to avoid confusion (e.g., `users_current` or `orders_live`).
+- Make sure that an alias points to only one index at a time to maintain predictable query results.
+- Use aliases to provide tenant-specific access. For example, assign tenant-specific aliases like `tenant1_products` and `tenant2_products` to different indexes for isolated query performance.
+
+Tools for managing aliases:
+
+- Assign an alias: [`FT.ALIASADD`]({{< relref "commands/ft.aliasadd" >}}) `my_alias my_index`
+- Update an alias: [`FT.ALIASUPDATE`]({{< relref "commands/ft.aliasupdate" >}}) `my_alias new_index`
+- Remove an alias: [`FT.ALIASDEL`]({{< relref "commands/ft.aliasdel" >}}) `my_alias`
+
+Monitoring and troubleshooting aliases:
+
+- Use the `FT.INFO` command to check which aliases are associated with an index.
+- Make sure your aliases always point to valid indexes and are correctly updated during schema changes.
+
+## Monitor index population
+
+- Use the `FT.INFO` command to monitor the `num_docs` and `indexing` fields, to check that all expected documents are indexed.
+  ```bash
+  FT.INFO my_new_index
+  ```
+- Validate data with sample queries to ensure proper indexing:
+  ```bash
+  FT.SEARCH my_new_index "*"
+  ```
+- Use `FT.PROFILE` to analyze query plans and validate performance:
+
+  ```bash
+  FT.PROFILE my_new_index SEARCH QUERY "your_query"
+  ```
+- Implement scripts to periodically verify document counts and query results. For example, in Python:
+
+  ```python
+  import redis
+
+  def check_index_readiness(index_name, expected_docs):
+      r = redis.StrictRedis(host='localhost', port=6379, decode_responses=True)
+      # FT.INFO returns a flat list of name/value pairs; read the value that
+      # follows the 'num_docs' entry.
+      info = r.execute_command('FT.INFO', index_name)
+      num_docs = int(info[info.index('num_docs') + 1])
+      return num_docs >= expected_docs
+
+  if check_index_readiness('my_new_index', 100000):
+      print("Index is fully populated!")
+  else:
+      print("Index is still populating...")
+  ```
+
+## Monitoring index performance
+
+- Use the `FT.PROFILE` command to analyze query performance and identify bottlenecks.
+- Regularly monitor memory usage with the [`INFO`]({{< relref "commands/info" >}}) `memory` and `FT.INFO` commands to detect growth patterns and optimize resource allocation.
+
+## Index maintenance
+
+- If schema changes are required, create a new index with the updated schema and reassign the alias once the index is ready.
+- Use [Redis key expiration]({{< relref "/develop/use/keyspace#key-expiration" >}}) to automatically remove outdated records and keep indexes lean.
+
+### FT.ALTER vs. aliasing
+
+Use [`FT.ALTER`]({{< relref "commands/ft.alter" >}}) when you need to add new fields to an existing index without rebuilding it, minimizing downtime and resource usage. However, `FT.ALTER` cannot remove or modify existing fields, limiting its flexibility.
+
+Use index aliasing when making schema changes that require reindexing, such as modifying field types or removing fields. In this case, create a new index with the updated schema, populate it, and then use `FT.ALIASUPDATE` to seamlessly switch queries to the new index without disrupting application functionality.
+
+## Scaling and high availability
+
+- In a clustered Redis setup, make sure indexes are designed with key distribution in mind to prevent query inefficiencies.
+- Test how indexes behave under replica promotion to ensure consistent query behavior across nodes.
+
+## Versioning and testing
+
+- When changing schemas, create a new version of the index alongside the old one and migrate data progressively.
+- Test index changes in a staging environment before deploying them to production.
+
+## Cleaning up
+
+- Use the [`FT.DROPINDEX`]({{< relref "commands/ft.dropindex" >}}) command to remove unused indexes and free up memory. Be cautious with the `DD` (Delete Documents) flag to avoid unintended data deletion.
+- Make sure no keys remain that were previously associated with dropped indexes if the data is no longer relevant.
+
+## Documentation and automation
+
+- Document your index configurations to facilitate future maintenance.
+- Use scripts or orchestration tools to automate index creation, monitoring, and cleanup.
+---
+Title: Best practices for Redis Query Engine performance
+alwaysopen: false
+categories:
+- docs
+- develop
+- stack
+- oss
+- kubernetes
+- clients
+linkTitle: RQE performance
+weight: 1
+---
+
+{{< note >}}
+If you're using Redis Software or Redis Cloud, see the [best practices for scalable Redis Query Engine]({{< relref "/operate/oss_and_stack/stack-with-enterprise/search/scalable-query-best-practices" >}}) page.
+{{< /note >}} + +## Checklist +Below are some basic steps to ensure good performance of the Redis Query Engine (RQE). + +* Create a Redis data model with your query patterns in mind. +* Ensure the Redis architecture has been sized for the expected load using the [sizing calculator](https://redis.io/redisearch-sizing-calculator/). +* Provision Redis nodes with sufficient resources (RAM, CPU, network) to support the expected maximum load. +* Review [`FT.INFO`]({{< relref "commands/ft.info" >}}) and [`FT.PROFILE`]({{< relref "commands/ft.profile" >}}) outputs for anomalies and/or errors. +* Conduct load testing in a test environment with real-world queries and a load generated by either [memtier_benchmark](https://github.com/redislabs/memtier_benchmark) or a custom load application. + +## Indexing considerations + +### General +- Favor [`TAG`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#tag-fields" >}}) over [`NUMERIC`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#numeric-fields" >}}) for use cases that only require matching. +- Favor [`TAG`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#tag-fields" >}}) over [`TEXT`]({{< relref "/develop/interact/search-and-query/basic-constructs/field-and-type-options#text-fields" >}}) for use cases that don’t require full-text capabilities (pure match). + +### Non-threaded search +- Put only those fields used in your queries in the index. +- Only make fields [`SORTABLE`]({{< relref "/develop/interact/search-and-query/advanced-concepts/sorting" >}}) if they are used in [`SORTBY`]({{< relref "/develop/interact/search-and-query/advanced-concepts/sorting#specifying-sortby" >}}) +queries. +- Use [`DIALECT 2`]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects#dialect-2" >}}). + +### Threaded (query performance factor or QPF) search +- Put both query fields and any projected fields (`RETURN` or `LOAD`) in the index. +- Set all fields to `SORTABLE`. +- Set TAG fields to [UNF]({{< relref "/develop/interact/search-and-query/advanced-concepts/sorting#normalization-unf-option" >}}). +- Optional: Set `TEXT` fields to `NOSTEM` if the use case will support it. +- Use [`DIALECT 2`]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects#dialect-2" >}}). + +## Query optimization + +- Avoid returning large result sets. Use `CURSOR` or `LIMIT`. +- Avoid wildcard searches. +- Avoid projecting all fields (e.g., `LOAD *`). Project only those fields that are part of the index schema. +- If queries are long-running, enable threading (query performance factor) to reduce contention for the main Redis thread. + +## Validate performance (`FT.PROFILE`) + +You can analyze [`FT.PROFILE`]({{< relref "commands/ft.profile" >}}) output to gain insights about query execution. +The following informational items are available for analysis: + +- Total execution time +- Execution time per shard +- Coordination time (for multi-sharded environments) +- Breakdown of the query into fundamental components, such as `UNION` and `INTERSECT` +- Warnings, such as `TIMEOUT` + +## Anti-patterns + +When designing and querying indexes in RQE, certain practices can hinder performance, scalability, and maintainability. Below are some common anti-patterns to avoid: + +- **Large documents**: storing excessively large documents in Redis makes data retrieval slower and increases memory usage. 
Break data into smaller, focused records whenever possible. +- **Deeply-nested fields**: retrieving or indexing deeply-nested JSON fields is computationally expensive. Use a flatter schema for better performance. +- **Large result sets**: fetching unnecessarily large result sets puts a strain on memory and network resources. Limit results to only what is needed. +- **Wildcarding**: using wildcard patterns indiscriminately in queries can lead to large and inefficient scans, especially if the index size is significant. +- **Large projections**: including excessive fields in query results increases memory overhead and slows down query execution. Limit projections to essential fields. + +The following examples depict an anti-pattern index schema and query, followed by corrected versions designed for scalability with RQE. + +### Anti-pattern index schema + +The following schema introduces challenges for scalability and performance: + +```sh +FT.CREATE jsonidx:profiles ON JSON PREFIX 1 profiles: + SCHEMA $.tags.* as t NUMERIC SORTABLE + $.firstName as name TEXT + $.location as loc GEO +``` + +Issues: + +- Minimal schema definition: the schema is sparse and lacks fields like `lastName`, `id`, and `version` that might be frequently queried. This results in additional operations to fetch these fields separately, reducing efficiency. +- Missing `SORTABLE` flag for text fields: sorting operations on unsortable fields require full-text processing, which is slow. +- Wildcard indexing: `$.tags.*` creates a broad index that can lead to excessive memory usage and reduced query performance. + +### Anti-pattern query + +The following query is inefficient and not optimized for vertical scaling: + +```sh +FT.AGGREGATE jsonidx:profiles '@t:[1299 1299]' LOAD * LIMIT 0 10 +``` +Issues: + +- Wildcard projection (`LOAD *`): retrieving all fields in the result set is inefficient and increases memory usage, especially if the documents are large. +- Unnecessary fields: fields that aren't required for the current operation are still fetched, slowing down execution. +- Lack of advanced query syntax: without specifying a query dialect or leveraging features like tagging, the query may perform unnecessary computations. + +### Improved index schema + +Here’s an optimized schema that adheres to best practices for vertical scaling: + +```sh +FT.CREATE jsonidx:profiles ON JSON PREFIX 1 profiles: + SCHEMA $.tags.* as t NUMERIC SORTABLE + $.firstName as name TEXT NOSTEM SORTABLE + $.lastName as lastname TEXT NOSTEM SORTABLE + $.location as loc GEO SORTABLE + $.id as id TAG SORTABLE UNF + $.ver as ver TAG SORTABLE UNF +``` + +Improvements: + +- `NOSTEM` for text fields: prevents stemming on fields like `firstName` and `lastName` to allow for exact matches (e.g., "Smith" stays "Smith"). +- Expanded schema: adds commonly queried fields like `lastName`, `id`, and `version`, making queries more efficient by reducing the need for post-query data retrieval. +- `TAG` fields: `id` and `ver` are defined as `TAG` fields to support fast filtering with exact matches. +- `SORTABLE` for all relevant fields: ensures that sorting operations are efficient without requiring full-text scanning. + +You might be wondering why `$.tags.* as t NUMERIC SORTABLE` is acceptable in the improved schema and it wasn't previously. +The inclusion of `$.tags.*` is acceptable when: + +- It has a clear purpose: it is actively used in queries, such as filtering on numeric ranges or matching specific values. 
+- Other fields in the schema complement it: these fields reduce over-reliance on `$.tags.*` for all query operations, distributing the load more evenly. +- Projections and limits are managed carefully: queries that use `$.tags.*` should avoid loading unnecessary fields or returning excessively large result sets. + +### Improved query + +The following query is better suited for vertical scaling: + +```sh +FT.AGGREGATE jsonidx:profiles '@t:[1299 1299]' + LOAD 6 id t name lastname loc ver + LIMIT 0 10 + DIALECT 2 +``` + +Improvements: + +- Targeted projection: the `LOAD` clause specifies only essential fields (`id, t, name, lastname, loc, ver`), reducing memory and network overhead. +- Limited results: the `LIMIT` clause ensures the query retrieves only the first 10 results, avoiding large result sets. +- [`DIALECT 2`]({{< relref "/develop/interact/search-and-query/advanced-concepts/dialects#dialect-2" >}}): enables the latest RQE syntax and features, ensuring compatibility with modern capabilities. +--- +categories: +- docs +- develop +- stack +- oss +description: Redis Query Engine best practices +linkTitle: Best practices +title: Best practices +weight: 8 +--- +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Searching and querying Redis data using the Redis Query Engine +highlighted: true +linkTitle: Redis Query Engine +stack: true +title: Redis Query Engine +weight: 10 +--- + +The Redis Query Engine offers an enhanced Redis experience via the following search and query features: + +- A rich query language +- Incremental indexing on JSON and hash documents +- Vector search +- Full-text search +- Geospatial queries +- Aggregations + +You can find a complete list of features in the [reference documentation]({{< relref "/develop/interact/search-and-query/advanced-concepts/" >}}). + +The Redis Query Engine features allow you to use Redis as a: + +- Document database +- Vector database +- Secondary index +- Search engine + +Here are the next steps to get you started: + +1. Follow our [quick start guide]({{< relref "/develop/get-started/document-database" >}}) to get some initial hands-on experience. +1. Learn how to [create an index]({{< relref "/develop/interact/search-and-query/indexing/" >}}). +1. Learn how to [query your data]({{< relref "/develop/interact/search-and-query/query/" >}}). +1. [Install Redis Insight]({{< relref "/operate/redisinsight" >}}), connect it to your Redis database, and then use [Redis Copilot]({{< relref "/develop/tools/insight" >}}#redis-copilot) to help you learn how to execute complex queries against your own data using simple, plain language prompts. + + +## Enable the Redis Query Engine + +The Redis Query Engine is available in Redis Open Source, Redis Software, and Redis Cloud. +See +[Install Redis Open Source]({{< relref "/operate/oss_and_stack/install/install-stack" >}}) or +[Install Redis Enterprise]({{< relref "/operate/rs/installing-upgrading/install" >}}) +for full installation instructions. + +## License and source code + +The Redis Query Engine features of Redis are available under the Source Available License 2.0 (RSALv2), the Server Side Public License v1 (SSPLv1), or the GNU Affero General Public License version 3 (AGPLv3). Please read the [license file](https://raw.githubusercontent.com/RediSearch/RediSearch/master/LICENSE.txt) for further details. 
The source code and the [detailed release notes](https://github.com/RediSearch/RediSearch/releases) are available on [GitHub](https://github.com/RediSearch/RediSearch). + +Do you have questions? Feel free to ask at the [RediSearch forum](https://forum.redis.com/c/modules/redisearch/). + +
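+## Quick example
+
+To get a quick feel for the features listed above, here is a minimal, illustrative `redis-cli` session. The index name, key prefix, and field names are hypothetical and only sketch the general workflow of creating an index over hashes and querying it (reply shown abbreviated):
+
+```
+> FT.CREATE idx:products ON HASH PREFIX 1 product: SCHEMA name TEXT price NUMERIC SORTABLE tags TAG
+OK
+> HSET product:1 name "wireless mouse" price 25 tags "electronics"
+(integer) 3
+> FT.SEARCH idx:products "@name:mouse @price:[10 50]"
+1) (integer) 1
+2) "product:1"
+3) 1) "name"
+   2) "wireless mouse"
+   3) "price"
+   4) "25"
+   5) "tags"
+   6) "electronics"
+```
+
+See the quick start guide linked above for a complete walkthrough.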
+
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Payload support (deprecated)
+linkTitle: Payload
+title: Document payloads
+weight: 12
+---
+
+{{% alert title="Warning" color="warning" %}}
+The payload feature is deprecated in 2.0
+{{% /alert %}}
+
+Usually, Redis Open Source stores documents as hashes or JSON. But if you want to access some data for aggregation or scoring functions, Redis can store that data as an inline payload. This allows you to evaluate the properties of a document for scoring purposes at a very low cost.
+
+Since the scoring functions already have access to the DocumentMetaData, which contains document flags and score, Redis can add custom payloads that can be evaluated at runtime.
+
+Payloads are NOT indexed and are not treated by the engine in any way. They exist only so that they can be evaluated at query time and, optionally, retrieved. They can be JSON objects, strings, or, preferably, if you are interested in fast evaluation, some sort of binary-encoded data that is fast to decode.
+
+## Evaluating payloads in query time
+
+When implementing a scoring function, the signature of the exposed function is:
+
+```c
+double (*ScoringFunction)(DocumentMetadata *dmd, IndexResult *h);
+```
+
+{{% alert title="Note" color="info" %}}
+Currently, scoring functions cannot be dynamically added; forking the engine and replacing them is required.
+{{% /alert %}}
+
+DocumentMetaData includes a few fields, one of them being the payload. It wraps a simple byte array with arbitrary length:
+
+```c
+typedef struct {
+  char *data;
+  uint32_t len;
+} DocumentPayload;
+```
+
+If no payload was set for the document, it is simply NULL. If it was, you can go ahead and decode it. It is recommended to encode some metadata about the payload inside it, such as a leading version number.
+
+## Retrieving payloads from documents
+
+When searching, it is possible to request the document payloads from the engine.
+
+This is done by adding the keyword `WITHPAYLOADS` to [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}).
+
+If `WITHPAYLOADS` is set, the payloads follow the document id in the returned result.
+If `WITHSCORES` is set as well, the payloads follow the scores.
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Notes on RediSearch debugging, testing, and documentation
+linkTitle: Developer notes
+title: Developer notes
+weight: 3
+---
+
+Developing RediSearch features involves setting up a development environment (which can be either Linux-based or macOS-based), building the module, running tests and benchmarks, and debugging both the module and its tests.
+
+## Cloning the git repository
+
+Run the following command to clone the RediSearch module and its submodules:
+
+```sh
+git clone --recursive https://github.com/RediSearch/RediSearch.git
+```
+
+## Working in an isolated environment
+
+There are several reasons to develop in an isolated environment, such as keeping your workstation clean and developing for a different Linux distribution.
+The most general option for an isolated environment is a virtual machine. It's very easy to set one up using [Vagrant](https://www.vagrantup.com).
+Docker is even more agile, as it offers an almost instant solution:
+
+```
+search=$(docker run -d -it -v $PWD:/build debian:bullseye bash)
+docker exec -it $search bash
+```
+
+Then, from within the container, `cd /build` and go on as usual.
+
+In this mode, all installations remain in the scope of the Docker container.
+Upon exiting the container, you can either re-invoke it with the above `docker exec` or commit the state of the container to an image and re-invoke it at a later stage:
+
+```
+docker commit $search redisearch1
+docker stop $search
+search=$(docker run -d -it -v $PWD:/build redisearch1 bash)
+docker exec -it $search bash
+```
+
+You can replace `debian:bullseye` with your choice of OS, with the host OS being the best choice, as it allows you to run the RediSearch binary on your host after it is built.
+
+## Installing prerequisites
+
+To build and test RediSearch you need to install several packages, depending on the underlying OS. The following OSes are supported:
+- Ubuntu 18.04
+- Ubuntu 20.04
+- Ubuntu 22.04
+- Debian Linux 11
+- Rocky Linux 8
+- Rocky Linux 9
+- Amazon Linux 2
+- Mariner 2.0
+- macOS
+
+To install the prerequisites on your system using a setup script, first enter the `RediSearch` directory and then run:
+
+```
+cd ./install
+./install_script.sh sudo
+./install_boost.sh 1.83.0
+```
+
+Note that this will install various packages on your system using the native package manager (`sudo` is not required in a Docker environment).
+
+If you prefer to avoid that, you can:
+
+* Review the relevant setup scripts under the `./install` directory and install packages manually.
+* Use an isolated environment as explained above.
+
+
+## Installing Redis
+As a rule of thumb, you should run the latest Redis version.
+
+If your OS has a Redis 7.x package, you can install it using the OS package manager.
+
+Otherwise, you can build it from source and install it as described on the [Redis GitHub page](https://github.com/redis/redis).
+
+## Getting help
+
+```make help``` provides a quick summary of the development features. The following is a partial list that contains the most common and relevant ones:
+
+```
+make fetch         # download and prepare dependent modules
+
+make build         # compile and link
+  COORD=1          # build coordinator
+  DEBUG=1          # build for debugging
+  NO_TESTS=1       # disable unit tests
+  WHY=1            # explain CMake decisions (in /tmp/cmake-why)
+  FORCE=1          # Force CMake rerun (default)
+  CMAKE_ARGS=...   # extra arguments to CMake
+  VG=1             # build for Valgrind
+  SAN=type         # build with LLVM sanitizer (type=address|memory|leak|thread)
+  SLOW=1           # do not parallelize build (for diagnostics)
+  GCC=1            # build with GCC (default unless Sanitizer)
+  CLANG=1          # build with CLang
+  STATIC_LIBSTDCXX=0 # link libstdc++ dynamically (default: 1)
+make parsers       # build parsers code (required after changing files under query_parser dir)
+make clean         # remove build artifacts
+  ALL=1            # remove entire artifacts directory
+
+make run           # run redis with RediSearch
+  COORD=1          # run three local shards with coordinator (assuming the module was built with coordinator support)
+  GDB=1            # invoke using gdb
+
+make test          # run all tests
+  COORD=1          # test coordinator
+  TEST=name        # run specified test
+make pytest        # run python tests (tests/pytests)
+  COORD=1          # test coordinator
+  TEST=name        # e.g. TEST=test:testSearch
+  RLTEST_ARGS=...  # pass args to RLTest
+  REJSON=1|0|get   # also load JSON module (default: 1)
+  REJSON_PATH=path # use JSON module at `path`
+  EXT=1            # External (existing) environment
+  GDB=1            # RLTest interactive debugging
+  VG=1             # use Valgrind
+  VG_LEAKS=0       # do not search leaks with Valgrind
+  SAN=type         # use LLVM sanitizer (type=address|memory|leak|thread)
+make unit-tests    # run unit tests (C and C++)
+  TEST=name        # e.g.
TEST=FGCTest.testRemoveLastBlock +make c_tests # run C tests (from tests/ctests) +make cpp_tests # run C++ tests (from tests/cpptests) + +make callgrind # produce a call graph + REDIS_ARGS="args" + +make sanbox # create container with CLang Sanitizer +``` + +## Building from source + +Run the following from the project root dir: + +```make build``` will build RediSearch. + +`make build COORD=1` will build Redis Open Source RediSearch Coordinator. + +`make build STATIC=1` will build as a static library. + +Notes: + +* Binary files are placed under `bin`, according to platform and build variant. +* RediSearch uses [CMake](https://cmake.org) as its build system. ```make build``` will invoke both CMake and the subsequent make command that's required to complete the build. + +Use ```make clean``` to remove build artifacts. ```make clean ALL=1``` will remove the entire `bin` subdirectory. + +### Diagnosing the build process + +`make build` will build in parallel by default. + +For the purposes of build diagnosis, `make build SLOW=1 VERBOSE=1` can be used to examine compilation commands. + +## Running Redis with RediSearch + +The following will run ```redis``` and load the RediSearch module. + +``` +make run +``` +You can open ```redis-cli``` in another terminal to interact with it. + +## Running tests + +There are several sets of unit tests: +* C tests, located in ```tests/ctests```, run by ```make c-tests```. +* C++ tests (enabled by GTest), located in ```tests/cpptests```, run by ```make cpp-tests```. +* Python tests (enabled by RLTest), located in ```tests/pytests```, run by ```make pytest```. + +You can run all tests by invoking ```make test```. + +A single test can be run using the ```TEST``` parameter, e.g., ```make test TEST=regex```. + +## Debugging + +To build for debugging (enabling symbolic information and disabling optimization), run ```make DEBUG=1```. +You can then use ```make run DEBUG=1``` to invoke ```gdb```. +In addition to the usual way to set breakpoints in ```gdb```, it is possible to use the ```BB``` macro to set a breakpoint inside the RediSearch code. It will only have an effect when running under ```gdb```. + +Similarly, Python tests in a single-test mode, you can set a breakpoint by using the ```BB()``` function inside a test. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Deprecated features +linkTitle: Deprecated +title: Deprecated +weight: 10 +--- +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: How transactions work in Redis +linkTitle: Transactions +title: Transactions +weight: 30 +--- + +Redis Transactions allow the execution of a group of commands +in a single step, they are centered around the commands +[`MULTI`]({{< relref "/commands/multi" >}}), [`EXEC`]({{< relref "/commands/exec" >}}), [`DISCARD`]({{< relref "/commands/discard" >}}) and [`WATCH`]({{< relref "/commands/watch" >}}). +Redis Transactions make two important guarantees: + +* All the commands in a transaction are serialized and executed +sequentially. A request sent by another client will never be +served **in the middle** of the execution of a Redis Transaction. +This guarantees that the commands are executed as a single +isolated operation. 
+ +* The [`EXEC`]({{< relref "/commands/exec" >}}) command +triggers the execution of all the commands in the transaction, so +if a client loses the connection to the server in the context of a +transaction before calling the [`EXEC`]({{< relref "/commands/exec" >}}) command none of the operations +are performed, instead if the [`EXEC`]({{< relref "/commands/exec" >}}) command is called, all the +operations are performed. When using the +[append-only file]({{< relref "/operate/oss_and_stack/management/persistence#append-only-file" >}}) Redis makes sure +to use a single write(2) syscall to write the transaction on disk. +However if the Redis server crashes or is killed by the system administrator +in some hard way it is possible that only a partial number of operations +are registered. Redis will detect this condition at restart, and will exit with an error. +Using the `redis-check-aof` tool it is possible to fix the +append only file that will remove the partial transaction so that the +server can start again. + +Starting with version 2.2, Redis allows for an extra guarantee to the +above two, in the form of optimistic locking in a way very similar to a +check-and-set (CAS) operation. +This is documented [later](#cas) on this page. + +## Usage + +A Redis Transaction is entered using the [`MULTI`]({{< relref "/commands/multi" >}}) command. The command +always replies with `OK`. At this point the user can issue multiple +commands. Instead of executing these commands, Redis will queue +them. All the commands are executed once [`EXEC`]({{< relref "/commands/exec" >}}) is called. + +Calling [`DISCARD`]({{< relref "/commands/discard" >}}) instead will flush the transaction queue and will exit +the transaction. + +The following example increments keys `foo` and `bar` atomically. + +``` +> MULTI +OK +> INCR foo +QUEUED +> INCR bar +QUEUED +> EXEC +1) (integer) 1 +2) (integer) 1 +``` + +As is clear from the session above, [`EXEC`]({{< relref "/commands/exec" >}}) returns an +array of replies, where every element is the reply of a single command +in the transaction, in the same order the commands were issued. + +When a Redis connection is in the context of a [`MULTI`]({{< relref "/commands/multi" >}}) request, +all commands will reply with the string `QUEUED` (sent as a Status Reply +from the point of view of the Redis protocol). A queued command is +simply scheduled for execution when [`EXEC`]({{< relref "/commands/exec" >}}) is called. + +## Errors inside a transaction + +During a transaction it is possible to encounter two kind of command errors: + +* A command may fail to be queued, so there may be an error before [`EXEC`]({{< relref "/commands/exec" >}}) is called. +For instance the command may be syntactically wrong (wrong number of arguments, +wrong command name, ...), or there may be some critical condition like an out of +memory condition (if the server is configured to have a memory limit using the `maxmemory` directive). +* A command may fail *after* [`EXEC`]({{< relref "/commands/exec" >}}) is called, for instance since we performed +an operation against a key with the wrong value (like calling a list operation against a string value). + +Starting with Redis 2.6.5, the server will detect an error during the accumulation of commands. +It will then refuse to execute the transaction returning an error during [`EXEC`]({{< relref "/commands/exec" >}}), discarding the transaction. 
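+
+For example, a `redis-cli` session on a modern Redis server looks something like the following sketch: the malformed command is rejected when it is queued, and calling [`EXEC`]({{< relref "/commands/exec" >}}) afterward aborts the whole transaction with an `EXECABORT` error:
+
+```
+> MULTI
+OK
+> INCR a b c
+(error) ERR wrong number of arguments for 'incr' command
+> EXEC
+(error) EXECABORT Transaction discarded because of previous errors.
+```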
+ +> **Note for Redis < 2.6.5:** Prior to Redis 2.6.5 clients needed to detect errors occurring prior to [`EXEC`]({{< relref "/commands/exec" >}}) by checking +the return value of the queued command: if the command replies with QUEUED it was +queued correctly, otherwise Redis returns an error. +If there is an error while queueing a command, most clients +will abort and discard the transaction. Otherwise, if the client elected to proceed with the transaction +the [`EXEC`]({{< relref "/commands/exec" >}}) command would execute all commands queued successfully regardless of previous errors. + +Errors happening *after* [`EXEC`]({{< relref "/commands/exec" >}}) instead are not handled in a special way: +all the other commands will be executed even if some command fails during the transaction. + +This is more clear on the protocol level. In the following example one +command will fail when executed even if the syntax is right: + +``` +Trying 127.0.0.1... +Connected to localhost. +Escape character is '^]'. +MULTI ++OK +SET a abc ++QUEUED +LPOP a ++QUEUED +EXEC +*2 ++OK +-WRONGTYPE Operation against a key holding the wrong kind of value +``` + +[`EXEC`]({{< relref "/commands/exec" >}}) returned two-element [bulk string reply]({{< relref "/develop/reference/protocol-spec#bulk-string-reply" >}}) where one is an `OK` code and +the other an error reply. It's up to the client library to find a +sensible way to provide the error to the user. + +It's important to note that +**even when a command fails, all the other commands in the queue are processed** – Redis will _not_ stop the +processing of commands. + +Another example, again using the wire protocol with `telnet`, shows how +syntax errors are reported ASAP instead: + +``` +MULTI ++OK +INCR a b c +-ERR wrong number of arguments for 'incr' command +``` + +This time due to the syntax error the bad [`INCR`]({{< relref "/commands/incr" >}}) command is not queued +at all. + +## What about rollbacks? + +Redis does not support rollbacks of transactions since supporting rollbacks +would have a significant impact on the simplicity and performance of Redis. + +## Discarding the command queue + +[`DISCARD`]({{< relref "/commands/discard" >}}) can be used in order to abort a transaction. In this case, no +commands are executed and the state of the connection is restored to +normal. + +``` +> SET foo 1 +OK +> MULTI +OK +> INCR foo +QUEUED +> DISCARD +OK +> GET foo +"1" +``` + + +## Optimistic locking using check-and-set + +[`WATCH`]({{< relref "/commands/watch" >}}) is used to provide a check-and-set (CAS) behavior to Redis +transactions. + +[`WATCH`]({{< relref "/commands/watch" >}})ed keys are monitored in order to detect changes against them. If +at least one watched key is modified before the [`EXEC`]({{< relref "/commands/exec" >}}) command, the +whole transaction aborts, and [`EXEC`]({{< relref "/commands/exec" >}}) returns a [Null reply]({{< relref "/develop/reference/protocol-spec#nil-reply" >}}) to notify that +the transaction failed. + +For example, imagine we have the need to atomically increment the value +of a key by 1 (let's suppose Redis doesn't have [`INCR`]({{< relref "/commands/incr" >}})). + +The first try may be the following: + +``` +val = GET mykey +val = val + 1 +SET mykey $val +``` + +This will work reliably only if we have a single client performing the +operation in a given time. If multiple clients try to increment the key +at about the same time there will be a race condition. 
For instance, +client A and B will read the old value, for instance, 10. The value will +be incremented to 11 by both the clients, and finally [`SET`]({{< relref "/commands/set" >}}) as the value +of the key. So the final value will be 11 instead of 12. + +Thanks to [`WATCH`]({{< relref "/commands/watch" >}}) we are able to model the problem very well: + +``` +WATCH mykey +val = GET mykey +val = val + 1 +MULTI +SET mykey $val +EXEC +``` + +Using the above code, if there are race conditions and another client +modifies the result of `val` in the time between our call to [`WATCH`]({{< relref "/commands/watch" >}}) and +our call to [`EXEC`]({{< relref "/commands/exec" >}}), the transaction will fail. + +We just have to repeat the operation hoping this time we'll not get a +new race. This form of locking is called _optimistic locking_. +In many use cases, multiple clients will be accessing different keys, +so collisions are unlikely – usually there's no need to repeat the operation. + +## WATCH explained + +So what is [`WATCH`]({{< relref "/commands/watch" >}}) really about? It is a command that will +make the [`EXEC`]({{< relref "/commands/exec" >}}) conditional: we are asking Redis to perform +the transaction only if none of the [`WATCH`]({{< relref "/commands/watch" >}})ed keys were modified. This includes +modifications made by the client, like write commands, and by Redis itself, +like expiration or eviction. If keys were modified between when they were +[`WATCH`]({{< relref "/commands/watch" >}})ed and when the [`EXEC`]({{< relref "/commands/exec" >}}) was received, the entire transaction will be aborted +instead. + +**NOTE** +* In Redis versions before 6.0.9, an expired key would not cause a transaction +to be aborted. [More on this](https://github.com/redis/redis/pull/7920) +* Commands within a transaction won't trigger the [`WATCH`]({{< relref "/commands/watch" >}}) condition since they +are only queued until the [`EXEC`]({{< relref "/commands/exec" >}}) is sent. + +[`WATCH`]({{< relref "/commands/watch" >}}) can be called multiple times. Simply all the [`WATCH`]({{< relref "/commands/watch" >}}) calls will +have the effects to watch for changes starting from the call, up to +the moment [`EXEC`]({{< relref "/commands/exec" >}}) is called. You can also send any number of keys to a +single [`WATCH`]({{< relref "/commands/watch" >}}) call. + +When [`EXEC`]({{< relref "/commands/exec" >}}) is called, all keys are [`UNWATCH`]({{< relref "/commands/unwatch" >}})ed, regardless of whether +the transaction was aborted or not. Also when a client connection is +closed, everything gets [`UNWATCH`]({{< relref "/commands/unwatch" >}})ed. + +It is also possible to use the [`UNWATCH`]({{< relref "/commands/unwatch" >}}) command (without arguments) +in order to flush all the watched keys. Sometimes this is useful as we +optimistically lock a few keys, since possibly we need to perform a +transaction to alter those keys, but after reading the current content +of the keys we don't want to proceed. When this happens we just call +[`UNWATCH`]({{< relref "/commands/unwatch" >}}) so that the connection can already be used freely for new +transactions. 
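+
+The abort case is easy to see from `redis-cli`. In the session sketched below, assume a second client runs `SET mykey 42` after the [`WATCH`]({{< relref "/commands/watch" >}}) but before the [`EXEC`]({{< relref "/commands/exec" >}}); the transaction is then discarded and [`EXEC`]({{< relref "/commands/exec" >}}) returns a Null reply:
+
+```
+> WATCH mykey
+OK
+> GET mykey
+"10"
+> MULTI
+OK
+> SET mykey 11
+QUEUED
+> EXEC
+(nil)
+```
+
+At that point the application simply re-reads the key and retries the whole sequence.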
+ +### Using WATCH to implement ZPOP + +A good example to illustrate how [`WATCH`]({{< relref "/commands/watch" >}}) can be used to create new +atomic operations otherwise not supported by Redis is to implement ZPOP +([`ZPOPMIN`]({{< relref "/commands/zpopmin" >}}), [`ZPOPMAX`]({{< relref "/commands/zpopmax" >}}) and their blocking variants have only been added +in version 5.0), that is a command that pops the element with the lower +score from a sorted set in an atomic way. This is the simplest +implementation: + +``` +WATCH zset +element = ZRANGE zset 0 0 +MULTI +ZREM zset element +EXEC +``` + +If [`EXEC`]({{< relref "/commands/exec" >}}) fails (i.e. returns a [Null reply]({{< relref "/develop/reference/protocol-spec#nil-reply" >}})) we just repeat the operation. + +## Redis scripting and transactions + +Something else to consider for transaction like operations in redis are +[redis scripts]({{< relref "/commands/eval" >}}) which are transactional. Everything +you can do with a Redis Transaction, you can also do with a script, and +usually the script will be both simpler and faster. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'How to interact with data in Redis, including queries, triggered + functions, transactions, and pub/sub' +linkTitle: Interact with data +title: Interact with data in Redis +hideListLinks: true +weight: 40 +--- + +Redis is useful as a key-value store but also gives you other powerful ways +to interact with your data: + +- [Redis Query Engine](#search-and-query) +- [Programmability](#programmability) +- [Transactions](#transactions) +- [Publish/subscribe](#publishsubscribe) + +## Search and query with the Redis Query Engine + +The [Redis query engine]({{< relref "/develop/interact/search-and-query" >}}) +lets you retrieve data by content rather than by key. You +can [index]({{< relref "/develop/interact/search-and-query/indexing" >}}) +the fields of [hash]({{< relref "/develop/data-types/hashes" >}}) +and [JSON]({{< relref "/develop/data-types/json" >}}) objects +according to their type and then perform sophisticated +[queries]({{< relref "/develop/interact/search-and-query/query" >}}) +on those fields. For example, you can use queries to find: + - matches in + [text fields]({{< relref "/develop/interact/search-and-query/query/full-text" >}}) + - numeric values that fall within a specified + [range]({{< relref "/develop/interact/search-and-query/query/range" >}}) + - [Geospatial]({{< relref "/develop/interact/search-and-query/query/geo-spatial" >}}) + coordinates that fall within a specified area + - [Vector matches]({{< relref "/develop/interact/search-and-query/query/vector-search" >}}) + against [word embeddings](https://en.wikipedia.org/wiki/Word_embedding) calculated from + your text data + +## Programmability + +Redis has an [interface]({{< relref "/develop/interact/programmability" >}}) +for the [Lua programming language](https://www.lua.org/) +that lets you store and execute scripts on the server. Use scripts +to ensure that different clients always update data using the same logic. +You can also reduce network traffic by reimplementing a sequence of +related client-side commands as a single server script. + +## Transactions + +A client will often execute a sequence of commands to make +a set of related changes to data objects. However, another client could also +modify the same data object with similar commands in between. This situation can create +corrupt or inconsistent data. 
+ +Use a [transaction]({{< relref "/develop/interact/transactions" >}}) to +group several commands from a client together as a single unit. The +commands in the transaction are guaranteed to execute in sequence without +interruptions from other clients' commands. + +You can also use the +[`WATCH`]({{< relref "/commands/watch" >}}) command to check for changes +to the keys used in a transaction just before it executes. If the data you +are watching changes while you construct the transaction then +execution safely aborts. Use this feature for efficient +[optimistic concurrency control](https://en.wikipedia.org/wiki/Optimistic_concurrency_control) +in the common case where data is usually accessed only by one client +at a time. + +## Publish/subscribe + +Redis has a [publish/subscribe]({{< relref "/develop/interact/pubsub" >}}) (Pub/sub) +feature that implements the well-known +[design pattern](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) +of the same name. You can *publish* messages from a particular client +connection to a channel maintained by the server. Other connections that have +*subscribed* to the channel will receive the messages in the order you sent them. +Use pub/sub to share small amounts of data among clients easily and +efficiently. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Scripting with Redis 7 and beyond + + ' +linkTitle: Functions +title: Redis functions +weight: 1 +--- + +Redis Functions is an API for managing code to be executed on the server. This feature, which became available in Redis 7, supersedes the use of [EVAL]({{< relref "/develop/interact/programmability/eval-intro" >}}) in prior versions of Redis. + +## Prologue (or, what's wrong with Eval Scripts?) + +Prior versions of Redis made scripting available only via the [`EVAL`]({{< relref "/commands/eval" >}}) command, which allows a Lua script to be sent for execution by the server. +The core use cases for [Eval Scripts]({{< relref "/develop/interact/programmability/eval-intro" >}}) is executing part of your application logic inside Redis, efficiently and atomically. +Such script can perform conditional updates across multiple keys, possibly combining several different data types. + +Using [`EVAL`]({{< relref "/commands/eval" >}}) requires that the application sends the entire script for execution every time. +Because this results in network and script compilation overheads, Redis provides an optimization in the form of the [`EVALSHA`]({{< relref "/commands/evalsha" >}}) command. By first calling [`SCRIPT LOAD`]({{< relref "/commands/script-load" >}}) to obtain the script's SHA1, the application can invoke it repeatedly afterward with its digest alone. + +By design, Redis only caches the loaded scripts. +That means that the script cache can become lost at any time, such as after calling [`SCRIPT FLUSH`]({{< relref "/commands/script-flush" >}}), after restarting the server, or when failing over to a replica. +The application is responsible for reloading scripts during runtime if any are missing. +The underlying assumption is that scripts are a part of the application and not maintained by the Redis server. + +This approach suits many light-weight scripting use cases, but introduces several difficulties once an application becomes complex and relies more heavily on scripting, namely: + +1. All client application instances must maintain a copy of all scripts. 
That means having some mechanism that applies script updates to all of the application's instances.
+1. Calling cached scripts within the context of a [transaction]({{< relref "/develop/interact/transactions" >}}) increases the probability of the transaction failing because of a missing script. Being more likely to fail makes using cached scripts as building blocks of workflows less attractive.
+1. SHA1 digests are meaningless, making debugging the system extremely hard (e.g., in a [`MONITOR`]({{< relref "/commands/monitor" >}}) session).
+1. When used naively, [`EVAL`]({{< relref "/commands/eval" >}}) promotes an anti-pattern in which the client application renders verbatim scripts instead of responsibly using the [`KEYS` and `ARGV` Lua APIs]({{< relref "develop/interact/programmability/lua-api#runtime-globals" >}}).
+1. Because they are ephemeral, a script can't call another script. This makes sharing and reusing code between scripts nearly impossible, short of client-side preprocessing (see the first point).
+
+To address these needs while avoiding breaking changes to already-established and well-liked ephemeral scripts, Redis v7.0 introduces Redis Functions.
+
+## What are Redis Functions?
+
+Redis functions are an evolutionary step from ephemeral scripting.
+
+Functions provide the same core functionality as scripts but are first-class software artifacts of the database.
+Redis manages functions as an integral part of the database and ensures their availability via data persistence and replication.
+Because functions are part of the database and therefore declared before use, applications aren't required to load them during runtime nor risk aborted transactions.
+An application that uses functions depends only on their APIs rather than on the embedded script logic in the database.
+
+Whereas ephemeral scripts are considered a part of the application's domain, functions extend the database server itself with user-provided logic.
+They can be used to expose a richer API composed of core Redis commands, similar to modules, developed once, loaded at startup, and used repeatedly by various applications/clients.
+Every function has a unique user-defined name, making it much easier to call and trace its execution.
+
+The design of Redis Functions also attempts to demarcate between the programming language used for writing functions and their management by the server.
+Lua, the only language interpreter that Redis presently supports as an embedded execution engine, is meant to be simple and easy to learn.
+However, the choice of Lua as a language still presents many Redis users with a challenge.
+
+The Redis Functions feature makes no assumptions about the implementation's language.
+An execution engine that is part of the definition of the function handles running it.
+An engine can theoretically execute functions in any language as long as it respects several rules (such as the ability to terminate an executing function).
+
+Presently, as noted above, Redis ships with a single embedded [Lua 5.1]({{< relref "develop/interact/programmability/lua-api" >}}) engine.
+There are plans to support additional engines in the future.
+Redis functions can use all of Lua's capabilities that are available to ephemeral scripts,
+with the only exception being the [Redis Lua scripts debugger]({{< relref "/develop/interact/programmability/lua-debugging" >}}).
+
+Functions also simplify development by enabling code sharing.
+
+Every function belongs to a single library, and any given library can consist of multiple functions.
+The library's contents are immutable, and selective updates of its functions aren't allowed.
+Instead, libraries are updated as a whole with all of their functions together in one operation.
+This allows calling functions from other functions within the same library, or sharing code between functions by using common code in library-internal methods that can also take language-native arguments.
+
+Functions are intended to better support the use case of maintaining a consistent view for data entities through a logical schema, as mentioned above.
+As such, functions are stored alongside the data itself.
+Functions are also persisted to the AOF file and replicated from master to replicas, so they are as durable as the data itself.
+When Redis is used as an ephemeral cache, additional mechanisms (described below) are required to make functions more durable.
+
+Like all other operations in Redis, the execution of a function is atomic.
+A function's execution blocks all server activities during its entire time, similarly to the semantics of [transactions]({{< relref "/develop/interact/transactions" >}}).
+These semantics mean that all of the script's effects either have yet to happen or had already happened.
+The blocking semantics of an executed function apply to all connected clients at all times.
+Because running a function blocks the Redis server, functions are meant to finish executing quickly, so you should avoid using long-running functions.
+
+## Loading libraries and functions
+
+Let's explore Redis Functions via some tangible examples and Lua snippets.
+
+At this point, if you're unfamiliar with Lua in general and specifically in Redis, you may benefit from reviewing some of the examples in the [Introduction to Eval Scripts]({{< relref "/develop/interact/programmability/eval-intro" >}}) and [Lua API]({{< relref "develop/interact/programmability/lua-api" >}}) pages for a better grasp of the language.
+
+Every Redis function belongs to a single library that's loaded to Redis.
+Loading a library to the database is done with the [`FUNCTION LOAD`]({{< relref "/commands/function-load" >}}) command.
+The command takes the library payload as input.
+The library payload must start with a Shebang statement that provides metadata about the library (such as the engine to use and the library name).
+The Shebang format is:
+```
+#!<engine name> name=<library name>
+```
+
+Let's try loading an empty library:
+
+```
+redis> FUNCTION LOAD "#!lua name=mylib\n"
+(error) ERR No functions registered
+```
+
+The error is expected, as there are no functions in the loaded library. Every library needs to include at least one registered function to load successfully.
+A registered function is named and acts as an entry point to the library.
+When the target execution engine handles the [`FUNCTION LOAD`]({{< relref "/commands/function-load" >}}) command, it registers the library's functions.
+
+The Lua engine compiles and evaluates the library source code when loaded, and expects functions to be registered by calling the `redis.register_function()` API.
+
+The following snippet demonstrates a simple library registering a single function named _knockknock_, returning a string reply:
+
+```lua
+#!lua name=mylib
+redis.register_function(
+  'knockknock',
+  function() return 'Who\'s there?' end
+)
+```
+
+In the example above, we provide two arguments about the function to Lua's `redis.register_function()` API: its registered name and a callback.
+ +We can load our library and use [`FCALL`]({{< relref "/commands/fcall" >}}) to call the registered function: + +``` +redis> FUNCTION LOAD "#!lua name=mylib\nredis.register_function('knockknock', function() return 'Who\\'s there?' end)" +mylib +redis> FCALL knockknock 0 +"Who's there?" +``` + +Notice that the [`FUNCTION LOAD`]({{< relref "/commands/function-load" >}}) command returns the name of the loaded library, this name can later be used [`FUNCTION LIST`]({{< relref "/commands/function-list" >}}) and [`FUNCTION DELETE`]({{< relref "/commands/function-delete" >}}). + +We've provided [`FCALL`]({{< relref "/commands/fcall" >}}) with two arguments: the function's registered name and the numeric value `0`. This numeric value indicates the number of key names that follow it (the same way [`EVAL`]({{< relref "/commands/eval" >}}) and [`EVALSHA`]({{< relref "/commands/evalsha" >}}) work). + +We'll explain immediately how key names and additional arguments are available to the function. As this simple example doesn't involve keys, we simply use 0 for now. + +## Input keys and regular arguments + +Before we move to the following example, it is vital to understand the distinction Redis makes between arguments that are names of keys and those that aren't. + +While key names in Redis are just strings, unlike any other string values, these represent keys in the database. +The name of a key is a fundamental concept in Redis and is the basis for operating the Redis Cluster. + +**Important:** +To ensure the correct execution of Redis Functions, both in standalone and clustered deployments, all names of keys that a function accesses must be explicitly provided as input key arguments. + +Any input to the function that isn't the name of a key is a regular input argument. + +Now, let's pretend that our application stores some of its data in Redis Hashes. +We want an [`HSET`]({{< relref "/commands/hset" >}})-like way to set and update fields in said Hashes and store the last modification time in a new field named `_last_modified_`. +We can implement a function to do all that. + +Our function will call [`TIME`]({{< relref "/commands/time" >}}) to get the server's clock reading and update the target Hash with the new fields' values and the modification's timestamp. +The function we'll implement accepts the following input arguments: the Hash's key name and the field-value pairs to update. + +The Lua API for Redis Functions makes these inputs accessible as the first and second arguments to the function's callback. +The callback's first argument is a Lua table populated with all key names inputs to the function. +Similarly, the callback's second argument consists of all regular arguments. + +The following is a possible implementation for our function and its library registration: + +```lua +#!lua name=mylib + +local function my_hset(keys, args) + local hash = keys[1] + local time = redis.call('TIME')[1] + return redis.call('HSET', hash, '_last_modified_', time, unpack(args)) +end + +redis.register_function('my_hset', my_hset) +``` + +If we create a new file named _mylib.lua_ that consists of the library's definition, we can load it like so (without stripping the source code of helpful whitespaces): + +```bash +$ cat mylib.lua | redis-cli -x FUNCTION LOAD REPLACE +``` + +We've added the `REPLACE` modifier to the call to [`FUNCTION LOAD`]({{< relref "/commands/function-load" >}}) to tell Redis that we want to overwrite the existing library definition. 
+Otherwise, we would have gotten an error from Redis complaining that the library already exists. + +Now that the library's updated code is loaded to Redis, we can proceed and call our function: + +``` +redis> FCALL my_hset 1 myhash myfield "some value" another_field "another value" +(integer) 3 +redis> HGETALL myhash +1) "_last_modified_" +2) "1640772721" +3) "myfield" +4) "some value" +5) "another_field" +6) "another value" +``` + +In this case, we had invoked [`FCALL`]({{< relref "/commands/fcall" >}}) with _1_ as the number of key name arguments. +That means that the function's first input argument is a name of a key (and is therefore included in the callback's `keys` table). +After that first argument, all following input arguments are considered regular arguments and constitute the `args` table passed to the callback as its second argument. + +## Expanding the library + +We can add more functions to our library to benefit our application. +The additional metadata field we've added to the Hash shouldn't be included in responses when accessing the Hash's data. +On the other hand, we do want to provide the means to obtain the modification timestamp for a given Hash key. + +We'll add two new functions to our library to accomplish these objectives: + +1. The `my_hgetall` Redis Function will return all fields and their respective values from a given Hash key name, excluding the metadata (i.e., the `_last_modified_` field). +1. The `my_hlastmodified` Redis Function will return the modification timestamp for a given Hash key name. + +The library's source code could look something like the following: + +```lua +#!lua name=mylib + +local function my_hset(keys, args) + local hash = keys[1] + local time = redis.call('TIME')[1] + return redis.call('HSET', hash, '_last_modified_', time, unpack(args)) +end + +local function my_hgetall(keys, args) + redis.setresp(3) + local hash = keys[1] + local res = redis.call('HGETALL', hash) + res['map']['_last_modified_'] = nil + return res +end + +local function my_hlastmodified(keys, args) + local hash = keys[1] + return redis.call('HGET', hash, '_last_modified_') +end + +redis.register_function('my_hset', my_hset) +redis.register_function('my_hgetall', my_hgetall) +redis.register_function('my_hlastmodified', my_hlastmodified) +``` + +While all of the above should be straightforward, note that the `my_hgetall` also calls [`redis.setresp(3)`]({{< relref "develop/interact/programmability/lua-api#redis.setresp" >}}). +That means that the function expects [RESP3](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md) replies after calling `redis.call()`, which, unlike the default RESP2 protocol, provides dictionary (associative arrays) replies. +Doing so allows the function to delete (or set to `nil` as is the case with Lua tables) specific fields from the reply, and in our case, the `_last_modified_` field. 
+ +Assuming you've saved the library's implementation in the _mylib.lua_ file, you can replace it with: + +```bash +$ cat mylib.lua | redis-cli -x FUNCTION LOAD REPLACE +``` + +Once loaded, you can call the library's functions with [`FCALL`]({{< relref "/commands/fcall" >}}): + +``` +redis> FCALL my_hgetall 1 myhash +1) "myfield" +2) "some value" +3) "another_field" +4) "another value" +redis> FCALL my_hlastmodified 1 myhash +"1640772721" +``` + +You can also get the library's details with the [`FUNCTION LIST`]({{< relref "/commands/function-list" >}}) command: + +``` +redis> FUNCTION LIST +1) 1) "library_name" + 2) "mylib" + 3) "engine" + 4) "LUA" + 5) "functions" + 6) 1) 1) "name" + 2) "my_hset" + 3) "description" + 4) (nil) + 5) "flags" + 6) (empty array) + 2) 1) "name" + 2) "my_hgetall" + 3) "description" + 4) (nil) + 5) "flags" + 6) (empty array) + 3) 1) "name" + 2) "my_hlastmodified" + 3) "description" + 4) (nil) + 5) "flags" + 6) (empty array) +``` + +You can see that it is easy to update our library with new capabilities. + +## Reusing code in the library + +On top of bundling functions together into database-managed software artifacts, libraries also facilitate code sharing. +We can add to our library an error handling helper function called from other functions. +The helper function `check_keys()` verifies that the input _keys_ table has a single key. +Upon success it returns `nil`, otherwise it returns an [error reply]({{< relref "develop/interact/programmability/lua-api#redis.error_reply" >}}). + +The updated library's source code would be: + +```lua +#!lua name=mylib + +local function check_keys(keys) + local error = nil + local nkeys = table.getn(keys) + if nkeys == 0 then + error = 'Hash key name not provided' + elseif nkeys > 1 then + error = 'Only one key name is allowed' + end + + if error ~= nil then + redis.log(redis.LOG_WARNING, error); + return redis.error_reply(error) + end + return nil +end + +local function my_hset(keys, args) + local error = check_keys(keys) + if error ~= nil then + return error + end + + local hash = keys[1] + local time = redis.call('TIME')[1] + return redis.call('HSET', hash, '_last_modified_', time, unpack(args)) +end + +local function my_hgetall(keys, args) + local error = check_keys(keys) + if error ~= nil then + return error + end + + redis.setresp(3) + local hash = keys[1] + local res = redis.call('HGETALL', hash) + res['map']['_last_modified_'] = nil + return res +end + +local function my_hlastmodified(keys, args) + local error = check_keys(keys) + if error ~= nil then + return error + end + + local hash = keys[1] + return redis.call('HGET', keys[1], '_last_modified_') +end + +redis.register_function('my_hset', my_hset) +redis.register_function('my_hgetall', my_hgetall) +redis.register_function('my_hlastmodified', my_hlastmodified) +``` + +After you've replaced the library in Redis with the above, you can immediately try out the new error handling mechanism: + +``` +127.0.0.1:6379> FCALL my_hset 0 myhash nope nope +(error) Hash key name not provided +127.0.0.1:6379> FCALL my_hgetall 2 myhash anotherone +(error) Only one key name is allowed +``` + +And your Redis log file should have lines in it that are similar to: + +``` +... +20075:M 1 Jan 2022 16:53:57.688 # Hash key name not provided +20075:M 1 Jan 2022 16:54:01.309 # Only one key name is allowed +``` + +## Functions in cluster + +As noted above, Redis automatically handles propagation of loaded functions to replicas. 
+In a Redis Cluster, it is also necessary to load functions to all cluster nodes. This is not handled automatically by Redis Cluster, and needs to be handled by the cluster administrator (like module loading, configuration setting, etc.). + +As one of the goals of functions is to live separately from the client application, this should not be part of the Redis client library responsibilities. Instead, `redis-cli --cluster-only-masters --cluster call host:port FUNCTION LOAD ...` can be used to execute the load command on all master nodes. + +Also, note that `redis-cli --cluster add-node` automatically takes care to propagate the loaded functions from one of the existing nodes to the new node. + +## Functions and ephemeral Redis instances + +In some cases there may be a need to start a fresh Redis server with a set of functions pre-loaded. Common reasons for that could be: + +* Starting Redis in a new environment +* Re-starting an ephemeral (cache-only) Redis, that uses functions + +In such cases, we need to make sure that the pre-loaded functions are available before Redis accepts inbound user connections and commands. + +To do that, it is possible to use `redis-cli --functions-rdb` to extract the functions from an existing server. This generates an RDB file that can be loaded by Redis at startup. + +## Function flags + +Redis needs to have some information about how a function is going to behave when executed, in order to properly enforce resource usage policies and maintain data consistency. + +For example, Redis needs to know that a certain function is read-only before permitting it to execute using [`FCALL_RO`]({{< relref "/commands/fcall_ro" >}}) on a read-only replica. + +By default, Redis assumes that all functions may perform arbitrary read or write operations. Function Flags make it possible to declare more specific function behavior at the time of registration. Let's see how this works. + +In our previous example, we defined two functions that only read data. We can try executing them using [`FCALL_RO`]({{< relref "/commands/fcall_ro" >}}) against a read-only replica. + +``` +redis > FCALL_RO my_hgetall 1 myhash +(error) ERR Can not execute a function with write flag using fcall_ro. +``` + +Redis returns this error because a function can, in theory, perform both read and write operations on the database. +As a safeguard and by default, Redis assumes that the function does both, so it blocks its execution. +The server will reply with this error in the following cases: + +1. Executing a function with [`FCALL`]({{< relref "/commands/fcall" >}}) against a read-only replica. +2. Using [`FCALL_RO`]({{< relref "/commands/fcall_ro" >}}) to execute a function. +3. A disk error was detected (Redis is unable to persist so it rejects writes). + +In these cases, you can add the `no-writes` flag to the function's registration, disable the safeguard and allow them to run. +To register a function with flags use the [named arguments]({{< relref "develop/interact/programmability/lua-api#redis.register_function_named_args" >}}) variant of `redis.register_function`. 
+ +The updated registration code snippet from the library looks like this: + +```lua +redis.register_function('my_hset', my_hset) +redis.register_function{ + function_name='my_hgetall', + callback=my_hgetall, + flags={ 'no-writes' } +} +redis.register_function{ + function_name='my_hlastmodified', + callback=my_hlastmodified, + flags={ 'no-writes' } +} +``` + +Once we've replaced the library, Redis allows running both `my_hgetall` and `my_hlastmodified` with [`FCALL_RO`]({{< relref "/commands/fcall_ro" >}}) against a read-only replica: + +``` +redis> FCALL_RO my_hgetall 1 myhash +1) "myfield" +2) "some value" +3) "another_field" +4) "another value" +redis> FCALL_RO my_hlastmodified 1 myhash +"1640772721" +``` + +For the complete documentation flags, please refer to [Script flags]({{< relref "develop/interact/programmability/lua-api#script_flags" >}}). +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: How to use the built-in Lua debugger +linkTitle: Debugging Lua +title: Debugging Lua scripts in Redis +weight: 4 +--- + +Starting with version 3.2 Redis includes a complete Lua debugger, that can be +used in order to make the task of writing complex Redis scripts much simpler. + +The Redis Lua debugger, codenamed LDB, has the following important features: + +* It uses a server-client model, so it's a remote debugger. +The Redis server acts as the debugging server, while the default client is `redis-cli`. +However other clients can be developed by following the simple protocol implemented by the server. +* By default every new debugging session is a forked session. +It means that while the Redis Lua script is being debugged, the server does not block and is usable for development or in order to execute multiple debugging sessions in parallel. +This also means that changes are **rolled back** after the script debugging session finished, so that's possible to restart a new debugging session again, using exactly the same Redis data set as the previous debugging session. +* An alternative synchronous (non forked) debugging model is available on demand, so that changes to the dataset can be retained. +In this mode the server blocks for the time the debugging session is active. +* Support for step by step execution. +* Support for static and dynamic breakpoints. +* Support from logging the debugged script into the debugger console. +* Inspection of Lua variables. +* Tracing of Redis commands executed by the script. +* Pretty printing of Redis and Lua values. +* Infinite loops and long execution detection, which simulates a breakpoint. + +## Quick start + +A simple way to get started with the Lua debugger is to watch this video +introduction: + + + +> Important Note: please make sure to avoid debugging Lua scripts using your Redis production server. +Use a development server instead. +Also note that using the synchronous debugging mode (which is NOT the default) results in the Redis server blocking for all the time the debugging session lasts. + +To start a new debugging session using `redis-cli` do the following: + +1. Create your script in some file with your preferred editor. Let's assume you are editing your Redis Lua script located at `/tmp/script.lua`. +2. 
Start a debugging session with:
+
+    ./redis-cli --ldb --eval /tmp/script.lua
+
+Note that with the `--eval` option of `redis-cli` you can pass key names and arguments to the script, separated by a comma, like in the following example:
+
+```
+./redis-cli --ldb --eval /tmp/script.lua mykey somekey , arg1 arg2
+```
+
+You'll enter a special mode where `redis-cli` no longer accepts its normal
+commands, but instead prints a help screen and passes the unmodified debugging
+commands directly to Redis.
+
+The only commands that are not passed to the Redis debugger are:
+
+* `quit` -- this will terminate the debugging session.
+It's like removing all the breakpoints and using the `continue` debugging command.
+Moreover the command will exit from `redis-cli`.
+* `restart` -- the debugging session will restart from scratch, **reloading the new version of the script from the file**.
+So a normal debugging cycle involves modifying the script after some debugging, and calling `restart` in order to start debugging again with the new script changes.
+* `help` -- this command is passed to the Redis Lua debugger, which will print a list of commands like the following:
+
+```
+lua debugger> help
+Redis Lua debugger help:
+[h]elp               Show this help.
+[s]tep               Run current line and stop again.
+[n]ext               Alias for step.
+[c]ontinue           Run till next breakpoint.
+[l]ist               List source code around current line.
+[l]ist [line]        List source code around [line].
+                     line = 0 means: current position.
+[l]ist [line] [ctx]  In this form [ctx] specifies how many lines
+                     to show before/after [line].
+[w]hole              List all source code. Alias for 'list 1 1000000'.
+[p]rint              Show all the local variables.
+[p]rint <var>        Show the value of the specified variable.
+                     Can also show global vars KEYS and ARGV.
+[b]reak              Show all breakpoints.
+[b]reak <line>       Add a breakpoint to the specified line.
+[b]reak -<line>      Remove breakpoint from the specified line.
+[b]reak 0            Remove all breakpoints.
+[t]race              Show a backtrace.
+[e]val <code>        Execute some Lua code (in a different callframe).
+[r]edis <cmd>        Execute a Redis command.
+[m]axlen [len]       Trim logged Redis replies and Lua var dumps to len.
+                     Specifying zero as <len> means unlimited.
+[a]bort              Stop the execution of the script. In sync
+                     mode dataset changes will be retained.
+
+Debugger functions you can call from Lua scripts:
+redis.debug()        Produce logs in the debugger console.
+redis.breakpoint()   Stop execution as if there was a breakpoint in the
+                     next line of code.
+```
+
+Note that when you start the debugger it will start in **stepping mode**.
+It will stop at the first line of the script that actually does something before executing it.
+
+From this point you usually call `step` in order to execute the line and go to the next line.
+While you step Redis will show all the commands executed by the server like in the following example:
+
+```
+* Stopped at 1, stop reason = step over
+-> 1   redis.call('ping')
+lua debugger> step
+<redis> ping
+<reply> "+PONG"
+* Stopped at 2, stop reason = step over
+```
+
+The `<redis>` and `<reply>` lines show the command executed by the line just
+executed, and the reply from the server. Note that this happens only in stepping mode.
+If you use `continue` in order to execute the script till the next breakpoint, commands will not be dumped on the screen to prevent too much output.
+
+## Termination of the debugging session
+
+
+When the script terminates naturally, the debugging session ends and
+`redis-cli` returns to its normal non-debugging mode. You can restart the
+session using the `restart` command as usual.
+
Another way to stop a debugging session is just interrupting `redis-cli`
manually by pressing `Ctrl+C`. Note that also any event breaking the
connection between `redis-cli` and the `redis-server` will interrupt the
debugging session.

All the forked debugging sessions are terminated when the server is shut
down.

## Abbreviating debugging commands

Debugging can be a very repetitive task. For this reason every Redis
debugger command starts with a different character, and you can use the single
initial character in order to refer to the command.

So for example instead of typing `step` you can just type `s`.

## Breakpoints

Adding and removing breakpoints is trivial as described in the online help.
Just use `b 1 2 3 4` to add a breakpoint in line 1, 2, 3, 4.
The command `b 0` removes all the breakpoints. Selected breakpoints can be
removed using as argument the line where the breakpoint we want to remove is, but prefixed by a minus sign.
So for example `b -3` removes the breakpoint from line 3.

Note that adding breakpoints to lines that Lua never executes, like declaration of local variables or comments, will not work.
The breakpoint will be added but since this part of the script will never be executed, the program will never stop.

## Dynamic breakpoints

Using the `breakpoint` command it is possible to add breakpoints into specific
lines. However sometimes we want to stop the execution of the program only
when something special happens. In order to do so, you can use the
`redis.breakpoint()` function inside your Lua script. When called it simulates
a breakpoint in the next line that will be executed.

```
if counter > 10 then redis.breakpoint() end
```
This feature is extremely useful when debugging, so that we can avoid
continuing the script execution manually multiple times until a given condition
is encountered.

## Synchronous mode

As explained previously, by default LDB uses forked sessions with rollback
of all the data changes made by the script while it is being debugged.
Determinism is usually a good thing to have during debugging, so that successive
debugging sessions can be started without having to reset the database content
to its original state.

However for tracking certain bugs, you may want to retain the changes performed
to the key space by each debugging session. When this is a good idea you
should start the debugger using a special option, `ldb-sync-mode`, in `redis-cli`.

```
./redis-cli --ldb-sync-mode --eval /tmp/script.lua
```

> Note: Redis server will be unreachable during the debugging session in this mode, so use with care.

In this special mode, the `abort` command can stop the script halfway, retaining the changes made to the dataset up to that point.
Note that this is different compared to ending the debugging session normally.
If you just interrupt `redis-cli` the script will be fully executed and then the session terminated.
Instead with `abort` you can interrupt the script execution in the middle and start a new debugging session if needed.

## Logging from scripts

The `redis.debug()` command is a powerful debugging facility that can be
called inside the Redis Lua script in order to log things into the debug
console:

```
lua debugger> list
-> 1   local a = {1,2,3}
   2   local b = false
   3   redis.debug(a,b)
lua debugger> continue
<debug> line 3: {1; 2; 3}, false
```

If the script is executed outside of a debugging session, `redis.debug()` has no effects at all.
+Note that the function accepts multiple arguments, that are separated by a comma and a space in the output. + +Tables and nested tables are displayed correctly in order to make values simple to observe for the programmer debugging the script. + +## Inspecting the program state with `print` and `eval` + + +While the `redis.debug()` function can be used in order to print values +directly from within the Lua script, often it is useful to observe the local +variables of a program while stepping or when stopped into a breakpoint. + +The `print` command does just that, and performs lookup in the call frames +starting from the current one back to the previous ones, up to top-level. +This means that even if we are into a nested function inside a Lua script, +we can still use `print foo` to look at the value of `foo` in the context +of the calling function. When called without a variable name, `print` will +print all variables and their respective values. + +The `eval` command executes small pieces of Lua scripts **outside the context of the current call frame** (evaluating inside the context of the current call frame is not possible with the current Lua internals). +However you can use this command in order to test Lua functions. + +``` +lua debugger> e redis.sha1hex('foo') + "0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33" +``` + +## Debugging clients + +LDB uses the client-server model where the Redis server acts as a debugging server that communicates using [RESP]({{< relref "/develop/reference/protocol-spec" >}}). While `redis-cli` is the default debug client, any client can be used for debugging as long as it meets one of the following conditions: + +1. The client provides a native interface for setting the debug mode and controlling the debug session. +2. The client provides an interface for sending arbitrary commands over RESP. +3. The client allows sending raw messages to the Redis server. + +For example, the [Redis plugin](https://redis.com/blog/zerobrane-studio-plugin-for-redis-lua-scripts) for [ZeroBrane Studio](http://studio.zerobrane.com/) integrates with LDB using [redis-lua](https://github.com/nrk/redis-lua). The following Lua code is a simplified example of how the plugin achieves that: + +```Lua +local redis = require 'redis' + +-- add LDB's Continue command +redis.commands['ldbcontinue'] = redis.command('C') + +-- script to be debugged +local script = [[ + local x, y = tonumber(ARGV[1]), tonumber(ARGV[2]) + local result = x * y + return result +]] + +local client = redis.connect('127.0.0.1', 6379) +client:script("DEBUG", "YES") +print(unpack(client:eval(script, 0, 6, 9))) +client:ldbcontinue() +``` +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Executing Lua in Redis + + ' +linkTitle: Lua scripting +title: Scripting with Lua +weight: 2 +--- + +Redis lets users upload and execute Lua scripts on the server. +Scripts can employ programmatic control structures and use most of the [commands]({{< relref "/commands" >}}) while executing to access the database. +Because scripts execute in the server, reading and writing data from scripts is very efficient. + +Redis guarantees the script's atomic execution. +While executing the script, all server activities are blocked during its entire runtime. +These semantics mean that all of the script's effects either have yet to happen or had already happened. + +Scripting offers several properties that can be valuable in many cases. 
+
These include:

* Providing locality by executing logic where data lives. Data locality reduces overall latency and saves networking resources.
* Blocking semantics that ensure the script's atomic execution.
* Enabling the composition of simple capabilities that are either missing from Redis or are too niche to be a part of it.

Lua lets you run part of your application logic inside Redis.
Such scripts can perform conditional updates across multiple keys, possibly combining several different data types atomically.

Scripts are executed in Redis by an embedded execution engine.
Presently, Redis supports a single scripting engine, the [Lua 5.1](https://www.lua.org/) interpreter.
Please refer to the [Redis Lua API Reference]({{< relref "develop/interact/programmability/lua-api" >}}) page for complete documentation.

Although the server executes them, Eval scripts are regarded as a part of the client-side application, which is why they're not named, versioned, or persisted.
So all scripts may need to be reloaded by the application at any time if missing (after a server restart, fail-over to a replica, etc.).
As of version 7.0, [Redis Functions]({{< relref "/develop/interact/programmability/functions-intro" >}}) offer an alternative approach to programmability which allows the server itself to be extended with additional programmed logic.

## Getting started

We'll start scripting with Redis by using the [`EVAL`]({{< relref "/commands/eval" >}}) command.

Here's our first example:

```
> EVAL "return 'Hello, scripting!'" 0
"Hello, scripting!"
```

In this example, [`EVAL`]({{< relref "/commands/eval" >}}) takes two arguments.
The first argument is a string that consists of the script's Lua source code.
The script doesn't need to include any definitions of a Lua function.
It is just a Lua program that will run in the Redis engine's context.

The second argument is the number of arguments that follow the script's body, starting from the third argument, representing Redis key names.
In this example, we used the value _0_ because we didn't provide the script with any arguments, whether the names of keys or not.

## Script parameterization

It is possible, although highly ill-advised, to have the application dynamically generate script source code per its needs.
For example, the application could send these two entirely different, but at the same time perfectly identical scripts:

```
redis> EVAL "return 'Hello'" 0
"Hello"
redis> EVAL "return 'Scripting!'" 0
"Scripting!"
```

Although this mode of operation isn't blocked by Redis, it is an anti-pattern due to script cache considerations (more on the topic below).
Instead of having your application generate subtle variations of the same scripts, you can parametrize them and pass any arguments needed to execute them.

The following example demonstrates how to achieve the same effects as above, but via parameterization:

```
redis> EVAL "return ARGV[1]" 0 Hello
"Hello"
redis> EVAL "return ARGV[1]" 0 Parameterization!
"Parameterization!"
```

At this point, it is essential to understand the distinction Redis makes between input arguments that are names of keys and those that aren't.

While key names in Redis are just strings, unlike any other string values, these represent keys in the database.
The name of a key is a fundamental concept in Redis and is the basis for operating the Redis Cluster.
+

**Important:**
to ensure the correct execution of scripts, both in standalone and clustered deployments, all names of keys that a script accesses must be explicitly provided as input key arguments.
The script **should only** access keys whose names are given as input arguments.
Scripts **should never** access keys with programmatically-generated names or based on the contents of data structures stored in the database.

Any input to the function that isn't the name of a key is a regular input argument.

In the example above, both _Hello_ and _Parameterization!_ are regular input arguments for the script.
Because the script doesn't touch any keys, we use the numerical argument _0_ to specify there are no key name arguments.
The execution context makes arguments available to the script through [_KEYS_]({{< relref "develop/interact/programmability/lua-api#the-keys-global-variable" >}}) and [_ARGV_]({{< relref "develop/interact/programmability/lua-api#the-argv-global-variable" >}}) global runtime variables.
The _KEYS_ table is pre-populated with all key name arguments provided to the script before its execution, whereas the _ARGV_ table serves a similar purpose but for regular arguments.

The following attempts to demonstrate the distribution of input arguments between the script's _KEYS_ and _ARGV_ runtime global variables:


```
redis> EVAL "return { KEYS[1], KEYS[2], ARGV[1], ARGV[2], ARGV[3] }" 2 key1 key2 arg1 arg2 arg3
1) "key1"
2) "key2"
3) "arg1"
4) "arg2"
5) "arg3"
```

**Note:**
as can be seen above, Lua's table arrays are returned as [RESP2 array replies]({{< relref "/develop/reference/protocol-spec#resp-arrays" >}}), so it is likely that your client's library will convert it to the native array data type in your programming language.
Please refer to the rules that govern [data type conversion]({{< relref "develop/interact/programmability/lua-api#data-type-conversion" >}}) for more pertinent information.

## Interacting with Redis from a script

It is possible to call Redis commands from a Lua script either via [`redis.call()`]({{< relref "develop/interact/programmability/lua-api#redis.call" >}}) or [`redis.pcall()`]({{< relref "develop/interact/programmability/lua-api#redis.pcall" >}}).

The two are nearly identical.
Both execute a Redis command along with its provided arguments, if these represent a well-formed command.
However, the difference between the two functions lies in the manner in which runtime errors (such as syntax errors, for example) are handled.
Errors raised from calling the `redis.call()` function are returned directly to the client that had executed it.
Conversely, errors encountered when calling the `redis.pcall()` function are returned to the script's execution context instead for possible handling.

For example, consider the following:

```
> EVAL "return redis.call('SET', KEYS[1], ARGV[1])" 1 foo bar
OK
```
The above script accepts one key name and one value as its input arguments.
When executed, the script calls the [`SET`]({{< relref "/commands/set" >}}) command to set the input key, _foo_, with the string value "bar".

## Script cache

Until this point, we've used the [`EVAL`]({{< relref "/commands/eval" >}}) command to run our script.

Whenever we call [`EVAL`]({{< relref "/commands/eval" >}}), we also include the script's source code with the request.
+
Repeatedly calling [`EVAL`]({{< relref "/commands/eval" >}}) to execute the same set of parameterized scripts wastes both network bandwidth and also has some overheads in Redis.
Naturally, saving on network and compute resources is key, so, instead, Redis provides a caching mechanism for scripts.

Every script you execute with [`EVAL`]({{< relref "/commands/eval" >}}) is stored in a dedicated cache that the server keeps.
The cache's contents are organized by the scripts' SHA1 digest sums, so the SHA1 digest sum of a script uniquely identifies it in the cache.
You can verify this behavior by running [`EVAL`]({{< relref "/commands/eval" >}}) and calling [`INFO`]({{< relref "/commands/info" >}}) afterward.
You'll notice that the _used_memory_scripts_eval_ and _number_of_cached_scripts_ metrics grow with every new script that's executed.

As mentioned above, dynamically-generated scripts are an anti-pattern.
Generating scripts during the application's runtime may, and probably will, exhaust the host's memory resources for caching them.
Instead, scripts should be as generic as possible and provide customized execution via their arguments.

A script is loaded to the server's cache by calling the [`SCRIPT LOAD`]({{< relref "/commands/script-load" >}}) command and providing its source code.
The server doesn't execute the script, but instead just compiles and loads it to the server's cache.
Once loaded, you can execute the cached script with the SHA1 digest returned from the server.

Here's an example of loading and then executing a cached script:

```
redis> SCRIPT LOAD "return 'Immabe a cached script'"
"c664a3bf70bd1d45c4284ffebb65a6f2299bfc9f"
redis> EVALSHA c664a3bf70bd1d45c4284ffebb65a6f2299bfc9f 0
"Immabe a cached script"
```

### Cache volatility

The Redis script cache is **always volatile**.
It isn't considered as a part of the database and is **not persisted**.
The cache may be cleared when the server restarts, during fail-over when a replica assumes the master role, or explicitly by [`SCRIPT FLUSH`]({{< relref "/commands/script-flush" >}}).
That means that cached scripts are ephemeral, and the cache's contents can be lost at any time.

Applications that use scripts should always call [`EVALSHA`]({{< relref "/commands/evalsha" >}}) to execute them.
The server returns an error if the script's SHA1 digest is not in the cache.
For example:

```
redis> EVALSHA ffffffffffffffffffffffffffffffffffffffff 0
(error) NOSCRIPT No matching script
```

In this case, the application should first load it with [`SCRIPT LOAD`]({{< relref "/commands/script-load" >}}) and then call [`EVALSHA`]({{< relref "/commands/evalsha" >}}) once more to run the cached script by its SHA1 sum.
Most of Redis' clients already provide utility APIs for doing that automatically.
Please consult your client's documentation regarding the specific details.

### `EVALSHA` in the context of pipelining

Special care should be given when executing [`EVALSHA`]({{< relref "/commands/evalsha" >}}) in the context of a [pipelined request]({{< relref "/develop/use/pipelining" >}}).
The commands in a pipelined request run in the order they are sent, but other clients' commands may be interleaved for execution between these.
Because of that, the `NOSCRIPT` error can return from a pipelined request but can't be handled.

Therefore, a client library's implementation should revert to using plain [`EVAL`]({{< relref "/commands/eval" >}}) of parameterized scripts in the context of a pipeline.
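
As a side note, the SHA1 digest that identifies a cached script is simply the SHA1 of the script's source text, so a client can also compute it locally without a round trip to the server. The following sketch is shown for illustration only; it reuses the cached-script example above and the `redis.sha1hex()` helper described in the Redis Lua API reference:

```lua
-- Run with: EVAL "<this script>" 0
-- Should return the same digest that SCRIPT LOAD returned above for
-- the script "return 'Immabe a cached script'".
return redis.sha1hex("return 'Immabe a cached script'")
```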
+

### Script cache semantics

During normal operation, an application's scripts are meant to stay indefinitely in the cache (that is, until the server is restarted or the cache is flushed).
The underlying reasoning is that the script cache contents of a well-written application are unlikely to grow continuously.
Even large applications that use hundreds of cached scripts shouldn't be an issue in terms of cache memory usage.

The only way to flush the script cache is by explicitly calling the [`SCRIPT FLUSH`]({{< relref "/commands/script-flush" >}}) command.
Running the command will _completely flush_ the scripts cache, removing all the scripts executed so far.
Typically, this is only needed when the instance is going to be instantiated for another customer or application in a cloud environment.

Also, as already mentioned, restarting a Redis instance flushes the non-persistent script cache.
However, from the point of view of the Redis client, there are only two ways to make sure that a Redis instance was not restarted between two different commands:

* The connection we have with the server is persistent and was never closed so far.
* The client explicitly checks the `run_id` field in the [`INFO`]({{< relref "/commands/info" >}}) command to ensure the server was not restarted and is still the same process.

Practically speaking, it is much simpler for the client to assume that in the context of a given connection, cached scripts are guaranteed to be there unless the administrator explicitly invoked the [`SCRIPT FLUSH`]({{< relref "/commands/script-flush" >}}) command.
The fact that the user can count on Redis to retain cached scripts is semantically helpful in the context of pipelining.

## The `SCRIPT` command

The Redis [`SCRIPT`]({{< relref "/commands/script" >}}) command provides several ways for controlling the scripting subsystem.
These are:

* [`SCRIPT FLUSH`]({{< relref "/commands/script-flush" >}}): this command is the only way to force Redis to flush the scripts cache.
  It is most useful in environments where the same Redis instance is reassigned to different uses.
  It is also helpful for testing client libraries' implementations of the scripting feature.

* [`SCRIPT EXISTS`]({{< relref "/commands/script-exists" >}}): given one or more SHA1 digests as arguments, this command returns an array of _1_'s and _0_'s.
  _1_ means the specific SHA1 is recognized as a script already present in the scripting cache. _0_'s meaning is that a script with this SHA1 wasn't loaded before (or at least never since the latest call to [`SCRIPT FLUSH`]({{< relref "/commands/script-flush" >}})).

* `SCRIPT LOAD script`: this command registers the specified script in the Redis script cache.
  It is a useful command in all the contexts where we want to ensure that [`EVALSHA`]({{< relref "/commands/evalsha" >}}) doesn't fail (for instance, in a pipeline or when called from a [`MULTI`]({{< relref "/commands/multi" >}})/[`EXEC`]({{< relref "/commands/exec" >}}) [transaction]({{< relref "/develop/interact/transactions" >}})), without the need to execute the script.

* [`SCRIPT KILL`]({{< relref "/commands/script-kill" >}}): this command is the only way to interrupt a long-running script (a.k.a slow script), short of shutting down the server.
  A script is deemed as slow once its execution's duration exceeds the configured [maximum execution time]({{< relref "/develop/interact/programmability/#maximum-execution-time" >}}) threshold.
+ The [`SCRIPT KILL`]({{< relref "/commands/script-kill" >}}) command can be used only with scripts that did not modify the dataset during their execution (since stopping a read-only script does not violate the scripting engine's guaranteed atomicity). + +* [`SCRIPT DEBUG`]({{< relref "/commands/script-debug" >}}): controls use of the built-in [Redis Lua scripts debugger]({{< relref "/develop/interact/programmability/lua-debugging" >}}). + +## Script replication + +In standalone deployments, a single Redis instance called _master_ manages the entire database. +A [clustered deployment]({{< relref "/operate/oss_and_stack/management/scaling" >}}) has at least three masters managing the sharded database. +Redis uses [replication]({{< relref "/operate/oss_and_stack/management/replication" >}}) to maintain one or more replicas, or exact copies, for any given master. + +Because scripts can modify the data, Redis ensures all write operations performed by a script are also sent to replicas to maintain consistency. +There are two conceptual approaches when it comes to script replication: + +1. Verbatim replication: the master sends the script's source code to the replicas. + Replicas then execute the script and apply the write effects. + This mode can save on replication bandwidth in cases where short scripts generate many commands (for example, a _for_ loop). + However, this replication mode means that replicas redo the same work done by the master, which is wasteful. + More importantly, it also requires [all write scripts to be deterministic](#scripts-with-deterministic-writes). +1. Effects replication: only the script's data-modifying commands are replicated. + Replicas then run the commands without executing any scripts. + While potentially lengthier in terms of network traffic, this replication mode is deterministic by definition and therefore doesn't require special consideration. + +Verbatim script replication was the only mode supported until Redis 3.2, in which effects replication was added. +The _lua-replicate-commands_ configuration directive and [`redis.replicate_commands()`]({{< relref "develop/interact/programmability/lua-api#redis.replicate_commands" >}}) Lua API can be used to enable it. + +In Redis 5.0, effects replication became the default mode. +As of Redis 7.0, verbatim replication is no longer supported. + +### Replicating commands instead of scripts + +Starting with Redis 3.2, it is possible to select an alternative replication method. +Instead of replicating whole scripts, we can replicate the write commands generated by the script. +We call this **script effects replication**. + +**Note:** +starting with Redis 5.0, script effects replication is the default mode and does not need to be explicitly enabled. + +In this replication mode, while Lua scripts are executed, Redis collects all the commands executed by the Lua scripting engine that actually modify the dataset. +When the script execution finishes, the sequence of commands that the script generated are wrapped into a [`MULTI`]({{< relref "/commands/multi" >}})/[`EXEC`]({{< relref "/commands/exec" >}}) [transaction]({{< relref "/develop/interact/transactions" >}}) and are sent to the replicas and AOF. + +This is useful in several ways depending on the use case: + +* When the script is slow to compute, but the effects can be summarized by a few write commands, it is a shame to re-compute the script on the replicas or when reloading the AOF. + In this case, it is much better to replicate just the effects of the script. 
+
* When script effects replication is enabled, the restrictions on non-deterministic functions are removed.
  You can, for example, use the [`TIME`]({{< relref "/commands/time" >}}) or [`SRANDMEMBER`]({{< relref "/commands/srandmember" >}}) commands inside your scripts freely at any place.
* The Lua PRNG in this mode is seeded randomly on every call.

Unless already enabled by the server's configuration or defaults (before Redis 7.0), you need to issue the following Lua command before the script performs a write:

```lua
redis.replicate_commands()
```

The [`redis.replicate_commands()`]({{< relref "develop/interact/programmability/lua-api#redis.replicate_commands" >}}) function returns _true_ if script effects replication was enabled;
otherwise, if the function was called after the script already called a write command,
it returns _false_, and normal whole script replication is used.

This function is deprecated as of Redis 7.0, and while you can still call it, it will always succeed.

### Scripts with deterministic writes

**Note:**
Starting with Redis 5.0, script replication is by default effect-based rather than verbatim.
In Redis 7.0, verbatim script replication was removed entirely.
The following section only applies to versions lower than Redis 7.0 when not using effect-based script replication.

An important part of scripting is writing scripts that only change the database in a deterministic way.
Scripts executed in a Redis instance are, by default until version 5.0, propagated to replicas and to the AOF file by sending the script itself -- not the resulting commands.
Since the script will be re-run on the remote host (or when reloading the AOF file), its changes to the database must be reproducible.

The reason for sending the script is that it is often much faster than sending the multiple commands that the script generates.
If the client is sending many scripts to the master, converting the scripts into individual commands for the replica / AOF would result in too much bandwidth for the replication link or the Append Only File (and also too much CPU since dispatching a command received via the network is a lot more work for Redis compared to dispatching a command invoked by Lua scripts).

Normally replicating scripts instead of the effects of the scripts makes sense, however not in all the cases.
So starting with Redis 3.2, the scripting engine is able to, alternatively, replicate the sequence of write commands resulting from the script execution, instead of replicating the script itself.

In this section, we'll assume that scripts are replicated verbatim by sending the whole script.
Let's call this replication mode **verbatim scripts replication**.

The main drawback with the *whole scripts replication* approach is that scripts are required to have the following property:
the script **always must** execute the same Redis _write_ commands with the same arguments given the same input data set.
Operations performed by the script can't depend on any hidden (non-explicit) information or state that may change as the script execution proceeds or between different executions of the script.
Nor can it depend on any external input from I/O devices.

Acts such as using the system time, calling Redis commands that return random values (e.g., [`RANDOMKEY`]({{< relref "/commands/randomkey" >}})), or using Lua's random number generator, could result in scripts that will not evaluate consistently.
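
As a hypothetical sketch of the problem (the key name is made up for the example), the following script would write a different value on the master and on every replica if it were replicated verbatim, which is exactly the situation the rules below are designed to prevent:

```lua
-- Non-deterministic write: the stored value depends on the server's clock.
-- Under verbatim replication Redis rejects the SET because it follows a
-- call to TIME; under effects replication the script is allowed, since
-- only the resulting SET command is propagated.
local now = redis.call('TIME')[1]
return redis.call('SET', KEYS[1], now)
```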
+

To enforce the deterministic behavior of scripts, Redis does the following:

* Lua does not export commands to access the system time or other external states.
* Redis will block the script with an error if a script calls a Redis command able to alter the data set **after** a Redis _random_ command like [`RANDOMKEY`]({{< relref "/commands/randomkey" >}}), [`SRANDMEMBER`]({{< relref "/commands/srandmember" >}}), [`TIME`]({{< relref "/commands/time" >}}).
  That means that read-only scripts that don't modify the dataset can call those commands.
  Note that a _random command_ does not necessarily mean a command that uses random numbers: any non-deterministic command is considered as a random command (the best example in this regard is the [`TIME`]({{< relref "/commands/time" >}}) command).
* In Redis version 4.0, commands that may return elements in random order, such as [`SMEMBERS`]({{< relref "/commands/smembers" >}}) (because Redis Sets are _unordered_), exhibit a different behavior when called from Lua,
and undergo a silent lexicographical sorting filter before returning data to Lua scripts.
  So `redis.call("SMEMBERS",KEYS[1])` will always return the Set elements in the same order, while the same command invoked by normal clients may return different results even if the key contains exactly the same elements.
  However, starting with Redis 5.0, this ordering is no longer performed because replicating effects circumvents this type of non-determinism.
  In general, even when developing for Redis 4.0, never assume that certain commands in Lua will be ordered, but instead rely on the documentation of the original command you call to see the properties it provides.
* Lua's pseudo-random number generation function `math.random` is modified and always uses the same seed for every execution.
  This means that calling [`math.random`]({{< relref "develop/interact/programmability/lua-api#runtime-libraries" >}}) will always generate the same sequence of numbers every time a script is executed (unless `math.randomseed` is used).

All that said, you can still combine write commands with random behavior by using a simple trick.
Imagine that you want to write a Redis script that will populate a list with N random integers.

The initial implementation in Ruby could look like this:

```
require 'rubygems'
require 'redis'

r = Redis.new

RandomPushScript = <<EOF
    local i = tonumber(ARGV[1])
    local res
    while (i > 0) do
        res = redis.call('LPUSH',KEYS[1],math.random())
        i = i-1
    end
    return res
EOF

r.del(:mylist)
puts r.eval(RandomPushScript,[:mylist],[10,rand(2**32)])
```

Every time this code runs, the resulting list will have exactly the
following elements:

```
redis> LRANGE mylist 0 -1
 1) "0.74509509873814"
 2) "0.87390407681181"
 3) "0.36876626981831"
 4) "0.6921941534114"
 5) "0.7857992587545"
 6) "0.57730350670279"
 7) "0.87046522734243"
 8) "0.09637165539729"
 9) "0.74990198051087"
10) "0.17082803611217"
```

To make the script both deterministic and still have it produce different random elements,
we can add an extra argument to the script that's the seed to Lua's pseudo-random number generator.
The new script is as follows:

```
RandomPushScript = <<EOF
    local i = tonumber(ARGV[1])
    local res
    math.randomseed(tonumber(ARGV[2]))
    while (i > 0) do
        res = redis.call('LPUSH',KEYS[1],math.random())
        i = i-1
    end
    return res
EOF

r.del(:mylist)
puts r.eval(RandomPushScript,1,:mylist,10,rand(2**32))
```

What we are doing here is sending the seed of the PRNG as one of the arguments.
+The script output will always be the same given the same arguments (our requirement) but we are changing one of the arguments at every invocation, +generating the random seed client-side. +The seed will be propagated as one of the arguments both in the replication link and in the Append Only File, +guaranteeing that the same changes will be generated when the AOF is reloaded or when the replica processes the script. + +Note: an important part of this behavior is that the PRNG that Redis implements as `math.random` and `math.randomseed` is guaranteed to have the same output regardless of the architecture of the system running Redis. +32-bit, 64-bit, big-endian and little-endian systems will all produce the same output. + +## Debugging Eval scripts + +Starting with Redis 3.2, Redis has support for native Lua debugging. +The Redis Lua debugger is a remote debugger consisting of a server, which is Redis itself, and a client, which is by default [`redis-cli`]({{< relref "/develop/tools/cli" >}}). + +The Lua debugger is described in the [Lua scripts debugging]({{< relref "/develop/interact/programmability/lua-debugging" >}}) section of the Redis documentation. + +## Execution under low memory conditions + +When memory usage in Redis exceeds the `maxmemory` limit, the first write command encountered in the script that uses additional memory will cause the script to abort (unless [`redis.pcall`]({{< relref "develop/interact/programmability/lua-api#redis.pcall" >}}) was used). + +However, an exception to the above is when the script's first write command does not use additional memory, as is the case with (for example, [`DEL`]({{< relref "/commands/del" >}}) and [`LREM`]({{< relref "/commands/lrem" >}})). +In this case, Redis will allow all commands in the script to run to ensure atomicity. +If subsequent writes in the script consume additional memory, Redis' memory usage can exceed the threshold set by the `maxmemory` configuration directive. + +Another scenario in which a script can cause memory usage to cross the `maxmemory` threshold is when the execution begins when Redis is slightly below `maxmemory`, so the script's first write command is allowed. +As the script executes, subsequent write commands consume more memory leading to the server using more RAM than the configured `maxmemory` directive. + +In those scenarios, you should consider setting the `maxmemory-policy` configuration directive to any values other than `noeviction`. +In addition, Lua scripts should be as fast as possible so that eviction can kick in between executions. + +Note that you can change this behaviour by using [flags](#eval-flags) + +## Eval flags + +Normally, when you run an Eval script, the server does not know how it accesses the database. +By default, Redis assumes that all scripts read and write data. +However, starting with Redis 7.0, there's a way to declare flags when creating a script in order to tell Redis how it should behave. + +The way to do that is by using a Shebang statement on the first line of the script like so: + +``` +#!lua flags=no-writes,allow-stale +local x = redis.call('get','x') +return x +``` + +Note that as soon as Redis sees the `#!` comment, it'll treat the script as if it declares flags, even if no flags are defined, +it still has a different set of defaults compared to a script without a `#!` line. + +Another difference is that scripts without `#!` can run commands that access keys belonging to different cluster hash slots, but ones with `#!` inherit the default flags, so they cannot. 
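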
+

Please refer to [Script flags]({{< relref "develop/interact/programmability/lua-api#script_flags" >}}) to learn about the available script flags and their defaults.
---
categories:
- docs
- develop
- stack
- oss
- rs
- rc
- oss
- kubernetes
- clients
description: 'Executing Lua in Redis

  '
linkTitle: Lua API
title: Redis Lua API reference
weight: 3
---

Redis includes an embedded [Lua 5.1](https://www.lua.org/) interpreter.
The interpreter runs user-defined [ephemeral scripts]({{< relref "/develop/interact/programmability/eval-intro" >}}) and [functions]({{< relref "/develop/interact/programmability/functions-intro" >}}). Scripts run in a sandboxed context and can only access specific Lua packages. This page describes the packages and APIs available inside the execution's context.

## Sandbox context

The sandboxed Lua context attempts to prevent accidental misuse and reduce potential threats from the server's environment.

Scripts should never try to access the Redis server's underlying host systems.
That includes the file system, network, and any other attempt to perform a system call other than those supported by the API.

Scripts should operate solely on data stored in Redis and data provided as arguments to their execution.

### Global variables and functions

The sandboxed Lua execution context blocks the declaration of global variables and functions.
The blocking of global variables is in place to ensure that scripts and functions don't attempt to maintain any runtime context other than the data stored in Redis.
In the (somewhat uncommon) use case that a context needs to be maintained between executions,
you should store the context in Redis' keyspace.

Redis will return a "Script attempted to create global variable 'my_global_variable'" error when trying to execute the following snippet:

```lua
my_global_variable = 'some value'
```

And similarly for the following global function declaration:

```lua
function my_global_function()
  -- Do something amazing
end
```

You'll also get a similar error when your script attempts to access any global variables that are undefined in the runtime's context:

```lua
-- The following will surely raise an error
return an_undefined_global_variable
```

Instead, all variable and function definitions are required to be declared as local.
To do so, you'll need to prepend the [_local_](https://www.lua.org/manual/5.1/manual.html#2.4.7) keyword to your declarations.
For example, the following snippet will be considered perfectly valid by Redis:

```lua
local my_local_variable = 'some value'

local function my_local_function()
  -- Do something else, but equally amazing
end
```

**Note:**
the sandbox attempts to prevent the use of globals.
Using Lua's debugging functionality or other approaches such as altering the meta table used for implementing the globals' protection to circumvent the sandbox isn't hard.
However, it is difficult to circumvent the protection by accident.
If the user messes with the Lua global state, the consistency of AOF and replication can't be guaranteed.
In other words, just don't do it.

### Imported Lua modules

Using imported Lua modules is not supported inside the sandboxed execution context.
The sandboxed execution context prevents the loading of modules by disabling Lua's [`require` function](https://www.lua.org/pil/8.1.html).
+ +The only libraries that Redis ships with and that you can use in scripts are listed under the [Runtime libraries](#runtime-libraries) section. + +## Runtime globals + +While the sandbox prevents users from declaring globals, the execution context is pre-populated with several of these. + +### The _redis_ singleton + +The _redis_ singleton is an object instance that's accessible from all scripts. +It provides the API to interact with Redis from scripts. +Its description follows [below](#redis_object). + +### The _KEYS_ global variable {#the-keys-global-variable} + +* Since version: 2.6.0 +* Available in scripts: yes +* Available in functions: no + +**Important:** +to ensure the correct execution of scripts, both in standalone and clustered deployments, all names of keys that a function accesses must be explicitly provided as input key arguments. +The script **should only** access keys whose names are given as input arguments. +Scripts **should never** access keys with programmatically-generated names or based on the contents of data structures stored in the database. + +The _KEYS_ global variable is available only for [ephemeral scripts]({{< relref "/develop/interact/programmability/eval-intro" >}}). +It is pre-populated with all key name input arguments. + +### The _ARGV_ global variable {#the-argv-global-variable} + +* Since version: 2.6.0 +* Available in scripts: yes +* Available in functions: no + +The _ARGV_ global variable is available only in [ephemeral scripts]({{< relref "/develop/interact/programmability/eval-intro" >}}). +It is pre-populated with all regular input arguments. + +## _redis_ object {#redis_object} + +* Since version: 2.6.0 +* Available in scripts: yes +* Available in functions: yes + +The Redis Lua execution context always provides a singleton instance of an object named _redis_. +The _redis_ instance enables the script to interact with the Redis server that's running it. +Following is the API provided by the _redis_ object instance. + +### `redis.call(command [,arg...])` {#redis.call} + +* Since version: 2.6.0 +* Available in scripts: yes +* Available in functions: yes + +The `redis.call()` function calls a given Redis command and returns its reply. +Its inputs are the command and arguments, and once called, it executes the command in Redis and returns the reply. + +For example, we can call the [`ECHO`]({{< relref "/commands/echo" >}}) command from a script and return its reply like so: + +```lua +return redis.call('ECHO', 'Echo, echo... eco... o...') +``` + +If and when `redis.call()` triggers a runtime exception, the raw exception is raised back to the user as an error, automatically. +Therefore, attempting to execute the following ephemeral script will fail and generate a runtime exception because [`ECHO`]({{< relref "/commands/echo" >}}) accepts exactly one argument: + +```lua +redis> EVAL "return redis.call('ECHO', 'Echo,', 'echo... ', 'eco... ', 'o...')" 0 +(error) ERR Wrong number of args calling Redis command from script script: b0345693f4b77517a711221050e76d24ae60b7f7, on @user_script:1. +``` + +Note that the call can fail due to various reasons, see [Execution under low memory conditions]({{< relref "/develop/interact/programmability/eval-intro#execution-under-low-memory-conditions" >}}) and [Script flags](#script_flags) + +To handle Redis runtime errors use `redis.pcall()` instead. 
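
Before moving on to `redis.pcall()`, here is one more illustrative use of `redis.call()` together with the _KEYS_ and _ARGV_ globals: a hypothetical compare-and-set sketch in which the expected and new values are passed as regular arguments (the key and argument layout is made up for the example):

```lua
-- Invoked as: EVAL "<this script>" 1 <key> <expected value> <new value>
-- Updates the key only if it currently holds the expected value.
if redis.call('GET', KEYS[1]) == ARGV[1] then
    redis.call('SET', KEYS[1], ARGV[2])
    return 1
end
return 0
```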
+ +### `redis.pcall(command [,arg...])` {#redis.pcall} + +* Since version: 2.6.0 +* Available in scripts: yes +* Available in functions: yes + +This function enables handling runtime errors raised by the Redis server. +The `redis.pcall()` function behaves exactly like [`redis.call()`](#redis.call), except that it: + +* Always returns a reply. +* Never throws a runtime exception, and returns in its stead a [`redis.error_reply`](#redis.error_reply) in case that a runtime exception is thrown by the server. + +The following demonstrates how to use `redis.pcall()` to intercept and handle runtime exceptions from within the context of an ephemeral script. + +```lua +local reply = redis.pcall('ECHO', unpack(ARGV)) +if reply['err'] ~= nil then + -- Handle the error sometime, but for now just log it + redis.log(redis.LOG_WARNING, reply['err']) + reply['err'] = 'ERR Something is wrong, but no worries, everything is under control' +end +return reply +``` + +Evaluating this script with more than one argument will return: + +``` +redis> EVAL "..." 0 hello world +(error) ERR Something is wrong, but no worries, everything is under control +``` + +### `redis.error_reply(x)` {#redis.error_reply} + +* Since version: 2.6.0 +* Available in scripts: yes +* Available in functions: yes + +This is a helper function that returns an [error reply]({{< relref "develop/reference/protocol-spec/#simple-errors" >}}). +The helper accepts a single string argument and returns a Lua table with the _err_ field set to that string. + +The outcome of the following code is that _error1_ and _error2_ are identical for all intents and purposes: + +```lua +local text = 'ERR My very special error' +local reply1 = { err = text } +local reply2 = redis.error_reply(text) +``` + +Therefore, both forms are valid as means for returning an error reply from scripts: + +``` +redis> EVAL "return { err = 'ERR My very special table error' }" 0 +(error) ERR My very special table error +redis> EVAL "return redis.error_reply('ERR My very special reply error')" 0 +(error) ERR My very special reply error +``` + +For returning Redis status replies refer to [`redis.status_reply()`](#redis.status_reply). +Refer to the [Data type conversion](#data-type-conversion) for returning other response types. + +**Note:** +By convention, Redis uses the first word of an error string as a unique error code for specific errors or `ERR` for general-purpose errors. +Scripts are advised to follow this convention, as shown in the example above, but this is not mandatory. + +### `redis.status_reply(x)` {#redis.status_reply} + +* Since version: 2.6.0 +* Available in scripts: yes +* Available in functions: yes + +This is a helper function that returns a [simple string reply]({{< relref "develop/reference/protocol-spec#simple-strings" >}}). +"OK" is an example of a standard Redis status reply. +The Lua API represents status replies as tables with a single field, _ok_, set with a simple status string. + +The outcome of the following code is that _status1_ and _status2_ are identical for all intents and purposes: + +```lua +local text = 'Frosty' +local status1 = { ok = text } +local status2 = redis.status_reply(text) +``` + +Therefore, both forms are valid as means for returning status replies from scripts: + +``` +redis> EVAL "return { ok = 'TICK' }" 0 +TICK +redis> EVAL "return redis.status_reply('TOCK')" 0 +TOCK +``` + +For returning Redis error replies refer to [`redis.error_reply()`](#redis.error_reply). 
+
Refer to the [Data type conversion](#data-type-conversion) for returning other response types.

### `redis.sha1hex(x)` {#redis.sha1hex}

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: yes

This function returns the SHA1 hexadecimal digest of its single string argument.

You can, for example, obtain the empty string's SHA1 digest:

```
redis> EVAL "return redis.sha1hex('')" 0
"da39a3ee5e6b4b0d3255bfef95601890afd80709"
```

### `redis.log(level, message)` {#redis.log}

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: yes

This function writes to the Redis server log.

It expects two input arguments: the log level and a message.
The message is a string to write to the log file.
Log level can be one of these:

* `redis.LOG_DEBUG`
* `redis.LOG_VERBOSE`
* `redis.LOG_NOTICE`
* `redis.LOG_WARNING`

These levels map to the server's log levels.
The log only records messages equal or greater in level than the server's `loglevel` configuration directive.

The following snippet:

```lua
redis.log(redis.LOG_WARNING, 'Something is terribly wrong')
```

will produce a line similar to the following in your server's log:

```
[32343] 22 Mar 15:21:39 # Something is terribly wrong
```

### `redis.setresp(x)` {#redis.setresp}

* Since version: 6.0.0
* Available in scripts: yes
* Available in functions: yes

This function allows the executing script to switch between [Redis Serialization Protocol (RESP)]({{< relref "/develop/reference/protocol-spec" >}}) versions for the replies returned by [`redis.call()`](#redis.call) and [`redis.pcall()`](#redis.pcall).
It expects a single numerical argument as the protocol's version.
The default protocol version is _2_, but it can be switched to version _3_.

Here's an example of switching to RESP3 replies:

```lua
redis.setresp(3)
```

Please refer to the [Data type conversion](#data-type-conversion) for more information about type conversions.

### `redis.set_repl(x)` {#redis.set_repl}

* Since version: 3.2.0
* Available in scripts: yes
* Available in functions: no

**Note:**
this feature is only available when script effects replication is employed.
Calling it when using verbatim script replication will result in an error.
As of Redis version 2.6.0, scripts were replicated verbatim, meaning that the scripts' source code was sent for execution by replicas and stored in the AOF.
An alternative replication mode added in version 3.2.0 allows replicating only the scripts' effects.
As of Redis version 7.0, verbatim script replication is no longer supported, and the only replication mode available is script effects replication.

**Warning:**
this is an advanced feature. Misuse can cause damage by violating the contract that binds the Redis master, its replicas, and AOF contents to hold the same logical content.

This function allows a script to assert control over how its effects are propagated to replicas and the AOF afterward.
A script's effects are the Redis write commands that it calls.

By default, all write commands that a script executes are replicated.
Sometimes, however, better control over this behavior can be helpful.
This can be the case, for example, when storing intermediate values in the master alone.

Consider a script that intersects two sets and stores the result in a temporary key with [`SUNIONSTORE`]({{< relref "/commands/sunionstore" >}}).
+
It then picks five random elements ([`SRANDMEMBER`]({{< relref "/commands/srandmember" >}})) from the intersection and stores ([`SADD`]({{< relref "/commands/sadd" >}})) them in another set.
Finally, before returning, it deletes the temporary key that stores the intersection of the two source sets.

In this case, only the new set with its five randomly-chosen elements needs to be replicated.
Replicating the [`SUNIONSTORE`]({{< relref "/commands/sunionstore" >}}) command and the [`DEL`]({{< relref "/commands/del" >}})ition of the temporary key is unnecessary and wasteful.

The `redis.set_repl()` function instructs the server how to treat subsequent write commands in terms of replication.
It accepts a single input argument that can only be one of the following:

* `redis.REPL_ALL`: replicates the effects to the AOF and replicas.
* `redis.REPL_AOF`: replicates the effects to the AOF alone.
* `redis.REPL_REPLICA`: replicates the effects to the replicas alone.
* `redis.REPL_SLAVE`: same as `REPL_REPLICA`, maintained for backward compatibility.
* `redis.REPL_NONE`: disables effect replication entirely.

By default, the scripting engine is initialized to the `redis.REPL_ALL` setting when a script begins its execution.
You can call the `redis.set_repl()` function at any time during the script's execution to switch between the different replication modes.

A simple example follows:

```lua
redis.replicate_commands() -- Enable effects replication in versions lower than Redis v7.0
redis.call('SET', KEYS[1], ARGV[1])
redis.set_repl(redis.REPL_NONE)
redis.call('SET', KEYS[2], ARGV[2])
redis.set_repl(redis.REPL_ALL)
redis.call('SET', KEYS[3], ARGV[3])
```

If you run this script by calling `EVAL "..." 3 A B C 1 2 3`, the result will be that only the keys _A_ and _C_ are created on the replicas and AOF.

### `redis.replicate_commands()` {#redis.replicate_commands}

* Since version: 3.2.0
* Until version: 7.0.0
* Available in scripts: yes
* Available in functions: no

This function switches the script's replication mode from verbatim replication to effects replication.
You can use it to override the default verbatim script replication mode used by Redis until version 7.0.

**Note:**
as of Redis v7.0, verbatim script replication is no longer supported.
The default, and only script replication mode supported, is script effects' replication.
For more information, please refer to [`Replicating commands instead of scripts`]({{< relref "/develop/interact/programmability/eval-intro#replicating-commands-instead-of-scripts" >}})

### `redis.breakpoint()` {#redis.breakpoint}

* Since version: 3.2.0
* Available in scripts: yes
* Available in functions: no

This function triggers a breakpoint when using the [Redis Lua debugger]({{< relref "/develop/interact/programmability/lua-debugging" >}}).

### `redis.debug(x)` {#redis.debug}

* Since version: 3.2.0
* Available in scripts: yes
* Available in functions: no

This function prints its argument in the [Redis Lua debugger]({{< relref "/develop/interact/programmability/lua-debugging" >}}) console.

### `redis.acl_check_cmd(command [,arg...])` {#redis.acl_check_cmd}

* Since version: 7.0.0
* Available in scripts: yes
* Available in functions: yes

This function is used for checking if the current user running the script has [ACL]({{< relref "/operate/oss_and_stack/management/security/acl" >}}) permissions to execute the given command with the given arguments.
+ +The return value is a boolean `true` in case the current user has permissions to execute the command (via a call to [redis.call](#redis.call) or [redis.pcall](#redis.pcall)) or `false` in case they don't. + +The function will raise an error if the passed command or its arguments are invalid. + +### `redis.register_function` {#redis.register_function} + +* Since version: 7.0.0 +* Available in scripts: no +* Available in functions: yes + +This function is only available from the context of the [`FUNCTION LOAD`]({{< relref "/commands/function-load" >}}) command. +When called, it registers a function to the loaded library. +The function can be called either with positional or named arguments. + +#### positional arguments: `redis.register_function(name, callback)` {#redis.register_function_pos_args} +The first argument to `redis.register_function` is a Lua string representing the function name. +The second argument to `redis.register_function` is a Lua function. + +Usage example: + +``` +redis> FUNCTION LOAD "#!lua name=mylib\n redis.register_function('noop', function() end)" +``` + +#### Named arguments: `redis.register_function{function_name=name, callback=callback, flags={flag1, flag2, ..}, description=description}` {#redis.register_function_named_args} + +The named arguments variant accepts the following arguments: + +* _function\_name_: the function's name. +* _callback_: the function's callback. +* _flags_: an array of strings, each a function flag (optional). +* _description_: function's description (optional). + +Both _function\_name_ and _callback_ are mandatory. + +Usage example: + +``` +redis> FUNCTION LOAD "#!lua name=mylib\n redis.register_function{function_name='noop', callback=function() end, flags={ 'no-writes' }, description='Does nothing'}" +``` + +#### Script flags {#script_flags} + +**Important:** +Use script flags with care, which may negatively impact if misused. +Note that the default for Eval scripts are different than the default for functions that are mentioned below, see [Eval Flags]({{< relref "develop/interact/programmability/eval-intro#eval-flags" >}}) + +When you register a function or load an Eval script, the server does not know how it accesses the database. +By default, Redis assumes that all scripts read and write data. +This results in the following behavior: + +1. They can read and write data. +1. They can run in cluster mode, and are not able to run commands accessing keys of different hash slots. +1. Execution against a stale replica is denied to avoid inconsistent reads. +1. Execution under low memory is denied to avoid exceeding the configured threshold. + +You can use the following flags and instruct the server to treat the scripts' execution differently: + +* `no-writes`: this flag indicates that the script only reads data but never writes. + + By default, Redis will deny the execution of flagged scripts (Functions and Eval scripts with [shebang]({{< relref "/develop/interact/programmability/eval-intro#eval-flags" >}})) against read-only replicas, as they may attempt to perform writes. + Similarly, the server will not allow calling scripts with [`FCALL_RO`]({{< relref "/commands/fcall_ro" >}}) / [`EVAL_RO`]({{< relref "/commands/eval_ro" >}}). + Lastly, when data persistence is at risk due to a disk error, execution is blocked as well. + + Using this flag allows executing the script: + 1. With [`FCALL_RO`]({{< relref "/commands/fcall_ro" >}}) / [`EVAL_RO`]({{< relref "/commands/eval_ro" >}}) + 2. On read-only replicas. + 3. 
Even if there's a disk error (Redis is unable to persist so it rejects writes). + 4. When over the memory limit since it implies the script doesn't increase memory consumption (see `allow-oom` below) + + However, note that the server will return an error if the script attempts to call a write command. + Also note that currently [`PUBLISH`]({{< relref "/commands/publish" >}}), [`SPUBLISH`]({{< relref "/commands/spublish" >}}) and [`PFCOUNT`]({{< relref "/commands/pfcount" >}}) are also considered write commands in scripts, because they could attempt to propagate commands to replicas and AOF file. + + For more information please refer to [Read-only scripts]({{< relref "develop/interact/programmability/#read-only_scripts" >}}) + +* `allow-oom`: use this flag to allow a script to execute when the server is out of memory (OOM). + + Unless used, Redis will deny the execution of flagged scripts (Functions and Eval scripts with [shebang]({{< relref "/develop/interact/programmability/eval-intro#eval-flags" >}})) when in an OOM state. + Furthermore, when you use this flag, the script can call any Redis command, including commands that aren't usually allowed in this state. + Specifying `no-writes` or using [`FCALL_RO`]({{< relref "/commands/fcall_ro" >}}) / [`EVAL_RO`]({{< relref "/commands/eval_ro" >}}) also implies the script can run in OOM state (without specifying `allow-oom`) + +* `allow-stale`: a flag that enables running the flagged scripts (Functions and Eval scripts with [shebang]({{< relref "/develop/interact/programmability/eval-intro#eval-flags" >}})) against a stale replica when the `replica-serve-stale-data` config is set to `no` . + + Redis can be set to prevent data consistency problems from using old data by having stale replicas return a runtime error. + For scripts that do not access the data, this flag can be set to allow stale Redis replicas to run the script. + Note however that the script will still be unable to execute any command that accesses stale data. + +* `no-cluster`: the flag causes the script to return an error in Redis cluster mode. + + Redis allows scripts to be executed both in standalone and cluster modes. + Setting this flag prevents executing the script against nodes in the cluster. + +* `allow-cross-slot-keys`: The flag that allows a script to access keys from multiple slots. + + Redis typically prevents any single command from accessing keys that hash to multiple slots. + This flag allows scripts to break this rule and access keys within the script that access multiple slots. + Declared keys to the script are still always required to hash to a single slot. + Accessing keys from multiple slots is discouraged as applications should be designed to only access keys from a single slot at a time, allowing slots to move between Redis servers. + + This flag has no effect when cluster mode is disabled. + +Please refer to [Function Flags]({{< relref "develop/interact/programmability/functions-intro#function-flags" >}}) and [Eval Flags]({{< relref "develop/interact/programmability/eval-intro#eval-flags" >}}) for a detailed example. + +### `redis.REDIS_VERSION` {#redis.redis_version} + +* Since version: 7.0.0 +* Available in scripts: yes +* Available in functions: yes + +Returns the current Redis server version as a Lua string. +The reply's format is `MM.mm.PP`, where: + +* **MM:** is the major version. +* **mm:** is the minor version. +* **PP:** is the patch level. 
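
For example, a script can use this value for simple reporting. The following is a hedged, illustrative sketch (the log text is arbitrary):

```lua
-- Writes a line such as "Running on Redis 7.4.0" to the server log.
redis.log(redis.LOG_NOTICE, 'Running on Redis ' .. redis.REDIS_VERSION)
```

For numeric comparisons, the `redis.REDIS_VERSION_NUM` field described next is usually more convenient than parsing the version string.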
+ +### `redis.REDIS_VERSION_NUM` {#redis.redis_version_num} + +* Since version: 7.0.0 +* Available in scripts: yes +* Available in functions: yes + +Returns the current Redis server version as a number. +The reply is a hexadecimal value structured as `0x00MMmmPP`, where: + +* **MM:** is the major version. +* **mm:** is the minor version. +* **PP:** is the patch level. + +## Data type conversion + +Unless a runtime exception is raised, `redis.call()` and `redis.pcall()` return the reply from the executed command to the Lua script. +Redis' replies from these functions are converted automatically into Lua's native data types. + +Similarly, when a Lua script returns a reply with the `return` keyword, +that reply is automatically converted to Redis' protocol. + +Put differently; there's a one-to-one mapping between Redis' replies and Lua's data types and a one-to-one mapping between Lua's data types and the [Redis Protocol]({{< relref "/develop/reference/protocol-spec" >}}) data types. +The underlying design is such that if a Redis type is converted into a Lua type and converted back into a Redis type, the result is the same as the initial value. + +Type conversion from Redis protocol replies (i.e., the replies from `redis.call()` and `redis.pcall()`) to Lua data types depends on the Redis Serialization Protocol version used by the script. +The default protocol version during script executions is RESP2. +The script may switch the replies' protocol versions by calling the `redis.setresp()` function. + +Type conversion from a script's returned Lua data type depends on the user's choice of protocol (see the [`HELLO`]({{< relref "/commands/hello" >}}) command). + +The following sections describe the type conversion rules between Lua and Redis per the protocol's version. 
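+
+As a quick, illustrative taste of these rules before the detailed tables, the following call reads the _ok_ field of the Lua table produced from [`SET`]({{< relref "/commands/set" >}})'s status reply (the key name is hypothetical):
+
+```
+redis> EVAL "local reply = redis.call('set', KEYS[1], 'some-value'); return reply.ok" 1 some-key
+"OK"
+```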
+
+### RESP2 to Lua type conversion
+
+The following type conversion rules apply to the execution's context by default as well as after calling `redis.setresp(2)`:
+
+* [RESP2 integer reply]({{< relref "develop/reference/protocol-spec#integers" >}}) -> Lua number
+* [RESP2 bulk string reply]({{< relref "develop/reference/protocol-spec#bulk-strings" >}}) -> Lua string
+* [RESP2 array reply]({{< relref "develop/reference/protocol-spec#arrays" >}}) -> Lua table (may have other Redis data types nested)
+* [RESP2 status reply]({{< relref "develop/reference/protocol-spec#simple-strings" >}}) -> Lua table with a single _ok_ field containing the status string
+* [RESP2 error reply]({{< relref "develop/reference/protocol-spec#simple-errors" >}}) -> Lua table with a single _err_ field containing the error string
+* [RESP2 null bulk reply]({{< relref "develop/reference/protocol-spec#bulk-strings" >}}) and [RESP2 null multi-bulk reply]({{< relref "develop/reference/protocol-spec#arrays" >}}) -> Lua false boolean type
+
+### Lua to RESP2 type conversion
+
+The following type conversion rules apply by default as well as after the user has called `HELLO 2`:
+
+* Lua number -> [RESP2 integer reply]({{< relref "develop/reference/protocol-spec#integers" >}}) (the number is converted into an integer)
+* Lua string -> [RESP2 bulk string reply]({{< relref "develop/reference/protocol-spec#bulk-strings" >}})
+* Lua table (indexed, non-associative array) -> [RESP2 array reply]({{< relref "develop/reference/protocol-spec#arrays" >}}) (truncated at the first Lua `nil` value encountered in the table, if any)
+* Lua table with a single _ok_ field -> [RESP2 status reply]({{< relref "develop/reference/protocol-spec#simple-strings" >}})
+* Lua table with a single _err_ field -> [RESP2 error reply]({{< relref "develop/reference/protocol-spec#simple-errors" >}})
+* Lua boolean false -> [RESP2 null bulk reply]({{< relref "develop/reference/protocol-spec#bulk-strings" >}})
+
+There is an additional Lua-to-Redis conversion rule that has no corresponding Redis-to-Lua conversion rule:
+
+* Lua Boolean `true` -> [RESP2 integer reply]({{< relref "develop/reference/protocol-spec#integers" >}}) with value of 1.
+
+There are three additional rules to note about converting Lua to Redis data types:
+
+* Lua has a single numerical type, Lua numbers.
+  There is no distinction between integers and floats.
+  So we always convert Lua numbers into integer replies, removing the decimal part of the number, if any.
+  **If you want to return a Lua float, it should be returned as a string**,
+  exactly like Redis itself does (see, for instance, the [`ZSCORE`]({{< relref "/commands/zscore" >}}) command).
+* There's [no simple way to have nils inside Lua arrays](http://www.lua.org/pil/19.1.html) due
+  to Lua's table semantics.
+  Therefore, when Redis converts a Lua array to RESP, the conversion stops when it encounters a Lua `nil` value.
+* When a Lua table is an associative array that contains keys and their respective values, the converted Redis reply will **not** include them.
+
+Lua to RESP2 type conversion examples:
+
+```
+redis> EVAL "return 10" 0
+(integer) 10
+
+redis> EVAL "return { 1, 2, { 3, 'Hello World!' } }" 0
+1) (integer) 1
+2) (integer) 2
+3) 1) (integer) 3
+   2) "Hello World!"
+
+redis> EVAL "return redis.call('get','foo')" 0
+"bar"
+```
+
+The last example demonstrates receiving and returning the exact return value of `redis.call()` (or `redis.pcall()`) in Lua as it would be returned if the command had been called directly.
+
+The following example shows how floats and arrays that contain nils and keys are handled:
+
+```
+redis> EVAL "return { 1, 2, 3.3333, somekey = 'somevalue', 'foo', nil , 'bar' }" 0
+1) (integer) 1
+2) (integer) 2
+3) (integer) 3
+4) "foo"
+```
+
+As you can see, the float value of _3.3333_ gets converted to an integer _3_, the _somekey_ key and its value are omitted, and the string "bar" isn't returned as there is a `nil` value that precedes it.
+
+### RESP3 to Lua type conversion
+
+[RESP3](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md) is a newer version of the [Redis Serialization Protocol]({{< relref "/develop/reference/protocol-spec" >}}).
+It is available as an opt-in choice as of Redis v6.0.
+
+An executing script may call the [`redis.setresp`](#redis.setresp) function during its execution and switch the protocol version that's used for returning replies from Redis' commands (that can be invoked via [`redis.call()`](#redis.call) or [`redis.pcall()`](#redis.pcall)).
+
+Once Redis' replies are in RESP3 protocol, all of the [RESP2 to Lua conversion](#resp2-to-lua-type-conversion) rules apply, with the following additions:
+
+* [RESP3 map reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#map-type) -> Lua table with a single _map_ field containing a Lua table representing the fields and values of the map.
+* [RESP3 set reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#set-reply) -> Lua table with a single _set_ field containing a Lua table representing the elements of the set as fields, each with the Lua Boolean value of `true`.
+* [RESP3 null](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#null-reply) -> Lua `nil`.
+* [RESP3 true reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#boolean-reply) -> Lua true boolean value.
+* [RESP3 false reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#boolean-reply) -> Lua false boolean value.
+* [RESP3 double reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#double-type) -> Lua table with a single _double_ field containing a Lua number representing the double value.
+* [RESP3 big number reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#big-number-type) -> Lua table with a single _big_number_ field containing a Lua string representing the big number value.
+* [Redis verbatim string reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#verbatim-string-type) -> Lua table with a single _verbatim_string_ field containing a Lua table with two fields, _string_ and _format_, representing the verbatim string and its format, respectively.
+
+**Note:**
+the RESP3 [big number](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#big-number-type) and [verbatim strings](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#verbatim-string-type) replies are only supported as of Redis v7.0 and greater.
+Also, presently, RESP3's [attributes](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#attribute-type), [streamed strings](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#streamed-strings) and [streamed aggregate data types](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#streamed-aggregate-data-types) are not supported by the Redis Lua API.
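+
+For illustration, the following script switches to RESP3 and reads the _map_ field of the converted [`CONFIG GET`]({{< relref "/commands/config-get" >}}) reply (the output assumes the default `maxmemory` value of `0`):
+
+```
+redis> EVAL "redis.setresp(3); local reply = redis.call('config', 'get', 'maxmemory'); return reply.map['maxmemory']" 0
+"0"
+```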
+
+### Lua to RESP3 type conversion
+
+Regardless of the script's choice of protocol version set for replies with the [`redis.setresp()` function](#redis.setresp) when it calls `redis.call()` or `redis.pcall()`, the user may opt-in to using RESP3 (with the `HELLO 3` command) for the connection.
+Although the default protocol for incoming client connections is RESP2, the script should honor the user's preference and return adequately-typed RESP3 replies, so the following rules apply on top of those specified in the [Lua to RESP2 type conversion](#lua-to-resp2-type-conversion) section when that is the case.
+
+* Lua Boolean -> [RESP3 Boolean reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#boolean-reply) (note that this is a change compared to RESP2, in which returning a Boolean Lua `true` returned the number 1 to the Redis client, and returning a `false` used to return a `null`).
+* Lua table with a single _map_ field set to an associative Lua table -> [RESP3 map reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#map-type).
+* Lua table with a single _set_ field set to an associative Lua table -> [RESP3 set reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#set-type). Values can be set to anything and are discarded anyway.
+* Lua table with a single _double_ field set to a Lua number -> [RESP3 double reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#double-type).
+* Lua nil -> [RESP3 null](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#null-reply).
+
+However, if the connection is set to use the RESP2 protocol, and even if the script replies with RESP3-typed responses, Redis will automatically perform a RESP3 to RESP2 conversion of the reply as is the case for regular commands.
+That means, for example, that returning the RESP3 map type to a RESP2 connection will result in the reply being converted to a flat RESP2 array that consists of alternating field names and their values, rather than a RESP3 map.
+
+## Additional notes about scripting
+
+### Using `SELECT` inside scripts
+
+You can call the [`SELECT`]({{< relref "/commands/select" >}}) command from your Lua scripts, like you can with any normal client connection.
+However, one subtle aspect of the behavior changed between Redis versions 2.8.11 and 2.8.12.
+Prior to Redis version 2.8.12, the database selected by the Lua script was *set as the current database* for the client connection that had called it.
+As of Redis version 2.8.12, the database selected by the Lua script only affects the execution context of the script, and does not modify the database that's selected by the client calling the script.
+This semantic change between patch level releases was required since the old behavior was inherently incompatible with Redis' replication and introduced bugs.
+
+## Runtime libraries
+
+The Redis Lua runtime context always comes with several pre-imported libraries.
+ +The following [standard Lua libraries](https://www.lua.org/manual/5.1/manual.html#5) are available to use: + +* The [_String Manipulation (string)_ library](https://www.lua.org/manual/5.1/manual.html#5.4) +* The [_Table Manipulation (table)_ library](https://www.lua.org/manual/5.1/manual.html#5.5) +* The [_Mathematical Functions (math)_ library](https://www.lua.org/manual/5.1/manual.html#5.6) +* The [_Operating System Facilities (os)_ library](#os-library) + +In addition, the following external libraries are loaded and accessible to scripts: + +* The [_struct_ library](#struct-library) +* The [_cjson_ library](#cjson-library) +* The [_cmsgpack_ library](#cmsgpack-library) +* The [_bitop_ library](#bitop-library) + +### _os_ library {#os-library} + +* Since version: 7.4 +* Available in scripts: yes +* Available in functions: yes + +_os_ provides a set of functions for dealing with date, time, and system commands. +More details can be found in the [Operating System Facilities](https://www.lua.org/manual/5.1/manual.html#5.8). +Note that for sandbox security, currently only the following os functions is exposed: + +* `os.clock()` + +### _struct_ library {#struct-library} + +* Since version: 2.6.0 +* Available in scripts: yes +* Available in functions: yes + +_struct_ is a library for packing and unpacking C-like structures in Lua. +It provides the following functions: + +* [`struct.pack()`](#struct.pack) +* [`struct.unpack()`](#struct.unpack) +* [`struct.size()`](#struct.size) + +All of _struct_'s functions expect their first argument to be a [format string](#struct-formats). + +#### _struct_ formats {#struct-formats} + +The following are valid format strings for _struct_'s functions: + +* `>`: big endian +* `<`: little endian +* `![num]`: alignment +* `x`: padding +* `b/B`: signed/unsigned byte +* `h/H`: signed/unsigned short +* `l/L`: signed/unsigned long +* `T`: size_t +* `i/In`: signed/unsigned integer with size _n_ (defaults to the size of int) +* `cn`: sequence of _n_ chars (from/to a string); when packing, n == 0 means the + whole string; when unpacking, n == 0 means use the previously read number as + the string's length. +* `s`: zero-terminated string +* `f`: float +* `d`: double +* ` ` (space): ignored + +#### `struct.pack(x)` {#struct.pack} + +This function returns a struct-encoded string from values. +It accepts a [_struct_ format string](#struct-formats) as its first argument, followed by the values that are to be encoded. + +Usage example: + +``` +redis> EVAL "return struct.pack('HH', 1, 2)" 0 +"\x01\x00\x02\x00" +``` + +#### `struct.unpack(x)` {#struct.unpack} + +This function returns the decoded values from a struct. +It accepts a [_struct_ format string](#struct-formats) as its first argument, followed by encoded struct's string. + +Usage example: + +``` +redis> EVAL "return { struct.unpack('HH', ARGV[1]) }" 0 "\x01\x00\x02\x00" +1) (integer) 1 +2) (integer) 2 +3) (integer) 5 +``` + +#### `struct.size(x)` {#struct.size} + +This function returns the size, in bytes, of a struct. +It accepts a [_struct_ format string](#struct-formats) as its only argument. + +Usage example: + +``` +redis> EVAL "return struct.size('HH')" 0 +(integer) 4 +``` + +### _cjson_ library {#cjson-library} + +* Since version: 2.6.0 +* Available in scripts: yes +* Available in functions: yes + +The _cjson_ library provides fast [JSON](https://json.org) encoding and decoding from Lua. +It provides these functions. 
+ +#### `cjson.encode(x)` {#cjson.encode} + +This function returns a JSON-encoded string for the Lua data type provided as its argument. + +Usage example: + +``` +redis> EVAL "return cjson.encode({ ['foo'] = 'bar' })" 0 +"{\"foo\":\"bar\"}" +``` + +#### `cjson.decode(x)` {#cjson.decode()} + +This function returns a Lua data type from the JSON-encoded string provided as its argument. + +Usage example: + +``` +redis> EVAL "return cjson.decode(ARGV[1])['foo']" 0 '{"foo":"bar"}' +"bar" +``` + +### _cmsgpack_ library {#cmsgpack-library} + +* Since version: 2.6.0 +* Available in scripts: yes +* Available in functions: yes + +The _cmsgpack_ library provides fast [MessagePack](https://msgpack.org/index.html) encoding and decoding from Lua. +It provides these functions. + +#### `cmsgpack.pack(x)` {#cmsgpack.pack()} + +This function returns the packed string encoding of the Lua data type it is given as an argument. + +Usage example: + +``` +redis> EVAL "return cmsgpack.pack({'foo', 'bar', 'baz'})" 0 +"\x93\xa3foo\xa3bar\xa3baz" +``` + +#### `cmsgpack.unpack(x)` {#cmsgpack.unpack()} + +This function returns the unpacked values from decoding its input string argument. + +Usage example: + +``` +redis> EVAL "return cmsgpack.unpack(ARGV[1])" 0 "\x93\xa3foo\xa3bar\xa3baz" +1) "foo" +2) "bar" +3) "baz" +``` + +### _bit_ library {#bitop-library} + +* Since version: 2.8.18 +* Available in scripts: yes +* Available in functions: yes + +The _bit_ library provides bitwise operations on numbers. +Its documentation resides at [Lua BitOp documentation](http://bitop.luajit.org/api.html) +It provides the following functions. + +#### `bit.tobit(x)` {#bit.tobit()} + +Normalizes a number to the numeric range for bit operations and returns it. + +Usage example: + +``` +redis> EVAL 'return bit.tobit(1)' 0 +(integer) 1 +``` + +#### `bit.tohex(x [,n])` {#bit.tohex()} + +Converts its first argument to a hex string. The number of hex digits is given by the absolute value of the optional second argument. + +Usage example: + +``` +redis> EVAL 'return bit.tohex(422342)' 0 +"000671c6" +``` + +#### `bit.bnot(x)` {#bit.bnot()} + +Returns the bitwise **not** of its argument. + +#### `bit.bnot(x)` `bit.bor(x1 [,x2...])`, `bit.band(x1 [,x2...])` and `bit.bxor(x1 [,x2...])` {#bit.ops} + +Returns either the bitwise **or**, bitwise **and**, or bitwise **xor** of all of its arguments. +Note that more than two arguments are allowed. + +Usage example: + +``` +redis> EVAL 'return bit.bor(1,2,4,8,16,32,64,128)' 0 +(integer) 255 +``` + +#### `bit.lshift(x, n)`, `bit.rshift(x, n)` and `bit.arshift(x, n)` {#bit.shifts} + +Returns either the bitwise logical **left-shift**, bitwise logical **right-shift**, or bitwise **arithmetic right-shift** of its first argument by the number of bits given by the second argument. + +#### `bit.rol(x, n)` and `bit.ror(x, n)` {#bit.ro} + +Returns either the bitwise **left rotation**, or bitwise **right rotation** of its first argument by the number of bits given by the second argument. +Bits shifted out on one side are shifted back in on the other side. + +#### `bit.bswap(x)` {#bit.bswap()} + +Swaps the bytes of its argument and returns it. +This can be used to convert little-endian 32-bit numbers to big-endian 32-bit numbers and vice versa. 
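+
+Usage example (illustrative):
+
+```
+redis> EVAL 'return bit.tohex(bit.bswap(0x12345678))' 0
+"78563412"
+```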
+--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Extending Redis with Lua and Redis Functions + + ' +linkTitle: Programmability +title: Redis programmability +weight: 20 +--- + +Redis provides a programming interface that lets you execute custom scripts on the server itself. In Redis 7 and beyond, you can use [Redis Functions]({{< relref "/develop/interact/programmability/functions-intro" >}}) to manage and run your scripts. In Redis 6.2 and below, you use [Lua scripting with the EVAL command]({{< relref "/develop/interact/programmability/eval-intro" >}}) to program the server. + +## Background + +Redis is, by [definition](https://github.com/redis/redis/blob/unstable/MANIFESTO#L7), a _"domain-specific language for abstract data types"_. +The language that Redis speaks consists of its [commands]({{< relref "/commands" >}}). +Most the commands specialize at manipulating core [data types]({{< relref "/develop/data-types" >}}) in different ways. +In many cases, these commands provide all the functionality that a developer requires for managing application data in Redis. + +The term **programmability** in Redis means having the ability to execute arbitrary user-defined logic by the server. +We refer to such pieces of logic as **scripts**. +In our case, scripts enable processing the data where it lives, a.k.a _data locality_. +Furthermore, the responsible embedding of programmatic workflows in the Redis server can help in reducing network traffic and improving overall performance. +Developers can use this capability for implementing robust, application-specific APIs. +Such APIs can encapsulate business logic and maintain a data model across multiple keys and different data structures. + +User scripts are executed in Redis by an embedded, sandboxed scripting engine. +Presently, Redis supports a single scripting engine, the [Lua 5.1](https://www.lua.org/) interpreter. + +Please refer to the [Redis Lua API Reference]({{< relref "develop/interact/programmability/lua-api" >}}) page for complete documentation. + +## Running scripts + +Redis provides two means for running scripts. + +Firstly, and ever since Redis 2.6.0, the [`EVAL`]({{< relref "/commands/eval" >}}) command enables running server-side scripts. +Eval scripts provide a quick and straightforward way to have Redis run your scripts ad-hoc. +However, using them means that the scripted logic is a part of your application (not an extension of the Redis server). +Every applicative instance that runs a script must have the script's source code readily available for loading at any time. +That is because scripts are only cached by the server and are volatile. +As your application grows, this approach can become harder to develop and maintain. + +Secondly, added in v7.0, Redis Functions are essentially scripts that are first-class database elements. +As such, functions decouple scripting from application logic and enable independent development, testing, and deployment of scripts. +To use functions, they need to be loaded first, and then they are available for use by all connected clients. +In this case, loading a function to the database becomes an administrative deployment task (such as loading a Redis module, for example), which separates the script from the application. 
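+
+As a minimal, illustrative comparison of the two approaches (the library name `mylib`, the function name `hello`, and the messages are hypothetical):
+
+```
+redis> EVAL "return 'Hello from an Eval script'" 0
+"Hello from an Eval script"
+
+redis> FUNCTION LOAD "#!lua name=mylib\n redis.register_function('hello', function() return 'Hello from a function' end)"
+"mylib"
+
+redis> FCALL hello 0
+"Hello from a function"
+```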
+ +Please refer to the following pages for more information: + +* [Redis Eval Scripts]({{< relref "/develop/interact/programmability/eval-intro" >}}) +* [Redis Functions]({{< relref "/develop/interact/programmability/functions-intro" >}}) + +When running a script or a function, Redis guarantees its atomic execution. +The script's execution blocks all server activities during its entire time, similarly to the semantics of [transactions]({{< relref "/develop/interact/transactions" >}}). +These semantics mean that all of the script's effects either have yet to happen or had already happened. +The blocking semantics of an executed script apply to all connected clients at all times. + +Note that the potential downside of this blocking approach is that executing slow scripts is not a good idea. +It is not hard to create fast scripts because scripting's overhead is very low. +However, if you intend to use a slow script in your application, be aware that all other clients are blocked and can't execute any command while it is running. + +## Read-only scripts + +A read-only script is a script that only executes commands that don't modify any keys within Redis. +Read-only scripts can be executed either by adding the `no-writes` [flag]({{< relref "develop/interact/programmability/lua-api#script_flags" >}}) to the script or by executing the script with one of the read-only script command variants: [`EVAL_RO`]({{< relref "/commands/eval_ro" >}}), [`EVALSHA_RO`]({{< relref "/commands/evalsha_ro" >}}), or [`FCALL_RO`]({{< relref "/commands/fcall_ro" >}}). +They have the following properties: + +* They can always be executed on replicas. +* They can always be killed by the [`SCRIPT KILL`]({{< relref "/commands/script-kill" >}}) command. +* They never fail with OOM error when redis is over the memory limit. +* They are not blocked during write pauses, such as those that occur during coordinated failovers. +* They cannot execute any command that may modify the data set. +* Currently [`PUBLISH`]({{< relref "/commands/publish" >}}), [`SPUBLISH`]({{< relref "/commands/spublish" >}}) and [`PFCOUNT`]({{< relref "/commands/pfcount" >}}) are also considered write commands in scripts, because they could attempt to propagate commands to replicas and AOF file. + +In addition to the benefits provided by all read-only scripts, the read-only script commands have the following advantages: + +* They can be used to configure an ACL user to only be able to execute read-only scripts. +* Many clients also better support routing the read-only script commands to replicas for applications that want to use replicas for read scaling. + +#### Read-only script history + +Read-only scripts and read-only script commands were introduced in Redis 7.0 + +* Before Redis 7.0.1 [`PUBLISH`]({{< relref "/commands/publish" >}}), [`SPUBLISH`]({{< relref "/commands/spublish" >}}) and [`PFCOUNT`]({{< relref "/commands/pfcount" >}}) were not considered write commands in scripts +* Before Redis 7.0.1 the `no-writes` [flag]({{< relref "develop/interact/programmability/lua-api#script_flags" >}}) did not imply `allow-oom` +* Before Redis 7.0.1 the `no-writes` flag did not permit the script to run during write pauses. + + +The recommended approach is to use the standard scripting commands with the `no-writes` flag unless you need one of the previously mentioned features. + +## Sandboxed script context + +Redis places the engine that executes user scripts inside a sandbox. 
+The sandbox attempts to prevent accidental misuse and reduce potential threats from the server's environment. + +Scripts should never try to access the Redis server's underlying host systems, such as the file system, network, or attempt to perform any other system call other than those supported by the API. + +Scripts should operate solely on data stored in Redis and data provided as arguments to their execution. + +## Maximum execution time + +Scripts are subject to a maximum execution time (set by default to five seconds). +This default timeout is enormous since a script usually runs in less than a millisecond. +The limit is in place to handle accidental infinite loops created during development. + +It is possible to modify the maximum time a script can be executed with millisecond precision, +either via `redis.conf` or by using the [`CONFIG SET`]({{< relref "/commands/config-set" >}}) command. +The configuration parameter affecting max execution time is called `busy-reply-threshold`. + +When a script reaches the timeout threshold, it isn't terminated by Redis automatically. +Doing so would violate the contract between Redis and the scripting engine that ensures that scripts are atomic. +Interrupting the execution of a script has the potential of leaving the dataset with half-written changes. + +Therefore, when a script executes longer than the configured timeout, the following happens: + +* Redis logs that a script is running for too long. +* It starts accepting commands again from other clients but will reply with a BUSY error to all the clients sending normal commands. The only commands allowed in this state are [`SCRIPT KILL`]({{< relref "/commands/script-kill" >}}), [`FUNCTION KILL`]({{< relref "/commands/function-kill" >}}), and `SHUTDOWN NOSAVE`. +* It is possible to terminate a script that only executes read-only commands using the [`SCRIPT KILL`]({{< relref "/commands/script-kill" >}}) and [`FUNCTION KILL`]({{< relref "/commands/function-kill" >}}) commands. These commands do not violate the scripting semantic as no data was written to the dataset by the script yet. +* If the script had already performed even a single write operation, the only command allowed is `SHUTDOWN NOSAVE` that stops the server without saving the current data set on disk (basically, the server is aborted). +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'How the Redis server manages client connections + + ' +linkTitle: Client handling +title: Redis client handling +weight: 5 +--- + +This document provides information about how Redis handles clients at the network layer level: connections, timeouts, buffers, and other similar topics are covered here. + +The information contained in this document is **only applicable to Redis version 2.6 or greater**. + +## Accepting Client Connections + +Redis accepts clients connections on the configured TCP port and on the Unix socket if enabled. When a new client connection is accepted the following operations are performed: + +* The client socket is put in the non-blocking state since Redis uses multiplexing and non-blocking I/O. +* The `TCP_NODELAY` option is set in order to ensure that there are no delays to the connection. +* A *readable* file event is created so that Redis is able to collect the client queries as soon as new data is available to read on the socket. 
+ +After the client is initialized, Redis checks if it is already at the limit +configured for the number of simultaneous clients (configured using the `maxclients` configuration directive, see the next section of this document for further information). + +When Redis can't accept a new client connection because the maximum number of clients +has been reached, it tries to send an error to the client in order to +make it aware of this condition, closing the connection immediately. +The error message will reach the client even if the connection is +closed immediately by Redis because the new socket output buffer is usually +big enough to contain the error, so the kernel will handle transmission +of the error. + +## What Order are Client Requests Served In? + +The order is determined by a combination of the client socket file descriptor +number and order in which the kernel reports events, so the order should be +considered as unspecified. + +However, Redis does the following two things when serving clients: + +* It only performs a single `read()` system call every time there is something new to read from the client socket. This ensures that if we have multiple clients connected, and a few send queries at a high rate, other clients are not penalized and will not experience latency issues. +* However once new data is read from a client, all the queries contained in the current buffers are processed sequentially. This improves locality and does not need iterating a second time to see if there are clients that need some processing time. + +## Maximum Concurrent Connected Clients + +In Redis 2.4 there was a hard-coded limit for the maximum number of clients +that could be handled simultaneously. + +In Redis 2.6 and newer, this limit is configurable using the `maxclients` directive in `redis.conf`. The default is 10,000 clients. + +However, Redis checks with the kernel what the maximum number of file +descriptors that we are able to open is (the *soft limit* is checked). If the +limit is less than the maximum number of clients we want to handle, plus +32 (that is the number of file descriptors Redis reserves for internal uses), +then the maximum number of clients is updated to match the number +of clients it is *really able to handle* under the current operating system +limit. + +When `maxclients` is set to a number greater than Redis can support, a message is logged at startup: + +``` +$ ./redis-server --maxclients 100000 +[41422] 23 Jan 11:28:33.179 # Unable to set the max number of files limit to 100032 (Invalid argument), setting the max clients configuration to 10112. +``` + +When Redis is configured in order to handle a specific number of clients it +is a good idea to make sure that the operating system limit for the maximum +number of file descriptors per process is also set accordingly. + +Under Linux these limits can be set both in the current session and as a +system-wide setting with the following commands: + +* `ulimit -Sn 100000 # This will only work if hard limit is big enough.` +* `sysctl -w fs.file-max=100000` + +## Output Buffer Limits + +Redis needs to handle a variable-length output buffer for every client, since +a command can produce a large amount of data that needs to be transferred to the +client. + +However it is possible that a client sends more commands producing more output +to serve at a faster rate than that which Redis can send the existing output to the +client. 
This is especially true with Pub/Sub clients in case a client is not
+able to process new messages fast enough.
+
+Both conditions will cause the client output buffer to grow and consume
+more and more memory. For this reason, by default, Redis sets limits on the
+output buffer size for different kinds of clients. When the limit is reached,
+the client connection is closed and the event is logged in the Redis log file.
+
+There are two kinds of limits Redis uses:
+
+* The **hard limit** is a fixed limit that, when reached, makes Redis close the client connection as soon as possible.
+* The **soft limit** is a limit that depends on time: for instance, a soft limit of 32 megabytes per 10 seconds means that if the client has an output buffer bigger than 32 megabytes for 10 continuous seconds, the connection gets closed.
+
+Different kinds of clients have different default limits:
+
+* **Normal clients** have a default limit of 0, meaning no limit at all, because most normal clients use blocking implementations that send a single command and wait for the reply to be completely read before sending the next command, so it is never desirable to close the connection of a normal client.
+* **Pub/Sub clients** have a default hard limit of 32 megabytes and a soft limit of 8 megabytes per 60 seconds.
+* **Replicas** have a default hard limit of 256 megabytes and a soft limit of 64 megabytes per 60 seconds.
+
+It is possible to change the limit at runtime using the [`CONFIG SET`]({{< relref "/commands/config-set" >}}) command or in a permanent way using the Redis configuration file `redis.conf`. See the example `redis.conf` in the Redis distribution for more information about how to set the limit.
+
+## Query Buffer Hard Limit
+
+Every client is also subject to a query buffer limit. This is a non-configurable hard limit that will close the connection when the client query buffer (that is, the buffer we use to accumulate commands from the client) reaches 1 GB, and is actually only an extreme limit to avoid a server crash in case of client or server software bugs.
+
+## Client Eviction
+
+Redis is built to handle a very large number of client connections.
+Client connections tend to consume memory, and when there are many of them, the aggregate memory consumption can be extremely high, leading to data eviction or out-of-memory errors.
+These cases can be mitigated to an extent using [output buffer limits](#output-buffer-limits), but Redis allows us a more robust configuration to limit the aggregate memory used by all clients' connections.
+
+This mechanism is called **client eviction**, and it's essentially a safety mechanism that will disconnect clients once the aggregate memory usage of all clients is above a threshold.
+The mechanism first attempts to disconnect clients that use the most memory.
+It disconnects the minimal number of clients needed to return below the `maxmemory-clients` threshold.
+
+`maxmemory-clients` defines the maximum aggregate memory usage of all clients connected to Redis.
+The aggregation takes into account all the memory used by the client connections: the [query buffer](#query-buffer-hard-limit), the output buffer, and other intermediate buffers.
+
+Note that replica and master connections aren't affected by the client eviction mechanism. Therefore, such connections are never evicted.
+
+`maxmemory-clients` can be set permanently in the configuration file (`redis.conf`) or via the [`CONFIG SET`]({{< relref "/commands/config-set" >}}) command.
+This setting can either be 0 (meaning no limit), a size in bytes (possibly with `mb`/`gb` suffix), +or a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%` would mean 10% of the `maxmemory` configuration). + +The default setting is 0, meaning client eviction is turned off by default. +However, for any large production deployment, it is highly recommended to configure some non-zero `maxmemory-clients` value. +A value `5%`, for example, can be a good place to start. + +It is possible to flag a specific client connection to be excluded from the client eviction mechanism. +This is useful for control path connections. +If, for example, you have an application that monitors the server via the [`INFO`]({{< relref "/commands/info" >}}) command and alerts you in case of a problem, you might want to make sure this connection isn't evicted. +You can do so using the following command (from the relevant client's connection): + +[`CLIENT NO-EVICT`]({{< relref "/commands/client-no-evict" >}}) `on` + +And you can revert that with: + +[`CLIENT NO-EVICT`]({{< relref "/commands/client-no-evict" >}}) `off` + +For more information and an example refer to the `maxmemory-clients` section in the default `redis.conf` file. + +Client eviction is available from Redis 7.0. + +## Client Timeouts + +By default recent versions of Redis don't close the connection with the client +if the client is idle for many seconds: the connection will remain open forever. + +However if you don't like this behavior, you can configure a timeout, so that +if the client is idle for more than the specified number of seconds, the client connection will be closed. + +You can configure this limit via `redis.conf` or simply using `CONFIG SET timeout `. + +Note that the timeout only applies to normal clients and it **does not apply to Pub/Sub clients**, since a Pub/Sub connection is a *push style* connection so a client that is idle is the norm. + +Even if by default connections are not subject to timeout, there are two conditions when it makes sense to set a timeout: + +* Mission critical applications where a bug in the client software may saturate the Redis server with idle connections, causing service disruption. +* As a debugging mechanism in order to be able to connect with the server if a bug in the client software saturates the server with idle connections, making it impossible to interact with the server. + +Timeouts are not to be considered very precise: Redis avoids setting timer events or running O(N) algorithms in order to check idle clients, so the check is performed incrementally from time to time. This means that it is possible that while the timeout is set to 10 seconds, the client connection will be closed, for instance, after 12 seconds if many clients are connected at the same time. + +## The CLIENT Command + +The Redis [`CLIENT`]({{< relref "/commands/client" >}}) command allows you to inspect the state of every connected client, to kill a specific client, and to name connections. It is a very powerful debugging tool if you use Redis at scale. 
+
+[`CLIENT LIST`]({{< relref "/commands/client-list" >}}) is used in order to obtain a list of connected clients and their state:
+
+```
+redis 127.0.0.1:6379> client list
+addr=127.0.0.1:52555 fd=5 name= age=855 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=client
+addr=127.0.0.1:52787 fd=6 name= age=6 idle=5 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=ping
+```
+
+In the above example two clients are connected to the Redis server. Let's look at what some of the data returned represents:
+
+* **addr**: The client address, that is, the client IP and the remote port number it used to connect with the Redis server.
+* **fd**: The client socket file descriptor number.
+* **name**: The client name as set by [`CLIENT SETNAME`]({{< relref "/commands/client-setname" >}}).
+* **age**: The number of seconds the connection has existed for.
+* **idle**: The number of seconds the connection is idle.
+* **flags**: The kind of client (N means normal client, check the [full list of flags]({{< relref "/commands/client-list" >}})).
+* **omem**: The amount of memory used by the client for the output buffer.
+* **cmd**: The last executed command.
+
+See the [`CLIENT LIST`]({{< relref "/commands/client-list" >}}) documentation for the full listing of fields and their purpose.
+
+Once you have the list of clients, you can close a client's connection using the [`CLIENT KILL`]({{< relref "/commands/client-kill" >}}) command, specifying the client address as its argument.
+
+The commands [`CLIENT SETNAME`]({{< relref "/commands/client-setname" >}}) and [`CLIENT GETNAME`]({{< relref "/commands/client-getname" >}}) can be used to set and get the connection name. Starting with Redis 4.0, the client name is shown in the
+[`SLOWLOG`]({{< relref "/commands/slowlog" >}}) output, to help identify clients that create latency issues.
+
+## TCP keepalive
+
+From version 3.2 onwards, Redis has TCP keepalive (`SO_KEEPALIVE` socket option) enabled by default and set to about 300 seconds. This option is useful in order to detect dead peers (clients that cannot be reached even if they look connected). Moreover, if there is network equipment between clients and servers that needs to see some traffic in order to keep the connection open, the option will prevent unexpected connection closed events.
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: How to build clients for Redis Sentinel
+linkTitle: Sentinel clients
+title: Sentinel client spec
+weight: 2
+---
+
+Redis Sentinel is a monitoring solution for Redis instances that handles
+automatic failover of Redis masters and service discovery (who is the current
+master for a given group of instances?). Since Sentinel is both responsible
+for reconfiguring instances during failovers, and providing configurations to
+clients connecting to Redis masters or replicas, clients are required to have
+explicit support for Redis Sentinel.
+
+This document is targeted at Redis client developers who want to support Sentinel in their client implementations, with the following goals:
+
+* Automatic configuration of clients via Sentinel.
+* Improved safety of Redis Sentinel automatic failover.
+ +For details about how Redis Sentinel works, please check the [Redis Documentation]({{< relref "/operate/oss_and_stack/management/sentinel" >}}), as this document only contains information needed for Redis client developers, and it is expected that readers are familiar with the way Redis Sentinel works. + +## Redis service discovery via Sentinel + +Redis Sentinel identifies every master with a name like "stats" or "cache". +Every name actually identifies a *group of instances*, composed of a master +and a variable number of replicas. + +The address of the Redis master that is used for a specific purpose inside a network may change after events like an automatic failover, a manually triggered failover (for instance in order to upgrade a Redis instance), and other reasons. + +Normally Redis clients have some kind of hard-coded configuration that specifies the address of a Redis master instance within a network as IP address and port number. However if the master address changes, manual intervention in every client is needed. + +A Redis client supporting Sentinel can automatically discover the address of a Redis master from the master name using Redis Sentinel. So instead of a hard coded IP address and port, a client supporting Sentinel should optionally be able to take as input: + +* A list of ip:port pairs pointing to known Sentinel instances. +* The name of the service, like "cache" or "timelines". + +This is the procedure a client should follow in order to obtain the master address starting from the list of Sentinels and the service name. + +### Step 1: connect to the first Sentinel + +The client should iterate the list of Sentinel addresses. For every address it should try to connect to the Sentinel, using a short timeout (in the order of a few hundreds of milliseconds). On errors or timeouts the next Sentinel address should be tried. + +If all the Sentinel addresses were tried without success, an error should be returned to the client. + +The first Sentinel replying to the client request should be put at the start of the list, so that at the next reconnection, we'll try first the Sentinel that was reachable in the previous connection attempt, minimizing latency. + +### Step 2: ask for the master address + +Once a connection with a Sentinel is established, the client should retry to execute the following command on the Sentinel: + + SENTINEL get-master-addr-by-name master-name + +Where *master-name* should be replaced with the actual service name specified by the user. + +The result from this call can be one of the following two replies: + +* An ip:port pair. +* A null reply. This means Sentinel does not know this master. + +If an ip:port pair is received, this address should be used to connect to the Redis master. Otherwise if a null reply is received, the client should try the next Sentinel in the list. + +### Step 3: call the ROLE command in the target instance + +Once the client discovered the address of the master instance, it should +attempt a connection with the master, and call the [`ROLE`]({{< relref "/commands/role" >}}) command in order +to verify the role of the instance is actually a master. + +If the [`ROLE`]({{< relref "/commands/role" >}}) commands is not available (it was introduced in Redis 2.8.12), a client may resort to the `INFO replication` command parsing the `role:` field of the output. + +If the instance is not a master as expected, the client should wait a short amount of time (a few hundreds of milliseconds) and should try again starting from Step 1. 
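+
+The following sketch shows one possible shape of this three-step procedure using the `redis-py` client. It is illustrative only: the Sentinel addresses and the `mymaster` service name are hypothetical, and it omits details such as reordering the Sentinel list and retrying after a role mismatch.
+
+```python
+import redis
+
+SENTINELS = [("10.0.0.1", 26379), ("10.0.0.2", 26379)]
+SERVICE_NAME = "mymaster"
+
+def discover_master():
+    for host, port in SENTINELS:
+        try:
+            # Step 1: connect to a Sentinel using a short timeout.
+            sentinel = redis.Redis(host=host, port=port,
+                                   socket_timeout=0.25, decode_responses=True)
+            # Step 2: ask for the master address of the service.
+            addr = sentinel.execute_command(
+                "SENTINEL", "GET-MASTER-ADDR-BY-NAME", SERVICE_NAME)
+            if not addr:
+                continue  # this Sentinel does not know the master; try the next one
+            master = redis.Redis(host=addr[0], port=int(addr[1]),
+                                 decode_responses=True)
+            # Step 3: verify that the discovered instance is really a master.
+            if master.execute_command("ROLE")[0] == "master":
+                return master
+        except redis.RedisError:
+            continue  # on errors or timeouts, try the next Sentinel
+    raise RuntimeError("no Sentinel could provide a valid master address")
+```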
+
+## Handle reconnections
+
+Once the service name is resolved into the master address and a connection is established with the Redis master instance, every time a reconnection is needed, the client should resolve the address again using Sentinels, restarting from Step 1. For instance, a Sentinel should be contacted again in the following cases:
+
+* If the client reconnects after a timeout or socket error.
+* If the client reconnects because it was explicitly closed or reconnected by the user.
+
+In the above cases and any other case where the client lost the connection with the Redis server, the client should resolve the master address again.
+
+## Sentinel failover disconnection
+
+Starting with Redis 2.8.12, when Redis Sentinel changes the configuration of
+an instance, for example promoting a replica to a master, demoting a master to
+replicate to the new master after a failover, or simply changing the master
+address of a stale replica instance, it sends a `CLIENT KILL type normal`
+command to the instance in order to make sure all the clients are disconnected
+from the reconfigured instance. This will force clients to resolve the master
+address again.
+
+If the client contacts a Sentinel with not yet updated information, the verification of the Redis instance role via the [`ROLE`]({{< relref "/commands/role" >}}) command will fail, allowing the client to detect that the contacted Sentinel provided stale information, and to try again.
+
+Note: it is possible that a stale master returns online at the same time a client contacts a stale Sentinel instance, so the client may connect with a stale master, and yet the ROLE output will match. However, when the master is back again, Sentinel will try to demote it to replica, triggering a new disconnection. The same reasoning applies to connecting to stale replicas that will get reconfigured to replicate with a different master.
+
+## Connect to replicas
+
+Sometimes clients are interested in connecting to replicas, for example in order to scale read requests. This protocol supports connecting to replicas by modifying Step 2 slightly. Instead of calling the following command:
+
+    SENTINEL get-master-addr-by-name master-name
+
+The clients should instead call:
+
+    SENTINEL replicas master-name
+
+In order to retrieve a list of replica instances.
+
+Symmetrically, the client should verify with the [`ROLE`]({{< relref "/commands/role" >}}) command that the
+instance is actually a replica, in order to avoid scaling read queries with
+the master.
+
+## Connection pools
+
+For clients implementing connection pools, on reconnection of a single connection, the Sentinel should be contacted again, and in case of a master address change all the existing connections should be closed and connected to the new address.
+
+## Error reporting
+
+The client should correctly return the information to the user in case of errors. Specifically:
+
+* If no Sentinel can be contacted (so that the client was never able to get the reply to `SENTINEL get-master-addr-by-name`), an error that clearly states that Redis Sentinel is unreachable should be returned.
+* If all the Sentinels in the pool replied with a null reply, the user should be informed with an error that Sentinels don't know this master name.
+ +## Sentinels list automatic refresh + +Optionally once a successful reply to `get-master-addr-by-name` is received, a client may update its internal list of Sentinel nodes following this procedure: + +* Obtain a list of other Sentinels for this master using the command `SENTINEL sentinels `. +* Add every ip:port pair not already existing in our list at the end of the list. + +It is not needed for a client to be able to make the list persistent updating its own configuration. The ability to upgrade the in-memory representation of the list of Sentinels can be already useful to improve reliability. + +## Subscribe to Sentinel events to improve responsiveness + +The [Sentinel documentation]({{< relref "/operate/oss_and_stack/management/sentinel" >}}) shows how clients can connect to +Sentinel instances using Pub/Sub in order to subscribe to changes in the +Redis instances configurations. + +This mechanism can be used in order to speedup the reconfiguration of clients, +that is, clients may listen to Pub/Sub in order to know when a configuration +change happened in order to run the three steps protocol explained in this +document in order to resolve the new Redis master (or replica) address. + +However update messages received via Pub/Sub should not substitute the +above procedure, since there is no guarantee that a client is able to +receive all the update messages. + +## Additional information + +For additional information or to discuss specific aspects of this guidelines, please drop a message to the [Redis Google Group](https://groups.google.com/group/redis-db). +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Get additional information about a command +linkTitle: Command tips +title: Redis command tips +weight: 1 +--- + +Command tips are an array of strings. +These provide Redis clients with additional information about the command. +The information can instruct Redis Cluster clients as to how the command should be executed and its output processed in a clustered deployment. + +Unlike the command's flags (see the 3rd element of [`COMMAND`]({{< relref "/commands/command" >}})'s reply), which are strictly internal to the server's operation, tips don't serve any purpose other than being reported to clients. + +Command tips are arbitrary strings. +However, the following sections describe proposed tips and demonstrate the conventions they are likely to adhere to. + +## nondeterministic_output + +This tip indicates that the command's output isn't deterministic. +That means that calls to the command may yield different results with the same arguments and data. +That difference could be the result of the command's random nature (e.g., [`RANDOMKEY`]({{< relref "/commands/randomkey" >}}) and [`SPOP`]({{< relref "/commands/spop" >}})); the call's timing (e.g., [`TTL`]({{< relref "/commands/ttl" >}})); or generic differences that relate to the server's state (e.g., [`INFO`]({{< relref "/commands/info" >}}) and [`CLIENT LIST`]({{< relref "/commands/client-list" >}})). + +**Note:** +Prior to Redis 7.0, this tip was the _random_ command flag. + +## nondeterministic_output_order + +The existence of this tip indicates that the command's output is deterministic, but its ordering is random (e.g., [`HGETALL`]({{< relref "/commands/hgetall" >}}) and [`SMEMBERS`]({{< relref "/commands/smembers" >}})). + +**Note:** +Prior to Redis 7.0, this tip was the _sort_\__for_\__script_ flag. 
+
+## request_policy
+
+This tip can help clients determine the shards to send the command to in clustering mode.
+The default behavior a client should implement for commands without the _request_policy_ tip is as follows:
+
+1. The command doesn't accept key name arguments: the client can execute the command on an arbitrary shard.
+1. For commands that accept one or more key name arguments: the client should route the command to a single shard, as determined by the hash slot of the input keys.
+
+In cases where the client should adopt a behavior different than the default, the _request_policy_ tip can be one of:
+
+* **all_nodes:** the client should execute the command on all nodes, masters and replicas alike.
+  An example is the [`CONFIG SET`]({{< relref "/commands/config-set" >}}) command.
+  This tip is in use by commands that don't accept key name arguments.
+  The command operates atomically per shard.
+* **all_shards:** the client should execute the command on all master shards (e.g., the [`DBSIZE`]({{< relref "/commands/dbsize" >}}) command).
+  This tip is in use by commands that don't accept key name arguments.
+  The command operates atomically per shard.
+* **multi_shard:** the client should execute the command on several shards.
+  The client should split the inputs according to the hash slots of its input key name arguments.
+  For example, the command `DEL {foo} {foo}1 bar` should be split to `DEL {foo} {foo}1` and `DEL bar`.
+  If the keys are hashed to more than a single slot, the command must be split even if all the slots are managed by the same shard.
+  Examples for such commands include [`MSET`]({{< relref "/commands/mset" >}}), [`MGET`]({{< relref "/commands/mget" >}}) and [`DEL`]({{< relref "/commands/del" >}}).
+  However, note that [`SUNIONSTORE`]({{< relref "/commands/sunionstore" >}}) isn't considered as _multi_shard_ because all of its keys must belong to the same hash slot.
+* **special:** indicates a non-trivial form of the client's request policy, such as the [`SCAN`]({{< relref "/commands/scan" >}}) command.
+
+## response_policy
+
+This tip can help clients determine the aggregate they need to compute from the replies of multiple shards in a cluster.
+The default behavior for commands without a _response_policy_ tip only applies to replies of nested types (i.e., an array, a set, or a map).
+The client's implementation for the default behavior should be as follows:
+
+1. The command doesn't accept key name arguments: the client can aggregate all replies within a single nested data structure.
+For example, the array replies we get from calling [`KEYS`]({{< relref "/commands/keys" >}}) against all shards.
+These should be packed into a single array, in no particular order.
+1. For commands that accept one or more key name arguments: the client needs to retain the same order of replies as the input key names.
+For example, [`MGET`]({{< relref "/commands/mget" >}})'s aggregated reply.
+
+The _response_policy_ tip is set for commands that reply with scalar data types, or when it's expected that clients implement a non-default aggregate.
+This tip can be one of:
+
+* **one_succeeded:** the clients should return success if at least one shard didn't reply with an error.
+  The client should reply with the first non-error reply it obtains.
+  If all shards return an error, the client can reply with any one of these.
+  For example, consider a [`SCRIPT KILL`]({{< relref "/commands/script-kill" >}}) command that's sent to all shards.
+  Although the script should be loaded in all of the cluster's shards, the [`SCRIPT KILL`]({{< relref "/commands/script-kill" >}}) will typically run only on one at a given time.
+* **all_succeeded:** the client should return successfully only if there are no error replies.
+  Even a single error reply should disqualify the aggregate and be returned.
+  Otherwise, the client should return one of the non-error replies.
+  As an example, consider the [`CONFIG SET`]({{< relref "/commands/config-set" >}}), [`SCRIPT FLUSH`]({{< relref "/commands/script-flush" >}}) and [`SCRIPT LOAD`]({{< relref "/commands/script-load" >}}) commands.
+* **agg_logical_and:** the client should return the result of a logical _AND_ operation on all replies (only applies to integer replies, usually from commands that return either _0_ or _1_).
+  Consider the [`SCRIPT EXISTS`]({{< relref "/commands/script-exists" >}}) command as an example.
+  It returns an array of _0_'s and _1_'s that denote the existence of its given SHA1 sums in the script cache.
+  The aggregated response should be _1_ only when all shards had reported that a given script SHA1 sum is in their respective cache.
+* **agg_logical_or:** the client should return the result of a logical _OR_ operation on all replies (only applies to integer replies, usually from commands that return either _0_ or _1_).
+* **agg_min:** the client should return the minimal value from the replies (only applies to numerical replies).
+  The aggregate reply from a cluster-wide [`WAIT`]({{< relref "/commands/wait" >}}) command, for example, should be the minimal value (number of synchronized replicas) from all shards.
+* **agg_max:** the client should return the maximal value from the replies (only applies to numerical replies).
+* **agg_sum:** the client should return the sum of replies (only applies to numerical replies).
+  Example: [`DBSIZE`]({{< relref "/commands/dbsize" >}}).
+* **special:** this type of tip indicates a non-trivial form of reply policy.
+  [`INFO`]({{< relref "/commands/info" >}}) is an excellent example of that.
+
+## Example
+
+```
+redis> command info ping
+1) 1) "ping"
+   2) (integer) -1
+   3) 1) fast
+   4) (integer) 0
+   5) (integer) 0
+   6) (integer) 0
+   7) 1) @fast
+      2) @connection
+   8) 1) "request_policy:all_shards"
+      2) "response_policy:all_succeeded"
+   9) (empty array)
+   10) (empty array)
+```
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: The Redis Gopher protocol implementation
+linkTitle: Gopher protocol
+title: Redis and the Gopher protocol
+weight: 10
+---
+
+**Note: Support for Gopher was removed in Redis 7.0**
+
+Redis contains an implementation of the Gopher protocol, as specified in
+the [RFC 1436](https://www.ietf.org/rfc/rfc1436.txt).
+
+The Gopher protocol was very popular in the late '90s. It is an alternative
+to the web, and the implementation, both server and client side, is so simple
+that the Redis server has just 100 lines of code in order to implement this
+support.
+
+What do you do with Gopher nowadays? Well, Gopher never *really* died, and
+lately there has been a movement aiming to resurrect Gopher's more hierarchical
+content, composed of just plain-text documents. Some want a simpler
+internet, others believe that the mainstream internet became too
+controlled, and it's cool to create an alternative space for people who
+want a bit of fresh air.
+
+Anyway, for the 10th birthday of Redis, we gave it the Gopher protocol
+as a gift.
---
categories:
- docs
- develop
- stack
- oss
- rs
- rc
- oss
- kubernetes
- clients
description: 'Server-assisted, client-side caching in Redis'
linkTitle: Client-side caching
title: Client-side caching reference
aliases: /develop/use/client-side-caching/
weight: 2
---

{{}}This document is intended as an in-depth reference for
client-side caching. See
[Client-side caching introduction]({{< relref "/develop/clients/client-side-caching" >}})
for general usage guidelines.
{{}}

Client-side caching is a technique used to create high performance services.
It exploits the memory available on application servers, which are usually
distinct computers from the database nodes, to store some subset of the
database information directly on the application side.

Normally, when some data is required, the application servers ask the database
for it, like in the following diagram:

    +-------------+                                +----------+
    |             | ------- GET user:1234 -------> |          |
    | Application |                                | Database |
    |             | <---- username = Alice ------- |          |
    +-------------+                                +----------+

When client-side caching is used, the application will store the reply of
popular queries directly inside the application memory, so that it can
reuse such replies later, without contacting the database again:

    +-------------+                                +----------+
    |             |                                |          |
    | Application |       ( No chat needed )       | Database |
    |             |                                |          |
    +-------------+                                +----------+
    | Local cache |
    |             |
    | user:1234 = |
    | username    |
    | Alice       |
    +-------------+

While the application memory used for the local cache may not be very big,
the time needed to access the local computer memory is orders of magnitude
smaller than the time needed to access a networked service like a database.
Since the same small percentage of data is often accessed very frequently,
this pattern can greatly reduce the latency for the application to get data
and, at the same time, the load on the database side.

Moreover, there are many datasets where items change very infrequently.
+For instance, most user posts in a social network are either immutable or +rarely edited by the user. Adding to this the fact that usually a small +percentage of the posts are very popular, either because a small set of users +have a lot of followers and/or because recent posts have a lot more +visibility, it is clear why such a pattern can be very useful. + +Usually the two key advantages of client-side caching are: + +1. Data is available with a very small latency. +2. The database system receives less queries, allowing it to serve the same dataset with a smaller number of nodes. + +## There are two hard problems in computer science... + +A problem with the above pattern is how to invalidate the information that +the application is holding, in order to avoid presenting stale data to the +user. For example after the application above locally cached the information +for user:1234, Alice may update her username to Flora. Yet the application +may continue to serve the old username for user:1234. + +Sometimes, depending on the exact application we are modeling, this isn't a +big deal, so the client will just use a fixed maximum "time to live" for the +cached information. Once a given amount of time has elapsed, the information +will no longer be considered valid. More complex patterns, when using Redis, +leverage the Pub/Sub system in order to send invalidation messages to +listening clients. This can be made to work but is tricky and costly from +the point of view of the bandwidth used, because often such patterns involve +sending the invalidation messages to every client in the application, even +if certain clients may not have any copy of the invalidated data. Moreover +every application query altering the data requires to use the [`PUBLISH`]({{< relref "/commands/publish" >}}) +command, costing the database more CPU time to process this command. + +Regardless of what schema is used, there is a simple fact: many very large +applications implement some form of client-side caching, because it is the +next logical step to having a fast store or a fast cache server. For this +reason Redis 6 implements direct support for client-side caching, in order +to make this pattern much simpler to implement, more accessible, reliable, +and efficient. + +## The Redis implementation of client-side caching + +The Redis client-side caching support is called _Tracking_, and has two modes: + +* In the default mode, the server remembers what keys a given client accessed, and sends invalidation messages when the same keys are modified. This costs memory in the server side, but sends invalidation messages only for the set of keys that the client might have in memory. +* In the _broadcasting_ mode, the server does not attempt to remember what keys a given client accessed, so this mode costs no memory at all in the server side. Instead clients subscribe to key prefixes such as `object:` or `user:`, and receive a notification message every time a key matching a subscribed prefix is touched. + +To recap, for now let's forget for a moment about the broadcasting mode, to +focus on the first mode. We'll describe broadcasting in more detail later. + +1. Clients can enable tracking if they want. Connections start without tracking enabled. +2. When tracking is enabled, the server remembers what keys each client requested during the connection lifetime (by sending read commands about such keys). +3. 
When a key is modified by some client, or is evicted because it has an associated expire time, or evicted because of a _maxmemory_ policy, all the clients with tracking enabled that may have the key cached, are notified with an _invalidation message_. +4. When clients receive invalidation messages, they are required to remove the corresponding keys, in order to avoid serving stale data. + +This is an example of the protocol: + +* Client 1 `->` Server: CLIENT TRACKING ON +* Client 1 `->` Server: GET foo +* (The server remembers that Client 1 may have the key "foo" cached) +* (Client 1 may remember the value of "foo" inside its local memory) +* Client 2 `->` Server: SET foo SomeOtherValue +* Server `->` Client 1: INVALIDATE "foo" + +This looks great superficially, but if you imagine 10k connected clients all +asking for millions of keys over long living connection, the server ends up +storing too much information. For this reason Redis uses two key ideas in +order to limit the amount of memory used server-side and the CPU cost of +handling the data structures implementing the feature: + +* The server remembers the list of clients that may have cached a given key in a single global table. This table is called the **Invalidation Table**. The invalidation table can contain a maximum number of entries. If a new key is inserted, the server may evict an older entry by pretending that such key was modified (even if it was not), and sending an invalidation message to the clients. Doing so, it can reclaim the memory used for this key, even if this will force the clients having a local copy of the key to evict it. +* Inside the invalidation table we don't really need to store pointers to clients' structures, that would force a garbage collection procedure when the client disconnects: instead what we do is just store client IDs (each Redis client has a unique numerical ID). If a client disconnects, the information will be incrementally garbage collected as caching slots are invalidated. +* There is a single keys namespace, not divided by database numbers. So if a client is caching the key `foo` in database 2, and some other client changes the value of the key `foo` in database 3, an invalidation message will still be sent. This way we can ignore database numbers reducing both the memory usage and the implementation complexity. + +## Two connections mode + +Using the new version of the Redis protocol, RESP3, supported by Redis 6, it is possible to run the data queries and receive the invalidation messages in the same connection. However many client implementations may prefer to implement client-side caching using two separated connections: one for data, and one for invalidation messages. For this reason when a client enables tracking, it can specify to redirect the invalidation messages to another connection by specifying the "client ID" of a different connection. Many data connections can redirect invalidation messages to the same connection, this is useful for clients implementing connection pooling. The two connections model is the only one that is also supported for RESP2 (which lacks the ability to multiplex different kind of information in the same connection). + +Here's an example of a complete session using the Redis protocol in the old RESP2 mode involving the following steps: enabling tracking redirecting to another connection, asking for a key, and getting an invalidation message once the key gets modified. 
+ +To start, the client opens a first connection that will be used for invalidations, requests the connection ID, and subscribes via Pub/Sub to the special channel that is used to get invalidation messages when in RESP2 modes (remember that RESP2 is the usual Redis protocol, and not the more advanced protocol that you can use, optionally, with Redis 6 using the [`HELLO`]({{< relref "/commands/hello" >}}) command): + +``` +(Connection 1 -- used for invalidations) +CLIENT ID +:4 +SUBSCRIBE __redis__:invalidate +*3 +$9 +subscribe +$20 +__redis__:invalidate +:1 +``` + +Now we can enable tracking from the data connection: + +``` +(Connection 2 -- data connection) +CLIENT TRACKING on REDIRECT 4 ++OK + +GET foo +$3 +bar +``` + +The client may decide to cache `"foo" => "bar"` in the local memory. + +A different client will now modify the value of the "foo" key: + +``` +(Some other unrelated connection) +SET foo bar ++OK +``` + +As a result, the invalidations connection will receive a message that invalidates the specified key. + +``` +(Connection 1 -- used for invalidations) +*3 +$7 +message +$20 +__redis__:invalidate +*1 +$3 +foo +``` +The client will check if there are cached keys in this caching slot, and will evict the information that is no longer valid. + +Note that the third element of the Pub/Sub message is not a single key but +is a Redis array with just a single element. Since we send an array, if there +are groups of keys to invalidate, we can do that in a single message. +In case of a flush ([`FLUSHALL`]({{< relref "/commands/flushall" >}}) or [`FLUSHDB`]({{< relref "/commands/flushdb" >}})), a `null` message will be sent. + +A very important thing to understand about client-side caching used with +RESP2 and a Pub/Sub connection in order to read the invalidation messages, +is that using Pub/Sub is entirely a trick **in order to reuse old client +implementations**, but actually the message is not really sent to a channel +and received by all the clients subscribed to it. Only the connection we +specified in the `REDIRECT` argument of the [`CLIENT`]({{< relref "/commands/client" >}}) command will actually +receive the Pub/Sub message, making the feature a lot more scalable. + +When RESP3 is used instead, invalidation messages are sent (either in the +same connection, or in the secondary connection when redirection is used) +as `push` messages (read the RESP3 specification for more information). + +## What tracking tracks + +As you can see clients do not need, by default, to tell the server what keys +they are caching. Every key that is mentioned in the context of a read-only +command is tracked by the server, because it *could be cached*. + +This has the obvious advantage of not requiring the client to tell the server +what it is caching. Moreover in many clients implementations, this is what +you want, because a good solution could be to just cache everything that is not +already cached, using a first-in first-out approach: we may want to cache a +fixed number of objects, every new data we retrieve, we could cache it, +discarding the oldest cached object. More advanced implementations may instead +drop the least used object or alike. + +Note that anyway if there is write traffic on the server, caching slots +will get invalidated during the course of the time. In general when the +server assumes that what we get we also cache, we are making a tradeoff: + +1. It is more efficient when the client tends to cache many things with a policy that welcomes new objects. +2. 
The server will be forced to retain more data about the client keys.
3. The client will receive useless invalidation messages about objects it did not cache.

So there is an alternative described in the next section.

## Opt-in and Opt-out caching

### Opt-in

Client implementations may want to cache only selected keys, and communicate
explicitly to the server what they'll cache and what they will not. This will
require more bandwidth when caching new objects, but at the same time reduces
the amount of data that the server has to remember and the number of
invalidation messages received by the client.

In order to do this, tracking must be enabled using the OPTIN option:

    CLIENT TRACKING ON REDIRECT 1234 OPTIN

In this mode, by default, keys mentioned in read queries *are not supposed to be cached*. Instead, when a client wants to cache something, it must send a special command immediately before the actual command that retrieves the data:

    CLIENT CACHING YES
    +OK
    GET foo
    "bar"

The `CACHING` command affects the command executed immediately after it.
However, in case the next command is [`MULTI`]({{< relref "/commands/multi" >}}), all the commands in the
transaction will be tracked. Similarly, in case of Lua scripts, all the
commands executed by the script will be tracked.

### Opt-out

Opt-out caching allows clients to automatically cache keys locally without explicitly opting in for each key.
This approach ensures that all keys are cached by default unless specified otherwise.
Opt-out caching can simplify the implementation of client-side caching by reducing the need for explicit commands to enable caching for individual keys.

Tracking must be enabled using the OPTOUT option to enable opt-out caching:

    CLIENT TRACKING ON OPTOUT

If you want to exclude the keys of a specific command from being tracked and cached, send the `CLIENT CACHING NO` command immediately before it:

    CLIENT CACHING NO
    +OK
    GET foo
    "bar"

## Broadcasting mode

So far we described the first client-side caching model that Redis implements.
There is another one, called broadcasting, that sees the problem from the
point of view of a different tradeoff: it does not consume any memory on the
server side, but instead sends more invalidation messages to clients.
In this mode we have the following main behaviors:

* Clients enable client-side caching using the `BCAST` option, specifying one or more prefixes using the `PREFIX` option. For instance: `CLIENT TRACKING on REDIRECT 10 BCAST PREFIX object: PREFIX user:`. If no prefix is specified at all, the prefix is assumed to be the empty string, so the client will receive invalidation messages for every key that gets modified. Instead, if one or more prefixes are used, only keys matching one of the specified prefixes will be sent in the invalidation messages.
* The server does not store anything in the invalidation table. Instead it uses a different **Prefixes Table**, where each prefix is associated with a list of clients.
* No two prefixes can track overlapping parts of the keyspace. For instance, having the prefix "foo" and "foob" would not be allowed, since they would both trigger an invalidation for the key "foobar". However, just using the prefix "foo" is sufficient.
* Every time a key matching any of the prefixes is modified, all the clients subscribed to that prefix will receive the invalidation message.
* The server will consume CPU proportional to the number of registered prefixes. If you have just a few, it is hard to see any difference.
With a big number of prefixes the CPU cost can become quite large. +* In this mode the server can perform the optimization of creating a single reply for all the clients subscribed to a given prefix, and send the same reply to all. This helps to lower the CPU usage. + +## The NOLOOP option + +By default client-side tracking will send invalidation messages to the +client that modified the key. Sometimes clients want this, since they +implement very basic logic that does not involve automatically caching +writes locally. However, more advanced clients may want to cache even the +writes they are doing in the local in-memory table. In such case receiving +an invalidation message immediately after the write is a problem, since it +will force the client to evict the value it just cached. + +In this case it is possible to use the `NOLOOP` option: it works both +in normal and broadcasting mode. Using this option, clients are able to +tell the server they don't want to receive invalidation messages for keys +that they modified. + +## Avoiding race conditions + +When implementing client-side caching redirecting the invalidation messages +to a different connection, you should be aware that there is a possible +race condition. See the following example interaction, where we'll call +the data connection "D" and the invalidation connection "I": + + [D] client -> server: GET foo + [I] server -> client: Invalidate foo (somebody else touched it) + [D] server -> client: "bar" (the reply of "GET foo") + +As you can see, because the reply to the GET was slower to reach the +client, we received the invalidation message before the actual data that +is already no longer valid. So we'll keep serving a stale version of the +foo key. To avoid this problem, it is a good idea to populate the cache +when we send the command with a placeholder: + + Client cache: set the local copy of "foo" to "caching-in-progress" + [D] client-> server: GET foo. + [I] server -> client: Invalidate foo (somebody else touched it) + Client cache: delete "foo" from the local cache. + [D] server -> client: "bar" (the reply of "GET foo") + Client cache: don't set "bar" since the entry for "foo" is missing. + +Such a race condition is not possible when using a single connection for both +data and invalidation messages, since the order of the messages is always known +in that case. + +## What to do when losing connection with the server + +Similarly, if we lost the connection with the socket we use in order to +get the invalidation messages, we may end with stale data. In order to avoid +this problem, we need to do the following things: + +1. Make sure that if the connection is lost, the local cache is flushed. +2. Both when using RESP2 with Pub/Sub, or RESP3, ping the invalidation channel periodically (you can send PING commands even when the connection is in Pub/Sub mode!). If the connection looks broken and we are not able to receive ping backs, after a maximum amount of time, close the connection and flush the cache. + +## What to cache + +Clients may want to run internal statistics about the number of times +a given cached key was actually served in a request, to understand in the +future what is good to cache. In general: + +* We don't want to cache many keys that change continuously. +* We don't want to cache many keys that are requested very rarely. +* We want to cache keys that are requested often and change at a reasonable rate. 
For an example of a key that does not change at a reasonable rate, think of a global counter that is continuously [`INCR`]({{< relref "/commands/incr" >}})emented.

However, simpler clients may just evict data using some random sampling,
remembering the last time a given cached value was served and trying to evict
keys that were not served recently.

## Other hints for implementing client libraries

* Handling TTLs: make sure you also request the key TTL and set the TTL in the local cache if you want to support caching keys with a TTL.
* Putting a max TTL on every key is a good idea, even if the key has no TTL in Redis. This protects against bugs or connection issues that would make the client keep old data in the local copy.
* Limiting the amount of memory used by clients is absolutely needed. There must be a way to evict old keys when new ones are added.

## Limiting the amount of memory used by Redis

Be sure to configure a suitable value for the maximum number of keys remembered by Redis, or alternatively use the BCAST mode, which consumes no memory at all on the Redis side. Note that the memory consumed by Redis when BCAST is not used is proportional both to the number of keys tracked and to the number of clients requesting such keys.
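On the server side, the limit discussed above is the size of the invalidation table, which is controlled by the `tracking-table-max-keys` configuration directive in recent Redis versions. On the client side, many libraries now wrap the whole tracking protocol for you. As a minimal sketch, recent versions of `redis-py` (5.1 and later) can enable server-assisted client-side caching over RESP3 with a single option; the key name and values below are purely illustrative:

```python
# Minimal sketch using redis-py's built-in client-side caching (redis-py >= 5.1).
# Requires RESP3; invalidation messages are handled for you on the same connection.
import redis
from redis.cache import CacheConfig

r = redis.Redis(protocol=3, cache_config=CacheConfig(), decode_responses=True)

r.set("user:1234", "Alice")
print(r.get("user:1234"))    # first read: fetched from the server and cached locally
print(r.get("user:1234"))    # second read: served from the local cache

r.set("user:1234", "Flora")  # the write invalidates the locally cached entry
print(r.get("user:1234"))    # fetched from the server again -> "Flora"
```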
---
categories:
- docs
- develop
- stack
- oss
- rs
- rc
- oss
- kubernetes
- clients
description: What command key specifications are and how to use them in your client
linkTitle: Command key specifications
title: Command key specifications
weight: 3
---

Many of the commands in Redis accept key names as input arguments.
The 9th element in the reply of [`COMMAND`]({{< relref "/commands/command" >}}) (and [`COMMAND INFO`]({{< relref "/commands/command-info" >}})) is an array that consists of the command's key specifications.

A _key specification_ describes a rule for extracting the names of one or more keys from the arguments of a given command.
Key specifications provide a robust and flexible mechanism, compared to the _first key_, _last key_ and _step_ scheme employed until Redis 7.0.
Before introducing these specifications, Redis clients had no trivial programmatic means to extract key names for all commands.

Cluster-aware Redis clients had to have the keys' extraction logic hard-coded in the cases of commands such as [`EVAL`]({{< relref "/commands/eval" >}}) and [`ZUNIONSTORE`]({{< relref "/commands/zunionstore" >}}) that rely on a _numkeys_ argument, or [`SORT`]({{< relref "/commands/sort" >}}) and its many clauses.
Alternatively, the [`COMMAND GETKEYS`]({{< relref "/commands/command-getkeys" >}}) command can be used to achieve a similar extraction effect, but at a higher latency.

A Redis client isn't obligated to support key specifications.
It can continue using the legacy _first key_, _last key_ and _step_ scheme along with the [_movablekeys_ flag]({{< relref "/commands/command#flags" >}}), which remain unchanged.

However, a Redis client that implements key specifications support can consolidate most of its keys' extraction logic.
Even if the client encounters an unfamiliar type of key specification, it can always revert to the [`COMMAND GETKEYS`]({{< relref "/commands/command-getkeys" >}}) command.

That said, most cluster-aware clients only require a single key name to perform correct command routing, so it is possible that although a command features one unfamiliar specification, its other specification may still be usable by the client.

Key specifications are maps with the following keys:

1. **begin_search:** the starting index for keys' extraction.
2. **find_keys:** the rule for identifying the keys relative to the `begin_search` result.
3. **notes:** notes about this key spec, if there are any.
4. **flags:** indicate the type of data access.

## begin_search

The _begin\_search_ value of a specification informs the client of the extraction's beginning.
The value is a map.
There are three types of `begin_search`:

1. **index:** key name arguments begin at a constant index.
2. **keyword:** key names start after a specific keyword (token).
3. **unknown:** an unknown type of specification - see the [incomplete flag section](#incomplete) for more details.

### index

The _index_ type of `begin_search` indicates that input keys appear at a constant index.
It is a map under the _spec_ key with a single key:

1. **index:** the 0-based index from which the client should start extracting key names.

### keyword

The _keyword_ type of `begin_search` means a literal token precedes key name arguments.
It is a map under the _spec_ key with two keys:

1. **keyword:** the keyword (token) that marks the beginning of key name arguments.
2. **startfrom:** an index to the arguments array from which the client should begin searching.
   This can be a negative value, which means the search should start from the end of the arguments' array, in reverse order.
   For example, _-2_'s meaning is to search reverse from the penultimate argument.

Examples of `begin_search` specifications include:

* [`SET`]({{< relref "/commands/set" >}}) has a `begin_search` specification of type _index_ with a value of _1_.
* [`XREAD`]({{< relref "/commands/xread" >}}) has a `begin_search` specification of type _keyword_ with the values _"STREAMS"_ and _1_ as _keyword_ and _startfrom_, respectively.
* [`MIGRATE`]({{< relref "/commands/migrate" >}}) has a `begin_search` specification of type _keyword_ with the values of _"KEYS"_ and _-2_.
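When a client encounters the _unknown_ type above (or any specification it doesn't implement), it can fall back to [`COMMAND GETKEYS`]({{< relref "/commands/command-getkeys" >}}), as mentioned in the introduction. A minimal sketch using `redis-py`'s generic command interface (any client's raw-command facility works the same way, and the exact return type may vary slightly between client versions):

```python
# Illustrative fallback: let the server extract the key names for us.
import redis

r = redis.Redis(decode_responses=True)

# EVAL relies on a numkeys argument, so its keys can also be resolved server-side.
keys = r.execute_command("COMMAND GETKEYS", "EVAL", "return 1", 2, "key1", "key2")
print(keys)   # ['key1', 'key2']
```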

## find_keys

The `find_keys` value of a key specification tells the client how to continue the search for key names.
`find_keys` has three possible types:

1. **range:** keys stop at a specific index or relative to the last argument.
2. **keynum:** an additional argument specifies the number of input keys.
3. **unknown:** an unknown type of specification - see the [incomplete flag section](#incomplete) for more details.

### range

The _range_ type of `find_keys` is a map under the _spec_ key with three keys:

1. **lastkey:** the index, relative to `begin_search`, of the last key argument.
   This can be a negative value, in which case it isn't relative.
   For example, _-1_ indicates to keep extracting keys until the last argument, _-2_ until one before the last, and so on.
2. **keystep:** the number of arguments that should be skipped, after finding a key, to find the next one.
3. **limit:** if _lastkey_ has the value of _-1_, we use the _limit_ to stop the search by a factor.
   _0_ and _1_ mean no limit.
   _2_ means half of the remaining arguments, _3_ means a third, and so on.

### keynum

The _keynum_ type of `find_keys` is a map under the _spec_ key with three keys:

* **keynumidx:** the index, relative to `begin_search`, of the argument containing the number of keys.
* **firstkey:** the index, relative to `begin_search`, of the first key.
  This is usually the next argument after _keynumidx_, and its value, in this case, is greater by one.
* **keystep:** the number of arguments that should be skipped, after finding a key, to find the next one.

Examples:

* The [`SET`]({{< relref "/commands/set" >}}) command has a _range_ of _0_, _1_ and _0_.
* The [`MSET`]({{< relref "/commands/mset" >}}) command has a _range_ of _-1_, _2_ and _0_.
* The [`XREAD`]({{< relref "/commands/xread" >}}) command has a _range_ of _-1_, _1_ and _2_.
* The [`ZUNION`]({{< relref "/commands/zunion" >}}) command has a `begin_search` of type _index_ with the value _1_, and `find_keys` of type _keynum_ with values of _0_, _1_ and _1_.

**Note:**
this isn't a perfect solution as the module writers can come up with anything.
However, this mechanism should allow the extraction of key name arguments for the vast majority of commands.
A short sketch that puts the `begin_search` and `find_keys` rules together appears just before the Examples section below.

## notes

Notes about non-obvious key specs considerations, if applicable.

## flags

A key specification can have additional flags that provide more details about the key.
These flags are divided into three groups, as described below.

### Access type flags

The following flags declare the type of access the command uses to a key's value or its metadata.
A key's metadata includes LRU/LFU counters, type, and cardinality.
These flags do not relate to the reply sent back to the client.

Every key specification has precisely one of the following flags:

* **RW:** the read-write flag.
  The command modifies the data stored in the value of the key or its metadata.
  This flag marks every operation that isn't distinctly a delete, an overwrite, or read-only.
* **RO:** the read-only flag.
  The command only reads the value of the key (although it doesn't necessarily return it).
* **OW:** the overwrite flag.
  The command overwrites the data stored in the value of the key.
* **RM:** the remove flag.
  The command deletes the key.

### Logical operation flags

The following flags declare the type of operations performed on the data stored as the key's value and its TTL (if any), not the metadata.
These flags describe the logical operation that the command executes on data, driven by the input arguments.
The flags do not relate to modifying or returning metadata (such as a key's type, cardinality, or existence).

Every key specification may include the following flag:

* **access:** the access flag.
  This flag indicates that the command returns, copies, or somehow uses the user's data that's stored in the key.

In addition, the specification may include precisely one of the following:

* **update:** the update flag.
  The command updates the data stored in the key's value.
  The new value may depend on the old value.
  This flag marks every operation that isn't distinctly an insert or a delete.
* **insert:** the insert flag.
  The command only adds data to the value; existing data isn't modified or deleted.
* **delete:** the delete flag.
  The command explicitly deletes data from the value stored at the key.

### Miscellaneous flags

Key specifications may have the following flags:

* **not_key:** this flag indicates that the specified argument isn't a key.
  This argument is treated the same as a key when computing which slot a command should be assigned to for Redis cluster.
  For all other purposes this argument should not be considered a key.
* **incomplete:** this flag is explained below.
* **variable_flags:** this flag is explained below.

### incomplete

Some commands feature exotic approaches when it comes to specifying their keys, which makes extraction difficult.
+Consider, for example, what would happen with a call to [`MIGRATE`]({{< relref "/commands/migrate" >}}) that includes the literal string _"KEYS"_ as an argument to its _AUTH_ clause. +Our key specifications would miss the mark, and extraction would begin at the wrong index. + +Thus, we recognize that key specifications are incomplete and may fail to extract all keys. +However, we assure that even incomplete specifications never yield the wrong names of keys, providing that the command is syntactically correct. + +In the case of [`MIGRATE`]({{< relref "/commands/migrate" >}}), the search begins at the end (_startfrom_ has the value of _-1_). +If and when we encounter a key named _"KEYS"_, we'll only extract the subset of the key name arguments after it. +That's why [`MIGRATE`]({{< relref "/commands/migrate" >}}) has the _incomplete_ flag in its key specification. + +Another case of incompleteness is the [`SORT`]({{< relref "/commands/sort" >}}) command. +Here, the `begin_search` and `find_keys` are of type _unknown_. +The client should revert to calling the [`COMMAND GETKEYS`]({{< relref "/commands/command-getkeys" >}}) command to extract key names from the arguments, short of implementing it natively. +The difficulty arises, for example, because the string _"STORE"_ is both a keyword (token) and a valid literal argument for [`SORT`]({{< relref "/commands/sort" >}}). + +**Note:** +the only commands with _incomplete_ key specifications are [`SORT`]({{< relref "/commands/sort" >}}) and [`MIGRATE`]({{< relref "/commands/migrate" >}}). +We don't expect the addition of such commands in the future. + +### variable_flags + +In some commands, the flags for the same key name argument can depend on other arguments. +For example, consider the [`SET`]({{< relref "/commands/set" >}}) command and its optional _GET_ argument. +Without the _GET_ argument, [`SET`]({{< relref "/commands/set" >}}) is write-only, but it becomes a read and write command with it. +When this flag is present, it means that the key specification flags cover all possible options, but the effective flags depend on other arguments. 
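Putting `begin_search` and `find_keys` together, the following is a minimal sketch of the extraction logic a client could implement for the common specification types. The `spec` dictionary layout mirrors the fields described above and is assumed to have been parsed from the `COMMAND` reply; anything the sketch does not recognize is left to the [`COMMAND GETKEYS`]({{< relref "/commands/command-getkeys" >}}) fallback.

```python
# Illustrative sketch: extract key names from argv using one parsed key specification.
# `argv` is the full argument list, as strings, including the command name, e.g.
# ["XREAD", "COUNT", "2", "STREAMS", "stream1", "stream2", "0-0", "0-0"].

def extract_keys(argv, spec):
    bs, fk = spec["begin_search"], spec["find_keys"]

    # Step 1 (begin_search): find the index of the first key name argument.
    if bs["type"] == "index":
        first = bs["spec"]["index"]
    elif bs["type"] == "keyword":
        keyword = bs["spec"]["keyword"].lower()
        start = bs["spec"]["startfrom"]
        indexes = range(start, len(argv)) if start >= 0 else range(len(argv) + start, 0, -1)
        first = next(i for i in indexes if argv[i].lower() == keyword) + 1
    else:
        raise LookupError("unknown begin_search type: fall back to COMMAND GETKEYS")

    # Step 2 (find_keys): walk the arguments from there.
    if fk["type"] == "range":
        lastkey, keystep, limit = (fk["spec"][k] for k in ("lastkey", "keystep", "limit"))
        if lastkey >= 0:
            stop = first + lastkey                            # last key at a fixed offset
        elif lastkey == -1 and limit > 1:
            stop = first + (len(argv) - first) // limit - 1   # e.g. XREAD: half are keys
        else:
            stop = len(argv) + lastkey                        # keep going until the end
        return argv[first:stop + 1:keystep]

    if fk["type"] == "keynum":
        keynumidx, firstkey, keystep = (fk["spec"][k] for k in ("keynumidx", "firstkey", "keystep"))
        numkeys = int(argv[first + keynumidx])
        start = first + firstkey
        return argv[start:start + numkeys * keystep:keystep]

    raise LookupError("unknown find_keys type: fall back to COMMAND GETKEYS")
```

For example, with `XREAD`'s specification (`begin_search` of type _keyword_ with the values _"STREAMS"_ and _1_, and `find_keys` of type _range_ with the values _-1_, _1_ and _2_), the `argv` shown in the comment yields `["stream1", "stream2"]`.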
+ +## Examples + +### SET key specifications + +``` + 1) 1) "flags" + 2) 1) RW + 2) access + 3) update + 3) "begin_search" + 4) 1) "type" + 2) "index" + 3) "spec" + 4) 1) "index" + 2) (integer) 1 + 5) "find_keys" + 6) 1) "type" + 2) "range" + 3) "spec" + 4) 1) "lastkey" + 2) (integer) 0 + 3) "keystep" + 4) (integer) 1 + 5) "limit" + 6) (integer) 0 +``` + +### ZUNION key specifications + +``` + 1) 1) "flags" + 2) 1) RO + 2) access + 3) "begin_search" + 4) 1) "type" + 2) "index" + 3) "spec" + 4) 1) "index" + 2) (integer) 1 + 5) "find_keys" + 6) 1) "type" + 2) "keynum" + 3) "spec" + 4) 1) "keynumidx" + 2) (integer) 0 + 3) "firstkey" + 4) (integer) 1 + 5) "keystep" + 6) (integer) 1 +``` +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Reference for the Redis Modules API +linkTitle: API reference +title: Modules API reference +weight: 1 +--- + + + +## Sections + +* [Heap allocation raw functions](#section-heap-allocation-raw-functions) +* [Commands API](#section-commands-api) +* [Module information and time measurement](#section-module-information-and-time-measurement) +* [Automatic memory management for modules](#section-automatic-memory-management-for-modules) +* [String objects APIs](#section-string-objects-apis) +* [Reply APIs](#section-reply-apis) +* [Commands replication API](#section-commands-replication-api) +* [DB and Key APIs – Generic API](#section-db-and-key-apis-generic-api) +* [Key API for String type](#section-key-api-for-string-type) +* [Key API for List type](#section-key-api-for-list-type) +* [Key API for Sorted Set type](#section-key-api-for-sorted-set-type) +* [Key API for Sorted Set iterator](#section-key-api-for-sorted-set-iterator) +* [Key API for Hash type](#section-key-api-for-hash-type) +* [Key API for Stream type](#section-key-api-for-stream-type) +* [Calling Redis commands from modules](#section-calling-redis-commands-from-modules) +* [Modules data types](#section-modules-data-types) +* [RDB loading and saving functions](#section-rdb-loading-and-saving-functions) +* [Key digest API (DEBUG DIGEST interface for modules types)](#section-key-digest-api-debug-digest-interface-for-modules-types) +* [AOF API for modules data types](#section-aof-api-for-modules-data-types) +* [IO context handling](#section-io-context-handling) +* [Logging](#section-logging) +* [Blocking clients from modules](#section-blocking-clients-from-modules) +* [Thread Safe Contexts](#section-thread-safe-contexts) +* [Module Keyspace Notifications API](#section-module-keyspace-notifications-api) +* [Modules Cluster API](#section-modules-cluster-api) +* [Modules Timers API](#section-modules-timers-api) +* [Modules EventLoop API](#section-modules-eventloop-api) +* [Modules ACL API](#section-modules-acl-api) +* [Modules Dictionary API](#section-modules-dictionary-api) +* [Modules Info fields](#section-modules-info-fields) +* [Modules utility APIs](#section-modules-utility-apis) +* [Modules API exporting / importing](#section-modules-api-exporting-importing) +* [Module Command Filter API](#section-module-command-filter-api) +* [Scanning keyspace and hashes](#section-scanning-keyspace-and-hashes) +* [Module fork API](#section-module-fork-api) +* [Server hooks implementation](#section-server-hooks-implementation) +* [Module Configurations API](#section-module-configurations-api) +* [RDB load/save API](#section-rdb-load-save-api) +* [Key eviction API](#section-key-eviction-api) +* [Miscellaneous APIs](#section-miscellaneous-apis) +* [Defrag 
API](#section-defrag-api) +* [Function index](#section-function-index) + + + +## Heap allocation raw functions + +Memory allocated with these functions are taken into account by Redis key +eviction algorithms and are reported in Redis memory usage information. + + + +### `RedisModule_Alloc` + + void *RedisModule_Alloc(size_t bytes); + +**Available since:** 4.0.0 + +Use like `malloc()`. Memory allocated with this function is reported in +Redis INFO memory, used for keys eviction according to maxmemory settings +and in general is taken into account as memory allocated by Redis. +You should avoid using `malloc()`. +This function panics if unable to allocate enough memory. + + + +### `RedisModule_TryAlloc` + + void *RedisModule_TryAlloc(size_t bytes); + +**Available since:** 7.0.0 + +Similar to [`RedisModule_Alloc`](#RedisModule_Alloc), but returns NULL in case of allocation failure, instead +of panicking. + + + +### `RedisModule_Calloc` + + void *RedisModule_Calloc(size_t nmemb, size_t size); + +**Available since:** 4.0.0 + +Use like `calloc()`. Memory allocated with this function is reported in +Redis INFO memory, used for keys eviction according to maxmemory settings +and in general is taken into account as memory allocated by Redis. +You should avoid using `calloc()` directly. + + + +### `RedisModule_TryCalloc` + + void *RedisModule_TryCalloc(size_t nmemb, size_t size); + +**Available since:** 7.4.0 + +Similar to [`RedisModule_Calloc`](#RedisModule_Calloc), but returns NULL in case of allocation failure, instead +of panicking. + + + +### `RedisModule_Realloc` + + void* RedisModule_Realloc(void *ptr, size_t bytes); + +**Available since:** 4.0.0 + +Use like `realloc()` for memory obtained with [`RedisModule_Alloc()`](#RedisModule_Alloc). + + + +### `RedisModule_TryRealloc` + + void *RedisModule_TryRealloc(void *ptr, size_t bytes); + +**Available since:** 7.4.0 + +Similar to [`RedisModule_Realloc`](#RedisModule_Realloc), but returns NULL in case of allocation failure, +instead of panicking. + + + +### `RedisModule_Free` + + void RedisModule_Free(void *ptr); + +**Available since:** 4.0.0 + +Use like `free()` for memory obtained by [`RedisModule_Alloc()`](#RedisModule_Alloc) and +[`RedisModule_Realloc()`](#RedisModule_Realloc). However you should never try to free with +[`RedisModule_Free()`](#RedisModule_Free) memory allocated with `malloc()` inside your module. + + + +### `RedisModule_Strdup` + + char *RedisModule_Strdup(const char *str); + +**Available since:** 4.0.0 + +Like `strdup()` but returns memory allocated with [`RedisModule_Alloc()`](#RedisModule_Alloc). + + + +### `RedisModule_PoolAlloc` + + void *RedisModule_PoolAlloc(RedisModuleCtx *ctx, size_t bytes); + +**Available since:** 4.0.0 + +Return heap allocated memory that will be freed automatically when the +module callback function returns. Mostly suitable for small allocations +that are short living and must be released when the callback returns +anyway. The returned memory is aligned to the architecture word size +if at least word size bytes are requested, otherwise it is just +aligned to the next power of two, so for example a 3 bytes request is +4 bytes aligned while a 2 bytes request is 2 bytes aligned. + +There is no realloc style function since when this is needed to use the +pool allocator is not a good idea. + +The function returns NULL if `bytes` is 0. + + + +## Commands API + +These functions are used to implement custom Redis commands. 
+ +For examples, see [https://redis.io/docs/latest/develop/reference/modules/](https://redis.io/docs/latest/develop/reference/modules/). + + + +### `RedisModule_IsKeysPositionRequest` + + int RedisModule_IsKeysPositionRequest(RedisModuleCtx *ctx); + +**Available since:** 4.0.0 + +Return non-zero if a module command, that was declared with the +flag "getkeys-api", is called in a special way to get the keys positions +and not to get executed. Otherwise zero is returned. + + + +### `RedisModule_KeyAtPosWithFlags` + + void RedisModule_KeyAtPosWithFlags(RedisModuleCtx *ctx, int pos, int flags); + +**Available since:** 7.0.0 + +When a module command is called in order to obtain the position of +keys, since it was flagged as "getkeys-api" during the registration, +the command implementation checks for this special call using the +[`RedisModule_IsKeysPositionRequest()`](#RedisModule_IsKeysPositionRequest) API and uses this function in +order to report keys. + +The supported flags are the ones used by [`RedisModule_SetCommandInfo`](#RedisModule_SetCommandInfo), see `REDISMODULE_CMD_KEY_`*. + + +The following is an example of how it could be used: + + if (RedisModule_IsKeysPositionRequest(ctx)) { + RedisModule_KeyAtPosWithFlags(ctx, 2, REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS); + RedisModule_KeyAtPosWithFlags(ctx, 1, REDISMODULE_CMD_KEY_RW | REDISMODULE_CMD_KEY_UPDATE | REDISMODULE_CMD_KEY_ACCESS); + } + + Note: in the example above the get keys API could have been handled by key-specs (preferred). + Implementing the getkeys-api is required only when is it not possible to declare key-specs that cover all keys. + + + +### `RedisModule_KeyAtPos` + + void RedisModule_KeyAtPos(RedisModuleCtx *ctx, int pos); + +**Available since:** 4.0.0 + +This API existed before [`RedisModule_KeyAtPosWithFlags`](#RedisModule_KeyAtPosWithFlags) was added, now deprecated and +can be used for compatibility with older versions, before key-specs and flags +were introduced. + + + +### `RedisModule_IsChannelsPositionRequest` + + int RedisModule_IsChannelsPositionRequest(RedisModuleCtx *ctx); + +**Available since:** 7.0.0 + +Return non-zero if a module command, that was declared with the +flag "getchannels-api", is called in a special way to get the channel positions +and not to get executed. Otherwise zero is returned. + + + +### `RedisModule_ChannelAtPosWithFlags` + + void RedisModule_ChannelAtPosWithFlags(RedisModuleCtx *ctx, + int pos, + int flags); + +**Available since:** 7.0.0 + +When a module command is called in order to obtain the position of +channels, since it was flagged as "getchannels-api" during the +registration, the command implementation checks for this special call +using the [`RedisModule_IsChannelsPositionRequest()`](#RedisModule_IsChannelsPositionRequest) API and uses this +function in order to report the channels. + +The supported flags are: +* `REDISMODULE_CMD_CHANNEL_SUBSCRIBE`: This command will subscribe to the channel. +* `REDISMODULE_CMD_CHANNEL_UNSUBSCRIBE`: This command will unsubscribe from this channel. +* `REDISMODULE_CMD_CHANNEL_PUBLISH`: This command will publish to this channel. +* `REDISMODULE_CMD_CHANNEL_PATTERN`: Instead of acting on a specific channel, will act on any + channel specified by the pattern. This is the same access + used by the PSUBSCRIBE and PUNSUBSCRIBE commands available + in Redis. Not intended to be used with PUBLISH permissions. 
+ +The following is an example of how it could be used: + + if (RedisModule_IsChannelsPositionRequest(ctx)) { + RedisModule_ChannelAtPosWithFlags(ctx, 1, REDISMODULE_CMD_CHANNEL_SUBSCRIBE | REDISMODULE_CMD_CHANNEL_PATTERN); + RedisModule_ChannelAtPosWithFlags(ctx, 1, REDISMODULE_CMD_CHANNEL_PUBLISH); + } + +Note: One usage of declaring channels is for evaluating ACL permissions. In this context, +unsubscribing is always allowed, so commands will only be checked against subscribe and +publish permissions. This is preferred over using [`RedisModule_ACLCheckChannelPermissions`](#RedisModule_ACLCheckChannelPermissions), since +it allows the ACLs to be checked before the command is executed. + + + +### `RedisModule_CreateCommand` + + int RedisModule_CreateCommand(RedisModuleCtx *ctx, + const char *name, + RedisModuleCmdFunc cmdfunc, + const char *strflags, + int firstkey, + int lastkey, + int keystep); + +**Available since:** 4.0.0 + +Register a new command in the Redis server, that will be handled by +calling the function pointer 'cmdfunc' using the RedisModule calling +convention. + +The function returns `REDISMODULE_ERR` in these cases: +- If creation of module command is called outside the `RedisModule_OnLoad`. +- The specified command is already busy. +- The command name contains some chars that are not allowed. +- A set of invalid flags were passed. + +Otherwise `REDISMODULE_OK` is returned and the new command is registered. + +This function must be called during the initialization of the module +inside the `RedisModule_OnLoad()` function. Calling this function outside +of the initialization function is not defined. + +The command function type is the following: + + int MyCommand_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc); + +And is supposed to always return `REDISMODULE_OK`. + +The set of flags 'strflags' specify the behavior of the command, and should +be passed as a C string composed of space separated words, like for +example "write deny-oom". The set of flags are: + +* **"write"**: The command may modify the data set (it may also read + from it). +* **"readonly"**: The command returns data from keys but never writes. +* **"admin"**: The command is an administrative command (may change + replication or perform similar tasks). +* **"deny-oom"**: The command may use additional memory and should be + denied during out of memory conditions. +* **"deny-script"**: Don't allow this command in Lua scripts. +* **"allow-loading"**: Allow this command while the server is loading data. + Only commands not interacting with the data set + should be allowed to run in this mode. If not sure + don't use this flag. +* **"pubsub"**: The command publishes things on Pub/Sub channels. +* **"random"**: The command may have different outputs even starting + from the same input arguments and key values. + Starting from Redis 7.0 this flag has been deprecated. + Declaring a command as "random" can be done using + command tips, see https://redis.io/docs/latest/develop/reference/command-tips/. +* **"allow-stale"**: The command is allowed to run on slaves that don't + serve stale data. Don't use if you don't know what + this means. +* **"no-monitor"**: Don't propagate the command on monitor. Use this if + the command has sensitive data among the arguments. +* **"no-slowlog"**: Don't log this command in the slowlog. Use this if + the command has sensitive data among the arguments. 
+* **"fast"**: The command time complexity is not greater + than O(log(N)) where N is the size of the collection or + anything else representing the normal scalability + issue with the command. +* **"getkeys-api"**: The command implements the interface to return + the arguments that are keys. Used when start/stop/step + is not enough because of the command syntax. +* **"no-cluster"**: The command should not register in Redis Cluster + since is not designed to work with it because, for + example, is unable to report the position of the + keys, programmatically creates key names, or any + other reason. +* **"no-auth"**: This command can be run by an un-authenticated client. + Normally this is used by a command that is used + to authenticate a client. +* **"may-replicate"**: This command may generate replication traffic, even + though it's not a write command. +* **"no-mandatory-keys"**: All the keys this command may take are optional +* **"blocking"**: The command has the potential to block the client. +* **"allow-busy"**: Permit the command while the server is blocked either by + a script or by a slow module command, see + RedisModule_Yield. +* **"getchannels-api"**: The command implements the interface to return + the arguments that are channels. +* **"internal"**: Internal command, one that should not be exposed to the user connections. + For example, module commands that are called by the modules, + commands that do not perform ACL validations (relying on earlier checks) + +The last three parameters specify which arguments of the new command are +Redis keys. See [https://redis.io/commands/command](https://redis.io/commands/command) for more information. + +* `firstkey`: One-based index of the first argument that's a key. + Position 0 is always the command name itself. + 0 for commands with no keys. +* `lastkey`: One-based index of the last argument that's a key. + Negative numbers refer to counting backwards from the last + argument (-1 means the last argument provided) + 0 for commands with no keys. +* `keystep`: Step between first and last key indexes. + 0 for commands with no keys. + +This information is used by ACL, Cluster and the `COMMAND` command. + +NOTE: The scheme described above serves a limited purpose and can +only be used to find keys that exist at constant indices. +For non-trivial key arguments, you may pass 0,0,0 and use +[`RedisModule_SetCommandInfo`](#RedisModule_SetCommandInfo) to set key specs using a more advanced scheme and use +[`RedisModule_SetCommandACLCategories`](#RedisModule_SetCommandACLCategories) to set Redis ACL categories of the commands. + + + +### `RedisModule_GetCommand` + + RedisModuleCommand *RedisModule_GetCommand(RedisModuleCtx *ctx, + const char *name); + +**Available since:** 7.0.0 + +Get an opaque structure, representing a module command, by command name. +This structure is used in some of the command-related APIs. + +NULL is returned in case of the following errors: + +* Command not found +* The command is not a module command +* The command doesn't belong to the calling module + + + +### `RedisModule_CreateSubcommand` + + int RedisModule_CreateSubcommand(RedisModuleCommand *parent, + const char *name, + RedisModuleCmdFunc cmdfunc, + const char *strflags, + int firstkey, + int lastkey, + int keystep); + +**Available since:** 7.0.0 + +Very similar to [`RedisModule_CreateCommand`](#RedisModule_CreateCommand) except that it is used to create +a subcommand, associated with another, container, command. 

Example: If a module has a configuration command, MODULE.CONFIG, then
GET and SET should be individual subcommands, while MODULE.CONFIG is
a command, but should not be registered with a valid `funcptr`:

    if (RedisModule_CreateCommand(ctx,"module.config",NULL,"",0,0,0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;

    RedisModuleCommand *parent = RedisModule_GetCommand(ctx,"module.config");

    if (RedisModule_CreateSubcommand(parent,"set",cmd_config_set,"",0,0,0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;

    if (RedisModule_CreateSubcommand(parent,"get",cmd_config_get,"",0,0,0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;

Returns `REDISMODULE_OK` on success and `REDISMODULE_ERR` in case of the following errors:

* Error while parsing `strflags`
* Command is marked as `no-cluster` but cluster mode is enabled
* `parent` is already a subcommand (we do not allow more than one level of command nesting)
* `parent` is a command with an implementation (`RedisModuleCmdFunc`) (A parent command should be a pure container of subcommands)
* `parent` already has a subcommand called `name`
* Creating a subcommand is called outside of `RedisModule_OnLoad`.

### `RedisModule_AddACLCategory`

    int RedisModule_AddACLCategory(RedisModuleCtx *ctx, const char *name);

**Available since:** 7.4.0

[`RedisModule_AddACLCategory`](#RedisModule_AddACLCategory) can be used to add new ACL command categories. Category names
can only contain alphanumeric characters, underscores, or dashes. Categories can only be added
during the `RedisModule_OnLoad` function. Once a category has been added, it can not be removed.
Any module can register a command to any added categories using [`RedisModule_SetCommandACLCategories`](#RedisModule_SetCommandACLCategories).

Returns:
- `REDISMODULE_OK` on successfully adding the new ACL category.
- `REDISMODULE_ERR` on failure.

On error the errno is set to:
- EINVAL if the name contains invalid characters.
- EBUSY if the category name already exists.
- ENOMEM if the number of categories reached the max limit of 64 categories.

### `RedisModule_SetCommandACLCategories`

    int RedisModule_SetCommandACLCategories(RedisModuleCommand *command,
                                            const char *aclflags);

**Available since:** 7.2.0

[`RedisModule_SetCommandACLCategories`](#RedisModule_SetCommandACLCategories) can be used to set ACL categories to module
commands and subcommands. The set of ACL categories should be passed as
a space separated C string 'aclflags'.

For example, the acl flags 'write slow' mark the command as part of the write and
slow ACL categories.

On success `REDISMODULE_OK` is returned. On error `REDISMODULE_ERR` is returned.

This function can only be called during the `RedisModule_OnLoad` function. If called
outside of this function, an error is returned.

### `RedisModule_SetCommandInfo`

    int RedisModule_SetCommandInfo(RedisModuleCommand *command,
                                   const RedisModuleCommandInfo *info);

**Available since:** 7.0.0

Set additional command information.

Affects the output of `COMMAND`, `COMMAND INFO` and `COMMAND DOCS`, Cluster,
ACL and is used to filter commands with the wrong number of arguments before
the call reaches the module code.

This function can be called after creating a command using [`RedisModule_CreateCommand`](#RedisModule_CreateCommand)
and fetching the command pointer using [`RedisModule_GetCommand`](#RedisModule_GetCommand).
The information can +only be set once for each command and has the following structure: + + typedef struct RedisModuleCommandInfo { + const RedisModuleCommandInfoVersion *version; + const char *summary; + const char *complexity; + const char *since; + RedisModuleCommandHistoryEntry *history; + const char *tips; + int arity; + RedisModuleCommandKeySpec *key_specs; + RedisModuleCommandArg *args; + } RedisModuleCommandInfo; + +All fields except `version` are optional. Explanation of the fields: + +- `version`: This field enables compatibility with different Redis versions. + Always set this field to `REDISMODULE_COMMAND_INFO_VERSION`. + +- `summary`: A short description of the command (optional). + +- `complexity`: Complexity description (optional). + +- `since`: The version where the command was introduced (optional). + Note: The version specified should be the module's, not Redis version. + +- `history`: An array of `RedisModuleCommandHistoryEntry` (optional), which is + a struct with the following fields: + + const char *since; + const char *changes; + + `since` is a version string and `changes` is a string describing the + changes. The array is terminated by a zeroed entry, i.e. an entry with + both strings set to NULL. + +- `tips`: A string of space-separated tips regarding this command, meant for + clients and proxies. See [https://redis.io/docs/latest/develop/reference/command-tips/](https://redis.io/docs/latest/develop/reference/command-tips/). + +- `arity`: Number of arguments, including the command name itself. A positive + number specifies an exact number of arguments and a negative number + specifies a minimum number of arguments, so use -N to say >= N. Redis + validates a call before passing it to a module, so this can replace an + arity check inside the module command implementation. A value of 0 (or an + omitted arity field) is equivalent to -2 if the command has sub commands + and -1 otherwise. + +- `key_specs`: An array of `RedisModuleCommandKeySpec`, terminated by an + element memset to zero. This is a scheme that tries to describe the + positions of key arguments better than the old [`RedisModule_CreateCommand`](#RedisModule_CreateCommand) arguments + `firstkey`, `lastkey`, `keystep` and is needed if those three are not + enough to describe the key positions. There are two steps to retrieve key + positions: *begin search* (BS) in which index should find the first key and + *find keys* (FK) which, relative to the output of BS, describes how can we + will which arguments are keys. Additionally, there are key specific flags. + + Key-specs cause the triplet (firstkey, lastkey, keystep) given in + RedisModule_CreateCommand to be recomputed, but it is still useful to provide + these three parameters in RedisModule_CreateCommand, to better support old Redis + versions where RedisModule_SetCommandInfo is not available. + + Note that key-specs don't fully replace the "getkeys-api" (see + RedisModule_CreateCommand, RedisModule_IsKeysPositionRequest and RedisModule_KeyAtPosWithFlags) so + it may be a good idea to supply both key-specs and implement the + getkeys-api. 
+ + A key-spec has the following structure: + + typedef struct RedisModuleCommandKeySpec { + const char *notes; + uint64_t flags; + RedisModuleKeySpecBeginSearchType begin_search_type; + union { + struct { + int pos; + } index; + struct { + const char *keyword; + int startfrom; + } keyword; + } bs; + RedisModuleKeySpecFindKeysType find_keys_type; + union { + struct { + int lastkey; + int keystep; + int limit; + } range; + struct { + int keynumidx; + int firstkey; + int keystep; + } keynum; + } fk; + } RedisModuleCommandKeySpec; + + Explanation of the fields of RedisModuleCommandKeySpec: + + * `notes`: Optional notes or clarifications about this key spec. + + * `flags`: A bitwise or of key-spec flags described below. + + * `begin_search_type`: This describes how the first key is discovered. + There are two ways to determine the first key: + + * `REDISMODULE_KSPEC_BS_UNKNOWN`: There is no way to tell where the + key args start. + * `REDISMODULE_KSPEC_BS_INDEX`: Key args start at a constant index. + * `REDISMODULE_KSPEC_BS_KEYWORD`: Key args start just after a + specific keyword. + + * `bs`: This is a union in which the `index` or `keyword` branch is used + depending on the value of the `begin_search_type` field. + + * `bs.index.pos`: The index from which we start the search for keys. + (`REDISMODULE_KSPEC_BS_INDEX` only.) + + * `bs.keyword.keyword`: The keyword (string) that indicates the + beginning of key arguments. (`REDISMODULE_KSPEC_BS_KEYWORD` only.) + + * `bs.keyword.startfrom`: An index in argv from which to start + searching. Can be negative, which means start search from the end, + in reverse. Example: -2 means to start in reverse from the + penultimate argument. (`REDISMODULE_KSPEC_BS_KEYWORD` only.) + + * `find_keys_type`: After the "begin search", this describes which + arguments are keys. The strategies are: + + * `REDISMODULE_KSPEC_BS_UNKNOWN`: There is no way to tell where the + key args are located. + * `REDISMODULE_KSPEC_FK_RANGE`: Keys end at a specific index (or + relative to the last argument). + * `REDISMODULE_KSPEC_FK_KEYNUM`: There's an argument that contains + the number of key args somewhere before the keys themselves. + + `find_keys_type` and `fk` can be omitted if this keyspec describes + exactly one key. + + * `fk`: This is a union in which the `range` or `keynum` branch is used + depending on the value of the `find_keys_type` field. + + * `fk.range` (for `REDISMODULE_KSPEC_FK_RANGE`): A struct with the + following fields: + + * `lastkey`: Index of the last key relative to the result of the + begin search step. Can be negative, in which case it's not + relative. -1 indicates the last argument, -2 one before the + last and so on. + + * `keystep`: How many arguments should we skip after finding a + key, in order to find the next one? + + * `limit`: If `lastkey` is -1, we use `limit` to stop the search + by a factor. 0 and 1 mean no limit. 2 means 1/2 of the + remaining args, 3 means 1/3, and so on. + + * `fk.keynum` (for `REDISMODULE_KSPEC_FK_KEYNUM`): A struct with the + following fields: + + * `keynumidx`: Index of the argument containing the number of + keys to come, relative to the result of the begin search step. + + * `firstkey`: Index of the fist key relative to the result of the + begin search step. (Usually it's just after `keynumidx`, in + which case it should be set to `keynumidx + 1`.) + + * `keystep`: How many arguments should we skip after finding a + key, in order to find the next one? 
+
+    Key-spec flags:
+
+    The first four refer to what the command actually does with the *value or
+    metadata of the key*, and not necessarily the user data or how it affects
+    it. Each key-spec must have exactly one of these. Any operation
+    that's not distinctly deletion, overwrite or read-only would be marked as
+    RW.
+
+    * `REDISMODULE_CMD_KEY_RO`: Read-Only. Reads the value of the key, but
+      doesn't necessarily return it.
+
+    * `REDISMODULE_CMD_KEY_RW`: Read-Write. Modifies the data stored in the
+      value of the key or its metadata.
+
+    * `REDISMODULE_CMD_KEY_OW`: Overwrite. Overwrites the data stored in the
+      value of the key.
+
+    * `REDISMODULE_CMD_KEY_RM`: Deletes the key.
+
+    The next four refer to *user data inside the value of the key*, not the
+    metadata like LRU, type, cardinality. It refers to the logical operation
+    on the user's data (actual input strings or TTL), being
+    used/returned/copied/changed. It doesn't refer to modification or
+    returning of metadata (like type, count, presence of data). ACCESS can be
+    combined with one of the write operations INSERT, DELETE or UPDATE. Any
+    write that's not an INSERT or a DELETE would be UPDATE.
+
+    * `REDISMODULE_CMD_KEY_ACCESS`: Returns, copies or uses the user data
+      from the value of the key.
+
+    * `REDISMODULE_CMD_KEY_UPDATE`: Updates data to the value, new value may
+      depend on the old value.
+
+    * `REDISMODULE_CMD_KEY_INSERT`: Adds data to the value with no chance of
+      modification or deletion of existing data.
+
+    * `REDISMODULE_CMD_KEY_DELETE`: Explicitly deletes some content from the
+      value of the key.
+
+    Other flags:
+
+    * `REDISMODULE_CMD_KEY_NOT_KEY`: The key is not actually a key, but
+      should be routed in cluster mode as if it was a key.
+
+    * `REDISMODULE_CMD_KEY_INCOMPLETE`: The keyspec might not point out all
+      the keys it should cover.
+
+    * `REDISMODULE_CMD_KEY_VARIABLE_FLAGS`: Some keys might have different
+      flags depending on arguments.
+
+- `args`: An array of `RedisModuleCommandArg`, terminated by an element memset
+  to zero. `RedisModuleCommandArg` is a structure with the fields described
+  below.
+
+        typedef struct RedisModuleCommandArg {
+            const char *name;
+            RedisModuleCommandArgType type;
+            int key_spec_index;
+            const char *token;
+            const char *summary;
+            const char *since;
+            int flags;
+            struct RedisModuleCommandArg *subargs;
+        } RedisModuleCommandArg;
+
+  Explanation of the fields:
+
+  * `name`: Name of the argument.
+
+  * `type`: The type of the argument. See below for details. The types
+    `REDISMODULE_ARG_TYPE_ONEOF` and `REDISMODULE_ARG_TYPE_BLOCK` require
+    an argument to have sub-arguments, i.e. `subargs`.
+
+  * `key_spec_index`: If the `type` is `REDISMODULE_ARG_TYPE_KEY` you must
+    provide the index of the key-spec associated with this argument. See
+    `key_specs` above. If the argument is not a key, you may specify -1.
+
+  * `token`: The token preceding the argument (optional). Example: the
+    argument `seconds` in `SET` has a token `EX`. If the argument consists
+    of only a token (for example `NX` in `SET`) the type should be
+    `REDISMODULE_ARG_TYPE_PURE_TOKEN` and `value` should be NULL.
+
+  * `summary`: A short description of the argument (optional).
+
+  * `since`: The first version which included this argument (optional).
+
+  * `flags`: A bitwise or of the macros `REDISMODULE_CMD_ARG_*`. See below.
+
+  * `value`: The display-value of the argument. This string is what should
+    be displayed when creating the command syntax from the output of
+    `COMMAND`. 
If `token` is not NULL, it should also be displayed. + + Explanation of `RedisModuleCommandArgType`: + + * `REDISMODULE_ARG_TYPE_STRING`: String argument. + * `REDISMODULE_ARG_TYPE_INTEGER`: Integer argument. + * `REDISMODULE_ARG_TYPE_DOUBLE`: Double-precision float argument. + * `REDISMODULE_ARG_TYPE_KEY`: String argument representing a keyname. + * `REDISMODULE_ARG_TYPE_PATTERN`: String, but regex pattern. + * `REDISMODULE_ARG_TYPE_UNIX_TIME`: Integer, but Unix timestamp. + * `REDISMODULE_ARG_TYPE_PURE_TOKEN`: Argument doesn't have a placeholder. + It's just a token without a value. Example: the `KEEPTTL` option of the + `SET` command. + * `REDISMODULE_ARG_TYPE_ONEOF`: Used when the user can choose only one of + a few sub-arguments. Requires `subargs`. Example: the `NX` and `XX` + options of `SET`. + * `REDISMODULE_ARG_TYPE_BLOCK`: Used when one wants to group together + several sub-arguments, usually to apply something on all of them, like + making the entire group "optional". Requires `subargs`. Example: the + `LIMIT offset count` parameters in `ZRANGE`. + + Explanation of the command argument flags: + + * `REDISMODULE_CMD_ARG_OPTIONAL`: The argument is optional (like GET in + the SET command). + * `REDISMODULE_CMD_ARG_MULTIPLE`: The argument may repeat itself (like + key in DEL). + * `REDISMODULE_CMD_ARG_MULTIPLE_TOKEN`: The argument may repeat itself, + and so does its token (like `GET pattern` in SORT). + +On success `REDISMODULE_OK` is returned. On error `REDISMODULE_ERR` is returned +and `errno` is set to EINVAL if invalid info was provided or EEXIST if info +has already been set. If the info is invalid, a warning is logged explaining +which part of the info is invalid and why. + + + +## Module information and time measurement + + + +### `RedisModule_IsModuleNameBusy` + + int RedisModule_IsModuleNameBusy(const char *name); + +**Available since:** 4.0.3 + +Return non-zero if the module name is busy. +Otherwise zero is returned. + + + +### `RedisModule_Milliseconds` + + mstime_t RedisModule_Milliseconds(void); + +**Available since:** 4.0.0 + +Return the current UNIX time in milliseconds. + + + +### `RedisModule_MonotonicMicroseconds` + + uint64_t RedisModule_MonotonicMicroseconds(void); + +**Available since:** 7.0.0 + +Return counter of micro-seconds relative to an arbitrary point in time. + + + +### `RedisModule_Microseconds` + + ustime_t RedisModule_Microseconds(void); + +**Available since:** 7.2.0 + +Return the current UNIX time in microseconds + + + +### `RedisModule_CachedMicroseconds` + + ustime_t RedisModule_CachedMicroseconds(void); + +**Available since:** 7.2.0 + +Return the cached UNIX time in microseconds. +It is updated in the server cron job and before executing a command. +It is useful for complex call stacks, such as a command causing a +key space notification, causing a module to execute a [`RedisModule_Call`](#RedisModule_Call), +causing another notification, etc. +It makes sense that all this callbacks would use the same clock. + + + +### `RedisModule_BlockedClientMeasureTimeStart` + + int RedisModule_BlockedClientMeasureTimeStart(RedisModuleBlockedClient *bc); + +**Available since:** 6.2.0 + +Mark a point in time that will be used as the start time to calculate +the elapsed execution time when [`RedisModule_BlockedClientMeasureTimeEnd()`](#RedisModule_BlockedClientMeasureTimeEnd) is called. 
+
+Within the same command, you can call multiple times
+[`RedisModule_BlockedClientMeasureTimeStart()`](#RedisModule_BlockedClientMeasureTimeStart) and [`RedisModule_BlockedClientMeasureTimeEnd()`](#RedisModule_BlockedClientMeasureTimeEnd)
+to accumulate independent time intervals to the background duration.
+This method always returns `REDISMODULE_OK`.
+
+This function is not thread safe. If it is used from a module thread and from the
+blocked-client callback (possibly the main thread) simultaneously, it is recommended
+to protect the calls with a lock owned by the caller instead of the GIL.
+
+
+
+### `RedisModule_BlockedClientMeasureTimeEnd`
+
+    int RedisModule_BlockedClientMeasureTimeEnd(RedisModuleBlockedClient *bc);
+
+**Available since:** 6.2.0
+
+Mark a point in time that will be used as the end time
+to calculate the elapsed execution time.
+On success `REDISMODULE_OK` is returned.
+This method only returns `REDISMODULE_ERR` if no start time was
+previously defined (meaning [`RedisModule_BlockedClientMeasureTimeStart`](#RedisModule_BlockedClientMeasureTimeStart) was not called).
+
+This function is not thread safe. If it is used from a module thread and from the
+blocked-client callback (possibly the main thread) simultaneously, it is recommended
+to protect the calls with a lock owned by the caller instead of the GIL.
+
+
+
+### `RedisModule_Yield`
+
+    void RedisModule_Yield(RedisModuleCtx *ctx, int flags, const char *busy_reply);
+
+**Available since:** 7.0.0
+
+This API allows modules to let Redis process background tasks, and some
+commands during long blocking execution of a module command.
+The module can call this API periodically.
+The `flags` argument is a bit mask of the following:
+
+- `REDISMODULE_YIELD_FLAG_NONE`: No special flags, can perform some background
+  operations, but not process client commands.
+- `REDISMODULE_YIELD_FLAG_CLIENTS`: Redis can also process client commands.
+
+The `busy_reply` argument is optional, and can be used to control the verbose
+error string after the `-BUSY` error code.
+
+When `REDISMODULE_YIELD_FLAG_CLIENTS` is used, Redis will only start
+processing client commands after the time defined by the
+`busy-reply-threshold` config, in which case Redis will start rejecting most
+commands with `-BUSY` error, but allow the ones marked with the `allow-busy`
+flag to be executed.
+This API can also be used in a thread safe context (while locked), and during
+loading (in the `rdb_load` callback, in which case it'll reject commands with
+the -LOADING error).
+
+
+
+### `RedisModule_SetModuleOptions`
+
+    void RedisModule_SetModuleOptions(RedisModuleCtx *ctx, int options);
+
+**Available since:** 6.0.0
+
+Set flags defining module capabilities or behavior.
+
+`REDISMODULE_OPTIONS_HANDLE_IO_ERRORS`:
+Generally, modules don't need to bother with this, as the process will just
+terminate if a read error happens, however, setting this flag would allow
+repl-diskless-load to work if enabled.
+The module should use [`RedisModule_IsIOError`](#RedisModule_IsIOError) after reads, before using the
+data that was read, and in case of error, propagate it upwards, and also be
+able to release the partially populated value and all its allocations.
+
+`REDISMODULE_OPTION_NO_IMPLICIT_SIGNAL_MODIFIED`:
+See [`RedisModule_SignalModifiedKey()`](#RedisModule_SignalModifiedKey).
+
+`REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD`:
+Setting this flag indicates module awareness of diskless async replication (repl-diskless-load=swapdb)
+and that Redis could be serving reads during replication instead of blocking with LOADING status. 
+ +`REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS`: +Declare that the module wants to get nested key-space notifications. +By default, Redis will not fire key-space notifications that happened inside +a key-space notification callback. This flag allows to change this behavior +and fire nested key-space notifications. Notice: if enabled, the module +should protected itself from infinite recursion. + + + +### `RedisModule_SignalModifiedKey` + + int RedisModule_SignalModifiedKey(RedisModuleCtx *ctx, + RedisModuleString *keyname); + +**Available since:** 6.0.0 + +Signals that the key is modified from user's perspective (i.e. invalidate WATCH +and client side caching). + +This is done automatically when a key opened for writing is closed, unless +the option `REDISMODULE_OPTION_NO_IMPLICIT_SIGNAL_MODIFIED` has been set using +[`RedisModule_SetModuleOptions()`](#RedisModule_SetModuleOptions). + + + +## Automatic memory management for modules + + + +### `RedisModule_AutoMemory` + + void RedisModule_AutoMemory(RedisModuleCtx *ctx); + +**Available since:** 4.0.0 + +Enable automatic memory management. + +The function must be called as the first function of a command implementation +that wants to use automatic memory. + +When enabled, automatic memory management tracks and automatically frees +keys, call replies and Redis string objects once the command returns. In most +cases this eliminates the need of calling the following functions: + +1. [`RedisModule_CloseKey()`](#RedisModule_CloseKey) +2. [`RedisModule_FreeCallReply()`](#RedisModule_FreeCallReply) +3. [`RedisModule_FreeString()`](#RedisModule_FreeString) + +These functions can still be used with automatic memory management enabled, +to optimize loops that make numerous allocations for example. + + + +## String objects APIs + + + +### `RedisModule_CreateString` + + RedisModuleString *RedisModule_CreateString(RedisModuleCtx *ctx, + const char *ptr, + size_t len); + +**Available since:** 4.0.0 + +Create a new module string object. The returned string must be freed +with [`RedisModule_FreeString()`](#RedisModule_FreeString), unless automatic memory is enabled. + +The string is created by copying the `len` bytes starting +at `ptr`. No reference is retained to the passed buffer. + +The module context 'ctx' is optional and may be NULL if you want to create +a string out of the context scope. However in that case, the automatic +memory management will not be available, and the string memory must be +managed manually. + + + +### `RedisModule_CreateStringPrintf` + + RedisModuleString *RedisModule_CreateStringPrintf(RedisModuleCtx *ctx, + const char *fmt, + ...); + +**Available since:** 4.0.0 + +Create a new module string object from a printf format and arguments. +The returned string must be freed with [`RedisModule_FreeString()`](#RedisModule_FreeString), unless +automatic memory is enabled. + +The string is created using the sds formatter function `sdscatvprintf()`. + +The passed context 'ctx' may be NULL if necessary, see the +[`RedisModule_CreateString()`](#RedisModule_CreateString) documentation for more info. + + + +### `RedisModule_CreateStringFromLongLong` + + RedisModuleString *RedisModule_CreateStringFromLongLong(RedisModuleCtx *ctx, + long long ll); + +**Available since:** 4.0.0 + +Like [`RedisModule_CreateString()`](#RedisModule_CreateString), but creates a string starting from a `long long` +integer instead of taking a buffer and its length. 
+ +The returned string must be released with [`RedisModule_FreeString()`](#RedisModule_FreeString) or by +enabling automatic memory management. + +The passed context 'ctx' may be NULL if necessary, see the +[`RedisModule_CreateString()`](#RedisModule_CreateString) documentation for more info. + + + +### `RedisModule_CreateStringFromULongLong` + + RedisModuleString *RedisModule_CreateStringFromULongLong(RedisModuleCtx *ctx, + unsigned long long ull); + +**Available since:** 7.0.3 + +Like [`RedisModule_CreateString()`](#RedisModule_CreateString), but creates a string starting from a `unsigned long long` +integer instead of taking a buffer and its length. + +The returned string must be released with [`RedisModule_FreeString()`](#RedisModule_FreeString) or by +enabling automatic memory management. + +The passed context 'ctx' may be NULL if necessary, see the +[`RedisModule_CreateString()`](#RedisModule_CreateString) documentation for more info. + + + +### `RedisModule_CreateStringFromDouble` + + RedisModuleString *RedisModule_CreateStringFromDouble(RedisModuleCtx *ctx, + double d); + +**Available since:** 6.0.0 + +Like [`RedisModule_CreateString()`](#RedisModule_CreateString), but creates a string starting from a double +instead of taking a buffer and its length. + +The returned string must be released with [`RedisModule_FreeString()`](#RedisModule_FreeString) or by +enabling automatic memory management. + + + +### `RedisModule_CreateStringFromLongDouble` + + RedisModuleString *RedisModule_CreateStringFromLongDouble(RedisModuleCtx *ctx, + long double ld, + int humanfriendly); + +**Available since:** 6.0.0 + +Like [`RedisModule_CreateString()`](#RedisModule_CreateString), but creates a string starting from a long +double. + +The returned string must be released with [`RedisModule_FreeString()`](#RedisModule_FreeString) or by +enabling automatic memory management. + +The passed context 'ctx' may be NULL if necessary, see the +[`RedisModule_CreateString()`](#RedisModule_CreateString) documentation for more info. + + + +### `RedisModule_CreateStringFromString` + + RedisModuleString *RedisModule_CreateStringFromString(RedisModuleCtx *ctx, + const RedisModuleString *str); + +**Available since:** 4.0.0 + +Like [`RedisModule_CreateString()`](#RedisModule_CreateString), but creates a string starting from another +`RedisModuleString`. + +The returned string must be released with [`RedisModule_FreeString()`](#RedisModule_FreeString) or by +enabling automatic memory management. + +The passed context 'ctx' may be NULL if necessary, see the +[`RedisModule_CreateString()`](#RedisModule_CreateString) documentation for more info. + + + +### `RedisModule_CreateStringFromStreamID` + + RedisModuleString *RedisModule_CreateStringFromStreamID(RedisModuleCtx *ctx, + const RedisModuleStreamID *id); + +**Available since:** 6.2.0 + +Creates a string from a stream ID. The returned string must be released with +[`RedisModule_FreeString()`](#RedisModule_FreeString), unless automatic memory is enabled. + +The passed context `ctx` may be NULL if necessary. See the +[`RedisModule_CreateString()`](#RedisModule_CreateString) documentation for more info. + + + +### `RedisModule_FreeString` + + void RedisModule_FreeString(RedisModuleCtx *ctx, RedisModuleString *str); + +**Available since:** 4.0.0 + +Free a module string object obtained with one of the Redis modules API calls +that return new string objects. + +It is possible to call this function even when automatic memory management +is enabled. 
In that case the string will be released ASAP and removed +from the pool of string to release at the end. + +If the string was created with a NULL context 'ctx', it is also possible to +pass ctx as NULL when releasing the string (but passing a context will not +create any issue). Strings created with a context should be freed also passing +the context, so if you want to free a string out of context later, make sure +to create it using a NULL context. + +This API is not thread safe, access to these retained strings (if they originated +from a client command arguments) must be done with GIL locked. + + + +### `RedisModule_RetainString` + + void RedisModule_RetainString(RedisModuleCtx *ctx, RedisModuleString *str); + +**Available since:** 4.0.0 + +Every call to this function, will make the string 'str' requiring +an additional call to [`RedisModule_FreeString()`](#RedisModule_FreeString) in order to really +free the string. Note that the automatic freeing of the string obtained +enabling modules automatic memory management counts for one +[`RedisModule_FreeString()`](#RedisModule_FreeString) call (it is just executed automatically). + +Normally you want to call this function when, at the same time +the following conditions are true: + +1. You have automatic memory management enabled. +2. You want to create string objects. +3. Those string objects you create need to live *after* the callback + function(for example a command implementation) creating them returns. + +Usually you want this in order to store the created string object +into your own data structure, for example when implementing a new data +type. + +Note that when memory management is turned off, you don't need +any call to RetainString() since creating a string will always result +into a string that lives after the callback function returns, if +no FreeString() call is performed. + +It is possible to call this function with a NULL context. + +When strings are going to be retained for an extended duration, it is good +practice to also call [`RedisModule_TrimStringAllocation()`](#RedisModule_TrimStringAllocation) in order to +optimize memory usage. + +Threaded modules that reference retained strings from other threads *must* +explicitly trim the allocation as soon as the string is retained. Not doing +so may result with automatic trimming which is not thread safe. + +This API is not thread safe, access to these retained strings (if they originated +from a client command arguments) must be done with GIL locked. + + + +### `RedisModule_HoldString` + + RedisModuleString* RedisModule_HoldString(RedisModuleCtx *ctx, + RedisModuleString *str); + +**Available since:** 6.0.7 + + +This function can be used instead of [`RedisModule_RetainString()`](#RedisModule_RetainString). +The main difference between the two is that this function will always +succeed, whereas [`RedisModule_RetainString()`](#RedisModule_RetainString) may fail because of an +assertion. + +The function returns a pointer to `RedisModuleString`, which is owned +by the caller. It requires a call to [`RedisModule_FreeString()`](#RedisModule_FreeString) to free +the string when automatic memory management is disabled for the context. +When automatic memory management is enabled, you can either call +[`RedisModule_FreeString()`](#RedisModule_FreeString) or let the automation free it. + +This function is more efficient than [`RedisModule_CreateStringFromString()`](#RedisModule_CreateStringFromString) +because whenever possible, it avoids copying the underlying +`RedisModuleString`. 
The disadvantage of using this function is that it +might not be possible to use [`RedisModule_StringAppendBuffer()`](#RedisModule_StringAppendBuffer) on the +returned `RedisModuleString`. + +It is possible to call this function with a NULL context. + +When strings are going to be held for an extended duration, it is good +practice to also call [`RedisModule_TrimStringAllocation()`](#RedisModule_TrimStringAllocation) in order to +optimize memory usage. + +Threaded modules that reference held strings from other threads *must* +explicitly trim the allocation as soon as the string is held. Not doing +so may result with automatic trimming which is not thread safe. + +This API is not thread safe, access to these retained strings (if they originated +from a client command arguments) must be done with GIL locked. + + + +### `RedisModule_StringPtrLen` + + const char *RedisModule_StringPtrLen(const RedisModuleString *str, + size_t *len); + +**Available since:** 4.0.0 + +Given a string module object, this function returns the string pointer +and length of the string. The returned pointer and length should only +be used for read only accesses and never modified. + + + +### `RedisModule_StringToLongLong` + + int RedisModule_StringToLongLong(const RedisModuleString *str, long long *ll); + +**Available since:** 4.0.0 + +Convert the string into a `long long` integer, storing it at `*ll`. +Returns `REDISMODULE_OK` on success. If the string can't be parsed +as a valid, strict `long long` (no spaces before/after), `REDISMODULE_ERR` +is returned. + + + +### `RedisModule_StringToULongLong` + + int RedisModule_StringToULongLong(const RedisModuleString *str, + unsigned long long *ull); + +**Available since:** 7.0.3 + +Convert the string into a `unsigned long long` integer, storing it at `*ull`. +Returns `REDISMODULE_OK` on success. If the string can't be parsed +as a valid, strict `unsigned long long` (no spaces before/after), `REDISMODULE_ERR` +is returned. + + + +### `RedisModule_StringToDouble` + + int RedisModule_StringToDouble(const RedisModuleString *str, double *d); + +**Available since:** 4.0.0 + +Convert the string into a double, storing it at `*d`. +Returns `REDISMODULE_OK` on success or `REDISMODULE_ERR` if the string is +not a valid string representation of a double value. + + + +### `RedisModule_StringToLongDouble` + + int RedisModule_StringToLongDouble(const RedisModuleString *str, + long double *ld); + +**Available since:** 6.0.0 + +Convert the string into a long double, storing it at `*ld`. +Returns `REDISMODULE_OK` on success or `REDISMODULE_ERR` if the string is +not a valid string representation of a double value. + + + +### `RedisModule_StringToStreamID` + + int RedisModule_StringToStreamID(const RedisModuleString *str, + RedisModuleStreamID *id); + +**Available since:** 6.2.0 + +Convert the string into a stream ID, storing it at `*id`. +Returns `REDISMODULE_OK` on success and returns `REDISMODULE_ERR` if the string +is not a valid string representation of a stream ID. The special IDs "+" and +"-" are allowed. + + + +### `RedisModule_StringCompare` + + int RedisModule_StringCompare(const RedisModuleString *a, + const RedisModuleString *b); + +**Available since:** 4.0.0 + +Compare two string objects, returning -1, 0 or 1 respectively if +a < b, a == b, a > b. Strings are compared byte by byte as two +binary blobs without any encoding care / collation attempt. 
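+
+To illustrate how these string helpers are typically combined, here is a
+minimal sketch of a command implementation that parses one integer argument
+and builds its reply from it (the function name, reply format and error
+message are hypothetical, not part of the API):
+
+    int MyCmd_Impl(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
+        if (argc != 3) return RedisModule_WrongArity(ctx);
+
+        long long count;
+        if (RedisModule_StringToLongLong(argv[2], &count) == REDISMODULE_ERR)
+            return RedisModule_ReplyWithError(ctx, "ERR count is not an integer");
+
+        size_t len;
+        const char *name = RedisModule_StringPtrLen(argv[1], &len); /* read-only view */
+
+        /* Build a new module string from both arguments and reply with it. */
+        RedisModuleString *reply =
+            RedisModule_CreateStringPrintf(ctx, "%.*s:%lld", (int)len, name, count);
+        RedisModule_ReplyWithString(ctx, reply);
+        RedisModule_FreeString(ctx, reply);
+        return REDISMODULE_OK;
+    }
+
+With automatic memory management enabled, the explicit [`RedisModule_FreeString()`](#RedisModule_FreeString)
+call above could be omitted, as described earlier in this section.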
+ + + +### `RedisModule_StringAppendBuffer` + + int RedisModule_StringAppendBuffer(RedisModuleCtx *ctx, + RedisModuleString *str, + const char *buf, + size_t len); + +**Available since:** 4.0.0 + +Append the specified buffer to the string 'str'. The string must be a +string created by the user that is referenced only a single time, otherwise +`REDISMODULE_ERR` is returned and the operation is not performed. + + + +### `RedisModule_TrimStringAllocation` + + void RedisModule_TrimStringAllocation(RedisModuleString *str); + +**Available since:** 7.0.0 + +Trim possible excess memory allocated for a `RedisModuleString`. + +Sometimes a `RedisModuleString` may have more memory allocated for +it than required, typically for argv arguments that were constructed +from network buffers. This function optimizes such strings by reallocating +their memory, which is useful for strings that are not short lived but +retained for an extended duration. + +This operation is *not thread safe* and should only be called when +no concurrent access to the string is guaranteed. Using it for an argv +string in a module command before the string is potentially available +to other threads is generally safe. + +Currently, Redis may also automatically trim retained strings when a +module command returns. However, doing this explicitly should still be +a preferred option: + +1. Future versions of Redis may abandon auto-trimming. +2. Auto-trimming as currently implemented is *not thread safe*. + A background thread manipulating a recently retained string may end up + in a race condition with the auto-trim, which could result with + data corruption. + + + +## Reply APIs + +These functions are used for sending replies to the client. + +Most functions always return `REDISMODULE_OK` so you can use it with +'return' in order to return from the command implementation with: + + if (... some condition ...) + return RedisModule_ReplyWithLongLong(ctx,mycount); + +### Reply with collection functions + +After starting a collection reply, the module must make calls to other +`ReplyWith*` style functions in order to emit the elements of the collection. +Collection types include: Array, Map, Set and Attribute. + +When producing collections with a number of elements that is not known +beforehand, the function can be called with a special flag +`REDISMODULE_POSTPONED_LEN` (`REDISMODULE_POSTPONED_ARRAY_LEN` in the past), +and the actual number of elements can be later set with `RedisModule_ReplySet`*Length() +call (which will set the latest "open" count if there are multiple ones). + + + +### `RedisModule_WrongArity` + + int RedisModule_WrongArity(RedisModuleCtx *ctx); + +**Available since:** 4.0.0 + +Send an error about the number of arguments given to the command, +citing the command name in the error message. Returns `REDISMODULE_OK`. + +Example: + + if (argc != 3) return RedisModule_WrongArity(ctx); + + + +### `RedisModule_ReplyWithLongLong` + + int RedisModule_ReplyWithLongLong(RedisModuleCtx *ctx, long long ll); + +**Available since:** 4.0.0 + +Send an integer reply to the client, with the specified `long long` value. +The function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplyWithError` + + int RedisModule_ReplyWithError(RedisModuleCtx *ctx, const char *err); + +**Available since:** 4.0.0 + +Reply with the error 'err'. + +Note that 'err' must contain all the error, including +the initial error code. 
The function only provides the initial "-", so
+the usage is, for example:
+
+    RedisModule_ReplyWithError(ctx,"ERR Wrong Type");
+
+and not just:
+
+    RedisModule_ReplyWithError(ctx,"Wrong Type");
+
+The function always returns `REDISMODULE_OK`.
+
+
+
+### `RedisModule_ReplyWithErrorFormat`
+
+    int RedisModule_ReplyWithErrorFormat(RedisModuleCtx *ctx,
+                                         const char *fmt,
+                                         ...);
+
+**Available since:** 7.2.0
+
+Reply with the error created from a printf format and arguments.
+
+Note that 'fmt' must contain all the error, including
+the initial error code. The function only provides the initial "-", so
+the usage is, for example:
+
+    RedisModule_ReplyWithErrorFormat(ctx,"ERR Wrong Type: %s",type);
+
+and not just:
+
+    RedisModule_ReplyWithErrorFormat(ctx,"Wrong Type: %s",type);
+
+The function always returns `REDISMODULE_OK`.
+
+
+
+### `RedisModule_ReplyWithSimpleString`
+
+    int RedisModule_ReplyWithSimpleString(RedisModuleCtx *ctx, const char *msg);
+
+**Available since:** 4.0.0
+
+Reply with a simple string (`+... \r\n` in RESP protocol). These replies
+are suitable only when sending a small non-binary string with small
+overhead, like "OK" or similar replies.
+
+The function always returns `REDISMODULE_OK`.
+
+
+
+### `RedisModule_ReplyWithArray`
+
+    int RedisModule_ReplyWithArray(RedisModuleCtx *ctx, long len);
+
+**Available since:** 4.0.0
+
+Reply with an array type of 'len' elements.
+
+After starting an array reply, the module must make `len` calls to other
+`ReplyWith*` style functions in order to emit the elements of the array.
+See Reply APIs section for more details.
+
+Use [`RedisModule_ReplySetArrayLength()`](#RedisModule_ReplySetArrayLength) to set deferred length.
+
+The function always returns `REDISMODULE_OK`.
+
+
+
+### `RedisModule_ReplyWithMap`
+
+    int RedisModule_ReplyWithMap(RedisModuleCtx *ctx, long len);
+
+**Available since:** 7.0.0
+
+Reply with a RESP3 Map type of 'len' pairs.
+Visit [https://github.com/antirez/RESP3/blob/master/spec.md](https://github.com/antirez/RESP3/blob/master/spec.md) for more info about RESP3.
+
+After starting a map reply, the module must make `len*2` calls to other
+`ReplyWith*` style functions in order to emit the elements of the map.
+See Reply APIs section for more details.
+
+If the connected client is using RESP2, the reply will be converted to a flat
+array.
+
+Use [`RedisModule_ReplySetMapLength()`](#RedisModule_ReplySetMapLength) to set deferred length.
+
+The function always returns `REDISMODULE_OK`.
+
+
+
+### `RedisModule_ReplyWithSet`
+
+    int RedisModule_ReplyWithSet(RedisModuleCtx *ctx, long len);
+
+**Available since:** 7.0.0
+
+Reply with a RESP3 Set type of 'len' elements.
+Visit [https://github.com/antirez/RESP3/blob/master/spec.md](https://github.com/antirez/RESP3/blob/master/spec.md) for more info about RESP3.
+
+After starting a set reply, the module must make `len` calls to other
+`ReplyWith*` style functions in order to emit the elements of the set.
+See Reply APIs section for more details.
+
+If the connected client is using RESP2, the reply will be converted to an
+array type.
+
+Use [`RedisModule_ReplySetSetLength()`](#RedisModule_ReplySetSetLength) to set deferred length.
+
+The function always returns `REDISMODULE_OK`.
+
+
+
+### `RedisModule_ReplyWithAttribute`
+
+    int RedisModule_ReplyWithAttribute(RedisModuleCtx *ctx, long len);
+
+**Available since:** 7.0.0
+
+Add attributes (metadata) to the reply. Should be done before adding the
+actual reply. 
see [https://github.com/antirez/RESP3/blob/master/spec.md](https://github.com/antirez/RESP3/blob/master/spec.md)#attribute-type + +After starting an attribute's reply, the module must make `len*2` calls to other +`ReplyWith*` style functions in order to emit the elements of the attribute map. +See Reply APIs section for more details. + +Use [`RedisModule_ReplySetAttributeLength()`](#RedisModule_ReplySetAttributeLength) to set deferred length. + +Not supported by RESP2 and will return `REDISMODULE_ERR`, otherwise +the function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplyWithNullArray` + + int RedisModule_ReplyWithNullArray(RedisModuleCtx *ctx); + +**Available since:** 6.0.0 + +Reply to the client with a null array, simply null in RESP3, +null array in RESP2. + +Note: In RESP3 there's no difference between Null reply and +NullArray reply, so to prevent ambiguity it's better to avoid +using this API and use [`RedisModule_ReplyWithNull`](#RedisModule_ReplyWithNull) instead. + +The function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplyWithEmptyArray` + + int RedisModule_ReplyWithEmptyArray(RedisModuleCtx *ctx); + +**Available since:** 6.0.0 + +Reply to the client with an empty array. + +The function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplySetArrayLength` + + void RedisModule_ReplySetArrayLength(RedisModuleCtx *ctx, long len); + +**Available since:** 4.0.0 + +When [`RedisModule_ReplyWithArray()`](#RedisModule_ReplyWithArray) is used with the argument +`REDISMODULE_POSTPONED_LEN`, because we don't know beforehand the number +of items we are going to output as elements of the array, this function +will take care to set the array length. + +Since it is possible to have multiple array replies pending with unknown +length, this function guarantees to always set the latest array length +that was created in a postponed way. + +For example in order to output an array like [1,[10,20,30]] we +could write: + + RedisModule_ReplyWithArray(ctx,REDISMODULE_POSTPONED_LEN); + RedisModule_ReplyWithLongLong(ctx,1); + RedisModule_ReplyWithArray(ctx,REDISMODULE_POSTPONED_LEN); + RedisModule_ReplyWithLongLong(ctx,10); + RedisModule_ReplyWithLongLong(ctx,20); + RedisModule_ReplyWithLongLong(ctx,30); + RedisModule_ReplySetArrayLength(ctx,3); // Set len of 10,20,30 array. + RedisModule_ReplySetArrayLength(ctx,2); // Set len of top array + +Note that in the above example there is no reason to postpone the array +length, since we produce a fixed number of elements, but in the practice +the code may use an iterator or other ways of creating the output so +that is not easy to calculate in advance the number of elements. + + + +### `RedisModule_ReplySetMapLength` + + void RedisModule_ReplySetMapLength(RedisModuleCtx *ctx, long len); + +**Available since:** 7.0.0 + +Very similar to [`RedisModule_ReplySetArrayLength`](#RedisModule_ReplySetArrayLength) except `len` should +exactly half of the number of `ReplyWith*` functions called in the +context of the map. +Visit [https://github.com/antirez/RESP3/blob/master/spec.md](https://github.com/antirez/RESP3/blob/master/spec.md) for more info about RESP3. + + + +### `RedisModule_ReplySetSetLength` + + void RedisModule_ReplySetSetLength(RedisModuleCtx *ctx, long len); + +**Available since:** 7.0.0 + +Very similar to [`RedisModule_ReplySetArrayLength`](#RedisModule_ReplySetArrayLength) +Visit [https://github.com/antirez/RESP3/blob/master/spec.md](https://github.com/antirez/RESP3/blob/master/spec.md) for more info about RESP3. 
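+
+As an illustration of deferred lengths with RESP3 collection replies, a
+command that emits a number of field/value pairs not known in advance could
+be sketched as follows (`have_next_pair()` and `next_pair()` are hypothetical
+placeholders for whatever iteration the module actually performs):
+
+    RedisModule_ReplyWithMap(ctx, REDISMODULE_POSTPONED_LEN);
+    long pairs = 0;
+    while (have_next_pair()) {              /* hypothetical iterator */
+        const char *field, *value;
+        next_pair(&field, &value);          /* hypothetical accessor */
+        RedisModule_ReplyWithCString(ctx, field);
+        RedisModule_ReplyWithCString(ctx, value);
+        pairs++;
+    }
+    /* The deferred length is the number of pairs, not the number of
+     * ReplyWith* calls, and it closes the most recent postponed reply. */
+    RedisModule_ReplySetMapLength(ctx, pairs);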
+ + + +### `RedisModule_ReplySetAttributeLength` + + void RedisModule_ReplySetAttributeLength(RedisModuleCtx *ctx, long len); + +**Available since:** 7.0.0 + +Very similar to [`RedisModule_ReplySetMapLength`](#RedisModule_ReplySetMapLength) +Visit [https://github.com/antirez/RESP3/blob/master/spec.md](https://github.com/antirez/RESP3/blob/master/spec.md) for more info about RESP3. + +Must not be called if [`RedisModule_ReplyWithAttribute`](#RedisModule_ReplyWithAttribute) returned an error. + + + +### `RedisModule_ReplyWithStringBuffer` + + int RedisModule_ReplyWithStringBuffer(RedisModuleCtx *ctx, + const char *buf, + size_t len); + +**Available since:** 4.0.0 + +Reply with a bulk string, taking in input a C buffer pointer and length. + +The function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplyWithCString` + + int RedisModule_ReplyWithCString(RedisModuleCtx *ctx, const char *buf); + +**Available since:** 5.0.6 + +Reply with a bulk string, taking in input a C buffer pointer that is +assumed to be null-terminated. + +The function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplyWithString` + + int RedisModule_ReplyWithString(RedisModuleCtx *ctx, RedisModuleString *str); + +**Available since:** 4.0.0 + +Reply with a bulk string, taking in input a `RedisModuleString` object. + +The function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplyWithEmptyString` + + int RedisModule_ReplyWithEmptyString(RedisModuleCtx *ctx); + +**Available since:** 6.0.0 + +Reply with an empty string. + +The function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplyWithVerbatimStringType` + + int RedisModule_ReplyWithVerbatimStringType(RedisModuleCtx *ctx, + const char *buf, + size_t len, + const char *ext); + +**Available since:** 7.0.0 + +Reply with a binary safe string, which should not be escaped or filtered +taking in input a C buffer pointer, length and a 3 character type/extension. + +The function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplyWithVerbatimString` + + int RedisModule_ReplyWithVerbatimString(RedisModuleCtx *ctx, + const char *buf, + size_t len); + +**Available since:** 6.0.0 + +Reply with a binary safe string, which should not be escaped or filtered +taking in input a C buffer pointer and length. + +The function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplyWithNull` + + int RedisModule_ReplyWithNull(RedisModuleCtx *ctx); + +**Available since:** 4.0.0 + +Reply to the client with a NULL. + +The function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplyWithBool` + + int RedisModule_ReplyWithBool(RedisModuleCtx *ctx, int b); + +**Available since:** 7.0.0 + +Reply with a RESP3 Boolean type. +Visit [https://github.com/antirez/RESP3/blob/master/spec.md](https://github.com/antirez/RESP3/blob/master/spec.md) for more info about RESP3. + +In RESP3, this is boolean type +In RESP2, it's a string response of "1" and "0" for true and false respectively. + +The function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplyWithCallReply` + + int RedisModule_ReplyWithCallReply(RedisModuleCtx *ctx, + RedisModuleCallReply *reply); + +**Available since:** 4.0.0 + +Reply exactly what a Redis command returned us with [`RedisModule_Call()`](#RedisModule_Call). +This function is useful when we use [`RedisModule_Call()`](#RedisModule_Call) in order to +execute some command, as we want to reply to the client exactly the +same reply we obtained by the command. + +Return: +- `REDISMODULE_OK` on success. 
+- `REDISMODULE_ERR` if the given reply is in RESP3 format but the client expects RESP2. + In case of an error, it's the module writer responsibility to translate the reply + to RESP2 (or handle it differently by returning an error). Notice that for + module writer convenience, it is possible to pass `0` as a parameter to the fmt + argument of [`RedisModule_Call`](#RedisModule_Call) so that the `RedisModuleCallReply` will return in the same + protocol (RESP2 or RESP3) as set in the current client's context. + + + +### `RedisModule_ReplyWithDouble` + + int RedisModule_ReplyWithDouble(RedisModuleCtx *ctx, double d); + +**Available since:** 4.0.0 + +Reply with a RESP3 Double type. +Visit [https://github.com/antirez/RESP3/blob/master/spec.md](https://github.com/antirez/RESP3/blob/master/spec.md) for more info about RESP3. + +Send a string reply obtained converting the double 'd' into a bulk string. +This function is basically equivalent to converting a double into +a string into a C buffer, and then calling the function +[`RedisModule_ReplyWithStringBuffer()`](#RedisModule_ReplyWithStringBuffer) with the buffer and length. + +In RESP3 the string is tagged as a double, while in RESP2 it's just a plain string +that the user will have to parse. + +The function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplyWithBigNumber` + + int RedisModule_ReplyWithBigNumber(RedisModuleCtx *ctx, + const char *bignum, + size_t len); + +**Available since:** 7.0.0 + +Reply with a RESP3 BigNumber type. +Visit [https://github.com/antirez/RESP3/blob/master/spec.md](https://github.com/antirez/RESP3/blob/master/spec.md) for more info about RESP3. + +In RESP3, this is a string of length `len` that is tagged as a BigNumber, +however, it's up to the caller to ensure that it's a valid BigNumber. +In RESP2, this is just a plain bulk string response. + +The function always returns `REDISMODULE_OK`. + + + +### `RedisModule_ReplyWithLongDouble` + + int RedisModule_ReplyWithLongDouble(RedisModuleCtx *ctx, long double ld); + +**Available since:** 6.0.0 + +Send a string reply obtained converting the long double 'ld' into a bulk +string. This function is basically equivalent to converting a long double +into a string into a C buffer, and then calling the function +[`RedisModule_ReplyWithStringBuffer()`](#RedisModule_ReplyWithStringBuffer) with the buffer and length. +The double string uses human readable formatting (see +`addReplyHumanLongDouble` in networking.c). + +The function always returns `REDISMODULE_OK`. + + + +## Commands replication API + + + +### `RedisModule_Replicate` + + int RedisModule_Replicate(RedisModuleCtx *ctx, + const char *cmdname, + const char *fmt, + ...); + +**Available since:** 4.0.0 + +Replicate the specified command and arguments to slaves and AOF, as effect +of execution of the calling command implementation. + +The replicated commands are always wrapped into the MULTI/EXEC that +contains all the commands replicated in a given module command +execution, in the order they were executed. + +Modules should try to use one interface or the other. + +This command follows exactly the same interface of [`RedisModule_Call()`](#RedisModule_Call), +so a set of format specifiers must be passed, followed by arguments +matching the provided format specifiers. + +Please refer to [`RedisModule_Call()`](#RedisModule_Call) for more information. + +Using the special "A" and "R" modifiers, the caller can exclude either +the AOF or the replicas from the propagation of the specified command. 
+
+Otherwise, by default, the command will be propagated in both channels.
+
+#### Note about calling this function from a thread safe context:
+
+Normally when you call this function from the callback implementing a
+module command, or any other callback provided by the Redis Module API,
+Redis will accumulate all the calls to this function in the context of
+the callback, and will propagate all the commands wrapped in a MULTI/EXEC
+transaction. However when calling this function from a thread safe context
+that can live an undefined amount of time, and can be locked/unlocked
+at will, it is important to note that this API is not thread-safe and
+must be executed while holding the GIL.
+
+#### Return value
+
+The command returns `REDISMODULE_ERR` if the format specifiers are invalid
+or the command name does not belong to a known command.
+
+
+
+### `RedisModule_ReplicateVerbatim`
+
+    int RedisModule_ReplicateVerbatim(RedisModuleCtx *ctx);
+
+**Available since:** 4.0.0
+
+This function will replicate the command exactly as it was invoked
+by the client. Note that the replicated commands are always wrapped
+into the MULTI/EXEC that contains all the commands replicated in a
+given module command execution, in the order they were executed.
+
+Basically this form of replication is useful when you want to propagate
+the command to the slaves and AOF file exactly as it was called, since
+the command can just be re-executed to deterministically re-create the
+new state starting from the old one.
+
+It is important to note that this API is not thread-safe and
+must be executed while holding the GIL.
+
+The function always returns `REDISMODULE_OK`.
+
+
+
+## DB and Key APIs – Generic API
+
+
+
+### `RedisModule_GetClientId`
+
+    unsigned long long RedisModule_GetClientId(RedisModuleCtx *ctx);
+
+**Available since:** 4.0.0
+
+Return the ID of the current client calling the currently active module
+command. The returned ID has a few guarantees:
+
+1. The ID is different for each different client, so if the same client
+   executes a module command multiple times, it can be recognized as
+   having the same ID, otherwise the ID will be different.
+2. The ID increases monotonically. Clients connecting to the server later
+   are guaranteed to get IDs greater than any past ID previously seen.
+
+Valid IDs are from 1 to 2^64 - 1. If 0 is returned it means there is no way
+to fetch the ID in the context the function was currently called.
+
+After obtaining the ID, it is possible to check if the command execution
+is actually happening in the context of AOF loading, using this macro:
+
+    if (RedisModule_IsAOFClient(RedisModule_GetClientId(ctx))) {
+        // Handle it differently.
+    }
+
+
+
+### `RedisModule_GetClientUserNameById`
+
+    RedisModuleString *RedisModule_GetClientUserNameById(RedisModuleCtx *ctx,
+                                                         uint64_t id);
+
+**Available since:** 6.2.1
+
+Return the ACL user name used by the client with the specified client ID.
+Client ID can be obtained with [`RedisModule_GetClientId()`](#RedisModule_GetClientId) API. If the client does not
+exist, NULL is returned and errno is set to ENOENT. If the client isn't
+using an ACL user, NULL is returned and errno is set to ENOTSUP.
+
+
+
+### `RedisModule_GetClientInfoById`
+
+    int RedisModule_GetClientInfoById(void *ci, uint64_t id);
+
+**Available since:** 6.0.0
+
+Return information about the client with the specified ID (that was
+previously obtained via the [`RedisModule_GetClientId()`](#RedisModule_GetClientId) API). 
If the
+client exists, `REDISMODULE_OK` is returned, otherwise `REDISMODULE_ERR`
+is returned.
+
+When the client exists and the `ci` pointer is not NULL, but points to
+a structure of type `RedisModuleClientInfoV1`, previously initialized with
+the correct `REDISMODULE_CLIENTINFO_INITIALIZER_V1`, the structure is populated
+with the following fields:
+
+    uint64_t flags;         // REDISMODULE_CLIENTINFO_FLAG_*
+    uint64_t id;            // Client ID
+    char addr[46];          // IPv4 or IPv6 address.
+    uint16_t port;          // TCP port.
+    uint16_t db;            // Selected DB.
+
+Note: the client ID is useless in the context of this call, since we
+      already know it, however the same structure could be used in other
+      contexts where we don't know the client ID, yet the same structure
+      is returned.
+
+With flags having the following meaning:
+
+    REDISMODULE_CLIENTINFO_FLAG_SSL          Client using SSL connection.
+    REDISMODULE_CLIENTINFO_FLAG_PUBSUB       Client in Pub/Sub mode.
+    REDISMODULE_CLIENTINFO_FLAG_BLOCKED      Client blocked in command.
+    REDISMODULE_CLIENTINFO_FLAG_TRACKING     Client with keys tracking on.
+    REDISMODULE_CLIENTINFO_FLAG_UNIXSOCKET   Client using unix domain socket.
+    REDISMODULE_CLIENTINFO_FLAG_MULTI        Client in MULTI state.
+
+However passing NULL is a way to just check if the client exists in case
+we are not interested in any additional information.
+
+This is the correct usage when we want the client info structure
+returned:
+
+    RedisModuleClientInfo ci = REDISMODULE_CLIENTINFO_INITIALIZER;
+    int retval = RedisModule_GetClientInfoById(&ci,client_id);
+    if (retval == REDISMODULE_OK) {
+        printf("Address: %s\n", ci.addr);
+    }
+
+
+
+### `RedisModule_GetClientNameById`
+
+    RedisModuleString *RedisModule_GetClientNameById(RedisModuleCtx *ctx,
+                                                     uint64_t id);
+
+**Available since:** 7.0.3
+
+Returns the name of the client connection with the given ID.
+
+If the client ID does not exist or if the client has no name associated with
+it, NULL is returned.
+
+
+
+### `RedisModule_SetClientNameById`
+
+    int RedisModule_SetClientNameById(uint64_t id, RedisModuleString *name);
+
+**Available since:** 7.0.3
+
+Sets the name of the client with the given ID. This is equivalent to the client calling
+`CLIENT SETNAME name`.
+
+Returns `REDISMODULE_OK` on success. On failure, `REDISMODULE_ERR` is returned
+and errno is set as follows:
+
+- ENOENT if the client does not exist
+- EINVAL if the name contains invalid characters
+
+
+
+### `RedisModule_PublishMessage`
+
+    int RedisModule_PublishMessage(RedisModuleCtx *ctx,
+                                   RedisModuleString *channel,
+                                   RedisModuleString *message);
+
+**Available since:** 6.0.0
+
+Publish a message to subscribers (see PUBLISH command).
+
+
+
+### `RedisModule_PublishMessageShard`
+
+    int RedisModule_PublishMessageShard(RedisModuleCtx *ctx,
+                                        RedisModuleString *channel,
+                                        RedisModuleString *message);
+
+**Available since:** 7.0.0
+
+Publish a message to shard-subscribers (see SPUBLISH command).
+
+
+
+### `RedisModule_GetSelectedDb`
+
+    int RedisModule_GetSelectedDb(RedisModuleCtx *ctx);
+
+**Available since:** 4.0.0
+
+Return the currently selected DB.
+
+
+
+### `RedisModule_GetContextFlags`
+
+    int RedisModule_GetContextFlags(RedisModuleCtx *ctx);
+
+**Available since:** 4.0.3
+
+Return the current context's flags. The flags provide information on the
+current request context (whether the client is a Lua script or in a MULTI),
+and about the Redis instance in general, i.e. replication and persistence. 
+ +It is possible to call this function even with a NULL context, however +in this case the following flags will not be reported: + + * LUA, MULTI, REPLICATED, DIRTY (see below for more info). + +Available flags and their meaning: + + * `REDISMODULE_CTX_FLAGS_LUA`: The command is running in a Lua script + + * `REDISMODULE_CTX_FLAGS_MULTI`: The command is running inside a transaction + + * `REDISMODULE_CTX_FLAGS_REPLICATED`: The command was sent over the replication + link by the MASTER + + * `REDISMODULE_CTX_FLAGS_MASTER`: The Redis instance is a master + + * `REDISMODULE_CTX_FLAGS_SLAVE`: The Redis instance is a slave + + * `REDISMODULE_CTX_FLAGS_READONLY`: The Redis instance is read-only + + * `REDISMODULE_CTX_FLAGS_CLUSTER`: The Redis instance is in cluster mode + + * `REDISMODULE_CTX_FLAGS_AOF`: The Redis instance has AOF enabled + + * `REDISMODULE_CTX_FLAGS_RDB`: The instance has RDB enabled + + * `REDISMODULE_CTX_FLAGS_MAXMEMORY`: The instance has Maxmemory set + + * `REDISMODULE_CTX_FLAGS_EVICT`: Maxmemory is set and has an eviction + policy that may delete keys + + * `REDISMODULE_CTX_FLAGS_OOM`: Redis is out of memory according to the + maxmemory setting. + + * `REDISMODULE_CTX_FLAGS_OOM_WARNING`: Less than 25% of memory remains before + reaching the maxmemory level. + + * `REDISMODULE_CTX_FLAGS_LOADING`: Server is loading RDB/AOF + + * `REDISMODULE_CTX_FLAGS_REPLICA_IS_STALE`: No active link with the master. + + * `REDISMODULE_CTX_FLAGS_REPLICA_IS_CONNECTING`: The replica is trying to + connect with the master. + + * `REDISMODULE_CTX_FLAGS_REPLICA_IS_TRANSFERRING`: Master -> Replica RDB + transfer is in progress. + + * `REDISMODULE_CTX_FLAGS_REPLICA_IS_ONLINE`: The replica has an active link + with its master. This is the + contrary of STALE state. + + * `REDISMODULE_CTX_FLAGS_ACTIVE_CHILD`: There is currently some background + process active (RDB, AUX or module). + + * `REDISMODULE_CTX_FLAGS_MULTI_DIRTY`: The next EXEC will fail due to dirty + CAS (touched keys). + + * `REDISMODULE_CTX_FLAGS_IS_CHILD`: Redis is currently running inside + background child process. + + * `REDISMODULE_CTX_FLAGS_RESP3`: Indicate the that client attached to this + context is using RESP3. + + * `REDISMODULE_CTX_FLAGS_SERVER_STARTUP`: The Redis instance is starting + + * `REDISMODULE_CTX_FLAGS_DEBUG_ENABLED`: Debug commands are enabled for this + context. + + + +### `RedisModule_AvoidReplicaTraffic` + + int RedisModule_AvoidReplicaTraffic(void); + +**Available since:** 6.0.0 + +Returns true if a client sent the CLIENT PAUSE command to the server or +if Redis Cluster does a manual failover, pausing the clients. +This is needed when we have a master with replicas, and want to write, +without adding further data to the replication channel, that the replicas +replication offset, match the one of the master. When this happens, it is +safe to failover the master without data loss. + +However modules may generate traffic by calling [`RedisModule_Call()`](#RedisModule_Call) with +the "!" flag, or by calling [`RedisModule_Replicate()`](#RedisModule_Replicate), in a context outside +commands execution, for instance in timeout callbacks, threads safe +contexts, and so forth. When modules will generate too much traffic, it +will be hard for the master and replicas offset to match, because there +is more data to send in the replication channel. + +So modules may want to try to avoid very heavy background work that has +the effect of creating data to the replication channel, when this function +returns true. 
This is mostly useful for modules that have background +garbage collection tasks, or that do writes and replicate such writes +periodically in timer callbacks or other periodic callbacks. + + + +### `RedisModule_SelectDb` + + int RedisModule_SelectDb(RedisModuleCtx *ctx, int newid); + +**Available since:** 4.0.0 + +Change the currently selected DB. Returns an error if the id +is out of range. + +Note that the client will retain the currently selected DB even after +the Redis command implemented by the module calling this function +returns. + +If the module command wishes to change something in a different DB and +returns back to the original one, it should call [`RedisModule_GetSelectedDb()`](#RedisModule_GetSelectedDb) +before in order to restore the old DB number before returning. + + + +### `RedisModule_KeyExists` + + int RedisModule_KeyExists(RedisModuleCtx *ctx, robj *keyname); + +**Available since:** 7.0.0 + +Check if a key exists, without affecting its last access time. + +This is equivalent to calling [`RedisModule_OpenKey`](#RedisModule_OpenKey) with the mode `REDISMODULE_READ` | +`REDISMODULE_OPEN_KEY_NOTOUCH`, then checking if NULL was returned and, if not, +calling [`RedisModule_CloseKey`](#RedisModule_CloseKey) on the opened key. + + + +### `RedisModule_OpenKey` + + RedisModuleKey *RedisModule_OpenKey(RedisModuleCtx *ctx, + robj *keyname, + int mode); + +**Available since:** 4.0.0 + +Return a handle representing a Redis key, so that it is possible +to call other APIs with the key handle as argument to perform +operations on the key. + +The return value is the handle representing the key, that must be +closed with [`RedisModule_CloseKey()`](#RedisModule_CloseKey). + +If the key does not exist and `REDISMODULE_WRITE` mode is requested, the handle +is still returned, since it is possible to perform operations on +a yet not existing key (that will be created, for example, after +a list push operation). If the mode is just `REDISMODULE_READ` instead, and the +key does not exist, NULL is returned. However it is still safe to +call [`RedisModule_CloseKey()`](#RedisModule_CloseKey) and [`RedisModule_KeyType()`](#RedisModule_KeyType) on a NULL +value. + +Extra flags that can be pass to the API under the mode argument: +* `REDISMODULE_OPEN_KEY_NOTOUCH` - Avoid touching the LRU/LFU of the key when opened. +* `REDISMODULE_OPEN_KEY_NONOTIFY` - Don't trigger keyspace event on key misses. +* `REDISMODULE_OPEN_KEY_NOSTATS` - Don't update keyspace hits/misses counters. +* `REDISMODULE_OPEN_KEY_NOEXPIRE` - Avoid deleting lazy expired keys. +* `REDISMODULE_OPEN_KEY_NOEFFECTS` - Avoid any effects from fetching the key. +* `REDISMODULE_OPEN_KEY_ACCESS_EXPIRED` - Access expired keys that have not yet been deleted + + + +### `RedisModule_GetOpenKeyModesAll` + + int RedisModule_GetOpenKeyModesAll(void); + +**Available since:** 7.2.0 + + +Returns the full OpenKey modes mask, using the return value +the module can check if a certain set of OpenKey modes are supported +by the redis server version in use. +Example: + + int supportedMode = RedisModule_GetOpenKeyModesAll(); + if (supportedMode & REDISMODULE_OPEN_KEY_NOTOUCH) { + // REDISMODULE_OPEN_KEY_NOTOUCH is supported + } else{ + // REDISMODULE_OPEN_KEY_NOTOUCH is not supported + } + + + +### `RedisModule_CloseKey` + + void RedisModule_CloseKey(RedisModuleKey *key); + +**Available since:** 4.0.0 + +Close a key handle. 
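+
+To make the open/check/close life cycle concrete, here is a minimal sketch of
+a command that tests for a key's existence by hand, mirroring what
+[`RedisModule_KeyExists`](#RedisModule_KeyExists) does internally (the function name is hypothetical):
+
+    int MyExists_Impl(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
+        if (argc != 2) return RedisModule_WrongArity(ctx);
+
+        /* Open read-only and avoid touching the key's LRU/LFU. */
+        RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1],
+            REDISMODULE_READ | REDISMODULE_OPEN_KEY_NOTOUCH);
+        int exists = (key != NULL);   /* in read mode, NULL means a missing key */
+        RedisModule_CloseKey(key);    /* safe to call even with a NULL key */
+
+        return RedisModule_ReplyWithLongLong(ctx, exists);
+    }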
+ + + +### `RedisModule_KeyType` + + int RedisModule_KeyType(RedisModuleKey *key); + +**Available since:** 4.0.0 + +Return the type of the key. If the key pointer is NULL then +`REDISMODULE_KEYTYPE_EMPTY` is returned. + + + +### `RedisModule_ValueLength` + + size_t RedisModule_ValueLength(RedisModuleKey *key); + +**Available since:** 4.0.0 + +Return the length of the value associated with the key. +For strings this is the length of the string. For all the other types +is the number of elements (just counting keys for hashes). + +If the key pointer is NULL or the key is empty, zero is returned. + + + +### `RedisModule_DeleteKey` + + int RedisModule_DeleteKey(RedisModuleKey *key); + +**Available since:** 4.0.0 + +If the key is open for writing, remove it, and setup the key to +accept new writes as an empty key (that will be created on demand). +On success `REDISMODULE_OK` is returned. If the key is not open for +writing `REDISMODULE_ERR` is returned. + + + +### `RedisModule_UnlinkKey` + + int RedisModule_UnlinkKey(RedisModuleKey *key); + +**Available since:** 4.0.7 + +If the key is open for writing, unlink it (that is delete it in a +non-blocking way, not reclaiming memory immediately) and setup the key to +accept new writes as an empty key (that will be created on demand). +On success `REDISMODULE_OK` is returned. If the key is not open for +writing `REDISMODULE_ERR` is returned. + + + +### `RedisModule_GetExpire` + + mstime_t RedisModule_GetExpire(RedisModuleKey *key); + +**Available since:** 4.0.0 + +Return the key expire value, as milliseconds of remaining TTL. +If no TTL is associated with the key or if the key is empty, +`REDISMODULE_NO_EXPIRE` is returned. + + + +### `RedisModule_SetExpire` + + int RedisModule_SetExpire(RedisModuleKey *key, mstime_t expire); + +**Available since:** 4.0.0 + +Set a new expire for the key. If the special expire +`REDISMODULE_NO_EXPIRE` is set, the expire is cancelled if there was +one (the same as the PERSIST command). + +Note that the expire must be provided as a positive integer representing +the number of milliseconds of TTL the key should have. + +The function returns `REDISMODULE_OK` on success or `REDISMODULE_ERR` if +the key was not open for writing or is an empty key. + + + +### `RedisModule_GetAbsExpire` + + mstime_t RedisModule_GetAbsExpire(RedisModuleKey *key); + +**Available since:** 6.2.2 + +Return the key expire value, as absolute Unix timestamp. +If no TTL is associated with the key or if the key is empty, +`REDISMODULE_NO_EXPIRE` is returned. + + + +### `RedisModule_SetAbsExpire` + + int RedisModule_SetAbsExpire(RedisModuleKey *key, mstime_t expire); + +**Available since:** 6.2.2 + +Set a new expire for the key. If the special expire +`REDISMODULE_NO_EXPIRE` is set, the expire is cancelled if there was +one (the same as the PERSIST command). + +Note that the expire must be provided as a positive integer representing +the absolute Unix timestamp the key should have. + +The function returns `REDISMODULE_OK` on success or `REDISMODULE_ERR` if +the key was not open for writing or is an empty key. + + + +### `RedisModule_ResetDataset` + + void RedisModule_ResetDataset(int restart_aof, int async); + +**Available since:** 6.0.0 + +Performs similar operation to FLUSHALL, and optionally start a new AOF file (if enabled) +If `restart_aof` is true, you must make sure the command that triggered this call is not +propagated to the AOF file. +When async is set to true, db contents will be freed by a background thread. 
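+
+As a brief illustration of the inspection and expire helpers above
+([`RedisModule_KeyType()`](#RedisModule_KeyType), [`RedisModule_ValueLength()`](#RedisModule_ValueLength),
+[`RedisModule_GetExpire()`](#RedisModule_GetExpire) and [`RedisModule_SetExpire()`](#RedisModule_SetExpire)),
+the following sketch assumes `key` is a handle opened with `REDISMODULE_WRITE`
+and extends an existing TTL by one minute, leaving keys without a TTL untouched:
+
+    /* Sketch: inspect an open key and extend its TTL by 60 seconds. */
+    if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_EMPTY) {
+        size_t len = RedisModule_ValueLength(key); /* string length or element count */
+        mstime_t ttl = RedisModule_GetExpire(key); /* remaining TTL in milliseconds */
+        if (ttl != REDISMODULE_NO_EXPIRE) {
+            /* SetExpire takes the new relative TTL in milliseconds. */
+            RedisModule_SetExpire(key, ttl + 60000);
+        }
+        /* ... use len as needed ... */
+    }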
+ + + +### `RedisModule_DbSize` + + unsigned long long RedisModule_DbSize(RedisModuleCtx *ctx); + +**Available since:** 6.0.0 + +Returns the number of keys in the current db. + + + +### `RedisModule_RandomKey` + + RedisModuleString *RedisModule_RandomKey(RedisModuleCtx *ctx); + +**Available since:** 6.0.0 + +Returns a name of a random key, or NULL if current db is empty. + + + +### `RedisModule_GetKeyNameFromOptCtx` + + const RedisModuleString *RedisModule_GetKeyNameFromOptCtx(RedisModuleKeyOptCtx *ctx); + +**Available since:** 7.0.0 + +Returns the name of the key currently being processed. + + + +### `RedisModule_GetToKeyNameFromOptCtx` + + const RedisModuleString *RedisModule_GetToKeyNameFromOptCtx(RedisModuleKeyOptCtx *ctx); + +**Available since:** 7.0.0 + +Returns the name of the target key currently being processed. + + + +### `RedisModule_GetDbIdFromOptCtx` + + int RedisModule_GetDbIdFromOptCtx(RedisModuleKeyOptCtx *ctx); + +**Available since:** 7.0.0 + +Returns the dbid currently being processed. + + + +### `RedisModule_GetToDbIdFromOptCtx` + + int RedisModule_GetToDbIdFromOptCtx(RedisModuleKeyOptCtx *ctx); + +**Available since:** 7.0.0 + +Returns the target dbid currently being processed. + + + +## Key API for String type + +See also [`RedisModule_ValueLength()`](#RedisModule_ValueLength), which returns the length of a string. + + + +### `RedisModule_StringSet` + + int RedisModule_StringSet(RedisModuleKey *key, RedisModuleString *str); + +**Available since:** 4.0.0 + +If the key is open for writing, set the specified string 'str' as the +value of the key, deleting the old value if any. +On success `REDISMODULE_OK` is returned. If the key is not open for +writing or there is an active iterator, `REDISMODULE_ERR` is returned. + + + +### `RedisModule_StringDMA` + + char *RedisModule_StringDMA(RedisModuleKey *key, size_t *len, int mode); + +**Available since:** 4.0.0 + +Prepare the key associated string value for DMA access, and returns +a pointer and size (by reference), that the user can use to read or +modify the string in-place accessing it directly via pointer. + +The 'mode' is composed by bitwise OR-ing the following flags: + + REDISMODULE_READ -- Read access + REDISMODULE_WRITE -- Write access + +If the DMA is not requested for writing, the pointer returned should +only be accessed in a read-only fashion. + +On error (wrong type) NULL is returned. + +DMA access rules: + +1. No other key writing function should be called since the moment +the pointer is obtained, for all the time we want to use DMA access +to read or modify the string. + +2. Each time [`RedisModule_StringTruncate()`](#RedisModule_StringTruncate) is called, to continue with the DMA +access, [`RedisModule_StringDMA()`](#RedisModule_StringDMA) should be called again to re-obtain +a new pointer and length. + +3. If the returned pointer is not NULL, but the length is zero, no +byte can be touched (the string is empty, or the key itself is empty) +so a [`RedisModule_StringTruncate()`](#RedisModule_StringTruncate) call should be used if there is to enlarge +the string, and later call StringDMA() again to get the pointer. + + + +### `RedisModule_StringTruncate` + + int RedisModule_StringTruncate(RedisModuleKey *key, size_t newlen); + +**Available since:** 4.0.0 + +If the key is open for writing and is of string type, resize it, padding +with zero bytes if the new length is greater than the old one. 
+ +After this call, [`RedisModule_StringDMA()`](#RedisModule_StringDMA) must be called again to continue +DMA access with the new pointer. + +The function returns `REDISMODULE_OK` on success, and `REDISMODULE_ERR` on +error, that is, the key is not open for writing, is not a string +or resizing for more than 512 MB is requested. + +If the key is empty, a string key is created with the new string value +unless the new length value requested is zero. + + + +## Key API for List type + +Many of the list functions access elements by index. Since a list is in +essence a doubly-linked list, accessing elements by index is generally an +O(N) operation. However, if elements are accessed sequentially or with +indices close together, the functions are optimized to seek the index from +the previous index, rather than seeking from the ends of the list. + +This enables iteration to be done efficiently using a simple for loop: + + long n = RedisModule_ValueLength(key); + for (long i = 0; i < n; i++) { + RedisModuleString *elem = RedisModule_ListGet(key, i); + // Do stuff... + } + +Note that after modifying a list using [`RedisModule_ListPop`](#RedisModule_ListPop), [`RedisModule_ListSet`](#RedisModule_ListSet) or +[`RedisModule_ListInsert`](#RedisModule_ListInsert), the internal iterator is invalidated so the next operation +will require a linear seek. + +Modifying a list in any another way, for example using [`RedisModule_Call()`](#RedisModule_Call), while a key +is open will confuse the internal iterator and may cause trouble if the key +is used after such modifications. The key must be reopened in this case. + +See also [`RedisModule_ValueLength()`](#RedisModule_ValueLength), which returns the length of a list. + + + +### `RedisModule_ListPush` + + int RedisModule_ListPush(RedisModuleKey *key, + int where, + RedisModuleString *ele); + +**Available since:** 4.0.0 + +Push an element into a list, on head or tail depending on 'where' argument +(`REDISMODULE_LIST_HEAD` or `REDISMODULE_LIST_TAIL`). If the key refers to an +empty key opened for writing, the key is created. On success, `REDISMODULE_OK` +is returned. On failure, `REDISMODULE_ERR` is returned and `errno` is set as +follows: + +- EINVAL if key or ele is NULL. +- ENOTSUP if the key is of another type than list. +- EBADF if the key is not opened for writing. + +Note: Before Redis 7.0, `errno` was not set by this function. + + + +### `RedisModule_ListPop` + + RedisModuleString *RedisModule_ListPop(RedisModuleKey *key, int where); + +**Available since:** 4.0.0 + +Pop an element from the list, and returns it as a module string object +that the user should be free with [`RedisModule_FreeString()`](#RedisModule_FreeString) or by enabling +automatic memory. The `where` argument specifies if the element should be +popped from the beginning or the end of the list (`REDISMODULE_LIST_HEAD` or +`REDISMODULE_LIST_TAIL`). On failure, the command returns NULL and sets +`errno` as follows: + +- EINVAL if key is NULL. +- ENOTSUP if the key is empty or of another type than list. +- EBADF if the key is not opened for writing. + +Note: Before Redis 7.0, `errno` was not set by this function. + + + +### `RedisModule_ListGet` + + RedisModuleString *RedisModule_ListGet(RedisModuleKey *key, long index); + +**Available since:** 7.0.0 + +Returns the element at index `index` in the list stored at `key`, like the +LINDEX command. The element should be free'd using [`RedisModule_FreeString()`](#RedisModule_FreeString) or using +automatic memory management. 
+ +The index is zero-based, so 0 means the first element, 1 the second element +and so on. Negative indices can be used to designate elements starting at the +tail of the list. Here, -1 means the last element, -2 means the penultimate +and so forth. + +When no value is found at the given key and index, NULL is returned and +`errno` is set as follows: + +- EINVAL if key is NULL. +- ENOTSUP if the key is not a list. +- EBADF if the key is not opened for reading. +- EDOM if the index is not a valid index in the list. + + + +### `RedisModule_ListSet` + + int RedisModule_ListSet(RedisModuleKey *key, + long index, + RedisModuleString *value); + +**Available since:** 7.0.0 + +Replaces the element at index `index` in the list stored at `key`. + +The index is zero-based, so 0 means the first element, 1 the second element +and so on. Negative indices can be used to designate elements starting at the +tail of the list. Here, -1 means the last element, -2 means the penultimate +and so forth. + +On success, `REDISMODULE_OK` is returned. On failure, `REDISMODULE_ERR` is +returned and `errno` is set as follows: + +- EINVAL if key or value is NULL. +- ENOTSUP if the key is not a list. +- EBADF if the key is not opened for writing. +- EDOM if the index is not a valid index in the list. + + + +### `RedisModule_ListInsert` + + int RedisModule_ListInsert(RedisModuleKey *key, + long index, + RedisModuleString *value); + +**Available since:** 7.0.0 + +Inserts an element at the given index. + +The index is zero-based, so 0 means the first element, 1 the second element +and so on. Negative indices can be used to designate elements starting at the +tail of the list. Here, -1 means the last element, -2 means the penultimate +and so forth. The index is the element's index after inserting it. + +On success, `REDISMODULE_OK` is returned. On failure, `REDISMODULE_ERR` is +returned and `errno` is set as follows: + +- EINVAL if key or value is NULL. +- ENOTSUP if the key of another type than list. +- EBADF if the key is not opened for writing. +- EDOM if the index is not a valid index in the list. + + + +### `RedisModule_ListDelete` + + int RedisModule_ListDelete(RedisModuleKey *key, long index); + +**Available since:** 7.0.0 + +Removes an element at the given index. The index is 0-based. A negative index +can also be used, counting from the end of the list. + +On success, `REDISMODULE_OK` is returned. On failure, `REDISMODULE_ERR` is +returned and `errno` is set as follows: + +- EINVAL if key or value is NULL. +- ENOTSUP if the key is not a list. +- EBADF if the key is not opened for writing. +- EDOM if the index is not a valid index in the list. + + + +## Key API for Sorted Set type + +See also [`RedisModule_ValueLength()`](#RedisModule_ValueLength), which returns the length of a sorted set. + + + +### `RedisModule_ZsetAdd` + + int RedisModule_ZsetAdd(RedisModuleKey *key, + double score, + RedisModuleString *ele, + int *flagsptr); + +**Available since:** 4.0.0 + +Add a new element into a sorted set, with the specified 'score'. +If the element already exists, the score is updated. + +A new sorted set is created at value if the key is an empty open key +setup for writing. + +Additional flags can be passed to the function via a pointer, the flags +are both used to receive input and to communicate state when the function +returns. 'flagsptr' can be NULL if no special flags are used. + +The input flags are: + + REDISMODULE_ZADD_XX: Element must already exist. Do nothing otherwise. 
+ REDISMODULE_ZADD_NX: Element must not exist. Do nothing otherwise. + REDISMODULE_ZADD_GT: If element exists, new score must be greater than the current score. + Do nothing otherwise. Can optionally be combined with XX. + REDISMODULE_ZADD_LT: If element exists, new score must be less than the current score. + Do nothing otherwise. Can optionally be combined with XX. + +The output flags are: + + REDISMODULE_ZADD_ADDED: The new element was added to the sorted set. + REDISMODULE_ZADD_UPDATED: The score of the element was updated. + REDISMODULE_ZADD_NOP: No operation was performed because XX or NX flags. + +On success the function returns `REDISMODULE_OK`. On the following errors +`REDISMODULE_ERR` is returned: + +* The key was not opened for writing. +* The key is of the wrong type. +* 'score' double value is not a number (NaN). + + + +### `RedisModule_ZsetIncrby` + + int RedisModule_ZsetIncrby(RedisModuleKey *key, + double score, + RedisModuleString *ele, + int *flagsptr, + double *newscore); + +**Available since:** 4.0.0 + +This function works exactly like [`RedisModule_ZsetAdd()`](#RedisModule_ZsetAdd), but instead of setting +a new score, the score of the existing element is incremented, or if the +element does not already exist, it is added assuming the old score was +zero. + +The input and output flags, and the return value, have the same exact +meaning, with the only difference that this function will return +`REDISMODULE_ERR` even when 'score' is a valid double number, but adding it +to the existing score results into a NaN (not a number) condition. + +This function has an additional field 'newscore', if not NULL is filled +with the new score of the element after the increment, if no error +is returned. + + + +### `RedisModule_ZsetRem` + + int RedisModule_ZsetRem(RedisModuleKey *key, + RedisModuleString *ele, + int *deleted); + +**Available since:** 4.0.0 + +Remove the specified element from the sorted set. +The function returns `REDISMODULE_OK` on success, and `REDISMODULE_ERR` +on one of the following conditions: + +* The key was not opened for writing. +* The key is of the wrong type. + +The return value does NOT indicate the fact the element was really +removed (since it existed) or not, just if the function was executed +with success. + +In order to know if the element was removed, the additional argument +'deleted' must be passed, that populates the integer by reference +setting it to 1 or 0 depending on the outcome of the operation. +The 'deleted' argument can be NULL if the caller is not interested +to know if the element was really removed. + +Empty keys will be handled correctly by doing nothing. + + + +### `RedisModule_ZsetScore` + + int RedisModule_ZsetScore(RedisModuleKey *key, + RedisModuleString *ele, + double *score); + +**Available since:** 4.0.0 + +On success retrieve the double score associated at the sorted set element +'ele' and returns `REDISMODULE_OK`. Otherwise `REDISMODULE_ERR` is returned +to signal one of the following conditions: + +* There is no such element 'ele' in the sorted set. +* The key is not a sorted set. +* The key is an open empty key. + + + +## Key API for Sorted Set iterator + + + +### `RedisModule_ZsetRangeStop` + + void RedisModule_ZsetRangeStop(RedisModuleKey *key); + +**Available since:** 4.0.0 + +Stop a sorted set iteration. + + + +### `RedisModule_ZsetRangeEndReached` + + int RedisModule_ZsetRangeEndReached(RedisModuleKey *key); + +**Available since:** 4.0.0 + +Return the "End of range" flag value to signal the end of the iteration. 
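+
+The iterator functions in this section are typically combined as in the
+following sketch, which walks every element with a score between 0 and 10
+inclusive. It assumes `key` is an open sorted set key and `ctx` is the
+current module context, and it uses [`RedisModule_ZsetFirstInScoreRange()`](#RedisModule_ZsetFirstInScoreRange),
+[`RedisModule_ZsetRangeCurrentElement()`](#RedisModule_ZsetRangeCurrentElement) and
+[`RedisModule_ZsetRangeNext()`](#RedisModule_ZsetRangeNext), which are described below:
+
+    /* Sketch: iterate all elements with score in [0, 10]. */
+    if (RedisModule_ZsetFirstInScoreRange(key, 0, 10, 0, 0) == REDISMODULE_OK) {
+        while (!RedisModule_ZsetRangeEndReached(key)) {
+            double score;
+            RedisModuleString *ele =
+                RedisModule_ZsetRangeCurrentElement(key, &score);
+            /* ... do something with ele and score ... */
+            RedisModule_FreeString(ctx, ele);
+            RedisModule_ZsetRangeNext(key);
+        }
+        RedisModule_ZsetRangeStop(key);
+    }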
+ + + +### `RedisModule_ZsetFirstInScoreRange` + + int RedisModule_ZsetFirstInScoreRange(RedisModuleKey *key, + double min, + double max, + int minex, + int maxex); + +**Available since:** 4.0.0 + +Setup a sorted set iterator seeking the first element in the specified +range. Returns `REDISMODULE_OK` if the iterator was correctly initialized +otherwise `REDISMODULE_ERR` is returned in the following conditions: + +1. The value stored at key is not a sorted set or the key is empty. + +The range is specified according to the two double values 'min' and 'max'. +Both can be infinite using the following two macros: + +* `REDISMODULE_POSITIVE_INFINITE` for positive infinite value +* `REDISMODULE_NEGATIVE_INFINITE` for negative infinite value + +'minex' and 'maxex' parameters, if true, respectively setup a range +where the min and max value are exclusive (not included) instead of +inclusive. + + + +### `RedisModule_ZsetLastInScoreRange` + + int RedisModule_ZsetLastInScoreRange(RedisModuleKey *key, + double min, + double max, + int minex, + int maxex); + +**Available since:** 4.0.0 + +Exactly like [`RedisModule_ZsetFirstInScoreRange()`](#RedisModule_ZsetFirstInScoreRange) but the last element of +the range is selected for the start of the iteration instead. + + + +### `RedisModule_ZsetFirstInLexRange` + + int RedisModule_ZsetFirstInLexRange(RedisModuleKey *key, + RedisModuleString *min, + RedisModuleString *max); + +**Available since:** 4.0.0 + +Setup a sorted set iterator seeking the first element in the specified +lexicographical range. Returns `REDISMODULE_OK` if the iterator was correctly +initialized otherwise `REDISMODULE_ERR` is returned in the +following conditions: + +1. The value stored at key is not a sorted set or the key is empty. +2. The lexicographical range 'min' and 'max' format is invalid. + +'min' and 'max' should be provided as two `RedisModuleString` objects +in the same format as the parameters passed to the ZRANGEBYLEX command. +The function does not take ownership of the objects, so they can be released +ASAP after the iterator is setup. + + + +### `RedisModule_ZsetLastInLexRange` + + int RedisModule_ZsetLastInLexRange(RedisModuleKey *key, + RedisModuleString *min, + RedisModuleString *max); + +**Available since:** 4.0.0 + +Exactly like [`RedisModule_ZsetFirstInLexRange()`](#RedisModule_ZsetFirstInLexRange) but the last element of +the range is selected for the start of the iteration instead. + + + +### `RedisModule_ZsetRangeCurrentElement` + + RedisModuleString *RedisModule_ZsetRangeCurrentElement(RedisModuleKey *key, + double *score); + +**Available since:** 4.0.0 + +Return the current sorted set element of an active sorted set iterator +or NULL if the range specified in the iterator does not include any +element. + + + +### `RedisModule_ZsetRangeNext` + + int RedisModule_ZsetRangeNext(RedisModuleKey *key); + +**Available since:** 4.0.0 + +Go to the next element of the sorted set iterator. Returns 1 if there was +a next element, 0 if we are already at the latest element or the range +does not include any item at all. + + + +### `RedisModule_ZsetRangePrev` + + int RedisModule_ZsetRangePrev(RedisModuleKey *key); + +**Available since:** 4.0.0 + +Go to the previous element of the sorted set iterator. Returns 1 if there was +a previous element, 0 if we are already at the first element or the range +does not include any item at all. + + + +## Key API for Hash type + +See also [`RedisModule_ValueLength()`](#RedisModule_ValueLength), which returns the number of fields in a hash. 
+
+
+
+### `RedisModule_HashSet`
+
+    int RedisModule_HashSet(RedisModuleKey *key, int flags, ...);
+
+**Available since:** 4.0.0
+
+Set the specified hash field to the specified value.
+If the key is an empty key open for writing, it is created with an empty
+hash value, in order to set the specified field.
+
+The function is variadic and the user must specify pairs of field
+names and values, both as `RedisModuleString` pointers (unless the
+CFIELD option is set, see later). At the end of the field/value-ptr pairs,
+NULL must be specified as last argument to signal the end of the arguments
+in the variadic function.
+
+Example to set the hash argv[1] to the value argv[2]:
+
+    RedisModule_HashSet(key,REDISMODULE_HASH_NONE,argv[1],argv[2],NULL);
+
+The function can also be used in order to delete fields (if they exist)
+by setting them to the specified value of `REDISMODULE_HASH_DELETE`:
+
+    RedisModule_HashSet(key,REDISMODULE_HASH_NONE,argv[1],
+                        REDISMODULE_HASH_DELETE,NULL);
+
+The behavior of the command changes with the specified flags, that can be
+set to `REDISMODULE_HASH_NONE` if no special behavior is needed.
+
+    REDISMODULE_HASH_NX: The operation is performed only if the field was not
+                         already existing in the hash.
+    REDISMODULE_HASH_XX: The operation is performed only if the field was
+                         already existing, so that a new value could be
+                         associated with an existing field, but no new fields
+                         are created.
+    REDISMODULE_HASH_CFIELDS: The field names passed are null terminated C
+                              strings instead of RedisModuleString objects.
+    REDISMODULE_HASH_COUNT_ALL: Include the number of inserted fields in the
+                                returned number, in addition to the number of
+                                updated and deleted fields. (Added in Redis
+                                6.2.)
+
+Unless NX is specified, the command overwrites the old field value with
+the new one.
+
+When using `REDISMODULE_HASH_CFIELDS`, field names are reported using
+normal C strings, so for example to delete the field "foo" the following
+code can be used:
+
+    RedisModule_HashSet(key,REDISMODULE_HASH_CFIELDS,"foo",
+                        REDISMODULE_HASH_DELETE,NULL);
+
+Return value:
+
+The number of fields existing in the hash prior to the call, which have been
+updated (their old value has been replaced by a new value) or deleted. If the
+flag `REDISMODULE_HASH_COUNT_ALL` is set, inserted fields not previously
+existing in the hash are also counted.
+
+If the return value is zero, `errno` is set (since Redis 6.2) as follows:
+
+- EINVAL if any unknown flags are set or if key is NULL.
+- ENOTSUP if the key is associated with a non-hash value.
+- EBADF if the key was not opened for writing.
+- ENOENT if no fields were counted as described under Return value above.
+  This is not actually an error. The return value can be zero if all fields
+  were just created and the `COUNT_ALL` flag was unset, or if changes were held
+  back due to the NX and XX flags.
+
+NOTICE: The return value semantics of this function are very different
+between Redis 6.2 and older versions. Modules that use it should determine
+the Redis version and handle it accordingly.
+
+
+
+### `RedisModule_HashGet`
+
+    int RedisModule_HashGet(RedisModuleKey *key, int flags, ...);
+
+**Available since:** 4.0.0
+
+Get fields from a hash value. This function is called using a variable
+number of arguments, alternating a field name (as a `RedisModuleString`
+pointer) with a pointer to a `RedisModuleString` pointer, that is set to the
+value of the field if the field exists, or NULL if the field does not exist.
+At the end of the field/value-ptr pairs, NULL must be specified as last +argument to signal the end of the arguments in the variadic function. + +This is an example usage: + + RedisModuleString *first, *second; + RedisModule_HashGet(mykey,REDISMODULE_HASH_NONE,argv[1],&first, + argv[2],&second,NULL); + +As with [`RedisModule_HashSet()`](#RedisModule_HashSet) the behavior of the command can be specified +passing flags different than `REDISMODULE_HASH_NONE`: + +`REDISMODULE_HASH_CFIELDS`: field names as null terminated C strings. + +`REDISMODULE_HASH_EXISTS`: instead of setting the value of the field +expecting a `RedisModuleString` pointer to pointer, the function just +reports if the field exists or not and expects an integer pointer +as the second element of each pair. + +`REDISMODULE_HASH_EXPIRE_TIME`: retrieves the expiration time of a field in the hash. +The function expects a `mstime_t` pointer as the second element of each pair. +If the field does not exist or has no expiration, the value is set to +`REDISMODULE_NO_EXPIRE`. This flag must not be used with `REDISMODULE_HASH_EXISTS`. + +Example of `REDISMODULE_HASH_CFIELDS`: + + RedisModuleString *username, *hashedpass; + RedisModule_HashGet(mykey,REDISMODULE_HASH_CFIELDS,"username",&username,"hp",&hashedpass, NULL); + +Example of `REDISMODULE_HASH_EXISTS`: + + int exists; + RedisModule_HashGet(mykey,REDISMODULE_HASH_EXISTS,"username",&exists,NULL); + +Example of `REDISMODULE_HASH_EXPIRE_TIME`: + + mstime_t hpExpireTime; + RedisModule_HashGet(mykey,REDISMODULE_HASH_EXPIRE_TIME,"hp",&hpExpireTime,NULL); + +The function returns `REDISMODULE_OK` on success and `REDISMODULE_ERR` if +the key is not a hash value. + +Memory management: + +The returned `RedisModuleString` objects should be released with +[`RedisModule_FreeString()`](#RedisModule_FreeString), or by enabling automatic memory management. + + + +### `RedisModule_HashFieldMinExpire` + + mstime_t RedisModule_HashFieldMinExpire(RedisModuleKey *key); + +**Available since:** unreleased + + +Retrieves the minimum expiration time of fields in a hash. + +Return: + - The minimum expiration time (in milliseconds) of the hash fields if at + least one field has an expiration set. + - `REDISMODULE_NO_EXPIRE` if no fields have an expiration set or if the key + is not a hash. + + + +## Key API for Stream type + +For an introduction to streams, see [https://redis.io/docs/latest/develop/data-types/streams/](https://redis.io/docs/latest/develop/data-types/streams/). + +The type `RedisModuleStreamID`, which is used in stream functions, is a struct +with two 64-bit fields and is defined as + + typedef struct RedisModuleStreamID { + uint64_t ms; + uint64_t seq; + } RedisModuleStreamID; + +See also [`RedisModule_ValueLength()`](#RedisModule_ValueLength), which returns the length of a stream, and the +conversion functions [`RedisModule_StringToStreamID()`](#RedisModule_StringToStreamID) and [`RedisModule_CreateStringFromStreamID()`](#RedisModule_CreateStringFromStreamID). + + + +### `RedisModule_StreamAdd` + + int RedisModule_StreamAdd(RedisModuleKey *key, + int flags, + RedisModuleStreamID *id, + RedisModuleString **argv, + long numfields); + +**Available since:** 6.2.0 + +Adds an entry to a stream. Like XADD without trimming. + +- `key`: The key where the stream is (or will be) stored +- `flags`: A bit field of + - `REDISMODULE_STREAM_ADD_AUTOID`: Assign a stream ID automatically, like + `*` in the XADD command. +- `id`: If the `AUTOID` flag is set, this is where the assigned ID is + returned. 
Can be NULL if `AUTOID` is set, if you don't care to receive the + ID. If `AUTOID` is not set, this is the requested ID. +- `argv`: A pointer to an array of size `numfields * 2` containing the + fields and values. +- `numfields`: The number of field-value pairs in `argv`. + +Returns `REDISMODULE_OK` if an entry has been added. On failure, +`REDISMODULE_ERR` is returned and `errno` is set as follows: + +- EINVAL if called with invalid arguments +- ENOTSUP if the key refers to a value of a type other than stream +- EBADF if the key was not opened for writing +- EDOM if the given ID was 0-0 or not greater than all other IDs in the + stream (only if the AUTOID flag is unset) +- EFBIG if the stream has reached the last possible ID +- ERANGE if the elements are too large to be stored. + + + +### `RedisModule_StreamDelete` + + int RedisModule_StreamDelete(RedisModuleKey *key, RedisModuleStreamID *id); + +**Available since:** 6.2.0 + +Deletes an entry from a stream. + +- `key`: A key opened for writing, with no stream iterator started. +- `id`: The stream ID of the entry to delete. + +Returns `REDISMODULE_OK` on success. On failure, `REDISMODULE_ERR` is returned +and `errno` is set as follows: + +- EINVAL if called with invalid arguments +- ENOTSUP if the key refers to a value of a type other than stream or if the + key is empty +- EBADF if the key was not opened for writing or if a stream iterator is + associated with the key +- ENOENT if no entry with the given stream ID exists + +See also [`RedisModule_StreamIteratorDelete()`](#RedisModule_StreamIteratorDelete) for deleting the current entry while +iterating using a stream iterator. + + + +### `RedisModule_StreamIteratorStart` + + int RedisModule_StreamIteratorStart(RedisModuleKey *key, + int flags, + RedisModuleStreamID *start, + RedisModuleStreamID *end); + +**Available since:** 6.2.0 + +Sets up a stream iterator. + +- `key`: The stream key opened for reading using [`RedisModule_OpenKey()`](#RedisModule_OpenKey). +- `flags`: + - `REDISMODULE_STREAM_ITERATOR_EXCLUSIVE`: Don't include `start` and `end` + in the iterated range. + - `REDISMODULE_STREAM_ITERATOR_REVERSE`: Iterate in reverse order, starting + from the `end` of the range. +- `start`: The lower bound of the range. Use NULL for the beginning of the + stream. +- `end`: The upper bound of the range. Use NULL for the end of the stream. + +Returns `REDISMODULE_OK` on success. On failure, `REDISMODULE_ERR` is returned +and `errno` is set as follows: + +- EINVAL if called with invalid arguments +- ENOTSUP if the key refers to a value of a type other than stream or if the + key is empty +- EBADF if the key was not opened for writing or if a stream iterator is + already associated with the key +- EDOM if `start` or `end` is outside the valid range + +Returns `REDISMODULE_OK` on success and `REDISMODULE_ERR` if the key doesn't +refer to a stream or if invalid arguments were given. + +The stream IDs are retrieved using [`RedisModule_StreamIteratorNextID()`](#RedisModule_StreamIteratorNextID) and +for each stream ID, the fields and values are retrieved using +[`RedisModule_StreamIteratorNextField()`](#RedisModule_StreamIteratorNextField). The iterator is freed by calling +[`RedisModule_StreamIteratorStop()`](#RedisModule_StreamIteratorStop). 
+ +Example (error handling omitted): + + RedisModule_StreamIteratorStart(key, 0, startid_ptr, endid_ptr); + RedisModuleStreamID id; + long numfields; + while (RedisModule_StreamIteratorNextID(key, &id, &numfields) == + REDISMODULE_OK) { + RedisModuleString *field, *value; + while (RedisModule_StreamIteratorNextField(key, &field, &value) == + REDISMODULE_OK) { + // + // ... Do stuff ... + // + RedisModule_FreeString(ctx, field); + RedisModule_FreeString(ctx, value); + } + } + RedisModule_StreamIteratorStop(key); + + + +### `RedisModule_StreamIteratorStop` + + int RedisModule_StreamIteratorStop(RedisModuleKey *key); + +**Available since:** 6.2.0 + +Stops a stream iterator created using [`RedisModule_StreamIteratorStart()`](#RedisModule_StreamIteratorStart) and +reclaims its memory. + +Returns `REDISMODULE_OK` on success. On failure, `REDISMODULE_ERR` is returned +and `errno` is set as follows: + +- EINVAL if called with a NULL key +- ENOTSUP if the key refers to a value of a type other than stream or if the + key is empty +- EBADF if the key was not opened for writing or if no stream iterator is + associated with the key + + + +### `RedisModule_StreamIteratorNextID` + + int RedisModule_StreamIteratorNextID(RedisModuleKey *key, + RedisModuleStreamID *id, + long *numfields); + +**Available since:** 6.2.0 + +Finds the next stream entry and returns its stream ID and the number of +fields. + +- `key`: Key for which a stream iterator has been started using + [`RedisModule_StreamIteratorStart()`](#RedisModule_StreamIteratorStart). +- `id`: The stream ID returned. NULL if you don't care. +- `numfields`: The number of fields in the found stream entry. NULL if you + don't care. + +Returns `REDISMODULE_OK` and sets `*id` and `*numfields` if an entry was found. +On failure, `REDISMODULE_ERR` is returned and `errno` is set as follows: + +- EINVAL if called with a NULL key +- ENOTSUP if the key refers to a value of a type other than stream or if the + key is empty +- EBADF if no stream iterator is associated with the key +- ENOENT if there are no more entries in the range of the iterator + +In practice, if [`RedisModule_StreamIteratorNextID()`](#RedisModule_StreamIteratorNextID) is called after a successful call +to [`RedisModule_StreamIteratorStart()`](#RedisModule_StreamIteratorStart) and with the same key, it is safe to assume that +an `REDISMODULE_ERR` return value means that there are no more entries. + +Use [`RedisModule_StreamIteratorNextField()`](#RedisModule_StreamIteratorNextField) to retrieve the fields and values. +See the example at [`RedisModule_StreamIteratorStart()`](#RedisModule_StreamIteratorStart). + + + +### `RedisModule_StreamIteratorNextField` + + int RedisModule_StreamIteratorNextField(RedisModuleKey *key, + RedisModuleString **field_ptr, + RedisModuleString **value_ptr); + +**Available since:** 6.2.0 + +Retrieves the next field of the current stream ID and its corresponding value +in a stream iteration. This function should be called repeatedly after calling +[`RedisModule_StreamIteratorNextID()`](#RedisModule_StreamIteratorNextID) to fetch each field-value pair. + +- `key`: Key where a stream iterator has been started. +- `field_ptr`: This is where the field is returned. +- `value_ptr`: This is where the value is returned. + +Returns `REDISMODULE_OK` and points `*field_ptr` and `*value_ptr` to freshly +allocated `RedisModuleString` objects. The string objects are freed +automatically when the callback finishes if automatic memory is enabled. 
On +failure, `REDISMODULE_ERR` is returned and `errno` is set as follows: + +- EINVAL if called with a NULL key +- ENOTSUP if the key refers to a value of a type other than stream or if the + key is empty +- EBADF if no stream iterator is associated with the key +- ENOENT if there are no more fields in the current stream entry + +In practice, if [`RedisModule_StreamIteratorNextField()`](#RedisModule_StreamIteratorNextField) is called after a successful +call to [`RedisModule_StreamIteratorNextID()`](#RedisModule_StreamIteratorNextID) and with the same key, it is safe to assume +that an `REDISMODULE_ERR` return value means that there are no more fields. + +See the example at [`RedisModule_StreamIteratorStart()`](#RedisModule_StreamIteratorStart). + + + +### `RedisModule_StreamIteratorDelete` + + int RedisModule_StreamIteratorDelete(RedisModuleKey *key); + +**Available since:** 6.2.0 + +Deletes the current stream entry while iterating. + +This function can be called after [`RedisModule_StreamIteratorNextID()`](#RedisModule_StreamIteratorNextID) or after any +calls to [`RedisModule_StreamIteratorNextField()`](#RedisModule_StreamIteratorNextField). + +Returns `REDISMODULE_OK` on success. On failure, `REDISMODULE_ERR` is returned +and `errno` is set as follows: + +- EINVAL if key is NULL +- ENOTSUP if the key is empty or is of another type than stream +- EBADF if the key is not opened for writing, if no iterator has been started +- ENOENT if the iterator has no current stream entry + + + +### `RedisModule_StreamTrimByLength` + + long long RedisModule_StreamTrimByLength(RedisModuleKey *key, + int flags, + long long length); + +**Available since:** 6.2.0 + +Trim a stream by length, similar to XTRIM with MAXLEN. + +- `key`: Key opened for writing. +- `flags`: A bitfield of + - `REDISMODULE_STREAM_TRIM_APPROX`: Trim less if it improves performance, + like XTRIM with `~`. +- `length`: The number of stream entries to keep after trimming. + +Returns the number of entries deleted. On failure, a negative value is +returned and `errno` is set as follows: + +- EINVAL if called with invalid arguments +- ENOTSUP if the key is empty or of a type other than stream +- EBADF if the key is not opened for writing + + + +### `RedisModule_StreamTrimByID` + + long long RedisModule_StreamTrimByID(RedisModuleKey *key, + int flags, + RedisModuleStreamID *id); + +**Available since:** 6.2.0 + +Trim a stream by ID, similar to XTRIM with MINID. + +- `key`: Key opened for writing. +- `flags`: A bitfield of + - `REDISMODULE_STREAM_TRIM_APPROX`: Trim less if it improves performance, + like XTRIM with `~`. +- `id`: The smallest stream ID to keep after trimming. + +Returns the number of entries deleted. On failure, a negative value is +returned and `errno` is set as follows: + +- EINVAL if called with invalid arguments +- ENOTSUP if the key is empty or of a type other than stream +- EBADF if the key is not opened for writing + + + +## Calling Redis commands from modules + +[`RedisModule_Call()`](#RedisModule_Call) sends a command to Redis. The remaining functions handle the reply. + + + +### `RedisModule_FreeCallReply` + + void RedisModule_FreeCallReply(RedisModuleCallReply *reply); + +**Available since:** 4.0.0 + +Free a Call reply and all the nested replies it contains if it's an +array. 
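+
+For context, [`RedisModule_FreeCallReply()`](#RedisModule_FreeCallReply) and the reply
+inspection functions that follow are normally used together with
+[`RedisModule_Call()`](#RedisModule_Call) (documented later in this section) in a
+call/inspect/free sequence like the minimal sketch below, where the LLEN
+command and the `argv[1]` key are only placeholders:
+
+    /* Sketch: issue a command, inspect the reply, then release it. */
+    RedisModuleCallReply *reply = RedisModule_Call(ctx, "LLEN", "s", argv[1]);
+    if (reply != NULL &&
+        RedisModule_CallReplyType(reply) == REDISMODULE_REPLY_INTEGER) {
+        long long len = RedisModule_CallReplyInteger(reply);
+        /* ... use len ... */
+    }
+    if (reply != NULL) RedisModule_FreeCallReply(reply);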
+ + + +### `RedisModule_CallReplyType` + + int RedisModule_CallReplyType(RedisModuleCallReply *reply); + +**Available since:** 4.0.0 + +Return the reply type as one of the following: + +- `REDISMODULE_REPLY_UNKNOWN` +- `REDISMODULE_REPLY_STRING` +- `REDISMODULE_REPLY_ERROR` +- `REDISMODULE_REPLY_INTEGER` +- `REDISMODULE_REPLY_ARRAY` +- `REDISMODULE_REPLY_NULL` +- `REDISMODULE_REPLY_MAP` +- `REDISMODULE_REPLY_SET` +- `REDISMODULE_REPLY_BOOL` +- `REDISMODULE_REPLY_DOUBLE` +- `REDISMODULE_REPLY_BIG_NUMBER` +- `REDISMODULE_REPLY_VERBATIM_STRING` +- `REDISMODULE_REPLY_ATTRIBUTE` +- `REDISMODULE_REPLY_PROMISE` + + + +### `RedisModule_CallReplyLength` + + size_t RedisModule_CallReplyLength(RedisModuleCallReply *reply); + +**Available since:** 4.0.0 + +Return the reply type length, where applicable. + + + +### `RedisModule_CallReplyArrayElement` + + RedisModuleCallReply *RedisModule_CallReplyArrayElement(RedisModuleCallReply *reply, + size_t idx); + +**Available since:** 4.0.0 + +Return the 'idx'-th nested call reply element of an array reply, or NULL +if the reply type is wrong or the index is out of range. + + + +### `RedisModule_CallReplyInteger` + + long long RedisModule_CallReplyInteger(RedisModuleCallReply *reply); + +**Available since:** 4.0.0 + +Return the `long long` of an integer reply. + + + +### `RedisModule_CallReplyDouble` + + double RedisModule_CallReplyDouble(RedisModuleCallReply *reply); + +**Available since:** 7.0.0 + +Return the double value of a double reply. + + + +### `RedisModule_CallReplyBigNumber` + + const char *RedisModule_CallReplyBigNumber(RedisModuleCallReply *reply, + size_t *len); + +**Available since:** 7.0.0 + +Return the big number value of a big number reply. + + + +### `RedisModule_CallReplyVerbatim` + + const char *RedisModule_CallReplyVerbatim(RedisModuleCallReply *reply, + size_t *len, + const char **format); + +**Available since:** 7.0.0 + +Return the value of a verbatim string reply, +An optional output argument can be given to get verbatim reply format. + + + +### `RedisModule_CallReplyBool` + + int RedisModule_CallReplyBool(RedisModuleCallReply *reply); + +**Available since:** 7.0.0 + +Return the Boolean value of a Boolean reply. + + + +### `RedisModule_CallReplySetElement` + + RedisModuleCallReply *RedisModule_CallReplySetElement(RedisModuleCallReply *reply, + size_t idx); + +**Available since:** 7.0.0 + +Return the 'idx'-th nested call reply element of a set reply, or NULL +if the reply type is wrong or the index is out of range. + + + +### `RedisModule_CallReplyMapElement` + + int RedisModule_CallReplyMapElement(RedisModuleCallReply *reply, + size_t idx, + RedisModuleCallReply **key, + RedisModuleCallReply **val); + +**Available since:** 7.0.0 + +Retrieve the 'idx'-th key and value of a map reply. + +Returns: +- `REDISMODULE_OK` on success. +- `REDISMODULE_ERR` if idx out of range or if the reply type is wrong. + +The `key` and `value` arguments are used to return by reference, and may be +NULL if not required. + + + +### `RedisModule_CallReplyAttribute` + + RedisModuleCallReply *RedisModule_CallReplyAttribute(RedisModuleCallReply *reply); + +**Available since:** 7.0.0 + +Return the attribute of the given reply, or NULL if no attribute exists. + + + +### `RedisModule_CallReplyAttributeElement` + + int RedisModule_CallReplyAttributeElement(RedisModuleCallReply *reply, + size_t idx, + RedisModuleCallReply **key, + RedisModuleCallReply **val); + +**Available since:** 7.0.0 + +Retrieve the 'idx'-th key and value of an attribute reply. 
+
+Returns:
+- `REDISMODULE_OK` on success.
+- `REDISMODULE_ERR` if idx is out of range or if the reply type is wrong.
+
+The `key` and `value` arguments are used to return by reference, and may be
+NULL if not required.
+
+
+
+### `RedisModule_CallReplyPromiseSetUnblockHandler`
+
+    void RedisModule_CallReplyPromiseSetUnblockHandler(RedisModuleCallReply *reply,
+                                                       RedisModuleOnUnblocked on_unblock,
+                                                       void *private_data);
+
+**Available since:** 7.2.0
+
+Set the unblock handler (callback and private data) on the given promise `RedisModuleCallReply`.
+The given reply must be of promise type (`REDISMODULE_REPLY_PROMISE`).
+
+
+
+### `RedisModule_CallReplyPromiseAbort`
+
+    int RedisModule_CallReplyPromiseAbort(RedisModuleCallReply *reply,
+                                          void **private_data);
+
+**Available since:** 7.2.0
+
+Abort the execution of a given promise `RedisModuleCallReply`.
+Returns `REDISMODULE_OK` in case the abort was done successfully and `REDISMODULE_ERR`
+if it is not possible to abort the execution (the execution has already finished).
+In case the execution was aborted (`REDISMODULE_OK` was returned), the `private_data` out parameter
+will be set with the value of the private data that was given on '[`RedisModule_CallReplyPromiseSetUnblockHandler`](#RedisModule_CallReplyPromiseSetUnblockHandler)'
+so the caller will be able to release the private data.
+
+If the execution was aborted successfully, it is promised that the unblock handler will not be called.
+That said, it is possible that the abort operation will succeed but the operation will still continue.
+This can happen if, for example, a module implements some blocking command and does not respect the
+disconnect callback. For pure Redis commands this cannot happen.
+
+
+
+### `RedisModule_CallReplyStringPtr`
+
+    const char *RedisModule_CallReplyStringPtr(RedisModuleCallReply *reply,
+                                               size_t *len);
+
+**Available since:** 4.0.0
+
+Return the pointer and length of a string or error reply.
+
+
+
+### `RedisModule_CreateStringFromCallReply`
+
+    RedisModuleString *RedisModule_CreateStringFromCallReply(RedisModuleCallReply *reply);
+
+**Available since:** 4.0.0
+
+Return a new string object from a call reply of type string, error or
+integer. Otherwise (wrong reply type) return NULL.
+
+
+
+### `RedisModule_SetContextUser`
+
+    void RedisModule_SetContextUser(RedisModuleCtx *ctx,
+                                    const RedisModuleUser *user);
+
+**Available since:** 7.0.6
+
+Modifies the user that [`RedisModule_Call`](#RedisModule_Call) will use (e.g. for ACL checks).
+
+
+
+### `RedisModule_Call`
+
+    RedisModuleCallReply *RedisModule_Call(RedisModuleCtx *ctx,
+                                           const char *cmdname,
+                                           const char *fmt,
+                                           ...);
+
+**Available since:** 4.0.0
+
+Exported API to call any Redis command from modules.
+
+* **cmdname**: The Redis command to call.
+* **fmt**: A format specifier string for the command's arguments. Each
+  of the arguments should be specified by a valid type specification. The
+  format specifier can also contain the modifiers `!`, `A`, `3` and `R` which
+  don't have a corresponding argument.
+
+ * `b` -- The argument is a buffer and is immediately followed by another
+   argument that is the buffer's length.
+ * `c` -- The argument is a pointer to a plain C string (null-terminated).
+ * `l` -- The argument is a `long long` integer.
+ * `s` -- The argument is a RedisModuleString.
+ * `v` -- The argument(s) is a vector of RedisModuleString.
+ * `!` -- Sends the Redis command and its arguments to replicas and AOF.
+ * `A` -- Suppress AOF propagation, send only to replicas (requires `!`).
+ * `R` -- Suppress replicas propagation, send only to AOF (requires `!`).
+ * `3` -- Return a RESP3 reply. This will change the command reply.
+   e.g., HGETALL returns a map instead of a flat array.
+ * `0` -- Return the reply in auto mode, i.e. the reply format will be the
+   same as the client attached to the given RedisModuleCtx. This will
+   probably be used when you want to pass the reply directly to the client.
+ * `C` -- Run a command as the user attached to the context.
+   User is either attached automatically via the client that directly
+   issued the command and created the context or via RedisModule_SetContextUser.
+   If the context is not directly created by an issued command (such as a
+   background context) and no user was set on it via RedisModule_SetContextUser,
+   RedisModule_Call will fail.
+   Checks if the command can be executed according to ACL rules and causes
+   the command to run as the determined user, so that any future user
+   dependent activity, such as ACL checks within scripts, will proceed as
+   expected.
+   Otherwise, the command will run as the Redis unrestricted user.
+   Upon sending a command from an internal connection, this flag is
+   ignored and the command will run as the Redis unrestricted user.
+ * `S` -- Run the command in script mode. This means that it will raise
+   an error if a command which is not allowed inside a script
+   (flagged with the `deny-script` flag) is invoked (like SHUTDOWN).
+   In addition, in script mode, write commands are not allowed if there are
+   not enough good replicas (as configured with `min-replicas-to-write`)
+   or when the server is unable to persist to the disk.
+ * `W` -- Do not allow running any write command (flagged with the `write` flag).
+ * `M` -- Do not allow `deny-oom` flagged commands when over the memory limit.
+ * `E` -- Return errors as RedisModuleCallReply. If there is an error before
+   invoking the command, the error is returned using the errno mechanism.
+   This flag allows getting the error also as an error CallReply with a
+   relevant error message.
+ * 'D' -- A "Dry Run" mode. Return before executing the underlying call().
+   If everything succeeded, it will return with a NULL, otherwise it will
+   return with a CallReply object denoting the error, as if it was called with
+   the 'E' code.
+ * 'K' -- Allow running blocking commands. If enabled and the command gets blocked, a
+   special REDISMODULE_REPLY_PROMISE will be returned. This reply type
+   indicates that the command was blocked and the reply will be given asynchronously.
+   The module can use this reply object to set a handler which will be called when
+   the command gets unblocked using RedisModule_CallReplyPromiseSetUnblockHandler.
+   The handler must be set immediately after the command invocation (without releasing
+   the Redis lock in between). If the handler is not set, the blocking command will
+   still continue its execution but the reply will be ignored (fire and forget).
+   Notice that this is dangerous in case of a role change, as explained below.
+   The module can use RedisModule_CallReplyPromiseAbort to abort the command invocation
+   if it was not yet finished (see RedisModule_CallReplyPromiseAbort documentation for more
+   details). It is also the module's responsibility to abort the execution on role change, either by using
+   a server event (to get notified when the instance becomes a replica) or relying on the disconnect
+   callback of the original client. Failing to do so can result in a write operation on a replica.
+   Unlike other call replies, a promise call reply **must** be freed while the Redis GIL is locked.
+   Notice that on unblocking, the only promise is that the unblock handler will be called.
+   If the blocking RedisModule_Call caused the module to also block some real client (using RedisModule_BlockClient),
+   it is the module's responsibility to unblock this client in the unblock handler.
+   In the unblock handler it is only allowed to perform the following:
+   * Calling additional Redis commands using RedisModule_Call
+   * Open keys using RedisModule_OpenKey
+   * Replicate data to the replica or AOF
+
+   Specifically, it is not allowed to call any Redis module APIs which are client related, such as:
+   * RedisModule_Reply* APIs
+   * RedisModule_BlockClient
+   * RedisModule_GetCurrentUserName
+
+* **...**: The actual arguments to the Redis command.
+
+On success a `RedisModuleCallReply` object is returned, otherwise
+NULL is returned and errno is set to the following values:
+
+* EBADF: wrong format specifier.
+* EINVAL: wrong command arity.
+* ENOENT: command does not exist.
+* EPERM: operation in Cluster instance with key in non local slot.
+* EROFS: operation in Cluster instance when a write command is sent
+  in a readonly state.
+* ENETDOWN: operation in Cluster instance when cluster is down.
+* ENOTSUP: No ACL user for the specified module context
+* EACCES: Command cannot be executed, according to ACL rules
+* ENOSPC: Write or deny-oom command is not allowed
+* ESPIPE: Command not allowed in script mode
+
+Example code fragment:
+
+    reply = RedisModule_Call(ctx,"INCRBY","sc",argv[1],"10");
+    if (RedisModule_CallReplyType(reply) == REDISMODULE_REPLY_INTEGER) {
+      long long myval = RedisModule_CallReplyInteger(reply);
+      // Do something with myval.
+    }
+
+This API is documented here: [https://redis.io/docs/latest/develop/reference/modules/](https://redis.io/docs/latest/develop/reference/modules/)
+
+
+
+### `RedisModule_CallReplyProto`
+
+    const char *RedisModule_CallReplyProto(RedisModuleCallReply *reply,
+                                           size_t *len);
+
+**Available since:** 4.0.0
+
+Return a pointer, and a length, to the protocol returned by the command
+that returned the reply object.
+
+
+
+## Modules data types
+
+When String DMA or using existing data structures is not enough, it is
+possible to create new data types from scratch and export them to
+Redis. The module must provide a set of callbacks for handling the
+new values exported (for example in order to provide RDB saving/loading,
+AOF rewrite, and so forth). In this section we define this API.
+
+
+
+### `RedisModule_CreateDataType`
+
+    moduleType *RedisModule_CreateDataType(RedisModuleCtx *ctx,
+                                           const char *name,
+                                           int encver,
+                                           void *typemethods_ptr);
+
+**Available since:** 4.0.0
+
+Register a new data type exported by the module. The parameters are the
+following. For in-depth documentation, check the modules API
+documentation, especially [https://redis.io/docs/latest/develop/reference/modules/modules-native-types/](https://redis.io/docs/latest/develop/reference/modules/modules-native-types/).
+
+* **name**: A 9 character data type name that MUST be unique in the Redis
+  Modules ecosystem. Be creative... and there will be no collisions. Use
+  the charset A-Z a-z 0-9, plus the two "-_" characters. A good
+  idea is to use, for example, `<typename>-<vendor>`. For example
+  "tree-AntZ" may mean "Tree data structure by @antirez". Using both
+  lower case and upper case letters helps to prevent collisions.
+* **encver**: Encoding version, that is, the version of the serialization
+  that a module uses in order to persist data. As long as the "name"
+  matches, the RDB loading will be dispatched to the type callbacks
+  whatever 'encver' is used, however the module can understand if
+  the encoding it must load is of an older version of the module.
+  For example the module "tree-AntZ" initially used encver=0. Later
+  after an upgrade, it started to serialize data in a different format
+  and to register the type with encver=1. However this module may
+  still load old data produced by an older version if the `rdb_load`
+  callback is able to check the encver value and act accordingly.
+  The encver must be a positive value between 0 and 1023.
+
+* **typemethods_ptr** is a pointer to a `RedisModuleTypeMethods` structure
+  that should be populated with the methods callbacks and structure
+  version, like in the following example:
+
+        RedisModuleTypeMethods tm = {
+            .version = REDISMODULE_TYPE_METHOD_VERSION,
+            .rdb_load = myType_RDBLoadCallBack,
+            .rdb_save = myType_RDBSaveCallBack,
+            .aof_rewrite = myType_AOFRewriteCallBack,
+            .free = myType_FreeCallBack,
+
+            // Optional fields
+            .digest = myType_DigestCallBack,
+            .mem_usage = myType_MemUsageCallBack,
+            .aux_load = myType_AuxRDBLoadCallBack,
+            .aux_save = myType_AuxRDBSaveCallBack,
+            .free_effort = myType_FreeEffortCallBack,
+            .unlink = myType_UnlinkCallBack,
+            .copy = myType_CopyCallback,
+            .defrag = myType_DefragCallback,
+
+            // Enhanced optional fields
+            .mem_usage2 = myType_MemUsageCallBack2,
+            .free_effort2 = myType_FreeEffortCallBack2,
+            .unlink2 = myType_UnlinkCallBack2,
+            .copy2 = myType_CopyCallback2,
+        };
+
+* **rdb_load**: A callback function pointer that loads data from RDB files.
+* **rdb_save**: A callback function pointer that saves data to RDB files.
+* **aof_rewrite**: A callback function pointer that rewrites data as commands.
+* **digest**: A callback function pointer that is used for `DEBUG DIGEST`.
+* **free**: A callback function pointer that can free a type value.
+* **aux_save**: A callback function pointer that saves out of keyspace data to RDB files.
+  The 'when' argument is either `REDISMODULE_AUX_BEFORE_RDB` or `REDISMODULE_AUX_AFTER_RDB`.
+* **aux_load**: A callback function pointer that loads out of keyspace data from RDB files.
+  Similar to `aux_save`, returns `REDISMODULE_OK` on success, and ERR otherwise.
+* **free_effort**: A callback function pointer that is used to determine whether the module's
+  memory needs to be lazily reclaimed. The module should return the complexity involved in
+  freeing the value, for example: how many pointers are going to be freed. Note that if it
+  returns 0, we'll always do an async free.
+* **unlink**: A callback function pointer that is used to notify the module that the key has
+  been removed from the DB by Redis, and may soon be freed by a background thread. Note that
+  it won't be called on FLUSHALL/FLUSHDB (both sync and async), and the module can use the
+  `RedisModuleEvent_FlushDB` to hook into that.
+* **copy**: A callback function pointer that is used to make a copy of the specified key.
+  The module is expected to perform a deep copy of the specified value and return it.
+  In addition, hints about the names of the source and destination keys are provided.
+  A NULL return value is considered an error and the copy operation fails.
+  Note: if the target key exists and is being overwritten, the copy callback will be
+  called first, followed by a free callback to the value that is being replaced.
+ +* **defrag**: A callback function pointer that is used to request the module to defrag + a key. The module should then iterate pointers and call the relevant `RedisModule_Defrag*()` + functions to defragment pointers or complex types. The module should continue + iterating as long as [`RedisModule_DefragShouldStop()`](#RedisModule_DefragShouldStop) returns a zero value, and return a + zero value if finished or non-zero value if more work is left to be done. If more work + needs to be done, [`RedisModule_DefragCursorSet()`](#RedisModule_DefragCursorSet) and [`RedisModule_DefragCursorGet()`](#RedisModule_DefragCursorGet) can be used to track + this work across different calls. + Normally, the defrag mechanism invokes the callback without a time limit, so + [`RedisModule_DefragShouldStop()`](#RedisModule_DefragShouldStop) always returns zero. The "late defrag" mechanism which has + a time limit and provides cursor support is used only for keys that are determined + to have significant internal complexity. To determine this, the defrag mechanism + uses the `free_effort` callback and the 'active-defrag-max-scan-fields' config directive. + NOTE: The value is passed as a `void**` and the function is expected to update the + pointer if the top-level value pointer is defragmented and consequently changes. + +* **mem_usage2**: Similar to `mem_usage`, but provides the `RedisModuleKeyOptCtx` parameter + so that meta information such as key name and db id can be obtained, and + the `sample_size` for size estimation (see MEMORY USAGE command). +* **free_effort2**: Similar to `free_effort`, but provides the `RedisModuleKeyOptCtx` parameter + so that meta information such as key name and db id can be obtained. +* **unlink2**: Similar to `unlink`, but provides the `RedisModuleKeyOptCtx` parameter + so that meta information such as key name and db id can be obtained. +* **copy2**: Similar to `copy`, but provides the `RedisModuleKeyOptCtx` parameter + so that meta information such as key names and db ids can be obtained. +* **aux_save2**: Similar to `aux_save`, but with small semantic change, if the module + saves nothing on this callback then no data about this aux field will be written to the + RDB and it will be possible to load the RDB even if the module is not loaded. + +Note: the module name "AAAAAAAAA" is reserved and produces an error, it +happens to be pretty lame as well. + +If [`RedisModule_CreateDataType()`](#RedisModule_CreateDataType) is called outside of `RedisModule_OnLoad()` function, +there is already a module registering a type with the same name, +or if the module name or encver is invalid, NULL is returned. +Otherwise the new type is registered into Redis, and a reference of +type `RedisModuleType` is returned: the caller of the function should store +this reference into a global variable to make future use of it in the +modules type API, since a single module may register multiple types. +Example code fragment: + + static RedisModuleType *BalancedTreeType; + + int RedisModule_OnLoad(RedisModuleCtx *ctx) { + // some code here ... + BalancedTreeType = RedisModule_CreateDataType(...); + } + + + +### `RedisModule_ModuleTypeSetValue` + + int RedisModule_ModuleTypeSetValue(RedisModuleKey *key, + moduleType *mt, + void *value); + +**Available since:** 4.0.0 + +If the key is open for writing, set the specified module type object +as the value of the key, deleting the old value if any. +On success `REDISMODULE_OK` is returned. 
If the key is not open for +writing or there is an active iterator, `REDISMODULE_ERR` is returned. + + + +### `RedisModule_ModuleTypeGetType` + + moduleType *RedisModule_ModuleTypeGetType(RedisModuleKey *key); + +**Available since:** 4.0.0 + +Assuming [`RedisModule_KeyType()`](#RedisModule_KeyType) returned `REDISMODULE_KEYTYPE_MODULE` on +the key, returns the module type pointer of the value stored at key. + +If the key is NULL, is not associated with a module type, or is empty, +then NULL is returned instead. + + + +### `RedisModule_ModuleTypeGetValue` + + void *RedisModule_ModuleTypeGetValue(RedisModuleKey *key); + +**Available since:** 4.0.0 + +Assuming [`RedisModule_KeyType()`](#RedisModule_KeyType) returned `REDISMODULE_KEYTYPE_MODULE` on +the key, returns the module type low-level value stored at key, as +it was set by the user via [`RedisModule_ModuleTypeSetValue()`](#RedisModule_ModuleTypeSetValue). + +If the key is NULL, is not associated with a module type, or is empty, +then NULL is returned instead. + + + +## RDB loading and saving functions + + + +### `RedisModule_IsIOError` + + int RedisModule_IsIOError(RedisModuleIO *io); + +**Available since:** 6.0.0 + +Returns true if any previous IO API failed. +for `Load*` APIs the `REDISMODULE_OPTIONS_HANDLE_IO_ERRORS` flag must be set with +[`RedisModule_SetModuleOptions`](#RedisModule_SetModuleOptions) first. + + + +### `RedisModule_SaveUnsigned` + + void RedisModule_SaveUnsigned(RedisModuleIO *io, uint64_t value); + +**Available since:** 4.0.0 + +Save an unsigned 64 bit value into the RDB file. This function should only +be called in the context of the `rdb_save` method of modules implementing new +data types. + + + +### `RedisModule_LoadUnsigned` + + uint64_t RedisModule_LoadUnsigned(RedisModuleIO *io); + +**Available since:** 4.0.0 + +Load an unsigned 64 bit value from the RDB file. This function should only +be called in the context of the `rdb_load` method of modules implementing +new data types. + + + +### `RedisModule_SaveSigned` + + void RedisModule_SaveSigned(RedisModuleIO *io, int64_t value); + +**Available since:** 4.0.0 + +Like [`RedisModule_SaveUnsigned()`](#RedisModule_SaveUnsigned) but for signed 64 bit values. + + + +### `RedisModule_LoadSigned` + + int64_t RedisModule_LoadSigned(RedisModuleIO *io); + +**Available since:** 4.0.0 + +Like [`RedisModule_LoadUnsigned()`](#RedisModule_LoadUnsigned) but for signed 64 bit values. + + + +### `RedisModule_SaveString` + + void RedisModule_SaveString(RedisModuleIO *io, RedisModuleString *s); + +**Available since:** 4.0.0 + +In the context of the `rdb_save` method of a module type, saves a +string into the RDB file taking as input a `RedisModuleString`. + +The string can be later loaded with [`RedisModule_LoadString()`](#RedisModule_LoadString) or +other Load family functions expecting a serialized string inside +the RDB file. + + + +### `RedisModule_SaveStringBuffer` + + void RedisModule_SaveStringBuffer(RedisModuleIO *io, + const char *str, + size_t len); + +**Available since:** 4.0.0 + +Like [`RedisModule_SaveString()`](#RedisModule_SaveString) but takes a raw C pointer and length +as input. + + + +### `RedisModule_LoadString` + + RedisModuleString *RedisModule_LoadString(RedisModuleIO *io); + +**Available since:** 4.0.0 + +In the context of the `rdb_load` method of a module data type, loads a string +from the RDB file, that was previously saved with [`RedisModule_SaveString()`](#RedisModule_SaveString) +functions family. 
+
+The returned string is a newly allocated `RedisModuleString` object, and
+the user should at some point free it with a call to [`RedisModule_FreeString()`](#RedisModule_FreeString).
+
+If the data structure does not store strings as `RedisModuleString` objects,
+the similar function [`RedisModule_LoadStringBuffer()`](#RedisModule_LoadStringBuffer) could be used instead.
+
+
+
+### `RedisModule_LoadStringBuffer`
+
+    char *RedisModule_LoadStringBuffer(RedisModuleIO *io, size_t *lenptr);
+
+**Available since:** 4.0.0
+
+Like [`RedisModule_LoadString()`](#RedisModule_LoadString) but returns a heap allocated string that
+was allocated with [`RedisModule_Alloc()`](#RedisModule_Alloc), and can be resized or freed with
+[`RedisModule_Realloc()`](#RedisModule_Realloc) or [`RedisModule_Free()`](#RedisModule_Free).
+
+The size of the string is stored at '*lenptr' if not NULL.
+The returned string is not automatically NULL terminated; it is loaded
+exactly as it was stored inside the RDB file.
+
+
+
+### `RedisModule_SaveDouble`
+
+    void RedisModule_SaveDouble(RedisModuleIO *io, double value);
+
+**Available since:** 4.0.0
+
+In the context of the `rdb_save` method of a module data type, saves a double
+value to the RDB file. The double can be a valid number, a NaN or infinity.
+It is possible to load back the value with [`RedisModule_LoadDouble()`](#RedisModule_LoadDouble).
+
+
+
+### `RedisModule_LoadDouble`
+
+    double RedisModule_LoadDouble(RedisModuleIO *io);
+
+**Available since:** 4.0.0
+
+In the context of the `rdb_load` method of a module data type, loads back the
+double value saved by [`RedisModule_SaveDouble()`](#RedisModule_SaveDouble).
+
+
+
+### `RedisModule_SaveFloat`
+
+    void RedisModule_SaveFloat(RedisModuleIO *io, float value);
+
+**Available since:** 4.0.0
+
+In the context of the `rdb_save` method of a module data type, saves a float
+value to the RDB file. The float can be a valid number, a NaN or infinity.
+It is possible to load back the value with [`RedisModule_LoadFloat()`](#RedisModule_LoadFloat).
+
+
+
+### `RedisModule_LoadFloat`
+
+    float RedisModule_LoadFloat(RedisModuleIO *io);
+
+**Available since:** 4.0.0
+
+In the context of the `rdb_load` method of a module data type, loads back the
+float value saved by [`RedisModule_SaveFloat()`](#RedisModule_SaveFloat).
+
+
+
+### `RedisModule_SaveLongDouble`
+
+    void RedisModule_SaveLongDouble(RedisModuleIO *io, long double value);
+
+**Available since:** 6.0.0
+
+In the context of the `rdb_save` method of a module data type, saves a long double
+value to the RDB file. The long double can be a valid number, a NaN or infinity.
+It is possible to load back the value with [`RedisModule_LoadLongDouble()`](#RedisModule_LoadLongDouble).
+
+
+
+### `RedisModule_LoadLongDouble`
+
+    long double RedisModule_LoadLongDouble(RedisModuleIO *io);
+
+**Available since:** 6.0.0
+
+In the context of the `rdb_load` method of a module data type, loads back the
+long double value saved by [`RedisModule_SaveLongDouble()`](#RedisModule_SaveLongDouble).
+
+
+
+## Key digest API (DEBUG DIGEST interface for modules types)
+
+
+
+### `RedisModule_DigestAddStringBuffer`
+
+    void RedisModule_DigestAddStringBuffer(RedisModuleDigest *md,
+                                           const char *ele,
+                                           size_t len);
+
+**Available since:** 4.0.0
+
+Add a new element to the digest. This function can be called multiple times,
+one element after the other, for all the elements that constitute a given
+data structure. 
The function call must eventually be followed by a call to
+[`RedisModule_DigestEndSequence`](#RedisModule_DigestEndSequence), once all the elements that are
+always in a given order have been added. See the Redis Modules data types
+documentation for more info. However, here is a quick overview that uses Redis
+data types as an example.
+
+To add a sequence of unordered elements (for example in the case of a Redis
+Set), the pattern to use is:
+
+    foreach element {
+        AddElement(element);
+        EndSequence();
+    }
+
+Because Sets are not ordered, every element added has a position that
+does not depend on the others. However, if instead our elements are
+ordered in pairs, like the field-value pairs of a Hash, then one should
+use:
+
+    foreach key,value {
+        AddElement(key);
+        AddElement(value);
+        EndSequence();
+    }
+
+Because the key and value will always be in the above order, while the
+individual key-value pairs can appear in any position inside a Redis hash.
+
+A list of ordered elements would be implemented with:
+
+    foreach element {
+        AddElement(element);
+    }
+    EndSequence();
+
+
+
+### `RedisModule_DigestAddLongLong`
+
+    void RedisModule_DigestAddLongLong(RedisModuleDigest *md, long long ll);
+
+**Available since:** 4.0.0
+
+Like [`RedisModule_DigestAddStringBuffer()`](#RedisModule_DigestAddStringBuffer) but takes a `long long` as input
+that gets converted into a string before adding it to the digest.
+
+
+
+### `RedisModule_DigestEndSequence`
+
+    void RedisModule_DigestEndSequence(RedisModuleDigest *md);
+
+**Available since:** 4.0.0
+
+See the documentation for [`RedisModule_DigestAddStringBuffer()`](#RedisModule_DigestAddStringBuffer).
+
+
+
+### `RedisModule_LoadDataTypeFromStringEncver`
+
+    void *RedisModule_LoadDataTypeFromStringEncver(const RedisModuleString *str,
+                                                   const moduleType *mt,
+                                                   int encver);
+
+**Available since:** 7.0.0
+
+Decode a serialized representation of a module data type 'mt', in a specific encoding version 'encver',
+from string 'str' and return a newly allocated value, or NULL if decoding failed.
+
+This call basically reuses the '`rdb_load`' callback which module data types
+implement in order to allow a module to arbitrarily serialize/de-serialize
+keys, similar to how the Redis 'DUMP' and 'RESTORE' commands are implemented.
+
+Modules should generally use the `REDISMODULE_OPTIONS_HANDLE_IO_ERRORS` flag and
+make sure the de-serialization code properly checks and handles IO errors
+(freeing allocated buffers and returning a NULL).
+
+If this is NOT done, Redis will handle corrupted (or just truncated) serialized
+data by producing an error message and terminating the process.
+
+
+
+### `RedisModule_LoadDataTypeFromString`
+
+    void *RedisModule_LoadDataTypeFromString(const RedisModuleString *str,
+                                             const moduleType *mt);
+
+**Available since:** 6.0.0
+
+Similar to [`RedisModule_LoadDataTypeFromStringEncver`](#RedisModule_LoadDataTypeFromStringEncver); this is the original version of
+the API, kept for backward compatibility.
+
+
+
+### `RedisModule_SaveDataTypeToString`
+
+    RedisModuleString *RedisModule_SaveDataTypeToString(RedisModuleCtx *ctx,
+                                                        void *data,
+                                                        const moduleType *mt);
+
+**Available since:** 6.0.0
+
+Encode a module data type 'mt' value 'data' into serialized form, and return it
+as a newly allocated `RedisModuleString`.
+
+This call basically reuses the '`rdb_save`' callback which module data types
+implement in order to allow a module to arbitrarily serialize/de-serialize
+keys, similar to how the Redis 'DUMP' and 'RESTORE' commands are implemented.
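+
+The functions in this section are typically exercised through a data type's `rdb_save` and
+`rdb_load` callbacks, which are the same callbacks that [`RedisModule_SaveDataTypeToString()`](#RedisModule_SaveDataTypeToString)
+and [`RedisModule_LoadDataTypeFromStringEncver()`](#RedisModule_LoadDataTypeFromStringEncver) reuse. The following is a minimal,
+hypothetical sketch of such a pair of callbacks; the `MyTree` layout and the callback names are
+invented for illustration, and it assumes the module has enabled
+`REDISMODULE_OPTIONS_HANDLE_IO_ERRORS` so that [`RedisModule_IsIOError()`](#RedisModule_IsIOError) can be consulted:
+
+    typedef struct {
+        uint64_t len;
+        int64_t *scores;
+    } MyTree; /* hypothetical value layout */
+
+    /* rdb_save callback: write a length, then that many signed integers. */
+    void MyTreeType_RDBSave(RedisModuleIO *io, void *value) {
+        MyTree *t = value;
+        RedisModule_SaveUnsigned(io, t->len);
+        for (uint64_t j = 0; j < t->len; j++)
+            RedisModule_SaveSigned(io, t->scores[j]);
+    }
+
+    /* rdb_load callback: read back the same layout, checking for IO errors. */
+    void *MyTreeType_RDBLoad(RedisModuleIO *io, int encver) {
+        if (encver != 0) return NULL; /* unknown encoding version */
+        uint64_t len = RedisModule_LoadUnsigned(io);
+        if (RedisModule_IsIOError(io)) return NULL;
+        MyTree *t = RedisModule_Alloc(sizeof(*t));
+        t->len = len;
+        t->scores = RedisModule_Alloc(sizeof(int64_t) * len);
+        for (uint64_t j = 0; j < len; j++)
+            t->scores[j] = RedisModule_LoadSigned(io);
+        if (RedisModule_IsIOError(io)) {
+            RedisModule_Free(t->scores);
+            RedisModule_Free(t);
+            return NULL;
+        }
+        return t;
+    }
+
+With callbacks like these registered, [`RedisModule_SaveDataTypeToString()`](#RedisModule_SaveDataTypeToString) would produce a
+string containing exactly the bytes written by the `rdb_save` callback.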
+ + + +### `RedisModule_GetKeyNameFromDigest` + + const RedisModuleString *RedisModule_GetKeyNameFromDigest(RedisModuleDigest *dig); + +**Available since:** 7.0.0 + +Returns the name of the key currently being processed. + + + +### `RedisModule_GetDbIdFromDigest` + + int RedisModule_GetDbIdFromDigest(RedisModuleDigest *dig); + +**Available since:** 7.0.0 + +Returns the database id of the key currently being processed. + + + +## AOF API for modules data types + + + +### `RedisModule_EmitAOF` + + void RedisModule_EmitAOF(RedisModuleIO *io, + const char *cmdname, + const char *fmt, + ...); + +**Available since:** 4.0.0 + +Emits a command into the AOF during the AOF rewriting process. This function +is only called in the context of the `aof_rewrite` method of data types exported +by a module. The command works exactly like [`RedisModule_Call()`](#RedisModule_Call) in the way +the parameters are passed, but it does not return anything as the error +handling is performed by Redis itself. + + + +## IO context handling + + + +### `RedisModule_GetKeyNameFromIO` + + const RedisModuleString *RedisModule_GetKeyNameFromIO(RedisModuleIO *io); + +**Available since:** 5.0.5 + +Returns the name of the key currently being processed. +There is no guarantee that the key name is always available, so this may return NULL. + + + +### `RedisModule_GetKeyNameFromModuleKey` + + const RedisModuleString *RedisModule_GetKeyNameFromModuleKey(RedisModuleKey *key); + +**Available since:** 6.0.0 + +Returns a `RedisModuleString` with the name of the key from `RedisModuleKey`. + + + +### `RedisModule_GetDbIdFromModuleKey` + + int RedisModule_GetDbIdFromModuleKey(RedisModuleKey *key); + +**Available since:** 7.0.0 + +Returns a database id of the key from `RedisModuleKey`. + + + +### `RedisModule_GetDbIdFromIO` + + int RedisModule_GetDbIdFromIO(RedisModuleIO *io); + +**Available since:** 7.0.0 + +Returns the database id of the key currently being processed. +There is no guarantee that this info is always available, so this may return -1. + + + +## Logging + + + +### `RedisModule_Log` + + void RedisModule_Log(RedisModuleCtx *ctx, + const char *levelstr, + const char *fmt, + ...); + +**Available since:** 4.0.0 + +Produces a log message to the standard Redis log, the format accepts +printf-alike specifiers, while level is a string describing the log +level to use when emitting the log, and must be one of the following: + +* "debug" (`REDISMODULE_LOGLEVEL_DEBUG`) +* "verbose" (`REDISMODULE_LOGLEVEL_VERBOSE`) +* "notice" (`REDISMODULE_LOGLEVEL_NOTICE`) +* "warning" (`REDISMODULE_LOGLEVEL_WARNING`) + +If the specified log level is invalid, verbose is used by default. +There is a fixed limit to the length of the log line this function is able +to emit, this limit is not specified but is guaranteed to be more than +a few lines of text. + +The ctx argument may be NULL if cannot be provided in the context of the +caller for instance threads or callbacks, in which case a generic "module" +will be used instead of the module name. + + + +### `RedisModule_LogIOError` + + void RedisModule_LogIOError(RedisModuleIO *io, + const char *levelstr, + const char *fmt, + ...); + +**Available since:** 4.0.0 + +Log errors from RDB / AOF serialization callbacks. + +This function should be used when a callback is returning a critical +error to the caller since cannot load or save the data for some +critical reason. 
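+
+As a concrete illustration of the AOF API above, an `aof_rewrite` callback usually re-emits one
+command per logical element of the value, so that replaying the AOF rebuilds the key. The sketch
+below is hypothetical: the `MYTYPE.ADD` command name and the `MyTree` layout are invented for
+illustration, and the format string follows the same conventions as [`RedisModule_Call()`](#RedisModule_Call)
+("s" for a `RedisModuleString`, "l" for a long long):
+
+    void MyTreeType_AOFRewrite(RedisModuleIO *aof, RedisModuleString *key, void *value) {
+        MyTree *t = value; /* hypothetical layout: a length plus an array of scores */
+        for (uint64_t j = 0; j < t->len; j++)
+            RedisModule_EmitAOF(aof, "MYTYPE.ADD", "sl", key, (long long)t->scores[j]);
+    }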
+ + + +### `RedisModule__Assert` + + void RedisModule__Assert(const char *estr, const char *file, int line); + +**Available since:** 6.0.0 + +Redis-like assert function. + +The macro `RedisModule_Assert(expression)` is recommended, rather than +calling this function directly. + +A failed assertion will shut down the server and produce logging information +that looks identical to information generated by Redis itself. + + + +### `RedisModule_LatencyAddSample` + + void RedisModule_LatencyAddSample(const char *event, mstime_t latency); + +**Available since:** 6.0.0 + +Allows adding event to the latency monitor to be observed by the LATENCY +command. The call is skipped if the latency is smaller than the configured +latency-monitor-threshold. + + + +## Blocking clients from modules + +For a guide about blocking commands in modules, see +[https://redis.io/docs/latest/develop/reference/modules/modules-blocking-ops/](https://redis.io/docs/latest/develop/reference/modules/modules-blocking-ops/). + + + +### `RedisModule_RegisterAuthCallback` + + void RedisModule_RegisterAuthCallback(RedisModuleCtx *ctx, + RedisModuleAuthCallback cb); + +**Available since:** 7.2.0 + +This API registers a callback to execute in addition to normal password based authentication. +Multiple callbacks can be registered across different modules. When a Module is unloaded, all the +auth callbacks registered by it are unregistered. +The callbacks are attempted (in the order of most recently registered first) when the AUTH/HELLO +(with AUTH field provided) commands are called. +The callbacks will be called with a module context along with a username and a password, and are +expected to take one of the following actions: +(1) Authenticate - Use the `RedisModule_AuthenticateClient`* API and return `REDISMODULE_AUTH_HANDLED`. +This will immediately end the auth chain as successful and add the OK reply. +(2) Deny Authentication - Return `REDISMODULE_AUTH_HANDLED` without authenticating or blocking the +client. Optionally, `err` can be set to a custom error message and `err` will be automatically +freed by the server. +This will immediately end the auth chain as unsuccessful and add the ERR reply. +(3) Block a client on authentication - Use the [`RedisModule_BlockClientOnAuth`](#RedisModule_BlockClientOnAuth) API and return +`REDISMODULE_AUTH_HANDLED`. Here, the client will be blocked until the [`RedisModule_UnblockClient`](#RedisModule_UnblockClient) API is used +which will trigger the auth reply callback (provided through the [`RedisModule_BlockClientOnAuth`](#RedisModule_BlockClientOnAuth)). +In this reply callback, the Module should authenticate, deny or skip handling authentication. +(4) Skip handling Authentication - Return `REDISMODULE_AUTH_NOT_HANDLED` without blocking the +client. This will allow the engine to attempt the next module auth callback. +If none of the callbacks authenticate or deny auth, then password based auth is attempted and +will authenticate or add failure logs and reply to the clients accordingly. + +Note: If a client is disconnected while it was in the middle of blocking module auth, that +occurrence of the AUTH or HELLO command will not be tracked in the INFO command stats. 
+
+The following is an example of how non-blocking module based authentication can be used:
+
+    int auth_cb(RedisModuleCtx *ctx, RedisModuleString *username, RedisModuleString *password, RedisModuleString **err) {
+        const char *user = RedisModule_StringPtrLen(username, NULL);
+        const char *pwd = RedisModule_StringPtrLen(password, NULL);
+        if (!strcmp(user,"foo") && !strcmp(pwd,"valid_password")) {
+            RedisModule_AuthenticateClientWithACLUser(ctx, "foo", 3, NULL, NULL, NULL);
+            return REDISMODULE_AUTH_HANDLED;
+        } else if (!strcmp(user,"foo") && !strcmp(pwd,"wrong_password")) {
+            RedisModuleString *log = RedisModule_CreateString(ctx, "Module Auth", 11);
+            RedisModule_ACLAddLogEntryByUserName(ctx, username, log, REDISMODULE_ACL_LOG_AUTH);
+            RedisModule_FreeString(ctx, log);
+            const char *err_msg = "Auth denied by Misc Module.";
+            *err = RedisModule_CreateString(ctx, err_msg, strlen(err_msg));
+            return REDISMODULE_AUTH_HANDLED;
+        }
+        return REDISMODULE_AUTH_NOT_HANDLED;
+    }
+
+    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
+        if (RedisModule_Init(ctx,"authmodule",1,REDISMODULE_APIVER_1) == REDISMODULE_ERR)
+            return REDISMODULE_ERR;
+        RedisModule_RegisterAuthCallback(ctx, auth_cb);
+        return REDISMODULE_OK;
+    }
+
+
+
+### `RedisModule_BlockClient`
+
+    RedisModuleBlockedClient *RedisModule_BlockClient(RedisModuleCtx *ctx,
+                                                      RedisModuleCmdFunc reply_callback,
+                                                      RedisModuleCmdFunc timeout_callback,
+                                                      void (*free_privdata)(RedisModuleCtx*, void*),
+                                                      long long timeout_ms);
+
+**Available since:** 4.0.0
+
+Block a client in the context of a blocking command, returning a handle
+which will be used, later, in order to unblock the client with a call to
+[`RedisModule_UnblockClient()`](#RedisModule_UnblockClient). The arguments specify callback functions
+and a timeout after which the client is unblocked.
+
+The callbacks are called in the following contexts:
+
+    reply_callback: called after a successful RedisModule_UnblockClient()
+    call in order to reply to the client and unblock it.
+
+    timeout_callback: called when the timeout is reached or if `CLIENT UNBLOCK`
+    is invoked, in order to send an error to the client.
+
+    free_privdata: called in order to free the private data that is passed
+    by RedisModule_UnblockClient() call.
+
+Note: [`RedisModule_UnblockClient`](#RedisModule_UnblockClient) should be called for every blocked client,
+      even if the client was killed, timed out or disconnected. Failing to do so
+      will result in memory leaks.
+
+There are some cases where [`RedisModule_BlockClient()`](#RedisModule_BlockClient) cannot be used:
+
+1. If the client is a Lua script.
+2. If the client is executing a MULTI block.
+
+In these cases, a call to [`RedisModule_BlockClient()`](#RedisModule_BlockClient) will **not** block the
+client, but instead produce a specific error reply.
+
+A client blocked by a module that registered a `timeout_callback` function can also be
+unblocked using the `CLIENT UNBLOCK` command, which will trigger the timeout callback.
+If a callback function is not registered, then the blocked client will be
+treated as if it is not in a blocked state and `CLIENT UNBLOCK` will return
+a zero value.
+
+Measuring background time: By default the time spent in the blocked command
+is not accounted for in the total command duration. To include such time you should
+use [`RedisModule_BlockedClientMeasureTimeStart()`](#RedisModule_BlockedClientMeasureTimeStart) and [`RedisModule_BlockedClientMeasureTimeEnd()`](#RedisModule_BlockedClientMeasureTimeEnd) one
+or more times within the blocking command's background work.
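+
+Putting the pieces above together, a module usually blocks the client, hands the returned handle
+to a worker thread, and unblocks from that thread. The following is a minimal, hypothetical
+sketch, not the canonical implementation: the command and callback names are invented, and it
+assumes `<pthread.h>` is available:
+
+    /* Reply callback: runs after RedisModule_UnblockClient() to answer the client. */
+    int SlowCmd_Reply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
+        REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
+        long long *res = RedisModule_GetBlockedClientPrivateData(ctx);
+        return RedisModule_ReplyWithLongLong(ctx, *res);
+    }
+
+    /* Timeout callback: runs if the timeout expires or CLIENT UNBLOCK is used. */
+    int SlowCmd_Timeout(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
+        REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
+        return RedisModule_ReplyWithNull(ctx);
+    }
+
+    /* Frees the privdata passed to RedisModule_UnblockClient(). */
+    void SlowCmd_FreePrivdata(RedisModuleCtx *ctx, void *privdata) {
+        REDISMODULE_NOT_USED(ctx);
+        RedisModule_Free(privdata);
+    }
+
+    void *SlowCmd_ThreadMain(void *arg) {
+        RedisModuleBlockedClient *bc = arg;
+        long long *res = RedisModule_Alloc(sizeof(*res));
+        *res = 42; /* pretend this took a long time to compute */
+        RedisModule_UnblockClient(bc, res);
+        return NULL;
+    }
+
+    int SlowCmd_Command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
+        REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
+        RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx,
+            SlowCmd_Reply, SlowCmd_Timeout, SlowCmd_FreePrivdata, 1000);
+        pthread_t tid;
+        if (pthread_create(&tid, NULL, SlowCmd_ThreadMain, bc) != 0) {
+            RedisModule_AbortBlock(bc);
+            return RedisModule_ReplyWithError(ctx, "ERR can't start a worker thread");
+        }
+        pthread_detach(tid);
+        return REDISMODULE_OK;
+    }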
+
+
+
+### `RedisModule_BlockClientOnAuth`
+
+    RedisModuleBlockedClient *RedisModule_BlockClientOnAuth(RedisModuleCtx *ctx,
+                                                            RedisModuleAuthCallback reply_callback,
+                                                            void (*free_privdata)(RedisModuleCtx*, void*));
+
+**Available since:** 7.2.0
+
+Block the current client for module authentication in the background. If module auth is not in
+progress on the client, the API returns NULL. Otherwise, the client is blocked and a
+`RedisModuleBlockedClient` is returned, similar to the [`RedisModule_BlockClient`](#RedisModule_BlockClient) API.
+Note: Only use this API from the context of a module auth callback.
+
+
+
+### `RedisModule_BlockClientGetPrivateData`
+
+    void *RedisModule_BlockClientGetPrivateData(RedisModuleBlockedClient *blocked_client);
+
+**Available since:** 7.2.0
+
+Get the private data that was previously set on a blocked client.
+
+
+
+### `RedisModule_BlockClientSetPrivateData`
+
+    void RedisModule_BlockClientSetPrivateData(RedisModuleBlockedClient *blocked_client,
+                                               void *private_data);
+
+**Available since:** 7.2.0
+
+Set private data on a blocked client.
+
+
+
+### `RedisModule_BlockClientOnKeys`
+
+    RedisModuleBlockedClient *RedisModule_BlockClientOnKeys(RedisModuleCtx *ctx,
+                                                            RedisModuleCmdFunc reply_callback,
+                                                            RedisModuleCmdFunc timeout_callback,
+                                                            void (*free_privdata)(RedisModuleCtx*, void*),
+                                                            long long timeout_ms,
+                                                            RedisModuleString **keys,
+                                                            int numkeys,
+                                                            void *privdata);
+
+**Available since:** 6.0.0
+
+This call is similar to [`RedisModule_BlockClient()`](#RedisModule_BlockClient), however in this case we
+don't just block the client, but also ask Redis to unblock it automatically
+once certain keys become "ready", that is, contain more data.
+
+Basically this is similar to what a typical Redis command usually does,
+like BLPOP or BZPOPMAX: the client blocks if it cannot be served ASAP,
+and later when the key receives new data (a list push for instance), the
+client is unblocked and served.
+
+However, in the case of this module API, when is the client unblocked?
+
+1. If you block on a key of a type that has blocking operations associated,
+   like a list, a sorted set, a stream, and so forth, the client may be
+   unblocked once the relevant key is targeted by an operation that normally
+   unblocks the native blocking operations for that type. So if we block
+   on a list key, an RPUSH command may unblock our client and so forth.
+2. If you are implementing your own native data type, or if you want to add new
+   unblocking conditions in addition to "1", you can call the modules API
+   [`RedisModule_SignalKeyAsReady()`](#RedisModule_SignalKeyAsReady).
+
+In any case, we can't be sure the client should be unblocked just because the
+key is signaled as ready: for instance a subsequent operation may change the
+key, or a client queued before this one may be served first, modifying the key
+as well and making it empty again. So when a client is blocked with
+[`RedisModule_BlockClientOnKeys()`](#RedisModule_BlockClientOnKeys) the reply callback is not called after
+[`RedisModule_UnblockClient()`](#RedisModule_UnblockClient) is called, but every time a key is signaled as ready:
+if the reply callback can serve the client, it returns `REDISMODULE_OK`
+and the client is unblocked, otherwise it will return `REDISMODULE_ERR`
+and we'll try again later.
+
+The reply callback can access the key that was signaled as ready by
+calling the API [`RedisModule_GetBlockedClientReadyKey()`](#RedisModule_GetBlockedClientReadyKey), which returns
+just the string name of the key as a `RedisModuleString` object.
+
+Thanks to this system we can set up complex blocking scenarios, like
+unblocking a client only if a list contains at least 5 items, or other
+fancier logic.
+
+Note that another difference with [`RedisModule_BlockClient()`](#RedisModule_BlockClient) is that here
+we pass the private data directly when blocking the client: it will
+be accessible later in the reply callback. Normally when blocking with
+[`RedisModule_BlockClient()`](#RedisModule_BlockClient) the private data to reply to the client is
+passed when calling [`RedisModule_UnblockClient()`](#RedisModule_UnblockClient), but here the unblocking
+is performed by Redis itself, so we need to have some private data
+beforehand. The private data is used to store any information about the
+specific unblocking operation that you are implementing. Such information
+will be freed using the `free_privdata` callback provided by the user.
+
+However, the reply callback will be able to access the argument vector of
+the command, so the private data is often not needed.
+
+Note: Under normal circumstances [`RedisModule_UnblockClient`](#RedisModule_UnblockClient) should not be
+      called for clients that are blocked on keys (either the key will
+      become ready or a timeout will occur). If for some reason you do want
+      to call RedisModule_UnblockClient, it is possible: the client will be
+      handled as if it had timed out (you must implement the timeout
+      callback in that case).
+
+
+
+### `RedisModule_BlockClientOnKeysWithFlags`
+
+    RedisModuleBlockedClient *RedisModule_BlockClientOnKeysWithFlags(RedisModuleCtx *ctx,
+                                                                     RedisModuleCmdFunc reply_callback,
+                                                                     RedisModuleCmdFunc timeout_callback,
+                                                                     void (*free_privdata)(RedisModuleCtx*, void*),
+                                                                     long long timeout_ms,
+                                                                     RedisModuleString **keys,
+                                                                     int numkeys,
+                                                                     void *privdata,
+                                                                     int flags);
+
+**Available since:** 7.2.0
+
+Same as [`RedisModule_BlockClientOnKeys`](#RedisModule_BlockClientOnKeys), but also accepts `REDISMODULE_BLOCK_`* flags.
+The flags can be `REDISMODULE_BLOCK_UNBLOCK_DEFAULT`, which means the default behavior (the same
+as calling [`RedisModule_BlockClientOnKeys`](#RedisModule_BlockClientOnKeys)), or a bit mask of the following:
+
+- `REDISMODULE_BLOCK_UNBLOCK_DELETED`: The clients should be awakened if any of the `keys` are deleted.
+  Mostly useful for commands that require the key to exist (like XREADGROUP).
+
+
+
+### `RedisModule_SignalKeyAsReady`
+
+    void RedisModule_SignalKeyAsReady(RedisModuleCtx *ctx, RedisModuleString *key);
+
+**Available since:** 6.0.0
+
+This function is used in order to potentially unblock a client blocked
+on keys with [`RedisModule_BlockClientOnKeys()`](#RedisModule_BlockClientOnKeys). When this function is called,
+all the clients blocked for this key will get their `reply_callback` called.
+
+
+
+### `RedisModule_UnblockClient`
+
+    int RedisModule_UnblockClient(RedisModuleBlockedClient *bc, void *privdata);
+
+**Available since:** 4.0.0
+
+Unblock a client blocked with [`RedisModule_BlockClient()`](#RedisModule_BlockClient). This will trigger
+the reply callbacks to be called in order to reply to the client.
+The 'privdata' argument will be accessible by the reply callback, so
+the caller of this function can pass any value that is needed in order to
+actually reply to the client.
+
+A common usage for 'privdata' is a thread that computes something that
+needs to be passed to the client, including but not limited to a slow-to-compute
+reply or a reply obtained via networking.
+
+Note 1: this function can be called from threads spawned by the module.
+
+Note 2: when we unblock a client that is blocked for keys using the API
+[`RedisModule_BlockClientOnKeys()`](#RedisModule_BlockClientOnKeys), the privdata argument here is not used.
+Unblocking a client that was blocked for keys using this API will still +require the client to get some reply, so the function will use the +"timeout" handler in order to do so (The privdata provided in +[`RedisModule_BlockClientOnKeys()`](#RedisModule_BlockClientOnKeys) is accessible from the timeout +callback via [`RedisModule_GetBlockedClientPrivateData`](#RedisModule_GetBlockedClientPrivateData)). + + + +### `RedisModule_AbortBlock` + + int RedisModule_AbortBlock(RedisModuleBlockedClient *bc); + +**Available since:** 4.0.0 + +Abort a blocked client blocking operation: the client will be unblocked +without firing any callback. + + + +### `RedisModule_SetDisconnectCallback` + + void RedisModule_SetDisconnectCallback(RedisModuleBlockedClient *bc, + RedisModuleDisconnectFunc callback); + +**Available since:** 5.0.0 + +Set a callback that will be called if a blocked client disconnects +before the module has a chance to call [`RedisModule_UnblockClient()`](#RedisModule_UnblockClient) + +Usually what you want to do there, is to cleanup your module state +so that you can call [`RedisModule_UnblockClient()`](#RedisModule_UnblockClient) safely, otherwise +the client will remain blocked forever if the timeout is large. + +Notes: + +1. It is not safe to call Reply* family functions here, it is also + useless since the client is gone. + +2. This callback is not called if the client disconnects because of + a timeout. In such a case, the client is unblocked automatically + and the timeout callback is called. + + + +### `RedisModule_IsBlockedReplyRequest` + + int RedisModule_IsBlockedReplyRequest(RedisModuleCtx *ctx); + +**Available since:** 4.0.0 + +Return non-zero if a module command was called in order to fill the +reply for a blocked client. + + + +### `RedisModule_IsBlockedTimeoutRequest` + + int RedisModule_IsBlockedTimeoutRequest(RedisModuleCtx *ctx); + +**Available since:** 4.0.0 + +Return non-zero if a module command was called in order to fill the +reply for a blocked client that timed out. + + + +### `RedisModule_GetBlockedClientPrivateData` + + void *RedisModule_GetBlockedClientPrivateData(RedisModuleCtx *ctx); + +**Available since:** 4.0.0 + +Get the private data set by [`RedisModule_UnblockClient()`](#RedisModule_UnblockClient) + + + +### `RedisModule_GetBlockedClientReadyKey` + + RedisModuleString *RedisModule_GetBlockedClientReadyKey(RedisModuleCtx *ctx); + +**Available since:** 6.0.0 + +Get the key that is ready when the reply callback is called in the context +of a client blocked by [`RedisModule_BlockClientOnKeys()`](#RedisModule_BlockClientOnKeys). + + + +### `RedisModule_GetBlockedClientHandle` + + RedisModuleBlockedClient *RedisModule_GetBlockedClientHandle(RedisModuleCtx *ctx); + +**Available since:** 5.0.0 + +Get the blocked client associated with a given context. +This is useful in the reply and timeout callbacks of blocked clients, +before sometimes the module has the blocked client handle references +around, and wants to cleanup it. + + + +### `RedisModule_BlockedClientDisconnected` + + int RedisModule_BlockedClientDisconnected(RedisModuleCtx *ctx); + +**Available since:** 5.0.0 + +Return true if when the free callback of a blocked client is called, +the reason for the client to be unblocked is that it disconnected +while it was blocked. 
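+
+A short, hypothetical sketch of how the helpers above are commonly combined: register a
+disconnect callback when blocking, and check [`RedisModule_BlockedClientDisconnected()`](#RedisModule_BlockedClientDisconnected) in the
+`free_privdata` callback. All names below are invented for illustration, and `MyCmd_Reply`
+and `MyCmd_Timeout` stand for reply/timeout callbacks like those shown earlier:
+
+    /* Called if the client disconnects before RedisModule_UnblockClient(). */
+    void MyCmd_Disconnected(RedisModuleCtx *ctx, RedisModuleBlockedClient *bc) {
+        RedisModule_Log(ctx, "warning", "Blocked client %p disconnected!", (void *)bc);
+        /* Clean up module state here so the worker thread can still call
+         * RedisModule_UnblockClient() safely later on. */
+    }
+
+    /* free_privdata callback passed to RedisModule_BlockClient(). */
+    void MyCmd_FreePrivdata(RedisModuleCtx *ctx, void *privdata) {
+        if (RedisModule_BlockedClientDisconnected(ctx)) {
+            /* The client went away while blocked: there is nobody to reply to. */
+        }
+        RedisModule_Free(privdata);
+    }
+
+    /* When blocking: */
+    RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx, MyCmd_Reply,
+                                       MyCmd_Timeout, MyCmd_FreePrivdata, 0);
+    RedisModule_SetDisconnectCallback(bc, MyCmd_Disconnected);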
+ + + +## Thread Safe Contexts + + + +### `RedisModule_GetThreadSafeContext` + + RedisModuleCtx *RedisModule_GetThreadSafeContext(RedisModuleBlockedClient *bc); + +**Available since:** 4.0.0 + +Return a context which can be used inside threads to make Redis context +calls with certain modules APIs. If 'bc' is not NULL then the module will +be bound to a blocked client, and it will be possible to use the +`RedisModule_Reply*` family of functions to accumulate a reply for when the +client will be unblocked. Otherwise the thread safe context will be +detached by a specific client. + +To call non-reply APIs, the thread safe context must be prepared with: + + RedisModule_ThreadSafeContextLock(ctx); + ... make your call here ... + RedisModule_ThreadSafeContextUnlock(ctx); + +This is not needed when using `RedisModule_Reply*` functions, assuming +that a blocked client was used when the context was created, otherwise +no `RedisModule_Reply`* call should be made at all. + +NOTE: If you're creating a detached thread safe context (bc is NULL), +consider using [`RedisModule_GetDetachedThreadSafeContext`](#RedisModule_GetDetachedThreadSafeContext) which will also retain +the module ID and thus be more useful for logging. + + + +### `RedisModule_GetDetachedThreadSafeContext` + + RedisModuleCtx *RedisModule_GetDetachedThreadSafeContext(RedisModuleCtx *ctx); + +**Available since:** 6.0.9 + +Return a detached thread safe context that is not associated with any +specific blocked client, but is associated with the module's context. + +This is useful for modules that wish to hold a global context over +a long term, for purposes such as logging. + + + +### `RedisModule_FreeThreadSafeContext` + + void RedisModule_FreeThreadSafeContext(RedisModuleCtx *ctx); + +**Available since:** 4.0.0 + +Release a thread safe context. + + + +### `RedisModule_ThreadSafeContextLock` + + void RedisModule_ThreadSafeContextLock(RedisModuleCtx *ctx); + +**Available since:** 4.0.0 + +Acquire the server lock before executing a thread safe API call. +This is not needed for `RedisModule_Reply*` calls when there is +a blocked client connected to the thread safe context. + + + +### `RedisModule_ThreadSafeContextTryLock` + + int RedisModule_ThreadSafeContextTryLock(RedisModuleCtx *ctx); + +**Available since:** 6.0.8 + +Similar to [`RedisModule_ThreadSafeContextLock`](#RedisModule_ThreadSafeContextLock) but this function +would not block if the server lock is already acquired. + +If successful (lock acquired) `REDISMODULE_OK` is returned, +otherwise `REDISMODULE_ERR` is returned and errno is set +accordingly. + + + +### `RedisModule_ThreadSafeContextUnlock` + + void RedisModule_ThreadSafeContextUnlock(RedisModuleCtx *ctx); + +**Available since:** 4.0.0 + +Release the server lock after a thread safe API call was executed. + + + +## Module Keyspace Notifications API + + + +### `RedisModule_SubscribeToKeyspaceEvents` + + int RedisModule_SubscribeToKeyspaceEvents(RedisModuleCtx *ctx, + int types, + RedisModuleNotificationFunc callback); + +**Available since:** 4.0.9 + +Subscribe to keyspace notifications. This is a low-level version of the +keyspace-notifications API. A module can register callbacks to be notified +when keyspace events occur. + +Notification events are filtered by their type (string events, set events, +etc), and the subscriber callback receives only events that match a specific +mask of event types. 
+
+When subscribing to notifications with [`RedisModule_SubscribeToKeyspaceEvents`](#RedisModule_SubscribeToKeyspaceEvents)
+the module must provide an event type-mask, denoting the events the subscriber
+is interested in. This can be an ORed mask of any of the following flags:
+
+ - `REDISMODULE_NOTIFY_GENERIC`: Generic commands like DEL, EXPIRE, RENAME
+ - `REDISMODULE_NOTIFY_STRING`: String events
+ - `REDISMODULE_NOTIFY_LIST`: List events
+ - `REDISMODULE_NOTIFY_SET`: Set events
+ - `REDISMODULE_NOTIFY_HASH`: Hash events
+ - `REDISMODULE_NOTIFY_ZSET`: Sorted Set events
+ - `REDISMODULE_NOTIFY_EXPIRED`: Expiration events
+ - `REDISMODULE_NOTIFY_EVICTED`: Eviction events
+ - `REDISMODULE_NOTIFY_STREAM`: Stream events
+ - `REDISMODULE_NOTIFY_MODULE`: Module types events
+ - `REDISMODULE_NOTIFY_KEYMISS`: Key-miss events
+                                 Note that the key-miss event is the only type
+                                 of event fired from within a read command.
+                                 Performing RedisModule_Call with a write command from within
+                                 this notification is wrong and discouraged. It will
+                                 cause the read command that triggered the event to be
+                                 replicated to the AOF/replica.
+ - `REDISMODULE_NOTIFY_ALL`: All events (excluding `REDISMODULE_NOTIFY_KEYMISS`)
+ - `REDISMODULE_NOTIFY_LOADED`: A special notification available only for modules,
+                                indicates that the key was loaded from persistence.
+                                Note that when this event fires, the given key
+                                cannot be retained; use RedisModule_CreateStringFromString
+                                instead.
+
+We do not distinguish between key events and keyspace events, and it is up
+to the module to filter the actions taken based on the key.
+
+The subscriber signature is:
+
+    int (*RedisModuleNotificationFunc) (RedisModuleCtx *ctx, int type,
+                                        const char *event,
+                                        RedisModuleString *key);
+
+`type` is the event type bit that must match the mask given at registration
+time. The event string is the actual command being executed, and key is the
+relevant Redis key.
+
+The notification callback is executed with a Redis context that cannot be
+used to send anything to the client, and has the db number where the event
+occurred as its selected db number.
+
+Note that it is not necessary to enable notifications in redis.conf for
+module notifications to work.
+
+Warning: the notification callbacks are performed in a synchronous manner,
+so notification callbacks must be fast, or they will slow Redis down.
+If you need to perform long actions, use threads to offload them.
+
+Moreover, the fact that the notification is executed synchronously means
+that the notification code will be executed in the middle of Redis logic
+(command logic, eviction, expiry). Changing the key space while the logic
+runs is dangerous and discouraged. In order to react to key space events with
+write actions, please refer to [`RedisModule_AddPostNotificationJob`](#RedisModule_AddPostNotificationJob).
+
+See [https://redis.io/docs/latest/develop/use/keyspace-notifications/](https://redis.io/docs/latest/develop/use/keyspace-notifications/) for more information.
+
+
+
+### `RedisModule_AddPostNotificationJob`
+
+    int RedisModule_AddPostNotificationJob(RedisModuleCtx *ctx,
+                                           RedisModulePostNotificationJobFunc callback,
+                                           void *privdata,
+                                           void (*free_privdata)(void*));
+
+**Available since:** 7.2.0
+
+When running inside a key space notification callback, it is dangerous and highly discouraged to perform any write
+operation (See [`RedisModule_SubscribeToKeyspaceEvents`](#RedisModule_SubscribeToKeyspaceEvents)). 
In order to still perform write actions in this scenario,
+Redis provides the [`RedisModule_AddPostNotificationJob`](#RedisModule_AddPostNotificationJob) API. The API allows registering a job callback that Redis will call
+when the following conditions are guaranteed to be fulfilled:
+1. It is safe to perform any write operation.
+2. The job will be called atomically alongside the key space notification.
+
+Note that one job might trigger key space notifications that will trigger more jobs.
+This raises the concern of entering an infinite loop. We consider infinite loops
+a logical bug that needs to be fixed in the module; an attempt to protect against
+them by halting the execution could violate the correctness of the feature,
+so Redis makes no attempt to protect the module from infinite loops.
+
+'`free_privdata`' can be NULL and in such a case will not be used.
+
+Returns `REDISMODULE_OK` on success and `REDISMODULE_ERR` if it was called while loading data from disk (AOF or RDB), or
+if the instance is a read-only replica.
+
+
+
+### `RedisModule_GetNotifyKeyspaceEvents`
+
+    int RedisModule_GetNotifyKeyspaceEvents(void);
+
+**Available since:** 6.0.0
+
+Get the configured bitmap of notify-keyspace-events (could be used
+for additional filtering in `RedisModuleNotificationFunc`).
+
+
+
+### `RedisModule_NotifyKeyspaceEvent`
+
+    int RedisModule_NotifyKeyspaceEvent(RedisModuleCtx *ctx,
+                                        int type,
+                                        const char *event,
+                                        RedisModuleString *key);
+
+**Available since:** 6.0.0
+
+Expose notifyKeyspaceEvent to modules.
+
+
+
+## Modules Cluster API
+
+
+
+### `RedisModule_RegisterClusterMessageReceiver`
+
+    void RedisModule_RegisterClusterMessageReceiver(RedisModuleCtx *ctx,
+                                                    uint8_t type,
+                                                    RedisModuleClusterMessageReceiver callback);
+
+**Available since:** 5.0.0
+
+Register a callback receiver for cluster messages of type 'type'. If there
+was already a registered callback, this will replace the callback function
+with the one provided. Otherwise, if the callback is set to NULL and there
+is already a callback for this message type, the callback is unregistered
+(so this API call is also used in order to delete the receiver).
+
+
+
+### `RedisModule_SendClusterMessage`
+
+    int RedisModule_SendClusterMessage(RedisModuleCtx *ctx,
+                                       const char *target_id,
+                                       uint8_t type,
+                                       const char *msg,
+                                       uint32_t len);
+
+**Available since:** 5.0.0
+
+Send a message to all the nodes in the cluster if `target` is NULL, otherwise
+to the specified target, which is a node ID of `REDISMODULE_NODE_ID_LEN` bytes, as
+returned by the receiver callback or by the nodes iteration functions.
+
+The function returns `REDISMODULE_OK` if the message was successfully sent,
+otherwise if the node is not connected or such node ID does not map to any
+known cluster node, `REDISMODULE_ERR` is returned.
+
+
+
+### `RedisModule_GetClusterNodesList`
+
+    char **RedisModule_GetClusterNodesList(RedisModuleCtx *ctx, size_t *numnodes);
+
+**Available since:** 5.0.0
+
+Return an array of string pointers, where each string pointer points to a cluster
+node ID of exactly `REDISMODULE_NODE_ID_LEN` bytes (without any null terminator).
+The number of returned node IDs is stored into `*numnodes`.
+However, if this function is called from a module not running on a Redis
+instance with Redis Cluster enabled, NULL is returned instead.
+
+The IDs returned can be used with [`RedisModule_GetClusterNodeInfo()`](#RedisModule_GetClusterNodeInfo) in order
+to get more information about a single node.
+ +The array returned by this function must be freed using the function +[`RedisModule_FreeClusterNodesList()`](#RedisModule_FreeClusterNodesList). + +Example: + + size_t count, j; + char **ids = RedisModule_GetClusterNodesList(ctx,&count); + for (j = 0; j < count; j++) { + RedisModule_Log(ctx,"notice","Node %.*s", + REDISMODULE_NODE_ID_LEN,ids[j]); + } + RedisModule_FreeClusterNodesList(ids); + + + +### `RedisModule_FreeClusterNodesList` + + void RedisModule_FreeClusterNodesList(char **ids); + +**Available since:** 5.0.0 + +Free the node list obtained with [`RedisModule_GetClusterNodesList`](#RedisModule_GetClusterNodesList). + + + +### `RedisModule_GetMyClusterID` + + const char *RedisModule_GetMyClusterID(void); + +**Available since:** 5.0.0 + +Return this node ID (`REDISMODULE_CLUSTER_ID_LEN` bytes) or NULL if the cluster +is disabled. + + + +### `RedisModule_GetClusterSize` + + size_t RedisModule_GetClusterSize(void); + +**Available since:** 5.0.0 + +Return the number of nodes in the cluster, regardless of their state +(handshake, noaddress, ...) so that the number of active nodes may actually +be smaller, but not greater than this number. If the instance is not in +cluster mode, zero is returned. + + + +### `RedisModule_GetClusterNodeInfo` + + int RedisModule_GetClusterNodeInfo(RedisModuleCtx *ctx, + const char *id, + char *ip, + char *master_id, + int *port, + int *flags); + +**Available since:** 5.0.0 + +Populate the specified info for the node having as ID the specified 'id', +then returns `REDISMODULE_OK`. Otherwise if the format of node ID is invalid +or the node ID does not exist from the POV of this local node, `REDISMODULE_ERR` +is returned. + +The arguments `ip`, `master_id`, `port` and `flags` can be NULL in case we don't +need to populate back certain info. If an `ip` and `master_id` (only populated +if the instance is a slave) are specified, they point to buffers holding +at least `REDISMODULE_NODE_ID_LEN` bytes. The strings written back as `ip` +and `master_id` are not null terminated. + +The list of flags reported is the following: + +* `REDISMODULE_NODE_MYSELF`: This node +* `REDISMODULE_NODE_MASTER`: The node is a master +* `REDISMODULE_NODE_SLAVE`: The node is a replica +* `REDISMODULE_NODE_PFAIL`: We see the node as failing +* `REDISMODULE_NODE_FAIL`: The cluster agrees the node is failing +* `REDISMODULE_NODE_NOFAILOVER`: The slave is configured to never failover + + + +### `RedisModule_SetClusterFlags` + + void RedisModule_SetClusterFlags(RedisModuleCtx *ctx, uint64_t flags); + +**Available since:** 5.0.0 + +Set Redis Cluster flags in order to change the normal behavior of +Redis Cluster, especially with the goal of disabling certain functions. +This is useful for modules that use the Cluster API in order to create +a different distributed system, but still want to use the Redis Cluster +message bus. Flags that can be set: + +* `CLUSTER_MODULE_FLAG_NO_FAILOVER` +* `CLUSTER_MODULE_FLAG_NO_REDIRECTION` + +With the following effects: + +* `NO_FAILOVER`: prevent Redis Cluster slaves from failing over a dead master. + Also disables the replica migration feature. + +* `NO_REDIRECTION`: Every node will accept any key, without trying to perform + partitioning according to the Redis Cluster algorithm. + Slots information will still be propagated across the + cluster, but without effect. 
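+
+For the message-passing part of the Cluster API described above, the usual pattern is to register
+a receiver for a module-defined message type at load time and then send messages with
+[`RedisModule_SendClusterMessage()`](#RedisModule_SendClusterMessage). The following is a minimal, hypothetical sketch; the
+message type value and the function name are invented for illustration:
+
+    #define MYMODULE_MSG_PING 1 /* module-defined message type */
+
+    void OnClusterMessage(RedisModuleCtx *ctx, const char *sender_id, uint8_t type,
+                          const unsigned char *payload, uint32_t len) {
+        RedisModule_Log(ctx, "notice", "Got message type %u (%u bytes) from %.*s",
+                        (unsigned)type, (unsigned)len,
+                        REDISMODULE_NODE_ID_LEN, sender_id);
+        (void)payload;
+    }
+
+    /* In RedisModule_OnLoad(): */
+    RedisModule_RegisterClusterMessageReceiver(ctx, MYMODULE_MSG_PING, OnClusterMessage);
+
+    /* Later, broadcast to every node (target NULL), or address one node by its ID: */
+    RedisModule_SendClusterMessage(ctx, NULL, MYMODULE_MSG_PING, "ping", 4);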
+ + + +### `RedisModule_ClusterKeySlot` + + unsigned int RedisModule_ClusterKeySlot(RedisModuleString *key); + +**Available since:** 7.4.0 + +Returns the cluster slot of a key, similar to the `CLUSTER KEYSLOT` command. +This function works even if cluster mode is not enabled. + + + +### `RedisModule_ClusterCanonicalKeyNameInSlot` + + const char *RedisModule_ClusterCanonicalKeyNameInSlot(unsigned int slot); + +**Available since:** 7.4.0 + +Returns a short string that can be used as a key or as a hash tag in a key, +such that the key maps to the given cluster slot. Returns NULL if slot is not +a valid slot. + + + +## Modules Timers API + +Module timers are a high precision "green timers" abstraction where +every module can register even millions of timers without problems, even if +the actual event loop will just have a single timer that is used to awake the +module timers subsystem in order to process the next event. + +All the timers are stored into a radix tree, ordered by expire time, when +the main Redis event loop timer callback is called, we try to process all +the timers already expired one after the other. Then we re-enter the event +loop registering a timer that will expire when the next to process module +timer will expire. + +Every time the list of active timers drops to zero, we unregister the +main event loop timer, so that there is no overhead when such feature is +not used. + + + +### `RedisModule_CreateTimer` + + RedisModuleTimerID RedisModule_CreateTimer(RedisModuleCtx *ctx, + mstime_t period, + RedisModuleTimerProc callback, + void *data); + +**Available since:** 5.0.0 + +Create a new timer that will fire after `period` milliseconds, and will call +the specified function using `data` as argument. The returned timer ID can be +used to get information from the timer or to stop it before it fires. +Note that for the common use case of a repeating timer (Re-registration +of the timer inside the `RedisModuleTimerProc` callback) it matters when +this API is called: +If it is called at the beginning of 'callback' it means +the event will triggered every 'period'. +If it is called at the end of 'callback' it means +there will 'period' milliseconds gaps between events. +(If the time it takes to execute 'callback' is negligible the two +statements above mean the same) + + + +### `RedisModule_StopTimer` + + int RedisModule_StopTimer(RedisModuleCtx *ctx, + RedisModuleTimerID id, + void **data); + +**Available since:** 5.0.0 + +Stop a timer, returns `REDISMODULE_OK` if the timer was found, belonged to the +calling module, and was stopped, otherwise `REDISMODULE_ERR` is returned. +If not NULL, the data pointer is set to the value of the data argument when +the timer was created. + + + +### `RedisModule_GetTimerInfo` + + int RedisModule_GetTimerInfo(RedisModuleCtx *ctx, + RedisModuleTimerID id, + uint64_t *remaining, + void **data); + +**Available since:** 5.0.0 + +Obtain information about a timer: its remaining time before firing +(in milliseconds), and the private data pointer associated with the timer. +If the timer specified does not exist or belongs to a different module +no information is returned and the function returns `REDISMODULE_ERR`, otherwise +`REDISMODULE_OK` is returned. The arguments remaining or data can be NULL if +the caller does not need certain information. 
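+
+Putting the timer API together, a repeating timer is implemented by re-registering the timer
+from inside its own callback, as discussed under [`RedisModule_CreateTimer`](#RedisModule_CreateTimer). A small,
+hypothetical sketch (the names are invented for illustration):
+
+    static RedisModuleTimerID heartbeat_id;
+
+    /* Re-registering at the top of the callback makes the timer fire
+     * roughly every 1000 ms, regardless of how long the callback takes. */
+    void Heartbeat(RedisModuleCtx *ctx, void *data) {
+        heartbeat_id = RedisModule_CreateTimer(ctx, 1000, Heartbeat, data);
+        RedisModule_Log(ctx, "notice", "heartbeat tick");
+    }
+
+    /* Somewhere with a valid context, e.g. in RedisModule_OnLoad(): */
+    heartbeat_id = RedisModule_CreateTimer(ctx, 1000, Heartbeat, NULL);
+
+    /* To cancel it later and recover the data pointer passed at creation: */
+    void *data;
+    if (RedisModule_StopTimer(ctx, heartbeat_id, &data) == REDISMODULE_OK) {
+        /* the timer had not fired yet and is now stopped */
+    }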
+ + + +## Modules EventLoop API + + + +### `RedisModule_EventLoopAdd` + + int RedisModule_EventLoopAdd(int fd, + int mask, + RedisModuleEventLoopFunc func, + void *user_data); + +**Available since:** 7.0.0 + +Add a pipe / socket event to the event loop. + +* `mask` must be one of the following values: + + * `REDISMODULE_EVENTLOOP_READABLE` + * `REDISMODULE_EVENTLOOP_WRITABLE` + * `REDISMODULE_EVENTLOOP_READABLE | REDISMODULE_EVENTLOOP_WRITABLE` + +On success `REDISMODULE_OK` is returned, otherwise +`REDISMODULE_ERR` is returned and errno is set to the following values: + +* ERANGE: `fd` is negative or higher than `maxclients` Redis config. +* EINVAL: `callback` is NULL or `mask` value is invalid. + +`errno` might take other values in case of an internal error. + +Example: + + void onReadable(int fd, void *user_data, int mask) { + char buf[32]; + int bytes = read(fd,buf,sizeof(buf)); + printf("Read %d bytes \n", bytes); + } + RedisModule_EventLoopAdd(fd, REDISMODULE_EVENTLOOP_READABLE, onReadable, NULL); + + + +### `RedisModule_EventLoopDel` + + int RedisModule_EventLoopDel(int fd, int mask); + +**Available since:** 7.0.0 + +Delete a pipe / socket event from the event loop. + +* `mask` must be one of the following values: + + * `REDISMODULE_EVENTLOOP_READABLE` + * `REDISMODULE_EVENTLOOP_WRITABLE` + * `REDISMODULE_EVENTLOOP_READABLE | REDISMODULE_EVENTLOOP_WRITABLE` + +On success `REDISMODULE_OK` is returned, otherwise +`REDISMODULE_ERR` is returned and errno is set to the following values: + +* ERANGE: `fd` is negative or higher than `maxclients` Redis config. +* EINVAL: `mask` value is invalid. + + + +### `RedisModule_EventLoopAddOneShot` + + int RedisModule_EventLoopAddOneShot(RedisModuleEventLoopOneShotFunc func, + void *user_data); + +**Available since:** 7.0.0 + +This function can be called from other threads to trigger callback on Redis +main thread. On success `REDISMODULE_OK` is returned. If `func` is NULL +`REDISMODULE_ERR` is returned and errno is set to EINVAL. + + + +## Modules ACL API + +Implements a hook into the authentication and authorization within Redis. + + + +### `RedisModule_CreateModuleUser` + + RedisModuleUser *RedisModule_CreateModuleUser(const char *name); + +**Available since:** 6.0.0 + +Creates a Redis ACL user that the module can use to authenticate a client. +After obtaining the user, the module should set what such user can do +using the `RedisModule_SetUserACL()` function. Once configured, the user +can be used in order to authenticate a connection, with the specified +ACL rules, using the `RedisModule_AuthClientWithUser()` function. + +Note that: + +* Users created here are not listed by the ACL command. +* Users created here are not checked for duplicated name, so it's up to + the module calling this function to take care of not creating users + with the same name. +* The created user can be used to authenticate multiple Redis connections. + +The caller can later free the user using the function +[`RedisModule_FreeModuleUser()`](#RedisModule_FreeModuleUser). When this function is called, if there are +still clients authenticated with this user, they are disconnected. +The function to free the user should only be used when the caller really +wants to invalidate the user to define a new one with different +capabilities. + + + +### `RedisModule_FreeModuleUser` + + int RedisModule_FreeModuleUser(RedisModuleUser *user); + +**Available since:** 6.0.0 + +Frees a given user and disconnects all of the clients that have been +authenticated with it. 
See [`RedisModule_CreateModuleUser`](#RedisModule_CreateModuleUser) for detailed usage. + + + +### `RedisModule_SetModuleUserACL` + + int RedisModule_SetModuleUserACL(RedisModuleUser *user, const char* acl); + +**Available since:** 6.0.0 + +Sets the permissions of a user created through the redis module +interface. The syntax is the same as ACL SETUSER, so refer to the +documentation in acl.c for more information. See [`RedisModule_CreateModuleUser`](#RedisModule_CreateModuleUser) +for detailed usage. + +Returns `REDISMODULE_OK` on success and `REDISMODULE_ERR` on failure +and will set an errno describing why the operation failed. + + + +### `RedisModule_SetModuleUserACLString` + + int RedisModule_SetModuleUserACLString(RedisModuleCtx *ctx, + RedisModuleUser *user, + const char *acl, + RedisModuleString **error); + +**Available since:** 7.0.6 + +Sets the permission of a user with a complete ACL string, such as one +would use on the redis ACL SETUSER command line API. This differs from +[`RedisModule_SetModuleUserACL`](#RedisModule_SetModuleUserACL), which only takes single ACL operations at a time. + +Returns `REDISMODULE_OK` on success and `REDISMODULE_ERR` on failure +if a `RedisModuleString` is provided in error, a string describing the error +will be returned + + + +### `RedisModule_GetModuleUserACLString` + + RedisModuleString *RedisModule_GetModuleUserACLString(RedisModuleUser *user); + +**Available since:** 7.0.6 + +Get the ACL string for a given user +Returns a `RedisModuleString` + + + +### `RedisModule_GetCurrentUserName` + + RedisModuleString *RedisModule_GetCurrentUserName(RedisModuleCtx *ctx); + +**Available since:** 7.0.0 + +Retrieve the user name of the client connection behind the current context. +The user name can be used later, in order to get a `RedisModuleUser`. +See more information in [`RedisModule_GetModuleUserFromUserName`](#RedisModule_GetModuleUserFromUserName). + +The returned string must be released with [`RedisModule_FreeString()`](#RedisModule_FreeString) or by +enabling automatic memory management. + + + +### `RedisModule_GetModuleUserFromUserName` + + RedisModuleUser *RedisModule_GetModuleUserFromUserName(RedisModuleString *name); + +**Available since:** 7.0.0 + +A `RedisModuleUser` can be used to check if command, key or channel can be executed or +accessed according to the ACLs rules associated with that user. +When a Module wants to do ACL checks on a general ACL user (not created by [`RedisModule_CreateModuleUser`](#RedisModule_CreateModuleUser)), +it can get the `RedisModuleUser` from this API, based on the user name retrieved by [`RedisModule_GetCurrentUserName`](#RedisModule_GetCurrentUserName). + +Since a general ACL user can be deleted at any time, this `RedisModuleUser` should be used only in the context +where this function was called. In order to do ACL checks out of that context, the Module can store the user name, +and call this API at any other context. + +Returns NULL if the user is disabled or the user does not exist. +The caller should later free the user using the function [`RedisModule_FreeModuleUser()`](#RedisModule_FreeModuleUser). + + + +### `RedisModule_ACLCheckCommandPermissions` + + int RedisModule_ACLCheckCommandPermissions(RedisModuleUser *user, + RedisModuleString **argv, + int argc); + +**Available since:** 7.0.0 + +Checks if the command can be executed by the user, according to the ACLs associated with it. 
+ +On success a `REDISMODULE_OK` is returned, otherwise +`REDISMODULE_ERR` is returned and errno is set to the following values: + +* ENOENT: Specified command does not exist. +* EACCES: Command cannot be executed, according to ACL rules + + + +### `RedisModule_ACLCheckKeyPermissions` + + int RedisModule_ACLCheckKeyPermissions(RedisModuleUser *user, + RedisModuleString *key, + int flags); + +**Available since:** 7.0.0 + +Check if the key can be accessed by the user according to the ACLs attached to the user +and the flags representing the key access. The flags are the same that are used in the +keyspec for logical operations. These flags are documented in [`RedisModule_SetCommandInfo`](#RedisModule_SetCommandInfo) as +the `REDISMODULE_CMD_KEY_ACCESS`, `REDISMODULE_CMD_KEY_UPDATE`, `REDISMODULE_CMD_KEY_INSERT`, +and `REDISMODULE_CMD_KEY_DELETE` flags. + +If no flags are supplied, the user is still required to have some access to the key for +this command to return successfully. + +If the user is able to access the key then `REDISMODULE_OK` is returned, otherwise +`REDISMODULE_ERR` is returned and errno is set to one of the following values: + +* EINVAL: The provided flags are invalid. +* EACCESS: The user does not have permission to access the key. + + + +### `RedisModule_ACLCheckKeyPrefixPermissions` + + int RedisModule_ACLCheckKeyPrefixPermissions(RedisModuleUser *user, + RedisModuleString *prefix, + int flags); + +**Available since:** unreleased + +Check if the user can access keys matching the given key prefix according to the ACLs +attached to the user and the flags representing key access. The flags are the same that +are used in the keyspec for logical operations. These flags are documented in +[`RedisModule_SetCommandInfo`](#RedisModule_SetCommandInfo) as the `REDISMODULE_CMD_KEY_ACCESS`, +`REDISMODULE_CMD_KEY_UPDATE`, `REDISMODULE_CMD_KEY_INSERT`, and `REDISMODULE_CMD_KEY_DELETE` flags. + +If no flags are supplied, the user is still required to have some access to keys matching +the prefix for this command to return successfully. + +If the user is able to access keys matching the prefix, then `REDISMODULE_OK` is returned. +Otherwise, `REDISMODULE_ERR` is returned and errno is set to one of the following values: + +* EINVAL: The provided flags are invalid. +* EACCES: The user does not have permission to access keys matching the prefix. + + + +### `RedisModule_ACLCheckChannelPermissions` + + int RedisModule_ACLCheckChannelPermissions(RedisModuleUser *user, + RedisModuleString *ch, + int flags); + +**Available since:** 7.0.0 + +Check if the pubsub channel can be accessed by the user based off of the given +access flags. See [`RedisModule_ChannelAtPosWithFlags`](#RedisModule_ChannelAtPosWithFlags) for more information about the +possible flags that can be passed in. + +If the user is able to access the pubsub channel then `REDISMODULE_OK` is returned, otherwise +`REDISMODULE_ERR` is returned and errno is set to one of the following values: + +* EINVAL: The provided flags are invalid. +* EACCESS: The user does not have permission to access the pubsub channel. + + + +### `RedisModule_ACLAddLogEntry` + + int RedisModule_ACLAddLogEntry(RedisModuleCtx *ctx, + RedisModuleUser *user, + RedisModuleString *object, + RedisModuleACLLogEntryReason reason); + +**Available since:** 7.0.0 + +Adds a new entry in the ACL log. +Returns `REDISMODULE_OK` on success and `REDISMODULE_ERR` on error. 
+ +For more information about ACL log, please refer to [https://redis.io/commands/acl-log](https://redis.io/commands/acl-log) + + + +### `RedisModule_ACLAddLogEntryByUserName` + + int RedisModule_ACLAddLogEntryByUserName(RedisModuleCtx *ctx, + RedisModuleString *username, + RedisModuleString *object, + RedisModuleACLLogEntryReason reason); + +**Available since:** 7.2.0 + +Adds a new entry in the ACL log with the `username` `RedisModuleString` provided. +Returns `REDISMODULE_OK` on success and `REDISMODULE_ERR` on error. + +For more information about ACL log, please refer to [https://redis.io/commands/acl-log](https://redis.io/commands/acl-log) + + + +### `RedisModule_AuthenticateClientWithUser` + + int RedisModule_AuthenticateClientWithUser(RedisModuleCtx *ctx, + RedisModuleUser *module_user, + RedisModuleUserChangedFunc callback, + void *privdata, + uint64_t *client_id); + +**Available since:** 6.0.0 + +Authenticate the current context's user with the provided redis acl user. +Returns `REDISMODULE_ERR` if the user is disabled. + +See authenticateClientWithUser for information about callback, `client_id`, +and general usage for authentication. + + + +### `RedisModule_AuthenticateClientWithACLUser` + + int RedisModule_AuthenticateClientWithACLUser(RedisModuleCtx *ctx, + const char *name, + size_t len, + RedisModuleUserChangedFunc callback, + void *privdata, + uint64_t *client_id); + +**Available since:** 6.0.0 + +Authenticate the current context's user with the provided redis acl user. +Returns `REDISMODULE_ERR` if the user is disabled or the user does not exist. + +See authenticateClientWithUser for information about callback, `client_id`, +and general usage for authentication. + + + +### `RedisModule_DeauthenticateAndCloseClient` + + int RedisModule_DeauthenticateAndCloseClient(RedisModuleCtx *ctx, + uint64_t client_id); + +**Available since:** 6.0.0 + +Deauthenticate and close the client. The client resources will not be +immediately freed, but will be cleaned up in a background job. This is +the recommended way to deauthenticate a client since most clients can't +handle users becoming deauthenticated. Returns `REDISMODULE_ERR` when the +client doesn't exist and `REDISMODULE_OK` when the operation was successful. + +The client ID is returned from the [`RedisModule_AuthenticateClientWithUser`](#RedisModule_AuthenticateClientWithUser) and +[`RedisModule_AuthenticateClientWithACLUser`](#RedisModule_AuthenticateClientWithACLUser) APIs, but can be obtained through +the CLIENT api or through server events. + +This function is not thread safe, and must be executed within the context +of a command or thread safe context. + + + +### `RedisModule_RedactClientCommandArgument` + + int RedisModule_RedactClientCommandArgument(RedisModuleCtx *ctx, int pos); + +**Available since:** 7.0.0 + +Redact the client command argument specified at the given position. Redacted arguments +are obfuscated in user facing commands such as SLOWLOG or MONITOR, as well as +never being written to server logs. This command may be called multiple times on the +same position. + +Note that the command name, position 0, can not be redacted. + +Returns `REDISMODULE_OK` if the argument was redacted and `REDISMODULE_ERR` if there +was an invalid parameter passed in or the position is outside the client +argument range. 
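+
+As an example, the ACL functions above can be combined in a command handler
+to verify that the calling user may read a key before touching it. This is
+only a sketch, not code from the Redis source; the command handler, error
+messages and key argument position are illustrative:
+
+    int MyCmd_ReadKey(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
+        if (argc != 2) return RedisModule_WrongArity(ctx);
+
+        /* Resolve the ACL user behind the current client connection. */
+        RedisModuleString *uname = RedisModule_GetCurrentUserName(ctx);
+        RedisModuleUser *user = RedisModule_GetModuleUserFromUserName(uname);
+        RedisModule_FreeString(ctx, uname);
+        if (user == NULL)
+            return RedisModule_ReplyWithError(ctx, "ERR unknown or disabled user");
+
+        /* Check read access to argv[1]; log a denial to the ACL log. */
+        if (RedisModule_ACLCheckKeyPermissions(user, argv[1],
+                REDISMODULE_CMD_KEY_ACCESS) != REDISMODULE_OK) {
+            RedisModule_ACLAddLogEntry(ctx, user, argv[1], REDISMODULE_ACL_LOG_KEY);
+            RedisModule_FreeModuleUser(user);
+            return RedisModule_ReplyWithError(ctx, "NOPERM this user has no access to the key");
+        }
+        RedisModule_FreeModuleUser(user);
+
+        /* ... safe to open and read the key here ... */
+        return RedisModule_ReplyWithSimpleString(ctx, "OK");
+    }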
+ + + +### `RedisModule_GetClientCertificate` + + RedisModuleString *RedisModule_GetClientCertificate(RedisModuleCtx *ctx, + uint64_t client_id); + +**Available since:** 6.0.9 + +Return the X.509 client-side certificate used by the client to authenticate +this connection. + +The return value is an allocated `RedisModuleString` that is a X.509 certificate +encoded in PEM (Base64) format. It should be freed (or auto-freed) by the caller. + +A NULL value is returned in the following conditions: + +- Connection ID does not exist +- Connection is not a TLS connection +- Connection is a TLS connection but no client certificate was used + + + +## Modules Dictionary API + +Implements a sorted dictionary (actually backed by a radix tree) with +the usual get / set / del / num-items API, together with an iterator +capable of going back and forth. + + + +### `RedisModule_CreateDict` + + RedisModuleDict *RedisModule_CreateDict(RedisModuleCtx *ctx); + +**Available since:** 5.0.0 + +Create a new dictionary. The 'ctx' pointer can be the current module context +or NULL, depending on what you want. Please follow the following rules: + +1. Use a NULL context if you plan to retain a reference to this dictionary + that will survive the time of the module callback where you created it. +2. Use a NULL context if no context is available at the time you are creating + the dictionary (of course...). +3. However use the current callback context as 'ctx' argument if the + dictionary time to live is just limited to the callback scope. In this + case, if enabled, you can enjoy the automatic memory management that will + reclaim the dictionary memory, as well as the strings returned by the + Next / Prev dictionary iterator calls. + + + +### `RedisModule_FreeDict` + + void RedisModule_FreeDict(RedisModuleCtx *ctx, RedisModuleDict *d); + +**Available since:** 5.0.0 + +Free a dictionary created with [`RedisModule_CreateDict()`](#RedisModule_CreateDict). You need to pass the +context pointer 'ctx' only if the dictionary was created using the +context instead of passing NULL. + + + +### `RedisModule_DictSize` + + uint64_t RedisModule_DictSize(RedisModuleDict *d); + +**Available since:** 5.0.0 + +Return the size of the dictionary (number of keys). + + + +### `RedisModule_DictSetC` + + int RedisModule_DictSetC(RedisModuleDict *d, + void *key, + size_t keylen, + void *ptr); + +**Available since:** 5.0.0 + +Store the specified key into the dictionary, setting its value to the +pointer 'ptr'. If the key was added with success, since it did not +already exist, `REDISMODULE_OK` is returned. Otherwise if the key already +exists the function returns `REDISMODULE_ERR`. + + + +### `RedisModule_DictReplaceC` + + int RedisModule_DictReplaceC(RedisModuleDict *d, + void *key, + size_t keylen, + void *ptr); + +**Available since:** 5.0.0 + +Like [`RedisModule_DictSetC()`](#RedisModule_DictSetC) but will replace the key with the new +value if the key already exists. + + + +### `RedisModule_DictSet` + + int RedisModule_DictSet(RedisModuleDict *d, RedisModuleString *key, void *ptr); + +**Available since:** 5.0.0 + +Like [`RedisModule_DictSetC()`](#RedisModule_DictSetC) but takes the key as a `RedisModuleString`. + + + +### `RedisModule_DictReplace` + + int RedisModule_DictReplace(RedisModuleDict *d, + RedisModuleString *key, + void *ptr); + +**Available since:** 5.0.0 + +Like [`RedisModule_DictReplaceC()`](#RedisModule_DictReplaceC) but takes the key as a `RedisModuleString`. 
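+
+As an example, the creation and set/replace functions above can be combined
+to build a small module-lifetime dictionary. This is only a sketch; the key
+names and string values are illustrative:
+
+    static char *v1 = "alice", *v2 = "bob", *v3 = "carol";
+
+    /* NULL context: the dictionary outlives the current callback, so it
+     * must be freed manually with RedisModule_FreeDict(). */
+    RedisModuleDict *d = RedisModule_CreateDict(NULL);
+
+    /* DictSetC fails with REDISMODULE_ERR if the key already exists... */
+    RedisModule_DictSetC(d, "user:1", 6, v1);
+    RedisModule_DictSetC(d, "user:2", 6, v2);
+
+    /* ...while DictReplaceC overwrites the value of an existing key. */
+    RedisModule_DictReplaceC(d, "user:1", 6, v3);
+
+    uint64_t items = RedisModule_DictSize(d);   /* items == 2 */
+
+    RedisModule_FreeDict(NULL, d);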
+ + + +### `RedisModule_DictGetC` + + void *RedisModule_DictGetC(RedisModuleDict *d, + void *key, + size_t keylen, + int *nokey); + +**Available since:** 5.0.0 + +Return the value stored at the specified key. The function returns NULL +both in the case the key does not exist, or if you actually stored +NULL at key. So, optionally, if the 'nokey' pointer is not NULL, it will +be set by reference to 1 if the key does not exist, or to 0 if the key +exists. + + + +### `RedisModule_DictGet` + + void *RedisModule_DictGet(RedisModuleDict *d, + RedisModuleString *key, + int *nokey); + +**Available since:** 5.0.0 + +Like [`RedisModule_DictGetC()`](#RedisModule_DictGetC) but takes the key as a `RedisModuleString`. + + + +### `RedisModule_DictDelC` + + int RedisModule_DictDelC(RedisModuleDict *d, + void *key, + size_t keylen, + void *oldval); + +**Available since:** 5.0.0 + +Remove the specified key from the dictionary, returning `REDISMODULE_OK` if +the key was found and deleted, or `REDISMODULE_ERR` if instead there was +no such key in the dictionary. When the operation is successful, if +'oldval' is not NULL, then '*oldval' is set to the value stored at the +key before it was deleted. Using this feature it is possible to get +a pointer to the value (for instance in order to release it), without +having to call [`RedisModule_DictGet()`](#RedisModule_DictGet) before deleting the key. + + + +### `RedisModule_DictDel` + + int RedisModule_DictDel(RedisModuleDict *d, + RedisModuleString *key, + void *oldval); + +**Available since:** 5.0.0 + +Like [`RedisModule_DictDelC()`](#RedisModule_DictDelC) but gets the key as a `RedisModuleString`. + + + +### `RedisModule_DictIteratorStartC` + + RedisModuleDictIter *RedisModule_DictIteratorStartC(RedisModuleDict *d, + const char *op, + void *key, + size_t keylen); + +**Available since:** 5.0.0 + +Return an iterator, setup in order to start iterating from the specified +key by applying the operator 'op', which is just a string specifying the +comparison operator to use in order to seek the first element. The +operators available are: + +* `^` – Seek the first (lexicographically smaller) key. +* `$` – Seek the last (lexicographically bigger) key. +* `>` – Seek the first element greater than the specified key. +* `>=` – Seek the first element greater or equal than the specified key. +* `<` – Seek the first element smaller than the specified key. +* `<=` – Seek the first element smaller or equal than the specified key. +* `==` – Seek the first element matching exactly the specified key. + +Note that for `^` and `$` the passed key is not used, and the user may +just pass NULL with a length of 0. + +If the element to start the iteration cannot be seeked based on the +key and operator passed, [`RedisModule_DictNext()`](#RedisModule_DictNext) / Prev() will just return +`REDISMODULE_ERR` at the first call, otherwise they'll produce elements. + + + +### `RedisModule_DictIteratorStart` + + RedisModuleDictIter *RedisModule_DictIteratorStart(RedisModuleDict *d, + const char *op, + RedisModuleString *key); + +**Available since:** 5.0.0 + +Exactly like [`RedisModule_DictIteratorStartC`](#RedisModule_DictIteratorStartC), but the key is passed as a +`RedisModuleString`. + + + +### `RedisModule_DictIteratorStop` + + void RedisModule_DictIteratorStop(RedisModuleDictIter *di); + +**Available since:** 5.0.0 + +Release the iterator created with [`RedisModule_DictIteratorStart()`](#RedisModule_DictIteratorStart). 
This call +is mandatory otherwise a memory leak is introduced in the module. + + + +### `RedisModule_DictIteratorReseekC` + + int RedisModule_DictIteratorReseekC(RedisModuleDictIter *di, + const char *op, + void *key, + size_t keylen); + +**Available since:** 5.0.0 + +After its creation with [`RedisModule_DictIteratorStart()`](#RedisModule_DictIteratorStart), it is possible to +change the currently selected element of the iterator by using this +API call. The result based on the operator and key is exactly like +the function [`RedisModule_DictIteratorStart()`](#RedisModule_DictIteratorStart), however in this case the +return value is just `REDISMODULE_OK` in case the seeked element was found, +or `REDISMODULE_ERR` in case it was not possible to seek the specified +element. It is possible to reseek an iterator as many times as you want. + + + +### `RedisModule_DictIteratorReseek` + + int RedisModule_DictIteratorReseek(RedisModuleDictIter *di, + const char *op, + RedisModuleString *key); + +**Available since:** 5.0.0 + +Like [`RedisModule_DictIteratorReseekC()`](#RedisModule_DictIteratorReseekC) but takes the key as a +`RedisModuleString`. + + + +### `RedisModule_DictNextC` + + void *RedisModule_DictNextC(RedisModuleDictIter *di, + size_t *keylen, + void **dataptr); + +**Available since:** 5.0.0 + +Return the current item of the dictionary iterator `di` and steps to the +next element. If the iterator already yield the last element and there +are no other elements to return, NULL is returned, otherwise a pointer +to a string representing the key is provided, and the `*keylen` length +is set by reference (if keylen is not NULL). The `*dataptr`, if not NULL +is set to the value of the pointer stored at the returned key as auxiliary +data (as set by the [`RedisModule_DictSet`](#RedisModule_DictSet) API). + +Usage example: + + ... create the iterator here ... + char *key; + void *data; + while((key = RedisModule_DictNextC(iter,&keylen,&data)) != NULL) { + printf("%.*s %p\n", (int)keylen, key, data); + } + +The returned pointer is of type void because sometimes it makes sense +to cast it to a `char*` sometimes to an unsigned `char*` depending on the +fact it contains or not binary data, so this API ends being more +comfortable to use. + +The validity of the returned pointer is until the next call to the +next/prev iterator step. Also the pointer is no longer valid once the +iterator is released. + + + +### `RedisModule_DictPrevC` + + void *RedisModule_DictPrevC(RedisModuleDictIter *di, + size_t *keylen, + void **dataptr); + +**Available since:** 5.0.0 + +This function is exactly like [`RedisModule_DictNext()`](#RedisModule_DictNext) but after returning +the currently selected element in the iterator, it selects the previous +element (lexicographically smaller) instead of the next one. + + + +### `RedisModule_DictNext` + + RedisModuleString *RedisModule_DictNext(RedisModuleCtx *ctx, + RedisModuleDictIter *di, + void **dataptr); + +**Available since:** 5.0.0 + +Like `RedisModuleNextC()`, but instead of returning an internally allocated +buffer and key length, it returns directly a module string object allocated +in the specified context 'ctx' (that may be NULL exactly like for the main +API [`RedisModule_CreateString`](#RedisModule_CreateString)). + +The returned string object should be deallocated after use, either manually +or by using a context that has automatic memory management active. 
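+
+As an example, the iterator functions above can be used to walk only the keys
+sharing a common prefix, by seeking with the `>=` operator and stopping at the
+first non-matching key. This is only a sketch, assuming `d` is an existing
+`RedisModuleDict *`; the prefix is illustrative:
+
+    const char *prefix = "user:";
+    size_t plen = strlen(prefix);
+    RedisModuleDictIter *iter =
+        RedisModule_DictIteratorStartC(d, ">=", (void *)prefix, plen);
+
+    char *key;
+    size_t keylen;
+    void *data;
+    while ((key = RedisModule_DictNextC(iter, &keylen, &data)) != NULL) {
+        /* Keys arrive in lexicographical order, so the first key that no
+         * longer matches the prefix ends the range. */
+        if (keylen < plen || memcmp(key, prefix, plen) != 0) break;
+        /* ... use key/data here; the key pointer is only valid until the
+         * next iterator step ... */
+    }
+    RedisModule_DictIteratorStop(iter);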
+ + + +### `RedisModule_DictPrev` + + RedisModuleString *RedisModule_DictPrev(RedisModuleCtx *ctx, + RedisModuleDictIter *di, + void **dataptr); + +**Available since:** 5.0.0 + +Like [`RedisModule_DictNext()`](#RedisModule_DictNext) but after returning the currently selected +element in the iterator, it selects the previous element (lexicographically +smaller) instead of the next one. + + + +### `RedisModule_DictCompareC` + + int RedisModule_DictCompareC(RedisModuleDictIter *di, + const char *op, + void *key, + size_t keylen); + +**Available since:** 5.0.0 + +Compare the element currently pointed by the iterator to the specified +element given by key/keylen, according to the operator 'op' (the set of +valid operators are the same valid for [`RedisModule_DictIteratorStart`](#RedisModule_DictIteratorStart)). +If the comparison is successful the command returns `REDISMODULE_OK` +otherwise `REDISMODULE_ERR` is returned. + +This is useful when we want to just emit a lexicographical range, so +in the loop, as we iterate elements, we can also check if we are still +on range. + +The function return `REDISMODULE_ERR` if the iterator reached the +end of elements condition as well. + + + +### `RedisModule_DictCompare` + + int RedisModule_DictCompare(RedisModuleDictIter *di, + const char *op, + RedisModuleString *key); + +**Available since:** 5.0.0 + +Like [`RedisModule_DictCompareC`](#RedisModule_DictCompareC) but gets the key to compare with the current +iterator key as a `RedisModuleString`. + + + +## Modules Info fields + + + +### `RedisModule_InfoAddSection` + + int RedisModule_InfoAddSection(RedisModuleInfoCtx *ctx, const char *name); + +**Available since:** 6.0.0 + +Used to start a new section, before adding any fields. the section name will +be prefixed by `_` and must only include A-Z,a-z,0-9. +NULL or empty string indicates the default section (only ``) is used. +When return value is `REDISMODULE_ERR`, the section should and will be skipped. + + + +### `RedisModule_InfoBeginDictField` + + int RedisModule_InfoBeginDictField(RedisModuleInfoCtx *ctx, const char *name); + +**Available since:** 6.0.0 + +Starts a dict field, similar to the ones in INFO KEYSPACE. Use normal +`RedisModule_InfoAddField`* functions to add the items to this field, and +terminate with [`RedisModule_InfoEndDictField`](#RedisModule_InfoEndDictField). + + + +### `RedisModule_InfoEndDictField` + + int RedisModule_InfoEndDictField(RedisModuleInfoCtx *ctx); + +**Available since:** 6.0.0 + +Ends a dict field, see [`RedisModule_InfoBeginDictField`](#RedisModule_InfoBeginDictField) + + + +### `RedisModule_InfoAddFieldString` + + int RedisModule_InfoAddFieldString(RedisModuleInfoCtx *ctx, + const char *field, + RedisModuleString *value); + +**Available since:** 6.0.0 + +Used by `RedisModuleInfoFunc` to add info fields. +Each field will be automatically prefixed by `_`. +Field names or values must not include `\r\n` or `:`. + + + +### `RedisModule_InfoAddFieldCString` + + int RedisModule_InfoAddFieldCString(RedisModuleInfoCtx *ctx, + const char *field, + const char *value); + +**Available since:** 6.0.0 + +See [`RedisModule_InfoAddFieldString()`](#RedisModule_InfoAddFieldString). + + + +### `RedisModule_InfoAddFieldDouble` + + int RedisModule_InfoAddFieldDouble(RedisModuleInfoCtx *ctx, + const char *field, + double value); + +**Available since:** 6.0.0 + +See [`RedisModule_InfoAddFieldString()`](#RedisModule_InfoAddFieldString). 
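+
+As an example, the INFO field functions above are typically used together in
+a single callback that is later registered with `RedisModule_RegisterInfoFunc`
+(described below). This is only a sketch; the section, field names and values
+are illustrative, and the section name is prefixed with the module's name in
+the INFO output:
+
+    void MyInfoFunc(RedisModuleInfoCtx *ctx, int for_crash_report) {
+        (void) for_crash_report;
+
+        if (RedisModule_InfoAddSection(ctx, "stats") == REDISMODULE_ERR) return;
+        RedisModule_InfoAddFieldCString(ctx, "version", "1.0.0");
+        RedisModule_InfoAddFieldDouble(ctx, "hit_ratio", 0.97);
+
+        /* A dict field, similar to the entries of INFO KEYSPACE. */
+        RedisModule_InfoBeginDictField(ctx, "cache");
+        RedisModule_InfoAddFieldDouble(ctx, "entries", 1024);
+        RedisModule_InfoEndDictField(ctx);
+    }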
+ + + +### `RedisModule_InfoAddFieldLongLong` + + int RedisModule_InfoAddFieldLongLong(RedisModuleInfoCtx *ctx, + const char *field, + long long value); + +**Available since:** 6.0.0 + +See [`RedisModule_InfoAddFieldString()`](#RedisModule_InfoAddFieldString). + + + +### `RedisModule_InfoAddFieldULongLong` + + int RedisModule_InfoAddFieldULongLong(RedisModuleInfoCtx *ctx, + const char *field, + unsigned long long value); + +**Available since:** 6.0.0 + +See [`RedisModule_InfoAddFieldString()`](#RedisModule_InfoAddFieldString). + + + +### `RedisModule_RegisterInfoFunc` + + int RedisModule_RegisterInfoFunc(RedisModuleCtx *ctx, RedisModuleInfoFunc cb); + +**Available since:** 6.0.0 + +Registers callback for the INFO command. The callback should add INFO fields +by calling the `RedisModule_InfoAddField*()` functions. + + + +### `RedisModule_GetServerInfo` + + RedisModuleServerInfoData *RedisModule_GetServerInfo(RedisModuleCtx *ctx, + const char *section); + +**Available since:** 6.0.0 + +Get information about the server similar to the one that returns from the +INFO command. This function takes an optional 'section' argument that may +be NULL. The return value holds the output and can be used with +[`RedisModule_ServerInfoGetField`](#RedisModule_ServerInfoGetField) and alike to get the individual fields. +When done, it needs to be freed with [`RedisModule_FreeServerInfo`](#RedisModule_FreeServerInfo) or with the +automatic memory management mechanism if enabled. + + + +### `RedisModule_FreeServerInfo` + + void RedisModule_FreeServerInfo(RedisModuleCtx *ctx, + RedisModuleServerInfoData *data); + +**Available since:** 6.0.0 + +Free data created with [`RedisModule_GetServerInfo()`](#RedisModule_GetServerInfo). You need to pass the +context pointer 'ctx' only if the dictionary was created using the +context instead of passing NULL. + + + +### `RedisModule_ServerInfoGetField` + + RedisModuleString *RedisModule_ServerInfoGetField(RedisModuleCtx *ctx, + RedisModuleServerInfoData *data, + const char* field); + +**Available since:** 6.0.0 + +Get the value of a field from data collected with [`RedisModule_GetServerInfo()`](#RedisModule_GetServerInfo). You +need to pass the context pointer 'ctx' only if you want to use auto memory +mechanism to release the returned string. Return value will be NULL if the +field was not found. + + + +### `RedisModule_ServerInfoGetFieldC` + + const char *RedisModule_ServerInfoGetFieldC(RedisModuleServerInfoData *data, + const char* field); + +**Available since:** 6.0.0 + +Similar to [`RedisModule_ServerInfoGetField`](#RedisModule_ServerInfoGetField), but returns a char* which should not be freed but the caller. + + + +### `RedisModule_ServerInfoGetFieldSigned` + + long long RedisModule_ServerInfoGetFieldSigned(RedisModuleServerInfoData *data, + const char* field, + int *out_err); + +**Available since:** 6.0.0 + +Get the value of a field from data collected with [`RedisModule_GetServerInfo()`](#RedisModule_GetServerInfo). If the +field is not found, or is not numerical or out of range, return value will be +0, and the optional `out_err` argument will be set to `REDISMODULE_ERR`. + + + +### `RedisModule_ServerInfoGetFieldUnsigned` + + unsigned long long RedisModule_ServerInfoGetFieldUnsigned(RedisModuleServerInfoData *data, + const char* field, + int *out_err); + +**Available since:** 6.0.0 + +Get the value of a field from data collected with [`RedisModule_GetServerInfo()`](#RedisModule_GetServerInfo). 
If the +field is not found, or is not numerical or out of range, return value will be +0, and the optional `out_err` argument will be set to `REDISMODULE_ERR`. + + + +### `RedisModule_ServerInfoGetFieldDouble` + + double RedisModule_ServerInfoGetFieldDouble(RedisModuleServerInfoData *data, + const char* field, + int *out_err); + +**Available since:** 6.0.0 + +Get the value of a field from data collected with [`RedisModule_GetServerInfo()`](#RedisModule_GetServerInfo). If the +field is not found, or is not a double, return value will be 0, and the +optional `out_err` argument will be set to `REDISMODULE_ERR`. + + + +## Modules utility APIs + + + +### `RedisModule_GetRandomBytes` + + void RedisModule_GetRandomBytes(unsigned char *dst, size_t len); + +**Available since:** 5.0.0 + +Return random bytes using SHA1 in counter mode with a /dev/urandom +initialized seed. This function is fast so can be used to generate +many bytes without any effect on the operating system entropy pool. +Currently this function is not thread safe. + + + +### `RedisModule_GetRandomHexChars` + + void RedisModule_GetRandomHexChars(char *dst, size_t len); + +**Available since:** 5.0.0 + +Like [`RedisModule_GetRandomBytes()`](#RedisModule_GetRandomBytes) but instead of setting the string to +random bytes the string is set to random characters in the in the +hex charset [0-9a-f]. + + + +## Modules API exporting / importing + + + +### `RedisModule_ExportSharedAPI` + + int RedisModule_ExportSharedAPI(RedisModuleCtx *ctx, + const char *apiname, + void *func); + +**Available since:** 5.0.4 + +This function is called by a module in order to export some API with a +given name. Other modules will be able to use this API by calling the +symmetrical function [`RedisModule_GetSharedAPI()`](#RedisModule_GetSharedAPI) and casting the return value to +the right function pointer. + +The function will return `REDISMODULE_OK` if the name is not already taken, +otherwise `REDISMODULE_ERR` will be returned and no operation will be +performed. + +IMPORTANT: the apiname argument should be a string literal with static +lifetime. The API relies on the fact that it will always be valid in +the future. + + + +### `RedisModule_GetSharedAPI` + + void *RedisModule_GetSharedAPI(RedisModuleCtx *ctx, const char *apiname); + +**Available since:** 5.0.4 + +Request an exported API pointer. The return value is just a void pointer +that the caller of this function will be required to cast to the right +function pointer, so this is a private contract between modules. + +If the requested API is not available then NULL is returned. Because +modules can be loaded at different times with different order, this +function calls should be put inside some module generic API registering +step, that is called every time a module attempts to execute a +command that requires external APIs: if some API cannot be resolved, the +command should return an error. + +Here is an example: + + int ... myCommandImplementation(void) { + if (getExternalAPIs() == 0) { + reply with an error here if we cannot have the APIs + } + // Use the API: + myFunctionPointer(foo); + } + +And the function registerAPI() is: + + int getExternalAPIs(void) { + static int api_loaded = 0; + if (api_loaded != 0) return 1; // APIs already resolved. 
+ + myFunctionPointer = RedisModule_GetSharedAPI("..."); + if (myFunctionPointer == NULL) return 0; + + return 1; + } + + + +## Module Command Filter API + + + +### `RedisModule_RegisterCommandFilter` + + RedisModuleCommandFilter *RedisModule_RegisterCommandFilter(RedisModuleCtx *ctx, + RedisModuleCommandFilterFunc callback, + int flags); + +**Available since:** 5.0.5 + +Register a new command filter function. + +Command filtering makes it possible for modules to extend Redis by plugging +into the execution flow of all commands. + +A registered filter gets called before Redis executes *any* command. This +includes both core Redis commands and commands registered by any module. The +filter applies in all execution paths including: + +1. Invocation by a client. +2. Invocation through [`RedisModule_Call()`](#RedisModule_Call) by any module. +3. Invocation through Lua `redis.call()`. +4. Replication of a command from a master. + +The filter executes in a special filter context, which is different and more +limited than a `RedisModuleCtx`. Because the filter affects any command, it +must be implemented in a very efficient way to reduce the performance impact +on Redis. All Redis Module API calls that require a valid context (such as +[`RedisModule_Call()`](#RedisModule_Call), [`RedisModule_OpenKey()`](#RedisModule_OpenKey), etc.) are not supported in a +filter context. + +The `RedisModuleCommandFilterCtx` can be used to inspect or modify the +executed command and its arguments. As the filter executes before Redis +begins processing the command, any change will affect the way the command is +processed. For example, a module can override Redis commands this way: + +1. Register a `MODULE.SET` command which implements an extended version of + the Redis `SET` command. +2. Register a command filter which detects invocation of `SET` on a specific + pattern of keys. Once detected, the filter will replace the first + argument from `SET` to `MODULE.SET`. +3. When filter execution is complete, Redis considers the new command name + and therefore executes the module's own command. + +Note that in the above use case, if `MODULE.SET` itself uses +[`RedisModule_Call()`](#RedisModule_Call) the filter will be applied on that call as well. If +that is not desired, the `REDISMODULE_CMDFILTER_NOSELF` flag can be set when +registering the filter. + +The `REDISMODULE_CMDFILTER_NOSELF` flag prevents execution flows that +originate from the module's own [`RedisModule_Call()`](#RedisModule_Call) from reaching the filter. This +flag is effective for all execution flows, including nested ones, as long as +the execution begins from the module's command context or a thread-safe +context that is associated with a blocking command. + +Detached thread-safe contexts are *not* associated with the module and cannot +be protected by this flag. + +If multiple filters are registered (by the same or different modules), they +are executed in the order of registration. + + + +### `RedisModule_UnregisterCommandFilter` + + int RedisModule_UnregisterCommandFilter(RedisModuleCtx *ctx, + RedisModuleCommandFilter *filter); + +**Available since:** 5.0.5 + +Unregister a command filter. + + + +### `RedisModule_CommandFilterArgsCount` + + int RedisModule_CommandFilterArgsCount(RedisModuleCommandFilterCtx *fctx); + +**Available since:** 5.0.5 + +Return the number of arguments a filtered command has. The number of +arguments include the command itself. 
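+
+As an example, the `SET` override pattern described above can be written as a
+filter callback using the argument accessors documented just below. This is
+only a sketch; the replacement command name is illustrative and the filter is
+assumed to be registered in `RedisModule_OnLoad`:
+
+    void MySetFilter(RedisModuleCommandFilterCtx *fctx) {
+        if (RedisModule_CommandFilterArgsCount(fctx) < 3) return;
+
+        size_t len;
+        const char *cmd = RedisModule_StringPtrLen(
+            RedisModule_CommandFilterArgGet(fctx, 0), &len);
+        if (len != 3 || strncasecmp(cmd, "set", 3) != 0) return;
+
+        /* Redirect the call to the module's own command. The new string is
+         * created without a context so it is not auto-memory managed. */
+        RedisModule_CommandFilterArgReplace(fctx, 0,
+            RedisModule_CreateString(NULL, "MODULE.SET", 10));
+    }
+
+    /* At load time:
+     * RedisModule_RegisterCommandFilter(ctx, MySetFilter,
+     *                                   REDISMODULE_CMDFILTER_NOSELF); */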
+ + + +### `RedisModule_CommandFilterArgGet` + + RedisModuleString *RedisModule_CommandFilterArgGet(RedisModuleCommandFilterCtx *fctx, + int pos); + +**Available since:** 5.0.5 + +Return the specified command argument. The first argument (position 0) is +the command itself, and the rest are user-provided args. + + + +### `RedisModule_CommandFilterArgInsert` + + int RedisModule_CommandFilterArgInsert(RedisModuleCommandFilterCtx *fctx, + int pos, + RedisModuleString *arg); + +**Available since:** 5.0.5 + +Modify the filtered command by inserting a new argument at the specified +position. The specified `RedisModuleString` argument may be used by Redis +after the filter context is destroyed, so it must not be auto-memory +allocated, freed or used elsewhere. + + + +### `RedisModule_CommandFilterArgReplace` + + int RedisModule_CommandFilterArgReplace(RedisModuleCommandFilterCtx *fctx, + int pos, + RedisModuleString *arg); + +**Available since:** 5.0.5 + +Modify the filtered command by replacing an existing argument with a new one. +The specified `RedisModuleString` argument may be used by Redis after the +filter context is destroyed, so it must not be auto-memory allocated, freed +or used elsewhere. + + + +### `RedisModule_CommandFilterArgDelete` + + int RedisModule_CommandFilterArgDelete(RedisModuleCommandFilterCtx *fctx, + int pos); + +**Available since:** 5.0.5 + +Modify the filtered command by deleting an argument at the specified +position. + + + +### `RedisModule_CommandFilterGetClientId` + + unsigned long long RedisModule_CommandFilterGetClientId(RedisModuleCommandFilterCtx *fctx); + +**Available since:** 7.2.0 + +Get Client ID for client that issued the command we are filtering + + + +### `RedisModule_MallocSize` + + size_t RedisModule_MallocSize(void* ptr); + +**Available since:** 6.0.0 + +For a given pointer allocated via [`RedisModule_Alloc()`](#RedisModule_Alloc) or +[`RedisModule_Realloc()`](#RedisModule_Realloc), return the amount of memory allocated for it. +Note that this may be different (larger) than the memory we allocated +with the allocation calls, since sometimes the underlying allocator +will allocate more memory. + + + +### `RedisModule_MallocUsableSize` + + size_t RedisModule_MallocUsableSize(void *ptr); + +**Available since:** 7.0.1 + +Similar to [`RedisModule_MallocSize`](#RedisModule_MallocSize), the difference is that [`RedisModule_MallocUsableSize`](#RedisModule_MallocUsableSize) +returns the usable size of memory by the module. + + + +### `RedisModule_MallocSizeString` + + size_t RedisModule_MallocSizeString(RedisModuleString* str); + +**Available since:** 7.0.0 + +Same as [`RedisModule_MallocSize`](#RedisModule_MallocSize), except it works on `RedisModuleString` pointers. + + + +### `RedisModule_MallocSizeDict` + + size_t RedisModule_MallocSizeDict(RedisModuleDict* dict); + +**Available since:** 7.0.0 + +Same as [`RedisModule_MallocSize`](#RedisModule_MallocSize), except it works on `RedisModuleDict` pointers. +Note that the returned value is only the overhead of the underlying structures, +it does not include the allocation size of the keys and values. + + + +### `RedisModule_GetUsedMemoryRatio` + + float RedisModule_GetUsedMemoryRatio(void); + +**Available since:** 6.0.0 + +Return the a number between 0 to 1 indicating the amount of memory +currently used, relative to the Redis "maxmemory" configuration. + +* 0 - No memory limit configured. +* Between 0 and 1 - The percentage of the memory used normalized in 0-1 range. +* Exactly 1 - Memory limit reached. 
+* Greater 1 - More memory used than the configured limit. + + + +## Scanning keyspace and hashes + + + +### `RedisModule_ScanCursorCreate` + + RedisModuleScanCursor *RedisModule_ScanCursorCreate(void); + +**Available since:** 6.0.0 + +Create a new cursor to be used with [`RedisModule_Scan`](#RedisModule_Scan) + + + +### `RedisModule_ScanCursorRestart` + + void RedisModule_ScanCursorRestart(RedisModuleScanCursor *cursor); + +**Available since:** 6.0.0 + +Restart an existing cursor. The keys will be rescanned. + + + +### `RedisModule_ScanCursorDestroy` + + void RedisModule_ScanCursorDestroy(RedisModuleScanCursor *cursor); + +**Available since:** 6.0.0 + +Destroy the cursor struct. + + + +### `RedisModule_Scan` + + int RedisModule_Scan(RedisModuleCtx *ctx, + RedisModuleScanCursor *cursor, + RedisModuleScanCB fn, + void *privdata); + +**Available since:** 6.0.0 + +Scan API that allows a module to scan all the keys and value in +the selected db. + +Callback for scan implementation. + + void scan_callback(RedisModuleCtx *ctx, RedisModuleString *keyname, + RedisModuleKey *key, void *privdata); + +- `ctx`: the redis module context provided to for the scan. +- `keyname`: owned by the caller and need to be retained if used after this + function. +- `key`: holds info on the key and value, it is provided as best effort, in + some cases it might be NULL, in which case the user should (can) use + [`RedisModule_OpenKey()`](#RedisModule_OpenKey) (and CloseKey too). + when it is provided, it is owned by the caller and will be free when the + callback returns. +- `privdata`: the user data provided to [`RedisModule_Scan()`](#RedisModule_Scan). + +The way it should be used: + + RedisModuleScanCursor *c = RedisModule_ScanCursorCreate(); + while(RedisModule_Scan(ctx, c, callback, privateData)); + RedisModule_ScanCursorDestroy(c); + +It is also possible to use this API from another thread while the lock +is acquired during the actual call to [`RedisModule_Scan`](#RedisModule_Scan): + + RedisModuleScanCursor *c = RedisModule_ScanCursorCreate(); + RedisModule_ThreadSafeContextLock(ctx); + while(RedisModule_Scan(ctx, c, callback, privateData)){ + RedisModule_ThreadSafeContextUnlock(ctx); + // do some background job + RedisModule_ThreadSafeContextLock(ctx); + } + RedisModule_ScanCursorDestroy(c); + +The function will return 1 if there are more elements to scan and +0 otherwise, possibly setting errno if the call failed. + +It is also possible to restart an existing cursor using [`RedisModule_ScanCursorRestart`](#RedisModule_ScanCursorRestart). + +IMPORTANT: This API is very similar to the Redis SCAN command from the +point of view of the guarantees it provides. This means that the API +may report duplicated keys, but guarantees to report at least one time +every key that was there from the start to the end of the scanning process. + +NOTE: If you do database changes within the callback, you should be aware +that the internal state of the database may change. For instance it is safe +to delete or modify the current key, but may not be safe to delete any +other key. +Moreover playing with the Redis keyspace while iterating may have the +effect of returning more duplicates. A safe pattern is to store the keys +names you want to modify elsewhere, and perform the actions on the keys +later when the iteration is complete. However this can cost a lot of +memory, so it may make sense to just operate on the current key when +possible during the iteration, given that this is safe. 
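+
+As an example, the safe pattern mentioned above can be implemented with a
+callback that only records key names, leaving the actual modifications for
+after the scan loop. This is only a sketch; the list structure and its growth
+strategy are illustrative:
+
+    typedef struct {
+        RedisModuleString **keys;
+        size_t count, cap;
+    } KeyList;
+
+    void collect_cb(RedisModuleCtx *ctx, RedisModuleString *keyname,
+                    RedisModuleKey *key, void *privdata) {
+        (void) key;
+        KeyList *list = privdata;
+        if (list->count == list->cap) {
+            list->cap = list->cap ? list->cap * 2 : 16;
+            list->keys = RedisModule_Realloc(list->keys,
+                                             list->cap * sizeof(*list->keys));
+        }
+        /* keyname is owned by the caller, so retain it for later use. */
+        RedisModule_RetainString(ctx, keyname);
+        list->keys[list->count++] = keyname;
+    }
+
+    /* After the RedisModule_Scan() loop shown above completes, walk
+     * list->keys, modify or delete each key, then free the strings. */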
+ + + +### `RedisModule_ScanKey` + + int RedisModule_ScanKey(RedisModuleKey *key, + RedisModuleScanCursor *cursor, + RedisModuleScanKeyCB fn, + void *privdata); + +**Available since:** 6.0.0 + +Scan api that allows a module to scan the elements in a hash, set or sorted set key + +Callback for scan implementation. + + void scan_callback(RedisModuleKey *key, RedisModuleString* field, RedisModuleString* value, void *privdata); + +- key - the redis key context provided to for the scan. +- field - field name, owned by the caller and need to be retained if used + after this function. +- value - value string or NULL for set type, owned by the caller and need to + be retained if used after this function. +- privdata - the user data provided to [`RedisModule_ScanKey`](#RedisModule_ScanKey). + +The way it should be used: + + RedisModuleScanCursor *c = RedisModule_ScanCursorCreate(); + RedisModuleKey *key = RedisModule_OpenKey(...); + while(RedisModule_ScanKey(key, c, callback, privateData)); + RedisModule_CloseKey(key); + RedisModule_ScanCursorDestroy(c); + +It is also possible to use this API from another thread while the lock is acquired during +the actual call to [`RedisModule_ScanKey`](#RedisModule_ScanKey), and re-opening the key each time: + + RedisModuleScanCursor *c = RedisModule_ScanCursorCreate(); + RedisModule_ThreadSafeContextLock(ctx); + RedisModuleKey *key = RedisModule_OpenKey(...); + while(RedisModule_ScanKey(ctx, c, callback, privateData)){ + RedisModule_CloseKey(key); + RedisModule_ThreadSafeContextUnlock(ctx); + // do some background job + RedisModule_ThreadSafeContextLock(ctx); + key = RedisModule_OpenKey(...); + } + RedisModule_CloseKey(key); + RedisModule_ScanCursorDestroy(c); + +The function will return 1 if there are more elements to scan and 0 otherwise, +possibly setting errno if the call failed. +It is also possible to restart an existing cursor using [`RedisModule_ScanCursorRestart`](#RedisModule_ScanCursorRestart). + +NOTE: Certain operations are unsafe while iterating the object. For instance +while the API guarantees to return at least one time all the elements that +are present in the data structure consistently from the start to the end +of the iteration (see HSCAN and similar commands documentation), the more +you play with the elements, the more duplicates you may get. In general +deleting the current element of the data structure is safe, while removing +the key you are iterating is not safe. + + + +## Module fork API + + + +### `RedisModule_Fork` + + int RedisModule_Fork(RedisModuleForkDoneHandler cb, void *user_data); + +**Available since:** 6.0.0 + +Create a background child process with the current frozen snapshot of the +main process where you can do some processing in the background without +affecting / freezing the traffic and no need for threads and GIL locking. +Note that Redis allows for only one concurrent fork. +When the child wants to exit, it should call [`RedisModule_ExitFromChild`](#RedisModule_ExitFromChild). +If the parent wants to kill the child it should call [`RedisModule_KillForkChild`](#RedisModule_KillForkChild) +The done handler callback will be executed on the parent process when the +child existed (but not when killed) +Return: -1 on failure, on success the parent process will get a positive PID +of the child, and the child process will get 0. 
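+
+As an example, the typical flow is: the parent forks, the child works against
+the frozen snapshot and exits, and the done handler runs in the parent. This
+is only a sketch; the logging and the background work are illustrative:
+
+    void MyForkDone(int exitcode, int bypid, void *user_data) {
+        (void) bypid; (void) user_data;
+        /* Runs in the parent once the child has exited (not when killed). */
+        RedisModule_Log(NULL, "notice", "background job finished: %d", exitcode);
+    }
+
+    int child_pid = RedisModule_Fork(MyForkDone, NULL);
+    if (child_pid == 0) {
+        /* Child process: scan the snapshot, write a file, etc., then exit. */
+        RedisModule_ExitFromChild(0);
+    } else if (child_pid == -1) {
+        /* Fork failed, for example because another fork is in progress. */
+    }
+    /* The parent keeps serving traffic, and may later abort the job with
+     * RedisModule_KillForkChild(child_pid). */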
+ + + +### `RedisModule_SendChildHeartbeat` + + void RedisModule_SendChildHeartbeat(double progress); + +**Available since:** 6.2.0 + +The module is advised to call this function from the fork child once in a while, +so that it can report progress and COW memory to the parent which will be +reported in INFO. +The `progress` argument should between 0 and 1, or -1 when not available. + + + +### `RedisModule_ExitFromChild` + + int RedisModule_ExitFromChild(int retcode); + +**Available since:** 6.0.0 + +Call from the child process when you want to terminate it. +retcode will be provided to the done handler executed on the parent process. + + + +### `RedisModule_KillForkChild` + + int RedisModule_KillForkChild(int child_pid); + +**Available since:** 6.0.0 + +Can be used to kill the forked child process from the parent process. +`child_pid` would be the return value of [`RedisModule_Fork`](#RedisModule_Fork). + + + +## Server hooks implementation + + + +### `RedisModule_SubscribeToServerEvent` + + int RedisModule_SubscribeToServerEvent(RedisModuleCtx *ctx, + RedisModuleEvent event, + RedisModuleEventCallback callback); + +**Available since:** 6.0.0 + +Register to be notified, via a callback, when the specified server event +happens. The callback is called with the event as argument, and an additional +argument which is a void pointer and should be cased to a specific type +that is event-specific (but many events will just use NULL since they do not +have additional information to pass to the callback). + +If the callback is NULL and there was a previous subscription, the module +will be unsubscribed. If there was a previous subscription and the callback +is not null, the old callback will be replaced with the new one. + +The callback must be of this type: + + int (*RedisModuleEventCallback)(RedisModuleCtx *ctx, + RedisModuleEvent eid, + uint64_t subevent, + void *data); + +The 'ctx' is a normal Redis module context that the callback can use in +order to call other modules APIs. The 'eid' is the event itself, this +is only useful in the case the module subscribed to multiple events: using +the 'id' field of this structure it is possible to check if the event +is one of the events we registered with this callback. The 'subevent' field +depends on the event that fired. + +Finally the 'data' pointer may be populated, only for certain events, with +more relevant data. + +Here is a list of events you can use as 'eid' and related sub events: + +* `RedisModuleEvent_ReplicationRoleChanged`: + + This event is called when the instance switches from master + to replica or the other way around, however the event is + also called when the replica remains a replica but starts to + replicate with a different master. + + The following sub events are available: + + * `REDISMODULE_SUBEVENT_REPLROLECHANGED_NOW_MASTER` + * `REDISMODULE_SUBEVENT_REPLROLECHANGED_NOW_REPLICA` + + The 'data' field can be casted by the callback to a + `RedisModuleReplicationInfo` structure with the following fields: + + int master; // true if master, false if replica + char *masterhost; // master instance hostname for NOW_REPLICA + int masterport; // master instance port for NOW_REPLICA + char *replid1; // Main replication ID + char *replid2; // Secondary replication ID + uint64_t repl1_offset; // Main replication offset + uint64_t repl2_offset; // Offset of replid2 validity + +* `RedisModuleEvent_Persistence` + + This event is called when RDB saving or AOF rewriting starts + and ends. 
The following sub events are available: + + * `REDISMODULE_SUBEVENT_PERSISTENCE_RDB_START` + * `REDISMODULE_SUBEVENT_PERSISTENCE_AOF_START` + * `REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_RDB_START` + * `REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_AOF_START` + * `REDISMODULE_SUBEVENT_PERSISTENCE_ENDED` + * `REDISMODULE_SUBEVENT_PERSISTENCE_FAILED` + + The above events are triggered not just when the user calls the + relevant commands like BGSAVE, but also when a saving operation + or AOF rewriting occurs because of internal server triggers. + The SYNC_RDB_START sub events are happening in the foreground due to + SAVE command, FLUSHALL, or server shutdown, and the other RDB and + AOF sub events are executed in a background fork child, so any + action the module takes can only affect the generated AOF or RDB, + but will not be reflected in the parent process and affect connected + clients and commands. Also note that the AOF_START sub event may end + up saving RDB content in case of an AOF with rdb-preamble. + +* `RedisModuleEvent_FlushDB` + + The FLUSHALL, FLUSHDB or an internal flush (for instance + because of replication, after the replica synchronization) + happened. The following sub events are available: + + * `REDISMODULE_SUBEVENT_FLUSHDB_START` + * `REDISMODULE_SUBEVENT_FLUSHDB_END` + + The data pointer can be casted to a RedisModuleFlushInfo + structure with the following fields: + + int32_t async; // True if the flush is done in a thread. + // See for instance FLUSHALL ASYNC. + // In this case the END callback is invoked + // immediately after the database is put + // in the free list of the thread. + int32_t dbnum; // Flushed database number, -1 for all the DBs + // in the case of the FLUSHALL operation. + + The start event is called *before* the operation is initiated, thus + allowing the callback to call DBSIZE or other operation on the + yet-to-free keyspace. + +* `RedisModuleEvent_Loading` + + Called on loading operations: at startup when the server is + started, but also after a first synchronization when the + replica is loading the RDB file from the master. + The following sub events are available: + + * `REDISMODULE_SUBEVENT_LOADING_RDB_START` + * `REDISMODULE_SUBEVENT_LOADING_AOF_START` + * `REDISMODULE_SUBEVENT_LOADING_REPL_START` + * `REDISMODULE_SUBEVENT_LOADING_ENDED` + * `REDISMODULE_SUBEVENT_LOADING_FAILED` + + Note that AOF loading may start with an RDB data in case of + rdb-preamble, in which case you'll only receive an AOF_START event. + +* `RedisModuleEvent_ClientChange` + + Called when a client connects or disconnects. + The data pointer can be casted to a RedisModuleClientInfo + structure, documented in RedisModule_GetClientInfoById(). + The following sub events are available: + + * `REDISMODULE_SUBEVENT_CLIENT_CHANGE_CONNECTED` + * `REDISMODULE_SUBEVENT_CLIENT_CHANGE_DISCONNECTED` + +* `RedisModuleEvent_Shutdown` + + The server is shutting down. No subevents are available. + +* `RedisModuleEvent_ReplicaChange` + + This event is called when the instance (that can be both a + master or a replica) get a new online replica, or lose a + replica since it gets disconnected. + The following sub events are available: + + * `REDISMODULE_SUBEVENT_REPLICA_CHANGE_ONLINE` + * `REDISMODULE_SUBEVENT_REPLICA_CHANGE_OFFLINE` + + No additional information is available so far: future versions + of Redis will have an API in order to enumerate the replicas + connected and their state. 
+ +* `RedisModuleEvent_CronLoop` + + This event is called every time Redis calls the serverCron() + function in order to do certain bookkeeping. Modules that are + required to do operations from time to time may use this callback. + Normally Redis calls this function 10 times per second, but + this changes depending on the "hz" configuration. + No sub events are available. + + The data pointer can be casted to a RedisModuleCronLoop + structure with the following fields: + + int32_t hz; // Approximate number of events per second. + +* `RedisModuleEvent_MasterLinkChange` + + This is called for replicas in order to notify when the + replication link becomes functional (up) with our master, + or when it goes down. Note that the link is not considered + up when we just connected to the master, but only if the + replication is happening correctly. + The following sub events are available: + + * `REDISMODULE_SUBEVENT_MASTER_LINK_UP` + * `REDISMODULE_SUBEVENT_MASTER_LINK_DOWN` + +* `RedisModuleEvent_ModuleChange` + + This event is called when a new module is loaded or one is unloaded. + The following sub events are available: + + * `REDISMODULE_SUBEVENT_MODULE_LOADED` + * `REDISMODULE_SUBEVENT_MODULE_UNLOADED` + + The data pointer can be casted to a RedisModuleModuleChange + structure with the following fields: + + const char* module_name; // Name of module loaded or unloaded. + int32_t module_version; // Module version. + +* `RedisModuleEvent_LoadingProgress` + + This event is called repeatedly called while an RDB or AOF file + is being loaded. + The following sub events are available: + + * `REDISMODULE_SUBEVENT_LOADING_PROGRESS_RDB` + * `REDISMODULE_SUBEVENT_LOADING_PROGRESS_AOF` + + The data pointer can be casted to a RedisModuleLoadingProgress + structure with the following fields: + + int32_t hz; // Approximate number of events per second. + int32_t progress; // Approximate progress between 0 and 1024, + // or -1 if unknown. + +* `RedisModuleEvent_SwapDB` + + This event is called when a SWAPDB command has been successfully + Executed. + For this event call currently there is no subevents available. + + The data pointer can be casted to a RedisModuleSwapDbInfo + structure with the following fields: + + int32_t dbnum_first; // Swap Db first dbnum + int32_t dbnum_second; // Swap Db second dbnum + +* `RedisModuleEvent_ReplBackup` + + WARNING: Replication Backup events are deprecated since Redis 7.0 and are never fired. + See RedisModuleEvent_ReplAsyncLoad for understanding how Async Replication Loading events + are now triggered when repl-diskless-load is set to swapdb. + + Called when repl-diskless-load config is set to swapdb, + And redis needs to backup the current database for the + possibility to be restored later. A module with global data and + maybe with aux_load and aux_save callbacks may need to use this + notification to backup / restore / discard its globals. + The following sub events are available: + + * `REDISMODULE_SUBEVENT_REPL_BACKUP_CREATE` + * `REDISMODULE_SUBEVENT_REPL_BACKUP_RESTORE` + * `REDISMODULE_SUBEVENT_REPL_BACKUP_DISCARD` + +* `RedisModuleEvent_ReplAsyncLoad` + + Called when repl-diskless-load config is set to swapdb and a replication with a master of same + data set history (matching replication ID) occurs. + In which case redis serves current data set while loading new database in memory from socket. + Modules must have declared they support this mechanism in order to activate it, through + REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD flag. 
+ The following sub events are available: + + * `REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_STARTED` + * `REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_ABORTED` + * `REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_COMPLETED` + +* `RedisModuleEvent_ForkChild` + + Called when a fork child (AOFRW, RDBSAVE, module fork...) is born/dies + The following sub events are available: + + * `REDISMODULE_SUBEVENT_FORK_CHILD_BORN` + * `REDISMODULE_SUBEVENT_FORK_CHILD_DIED` + +* `RedisModuleEvent_EventLoop` + + Called on each event loop iteration, once just before the event loop goes + to sleep or just after it wakes up. + The following sub events are available: + + * `REDISMODULE_SUBEVENT_EVENTLOOP_BEFORE_SLEEP` + * `REDISMODULE_SUBEVENT_EVENTLOOP_AFTER_SLEEP` + +* `RedisModule_Event_Config` + + Called when a configuration event happens + The following sub events are available: + + * `REDISMODULE_SUBEVENT_CONFIG_CHANGE` + + The data pointer can be casted to a RedisModuleConfigChange + structure with the following fields: + + const char **config_names; // An array of C string pointers containing the + // name of each modified configuration item + uint32_t num_changes; // The number of elements in the config_names array + +* `RedisModule_Event_Key` + + Called when a key is removed from the keyspace. We can't modify any key in + the event. + The following sub events are available: + + * `REDISMODULE_SUBEVENT_KEY_DELETED` + * `REDISMODULE_SUBEVENT_KEY_EXPIRED` + * `REDISMODULE_SUBEVENT_KEY_EVICTED` + * `REDISMODULE_SUBEVENT_KEY_OVERWRITTEN` + + The data pointer can be casted to a RedisModuleKeyInfo + structure with the following fields: + + RedisModuleKey *key; // Key name + +The function returns `REDISMODULE_OK` if the module was successfully subscribed +for the specified event. If the API is called from a wrong context or unsupported event +is given then `REDISMODULE_ERR` is returned. + + + +### `RedisModule_IsSubEventSupported` + + int RedisModule_IsSubEventSupported(RedisModuleEvent event, int64_t subevent); + +**Available since:** 6.0.9 + + +For a given server event and subevent, return zero if the +subevent is not supported and non-zero otherwise. + + + +## Module Configurations API + + + +### `RedisModule_RegisterStringConfig` + + int RedisModule_RegisterStringConfig(RedisModuleCtx *ctx, + const char *name, + const char *default_val, + unsigned int flags, + RedisModuleConfigGetStringFunc getfn, + RedisModuleConfigSetStringFunc setfn, + RedisModuleConfigApplyFunc applyfn, + void *privdata); + +**Available since:** 7.0.0 + +Create a string config that Redis users can interact with via the Redis config file, +`CONFIG SET`, `CONFIG GET`, and `CONFIG REWRITE` commands. + +The actual config value is owned by the module, and the `getfn`, `setfn` and optional +`applyfn` callbacks that are provided to Redis in order to access or manipulate the +value. The `getfn` callback retrieves the value from the module, while the `setfn` +callback provides a value to be stored into the module config. +The optional `applyfn` callback is called after a `CONFIG SET` command modified one or +more configs using the `setfn` callback and can be used to atomically apply a config +after several configs were changed together. +If there are multiple configs with `applyfn` callbacks set by a single `CONFIG SET` +command, they will be deduplicated if their `applyfn` function and `privdata` pointers +are identical, and the callback will only be run once. +Both the `setfn` and `applyfn` can return an error if the provided value is invalid or +cannot be used. 
+The config also declares a type for the value that is validated by Redis and +provided to the module. The config system provides the following types: + +* Redis String: Binary safe string data. +* Enum: One of a finite number of string tokens, provided during registration. +* Numeric: 64 bit signed integer, which also supports min and max values. +* Bool: Yes or no value. + +The `setfn` callback is expected to return `REDISMODULE_OK` when the value is successfully +applied. It can also return `REDISMODULE_ERR` if the value can't be applied, and the +*err pointer can be set with a `RedisModuleString` error message to provide to the client. +This `RedisModuleString` will be freed by redis after returning from the set callback. + +All configs are registered with a name, a type, a default value, private data that is made +available in the callbacks, as well as several flags that modify the behavior of the config. +The name must only contain alphanumeric characters or dashes. The supported flags are: + +* `REDISMODULE_CONFIG_DEFAULT`: The default flags for a config. This creates a config that can be modified after startup. +* `REDISMODULE_CONFIG_IMMUTABLE`: This config can only be provided loading time. +* `REDISMODULE_CONFIG_SENSITIVE`: The value stored in this config is redacted from all logging. +* `REDISMODULE_CONFIG_HIDDEN`: The name is hidden from `CONFIG GET` with pattern matching. +* `REDISMODULE_CONFIG_PROTECTED`: This config will be only be modifiable based off the value of enable-protected-configs. +* `REDISMODULE_CONFIG_DENY_LOADING`: This config is not modifiable while the server is loading data. +* `REDISMODULE_CONFIG_MEMORY`: For numeric configs, this config will convert data unit notations into their byte equivalent. +* `REDISMODULE_CONFIG_BITFLAGS`: For enum configs, this config will allow multiple entries to be combined as bit flags. + +Default values are used on startup to set the value if it is not provided via the config file +or command line. Default values are also used to compare to on a config rewrite. + +Notes: + + 1. On string config sets that the string passed to the set callback will be freed after execution and the module must retain it. + 2. On string config gets the string will not be consumed and will be valid after execution. + +Example implementation: + + RedisModuleString *strval; + int adjustable = 1; + RedisModuleString *getStringConfigCommand(const char *name, void *privdata) { + return strval; + } + + int setStringConfigCommand(const char *name, RedisModuleString *new, void *privdata, RedisModuleString **err) { + if (adjustable) { + RedisModule_Free(strval); + RedisModule_RetainString(NULL, new); + strval = new; + return REDISMODULE_OK; + } + *err = RedisModule_CreateString(NULL, "Not adjustable.", 15); + return REDISMODULE_ERR; + } + ... + RedisModule_RegisterStringConfig(ctx, "string", NULL, REDISMODULE_CONFIG_DEFAULT, getStringConfigCommand, setStringConfigCommand, NULL, NULL); + +If the registration fails, `REDISMODULE_ERR` is returned and one of the following +errno is set: +* EBUSY: Registering the Config outside of `RedisModule_OnLoad`. +* EINVAL: The provided flags are invalid for the registration or the name of the config contains invalid characters. +* EALREADY: The provided configuration name is already used. 
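+
+As an example, a numeric config can be registered inside `RedisModule_OnLoad`
+and then applied with `RedisModule_LoadConfigs` (both described below). This
+is only a sketch; the module name, config name and value range are
+illustrative:
+
+    static long long maxitems = 1024;
+
+    long long getMaxItems(const char *name, void *privdata) {
+        REDISMODULE_NOT_USED(name); REDISMODULE_NOT_USED(privdata);
+        return maxitems;
+    }
+
+    int setMaxItems(const char *name, long long val, void *privdata,
+                    RedisModuleString **err) {
+        REDISMODULE_NOT_USED(name); REDISMODULE_NOT_USED(privdata);
+        REDISMODULE_NOT_USED(err);
+        maxitems = val;
+        return REDISMODULE_OK;
+    }
+
+    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
+        REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
+        if (RedisModule_Init(ctx, "mymodule", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
+            return REDISMODULE_ERR;
+        if (RedisModule_RegisterNumericConfig(ctx, "max-items", 1024,
+                REDISMODULE_CONFIG_DEFAULT, 0, 1000000,
+                getMaxItems, setMaxItems, NULL, NULL) == REDISMODULE_ERR)
+            return REDISMODULE_ERR;
+        /* Apply values given in the config file, startup arguments
+         * or MODULE LOADEX. */
+        return RedisModule_LoadConfigs(ctx);
+    }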
+ + + +### `RedisModule_RegisterBoolConfig` + + int RedisModule_RegisterBoolConfig(RedisModuleCtx *ctx, + const char *name, + int default_val, + unsigned int flags, + RedisModuleConfigGetBoolFunc getfn, + RedisModuleConfigSetBoolFunc setfn, + RedisModuleConfigApplyFunc applyfn, + void *privdata); + +**Available since:** 7.0.0 + +Create a bool config that server clients can interact with via the +`CONFIG SET`, `CONFIG GET`, and `CONFIG REWRITE` commands. See +[`RedisModule_RegisterStringConfig`](#RedisModule_RegisterStringConfig) for detailed information about configs. + + + +### `RedisModule_RegisterEnumConfig` + + int RedisModule_RegisterEnumConfig(RedisModuleCtx *ctx, + const char *name, + int default_val, + unsigned int flags, + const char **enum_values, + const int *int_values, + int num_enum_vals, + RedisModuleConfigGetEnumFunc getfn, + RedisModuleConfigSetEnumFunc setfn, + RedisModuleConfigApplyFunc applyfn, + void *privdata); + +**Available since:** 7.0.0 + + +Create an enum config that server clients can interact with via the +`CONFIG SET`, `CONFIG GET`, and `CONFIG REWRITE` commands. +Enum configs are a set of string tokens to corresponding integer values, where +the string value is exposed to Redis clients but the value passed Redis and the +module is the integer value. These values are defined in `enum_values`, an array +of null-terminated c strings, and `int_vals`, an array of enum values who has an +index partner in `enum_values`. +Example Implementation: + const char *enum_vals[3] = {"first", "second", "third"}; + const int int_vals[3] = {0, 2, 4}; + int enum_val = 0; + + int getEnumConfigCommand(const char *name, void *privdata) { + return enum_val; + } + + int setEnumConfigCommand(const char *name, int val, void *privdata, const char **err) { + enum_val = val; + return REDISMODULE_OK; + } + ... + RedisModule_RegisterEnumConfig(ctx, "enum", 0, REDISMODULE_CONFIG_DEFAULT, enum_vals, int_vals, 3, getEnumConfigCommand, setEnumConfigCommand, NULL, NULL); + +Note that you can use `REDISMODULE_CONFIG_BITFLAGS` so that multiple enum string +can be combined into one integer as bit flags, in which case you may want to +sort your enums so that the preferred combinations are present first. + +See [`RedisModule_RegisterStringConfig`](#RedisModule_RegisterStringConfig) for detailed general information about configs. + + + +### `RedisModule_RegisterNumericConfig` + + int RedisModule_RegisterNumericConfig(RedisModuleCtx *ctx, + const char *name, + long long default_val, + unsigned int flags, + long long min, + long long max, + RedisModuleConfigGetNumericFunc getfn, + RedisModuleConfigSetNumericFunc setfn, + RedisModuleConfigApplyFunc applyfn, + void *privdata); + +**Available since:** 7.0.0 + + +Create an integer config that server clients can interact with via the +`CONFIG SET`, `CONFIG GET`, and `CONFIG REWRITE` commands. See +[`RedisModule_RegisterStringConfig`](#RedisModule_RegisterStringConfig) for detailed information about configs. + + + +### `RedisModule_LoadDefaultConfigs` + + int RedisModule_LoadDefaultConfigs(RedisModuleCtx *ctx); + +**Available since:** unreleased + +Applies all default configurations for the parameters the module registered. +Only call this function if the module would like to make changes to the +configuration values before the actual values are applied by [`RedisModule_LoadConfigs`](#RedisModule_LoadConfigs). +Otherwise it's sufficient to call [`RedisModule_LoadConfigs`](#RedisModule_LoadConfigs), it should already set the default values if needed. 
+This makes it possible to distinguish between default values and user provided values and apply other changes between setting the defaults and the user values. +This will return `REDISMODULE_ERR` if it is called: +1. outside `RedisModule_OnLoad` +2. more than once +3. after the [`RedisModule_LoadConfigs`](#RedisModule_LoadConfigs) call + + + +### `RedisModule_LoadConfigs` + + int RedisModule_LoadConfigs(RedisModuleCtx *ctx); + +**Available since:** 7.0.0 + +Applies all pending configurations on the module load. This should be called +after all of the configurations have been registered for the module inside of `RedisModule_OnLoad`. +This will return `REDISMODULE_ERR` if it is called outside `RedisModule_OnLoad`. +This API needs to be called when configurations are provided in either `MODULE LOADEX` +or provided as startup arguments. + + + +## RDB load/save API + + + +### `RedisModule_RdbStreamCreateFromFile` + + RedisModuleRdbStream *RedisModule_RdbStreamCreateFromFile(const char *filename); + +**Available since:** 7.2.0 + +Create a stream object to save/load RDB to/from a file. + +This function returns a pointer to `RedisModuleRdbStream` which is owned +by the caller. It requires a call to [`RedisModule_RdbStreamFree()`](#RedisModule_RdbStreamFree) to free +the object. + + + +### `RedisModule_RdbStreamFree` + + void RedisModule_RdbStreamFree(RedisModuleRdbStream *stream); + +**Available since:** 7.2.0 + +Release an RDB stream object. + + + +### `RedisModule_RdbLoad` + + int RedisModule_RdbLoad(RedisModuleCtx *ctx, + RedisModuleRdbStream *stream, + int flags); + +**Available since:** 7.2.0 + +Load RDB file from the `stream`. Dataset will be cleared first and then RDB +file will be loaded. + +`flags` must be zero. This parameter is for future use. + +On success `REDISMODULE_OK` is returned, otherwise `REDISMODULE_ERR` is returned +and errno is set accordingly. + +Example: + + RedisModuleRdbStream *s = RedisModule_RdbStreamCreateFromFile("exp.rdb"); + RedisModule_RdbLoad(ctx, s, 0); + RedisModule_RdbStreamFree(s); + + + +### `RedisModule_RdbSave` + + int RedisModule_RdbSave(RedisModuleCtx *ctx, + RedisModuleRdbStream *stream, + int flags); + +**Available since:** 7.2.0 + +Save dataset to the RDB stream. + +`flags` must be zero. This parameter is for future use. + +On success `REDISMODULE_OK` is returned, otherwise `REDISMODULE_ERR` is returned +and errno is set accordingly. + +Example: + + RedisModuleRdbStream *s = RedisModule_RdbStreamCreateFromFile("exp.rdb"); + RedisModule_RdbSave(ctx, s, 0); + RedisModule_RdbStreamFree(s); + + + +### `RedisModule_GetInternalSecret` + + const char* RedisModule_GetInternalSecret(RedisModuleCtx *ctx, size_t *len); + +**Available since:** unreleased + +Returns the internal secret of the cluster. +Should be used to authenticate as an internal connection to a node in the +cluster, and by that gain the permissions to execute internal commands. + + + +## Key eviction API + + + +### `RedisModule_SetLRU` + + int RedisModule_SetLRU(RedisModuleKey *key, mstime_t lru_idle); + +**Available since:** 6.0.0 + +Set the key last access time for LRU based eviction. not relevant if the +servers's maxmemory policy is LFU based. Value is idle time in milliseconds. +returns `REDISMODULE_OK` if the LRU was updated, `REDISMODULE_ERR` otherwise. + + + +### `RedisModule_GetLRU` + + int RedisModule_GetLRU(RedisModuleKey *key, mstime_t *lru_idle); + +**Available since:** 6.0.0 + +Gets the key last access time. 
+Value is idle time in milliseconds or -1 if the server's eviction policy is
+LFU based.
+Returns `REDISMODULE_OK` if the key is valid.
+
+
+### `RedisModule_SetLFU`
+
+    int RedisModule_SetLFU(RedisModuleKey *key, long long lfu_freq);
+
+**Available since:** 6.0.0
+
+Set the key access frequency. Only relevant if the server's maxmemory policy
+is LFU based.
+The frequency is a logarithmic counter that provides an indication of
+the access frequency only (must be <= 255).
+Returns `REDISMODULE_OK` if the LFU was updated, `REDISMODULE_ERR` otherwise.
+
+
+### `RedisModule_GetLFU`
+
+    int RedisModule_GetLFU(RedisModuleKey *key, long long *lfu_freq);
+
+**Available since:** 6.0.0
+
+Gets the key access frequency or -1 if the server's eviction policy is not
+LFU based.
+Returns `REDISMODULE_OK` if the key is valid.
+
+
+## Miscellaneous APIs
+
+
+### `RedisModule_GetModuleOptionsAll`
+
+    int RedisModule_GetModuleOptionsAll(void);
+
+**Available since:** 7.2.0
+
+Returns the full module options flags mask. Using the return value,
+the module can check if a certain set of module options are supported
+by the Redis server version in use.
+Example:
+
+    int supportedFlags = RedisModule_GetModuleOptionsAll();
+    if (supportedFlags & REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS) {
+        // REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS is supported
+    } else {
+        // REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS is not supported
+    }
+
+
+### `RedisModule_GetContextFlagsAll`
+
+    int RedisModule_GetContextFlagsAll(void);
+
+**Available since:** 6.0.9
+
+Returns the full ContextFlags mask. Using the return value,
+the module can check if a certain set of flags are supported
+by the Redis server version in use.
+Example:
+
+    int supportedFlags = RedisModule_GetContextFlagsAll();
+    if (supportedFlags & REDISMODULE_CTX_FLAGS_MULTI) {
+        // REDISMODULE_CTX_FLAGS_MULTI is supported
+    } else {
+        // REDISMODULE_CTX_FLAGS_MULTI is not supported
+    }
+
+
+### `RedisModule_GetKeyspaceNotificationFlagsAll`
+
+    int RedisModule_GetKeyspaceNotificationFlagsAll(void);
+
+**Available since:** 6.0.9
+
+Returns the full KeyspaceNotification mask. Using the return value,
+the module can check if a certain set of flags are supported
+by the Redis server version in use.
+Example:
+
+    int supportedFlags = RedisModule_GetKeyspaceNotificationFlagsAll();
+    if (supportedFlags & REDISMODULE_NOTIFY_LOADED) {
+        // REDISMODULE_NOTIFY_LOADED is supported
+    } else {
+        // REDISMODULE_NOTIFY_LOADED is not supported
+    }
+
+
+### `RedisModule_GetServerVersion`
+
+    int RedisModule_GetServerVersion(void);
+
+**Available since:** 6.0.9
+
+Return the Redis version in the format 0x00MMmmpp.
+For example, for version 6.0.7 the return value is 0x00060007.
+
+
+### `RedisModule_GetTypeMethodVersion`
+
+    int RedisModule_GetTypeMethodVersion(void);
+
+**Available since:** 6.2.0
+
+Return the current redis-server runtime value of `REDISMODULE_TYPE_METHOD_VERSION`.
+You can use that when calling [`RedisModule_CreateDataType`](#RedisModule_CreateDataType) to know which fields of
+`RedisModuleTypeMethods` will be supported and which will be ignored.
+
+
+### `RedisModule_ModuleTypeReplaceValue`
+
+    int RedisModule_ModuleTypeReplaceValue(RedisModuleKey *key,
+                                           moduleType *mt,
+                                           void *new_value,
+                                           void **old_value);
+
+**Available since:** 6.0.0
+
+Replace the value assigned to a module type. 
+
+The key must be open for writing, have an existing value, and have a moduleType
+that matches the one specified by the caller.
+
+Unlike [`RedisModule_ModuleTypeSetValue()`](#RedisModule_ModuleTypeSetValue) which will free the old value, this function
+simply swaps the old value with the new value.
+
+The function returns `REDISMODULE_OK` on success, `REDISMODULE_ERR` on errors
+such as:
+
+1. Key is not opened for writing.
+2. Key is not a module data type key.
+3. Key is a module datatype other than 'mt'.
+
+If `old_value` is non-NULL, the old value is returned by reference.
+
+
+### `RedisModule_GetCommandKeysWithFlags`
+
+    int *RedisModule_GetCommandKeysWithFlags(RedisModuleCtx *ctx,
+                                             RedisModuleString **argv,
+                                             int argc,
+                                             int *num_keys,
+                                             int **out_flags);
+
+**Available since:** 7.0.0
+
+For a specified command, parse its arguments and return an array that
+contains the indexes of all key name arguments. This function is
+essentially a more efficient way to do `COMMAND GETKEYS`.
+
+The `out_flags` argument is optional, and can be set to NULL.
+When provided it is filled with `REDISMODULE_CMD_KEY_` flags in matching
+indexes with the key indexes of the returned array.
+
+A NULL return value indicates the specified command has no keys, or
+an error condition. Error conditions are indicated by setting errno
+as follows:
+
+* ENOENT: Specified command does not exist.
+* EINVAL: Invalid command arity specified.
+
+NOTE: The returned array is not a Redis Module object so it does not
+get automatically freed even when auto-memory is used. The caller
+must explicitly call [`RedisModule_Free()`](#RedisModule_Free) to free it, and the same applies to the `out_flags` pointer if
+used.
+
+
+### `RedisModule_GetCommandKeys`
+
+    int *RedisModule_GetCommandKeys(RedisModuleCtx *ctx,
+                                    RedisModuleString **argv,
+                                    int argc,
+                                    int *num_keys);
+
+**Available since:** 6.0.9
+
+Identical to [`RedisModule_GetCommandKeysWithFlags`](#RedisModule_GetCommandKeysWithFlags) when flags are not needed.
+
+
+### `RedisModule_GetCurrentCommandName`
+
+    const char *RedisModule_GetCurrentCommandName(RedisModuleCtx *ctx);
+
+**Available since:** 6.2.5
+
+Return the name of the command currently running.
+
+
+## Defrag API
+
+
+### `RedisModule_RegisterDefragFunc`
+
+    int RedisModule_RegisterDefragFunc(RedisModuleCtx *ctx,
+                                       RedisModuleDefragFunc cb);
+
+**Available since:** 6.2.0
+
+Register a defrag callback for global data, i.e. anything that the module
+may allocate that is not tied to a specific data type.
+
+
+### `RedisModule_RegisterDefragFunc2`
+
+    int RedisModule_RegisterDefragFunc2(RedisModuleCtx *ctx,
+                                        RedisModuleDefragFunc2 cb);
+
+**Available since:** unreleased
+
+Register a defrag callback for global data, i.e. anything that the module
+may allocate that is not tied to a specific data type.
+This is a more advanced version of [`RedisModule_RegisterDefragFunc`](#RedisModule_RegisterDefragFunc), in that it takes
+a callback that has a return value and can use [`RedisModule_DefragShouldStop`](#RedisModule_DefragShouldStop)
+to indicate that it should be called again later (by returning a non-zero value), or that it is done (by returning 0).
+
+
+### `RedisModule_RegisterDefragCallbacks`
+
+    int RedisModule_RegisterDefragCallbacks(RedisModuleCtx *ctx,
+                                            RedisModuleDefragFunc start,
+                                            RedisModuleDefragFunc end);
+
+**Available since:** unreleased
+
+Register defrag callbacks that will be called when the defrag operation starts and ends. 
+ +The callbacks are the same as [`RedisModule_RegisterDefragFunc`](#RedisModule_RegisterDefragFunc) but the user +can also assume the callbacks are called when the defrag operation starts and ends. + + + +### `RedisModule_DefragShouldStop` + + int RedisModule_DefragShouldStop(RedisModuleDefragCtx *ctx); + +**Available since:** 6.2.0 + +When the data type defrag callback iterates complex structures, this +function should be called periodically. A zero (false) return +indicates the callback may continue its work. A non-zero value (true) +indicates it should stop. + +When stopped, the callback may use [`RedisModule_DefragCursorSet()`](#RedisModule_DefragCursorSet) to store its +position so it can later use [`RedisModule_DefragCursorGet()`](#RedisModule_DefragCursorGet) to resume defragging. + +When stopped and more work is left to be done, the callback should +return 1. Otherwise, it should return 0. + + + +### `RedisModule_DefragCursorSet` + + int RedisModule_DefragCursorSet(RedisModuleDefragCtx *ctx, + unsigned long cursor); + +**Available since:** 6.2.0 + +Store an arbitrary cursor value for future re-use. + +This should only be called if [`RedisModule_DefragShouldStop()`](#RedisModule_DefragShouldStop) has returned a non-zero +value and the defrag callback is about to exit without fully iterating its +data type. + +This behavior is reserved to cases where late defrag is performed. Late +defrag is selected for keys that implement the `free_effort` callback and +return a `free_effort` value that is larger than the defrag +'active-defrag-max-scan-fields' configuration directive. + +Smaller keys, keys that do not implement `free_effort` or the global +defrag callback are not called in late-defrag mode. In those cases, a +call to this function will return `REDISMODULE_ERR`. + +The cursor may be used by the module to represent some progress into the +module's data type. Modules may also store additional cursor-related +information locally and use the cursor as a flag that indicates when +traversal of a new key begins. This is possible because the API makes +a guarantee that concurrent defragmentation of multiple keys will +not be performed. + + + +### `RedisModule_DefragCursorGet` + + int RedisModule_DefragCursorGet(RedisModuleDefragCtx *ctx, + unsigned long *cursor); + +**Available since:** 6.2.0 + +Fetch a cursor value that has been previously stored using [`RedisModule_DefragCursorSet()`](#RedisModule_DefragCursorSet). + +If not called for a late defrag operation, `REDISMODULE_ERR` will be returned and +the cursor should be ignored. See [`RedisModule_DefragCursorSet()`](#RedisModule_DefragCursorSet) for more details on +defrag cursors. + + + +### `RedisModule_DefragAlloc` + + void *RedisModule_DefragAlloc(RedisModuleDefragCtx *ctx, void *ptr); + +**Available since:** 6.2.0 + +Defrag a memory allocation previously allocated by [`RedisModule_Alloc`](#RedisModule_Alloc), [`RedisModule_Calloc`](#RedisModule_Calloc), etc. +The defragmentation process involves allocating a new memory block and copying +the contents to it, like `realloc()`. + +If defragmentation was not necessary, NULL is returned and the operation has +no other effect. + +If a non-NULL value is returned, the caller should use the new pointer instead +of the old one and update any reference to the old pointer, which must not +be used again. 
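+
+As a concrete illustration, a data-type defrag callback for a value that is a single
+allocation might look like the sketch below (the type and callback names are illustrative;
+such a callback is typically registered through the `defrag` field of `RedisModuleTypeMethods`):
+
+    int MyTypeDefrag(RedisModuleDefragCtx *ctx, RedisModuleString *key, void **value) {
+        (void)key; /* Not needed for this simple case. */
+        void *moved = RedisModule_DefragAlloc(ctx, *value);
+        if (moved != NULL) *value = moved; /* The allocation moved: update the reference. */
+        return 0; /* No more work is needed for this key. */
+    }
+
+A callback for a large, multi-node value would instead iterate its nodes, call
+[`RedisModule_DefragShouldStop()`](#RedisModule_DefragShouldStop) periodically, and save its position with
+[`RedisModule_DefragCursorSet()`](#RedisModule_DefragCursorSet) before returning 1 to request another invocation.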
+
+
+### `RedisModule_DefragAllocRaw`
+
+    void *RedisModule_DefragAllocRaw(RedisModuleDefragCtx *ctx, size_t size);
+
+**Available since:** unreleased
+
+Allocate memory for defrag purposes.
+
+In the common case, the user simply wants to reallocate a pointer that has a single
+owner. For that use case, [`RedisModule_DefragAlloc`](#RedisModule_DefragAlloc) is enough. However, in some cases the user
+might want to replace a pointer that has multiple owners in different keys.
+In such cases, an in-place replacement cannot work because the other keys still
+keep a pointer to the old value.
+
+[`RedisModule_DefragAllocRaw`](#RedisModule_DefragAllocRaw) and [`RedisModule_DefragFreeRaw`](#RedisModule_DefragFreeRaw) let the module control when the memory
+used for defrag purposes is allocated and when it is freed,
+which supports more complex defrag use cases.
+
+
+### `RedisModule_DefragFreeRaw`
+
+    void RedisModule_DefragFreeRaw(RedisModuleDefragCtx *ctx, void *ptr);
+
+**Available since:** unreleased
+
+Free memory for defrag purposes.
+
+See [`RedisModule_DefragAllocRaw`](#RedisModule_DefragAllocRaw) for more information.
+
+
+### `RedisModule_DefragRedisModuleString`
+
+    RedisModuleString *RedisModule_DefragRedisModuleString(RedisModuleDefragCtx *ctx,
+                                                           RedisModuleString *str);
+
+**Available since:** 6.2.0
+
+Defrag a `RedisModuleString` previously allocated by [`RedisModule_Alloc`](#RedisModule_Alloc), [`RedisModule_Calloc`](#RedisModule_Calloc), etc.
+See [`RedisModule_DefragAlloc()`](#RedisModule_DefragAlloc) for more information on how the defragmentation process
+works.
+
+NOTE: It is only possible to defrag strings that have a single reference.
+Typically this means strings retained with [`RedisModule_RetainString`](#RedisModule_RetainString) or [`RedisModule_HoldString`](#RedisModule_HoldString)
+may not be defragmentable. One exception is command argv strings which, if retained
+by the module, will end up with a single reference (because the reference
+on the Redis side is dropped as soon as the command callback returns).
+
+
+### `RedisModule_DefragRedisModuleDict`
+
+    RedisModuleDict *RedisModule_DefragRedisModuleDict(RedisModuleDefragCtx *ctx,
+                                                       RedisModuleDict *dict,
+                                                       RedisModuleDefragDictValueCallback valueCB,
+                                                       RedisModuleString **seekTo);
+
+**Available since:** unreleased
+
+Defragment a Redis Module Dictionary by scanning its contents and calling a value
+callback for each value.
+
+The callback gets the current value in the dict and should update `newptr` to the new pointer
+if the value was re-allocated to a different address. The callback also gets the key name, for reference only.
+The callback returns 0 when defrag is complete for this node, and 1 when the node needs more work.
+
+The API can work incrementally by accepting a seek position to continue from, and
+returning the next position to seek to on the next call (or returning NULL when the iteration is completed).
+
+This API returns a new dict if it was re-allocated to a new address (this will only
+be attempted when `*seekTo` is NULL on entry).
+
+
+### `RedisModule_GetKeyNameFromDefragCtx`
+
+    const RedisModuleString *RedisModule_GetKeyNameFromDefragCtx(RedisModuleDefragCtx *ctx);
+
+**Available since:** 7.0.0
+
+Returns the name of the key currently being processed.
+There is no guarantee that the key name is always available, so this may return NULL. 
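+
+For example (a sketch; the bookkeeping is illustrative), a defrag callback might use
+this function defensively, handling the NULL case:
+
+    const RedisModuleString *keyname = RedisModule_GetKeyNameFromDefragCtx(ctx);
+    if (keyname != NULL) {
+        size_t len;
+        const char *name = RedisModule_StringPtrLen(keyname, &len);
+        /* Use the key name, e.g. for per-key bookkeeping or diagnostics. */
+        (void)name; (void)len;
+    }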
+ + + +### `RedisModule_GetDbIdFromDefragCtx` + + int RedisModule_GetDbIdFromDefragCtx(RedisModuleDefragCtx *ctx); + +**Available since:** 7.0.0 + +Returns the database id of the key currently being processed. +There is no guarantee that this info is always available, so this may return -1. + + + +## Function index + +* [`RedisModule_ACLAddLogEntry`](#RedisModule_ACLAddLogEntry) +* [`RedisModule_ACLAddLogEntryByUserName`](#RedisModule_ACLAddLogEntryByUserName) +* [`RedisModule_ACLCheckChannelPermissions`](#RedisModule_ACLCheckChannelPermissions) +* [`RedisModule_ACLCheckCommandPermissions`](#RedisModule_ACLCheckCommandPermissions) +* [`RedisModule_ACLCheckKeyPermissions`](#RedisModule_ACLCheckKeyPermissions) +* [`RedisModule_ACLCheckKeyPrefixPermissions`](#RedisModule_ACLCheckKeyPrefixPermissions) +* [`RedisModule_AbortBlock`](#RedisModule_AbortBlock) +* [`RedisModule_AddACLCategory`](#RedisModule_AddACLCategory) +* [`RedisModule_AddPostNotificationJob`](#RedisModule_AddPostNotificationJob) +* [`RedisModule_Alloc`](#RedisModule_Alloc) +* [`RedisModule_AuthenticateClientWithACLUser`](#RedisModule_AuthenticateClientWithACLUser) +* [`RedisModule_AuthenticateClientWithUser`](#RedisModule_AuthenticateClientWithUser) +* [`RedisModule_AutoMemory`](#RedisModule_AutoMemory) +* [`RedisModule_AvoidReplicaTraffic`](#RedisModule_AvoidReplicaTraffic) +* [`RedisModule_BlockClient`](#RedisModule_BlockClient) +* [`RedisModule_BlockClientGetPrivateData`](#RedisModule_BlockClientGetPrivateData) +* [`RedisModule_BlockClientOnAuth`](#RedisModule_BlockClientOnAuth) +* [`RedisModule_BlockClientOnKeys`](#RedisModule_BlockClientOnKeys) +* [`RedisModule_BlockClientOnKeysWithFlags`](#RedisModule_BlockClientOnKeysWithFlags) +* [`RedisModule_BlockClientSetPrivateData`](#RedisModule_BlockClientSetPrivateData) +* [`RedisModule_BlockedClientDisconnected`](#RedisModule_BlockedClientDisconnected) +* [`RedisModule_BlockedClientMeasureTimeEnd`](#RedisModule_BlockedClientMeasureTimeEnd) +* [`RedisModule_BlockedClientMeasureTimeStart`](#RedisModule_BlockedClientMeasureTimeStart) +* [`RedisModule_CachedMicroseconds`](#RedisModule_CachedMicroseconds) +* [`RedisModule_Call`](#RedisModule_Call) +* [`RedisModule_CallReplyArrayElement`](#RedisModule_CallReplyArrayElement) +* [`RedisModule_CallReplyAttribute`](#RedisModule_CallReplyAttribute) +* [`RedisModule_CallReplyAttributeElement`](#RedisModule_CallReplyAttributeElement) +* [`RedisModule_CallReplyBigNumber`](#RedisModule_CallReplyBigNumber) +* [`RedisModule_CallReplyBool`](#RedisModule_CallReplyBool) +* [`RedisModule_CallReplyDouble`](#RedisModule_CallReplyDouble) +* [`RedisModule_CallReplyInteger`](#RedisModule_CallReplyInteger) +* [`RedisModule_CallReplyLength`](#RedisModule_CallReplyLength) +* [`RedisModule_CallReplyMapElement`](#RedisModule_CallReplyMapElement) +* [`RedisModule_CallReplyPromiseAbort`](#RedisModule_CallReplyPromiseAbort) +* [`RedisModule_CallReplyPromiseSetUnblockHandler`](#RedisModule_CallReplyPromiseSetUnblockHandler) +* [`RedisModule_CallReplyProto`](#RedisModule_CallReplyProto) +* [`RedisModule_CallReplySetElement`](#RedisModule_CallReplySetElement) +* [`RedisModule_CallReplyStringPtr`](#RedisModule_CallReplyStringPtr) +* [`RedisModule_CallReplyType`](#RedisModule_CallReplyType) +* [`RedisModule_CallReplyVerbatim`](#RedisModule_CallReplyVerbatim) +* [`RedisModule_Calloc`](#RedisModule_Calloc) +* [`RedisModule_ChannelAtPosWithFlags`](#RedisModule_ChannelAtPosWithFlags) +* [`RedisModule_CloseKey`](#RedisModule_CloseKey) +* 
[`RedisModule_ClusterCanonicalKeyNameInSlot`](#RedisModule_ClusterCanonicalKeyNameInSlot) +* [`RedisModule_ClusterKeySlot`](#RedisModule_ClusterKeySlot) +* [`RedisModule_CommandFilterArgDelete`](#RedisModule_CommandFilterArgDelete) +* [`RedisModule_CommandFilterArgGet`](#RedisModule_CommandFilterArgGet) +* [`RedisModule_CommandFilterArgInsert`](#RedisModule_CommandFilterArgInsert) +* [`RedisModule_CommandFilterArgReplace`](#RedisModule_CommandFilterArgReplace) +* [`RedisModule_CommandFilterArgsCount`](#RedisModule_CommandFilterArgsCount) +* [`RedisModule_CommandFilterGetClientId`](#RedisModule_CommandFilterGetClientId) +* [`RedisModule_CreateCommand`](#RedisModule_CreateCommand) +* [`RedisModule_CreateDataType`](#RedisModule_CreateDataType) +* [`RedisModule_CreateDict`](#RedisModule_CreateDict) +* [`RedisModule_CreateModuleUser`](#RedisModule_CreateModuleUser) +* [`RedisModule_CreateString`](#RedisModule_CreateString) +* [`RedisModule_CreateStringFromCallReply`](#RedisModule_CreateStringFromCallReply) +* [`RedisModule_CreateStringFromDouble`](#RedisModule_CreateStringFromDouble) +* [`RedisModule_CreateStringFromLongDouble`](#RedisModule_CreateStringFromLongDouble) +* [`RedisModule_CreateStringFromLongLong`](#RedisModule_CreateStringFromLongLong) +* [`RedisModule_CreateStringFromStreamID`](#RedisModule_CreateStringFromStreamID) +* [`RedisModule_CreateStringFromString`](#RedisModule_CreateStringFromString) +* [`RedisModule_CreateStringFromULongLong`](#RedisModule_CreateStringFromULongLong) +* [`RedisModule_CreateStringPrintf`](#RedisModule_CreateStringPrintf) +* [`RedisModule_CreateSubcommand`](#RedisModule_CreateSubcommand) +* [`RedisModule_CreateTimer`](#RedisModule_CreateTimer) +* [`RedisModule_DbSize`](#RedisModule_DbSize) +* [`RedisModule_DeauthenticateAndCloseClient`](#RedisModule_DeauthenticateAndCloseClient) +* [`RedisModule_DefragAlloc`](#RedisModule_DefragAlloc) +* [`RedisModule_DefragAllocRaw`](#RedisModule_DefragAllocRaw) +* [`RedisModule_DefragCursorGet`](#RedisModule_DefragCursorGet) +* [`RedisModule_DefragCursorSet`](#RedisModule_DefragCursorSet) +* [`RedisModule_DefragFreeRaw`](#RedisModule_DefragFreeRaw) +* [`RedisModule_DefragRedisModuleDict`](#RedisModule_DefragRedisModuleDict) +* [`RedisModule_DefragRedisModuleString`](#RedisModule_DefragRedisModuleString) +* [`RedisModule_DefragShouldStop`](#RedisModule_DefragShouldStop) +* [`RedisModule_DeleteKey`](#RedisModule_DeleteKey) +* [`RedisModule_DictCompare`](#RedisModule_DictCompare) +* [`RedisModule_DictCompareC`](#RedisModule_DictCompareC) +* [`RedisModule_DictDel`](#RedisModule_DictDel) +* [`RedisModule_DictDelC`](#RedisModule_DictDelC) +* [`RedisModule_DictGet`](#RedisModule_DictGet) +* [`RedisModule_DictGetC`](#RedisModule_DictGetC) +* [`RedisModule_DictIteratorReseek`](#RedisModule_DictIteratorReseek) +* [`RedisModule_DictIteratorReseekC`](#RedisModule_DictIteratorReseekC) +* [`RedisModule_DictIteratorStart`](#RedisModule_DictIteratorStart) +* [`RedisModule_DictIteratorStartC`](#RedisModule_DictIteratorStartC) +* [`RedisModule_DictIteratorStop`](#RedisModule_DictIteratorStop) +* [`RedisModule_DictNext`](#RedisModule_DictNext) +* [`RedisModule_DictNextC`](#RedisModule_DictNextC) +* [`RedisModule_DictPrev`](#RedisModule_DictPrev) +* [`RedisModule_DictPrevC`](#RedisModule_DictPrevC) +* [`RedisModule_DictReplace`](#RedisModule_DictReplace) +* [`RedisModule_DictReplaceC`](#RedisModule_DictReplaceC) +* [`RedisModule_DictSet`](#RedisModule_DictSet) +* [`RedisModule_DictSetC`](#RedisModule_DictSetC) +* 
[`RedisModule_DictSize`](#RedisModule_DictSize) +* [`RedisModule_DigestAddLongLong`](#RedisModule_DigestAddLongLong) +* [`RedisModule_DigestAddStringBuffer`](#RedisModule_DigestAddStringBuffer) +* [`RedisModule_DigestEndSequence`](#RedisModule_DigestEndSequence) +* [`RedisModule_EmitAOF`](#RedisModule_EmitAOF) +* [`RedisModule_EventLoopAdd`](#RedisModule_EventLoopAdd) +* [`RedisModule_EventLoopAddOneShot`](#RedisModule_EventLoopAddOneShot) +* [`RedisModule_EventLoopDel`](#RedisModule_EventLoopDel) +* [`RedisModule_ExitFromChild`](#RedisModule_ExitFromChild) +* [`RedisModule_ExportSharedAPI`](#RedisModule_ExportSharedAPI) +* [`RedisModule_Fork`](#RedisModule_Fork) +* [`RedisModule_Free`](#RedisModule_Free) +* [`RedisModule_FreeCallReply`](#RedisModule_FreeCallReply) +* [`RedisModule_FreeClusterNodesList`](#RedisModule_FreeClusterNodesList) +* [`RedisModule_FreeDict`](#RedisModule_FreeDict) +* [`RedisModule_FreeModuleUser`](#RedisModule_FreeModuleUser) +* [`RedisModule_FreeServerInfo`](#RedisModule_FreeServerInfo) +* [`RedisModule_FreeString`](#RedisModule_FreeString) +* [`RedisModule_FreeThreadSafeContext`](#RedisModule_FreeThreadSafeContext) +* [`RedisModule_GetAbsExpire`](#RedisModule_GetAbsExpire) +* [`RedisModule_GetBlockedClientHandle`](#RedisModule_GetBlockedClientHandle) +* [`RedisModule_GetBlockedClientPrivateData`](#RedisModule_GetBlockedClientPrivateData) +* [`RedisModule_GetBlockedClientReadyKey`](#RedisModule_GetBlockedClientReadyKey) +* [`RedisModule_GetClientCertificate`](#RedisModule_GetClientCertificate) +* [`RedisModule_GetClientId`](#RedisModule_GetClientId) +* [`RedisModule_GetClientInfoById`](#RedisModule_GetClientInfoById) +* [`RedisModule_GetClientNameById`](#RedisModule_GetClientNameById) +* [`RedisModule_GetClientUserNameById`](#RedisModule_GetClientUserNameById) +* [`RedisModule_GetClusterNodeInfo`](#RedisModule_GetClusterNodeInfo) +* [`RedisModule_GetClusterNodesList`](#RedisModule_GetClusterNodesList) +* [`RedisModule_GetClusterSize`](#RedisModule_GetClusterSize) +* [`RedisModule_GetCommand`](#RedisModule_GetCommand) +* [`RedisModule_GetCommandKeys`](#RedisModule_GetCommandKeys) +* [`RedisModule_GetCommandKeysWithFlags`](#RedisModule_GetCommandKeysWithFlags) +* [`RedisModule_GetContextFlags`](#RedisModule_GetContextFlags) +* [`RedisModule_GetContextFlagsAll`](#RedisModule_GetContextFlagsAll) +* [`RedisModule_GetCurrentCommandName`](#RedisModule_GetCurrentCommandName) +* [`RedisModule_GetCurrentUserName`](#RedisModule_GetCurrentUserName) +* [`RedisModule_GetDbIdFromDefragCtx`](#RedisModule_GetDbIdFromDefragCtx) +* [`RedisModule_GetDbIdFromDigest`](#RedisModule_GetDbIdFromDigest) +* [`RedisModule_GetDbIdFromIO`](#RedisModule_GetDbIdFromIO) +* [`RedisModule_GetDbIdFromModuleKey`](#RedisModule_GetDbIdFromModuleKey) +* [`RedisModule_GetDbIdFromOptCtx`](#RedisModule_GetDbIdFromOptCtx) +* [`RedisModule_GetDetachedThreadSafeContext`](#RedisModule_GetDetachedThreadSafeContext) +* [`RedisModule_GetExpire`](#RedisModule_GetExpire) +* [`RedisModule_GetInternalSecret`](#RedisModule_GetInternalSecret) +* [`RedisModule_GetKeyNameFromDefragCtx`](#RedisModule_GetKeyNameFromDefragCtx) +* [`RedisModule_GetKeyNameFromDigest`](#RedisModule_GetKeyNameFromDigest) +* [`RedisModule_GetKeyNameFromIO`](#RedisModule_GetKeyNameFromIO) +* [`RedisModule_GetKeyNameFromModuleKey`](#RedisModule_GetKeyNameFromModuleKey) +* [`RedisModule_GetKeyNameFromOptCtx`](#RedisModule_GetKeyNameFromOptCtx) +* [`RedisModule_GetKeyspaceNotificationFlagsAll`](#RedisModule_GetKeyspaceNotificationFlagsAll) +* 
[`RedisModule_GetLFU`](#RedisModule_GetLFU) +* [`RedisModule_GetLRU`](#RedisModule_GetLRU) +* [`RedisModule_GetModuleOptionsAll`](#RedisModule_GetModuleOptionsAll) +* [`RedisModule_GetModuleUserACLString`](#RedisModule_GetModuleUserACLString) +* [`RedisModule_GetModuleUserFromUserName`](#RedisModule_GetModuleUserFromUserName) +* [`RedisModule_GetMyClusterID`](#RedisModule_GetMyClusterID) +* [`RedisModule_GetNotifyKeyspaceEvents`](#RedisModule_GetNotifyKeyspaceEvents) +* [`RedisModule_GetOpenKeyModesAll`](#RedisModule_GetOpenKeyModesAll) +* [`RedisModule_GetRandomBytes`](#RedisModule_GetRandomBytes) +* [`RedisModule_GetRandomHexChars`](#RedisModule_GetRandomHexChars) +* [`RedisModule_GetSelectedDb`](#RedisModule_GetSelectedDb) +* [`RedisModule_GetServerInfo`](#RedisModule_GetServerInfo) +* [`RedisModule_GetServerVersion`](#RedisModule_GetServerVersion) +* [`RedisModule_GetSharedAPI`](#RedisModule_GetSharedAPI) +* [`RedisModule_GetThreadSafeContext`](#RedisModule_GetThreadSafeContext) +* [`RedisModule_GetTimerInfo`](#RedisModule_GetTimerInfo) +* [`RedisModule_GetToDbIdFromOptCtx`](#RedisModule_GetToDbIdFromOptCtx) +* [`RedisModule_GetToKeyNameFromOptCtx`](#RedisModule_GetToKeyNameFromOptCtx) +* [`RedisModule_GetTypeMethodVersion`](#RedisModule_GetTypeMethodVersion) +* [`RedisModule_GetUsedMemoryRatio`](#RedisModule_GetUsedMemoryRatio) +* [`RedisModule_HashFieldMinExpire`](#RedisModule_HashFieldMinExpire) +* [`RedisModule_HashGet`](#RedisModule_HashGet) +* [`RedisModule_HashSet`](#RedisModule_HashSet) +* [`RedisModule_HoldString`](#RedisModule_HoldString) +* [`RedisModule_InfoAddFieldCString`](#RedisModule_InfoAddFieldCString) +* [`RedisModule_InfoAddFieldDouble`](#RedisModule_InfoAddFieldDouble) +* [`RedisModule_InfoAddFieldLongLong`](#RedisModule_InfoAddFieldLongLong) +* [`RedisModule_InfoAddFieldString`](#RedisModule_InfoAddFieldString) +* [`RedisModule_InfoAddFieldULongLong`](#RedisModule_InfoAddFieldULongLong) +* [`RedisModule_InfoAddSection`](#RedisModule_InfoAddSection) +* [`RedisModule_InfoBeginDictField`](#RedisModule_InfoBeginDictField) +* [`RedisModule_InfoEndDictField`](#RedisModule_InfoEndDictField) +* [`RedisModule_IsBlockedReplyRequest`](#RedisModule_IsBlockedReplyRequest) +* [`RedisModule_IsBlockedTimeoutRequest`](#RedisModule_IsBlockedTimeoutRequest) +* [`RedisModule_IsChannelsPositionRequest`](#RedisModule_IsChannelsPositionRequest) +* [`RedisModule_IsIOError`](#RedisModule_IsIOError) +* [`RedisModule_IsKeysPositionRequest`](#RedisModule_IsKeysPositionRequest) +* [`RedisModule_IsModuleNameBusy`](#RedisModule_IsModuleNameBusy) +* [`RedisModule_IsSubEventSupported`](#RedisModule_IsSubEventSupported) +* [`RedisModule_KeyAtPos`](#RedisModule_KeyAtPos) +* [`RedisModule_KeyAtPosWithFlags`](#RedisModule_KeyAtPosWithFlags) +* [`RedisModule_KeyExists`](#RedisModule_KeyExists) +* [`RedisModule_KeyType`](#RedisModule_KeyType) +* [`RedisModule_KillForkChild`](#RedisModule_KillForkChild) +* [`RedisModule_LatencyAddSample`](#RedisModule_LatencyAddSample) +* [`RedisModule_ListDelete`](#RedisModule_ListDelete) +* [`RedisModule_ListGet`](#RedisModule_ListGet) +* [`RedisModule_ListInsert`](#RedisModule_ListInsert) +* [`RedisModule_ListPop`](#RedisModule_ListPop) +* [`RedisModule_ListPush`](#RedisModule_ListPush) +* [`RedisModule_ListSet`](#RedisModule_ListSet) +* [`RedisModule_LoadConfigs`](#RedisModule_LoadConfigs) +* [`RedisModule_LoadDataTypeFromString`](#RedisModule_LoadDataTypeFromString) +* [`RedisModule_LoadDataTypeFromStringEncver`](#RedisModule_LoadDataTypeFromStringEncver) +* 
[`RedisModule_LoadDefaultConfigs`](#RedisModule_LoadDefaultConfigs) +* [`RedisModule_LoadDouble`](#RedisModule_LoadDouble) +* [`RedisModule_LoadFloat`](#RedisModule_LoadFloat) +* [`RedisModule_LoadLongDouble`](#RedisModule_LoadLongDouble) +* [`RedisModule_LoadSigned`](#RedisModule_LoadSigned) +* [`RedisModule_LoadString`](#RedisModule_LoadString) +* [`RedisModule_LoadStringBuffer`](#RedisModule_LoadStringBuffer) +* [`RedisModule_LoadUnsigned`](#RedisModule_LoadUnsigned) +* [`RedisModule_Log`](#RedisModule_Log) +* [`RedisModule_LogIOError`](#RedisModule_LogIOError) +* [`RedisModule_MallocSize`](#RedisModule_MallocSize) +* [`RedisModule_MallocSizeDict`](#RedisModule_MallocSizeDict) +* [`RedisModule_MallocSizeString`](#RedisModule_MallocSizeString) +* [`RedisModule_MallocUsableSize`](#RedisModule_MallocUsableSize) +* [`RedisModule_Microseconds`](#RedisModule_Microseconds) +* [`RedisModule_Milliseconds`](#RedisModule_Milliseconds) +* [`RedisModule_ModuleTypeGetType`](#RedisModule_ModuleTypeGetType) +* [`RedisModule_ModuleTypeGetValue`](#RedisModule_ModuleTypeGetValue) +* [`RedisModule_ModuleTypeReplaceValue`](#RedisModule_ModuleTypeReplaceValue) +* [`RedisModule_ModuleTypeSetValue`](#RedisModule_ModuleTypeSetValue) +* [`RedisModule_MonotonicMicroseconds`](#RedisModule_MonotonicMicroseconds) +* [`RedisModule_NotifyKeyspaceEvent`](#RedisModule_NotifyKeyspaceEvent) +* [`RedisModule_OpenKey`](#RedisModule_OpenKey) +* [`RedisModule_PoolAlloc`](#RedisModule_PoolAlloc) +* [`RedisModule_PublishMessage`](#RedisModule_PublishMessage) +* [`RedisModule_PublishMessageShard`](#RedisModule_PublishMessageShard) +* [`RedisModule_RandomKey`](#RedisModule_RandomKey) +* [`RedisModule_RdbLoad`](#RedisModule_RdbLoad) +* [`RedisModule_RdbSave`](#RedisModule_RdbSave) +* [`RedisModule_RdbStreamCreateFromFile`](#RedisModule_RdbStreamCreateFromFile) +* [`RedisModule_RdbStreamFree`](#RedisModule_RdbStreamFree) +* [`RedisModule_Realloc`](#RedisModule_Realloc) +* [`RedisModule_RedactClientCommandArgument`](#RedisModule_RedactClientCommandArgument) +* [`RedisModule_RegisterAuthCallback`](#RedisModule_RegisterAuthCallback) +* [`RedisModule_RegisterBoolConfig`](#RedisModule_RegisterBoolConfig) +* [`RedisModule_RegisterClusterMessageReceiver`](#RedisModule_RegisterClusterMessageReceiver) +* [`RedisModule_RegisterCommandFilter`](#RedisModule_RegisterCommandFilter) +* [`RedisModule_RegisterDefragCallbacks`](#RedisModule_RegisterDefragCallbacks) +* [`RedisModule_RegisterDefragFunc`](#RedisModule_RegisterDefragFunc) +* [`RedisModule_RegisterDefragFunc2`](#RedisModule_RegisterDefragFunc2) +* [`RedisModule_RegisterEnumConfig`](#RedisModule_RegisterEnumConfig) +* [`RedisModule_RegisterInfoFunc`](#RedisModule_RegisterInfoFunc) +* [`RedisModule_RegisterNumericConfig`](#RedisModule_RegisterNumericConfig) +* [`RedisModule_RegisterStringConfig`](#RedisModule_RegisterStringConfig) +* [`RedisModule_Replicate`](#RedisModule_Replicate) +* [`RedisModule_ReplicateVerbatim`](#RedisModule_ReplicateVerbatim) +* [`RedisModule_ReplySetArrayLength`](#RedisModule_ReplySetArrayLength) +* [`RedisModule_ReplySetAttributeLength`](#RedisModule_ReplySetAttributeLength) +* [`RedisModule_ReplySetMapLength`](#RedisModule_ReplySetMapLength) +* [`RedisModule_ReplySetSetLength`](#RedisModule_ReplySetSetLength) +* [`RedisModule_ReplyWithArray`](#RedisModule_ReplyWithArray) +* [`RedisModule_ReplyWithAttribute`](#RedisModule_ReplyWithAttribute) +* [`RedisModule_ReplyWithBigNumber`](#RedisModule_ReplyWithBigNumber) +* 
[`RedisModule_ReplyWithBool`](#RedisModule_ReplyWithBool) +* [`RedisModule_ReplyWithCString`](#RedisModule_ReplyWithCString) +* [`RedisModule_ReplyWithCallReply`](#RedisModule_ReplyWithCallReply) +* [`RedisModule_ReplyWithDouble`](#RedisModule_ReplyWithDouble) +* [`RedisModule_ReplyWithEmptyArray`](#RedisModule_ReplyWithEmptyArray) +* [`RedisModule_ReplyWithEmptyString`](#RedisModule_ReplyWithEmptyString) +* [`RedisModule_ReplyWithError`](#RedisModule_ReplyWithError) +* [`RedisModule_ReplyWithErrorFormat`](#RedisModule_ReplyWithErrorFormat) +* [`RedisModule_ReplyWithLongDouble`](#RedisModule_ReplyWithLongDouble) +* [`RedisModule_ReplyWithLongLong`](#RedisModule_ReplyWithLongLong) +* [`RedisModule_ReplyWithMap`](#RedisModule_ReplyWithMap) +* [`RedisModule_ReplyWithNull`](#RedisModule_ReplyWithNull) +* [`RedisModule_ReplyWithNullArray`](#RedisModule_ReplyWithNullArray) +* [`RedisModule_ReplyWithSet`](#RedisModule_ReplyWithSet) +* [`RedisModule_ReplyWithSimpleString`](#RedisModule_ReplyWithSimpleString) +* [`RedisModule_ReplyWithString`](#RedisModule_ReplyWithString) +* [`RedisModule_ReplyWithStringBuffer`](#RedisModule_ReplyWithStringBuffer) +* [`RedisModule_ReplyWithVerbatimString`](#RedisModule_ReplyWithVerbatimString) +* [`RedisModule_ReplyWithVerbatimStringType`](#RedisModule_ReplyWithVerbatimStringType) +* [`RedisModule_ResetDataset`](#RedisModule_ResetDataset) +* [`RedisModule_RetainString`](#RedisModule_RetainString) +* [`RedisModule_SaveDataTypeToString`](#RedisModule_SaveDataTypeToString) +* [`RedisModule_SaveDouble`](#RedisModule_SaveDouble) +* [`RedisModule_SaveFloat`](#RedisModule_SaveFloat) +* [`RedisModule_SaveLongDouble`](#RedisModule_SaveLongDouble) +* [`RedisModule_SaveSigned`](#RedisModule_SaveSigned) +* [`RedisModule_SaveString`](#RedisModule_SaveString) +* [`RedisModule_SaveStringBuffer`](#RedisModule_SaveStringBuffer) +* [`RedisModule_SaveUnsigned`](#RedisModule_SaveUnsigned) +* [`RedisModule_Scan`](#RedisModule_Scan) +* [`RedisModule_ScanCursorCreate`](#RedisModule_ScanCursorCreate) +* [`RedisModule_ScanCursorDestroy`](#RedisModule_ScanCursorDestroy) +* [`RedisModule_ScanCursorRestart`](#RedisModule_ScanCursorRestart) +* [`RedisModule_ScanKey`](#RedisModule_ScanKey) +* [`RedisModule_SelectDb`](#RedisModule_SelectDb) +* [`RedisModule_SendChildHeartbeat`](#RedisModule_SendChildHeartbeat) +* [`RedisModule_SendClusterMessage`](#RedisModule_SendClusterMessage) +* [`RedisModule_ServerInfoGetField`](#RedisModule_ServerInfoGetField) +* [`RedisModule_ServerInfoGetFieldC`](#RedisModule_ServerInfoGetFieldC) +* [`RedisModule_ServerInfoGetFieldDouble`](#RedisModule_ServerInfoGetFieldDouble) +* [`RedisModule_ServerInfoGetFieldSigned`](#RedisModule_ServerInfoGetFieldSigned) +* [`RedisModule_ServerInfoGetFieldUnsigned`](#RedisModule_ServerInfoGetFieldUnsigned) +* [`RedisModule_SetAbsExpire`](#RedisModule_SetAbsExpire) +* [`RedisModule_SetClientNameById`](#RedisModule_SetClientNameById) +* [`RedisModule_SetClusterFlags`](#RedisModule_SetClusterFlags) +* [`RedisModule_SetCommandACLCategories`](#RedisModule_SetCommandACLCategories) +* [`RedisModule_SetCommandInfo`](#RedisModule_SetCommandInfo) +* [`RedisModule_SetContextUser`](#RedisModule_SetContextUser) +* [`RedisModule_SetDisconnectCallback`](#RedisModule_SetDisconnectCallback) +* [`RedisModule_SetExpire`](#RedisModule_SetExpire) +* [`RedisModule_SetLFU`](#RedisModule_SetLFU) +* [`RedisModule_SetLRU`](#RedisModule_SetLRU) +* [`RedisModule_SetModuleOptions`](#RedisModule_SetModuleOptions) +* 
[`RedisModule_SetModuleUserACL`](#RedisModule_SetModuleUserACL) +* [`RedisModule_SetModuleUserACLString`](#RedisModule_SetModuleUserACLString) +* [`RedisModule_SignalKeyAsReady`](#RedisModule_SignalKeyAsReady) +* [`RedisModule_SignalModifiedKey`](#RedisModule_SignalModifiedKey) +* [`RedisModule_StopTimer`](#RedisModule_StopTimer) +* [`RedisModule_Strdup`](#RedisModule_Strdup) +* [`RedisModule_StreamAdd`](#RedisModule_StreamAdd) +* [`RedisModule_StreamDelete`](#RedisModule_StreamDelete) +* [`RedisModule_StreamIteratorDelete`](#RedisModule_StreamIteratorDelete) +* [`RedisModule_StreamIteratorNextField`](#RedisModule_StreamIteratorNextField) +* [`RedisModule_StreamIteratorNextID`](#RedisModule_StreamIteratorNextID) +* [`RedisModule_StreamIteratorStart`](#RedisModule_StreamIteratorStart) +* [`RedisModule_StreamIteratorStop`](#RedisModule_StreamIteratorStop) +* [`RedisModule_StreamTrimByID`](#RedisModule_StreamTrimByID) +* [`RedisModule_StreamTrimByLength`](#RedisModule_StreamTrimByLength) +* [`RedisModule_StringAppendBuffer`](#RedisModule_StringAppendBuffer) +* [`RedisModule_StringCompare`](#RedisModule_StringCompare) +* [`RedisModule_StringDMA`](#RedisModule_StringDMA) +* [`RedisModule_StringPtrLen`](#RedisModule_StringPtrLen) +* [`RedisModule_StringSet`](#RedisModule_StringSet) +* [`RedisModule_StringToDouble`](#RedisModule_StringToDouble) +* [`RedisModule_StringToLongDouble`](#RedisModule_StringToLongDouble) +* [`RedisModule_StringToLongLong`](#RedisModule_StringToLongLong) +* [`RedisModule_StringToStreamID`](#RedisModule_StringToStreamID) +* [`RedisModule_StringToULongLong`](#RedisModule_StringToULongLong) +* [`RedisModule_StringTruncate`](#RedisModule_StringTruncate) +* [`RedisModule_SubscribeToKeyspaceEvents`](#RedisModule_SubscribeToKeyspaceEvents) +* [`RedisModule_SubscribeToServerEvent`](#RedisModule_SubscribeToServerEvent) +* [`RedisModule_ThreadSafeContextLock`](#RedisModule_ThreadSafeContextLock) +* [`RedisModule_ThreadSafeContextTryLock`](#RedisModule_ThreadSafeContextTryLock) +* [`RedisModule_ThreadSafeContextUnlock`](#RedisModule_ThreadSafeContextUnlock) +* [`RedisModule_TrimStringAllocation`](#RedisModule_TrimStringAllocation) +* [`RedisModule_TryAlloc`](#RedisModule_TryAlloc) +* [`RedisModule_TryCalloc`](#RedisModule_TryCalloc) +* [`RedisModule_TryRealloc`](#RedisModule_TryRealloc) +* [`RedisModule_UnblockClient`](#RedisModule_UnblockClient) +* [`RedisModule_UnlinkKey`](#RedisModule_UnlinkKey) +* [`RedisModule_UnregisterCommandFilter`](#RedisModule_UnregisterCommandFilter) +* [`RedisModule_ValueLength`](#RedisModule_ValueLength) +* [`RedisModule_WrongArity`](#RedisModule_WrongArity) +* [`RedisModule_Yield`](#RedisModule_Yield) +* [`RedisModule_ZsetAdd`](#RedisModule_ZsetAdd) +* [`RedisModule_ZsetFirstInLexRange`](#RedisModule_ZsetFirstInLexRange) +* [`RedisModule_ZsetFirstInScoreRange`](#RedisModule_ZsetFirstInScoreRange) +* [`RedisModule_ZsetIncrby`](#RedisModule_ZsetIncrby) +* [`RedisModule_ZsetLastInLexRange`](#RedisModule_ZsetLastInLexRange) +* [`RedisModule_ZsetLastInScoreRange`](#RedisModule_ZsetLastInScoreRange) +* [`RedisModule_ZsetRangeCurrentElement`](#RedisModule_ZsetRangeCurrentElement) +* [`RedisModule_ZsetRangeEndReached`](#RedisModule_ZsetRangeEndReached) +* [`RedisModule_ZsetRangeNext`](#RedisModule_ZsetRangeNext) +* [`RedisModule_ZsetRangePrev`](#RedisModule_ZsetRangePrev) +* [`RedisModule_ZsetRangeStop`](#RedisModule_ZsetRangeStop) +* [`RedisModule_ZsetRem`](#RedisModule_ZsetRem) +* [`RedisModule_ZsetScore`](#RedisModule_ZsetScore) +* 
[`RedisModule__Assert`](#RedisModule__Assert)
+
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: How to use native types in a Redis module
+linkTitle: Native types API
+title: Modules API for native types
+weight: 1
+---
+
+Redis modules can access Redis built-in data structures both at a high level,
+by calling Redis commands, and at a low level, by manipulating the data structures
+directly.
+
+By using these capabilities to build new abstractions on top of existing
+Redis data structures, or by using string DMA to encode module
+data structures into Redis strings, it is possible to create modules that
+*feel like* they are exporting new data types. However, for more complex
+problems, this is not enough, and the implementation of new data structures
+inside the module is needed.
+
+We call the ability of Redis modules to implement new data structures that
+feel like native Redis ones **native types support**. This document describes
+the API exported by the Redis modules system in order to create new data
+structures and handle the serialization in RDB files, the rewriting process
+in AOF, the type reporting via the [`TYPE`]({{< relref "/commands/type" >}}) command, and so forth.
+
+## Overview of native types
+
+A module exporting a native type is composed of the following main parts:
+
+* The implementation of some kind of new data structure and of commands operating on the new data structure.
+* A set of callbacks that handle: RDB saving, RDB loading, AOF rewriting, releasing of a value associated with a key, calculation of a value digest (hash) to be used with the `DEBUG DIGEST` command.
+* A 9-character name that is unique to each module native data type.
+* An encoding version, used to persist into RDB files a module-specific data version, so that a module will be able to load older representations from RDB files.
+
+While handling RDB loading, saving, and AOF rewriting may look complex at first glance, the modules API provides very high level functions for all of this, without requiring the user to handle read/write errors, so in practical terms writing a new data structure for Redis is a simple task.
+
+A **very easy** to understand but complete example of native type implementation
+is available inside the Redis distribution in the `/modules/hellotype.c` file.
+The reader is encouraged to read the documentation by looking at this example
+implementation to see how things are applied in practice.
+
+## Register a new data type
+
+In order to register a new native type into the Redis core, the module needs
+to declare a global variable that will hold a reference to the data type.
+The API to register the data type will return a data type reference that will
+be stored in the global variable.
+
+    static RedisModuleType *MyType;
+    #define MYTYPE_ENCODING_VERSION 0
+
+    int RedisModule_OnLoad(RedisModuleCtx *ctx) {
+        RedisModuleTypeMethods tm = {
+            .version = REDISMODULE_TYPE_METHOD_VERSION,
+            .rdb_load = MyTypeRDBLoad,
+            .rdb_save = MyTypeRDBSave,
+            .aof_rewrite = MyTypeAOFRewrite,
+            .free = MyTypeFree
+        };
+
+        MyType = RedisModule_CreateDataType(ctx, "MyType-AZ",
+            MYTYPE_ENCODING_VERSION, &tm);
+        if (MyType == NULL) return REDISMODULE_ERR;
+    }
+
+As you can see from the example above, a single API call is needed in order to
+register the new type. However, a number of function pointers are passed as
+arguments. Some are optional while others are mandatory. 
+The above set of methods *must* be passed, while `.digest` and `.mem_usage` are optional
+and are currently not actually supported by the modules internals, so for
+now you can just ignore them.
+
+The `ctx` argument is the context that we receive in the `OnLoad` function.
+The type `name` is a 9-character name from the character set that includes
+`A-Z`, `a-z`, `0-9`, plus the underscore `_` and minus `-` characters.
+
+Note that **this name must be unique** for each data type in the Redis
+ecosystem, so be creative, use both lower-case and upper case if it makes
+sense, and try to use the convention of mixing the type name with the name
+of the author of the module, to create a 9-character unique name.
+
+**NOTE:** It is very important that the name is exactly 9 chars or the
+registration of the type will fail. Read more to understand why.
+
+For example if I'm building a *b-tree* data structure and my name is *antirez*
+I'll call my type **btree1-az**. The name, converted to a 64 bit integer,
+is stored inside the RDB file when saving the type, and will be used when the
+RDB data is loaded in order to resolve what module can load the data. If Redis
+finds no matching module, the integer is converted back to a name in order to
+provide some clue to the user about what module is missing in order to load
+the data.
+
+The type name is also used as a reply for the [`TYPE`]({{< relref "/commands/type" >}}) command when called
+with a key holding the registered type.
+
+The `encver` argument is the encoding version used by the module to store data
+inside the RDB file. For example I can start with an encoding version of 0,
+but later when I release version 2.0 of my module, I can switch encoding to
+something better. The new module will register with an encoding version of 1,
+so when it saves new RDB files, the new version will be stored on disk. However
+when loading RDB files, the module `rdb_load` method will be called even if
+there is data found for a different encoding version (and the encoding version
+is passed as argument to `rdb_load`), so that the module can still load old
+RDB files.
+
+The last argument is a structure used in order to pass the type methods to the
+registration function: `rdb_load`, `rdb_save`, `aof_rewrite`, `digest`,
+`free`, and `mem_usage` are all callbacks with the following prototypes and uses:
+
+    typedef void *(*RedisModuleTypeLoadFunc)(RedisModuleIO *rdb, int encver);
+    typedef void (*RedisModuleTypeSaveFunc)(RedisModuleIO *rdb, void *value);
+    typedef void (*RedisModuleTypeRewriteFunc)(RedisModuleIO *aof, RedisModuleString *key, void *value);
+    typedef size_t (*RedisModuleTypeMemUsageFunc)(void *value);
+    typedef void (*RedisModuleTypeDigestFunc)(RedisModuleDigest *digest, void *value);
+    typedef void (*RedisModuleTypeFreeFunc)(void *value);
+
+* `rdb_load` is called when loading data from the RDB file. It loads data in the same format as `rdb_save` produces.
+* `rdb_save` is called when saving data to the RDB file.
+* `aof_rewrite` is called when the AOF is being rewritten, and the module needs to tell Redis the sequence of commands that recreate the content of a given key.
+* `digest` is called when `DEBUG DIGEST` is executed and a key holding this module type is found. Currently this is not yet implemented so the function can be left empty. 
+* `mem_usage` is called when the [`MEMORY`]({{< relref "/commands/memory" >}}) command asks for the total memory consumed by a specific key, and is used in order to get the amount of bytes used by the module value.
+* `free` is called when a key with the module native type is deleted via [`DEL`]({{< relref "/commands/del" >}}) or by any other means, in order to let the module reclaim the memory associated with such a value.
+
+### Why module types require nine character names
+
+When Redis persists to RDB files, module-specific data types need to
+be persisted as well. Now RDB files are sequences of key-value pairs
+like the following:
+
+    [1 byte type] [key] [a type specific value]
+
+The 1 byte type identifies strings, lists, sets, and so forth. In the case
+of module data, it is set to the special value `module data`, but of
+course this is not enough: we also need the information required to link a specific
+value with the specific module type that is able to load and handle it.
+
+So when we save a `type specific value` about a module, we prefix it with
+a 64 bit integer. 64 bits is large enough to store the information needed
+in order to look up the module that can handle that specific type, but is
+short enough that we can prefix each module value we store inside the RDB
+without making the final RDB file too big. At the same time, this solution
+of prefixing the value with a 64 bit *signature* does not require doing
+strange things like defining a list of module-specific types in the RDB
+header. Everything is pretty simple.
+
+So, what can you store in 64 bits in order to identify a given module in
+a reliable way? Well, if you build a character set of 64 symbols, you can
+easily store 9 characters of 6 bits each, and you are left with 10 bits, which
+are used to store the *encoding version* of the type, so that
+the same type can evolve in the future and provide a different and more
+efficient or updated serialization format for RDB files.
+
+So the 64 bit prefix stored before each module value is like the following:
+
+    6|6|6|6|6|6|6|6|6|10
+
+The first 9 elements are 6-bit characters, and the final 10 bits are the
+encoding version.
+
+When the RDB file is loaded back, it reads the 64 bit value, masks the final
+10 bits, and searches for a matching module in the modules types cache.
+When a matching one is found, the method to load the RDB file value is called
+with the 10 bits encoding version as argument, so that the module knows
+what version of the data layout to load, if it can support multiple versions.
+
+Now the interesting thing about all this is that, if the module type
+cannot be resolved because there is no loaded module having this signature,
+we can convert the 64 bit value back into a 9-character name, and print
+an error to the user that includes the module type name, so that they
+immediately realize what's wrong.
+
+### Set and get keys
+
+After registering our new data type in the `RedisModule_OnLoad()` function,
+we also need to be able to set Redis keys having our native type as their value.
+
+This normally happens in the context of commands that write data to a key.
+The native types API allows setting and getting keys with module native data
+types as values, and testing whether a given key is already associated with a value
+of a specific data type.
+
+The API uses the normal module `RedisModule_OpenKey()` low level key access
+interface in order to deal with this. 
+This is an example of setting a
+native type private data structure to a Redis key:
+
+    RedisModuleKey *key = RedisModule_OpenKey(ctx,keyname,REDISMODULE_WRITE);
+    struct some_private_struct *data = createMyDataStructure();
+    RedisModule_ModuleTypeSetValue(key,MyType,data);
+
+The function `RedisModule_ModuleTypeSetValue()` is used with a key handle open
+for writing, and gets three arguments: the key handle, the reference to the
+native type as obtained during the type registration, and finally a `void*`
+pointer that contains the private data implementing the module native type.
+
+Note that Redis has no clue at all about what your data contains. It will
+just call the callbacks you provided during the method registration in order
+to perform operations on the type.
+
+Similarly, we can retrieve the private data from a key using this function:
+
+    struct some_private_struct *data;
+    data = RedisModule_ModuleTypeGetValue(key);
+
+We can also test whether a key has our native type as its value:
+
+    if (RedisModule_ModuleTypeGetType(key) == MyType) {
+        /* ... do something ... */
+    }
+
+However, for the calls to do the right thing, we need to check whether the key
+is empty, whether it contains a value of the right kind, and so forth. So
+the idiomatic code to implement a command writing to our native type
+is along these lines:
+
+    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],
+        REDISMODULE_READ|REDISMODULE_WRITE);
+    int type = RedisModule_KeyType(key);
+    if (type != REDISMODULE_KEYTYPE_EMPTY &&
+        RedisModule_ModuleTypeGetType(key) != MyType)
+    {
+        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);
+    }
+
+Then, if we have successfully verified the key is not of the wrong type and
+we are going to write to it, we usually want to create a new data structure if
+the key is empty, or retrieve the reference to the value associated with the
+key if there is already one:
+
+    /* Create an empty value object if the key is currently empty. */
+    struct some_private_struct *data;
+    if (type == REDISMODULE_KEYTYPE_EMPTY) {
+        data = createMyDataStructure();
+        RedisModule_ModuleTypeSetValue(key,MyType,data);
+    } else {
+        data = RedisModule_ModuleTypeGetValue(key);
+    }
+    /* Do something with 'data'... */
+
+### Free method
+
+As already mentioned, when Redis needs to free a key holding a native type
+value, it needs help from the module in order to release the memory. This
+is the reason why we pass a `free` callback during the type registration:
+
+    typedef void (*RedisModuleTypeFreeFunc)(void *value);
+
+A trivial implementation of the free method can be something like this,
+assuming our data structure is composed of a single allocation:
+
+    void MyTypeFreeCallback(void *value) {
+        RedisModule_Free(value);
+    }
+
+However, a more real-world implementation will call some function that performs
+more complex memory reclaiming, by casting the void pointer to some structure
+and freeing all the resources composing the value.
+
+### RDB load and save methods
+
+The RDB saving and loading callbacks need to create (and load back) a
+representation of the data type on disk. Redis offers a high level API
+that can automatically store inside the RDB file the following types:
+
+* Unsigned 64 bit integers.
+* Signed 64 bit integers.
+* Doubles.
+* Strings.
+
+It is up to the module to find a viable representation using the above base types. 
+However, note that while the integer and double values are stored
+and loaded in an architecture and *endianness* agnostic way, if you use
+the raw string saving API to, for example, save a structure on disk, you
+have to take care of those details yourself.
+
+This is the list of functions performing RDB saving and loading:
+
+    void RedisModule_SaveUnsigned(RedisModuleIO *io, uint64_t value);
+    uint64_t RedisModule_LoadUnsigned(RedisModuleIO *io);
+    void RedisModule_SaveSigned(RedisModuleIO *io, int64_t value);
+    int64_t RedisModule_LoadSigned(RedisModuleIO *io);
+    void RedisModule_SaveString(RedisModuleIO *io, RedisModuleString *s);
+    void RedisModule_SaveStringBuffer(RedisModuleIO *io, const char *str, size_t len);
+    RedisModuleString *RedisModule_LoadString(RedisModuleIO *io);
+    char *RedisModule_LoadStringBuffer(RedisModuleIO *io, size_t *lenptr);
+    void RedisModule_SaveDouble(RedisModuleIO *io, double value);
+    double RedisModule_LoadDouble(RedisModuleIO *io);
+
+The functions don't require any error checking from the module, which can
+always assume calls succeed.
+
+As an example, imagine I have a native type that implements an array of
+double values, with the following structure:
+
+    struct double_array {
+        size_t count;
+        double *values;
+    };
+
+My `rdb_save` method may look like the following:
+
+    void DoubleArrayRDBSave(RedisModuleIO *io, void *ptr) {
+        struct double_array *da = ptr;
+        RedisModule_SaveUnsigned(io,da->count);
+        for (size_t j = 0; j < da->count; j++)
+            RedisModule_SaveDouble(io,da->values[j]);
+    }
+
+What we did was store the number of elements followed by each double
+value. So later, when we have to load the structure in the `rdb_load`
+method, we'll do something like this:
+
+    void *DoubleArrayRDBLoad(RedisModuleIO *io, int encver) {
+        if (encver != DOUBLE_ARRAY_ENC_VER) {
+            /* We should actually log an error here, or try to implement
+               the ability to load older versions of our data structure. */
+            return NULL;
+        }
+
+        struct double_array *da;
+        da = RedisModule_Alloc(sizeof(*da));
+        da->count = RedisModule_LoadUnsigned(io);
+        da->values = RedisModule_Alloc(da->count * sizeof(double));
+        for (size_t j = 0; j < da->count; j++)
+            da->values[j] = RedisModule_LoadDouble(io);
+        return da;
+    }
+
+The load callback just reconstructs the data structure from the data
+we stored in the RDB file.
+
+Note that while there is no error handling on the API that writes and reads
+from disk, the load callback can still return NULL on errors in case what
+it reads does not look correct. Redis will just panic in that case.
+
+### AOF rewriting
+
+The `aof_rewrite` callback emits, for the key being rewritten, the commands
+needed to recreate its current value, using the following function:
+
+    void RedisModule_EmitAOF(RedisModuleIO *io, const char *cmdname, const char *fmt, ...);
+
+### Allocate memory
+
+Module data types should try to use the `RedisModule_Alloc()` family of functions
+to allocate, reallocate, and release the heap memory used to implement the native data structures (see the other Redis Modules documentation for detailed information).
+
+This is not just useful for letting Redis account for the memory used by the module; there are also additional advantages:
+
+* Redis uses the `jemalloc` allocator, which often prevents fragmentation problems that could be caused by using the libc allocator.
+* When loading strings from the RDB file, the native types API is able to return strings allocated directly with `RedisModule_Alloc()`, so that the module can directly link this memory into the data structure representation, avoiding a useless copy of the data. 
+ +Even if you are using external libraries implementing your data structures, the +allocation functions provided by the module API is exactly compatible with +`malloc()`, `realloc()`, `free()` and `strdup()`, so converting the libraries +in order to use these functions should be trivial. + +In case you have an external library that uses libc `malloc()`, and you want +to avoid replacing manually all the calls with the Redis Modules API calls, +an approach could be to use simple macros in order to replace the libc calls +with the Redis API calls. Something like this could work: + + #define malloc RedisModule_Alloc + #define realloc RedisModule_Realloc + #define free RedisModule_Free + #define strdup RedisModule_Strdup + +However take in mind that mixing libc calls with Redis API calls will result +into troubles and crashes, so if you replace calls using macros, you need to +make sure that all the calls are correctly replaced, and that the code with +the substituted calls will never, for example, attempt to call +`RedisModule_Free()` with a pointer allocated using libc `malloc()`. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'How to implement blocking commands in a Redis module + + ' +linkTitle: Blocking commands +title: Redis modules and blocking commands +weight: 1 +--- + +Redis has a few blocking commands among the built-in set of commands. +One of the most used is [`BLPOP`]({{< relref "/commands/blpop" >}}) (or the symmetric [`BRPOP`]({{< relref "/commands/brpop" >}})) which blocks +waiting for elements arriving in a list. + +The interesting fact about blocking commands is that they do not block +the whole server, but just the client calling them. Usually the reason to +block is that we expect some external event to happen: this can be +some change in the Redis data structures like in the [`BLPOP`]({{< relref "/commands/blpop" >}}) case, a +long computation happening in a thread, to receive some data from the +network, and so forth. + +Redis modules have the ability to implement blocking commands as well, +this documentation shows how the API works and describes a few patterns +that can be used in order to model blocking commands. + + +How blocking and resuming works. +--- + +_Note: You may want to check the `helloblock.c` example in the Redis source tree +inside the `src/modules` directory, for a simple to understand example +on how the blocking API is applied._ + +In Redis modules, commands are implemented by callback functions that +are invoked by the Redis core when the specific command is called +by the user. Normally the callback terminates its execution sending +some reply to the client. Using the following function instead, the +function implementing the module command may request that the client +is put into the blocked state: + + RedisModuleBlockedClient *RedisModule_BlockClient(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback, RedisModuleCmdFunc timeout_callback, void (*free_privdata)(void*), long long timeout_ms); + +The function returns a `RedisModuleBlockedClient` object, which is later +used in order to unblock the client. The arguments have the following +meaning: + +* `ctx` is the command execution context as usually in the rest of the API. +* `reply_callback` is the callback, having the same prototype of a normal command function, that is called when the client is unblocked in order to return a reply to the client. 
+* `timeout_callback` is the callback, having the same prototype of a normal command function that is called when the client reached the `ms` timeout. +* `free_privdata` is the callback that is called in order to free the private data. Private data is a pointer to some data that is passed between the API used to unblock the client, to the callback that will send the reply to the client. We'll see how this mechanism works later in this document. +* `ms` is the timeout in milliseconds. When the timeout is reached, the timeout callback is called and the client is automatically aborted. + +Once a client is blocked, it can be unblocked with the following API: + + int RedisModule_UnblockClient(RedisModuleBlockedClient *bc, void *privdata); + +The function takes as argument the blocked client object returned by +the previous call to `RedisModule_BlockClient()`, and unblock the client. +Immediately before the client gets unblocked, the `reply_callback` function +specified when the client was blocked is called: this function will +have access to the `privdata` pointer used here. + +IMPORTANT: The above function is thread safe, and can be called from within +a thread doing some work in order to implement the command that blocked +the client. + +The `privdata` data will be freed automatically using the `free_privdata` +callback when the client is unblocked. This is useful **since the reply +callback may never be called** in case the client timeouts or disconnects +from the server, so it's important that it's up to an external function +to have the responsibility to free the data passed if needed. + +To better understand how the API works, we can imagine writing a command +that blocks a client for one second, and then send as reply "Hello!". + +Note: arity checks and other non important things are not implemented +int his command, in order to take the example simple. + + int Example_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, + int argc) + { + RedisModuleBlockedClient *bc = + RedisModule_BlockClient(ctx,reply_func,timeout_func,NULL,0); + + pthread_t tid; + pthread_create(&tid,NULL,threadmain,bc); + + return REDISMODULE_OK; + } + + void *threadmain(void *arg) { + RedisModuleBlockedClient *bc = arg; + + sleep(1); /* Wait one second and unblock. */ + RedisModule_UnblockClient(bc,NULL); + } + +The above command blocks the client ASAP, spawning a thread that will +wait a second and will unblock the client. Let's check the reply and +timeout callbacks, which are in our case very similar, since they +just reply the client with a different reply type. + + int reply_func(RedisModuleCtx *ctx, RedisModuleString **argv, + int argc) + { + return RedisModule_ReplyWithSimpleString(ctx,"Hello!"); + } + + int timeout_func(RedisModuleCtx *ctx, RedisModuleString **argv, + int argc) + { + return RedisModule_ReplyWithNull(ctx); + } + +The reply callback just sends the "Hello!" string to the client. +The important bit here is that the reply callback is called when the +client is unblocked from the thread. + +The timeout command returns `NULL`, as it often happens with actual +Redis blocking commands timing out. + +Passing reply data when unblocking +--- + +The above example is simple to understand but lacks an important +real world aspect of an actual blocking command implementation: often +the reply function will need to know what to reply to the client, +and this information is often provided as the client is unblocked. 
We could modify the above example so that the thread generates a
random number after waiting one second. You can think of it as a genuinely
expensive operation of some kind. Then this random number
can be passed to the reply function so that we return it to the command
caller. In order to make this work, we modify the functions as follows:

    void *threadmain(void *arg) {
        RedisModuleBlockedClient *bc = arg;

        sleep(1); /* Wait one second and unblock. */

        long *mynumber = RedisModule_Alloc(sizeof(long));
        *mynumber = rand();
        RedisModule_UnblockClient(bc,mynumber);
    }

As you can see, now the unblocking call is passing some private data,
that is, the `mynumber` pointer, to the reply callback. In order to
obtain this private data, the reply callback will use the following
function:

    void *RedisModule_GetBlockedClientPrivateData(RedisModuleCtx *ctx);

So our reply callback is modified like this:

    int reply_func(RedisModuleCtx *ctx, RedisModuleString **argv,
                   int argc)
    {
        long *mynumber = RedisModule_GetBlockedClientPrivateData(ctx);
        /* IMPORTANT: don't free mynumber here, but in the
         * free privdata callback. */
        return RedisModule_ReplyWithLongLong(ctx,*mynumber);
    }

Note that we also need to pass a `free_privdata` function when blocking
the client with `RedisModule_BlockClient()`, since the allocated
long value must be freed. Our callback will look like the following:

    void free_privdata(void *privdata) {
        RedisModule_Free(privdata);
    }

NOTE: It is important to stress that the private data is best freed in the
`free_privdata` callback because the reply function may not be called
if the client disconnects or times out.

Also note that the private data is also accessible from the timeout
callback, always using the `GetBlockedClientPrivateData()` API.

Aborting the blocking of a client
---

One problem that sometimes arises is that we need to allocate resources
in order to implement the command that blocks the client. So we block the
client, then, for example, try to create a thread, but the thread creation
function returns an error. What should we do in such a condition in order to
recover? We don't want to leave the client blocked, nor do we want to call
`UnblockClient()`, because this would trigger the reply callback to be called.

In this case the best thing to do is to use the following function:

    int RedisModule_AbortBlock(RedisModuleBlockedClient *bc);

Practically this is how to use it:

    int Example_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv,
                             int argc)
    {
        RedisModuleBlockedClient *bc =
            RedisModule_BlockClient(ctx,reply_func,timeout_func,NULL,0);

        pthread_t tid;
        if (pthread_create(&tid,NULL,threadmain,bc) != 0) {
            RedisModule_AbortBlock(bc);
            RedisModule_ReplyWithError(ctx,"Sorry can't create a thread");
        }

        return REDISMODULE_OK;
    }

The client will be unblocked but the reply callback will not be called.
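The examples so far pass only the blocked client handle to the worker thread, but a real command usually also needs to hand over its arguments. Below is a hedged sketch (none of the `myjob`/`MyJob*` names are part of the module API) that packs the blocked client handle together with a private copy of a command argument into a small heap-allocated job, reusing the `reply_func`, `timeout_func` and `free_privdata` callbacks shown above:

    /* Illustrative only: a job descriptor handed to the worker thread. */
    struct myjob {
        RedisModuleBlockedClient *bc;
        char *input;              /* Module-owned copy of argv[1]. */
        size_t inputlen;
    };

    void *MyJobThread(void *arg) {
        struct myjob *job = arg;

        sleep(1);                              /* Simulate the slow operation. */
        long *result = RedisModule_Alloc(sizeof(long));
        *result = (long)job->inputlen;         /* Some "computed" value. */

        RedisModule_UnblockClient(job->bc,result);
        RedisModule_Free(job->input);
        RedisModule_Free(job);
        return NULL;
    }

    int MyBlocking_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv,
                                int argc)
    {
        if (argc != 2) return RedisModule_WrongArity(ctx);

        size_t len;
        const char *p = RedisModule_StringPtrLen(argv[1],&len);

        /* Work on a private copy of the argument: the worker thread should
         * not access Redis-owned data without locking. */
        struct myjob *job = RedisModule_Alloc(sizeof(*job));
        job->input = RedisModule_Alloc(len+1);
        memcpy(job->input,p,len);
        job->input[len] = '\0';
        job->inputlen = len;
        job->bc = RedisModule_BlockClient(ctx,reply_func,timeout_func,
                                          free_privdata,0);

        pthread_t tid;
        if (pthread_create(&tid,NULL,MyJobThread,job) != 0) {
            RedisModule_AbortBlock(job->bc);
            RedisModule_Free(job->input);
            RedisModule_Free(job);
            return RedisModule_ReplyWithError(ctx,"Sorry can't create a thread");
        }
        return REDISMODULE_OK;
    }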
Implementing the command, reply and timeout callback using a single function
---

The following functions can be used in order to implement the reply and
timeout callbacks with the same function that implements the primary command
function:

    int RedisModule_IsBlockedReplyRequest(RedisModuleCtx *ctx);
    int RedisModule_IsBlockedTimeoutRequest(RedisModuleCtx *ctx);

So we could rewrite the example command without using separate
reply and timeout callbacks:

    int Example_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv,
                             int argc)
    {
        if (RedisModule_IsBlockedReplyRequest(ctx)) {
            long *mynumber = RedisModule_GetBlockedClientPrivateData(ctx);
            return RedisModule_ReplyWithLongLong(ctx,*mynumber);
        } else if (RedisModule_IsBlockedTimeoutRequest(ctx)) {
            return RedisModule_ReplyWithNull(ctx);
        }

        RedisModuleBlockedClient *bc =
            RedisModule_BlockClient(ctx,reply_func,timeout_func,NULL,0);

        pthread_t tid;
        if (pthread_create(&tid,NULL,threadmain,bc) != 0) {
            RedisModule_AbortBlock(bc);
            RedisModule_ReplyWithError(ctx,"Sorry can't create a thread");
        }

        return REDISMODULE_OK;
    }

Functionally it is the same, but some people prefer the less
verbose implementation that concentrates most of the command logic in a
single function.

Working on copies of data inside a thread
---

An interesting pattern when working with threads that implement the
slow part of a command is to work with a copy of the data, so that
while some operation is performed on a key, the user continues to see
the old version. However, when the thread has terminated its work, the
representations are swapped and the new, processed version is used.

An example of this approach is the
[Neural Redis module](https://github.com/antirez/neural-redis)
where neural networks are trained in different threads while the
user can still execute and inspect their older versions.

Future work
---

An API is currently a work in progress to allow Redis modules APIs
to be called in a safe way from threads, so that the threaded command
can access the data space and do incremental operations.

There is no ETA for this feature but it may appear in the course of the
Redis 4.0 release at some point.
---
categories:
- docs
- develop
- stack
- oss
- rs
- rc
- oss
- kubernetes
- clients
description: 'Introduction to writing Redis modules

  '
linkTitle: Modules API
title: Redis modules API
weight: 2
---

The modules documentation is composed of the following pages:

* Introduction to Redis modules (this file). An overview of the Redis Modules system and API. It's a good idea to start your reading here.
* [Implementing native data types]({{< relref "/develop/reference/modules/modules-native-types" >}}) covers the implementation of native data types into modules.
* [Blocking operations]({{< relref "/develop/reference/modules/modules-blocking-ops" >}}) shows how to write blocking commands that will not reply immediately, but will block the client, without blocking the Redis server, and will provide a reply whenever it will be possible.
* [Redis modules API reference]({{< relref "/develop/reference/modules/modules-api-ref" >}}) is generated from module.c top comments of RedisModule functions. It is a good reference in order to understand how each function works.

Redis modules make it possible to extend Redis functionality using external
modules, rapidly implementing new Redis commands with features
similar to what can be done inside the core itself.
Redis modules are dynamic libraries that can be loaded into Redis at
startup, or using the [`MODULE LOAD`]({{< relref "/commands/module-load" >}}) command. Redis exports a C API, in the
form of a single C header file called `redismodule.h`. Modules are meant
to be written in C, however it is also possible to use C++ or other languages
that have C binding functionalities.

Modules are designed in order to be loaded into different versions of Redis,
so a given module does not need to be designed, or recompiled, in order to
run with a specific version of Redis. For this reason, the module will
register to the Redis core using a specific API version. The current API
version is "1".

This document is about an alpha version of Redis modules. API, functionalities
and other details may change in the future.

## Loading modules

In order to test the module you are developing, you can load the module
using the following `redis.conf` configuration directive:

    loadmodule /path/to/mymodule.so

It is also possible to load a module at runtime using the following command:

    MODULE LOAD /path/to/mymodule.so

In order to list all loaded modules, use:

    MODULE LIST

Finally, you can unload (and later reload if you wish) a module using the
following command:

    MODULE UNLOAD mymodule

Note that `mymodule` above is not the filename without the `.so` suffix, but
instead, the name the module used to register itself into the Redis core.
The name can be obtained using [`MODULE LIST`]({{< relref "/commands/module-list" >}}). However, it is good practice
for the filename of the dynamic library to be the same as the name the module
uses to register itself into the Redis core.

## The simplest module you can write

In order to show the different parts of a module, here we'll show a very
simple module that implements a command that outputs a random number.

    #include "redismodule.h"
    #include <stdlib.h>

    int HelloworldRand_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        RedisModule_ReplyWithLongLong(ctx,rand());
        return REDISMODULE_OK;
    }

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (RedisModule_Init(ctx,"helloworld",1,REDISMODULE_APIVER_1)
            == REDISMODULE_ERR) return REDISMODULE_ERR;

        if (RedisModule_CreateCommand(ctx,"helloworld.rand",
            HelloworldRand_RedisCommand, "fast random",
            0, 0, 0) == REDISMODULE_ERR)
            return REDISMODULE_ERR;

        return REDISMODULE_OK;
    }

The example module has two functions. One implements a command called
HELLOWORLD.RAND. This function is specific to this module. However, the
other function, called `RedisModule_OnLoad()`, must be present in each
Redis module. It is the entry point for the module to be initialized,
register its commands, and potentially other private data structures
it uses.

Note that it is a good idea for modules to name commands with the
name of the module followed by a dot, and finally the command name,
as in the case of `HELLOWORLD.RAND`. This way it is less likely to
have collisions.

Note that if different modules have colliding commands, they'll not be
able to work in Redis at the same time, since the function
`RedisModule_CreateCommand` will fail in one of the modules, so the module
loading will abort, returning an error condition.

## Module initialization

The above example shows the usage of the function `RedisModule_Init()`.
It should be the first function called by the module `OnLoad` function.
+The following is the function prototype: + + int RedisModule_Init(RedisModuleCtx *ctx, const char *modulename, + int module_version, int api_version); + +The `Init` function announces the Redis core that the module has a given +name, its version (that is reported by [`MODULE LIST`]({{< relref "/commands/module-list" >}})), and that is willing +to use a specific version of the API. + +If the API version is wrong, the name is already taken, or there are other +similar errors, the function will return `REDISMODULE_ERR`, and the module +`OnLoad` function should return ASAP with an error. + +Before the `Init` function is called, no other API function can be called, +otherwise the module will segfault and the Redis instance will crash. + +The second function called, `RedisModule_CreateCommand`, is used in order +to register commands into the Redis core. The following is the prototype: + + int RedisModule_CreateCommand(RedisModuleCtx *ctx, const char *name, + RedisModuleCmdFunc cmdfunc, const char *strflags, + int firstkey, int lastkey, int keystep); + +As you can see, most Redis modules API calls all take as first argument +the `context` of the module, so that they have a reference to the module +calling it, to the command and client executing a given command, and so forth. + +To create a new command, the above function needs the context, the command's +name, a pointer to the function implementing the command, the command's flags +and the positions of key names in the command's arguments. + +The function that implements the command must have the following prototype: + + int mycommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc); + +The command function arguments are just the context, that will be passed +to all the other API calls, the command argument vector, and total number +of arguments, as passed by the user. + +As you can see, the arguments are provided as pointers to a specific data +type, the `RedisModuleString`. This is an opaque data type you have API +functions to access and use, direct access to its fields is never needed. + +Zooming into the example command implementation, we can find another call: + + int RedisModule_ReplyWithLongLong(RedisModuleCtx *ctx, long long integer); + +This function returns an integer to the client that invoked the command, +exactly like other Redis commands do, like for example [`INCR`]({{< relref "/commands/incr" >}}) or [`SCARD`]({{< relref "/commands/scard" >}}). + +## Module cleanup + +In most cases, there is no need for special cleanup. +When a module is unloaded, Redis will automatically unregister commands and +unsubscribe from notifications. +However in the case where a module contains some persistent memory or +configuration, a module may include an optional `RedisModule_OnUnload` +function. +If a module provides this function, it will be invoked during the module unload +process. +The following is the function prototype: + + int RedisModule_OnUnload(RedisModuleCtx *ctx); + +The `OnUnload` function may prevent module unloading by returning +`REDISMODULE_ERR`. +Otherwise, `REDISMODULE_OK` should be returned. + +## Setup and dependencies of a Redis module + +Redis modules don't depend on Redis or some other library, nor they +need to be compiled with a specific `redismodule.h` file. In order +to create a new module, just copy a recent version of `redismodule.h` +in your source tree, link all the libraries you want, and create +a dynamic library having the `RedisModule_OnLoad()` function symbol +exported. 
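For example, on Linux a single-file module can usually be compiled into a shared object with a command along these lines (the exact flags depend on your toolchain; on macOS the linker flags are typically `-bundle -undefined dynamic_lookup` instead of `-shared`):

    gcc -O2 -Wall -fPIC -shared -o mymodule.so mymodule.c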
+ +The module will be able to load into different versions of Redis. + +A module can be designed to support both newer and older Redis versions where certain API functions are not available in all versions. +If an API function is not implemented in the currently running Redis version, the function pointer is set to NULL. +This allows the module to check if a function exists before using it: + + if (RedisModule_SetCommandInfo != NULL) { + RedisModule_SetCommandInfo(cmd, &info); + } + +In recent versions of `redismodule.h`, a convenience macro `RMAPI_FUNC_SUPPORTED(funcname)` is defined. +Using the macro or just comparing with NULL is a matter of personal preference. + +## Passing configuration parameters to Redis modules + +When the module is loaded with the [`MODULE LOAD`]({{< relref "/commands/module-load" >}}) command, or using the +`loadmodule` directive in the `redis.conf` file, the user is able to pass +configuration parameters to the module by adding arguments after the module +file name: + + loadmodule mymodule.so foo bar 1234 + +In the above example the strings `foo`, `bar` and `1234` will be passed +to the module `OnLoad()` function in the `argv` argument as an array +of RedisModuleString pointers. The number of arguments passed is into `argc`. + +The way you can access those strings will be explained in the rest of this +document. Normally the module will store the module configuration parameters +in some `static` global variable that can be accessed module wide, so that +the configuration can change the behavior of different commands. + +## Working with RedisModuleString objects + +The command argument vector `argv` passed to module commands, and the +return value of other module APIs functions, are of type `RedisModuleString`. + +Usually you directly pass module strings to other API calls, however sometimes +you may need to directly access the string object. + +There are a few functions in order to work with string objects: + + const char *RedisModule_StringPtrLen(RedisModuleString *string, size_t *len); + +The above function accesses a string by returning its pointer and setting its +length in `len`. +You should never write to a string object pointer, as you can see from the +`const` pointer qualifier. + +However, if you want, you can create new string objects using the following +API: + + RedisModuleString *RedisModule_CreateString(RedisModuleCtx *ctx, const char *ptr, size_t len); + +The string returned by the above command must be freed using a corresponding +call to `RedisModule_FreeString()`: + + void RedisModule_FreeString(RedisModuleString *str); + +However if you want to avoid having to free strings, the automatic memory +management, covered later in this document, can be a good alternative, by +doing it for you. + +Note that the strings provided via the argument vector `argv` never need +to be freed. You only need to free new strings you create, or new strings +returned by other APIs, where it is specified that the returned string must +be freed. 
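As a brief illustration of the two previous sections, here is a hedged sketch of an `OnLoad` function that copies its first configuration argument into a module-owned C string for later use by the module's commands. The variable names are made up for the example; only the `RedisModule_*` calls are part of the API.

    static char *config_prefix = NULL;   /* e.g. first loadmodule argument */

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (RedisModule_Init(ctx,"mymodule",1,REDISMODULE_APIVER_1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;

        if (argc > 0) {
            size_t len;
            const char *p = RedisModule_StringPtrLen(argv[0],&len);
            /* Copy into module-owned memory: the argv strings belong to Redis
             * and should not be retained by the module as-is. */
            config_prefix = RedisModule_Alloc(len+1);
            memcpy(config_prefix,p,len);
            config_prefix[len] = '\0';
        }
        return REDISMODULE_OK;
    }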
+ +## Creating strings from numbers or parsing strings as numbers + +Creating a new string from an integer is a very common operation, so there +is a function to do this: + + RedisModuleString *mystr = RedisModule_CreateStringFromLongLong(ctx,10); + +Similarly in order to parse a string as a number: + + long long myval; + if (RedisModule_StringToLongLong(ctx,argv[1],&myval) == REDISMODULE_OK) { + /* Do something with 'myval' */ + } + +## Accessing Redis keys from modules + +Most Redis modules, in order to be useful, have to interact with the Redis +data space (this is not always true, for example an ID generator may +never touch Redis keys). Redis modules have two different APIs in order to +access the Redis data space, one is a low level API that provides very +fast access and a set of functions to manipulate Redis data structures. +The other API is more high level, and allows to call Redis commands and +fetch the result, similarly to how Lua scripts access Redis. + +The high level API is also useful in order to access Redis functionalities +that are not available as APIs. + +In general modules developers should prefer the low level API, because commands +implemented using the low level API run at a speed comparable to the speed +of native Redis commands. However there are definitely use cases for the +higher level API. For example often the bottleneck could be processing the +data and not accessing it. + +Also note that sometimes using the low level API is not harder compared to +the higher level one. + +## Calling Redis commands + +The high level API to access Redis is the sum of the `RedisModule_Call()` +function, together with the functions needed in order to access the +reply object returned by `Call()`. + +`RedisModule_Call` uses a special calling convention, with a format specifier +that is used to specify what kind of objects you are passing as arguments +to the function. + +Redis commands are invoked just using a command name and a list of arguments. +However when calling commands, the arguments may originate from different +kind of strings: null-terminated C strings, RedisModuleString objects as +received from the `argv` parameter in the command implementation, binary +safe C buffers with a pointer and a length, and so forth. + +For example if I want to call [`INCRBY`]({{< relref "/commands/incrby" >}}) using a first argument (the key) +a string received in the argument vector `argv`, which is an array +of RedisModuleString object pointers, and a C string representing the +number "10" as second argument (the increment), I'll use the following +function call: + + RedisModuleCallReply *reply; + reply = RedisModule_Call(ctx,"INCRBY","sc",argv[1],"10"); + +The first argument is the context, and the second is always a null terminated +C string with the command name. The third argument is the format specifier +where each character corresponds to the type of the arguments that will follow. +In the above case `"sc"` means a RedisModuleString object, and a null +terminated C string. The other arguments are just the two arguments as +specified. In fact `argv[1]` is a RedisModuleString and `"10"` is a null +terminated C string. + +This is the full list of format specifiers: + +* **c** -- Null terminated C string pointer. +* **b** -- C buffer, two arguments needed: C string pointer and `size_t` length. +* **s** -- RedisModuleString as received in `argv` or by other Redis module APIs returning a RedisModuleString object. +* **l** -- Long long integer. 
+* **v** -- Array of RedisModuleString objects. +* **!** -- This modifier just tells the function to replicate the command to replicas and AOF. It is ignored from the point of view of arguments parsing. +* **A** -- This modifier, when `!` is given, tells to suppress AOF propagation: the command will be propagated only to replicas. +* **R** -- This modifier, when `!` is given, tells to suppress replicas propagation: the command will be propagated only to the AOF if enabled. + +The function returns a `RedisModuleCallReply` object on success, on +error NULL is returned. + +NULL is returned when the command name is invalid, the format specifier uses +characters that are not recognized, or when the command is called with the +wrong number of arguments. In the above cases the `errno` var is set to `EINVAL`. NULL is also returned when, in an instance with Cluster enabled, the target +keys are about non local hash slots. In this case `errno` is set to `EPERM`. + +## Working with RedisModuleCallReply objects. + +`RedisModuleCall` returns reply objects that can be accessed using the +`RedisModule_CallReply*` family of functions. + +In order to obtain the type or reply (corresponding to one of the data types +supported by the Redis protocol), the function `RedisModule_CallReplyType()` +is used: + + reply = RedisModule_Call(ctx,"INCRBY","sc",argv[1],"10"); + if (RedisModule_CallReplyType(reply) == REDISMODULE_REPLY_INTEGER) { + long long myval = RedisModule_CallReplyInteger(reply); + /* Do something with myval. */ + } + +Valid reply types are: + +* `REDISMODULE_REPLY_STRING` Bulk string or status replies. +* `REDISMODULE_REPLY_ERROR` Errors. +* `REDISMODULE_REPLY_INTEGER` Signed 64 bit integers. +* `REDISMODULE_REPLY_ARRAY` Array of replies. +* `REDISMODULE_REPLY_NULL` NULL reply. + +Strings, errors and arrays have an associated length. For strings and errors +the length corresponds to the length of the string. For arrays the length +is the number of elements. To obtain the reply length the following function +is used: + + size_t reply_len = RedisModule_CallReplyLength(reply); + +In order to obtain the value of an integer reply, the following function is used, as already shown in the example above: + + long long reply_integer_val = RedisModule_CallReplyInteger(reply); + +Called with a reply object of the wrong type, the above function always +returns `LLONG_MIN`. + +Sub elements of array replies are accessed this way: + + RedisModuleCallReply *subreply; + subreply = RedisModule_CallReplyArrayElement(reply,idx); + +The above function returns NULL if you try to access out of range elements. + +Strings and errors (which are like strings but with a different type) can +be accessed using in the following way, making sure to never write to +the resulting pointer (that is returned as a `const` pointer so that +misusing must be pretty explicit): + + size_t len; + char *ptr = RedisModule_CallReplyStringPtr(reply,&len); + +If the reply type is not a string or an error, NULL is returned. + +RedisCallReply objects are not the same as module string objects +(RedisModuleString types). However sometimes you may need to pass replies +of type string or integer, to API functions expecting a module string. 
+ +When this is the case, you may want to evaluate if using the low level +API could be a simpler way to implement your command, or you can use +the following function in order to create a new string object from a +call reply of type string, error or integer: + + RedisModuleString *mystr = RedisModule_CreateStringFromCallReply(myreply); + +If the reply is not of the right type, NULL is returned. +The returned string object should be released with `RedisModule_FreeString()` +as usually, or by enabling automatic memory management (see corresponding +section). + +## Releasing call reply objects + +Reply objects must be freed using `RedisModule_FreeCallReply`. For arrays, +you need to free only the top level reply, not the nested replies. +Currently the module implementation provides a protection in order to avoid +crashing if you free a nested reply object for error, however this feature +is not guaranteed to be here forever, so should not be considered part +of the API. + +If you use automatic memory management (explained later in this document) +you don't need to free replies (but you still could if you wish to release +memory ASAP). + +## Returning values from Redis commands + +Like normal Redis commands, new commands implemented via modules must be +able to return values to the caller. The API exports a set of functions for +this goal, in order to return the usual types of the Redis protocol, and +arrays of such types as elements. Also errors can be returned with any +error string and code (the error code is the initial uppercase letters in +the error message, like the "BUSY" string in the "BUSY the sever is busy" error +message). + +All the functions to send a reply to the client are called +`RedisModule_ReplyWith`. + +To return an error, use: + + RedisModule_ReplyWithError(RedisModuleCtx *ctx, const char *err); + +There is a predefined error string for key of wrong type errors: + + REDISMODULE_ERRORMSG_WRONGTYPE + +Example usage: + + RedisModule_ReplyWithError(ctx,"ERR invalid arguments"); + +We already saw how to reply with a `long long` in the examples above: + + RedisModule_ReplyWithLongLong(ctx,12345); + +To reply with a simple string, that can't contain binary values or newlines, +(so it's suitable to send small words, like "OK") we use: + + RedisModule_ReplyWithSimpleString(ctx,"OK"); + +It's possible to reply with "bulk strings" that are binary safe, using +two different functions: + + int RedisModule_ReplyWithStringBuffer(RedisModuleCtx *ctx, const char *buf, size_t len); + + int RedisModule_ReplyWithString(RedisModuleCtx *ctx, RedisModuleString *str); + +The first function gets a C pointer and length. The second a RedisModuleString +object. Use one or the other depending on the source type you have at hand. + +In order to reply with an array, you just need to use a function to emit the +array length, followed by as many calls to the above functions as the number +of elements of the array are: + + RedisModule_ReplyWithArray(ctx,2); + RedisModule_ReplyWithStringBuffer(ctx,"age",3); + RedisModule_ReplyWithLongLong(ctx,22); + +To return nested arrays is easy, your nested array element just uses another +call to `RedisModule_ReplyWithArray()` followed by the calls to emit the +sub array elements. + +## Returning arrays with dynamic length + +Sometimes it is not possible to know beforehand the number of items of +an array. As an example, think of a Redis module implementing a FACTOR +command that given a number outputs the prime factors. 
Instead of +factorializing the number, storing the prime factors into an array, and +later produce the command reply, a better solution is to start an array +reply where the length is not known, and set it later. This is accomplished +with a special argument to `RedisModule_ReplyWithArray()`: + + RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN); + +The above call starts an array reply so we can use other `ReplyWith` calls +in order to produce the array items. Finally in order to set the length, +use the following call: + + RedisModule_ReplySetArrayLength(ctx, number_of_items); + +In the case of the FACTOR command, this translates to some code similar +to this: + + RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN); + number_of_factors = 0; + while(still_factors) { + RedisModule_ReplyWithLongLong(ctx, some_factor); + number_of_factors++; + } + RedisModule_ReplySetArrayLength(ctx, number_of_factors); + +Another common use case for this feature is iterating over the arrays of +some collection and only returning the ones passing some kind of filtering. + +It is possible to have multiple nested arrays with postponed reply. +Each call to `SetArray()` will set the length of the latest corresponding +call to `ReplyWithArray()`: + + RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN); + ... generate 100 elements ... + RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN); + ... generate 10 elements ... + RedisModule_ReplySetArrayLength(ctx, 10); + RedisModule_ReplySetArrayLength(ctx, 100); + +This creates a 100 items array having as last element a 10 items array. + +## Arity and type checks + +Often commands need to check that the number of arguments and type of the key +is correct. In order to report a wrong arity, there is a specific function +called `RedisModule_WrongArity()`. The usage is trivial: + + if (argc != 2) return RedisModule_WrongArity(ctx); + +Checking for the wrong type involves opening the key and checking the type: + + RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1], + REDISMODULE_READ|REDISMODULE_WRITE); + + int keytype = RedisModule_KeyType(key); + if (keytype != REDISMODULE_KEYTYPE_STRING && + keytype != REDISMODULE_KEYTYPE_EMPTY) + { + RedisModule_CloseKey(key); + return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE); + } + +Note that you often want to proceed with a command both if the key +is of the expected type, or if it's empty. + +## Low level access to keys + +Low level access to keys allow to perform operations on value objects associated +to keys directly, with a speed similar to what Redis uses internally to +implement the built-in commands. + +Once a key is opened, a key pointer is returned that will be used with all the +other low level API calls in order to perform operations on the key or its +associated value. + +Because the API is meant to be very fast, it cannot do too many run-time +checks, so the user must be aware of certain rules to follow: + +* Opening the same key multiple times where at least one instance is opened for writing, is undefined and may lead to crashes. +* While a key is open, it should only be accessed via the low level key API. For example opening a key, then calling DEL on the same key using the `RedisModule_Call()` API will result into a crash. However it is safe to open a key, perform some operation with the low level API, closing it, then using other APIs to manage the same key, and later opening it again to do some more work. 
+ +In order to open a key the `RedisModule_OpenKey` function is used. It returns +a key pointer, that we'll use with all the next calls to access and modify +the value: + + RedisModuleKey *key; + key = RedisModule_OpenKey(ctx,argv[1],REDISMODULE_READ); + +The second argument is the key name, that must be a `RedisModuleString` object. +The third argument is the mode: `REDISMODULE_READ` or `REDISMODULE_WRITE`. +It is possible to use `|` to bitwise OR the two modes to open the key in +both modes. Currently a key opened for writing can also be accessed for reading +but this is to be considered an implementation detail. The right mode should +be used in sane modules. + +You can open non existing keys for writing, since the keys will be created +when an attempt to write to the key is performed. However when opening keys +just for reading, `RedisModule_OpenKey` will return NULL if the key does not +exist. + +Once you are done using a key, you can close it with: + + RedisModule_CloseKey(key); + +Note that if automatic memory management is enabled, you are not forced to +close keys. When the module function returns, Redis will take care to close +all the keys which are still open. + +## Getting the key type + +In order to obtain the value of a key, use the `RedisModule_KeyType()` function: + + int keytype = RedisModule_KeyType(key); + +It returns one of the following values: + + REDISMODULE_KEYTYPE_EMPTY + REDISMODULE_KEYTYPE_STRING + REDISMODULE_KEYTYPE_LIST + REDISMODULE_KEYTYPE_HASH + REDISMODULE_KEYTYPE_SET + REDISMODULE_KEYTYPE_ZSET + +The above are just the usual Redis key types, with the addition of an empty +type, that signals the key pointer is associated with an empty key that +does not yet exists. + +## Creating new keys + +To create a new key, open it for writing and then write to it using one +of the key writing functions. Example: + + RedisModuleKey *key; + key = RedisModule_OpenKey(ctx,argv[1],REDISMODULE_WRITE); + if (RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_EMPTY) { + RedisModule_StringSet(key,argv[2]); + } + +## Deleting keys + +Just use: + + RedisModule_DeleteKey(key); + +The function returns `REDISMODULE_ERR` if the key is not open for writing. +Note that after a key gets deleted, it is setup in order to be targeted +by new key commands. For example `RedisModule_KeyType()` will return it is +an empty key, and writing to it will create a new key, possibly of another +type (depending on the API used). + +## Managing key expires (TTLs) + +To control key expires two functions are provided, that are able to set, +modify, get, and unset the time to live associated with a key. + +One function is used in order to query the current expire of an open key: + + mstime_t RedisModule_GetExpire(RedisModuleKey *key); + +The function returns the time to live of the key in milliseconds, or +`REDISMODULE_NO_EXPIRE` as a special value to signal the key has no associated +expire or does not exist at all (you can differentiate the two cases checking +if the key type is `REDISMODULE_KEYTYPE_EMPTY`). + +In order to change the expire of a key the following function is used instead: + + int RedisModule_SetExpire(RedisModuleKey *key, mstime_t expire); + +When called on a non existing key, `REDISMODULE_ERR` is returned, because +the function can only associate expires to existing open keys (non existing +open keys are only useful in order to create new values with data type +specific write operations). + +Again the `expire` time is specified in milliseconds. 
If the key has currently +no expire, a new expire is set. If the key already have an expire, it is +replaced with the new value. + +If the key has an expire, and the special value `REDISMODULE_NO_EXPIRE` is +used as a new expire, the expire is removed, similarly to the Redis +[`PERSIST`]({{< relref "/commands/persist" >}}) command. In case the key was already persistent, no operation is +performed. + +## Obtaining the length of values + +There is a single function in order to retrieve the length of the value +associated to an open key. The returned length is value-specific, and is +the string length for strings, and the number of elements for the aggregated +data types (how many elements there is in a list, set, sorted set, hash). + + size_t len = RedisModule_ValueLength(key); + +If the key does not exist, 0 is returned by the function: + +## String type API + +Setting a new string value, like the Redis [`SET`]({{< relref "/commands/set" >}}) command does, is performed +using: + + int RedisModule_StringSet(RedisModuleKey *key, RedisModuleString *str); + +The function works exactly like the Redis [`SET`]({{< relref "/commands/set" >}}) command itself, that is, if +there is a prior value (of any type) it will be deleted. + +Accessing existing string values is performed using DMA (direct memory +access) for speed. The API will return a pointer and a length, so that's +possible to access and, if needed, modify the string directly. + + size_t len, j; + char *myptr = RedisModule_StringDMA(key,&len,REDISMODULE_WRITE); + for (j = 0; j < len; j++) myptr[j] = 'A'; + +In the above example we write directly on the string. Note that if you want +to write, you must be sure to ask for `WRITE` mode. + +DMA pointers are only valid if no other operations are performed with the key +before using the pointer, after the DMA call. + +Sometimes when we want to manipulate strings directly, we need to change +their size as well. For this scope, the `RedisModule_StringTruncate` function +is used. Example: + + RedisModule_StringTruncate(mykey,1024); + +The function truncates, or enlarges the string as needed, padding it with +zero bytes if the previous length is smaller than the new length we request. +If the string does not exist since `key` is associated to an open empty key, +a string value is created and associated to the key. + +Note that every time `StringTruncate()` is called, we need to re-obtain +the DMA pointer again, since the old may be invalid. + +## List type API + +It's possible to push and pop values from list values: + + int RedisModule_ListPush(RedisModuleKey *key, int where, RedisModuleString *ele); + RedisModuleString *RedisModule_ListPop(RedisModuleKey *key, int where); + +In both the APIs the `where` argument specifies if to push or pop from tail +or head, using the following macros: + + REDISMODULE_LIST_HEAD + REDISMODULE_LIST_TAIL + +Elements returned by `RedisModule_ListPop()` are like strings created with +`RedisModule_CreateString()`, they must be released with +`RedisModule_FreeString()` or by enabling automatic memory management. + +## Set type API + +Work in progress. 
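Before moving on to the remaining type APIs, here is a hedged sketch tying the previous sections together: a hypothetical command that pushes an element onto a list value using the low level API, with the usual arity and type checks. The command name, reply choice, and error handling are illustrative only.

    /* Hypothetical MYMODULE.LPUSH key value -- push on the head of a list
     * using the low level API and reply with the new length. */
    int MyListPush_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (argc != 3) return RedisModule_WrongArity(ctx);
        RedisModule_AutoMemory(ctx);   /* No need to close the key explicitly. */

        RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],
            REDISMODULE_READ|REDISMODULE_WRITE);
        int type = RedisModule_KeyType(key);
        if (type != REDISMODULE_KEYTYPE_EMPTY &&
            type != REDISMODULE_KEYTYPE_LIST)
        {
            return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);
        }

        RedisModule_ListPush(key,REDISMODULE_LIST_HEAD,argv[2]);
        return RedisModule_ReplyWithLongLong(ctx,RedisModule_ValueLength(key));
    }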
+ +## Sorted set type API + +Documentation missing, please refer to the top comments inside `module.c` +for the following functions: + +* `RedisModule_ZsetAdd` +* `RedisModule_ZsetIncrby` +* `RedisModule_ZsetScore` +* `RedisModule_ZsetRem` + +And for the sorted set iterator: + +* `RedisModule_ZsetRangeStop` +* `RedisModule_ZsetFirstInScoreRange` +* `RedisModule_ZsetLastInScoreRange` +* `RedisModule_ZsetFirstInLexRange` +* `RedisModule_ZsetLastInLexRange` +* `RedisModule_ZsetRangeCurrentElement` +* `RedisModule_ZsetRangeNext` +* `RedisModule_ZsetRangePrev` +* `RedisModule_ZsetRangeEndReached` + +## Hash type API + +Documentation missing, please refer to the top comments inside `module.c` +for the following functions: + +* `RedisModule_HashSet` +* `RedisModule_HashGet` + +## Iterating aggregated values + +Work in progress. + +## Replicating commands + +If you want to use module commands exactly like normal Redis commands, in the +context of replicated Redis instances, or using the AOF file for persistence, +it is important for module commands to handle their replication in a consistent +way. + +When using the higher level APIs to invoke commands, replication happens +automatically if you use the "!" modifier in the format string of +`RedisModule_Call()` as in the following example: + + reply = RedisModule_Call(ctx,"INCRBY","!sc",argv[1],"10"); + +As you can see the format specifier is `"!sc"`. The bang is not parsed as a +format specifier, but it internally flags the command as "must replicate". + +If you use the above programming style, there are no problems. +However sometimes things are more complex than that, and you use the low level +API. In this case, if there are no side effects in the command execution, and +it consistently always performs the same work, what is possible to do is to +replicate the command verbatim as the user executed it. To do that, you just +need to call the following function: + + RedisModule_ReplicateVerbatim(ctx); + +When you use the above API, you should not use any other replication function +since they are not guaranteed to mix well. + +However this is not the only option. It's also possible to exactly tell +Redis what commands to replicate as the effect of the command execution, using +an API similar to `RedisModule_Call()` but that instead of calling the command +sends it to the AOF / replicas stream. Example: + + RedisModule_Replicate(ctx,"INCRBY","cl","foo",my_increment); + +It's possible to call `RedisModule_Replicate` multiple times, and each +will emit a command. All the sequence emitted is wrapped between a +`MULTI/EXEC` transaction, so that the AOF and replication effects are the +same as executing a single command. + +Note that `Call()` replication and `Replicate()` replication have a rule, +in case you want to mix both forms of replication (not necessarily a good +idea if there are simpler approaches). Commands replicated with `Call()` +are always the first emitted in the final `MULTI/EXEC` block, while all +the commands emitted with `Replicate()` will follow. + +## Automatic memory management + +Normally when writing programs in the C language, programmers need to manage +memory manually. This is why the Redis modules API has functions to release +strings, close open keys, free replies, and so forth. + +However given that commands are executed in a contained environment and +with a set of strict APIs, Redis is able to provide automatic memory management +to modules, at the cost of some performance (most of the time, a very low +cost). 
+ +When automatic memory management is enabled: + +1. You don't need to close open keys. +2. You don't need to free replies. +3. You don't need to free RedisModuleString objects. + +However you can still do it, if you want. For example, automatic memory +management may be active, but inside a loop allocating a lot of strings, +you may still want to free strings no longer used. + +In order to enable automatic memory management, just call the following +function at the start of the command implementation: + + RedisModule_AutoMemory(ctx); + +Automatic memory management is usually the way to go, however experienced +C programmers may not use it in order to gain some speed and memory usage +benefit. + +## Allocating memory into modules + +Normal C programs use `malloc()` and `free()` in order to allocate and +release memory dynamically. While in Redis modules the use of malloc is +not technically forbidden, it is a lot better to use the Redis Modules +specific functions, that are exact replacements for `malloc`, `free`, +`realloc` and `strdup`. These functions are: + + void *RedisModule_Alloc(size_t bytes); + void* RedisModule_Realloc(void *ptr, size_t bytes); + void RedisModule_Free(void *ptr); + void RedisModule_Calloc(size_t nmemb, size_t size); + char *RedisModule_Strdup(const char *str); + +They work exactly like their `libc` equivalent calls, however they use +the same allocator Redis uses, and the memory allocated using these +functions is reported by the [`INFO`]({{< relref "/commands/info" >}}) command in the memory section, is +accounted when enforcing the `maxmemory` policy, and in general is +a first citizen of the Redis executable. On the contrary, the method +allocated inside modules with libc `malloc()` is transparent to Redis. + +Another reason to use the modules functions in order to allocate memory +is that, when creating native data types inside modules, the RDB loading +functions can return deserialized strings (from the RDB file) directly +as `RedisModule_Alloc()` allocations, so they can be used directly to +populate data structures after loading, instead of having to copy them +to the data structure. + +## Pool allocator + +Sometimes in commands implementations, it is required to perform many +small allocations that will be not retained at the end of the command +execution, but are just functional to execute the command itself. + +This work can be more easily accomplished using the Redis pool allocator: + + void *RedisModule_PoolAlloc(RedisModuleCtx *ctx, size_t bytes); + +It works similarly to `malloc()`, and returns memory aligned to the +next power of two of greater or equal to `bytes` (for a maximum alignment +of 8 bytes). However it allocates memory in blocks, so it the overhead +of the allocations is small, and more important, the memory allocated +is automatically released when the command returns. + +So in general short living allocations are a good candidates for the pool +allocator. 
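As a short sketch of both automatic memory management and the pool allocator in action, the hypothetical command below builds a temporary scratch buffer that should disappear when the command returns; the command name is illustrative only.

    /* Hypothetical MYMODULE.REVERSE key-less command: replies with the
     * argument reversed, using pool memory for the scratch buffer. */
    int MyReverse_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (argc != 2) return RedisModule_WrongArity(ctx);
        RedisModule_AutoMemory(ctx);   /* Keys, strings and replies auto-released. */

        size_t len;
        const char *p = RedisModule_StringPtrLen(argv[1],&len);

        /* Pool memory: released automatically when the command returns. */
        char *tmp = RedisModule_PoolAlloc(ctx,len);
        for (size_t j = 0; j < len; j++) tmp[j] = p[len-1-j];

        return RedisModule_ReplyWithStringBuffer(ctx,tmp,len);
    }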
+ +## Writing commands compatible with Redis Cluster + +Documentation missing, please check the following functions inside `module.c`: + + RedisModule_IsKeysPositionRequest(ctx); + RedisModule_KeyAtPos(ctx,pos); +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Redis serialization protocol (RESP) is the wire protocol that clients + implement +linkTitle: Protocol spec +title: Redis serialization protocol specification +weight: 4 +--- + +To communicate with the Redis server, Redis clients use a protocol called Redis Serialization Protocol (RESP). +While the protocol was designed specifically for Redis, you can use it for other client-server software projects. + +RESP is a compromise among the following considerations: + +* Simple to implement. +* Fast to parse. +* Human readable. + +RESP can serialize different data types including integers, strings, and arrays. +It also features an error-specific type. +A client sends a request to the Redis server as an array of strings. +The array's contents are the command and its arguments that the server should execute. +The server's reply type is command-specific. + +RESP is binary-safe and uses prefixed length to transfer bulk data so it does not require processing bulk data transferred from one process to another. + +RESP is the protocol you should implement in your Redis client. + +{{% alert title="Note" color="info" %}} +The protocol outlined here is used only for client-server communication. +[Redis Cluster]({{< relref "/operate/oss_and_stack/reference/cluster-spec" >}}) uses a different binary protocol for exchanging messages between nodes. +{{% /alert %}} + +## RESP versions +Support for the first version of the RESP protocol was introduced in Redis 1.2. +Using RESP with Redis 1.2 was optional and had mainly served the purpose of working the kinks out of the protocol. + +In Redis 2.0, the protocol's next version, a.k.a RESP2, became the standard communication method for clients with the Redis server. + +[RESP3](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md) is a superset of RESP2 that mainly aims to make a client author's life a little bit easier. +Redis 6.0 introduced experimental opt-in support of RESP3's features (excluding streaming strings and streaming aggregates). +In addition, the introduction of the [`HELLO`]({{< relref "/commands/hello" >}}) command allows clients to handshake and upgrade the connection's protocol version (see [Client handshake](#client-handshake)). + +From Redis version 7 and forward, both RESP2 and RESP3 clients can invoke all core commands. +However, commands may return differently typed replies for different protocol versions. +Each command has descriptions of RESP2 and RESP3 return values that you can reference. + +Future versions of Redis may change the default protocol version, but it is unlikely that RESP2 will become entirely deprecated. +It is possible, however, that new features in upcoming versions will require the use of RESP3. + +## Network layer +A client connects to a Redis server by creating a TCP connection to its port (the default is 6379). + +While RESP is technically non-TCP specific, the protocol is used exclusively with TCP connections (or equivalent stream-oriented connections like Unix sockets) in the context of Redis. + +## Request-Response model +The Redis server accepts commands composed of different arguments. +Then, the server processes the command and sends the reply back to the client. 
+ +This is the simplest model possible; however, there are some exceptions: + +* Redis requests can be [pipelined](#multiple-commands-and-pipelining). + Pipelining enables clients to send multiple commands at once and wait for replies later. +* When a RESP2 connection subscribes to a [Pub/Sub]({{< relref "/develop/interact/pubsub" >}}) channel, the protocol changes semantics and becomes a *push* protocol. + The client no longer requires sending commands because the server will automatically send new messages to the client (for the channels the client is subscribed to) as soon as they are received. +* The [`MONITOR`]({{< relref "/commands/monitor" >}}) command. + Invoking the [`MONITOR`]({{< relref "/commands/monitor" >}}) command switches the connection to an ad-hoc push mode. + The protocol of this mode is not specified but is obvious to parse. +* [Protected mode]({{< relref "operate/oss_and_stack/management/security/#protected-mode" >}}). + Connections opened from a non-loopback address to a Redis while in protected mode are denied and terminated by the server. + Before terminating the connection, Redis unconditionally sends a `-DENIED` reply, regardless of whether the client writes to the socket. +* The [RESP3 Push type](#resp3-pushes). + As the name suggests, a push type allows the server to send out-of-band data to the connection. + The server may push data at any time, and the data isn't necessarily related to specific commands executed by the client. + +Excluding these exceptions, the Redis protocol is a simple request-response protocol. + +## RESP protocol description +RESP is essentially a serialization protocol that supports several data types. +In RESP, the first byte of data determines its type. + +Redis generally uses RESP as a [request-response](#request-response-model) protocol in the following way: + +* Clients send commands to a Redis server as an [array](#arrays) of [bulk strings](#bulk-strings). + The first (and sometimes also the second) bulk string in the array is the command's name. + Subsequent elements of the array are the arguments for the command. +* The server replies with a RESP type. + The reply's type is determined by the command's implementation and possibly by the client's protocol version. + +RESP is a binary protocol that uses control sequences encoded in standard ASCII. +The `A` character, for example, is encoded with the binary byte of value 65. +Similarly, the characters CR (`\r`), LF (`\n`) and SP (` `) have binary byte values of 13, 10 and 32, respectively. + +The `\r\n` (CRLF) is the protocol's _terminator_, which **always** separates its parts. + +The first byte in an RESP-serialized payload always identifies its type. +Subsequent bytes constitute the type's contents. + +We categorize every RESP data type as either _simple_, _bulk_ or _aggregate_. + +Simple types are similar to scalars in programming languages that represent plain literal values. Booleans and Integers are such examples. + +RESP strings are either _simple_ or _bulk_. +Simple strings never contain carriage return (`\r`) or line feed (`\n`) characters. +Bulk strings can contain any binary data and may also be referred to as _binary_ or _blob_. +Note that bulk strings may be further encoded and decoded, e.g. with a wide multi-byte encoding, by the client. + +Aggregates, such as Arrays and Maps, can have varying numbers of sub-elements and nesting levels. 
+ +The following table summarizes the RESP data types that Redis supports: + +| RESP data type | Minimal protocol version | Category | First byte | +| --- | --- | --- | --- | +| [Simple strings](#simple-strings) | RESP2 | Simple | `+` | +| [Simple Errors](#simple-errors) | RESP2 | Simple | `-` | +| [Integers](#integers) | RESP2 | Simple | `:` | +| [Bulk strings](#bulk-strings) | RESP2 | Aggregate | `$` | +| [Arrays](#arrays) | RESP2 | Aggregate | `*` | +| [Nulls](#nulls) | RESP3 | Simple | `_` | +| [Booleans](#booleans) | RESP3 | Simple | `#` | +| [Doubles](#doubles) | RESP3 | Simple | `,` | +| [Big numbers](#big-numbers) | RESP3 | Simple | `(` | +| [Bulk errors](#bulk-errors) | RESP3 | Aggregate | `!` | +| [Verbatim strings](#verbatim-strings) | RESP3 | Aggregate | `=` | +| [Maps](#maps) | RESP3 | Aggregate | `%` | +| [Attributes](#attributes) | RESP3 | Aggregate | `|` | +| [Sets](#sets) | RESP3 | Aggregate | `~` | +| [Pushes](#pushes) | RESP3 | Aggregate | `>` | + + + +### Simple strings +Simple strings are encoded as a plus (`+`) character, followed by a string. +The string mustn't contain a CR (`\r`) or LF (`\n`) character and is terminated by CRLF (i.e., `\r\n`). + +Simple strings transmit short, non-binary strings with minimal overhead. +For example, many Redis commands reply with just "OK" on success. +The encoding of this Simple String is the following 5 bytes: + + +OK\r\n + +When Redis replies with a simple string, a client library should return to the caller a string value composed of the first character after the `+` up to the end of the string, excluding the final CRLF bytes. + +To send binary strings, use [bulk strings](#bulk-strings) instead. + + + +### Simple errors +RESP has specific data types for errors. +Simple errors, or simply just errors, are similar to [simple strings](#simple-strings), but their first character is the minus (`-`) character. +The difference between simple strings and errors in RESP is that clients should treat errors as exceptions, whereas the string encoded in the error type is the error message itself. + +The basic format is: + + -Error message\r\n + +Redis replies with an error only when something goes wrong, for example, when you try to operate against the wrong data type, or when the command does not exist. +The client should raise an exception when it receives an Error reply. + +The following are examples of error replies: + + -ERR unknown command 'asdf' + -WRONGTYPE Operation against a key holding the wrong kind of value + +The first upper-case word after the `-`, up to the first space or newline, represents the kind of error returned. +This word is called an _error prefix_. +Note that the error prefix is a convention used by Redis rather than part of the RESP error type. + +For example, in Redis, `ERR` is a generic error, whereas `WRONGTYPE` is a more specific error that implies that the client attempted an operation against the wrong data type. +The error prefix allows the client to understand the type of error returned by the server without checking the exact error message. + +A client implementation can return different types of exceptions for various errors, or provide a generic way for trapping errors by directly providing the error name to the caller as a string. + +However, such a feature should not be considered vital as it is rarely useful. +Also, simpler client implementations can return a generic error value, such as `false`. 
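+
+As an illustration only (not part of the protocol specification), here is a minimal Python sketch of how a client might decode these two simple types, mapping simple strings to native strings and simple errors to exceptions; the `RedisError` class and `decode_simple_reply()` helper are hypothetical names:
+
+```python
+class RedisError(Exception):
+    """Hypothetical client-side exception used to surface RESP simple errors."""
+
+
+def decode_simple_reply(line: bytes):
+    """Decode a single simple-string or simple-error reply line (terminated by CRLF)."""
+    payload = line[1:].rstrip(b"\r\n").decode()
+    if line.startswith(b"+"):
+        return payload                 # e.g. b"+OK\r\n" -> "OK"
+    if line.startswith(b"-"):
+        raise RedisError(payload)      # e.g. b"-ERR unknown command 'asdf'\r\n"
+    raise ValueError("not a simple string or a simple error")
+
+
+print(decode_simple_reply(b"+OK\r\n"))  # prints: OK
+```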
+
+
+
+### Integers
+This type is a CRLF-terminated string that represents a signed, base-10, 64-bit integer.
+
+RESP encodes integers in the following way:
+
+    :[<+|->]<value>\r\n
+
+* The colon (`:`) as the first byte.
+* An optional plus (`+`) or minus (`-`) as the sign.
+* One or more decimal digits (`0`..`9`) as the integer's unsigned, base-10 value.
+* The CRLF terminator.
+
+For example, `:0\r\n` and `:1000\r\n` are integer replies (of zero and one thousand, respectively).
+
+Many Redis commands return RESP integers, including [`INCR`]({{< relref "/commands/incr" >}}), [`LLEN`]({{< relref "/commands/llen" >}}), and [`LASTSAVE`]({{< relref "/commands/lastsave" >}}).
+An integer, by itself, has no special meaning other than in the context of the command that returned it.
+For example, it is an incremental number for [`INCR`]({{< relref "/commands/incr" >}}), a UNIX timestamp for [`LASTSAVE`]({{< relref "/commands/lastsave" >}}), and so forth.
+However, the returned integer is guaranteed to be in the range of a signed 64-bit integer.
+
+In some cases, integers can represent true and false Boolean values.
+For instance, [`SISMEMBER`]({{< relref "/commands/sismember" >}}) returns 1 for true and 0 for false.
+
+Other commands, including [`SADD`]({{< relref "/commands/sadd" >}}), [`SREM`]({{< relref "/commands/srem" >}}), and [`SETNX`]({{< relref "/commands/setnx" >}}), return 1 when the data changes and 0 otherwise.
+
+
+
+### Bulk strings
+A bulk string represents a single binary string.
+The string can be of any size, but by default, Redis limits it to 512 MB (see the `proto-max-bulk-len` configuration directive).
+
+RESP encodes bulk strings in the following way:
+
+    $<length>\r\n<data>\r\n
+
+* The dollar sign (`$`) as the first byte.
+* One or more decimal digits (`0`..`9`) as the string's length, in bytes, as an unsigned, base-10 value.
+* The CRLF terminator.
+* The data.
+* A final CRLF.
+
+So the string "hello" is encoded as follows:
+
+    $5\r\nhello\r\n
+
+The empty string's encoding is:
+
+    $0\r\n\r\n
+
+
+
+#### Null bulk strings
+Whereas RESP3 has a dedicated data type for [null values](#nulls), RESP2 has no such type.
+Instead, due to historical reasons, the representation of null values in RESP2 is via predetermined forms of the [bulk strings](#bulk-strings) and [arrays](#arrays) types.
+
+The null bulk string represents a non-existing value.
+The [`GET`]({{< relref "/commands/get" >}}) command returns the Null Bulk String when the target key doesn't exist.
+
+It is encoded as a bulk string with the length of negative one (-1), like so:
+
+    $-1\r\n
+
+A Redis client should return a nil object when the server replies with a null bulk string rather than the empty string.
+For example, a Ruby library should return `nil` while a C library should return `NULL` (or set a special flag in the reply object).
+
+
+
+### Arrays
+Clients send commands to the Redis server as RESP arrays.
+Similarly, some Redis commands that return collections of elements use arrays as their replies.
+An example is the [`LRANGE`]({{< relref "/commands/lrange" >}}) command that returns elements of a list.
+
+RESP Arrays' encoding uses the following format:
+
+    *<number-of-elements>\r\n<element-1>...<element-n>
+
+* An asterisk (`*`) as the first byte.
+* One or more decimal digits (`0`..`9`) as the number of elements in the array as an unsigned, base-10 value.
+* The CRLF terminator.
+* An additional RESP type for every element of the array.
+ +So an empty Array is just the following: + + *0\r\n + +Whereas the encoding of an array consisting of the two bulk strings "hello" and "world" is: + + *2\r\n$5\r\nhello\r\n$5\r\nworld\r\n + +As you can see, after the `*CRLF` part prefixing the array, the other data types that compose the array are concatenated one after the other. +For example, an Array of three integers is encoded as follows: + + *3\r\n:1\r\n:2\r\n:3\r\n + +Arrays can contain mixed data types. +For instance, the following encoding is of a list of four integers and a bulk string: + + *5\r\n + :1\r\n + :2\r\n + :3\r\n + :4\r\n + $5\r\n + hello\r\n + +(The raw RESP encoding is split into multiple lines for readability). + +The first line the server sent is `*5\r\n`. +This numeric value tells the client that five reply types are about to follow it. +Then, every successive reply constitutes an element in the array. + +All of the aggregate RESP types support nesting. +For example, a nested array of two arrays is encoded as follows: + + *2\r\n + *3\r\n + :1\r\n + :2\r\n + :3\r\n + *2\r\n + +Hello\r\n + -World\r\n + +(The raw RESP encoding is split into multiple lines for readability). + +The above encodes a two-element array. +The first element is an array that, in turn, contains three integers (1, 2, 3). +The second element is another array containing a simple string and an error. + +{{% alert title="Multi bulk reply" color="info" %}} +In some places, the RESP Array type may be referred to as _multi bulk_. +The two are the same. +{{% /alert %}} + + + +#### Null arrays +Whereas RESP3 has a dedicated data type for [null values](#nulls), RESP2 has no such type. Instead, due to historical reasons, the representation of null values in RESP2 is via predetermined forms of the [Bulk Strings](#bulk-strings) and [arrays](#arrays) types. + +Null arrays exist as an alternative way of representing a null value. +For instance, when the [`BLPOP`]({{< relref "/commands/blpop" >}}) command times out, it returns a null array. + +The encoding of a null array is that of an array with the length of -1, i.e.: + + *-1\r\n + +When Redis replies with a null array, the client should return a null object rather than an empty array. +This is necessary to distinguish between an empty list and a different condition (for instance, the timeout condition of the [`BLPOP`]({{< relref "/commands/blpop" >}}) command). + +#### Null elements in arrays +Single elements of an array may be [null bulk string](#null-bulk-strings). +This is used in Redis replies to signal that these elements are missing and not empty strings. This can happen, for example, with the [`SORT`]({{< relref "/commands/sort" >}}) command when used with the `GET pattern` option +if the specified key is missing. + +Here's an example of an array reply containing a null element: + + *3\r\n + $5\r\n + hello\r\n + $-1\r\n + $5\r\n + world\r\n + +Above, the second element is null. +The client library should return to its caller something like this: + + ["hello",nil,"world"] + + + +### Nulls +The null data type represents non-existent values. + +Nulls' encoding is the underscore (`_`) character, followed by the CRLF terminator (`\r\n`). +Here's Null's raw RESP encoding: + + _\r\n + +{{% alert title="Null Bulk String, Null Arrays and Nulls" color="info" %}} +Due to historical reasons, RESP2 features two specially crafted values for representing null values of bulk strings and arrays. +This duality has always been a redundancy that added zero semantical value to the protocol itself. 
+
+The null type, introduced in RESP3, aims to fix this wrong.
+{{% /alert %}}
+
+
+
+### Booleans
+RESP booleans are encoded as follows:
+
+    #<t|f>\r\n
+
+* The octothorpe character (`#`) as the first byte.
+* A `t` character for true values, or an `f` character for false ones.
+* The CRLF terminator.
+
+
+
+### Doubles
+The Double RESP type encodes a double-precision floating point value.
+Doubles are encoded as follows:
+
+    ,[<+|->]<integral>[.<fractional>][<E|e>[sign]<exponent>]\r\n
+
+* The comma character (`,`) as the first byte.
+* An optional plus (`+`) or minus (`-`) as the sign.
+* One or more decimal digits (`0`..`9`) as an unsigned, base-10 integral value.
+* An optional dot (`.`), followed by one or more decimal digits (`0`..`9`) as an unsigned, base-10 fractional value.
+* An optional capital or lowercase letter E (`E` or `e`), followed by an optional plus (`+`) or minus (`-`) as the exponent's sign, ending with one or more decimal digits (`0`..`9`) as an unsigned, base-10 exponent value.
+* The CRLF terminator.
+
+Here's the encoding of the number 1.23:
+
+    ,1.23\r\n
+
+Because the fractional part is optional, the integer value of ten (10) can, therefore, be RESP-encoded both as an integer as well as a double:
+
+    :10\r\n
+    ,10\r\n
+
+In such cases, the Redis client should return native integer and double values, respectively, providing that these types are supported by the language of its implementation.
+
+The positive infinity, negative infinity and NaN values are encoded as follows:
+
+    ,inf\r\n
+    ,-inf\r\n
+    ,nan\r\n
+
+
+
+### Big numbers
+This type can encode integer values outside the range of signed 64-bit integers.
+
+Big numbers use the following encoding:
+
+    ([+|-]<number>\r\n
+
+* The left parenthesis character (`(`) as the first byte.
+* An optional plus (`+`) or minus (`-`) as the sign.
+* One or more decimal digits (`0`..`9`) as an unsigned, base-10 value.
+* The CRLF terminator.
+
+Example:
+
+    (3492890328409238509324850943850943825024385\r\n
+
+Big numbers can be positive or negative but can't include fractionals.
+Client libraries written in languages with a big number type should return a big number.
+When big numbers aren't supported, the client should return a string and, when possible, signal to the caller that the reply is a big integer (depending on the API used by the client library).
+
+
+
+### Bulk errors
+This type combines the purpose of [simple errors](#simple-errors) with the expressive power of [bulk strings](#bulk-strings).
+
+It is encoded as:
+
+    !<length>\r\n<error>\r\n
+
+* An exclamation mark (`!`) as the first byte.
+* One or more decimal digits (`0`..`9`) as the error's length, in bytes, as an unsigned, base-10 value.
+* The CRLF terminator.
+* The error itself.
+* A final CRLF.
+
+As a convention, the error begins with an uppercase (space-delimited) word that conveys the error message.
+
+For instance, the error "SYNTAX invalid syntax" is represented by the following protocol encoding:
+
+    !21\r\n
+    SYNTAX invalid syntax\r\n
+
+(The raw RESP encoding is split into multiple lines for readability).
+
+
+
+### Verbatim strings
+This type is similar to the [bulk string](#bulk-strings), with the addition of providing a hint about the data's encoding.
+
+A verbatim string's RESP encoding is as follows:
+
+    =<length>\r\n<encoding>:<data>\r\n
+
+* An equal sign (`=`) as the first byte.
+* One or more decimal digits (`0`..`9`) as the string's total length, in bytes, as an unsigned, base-10 value.
+* The CRLF terminator.
+* Exactly three (3) bytes represent the data's encoding.
+* The colon (`:`) character separates the encoding and data.
+* The data.
+* A final CRLF.
+
+Example:
+
+    =15\r\n
+    txt:Some string\r\n
+
+(The raw RESP encoding is split into multiple lines for readability).
+
+Some client libraries may ignore the difference between this type and the string type and return a native string in both cases.
+However, interactive clients, such as command line interfaces (e.g., [`redis-cli`]({{< relref "/develop/tools/cli" >}})), can use this type and know that their output should be presented to the human user as is and without quoting the string.
+
+For example, the Redis command [`INFO`]({{< relref "/commands/info" >}}) outputs a report that includes newlines.
+When using RESP3, `redis-cli` displays it correctly because it is sent as a Verbatim String reply (with its three bytes being "txt").
+When using RESP2, however, the `redis-cli` is hard-coded to look for the [`INFO`]({{< relref "/commands/info" >}}) command to ensure its correct display to the user.
+
+
+
+### Maps
+The RESP map encodes a collection of key-value tuples, i.e., a dictionary or a hash.
+
+It is encoded as follows:
+
+    %<number-of-entries>\r\n<key-1><value-1>...<key-n><value-n>
+
+* A percent character (`%`) as the first byte.
+* One or more decimal digits (`0`..`9`) as the number of entries, or key-value tuples, in the map as an unsigned, base-10 value.
+* The CRLF terminator.
+* Two additional RESP types for every key and value in the map.
+
+For example, the following JSON object:
+
+    {
+        "first": 1,
+        "second": 2
+    }
+
+Can be encoded in RESP like so:
+
+    %2\r\n
+    +first\r\n
+    :1\r\n
+    +second\r\n
+    :2\r\n
+
+(The raw RESP encoding is split into multiple lines for readability).
+
+Both map keys and values can be any of RESP's types.
+
+Redis clients should return the idiomatic dictionary type that their language provides.
+However, low-level programming languages (such as C, for example) will likely return an array along with type information that indicates to the caller that it is a dictionary.
+
+{{% alert title="Map pattern in RESP2" color="info" %}}
+RESP2 doesn't have a map type.
+A map in RESP2 is represented by a flat array containing the keys and the values.
+The first element is a key, followed by the corresponding value, then the next key and so on, like this:
+`key1, value1, key2, value2, ...`.
+{{% /alert %}}
+
+
+
+### Attributes
+
+The attribute type is exactly like the Map type, but instead of a `%` character as the first byte, the `|` character is used. Attributes describe a dictionary exactly like the Map type. However the client should not consider such a dictionary part of the reply, but as auxiliary data that augments the reply.
+
+Note: in the examples below, indentation is shown only for clarity; the additional whitespace would not be part of a real reply.
+
+For example, newer versions of Redis may include the ability to report the popularity of keys for every executed command. The reply to the command `MGET a b` may be the following:
+
+    |1\r\n
+        +key-popularity\r\n
+        %2\r\n
+            $1\r\n
+            a\r\n
+            ,0.1923\r\n
+            $1\r\n
+            b\r\n
+            ,0.0012\r\n
+    *2\r\n
+        :2039123\r\n
+        :9543892\r\n
+
+The actual reply to `MGET` is just the two item array `[2039123, 9543892]`. The returned attributes specify the popularity, or frequency of requests, given as floating point numbers ranging from `0.0` to `1.0`, of the keys mentioned in the original command. Note: the actual implementation in Redis may differ.
+
+When a client reads a reply and encounters an attribute type, it should read the attribute, and continue reading the reply.
The attribute reply should be accumulated separately, and the user should have a way to access such attributes. For instance, if we imagine a session in a higher-level language, something like this could happen:
+
+```python
+> r = Redis.new
+#
+> r.mget("a","b")
+#
+> r
+[2039123,9543892]
+> r.attribs
+{:key-popularity => {:a => 0.1923, :b => 0.0012}}
+```
+
+Attributes can appear anywhere before a valid part of the protocol identifying a given type, and supply information only about the part of the reply that immediately follows. For example:
+
+    *3\r\n
+    :1\r\n
+    :2\r\n
+    |1\r\n
+        +ttl\r\n
+        :3600\r\n
+    :3\r\n
+
+In the above example the third element of the array has associated auxiliary information of `{ttl:3600}`. Note that it's not up to the client library to interpret the attributes, but it should pass them to the caller in a sensible way.
+
+
+
+### Sets
+Sets are somewhat like [Arrays](#arrays) but are unordered and should only contain unique elements.
+
+RESP set's encoding is:
+
+    ~<number-of-elements>\r\n<element-1>...<element-n>
+
+* A tilde (`~`) as the first byte.
+* One or more decimal digits (`0`..`9`) as the number of elements in the set as an unsigned, base-10 value.
+* The CRLF terminator.
+* An additional RESP type for every element of the Set.
+
+Clients should return the native set type if it is available in their programming language.
+Alternatively, in the absence of a native set type, an array coupled with type information can be used (in C, for example).
+
+
+
+### Pushes
+RESP's pushes contain out-of-band data.
+They are an exception to the protocol's request-response model and provide a generic _push mode_ for connections.
+
+Push events are encoded similarly to [arrays](#arrays), differing only in their first byte:
+
+    ><number-of-elements>\r\n<element-1>...<element-n>
+
+* A greater-than sign (`>`) as the first byte.
+* One or more decimal digits (`0`..`9`) as the number of elements in the message as an unsigned, base-10 value.
+* The CRLF terminator.
+* An additional RESP type for every element of the push event.
+
+Pushed data may precede or follow any of RESP's data types but never inside them.
+That means a client won't find push data in the middle of a map reply, for example.
+It also means that pushed data may appear before or after a command's reply, as well as by itself (without calling any command).
+
+Clients should react to pushes by invoking a callback that implements their handling of the pushed data.
+
+## Client handshake
+New RESP connections should begin the session by calling the [`HELLO`]({{< relref "/commands/hello" >}}) command.
+This practice accomplishes two things:
+
+1. It allows servers to be backward compatible with RESP2 versions.
+   This is needed in Redis to make the transition to version 3 of the protocol gentler.
+2. The [`HELLO`]({{< relref "/commands/hello" >}}) command returns information about the server and the protocol that the client can use for different goals.
+
+The [`HELLO`]({{< relref "/commands/hello" >}}) command has the following high-level syntax:
+
+    HELLO <protocol-version> [optional-arguments]
+
+The first argument of the command is the protocol version we want the connection to be set to.
+By default, the connection starts in RESP2 mode.
+If we specify a connection version that is too big and unsupported by the server, it should reply with a `-NOPROTO` error. Example:
+
+    Client: HELLO 4
+    Server: -NOPROTO sorry, this protocol version is not supported.
+
+At that point, the client may retry with a lower protocol version.
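+
+To illustrate the retry, here is a small, hedged Python sketch of such a fallback; `send_command()` stands in for a hypothetical helper that sends a command and raises an exception for error replies, and it is not part of any specific client library:
+
+```python
+def negotiate_protocol(send_command):
+    """Try to upgrade the connection to RESP3, falling back to RESP2 (sketch only)."""
+    try:
+        # Ask for protocol version 3; on success the reply is a map describing the server.
+        return 3, send_command("HELLO", "3")
+    except Exception as err:
+        # A server that understands HELLO but not the requested version
+        # replies with a -NOPROTO error; stay in the default RESP2 mode in that case.
+        if "NOPROTO" in str(err):
+            return 2, None
+        raise
+```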
+ +Similarly, the client can easily detect a server that is only able to speak RESP2: + + Client: HELLO 3 + Server: -ERR unknown command 'HELLO' + +The client can then proceed and use RESP2 to communicate with the server. + +Note that even if the protocol's version is supported, the [`HELLO`]({{< relref "/commands/hello" >}}) command may return an error, perform no action and remain in RESP2 mode. +For example, when used with invalid authentication credentials in the command's optional `AUTH` clause: + + Client: HELLO 3 AUTH default mypassword + Server: -ERR invalid password + (the connection remains in RESP2 mode) + +A successful reply to the [`HELLO`]({{< relref "/commands/hello" >}}) command is a map reply. +The information in the reply is partly server-dependent, but certain fields are mandatory for all the RESP3 implementations: +* **server**: "redis" (or other software name). +* **version**: the server's version. +* **proto**: the highest supported version of the RESP protocol. + +In Redis' RESP3 implementation, the following fields are also emitted: + +* **id**: the connection's identifier (ID). +* **mode**: "standalone", "sentinel" or "cluster". +* **role**: "master" or "replica". +* **modules**: list of loaded modules as an Array of Bulk Strings. + +## Sending commands to a Redis server +Now that you are familiar with the RESP serialization format, you can use it to help write a Redis client library. +We can further specify how the interaction between the client and the server works: + +* A client sends the Redis server an [array](#arrays) consisting of only bulk strings. +* A Redis server replies to clients, sending any valid RESP data type as a reply. + +So, for example, a typical interaction could be the following. + +The client sends the command `LLEN mylist` to get the length of the list stored at the key _mylist_. +Then the server replies with an [integer](#integers) reply as in the following example (`C:` is the client, `S:` the server). + + C: *2\r\n + C: $4\r\n + C: LLEN\r\n + C: $6\r\n + C: mylist\r\n + + S: :48293\r\n + +As usual, we separate different parts of the protocol with newlines for simplicity, but the actual interaction is the client sending `*2\r\n$4\r\nLLEN\r\n$6\r\nmylist\r\n` as a whole. + +## Multiple commands and pipelining +A client can use the same connection to issue multiple commands. +Pipelining is supported, so multiple commands can be sent with a single write operation by the client. +The client can skip reading replies and continue to send the commands one after the other. +All the replies can be read at the end. + +For more information, see [Pipelining]({{< relref "/develop/use/pipelining" >}}). + +## Inline commands +Sometimes you may need to send a command to the Redis server but only have `telnet` available. +While the Redis protocol is simple to implement, it is not ideal for interactive sessions, and `redis-cli` may not always be available. +For this reason, Redis also accepts commands in the _inline command_ format. + +The following example demonstrates a server/client exchange using an inline command (the server chat starts with `S:`, the client chat with `C:`): + + C: PING + S: +PONG + +Here's another example of an inline command where the server returns an integer: + + C: EXISTS somekey + S: :0 + +Basically, to issue an inline command, you write space-separated arguments in a telnet session. +Since no command starts with `*` (the identifying byte of RESP Arrays), Redis detects this condition and parses your command inline. 
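+
+Tying the earlier sections together, the following hedged Python sketch encodes a command as a RESP array of bulk strings and sends it over a plain TCP socket; the address, key name, and helper name are illustrative, and the read of the reply is deliberately simplified:
+
+```python
+import socket
+
+def encode_command(*args: str) -> bytes:
+    """Encode a command as a RESP array of bulk strings (illustration only)."""
+    parts = [f"*{len(args)}\r\n".encode()]
+    for arg in args:
+        data = arg.encode()
+        parts.append(b"$%d\r\n%s\r\n" % (len(data), data))
+    return b"".join(parts)
+
+# encode_command("LLEN", "mylist") == b"*2\r\n$4\r\nLLEN\r\n$6\r\nmylist\r\n"
+with socket.create_connection(("127.0.0.1", 6379)) as sock:
+    sock.sendall(encode_command("LLEN", "mylist"))
+    print(sock.recv(1024))  # for example: b":48293\r\n" (an integer reply)
+```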
+
+## High-performance parser for the Redis protocol
+
+While the Redis protocol is human-readable and easy to implement, its implementation can exhibit performance similar to that of a binary protocol.
+
+RESP uses prefixed lengths to transfer bulk data.
+That makes scanning the payload for special characters unnecessary (unlike parsing JSON, for example).
+For the same reason, quoting and escaping the payload isn't needed.
+
+Reading the length of aggregate types (for example, bulk strings or arrays) can be processed with code that performs a single operation per character while at the same time scanning for the CR character.
+
+Example (in C):
+
+```c
+#include <stdio.h>
+
+int main(void) {
+    unsigned char *p = "$123\r\n";
+    int len = 0;
+
+    /* Skip the type byte ('$') and parse the decimal length up to the CR. */
+    p++;
+    while(*p != '\r') {
+        len = (len*10)+(*p - '0');
+        p++;
+    }
+
+    /* Now p points at '\r', and the length is in len. */
+    printf("%d\n", len);
+    return 0;
+}
+```
+
+After the first CR is identified, it can be skipped along with the following LF without further processing.
+Then, the bulk data can be read with a single read operation that doesn't inspect the payload in any way.
+Finally, the remaining CR and LF characters are discarded without additional processing.
+
+While comparable in performance to a binary protocol, the Redis protocol is significantly more straightforward to implement in most high-level languages, reducing the number of bugs in client software.
+
+## Tips for Redis client authors
+
+* For testing purposes, use [Lua's type conversions]({{< relref "develop/interact/programmability/lua-api#lua-to-resp3-type-conversion" >}}) to have Redis reply with any RESP2/RESP3 type needed.
+  As an example, a RESP3 double can be generated like so:
+  ```
+  EVAL "return { double = tonumber(ARGV[1]) }" 0 1e0
+  ```
+
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: How Redis commands expose their documentation programmatically
+linkTitle: Command arguments
+title: Redis command arguments
+weight: 7
+---
+
+The [`COMMAND DOCS`]({{< relref "/commands/command-docs" >}}) command returns documentation-focused information about available Redis commands.
+The map reply that the command returns includes the _arguments_ key.
+This key stores an array that describes the command's arguments.
+
+Every element in the _arguments_ array is a map with the following fields:
+
+* **name:** the argument's name, always present.
+  The name of an argument is given for identification purposes alone.
+  It isn't displayed during the command's syntax rendering.
+  The same name can appear more than once in the entire argument tree, but it is unique compared to other sibling arguments' names.
+  This allows obtaining a unique identifier for each argument (the concatenation of all names in the path from the root to any argument).
+* **display_text:** the argument's display string, present in arguments that have a displayable representation (all arguments that aren't oneof/block).
+  This is the string used in the command's syntax rendering.
+* **type:** the argument's type, always present.
+  An argument must have one of the following types:
+  - **string:** a string argument.
+  - **integer:** an integer argument.
+  - **double:** a double-precision argument.
+  - **key:** a string that represents the name of a key.
+  - **pattern:** a string that represents a glob-like pattern.
+  - **unix-time:** an integer that represents a Unix timestamp.
+ - **pure-token:** an argument is a token, meaning a reserved keyword, which may or may not be provided. + Not to be confused with free-text user input. + - **oneof**: the argument is a container for nested arguments. + This type enables choice among several nested arguments (see the [`XADD`]({{< relref "/commands/xadd" >}}) example below). + - **block:** the argument is a container for nested arguments. + This type enables grouping arguments and applying a property (such as _optional_) to all (see the [`XADD`]({{< relref "/commands/xadd" >}}) example below). +* **key_spec_index:** this value is available for every argument of the _key_ type. + It is a 0-based index of the specification in the command's [key specifications][tr] that corresponds to the argument. +* **token**: a constant literal that precedes the argument (user input) itself. +* **summary:** a short description of the argument. +* **since:** the debut Redis version of the argument (or for module commands, the module version). +* **deprecated_since:** the Redis version that deprecated the command (or for module commands, the module version). +* **flags:** an array of argument flags. + Possible flags are: + - **optional**: denotes that the argument is optional (for example, the _GET_ clause of the [`SET`]({{< relref "/commands/set" >}}) command). + - **multiple**: denotes that the argument may be repeated (such as the _key_ argument of [`DEL`]({{< relref "/commands/del" >}})). + - **multiple-token:** denotes the possible repetition of the argument with its preceding token (see [`SORT`]({{< relref "/commands/sort" >}})'s `GET pattern` clause). +* **value:** the argument's value. + For arguments types other than _oneof_ and _block_, this is a string that describes the value in the command's syntax. + For the _oneof_ and _block_ types, this is an array of nested arguments, each being a map as described in this section. + +[tr]: /develop/reference/key-specs.md + +## Example + +The trimming clause of [`XADD`]({{< relref "/commands/xadd" >}}), i.e., `[MAXLEN|MINID [=|~] threshold [LIMIT count]]`, is represented at the top-level as _block_-typed argument. + +It consists of four nested arguments: + +1. **trimming strategy:** this nested argument has an _oneof_ type with two nested arguments. + Each of the nested arguments, _MAXLEN_ and _MINID_, is typed as _pure-token_. +2. **trimming operator:** this nested argument is an optional _oneof_ type with two nested arguments. + Each of the nested arguments, _=_ and _~_, is a _pure-token_. +3. **threshold:** this nested argument is a _string_. +4. **count:** this nested argument is an optional _integer_ with a _token_ (_LIMIT_). 
+ +Here's [`XADD`]({{< relref "/commands/xadd" >}})'s arguments array: + +``` +1) 1) "name" + 2) "key" + 3) "type" + 4) "key" + 5) "value" + 6) "key" +2) 1) "name" + 2) "nomkstream" + 3) "type" + 4) "pure-token" + 5) "token" + 6) "NOMKSTREAM" + 7) "since" + 8) "6.2" + 9) "flags" + 10) 1) optional +3) 1) "name" + 2) "trim" + 3) "type" + 4) "block" + 5) "flags" + 6) 1) optional + 7) "value" + 8) 1) 1) "name" + 2) "strategy" + 3) "type" + 4) "oneof" + 5) "value" + 6) 1) 1) "name" + 2) "maxlen" + 3) "type" + 4) "pure-token" + 5) "token" + 6) "MAXLEN" + 2) 1) "name" + 2) "minid" + 3) "type" + 4) "pure-token" + 5) "token" + 6) "MINID" + 7) "since" + 8) "6.2" + 2) 1) "name" + 2) "operator" + 3) "type" + 4) "oneof" + 5) "flags" + 6) 1) optional + 7) "value" + 8) 1) 1) "name" + 2) "equal" + 3) "type" + 4) "pure-token" + 5) "token" + 6) "=" + 2) 1) "name" + 2) "approximately" + 3) "type" + 4) "pure-token" + 5) "token" + 6) "~" + 3) 1) "name" + 2) "threshold" + 3) "type" + 4) "string" + 5) "value" + 6) "threshold" + 4) 1) "name" + 2) "count" + 3) "type" + 4) "integer" + 5) "token" + 6) "LIMIT" + 7) "since" + 8) "6.2" + 9) "flags" + 10) 1) optional + 11) "value" + 12) "count" +4) 1) "name" + 2) "id_or_auto" + 3) "type" + 4) "oneof" + 5) "value" + 6) 1) 1) "name" + 2) "auto_id" + 3) "type" + 4) "pure-token" + 5) "token" + 6) "*" + 2) 1) "name" + 2) "id" + 3) "type" + 4) "string" + 5) "value" + 6) "id" +5) 1) "name" + 2) "field_value" + 3) "type" + 4) "block" + 5) "flags" + 6) 1) multiple + 7) "value" + 8) 1) 1) "name" + 2) "field" + 3) "type" + 4) "string" + 5) "value" + 6) "field" + 2) 1) "name" + 2) "value" + 3) "type" + 4) "string" + 5) "value" + 6) "value" +``` +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Specifications and protocols +linkTitle: Reference +title: Redis reference +weight: 70 +--- +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Overview of Redis key eviction policies (LRU, LFU, etc.) +linkTitle: Eviction +title: Key eviction +weight: 6 +--- + +Redis is commonly used as a cache to speed up read accesses to a slower server +or database. Since cache entries are copies of persistently-stored data, it +is usually safe to evict them when the cache runs out of memory (they can be +cached again in the future if necessary). + +Redis lets you specify an eviction policy to evict keys automatically +when the size of the cache exceeds a set memory limit. Whenever a client +runs a new command that adds more data to the cache, Redis checks the memory usage. +If it is greater than the limit, Redis evicts keys according to the chosen +eviction policy until the total memory used is back below the limit. + +Note that when a command adds a lot of data to the cache (for example, a big set +intersection stored into a new key), this might temporarily exceed the limit by +a large amount. + +The sections below explain how to [configure the memory limit](#maxmem) for the cache +and also describe the available [eviction policies](#eviction-policies) and when to +use them. + +## Using the `maxmemory` configuration directive {#maxmem} + +The `maxmemory` configuration directive specifies +the maximum amount of memory to use for the cache data. You can +set `maxmemory` with the [`redis.conf`](https://github.com/redis/redis/blob/7.4.0/redis.conf) +file at startup time. 
For example, to configure a memory limit of 100 megabytes, +you can use the following directive inside `redis.conf`: + +``` +maxmemory 100mb +``` + +You can also use [`CONFIG SET`]({{< relref "/commands/config-set" >}}) to +set `maxmemory` at runtime using [`redis-cli`]({{< relref "/develop/tools/cli" >}}): + +```bash +> CONFIG SET maxmemory 100mb +``` + +Set `maxmemory` to zero to specify that you don't want to limit the memory +for the dataset. This is the default behavior for 64-bit systems, while 32-bit +systems use an implicit memory limit of 3GB. + +When the size of your cache exceeds the limit set by `maxmemory`, Redis will +enforce your chosen [eviction policy](#eviction-policies) to prevent any +further growth of the cache. + +### Setting `maxmemory` for a replicated or persisted instance + +If you are using +[replication]({{< relref "/operate/rs/databases/durability-ha/replication" >}}) +or [persistence]({{< relref "/operate/rs/databases/configure/database-persistence" >}}) +for a server, Redis will use some RAM as a buffer to store the set of updates waiting +to be written to the replicas or AOF files. +The memory used by this buffer is not included in the total that +is compared to `maxmemory` to see if eviction is required. + +This is because the key evictions themselves generate updates that must be added +to the buffer. If the updates were counted among the used +memory then in some circumstances, the memory saved by +evicting keys would be immediately used up by the update data added to the buffer. +This, in turn, would trigger even more evictions and the resulting feedback loop +could evict many items from the cache unnecessarily. + +If you are using replication or persistence, we recommend that you set +`maxmemory` to leave a little RAM free to store the buffers. Note that this is not +necessary for the `noeviction` policy (see [the section below](#eviction-policies) +for more information about eviction policies). + +The [`INFO`]({{< relref "/commands/info" >}}) command returns a +`mem_not_counted_for_evict` value in the `memory` section (you can use +the `INFO memory` option to see just this section). This is the amount of +memory currently used by the buffers. Although the exact amount will vary, +you can use it to estimate how much to subtract from the total available RAM +before setting `maxmemory`. + +## Eviction policies + +Use the `maxmemory-policy` configuration directive to select the eviction +policy you want to use when the limit set by `maxmemory` is reached. + +The following policies are available: + +- `noeviction`: Keys are not evicted but the server will return an error + when you try to execute commands that cache new data. If your database uses replication + then this condition only applies to the primary database. Note that commands that only + read existing data still work as normal. +- `allkeys-lru`: Evict the [least recently used](#apx-lru) (LRU) keys. +- `allkeys-lfu`: Evict the [least frequently used](#lfu-eviction) (LFU) keys. +- `allkeys-random`: Evict keys at random. +- `volatile-lru`: Evict the least recently used keys that have the `expire` field + set to `true`. +- `volatile-lfu`: Evict the least frequently used keys that have the `expire` field + set to `true`. +- `volatile-random`: Evict keys at random only if they have the `expire` field set + to `true`. +- `volatile-ttl`: Evict keys with the `expire` field set to `true` that have the + shortest remaining time-to-live (TTL) value. 
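+
+For example, to select one of these policies (shown here with `allkeys-lru` purely as an illustration; substitute whichever policy fits your workload), you can set it in `redis.conf`:
+
+```
+maxmemory-policy allkeys-lru
+```
+
+You can also change it at runtime with [`CONFIG SET`]({{< relref "/commands/config-set" >}}):
+
+```bash
+> CONFIG SET maxmemory-policy allkeys-lru
+```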
+ +The `volatile-xxx` policies behave like `noeviction` if no keys have the `expire` +field set to true, or for `volatile-ttl`, if no keys have a time-to-live value set. + +You should choose an eviction policy that fits the way your app +accesses keys. You may be able to predict the access pattern in advance +but you can also use information from the `INFO` command at runtime to +check or improve your choice of policy (see +[Using the `INFO` command](#using-the-info-command) below for more information). + +As a rule of thumb: + +- Use `allkeys-lru` when you expect that a subset of elements will be accessed far + more often than the rest. This is a very common case according to the + [Pareto principle](https://en.wikipedia.org/wiki/Pareto_principle), so + `allkeys-lru` is a good default option if you have no reason to prefer any others. +- Use `allkeys-random` when you expect all keys to be accessed with roughly equal + frequency. An example of this is when your app reads data items in a repeating cycle. +- Use `volatile-ttl` if your code can estimate which keys are good candidates for eviction + and assign short TTLs to them. Note also that if you make good use of + key expiration, then you are less likely to run into the cache memory limit because keys + will often expire before they need to be evicted. + +The `volatile-lru` and `volatile-random` policies are mainly useful when you want to use +a single Redis instance for both caching and for a set of persistent keys. However, +you should consider running two separate Redis instances in a case like this, if possible. + +Also note that setting an `expire` value for a key costs memory, so a +policy like `allkeys-lru` is more memory efficient since it doesn't need an +`expire` value to operate. + +### Using the `INFO` command + +The [`INFO`]({{< relref "/commands/info" >}}) command provides several pieces +of data that are useful for checking the performance of your cache. In particular, +the `INFO stats` section includes two important entries, `keyspace_hits` (the number of +times keys were successfully found in the cache) and `keyspace_misses` (the number +of times a key was requested but was not in the cache). The calculation below gives +the percentage of attempted accesses that were satisfied from the cache: + +``` +keyspace_hits / (keyspace_hits + keyspace_misses) * 100 +``` + +Check that this is roughly equal to what you would expect for your app +(naturally, a higher percentage indicates better cache performance). + +{{< note >}} When the [`EXISTS`]({{< relref "/commands/exists" >}}) +command reports that a key is absent then this is counted as a keyspace miss. +{{< /note >}} + +If the percentage of hits is lower than expected, then this might +mean you are not using the best eviction policy. For example, if +you believe that a small subset of "hot" data (that will easily fit into the +cache) should account for about 75% of accesses, you could reasonably +expect the percentage of keyspace hits to be around 75%. If the actual +percentage is lower, check the value of `evicted_keys` (also returned by +`INFO stats`). A high proportion of evictions would suggest that the +wrong keys are being evicted too often by your chosen policy +(so `allkeys-lru` might be a good option here). If the +value of `evicted_keys` is low and you are using key expiration, check +`expired_keys` to see how many keys have expired. 
If this number is high, you might be using a TTL that is too low, or you may be
+choosing the wrong keys to expire, causing keys to disappear from the cache
+before they should.
+
+Other useful pieces of information returned by `INFO` include:
+
+- `used_memory_dataset`: (`memory` section) The amount of memory used for
+  cached data. If this is greater than `maxmemory`, then the difference
+  is the amount by which `maxmemory` has been exceeded.
+- `current_eviction_exceeded_time`: (`stats` section) The time since
+  the cache last started to exceed `maxmemory`.
+- `commandstats` section: Among other things, this reports the number of
+  times each command issued to the server has been rejected. If you are
+  using `noeviction` or one of the `volatile-xxx` policies, you can use
+  this to find which commands are being stopped by the `maxmemory` limit
+  and how often it is happening.
+
+## Approximated LRU algorithm {#apx-lru}
+
+The Redis LRU algorithm uses an approximation of the least recently used
+keys rather than calculating them exactly. It samples a small number of keys
+at random and then evicts the ones with the longest time since last access.
+
+From Redis 3.0 onwards, the algorithm also tracks a pool of good
+candidates for eviction. This improves the performance of the algorithm, making
+it a close approximation to a true LRU algorithm.
+
+You can tune the performance of the algorithm by changing the number of samples to check
+before every eviction with the `maxmemory-samples` configuration directive:
+
+```
+maxmemory-samples 5
+```
+
+Redis does not use a true LRU implementation because it
+costs more memory. However, the approximation is virtually equivalent for an
+application using Redis. This figure compares
+the LRU approximation used by Redis with true LRU.
+
+![LRU comparison](lru_comparison.png)
+
+The test to generate the above graphs filled a Redis server with a given number of keys. The keys were accessed from the first to the last, so the first keys are the best candidates for eviction under an LRU algorithm. Then, 50% more keys were added to force half of the old keys to be evicted.
+
+You can see three kinds of dots in the graphs, forming three distinct bands.
+
+* The light gray band shows objects that were evicted.
+* The gray band shows objects that were not evicted.
+* The green band shows objects that were added.
+
+In a theoretical LRU implementation we expect that, among the old keys, the first half will be evicted. The Redis LRU algorithm will instead only *probabilistically* evict the older keys.
+
+As you can see, Redis 3.0 does a better job with 5 samples than Redis 2.8; however, Redis 2.8 still retains most of the most recently accessed objects. With a sample size of 10, the Redis 3.0 approximation comes very close to the theoretical performance of true LRU.
+
+Note that LRU is just a model to predict how likely a given key is to be accessed in the future. Moreover, if your data access pattern closely
+resembles the power law, most of the accesses will be in the set of keys
+that the approximated LRU algorithm can handle well.
+
+In simulations with a power-law access pattern, we found that the difference between true LRU and the Redis approximation was minimal or non-existent.
+
+However, you can raise the sample size to 10, at the cost of some additional CPU
+usage, to approximate true LRU more closely, and check whether this makes a
+difference in your cache miss rate.
+
+It is very simple to experiment in production with different values for the sample
+size by using the `CONFIG SET maxmemory-samples <count>` command.
+
+## LFU eviction
+
+Starting with Redis 4.0, the [Least Frequently Used eviction mode](http://antirez.com/news/109) is available. This mode may work better (provide a better
+hit/miss ratio) in certain cases. In LFU mode, Redis will try to track
+the frequency of access of items, so the ones used rarely are evicted. This means
+the keys used often have a higher chance of remaining in memory.
+
+To configure the LFU mode, the following policies are available:
+
+* `volatile-lfu`: Evict using approximated LFU among the keys with an expire set.
+* `allkeys-lfu`: Evict any key using approximated LFU.
+
+LFU is approximated like LRU: it uses a probabilistic counter, called a [Morris counter](https://en.wikipedia.org/wiki/Approximate_counting_algorithm), to estimate the object access frequency using just a few bits per object, combined with a decay period so that the counter is reduced over time. At some point we no longer want to consider keys as frequently accessed, even if they were in the past, so that the algorithm can adapt to a shift in the access pattern.
+
+That information is sampled similarly to what happens for LRU (as explained in the previous section of this documentation) to select a candidate for eviction.
+
+However, unlike LRU, LFU has certain tunable parameters: for example, how quickly
+should a frequently used item's rank drop when it is no longer accessed? It is also possible to tune the Morris counter's range to better adapt the algorithm to specific use cases.
+
+By default, Redis is configured to:
+
+* Saturate the counter at around one million requests.
+* Decay the counter every one minute.
+
+Those should be reasonable values and were tested experimentally, but the user may want to play with these configuration settings to pick optimal values.
+
+Instructions about how to tune these parameters can be found inside the example `redis.conf` file in the source distribution. Briefly, they are:
+
+```
+lfu-log-factor 10
+lfu-decay-time 1
+```
+
+The decay time is the obvious one: it is the number of minutes after which a counter should be decayed, when it is sampled and found to be older than that value. A special value of `0` means the counter will never be decayed.
+
+The counter *logarithm factor* changes how many hits are needed to saturate the frequency counter, which is just in the range 0-255. The higher the factor, the more accesses are needed to reach the maximum. The lower the factor, the better the resolution of the counter for low access counts, according to the following table:
+
+```
++--------+------------+------------+------------+------------+------------+
+| factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
++--------+------------+------------+------------+------------+------------+
+| 0      | 104        | 255        | 255        | 255        | 255        |
++--------+------------+------------+------------+------------+------------+
+| 1      | 18         | 49         | 255        | 255        | 255        |
++--------+------------+------------+------------+------------+------------+
+| 10     | 10         | 18         | 142        | 255        | 255        |
++--------+------------+------------+------------+------------+------------+
+| 100    | 8          | 11         | 49         | 143        | 255        |
++--------+------------+------------+------------+------------+------------+
+```
+
+So, basically, the factor is a trade-off between better distinguishing items with low access counts versus distinguishing items with high access counts.
More information is available in the example `redis.conf` file. +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Understand how to use Redis as a document database +linkTitle: Document database +stack: true +title: Redis as a document database quick start guide +weight: 2 +--- + + +This quick start guide shows you how to: + +1. Create a secondary index +2. Add [JSON]({{< relref "/develop/data-types/json/" >}}) documents +3. Search and query your data + +The examples in this article refer to a simple bicycle inventory that contains JSON documents with the following structure: + +```json +{ + "brand": "brand name", + "condition": "new | used | refurbished", + "description": "description", + "model": "model", + "price": 0 +} +``` + +## Setup + +The easiest way to get started with [Redis]({{< relref "/operate/oss_and_stack/" >}}) is to use Redis Cloud: + +1. Create a [free account](https://redis.com/try-free?utm_source=redisio&utm_medium=referral&utm_campaign=2023-09-try_free&utm_content=cu-redis_cloud_users). + + +2. Follow the instructions to create a free database. + +This free Redis Cloud database comes out of the box with all the Redis Open Source features. + +You can alternatively use the [installation guides]({{< relref "/operate/oss_and_stack/install/install-stack/" >}}) to install Redis Open Source on your local machine. + + +## Connect + +The first step is to connect to your Redis Open Source database. You can find further details about the connection options in this documentation site's [Tools section]({{< relref "/develop/tools" >}}). The following example shows how to connect to a Redis Open Source server that runs on localhost (`-h 127.0.0.1`) and listens on the default port (`-p 6379`): + +{{< clients-example search_quickstart connect >}} +> redis-cli -h 127.0.0.1 -p 6379 +{{< /clients-example>}} + +
+{{% alert title="Tip" color="warning" %}} +You can copy and paste the connection details from the Redis Cloud database configuration page. Here is an example connection string of a Cloud database that is hosted in the AWS region `us-east-1` and listens on port 16379: `redis-16379.c283.us-east-1-4.ec2.cloud.redislabs.com:16379`. The connection string has the format `host:port`. You must also copy and paste your Cloud database's username and password and then pass the credentials to your client or use the [AUTH command]({{< relref "/commands/auth" >}}) after the connection is established. +{{% /alert %}} + + +## Create an index + +As explained in the [in-memory data store]({{< relref "/develop/get-started/data-store" >}}) quick start guide, Redis allows you to access an item directly via its key. You also learned how to scan the keyspace. Whereby you can use other data structures (e.g., hashes and sorted sets) as secondary indexes, your application would need to maintain those indexes manually. Redis is a document database that allows you to declare which fields are auto-indexed. Redis currently supports secondary index creation on the [hashes]({{< relref "/develop/data-types/hashes" >}}) and [JSON]({{< relref "/develop/data-types/json" >}}) documents. + +The following example shows an [FT.CREATE]({{< relref "commands/ft.create" >}}) command that creates an index with some text fields, a numeric field (price), and a tag field (condition). The text fields have a weight of 1.0, meaning they have the same relevancy in the context of full-text searches. The field names follow the [JSONPath]({{< relref "/develop/data-types/json/path" >}}) notion. Each such index field maps to a property within the JSON document. + + +{{< clients-example search_quickstart create_index >}} +> FT.CREATE idx:bicycle ON JSON PREFIX 1 bicycle: SCORE 1.0 SCHEMA $.brand AS brand TEXT WEIGHT 1.0 $.model AS model TEXT WEIGHT 1.0 $.description AS description TEXT WEIGHT 1.0 $.price AS price NUMERIC $.condition AS condition TAG SEPARATOR , +OK +{{< / clients-example >}} + +Any pre-existing JSON documents with a key prefix `bicycle:` are automatically added to the index. Additionally, any JSON documents with that prefix created or modified after index creation are added or re-added to the index. + +## Add JSON documents + +The example below shows you how to use the [JSON.SET]({{< relref "commands/json.set" >}}) command to create new JSON documents: + +{{< clients-example search_quickstart add_documents "" 2 >}} +> JSON.SET "bicycle:0" "." "{\"brand\": \"Velorim\", \"model\": \"Jigger\", \"price\": 270, \"description\": \"Small and powerful, the Jigger is the best ride for the smallest of tikes! This is the tiniest kids\\u2019 pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\", \"condition\": \"new\"}" +OK +> JSON.SET "bicycle:1" "." "{\"brand\": \"Bicyk\", \"model\": \"Hillcraft\", \"price\": 1200, \"description\": \"Kids want to ride with as little weight as possible. Especially on an incline! They may be at the age when a 27.5\\\" wheel bike is just too clumsy coming off a 24\\\" bike. The Hillcraft 26 is just the solution they need!\", \"condition\": \"used\"}" +OK +> JSON.SET "bicycle:2" "." 
"{\"brand\": \"Nord\", \"model\": \"Chook air 5\", \"price\": 815, \"description\": \"The Chook Air 5 gives kids aged six years and older a durable and uberlight mountain bike for their first experience on tracks and easy cruising through forests and fields. The lower top tube makes it easy to mount and dismount in any situation, giving your kids greater safety on the trails.\", \"condition\": \"used\"}" +OK +> JSON.SET "bicycle:3" "." "{\"brand\": \"Eva\", \"model\": \"Eva 291\", \"price\": 3400, \"description\": \"The sister company to Nord, Eva launched in 2005 as the first and only women-dedicated bicycle brand. Designed by women for women, allEva bikes are optimized for the feminine physique using analytics from a body metrics database. If you like 29ers, try the Eva 291. It\\u2019s a brand new bike for 2022.. This full-suspension, cross-country ride has been designed for velocity. The 291 has 100mm of front and rear travel, a superlight aluminum frame and fast-rolling 29-inch wheels. Yippee!\", \"condition\": \"used\"}" +OK +> JSON.SET "bicycle:4" "." "{\"brand\": \"Noka Bikes\", \"model\": \"Kahuna\", \"price\": 3200, \"description\": \"Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. Both the frames and components have been tweaked to include a women\\u2019s saddle, different bars and unique colourway.\", \"condition\": \"used\"}" +OK +> JSON.SET "bicycle:5" "." "{\"brand\": \"Breakout\", \"model\": \"XBN 2.1 Alloy\", \"price\": 810, \"description\": \"The XBN 2.1 Alloy is our entry-level road bike \\u2013 but that\\u2019s not to say that it\\u2019s a basic machine. With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimano\\u2019s, this is a bike which doesn\\u2019t break the bank and delivers craved performance.\", \"condition\": \"new\"}" +OK +> JSON.SET "bicycle:6" "." "{\"brand\": \"ScramBikes\", \"model\": \"WattBike\", \"price\": 2300, \"description\": \"The WattBike is the best e-bike for people who still feel young at heart. It has a Bafang 1000W mid-drive system and a 48V 17.5AH Samsung Lithium-Ion battery, allowing you to ride for more than 60 miles on one charge. It\\u2019s great for tackling hilly terrain or if you just fancy a more leisurely ride. With three working modes, you can choose between E-bike, assisted bicycle, and normal bike modes.\", \"condition\": \"new\"}" +OK +> JSON.SET "bicycle:7" "." "{\"brand\": \"Peaknetic\", \"model\": \"Secto\", \"price\": 430, \"description\": \"If you struggle with stiff fingers or a kinked neck or back after a few minutes on the road, this lightweight, aluminum bike alleviates those issues and allows you to enjoy the ride. From the ergonomic grips to the lumbar-supporting seat position, the Roll Low-Entry offers incredible comfort. The rear-inclined seat tube facilitates stability by allowing you to put a foot on the ground to balance at a stop, and the low step-over frame makes it accessible for all ability and mobility levels. The saddle is very soft, with a wide back to support your hip joints and a cutout in the center to redistribute that pressure. Rim brakes deliver satisfactory braking control, and the wide tires provide a smooth, stable ride on paved roads and gravel. 
Rack and fender mounts facilitate setting up the Roll Low-Entry as your preferred commuter, and the BMX-like handlebar offers space for mounting a flashlight, bell, or phone holder.\", \"condition\": \"new\"}" +OK +> JSON.SET "bicycle:8" "." "{\"brand\": \"nHill\", \"model\": \"Summit\", \"price\": 1200, \"description\": \"This budget mountain bike from nHill performs well both on bike paths and on the trail. The fork with 100mm of travel absorbs rough terrain. Fat Kenda Booster tires give you grip in corners and on wet trails. The Shimano Tourney drivetrain offered enough gears for finding a comfortable pace to ride uphill, and the Tektro hydraulic disc brakes break smoothly. Whether you want an affordable bike that you can take to work, but also take trail in mountains on the weekends or you\\u2019re just after a stable, comfortable ride for the bike path, the Summit gives a good value for money.\", \"condition\": \"new\"}" +OK +> JSON.SET "bicycle:9" "." "{\"model\": \"ThrillCycle\", \"brand\": \"BikeShind\", \"price\": 815, \"description\": \"An artsy, retro-inspired bicycle that\\u2019s as functional as it is pretty: The ThrillCycle steel frame offers a smooth ride. A 9-speed drivetrain has enough gears for coasting in the city, but we wouldn\\u2019t suggest taking it to the mountains. Fenders protect you from mud, and a rear basket lets you transport groceries, flowers and books. The ThrillCycle comes with a limited lifetime warranty, so this little guy will last you long past graduation.\", \"condition\": \"refurbished\"}" +OK +{{< / clients-example >}} + +## Search and query using the Redis Query Engine + +### Wildcard query + +You can retrieve all indexed documents using the [FT.SEARCH]({{< relref "commands/ft.search" >}}) command. Note the `LIMIT` clause below, which allows result pagination. + +{{< clients-example search_quickstart wildcard_query "" 10 >}} +> FT.SEARCH "idx:bicycle" "*" LIMIT 0 10 +1) (integer) 10 + 2) "bicycle:1" + 3) 1) "$" + 2) "{\"brand\":\"Bicyk\",\"model\":\"Hillcraft\",\"price\":1200,\"description\":\"Kids want to ride with as little weight as possible. Especially on an incline! They may be at the age when a 27.5\\\" wheel bike is just too clumsy coming off a 24\\\" bike. The Hillcraft 26 is just the solution they need!\",\"condition\":\"used\"}" + 4) "bicycle:2" + 5) 1) "$" + 2) "{\"brand\":\"Nord\",\"model\":\"Chook air 5\",\"price\":815,\"description\":\"The Chook Air 5 gives kids aged six years and older a durable and uberlight mountain bike for their first experience on tracks and easy cruising through forests and fields. The lower top tube makes it easy to mount and dismount in any situation, giving your kids greater safety on the trails.\",\"condition\":\"used\"}" + 6) "bicycle:4" + 7) 1) "$" + 2) "{\"brand\":\"Noka Bikes\",\"model\":\"Kahuna\",\"price\":3200,\"description\":\"Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. Both the frames and components have been tweaked to include a women\xe2\x80\x99s saddle, different bars and unique colourway.\",\"condition\":\"used\"}" + 8) "bicycle:5" + 9) 1) "$" + 2) "{\"brand\":\"Breakout\",\"model\":\"XBN 2.1 Alloy\",\"price\":810,\"description\":\"The XBN 2.1 Alloy is our entry-level road bike \xe2\x80\x93 but that\xe2\x80\x99s not to say that it\xe2\x80\x99s a basic machine. 
With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimano\xe2\x80\x99s, this is a bike which doesn\xe2\x80\x99t break the bank and delivers craved performance.\",\"condition\":\"new\"}" +10) "bicycle:0" +11) 1) "$" + 2) "{\"brand\":\"Velorim\",\"model\":\"Jigger\",\"price\":270,\"description\":\"Small and powerful, the Jigger is the best ride for the smallest of tikes! This is the tiniest kids\xe2\x80\x99 pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\",\"condition\":\"new\"}" +12) "bicycle:6" +13) 1) "$" + 2) "{\"brand\":\"ScramBikes\",\"model\":\"WattBike\",\"price\":2300,\"description\":\"The WattBike is the best e-bike for people who still feel young at heart. It has a Bafang 1000W mid-drive system and a 48V 17.5AH Samsung Lithium-Ion battery, allowing you to ride for more than 60 miles on one charge. It\xe2\x80\x99s great for tackling hilly terrain or if you just fancy a more leisurely ride. With three working modes, you can choose between E-bike, assisted bicycle, and normal bike modes.\",\"condition\":\"new\"}" +14) "bicycle:7" +15) 1) "$" + 2) "{\"brand\":\"Peaknetic\",\"model\":\"Secto\",\"price\":430,\"description\":\"If you struggle with stiff fingers or a kinked neck or back after a few minutes on the road, this lightweight, aluminum bike alleviates those issues and allows you to enjoy the ride. From the ergonomic grips to the lumbar-supporting seat position, the Roll Low-Entry offers incredible comfort. The rear-inclined seat tube facilitates stability by allowing you to put a foot on the ground to balance at a stop, and the low step-over frame makes it accessible for all ability and mobility levels. The saddle is very soft, with a wide back to support your hip joints and a cutout in the center to redistribute that pressure. Rim brakes deliver satisfactory braking control, and the wide tires provide a smooth, stable ride on paved roads and gravel. Rack and fender mounts facilitate setting up the Roll Low-Entry as your preferred commuter, and the BMX-like handlebar offers space for mounting a flashlight, bell, or phone holder.\",\"condition\":\"new\"}" +16) "bicycle:9" +17) 1) "$" + 2) "{\"model\":\"ThrillCycle\",\"brand\":\"BikeShind\",\"price\":815,\"description\":\"An artsy, retro-inspired bicycle that\xe2\x80\x99s as functional as it is pretty: The ThrillCycle steel frame offers a smooth ride. A 9-speed drivetrain has enough gears for coasting in the city, but we wouldn\xe2\x80\x99t suggest taking it to the mountains. Fenders protect you from mud, and a rear basket lets you transport groceries, flowers and books. The ThrillCycle comes with a limited lifetime warranty, so this little guy will last you long past graduation.\",\"condition\":\"refurbished\"}" +18) "bicycle:3" +19) 1) "$" + 2) "{\"brand\":\"Eva\",\"model\":\"Eva 291\",\"price\":3400,\"description\":\"The sister company to Nord, Eva launched in 2005 as the first and only women-dedicated bicycle brand. Designed by women for women, allEva bikes are optimized for the feminine physique using analytics from a body metrics database. If you like 29ers, try the Eva 291. It\xe2\x80\x99s a brand new bike for 2022.. This full-suspension, cross-country ride has been designed for velocity. The 291 has 100mm of front and rear travel, a superlight aluminum frame and fast-rolling 29-inch wheels. 
Yippee!\",\"condition\":\"used\"}" +20) "bicycle:8" +21) 1) "$" + 2) "{\"brand\":\"nHill\",\"model\":\"Summit\",\"price\":1200,\"description\":\"This budget mountain bike from nHill performs well both on bike paths and on the trail. The fork with 100mm of travel absorbs rough terrain. Fat Kenda Booster tires give you grip in corners and on wet trails. The Shimano Tourney drivetrain offered enough gears for finding a comfortable pace to ride uphill, and the Tektro hydraulic disc brakes break smoothly. Whether you want an affordable bike that you can take to work, but also take trail in mountains on the weekends or you\xe2\x80\x99re just after a stable, comfortable ride for the bike path, the Summit gives a good value for money.\",\"condition\":\"new\"}" +{{< / clients-example >}} + +### Single-term full-text query + +The following command shows a simple single-term query for finding all bicycles with a specific model: + +{{< clients-example search_quickstart query_single_term >}} +> FT.SEARCH "idx:bicycle" "@model:Jigger" LIMIT 0 10 +1) (integer) 1 +2) "bicycle:0" +3) 1) "$" + 2) "{\"brand\":\"Velorim\",\"model\":\"Jigger\",\"price\":270,\"description\":\"Small and powerful, the Jigger is the best ride for the smallest of tikes! This is the tiniest kids\xe2\x80\x99 pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\",\"condition\":\"new\"}" +{{< / clients-example >}} + +### Exact match query + +Below is a command to perform an exact match query that finds all bicycles with the brand name `Noka Bikes`. You must use double quotes around the search term when constructing an exact match query on a text field. + +{{< clients-example search_quickstart query_exact_matching >}} +> FT.SEARCH "idx:bicycle" "@brand:\"Noka Bikes\"" LIMIT 0 10 +1) (integer) 1 +2) "bicycle:4" +3) 1) "$" + 2) "{\"brand\":\"Noka Bikes\",\"model\":\"Kahuna\",\"price\":3200,\"description\":\"Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. Both the frames and components have been tweaked to include a women\xe2\x80\x99s saddle, different bars and unique colourway.\",\"condition\":\"used\"}" +{{< / clients-example >}} + +Please see the [query documentation]({{< relref "/develop/interact/search-and-query/query/" >}}) to learn how to make more advanced queries. + +## Next steps + +You can learn more about how to use Redis Open Source as a vector database in the following quick start guide: + +* [Redis as a vector database]({{< relref "/develop/get-started/vector-database" >}}) + +## Continue learning with Redis University + +{{< university-links >}}--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Understand how to use Redis for RAG use cases +linkTitle: RAG with Redis +stack: true +title: RAG with Redis +weight: 4 +--- +### What is Retrieval Augmented Generation (RAG)? +Large Language Models (LLMs) generate human-like text but are limited by the data they were trained on. RAG enhances LLMs by integrating them with external, domain-specific data stored in a Redis [vector database]({{< relref "/develop/get-started/vector-database" >}}). + +RAG involves three main steps: + +- **Retrieve**: Fetch relevant information from Redis using vector search and filters based on the user query. 
+- **Augment**: Create a prompt for the LLM, including the user query, relevant context, and additional instructions. +- **Generate**: Return the response generated by the LLM to the user. + +RAG enables LLMs to use real-time information, improving the accuracy and relevance of generated content. +Redis is ideal for RAG due to its speed, versatility, and familiarity. + +### The role of Redis in RAG + +Redis provides a robust platform for managing real-time data. It supports the storage and retrieval of vectors, essential for handling large-scale, unstructured data and performing similarity searches. Key features and components of Redis that make it suitable for RAG include: + +1. **Vector database**: Stores and indexes vector embeddings that semantically represent unstructured data. +1. **Semantic cache**: Caches frequently asked questions (FAQs) in a RAG pipeline. Using vector search, Redis retrieves similar previously answered questions, reducing LLM inference costs and latency. +1. **LLM session manager**: Stores conversation history between an LLM and a user. Redis fetches recent and relevant portions of the chat history to provide context, improving the quality and accuracy of responses. +1. **High performance and scalability**: Known for its [low latency and high throughput](https://redis.io/blog/benchmarking-results-for-vector-databases/), Redis is ideal for RAG systems and AI agents requiring rapid data retrieval and generation. + +### Build a RAG Application with Redis + +To build a RAG application with Redis, follow these general steps: + +1. **Set up Redis**: Start by setting up a Redis instance and configuring it to handle vector data. + +1. **Use a Framework**: + 1. **Redis Vector Library (RedisVL)**: [RedisVL](https://redis.io/docs/latest/integrate/redisvl/) enhances the development of generative AI applications by efficiently managing vectors and metadata. It allows for storage of vector embeddings and facilitates fast similarity searches, crucial for retrieving relevant information in RAG. + 1. **Popular AI frameworks**: Redis integrates seamlessly with various AI frameworks and tools. For instance, combining Redis with [LangChain](https://python.langchain.com/v0.2/docs/integrations/vectorstores/redis/) or [LlamaIndex](https://docs.llamaindex.ai/en/latest/examples/vector_stores/RedisIndexDemo/), libraries for building language models, enables developers to create sophisticated RAG pipelines. These integrations support efficient data management and building real-time LLM chains. + 1. **Spring AI and Redis**: Using [Spring AI with Redis](https://redis.io/blog/building-a-rag-application-with-redis-and-spring-ai/) simplifies building RAG applications. Spring AI provides a structured approach to integrating AI capabilities into applications, while Redis handles data management, ensuring the RAG pipeline is efficient and scalable. + +1. **Embed and store data**: Convert your data into vector embeddings using a suitable model (e.g., BERT, GPT). Store these embeddings in Redis, where they can be quickly retrieved based on vector searches. + +1. **Integrate with a generative model**: Use a generative AI model that can leverage the retrieved data. The model will use the vectors stored in Redis to augment its generation process, ensuring the output is informed by relevant, up-to-date information. + +1. **Query and generate**: Implement the query logic to retrieve relevant vectors from Redis based on the input prompt. 
Feed these vectors into the generative model to produce augmented outputs. + +### Benefits of Using Redis for RAG + +- **Efficiency**: The in-memory data store of Redis ensures that retrieval operations are performed with minimal latency. +- **Scalability**: Redis scales horizontally, seamlessly handling growing volumes of data and queries. +- **Flexibility**: Redis supports a variety of data structures and integrates with AI frameworks. + +In summary, Redis offers a powerful and efficient platform for implementing RAG. Its vector management capabilities, high performance, and seamless integration with AI frameworks make it an ideal choice for enhancing generative AI applications with real-time data retrieval. + +### Resources + +- [RAG defined](https://redis.io/glossary/retrieval-augmented-generation/). +- [RAG overview](https://redis.io/kb/doc/2ok7xd1drq/how-to-perform-retrieval-augmented-generation-rag-with-redis). +- [Redis Vector Library (RedisVL)](https://redis.io/docs/latest/integrate/redisvl/) and [introductory article](https://redis.io/blog/introducing-the-redis-vector-library-for-enhancing-genai-development/). +- [RAG with Redis and SpringAI](https://redis.io/blog/building-a-rag-application-with-redis-and-spring-ai/) +- [Build a multimodal RAG app with LangChain and Redis](https://redis.io/blog/explore-the-new-multimodal-rag-template-from-langchain-and-redis/) +- [Get hands-on with advanced Redis AI Recipes](https://github.com/redis-developer/redis-ai-resources) + +## Continue learning with Redis University + +{{< university-links >}}--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Understand how to use basic Redis data types +linkTitle: Data structure store +title: Redis as an in-memory data structure store quick start guide +weight: 1 +--- + +This quick start guide shows you how to: + +1. Get started with Redis +2. Store data under a key in Redis +3. Retrieve data with a key from Redis +4. Scan the keyspace for keys that match a specific pattern + +The examples in this article refer to a simple bicycle inventory. + +## Setup + +The easiest way to get started with Redis is to use Redis Cloud: + +1. Create a [free account](https://redis.com/try-free?utm_source=redisio&utm_medium=referral&utm_campaign=2023-09-try_free&utm_content=cu-redis_cloud_users). + + +2. Follow the instructions to create a free database. + +You can alternatively follow the [installation guides]({{< relref "/operate/oss_and_stack/install/install-stack/" >}}) to install Redis on your local machine. + +## Connect + +The first step is to connect to Redis. You can find further details about the connection options in this documentation site's [Tools section]({{< relref "/develop/tools" >}}). The following example shows how to connect to a Redis server that runs on localhost (`-h 127.0.0.1`) and listens on the default port (`-p 6379`): + +{{< clients-example search_quickstart connect >}} +> redis-cli -h 127.0.0.1 -p 6379 +{{< /clients-example>}} +
+{{% alert title="Tip" color="warning" %}} +You can copy and paste the connection details from the Redis Cloud database configuration page. Here is an example connection string of a Cloud database that is hosted in the AWS region `us-east-1` and listens on port 16379: `redis-16379.c283.us-east-1-4.ec2.cloud.redislabs.com:16379`. The connection string has the format `host:port`. You must also copy and paste the username and password of your Cloud database and then either pass the credentials to your client or use the [AUTH command]({{< relref "/commands/auth" >}}) after the connection is established. +{{% /alert %}} + +## Store and retrieve data + +Redis stands for Remote Dictionary Server. You can use the same data types as in your local programming environment but on the server side within Redis. + +Similar to byte arrays, Redis strings store sequences of bytes, including text, serialized objects, counter values, and binary arrays. The following example shows you how to set and get a string value: + +{{< clients-example set_and_get >}} +SET bike:1 "Process 134" +GET bike:1 +{{< /clients-example >}} + +Hashes are the equivalent of dictionaries (dicts or hash maps). Among other things, you can use hashes to represent plain objects and to store groupings of counters. The following example explains how to set and access field values of an object: + +{{< clients-example hash_tutorial set_get_all >}} +> HSET bike:1 model Deimos brand Ergonom type 'Enduro bikes' price 4972 +(integer) 4 +> HGET bike:1 model +"Deimos" +> HGET bike:1 price +"4972" +> HGETALL bike:1 +1) "model" +2) "Deimos" +3) "brand" +4) "Ergonom" +5) "type" +6) "Enduro bikes" +7) "price" +8) "4972" +{{< /clients-example >}} + +You can get a complete overview of available data types in this documentation site's [data types section]({{< relref "/develop/data-types/" >}}). Each data type has commands allowing you to manipulate or retrieve data. The [commands reference]({{< relref "/commands/" >}}) provides a sophisticated explanation. + +## Scan the keyspace + +Each item within Redis has a unique key. All items live within the Redis [keyspace]({{< relref "/develop/use/keyspace" >}}). You can scan the Redis keyspace via the [SCAN command]({{< relref "/commands/scan" >}}). Here is an example that scans for the first 100 keys that have the prefix `bike:`: + +``` +SCAN 0 MATCH "bike:*" COUNT 100 +``` + +[SCAN]({{< relref "/commands/scan" >}}) returns a cursor position, allowing you to scan iteratively for the next batch of keys until you reach the cursor value 0. + +## Next steps + +You can address more use cases with Redis by reading these additional quick start guides: + +* [Redis as a document database]({{< relref "/develop/get-started/document-database" >}}) +* [Redis as a vector database]({{< relref "/develop/get-started/vector-database" >}}) + +## Continue learning with Redis University + +{{< university-links >}} + +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Commonly asked questions when getting started with Redis + + ' +linkTitle: FAQ +title: Redis FAQ +weight: 100 +--- +## How is Redis different from other key-value stores? + +* Redis has a different evolution path in the key-value DBs where values can contain more complex data types, with atomic operations defined on those data types. Redis data types are closely related to fundamental data structures and are exposed to the programmer as such, without additional abstraction layers. 
+* Redis is an in-memory database that also persists to disk, so it represents a different trade-off: very high write and read speed is achieved with the limitation that data sets can't be larger than memory. Another advantage of in-memory databases is that the in-memory representation of complex data structures is much simpler to manipulate than the same data structures on disk, so Redis can do a lot with little internal complexity. At the same time, the two on-disk storage formats (RDB and AOF) don't need to be suitable for random access, so they are compact and always generated in an append-only fashion (even AOF log rotation is an append-only operation, since the new version is generated from the copy of data in memory). However, this design also involves different challenges compared to traditional on-disk stores. Because the main data representation lives in memory, Redis operations must be handled carefully to make sure there is always an updated version of the data set on disk.
+
+## What's the Redis memory footprint?
+
+To give you a few examples (all obtained using 64-bit instances):
+
+* An empty instance uses ~ 3MB of memory.
+* 1 million small key -> string value pairs use ~ 85MB of memory.
+* 1 million keys -> hash values, each representing an object with 5 fields, use ~ 160MB of memory.
+
+Testing your use case is trivial: use the `redis-benchmark` utility to generate random data sets, then check the space used with the `INFO memory` command.
+
+64-bit systems will use considerably more memory than 32-bit systems to store the same keys, especially if the keys and values are small. This is because pointers take 8 bytes in 64-bit systems. But of course the advantage is that you can have a lot of memory in 64-bit systems, so in order to run large Redis servers a 64-bit system is more or less required. The alternative is sharding.
+
+## Why does Redis keep its entire dataset in memory?
+
+In the past the Redis developers experimented with Virtual Memory and other systems to allow larger-than-RAM datasets, but the project's focus remains doing one thing well: serving data from memory and using the disk for storage. So for now there are no plans to create an on-disk backend for Redis. Most of what Redis is, after all, is a direct result of its current design.
+
+If your real problem is not the total RAM needed, but the fact that you need to split your data set into multiple Redis instances, please read the [partitioning page]({{< relref "/operate/oss_and_stack/management/scaling" >}}) in this documentation for more info.
+
+Redis Ltd., the company sponsoring Redis development, has developed a "Redis on Flash" solution that uses a mixed RAM/flash approach for larger data sets with a biased access pattern. You may check their offering for more information; however, this feature is not part of the Redis Open Source code base.
+
+## Can you use Redis with a disk-based database?
+
+Yes, a common design pattern is to keep very write-heavy, small data in Redis (along with the data you need Redis data structures to model efficiently), and to put big *blobs* of data into an SQL or eventually consistent on-disk database. Similarly, Redis is sometimes used to hold an in-memory copy of a subset of the same data stored in the on-disk database. This may look similar to caching, but it is actually a more advanced model, since normally the Redis dataset is updated together with the on-disk DB dataset rather than being refreshed on cache misses. 
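+
+As a rough illustration of this write-through pattern, the sketch below keeps the on-disk record and its Redis copy updated together rather than refilling Redis on cache misses. It uses `redis-py` with the standard-library `sqlite3` module standing in for the on-disk database; the table, key names, and `save_bike()` helper are hypothetical:
+
+```python
+import json
+import sqlite3
+
+import redis
+
+# SQLite stands in for the on-disk system of record in this sketch.
+db = sqlite3.connect(":memory:")
+db.execute("CREATE TABLE bikes (id TEXT PRIMARY KEY, data TEXT)")
+r = redis.Redis(decode_responses=True)
+
+def save_bike(bike_id: str, bike: dict) -> None:
+    # Write-through: update the on-disk store and the Redis copy in the same step.
+    db.execute("INSERT OR REPLACE INTO bikes VALUES (?, ?)",
+               (bike_id, json.dumps(bike)))
+    db.commit()
+    r.hset(f"bike:{bike_id}", mapping=bike)
+
+save_bike("1", {"brand": "Velorim", "model": "Jigger", "price": 270})
+```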
+ +## How can I reduce Redis' overall memory usage? + +A good practice is to consider memory consumption when mapping your logical data model to the physical data model within Redis. These considerations include using specific data types, key patterns, and normalization. + +Beyond data modeling, there is more info in the [Memory Optimization page]({{< relref "/operate/oss_and_stack/management/optimization/memory-optimization" >}}). + +## What happens if Redis runs out of memory? + +Redis has built-in protections allowing the users to set a max limit on memory +usage, using the `maxmemory` option in the configuration file to put a limit +to the memory Redis can use. If this limit is reached, Redis will start to reply +with an error to write commands (but will continue to accept read-only +commands). + +You can also configure Redis to evict keys when the max memory limit +is reached. See the [eviction policy docs]({{< relref "/develop/reference/eviction" >}}) for more information on this. + +## Background saving fails with a fork() error on Linux? + +Short answer: `echo 1 > /proc/sys/vm/overcommit_memory` :) + +And now the long one: + +The Redis background saving schema relies on the copy-on-write semantic of the `fork` system call in +modern operating systems: Redis forks (creates a child process) that is an +exact copy of the parent. The child process dumps the DB on disk and finally +exits. In theory the child should use as much memory as the parent being a +copy, but actually thanks to the copy-on-write semantic implemented by most +modern operating systems the parent and child process will _share_ the common +memory pages. A page will be duplicated only when it changes in the child or in +the parent. Since in theory all the pages may change while the child process is +saving, Linux can't tell in advance how much memory the child will take, so if +the `overcommit_memory` setting is set to zero the fork will fail unless there is +as much free RAM as required to really duplicate all the parent memory pages. +If you have a Redis dataset of 3 GB and just 2 GB of free +memory it will fail. + +Setting `overcommit_memory` to 1 tells Linux to relax and perform the fork in a +more optimistic allocation fashion, and this is indeed what you want for Redis. + +You can refer to the [proc(5)][proc5] man page for explanations of the +available values. + +[proc5]: http://man7.org/linux/man-pages/man5/proc.5.html + +## Are Redis on-disk snapshots atomic? + +Yes, the Redis background saving process is always forked when the server is +outside of the execution of a command, so every command reported to be atomic +in RAM is also atomic from the point of view of the disk snapshot. + +## How can Redis use multiple CPUs or cores? + +It's not very frequent that CPU becomes your bottleneck with Redis, as usually Redis is either memory or network bound. +For instance, when using pipelining a Redis instance running on an average Linux system can deliver 1 million requests per second, so if your application mainly uses O(N) or O(log(N)) commands, it is hardly going to use too much CPU. + +However, to maximize CPU usage you can start multiple instances of Redis in +the same box and treat them as different servers. At some point a single +box may not be enough anyway, so if you want to use multiple CPUs you can +start thinking of some way to shard earlier. + +You can find more information about using multiple Redis instances in the [Partitioning page]({{< relref "/operate/oss_and_stack/management/scaling" >}}). 
+
+As of version 4.0, Redis has started implementing threaded actions. For now this is limited to deleting objects in the background and blocking commands implemented via Redis modules. For subsequent releases, the plan is to make Redis more and more threaded.
+
+## What is the maximum number of keys a single Redis instance can hold? What is the maximum number of elements in a Hash, List, Set, and Sorted Set?
+
+Redis can handle up to 2^32 keys, and was tested in practice to handle at least 250 million keys per instance.
+
+Every hash, list, set, and sorted set can hold 2^32 elements.
+
+In other words, your limit is likely the available memory in your system.
+
+## Why does my replica have a different number of keys than its master instance?
+
+If you use keys with a limited time to live (Redis expires), this is normal behavior. This is what happens:
+
+* The primary generates an RDB file on the first synchronization with the replica.
+* The RDB file will not include keys already expired in the primary but which are still in memory.
+* These keys are still in the memory of the Redis primary, even if logically expired. They'll be considered non-existent, and their memory will be reclaimed later, either incrementally or explicitly on access. While these keys are not logically part of the dataset, they are accounted for in the [`INFO`]({{< relref "/commands/info" >}}) output and in the [`DBSIZE`]({{< relref "/commands/dbsize" >}}) command.
+* When the replica reads the RDB file generated by the primary, this set of keys will not be loaded.
+
+Because of this, it's common for users with many expired keys to see fewer keys in the replicas. However, logically, the primary and replica will have the same content.
+
+## Where does the name "Redis" come from?
+
+Redis is an acronym that stands for **RE**mote **DI**ctionary **S**erver.
+
+## Why did Salvatore Sanfilippo start the Redis project?
+
+Salvatore originally created Redis to scale [LLOOGG](https://github.com/antirez/lloogg), a real-time log analysis tool. But after getting the basic Redis server working, he decided to share the work with other people and turn Redis into an open source project.
+
+## How is Redis pronounced?
+
+"Redis" (/ˈrɛd-ɪs/) is pronounced like the word "red" plus the word "kiss" without the "k".
+---
+Title: Redis for GenAI apps
+alwaysopen: false
+categories:
+- docs
+- develop
+description: Understand key benefits of using Redis for AI.
+linktitle: GenAI apps
+weight: 20
+---
+
+Redis enables high-performance, scalable, and reliable data management, making it a key component for GenAI apps, chatbots, and AI agents. By leveraging Redis for fast data retrieval, caching, and vector search capabilities, you can enhance AI-powered interactions, reduce latency, and improve user experience.
+
+Redis excels in storing and indexing vector embeddings that semantically represent unstructured data. With vector search, Redis retrieves similar questions and relevant data, lowering LLM inference costs and latency. It fetches pertinent portions of chat history, enriching context for more accurate and relevant responses. These features make Redis an ideal choice for RAG systems and GenAI apps requiring fast data access.
+
+## Key Benefits of Redis in GenAI Apps
+
+- **Performance**: low-latency data access enables real-time interactions critical for AI-driven applications.
+- **Scalability**: designed to handle numerous concurrent connections, Redis is perfect for high-demand GenAI apps. 
+- **Caching**: efficiently stores frequently accessed data and responses, reducing primary database load and accelerating response times. +- **Session Management**: in-memory data structures simplify managing session states in conversational AI scenarios. +- **Flexibility**: Redis supports diverse data structures (for example, strings, hashes, lists, sets), allowing tailored solutions for GenAI apps. + +[RedisVL]({{< relref "/integrate/redisvl" >}}) is a Python library with an integrated CLI, offering seamless integration with Redis to enhance GenAI applications. + +--- + +## Redis Use Cases in GenAI Apps + +Explore how Redis optimizes various GenAI applications through specific use cases, tutorials, and demo code repositories. + +### Optimizing AI Agent Performance + +Redis improves session persistence and caching for conversational agents managing high interaction volumes. See the [Flowise Conversational Agent with Redis](https://redis.io/learn/howtos/solutions/flowise/conversational-agent) tutorial and demo for implementation details. + +### Chatbot Development and Management + +Redis supports chatbot platforms by enabling: + +- **Caching**: enhances bot responsiveness. +- **Session Management**: tracks conversation states for seamless interactions. +- **Scalability**: handles high-traffic bot usage. + +Learn how to build a GenAI chatbot with Redis through the [LangChain and Redis tutorial](https://redis.io/learn/howtos/solutions/vector/gen-ai-chatbot). For customer engagement platforms integrating human support with chatbots, Redis ensures rapid access to frequently used data. Check out the tutorial on [AI-Powered Video Q&A Applications](https://redis.io/learn/howtos/solutions/vector/ai-qa-videos-langchain-redis-openai-google). + +### Integrating ML Frameworks with Redis + +Machine learning frameworks leverage Redis for: + +- **Message Queuing**: ensures smooth communication between components. +- **State Management**: tracks conversation states for real-time interactions. + +Refer to [Semantic Image-Based Queries Using LangChain and Redis](https://redis.io/learn/howtos/solutions/vector/image-summary-search) for a detailed guide. To expand your knowledge, enroll in the [Redis as a Vector Database course](https://redis.io/university/courses/ru402/), where you'll learn about integrations with tools like LangChain, LlamaIndex, FeatureForm, Amazon Bedrock, and AzureOpenAI. + +### Advancing Natural Language Processing + +Redis enhances natural language understanding by: + +- **Session Management**: tracks user interactions for seamless conversational experiences. +- **Caching**: reduces latency for frequent queries. + +See the [Streaming LLM Output Using Redis Streams](https://redis.io/learn/howtos/solutions/streams/streaming-llm-output) tutorial for an in-depth walkthrough. + +Redis is a powerful tool to elevate your GenAI applications, enabling them to deliver superior performance, scalability, and user satisfaction. + +## Resources + +Check out the [Redis for AI]({{< relref "/develop/ai" >}}) documentation for getting started guides, concepts, ecosystem integrations, examples, and Python notebooks. + +## Continue learning with Redis University + +{{< university-links >}} +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: Understand how to use Redis as a vector database +linkTitle: Vector database +stack: true +title: Redis as a vector database quick start guide +weight: 3 +--- + +This quick start guide helps you to: + +1. 
Understand what a vector database is +2. Create a Redis vector database +3. Create vector embeddings and store vectors +4. Query data and perform a vector search + +{{< note >}}This guide uses [RedisVL]({{< relref "/develop/clients/redis-vl" >}}), +which is a Python client library for Redis that is highly specialized for +vector processing. You may also be interested in the vector query examples +for our other client libraries: + +- [`redis-py` (Python)]({{< relref "/develop/clients/redis-py/vecsearch" >}}) +- [`NRedisStack`(C#/.NET)]({{< relref "/develop/clients/dotnet/vecsearch" >}}) +- [`node-redis` (JavaScript/Node.js)]({{< relref "/develop/clients/nodejs/vecsearch" >}}) +- [`jedis` (Java)]({{< relref "/develop/clients/jedis/vecsearch" >}}) +- [`go-redis` (Go)]({{< relref "/develop/clients/go/vecsearch" >}}) +{{< /note >}} + +## Understand vector databases + +Data is often unstructured, which means that it isn't described by a well-defined schema. Examples of unstructured data include text passages, images, videos, or audio. One approach to storing and searching through unstructured data is to use vector embeddings. + +**What are vectors?** In machine learning and AI, vectors are sequences of numbers that represent data. They are the inputs and outputs of models, encapsulating underlying information in a numerical form. Vectors transform unstructured data, such as text, images, videos, and audio, into a format that machine learning models can process. + +- **Why are they important?** Vectors capture complex patterns and semantic meanings inherent in data, making them powerful tools for a variety of applications. They allow machine learning models to understand and manipulate unstructured data more effectively. +- **Enhancing traditional search.** Traditional keyword or lexical search relies on exact matches of words or phrases, which can be limiting. In contrast, vector search, or semantic search, leverages the rich information captured in vector embeddings. By mapping data into a vector space, similar items are positioned near each other based on their meaning. This approach allows for more accurate and meaningful search results, as it considers the context and semantic content of the query rather than just the exact words used. + + +## Create a Redis vector database +You can use [Redis Open Source]({{< relref "/operate/oss_and_stack/" >}}) as a vector database. It allows you to: + +* Store vectors and the associated metadata within hashes or [JSON]({{< relref "/develop/data-types/json" >}}) documents +* Create and configure secondary indices for search +* Perform vector searches +* Update vectors and metadata +* Delete and cleanup + +The easiest way to get started is to use Redis Cloud: + +1. Create a [free account](https://redis.com/try-free?utm_source=redisio&utm_medium=referral&utm_campaign=2023-09-try_free&utm_content=cu-redis_cloud_users). + + +2. Follow the instructions to create a free database. + +This free Redis Cloud database comes out of the box with all the Redis Open Source features. + +You can alternatively use the [installation guides]({{< relref "/operate/oss_and_stack/install/install-stack/" >}}) to install Redis on your local machine. + +## Install the required Python packages + +Create a Python virtual environment and install the following dependencies using `pip`: + +* `redis`: You can find further details about the `redis-py` client library in the [clients]({{< relref "/develop/clients/redis-py" >}}) section of this documentation site. 
+* `pandas`: Pandas is a data analysis library. +* `sentence-transformers`: You will use the [SentenceTransformers](https://www.sbert.net/) framework to generate embeddings on full text. +* `tabulate`: `pandas` uses `tabulate` to render Markdown. + +You will also need the following imports in your Python code: + +{{< clients-example search_vss imports />}} + +## Connect + +Connect to Redis. By default, Redis returns binary responses. To decode them, you pass the `decode_responses` parameter set to `True`: + +{{< clients-example search_vss connect />}} +
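+
+A minimal sketch of this connection step with `redis-py`, assuming a Redis server running locally on the default port, looks like this:
+
+```python
+import redis
+
+client = redis.Redis(host="localhost", port=6379, decode_responses=True)
+
+print(client.ping())  # True if the connection is working
+```
+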
+{{% alert title="Tip" color="warning" %}} +Instead of using a local Redis server, you can copy and paste the connection details from the Redis Cloud database configuration page. Here is an example connection string of a Cloud database that is hosted in the AWS region `us-east-1` and listens on port 16379: `redis-16379.c283.us-east-1-4.ec2.cloud.redislabs.com:16379`. The connection string has the format `host:port`. You must also copy and paste the username and password of your Cloud database. The line of code for connecting with the default user changes then to `client = redis.Redis(host="redis-16379.c283.us-east-1-4.ec2.cloud.redislabs.com", port=16379, password="your_password_here", decode_responses=True)`. +{{% /alert %}} + + +## Prepare the demo dataset + +This quick start guide also uses the **bikes** dataset. Here is an example document from it: + +```json +{ + "model": "Jigger", + "brand": "Velorim", + "price": 270, + "type": "Kids bikes", + "specs": { + "material": "aluminium", + "weight": "10" + }, + "description": "Small and powerful, the Jigger is the best ride for the smallest of tikes! ..." +} +``` + +The `description` field contains free-form text descriptions of bikes and will be used to create vector embeddings. + + +### 1. Fetch the demo data +You need to first fetch the demo dataset as a JSON array: + +{{< clients-example search_vss get_data />}} + +Inspect the structure of one of the bike JSON documents: + +{{< clients-example search_vss dump_data />}} + +### 2. Store the demo data in Redis +Now iterate over the `bikes` array to store the data as [JSON]({{< relref "/develop/data-types/json/" >}}) documents in Redis by using the [JSON.SET]({{< relref "commands/json.set/" >}}) command. The below code uses a [pipeline]({{< relref "/develop/use/pipelining" >}}) to minimize the network round-trip times: + +{{< clients-example search_vss load_data />}} + +Once loaded, you can retrieve a specific attribute from one of the JSON documents in Redis using a [JSONPath](https://goessner.net/articles/JsonPath/) expression: + +{{< clients-example search_vss get />}} + +### 3. Select a text embedding model + +[HuggingFace](https://huggingface.co) has a large catalog of text embedding models that are locally servable through the `SentenceTransformers` framework. Here we use the [MS MARCO](https://microsoft.github.io/msmarco/) model that is widely used in search engines, chatbots, and other AI applications. + +```python +from sentence_transformers import SentenceTransformer + +embedder = SentenceTransformer('msmarco-distilbert-base-v4') +``` + +### 4. Generate text embeddings +Iterate over all the Redis keys with the prefix `bikes:`: + +{{< clients-example search_vss get_keys />}} + +Use the keys as input to the [JSON.MGET]({{< relref "commands/json.mget/" >}}) command, along with the `$.description` field, to collect the descriptions in a list. Then, pass the list of descriptions to the `.encode()` method: + +{{< clients-example search_vss generate_embeddings />}} + +Insert the vectorized descriptions to the bike documents in Redis using the [JSON.SET]({{< relref "commands/json.set" >}}) command. The following command inserts a new field into each of the documents under the JSONPath `$.description_embeddings`. 
Once again, do this using a pipeline to avoid unnecessary network round-trips: + +{{< clients-example search_vss load_embeddings />}} + +Inspect one of the updated bike documents using the [JSON.GET]({{< relref "commands/json.get" >}}) command: + +{{< clients-example search_vss dump_example />}} + +{{% alert title="Note" color="warning" %}} +When storing a vector embedding within a JSON document, the embedding is stored as a JSON array. In the example above, the array was shortened considerably for the sake of readability. +{{% /alert %}} + + +## Create an index + +### 1. Create an index with a vector field + +You must create an index to query document metadata or to perform vector searches. Use the [FT.CREATE]({{< relref "commands/ft.create" >}}) command: + +{{< clients-example search_vss create_index >}} +FT.CREATE idx:bikes_vss ON JSON + PREFIX 1 bikes: SCORE 1.0 + SCHEMA + $.model TEXT WEIGHT 1.0 NOSTEM + $.brand TEXT WEIGHT 1.0 NOSTEM + $.price NUMERIC + $.type TAG SEPARATOR "," + $.description AS description TEXT WEIGHT 1.0 + $.description_embeddings AS vector VECTOR FLAT 6 TYPE FLOAT32 DIM 768 DISTANCE_METRIC COSINE +{{< /clients-example >}} + +Here is a breakdown of the `VECTOR` field definition: + +* `$.description_embeddings AS vector`: The vector field's JSON path and its field alias `vector`. +* `FLAT`: Specifies the indexing method, which is either a flat index or a hierarchical navigable small world graph ([HNSW](https://arxiv.org/ftp/arxiv/papers/1603/1603.09320.pdf)). +* `TYPE FLOAT32`: Sets the float precision of a vector component, in this case a 32-bit floating point number. +* `DIM 768`: The length or dimension of the embeddings, determined by the chosen embedding model. +* `DISTANCE_METRIC COSINE`: The chosen distance function: [cosine distance](https://en.wikipedia.org/wiki/Cosine_similarity). + +You can find further details about all these options in the [vector reference documentation]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}). + +### 2. Check the state of the index + +As soon as you execute the [FT.CREATE]({{< relref "commands/ft.create" >}}) command, the indexing process runs in the background. In a short time, all JSON documents should be indexed and ready to be queried. To validate that, you can use the [FT.INFO]({{< relref "commands/ft.info" >}}) command, which provides details and statistics about the index. Of particular interest are the number of documents successfully indexed and the number of failures: + +{{< clients-example search_vss validate_index >}} +FT.INFO idx:bikes_vss +{{< /clients-example >}} + +## Perform vector searches + +This quick start guide focuses on vector search. However, you can learn more about how to query based on document metadata in the [document database quick start guide]({{< relref "/develop/get-started/document-database" >}}). + +### 1. Embed your queries + +The following code snippet shows a list of text queries you will use to perform vector search in Redis: + +{{< clients-example search_vss def_bulk_queries />}} + +First, encode each input query as a vector embedding using the same SentenceTransformers model: + +{{< clients-example search_vss enc_bulk_queries />}} + +
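+
+As a rough sketch of this step (the `queries` list here is a hypothetical stand-in for the list defined above), the encoding call looks like this:
+
+```python
+from sentence_transformers import SentenceTransformer
+
+embedder = SentenceTransformer("msmarco-distilbert-base-v4")
+
+queries = [
+    "Bike for small kids",
+    "Best Mountain bikes for kids",
+]
+
+# encode() returns one 768-dimensional vector per query for this model.
+encoded_queries = embedder.encode(queries)
+```
+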
+{{% alert title="Tip" color="warning" %}} +It is vital that you use the same embedding model to embed your queries as you did your documents. Using a different model will result in poor semantic search results or error. +{{% /alert %}} + +### 2. K-nearest neighbors (KNN) search +The KNN algorithm calculates the distance between the query vector and each vector in Redis based on the chosen distance function. It then returns the top K items with the smallest distances to the query vector. These are the most semantically similar items. + +Now construct a query to do just that: + +```python +query = ( + Query('(*)=>[KNN 3 @vector $query_vector AS vector_score]') + .sort_by('vector_score') + .return_fields('vector_score', 'id', 'brand', 'model', 'description') + .dialect(2) +) +``` + +Let's break down the above query template: +- The filter expression `(*)` means `all`. In other words, no filtering was applied. You could replace it with an expression that filters by additional metadata. +- The `KNN` part of the query searches for the top 3 nearest neighbors. +- The query vector must be passed in as the param `query_vector`. +- The distance to the query vector is returned as `vector_score`. +- The results are sorted by this `vector_score`. +- Finally, it returns the fields `vector_score`, `id`, `brand`, `model`, and `description` for each result. + +{{% alert title="Note" color="warning" %}} +To utilize a vector query with the [`FT.SEARCH`]({{< relref "commands/ft.search/" >}}) command, you must specify DIALECT 2 or greater. +{{% /alert %}} + +You must pass the vectorized query as a byte array with the param name `query_vector`. The following code creates a Python NumPy array from the query vector and converts it into a compact, byte-level representation that can be passed as a parameter to the query: + +```python +client.ft('idx:bikes_vss').search( + query, + { + 'query_vector': np.array(encoded_query, dtype=np.float32).tobytes() + } +).docs +``` + +With the template for the query in place, you can execute all queries in a loop. Notice that the script calculates the `vector_score` for each result as `1 - doc.vector_score`. Because the cosine distance is used as the metric, the items with the smallest distance are closer and, therefore, more similar to the query. + +Then, loop over the matched documents and create a list of results that can be converted into a Pandas table to visualize the results: + +{{< clients-example search_vss define_bulk_query />}} + +The query results show the individual queries' top three matches (our K parameter) along with the bike's id, brand, and model for each query. + +For example, for the query "Best Mountain bikes for kids", the highest similarity score (`0.54`) and, therefore the closest match was the 'Nord' brand 'Chook air 5' bike model, described as: + +> The Chook Air 5 gives kids aged six years and older a durable and uberlight mountain bike for their first experience on tracks and easy cruising through forests and fields. The lower top tube makes it easy to mount and dismount in any situation, giving your kids greater safety on the trails. The Chook Air 5 is the perfect intro to mountain biking. + +From the description, this bike is an excellent match for younger children, and the embeddings accurately captured the semantics of the description. 
+ +{{< clients-example search_vss run_knn_query />}} + +| query | score | id | brand | model | description | +| :--- | :--- | :--- | :--- | :--- | :--- | +| Best Mountain bikes for kids | 0.54 | bikes:003 | Nord | Chook air 5 | The Chook Air 5 gives kids aged six years and older a durable and uberlight mountain bike for their first experience on tracks and easy cruising through forests and fields. The lower top tube makes it easy to mount and dismount in any situation, giving your kids greater safety on the trails. The Chook Air 5 is the perfect intro to mountain biking. | +| | 0.51 | bikes:010 | nHill | Summit | This budget mountain bike from nHill performs well both on bike paths and on the trail. The fork with 100mm of travel absorbs rough terrain. Fat Kenda Booster tires give you grip in corners and on wet trails. The Shimano Tourney drivetrain offered enough gears for finding a comfortable pace to ride uphill, and the Tektro hydraulic disc brakes break smoothly. Whether you want an affordable bike that you can take to work, but also take trail riding on the weekends or you’re just after a stable,... | +| | 0.46 | bikes:001 | Velorim | Jigger | Small and powerful, the Jigger is the best ride for the smallest of tikes! This is the tiniest kids’ pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go. We say rare because this smokin’ little bike is not ideal for a nervous first-time rider, but it’s a true giddy up for a true speedster. The Jigger is a 12 inch lightweight kids bicycle and it will meet your little one’s need for speed. It’s a single... | + + +## Next steps + +1. You can learn more about the query options, such as filters and vector range queries, by reading the [vector reference documentation]({{< relref "/develop/interact/search-and-query/advanced-concepts/vectors" >}}). +2. The complete [Redis Query Engine documentation]({{< relref "/develop/interact/search-and-query/" >}}) might be interesting for you. +3. If you want to follow the code examples more interactively, then you can use the [Jupyter notebook](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/00_redispy.ipynb) that inspired this quick start guide. +4. If you want to see more advanced examples of a Redis vector database in action, visit the [Redis AI Resources](https://github.com/redis-developer/redis-ai-resources) page on GitHub. + +## Continue learning with Redis University + +{{< university-links >}} +--- +categories: +- docs +- develop +- stack +- oss +- rs +- rc +- oss +- kubernetes +- clients +description: 'Redis quick start guides + + ' +hideListLinks: true +linkTitle: Quick starts +title: Quick starts +weight: 20 +--- + +Redis can be used as a database, cache, streaming engine, message broker, and more. The following quick start guides will show you how to use Redis for the following specific purposes: + +1. [Data structure store]({{< relref "/develop/get-started/data-store" >}}) +2. [Document database]({{< relref "/develop/get-started/document-database" >}}) +3. [Vector database]({{< relref "/develop/get-started/vector-database" >}}) + +Please select the guide that aligns best with your specific usage scenario. + +You can find answers to frequently asked questions in the [FAQ]({{< relref "/develop/get-started/faq" >}}). 
+---
+title: Develop with Redis
+description: Learn how to develop with Redis
+linkTitle: Develop
+---
+
+Explore the pages below to learn more about developing with Redis Open Source.
+---
+LinkTitle: RedisOM for Java
+Title: RedisOM for Java
+categories:
+- docs
+- integrate
+- oss
+- rs
+- rc
+description: Learn how to build with Redis Stack and Spring
+group: library
+stack: true
+summary: The Redis OM for Java library is based on the Spring framework and provides
+  object-mapping abstractions.
+title: Redis OM Spring
+type: integration
+weight: 9
+---
+
+Redis Stack provides a seamless and straightforward way to use different data models and functionality from Redis, including a document store, a time series database, probabilistic data structures, and a full-text search engine.
+
+Redis Stack is supported by several client libraries, including Node.js, Java, and Python, so that developers can use their preferred language. We'll be using one of the Redis Stack supporting libraries: [Redis OM Spring](https://github.com/redis/redis-om-spring).
+Redis OM Spring provides a robust repository and custom object-mapping abstractions built on the powerful Spring Data Redis (SDR) framework.
+
+## What you’ll need:
+
+* Redis Stack: See [{{< relref "/operate/oss_and_stack/install/install-stack/" >}}]({{< relref "/operate/oss_and_stack/install/install-stack/" >}})
+* [Redis Insight]({{< relref "/develop/tools/insight" >}})
+* Your favorite browser
+* Java 11 or greater
+
+## Spring Boot scaffold with Spring Initializer
+
+We’ll start by creating a skeleton app using the [Spring Initializer](https://start.spring.io). Open your browser to https://start.spring.io and configure the skeleton application as follows:
+
+* We’ll use a Maven-based build (check the Maven checkbox)
+* Version **`2.6.4`** of Spring Boot, which is the current version supported by Redis OM Spring
+* Group: **`com.redis.om`**
+* Artifact: **`skeleton`**
+* Name: **`skeleton`**
+* Description: Skeleton App for Redis OM Spring
+* Package Name: **`com.redis.om.skeleton`**
+* Packaging: JAR
+* Java: **`11`**
+* Dependencies: **`web`**, **`devtools`**, and **`lombok`**.
+
+The `web` (Spring Web) dependency gives us the ability to build RESTful applications using Spring MVC. With `devtools` we get fast application restarts and reloads. And `lombok` reduces boilerplate code like getters and setters.
+
+![Spring Initializer](./images/001_stack_spring.png "Spring Initializer")
+
+Click `Generate` and download the ZIP file, unzip it, and load the Maven project into your IDE of choice.
+
+## Adding Redis OM Spring
+
+Open the Maven `pom.xml` and add the snapshots repository in a `<repositories>` section so that we can get the latest SNAPSHOT release of redis-om-spring:
+
+{{< highlight xml >}}
+<repositories>
+  <repository>
+    <id>snapshots-repo</id>
+    <url>https://s01.oss.sonatype.org/content/repositories/snapshots/</url>
+  </repository>
+</repositories>
+{{< / highlight >}}
+
+And then in the `<dependencies>` section add version `0.3.0` of Redis OM Spring:
+
+{{< highlight xml >}}
+<dependency>
+  <groupId>com.redis.om</groupId>
+  <artifactId>redis-om-spring</artifactId>
+  <version>0.3.0-SNAPSHOT</version>
+</dependency>
+{{< / highlight >}}
+
+## Adding Swagger
+
+We'll use the Swagger UI to test our web services endpoint. 
To add Swagger 2 to a Spring REST web service using the Springfox implementation, add the following dependencies to the POM:
+
+{{< highlight xml >}}
+<dependency>
+  <groupId>io.springfox</groupId>
+  <artifactId>springfox-boot-starter</artifactId>
+  <version>3.0.0</version>
+</dependency>
+<dependency>
+  <groupId>io.springfox</groupId>
+  <artifactId>springfox-swagger-ui</artifactId>
+  <version>3.0.0</version>
+</dependency>
+{{< / highlight >}}
+
+Let's add the Swagger `Docket` bean to the Spring App class:
+
+{{< highlight java >}}
+@Bean
+public Docket api() {
+  return new Docket(DocumentationType.SWAGGER_2)
+      .select()
+      .apis(RequestHandlerSelectors.any())
+      .paths(PathSelectors.any())
+      .build();
+}
+{{< / highlight >}}
+
+This will pick up any HTTP endpoints exposed by our application. Add the following to your app's property file (src/main/resources/application.properties):
+
+{{< highlight bash >}}
+spring.mvc.pathmatch.matching-strategy=ANT_PATH_MATCHER
+{{< / highlight >}}
+
+And finally, to enable Swagger on the application, we need to use the `@EnableSwagger2` annotation on the main application class:
+
+{{< highlight java >}}
+@EnableSwagger2
+@SpringBootApplication
+public class SkeletonApplication {
+  // ...
+}
+{{< / highlight >}}
+
+## Creating the Domain
+
+Our domain will be fairly simple: `Person`s that have `Address`es. Let's start with the `Person` entity:
+
+{{< highlight java >}}
+package com.redis.om.skeleton.models;
+
+import java.util.Set;
+
+import org.springframework.data.annotation.Id;
+import org.springframework.data.geo.Point;
+
+import com.redis.om.spring.annotations.Document;
+import com.redis.om.spring.annotations.Indexed;
+import com.redis.om.spring.annotations.Searchable;
+
+import lombok.AccessLevel;
+import lombok.AllArgsConstructor;
+import lombok.Data;
+import lombok.NonNull;
+import lombok.RequiredArgsConstructor;
+
+@RequiredArgsConstructor(staticName = "of")
+@AllArgsConstructor(access = AccessLevel.PROTECTED)
+@Data
+@Document
+public class Person {
+  // Id field, also indexed
+  @Id
+  @Indexed
+  private String id;
+
+  // Indexed for exact text matching
+  @Indexed @NonNull
+  private String firstName;
+
+  @Indexed @NonNull
+  private String lastName;
+
+  // Indexed for numeric matches
+  @Indexed @NonNull
+  private Integer age;
+
+  // Indexed for full-text matches
+  @Searchable @NonNull
+  private String personalStatement;
+
+  // Indexed for geo filtering
+  @Indexed @NonNull
+  private Point homeLoc;
+
+  // Nested indexed object
+  @Indexed @NonNull
+  private Address address;
+
+  @Indexed @NonNull
+  private Set<String> skills;
+}
+{{< / highlight >}}
+
+The `Person` class has the following properties:
+
+* `id`: An autogenerated `String` using [ULIDs](https://github.com/ulid/spec)
+* `firstName`: A `String` representing their first or given name.
+* `lastName`: A `String` representing their last name or surname.
+* `age`: An `Integer` representing their age in years.
+* `personalStatement`: A `String` representing a personal text statement containing facts or other biographical information.
+* `homeLoc`: An `org.springframework.data.geo.Point` representing their home geo coordinates.
+* `address`: An entity of type `Address` representing the Person's postal address.
+* `skills`: A `Set<String>` representing the skills the Person possesses.
+
+### @Document
+
+The `Person` class (`com.redis.om.skeleton.models.Person`) is annotated with `@Document` (`com.redis.om.spring.annotations.Document`), which marks the object as a Redis entity to be persisted as a JSON document by the appropriate type of repository. 
+
+### @Indexed and @Searchable
+
+The fields `id`, `firstName`, `lastName`, `age`, `homeLoc`, `address`, and `skills` are all annotated with `@Indexed` (`com.redis.om.spring.annotations.Indexed`). On entities annotated with `@Document`, Redis OM Spring will scan the fields and add an appropriate search index field to the schema for the entity. For example, for the `Person` class, an index named `com.redis.om.skeleton.models.PersonIdx` will be created on application startup. In the index schema, a search field will be added for each `@Indexed` annotated property. RediSearch, the underlying search engine powering searches, supports Text (full-text searches), Tag (exact-match searches), Numeric (range queries), Geo (geographic range queries), and Vector (vector queries) fields. For `@Indexed` fields, the appropriate search field (Tag, Numeric, or Geo) is selected based on the property's data type.
+
+Fields marked as `@Searchable` (`com.redis.om.spring.annotations.Searchable`), such as `personalStatement` in `Person`, are reflected as Full-Text search fields in the search index schema.
+
+### Nested Field Search Features
+
+The embedded class `Address` (`com.redis.om.skeleton.models.Address`) has several properties annotated with `@Indexed` and `@Searchable`, which will generate search index fields in Redis. The scanning of these fields is triggered by the `@Indexed` annotation on the `address` property in the `Person` class:
+
+{{< highlight java >}}
+package com.redis.om.skeleton.models;
+
+import com.redis.om.spring.annotations.Indexed;
+import com.redis.om.spring.annotations.Searchable;
+
+import lombok.Data;
+import lombok.NonNull;
+import lombok.RequiredArgsConstructor;
+
+@Data
+@RequiredArgsConstructor(staticName = "of")
+public class Address {
+
+  @NonNull
+  @Indexed
+  private String houseNumber;
+
+  @NonNull
+  @Searchable(nostem = true)
+  private String street;
+
+  @NonNull
+  @Indexed
+  private String city;
+
+  @NonNull
+  @Indexed
+  private String state;
+
+  @NonNull
+  @Indexed
+  private String postalCode;
+
+  @NonNull
+  @Indexed
+  private String country;
+}
+{{< / highlight >}}
+
+## Spring Data Repositories
+
+With the model in place, we now need to create the bridge between the models and Redis: a Spring Data repository. Like other Spring Data repositories, the goal of a Redis OM Spring data repository is to significantly reduce the boilerplate code required to implement data access. Create a Java interface like:
+
+{{< highlight java >}}
+package com.redis.om.skeleton.models.repositories;
+
+import com.redis.om.skeleton.models.Person;
+import com.redis.om.spring.repository.RedisDocumentRepository;
+
+public interface PeopleRepository extends RedisDocumentRepository<Person, String> {
+
+}
+{{< / highlight >}}
+
+That's really all we need to get all the CRUD and paging/sorting functionality. The `RedisDocumentRepository` (`com.redis.om.spring.repository.RedisDocumentRepository`) extends `PagingAndSortingRepository` (`org.springframework.data.repository.PagingAndSortingRepository`), which extends `CrudRepository` to provide additional methods for retrieving entities using pagination and sorting.
+
+### @EnableRedisDocumentRepositories
+
+Before we can fire up the application, we need to enable our Redis document repositories. Like most Spring Data projects, Redis OM Spring provides an annotation to do so: the `@EnableRedisDocumentRepositories` annotation. 
We annotate the main application class: + +{{< highlight java >}} +@EnableRedisDocumentRepositories(basePackages = "com.redis.om.skeleton.*") +@EnableSwagger2 +@SpringBootApplication +public class SkeletonApplication { +{{< / highlight >}} + +## CRUD with Repositories + +With the repositories enabled, we can use our repo; let's put in some data to see the object mapping in action. Let’s create `CommandLineRunner` that will execute on application startup: + +{{< highlight java >}} +public class SkeletonApplication { + + @Bean + CommandLineRunner loadTestData(PeopleRepository repo) { + return args -> { + repo.deleteAll(); + + String thorSays = “The Rabbit Is Correct, And Clearly The Smartest One Among You.”; + + // Serendipity, 248 Seven Mile Beach Rd, Broken Head NSW 2481, Australia + Address thorsAddress = Address.of("248", "Seven Mile Beach Rd", "Broken Head", "NSW", "2481", "Australia"); + + Person thor = Person.of("Chris", "Hemsworth", 38, thorSays, new Point(153.616667, -28.716667), thorsAddress, Set.of("hammer", "biceps", "hair", "heart")); + + repo.save(thor); + }; + } +{{< / highlight >}} + +In the `loadTestData` method, we will take an instance of the `PeopleRepository` (thank you, Spring, for Dependency Injection!). Inside the returned lambda, we will first call the repo’s `deleteAll` method, which will ensure that we have clean data on each application reload. + +We create a `Person` object using the Lombok generated builder method and then save it using the repo’s `save` method. + +### Keeping tabs with Redis Insight + +Let’s launch Redis Insight and connect to the localhost at port 6379. With a clean Redis Stack install, we can use the built-in CLI to check the keys in the system: + +![Redis Insight](./images/002_stack_spring.png "Redis Insight") + +For a small amount of data, you can use the `keys` command (for any significant amount of data, use `scan`): + +{{< highlight bash >}} +keys * +{{< / highlight >}} + +If you want to keep an eye on the commands issued against the server, Redis Insight provides a +profiler. If you click the "profile" button at the bottom of the screen, it should reveal the profiler window, and there you can start the profiler by clicking on the “Start Profiler” arrow. + +Let's start our Spring Boot application by using the Maven command: + +{{< highlight bash >}} +./mvnw spring-boot:run +{{< / highlight >}} + +On Redis Insight, if the application starts correctly, you should see a barrage of commands fly by on the profiler: + +![Redis Insight](./images/003_stack_spring.png "Redis Insight") + +Now we can inspect the newly loaded data by simply refreshing the "Keys" view: + +![Redis Insight](./images/004_stack_spring.png "Redis Insight") + +You should now see two keys; one for the JSON document for “Thor” and one for the Redis Set that Spring Data Redis (and Redis OM Spring) use to maintain the list of primary keys for an entity. + +You can select any of the keys on the key list to reveal their contents on the details panel. For JSON documents, we get a nice tree-view: + +![Redis Insight](./images/005_stack_spring.png "Redis Insight") + +Several Redis commands were executed on application startup. Let’s break them down so that we can understand what's transpired. + +### Index Creation + +The first one is a call to [`FT.CREATE`]({{< relref "commands/ft.create/" >}}), which happens after Redis OM Spring scanned the `@Document` annotations. As you can see, since it encountered the annotation on `Person`, it creates the `PersonIdx` index. 
+ +{{< highlight bash >}} +"FT.CREATE" + "com.redis.om.skeleton.models.PersonIdx" "ON" "JSON" + "PREFIX" "1" "com.redis.om.skeleton.models.Person:" +"SCHEMA" + "$.id" "AS" "id" "TAG" + "$.firstName" "AS" "firstName" "TAG" + "$.lastName" "AS" "lastName" "TAG" + "$.age" "AS" "age" "NUMERIC" + "$.personalStatement" "AS" "personalStatement" "TEXT" + "$.homeLoc" "AS" "homeLoc" "GEO" + "$.address.houseNumber" "AS" "address_houseNumber" "TAG" + "$.address.street" "AS" "address_street" "TEXT" "NOSTEM" + "$.address.city" "AS" "address_city" "TAG" + "$.address.state" "AS" "address_state" "TAG" + "$.address.postalCode" "AS" "address_postalCode" "TAG" + "$.address.country" "AS" "address_country" "TAG" + "$.skills[*]" "AS" "skills" +{{< / highlight >}} + +### Cleaning the Person Repository + +The next set of commands are generated by the call to `repo.deleteAll()`: + +{{< highlight bash >}} +"DEL" "com.redis.om.skeleton.models.Person" +"KEYS" "com.redis.om.skeleton.models.Person:*" +{{< / highlight >}} + +The first call clears the set of Primary Keys that Spring Data Redis maintains (and therefore Redis OM Spring), the second call collects all the keys to delete them, but there are none to delete on this first load of the data. + +### Saving Person Entities + +The next repo call is `repo.save(thor)` that triggers the following sequence: + +{{< highlight bash >}} +"SISMEMBER" "com.redis.om.skeleton.models.Person" "01FYANFH68J6WKX2PBPX21RD9H" +"EXISTS" "com.redis.om.skeleton.models.Person:01FYANFH68J6WKX2PBPX21RD9H" +"JSON.SET" "com.redis.om.skeleton.models.Person:01FYANFH68J6WKX2PBPX21RD9H" "." "{"id":"01FYANFH68J6WKX2PBPX21RD9H","firstName":"Chris","lastName":"Hemsworth","age":38,"personalStatement":"The Rabbit Is Correct, And Clearly The Smartest One Among You.","homeLoc":"153.616667,-28.716667","address":{"houseNumber":"248","street":"Seven Mile Beach Rd","city":"Broken Head","state":"NSW","postalCode":"2481","country":"Australia"},"skills":["biceps","hair","heart","hammer"]} +"SADD" "com.redis.om.skeleton.models.Person" "01FYANFH68J6WKX2PBPX21RD9H" +{{< / highlight >}} + +Let's break it down: + +* The first call uses the generated ULID to check if the id is in the set of primary keys (if it is, it’ll be removed) +* The second call checks if JSON document exists (if it is, it’ll be removed) +* The third call uses the [`JSON.SET`]({{< relref "commands/json.set/" >}}) command to save the JSON payload +* The last call adds the primary key of the saved document to the set of primary keys + +Now that we’ve seen the repository in action via the `.save` method, we know that the trip from Java to Redis work. Now let’s add some more data to make the interactions more interesting: + +{{< highlight java >}} +@Bean +CommandLineRunner loadTestData(PeopleRepository repo) { + return args -> { + repo.deleteAll(); + + String thorSays = “The Rabbit Is Correct, And Clearly The Smartest One Among You.”; + String ironmanSays = “Doth mother know you weareth her drapes?”; + String blackWidowSays = “Hey, fellas. Either one of you know where the Smithsonian is? 
I’m here to pick up a fossil.”; + String wandaMaximoffSays = “You Guys Know I Can Move Things With My Mind, Right?”; + String gamoraSays = “I Am Going To Die Surrounded By The Biggest Idiots In The Galaxy.”; + String nickFurySays = “Sir, I’m Gonna Have To Ask You To Exit The Donut”; + + // Serendipity, 248 Seven Mile Beach Rd, Broken Head NSW 2481, Australia + Address thorsAddress = Address.of("248", "Seven Mile Beach Rd", "Broken Head", "NSW", "2481", "Australia"); + + // 11 Commerce Dr, Riverhead, NY 11901 + Address ironmansAddress = Address.of("11", "Commerce Dr", "Riverhead", "NY", "11901", "US"); + + // 605 W 48th St, New York, NY 10019 + Address blackWidowAddress = Address.of("605", "48th St", "New York", "NY", "10019", "US"); + + // 20 W 34th St, New York, NY 10001 + Address wandaMaximoffsAddress = Address.of("20", "W 34th St", "New York", "NY", "10001", "US"); + + // 107 S Beverly Glen Blvd, Los Angeles, CA 90024 + Address gamorasAddress = Address.of("107", "S Beverly Glen Blvd", "Los Angeles", "CA", "90024", "US"); + + // 11461 Sunset Blvd, Los Angeles, CA 90049 + Address nickFuryAddress = Address.of("11461", "Sunset Blvd", "Los Angeles", "CA", "90049", "US"); + + Person thor = Person.of("Chris", "Hemsworth", 38, thorSays, new Point(153.616667, -28.716667), thorsAddress, Set.of("hammer", "biceps", "hair", "heart")); + Person ironman = Person.of("Robert", "Downey", 56, ironmanSays, new Point(40.9190747, -72.5371874), ironmansAddress, Set.of("tech", "money", "one-liners", "intelligence", "resources")); + Person blackWidow = Person.of("Scarlett", "Johansson", 37, blackWidowSays, new Point(40.7215259, -74.0129994), blackWidowAddress, Set.of("deception", "martial_arts")); + Person wandaMaximoff = Person.of("Elizabeth", "Olsen", 32, wandaMaximoffSays, new Point(40.6976701, -74.2598641), wandaMaximoffsAddress, Set.of("magic", "loyalty")); + Person gamora = Person.of("Zoe", "Saldana", 43, gamoraSays, new Point(-118.399968, 34.073087), gamorasAddress, Set.of("skills", "martial_arts")); + Person nickFury = Person.of("Samuel L.", "Jackson", 73, nickFurySays, new Point(-118.4345534, 34.082615), nickFuryAddress, Set.of("planning", "deception", "resources")); + + repo.saveAll(List.of(thor, ironman, blackWidow, wandaMaximoff, gamora, nickFury)); + }; +} +{{< / highlight >}} + +We have 6 People in the database now; since we’re using the devtools in Spring, the app should have reloaded, and the database reseeded with new data. Press enter the key pattern input box in Redis Insight to refresh the view. Notice that we used the repository’s `saveAll` to save several objects in bulk. 
+ +![Redis Insight](./images/006_stack_spring.png "Redis Insight") + +## Web Service Endpoints + +Before we beef up the repository with more interesting queries, let’s create a controller so that we can test our queries using the Swagger UI: + +{{< highlight java >}} +package com.redis.om.skeleton.controllers; + +import com.redis.om.skeleton.models.Person; +import com.redis.om.skeleton.models.repositories.PeopleRepository; + +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.web.bind.annotation.GetMapping; +import org.springframework.web.bind.annotation.RequestMapping; +import org.springframework.web.bind.annotation.RestController; + +@RestController +@RequestMapping("/api/v1/people") +public class PeopleControllerV1 { + @Autowired + PeopleRepository repo; + + @GetMapping("all") + Iterable all() { + return repo.findAll(); + } +} +{{< / highlight >}} + +In this controller, we inject a repository and use one of the CRUD methods, `findAll()`, to return all the `Person` documents in the database. + +If we navigate to http://localhost:8080/swagger-ui/ you should see the Swagger UI: + +![SwaggerUI](./images/007_stack_spring.png "SwaggerUI") + +We can see the `/all` method from our people-controller-v-1, expanding that you should see: + +![SwaggerUI](./images/008_stack_spring.png "SwaggerUI") + +And if you select “Try it out” and then “Execute,” you should see the resulting JSON array containing all People documents in the database: + +![SwaggerUI](./images/009_stack_spring.png "SwaggerUI") + +Let’s also add the ability to retrieve a Person by its id by using the repo’s findById method: + +{{< highlight java >}} +@GetMapping("{id}") +Optional byId(@PathVariable String id) { + return repo.findById(id); +} +{{< / highlight >}} + +Refreshing the Swagger UI, we should see the newly added endpoint. We can grab an id using the [`SRANDMEMBER`]({{< relref "/commands/srandmember" >}}) command on the Redis Insight CLI like this: + +{{< highlight bash >}} +SRANDMEMBER com.redis.om.skeleton.models.Person +{{< / highlight >}} + +Plugging the resulting ID in the Swagger UI, we can get the corresponding JSON document: + +![SwaggerUI](./images/010_stack_spring.png "SwaggerUI") + +## Custom Repository Finders + +Now that we tested quite a bit of the CRUD functionality, let's add some custom finders to our repository. We’ll start with a finder over a numeric range, on the `age` property of `Person`: + +{{< highlight java >}} +public interface PeopleRepository extends RedisDocumentRepository { + // Find people by age range + Iterable findByAgeBetween(int minAge, int maxAge); +} +{{< / highlight >}} + +At runtime, the repository method `findByAgeBetween` is fulfilled by the framework, so all you need to do is declare it, and Redis OM Spring will handle the querying and mapping of the results. The property or properties to be used are picked after the key phrase "findBy". The "Between" keyword is the predicate that tells the query builder what operation to use. + +To test it on the Swagger UI, let’s add a corresponding method to the controller: + +{{< highlight java >}} +@GetMapping("age_between") +Iterable byAgeBetween( // + @RequestParam("min") int min, // + @RequestParam("max") int max) { + return repo.findByAgeBetween(min, max); +} +{{< / highlight >}} + +Refreshing the UI, we can see the new endpoint. 
Let’s try it with some data:
+
+![SwaggerUI](./images/011_stack_spring.png "SwaggerUI")
+
+Invoking the endpoint with the value `30` for `min` and `37` for `max`, we get two hits;
+“Scarlett Johansson” and “Elizabeth Olsen” are the only two people with ages between 30 and 37.
+
+![SwaggerUI](./images/012_stack_spring.png "SwaggerUI")
+
+If we look at the Redis Insight profiler, we can see the resulting query, which is a range query on the indexed numeric field `age`:
+
+![Redis Insight](./images/013_stack_spring.png "Redis Insight Profiler")
+
+We can also create query methods with more than one property. For example, if we wanted to query by first and last names, we would declare a repository method like:
+
+{{< highlight java >}}
+// Find people by their first and last name
+Iterable<Person> findByFirstNameAndLastName(String firstName, String lastName);
+{{< / highlight >}}
+
+Let’s add a corresponding controller method:
+
+{{< highlight java >}}
+@GetMapping("name")
+Iterable<Person> byFirstNameAndLastName(@RequestParam("first") String firstName, //
+    @RequestParam("last") String lastName) {
+  return repo.findByFirstNameAndLastName(firstName, lastName);
+}
+{{< / highlight >}}
+
+Once again, we can refresh the Swagger UI and test the newly created endpoint:
+
+![SwaggerUI](./images/014_stack_spring.png "SwaggerUI")
+
+Executing the request with the first name `Robert` and last name `Downey`, we get:
+
+![SwaggerUI](./images/015_stack_spring.png "SwaggerUI")
+
+And the resulting query on Redis Insight:
+
+![Redis Insight](./images/016_stack_spring.png "Redis Insight Profiler")
+
+Now let’s try a geospatial query. The `homeLoc` property is a geo point, and by using the “Near” predicate in our method declaration, we get a finder that takes a point and a radius around that point to search:
+
+{{< highlight java >}}
+// Draws a circular geofilter around a spot and returns all people in that
+// radius
+Iterable<Person> findByHomeLocNear(Point point, Distance distance);
+{{< / highlight >}}
+
+And the corresponding controller method:
+
+{{< highlight java >}}
+@GetMapping("homeloc")
+Iterable<Person> byHomeLoc(//
+    @RequestParam("lat") double lat, //
+    @RequestParam("lon") double lon, //
+    @RequestParam("d") double distance) {
+  return repo.findByHomeLocNear(new Point(lon, lat), new Distance(distance, Metrics.MILES));
+}
+{{< / highlight >}}
+
+Refreshing the Swagger UI, we should now see the `byHomeLoc` endpoint. Let’s see which of the Avengers live within 10 miles of the Suffolk Park Pub in New South Wales, Australia... hmmm.
+
+![SwaggerUI](./images/017_stack_spring.png "SwaggerUI")
+
+Executing the request, we get the record for Chris Hemsworth:
+
+![SwaggerUI](./images/018_stack_spring.png "SwaggerUI")
+
+And in Redis Insight, we can see the backing query:
+
+![Redis Insight](./images/019_stack_spring.png "Redis Insight Profiler")
+
+Let’s try a full-text search query against the `personalStatement` property.
To do so, we prefix our query method with the word `search` (instead of `find`), as shown below:
+
+{{< highlight java >}}
+// Performs full-text search on a person’s personal statement
+Iterable<Person> searchByPersonalStatement(String text);
+{{< / highlight >}}
+
+And the corresponding controller method:
+
+{{< highlight java >}}
+@GetMapping("statement")
+Iterable<Person> byPersonalStatement(@RequestParam("q") String q) {
+  return repo.searchByPersonalStatement(q);
+}
+{{< / highlight >}}
+
+Once again, we can try it on the Swagger UI with the text “mother”:
+
+![SwaggerUI](./images/020_stack_spring.png "SwaggerUI")
+
+Which results in a single hit, the record for Robert Downey Jr.:
+
+![SwaggerUI](./images/021_stack_spring.png "SwaggerUI")
+
+Notice that you can also pass a query string like “moth*” with wildcards if needed:
+
+![SwaggerUI](./images/022_stack_spring.png "SwaggerUI")
+
+### Nested object searches
+
+You’ve noticed that the `address` object in `Person` is mapped as a JSON object. If we want to search by address fields, we use an underscore to access the nested fields. For example, if we wanted to find a Person by their city, the method signature would be:
+
+{{< highlight java >}}
+// Performing a tag search on city
+Iterable<Person> findByAddress_City(String city);
+{{< / highlight >}}
+
+Let’s add the matching controller method so that we can test it:
+
+{{< highlight java >}}
+@GetMapping("city")
+Iterable<Person> byCity(@RequestParam("city") String city) {
+  return repo.findByAddress_City(city);
+}
+{{< / highlight >}}
+
+Let’s test the `byCity` endpoint:
+
+![SwaggerUI](./images/023_stack_spring.png "SwaggerUI")
+
+As expected, we get two hits; Scarlett Johansson and Elizabeth Olsen, both with addresses in New York:
+
+![SwaggerUI](./images/024_stack_spring.png "SwaggerUI")
+
+The `skills` set is indexed as a tag field. To find a Person with any of the skills in a provided list, we can add a repository method like:
+
+{{< highlight java >}}
+// Search Persons that have one of multiple skills (OR condition)
+Iterable<Person> findBySkills(Set<String> skills);
+{{< / highlight >}}
+
+And the corresponding controller method:
+
+{{< highlight java >}}
+@GetMapping("skills")
+Iterable<Person> byAnySkills(@RequestParam("skills") Set<String> skills) {
+  return repo.findBySkills(skills);
+}
+{{< / highlight >}}
+
+Let's test the endpoint with the value "deception":
+
+![SwaggerUI](./images/025_stack_spring.png "SwaggerUI")
+
+The search returns the records for Scarlett Johansson and Samuel L. Jackson:
+
+![SwaggerUI](./images/026_stack_spring.png "SwaggerUI")
+
+We can see the backing query using a tag search:
+
+![Redis Insight](./images/027_stack_spring.png "Redis Insight Profiler")
+
+## Fluid Searching with Entity Streams
+
+Redis OM Spring Entity Streams provide a Java 8 Streams interface for querying Redis JSON documents in Redis Stack. Entity Streams allow you to process data in a typesafe, declarative way, similar to SQL statements. Streams can be used to express a query as a chain of operations.
+
+Entity Streams in Redis OM Spring provide the same semantics as Java 8 streams. Streams can be made of Redis mapped entities (`@Document`) or one or more properties of an entity. Entity Streams progressively build the query until a terminal operation (such as `collect`) is invoked. Once a terminal operation is applied, the stream cannot accept additional operations in its pipeline; this is the point at which the query is actually executed.
+ +Let’s start with a simple example, a Spring `@Service` which includes `EntityStream` to query for instances of the mapped class `Person`: + +{{< highlight java >}} +package com.redis.om.skeleton.services; + +import java.util.stream.Collectors; + +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Service; + +import com.redis.om.skeleton.models.Person; +import com.redis.om.skeleton.models.Person$; +import com.redis.om.spring.search.stream.EntityStream; + +@Service +public class PeopleService { + @Autowired + EntityStream entityStream; + + // Find all people + public Iterable findAllPeople(int minAge, int maxAge) { + return entityStream // + .of(Person.class) // + .collect(Collectors.toList()); + } + +} +{{< / highlight >}} + +The `EntityStream` is injected into the `PeopleService` using `@Autowired`. We can then get a stream for `Person` objects by using `entityStream.of(Person.class)`. The stream represents the equivalent of a `SELECT * FROM Person` on a relational database. The call to `collect` will then execute the underlying query and return a collection of all `Person` objects in Redis. + +### Entity Meta-model + +You’re provided with a generated meta-model to produce more elaborate queries, a class with the same name as your model but ending with a dollar sign. In the +example below, our entity model is `Person`; therefore, we get a meta-model named `Person$`. With the meta-model, you have access to the +underlying search engine field operations. For example, we have an `age` property which is an integer. Therefore our meta-model has an `AGE` property with +numeric operations we can use with the stream’s `filter` method such as `between`. + +{{< highlight java >}} +// Find people by age range +public Iterable findByAgeBetween(int minAge, int maxAge) { + return entityStream // + .of(Person.class) // + .filter(Person$.AGE.between(minAge, maxAge)) // + .sorted(Person$.AGE, SortOrder.ASC) // + .collect(Collectors.toList()); +} +{{< / highlight >}} + +In this example, we also use the Streams `sorted` method to declare that our stream will be sorted by the `Person$.AGE` in `ASC`ending order. + +To "AND" property expressions we can chain multiple `.filter` statements. For example, to recreate +the finder by first and last name we can use an Entity Stream in the following way: + +{{< highlight java >}} +// Find people by their first and last name +public Iterable findByFirstNameAndLastName(String firstName, String lastName) { + return entityStream // + .of(Person.class) // + .filter(Person$.FIRST_NAME.eq(firstName)) // + .filter(Person$.LAST_NAME.eq(lastName)) // + .collect(Collectors.toList()); +} +{{< / highlight >}} + +In this article, we explored how Redis OM Spring provides a couple of APIs to tap into the power of Redis Stack’s document database and search features from Spring Boot application. We’ll explore other Redis Stack features via Redis OM Spring in future articles + + + + + + +--- +description: RIOT quick start guide +linkTitle: Quick Start +title: Quick Start +type: integration +weight: 3 +--- + +You can launch RIOT with the following command: + +``` +riot +``` + +This will show usage help, which you can also get by running: + +``` +riot --help +``` + +{{< tip >}} +You can use `--help` on any command and sub-command: +{{< /tip >}} + +``` +riot command --help +riot command sub-command --help +``` + +Full documentation is available at [redis.github.io/riot](https://redis.github.io/riot/). 
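+
+For instance, assuming the `file-import` command covered in the full documentation, you could inspect its options with:
+
+```
+riot file-import --help
+```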
+--- +description: Install RIOT on macOS, Linux, Windows, and Docker +linkTitle: Install +title: Install +type: integration +weight: 2 +--- + +RIOT can be installed in different ways depending on your environment and preference. + +## macOS via Homebrew + +``` +brew install redis/tap/riot +``` + +## Windows via Scoop + +``` +scoop bucket add redis https://github.com/redis/scoop.git +scoop install riot +``` + +## Linux via Homebrew + +``` +brew install redis/tap/riot +``` + +## Docker + +``` +docker run riotx/riot [OPTIONS] [COMMAND] +``` + +## Manual installation + +Download the pre-compiled binary from the [releases page](https://github.com/redis/riot/releases), uncompress, and copy to the desired location. + +Full documentation is available at [redis.github.io/riot](https://redis.github.io/riot/). +--- +description: RIOT documentation +linkTitle: Documentation +title: Documentation +type: integration +weight: 4 +--- + +Full documentation for RIOT is available at [redis.github.io/riot](https://redis.github.io/riot/).--- +categories: +- docs +- integrate +- stack +- oss +- rs +- rc +- oss +description: Redis Input/Output Tools +group: mig +hidden: false +hideListLinks: true +linkTitle: RIOT +summary: Redis Input/Output Tools (RIOT) is a command-line utility designed to help + you get data in and out of Redis. +title: RIOT +type: integration +weight: 1 +--- + +Redis Input/Output Tools (RIOT) is a command-line utility designed to help you get data in and out of Redis. + +It supports many different sources and targets: + +* [Files](https://redis.github.io/riot/#_file) (CSV, JSON, XML) +* [Data generators](https://redis.github.io/riot/#_datagen) (Redis data structures, Faker) +* [Relational databases](https://redis.github.io/riot/#_db) +* [Redis itself](https://redis.github.io/riot/#_replication) (snapshot and live replication) + +Full documentation is available at [redis.github.io/riot](https://redis.github.io/riot/) +--- +LinkTitle: Spring Data Redis +Title: Spring Data Redis +alwaysopen: false +categories: +- docs +- integrate +- stack +- oss +- rs +- rc +- oss +- client +description: Plug Redis into your Spring application with minimal effort +group: framework +summary: Spring Data Redis implements the Spring framework's cache abstraction for + Redis, which allows you to plug Redis into your Spring application with minimal + effort. +type: integration +weight: 8 +--- + +Spring Data Redis implements the Spring framework's cache abstraction for Redis, which allows you to plug Redis into your Spring application with minimal effort. + +Spring's cache abstraction applies cache-aside to methods, reducing executions by storing and reusing results. When a method is invoked, the abstraction checks if it's been called with the same arguments before. If so, it returns the cached result. If not, it invokes the method, caches the result, and returns it. This way, costly methods are invoked less often. Further details are in the [Spring cache abstraction documentation](https://docs.spring.io/spring-framework/reference/integration/cache.html). + +## Get started + +In a nutshell, you need to perform the following steps to use Redis as your cache storage: + +1. [Configure the cache storage](https://docs.spring.io/spring-framework/reference/integration/cache/store-configuration.html) by using the [Redis cache manager](https://docs.spring.io/spring-data/redis/reference/redis/redis-cache.html) that is part of Spring Data. +2. Annotate a repository with your `@CacheConfig`. +3. 
Use the `@Cachable` annotation on a repository method to cache the results of that method. + +Here is an example: + +``` +@CacheConfig("books") +public class BookRepositoryImpl implements BookRepository { + + @Cacheable + public Book findBook(ISBN isbn) {...} +} +``` + +## Further readings + +Please read the Spring framework's documentation to learn more about how to use the Redis cache abstraction for Spring: + +* [Spring cache abstraction](https://docs.spring.io/spring-framework/reference/integration/cache.html) +* [Spring cache store configuration](https://docs.spring.io/spring-framework/reference/integration/cache/store-configuration.html) +* [Spring Data Redis Cache](https://docs.spring.io/spring-data/redis/reference/redis/redis-cache.html)--- +LinkTitle: lettuce +Title: Java client for Redis +categories: +- docs +- integrate +- oss +- rs +- rc +description: Learn how to build with Redis and Java +group: library +stack: true +summary: Lettuce is a Java library for Redis. +title: Lettuce +type: integration +weight: 2 +--- + +Connect your Java application to a Redis database using the Lettuce client library. + +Refer to the complete [Lettuce guide]({{< relref "/develop/clients/lettuce" >}}) to install, connect, and use Lettuce. +--- +LinkTitle: Uptrace with Redis Enterprise +Title: Uptrace with Redis Enterprise +alwaysopen: false +categories: +- docs +- integrate +- rs +description: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect Uptrace to your Redis Enterprise cluster using + OpenTelemetry Collector. +group: observability +summary: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect Uptrace to your Redis Enterprise cluster using + OpenTelemetry Collector. +type: integration +weight: 7 +--- + +Uptrace is an [open source APM tool](https://uptrace.dev/get/open-source-apm.html) that supports distributed tracing, metrics, and logs. You can use it to monitor applications and set up automatic alerts to receive notifications. + +Uptrace uses OpenTelemetry to collect and export telemetry data from software applications such as Redis. OpenTelemetry is an open source observability framework that aims to provide a single standard for all types of observability signals such as traces, metrics, and logs. + +With OpenTelemetry Collector, you can receive, process, and export telemetry data to any [OpenTelemetry backend](https://uptrace.dev/blog/opentelemetry-backend.html). You can also use Collector to scrape Prometheus metrics provided by Redis and then export those metrics to Uptrace. + +You can use Uptrace to: + +- Collect and display data metrics not available in the [admin console]({{< relref "/operate/rs/references/metrics" >}}). +- Use prebuilt dashboard templates maintained by the Uptrace community. +- Set up automatic alerts and receive notifications via email, Slack, Telegram, and others. +- Monitor your app performance and logs using [OpenTelemetry tracing](https://uptrace.dev/opentelemetry/distributed-tracing.html). + +{{< image filename="/images/rs/uptrace-redis-nodes.png" >}} + +## Install Collector and Uptrace + +Because installing OpenTelemetry Collector and Uptrace can take some time, you can use the [docker-compose](https://github.com/uptrace/uptrace/tree/master/example/redis-enterprise) example that also comes with Redis Enterprise cluster. 
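+
+For example, one way to get the example onto your machine is to clone the Uptrace repository and change into the example directory; once you have adjusted the configuration files described below, you can start the containers with Docker Compose. The commands are a sketch and may differ slightly in your environment:
+
+```shell
+# Fetch the docker-compose example that ships with the Uptrace repository
+git clone https://github.com/uptrace/uptrace.git
+cd uptrace/example/redis-enterprise
+
+# Edit otel-collector.yaml and uptrace.yml first (see below), then start everything
+docker-compose up -d
+```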
+ +After you download the Docker example, you can edit the following configuration files in the `uptrace/example/redis-enterprise` directory before you start the Docker containers: + +- `otel-collector.yaml` - Configures `/etc/otelcol-contrib/config.yaml` in the OpenTelemetry Collector container. +- `uptrace.yml` - Configures`/etc/uptrace/uptrace.yml` in the Uptrace container. + +You can also install OpenTelemetry and Uptrace from scratch using the following guides: + +- [Getting started with OpenTelemetry Collector](https://uptrace.dev/opentelemetry/collector.html) +- [Getting started with Uptrace](https://uptrace.dev/get/get-started.html) + +After you install Uptrace, you can access the Uptrace UI at [http://localhost:14318/](http://localhost:14318/). + +## Scrape Prometheus metrics + +Redis Enterprise cluster exposes a Prometheus scraping endpoint on `http://localhost:8070/`. You can scrape that endpoint by adding the following lines to the OpenTelemetry Collector config: + +```yaml +# /etc/otelcol-contrib/config.yaml + +prometheus_simple/cluster1: + collection_interval: 10s + endpoint: "localhost:8070" # Redis Cluster endpoint + metrics_path: "/" + tls: + insecure: false + insecure_skip_verify: true + min_version: "1.0" +``` + +Next, you can export the collected metrics to Uptrace using OpenTelemetry protocol (OTLP): + +```yaml +# /etc/otelcol-contrib/config.yaml + +receivers: + otlp: + protocols: + grpc: + http: + +exporters: + otlp/uptrace: + # Uptrace is accepting metrics on this port + endpoint: localhost:14317 + headers: { "uptrace-dsn": "http://project1_secret_token@localhost:14317/1" } + tls: { insecure: true } + +service: + pipelines: + traces: + receivers: [otlp] + processors: [batch] + exporters: [otlp/uptrace] + metrics: + receivers: [otlp, prometheus_simple/cluster1] + processors: [batch] + exporters: [otlp/uptrace] + logs: + receivers: [otlp] + processors: [batch] + exporters: [otlp/uptrace] +``` + +Don't forget to restart the Collector and then check logs for any errors: + +```shell +docker-compose logs otel-collector + +# or + +sudo journalctl -u otelcol-contrib -f +``` + +You can also check the full OpenTelemetry Collector config [here](https://github.com/uptrace/uptrace/blob/master/example/redis-enterprise/otel-collector.yaml). + +## View metrics + +When metrics start arriving to Uptrace, you should see a couple of dashboards in the Metrics tab. In total, Uptrace should create 3 dashboards for Redis Enterprise metrics: + +- "Redis: Nodes" dashboard displays a list of cluster nodes. You can select a node to view its metrics. + +- "Redis: Databases" displays a list of Redis databases in all cluster nodes. To find a specific database, you can use filters or sort the table by columns. + +- "Redis: Shards" contains a list of shards that you have in all cluster nodes. You can filter or sort shards and select a shard for more details. + +## Monitor metrics + +To start monitoring metrics, you need to create metrics monitors using Uptrace UI: + +- Open "Alerts" -> "Monitors". +- Click "Create monitor" -> "Create metrics monitor". 
+ +For example, the following monitor uses the `group by node` expression to create an alert whenever an individual Redis shard is down: + +```yaml +monitors: + - name: Redis shard is down + metrics: + - redis_up as $redis_up + query: + - group by cluster # monitor each cluster, + - group by bdb # each database, + - group by node # and each shard + - $redis_up + min_allowed_value: 1 + # shard should be down for 5 minutes to trigger an alert + for_duration: 5m +``` + +You can also create queries with more complex expressions. + +For example, the following monitors create an alert when the keyspace hit rate is lower than 75% or memory fragmentation is too high: + +```yaml +monitors: + - name: Redis read hit rate < 75% + metrics: + - redis_keyspace_read_hits as $hits + - redis_keyspace_read_misses as $misses + query: + - group by cluster + - group by bdb + - group by node + - $hits / ($hits + $misses) as hit_rate + min_allowed_value: 0.75 + for_duration: 5m + + - name: Memory fragmentation is too high + metrics: + - redis_used_memory as $mem_used + - redis_mem_fragmentation_ratio as $fragmentation + query: + - group by cluster + - group by bdb + - group by node + - where $mem_used > 32mb + - $fragmentation + max_allowed_value: 3 + for_duration: 5m +``` + +You can learn more about the query language [here](https://uptrace.dev/get/querying-metrics.html). +--- +LinkTitle: Confluent with Redis Cloud +Title: Confluent with Redis Cloud +alwaysopen: false +categories: +- docs +- integrate +- rc +description: Describes how to integrate Redis Cloud into Confluent Cloud. +group: di +summary: The Redis Sink connector for Confluent Cloud allows you to send data from + Confluent Cloud to your Redis Cloud database. +type: integration +weight: 8 +--- + +You can send data from [Confluent Cloud](https://confluent.cloud/) to your Redis Cloud database using the [Redis Sink connector for Confluent Cloud](https://docs.confluent.io/cloud/current/connectors/cc-redis-sink.html). + +## Prerequisites + +Before you add the Redis Sink Confluent connector to your Confluent Cloud cluster: + +1. [Create a database]({{< relref "/operate/rc/databases/create-database" >}}) in the same region as your Confluent Cloud cluster. + +1. If you decide to [enable Transport Layer Security (TLS)]({{< relref "/operate/rc/security/database-security/tls-ssl" >}}) for your Redis database, [download the server certificate]({{< relref "/operate/rc/security/database-security/tls-ssl#download-certificates" >}}) from the Redis Cloud console and [encode it](#encode-server-certificate) to be used with Confluent Cloud. + +1. Ensure you meet the prerequisites in the [Redis Sink connector documentation](https://docs.confluent.io/cloud/current/connectors/cc-redis-sink.html#quick-start) to set up your Redis Sink with Confluent Cloud. + +### Encode server certificate + +If you decide to enable Transport Layer Security (TLS) for your database, you will need to encode the [server certificate]({{< relref "/operate/rc/security/database-security/tls-ssl#download-certificates" >}}) (`redis_ca.pem`) for use as the Confluent Cloud Truststore file. To do this: + +1. Use a base64 utility to encode `redis_ca.pem` into base64 in a new file. For example, using the [`base64` command-line utility](https://linux.die.net/man/1/base64): + + ```sh + $ base64 -i redis_ca.pem -o + ``` + +1. Using a text editor, add the following text to the beginning of the truststore file: + + ```text + data:text/plain;base64 + ``` + +1. Save and close the truststore file. 
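+
+Putting those steps together, a minimal shell sketch might look like the following. The `redis_ca.b64` and `truststore.txt` filenames are only examples; use whatever names you prefer when uploading the truststore file to Confluent Cloud:
+
+```sh
+# Base64-encode the downloaded server certificate into a temporary file
+base64 -i redis_ca.pem -o redis_ca.b64
+
+# Prepend the prefix required by Confluent Cloud and write the truststore file
+printf 'data:text/plain;base64' > truststore.txt
+cat redis_ca.b64 >> truststore.txt
+```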
+ +## Connect the Redis Sink connector to Redis Cloud + +To add the Redis Sink connector to your Confluent Cloud environment from the Redis Cloud console: + +1. From the [Redis Cloud console](https://cloud.redis.io/), select **Account Settings** and then select the **Integrations** tab. + +1. Select the **Configure** button in the **Confluent** tile. + + {{The Confluent integration tile.}} + +1. This will take you to [New Sink Connector](https://confluent.cloud/go/new-sink-connector/RedisSink) on Confluent Cloud. If you have more than one Confluent Cloud environment or Cluster, select your environment and cluster from the lists and select **Continue**. + + {{Select your environment and cluster from the Create a Connector selector.}} + +1. From there, follow the steps to [Enter the connector details](https://docs.confluent.io/cloud/current/connectors/cc-redis-sink.html#step-4-enter-the-connector-details) on the Confluent documentation. + + When you get to the **Authentication** step, fill in the fields with the following information: + + - **Redis hostname**: The Public endpoint of your database, without the port number. This can be found in the [Redis Cloud console](https://cloud.redis.io/) from the database list or from the **General** section of the **Configuration** tab for the source database. + - **Redis port number**: The database's port. This is the number at the end of your database's Public endpoint. + - **Redis database index**: Set this to 0 for a Redis Cloud database. + - **Redis server password**: Enter the database password. If you have not set your own database user and password, use the [default user password]({{< relref "/operate/rc/security/access-control/data-access-control/default-user" >}}), which appears in the **Security** section of the **Configuration** tab of the database details screen. + - **SSL mode**: Set depending on what type of [TLS authentication]({{< relref "/operate/rc/security/database-security/tls-ssl" >}}) is set for your database. + - If TLS authentication is turned off, select **disabled**. + - If TLS authentication is turned on, select **server**. + - **Trustore file**: If the **SSL mode** is set to **server**, upload the truststore file created when you [encoded the server certificate](#encode-server-certificate). + - **Redis Server mode**: If [OSS Cluster API]({{< relref "/operate/rc/databases/configuration/clustering#oss-cluster-api" >}}) is enabled, select **Cluster**. Otherwise, select **Standalone**. + + Select **Continue** once you have entered the database information. Enter the rest of the [connector details](https://docs.confluent.io/cloud/current/connectors/cc-redis-sink.html#step-4-enter-the-connector-details) from the **Configuration** step. + +1. [Connect to your database]({{< relref "/operate/rc/rc-quickstart#connect-to-a-database" >}}) to verify that data is being stored. + + + +--- +LinkTitle: Get started +Title: Get started with Pulumi +alwaysopen: false +categories: +- docs +- integrate +- rc +description: Shows how to install the Redis Cloud Pulumi provider and create a subscription. +group: provisioning +headerRange: '[1-3]' +summary: With the Redis Cloud Resource Provider you can provision Redis Cloud resources + by using the programming language of your choice. +toc: 'true' +type: integration +weight: $weight +--- + +Here, you'll learn how to use the [Redis Cloud Pulumi provider]({{< relref "/integrate/pulumi-provider-for-redis-cloud/" >}}) to create a Redis Cloud Pro subscription and a database using Python. 
+ +## Prerequisites + +1. [Install Pulumi](https://www.pulumi.com/docs/install/) and [create a Pulumi account](https://app.pulumi.com/signin) if you do not have one already. + +1. [Create a Redis Cloud account]({{< relref "/operate/rc/rc-quickstart#create-an-account" >}}) if you do not have one already. + +1. [Enable the Redis Cloud API]({{< relref "/operate/rc/api/get-started/enable-the-api" >}}). + +1. Get your Redis Cloud [API keys]({{< relref "/operate/rc/api/get-started/manage-api-keys" >}}). + +## Install the Pulumi provider files + +1. In your Python project, create an empty folder. From this folder, run `pulumi new rediscloud-python`. + +1. Log into Pulumi using your [Pulumi access token](https://app.pulumi.com/account/tokens) if prompted. + +1. Enter a project name, description, and stack name. + +1. Enter your Redis Cloud access and secret keys. + +1. Enter the credit card type (Visa, Mastercard) on file with your Redis Cloud account. + +1. Enter the last four numbers of the card on file with your Redis Cloud account. + +Once these steps are completed, the dependencies needed for the project will be installed and a Python virtual environment will be created. + +## Deploy resources with Pulumi + +The Pulumi Python project includes three main files: + +- `pulumi.yaml` : A metadata file which is used to help configure the Python runtime environment. + +- `pulumi.YOUR_PROJECT_NAME.yaml`: Contains the information related to the Cloud API access and secret key, credit card type and last 4 digits. + +- `__main__.py`: A Pulumi template file that creates a Redis Cloud Pro subscription. Use this template file as a starting point to create the subscription with a cloud provider and define specifications for the database (this includes memory, throughput, Redis advanced capabilities, and other information). + +To deploy the resources described in `__main__.py`, run `pulumi up`. This will take some time. You will be able to see your subscription being created through the [Redis Cloud console](https://cloud.redis.io/). + +If you want to remove these resources, run `pulumi down`. + +## More info + +- [Redis Cloud Pulumi registry](https://www.pulumi.com/registry/packages/rediscloud/) +- [Pulumi documentation](https://www.pulumi.com/docs/) +--- +LinkTitle: Pulumi provider for Redis Cloud +Title: Pulumi provider for Redis Cloud +alwaysopen: false +categories: +- docs +- integrate +- rc +description: Explains how to use Pulumi to provision Redis Cloud infrastructure +group: provisioning +summary: With the Redis Cloud Resource Provider you can provision Redis Cloud resources + by using the programming language of your choice. +type: integration +weight: 4 +hideListLinks: true +--- + +[Pulumi](https://www.pulumi.com/) is an automation tool that allows you to easily provision infrastructure as code. Pulumi allows developers to write infrastructure code using programming languages rather than using domain-specific languages. + +With the [Redis Cloud Resource Provider](https://www.pulumi.com/registry/packages/rediscloud/), you can create Redis Cloud resources in a programming language. The Pulumi Redis Cloud Provider supports the following programming languages: + +* TypeScript +* Python +* C# +* Java +* Go +* YAML + +The Redis Cloud Pulumi provider is based on the [Redis Cloud Terraform provider]({{< relref "/integrate/terraform-provider-for-redis-cloud/" >}}). + +{{}} +The Redis Cloud Pulumi Redis Cloud provider supports Redis Cloud Pro. It does not support Redis Cloud Essentials. 
+{{}} + +See [Get started with Pulumi]({{< relref "/integrate/pulumi-provider-for-redis-cloud/get-started" >}}) for an example of how to use the Pulumi provider with Python. + +## Resources and functions + +Pulumi resources represent the fundamental units that make up cloud infrastructure. A provider can make functions available in its SDK and resource types. These functions are often used to acquire information that is not part of a resource. + +The Redis Cloud Pulumi provider allows for the following resources: + +* [`Subscription`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/subscription/): The basic building block of a Redis Cloud subscription. +* [`SubscriptionDatabase`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/subscriptiondatabase/): Represents a Redis database which belongs to a specific Redis Cloud subscription. +* [`SubscriptionPeering`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/subscriptionpeering/): A VPC peering connection (AWS or GCP) to a specific Redis Cloud subscription. +* [`CloudAccount`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/cloudaccount/): Represents an AWS account in which you want to deploy Redis Cloud infrastructure components. + + {{}} +The "bring your own AWS account" option for Redis Cloud has been deprecated. The `CloudAccount` resource is only available for legacy Redis Cloud integrations. + {{}} + +* [`ActiveActiveSubscription`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/activeactivesubscription/): The basic building block of an active-active Redis Cloud subscription. +* [`ActiveActiveSubscriptionDatabase`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/activeactivesubscriptiondatabase/): Represents a Redis database which belongs to a specific Redis Cloud active-active subscription. +* [`ActiveActiveSubscriptionRegions`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/activeactivesubscriptionregions/): The different regions where the active-active subscription will be deployed. +* [`ActiveActiveSubscriptionPeering`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/activeactivesubscriptionpeering/): A VPC peering connection (AWS or GCP) to a specific Redis Cloud active-active subscription. +* [`AclRule`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/aclrule/), [`AclRole`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/aclrole/), and [`AclUser`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/acluser/): Rules, Roles, and Users for [Role-based access control]({{< relref "/operate/rc/security/access-control/data-access-control/role-based-access-control" >}}). + +It also allows for the following functions: + +* [`GetCloudAccount`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/getcloudaccount/): Get the information related to the AWS account. + + {{}} +The "bring your own AWS account" option for Redis Cloud has been deprecated. The `CloudAccount` resource is only available for legacy Redis Cloud integrations. + {{}} + +* [`GetDataPersistence`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/getdatapersistence/): Get the type of database persistence. +* [`GetDatabase`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/getdatabase/): Get the information related to a specific database. +* [`GetDatabaseModules`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/getdatabasemodules/): Get the capabilities for a specific database. 
+* [`GetPaymentMethod`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/getpaymentmethod/): Get the payment method related to the Redis Cloud account. +* [`GetRegions`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/getregions/): Get the regions related to an active-active subscription +* [`GetSubscription`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/getsubscription/): Get the information related to a specific subscription. +* [`GetSubscriptionPeerings`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/getsubscriptionpeerings/): Get the VPC peerings (AWS or GCP) related to a specific subscription. +* [`GetAclRule`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/getaclrule/), [`GetAclRole`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/getaclrole/), and [`GetAclUser`](https://www.pulumi.com/registry/packages/rediscloud/api-docs/getacluser/): Get the Rules, Roles, and Users for [Role-based access control]({{< relref "/operate/rc/security/access-control/data-access-control/role-based-access-control" >}}). + +## More info + +- [Get started with Pulumi]({{< relref "/integrate/pulumi-provider-for-redis-cloud/get-started" >}}) +- [Redis Cloud Pulumi registry](https://www.pulumi.com/registry/packages/rediscloud/) +- [Pulumi documentation](https://www.pulumi.com/docs/) +--- +LinkTitle: RedisOM for Python +Title: RedisOM for Python +categories: +- docs +- integrate +- oss +- rs +- rc +description: Learn how to build with Redis Stack and Python +group: library +stack: true +summary: Redis OM for Python is an object-mapping library for Redis. +title: Redis OM Python +type: integration +weight: 9 +--- + +[Redis OM Python](https://github.com/redis/redis-om-python) is a Redis client that provides high-level abstractions for managing document data in Redis. This tutorial shows you how to get up and running with Redis OM Python, Redis Stack, and the [Flask](https://flask.palletsprojects.com/) micro-framework. + +We'd love to see what you build with Redis Stack and Redis OM. [Join the Redis community on Discord](https://discord.gg/redis) to chat with us about all things Redis OM and Redis Stack. Read more about Redis OM Python [our announcement blog post](https://redis.com/blog/introducing-redis-om-for-python/). + +## Overview + +This application, an API built with Flask and a simple domain model, demonstrates common data manipulation patterns using Redis OM. + +Our entity is a Person, with the following JSON representation: + +```json +{ + "first_name": "A string, the person's first or given name", + "last_name": "A string, the person's last or surname", + "age": 36, + "address": { + "street_number": 56, + "unit": "A string, optional unit number e.g. B or 1", + "street_name": "A string, name of the street they live on", + "city": "A string, name of the city they live in", + "state": "A string, state, province or county that they live in", + "postal_code": "A string, their zip or postal code", + "country": "A string, country that they live in." + }, + "personal_statement": "A string, free text personal statement", + "skills": [ + "A string: a skill the person has", + "A string: another still that the person has" + ] +} +``` + +We'll let Redis OM handle generation of unique IDs, which it does using [ULIDs](https://github.com/ulid/spec). Redis OM will also handle creation of unique Redis key names for us, as well as saving and retrieving entities from JSON documents stored in a Redis Stack database. 
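+
+To give you a feel for how that JSON shape maps to code before we dive in, here is a minimal sketch of roughly what the Redis OM model looks like. The real model lives in `person.py` in the sample repository and is walked through field by field later in this tutorial, so treat this as orientation rather than the definitive version:
+
+```py
+from typing import List, Optional
+
+from pydantic import PositiveInt
+from redis_om import EmbeddedJsonModel, Field, JsonModel
+
+
+class Address(EmbeddedJsonModel):
+    # Embedded JSON object nested inside each Person document
+    street_number: int = Field(index=True)
+    unit: Optional[str] = Field(index=False)  # optional, so left unindexed
+    street_name: str = Field(index=True)
+    city: str = Field(index=True)
+    state: str = Field(index=True)
+    postal_code: str = Field(index=True)
+    country: str = Field(index=True, default="United Kingdom")
+
+
+class Person(JsonModel):
+    # Stored as a JSON document; the primary key (a ULID) is generated for us
+    first_name: str = Field(index=True)
+    last_name: str = Field(index=True)
+    age: PositiveInt = Field(index=True)
+    personal_statement: str = Field(index=True, full_text_search=True)
+    skills: List[str] = Field(index=True)
+    address: Address
+```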
+ +## Getting Started + +### Requirements + +To run this application you'll need: + +* [git](https://git-scm.com/download) - to clone the repo to your machine. +* [Python 3.9 or higher](https://www.python.org/downloads/). +* A [Redis Stack](https://redis.io) database, or Redis with the [Search and Query]({{< relref "/develop/interact/search-and-query/" >}}) and [JSON]({{< relref "/develop/data-types/json/" >}}) features installed. We've provided a `docker-compose.yml` for this. You can also [sign up for a free 30Mb database with Redis Cloud](https://redis.com/try-free/?utm_source=redisio&utm_medium=referral&utm_campaign=2023-09-try_free&utm_content=cu-redis_cloud_users) - be sure to check the Redis Stack option when creating your cloud database. +* [curl](https://curl.se/), or [Postman](https://www.postman.com/) - to send HTTP requests to the application. We'll provide examples using curl in this document. +* Optional: [Redis Insight](https://redis.com/redis-enterprise/redis-insight/), a free data visualization and database management tool for Redis. When downloading Redis Insight, be sure to select version 2.x or use the version that comes with Redis Stack. + +### Get the Source Code + +Clone the repository from GitHub: + +```bash +$ git clone https://github.com/redis-developer/redis-om-python-flask-skeleton-app.git +$ cd redis-om-python-flask-skeleton-app +``` + +### Start a Redis Stack Database, or Configure your Redis Cloud Credentials + +Next, we'll get a Redis Stack database up and running. If you're using Docker: + +```bash +$ docker-compose up -d +Creating network "redis-om-python-flask-skeleton-app_default" with the default driver +Creating redis_om_python_flask_starter ... done +``` + +If you're using Redis Cloud, you'll need the hostname, port number, and password for your database. Use these to set the `REDIS_OM_URL` environment variable like this: + +```bash +$ export REDIS_OM_URL=redis://default:@: +``` + +(This step is not required when working with Docker as the Docker container runs Redis on `localhost` port `6379` with no password, which is the default connection that Redis OM uses.) + +For example if your Redis Cloud database is at port `9139` on host `enterprise.redis.com` and your password is `5uper53cret` then you'd set `REDIS_OM_URL` as follows: + +```bash +$ export REDIS_OM_URL=redis://default:5uper53cret@enterprise.redis.com:9139 +``` + +### Create a Python Virtual Environment and Install the Dependencies + +Create a Python virtual environment, and install the project dependencies which are [Flask](https://pypi.org/project/Flask/), [Requests](https://pypi.org/project/requests/) (used only in the data loader script) and [Redis OM](https://pypi.org/project/redis-om/): + +```bash +$ python3 -m venv venv +$ . ./venv/bin/activate +$ pip install -r requirements.txt +``` + +### Start the Flask Application + +Let's start the Flask application in development mode, so that Flask will restart the server for you each time you save code changes in `app.py`: + +```bash +$ export FLASK_ENV=development +$ flask run +``` + +If all goes well, you should see output similar to this: + +```bash +$ flask run + * Environment: development + * Debug mode: on + * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) + * Restarting with stat + * Debugger is active! + * Debugger PIN: XXX-XXX-XXX +``` + +You're now up and running, and ready to perform CRUD operations on data with Redis, Search and Query, JSON and Redis OM for Python! 
To make sure the server's running, point your browser at `http://127.0.0.1:5000/`, where you can expect to see the application's basic home page: + +![screenshot](./images/python_server_running.png) + +### Load the Sample Data + +We've provided a small amount of sample data (it's in `data/people.json`. The Python script `dataloader.py` loads each person into Redis by posting the data to the application's create a new person endpoint. Run it like this: + +```bash +$ python dataloader.py +Created person Robert McDonald with ID 01FX8RMR7NRS45PBT3XP9KNAZH +Created person Kareem Khan with ID 01FX8RMR7T60ANQTS4P9NKPKX8 +Created person Fernando Ortega with ID 01FX8RMR7YB283BPZ88HAG066P +Created person Noor Vasan with ID 01FX8RMR82D091TC37B45RCWY3 +Created person Dan Harris with ID 01FX8RMR8545RWW4DYCE5MSZA1 +``` + +Make sure to take a copy of the output of the data loader, as your IDs will differ from those used in the tutorial. To follow along, substitute your IDs for the ones shown above. e.g. whenever we are working with Kareem Khan, change `01FX8RMR7T60ANQTS4P9NKPKX8` for the ID that your data loader assigned to Kareem in your Redis database. + +### Problems? + +If the Flask server fails to start, take a look at its output. If you see log entries similar to this: + +```py +raise ConnectionError(self._error_message(e)) +redis.exceptions.ConnectionError: Error 61 connecting to localhost:6379. Connection refused. +``` + +then you need to start the Redis Docker container if using Docker, or set the `REDIS_OM_URL` environment variable if using Redis Cloud. + +If you've set the `REDIS_OM_URL` environment variable, and the code errors with something like this on startup: + +```py +raise ConnectionError(self._error_message(e)) +redis.exceptions.ConnectionError: Error 8 connecting to enterprise.redis.com:9139. nodename nor servname provided, or not known. +``` + +then you'll need to check that you used the correct hostname, port, password and format when setting `REDIS_OM_URL`. + +If the data loader fails to post the sample data into the application, make sure that the Flask application is running **before** running the data loader. + +## Create, Read, Update and Delete Data + +Let's create and manipulate some instances of our data model in Redis. Here we'll look at how to call the Flask API with curl (you could also use Postman), how the code works, and how the data's stored in Redis. + +### Building a Person Model with Redis OM + +Redis OM allows us to model entities using Python classes, and the [Pydantic](https://pypi.org/project/pydantic/) framework. Our person model is contained in the file `person.py`. Here's some notes about how it works: + +* We declare a class `Person` which extends a Redis OM class `JsonModel`. This tells Redis OM that we want to store these entities in Redis as JSON documents. +* We then declare each field in our model, specifying the data type and whether or not we want to index on that field. For example, here's the `age` field, which we've declared as a positive integer that we want to index on: + +```py +age: PositiveInt = Field(index=True) +``` + +* The `skills` field is a list of strings, declared thus: + +```py +skills: List[str] = Field(index=True) +``` + +* For the `personal_statement` field, we don't want to index on the field's value, as it's a free text sentence rather than a single word or digit. 
For this, we'll tell Redis OM that we want to be able to perform full text searches on the values: + +```py +personal_statement: str = Field(index=True, full_text_search=True) +``` + +* `address` works differently from the other fields. Note that in our JSON representation of the model, address is an object rather than a string or numerical field. With Redis OM, this is modeled as a second class, which extends the Redis OM `EmbeddedJsonModel` class: + +```py +class Address(EmbeddedJsonModel): + # field definitions... +``` + +* Fields in an `EmbeddedJsonModel` are defined in the same way, so our class contains a field definition for each data item in the address. + +* Not every field in our JSON is present in every address, Redis OM allows us to declare a field as optional so long as we don't index it: + +```py +unit: Optional[str] = Field(index=False) +``` + +* We can also set a default value for a field... let's say country should be "United Kingdom" unless otherwise specified: + +```py +country: str = Field(index=True, default="United Kingdom") +``` + +* Finally, to add the embedded address object to our Person model, we declare a field of type `Address` in the Person class: + +```py +address: Address +``` + +### Adding New People + +The function `create_person` in `app.py` handles the creation of a new person in Redis. It expects a JSON object that adheres to our Person model's schema. The code to then create a new Person object with that data and save it in Redis is simple: + +```py + new_person = Person(**request.json) + new_person.save() + return new_person.pk +``` + +When a new Person instance is created, Redis OM assigns it a unique ULID primary key, which we can access as `.pk`. We return that to the caller, so that they know the ID of the object they just created. + +Persisting the object to Redis is then simply a matter of calling `.save()` on it. + +Try it out... with the server running, add a new person using curl: + +```bash +curl --location --request POST 'http://127.0.0.1:5000/person/new' \ +--header 'Content-Type: application/json' \ +--data-raw '{ + "first_name": "Joanne", + "last_name": "Peel", + "age": 36, + "personal_statement": "Music is my life, I love gigging and playing with my band.", + "address": { + "street_number": 56, + "unit": "4A", + "street_name": "The Rushes", + "city": "Birmingham", + "state": "West Midlands", + "postal_code": "B91 6HG", + "country": "United Kingdom" + }, + "skills": [ + "synths", + "vocals", + "guitar" + ] +}' +``` + +Running the above curl command will return the unique ULID ID assigned to the newly created person. For example `01FX8SSSDN7PT9T3N0JZZA758G`. + +### Examining the data in Redis + +Let's take a look at what we just saved in Redis. Using Redis Insight or redis-cli, connect to the database and look at the value stored at key `:person.Person:01FX8SSSDN7PT9T3N0JZZA758G`. This is stored as a JSON document in Redis, so if using redis-cli you'll need the following command: + +```bash +$ redis-cli +127.0.0.1:6379> json.get :person.Person:01FX8SSSDN7PT9T3N0JZZA758G +``` + +If you're using Redis Insight, the browser will render the key value for you when you click on the key name: + +![Data in Redis Insight](./images/python_insight_explore_person.png) + +When storing data as JSON in Redis, we can update and retrieve the whole document, or just parts of it. 
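+
+To update just one field in place, you can use `JSON.SET` with a JSONPath expression. The command below is a minimal sketch that bumps the person's age; it uses the key shown above, so substitute the ID that your own database generated:
+
+```bash
+$ redis-cli
+127.0.0.1:6379> json.set :person.Person:01FX8SSSDN7PT9T3N0JZZA758G $.age 37
+OK
+```
+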
For example, to retrieve only the person's address and first skill, use the following command (Redis Insight users should use the built in redis-cli for this): + +```bash +$ redis-cli +127.0.0.1:6379> json.get :person.Person:01FX8SSSDN7PT9T3N0JZZA758G $.address $.skills[0] +"{\"$.skills[0]\":[\"synths\"],\"$.address\":[{\"pk\":\"01FX8SSSDNRDSRB3HMVH00NQTT\",\"street_number\":56,\"unit\":\"4A\",\"street_name\":\"The Rushes\",\"city\":\"Birmingham\",\"state\":\"West Midlands\",\"postal_code\":\"B91 6HG\",\"country\":\"United Kingdom\"}]}" +``` + +For more information on the JSON Path syntax used to query JSON documents in Redis, see the [documentation]({{}}). + +### Find a Person by ID + +If we know a person's ID, we can retrieve their data. The function `find_by_id` in `app.py` receives an ID as its parameter, and asks Redis OM to retrieve and populate a Person object using the ID and the Person `.get` class method: + +```py + try: + person = Person.get(id) + return person.dict() + except NotFoundError: + return {} +``` + +The `.dict()` method converts our Person object to a Python dictionary that Flask then returns to the caller. + +Note that if there is no Person with the supplied ID in Redis, `get` will throw a `NotFoundError`. + +Try this out with curl, substituting `01FX8SSSDN7PT9T3N0JZZA758G` for the ID of a person that you just created in your database: + +```bash +curl --location --request GET 'http://localhost:5000/person/byid/01FX8SSSDN7PT9T3N0JZZA758G' +``` + +The server responds with a JSON object containing the user's data: + +```json +{ + "address": { + "city": "Birmingham", + "country": "United Kingdom", + "pk": "01FX8SSSDNRDSRB3HMVH00NQTT", + "postal_code": "B91 6HG", + "state": "West Midlands", + "street_name": "The Rushes", + "street_number": 56, + "unit": null + }, + "age": 36, + "first_name": "Joanne", + "last_name": "Peel", + "personal_statement": "Music is my life, I love gigging and playing with my band.", + "pk": "01FX8SSSDN7PT9T3N0JZZA758G", + "skills": [ + "synths", + "vocals", + "guitar" + ] +} +``` + +### Find People with Matching First and Last Name + +Let's find all the people who have a given first and last name... This is handled by the function `find_by_name` in `app.py`. + +Here, we're using Person's `find` class method that's provided by Redis OM. We pass it a search query, specifying that we want to find people whose `first_name` field contains the value of the `first_name` parameter passed to `find_by_name` AND whose `last_name` field contains the value of the `last_name` parameter: + +```py + people = Person.find( + (Person.first_name == first_name) & + (Person.last_name == last_name) + ).all() +``` + +`.all()` tells Redis OM that we want to retrieve all matching people. + +Try this out with curl as follows: + +```bash +curl --location --request GET 'http://127.0.0.1:5000/people/byname/Kareem/Khan' +``` + +**Note:** First and last name are case sensitive. 
+ +The server responds with an object containing `results`, an array of matches: + +```json +{ + "results": [ + { + "address": { + "city": "Sheffield", + "country": "United Kingdom", + "pk": "01FX8RMR7THMGA84RH8ZRQRRP9", + "postal_code": "S1 5RE", + "state": "South Yorkshire", + "street_name": "The Beltway", + "street_number": 1, + "unit": "A" + }, + "age": 27, + "first_name": "Kareem", + "last_name": "Khan", + "personal_statement":"I'm Kareem, a multi-instrumentalist and singer looking to join a new rock band.", + "pk":"01FX8RMR7T60ANQTS4P9NKPKX8", + "skills": [ + "drums", + "guitar", + "synths" + ] + } + ] +} +``` + +### Find People within a Given Age Range + +It's useful to be able to find people that fall into a given age range... the function `find_in_age_range` in `app.py` handles this as follows... + +We'll again use Person's `find` class method, this time passing it a minimum and maximum age, specifying that we want results where the `age` field is between those values only: + +```py + people = Person.find( + (Person.age >= min_age) & + (Person.age <= max_age) + ).sort_by("age").all() +``` + +Note that we can also use `.sort_by` to specify which field we want our results sorted by. + +Let's find everyone between 30 and 47 years old, sorted by age: + +```bash +curl --location --request GET 'http://127.0.0.1:5000/people/byage/30/47' +``` + +This returns a `results` object containing an array of matches: + +```json +{ + "results": [ + { + "address": { + "city": "Sheffield", + "country": "United Kingdom", + "pk": "01FX8RMR7NW221STN6NVRDPEDT", + "postal_code": "S12 2MX", + "state": "South Yorkshire", + "street_name": "Main Street", + "street_number": 9, + "unit": null + }, + "age": 35, + "first_name": "Robert", + "last_name": "McDonald", + "personal_statement": "My name is Robert, I love meeting new people and enjoy music, coding and walking my dog.", + "pk": "01FX8RMR7NRS45PBT3XP9KNAZH", + "skills": [ + "guitar", + "piano", + "trombone" + ] + }, + { + "address": { + "city": "Birmingham", + "country": "United Kingdom", + "pk": "01FX8SSSDNRDSRB3HMVH00NQTT", + "postal_code": "B91 6HG", + "state": "West Midlands", + "street_name": "The Rushes", + "street_number": 56, + "unit": null + }, + "age": 36, + "first_name": "Joanne", + "last_name": "Peel", + "personal_statement": "Music is my life, I love gigging and playing with my band.", + "pk": "01FX8SSSDN7PT9T3N0JZZA758G", + "skills": [ + "synths", + "vocals", + "guitar" + ] + }, + { + "address": { + "city": "Nottingham", + "country": "United Kingdom", + "pk": "01FX8RMR82DDJ90CW8D1GM68YZ", + "postal_code": "NG1 1AA", + "state": "Nottinghamshire", + "street_name": "Broadway", + "street_number": 12, + "unit": "A-1" + }, + "age": 37, + "first_name": "Noor", + "last_name": "Vasan", + "personal_statement": "I sing and play the guitar, I enjoy touring and meeting new people on the road.", + "pk": "01FX8RMR82D091TC37B45RCWY3", + "skills": [ + "vocals", + "guitar" + ] + }, + { + "address": { + "city": "San Diego", + "country": "United States", + "pk": "01FX8RMR7YCDAVSWBMWCH2B07G", + "postal_code": "92102", + "state": "California", + "street_name": "C Street", + "street_number": 1299, + "unit": null + }, + "age": 43, + "first_name": "Fernando", + "last_name": "Ortega", + "personal_statement": "I'm in a really cool band that plays a lot of cover songs. 
I'm the drummer!", + "pk": "01FX8RMR7YB283BPZ88HAG066P", + "skills": [ + "clarinet", + "oboe", + "drums" + ] + } + ] +} +``` + +### Find People in a Given City with a Specific Skill + +Now, we'll try a slightly different sort of query. We want to find all of the people that live in a given city AND who also have a certain skill. This requires a search over both the `city` field which is a string, and the `skills` field, which is an array of strings. + +Essentially we want to say "Find me all the people whose city is `city` AND whose skills array CONTAINS `desired_skill`", where `city` and `desired_skill` are the parameters to the `find_matching_skill` function in `app.py`. Here's the code for that: + +```py + people = Person.find( + (Person.skills << desired_skill) & + (Person.address.city == city) + ).all() +``` + +The `<<` operator here is used to indicate "in" or "contains". + +Let's find all the guitar players in Sheffield: + +```bash +curl --location --request GET 'http://127.0.0.1:5000/people/byskill/guitar/Sheffield' +``` + +**Note:** `Sheffield` is case sensitive. + +The server returns a `results` array containing matching people: + +```json +{ + "results": [ + { + "address": { + "city": "Sheffield", + "country": "United Kingdom", + "pk": "01FX8RMR7THMGA84RH8ZRQRRP9", + "postal_code": "S1 5RE", + "state": "South Yorkshire", + "street_name": "The Beltway", + "street_number": 1, + "unit": "A" + }, + "age": 28, + "first_name": "Kareem", + "last_name": "Khan", + "personal_statement": "I'm Kareem, a multi-instrumentalist and singer looking to join a new rock band.", + "pk": "01FX8RMR7T60ANQTS4P9NKPKX8", + "skills": [ + "drums", + "guitar", + "synths" + ] + }, + { + "address": { + "city": "Sheffield", + "country": "United Kingdom", + "pk": "01FX8RMR7NW221STN6NVRDPEDT", + "postal_code": "S12 2MX", + "state": "South Yorkshire", + "street_name": "Main Street", + "street_number": 9, + "unit": null + }, + "age": 35, + "first_name": "Robert", + "last_name": "McDonald", + "personal_statement": "My name is Robert, I love meeting new people and enjoy music, coding and walking my dog.", + "pk": "01FX8RMR7NRS45PBT3XP9KNAZH", + "skills": [ + "guitar", + "piano", + "trombone" + ] + } + ] +} +``` + +### Find People using Full Text Search on their Personal Statements + +Each person has a `personal_statement` field, which is a free text string containing a couple of sentences about them. We chose to index this in a way that makes it full text searchable, so let's see how to use this now. The code for this is in the function `find_matching_statements` in `app.py`. + +To search for people who have the value of the parameter `search_term` in their `personal_statement` field, we use the `%` operator: + +```py + Person.find(Person.personal_statement % search_term).all() +``` + +Let's find everyone who talks about "play" in their personal statement. + +```bash +curl --location --request GET 'http://127.0.0.1:5000/people/bystatement/play' +``` + +The server responds with a `results` array of matching people: + +```json +{ + "results": [ + { + "address": { + "city": "San Diego", + "country": "United States", + "pk": "01FX8RMR7YCDAVSWBMWCH2B07G", + "postal_code": "92102", + "state": "California", + "street_name": "C Street", + "street_number": 1299, + "unit": null + }, + "age": 43, + "first_name": "Fernando", + "last_name": "Ortega", + "personal_statement": "I'm in a really cool band that plays a lot of cover songs. 
I'm the drummer!", + "pk": "01FX8RMR7YB283BPZ88HAG066P", + "skills": [ + "clarinet", + "oboe", + "drums" + ] + }, { + "address": { + "city": "Nottingham", + "country": "United Kingdom", + "pk": "01FX8RMR82DDJ90CW8D1GM68YZ", + "postal_code": "NG1 1AA", + "state": "Nottinghamshire", + "street_name": "Broadway", + "street_number": 12, + "unit": "A-1" + }, + "age": 37, + "first_name": "Noor", + "last_name": "Vasan", + "personal_statement": "I sing and play the guitar, I enjoy touring and meeting new people on the road.", + "pk": "01FX8RMR82D091TC37B45RCWY3", + "skills": [ + "vocals", + "guitar" + ] + }, + { + "address": { + "city": "Birmingham", + "country": "United Kingdom", + "pk": "01FX8SSSDNRDSRB3HMVH00NQTT", + "postal_code": "B91 6HG", + "state": "West Midlands", + "street_name": "The Rushes", + "street_number": 56, + "unit": null + }, + "age": 36, + "first_name": "Joanne", + "last_name": "Peel", + "personal_statement": "Music is my life, I love gigging and playing with my band.", + "pk": "01FX8SSSDN7PT9T3N0JZZA758G", + "skills": [ + "synths", + "vocals", + "guitar" + ] + } + ] +} +``` + +Note that we get results including matches for "play", "plays" and "playing". + +### Update a Person's Age + +As well as retrieving information from Redis, we'll also want to update a Person's data from time to time. Let's see how to do that with Redis OM for Python. + +The function `update_age` in `app.py` accepts two parameters: `id` and `new_age`. Using these, we first retrieve the person's data from Redis and create a new object with it: + +```py + try: + person = Person.get(id) + + except NotFoundError: + return "Bad request", 400 +``` + +Assuming we find the person, let's update their age and save the data back to Redis: + +```py + person.age = new_age + person.save() +``` + +Let's change Kareem Khan's age from 27 to 28: + +```bash +curl --location --request POST 'http://127.0.0.1:5000/person/01FX8RMR7T60ANQTS4P9NKPKX8/age/28' +``` + +The server responds with `ok`. + +### Delete a Person + +If we know a person's ID, we can delete them from Redis without first having to load their data into a Person object. In the function `delete_person` in `app.py`, we call the `delete` class method on the Person class to do this: + +```py + Person.delete(id) +``` + +Let's delete Dan Harris, the person with ID `01FX8RMR8545RWW4DYCE5MSZA1`: + +```bash +curl --location --request POST 'http://127.0.0.1:5000/person/01FX8RMR8545RWW4DYCE5MSZA1/delete' +``` + +The server responds with an `ok` response regardless of whether the ID provided existed in Redis. + +### Setting an Expiry Time for a Person + +This is an example of how to run arbitrary Redis commands against instances of a model saved in Redis. Let's see how we can set the time to live (TTL) on a person, so that Redis will expire the JSON document after a configurable number of seconds have passed. + +The function `expire_by_id` in `app.py` handles this as follows. It takes two parameters: `id` - the ID of a person to expire, and `seconds` - the number of seconds in the future to expire the person after. This requires us to run the Redis [`EXPIRE`]({{< relref "/commands/expire" >}}) command against the person's key. 
To do this, we need to access the Redis connection from the `Person` model like so: + +```py + person_to_expire = Person.get(id) + Person.db().expire(person_to_expire.key(), seconds) +``` + +Let's set the person with ID `01FX8RMR82D091TC37B45RCWY3` to expire in 600 seconds: + +```bash +curl --location --request POST 'http://localhost:5000/person/01FX8RMR82D091TC37B45RCWY3/expire/600' +``` + +Using `redis-cli`, you can check that the person now has a TTL set with the Redis `expire` command: + +```bash +127.0.0.1:6379> ttl :person.Person:01FX8RMR82D091TC37B45RCWY3 +(integer) 584 +``` + +This shows that Redis will expire the key 584 seconds from now. + +You can use the `.db()` function on your model class to get at the underlying redis-py connection whenever you want to run lower level Redis commands. For more details, see the [redis-py documentation](https://redis-py.readthedocs.io/en/stable/). + +## Shutting Down Redis (Docker) + +If you're using Docker, and want to shut down the Redis container when you are finished with the application, use `docker-compose down`: + +```bash +$ docker-compose down +Stopping redis_om_python_flask_starter ... done +Removing redis_om_python_flask_starter ... done +Removing network redis-om-python-flask-skeleton-app_default +``` +--- +LinkTitle: Get started +Title: Get started with Terraform +alwaysopen: false +categories: +- docs +- integrate +- rc +description: Shows how to install the Redis Cloud provider and create a subscription. +group: provisioning +headerRange: '[1-3]' +summary: The Redis Cloud Terraform provider allows you to provision and manage Redis + Cloud resources. +toc: 'true' +type: integration +weight: $weight +--- + +Here, you'll learn how to use the [Redis Cloud Terraform Provider]({{< relref "/integrate/terraform-provider-for-redis-cloud/" >}}) to create a subscription and a database. + +## Prerequisites + +1. [Install Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli). + +1. [Create a Redis Cloud account]({{< relref "/operate/rc/rc-quickstart#create-an-account" >}}) if you do not have one already. + +1. [Enable the Redis Cloud API]({{< relref "/operate/rc/api/get-started/enable-the-api" >}}). + +1. Get your Redis Cloud [API keys]({{< relref "/operate/rc/api/get-started/manage-api-keys" >}}). Set them to the following environment variables: + + - Set `REDISCLOUD_ACCESS_KEY` to your API account key. + - Set `REDISCLOUD_SECRET_KEY` to your API user key. + +1. Set a [payment method]({{< relref "/operate/rc/billing-and-payments#add-payment-method" >}}). + +## Install the Redis Cloud provider + +1. Create a file to contain the Terraform configuration called `main.tf`. + +1. Go to the [Redis Cloud Terraform Registry](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/). + +1. Select **Use Provider** and copy the Terraform code located there. Paste the code into `main.tf` and save the file. + + ```text + provider "rediscloud" { + } + + # Example resource configuration + resource "rediscloud_subscription" "example" { + # ... + } + ``` + +1. Run `terraform init`. + +## Create a Redis Cloud subscription with Terraform + +In your Terraform configuration file, you can add resources and data sources to plan and create subscriptions and databases. See the [Redis Cloud Terraform Registry documentation](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs) for more info about the resources and data sources you can use as part of the Redis Cloud provider. 
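+
+As a reminder, the provider authenticates using the `REDISCLOUD_ACCESS_KEY` and `REDISCLOUD_SECRET_KEY` environment variables you set in the prerequisites. If you prefer, the provider block can also take the credentials explicitly; the sketch below uses placeholder values, and keeping keys in environment variables rather than in source control is generally the safer option:
+
+```text
+provider "rediscloud" {
+  api_key    = "<your API account key>"
+  secret_key = "<your API user key>"
+}
+```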
+ +The steps in this section show you how to plan and create a Redis Cloud Pro subscription with one database. + +1. Use the [`rediscloud_payment_method`](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_payment_method) data source to get the payment method ID. + + ```text + # Get credit card details + data "rediscloud_payment_method" "card" { + card_type = "" + last_four_numbers = "" + } + ``` + + Example: + + ```text + data "rediscloud_payment_method" "card" { + card_type = "Visa" + last_four_numbers = "5625" + } + ``` + +1. Define a [`rediscloud_subscription`](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_subscription) resource to create the subscription. + + ```text + # Create a subscription + resource "rediscloud_subscription" "subscription-resource" { + name = "subscription-name" + payment_method_id = data.rediscloud_payment_method.card.id # If you want to pay with a marketplace account, replace this line with payment_method = 'marketplace'. + memory_storage = "ram" + + # Specify the cloud provider information here + cloud_provider { + provider = "" + region { + region = "" + networking_deployment_cidr = "" + } + } + + #Define the average database specification for databases in the subscription + creation_plan { + memory_limit_in_gb = 2 + quantity = 1 + replication = true + throughput_measurement_by = "operations-per-second" + throughput_measurement_value = 20000 + } + } + ``` + + Example: + + ```text + resource "rediscloud_subscription" "subscription-resource" { + name = "redis-docs-sub" + payment_method_id = data.rediscloud_payment_method.card.id # If you want to pay with a marketplace account, replace this line with payment_method = 'marketplace'. + memory_storage = "ram" + + cloud_provider { + provider = "GCP" + region { + region = "us-west1" + networking_deployment_cidr = "192.168.0.0/24" + } + } + + creation_plan { + memory_limit_in_gb = 2 + quantity = 1 + replication = true + throughput_measurement_by = "operations-per-second" + throughput_measurement_value = 20000 + modules = ["RedisJSON"] + } + } + ``` + +1. Define a [`rediscloud_subscription_database`](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_subscription_database) resource to create a database. + + ```text + # Create a Database + resource "rediscloud_subscription_database" "database-resource" { + subscription_id = rediscloud_subscription.subscription-resource.id + name = "database-name" + memory_limit_in_gb = 2 + data_persistence = "aof-every-write" + throughput_measurement_by = "operations-per-second" + throughput_measurement_value = 20000 + replication = true + + alert { + name = "dataset-size" + value = 40 + } + depends_on = [rediscloud_subscription.subscription-resource] + + } + ``` + + Example: + + ```text + resource "rediscloud_subscription_database" "database-resource" { + subscription_id = rediscloud_subscription.subscription-resource.id + name = "redis-docs-db" + memory_limit_in_gb = 2 + data_persistence = "aof-every-write" + throughput_measurement_by = "operations-per-second" + throughput_measurement_value = 20000 + replication = true + + modules = [ + { + name = "RedisJSON" + } + ] + + alert { + name = "dataset-size" + value = 40 + } + depends_on = [rediscloud_subscription.subscription-resource] + + } + ``` + +2. Run `terraform plan` to check for any syntax errors. + + ```sh + $ terraform plan + data.rediscloud_payment_method.card: Reading... 
+ data.rediscloud_payment_method.card: Read complete after 1s [id=8859] + + Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following + symbols: + + create + + Terraform will perform the following actions: + + # rediscloud_subscription.subscription-resource will be created + + resource "rediscloud_subscription" "subscription-resource" { + [...] + } + + # rediscloud_subscription_database.database-resource will be created + + resource "rediscloud_subscription_database" "database-resource" { + [...] + } + + Plan: 2 to add, 0 to change, 0 to destroy. + ``` + +3. Run `terraform apply` to apply the changes and enter `yes` to confirm when prompted. + + This will take some time. You will see messages in your terminal while the subscription and database are being created: + + ```text + rediscloud_subscription.subscription-resource: Creating... + rediscloud_subscription.subscription-resource: Still creating... [10s elapsed] + rediscloud_subscription.subscription-resource: Still creating... [20s elapsed] + rediscloud_subscription.subscription-resource: Still creating... [30s elapsed] + ``` + + When provisioning is complete, you will see a message in your terminal: + + ```text + Apply complete! Resources: 2 added, 0 changed, 0 destroyed. + ``` + + View the [Redis Cloud console](https://cloud.redis.io/) to verify your subscription and database creation. + +4. If you want to remove these sample resources, run `terraform destroy`. + +## More info + +- [Redis Cloud Terraform Registry](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs) +- [Terraform documentation](https://developer.hashicorp.com/terraform/docs) +- [Terraform configuration syntax](https://developer.hashicorp.com/terraform/language/syntax/configuration) +--- +LinkTitle: Terraform provider for Redis Cloud +Title: Terraform provider for Redis Cloud +alwaysopen: false +categories: +- docs +- integrate +- rc +description: null +group: provisioning +headerRange: '[1-3]' +summary: The Redis Cloud Terraform provider allows you to provision and manage Redis + Cloud resources. +toc: 'true' +type: integration +weight: 4 +hideListLinks: true +--- + +[Terraform](https://developer.hashicorp.com/terraform) is an open source automation tool developed by Hashicorp that allows you to easily provision infrastructure as code. + +Redis develops and maintains a [Terraform provider for Redis Cloud](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest). The Redis Cloud Terraform provider allows many of the same actions as found in the [Redis Cloud API]({{< relref "/operate/rc/api" >}}). + +See [Get started with Terraform]({{< relref "/integrate/terraform-provider-for-redis-cloud/get-started" >}}) for an example of how to use the Terraform provider. + +## Data sources and Resources + +The Terraform provider represents API actions as data sources and resources. Data sources are read-only and allow you to get information, while resources allow you to create and manage infrastructure. 
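+
+As a rough illustration of the difference, a `data` block reads information about something that already exists in your account, while a `resource` block declares something for Terraform to create and manage. The attribute values below are placeholders:
+
+```text
+# Data source: look up an existing payment method (read-only).
+data "rediscloud_payment_method" "card" {
+  card_type         = "Visa"
+  last_four_numbers = "1234"
+}
+
+# Resource: a subscription that Terraform creates and manages.
+resource "rediscloud_subscription" "example" {
+  name              = "example-subscription"
+  payment_method_id = data.rediscloud_payment_method.card.id
+  # ...
+}
+```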
+ +The Redis Cloud Terraform provider allows for the following data sources: + +- Redis Cloud Pro: + - [Subscriptions](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_subscription) + - [Databases](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_database) + - [Database capabilities](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_database_modules) + - [VPC peering connections](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_subscription_peerings) + - [Cloud accounts](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_cloud_account) + - [Supported persistence options](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_data_persistence) + - [AWS Transit Gateways](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_transit_gateway) + - Google Cloud Private Service Connect [Services](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_private_service_connect) and [Endpoints](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_private_service_connect_endpoints) +- Redis Cloud Essentials: + - [Plans](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_essentials_plan) + - [Subscriptions](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_essentials_subscription) + - [Databases](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_essentials_database) +- Active-Active: + - [Subscriptions](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_active_active_subscription) + - [Databases](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_active_active_subscription_database) + - [AWS Transit Gateways](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_active_active_transit_gateway) + - Google Cloud Private Service Connect [services](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_active_active_private_service_connect) and [endpoints](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_active_active_private_service_connect_endpoints) +- [Payment methods](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_payment_method) +- [Supported cloud provider regions](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_regions) +- ACL [roles](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_acl_role), [rules](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_acl_rule), and [users](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/data-sources/rediscloud_acl_user) + +It also allows you to create and manage the following resources: + +- Redis Cloud Pro: + - [Subscriptions](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_subscription) + - 
[Databases](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_subscription_database) + - **NOTE**: Upgrade your Terraform provider to version 1.8.1 to create databases with Search and Query. + - [VPC peering connections](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_subscription_peering) + - [Cloud accounts](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_cloud_account) + - [AWS Transit Gateway attachments](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_transit_gateway_attachment) + - Google Cloud Private Service Connect [connections](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_private_service_connect), [endpoints](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_private_service_connect_endpoint) and [endpoint acceptors](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_private_service_connect_endpoint_accepter) +- Redis Cloud Essentials: + - [Subscriptions](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_essentials_subscription) + - [Databases](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_essentials_database) +- Active-Active: + - [Subscriptions](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_active_active_subscription) + - [Databases](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_active_active_subscription_database) + - [Regions](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_active_active_subscription_regions) + - [VPC peering connections](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_active_active_subscription_peering) + - [AWS Transit Gateway attachments](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_active_active_transit_gateway_attachment) + - Google Cloud Private Service Connect [connections](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_active_active_private_service_connect), [endpoints](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_active_active_private_service_connect_endpoint) and [endpoint acceptors](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_active_active_private_service_connect_endpoint_accepter) +- ACL [rules](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_acl_rule), [roles](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_acl_role), and [users](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs/resources/rediscloud_acl_user) + + +## More info + +- [Get started with Terraform]({{< relref "/integrate/terraform-provider-for-redis-cloud/get-started" >}}) +- [Redis Cloud Terraform Registry](https://registry.terraform.io/providers/RedisLabs/rediscloud/latest/docs) +- [Terraform documentation](https://developer.hashicorp.com/terraform/docs) +- [Terraform configuration syntax](https://developer.hashicorp.com/terraform/language/syntax/configuration)--- +LinkTitle: RedisOM for .NET 
+Title: RedisOM for .NET +categories: +- docs +- integrate +- oss +- rs +- rc +description: Learn how to build with Redis Stack and .NET +group: library +stack: true +summary: Redis OM for .NET is an object-mapping library for Redis. +title: Redis OM .NET +type: integration +weight: 9 +--- + +[Redis OM .NET](https://github.com/redis/redis-om-dotnet) is a purpose-built library for handling documents in Redis Stack. In this tutorial, we'll build a simple ASP.NET Core Web-API app for performing CRUD operations on a simple Person & Address model, and we'll accomplish all of this with Redis OM .NET. + +## Prerequisites + +* [.NET 6 SDK](https://dotnet.microsoft.com/en-us/download/dotnet/6.0) +* Any IDE for writing .NET (Visual Studio, Rider, Visual Studio Code). +* RediSearch must be installed as part of your Redis Stack configuration. +* Optional: Docker Desktop for running redis-stack in docker for local testing. + +## Skip to the code + +If you want to skip this tutorial and just jump straight into code, all the source code is available in [GitHub](https://github.com/redis-developer/redis-om-dotnet-skeleton-app) + +## Run Redis Stack + +There are a variety of ways to run Redis Stack. One way is to use the docker image: + +``` +docker run -d -p 6379:6379 -p 8001:8001 redis/redis-stack +``` + +## Create the project + +To create the project, just run: + +```bash +dotnet new webapi -n Redis.OM.Skeleton --no-https --kestrelHttpPort 5000 +``` + +Then open the `Redis.OM.Skeleton.csproj` file in your IDE of choice. + +## Configure the app + +Add a `REDIS_CONNECTION_STRING` field to your `appsettings.json` file to configure the application. Set that connection string to be the URI of your Redis instance. If using the docker command mentioned earlier, your connection string will be `redis://localhost:6379`. + +### Connection string specification + +The specification for Redis URIs is located [here](https://www.iana.org/assignments/uri-schemes/prov/redis). You can use `:password@host:port` or `default:password@host:port` for connection strings that do not include `username`. + +## Create the model + +Make sure to add the `Redis.OM` package to your project. This package makes it easy to create models and query your Redis domain objects. + +```bash +dotnet add package Redis.OM +``` + +Now it's time to create the `Person`/`Address` model that the app will use for storing/retrieving people. Create a new directory called `Model` and add the files `Address.cs` and `Person.cs` to it. In `Address.cs`, add the following: + +```csharp +using Redis.OM.Modeling; + +namespace Redis.OM.Skeleton.Model; + +public class Address +{ + [Indexed] + public int? StreetNumber { get; set; } + + [Indexed] + public string? Unit { get; set; } + + [Searchable] + public string? StreetName { get; set; } + + [Indexed] + public string? City { get; set; } + + [Indexed] + public string? State { get; set; } + + [Indexed] + public string? PostalCode { get; set; } + + [Indexed] + public string? Country { get; set; } + + [Indexed] + public GeoLoc Location { get; set; } +} +``` + +Here, you'll notice that except `StreetName`, marked as `Searchable`, all the fields are decorated with the `Indexed` attribute. These attributes (`Searchable` and `Indexed`) tell Redis OM that you want to be able to use those fields in queries when querying your documents in Redis Stack. `Address` will not be a Document itself, so the top-level class is not decorated with anything; instead, the `Address` model will be embedded in our `Person` model. 
+
+To that end, add the following to `Person.cs`:
+
+```csharp
+using Redis.OM.Modeling;
+
+namespace Redis.OM.Skeleton.Model;
+
+[Document(StorageType = StorageType.Json, Prefixes = new []{"Person"})]
+public class Person
+{
+    [RedisIdField] [Indexed] public string? Id { get; set; }
+
+    [Indexed] public string? FirstName { get; set; }
+
+    [Indexed] public string? LastName { get; set; }
+
+    [Indexed] public int Age { get; set; }
+
+    [Searchable] public string? PersonalStatement { get; set; }
+
+    [Indexed] public string[] Skills { get; set; } = Array.Empty<string>();
+
+    [Indexed(CascadeDepth = 1)] public Address? Address { get; set; }
+
+}
+```
+
+There are a few things to take note of here:
+
+1. `[Document(StorageType = StorageType.Json, Prefixes = new []{"Person"})]` indicates that the data type Redis OM will use to store the document in Redis is JSON, and that the prefix for the keys of the Person class will be `Person`.
+
+2. `[Indexed(CascadeDepth = 1)] Address? Address { get; set; }` is one of two ways you can index an embedded object with Redis OM. This way instructs the index to cascade to the objects in the object graph; a `CascadeDepth` of 1 means that it will traverse just one level, indexing the object as if it were building the index from scratch. The other method uses the `JsonPath` property of the individual indexed fields you want to search on. This more surgical approach limits the size of the index.
+
+3. The `Id` property is marked as a `RedisIdField`. This denotes the field as one that will be used to generate the document's key name when it's stored in Redis.
+
+## Create the Index
+
+With the model built, the next step is to create the index in Redis. The cleanest way to manage this is to spin the index creation out into a hosted service, which will run when the app starts up.
+Create a `HostedServices` directory and add `IndexCreationService.cs` to it. In that file, add the following, which will create the index on startup:
+
+```csharp
+using Redis.OM.Skeleton.Model;
+
+namespace Redis.OM.Skeleton.HostedServices;
+
+public class IndexCreationService : IHostedService
+{
+    private readonly RedisConnectionProvider _provider;
+    public IndexCreationService(RedisConnectionProvider provider)
+    {
+        _provider = provider;
+    }
+
+    public async Task StartAsync(CancellationToken cancellationToken)
+    {
+        await _provider.Connection.CreateIndexAsync(typeof(Person));
+    }
+
+    public Task StopAsync(CancellationToken cancellationToken)
+    {
+        return Task.CompletedTask;
+    }
+}
+```
+
+Next, add the following to `Program.cs` to register the service on startup:
+
+```csharp
+builder.Services.AddHostedService<IndexCreationService>();
+```
+
+## Inject the RedisConnectionProvider
+
+Redis OM uses the `RedisConnectionProvider` class to handle connections to Redis and provides the classes you can use to interact with Redis. To use it, simply inject an instance of the RedisConnectionProvider into your app. In your `Program.cs` file, add:
+
+```csharp
+builder.Services.AddSingleton(new RedisConnectionProvider(builder.Configuration["REDIS_CONNECTION_STRING"]));
+```
+
+This will pull your connection string out of the config and initialize the provider. The provider will now be available to use in your controllers and services.
+
+## Create the PeopleController
+
+The final puzzle piece is to write the actual API controller for our People API. In the `controllers` directory, add the file `PeopleController.cs`; the skeleton of the `PeopleController` class will be:
+
+```csharp
+using Microsoft.AspNetCore.Mvc;
+using Redis.OM.Searching;
+using Redis.OM.Skeleton.Model;
+
+namespace Redis.OM.Skeleton.Controllers;
+
+[ApiController]
+[Route("[controller]")]
+public class PeopleController : ControllerBase
+{
+
+}
+```
+
+### Inject the RedisConnectionProvider
+
+To interact with Redis, inject the RedisConnectionProvider. During this dependency injection, pull out a `RedisCollection<Person>` instance, which provides a fluent interface for querying documents in Redis:
+
+```csharp
+private readonly RedisCollection<Person> _people;
+private readonly RedisConnectionProvider _provider;
+public PeopleController(RedisConnectionProvider provider)
+{
+    _provider = provider;
+    _people = (RedisCollection<Person>)provider.RedisCollection<Person>();
+}
+```
+
+### Add route for creating a Person
+
+The first route to add to the API is a POST request for creating a person. Using the `RedisCollection`, it's as simple as calling `InsertAsync` and passing in the person object:
+
+```csharp
+[HttpPost]
+public async Task<Person> AddPerson([FromBody] Person person)
+{
+    await _people.InsertAsync(person);
+    return person;
+}
+```
+
+### Add route to filter by age
+
+The first filter route to add to the API will let the user filter by a minimum and maximum age. Using the LINQ interface available to the `RedisCollection`, this is a simple operation:
+
+```csharp
+[HttpGet("filterAge")]
+public IList<Person> FilterByAge([FromQuery] int minAge, [FromQuery] int maxAge)
+{
+    return _people.Where(x => x.Age >= minAge && x.Age <= maxAge).ToList();
+}
+```
+
+### Filter by GeoLocation
+
+Redis OM has a `GeoLoc` data structure, an instance of which is indexed by the `Address` model. With the `RedisCollection`, it's possible to find all objects within a radius of a particular position using the `GeoFilter` method along with the field you want to filter on:
+
+```csharp
+[HttpGet("filterGeo")]
+public IList<Person> FilterByGeo([FromQuery] double lon, [FromQuery] double lat, [FromQuery] double radius, [FromQuery] string unit)
+{
+    return _people.GeoFilter(x => x.Address!.Location, lon, lat, radius, Enum.Parse<GeoLocDistanceUnit>(unit)).ToList();
+}
+```
+
+### Filter by exact string
+
+When a string property in your model is marked as `Indexed`, e.g. `FirstName` and `LastName`, Redis OM can perform exact text matches against them. For example, the following two routes, which filter by name and by `PostalCode`, demonstrate exact string matches:
+
+```csharp
+[HttpGet("filterName")]
+public IList<Person> FilterByName([FromQuery] string firstName, [FromQuery] string lastName)
+{
+    return _people.Where(x => x.FirstName == firstName && x.LastName == lastName).ToList();
+}
+
+[HttpGet("postalCode")]
+public IList<Person> FilterByPostalCode([FromQuery] string postalCode)
+{
+    return _people.Where(x => x.Address!.PostalCode == postalCode).ToList();
+}
+```
+
+### Filter with a full-text search
+
+When a property in the model is marked as `Searchable`, like `StreetName` and `PersonalStatement`, you can perform a full-text search against it, as the filters for `PersonalStatement` and `StreetName` show:
+
+```csharp
+[HttpGet("fullText")]
+public IList<Person> FilterByPersonalStatement([FromQuery] string text)
+{
+    return _people.Where(x => x.PersonalStatement == text).ToList();
+}
+
+[HttpGet("streetName")]
+public IList<Person> FilterByStreetName([FromQuery] string streetName)
+{
+    return _people.Where(x => x.Address!.StreetName == streetName).ToList();
+}
+```
+
+### Filter by array membership
+
+When a string array or list is marked as `Indexed`, Redis OM can filter all the records containing a given string using the `Contains` method of the array or list. For example, our `Person` model has a list of skills that you can query by adding the following route:
+
+```csharp
+[HttpGet("skill")]
+public IList<Person> FilterBySkill([FromQuery] string skill)
+{
+    return _people.Where(x => x.Skills.Contains(skill)).ToList();
+}
+```
+
+### Updating a person
+
+Updating a document in Redis Stack with Redis OM can be done by first materializing the person object, making your desired changes, and then calling `Save` on the collection. The collection is responsible for keeping track of updates made to entities materialized in it; therefore, it will track and apply any updates you make. For example, add the following route to update the age of a Person given their Id:
+
+```csharp
+[HttpPatch("updateAge/{id}")]
+public IActionResult UpdateAge([FromRoute] string id, [FromBody] int newAge)
+{
+    foreach (var person in _people.Where(x => x.Id == id))
+    {
+        person.Age = newAge;
+    }
+    _people.Save();
+    return Accepted();
+}
+```
+
+### Delete a person
+
+Deleting a document from Redis can be done with `Unlink`. All that's needed is to call `Unlink`, passing in the key name. Given an id, we can reconstruct the key name using the prefix and the id:
+
+```csharp
+[HttpDelete("{id}")]
+public IActionResult DeletePerson([FromRoute] string id)
+{
+    _provider.Connection.Unlink($"Person:{id}");
+    return NoContent();
+}
+```
+
+## Run the app
+
+All that's left to do now is to run the app and test it. You can do so by running `dotnet run`. The app is exposed on port 5000, and there is a Swagger UI that you can use to play with the API at http://localhost:5000/swagger. There are a couple of scripts, along with some data files, in the [GitHub repo](https://github.com/redis-developer/redis-om-dotnet-skeleton-app/tree/main/data) that insert some people into Redis using the API.
+
+## Viewing data with Redis Insight
+
+You can either install the Redis Insight GUI locally or use the Redis Insight GUI running on http://localhost:8001/ if you started Redis Stack with the Docker image mentioned earlier.
+
+You can view the data by following these steps:
+
+1. Accept the EULA
+
+![Accept EULA](./images/Accept_EULA.png)
+
+2. Click the Add Redis Database button
+
+![Add Redis Database Button](./images/Add_Redis_Database_button.png)
+
+3. Enter your hostname and port name for your redis server.
If you are using the docker image, this is `localhost` and `6379` and give your database an alias + +![Configure Redis Insight Database](./images/Configure_Redis_Insight_Database.png) + +4. Click `Add Redis Database.` + +## Resources + +The source code for this tutorial can be found in [GitHub](https://github.com/redis-developer/redis-om-dotnet-skeleton-app). + +--- +Title: Prometheus metrics v2 preview +alwaysopen: false +categories: +- docs +- integrate +- rs +description: V2 metrics available to Prometheus as of Redis Enterprise Software version 7.8.2. +group: observability +linkTitle: Prometheus metrics v2 +summary: V2 metrics available to Prometheus as of Redis Enterprise Software version 7.8.2. +type: integration +weight: 50 +tocEmbedHeaders: true +--- + +{{}} +While the metrics stream engine is in preview, this document provides only a partial list of v2 metrics. More metrics will be added. +{{}} + +You can [integrate Redis Enterprise Software with Prometheus and Grafana]({{}}) to create dashboards for important metrics. + +The v2 metrics in the following tables are available as of Redis Enterprise Software version 7.8.0. For help transitioning from v1 metrics to v2 PromQL, see [Prometheus v1 metrics and equivalent v2 PromQL]({{}}). + +The v2 scraping endpoint also exposes metrics for `node_exporter` version 1.8.1. For more information, see the [Prometheus node_exporter GitHub repository](https://github.com/prometheus/node_exporter). + +{{}} +--- +Title: Prometheus metrics v1 +alwaysopen: false +categories: +- docs +- integrate +- rs +description: V1 metrics available to Prometheus. +group: observability +linkTitle: Prometheus metrics v1 +summary: You can use Prometheus and Grafana to collect and visualize your Redis Enterprise Software metrics. +type: integration +weight: 48 +tocEmbedHeaders: true +--- + +You can [integrate Redis Enterprise Software with Prometheus and Grafana]({{}}) to create dashboards for important metrics. + +As of Redis Enterprise Software version 7.8.2, v1 metrics are deprecated but still available. For help transitioning from v1 metrics to v2 PromQL, see [Prometheus v1 metrics and equivalent v2 PromQL]({{}}). + +{{}} +--- +Title: Redis Enterprise Software observability and monitoring guidance +alwaysopen: false +categories: +- docs +- integrate +- rs +description: Using monitoring and observability with Redis Enterprise +group: observability +linkTitle: Observability and monitoring +summary: Observe Redis Enterprise resources and database perfomance indicators. +type: integration +weight: 45 +tocEmbedHeaders: true +--- + +{{}} +--- +LinkTitle: Prometheus & Grafana with Redis Software +Title: Prometheus and Grafana with Redis Enterprise Software +alwaysopen: false +categories: +- docs +- integrate +- rs +description: Use Prometheus and Grafana to collect and visualize Redis Enterprise Software metrics. +group: observability +summary: You can use Prometheus and Grafana to collect and visualize your Redis Enterprise + Software metrics. +type: integration +weight: 5 +tocEmbedHeaders: true +--- + +{{}} +--- +Title: Transition from Prometheus v1 to Prometheus v2 +alwaysopen: false +categories: +- docs +- integrate +- rs +description: Transition from v1 metrics to v2 PromQL equivalents. +group: observability +linkTitle: Transition from Prometheus v1 to v2 +summary: Transition from v1 metrics to v2 PromQL equivalents. 
+type: integration +weight: 49 +tocEmbedHeaders: true +--- + +You can [integrate Redis Enterprise Software with Prometheus and Grafana]({{}}) to create dashboards for important metrics. + +As of Redis Enterprise Software version 7.8.2, [PromQL (Prometheus Query Language)](https://prometheus.io/docs/prometheus/latest/querying/basics/) metrics are available. V1 metrics are deprecated but still available. You can use the following tables to transition from v1 metrics to equivalent v2 PromQL. For a list of all available v2 PromQL metrics, see [Prometheus metrics v2]({{}}). + +{{}} +--- +LinkTitle: New Relic with Redis Enterprise +Title: New Relic with Redis Enterprise +alwaysopen: false +categories: +- docs +- integrate +- rs +description: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect New Relic to your Redis Enterprise cluster using + the Redis New Relic Integration. +group: observability +summary: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect New Relic to your Redis Enterprise cluster using + the Redis New Relic Integration. +type: integration +weight: 7 +--- + + +[New Relic](https://newrelic.com/?customer-bypass=true) is used by organizations of all sizes and across a wide range of industries to +enable digital transformation and cloud migration, drive collaboration among development, operations, security and +business teams, accelerate time to market for applications, reduce time to problem resolution, secure applications and +infrastructure, understand user behavior, and track key business metrics. + +The New Relic Integration for Redis Enterprise uses Prometheus remote write functionality to connect Prometheus data +sources to New Relic. This integration enables Redis Enterprise users to export metrics to New Relic for analysis, +and includes Redis-designed dashboards for use in monitoring Redis Enterprise clusters. + +This integration makes it possible to: +- Collect and display metrics not available in the admin console +- Set up automatic alerts for node or cluster events +- Display these metrics alongside data from other systems + +{{< image filename="/images/rs/redis-enterprise-newrelic.png" >}} +## Install Redis' New Relic Integration for Redis Enterprise + +The New Relic Integration for Redis is based on a feature of the Prometheus data source. Prometheus can forward metrics on to +another destination using remote writes. The Prometheus installation must be configured to pull metrics from Redis +Enterprise and write them to New Relic. There are two sections, first the pull from Redis and second the write to New Relic. + +Get metrics from Redis Enterprise: + +```yaml + - job_name: "redis-enterprise" + scrape_interval: 30s + scrape_timeout: 30s + metrics_path: / + scheme: https + tls_config: + insecure_skip_verify: true + static_configs: + # The default Redis Enterprise Prometheus port is 8070. + # Replace REDIS_ENTERPRISE_HOST with your cluster's hostname. + - targets: ["REDIS_ENTERPRISE_HOST:8070"] +``` + +Write them to New Relic: + +```yaml +# Remote write configuration for New Relic. +# - Replace REDIS_ENTERPRISE_SERVICE NAME with any name you'd like to use to refer to this data source. +# - Replace NEW_RELIC_BEARER_TOKEN with the token you generated on the New Relic Administration -> API Keys page. 
+remote_write: +- url: https://metric-api.newrelic.com/prometheus/v1/write?prometheus_server=REDIS_ENTERPRISE_SERVICE_NAME + authorization: + credentials: NEW_RELIC_BEARER_TOKEN +``` + +## View metrics + +The Redis Enterprise Integration for New Relic contains pre-defined dashboards to aid in monitoring your Redis Enterprise deployment. + +The following dashboards are currently available: + +- Cluster: top-level statistics indicating the general health of the cluster +- Database: performance metrics at the database level +- Node +- Shard: low-level details of an individual shard +- Active-Active: replication and performance for geo-replicated clusters +- Proxy: network and command information regarding the proxy +- Proxy Threads: processor usage information regarding the proxy's component threads + +## Monitor metrics + +New Relic dashboards can be filtered using the text area. For example, when viewing a cluster dashboard it is possible to +filter the display to show data for only one cluster by typing 'cluster' in the text area and waiting for the system to +retrieve the relevant data before choosing one of the options in the 'cluster' section. + +Certain types of data do not know the name of the database from which they were drawn. The dashboard should have a list +of database names and ids; use the id value when filtering input to the dashboard. + + +--- +LinkTitle: RedisOM for Node.js +Title: RedisOM for Node.js +categories: +- docs +- integrate +- oss +- rs +- rc +description: Learn how to build with Redis Stack and Node.js +group: library +stack: true +summary: Redis OM for Node.js is an object-mapping library for Redis. +title: Redis OM Node.js +type: integration +weight: 9 +--- + +This tutorial will show you how to build an API using Node.js and Redis Stack. + +We'll be using [Express](https://expressjs.com/) and [Redis OM](https://github.com/redis/redis-om-node) to do this, and we assume that you have a basic understanding of Express. + +The API we'll be building is a simple and relatively RESTful API that reads, writes, and finds data on persons: first name, last name, age, etc. We'll also add a simple location tracking feature just for a bit of extra interest. + +But before we start with the coding, let's start with a description of what Redis OM *is*. + + +## Prerequisites + +Like anything software-related, you need to have some dependencies installed before you can get started: + +- [Node.js 14.8+](https://nodejs.org/en/): In this tutorial, we're using JavaScript's top-level `await` feature which was introduced in Node 14.8. So, make sure you are using that version or later. +- [Redis Stack](/download): You need a version of Redis Stack, either running locally on your machine or [in the cloud](https://redis.com/try-free/?utm_source=redisio&utm_medium=referral&utm_campaign=2023-09-try_free&utm_content=cu-redis_cloud_users). +- [Redis Insight](https://redis.com/redis-enterprise/redis-insight/): We'll use this to look inside Redis and make sure our code is doing what we think it's doing. + + +## Starter code + +We're not going to code this completely from scratch. Instead, we've provided some starter code for you. Go ahead and clone it to a folder of your convenience: + + git clone git@github.com:redis-developer/express-redis-om-workshop.git + +Now that you have the starter code, let's explore it a bit. 
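+
+As a rough sketch (the actual repository may contain a few extra files, such as `package.json`), the layout looks like this:
+
+    express-redis-om-workshop/
+    ├── api.yaml          # OpenAPI definition used by Swagger UI
+    ├── server.js         # the Express app
+    ├── sample.env        # template for your .env configuration
+    ├── persons/          # sample JSON data and load-data.sh
+    ├── om/               # Redis OM code will go here
+    └── routers/          # Express routers will go here
+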
Opening up `server.js` in the root we see that we have a simple Express app that uses [*Dotenv*](https://www.npmjs.com/package/dotenv) for configuration and [Swagger UI Express](https://www.npmjs.com/package/swagger-ui-express) for testing our API: + +{{< highlight javascript >}} +import 'dotenv/config' + +import express from 'express' +import swaggerUi from 'swagger-ui-express' +import YAML from 'yamljs' + +/* create an express app and use JSON */ +const app = new express() +app.use(express.json()) + +/* set up swagger in the root */ +const swaggerDocument = YAML.load('api.yaml') +app.use('/', swaggerUi.serve, swaggerUi.setup(swaggerDocument)) + +/* start the server */ +app.listen(8080) +{{< / highlight >}} + +Alongside this is `api.yaml`, which defines the API we're going to build and provides the information Swagger UI Express needs to render its UI. You don't need to mess with it unless you want to add some additional routes. + +The `persons` folder has some JSON files and a shell script. The JSON files are sample persons—all musicians because fun—that you can load into the API to test it. The shell script—`load-data.sh`—will load all the JSON files into the API using `curl`. + +There are two empty folders, `om` and `routers`. The `om` folder is where all the Redis OM code will go. The `routers` folder will hold code for all of our Express routes. + + +## Configure and run + +The starter code is perfectly runnable if a bit thin. Let's configure and run it to make sure it works before we move on to writing actual code. First, get all the dependencies: + + npm install + +Then, set up a `.env` file in the root that Dotenv can make use of. There's a `sample.env` file in the root that you can copy and modify: + + cp sample.env .env + +The contents of `.env` looks like this: + +{{< highlight bash >}} +# Put your local Redis Stack URL here. Want to run in the +# cloud instead? Sign up at https://redis.com/try-free/. +REDIS_URL=redis://localhost:6379 +{{< / highlight >}} + +There's a good chance this is already correct. However, if you need to change the `REDIS_URL` for your particular environment (e.g., you're running Redis Stack in the cloud), this is the time to do it. Once done, you should be able to run the app: + + npm start + +Navigate to `http://localhost:8080` and check out the client that Swagger UI Express has created. None of it *works* yet because we haven't implemented any of the routes. But, you can try them out and watch them fail! + +The starter code runs. Let's add some Redis OM to it so it actually *does* something! + + +## Setting up a Client + +First things first, let's set up a **client**. The `Client` class is the thing that knows how to talk to Redis on behalf of Redis OM. One option is to put our client in its own file and export it. This ensures that the application has one and only one instance of `Client` and thus only one connection to Redis Stack. Since Redis and JavaScript are both (more or less) single-threaded, this works neatly. + +Let's create our first file. In the `om` folder add a file called `client.js` and add the following code: + +{{< highlight javascript >}} +import { Client } from 'redis-om' + +/* pulls the Redis URL from .env */ +const url = process.env.REDIS_URL + +/* create and open the Redis OM Client */ +const client = await new Client().open(url) + +export default client +{{< / highlight >}} + +> Remember that _top-level await_ stuff we mentioned earlier? There it is! + +Note that we are getting our Redis URL from an environment variable. 
It was put there by Dotenv and read from our `.env` file. If we didn't have the `.env` file or have a `REDIS_URL` property in our `.env` file, this code would gladly read this value from the *actual* environment variables. + +Also note that the `.open()` method conveniently returns `this`. This `this` (can I say *this* again? I just did!) lets us chain the instantiation of the client with the opening of the client. If this isn't to your liking, you could always write it like this: + +{{< highlight javascript >}} +/* create and open the Redis OM Client */ +const client = new Client() +await client.open(url) +{{< / highlight >}} + + +## Entity, Schema, and Repository + +Now that we have a client that's connected to Redis, we need to start mapping some persons. To do that, we need to define an `Entity` and a `Schema`. Let's start by creating a file named `person.js` in the `om` folder and importing `client` from `client.js` and the `Entity` and `Schema` classes from Redis OM: + +{{< highlight javascript >}} +import { Entity, Schema } from 'redis-om' +import client from './client.js' +{{< / highlight >}} + +### Entity + +Next, we need to define an **entity**. An `Entity` is the class that holds you data when you work with it—the thing being mapped to. It is what you create, read, update, and delete. Any class that extends `Entity` is an entity. We'll define our `Person` entity with a single line: + +{{< highlight javascript >}} +/* our entity */ +class Person extends Entity {} +{{< / highlight >}} + +### Schema + +A **schema** defines the fields on your entity, their types, and how they are mapped internally to Redis. By default, entities map to JSON documents. Let's create our `Schema` in `person.js`: + +{{< highlight javascript >}} +/* create a Schema for Person */ +const personSchema = new Schema(Person, { + firstName: { type: 'string' }, + lastName: { type: 'string' }, + age: { type: 'number' }, + verified: { type: 'boolean' }, + location: { type: 'point' }, + locationUpdated: { type: 'date' }, + skills: { type: 'string[]' }, + personalStatement: { type: 'text' } +}) +{{< / highlight >}} + +When you create a `Schema`, it modifies the `Entity` class you handed it (`Person` in our case) adding getters and setters for the properties you define. The type those getters and setters accept and return are defined with the type parameter as shown above. Valid values are: `string`, `number`, `boolean`, `string[]`, `date`, `point`, and `text`. + +The first three do exactly what you think—they define a property that is a [`String`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String), a [`Number`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number), or a [`Boolean`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Boolean). `string[]` does what you'd think as well, specifically defining an [`Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array) of strings. + +`date` is a little different, but still more or less what you'd expect. 
It defines a property that returns a [`Date`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date) and can be set using not only a `Date` but also a `String` containing an [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) date or a `Number` with the [UNIX epoch time](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date#the_ecmascript_epoch_and_timestamps) in *milliseconds*.

A `point` defines a point somewhere on the globe as a longitude and a latitude. It creates a property that returns and accepts a simple object with the properties of `longitude` and `latitude`. Like this:

{{< highlight javascript >}}
let point = { longitude: 12.34, latitude: 56.78 }
{{< / highlight >}}

A `text` field is a lot like a `string`. If you're just reading and writing objects, they are identical. But if you want to *search* on them, they are very, very different. We'll talk about search more later, but the tl;dr is that `string` fields can only be matched on their whole value—no partial matches—and are best for keys, while `text` fields have full-text search enabled on them and are optimized for human-readable text.

### Repository

Now we have all the pieces that we need to create a **repository**. A `Repository` is the main interface into Redis OM. It gives us the methods to read, write, and remove a specific `Entity`. Create a `Repository` in `person.js` and make sure it's exported as you'll need it when we start implementing our API:

{{< highlight javascript >}}
/* use the client to create a Repository just for Persons */
export const personRepository = client.fetchRepository(personSchema)
{{< / highlight >}}

We're almost done with setting up our repository. But we still need to create an index or we won't be able to search. We do that by calling `.createIndex()`. If an index already exists and it's identical, this function won't do anything. If it's different, it'll drop it and create a new one. Add a call to `.createIndex()` to `person.js`:

{{< highlight javascript >}}
/* create the index for Person */
await personRepository.createIndex()
{{< / highlight >}}

That's all we need for `person.js` and all we need to start talking to Redis using Redis OM. Here's the code in its entirety:

{{< highlight javascript >}}
import { Entity, Schema } from 'redis-om'
import client from './client.js'

/* our entity */
class Person extends Entity {}

/* create a Schema for Person */
const personSchema = new Schema(Person, {
  firstName: { type: 'string' },
  lastName: { type: 'string' },
  age: { type: 'number' },
  verified: { type: 'boolean' },
  location: { type: 'point' },
  locationUpdated: { type: 'date' },
  skills: { type: 'string[]' },
  personalStatement: { type: 'text' }
})

/* use the client to create a Repository just for Persons */
export const personRepository = client.fetchRepository(personSchema)

/* create the index for Person */
await personRepository.createIndex()
{{< / highlight >}}

Now, let's add some routes in Express.


## Set up the Person Router

Let's create a truly RESTful API with the CRUD operations mapping to PUT, GET, POST, and DELETE respectively. We're going to do this using [Express Routers](https://expressjs.com/en/4x/api.html#router) as this makes our code nice and tidy. Create a file called `person-router.js` in the `routers` folder and in it import `Router` from Express and `personRepository` from `person.js`.
Then create and export a `Router`: + +{{< highlight javascript >}} +import { Router } from 'express' +import { personRepository } from '../om/person.js' + +export const router = Router() +{{< / highlight >}} + +Imports and exports done, let's bind the router to our Express app. Open up `server.js` and import the `Router` we just created: + +{{< highlight javascript >}} +/* import routers */ +import { router as personRouter } from './routers/person-router.js' +{{< / highlight >}} + +Then add the `personRouter` to the Express app: + +{{< highlight javascript >}} +/* bring in some routers */ +app.use('/person', personRouter) +{{< / highlight >}} + +Your `server.js` should now look like this: + +{{< highlight javascript >}} +import 'dotenv/config' + +import express from 'express' +import swaggerUi from 'swagger-ui-express' +import YAML from 'yamljs' + +/* import routers */ +import { router as personRouter } from './routers/person-router.js' + +/* create an express app and use JSON */ +const app = new express() +app.use(express.json()) + +/* bring in some routers */ +app.use('/person', personRouter) + +/* set up swagger in the root */ +const swaggerDocument = YAML.load('api.yaml') +app.use('/', swaggerUi.serve, swaggerUi.setup(swaggerDocument)) + +/* start the server */ +app.listen(8080) +{{< / highlight >}} + +Now we can add our routes to create, read, update, and delete persons. Head back to the `person-router.js` file so we can do just that. + + +### Creating a Person + +We'll create a person first as you need to have persons in Redis before you can do any of the reading, writing, or removing of them. Add the PUT route below. This route will call `.createAndSave()` to create a `Person` from the request body and immediately save it to the Redis: + +{{< highlight javascript >}} +router.put('/', async (req, res) => { + const person = await personRepository.createAndSave(req.body) + res.send(person) +}) +{{< / highlight >}} + +Note that we are also returning the newly created `Person`. Let's see what that looks like by actually calling our API using the Swagger UI. Go to http://localhost:8080 in your browser and try it out. The default request body in Swagger will be fine for testing. You should see a response that looks like this: + +{{< highlight json >}} +{ + "entityId": "01FY9MWDTWW4XQNTPJ9XY9FPMN", + "firstName": "Rupert", + "lastName": "Holmes", + "age": 75, + "verified": false, + "location": { + "longitude": 45.678, + "latitude": 45.678 + }, + "locationUpdated": "2022-03-01T12:34:56.123Z", + "skills": [ + "singing", + "songwriting", + "playwriting" + ], + "personalStatement": "I like piña coladas and walks in the rain" +} +{{< / highlight >}} + +This is exactly what we handed it with one exception: the `entityId`. Every entity in Redis OM has an entity ID which is—as you've probably guessed—the unique ID of that entity. It was randomly generated when we called `.createAndSave()`. Yours will be different, so make note of it. + +You can see this newly created JSON document in Redis with Redis Insight. Go ahead and launch Redis Insight and you should see a key with a name like `Person:01FY9MWDTWW4XQNTPJ9XY9FPMN`. The `Person` bit of the key was derived from the class name of our entity and the sequence of letters and numbers is our generated entity ID. Click on it to take a look at the JSON document you've created. + +You'll also see a key named `Person:index:hash`. That's a unique value that Redis OM uses to see if it needs to recreate the index or not when `.createIndex()` is called. 
You can safely ignore it. + + +### Reading a Person + +Create down, let's add a GET route to read this newly created `Person`: + +{{< highlight javascript >}} +router.get('/:id', async (req, res) => { + const person = await personRepository.fetch(req.params.id) + res.send(person) +}) +{{< / highlight >}} + +This code extracts a parameter from the URL used in the route—the `entityId` that we received previously. It uses the `.fetch()` method on the `personRepository` to retrieve a `Person` using that `entityId`. Then, it returns that `Person`. + +Let's go ahead and test that in Swagger as well. You should get back exactly the same response. In fact, since this is a simple GET, we should be able to just load the URL into our browser. Test that out too by navigating to http://localhost:8080/person/01FY9MWDTWW4XQNTPJ9XY9FPMN, replacing the entity ID with your own. + +Now that we can read and write, let's implement the *REST* of the HTTP verbs. REST... get it? + + +### Updating a Person + +Let's add the code to update a person using a POST route: + +{{< highlight javascript >}} +router.post('/:id', async (req, res) => { + + const person = await personRepository.fetch(req.params.id) + + person.firstName = req.body.firstName ?? null + person.lastName = req.body.lastName ?? null + person.age = req.body.age ?? null + person.verified = req.body.verified ?? null + person.location = req.body.location ?? null + person.locationUpdated = req.body.locationUpdated ?? null + person.skills = req.body.skills ?? null + person.personalStatement = req.body.personalStatement ?? null + + await personRepository.save(person) + + res.send(person) +}) +{{< / highlight >}} + +This code fetches the `Person` from the `personRepository` using the `entityId` just like our previous route did. However, now we change all the properties based on the properties in the request body. If any of them are missing, we set them to `null`. Then, we call `.save()` and return the changed `Person`. + +Let's test this in Swagger too, why not? Make some changes. Try removing some of the fields. What do you get back when you read it after you've changed it? + +### Deleting a Person + +Deletion—my favorite! Remember kids, deletion is 100% compression. The route that deletes is just as straightforward as the one that reads, but much more destructive: + +{{< highlight javascript >}} +router.delete('/:id', async (req, res) => { + await personRepository.remove(req.params.id) + res.send({ entityId: req.params.id }) +}) +{{< / highlight >}} + +I guess we should probably test this one out too. Load up Swagger and exercise the route. You should get back JSON with the entity ID you just removed: + +{{< highlight json >}} +{ + "entityId": "01FY9MWDTWW4XQNTPJ9XY9FPMN" +} +{{< / highlight >}} + +And just like that, it's gone! + + +### All the CRUD + +Do a quick check with what you've written so far. Here's what should be the totality of your `person-router.js` file: + +{{< highlight javascript >}} +import { Router } from 'express' +import { personRepository } from '../om/person.js' + +export const router = Router() + +router.put('/', async (req, res) => { + const person = await personRepository.createAndSave(req.body) + res.send(person) +}) + +router.get('/:id', async (req, res) => { + const person = await personRepository.fetch(req.params.id) + res.send(person) +}) + +router.post('/:id', async (req, res) => { + + const person = await personRepository.fetch(req.params.id) + + person.firstName = req.body.firstName ?? 
null + person.lastName = req.body.lastName ?? null + person.age = req.body.age ?? null + person.verified = req.body.verified ?? null + person.location = req.body.location ?? null + person.locationUpdated = req.body.locationUpdated ?? null + person.skills = req.body.skills ?? null + person.personalStatement = req.body.personalStatement ?? null + + await personRepository.save(person) + + res.send(person) +}) + +router.delete('/:id', async (req, res) => { + await personRepository.remove(req.params.id) + res.send({ entityId: req.params.id }) +}) +{{< / highlight >}} + + +## Preparing to search + +CRUD completed, let's do some searching. In order to search, we need data to search over. Remember that `persons` folder with all the JSON documents and the `load-data.sh` shell script? Its time has arrived. Go into that folder and run the script: + + cd persons + ./load-data.sh + +You should get a rather verbose response containing the JSON response from the API and the names of the files you loaded. Like this: + +{{< highlight json >}} +{"entityId":"01FY9Z4RRPKF4K9H78JQ3K3CP3","firstName":"Chris","lastName":"Stapleton","age":43,"verified":true,"location":{"longitude":-84.495,"latitude":38.03},"locationUpdated":"2022-01-01T12:00:00.000Z","skills":["singing","football","coal mining"],"personalStatement":"There are days that I can walk around like I'm alright. And I pretend to wear a smile on my face. And I could keep the pain from comin' out of my eyes. But sometimes, sometimes, sometimes I cry."} <- chris-stapleton.json +{"entityId":"01FY9Z4RS2QQVN4XFYSNPKH6B2","firstName":"David","lastName":"Paich","age":67,"verified":false,"location":{"longitude":-118.25,"latitude":34.05},"locationUpdated":"2022-01-01T12:00:00.000Z","skills":["singing","keyboard","blessing"],"personalStatement":"I seek to cure what's deep inside frightened of this thing that I've become"} <- david-paich.json +{"entityId":"01FY9Z4RSD7SQMSWDFZ6S4M5MJ","firstName":"Ivan","lastName":"Doroschuk","age":64,"verified":true,"location":{"longitude":-88.273,"latitude":40.115},"locationUpdated":"2022-01-01T12:00:00.000Z","skills":["singing","dancing","friendship"],"personalStatement":"We can dance if we want to. We can leave your friends behind. 
'Cause your friends don't dance and if they don't dance well they're no friends of mine."} <- ivan-doroschuk.json +{"entityId":"01FY9Z4RSRZFGQ21BMEKYHEVK6","firstName":"Joan","lastName":"Jett","age":63,"verified":false,"location":{"longitude":-75.273,"latitude":40.003},"locationUpdated":"2022-01-01T12:00:00.000Z","skills":["singing","guitar","black eyeliner"],"personalStatement":"I love rock n' roll so put another dime in the jukebox, baby."} <- joan-jett.json +{"entityId":"01FY9Z4RT25ABWYTW6ZG7R79V4","firstName":"Justin","lastName":"Timberlake","age":41,"verified":true,"location":{"longitude":-89.971,"latitude":35.118},"locationUpdated":"2022-01-01T12:00:00.000Z","skills":["singing","dancing","half-time shows"],"personalStatement":"What goes around comes all the way back around."} <- justin-timberlake.json +{"entityId":"01FY9Z4RTD9EKBDS2YN9CRMG1D","firstName":"Kerry","lastName":"Livgren","age":72,"verified":false,"location":{"longitude":-95.689,"latitude":39.056},"locationUpdated":"2022-01-01T12:00:00.000Z","skills":["poetry","philosophy","songwriting","guitar"],"personalStatement":"All we are is dust in the wind."} <- kerry-livgren.json +{"entityId":"01FY9Z4RTR73HZQXK83JP94NWR","firstName":"Marshal","lastName":"Mathers","age":49,"verified":false,"location":{"longitude":-83.046,"latitude":42.331},"locationUpdated":"2022-01-01T12:00:00.000Z","skills":["rapping","songwriting","comics"],"personalStatement":"Look, if you had, one shot, or one opportunity to seize everything you ever wanted, in one moment, would you capture it, or just let it slip?"} <- marshal-mathers.json +{"entityId":"01FY9Z4RV2QHH0Z1GJM5ND15JE","firstName":"Rupert","lastName":"Holmes","age":75,"verified":true,"location":{"longitude":-2.518,"latitude":53.259},"locationUpdated":"2022-01-01T12:00:00.000Z","skills":["singing","songwriting","playwriting"],"personalStatement":"I like piña coladas and taking walks in the rain."} <- rupert-holmes.json +{{< / highlight >}} + +A little messy, but if you don't see this, then it didn't work! + +Now that we have some data, let's add another router to hold the search routes we want to add. Create a file named `search-router.js` in the routers folder and set it up with imports and exports just like we did in `person-router.js`: + +{{< highlight javascript >}} +import { Router } from 'express' +import { personRepository } from '../om/person.js' + +export const router = Router() +{{< / highlight >}} + +Import the `Router` into `server.js` the same way we did for the `personRouter`: + +{{< highlight javascript >}} +/* import routers */ +import { router as personRouter } from './routers/person-router.js' +import { router as searchRouter } from './routers/search-router.js' +{{< / highlight >}} + +Then add the `searchRouter` to the Express app: + +{{< highlight javascript >}} +/* bring in some routers */ +app.use('/person', personRouter) +app.use('/persons', searchRouter) +{{< / highlight >}} + +Router bound, we can now add some routes. + + +### Search all the things + +We're going to add a plethora of searches to our new `Router`. But the first will be the easiest as it's just going to return everything. Go ahead and add the following code to `search-router.js`: + +{{< highlight javascript >}} +router.get('/all', async (req, res) => { + const persons = await personRepository.search().return.all() + res.send(persons) +}) +{{< / highlight >}} + +Here we see how to start and finish a search. Searches start just like CRUD operations start—on a `Repository`. 
But instead of calling `.createAndSave()`, `.fetch()`, `.save()`, or `.remove()`, we call `.search()`. And unlike all those other methods, `.search()` doesn't end there. Instead, it allows you to build up a query (which you'll see in the next example) and then resolve it with a call to `.return.all()`.

With this new route in place, go into the Swagger UI and exercise the `/persons/all` route. You should see all of the folks you added with the shell script as a JSON array.

In the example above, the query is not specified—we didn't build anything up. If you do this, you'll just get everything. Which is what you want sometimes. But not most of the time. It's not really searching if you just return everything. So let's add a route that lets us find persons by their last name. Add the following code:

{{< highlight javascript >}}
router.get('/by-last-name/:lastName', async (req, res) => {
  const lastName = req.params.lastName
  const persons = await personRepository.search()
    .where('lastName').equals(lastName).return.all()
  res.send(persons)
})
{{< / highlight >}}

In this route, we're specifying a field we want to filter on and a value that it needs to equal. The field name in the call to `.where()` is the name of the field specified in our schema. This field was defined as a `string`, which matters because the type of the field determines the methods that are available to query it.

In the case of a `string`, there's just `.equals()`, which will query against the value of the entire string. This is aliased as `.eq()`, `.equal()`, and `.equalTo()` for your convenience. You can even add a little more syntactic sugar with calls to `.is` and `.does` that really don't do anything but make your code pretty. Like this:

{{< highlight javascript >}}
const persons = await personRepository.search().where('lastName').is.equalTo(lastName).return.all()
const persons = await personRepository.search().where('lastName').does.equal(lastName).return.all()
{{< / highlight >}}

You can also invert the query with a call to `.not`:

{{< highlight javascript >}}
const persons = await personRepository.search().where('lastName').is.not.equalTo(lastName).return.all()
const persons = await personRepository.search().where('lastName').does.not.equal(lastName).return.all()
{{< / highlight >}}

In all these cases, the call to `.return.all()` executes the query we build between it and the call to `.search()`. We can search on other field types as well. Let's add some routes to search on a `number` and a `boolean` field:

{{< highlight javascript >}}
router.get('/old-enough-to-drink-in-america', async (req, res) => {
  const persons = await personRepository.search()
    .where('age').gte(21).return.all()
  res.send(persons)
})

router.get('/non-verified', async (req, res) => {
  const persons = await personRepository.search()
    .where('verified').is.not.true().return.all()
  res.send(persons)
})
{{< / highlight >}}

The `number` field is filtering persons by age where the age is greater than or equal to 21.
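Under the hood, Redis OM translates this fluent chain into a query against the RediSearch index it built when we called `.createIndex()`. As a rough sketch (assuming the default index name of `Person:index`, which is an assumption on our part; you can check the actual name with `FT._LIST`), the equivalent raw command looks something like this:

    FT.SEARCH Person:index "@age:[21 +inf]"

You don't need to write queries like this yourself; it's just useful to know roughly what the fluent builder is doing for you.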
Again, there are aliases and syntactic sugar: + +{{< highlight javascript >}} +const persons = await personRepository.search().where('age').is.greaterThanOrEqualTo(21).return.all() +{{< / highlight >}} + +But there are also more ways to query: + +{{< highlight javascript >}} +const persons = await personRepository.search().where('age').eq(21).return.all() +const persons = await personRepository.search().where('age').gt(21).return.all() +const persons = await personRepository.search().where('age').gte(21).return.all() +const persons = await personRepository.search().where('age').lt(21).return.all() +const persons = await personRepository.search().where('age').lte(21).return.all() +const persons = await personRepository.search().where('age').between(21, 65).return.all() +{{< / highlight >}} + +The `boolean` field is searching for persons by their verification status. It already has some of our syntactic sugar in it. Note that this query will match a missing value or a false value. That's why I specified `.not.true()`. You can also call `.false()` on boolean fields as well as all the variations of `.equals`. + +{{< highlight javascript >}} +const persons = await personRepository.search().where('verified').true().return.all() +const persons = await personRepository.search().where('verified').false().return.all() +const persons = await personRepository.search().where('verified').equals(true).return.all() +{{< / highlight >}} + +> So, we've created a few routes and I haven't told you to test them. Maybe you have anyhow. If so, good for you, you rebel. For the rest of you, why don't you go ahead and test them now with Swagger? And, going forward, just test them when you want. Heck, create some routes of your own using the provided syntax and try those out too. Don't let me tell you how to live your life. + +Of course, querying on just one field is never enough. Not a problem, Redis OM can handle `.and()` and `.or()` like in this route: + +{{< highlight javascript >}} +router.get('/verified-drinkers-with-last-name/:lastName', async (req, res) => { + const lastName = req.params.lastName + const persons = await personRepository.search() + .where('verified').is.true() + .and('age').gte(21) + .and('lastName').equals(lastName).return.all() + res.send(persons) +}) +{{< / highlight >}} + +Here, I'm just showing the syntax for `.and()` but, of course, you can also use `.or()`. + + +### Full-text search + +If you've defined a field with a type of `text` in your schema, you can perform full-text searches against it. The way a `text` field is searched is different from how a `string` is searched. A `string` can only be compared with `.equals()` and must match the entire string. With a `text` field, you can look for words within the string. + +A `text` field is optimized for human-readable text, like an essay or song lyrics. It's pretty clever. It understands that certain words (like *a*, *an*, or *the*) are common and ignores them. It understands how words are grammatically similar and so if you search for *give*, it matches *gives*, *given*, *giving*, and *gave* too. And it ignores punctuation. + +Let's add a route that does full-text search against our `personalStatement` field: + +{{< highlight javascript >}} +router.get('/with-statement-containing/:text', async (req, res) => { + const text = req.params.text + const persons = await personRepository.search() + .where('personalStatement').matches(text) + .return.all() + res.send(persons) +}) +{{< / highlight >}} + +Note the use of the `.matches()` function. 
This is the only one that works with `text` fields. It takes a string that can be one or more words—space-delimited—that you want to query for. Let's try it out. In Swagger, use this route to search for the word "walk". You should get the following results: + +{{< highlight json >}} +[ + { + "entityId": "01FYC7CTR027F219455PS76247", + "firstName": "Rupert", + "lastName": "Holmes", + "age": 75, + "verified": true, + "location": { + "longitude": -2.518, + "latitude": 53.259 + }, + "locationUpdated": "2022-01-01T12:00:00.000Z", + "skills": [ + "singing", + "songwriting", + "playwriting" + ], + "personalStatement": "I like piña coladas and taking walks in the rain." + }, + { + "entityId": "01FYC7CTNBJD9CZKKWPQEZEW14", + "firstName": "Chris", + "lastName": "Stapleton", + "age": 43, + "verified": true, + "location": { + "longitude": -84.495, + "latitude": 38.03 + }, + "locationUpdated": "2022-01-01T12:00:00.000Z", + "skills": [ + "singing", + "football", + "coal mining" + ], + "personalStatement": "There are days that I can walk around like I'm alright. And I pretend to wear a smile on my face. And I could keep the pain from comin' out of my eyes. But sometimes, sometimes, sometimes I cry." + } +] +{{< / highlight >}} + +Notice how the word "walk" is matched for Rupert Holmes' personal statement that contains "walks" *and* matched for Chris Stapleton's that contains "walk". Now search "walk raining". You'll see that this returns Rupert's entry only even though the exact text of neither of these words is found in his personal statement. But they are *grammatically* related so it matched them. This is called stemming and it's a pretty cool feature of Redis Stack that Redis OM exploits. + +And if you search for "a rain walk" you'll *still* match Rupert's entry even though the word "a" is not in the text. Why? Because it's a common word that's not very helpful with searching. These common words are called stop words and this is another cool feature of Redis Stack that Redis OM just gets for free. + + +### Searching the globe + +Redis Stack, and therefore Redis OM, both support searching by geographic location. You specify a point in the globe, a radius, and the units for that radius and it'll gleefully return all the entities therein. Let's add a route to do just that: + +{{< highlight javascript >}} +router.get('/near/:lng,:lat/radius/:radius', async (req, res) => { + const longitude = Number(req.params.lng) + const latitude = Number(req.params.lat) + const radius = Number(req.params.radius) + + const persons = await personRepository.search() + .where('location') + .inRadius(circle => circle + .longitude(longitude) + .latitude(latitude) + .radius(radius) + .miles) + .return.all() + + res.send(persons) +}) +{{< / highlight >}} + +This code looks a little different than the others because the way we define the circle we want to search is done with a function that is passed into the `.inRadius` method: + +{{< highlight javascript >}} +circle => circle.longitude(longitude).latitude(latitude).radius(radius).miles +{{< / highlight >}} + +All this function does is accept an instance of a [`Circle`](https://github.com/redis/redis-om-node/blob/main/docs/classes/Circle.md) that has been initialized with default values. We override those values by calling various builder methods to define the origin of our search (i.e. the longitude and latitude), the radius, and the units that radius is measured in. Valid units are `miles`, `meters`, `feet`, and `kilometers`. + +Let's try the route out. 
I know we can find Joan Jett at around longitude -75.0 and latitude 40.0, which is in eastern Pennsylvania. So use those coordinates with a radius of 20 miles. You should receive in response: + +{{< highlight json >}} +[ + { + "entityId": "01FYC7CTPKYNXQ98JSTBC37AS1", + "firstName": "Joan", + "lastName": "Jett", + "age": 63, + "verified": false, + "location": { + "longitude": -75.273, + "latitude": 40.003 + }, + "locationUpdated": "2022-01-01T12:00:00.000Z", + "skills": [ + "singing", + "guitar", + "black eyeliner" + ], + "personalStatement": "I love rock n' roll so put another dime in the jukebox, baby." + } +] +{{< / highlight >}} + +Try widening the radius and see who else you can find. + + +## Adding location tracking + +We're getting toward the end of the tutorial here, but before we go, I'd like to add that location tracking piece that I mentioned way back in the beginning. This next bit of code should be easily understood if you've gotten this far as it's not really doing anything I haven't talked about already. + +Add a new file called `location-router.js` in the `routers` folder: + +{{< highlight javascript >}} +import { Router } from 'express' +import { personRepository } from '../om/person.js' + +export const router = Router() + +router.patch('/:id/location/:lng,:lat', async (req, res) => { + + const id = req.params.id + const longitude = Number(req.params.lng) + const latitude = Number(req.params.lat) + + const locationUpdated = new Date() + + const person = await personRepository.fetch(id) + person.location = { longitude, latitude } + person.locationUpdated = locationUpdated + await personRepository.save(person) + + res.send({ id, locationUpdated, location: { longitude, latitude } }) +}) +{{< / highlight >}} + +Here we're calling `.fetch()` to fetch a person, we're updating some values for that person—the `.location` property with our longitude and latitude and the `.locationUpdated` property with the current date and time. Easy stuff. + +To use this `Router`, import it in `server.js`: + +{{< highlight javascript >}} +/* import routers */ +import { router as personRouter } from './routers/person-router.js' +import { router as searchRouter } from './routers/search-router.js' +import { router as locationRouter } from './routers/location-router.js' +{{< / highlight >}} + +And bind the router to a path: + +{{< highlight javascript >}} +/* bring in some routers */ +app.use('/person', personRouter, locationRouter) +app.use('/persons', searchRouter) +{{< / highlight >}} + +And that's that. But this just isn't enough to satisfy. It doesn't show you anything new, except maybe the usage of a `date` field. And, it's not really location *tracking*. It just shows where these people last were, no history. So let's add some!. + +To add some history, we're going to use a [Redis Stream]({{< relref "/develop/data-types/streams" >}}). Streams are a big topic but don't worry if you’re not familiar with them, you can think of them as being sort of like a log file stored in a Redis key where each entry represents an event. In our case, the event would be the person moving about or checking in or whatever. + +But there's a problem. Redis OM doesn’t support Streams even though Redis Stack does. So how do we take advantage of them in our application? By using [Node Redis](https://github.com/redis/node-redis). Node Redis is a low-level Redis client for Node.js that gives you access to all the Redis commands and data types. Internally, Redis OM is creating and using a Node Redis connection. 
You can use that connection too. Or rather, Redis OM can be *told* to use the connection you are using. Let me show you how. + + +## Using Node Redis + +Open up `client.js` in the `om` folder. Remember how we created a Redis OM `Client` and then called `.open()` on it? + +{{< highlight javascript >}} +const client = await new Client().open(url) +{{< / highlight >}} + +Well, the `Client` class also has a `.use()` method that takes a Node Redis connection. Modify `client.js` to open a connection to Redis using Node Redis and then `.use()` it: + +{{< highlight javascript >}} +import { Client } from 'redis-om' +import { createClient } from 'redis' + +/* pulls the Redis URL from .env */ +const url = process.env.REDIS_URL + +/* create a connection to Redis with Node Redis */ +export const connection = createClient({ url }) +await connection.connect() + +/* create a Client and bind it to the Node Redis connection */ +const client = await new Client().use(connection) + +export default client +{{< / highlight >}} + +And that's it. Redis OM is now using the `connection` you created. Note that we are exporting both the `client` *and* the `connection`. Got to export the `connection` if we want to use it in our newest route. + + +## Storing location history with Streams + +To add an event to a Stream we need to use the [XADD]({{< relref "/commands/xadd" >}}) command. Node Redis exposes that as `.xAdd()`. So, we need to add a call to `.xAdd()` in our route. Modify `location-router.js` to import our `connection`: + +{{< highlight javascript >}} +import { connection } from '../om/client.js' +{{< / highlight >}} + +And then in the route itself add a call to `.xAdd()`: + +{{< highlight javascript >}} + ...snip... + const person = await personRepository.fetch(id) + person.location = { longitude, latitude } + person.locationUpdated = locationUpdated + await personRepository.save(person) + + let keyName = `${person.keyName}:locationHistory` + await connection.xAdd(keyName, '*', person.location) + ...snip... +{{< / highlight >}} + +`.xAdd()` takes a key name, an event ID, and a JavaScript object containing the keys and values that make up the event, i.e. the event data. For the key name, we're building a string using the `.keyName` property that `Person` inherited from `Entity` (which will return something like `Person:01FYC7CTPKYNXQ98JSTBC37AS1`) combined with a hard-coded value. We're passing in `*` for our event ID, which tells Redis to just generate it based on the current time and previous event ID. And we're passing in the location—with properties of longitude and latitude—as our event data. + +Now, whenever this route is exercised, the longitude and latitude will be logged and the event ID will encode the time. Go ahead and use Swagger to move Joan Jett around a few times. + +Now, go into Redis Insight and take a look at the Stream. You'll see it there in the list of keys but if you click on it, you'll get a message saying that "This data type is coming soon!". If you don't get this message, congratulations, you live in the future! For us here in the past, we'll just issue the raw command instead: + + XRANGE Person:01FYC7CTPKYNXQ98JSTBC37AS1:locationHistory - + + +This tells Redis to get a range of values from a Stream stored in the given the key name—`Person:01FYC7CTPKYNXQ98JSTBC37AS1:locationHistory` in our example. The next values are the starting event ID and the ending event ID. `-` is the beginning of the Stream. `+` is the end. 
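If you'd rather stay in JavaScript than drop into the CLI, Node Redis exposes the same command as `.xRange()`. Here's a quick sketch using the `connection` we exported from `client.js` and the same key name:

{{< highlight javascript >}}
/* read the entire location history Stream with Node Redis */
const history = await connection.xRange('Person:01FYC7CTPKYNXQ98JSTBC37AS1:locationHistory', '-', '+')
{{< / highlight >}}

Either way, raw command or `.xRange()`, the result is the same.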
So this returns everything in the Stream: + + 1) 1) "1647536562911-0" + 2) 1) "longitude" + 2) "45.678" + 3) "latitude" + 4) "45.678" + 2) 1) "1647536564189-0" + 2) 1) "longitude" + 2) "45.679" + 3) "latitude" + 4) "45.679" + 3) 1) "1647536565278-0" + 2) 1) "longitude" + 2) "45.680" + 3) "latitude" + 4) "45.680" + +And just like that, we're tracking Joan Jett. + + +## Wrap-up + +So, now you know how to use Express + Redis OM to build an API backed by Redis Stack. And, you've got yourself some pretty decent started code in the process. Good deal! If you want to learn more, you can check out the [documentation](https://github.com/redis/redis-om-node) for Redis OM. It covers the full breadth of Redis OM's features. + +And thanks for taking the time to work through this. I sincerely hope you found it useful. If you have any questions, the [Redis Discord server](https://discord.gg/redis) is by far the best place to get them answered. Join the server and ask away! +--- +linkTitle: Vectorizers +title: Vectorizers +type: integration +weight: 04 +--- + + +In this notebook, we will show how to use RedisVL to create embeddings using the built-in text embedding vectorizers. Today RedisVL supports: +1. OpenAI +2. HuggingFace +3. Vertex AI +4. Cohere +5. Mistral AI +6. Amazon Bedrock +7. Bringing your own vectorizer +8. VoyageAI + +Before running this notebook, be sure to +1. Have installed ``redisvl`` and have that environment active for this notebook. +2. Have a running Redis Stack instance with RediSearch > 2.4 active. + +For example, you can run Redis Stack locally with Docker: + +```bash +docker run -d -p 6379:6379 -p 8001:8001 redis/redis-stack:latest +``` + +This will run Redis on port 6379 and RedisInsight at http://localhost:8001. + + +```python +# import necessary modules +import os +``` + +## Creating Text Embeddings + +This example will show how to create an embedding from 3 simple sentences with a number of different text vectorizers in RedisVL. + +- "That is a happy dog" +- "That is a happy person" +- "Today is a nice day" + + +### OpenAI + +The ``OpenAITextVectorizer`` makes it simple to use RedisVL with the embeddings models at OpenAI. For this you will need to install ``openai``. 
+ +```bash +pip install openai +``` + + + +```python +import getpass + +# setup the API Key +api_key = os.environ.get("OPENAI_API_KEY") or getpass.getpass("Enter your OpenAI API key: ") +``` + + +```python +from redisvl.utils.vectorize import OpenAITextVectorizer + +# create a vectorizer +oai = OpenAITextVectorizer( + model="text-embedding-ada-002", + api_config={"api_key": api_key}, +) + +test = oai.embed("This is a test sentence.") +print("Vector dimensions: ", len(test)) +test[:10] +``` + + Vector dimensions: 1536 + + + + + + [-0.0011391325388103724, + -0.003206387162208557, + 0.002380132209509611, + -0.004501554183661938, + -0.010328996926546097, + 0.012922565452754498, + -0.005491119809448719, + -0.0029864837415516376, + -0.007327961269766092, + -0.03365817293524742] + + + + +```python +# Create many embeddings at once +sentences = [ + "That is a happy dog", + "That is a happy person", + "Today is a sunny day" +] + +embeddings = oai.embed_many(sentences) +embeddings[0][:10] +``` + + + + + [-0.017466850578784943, + 1.8471690054866485e-05, + 0.00129731057677418, + -0.02555876597762108, + -0.019842341542243958, + 0.01603139191865921, + -0.0037347301840782166, + 0.0009670283179730177, + 0.006618348415941, + -0.02497442066669464] + + + + +```python +# openai also supports asyncronous requests, which we can use to speed up the vectorization process. +embeddings = await oai.aembed_many(sentences) +print("Number of Embeddings:", len(embeddings)) + +``` + + Number of Embeddings: 3 + + +### Azure OpenAI + +The ``AzureOpenAITextVectorizer`` is a variation of the OpenAI vectorizer that calls OpenAI models within Azure. If you've already installed ``openai``, then you're ready to use Azure OpenAI. + +The only practical difference between OpenAI and Azure OpenAI is the variables required to call the API. 
+ + +```python +# additionally to the API Key, setup the API endpoint and version +api_key = os.environ.get("AZURE_OPENAI_API_KEY") or getpass.getpass("Enter your AzureOpenAI API key: ") +api_version = os.environ.get("OPENAI_API_VERSION") or getpass.getpass("Enter your AzureOpenAI API version: ") +azure_endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT") or getpass.getpass("Enter your AzureOpenAI API endpoint: ") +deployment_name = os.environ.get("AZURE_OPENAI_DEPLOYMENT_NAME", "text-embedding-ada-002") + +``` + + +```python +from redisvl.utils.vectorize import AzureOpenAITextVectorizer + +# create a vectorizer +az_oai = AzureOpenAITextVectorizer( + model=deployment_name, # Must be your CUSTOM deployment name + api_config={ + "api_key": api_key, + "api_version": api_version, + "azure_endpoint": azure_endpoint + }, +) + +test = az_oai.embed("This is a test sentence.") +print("Vector dimensions: ", len(test)) +test[:10] +``` + + + --------------------------------------------------------------------------- + + ValueError Traceback (most recent call last) + + Cell In[7], line 4 + 1 from redisvl.utils.vectorize import AzureOpenAITextVectorizer + 3 # create a vectorizer + ----> 4 az_oai = AzureOpenAITextVectorizer( + 5 model=deployment_name, # Must be your CUSTOM deployment name + 6 api_config={ + 7 "api_key": api_key, + 8 "api_version": api_version, + 9 "azure_endpoint": azure_endpoint + 10 }, + 11 ) + 13 test = az_oai.embed("This is a test sentence.") + 14 print("Vector dimensions: ", len(test)) + + + File ~/src/redis-vl-python/redisvl/utils/vectorize/text/azureopenai.py:78, in AzureOpenAITextVectorizer.__init__(self, model, api_config, dtype) + 54 def __init__( + 55 self, + 56 model: str = "text-embedding-ada-002", + 57 api_config: Optional[Dict] = None, + 58 dtype: str = "float32", + 59 ): + 60 """Initialize the AzureOpenAI vectorizer. + 61 + 62 Args: + (...) + 76 ValueError: If an invalid dtype is provided. + 77 """ + ---> 78 self._initialize_clients(api_config) + 79 super().__init__(model=model, dims=self._set_model_dims(model), dtype=dtype) + + + File ~/src/redis-vl-python/redisvl/utils/vectorize/text/azureopenai.py:106, in AzureOpenAITextVectorizer._initialize_clients(self, api_config) + 99 azure_endpoint = ( + 100 api_config.pop("azure_endpoint") + 101 if api_config + 102 else os.getenv("AZURE_OPENAI_ENDPOINT") + 103 ) + 105 if not azure_endpoint: + --> 106 raise ValueError( + 107 "AzureOpenAI API endpoint is required. " + 108 "Provide it in api_config or set the AZURE_OPENAI_ENDPOINT\ + 109 environment variable." + 110 ) + 112 api_version = ( + 113 api_config.pop("api_version") + 114 if api_config + 115 else os.getenv("OPENAI_API_VERSION") + 116 ) + 118 if not api_version: + + + ValueError: AzureOpenAI API endpoint is required. Provide it in api_config or set the AZURE_OPENAI_ENDPOINT environment variable. + + + +```python +# Just like OpenAI, AzureOpenAI supports batching embeddings and asynchronous requests. +sentences = [ + "That is a happy dog", + "That is a happy person", + "Today is a sunny day" +] + +embeddings = await az_oai.aembed_many(sentences) +embeddings[0][:10] +``` + +### Huggingface + +[Huggingface](https://huggingface.co/models) is a popular NLP platform that has a number of pre-trained models you can use off the shelf. RedisVL supports using Huggingface "Sentence Transformers" to create embeddings from text. To use Huggingface, you will need to install the ``sentence-transformers`` library. 
+ +```bash +pip install sentence-transformers +``` + + +```python +os.environ["TOKENIZERS_PARALLELISM"] = "false" +from redisvl.utils.vectorize import HFTextVectorizer + + +# create a vectorizer +# choose your model from the huggingface website +hf = HFTextVectorizer(model="sentence-transformers/all-mpnet-base-v2") + +# embed a sentence +test = hf.embed("This is a test sentence.") +test[:10] +``` + + +```python +# You can also create many embeddings at once +embeddings = hf.embed_many(sentences, as_buffer=True) + +``` + +### VertexAI + +[VertexAI](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings) is GCP's fully-featured AI platform including a number of pretrained LLMs. RedisVL supports using VertexAI to create embeddings from these models. To use VertexAI, you will first need to install the ``google-cloud-aiplatform`` library. + +```bash +pip install google-cloud-aiplatform>=1.26 +``` + +1. Then you need to gain access to a [Google Cloud Project](https://cloud.google.com/gcp?hl=en) and provide [access to credentials](https://cloud.google.com/docs/authentication/application-default-credentials). This is accomplished by setting the `GOOGLE_APPLICATION_CREDENTIALS` environment variable pointing to the path of a JSON key file downloaded from your service account on GCP. +2. Lastly, you need to find your [project ID](https://support.google.com/googleapi/answer/7014113?hl=en) and [geographic region for VertexAI](https://cloud.google.com/vertex-ai/docs/general/locations). + + +**Make sure the following env vars are set:** + +``` +GOOGLE_APPLICATION_CREDENTIALS= +GCP_PROJECT_ID= +GCP_LOCATION= +``` + + +```python +from redisvl.utils.vectorize import VertexAITextVectorizer + + +# create a vectorizer +vtx = VertexAITextVectorizer(api_config={ + "project_id": os.environ.get("GCP_PROJECT_ID") or getpass.getpass("Enter your GCP Project ID: "), + "location": os.environ.get("GCP_LOCATION") or getpass.getpass("Enter your GCP Location: "), + "google_application_credentials": os.environ.get("GOOGLE_APPLICATION_CREDENTIALS") or getpass.getpass("Enter your Google App Credentials path: ") +}) + +# embed a sentence +test = vtx.embed("This is a test sentence.") +test[:10] +``` + +### Cohere + +[Cohere](https://dashboard.cohere.ai/) allows you to implement language AI into your product. The `CohereTextVectorizer` makes it simple to use RedisVL with the embeddings models at Cohere. For this you will need to install `cohere`. + +```bash +pip install cohere +``` + + +```python +import getpass +# setup the API Key +api_key = os.environ.get("COHERE_API_KEY") or getpass.getpass("Enter your Cohere API key: ") +``` + + +Special attention needs to be paid to the `input_type` parameter for each `embed` call. For example, for embedding +queries, you should set `input_type='search_query'`; for embedding documents, set `input_type='search_document'`. 
See +more information [here](https://docs.cohere.com/reference/embed) + + +```python +from redisvl.utils.vectorize import CohereTextVectorizer + +# create a vectorizer +co = CohereTextVectorizer( + model="embed-english-v3.0", + api_config={"api_key": api_key}, +) + +# embed a search query +test = co.embed("This is a test sentence.", input_type='search_query') +print("Vector dimensions: ", len(test)) +print(test[:10]) + +# embed a document +test = co.embed("This is a test sentence.", input_type='search_document') +print("Vector dimensions: ", len(test)) +print(test[:10]) +``` + +Learn more about using RedisVL and Cohere together through [this dedicated user guide](https://docs.cohere.com/docs/redis-and-cohere). + +### VoyageAI + +[VoyageAI](https://dash.voyageai.com/) allows you to implement language AI into your product. The `VoyageAITextVectorizer` makes it simple to use RedisVL with the embeddings models at VoyageAI. For this you will need to install `voyageai`. + +```bash +pip install voyageai +``` + + +```python +import getpass +# setup the API Key +api_key = os.environ.get("VOYAGE_API_KEY") or getpass.getpass("Enter your VoyageAI API key: ") +``` + + +Special attention needs to be paid to the `input_type` parameter for each `embed` call. For example, for embedding +queries, you should set `input_type='query'`; for embedding documents, set `input_type='document'`. See +more information [here](https://docs.voyageai.com/docs/embeddings) + + +```python +from redisvl.utils.vectorize import VoyageAITextVectorizer + +# create a vectorizer +vo = VoyageAITextVectorizer( + model="voyage-law-2", # Please check the available models at https://docs.voyageai.com/docs/embeddings + api_config={"api_key": api_key}, +) + +# embed a search query +test = vo.embed("This is a test sentence.", input_type='query') +print("Vector dimensions: ", len(test)) +print(test[:10]) + +# embed a document +test = vo.embed("This is a test sentence.", input_type='document') +print("Vector dimensions: ", len(test)) +print(test[:10]) +``` + +### Mistral AI + +[Mistral](https://console.mistral.ai/) offers LLM and embedding APIs for you to implement into your product. The `MistralAITextVectorizer` makes it simple to use RedisVL with their embeddings model. +You will need to install `mistralai`. + +```bash +pip install mistralai +``` + + +```python +from redisvl.utils.vectorize import MistralAITextVectorizer + +mistral = MistralAITextVectorizer() + +# embed a sentence using their asyncronous method +test = await mistral.aembed("This is a test sentence.") +print("Vector dimensions: ", len(test)) +print(test[:10]) +``` + +### Amazon Bedrock + +Amazon Bedrock provides fully managed foundation models for text embeddings. Install the required dependencies: + +```bash +pip install 'redisvl[bedrock]' # Installs boto3 +``` + +#### Configure AWS credentials: + + +```python +import os +import getpass + +if "AWS_ACCESS_KEY_ID" not in os.environ: + os.environ["AWS_ACCESS_KEY_ID"] = getpass.getpass("Enter AWS Access Key ID: ") +if "AWS_SECRET_ACCESS_KEY" not in os.environ: + os.environ["AWS_SECRET_ACCESS_KEY"] = getpass.getpass("Enter AWS Secret Key: ") + +os.environ["AWS_REGION"] = "us-east-1" # Change as needed +``` + +#### Create embeddings: + + +```python +from redisvl.utils.vectorize import BedrockTextVectorizer + +bedrock = BedrockTextVectorizer( + model="amazon.titan-embed-text-v2:0" +) + +# Single embedding +text = "This is a test sentence." 
+embedding = bedrock.embed(text) +print(f"Vector dimensions: {len(embedding)}") + +# Multiple embeddings +sentences = [ + "That is a happy dog", + "That is a happy person", + "Today is a sunny day" +] +embeddings = bedrock.embed_many(sentences) +``` + +### Custom Vectorizers + +RedisVL supports the use of other vectorizers and provides a class to enable compatibility with any function that generates a vector or vectors from string data + + +```python +from redisvl.utils.vectorize import CustomTextVectorizer + +def generate_embeddings(text_input, **kwargs): + return [0.101] * 768 + +custom_vectorizer = CustomTextVectorizer(generate_embeddings) + +custom_vectorizer.embed("This is a test sentence.")[:10] +``` + +This enables the use of custom vectorizers with other RedisVL components + + +```python +from redisvl.extensions.cache.llm import SemanticCache + +cache = SemanticCache(name="custom_cache", vectorizer=custom_vectorizer) + +cache.store("this is a test prompt", "this is a test response") +cache.check("this is also a test prompt") +``` + +## Search with Provider Embeddings + +Now that we've created our embeddings, we can use them to search for similar sentences. We will use the same 3 sentences from above and search for similar sentences. + +First, we need to create the schema for our index. + +Here's what the schema for the example looks like in yaml for the HuggingFace vectorizer: + +```yaml +version: '0.1.0' + +index: + name: vectorizers + prefix: doc + storage_type: hash + +fields: + - name: sentence + type: text + - name: embedding + type: vector + attrs: + dims: 768 + algorithm: flat + distance_metric: cosine +``` + + +```python +from redisvl.index import SearchIndex + +# construct a search index from the schema +index = SearchIndex.from_yaml("./schema.yaml", redis_url="redis://localhost:6379") + +# create the index (no data yet) +index.create(overwrite=True) +``` + + +```python +# use the CLI to see the created index +!rvl index listall +``` + +Loading data to RedisVL is easy. It expects a list of dictionaries. The vector is stored as bytes. + + +```python +from redisvl.redis.utils import array_to_buffer + +embeddings = hf.embed_many(sentences) + +data = [{"text": t, + "embedding": array_to_buffer(v, dtype="float32")} + for t, v in zip(sentences, embeddings)] + +index.load(data) +``` + + +```python +from redisvl.query import VectorQuery + +# use the HuggingFace vectorizer again to create a query embedding +query_embedding = hf.embed("That is a happy cat") + +query = VectorQuery( + vector=query_embedding, + vector_field_name="embedding", + return_fields=["text"], + num_results=3 +) + +results = index.query(query) +for doc in results: + print(doc["text"], doc["vector_distance"]) +``` + +## Selecting your float data type +When embedding text as byte arrays RedisVL supports 4 different floating point data types, `float16`, `float32`, `float64` and `bfloat16`, and 2 integer types, `int8` and `uint8`. +Your dtype set for your vectorizer must match what is defined in your search index. If one is not explicitly set the default is `float32`. 
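On the index side, the dtype is controlled by the vector field's `datatype` attribute (our reading of the RedisVL schema format; double-check the schema reference if your version differs). Here's a sketch of how the `embedding` field from the earlier schema would declare `float16`:

```yaml
  - name: embedding
    type: vector
    attrs:
      dims: 768
      algorithm: flat
      distance_metric: cosine
      datatype: float16   # agrees with a vectorizer created with dtype="float16"
```

With that in place, a vectorizer created with the matching dtype will produce byte buffers the index can consume, as the next cell shows.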
+ + +```python +vectorizer = HFTextVectorizer(dtype="float16") + +# subsequent calls to embed('', as_buffer=True) and embed_many('', as_buffer=True) will now encode as float16 +float16_bytes = vectorizer.embed('test sentence', as_buffer=True) + +# to generate embeddings with different dtype instantiate a new vectorizer +vectorizer_64 = HFTextVectorizer(dtype='float64') +float64_bytes = vectorizer_64.embed('test sentence', as_buffer=True) + +float16_bytes != float64_bytes +``` + + +```python +# cleanup +index.delete() +``` +--- +linkTitle: Semantic routing +title: Semantic Routing +type: integration +weight: 08 +--- + + +RedisVL provides a `SemanticRouter` interface to utilize Redis' built-in search & aggregation in order to perform +KNN-style classification over a set of `Route` references to determine the best match. + +This notebook will go over how to use Redis as a Semantic Router for your applications + +## Define the Routes + +Below we define 3 different routes. One for `technology`, one for `sports`, and +another for `entertainment`. Now for this example, the goal here is +surely topic "classification". But you can create routes and references for +almost anything. + +Each route has a set of references that cover the "semantic surface area" of the +route. The incoming query from a user needs to be semantically similar to one or +more of the references in order to "match" on the route. + +Additionally, each route has a `distance_threshold` which determines the maximum distance between the query and the reference for the query to be routed to the route. This value is unique to each route. + + +```python +from redisvl.extensions.router import Route + + +# Define routes for the semantic router +technology = Route( + name="technology", + references=[ + "what are the latest advancements in AI?", + "tell me about the newest gadgets", + "what's trending in tech?" + ], + metadata={"category": "tech", "priority": 1}, + distance_threshold=0.71 +) + +sports = Route( + name="sports", + references=[ + "who won the game last night?", + "tell me about the upcoming sports events", + "what's the latest in the world of sports?", + "sports", + "basketball and football" + ], + metadata={"category": "sports", "priority": 2}, + distance_threshold=0.72 +) + +entertainment = Route( + name="entertainment", + references=[ + "what are the top movies right now?", + "who won the best actor award?", + "what's new in the entertainment industry?" + ], + metadata={"category": "entertainment", "priority": 3}, + distance_threshold=0.7 +) + +``` + +## Initialize the SemanticRouter + +``SemanticRouter`` will automatically create an index within Redis upon initialization for the route references. By default, it uses the `HFTextVectorizer` to +generate embeddings for each route reference. 
+ + +```python +import os +from redisvl.extensions.router import SemanticRouter +from redisvl.utils.vectorize import HFTextVectorizer + +os.environ["TOKENIZERS_PARALLELISM"] = "false" + +# Initialize the SemanticRouter +router = SemanticRouter( + name="topic-router", + vectorizer=HFTextVectorizer(), + routes=[technology, sports, entertainment], + redis_url="redis://localhost:6379", + overwrite=True # Blow away any other routing index with this name +) +``` + + 19:18:32 sentence_transformers.SentenceTransformer INFO Use pytorch device_name: mps + 19:18:32 sentence_transformers.SentenceTransformer INFO Load pretrained SentenceTransformer: sentence-transformers/all-mpnet-base-v2 + + + Batches: 100%|██████████| 1/1 [00:00<00:00, 17.78it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 37.43it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 27.28it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 48.76it/s] + + + +```python +# look at the index specification created for the semantic router +!rvl index info -i topic-router +``` + + + + Index Information: + ╭──────────────────┬──────────────────┬──────────────────┬──────────────────┬──────────────────╮ + │ Index Name │ Storage Type │ Prefixes │ Index Options │ Indexing │ + ├──────────────────┼──────────────────┼──────────────────┼──────────────────┼──────────────────┤ + | topic-router | HASH | ['topic-router'] | [] | 0 | + ╰──────────────────┴──────────────────┴──────────────────┴──────────────────┴──────────────────╯ + Index Fields: + ╭─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────╮ + │ Name │ Attribute │ Type │ Field Option │ Option Value │ Field Option │ Option Value │ Field Option │ Option Value │ Field Option │ Option Value │ + ├─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┤ + │ reference_id │ reference_id │ TAG │ SEPARATOR │ , │ │ │ │ │ │ │ + │ route_name │ route_name │ TAG │ SEPARATOR │ , │ │ │ │ │ │ │ + │ reference │ reference │ TEXT │ WEIGHT │ 1 │ │ │ │ │ │ │ + │ vector │ vector │ VECTOR │ algorithm │ FLAT │ data_type │ FLOAT32 │ dim │ 768 │ distance_metric │ COSINE │ + ╰─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────╯ + + + +```python +router._index.info()["num_docs"] +``` + + + + + 11 + + + +## Simple routing + + +```python +# Query the router with a statement +route_match = router("Can you tell me about the latest in artificial intelligence?") +route_match +``` + + Batches: 100%|██████████| 1/1 [00:00<00:00, 6.40it/s] + + + + + + RouteMatch(name='technology', distance=0.419145842393) + + + + +```python +# Query the router with a statement and return a miss +route_match = router("are aliens real?") +route_match +``` + + Batches: 100%|██████████| 1/1 [00:00<00:00, 39.83it/s] + + + + + + RouteMatch(name=None, distance=None) + + + +We can also route a statement to many routes and order them by distance: + + +```python +# Perform multi-class classification with route_many() -- toggle the max_k and the distance_threshold +route_matches = router.route_many("How is AI used in basketball?", max_k=3) +route_matches +``` + + Batches: 100%|██████████| 1/1 [00:00<00:00, 40.50it/s] + + + + + + 
[RouteMatch(name='technology', distance=0.556493878365), + RouteMatch(name='sports', distance=0.671060125033)] + + + + +```python +# Toggle the aggregation method -- note the different distances in the result +from redisvl.extensions.router.schema import DistanceAggregationMethod + +route_matches = router.route_many("How is AI used in basketball?", aggregation_method=DistanceAggregationMethod.min, max_k=3) +route_matches +``` + + Batches: 100%|██████████| 1/1 [00:00<00:00, 66.18it/s] + + + + + + [RouteMatch(name='technology', distance=0.556493878365), + RouteMatch(name='sports', distance=0.629264354706)] + + + +Note the different route match distances. This is because we used the `min` aggregation method instead of the default `avg` approach. + +## Update the routing config + + +```python +from redisvl.extensions.router import RoutingConfig + +router.update_routing_config( + RoutingConfig(aggregation_method=DistanceAggregationMethod.min, max_k=3) +) +``` + + +```python +route_matches = router.route_many("Lebron James") +route_matches +``` + + Batches: 100%|██████████| 1/1 [00:00<00:00, 41.89it/s] + + + + + + [RouteMatch(name='sports', distance=0.663254022598)] + + + +## Router serialization + + +```python +router.to_dict() +``` + + + + + {'name': 'topic-router', + 'routes': [{'name': 'technology', + 'references': ['what are the latest advancements in AI?', + 'tell me about the newest gadgets', + "what's trending in tech?"], + 'metadata': {'category': 'tech', 'priority': 1}, + 'distance_threshold': 0.71}, + {'name': 'sports', + 'references': ['who won the game last night?', + 'tell me about the upcoming sports events', + "what's the latest in the world of sports?", + 'sports', + 'basketball and football'], + 'metadata': {'category': 'sports', 'priority': 2}, + 'distance_threshold': 0.72}, + {'name': 'entertainment', + 'references': ['what are the top movies right now?', + 'who won the best actor award?', + "what's new in the entertainment industry?"], + 'metadata': {'category': 'entertainment', 'priority': 3}, + 'distance_threshold': 0.7}], + 'vectorizer': {'type': 'hf', + 'model': 'sentence-transformers/all-mpnet-base-v2'}, + 'routing_config': {'max_k': 3, 'aggregation_method': 'min'}} + + + + +```python +router2 = SemanticRouter.from_dict(router.to_dict(), redis_url="redis://localhost:6379") + +assert router2.to_dict() == router.to_dict() +``` + + 19:18:38 sentence_transformers.SentenceTransformer INFO Use pytorch device_name: mps + 19:18:38 sentence_transformers.SentenceTransformer INFO Load pretrained SentenceTransformer: sentence-transformers/all-mpnet-base-v2 + + + Batches: 100%|██████████| 1/1 [00:00<00:00, 54.94it/s] + + 19:18:40 redisvl.index.index INFO Index already exists, not overwriting. + + + + + + +```python +router.to_yaml("router.yaml", overwrite=True) +``` + + +```python +router3 = SemanticRouter.from_yaml("router.yaml", redis_url="redis://localhost:6379") + +assert router3.to_dict() == router2.to_dict() == router.to_dict() +``` + + 19:18:40 sentence_transformers.SentenceTransformer INFO Use pytorch device_name: mps + 19:18:40 sentence_transformers.SentenceTransformer INFO Load pretrained SentenceTransformer: sentence-transformers/all-mpnet-base-v2 + + + Batches: 100%|██████████| 1/1 [00:00<00:00, 18.77it/s] + + 19:18:41 redisvl.index.index INFO Index already exists, not overwriting. 
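Once the router is configured (whether built in code or loaded from a dict or YAML file), the returned `RouteMatch` can drive application logic directly. A minimal sketch, where the handler functions are hypothetical placeholders for your own code:

```python
# hypothetical handlers -- stand-ins for real application logic
def handle_technology(query: str): print(f"tech pipeline: {query}")
def handle_sports(query: str): print(f"sports pipeline: {query}")
def handle_entertainment(query: str): print(f"entertainment pipeline: {query}")

handlers = {
    "technology": handle_technology,
    "sports": handle_sports,
    "entertainment": handle_entertainment,
}

user_query = "what's new in AI hardware?"
match = router(user_query)

if match.name is not None:
    handlers[match.name](user_query)
else:
    # no route fell within its distance threshold -- use a fallback path
    print("No matching route; using a generic handler.")
```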
+ + + + + +# Add route references + + +```python +router.add_route_references(route_name="technology", references=["latest AI trends", "new tech gadgets"]) +``` + + Batches: 100%|██████████| 1/1 [00:00<00:00, 13.22it/s] + + + + + + ['topic-router:technology:f243fb2d073774e81c7815247cb3013794e6225df3cbe3769cee8c6cefaca777', + 'topic-router:technology:7e4bca5853c1c3298b4d001de13c3c7a79a6e0f134f81acc2e7cddbd6845961f'] + + + +# Get route references + + +```python +# by route name +refs = router.get_route_references(route_name="technology") +refs +``` + + + + + [{'id': 'topic-router:technology:7e4bca5853c1c3298b4d001de13c3c7a79a6e0f134f81acc2e7cddbd6845961f', + 'reference_id': '7e4bca5853c1c3298b4d001de13c3c7a79a6e0f134f81acc2e7cddbd6845961f', + 'route_name': 'technology', + 'reference': 'new tech gadgets'}, + {'id': 'topic-router:technology:f243fb2d073774e81c7815247cb3013794e6225df3cbe3769cee8c6cefaca777', + 'reference_id': 'f243fb2d073774e81c7815247cb3013794e6225df3cbe3769cee8c6cefaca777', + 'route_name': 'technology', + 'reference': 'latest AI trends'}, + {'id': 'topic-router:technology:851f51cce5a9ccfbbcb66993908be6b7871479af3e3a4b139ad292a1bf7e0676', + 'reference_id': '851f51cce5a9ccfbbcb66993908be6b7871479af3e3a4b139ad292a1bf7e0676', + 'route_name': 'technology', + 'reference': 'what are the latest advancements in AI?'}, + {'id': 'topic-router:technology:149a9c9919c58534aa0f369e85ad95ba7f00aa0513e0f81e2aff2ea4a717b0e0', + 'reference_id': '149a9c9919c58534aa0f369e85ad95ba7f00aa0513e0f81e2aff2ea4a717b0e0', + 'route_name': 'technology', + 'reference': "what's trending in tech?"}, + {'id': 'topic-router:technology:85cc73a1437df27caa2f075a29c497e5a2e532023fbb75378aedbae80779ab37', + 'reference_id': '85cc73a1437df27caa2f075a29c497e5a2e532023fbb75378aedbae80779ab37', + 'route_name': 'technology', + 'reference': 'tell me about the newest gadgets'}] + + + + +```python +# by reference id +refs = router.get_route_references(reference_ids=[refs[0]["reference_id"]]) +refs +``` + + + + + [{'id': 'topic-router:technology:7e4bca5853c1c3298b4d001de13c3c7a79a6e0f134f81acc2e7cddbd6845961f', + 'reference_id': '7e4bca5853c1c3298b4d001de13c3c7a79a6e0f134f81acc2e7cddbd6845961f', + 'route_name': 'technology', + 'reference': 'new tech gadgets'}] + + + +# Delete route references + + +```python +# by route name +deleted_count = router.delete_route_references(route_name="sports") +deleted_count +``` + + + + + 5 + + + + +```python +# by id +deleted_count = router.delete_route_references(reference_ids=[refs[0]["reference_id"]]) +deleted_count +``` + + + + + 1 + + + +## Clean up the router + + +```python +# Use clear to flush all routes from the index +router.clear() +``` + + +```python +# Use delete to clear the index and remove it completely +router.delete() +``` +--- +linkTitle: Hash vs JSON storage +title: Hash vs JSON Storage +type: integration +weight: 05 +--- + + + +Out of the box, Redis provides a [variety of data structures](https://redis.com/redis-enterprise/data-structures/) that can adapt to your domain specific applications and use cases. +In this notebook, we will demonstrate how to use RedisVL with both [Hash](https://redis.io/docs/data-types/hashes/) and [JSON](https://redis.io/docs/data-types/json/) data. + + +Before running this notebook, be sure to +1. Have installed ``redisvl`` and have that environment active for this notebook. +2. Have a running Redis Stack or Redis Enterprise instance with RediSearch > 2.4 activated. 
+ +For example, you can run [Redis Stack](https://redis.io/docs/install/install-stack/) locally with Docker: + +```bash +docker run -d -p 6379:6379 -p 8001:8001 redis/redis-stack:latest +``` + +Or create a [FREE Redis Cloud](https://redis.io/cloud). + + +```python +# import necessary modules +import pickle + +from redisvl.redis.utils import buffer_to_array +from redisvl.index import SearchIndex + + +# load in the example data and printing utils +data = pickle.load(open("hybrid_example_data.pkl", "rb")) +``` + + +```python +from jupyterutils import result_print, table_print + +table_print(data) +``` + + +
| user | age | job | credit_score | office_location | user_embedding |
| :-- | :-- | :-- | :-- | :-- | :-- |
| john | 18 | engineer | high | -122.4194,37.7749 | `b'\xcd\xcc\xcc=\xcd\xcc\xcc=\x00\x00\x00?'` |
| derrick | 14 | doctor | low | -122.4194,37.7749 | `b'\xcd\xcc\xcc=\xcd\xcc\xcc=\x00\x00\x00?'` |
| nancy | 94 | doctor | high | -122.4194,37.7749 | `b'333?\xcd\xcc\xcc=\x00\x00\x00?'` |
| tyler | 100 | engineer | high | -122.0839,37.3861 | `b'\xcd\xcc\xcc=\xcd\xcc\xcc>\x00\x00\x00?'` |
| tim | 12 | dermatologist | high | -122.0839,37.3861 | `b'\xcd\xcc\xcc>\xcd\xcc\xcc>\x00\x00\x00?'` |
| taimur | 15 | CEO | low | -122.0839,37.3861 | `b'\x9a\x99\x19?\xcd\xcc\xcc=\x00\x00\x00?'` |
| joe | 35 | dentist | medium | -122.0839,37.3861 | `b'fff?fff?\xcd\xcc\xcc='` |
+ + +## Hash or JSON -- how to choose? +Both storage options offer a variety of features and tradeoffs. Below we will work through a dummy dataset to learn when and how to use both. + +### Working with Hashes +Hashes in Redis are simple collections of field-value pairs. Think of it like a mutable single-level dictionary contains multiple "rows": + + +```python +{ + "model": "Deimos", + "brand": "Ergonom", + "type": "Enduro bikes", + "price": 4972, +} +``` + +Hashes are best suited for use cases with the following characteristics: +- Performance (speed) and storage space (memory consumption) are top concerns +- Data can be easily normalized and modeled as a single-level dict + +Hashes are typically the default recommendation. + + +```python +# define the hash index schema +hash_schema = { + "index": { + "name": "user-hash", + "prefix": "user-hash-docs", + "storage_type": "hash", # default setting -- HASH + }, + "fields": [ + {"name": "user", "type": "tag"}, + {"name": "credit_score", "type": "tag"}, + {"name": "job", "type": "text"}, + {"name": "age", "type": "numeric"}, + {"name": "office_location", "type": "geo"}, + { + "name": "user_embedding", + "type": "vector", + "attrs": { + "dims": 3, + "distance_metric": "cosine", + "algorithm": "flat", + "datatype": "float32" + } + + } + ], +} +``` + + +```python +# construct a search index from the hash schema +hindex = SearchIndex.from_dict(hash_schema, redis_url="redis://localhost:6379") + +# create the index (no data yet) +hindex.create(overwrite=True) +``` + + +```python +# show the underlying storage type +hindex.storage_type +``` + + + + + + + + +#### Vectors as byte strings +One nuance when working with Hashes in Redis, is that all vectorized data must be passed as a byte string (for efficient storage, indexing, and processing). 
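If your raw vectors are Python lists or NumPy arrays, convert them with NumPy's `tobytes()` before loading (the reverse conversion is available via `buffer_to_array`, imported above). A minimal sketch of the conversion:

```python
import numpy as np

# pack a float32 vector into the byte string format expected by HASH storage
vector = [0.1, 0.1, 0.5]
vector_bytes = np.array(vector, dtype=np.float32).tobytes()
```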
An example of that can be seen below: + + +```python +# show a single entry from the data that will be loaded +data[0] +``` + + + + + {'user': 'john', + 'age': 18, + 'job': 'engineer', + 'credit_score': 'high', + 'office_location': '-122.4194,37.7749', + 'user_embedding': b'\xcd\xcc\xcc=\xcd\xcc\xcc=\x00\x00\x00?'} + + + + +```python +# load hash data +keys = hindex.load(data) +``` + + +```python +!rvl stats -i user-hash +``` + + + Statistics: + ╭─────────────────────────────┬─────────────╮ + │ Stat Key │ Value │ + ├─────────────────────────────┼─────────────┤ + │ num_docs │ 7 │ + │ num_terms │ 6 │ + │ max_doc_id │ 7 │ + │ num_records │ 44 │ + │ percent_indexed │ 1 │ + │ hash_indexing_failures │ 0 │ + │ number_of_uses │ 1 │ + │ bytes_per_record_avg │ 3.40909 │ + │ doc_table_size_mb │ 0.000767708 │ + │ inverted_sz_mb │ 0.000143051 │ + │ key_table_size_mb │ 0.000248909 │ + │ offset_bits_per_record_avg │ 8 │ + │ offset_vectors_sz_mb │ 8.58307e-06 │ + │ offsets_per_term_avg │ 0.204545 │ + │ records_per_doc_avg │ 6.28571 │ + │ sortable_values_size_mb │ 0 │ + │ total_indexing_time │ 1.053 │ + │ total_inverted_index_blocks │ 18 │ + │ vector_index_sz_mb │ 0.0202332 │ + ╰─────────────────────────────┴─────────────╯ + + +#### Performing Queries +Once our index is created and data is loaded into the right format, we can run queries against the index with RedisVL: + + +```python +from redisvl.query import VectorQuery +from redisvl.query.filter import Tag, Text, Num + +t = (Tag("credit_score") == "high") & (Text("job") % "enginee*") & (Num("age") > 17) + +v = VectorQuery( + vector=[0.1, 0.1, 0.5], + vector_field_name="user_embedding", + return_fields=["user", "credit_score", "age", "job", "office_location"], + filter_expression=t +) + + +results = hindex.query(v) +result_print(results) + +``` + + +
| vector_distance | user | credit_score | age | job | office_location |
| :-- | :-- | :-- | :-- | :-- | :-- |
| 0 | john | high | 18 | engineer | -122.4194,37.7749 |
| 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
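Filter expressions can be combined further with `&` (AND) and `|` (OR) before attaching them to a query. A minimal sketch, reusing the sample data loaded above (assuming the `|` operator for OR, as in the RedisVL filter API):

```python
# users with a low OR medium credit score who are younger than 20
low_or_medium = (Tag("credit_score") == "low") | (Tag("credit_score") == "medium")

v2 = VectorQuery(
    vector=[0.1, 0.1, 0.5],
    vector_field_name="user_embedding",
    return_fields=["user", "credit_score", "age", "job"],
    filter_expression=low_or_medium & (Num("age") < 20),
)

result_print(hindex.query(v2))
```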
+ + + +```python +# clean up +hindex.delete() + +``` + +### Working with JSON + +JSON is best suited for use cases with the following characteristics: +- Ease of use and data model flexibility are top concerns +- Application data is already native JSON +- Replacing another document storage/db solution + + +```python +# define the json index schema +json_schema = { + "index": { + "name": "user-json", + "prefix": "user-json-docs", + "storage_type": "json", # JSON storage type + }, + "fields": [ + {"name": "user", "type": "tag"}, + {"name": "credit_score", "type": "tag"}, + {"name": "job", "type": "text"}, + {"name": "age", "type": "numeric"}, + {"name": "office_location", "type": "geo"}, + { + "name": "user_embedding", + "type": "vector", + "attrs": { + "dims": 3, + "distance_metric": "cosine", + "algorithm": "flat", + "datatype": "float32" + } + + } + ], +} +``` + + +```python +# construct a search index from the json schema +jindex = SearchIndex.from_dict(json_schema, redis_url="redis://localhost:6379") + +# create the index (no data yet) +jindex.create(overwrite=True) +``` + + +```python +# note the multiple indices in the same database +!rvl index listall +``` + + 11:54:18 [RedisVL] INFO Indices: + 11:54:18 [RedisVL] INFO 1. user-json + + +#### Vectors as float arrays +Vectorized data stored in JSON must be stored as a pure array (python list) of floats. We will modify our sample data to account for this below: + + +```python +json_data = data.copy() + +for d in json_data: + d['user_embedding'] = buffer_to_array(d['user_embedding'], dtype='float32') +``` + + +```python +# inspect a single JSON record +json_data[0] +``` + + + + + {'user': 'john', + 'age': 18, + 'job': 'engineer', + 'credit_score': 'high', + 'office_location': '-122.4194,37.7749', + 'user_embedding': [0.10000000149011612, 0.10000000149011612, 0.5]} + + + + +```python +keys = jindex.load(json_data) +``` + + +```python +# we can now run the exact same query as above +result_print(jindex.query(v)) +``` + + +
| vector_distance | user | credit_score | age | job | office_location |
| :-- | :-- | :-- | :-- | :-- | :-- |
| 0 | john | high | 18 | engineer | -122.4194,37.7749 |
| 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
+ + +## Cleanup + + +```python +jindex.delete() +``` + +# Working with nested data in JSON + +Redis also supports native **JSON** objects. These can be multi-level (nested) objects, with full JSONPath support for updating/retrieving sub elements: + +```json +{ + "name": "Specialized Stump jumper", + "metadata": { + "model": "Stumpjumper", + "brand": "Specialized", + "type": "Enduro bikes", + "price": 3000 + }, +} +``` + +#### Full JSON Path support +Because Redis enables full JSON path support, when creating an index schema, elements need to be indexed and selected by their path with the desired `name` AND `path` that points to where the data is located within the objects. + +By default, RedisVL will assume the path as `$.{name}` if not provided in JSON fields schema. If nested provide path as `$.object.attribute` + +### As an example: + + +```python +from redisvl.utils.vectorize import HFTextVectorizer + +emb_model = HFTextVectorizer() + +bike_data = [ + { + "name": "Specialized Stump jumper", + "metadata": { + "model": "Stumpjumper", + "brand": "Specialized", + "type": "Enduro bikes", + "price": 3000 + }, + "description": "The Specialized Stumpjumper is a versatile enduro bike that dominates both climbs and descents. Features a FACT 11m carbon fiber frame, FOX FLOAT suspension with 160mm travel, and SRAM X01 Eagle drivetrain. The asymmetric frame design and internal storage compartment make it a practical choice for all-day adventures." + }, + { + "name": "bike_2", + "metadata": { + "model": "Slash", + "brand": "Trek", + "type": "Enduro bikes", + "price": 5000 + }, + "description": "Trek's Slash is built for aggressive enduro riding and racing. Featuring Trek's Alpha Aluminum frame with RE:aktiv suspension technology, 160mm travel, and Knock Block frame protection. Equipped with Bontrager components and a Shimano XT drivetrain, this bike excels on technical trails and enduro race courses." + } +] + +bike_data = [{**d, "bike_embedding": emb_model.embed(d["description"])} for d in bike_data] + +bike_schema = { + "index": { + "name": "bike-json", + "prefix": "bike-json", + "storage_type": "json", # JSON storage type + }, + "fields": [ + { + "name": "model", + "type": "tag", + "path": "$.metadata.model" # note the '$' + }, + { + "name": "brand", + "type": "tag", + "path": "$.metadata.brand" + }, + { + "name": "price", + "type": "numeric", + "path": "$.metadata.price" + }, + { + "name": "bike_embedding", + "type": "vector", + "attrs": { + "dims": len(bike_data[0]["bike_embedding"]), + "distance_metric": "cosine", + "algorithm": "flat", + "datatype": "float32" + } + + } + ], +} +``` + + /Users/robert.shelton/.pyenv/versions/3.11.9/lib/python3.11/site-packages/huggingface_hub/file_download.py:1142: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. 
+ warnings.warn( + + + +```python +# construct a search index from the json schema +bike_index = SearchIndex.from_dict(bike_schema, redis_url="redis://localhost:6379") + +# create the index (no data yet) +bike_index.create(overwrite=True) +``` + + +```python +bike_index.load(bike_data) +``` + + + + + ['bike-json:de92cb9955434575b20f4e87a30b03d5', + 'bike-json:054ab3718b984532b924946fa5ce00c6'] + + + + +```python +from redisvl.query import VectorQuery + +vec = emb_model.embed("I'd like a bike for aggressive riding") + +v = VectorQuery( + vector=vec, + vector_field_name="bike_embedding", + return_fields=[ + "brand", + "name", + "$.metadata.type" + ] +) + + +results = bike_index.query(v) +``` + +**Note:** As shown in the example if you want to retrieve a field from json object that was not indexed you will also need to supply the full path as with `$.metadata.type`. + + +```python +results +``` + + + + + [{'id': 'bike-json:054ab3718b984532b924946fa5ce00c6', + 'vector_distance': '0.519989073277', + 'brand': 'Trek', + '$.metadata.type': 'Enduro bikes'}, + {'id': 'bike-json:de92cb9955434575b20f4e87a30b03d5', + 'vector_distance': '0.657624483109', + 'brand': 'Specialized', + '$.metadata.type': 'Enduro bikes'}] + + + +# Cleanup + + +```python +bike_index.delete() +``` +--- +linkTitle: Rerankers +title: Rerankers +type: integration +weight: 06 +--- + + +In this notebook, we will show how to use RedisVL to rerank search results +(documents or chunks or records) based on the input query. Today RedisVL +supports reranking through: + +- A re-ranker that uses pre-trained [Cross-Encoders](https://sbert.net/examples/applications/cross-encoder/README.html) which can use models from [Hugging Face cross encoder models](https://huggingface.co/cross-encoder) or Hugging Face models that implement a cross encoder function ([example: BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base)). +- The [Cohere /rerank API](https://docs.cohere.com/docs/rerank-2). +- The [VoyageAI /rerank API](https://docs.voyageai.com/docs/reranker). + +Before running this notebook, be sure to: +1. Have installed ``redisvl`` and have that environment active for this notebook. +2. Have a running Redis Stack instance with RediSearch > 2.4 active. + +For example, you can run Redis Stack locally with Docker: + +```bash +docker run -d -p 6379:6379 -p 8001:8001 redis/redis-stack:latest +``` + +This will run Redis on port 6379 and RedisInsight at http://localhost:8001. + + +```python +# import necessary modules +import os +``` + +## Simple Reranking + +Reranking provides a relevance boost to search results generated by +traditional (lexical) or semantic search strategies. + +As a simple demonstration, take the passages and user query below: + + +```python +query = "What is the capital of the United States?" +docs = [ + "Carson City is the capital city of the American state of Nevada. At the 2010 United States Census, Carson City had a population of 55,274.", + "The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean that are a political division controlled by the United States. Its capital is Saipan.", + "Charlotte Amalie is the capital and largest city of the United States Virgin Islands. It has about 20,000 people. The city is on the island of Saint Thomas.", + "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. 
The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America.", + "Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states. The federal government (including the United States military) also uses capital punishment." +] +``` + +The goal of reranking is to provide a more fine-grained quality improvement to +initial search results. With RedisVL, this would likely be results coming back +from a search operation like full text or vector. + +### Using the Cross-Encoder Reranker + +To use the cross-encoder reranker we initialize an instance of `HFCrossEncoderReranker` passing a suitable model (if no model is provided, the `cross-encoder/ms-marco-MiniLM-L-6-v2` model is used): + + +```python +from redisvl.utils.rerank import HFCrossEncoderReranker + +cross_encoder_reranker = HFCrossEncoderReranker("BAAI/bge-reranker-base") +``` + +### Rerank documents with HFCrossEncoderReranker + +With the obtained reranker instance we can rerank and truncate the list of +documents based on relevance to the initial query. + + +```python +results, scores = cross_encoder_reranker.rank(query=query, docs=docs) +``` + + +```python +for result, score in zip(results, scores): + print(score, " -- ", result) +``` + + 0.07461125403642654 -- {'content': 'Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America.'} + 0.05220315232872963 -- {'content': 'Charlotte Amalie is the capital and largest city of the United States Virgin Islands. It has about 20,000 people. The city is on the island of Saint Thomas.'} + 0.3802368640899658 -- {'content': 'Carson City is the capital city of the American state of Nevada. At the 2010 United States Census, Carson City had a population of 55,274.'} + + +### Using the Cohere Reranker + +To initialize the Cohere reranker you'll need to install the cohere library and provide the right Cohere API Key. + + +```python +#!pip install cohere +``` + + +```python +import getpass + +# setup the API Key +api_key = os.environ.get("COHERE_API_KEY") or getpass.getpass("Enter your Cohere API key: ") +``` + + +```python +from redisvl.utils.rerank import CohereReranker + +cohere_reranker = CohereReranker(limit=3, api_config={"api_key": api_key}) +``` + +### Rerank documents with CohereReranker + +Below we will use the `CohereReranker` to rerank and truncate the list of +documents above based on relevance to the initial query. + + +```python +results, scores = cohere_reranker.rank(query=query, docs=docs) +``` + + +```python +for result, score in zip(results, scores): + print(score, " -- ", result) +``` + + 0.9990564 -- Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America. + 0.7516481 -- Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states. 
The federal government (including the United States military) also uses capital punishment. + 0.08882029 -- The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean that are a political division controlled by the United States. Its capital is Saipan. + + +### Working with semi-structured documents + +Often times the initial result set includes other metadata and components that could be used to steer the reranking relevancy. To accomplish this, we can set the `rank_by` argument and provide documents with those additional fields. + + +```python +docs = [ + { + "source": "wiki", + "passage": "Carson City is the capital city of the American state of Nevada. At the 2010 United States Census, Carson City had a population of 55,274." + }, + { + "source": "encyclopedia", + "passage": "The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean that are a political division controlled by the United States. Its capital is Saipan." + }, + { + "source": "textbook", + "passage": "Charlotte Amalie is the capital and largest city of the United States Virgin Islands. It has about 20,000 people. The city is on the island of Saint Thomas." + }, + { + "source": "textbook", + "passage": "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America." + }, + { + "source": "wiki", + "passage": "Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states. The federal government (including the United States military) also uses capital punishment." + } +] +``` + + +```python +results, scores = cohere_reranker.rank(query=query, docs=docs, rank_by=["passage", "source"]) +``` + + +```python +for result, score in zip(results, scores): + print(score, " -- ", result) +``` + + 0.9988121 -- {'source': 'textbook', 'passage': 'Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America.'} + 0.5974905 -- {'source': 'wiki', 'passage': 'Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states. The federal government (including the United States military) also uses capital punishment.'} + 0.059101548 -- {'source': 'encyclopedia', 'passage': 'The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean that are a political division controlled by the United States. Its capital is Saipan.'} + + +### Using the VoyageAI Reranker + +To initialize the VoyageAI reranker you'll need to install the voyaeai library and provide the right VoyageAI API Key. 
+ + +```python +#!pip install voyageai +``` + + +```python +import getpass + +# setup the API Key +api_key = os.environ.get("VOYAGE_API_KEY") or getpass.getpass("Enter your VoyageAI API key: ") +``` + + +```python +from redisvl.utils.rerank import VoyageAIReranker + +reranker = VoyageAIReranker(model="rerank-lite-1", limit=3, api_config={"api_key": api_key})# Please check the available models at https://docs.voyageai.com/docs/reranker +``` + +### Rerank documents with VoyageAIReranker + +Below we will use the `VoyageAIReranker` to rerank and also truncate the list of +documents above based on relevance to the initial query. + + +```python +results, scores = reranker.rank(query=query, docs=docs) +``` + + +```python +for result, score in zip(results, scores): + print(score, " -- ", result) +``` + + 0.796875 -- Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America. + 0.578125 -- Charlotte Amalie is the capital and largest city of the United States Virgin Islands. It has about 20,000 people. The city is on the island of Saint Thomas. + 0.5625 -- Carson City is the capital city of the American state of Nevada. At the 2010 United States Census, Carson City had a population of 55,274. + +--- +linkTitle: LLM message history +title: LLM Message History +type: integration +weight: 07 +--- + + +Large Language Models are inherently stateless and have no knowledge of previous interactions with a user, or even of previous parts of the current conversation. While this may not be noticable when asking simple questions, it becomes a hinderance when engaging in long running conversations that rely on conversational context. + +The solution to this problem is to append the previous conversation history to each subsequent call to the LLM. + +This notebook will show how to use Redis to structure and store and retrieve this conversational message history. + + +```python +from redisvl.extensions.message_history import MessageHistory +chat_history = MessageHistory(name='student tutor') +``` + + 12:24:11 redisvl.index.index INFO Index already exists, not overwriting. + + +To align with common LLM APIs, Redis stores messages with `role` and `content` fields. +The supported roles are "system", "user" and "llm". + +You can store messages one at a time or all at once. + + +```python +chat_history.add_message({"role":"system", "content":"You are a helpful geography tutor, giving simple and short answers to questions about European countries."}) +chat_history.add_messages([ + {"role":"user", "content":"What is the capital of France?"}, + {"role":"llm", "content":"The capital is Paris."}, + {"role":"user", "content":"And what is the capital of Spain?"}, + {"role":"llm", "content":"The capital is Madrid."}, + {"role":"user", "content":"What is the population of Great Britain?"}, + {"role":"llm", "content":"As of 2023 the population of Great Britain is approximately 67 million people."},] + ) +``` + +At any point we can retrieve the recent history of the conversation. It will be ordered by entry time. 
+ + +```python +context = chat_history.get_recent() +for message in context: + print(message) +``` + + {'role': 'llm', 'content': 'The capital is Paris.'} + {'role': 'user', 'content': 'And what is the capital of Spain?'} + {'role': 'llm', 'content': 'The capital is Madrid.'} + {'role': 'user', 'content': 'What is the population of Great Britain?'} + {'role': 'llm', 'content': 'As of 2023 the population of Great Britain is approximately 67 million people.'} + + +In many LLM flows the conversation progresses in a series of prompt and response pairs. Message history offer a convenience function `store()` to add these simply. + + +```python +prompt = "what is the size of England compared to Portugal?" +response = "England is larger in land area than Portal by about 15000 square miles." +chat_history.store(prompt, response) + +context = chat_history.get_recent(top_k=6) +for message in context: + print(message) +``` + + {'role': 'user', 'content': 'And what is the capital of Spain?'} + {'role': 'llm', 'content': 'The capital is Madrid.'} + {'role': 'user', 'content': 'What is the population of Great Britain?'} + {'role': 'llm', 'content': 'As of 2023 the population of Great Britain is approximately 67 million people.'} + {'role': 'user', 'content': 'what is the size of England compared to Portugal?'} + {'role': 'llm', 'content': 'England is larger in land area than Portal by about 15000 square miles.'} + + +## Managing multiple users and conversations + +For applications that need to handle multiple conversations concurrently, Redis supports tagging messages to keep conversations separated. + + +```python +chat_history.add_message({"role":"system", "content":"You are a helpful algebra tutor, giving simple answers to math problems."}, session_tag='student two') +chat_history.add_messages([ + {"role":"user", "content":"What is the value of x in the equation 2x + 3 = 7?"}, + {"role":"llm", "content":"The value of x is 2."}, + {"role":"user", "content":"What is the value of y in the equation 3y - 5 = 7?"}, + {"role":"llm", "content":"The value of y is 4."}], + session_tag='student two' + ) + +for math_message in chat_history.get_recent(session_tag='student two'): + print(math_message) +``` + + {'role': 'system', 'content': 'You are a helpful algebra tutor, giving simple answers to math problems.'} + {'role': 'user', 'content': 'What is the value of x in the equation 2x + 3 = 7?'} + {'role': 'llm', 'content': 'The value of x is 2.'} + {'role': 'user', 'content': 'What is the value of y in the equation 3y - 5 = 7?'} + {'role': 'llm', 'content': 'The value of y is 4.'} + + +## Semantic message history +For longer conversations our list of messages keeps growing. Since LLMs are stateless we have to continue to pass this conversation history on each subsequent call to ensure the LLM has the correct context. + +A typical flow looks like this: +``` +while True: + prompt = input('enter your next question') + context = chat_history.get_recent() + response = LLM_api_call(prompt=prompt, context=context) + chat_history.store(prompt, response) +``` + +This works, but as context keeps growing so too does our LLM token count, which increases latency and cost. + +Conversation histories can be truncated, but that can lead to losing relevant information that appeared early on. + +A better solution is to pass only the relevant conversational context on each subsequent call. 
+ +For this, RedisVL has the `SemanticMessageHistory`, which uses vector similarity search to return only semantically relevant sections of the conversation. + + +```python +from redisvl.extensions.message_history import SemanticMessageHistory +semantic_history = SemanticMessageHistory(name='tutor') + +semantic_history.add_messages(chat_history.get_recent(top_k=8)) +``` + + 12:24:15 redisvl.index.index INFO Index already exists, not overwriting. + + + +```python +prompt = "what have I learned about the size of England?" +semantic_history.set_distance_threshold(0.35) +context = semantic_history.get_relevant(prompt) +for message in context: + print(message) +``` + + {'role': 'user', 'content': 'what is the size of England compared to Portugal?'} + {'role': 'llm', 'content': 'England is larger in land area than Portal by about 15000 square miles.'} + + +You can adjust the degree of semantic similarity needed to be included in your context. + +Setting a distance threshold close to 0.0 will require an exact semantic match, while a distance threshold of 1.0 will include everthing. + + +```python +semantic_history.set_distance_threshold(0.7) + +larger_context = semantic_history.get_relevant(prompt) +for message in larger_context: + print(message) +``` + + {'role': 'user', 'content': 'what is the size of England compared to Portugal?'} + {'role': 'llm', 'content': 'England is larger in land area than Portal by about 15000 square miles.'} + {'role': 'user', 'content': 'What is the population of Great Britain?'} + {'role': 'llm', 'content': 'As of 2023 the population of Great Britain is approximately 67 million people.'} + + +## Conversation control + +LLMs can hallucinate on occasion and when this happens it can be useful to prune incorrect information from conversational histories so this incorrect information doesn't continue to be passed as context. + + +```python +semantic_history.store( + prompt="what is the smallest country in Europe?", + response="Monaco is the smallest country in Europe at 0.78 square miles." # Incorrect. Vatican City is the smallest country in Europe + ) + +# get the key of the incorrect message +context = semantic_history.get_recent(top_k=1, raw=True) +bad_key = context[0]['entry_id'] +semantic_history.drop(bad_key) + +corrected_context = semantic_history.get_recent() +for message in corrected_context: + print(message) +``` + + {'role': 'user', 'content': 'What is the population of Great Britain?'} + {'role': 'llm', 'content': 'As of 2023 the population of Great Britain is approximately 67 million people.'} + {'role': 'user', 'content': 'what is the size of England compared to Portugal?'} + {'role': 'llm', 'content': 'England is larger in land area than Portal by about 15000 square miles.'} + {'role': 'user', 'content': 'what is the smallest country in Europe?'} + + + +```python +chat_history.clear() +semantic_history.clear() +``` +--- +linkTitle: Getting started with RedisVL +title: Getting Started with RedisVL +type: integration +weight: 01 +--- + +`redisvl` is a versatile Python library with an integrated CLI, designed to enhance AI applications using Redis. This guide will walk you through the following steps: + +1. Defining an `IndexSchema` +2. Preparing a sample dataset +3. Creating a `SearchIndex` object +4. Testing `rvl` CLI functionality +5. Loading the sample data +6. Building `VectorQuery` objects and executing searches +7. Updating a `SearchIndex` object + +...and more! + +Prerequisites: +- Ensure `redisvl` is installed in your Python environment. 
+- Have a running instance of [Redis Stack](https://redis.io/docs/install/install-stack/) or [Redis Cloud](https://redis.io/cloud). + +_____ + +## Define an `IndexSchema` + +The `IndexSchema` maintains crucial **index configuration** and **field definitions** to +enable search with Redis. For ease of use, the schema can be constructed from a +python dictionary or yaml file. + +### Example Schema Creation +Consider a dataset with user information, including `job`, `age`, `credit_score`, +and a 3-dimensional `user_embedding` vector. + +You must also decide on a Redis index name and key prefix to use for this +dataset. Below are example schema definitions in both YAML and Dict format. + +**YAML Definition:** + +```yaml +version: '0.1.0' + +index: + name: user_simple + prefix: user_simple_docs + +fields: + - name: user + type: tag + - name: credit_score + type: tag + - name: job + type: text + - name: age + type: numeric + - name: user_embedding + type: vector + attrs: + algorithm: flat + dims: 3 + distance_metric: cosine + datatype: float32 +``` +Store this in a local file, such as `schema.yaml`, for RedisVL usage. + +**Python Dictionary:** + + +```python +schema = { + "index": { + "name": "user_simple", + "prefix": "user_simple_docs", + }, + "fields": [ + {"name": "user", "type": "tag"}, + {"name": "credit_score", "type": "tag"}, + {"name": "job", "type": "text"}, + {"name": "age", "type": "numeric"}, + { + "name": "user_embedding", + "type": "vector", + "attrs": { + "dims": 3, + "distance_metric": "cosine", + "algorithm": "flat", + "datatype": "float32" + } + } + ] +} +``` + +## Sample Dataset Preparation + +Below, create a mock dataset with `user`, `job`, `age`, `credit_score`, and +`user_embedding` fields. The `user_embedding` vectors are synthetic examples +for demonstration purposes. + +For more information on creating real-world embeddings, refer to this +[article](https://mlops.community/vector-similarity-search-from-basics-to-production/). + + +```python +import numpy as np + + +data = [ + { + 'user': 'john', + 'age': 1, + 'job': 'engineer', + 'credit_score': 'high', + 'user_embedding': np.array([0.1, 0.1, 0.5], dtype=np.float32).tobytes() + }, + { + 'user': 'mary', + 'age': 2, + 'job': 'doctor', + 'credit_score': 'low', + 'user_embedding': np.array([0.1, 0.1, 0.5], dtype=np.float32).tobytes() + }, + { + 'user': 'joe', + 'age': 3, + 'job': 'dentist', + 'credit_score': 'medium', + 'user_embedding': np.array([0.9, 0.9, 0.1], dtype=np.float32).tobytes() + } +] +``` + +As seen above, the sample `user_embedding` vectors are converted into bytes. Using the `NumPy`, this is fairly trivial. + +## Create a `SearchIndex` + +With the schema and sample dataset ready, create a `SearchIndex`. + +### Bring your own Redis connection instance + +This is ideal in scenarios where you have custom settings on the connection instance or if your application will share a connection pool: + + +```python +from redisvl.index import SearchIndex +from redis import Redis + +client = Redis.from_url("redis://localhost:6379") +index = SearchIndex.from_dict(schema, redis_client=client, validate_on_load=True) +``` + +### Let the index manage the connection instance + +This is ideal for simple cases: + + +```python +index = SearchIndex.from_dict(schema, redis_url="redis://localhost:6379", validate_on_load=True) + +# If you don't specify a client or Redis URL, the index will attempt to +# connect to Redis at the default address "redis://localhost:6379". 
+``` + +### Create the index + +Now that we are connected to Redis, we need to run the create command. + + +```python +index.create(overwrite=True) +``` + +Note that at this point, the index has no entries. Data loading follows. + +## Inspect with the `rvl` CLI +Use the `rvl` CLI to inspect the created index and its fields: + + +```python +!rvl index listall +``` + + 19:17:09 [RedisVL] INFO Indices: + 19:17:09 [RedisVL] INFO 1. user_simple + + + +```python +!rvl index info -i user_simple +``` + + + + Index Information: + ╭──────────────────────┬──────────────────────┬──────────────────────┬──────────────────────┬──────────────────────╮ + │ Index Name │ Storage Type │ Prefixes │ Index Options │ Indexing │ + ├──────────────────────┼──────────────────────┼──────────────────────┼──────────────────────┼──────────────────────┤ + | user_simple | HASH | ['user_simple_docs'] | [] | 0 | + ╰──────────────────────┴──────────────────────┴──────────────────────┴──────────────────────┴──────────────────────╯ + Index Fields: + ╭─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────╮ + │ Name │ Attribute │ Type │ Field Option │ Option Value │ Field Option │ Option Value │ Field Option │ Option Value │ Field Option │ Option Value │ + ├─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┤ + │ user │ user │ TAG │ SEPARATOR │ , │ │ │ │ │ │ │ + │ credit_score │ credit_score │ TAG │ SEPARATOR │ , │ │ │ │ │ │ │ + │ job │ job │ TEXT │ WEIGHT │ 1 │ │ │ │ │ │ │ + │ age │ age │ NUMERIC │ │ │ │ │ │ │ │ │ + │ user_embedding │ user_embedding │ VECTOR │ algorithm │ FLAT │ data_type │ FLOAT32 │ dim │ 3 │ distance_metric │ COSINE │ + ╰─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────╯ + + +## Load Data to `SearchIndex` + +Load the sample dataset to Redis. + +### Validate data entries on load +RedisVL uses pydantic validation under the hood to ensure loaded data is valid and confirms to your schema. This setting is optional and can be configured in the `SearchIndex` class. + + +```python +keys = index.load(data) + +print(keys) +``` + + ['user_simple_docs:01JT4PPPNJZMSK2395RKD208T9', 'user_simple_docs:01JT4PPPNM63J55ZESZ4TV1VR8', 'user_simple_docs:01JT4PPPNM59RCKS2YQ58B1HQW'] + + +By default, `load` will create a unique Redis key as a combination of the index key `prefix` and a random ULID. You can also customize the key by providing direct keys or pointing to a specified `id_field` on load. + +### Load invalid data +This will raise a `SchemaValidationError` if `validate_on_load` is set to true in the `SearchIndex` class. 
+ + +```python +# NBVAL_SKIP + +keys = index.load([{"user_embedding": True}]) +``` + + 19:17:21 redisvl.index.index ERROR Schema validation error while loading data + Traceback (most recent call last): + File "/Users/justin.cechmanek/Documents/redisvl/redisvl/index/storage.py", line 204, in _preprocess_and_validate_objects + processed_obj = self._validate(processed_obj) + File "/Users/justin.cechmanek/Documents/redisvl/redisvl/index/storage.py", line 160, in _validate + return validate_object(self.index_schema, obj) + File "/Users/justin.cechmanek/Documents/redisvl/redisvl/schema/validation.py", line 276, in validate_object + validated = model_class.model_validate(flat_obj) + File "/Users/justin.cechmanek/.pyenv/versions/3.13/envs/redisvl-dev/lib/python3.13/site-packages/pydantic/main.py", line 627, in model_validate + return cls.__pydantic_validator__.validate_python( + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ + obj, strict=strict, from_attributes=from_attributes, context=context + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + ) + ^ + pydantic_core._pydantic_core.ValidationError: 1 validation error for user_simple__PydanticModel + user_embedding + Input should be a valid bytes [type=bytes_type, input_value=True, input_type=bool] + For further information visit https://errors.pydantic.dev/2.10/v/bytes_type + + The above exception was the direct cause of the following exception: + + Traceback (most recent call last): + File "/Users/justin.cechmanek/Documents/redisvl/redisvl/index/index.py", line 686, in load + return self._storage.write( + ~~~~~~~~~~~~~~~~~~~^ + self._redis_client, # type: ignore + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + ...<6 lines>... + validate=self._validate_on_load, + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + ) + ^ + File "/Users/justin.cechmanek/Documents/redisvl/redisvl/index/storage.py", line 265, in write + prepared_objects = self._preprocess_and_validate_objects( + list(objects), # Convert Iterable to List + ...<3 lines>... 
+ validate=validate, + ) + File "/Users/justin.cechmanek/Documents/redisvl/redisvl/index/storage.py", line 211, in _preprocess_and_validate_objects + raise SchemaValidationError(str(e), index=i) from e + redisvl.exceptions.SchemaValidationError: Validation failed for object at index 0: 1 validation error for user_simple__PydanticModel + user_embedding + Input should be a valid bytes [type=bytes_type, input_value=True, input_type=bool] + For further information visit https://errors.pydantic.dev/2.10/v/bytes_type + + + + --------------------------------------------------------------------------- + + ValidationError Traceback (most recent call last) + + File ~/Documents/redisvl/redisvl/index/storage.py:204, in BaseStorage._preprocess_and_validate_objects(self, objects, id_field, keys, preprocess, validate) + 203 if validate: + --> 204 processed_obj = self._validate(processed_obj) + 206 # Store valid object with its key for writing + + + File ~/Documents/redisvl/redisvl/index/storage.py:160, in BaseStorage._validate(self, obj) + 159 # Pass directly to validation function and let any errors propagate + --> 160 return validate_object(self.index_schema, obj) + + + File ~/Documents/redisvl/redisvl/schema/validation.py:276, in validate_object(schema, obj) + 275 # Validate against model + --> 276 validated = model_class.model_validate(flat_obj) + 277 return validated.model_dump(exclude_none=True) + + + File ~/.pyenv/versions/3.13/envs/redisvl-dev/lib/python3.13/site-packages/pydantic/main.py:627, in BaseModel.model_validate(cls, obj, strict, from_attributes, context) + 626 __tracebackhide__ = True + --> 627 return cls.__pydantic_validator__.validate_python( + 628 obj, strict=strict, from_attributes=from_attributes, context=context + 629 ) + + + ValidationError: 1 validation error for user_simple__PydanticModel + user_embedding + Input should be a valid bytes [type=bytes_type, input_value=True, input_type=bool] + For further information visit https://errors.pydantic.dev/2.10/v/bytes_type + + + The above exception was the direct cause of the following exception: + + + SchemaValidationError Traceback (most recent call last) + + Cell In[31], line 3 + 1 # NBVAL_SKIP + ----> 3 keys = index.load([{"user_embedding": True}]) + + + File ~/Documents/redisvl/redisvl/index/index.py:686, in SearchIndex.load(self, data, id_field, keys, ttl, preprocess, batch_size) + 656 """Load objects to the Redis database. Returns the list of keys loaded + 657 to Redis. + 658 + (...) + 683 RedisVLError: If there's an error loading data to Redis. 
+ 684 """ + 685 try: + --> 686 return self._storage.write( + 687 self._redis_client, # type: ignore + 688 objects=data, + 689 id_field=id_field, + 690 keys=keys, + 691 ttl=ttl, + 692 preprocess=preprocess, + 693 batch_size=batch_size, + 694 validate=self._validate_on_load, + 695 ) + 696 except SchemaValidationError: + 697 # Pass through validation errors directly + 698 logger.exception("Schema validation error while loading data") + + + File ~/Documents/redisvl/redisvl/index/storage.py:265, in BaseStorage.write(self, redis_client, objects, id_field, keys, ttl, preprocess, batch_size, validate) + 262 return [] + 264 # Pass 1: Preprocess and validate all objects + --> 265 prepared_objects = self._preprocess_and_validate_objects( + 266 list(objects), # Convert Iterable to List + 267 id_field=id_field, + 268 keys=keys, + 269 preprocess=preprocess, + 270 validate=validate, + 271 ) + 273 # Pass 2: Write all valid objects in batches + 274 added_keys = [] + + + File ~/Documents/redisvl/redisvl/index/storage.py:211, in BaseStorage._preprocess_and_validate_objects(self, objects, id_field, keys, preprocess, validate) + 207 prepared_objects.append((key, processed_obj)) + 209 except ValidationError as e: + 210 # Convert Pydantic ValidationError to SchemaValidationError with index context + --> 211 raise SchemaValidationError(str(e), index=i) from e + 212 except Exception as e: + 213 # Capture other exceptions with context + 214 object_id = f"at index {i}" + + + SchemaValidationError: Validation failed for object at index 0: 1 validation error for user_simple__PydanticModel + user_embedding + Input should be a valid bytes [type=bytes_type, input_value=True, input_type=bool] + For further information visit https://errors.pydantic.dev/2.10/v/bytes_type + + +### Upsert the index with new data +Upsert data by using the `load` method again: + + +```python +# Add more data +new_data = [{ + 'user': 'tyler', + 'age': 9, + 'job': 'engineer', + 'credit_score': 'high', + 'user_embedding': np.array([0.1, 0.3, 0.5], dtype=np.float32).tobytes() +}] +keys = index.load(new_data) + +print(keys) +``` + + ['user_simple_docs:01JT4PPX63CH5YRN2BGEYB5TS2'] + + +## Creating `VectorQuery` Objects + +Next we will create a vector query object for our newly populated index. This example will use a simple vector to demonstrate how vector similarity works. Vectors in production will likely be much larger than 3 floats and often require Machine Learning models (i.e. Huggingface sentence transformers) or an embeddings API (Cohere, OpenAI). `redisvl` provides a set of [Vectorizers]({{< relref "vectorizers#openai" >}}) to assist in vector creation. + + +```python +from redisvl.query import VectorQuery +from jupyterutils import result_print + +query = VectorQuery( + vector=[0.1, 0.1, 0.5], + vector_field_name="user_embedding", + return_fields=["user", "age", "job", "credit_score", "vector_distance"], + num_results=3 +) +``` + +### Executing queries +With our `VectorQuery` object defined above, we can execute the query over the `SearchIndex` using the `query` method. + + +```python +results = index.query(query) +result_print(results) +``` + + +
| vector_distance | user | age | job | credit_score |
| :-- | :-- | :-- | :-- | :-- |
| 0 | john | 1 | engineer | high |
| 0 | mary | 2 | doctor | low |
| 0.0566299557686 | tyler | 9 | engineer | high |
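Queries can also be narrowed with filter expressions, using the same filter syntax shown in the Hash vs JSON storage guide. A minimal sketch against the index above, restricting results to users with a `high` credit score:

```python
from redisvl.query import VectorQuery
from redisvl.query.filter import Tag

filtered_query = VectorQuery(
    vector=[0.1, 0.1, 0.5],
    vector_field_name="user_embedding",
    return_fields=["user", "age", "job", "credit_score", "vector_distance"],
    filter_expression=(Tag("credit_score") == "high"),
    num_results=3,
)

result_print(index.query(filtered_query))
```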
+ + +## Using an Asynchronous Redis Client + +The `AsyncSearchIndex` class along with an async Redis python client allows for queries, index creation, and data loading to be done asynchronously. This is the +recommended route for working with `redisvl` in production-like settings. + + +```python +schema +``` + + + + + {'index': {'name': 'user_simple', 'prefix': 'user_simple_docs'}, + 'fields': [{'name': 'user', 'type': 'tag'}, + {'name': 'credit_score', 'type': 'tag'}, + {'name': 'job', 'type': 'text'}, + {'name': 'age', 'type': 'numeric'}, + {'name': 'user_embedding', + 'type': 'vector', + 'attrs': {'dims': 3, + 'distance_metric': 'cosine', + 'algorithm': 'flat', + 'datatype': 'float32'}}]} + + + + +```python +from redisvl.index import AsyncSearchIndex +from redis.asyncio import Redis + +client = Redis.from_url("redis://localhost:6379") +index = AsyncSearchIndex.from_dict(schema, redis_client=client) +``` + + +```python +# execute the vector query async +results = await index.query(query) +result_print(results) +``` + + +
| vector_distance | user | age | job | credit_score |
| :-- | :-- | :-- | :-- | :-- |
| 0 | john | 1 | engineer | high |
| 0 | mary | 2 | doctor | low |
| 0.0566299557686 | tyler | 9 | engineer | high |
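Inside a notebook you can `await` these calls directly, but in a standalone script the same async flow needs an event loop. A minimal sketch, assuming the `schema` dict defined earlier and a local Redis at `redis://localhost:6379`:

```python
import asyncio

from redis.asyncio import Redis
from redisvl.index import AsyncSearchIndex
from redisvl.query import VectorQuery

async def main():
    client = Redis.from_url("redis://localhost:6379")
    index = AsyncSearchIndex.from_dict(schema, redis_client=client)

    query = VectorQuery(
        vector=[0.1, 0.1, 0.5],
        vector_field_name="user_embedding",
        return_fields=["user", "age", "job", "credit_score", "vector_distance"],
        num_results=3,
    )
    results = await index.query(query)
    print(results)

asyncio.run(main())
```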
+ + +## Updating a schema +In some scenarios, it makes sense to update the index schema. With Redis and `redisvl`, this is easy because Redis can keep the underlying data in place while you change or make updates to the index configuration. + +So for our scenario, let's imagine we want to reindex this data in 2 ways: +- by using a `Tag` type for `job` field instead of `Text` +- by using an `hnsw` vector index for the `user_embedding` field instead of a `flat` vector index + + +```python +# Modify this schema to have what we want + +index.schema.remove_field("job") +index.schema.remove_field("user_embedding") +index.schema.add_fields([ + {"name": "job", "type": "tag"}, + { + "name": "user_embedding", + "type": "vector", + "attrs": { + "dims": 3, + "distance_metric": "cosine", + "algorithm": "hnsw", + "datatype": "float32" + } + } +]) +``` + + +```python +# Run the index update but keep underlying data in place +await index.create(overwrite=True, drop=False) +``` + + 19:17:29 redisvl.index.index INFO Index already exists, overwriting. + + + +```python +# Execute the vector query async +results = await index.query(query) +result_print(results) +``` + + +
| vector_distance | user | age | job | credit_score |
| :-- | :-- | :-- | :-- | :-- |
| 0 | john | 1 | engineer | high |
| 0 | mary | 2 | doctor | low |
| 0.0566299557686 | tyler | 9 | engineer | high |
+ + +## Check Index Stats +Use the `rvl` CLI to check the stats for the index: + + +```python +!rvl stats -i user_simple +``` + + + Statistics: + ╭─────────────────────────────┬────────────╮ + │ Stat Key │ Value │ + ├─────────────────────────────┼────────────┤ + │ num_docs │ 4 │ + │ num_terms │ 0 │ + │ max_doc_id │ 4 │ + │ num_records │ 20 │ + │ percent_indexed │ 1 │ + │ hash_indexing_failures │ 0 │ + │ number_of_uses │ 2 │ + │ bytes_per_record_avg │ 48.2000007 │ + │ doc_table_size_mb │ 4.23431396 │ + │ inverted_sz_mb │ 9.19342041 │ + │ key_table_size_mb │ 1.93595886 │ + │ offset_bits_per_record_avg │ nan │ + │ offset_vectors_sz_mb │ 0 │ + │ offsets_per_term_avg │ 0 │ + │ records_per_doc_avg │ 5 │ + │ sortable_values_size_mb │ 0 │ + │ total_indexing_time │ 0.74400001 │ + │ total_inverted_index_blocks │ 11 │ + │ vector_index_sz_mb │ 0.23560333 │ + ╰─────────────────────────────┴────────────╯ + + +## Cleanup + +Below we will clean up after our work. First, you can flush all data from Redis associated with the index by +using the `.clear()` method. This will leave the secondary index in place for future insertions or updates. + +But if you want to clean up everything, including the index, just use `.delete()` +which will by default remove the index AND the underlying data. + + +```python +# Clear all data from Redis associated with the index +await index.clear() +``` + + + + + 4 + + + + +```python +# Butm the index is still in place +await index.exists() +``` + + + + + True + + + + +```python +# Remove / delete the index in its entirety +await index.delete() +``` +--- +linkTitle: 0.5.1 feature overview +title: 0.5.1 Feature Overview +type: integration +--- + + +This notebook provides an overview of what's new with the 0.5.1 release of redisvl. It also highlights changes and potential enhancements for existing usage. + +## What's new? 
+ +- Hybrid query and text query classes +- Threshold optimizer classes +- Schema validation +- Timestamp filters +- Batched queries +- Vector normalization +- Hybrid policy on knn with filters + +## Define and load index for examples + + +```python +from redisvl.utils.vectorize import HFTextVectorizer +from redisvl.index import SearchIndex +import datetime as dt + +import warnings +warnings.filterwarnings("ignore", category=UserWarning, module="redis") + +# Embedding model +emb_model = HFTextVectorizer() + +REDIS_URL = "redis://localhost:6379/0" +NOW = dt.datetime.now() + +job_data = [ + { + "job_title": "Software Engineer", + "job_description": "Develop and maintain web applications using JavaScript, React, and Node.js.", + "posted": (NOW - dt.timedelta(days=1)).timestamp() # day ago + }, + { + "job_title": "Data Analyst", + "job_description": "Analyze large datasets to provide business insights and create data visualizations.", + "posted": (NOW - dt.timedelta(days=7)).timestamp() # week ago + }, + { + "job_title": "Marketing Manager", + "job_description": "Develop and implement marketing strategies to drive brand awareness and customer engagement.", + "posted": (NOW - dt.timedelta(days=30)).timestamp() # month ago + } +] + +job_data = [{**job, "job_embedding": emb_model.embed(job["job_description"], as_buffer=True)} for job in job_data] + + +job_schema = { + "index": { + "name": "jobs", + "prefix": "jobs", + "storage_type": "hash", + }, + "fields": [ + {"name": "job_title", "type": "text"}, + {"name": "job_description", "type": "text"}, + {"name": "posted", "type": "numeric"}, + { + "name": "job_embedding", + "type": "vector", + "attrs": { + "dims": 768, + "distance_metric": "cosine", + "algorithm": "flat", + "datatype": "float32" + } + + } + ], +} + +index = SearchIndex.from_dict(job_schema, redis_url=REDIS_URL) +index.create(overwrite=True, drop=True) +index.load(job_data) +``` + + 12:44:52 redisvl.index.index INFO Index already exists, overwriting. + + + + + + ['jobs:01JR0V1SA29RVD9AAVSTBV9P5H', + 'jobs:01JR0V1SA209KMVHMD7G54P3H5', + 'jobs:01JR0V1SA23ZE7BRERXTZWC33Z'] + + + +# HybridQuery class + +Perform hybrid lexical (BM25) and vector search where results are ranked by: `hybrid_score = (1-alpha)*lexical_Score + alpha*vector_similarity`. + + +```python +from redisvl.query import HybridQuery + +text = "Find a job as a where you develop software" +vec = emb_model.embed(text, as_buffer=True) + +query = HybridQuery( + text=text, + text_field_name="job_description", + vector=vec, + vector_field_name="job_embedding", + alpha=0.7, + num_results=10, + return_fields=["job_title"], +) + +results = index.query(query) +results +``` + + + + + [{'vector_distance': '0.61871612072', + 'job_title': 'Software Engineer', + 'vector_similarity': '0.69064193964', + 'text_score': '49.6242910712', + 'hybrid_score': '15.3707366791'}, + {'vector_distance': '0.937997639179', + 'job_title': 'Marketing Manager', + 'vector_similarity': '0.53100118041', + 'text_score': '49.6242910712', + 'hybrid_score': '15.2589881476'}, + {'vector_distance': '0.859166145325', + 'job_title': 'Data Analyst', + 'vector_similarity': '0.570416927338', + 'text_score': '0', + 'hybrid_score': '0.399291849136'}] + + + +# TextQueries + +TextQueries make it easy to perform pure lexical search with redisvl. 
+ + +```python +from redisvl.query import TextQuery + +text = "Find where you develop software" + +query = TextQuery( + text=text, + text_field_name="job_description", + return_fields=["job_title"], + num_results=10, +) + +results = index.query(query) +results +``` + + + + + [{'id': 'jobs:01JR0V1SA29RVD9AAVSTBV9P5H', + 'score': 49.62429107116745, + 'job_title': 'Software Engineer'}, + {'id': 'jobs:01JR0V1SA23ZE7BRERXTZWC33Z', + 'score': 49.62429107116745, + 'job_title': 'Marketing Manager'}] + + + +# Threshold optimization + +In redis 0.5.0 we added the ability to quickly configure either your semantic cache or semantic router with test data examples. + +For a step by step guide see: [09_threshold_optimization.ipynb](../09_threshold_optimization.ipynb). + +For a more advanced routing example see: [this example](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/semantic-router/01_routing_optimization.ipynb). + + +```python +from redisvl.utils.optimize import CacheThresholdOptimizer +from redisvl.extensions.cache.llm import SemanticCache + +sem_cache = SemanticCache( + name="sem_cache", # underlying search index name + redis_url="redis://localhost:6379", # redis connection url string + distance_threshold=0.5 # semantic cache distance threshold +) + +paris_key = sem_cache.store(prompt="what is the capital of france?", response="paris") +rabat_key = sem_cache.store(prompt="what is the capital of morocco?", response="rabat") + +test_data = [ + { + "query": "What's the capital of Britain?", + "query_match": "" + }, + { + "query": "What's the capital of France??", + "query_match": paris_key + }, + { + "query": "What's the capital city of Morocco?", + "query_match": rabat_key + }, +] + +print(f"\nDistance threshold before: {sem_cache.distance_threshold} \n") +optimizer = CacheThresholdOptimizer(sem_cache, test_data) +optimizer.optimize() +print(f"\nDistance threshold after: {sem_cache.distance_threshold} \n") +``` + + + Distance threshold before: 0.5 + + + Distance threshold after: 0.13050847457627118 + + + +# Schema validation + +This feature makes it easier to make sure your data is in the right format. 
To demo this we will create a new index with the `validate_on_load` flag set to `True` + + +```python +# NBVAL_SKIP +from redisvl.index import SearchIndex + +# sample schema +car_schema = { + "index": { + "name": "cars", + "prefix": "cars", + "storage_type": "json", + }, + "fields": [ + {"name": "make", "type": "text"}, + {"name": "model", "type": "text"}, + {"name": "description", "type": "text"}, + {"name": "mpg", "type": "numeric"}, + { + "name": "car_embedding", + "type": "vector", + "attrs": { + "dims": 3, + "distance_metric": "cosine", + "algorithm": "flat", + "datatype": "float32" + } + + } + ], +} + +sample_data_bad = [ + { + "make": "Toyota", + "model": "Camry", + "description": "A reliable sedan with great fuel economy.", + "mpg": 28, + "car_embedding": [0.1, 0.2, 0.3] + }, + { + "make": "Honda", + "model": "CR-V", + "description": "A practical SUV with advanced technology.", + # incorrect type will throw an error + "mpg": "twenty-two", + "car_embedding": [0.4, 0.5, 0.6] + } +] + +# this should now throw an error +car_index = SearchIndex.from_dict(car_schema, redis_url=REDIS_URL, validate_on_load=True) +car_index.create(overwrite=True) + +try: + car_index.load(sample_data_bad) +except Exception as e: + print(f"Error loading data: {e}") +``` + + 16:20:25 redisvl.index.index ERROR Schema validation error while loading data + Traceback (most recent call last): + File "/Users/robert.shelton/.pyenv/versions/3.11.9/lib/python3.11/site-packages/redisvl/index/storage.py", line 204, in _preprocess_and_validate_objects + processed_obj = self._validate(processed_obj) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/robert.shelton/.pyenv/versions/3.11.9/lib/python3.11/site-packages/redisvl/index/storage.py", line 160, in _validate + return validate_object(self.index_schema, obj) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/robert.shelton/.pyenv/versions/3.11.9/lib/python3.11/site-packages/redisvl/schema/validation.py", line 276, in validate_object + validated = model_class.model_validate(flat_obj) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/robert.shelton/.pyenv/versions/3.11.9/lib/python3.11/site-packages/pydantic/main.py", line 627, in model_validate + return cls.__pydantic_validator__.validate_python( + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + pydantic_core._pydantic_core.ValidationError: 2 validation errors for cars__PydanticModel + mpg.int + Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='twenty-two', input_type=str] + For further information visit https://errors.pydantic.dev/2.10/v/int_parsing + mpg.float + Input should be a valid number, unable to parse string as a number [type=float_parsing, input_value='twenty-two', input_type=str] + For further information visit https://errors.pydantic.dev/2.10/v/float_parsing + + The above exception was the direct cause of the following exception: + + Traceback (most recent call last): + File "/Users/robert.shelton/.pyenv/versions/3.11.9/lib/python3.11/site-packages/redisvl/index/index.py", line 615, in load + return self._storage.write( + ^^^^^^^^^^^^^^^^^^^^ + File "/Users/robert.shelton/.pyenv/versions/3.11.9/lib/python3.11/site-packages/redisvl/index/storage.py", line 265, in write + prepared_objects = self._preprocess_and_validate_objects( + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/robert.shelton/.pyenv/versions/3.11.9/lib/python3.11/site-packages/redisvl/index/storage.py", line 211, in _preprocess_and_validate_objects + raise 
SchemaValidationError(str(e), index=i) from e + redisvl.exceptions.SchemaValidationError: Validation failed for object at index 1: 2 validation errors for cars__PydanticModel + mpg.int + Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='twenty-two', input_type=str] + For further information visit https://errors.pydantic.dev/2.10/v/int_parsing + mpg.float + Input should be a valid number, unable to parse string as a number [type=float_parsing, input_value='twenty-two', input_type=str] + For further information visit https://errors.pydantic.dev/2.10/v/float_parsing + Error loading data: Validation failed for object at index 1: 2 validation errors for cars__PydanticModel + mpg.int + Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='twenty-two', input_type=str] + For further information visit https://errors.pydantic.dev/2.10/v/int_parsing + mpg.float + Input should be a valid number, unable to parse string as a number [type=float_parsing, input_value='twenty-two', input_type=str] + For further information visit https://errors.pydantic.dev/2.10/v/float_parsing + + +# Timestamp filters + +In Redis datetime objects are stored as numeric epoch times. Timestamp filter makes it easier to handle querying by these fields by handling conversion for you. + + +```python +from redisvl.query import FilterQuery +from redisvl.query.filter import Timestamp + +# find all jobs +ts = Timestamp("posted") < NOW # now datetime created above + +filter_query = FilterQuery( + return_fields=["job_title", "job_description", "posted"], + filter_expression=ts, + num_results=10, +) +res = index.query(filter_query) +res +``` + + + + + [{'id': 'jobs:01JQYMYZBA6NM6DX9YW35MCHJZ', + 'job_title': 'Software Engineer', + 'job_description': 'Develop and maintain web applications using JavaScript, React, and Node.js.', + 'posted': '1743625199.9'}, + {'id': 'jobs:01JQYMYZBABXYR96H96SQ99ZPS', + 'job_title': 'Data Analyst', + 'job_description': 'Analyze large datasets to provide business insights and create data visualizations.', + 'posted': '1743106799.9'}, + {'id': 'jobs:01JQYMYZBAGEBDS270EZADQ1TM', + 'job_title': 'Marketing Manager', + 'job_description': 'Develop and implement marketing strategies to drive brand awareness and customer engagement.', + 'posted': '1741123199.9'}] + + + + +```python +# jobs posted in the last 3 days => 1 job +ts = Timestamp("posted") > NOW - dt.timedelta(days=3) + +filter_query = FilterQuery( + return_fields=["job_title", "job_description", "posted"], + filter_expression=ts, + num_results=10, +) +res = index.query(filter_query) +res +``` + + + + + [{'id': 'jobs:01JQYMYZBA6NM6DX9YW35MCHJZ', + 'job_title': 'Software Engineer', + 'job_description': 'Develop and maintain web applications using JavaScript, React, and Node.js.', + 'posted': '1743625199.9'}] + + + + +```python +# more than 3 days ago but less than 14 days ago => 1 job +ts = Timestamp("posted").between( + NOW - dt.timedelta(days=14), + NOW - dt.timedelta(days=3), +) + +filter_query = FilterQuery( + return_fields=["job_title", "job_description", "posted"], + filter_expression=ts, + num_results=10, +) + +res = index.query(filter_query) +res +``` + + + + + [{'id': 'jobs:01JQYMYZBABXYR96H96SQ99ZPS', + 'job_title': 'Data Analyst', + 'job_description': 'Analyze large datasets to provide business insights and create data visualizations.', + 'posted': '1743106799.9'}] + + + +# Batch search + +This enhancement allows you to speed up the execution of 
queries by reducing the impact of network latency. + + +```python +import time +num_queries = 200 + +start = time.time() +for i in range(num_queries): + # run the same filter query + res = index.query(filter_query) +end = time.time() +print(f"Time taken for {num_queries} queries: {end - start:.2f} seconds") +``` + + Time taken for 200 queries: 0.11 seconds + + + +```python +batched_queries = [filter_query] * num_queries + +start = time.time() + +index.batch_search(batched_queries, batch_size=10) + +end = time.time() +print(f"Time taken for {num_queries} batched queries: {end - start:.2f} seconds") +``` + + Time taken for 200 batched queries: 0.03 seconds + + +# Vector normalization + +By default, Redis returns the vector cosine distance when performing a search, which yields a value between 0 and 2, where 0 represents a perfect match. However, you may sometimes prefer a similarity score between 0 and 1, where 1 indicates a perfect match. When enabled, this flag performs the conversion for you. Additionally, if this flag is set to true for L2 distance, it normalizes the Euclidean distance to a value between 0 and 1 as well. + + + +```python +from redisvl.query import VectorQuery + +query = VectorQuery( + vector=emb_model.embed("Software Engineer", as_buffer=True), + vector_field_name="job_embedding", + return_fields=["job_title", "job_description", "posted"], + normalize_vector_distance=True, +) + +res = index.query(query) +res +``` + + + + + [{'id': 'jobs:01JQYMYZBA6NM6DX9YW35MCHJZ', + 'vector_distance': '0.7090711295605', + 'job_title': 'Software Engineer', + 'job_description': 'Develop and maintain web applications using JavaScript, React, and Node.js.', + 'posted': '1743625199.9'}, + {'id': 'jobs:01JQYMYZBABXYR96H96SQ99ZPS', + 'vector_distance': '0.6049451231955', + 'job_title': 'Data Analyst', + 'job_description': 'Analyze large datasets to provide business insights and create data visualizations.', + 'posted': '1743106799.9'}, + {'id': 'jobs:01JQYMYZBAGEBDS270EZADQ1TM', + 'vector_distance': '0.553376108408', + 'job_title': 'Marketing Manager', + 'job_description': 'Develop and implement marketing strategies to drive brand awareness and customer engagement.', + 'posted': '1741123199.9'}] + + + +# Hybrid policy on knn with filters + +Within the default redis client you can set the `HYBRID_POLICY` which specifies the filter mode to use during vector search with filters. It can take values `BATCHES` or `ADHOC_BF`. Previously this option was not exposed by redisvl. 
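+The next cell demonstrates the `BATCHES` policy combined with a text filter. As a minimal sketch (mirroring that call, with the `ADHOC_BF` value named above substituted in), the ad-hoc brute-force mode can be requested the same way:
+
+
+```python
+# Minimal sketch (assumed variant of the cell below): force the ADHOC_BF
+# filter mode instead of BATCHES during filtered vector search.
+from redisvl.query import VectorQuery
+
+adhoc_query = VectorQuery(
+    vector=emb_model.embed("Software Engineer", as_buffer=True),
+    vector_field_name="job_embedding",
+    return_fields=["job_title", "posted"],
+    hybrid_policy="ADHOC_BF",
+)
+```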
+
+
+```python
+from redisvl.query.filter import Text
+
+filter = Text("job_description") % "Develop"
+
+query = VectorQuery(
+    vector=emb_model.embed("Software Engineer", as_buffer=True),
+    vector_field_name="job_embedding",
+    return_fields=["job_title", "job_description", "posted"],
+    hybrid_policy="BATCHES"
+)
+
+query.set_filter(filter)
+
+res = index.query(query)
+res
+```
+
+
+
+
+    [{'id': 'jobs:01JQYMYZBA6NM6DX9YW35MCHJZ',
+      'vector_distance': '0.581857740879',
+      'job_title': 'Software Engineer',
+      'job_description': 'Develop and maintain web applications using JavaScript, React, and Node.js.',
+      'posted': '1743625199.9'},
+     {'id': 'jobs:01JQYMYZBAGEBDS270EZADQ1TM',
+      'vector_distance': '0.893247783184',
+      'job_title': 'Marketing Manager',
+      'job_description': 'Develop and implement marketing strategies to drive brand awareness and customer engagement.',
+      'posted': '1741123199.9'}]
+
+
+---
+linkTitle: Release guides
+title: Release Guides
+type: integration
+hideListLinks: true
+---
+
+
+This section contains guidelines and information for RedisVL releases.
+
+
+
+* [0.5.1 Feature Overview](0_5_0_release/)
+  * [What’s new?](0_5_0_release/#what-s-new)
+  * [Define and load index for examples](0_5_0_release/#define-and-load-index-for-examples)
+* [HybridQuery class](0_5_0_release/#hybridquery-class)
+* [TextQueries](0_5_0_release/#textqueries)
+* [Threshold optimization](0_5_0_release/#threshold-optimization)
+* [Schema validation](0_5_0_release/#schema-validation)
+* [Timestamp filters](0_5_0_release/#timestamp-filters)
+* [Batch search](0_5_0_release/#batch-search)
+* [Vector normalization](0_5_0_release/#vector-normalization)
+* [Hybrid policy on knn with filters](0_5_0_release/#hybrid-policy-on-knn-with-filters)
+---
+linkTitle: LLM session memory
+title: LLM Session Memory
+type: integration
+weight: 07
+---
+
+
+Large Language Models are inherently stateless and have no knowledge of previous interactions with a user, or even of previous parts of the current conversation. While this may not be noticeable when asking simple questions, it becomes a hindrance when engaging in long-running conversations that rely on conversational context.
+
+The solution to this problem is to append the previous conversation history to each subsequent call to the LLM.
+
+This notebook shows how to use Redis to structure, store, and retrieve this conversational session memory.
+
+
+```python
+from redisvl.extensions.session_manager import StandardSessionManager
+chat_session = StandardSessionManager(name='student tutor')
+```
+
+    12:24:11 redisvl.index.index INFO Index already exists, not overwriting.
+
+
+To align with common LLM APIs, Redis stores messages with `role` and `content` fields.
+The supported roles are "system", "user", and "llm".
+
+You can store messages one at a time or all at once.
+
+
+```python
+chat_session.add_message({"role":"system", "content":"You are a helpful geography tutor, giving simple and short answers to questions about European countries."})
+chat_session.add_messages([
+    {"role":"user", "content":"What is the capital of France?"},
+    {"role":"llm", "content":"The capital is Paris."},
+    {"role":"user", "content":"And what is the capital of Spain?"},
+    {"role":"llm", "content":"The capital is Madrid."},
+    {"role":"user", "content":"What is the population of Great Britain?"},
+    {"role":"llm", "content":"As of 2023 the population of Great Britain is approximately 67 million people."},]
+    )
+```
+
+At any point we can retrieve the recent history of the conversation.
It will be ordered by entry time. + + +```python +context = chat_session.get_recent() +for message in context: + print(message) +``` + + {'role': 'llm', 'content': 'The capital is Paris.'} + {'role': 'user', 'content': 'And what is the capital of Spain?'} + {'role': 'llm', 'content': 'The capital is Madrid.'} + {'role': 'user', 'content': 'What is the population of Great Britain?'} + {'role': 'llm', 'content': 'As of 2023 the population of Great Britain is approximately 67 million people.'} + + +In many LLM flows the conversation progresses in a series of prompt and response pairs. session managers offer a convienience function `store()` to add these simply. + + +```python +prompt = "what is the size of England compared to Portugal?" +response = "England is larger in land area than Portal by about 15000 square miles." +chat_session.store(prompt, response) + +context = chat_session.get_recent(top_k=6) +for message in context: + print(message) +``` + + {'role': 'user', 'content': 'And what is the capital of Spain?'} + {'role': 'llm', 'content': 'The capital is Madrid.'} + {'role': 'user', 'content': 'What is the population of Great Britain?'} + {'role': 'llm', 'content': 'As of 2023 the population of Great Britain is approximately 67 million people.'} + {'role': 'user', 'content': 'what is the size of England compared to Portugal?'} + {'role': 'llm', 'content': 'England is larger in land area than Portal by about 15000 square miles.'} + + +## Managing multiple users and conversations + +For applications that need to handle multiple conversations concurrently, Redis supports tagging messages to keep conversations separated. + + +```python +chat_session.add_message({"role":"system", "content":"You are a helpful algebra tutor, giving simple answers to math problems."}, session_tag='student two') +chat_session.add_messages([ + {"role":"user", "content":"What is the value of x in the equation 2x + 3 = 7?"}, + {"role":"llm", "content":"The value of x is 2."}, + {"role":"user", "content":"What is the value of y in the equation 3y - 5 = 7?"}, + {"role":"llm", "content":"The value of y is 4."}], + session_tag='student two' + ) + +for math_message in chat_session.get_recent(session_tag='student two'): + print(math_message) +``` + + {'role': 'system', 'content': 'You are a helpful algebra tutor, giving simple answers to math problems.'} + {'role': 'user', 'content': 'What is the value of x in the equation 2x + 3 = 7?'} + {'role': 'llm', 'content': 'The value of x is 2.'} + {'role': 'user', 'content': 'What is the value of y in the equation 3y - 5 = 7?'} + {'role': 'llm', 'content': 'The value of y is 4.'} + + +## Semantic conversation memory +For longer conversations our list of messages keeps growing. Since LLMs are stateless we have to continue to pass this conversation history on each subsequent call to ensure the LLM has the correct context. + +A typical flow looks like this: +``` +while True: + prompt = input('enter your next question') + context = chat_session.get_recent() + response = LLM_api_call(prompt=prompt, context=context) + chat_session.store(prompt, response) +``` + +This works, but as context keeps growing so too does our LLM token count, which increases latency and cost. + +Conversation histories can be truncated, but that can lead to losing relevant information that appeared early on. + +A better solution is to pass only the relevant conversational context on each subsequent call. 
+ +For this, RedisVL has the `SemanticSessionManager`, which uses vector similarity search to return only semantically relevant sections of the conversation. + + +```python +from redisvl.extensions.session_manager import SemanticSessionManager +semantic_session = SemanticSessionManager(name='tutor') + +semantic_session.add_messages(chat_session.get_recent(top_k=8)) +``` + + 12:24:15 redisvl.index.index INFO Index already exists, not overwriting. + + + +```python +prompt = "what have I learned about the size of England?" +semantic_session.set_distance_threshold(0.35) +context = semantic_session.get_relevant(prompt) +for message in context: + print(message) +``` + + {'role': 'user', 'content': 'what is the size of England compared to Portugal?'} + {'role': 'llm', 'content': 'England is larger in land area than Portal by about 15000 square miles.'} + + +You can adjust the degree of semantic similarity needed to be included in your context. + +Setting a distance threshold close to 0.0 will require an exact semantic match, while a distance threshold of 1.0 will include everthing. + + +```python +semantic_session.set_distance_threshold(0.7) + +larger_context = semantic_session.get_relevant(prompt) +for message in larger_context: + print(message) +``` + + {'role': 'user', 'content': 'what is the size of England compared to Portugal?'} + {'role': 'llm', 'content': 'England is larger in land area than Portal by about 15000 square miles.'} + {'role': 'user', 'content': 'What is the population of Great Britain?'} + {'role': 'llm', 'content': 'As of 2023 the population of Great Britain is approximately 67 million people.'} + + +## Conversation control + +LLMs can hallucinate on occasion and when this happens it can be useful to prune incorrect information from conversational histories so this incorrect information doesn't continue to be passed as context. + + +```python +semantic_session.store( + prompt="what is the smallest country in Europe?", + response="Monaco is the smallest country in Europe at 0.78 square miles." # Incorrect. Vatican City is the smallest country in Europe + ) + +# get the key of the incorrect message +context = semantic_session.get_recent(top_k=1, raw=True) +bad_key = context[0]['entry_id'] +semantic_session.drop(bad_key) + +corrected_context = semantic_session.get_recent() +for message in corrected_context: + print(message) +``` + + {'role': 'user', 'content': 'What is the population of Great Britain?'} + {'role': 'llm', 'content': 'As of 2023 the population of Great Britain is approximately 67 million people.'} + {'role': 'user', 'content': 'what is the size of England compared to Portugal?'} + {'role': 'llm', 'content': 'England is larger in land area than Portal by about 15000 square miles.'} + {'role': 'user', 'content': 'what is the smallest country in Europe?'} + + + +```python +chat_session.clear() +``` +--- +linkTitle: Semantic caching for LLMs +title: Semantic Caching for LLMs +type: integration +weight: 03 +--- + + +RedisVL provides a ``SemanticCache`` interface to utilize Redis' built-in caching capabilities AND vector search in order to store responses from previously-answered questions. This reduces the number of requests and tokens sent to the Large Language Models (LLM) service, decreasing costs and enhancing application throughput (by reducing the time taken to generate responses). 
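+The core pattern is the cache-aside flow implemented by the `answer_question` helper later in this notebook: check the cache first, and only call the LLM on a miss. A minimal sketch (method names taken from the examples below; `call_llm` stands in for any LLM client call):
+
+
+```python
+# Cache-aside sketch: consult the semantic cache before calling the LLM.
+# `cache` is a SemanticCache instance; `call_llm` is a placeholder function.
+def cached_answer(cache, call_llm, prompt: str) -> str:
+    if hits := cache.check(prompt=prompt):
+        return hits[0]["response"]               # semantic cache hit
+    answer = call_llm(prompt)                    # cache miss -- ask the LLM
+    cache.store(prompt=prompt, response=answer)  # save for next time
+    return answer
+```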
+ +This notebook will go over how to use Redis as a Semantic Cache for your applications + +First, we will import [OpenAI](https://platform.openai.com) to use their API for responding to user prompts. We will also create a simple `ask_openai` helper method to assist. + + +```python +import os +import getpass +import time +import numpy as np + +from openai import OpenAI + + +os.environ["TOKENIZERS_PARALLELISM"] = "False" + +api_key = os.getenv("OPENAI_API_KEY") or getpass.getpass("Enter your OpenAI API key: ") + +client = OpenAI(api_key=api_key) + +def ask_openai(question: str) -> str: + response = client.completions.create( + model="gpt-3.5-turbo-instruct", + prompt=question, + max_tokens=200 + ) + return response.choices[0].text.strip() +``` + + +```python +# Test +print(ask_openai("What is the capital of France?")) +``` + + 19:17:51 httpx INFO HTTP Request: POST https://api.openai.com/v1/completions "HTTP/1.1 200 OK" + The capital of France is Paris. + + +## Initializing ``SemanticCache`` + +``SemanticCache`` will automatically create an index within Redis upon initialization for the semantic cache content. + + +```python +from redisvl.extensions.cache.llm import SemanticCache +from redisvl.utils .vectorize import HFTextVectorizer + +llmcache = SemanticCache( + name="llmcache", # underlying search index name + redis_url="redis://localhost:6379", # redis connection url string + distance_threshold=0.1, # semantic cache distance threshold + vectorizer=HFTextVectorizer("redis/langcache-embed-v1"), # embdding model +) +``` + + 19:17:51 sentence_transformers.SentenceTransformer INFO Use pytorch device_name: mps + 19:17:51 sentence_transformers.SentenceTransformer INFO Load pretrained SentenceTransformer: redis/langcache-embed-v1 + + + Batches: 100%|██████████| 1/1 [00:00<00:00, 17.57it/s] + + + +```python +# look at the index specification created for the semantic cache lookup +!rvl index info -i llmcache +``` + + + + Index Information: + ╭───────────────┬───────────────┬───────────────┬───────────────┬───────────────╮ + │ Index Name │ Storage Type │ Prefixes │ Index Options │ Indexing │ + ├───────────────┼───────────────┼───────────────┼───────────────┼───────────────┤ + | llmcache | HASH | ['llmcache'] | [] | 0 | + ╰───────────────┴───────────────┴───────────────┴───────────────┴───────────────╯ + Index Fields: + ╭─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────╮ + │ Name │ Attribute │ Type │ Field Option │ Option Value │ Field Option │ Option Value │ Field Option │ Option Value │ Field Option │ Option Value │ + ├─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┤ + │ prompt │ prompt │ TEXT │ WEIGHT │ 1 │ │ │ │ │ │ │ + │ response │ response │ TEXT │ WEIGHT │ 1 │ │ │ │ │ │ │ + │ inserted_at │ inserted_at │ NUMERIC │ │ │ │ │ │ │ │ │ + │ updated_at │ updated_at │ NUMERIC │ │ │ │ │ │ │ │ │ + │ prompt_vector │ prompt_vector │ VECTOR │ algorithm │ FLAT │ data_type │ FLOAT32 │ dim │ 768 │ distance_metric │ COSINE │ + ╰─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────╯ + + +## Basic Cache Usage + + +```python +question = "What is the capital of France?" 
+
+```
+
+
+```python
+# Check the semantic cache -- should be empty
+if response := llmcache.check(prompt=question):
+    print(response)
+else:
+    print("Empty cache")
+```
+
+    Batches: 100%|██████████| 1/1 [00:00<00:00, 18.30it/s]
+
+    Empty cache
+
+
+
+
+
+Our initial cache check should be empty since we have not yet stored anything in the cache. Below, store the `question`,
+proper `response`, and any arbitrary `metadata` (as a Python dictionary) in the cache.
+
+
+```python
+# Cache the question, answer, and arbitrary metadata
+llmcache.store(
+    prompt=question,
+    response="Paris",
+    metadata={"city": "Paris", "country": "france"}
+)
+```
+
+    Batches: 100%|██████████| 1/1 [00:00<00:00, 26.10it/s]
+
+
+
+
+
+    'llmcache:115049a298532be2f181edb03f766770c0db84c22aff39003fec340deaec7545'
+
+
+
+Now we will check the cache again with the same question and with a semantically similar question:
+
+
+```python
+# Check the cache again
+if response := llmcache.check(prompt=question, return_fields=["prompt", "response", "metadata"]):
+    print(response)
+else:
+    print("Empty cache")
+```
+
+    Batches: 100%|██████████| 1/1 [00:00<00:00, 12.36it/s]
+
+
+    [{'prompt': 'What is the capital of France?', 'response': 'Paris', 'metadata': {'city': 'Paris', 'country': 'france'}, 'key': 'llmcache:115049a298532be2f181edb03f766770c0db84c22aff39003fec340deaec7545'}]
+
+
+
+```python
+# Check for a semantically similar result
+question = "What actually is the capital of France?"
+llmcache.check(prompt=question)[0]['response']
+```
+
+    Batches: 100%|██████████| 1/1 [00:00<00:00, 12.22it/s]
+
+
+
+
+
+    'Paris'
+
+
+
+## Customize the Distance Threshold
+
+For most use cases, the right semantic similarity threshold is not a fixed quantity. Depending on the choice of embedding model,
+the properties of the input query, and even business use case -- the threshold might need to change.
+
+Fortunately, you can seamlessly adjust the threshold at any point like below:
+
+
+```python
+# Widen the semantic distance threshold
+llmcache.set_threshold(0.5)
+```
+
+
+```python
+# Really try to trick it by asking around the point
+# But it is able to slip just under our new threshold
+question = "What is the capital city of the country in Europe that also has a city named Nice?"
+llmcache.check(prompt=question)[0]['response']
+```
+
+    Batches: 100%|██████████| 1/1 [00:00<00:00, 19.20it/s]
+
+
+
+
+
+    'Paris'
+
+
+
+
+```python
+# Invalidate the cache completely by clearing it out
+llmcache.clear()
+
+# should be empty now
+llmcache.check(prompt=question)
+```
+
+    Batches: 100%|██████████| 1/1 [00:00<00:00, 26.71it/s]
+
+
+
+
+
+    []
+
+
+
+## Utilize TTL
+
+Redis uses optional TTL policies to expire individual keys at points in time in the future.
+This allows you to focus on your data flow and business logic without bothering with complex cleanup tasks.
+
+A TTL policy set on the `SemanticCache` allows you to temporarily hold onto cache entries. Below, we will set the TTL policy to 5 seconds.
+ + +```python +llmcache.set_ttl(5) # 5 seconds +``` + + +```python +llmcache.store("This is a TTL test", "This is a TTL test response") + +time.sleep(6) +``` + + Batches: 100%|██████████| 1/1 [00:00<00:00, 20.45it/s] + + + +```python +# confirm that the cache has cleared by now on it's own +result = llmcache.check("This is a TTL test") + +print(result) +``` + + Batches: 100%|██████████| 1/1 [00:00<00:00, 17.02it/s] + + [] + + + + + + +```python +# Reset the TTL to null (long lived data) +llmcache.set_ttl() +``` + +## Simple Performance Testing + +Next, we will measure the speedup obtained by using ``SemanticCache``. We will use the ``time`` module to measure the time taken to generate responses with and without ``SemanticCache``. + + +```python +def answer_question(question: str) -> str: + """Helper function to answer a simple question using OpenAI with a wrapper + check for the answer in the semantic cache first. + + Args: + question (str): User input question. + + Returns: + str: Response. + """ + results = llmcache.check(prompt=question) + if results: + return results[0]["response"] + else: + answer = ask_openai(question) + return answer +``` + + +```python +start = time.time() +# asking a question -- openai response time +question = "What was the name of the first US President?" +answer = answer_question(question) +end = time.time() + +print(f"Without caching, a call to openAI to answer this simple question took {end-start} seconds.") + +# add the entry to our LLM cache +llmcache.store(prompt=question, response="George Washington") +``` + + Batches: 100%|██████████| 1/1 [00:00<00:00, 14.88it/s] + + + 19:18:04 httpx INFO HTTP Request: POST https://api.openai.com/v1/completions "HTTP/1.1 200 OK" + Without caching, a call to openAI to answer this simple question took 0.8826751708984375 seconds. 
+ + + Batches: 100%|██████████| 1/1 [00:00<00:00, 18.38it/s] + + + + + + 'llmcache:67e0f6e28fe2a61c0022fd42bf734bb8ffe49d3e375fd69d692574295a20fc1a' + + + + +```python +# Calculate the avg latency for caching over LLM usage +times = [] + +for _ in range(10): + cached_start = time.time() + cached_answer = answer_question(question) + cached_end = time.time() + times.append(cached_end-cached_start) + +avg_time_with_cache = np.mean(times) +print(f"Avg time taken with LLM cache enabled: {avg_time_with_cache}") +print(f"Percentage of time saved: {round(((end - start) - avg_time_with_cache) / (end - start) * 100, 2)}%") +``` + + Batches: 100%|██████████| 1/1 [00:00<00:00, 13.65it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 27.94it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 27.19it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 27.53it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 28.12it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 27.38it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 25.39it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 26.34it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 28.07it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 27.35it/s] + + Avg time taken with LLM cache enabled: 0.0463670015335083 + Percentage of time saved: 94.75% + + + + + + +```python +# check the stats of the index +!rvl stats -i llmcache +``` + + + Statistics: + ╭─────────────────────────────┬────────────╮ + │ Stat Key │ Value │ + ├─────────────────────────────┼────────────┤ + │ num_docs │ 1 │ + │ num_terms │ 19 │ + │ max_doc_id │ 3 │ + │ num_records │ 29 │ + │ percent_indexed │ 1 │ + │ hash_indexing_failures │ 0 │ + │ number_of_uses │ 19 │ + │ bytes_per_record_avg │ 75.9655151 │ + │ doc_table_size_mb │ 1.34468078 │ + │ inverted_sz_mb │ 0.00210094 │ + │ key_table_size_mb │ 2.76565551 │ + │ offset_bits_per_record_avg │ 8 │ + │ offset_vectors_sz_mb │ 2.09808349 │ + │ offsets_per_term_avg │ 0.75862067 │ + │ records_per_doc_avg │ 29 │ + │ sortable_values_size_mb │ 0 │ + │ total_indexing_time │ 3.875 │ + │ total_inverted_index_blocks │ 21 │ + │ vector_index_sz_mb │ 3.01609802 │ + ╰─────────────────────────────┴────────────╯ + + + +```python +# Clear the cache AND delete the underlying index +llmcache.delete() +``` + +## Cache Access Controls, Tags & Filters +When running complex workflows with similar applications, or handling multiple users it's important to keep data segregated. Building on top of RedisVL's support for complex and hybrid queries we can tag and filter cache entries using custom-defined `filterable_fields`. + +Let's store multiple users' data in our cache with similar prompts and ensure we return only the correct user information: + + +```python +private_cache = SemanticCache( + name="private_cache", + filterable_fields=[{"name": "user_id", "type": "tag"}] +) + +private_cache.store( + prompt="What is the phone number linked to my account?", + response="The number on file is 123-555-0000", + filters={"user_id": "abc"}, +) + +private_cache.store( + prompt="What's the phone number linked in my account?", + response="The number on file is 123-555-1111", + filters={"user_id": "def"}, +) +``` + + 19:18:07 [RedisVL] WARNING The default vectorizer has changed from `sentence-transformers/all-mpnet-base-v2` to `redis/langcache-embed-v1` in version 0.6.0 of RedisVL. For more information about this model, please refer to https://arxiv.org/abs/2504.02268 or visit https://huggingface.co/redis/langcache-embed-v1. 
To continue using the old vectorizer, please specify it explicitly in the constructor as: vectorizer=HFTextVectorizer(model='sentence-transformers/all-mpnet-base-v2') + 19:18:07 sentence_transformers.SentenceTransformer INFO Use pytorch device_name: mps + 19:18:07 sentence_transformers.SentenceTransformer INFO Load pretrained SentenceTransformer: redis/langcache-embed-v1 + + + Batches: 100%|██████████| 1/1 [00:00<00:00, 8.98it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 24.89it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 26.95it/s] + + + + + + 'private_cache:2831a0659fb888e203cd9fedb9f65681bfa55e4977c092ed1bf87d42d2655081' + + + + +```python +from redisvl.query.filter import Tag + +# define user id filter +user_id_filter = Tag("user_id") == "abc" + +response = private_cache.check( + prompt="What is the phone number linked to my account?", + filter_expression=user_id_filter, + num_results=2 +) + +print(f"found {len(response)} entry \n{response[0]['response']}") +``` + + Batches: 100%|██████████| 1/1 [00:00<00:00, 27.98it/s] + + found 1 entry + The number on file is 123-555-0000 + + + + + + +```python +# Cleanup +private_cache.delete() +``` + +Multiple `filterable_fields` can be defined on a cache, and complex filter expressions can be constructed to filter on these fields, as well as the default fields already present. + + +```python + +complex_cache = SemanticCache( + name='account_data', + filterable_fields=[ + {"name": "user_id", "type": "tag"}, + {"name": "account_type", "type": "tag"}, + {"name": "account_balance", "type": "numeric"}, + {"name": "transaction_amount", "type": "numeric"} + ] +) +complex_cache.store( + prompt="what is my most recent checking account transaction under $100?", + response="Your most recent transaction was for $75", + filters={"user_id": "abc", "account_type": "checking", "transaction_amount": 75}, +) +complex_cache.store( + prompt="what is my most recent savings account transaction?", + response="Your most recent deposit was for $300", + filters={"user_id": "abc", "account_type": "savings", "transaction_amount": 300}, +) +complex_cache.store( + prompt="what is my most recent checking account transaction over $200?", + response="Your most recent transaction was for $350", + filters={"user_id": "abc", "account_type": "checking", "transaction_amount": 350}, +) +complex_cache.store( + prompt="what is my checking account balance?", + response="Your current checking account is $1850", + filters={"user_id": "abc", "account_type": "checking"}, +) +``` + + 19:18:09 [RedisVL] WARNING The default vectorizer has changed from `sentence-transformers/all-mpnet-base-v2` to `redis/langcache-embed-v1` in version 0.6.0 of RedisVL. For more information about this model, please refer to https://arxiv.org/abs/2504.02268 or visit https://huggingface.co/redis/langcache-embed-v1. 
To continue using the old vectorizer, please specify it explicitly in the constructor as: vectorizer=HFTextVectorizer(model='sentence-transformers/all-mpnet-base-v2') + 19:18:09 sentence_transformers.SentenceTransformer INFO Use pytorch device_name: mps + 19:18:09 sentence_transformers.SentenceTransformer INFO Load pretrained SentenceTransformer: redis/langcache-embed-v1 + + + Batches: 100%|██████████| 1/1 [00:00<00:00, 13.54it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 16.76it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 21.82it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 28.80it/s] + Batches: 100%|██████████| 1/1 [00:00<00:00, 21.04it/s] + + + + + + 'account_data:944f89729b09ca46b99923d223db45e0bccf584cfd53fcaf87d2a58f072582d3' + + + + +```python +from redisvl.query.filter import Num + +value_filter = Num("transaction_amount") > 100 +account_filter = Tag("account_type") == "checking" +complex_filter = value_filter & account_filter + +# check for checking account transactions over $100 +complex_cache.set_threshold(0.3) +response = complex_cache.check( + prompt="what is my most recent checking account transaction?", + filter_expression=complex_filter, + num_results=5 +) +print(f'found {len(response)} entry') +print(response[0]["response"]) +``` + + Batches: 100%|██████████| 1/1 [00:00<00:00, 28.15it/s] + + found 1 entry + Your most recent transaction was for $350 + + + + + + +```python +# Cleanup +complex_cache.delete() +``` +--- +linkTitle: Querying with RedisVL +title: Querying with RedisVL +type: integration +weight: 02 +--- + + +In this notebook, we will explore more complex queries that can be performed with ``redisvl`` + +Before running this notebook, be sure to +1. Have installed ``redisvl`` and have that environment active for this notebook. +2. Have a running Redis instance with RediSearch > 2.4 running. + + +```python +import pickle +from jupyterutils import table_print, result_print + +# load in the example data and printing utils +data = pickle.load(open("hybrid_example_data.pkl", "rb")) +table_print(data) +``` + + +
useragejobcredit_scoreoffice_locationuser_embeddinglast_updated
john18engineerhigh-122.4194,37.7749b'\xcd\xcc\xcc=\xcd\xcc\xcc=\x00\x00\x00?'1741627789
derrick14doctorlow-122.4194,37.7749b'\xcd\xcc\xcc=\xcd\xcc\xcc=\x00\x00\x00?'1741627789
nancy94doctorhigh-122.4194,37.7749b'333?\xcd\xcc\xcc=\x00\x00\x00?'1710696589
tyler100engineerhigh-122.0839,37.3861b'\xcd\xcc\xcc=\xcd\xcc\xcc>\x00\x00\x00?'1742232589
tim12dermatologisthigh-122.0839,37.3861b'\xcd\xcc\xcc>\xcd\xcc\xcc>\x00\x00\x00?'1739644189
taimur15CEOlow-122.0839,37.3861b'\x9a\x99\x19?\xcd\xcc\xcc=\x00\x00\x00?'1742232589
joe35dentistmedium-122.0839,37.3861b'fff?fff?\xcd\xcc\xcc='1742232589
+ + + +```python +schema = { + "index": { + "name": "user_queries", + "prefix": "user_queries_docs", + "storage_type": "hash", # default setting -- HASH + }, + "fields": [ + {"name": "user", "type": "tag"}, + {"name": "credit_score", "type": "tag"}, + {"name": "job", "type": "text"}, + {"name": "age", "type": "numeric"}, + {"name": "last_updated", "type": "numeric"}, + {"name": "office_location", "type": "geo"}, + { + "name": "user_embedding", + "type": "vector", + "attrs": { + "dims": 3, + "distance_metric": "cosine", + "algorithm": "flat", + "datatype": "float32" + } + + } + ], +} +``` + + +```python +from redisvl.index import SearchIndex + +# construct a search index from the schema +index = SearchIndex.from_dict(schema, redis_url="redis://localhost:6379") + +# create the index (no data yet) +index.create(overwrite=True) +``` + + 11:40:25 redisvl.index.index INFO Index already exists, overwriting. + + + +```python +# use the CLI to see the created index +!rvl index listall +``` + + +```python +# load data to redis +keys = index.load(data) +``` + + +```python +index.info()['num_docs'] +``` + + + + + 7 + + + +## Hybrid Queries + +Hybrid queries are queries that combine multiple types of filters. For example, you may want to search for a user that is a certain age, has a certain job, and is within a certain distance of a location. This is a hybrid query that combines numeric, tag, and geographic filters. + +### Tag Filters + +Tag filters are filters that are applied to tag fields. These are fields that are not tokenized and are used to store a single categorical value. + + +```python +from redisvl.query import VectorQuery +from redisvl.query.filter import Tag + +t = Tag("credit_score") == "high" + +v = VectorQuery( + vector=[0.1, 0.1, 0.5], + vector_field_name="user_embedding", + return_fields=["user", "credit_score", "age", "job", "office_location", "last_updated"], + filter_expression=t +) + +results = index.query(v) +result_print(results) +``` + + +
vector_distanceusercredit_scoreagejoboffice_locationlast_updated
0johnhigh18engineer-122.4194,37.77491741627789
0.109129190445tylerhigh100engineer-122.0839,37.38611742232589
0.158808946609timhigh12dermatologist-122.0839,37.38611739644189
0.266666650772nancyhigh94doctor-122.4194,37.77491710696589
+ + + +```python +# negation +t = Tag("credit_score") != "high" + +v.set_filter(t) +result_print(index.query(v)) +``` + + +
vector_distanceusercredit_scoreagejoboffice_locationlast_updated
0derricklow14doctor-122.4194,37.77491741627789
0.217882037163taimurlow15CEO-122.0839,37.38611742232589
0.653301358223joemedium35dentist-122.0839,37.38611742232589
+ + + +```python +# use multiple tags as a list +t = Tag("credit_score") == ["high", "medium"] + +v.set_filter(t) +result_print(index.query(v)) +``` + + +
vector_distanceusercredit_scoreagejoboffice_location
0johnhigh18engineer-122.4194,37.7749
0johnhigh18engineer-122.4194,37.7749
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.158808946609timhigh12dermatologist-122.0839,37.3861
0.158808946609timhigh12dermatologist-122.0839,37.3861
0.266666650772nancyhigh94doctor-122.4194,37.7749
0.266666650772nancyhigh94doctor-122.4194,37.7749
0.653301358223joemedium35dentist-122.0839,37.3861
0.653301358223joemedium35dentist-122.0839,37.3861
+ + + +```python +# use multiple tags as a set (to enforce uniqueness) +t = Tag("credit_score") == set(["high", "high", "medium"]) + +v.set_filter(t) +result_print(index.query(v)) +``` + + +
vector_distanceusercredit_scoreagejoboffice_location
0johnhigh18engineer-122.4194,37.7749
0johnhigh18engineer-122.4194,37.7749
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.158808946609timhigh12dermatologist-122.0839,37.3861
0.158808946609timhigh12dermatologist-122.0839,37.3861
0.266666650772nancyhigh94doctor-122.4194,37.7749
0.266666650772nancyhigh94doctor-122.4194,37.7749
0.653301358223joemedium35dentist-122.0839,37.3861
0.653301358223joemedium35dentist-122.0839,37.3861
+ + +What about scenarios where you might want to dynamically generate a list of tags? Have no fear. RedisVL allows you to do this gracefully without having to check for the **empty case**. The **empty case** is when you attempt to run a Tag filter on a field with no defined values to match: + +`Tag("credit_score") == []` + +An empty filter like the one above will yield a `*` Redis query filter which implies the base case -- there is no filter here to use. + + +```python +# gracefully fallback to "*" filter if empty case +empty_case = Tag("credit_score") == [] + +v.set_filter(empty_case) +result_print(index.query(v)) +``` + + +
vector_distanceusercredit_scoreagejoboffice_location
0johnhigh18engineer-122.4194,37.7749
0derricklow14doctor-122.4194,37.7749
0johnhigh18engineer-122.4194,37.7749
0derricklow14doctor-122.4194,37.7749
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.158808946609timhigh12dermatologist-122.0839,37.3861
0.158808946609timhigh12dermatologist-122.0839,37.3861
0.217882037163taimurlow15CEO-122.0839,37.3861
0.217882037163taimurlow15CEO-122.0839,37.3861
+ + +### Numeric Filters + +Numeric filters are filters that are applied to numeric fields and can be used to isolate a range of values for a given field. + + +```python +from redisvl.query.filter import Num + +numeric_filter = Num("age").between(15, 35) + +v.set_filter(numeric_filter) +result_print(index.query(v)) +``` + + +
vector_distanceusercredit_scoreagejoboffice_locationlast_updated
0johnhigh18engineer-122.4194,37.77491741627789
0.217882037163taimurlow15CEO-122.0839,37.38611742232589
0.653301358223joemedium35dentist-122.0839,37.38611742232589
+ + + +```python +# exact match query +numeric_filter = Num("age") == 14 + +v.set_filter(numeric_filter) +result_print(index.query(v)) +``` + + +
vector_distanceusercredit_scoreagejoboffice_location
0derricklow14doctor-122.4194,37.7749
0derricklow14doctor-122.4194,37.7749
+ + + +```python +# negation +numeric_filter = Num("age") != 14 + +v.set_filter(numeric_filter) +result_print(index.query(v)) +``` + + +
vector_distanceusercredit_scoreagejoboffice_location
0johnhigh18engineer-122.4194,37.7749
0johnhigh18engineer-122.4194,37.7749
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.158808946609timhigh12dermatologist-122.0839,37.3861
0.158808946609timhigh12dermatologist-122.0839,37.3861
0.217882037163taimurlow15CEO-122.0839,37.3861
0.217882037163taimurlow15CEO-122.0839,37.3861
0.266666650772nancyhigh94doctor-122.4194,37.7749
0.266666650772nancyhigh94doctor-122.4194,37.7749
+ + +### Timestamp Filters + +In redis all times are stored as an epoch time numeric however, this class allows you to filter with python datetime for ease of use. + + +```python +from redisvl.query.filter import Timestamp +from datetime import datetime + +dt = datetime(2025, 3, 16, 13, 45, 39, 132589) +print(f'Epoch comparison: {dt.timestamp()}') + +timestamp_filter = Timestamp("last_updated") > dt + +v.set_filter(timestamp_filter) +result_print(index.query(v)) +``` + + Epoch comparison: 1742147139.132589 + + + +
vector_distanceusercredit_scoreagejoboffice_locationlast_updated
0.109129190445tylerhigh100engineer-122.0839,37.38611742232589
0.217882037163taimurlow15CEO-122.0839,37.38611742232589
0.653301358223joemedium35dentist-122.0839,37.38611742232589
+ + + +```python +from redisvl.query.filter import Timestamp +from datetime import datetime + +dt = datetime(2025, 3, 16, 13, 45, 39, 132589) + +print(f'Epoch comparison: {dt.timestamp()}') + +timestamp_filter = Timestamp("last_updated") < dt + +v.set_filter(timestamp_filter) +result_print(index.query(v)) +``` + + Epoch comparison: 1742147139.132589 + + + +
vector_distanceusercredit_scoreagejoboffice_locationlast_updated
0derricklow14doctor-122.4194,37.77491741627789
0johnhigh18engineer-122.4194,37.77491741627789
0.158808946609timhigh12dermatologist-122.0839,37.38611739644189
0.266666650772nancyhigh94doctor-122.4194,37.77491710696589
+ + + +```python +from redisvl.query.filter import Timestamp +from datetime import datetime + +dt_1 = datetime(2025, 1, 14, 13, 45, 39, 132589) +dt_2 = datetime(2025, 3, 16, 13, 45, 39, 132589) + +print(f'Epoch between: {dt_1.timestamp()} - {dt_2.timestamp()}') + +timestamp_filter = Timestamp("last_updated").between(dt_1, dt_2) + +v.set_filter(timestamp_filter) +result_print(index.query(v)) +``` + + Epoch between: 1736880339.132589 - 1742147139.132589 + + + +
vector_distanceusercredit_scoreagejoboffice_locationlast_updated
0derricklow14doctor-122.4194,37.77491741627789
0johnhigh18engineer-122.4194,37.77491741627789
0.158808946609timhigh12dermatologist-122.0839,37.38611739644189
+ + +### Text Filters + +Text filters are filters that are applied to text fields. These filters are applied to the entire text field. For example, if you have a text field that contains the text "The quick brown fox jumps over the lazy dog", a text filter of "quick" will match this text field. + + +```python +from redisvl.query.filter import Text + +# exact match filter -- document must contain the exact word doctor +text_filter = Text("job") == "doctor" + +v.set_filter(text_filter) +result_print(index.query(v)) +``` + + +
vector_distanceusercredit_scoreagejoboffice_locationlast_updated
0derricklow14doctor-122.4194,37.77491741627789
0.266666650772nancyhigh94doctor-122.4194,37.77491710696589
+ + + +```python +# negation -- document must not contain the exact word doctor +negate_text_filter = Text("job") != "doctor" + +v.set_filter(negate_text_filter) +result_print(index.query(v)) +``` + + +
vector_distanceusercredit_scoreagejoboffice_location
0johnhigh18engineer-122.4194,37.7749
0johnhigh18engineer-122.4194,37.7749
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.158808946609timhigh12dermatologist-122.0839,37.3861
0.158808946609timhigh12dermatologist-122.0839,37.3861
0.217882037163taimurlow15CEO-122.0839,37.3861
0.217882037163taimurlow15CEO-122.0839,37.3861
0.653301358223joemedium35dentist-122.0839,37.3861
0.653301358223joemedium35dentist-122.0839,37.3861
+ + + +```python +# wildcard match filter +wildcard_filter = Text("job") % "doct*" + +v.set_filter(wildcard_filter) +result_print(index.query(v)) +``` + + +
vector_distanceusercredit_scoreagejoboffice_location
0derricklow14doctor-122.4194,37.7749
0derricklow14doctor-122.4194,37.7749
0.266666650772nancyhigh94doctor-122.4194,37.7749
0.266666650772nancyhigh94doctor-122.4194,37.7749
+ + + +```python +# fuzzy match filter +fuzzy_match = Text("job") % "%%engine%%" + +v.set_filter(fuzzy_match) +result_print(index.query(v)) +``` + + +
vector_distanceusercredit_scoreagejoboffice_location
0johnhigh18engineer-122.4194,37.7749
0johnhigh18engineer-122.4194,37.7749
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.109129190445tylerhigh100engineer-122.0839,37.3861
+ + + +```python +# conditional -- match documents with job field containing engineer OR doctor +conditional = Text("job") % "engineer|doctor" + +v.set_filter(conditional) +result_print(index.query(v)) +``` + + +
vector_distanceusercredit_scoreagejoboffice_location
0johnhigh18engineer-122.4194,37.7749
0derricklow14doctor-122.4194,37.7749
0johnhigh18engineer-122.4194,37.7749
0derricklow14doctor-122.4194,37.7749
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.266666650772nancyhigh94doctor-122.4194,37.7749
0.266666650772nancyhigh94doctor-122.4194,37.7749
+ + + +```python +# gracefully fallback to "*" filter if empty case +empty_case = Text("job") % "" + +v.set_filter(empty_case) +result_print(index.query(v)) +``` + + +
vector_distanceusercredit_scoreagejoboffice_location
0johnhigh18engineer-122.4194,37.7749
0derricklow14doctor-122.4194,37.7749
0johnhigh18engineer-122.4194,37.7749
0derricklow14doctor-122.4194,37.7749
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.109129190445tylerhigh100engineer-122.0839,37.3861
0.158808946609timhigh12dermatologist-122.0839,37.3861
0.158808946609timhigh12dermatologist-122.0839,37.3861
0.217882037163taimurlow15CEO-122.0839,37.3861
0.217882037163taimurlow15CEO-122.0839,37.3861
+ + +Use raw query strings as input. Below we use the `~` flag to indicate that the full text query is optional. We also choose the BM25 scorer and return document scores along with the result. + + +```python +v.set_filter("(~(@job:engineer))") +v.scorer("BM25").with_scores() + +index.query(v) +``` + + + + + [{'id': 'user_queries_docs:01JMJJHE28ZW4F33ZNRKXRHYCS', + 'score': 1.8181817787737895, + 'vector_distance': '0', + 'user': 'john', + 'credit_score': 'high', + 'age': '18', + 'job': 'engineer', + 'office_location': '-122.4194,37.7749'}, + {'id': 'user_queries_docs:01JMJJHE2899024DYPXT6424N9', + 'score': 0.0, + 'vector_distance': '0', + 'user': 'derrick', + 'credit_score': 'low', + 'age': '14', + 'job': 'doctor', + 'office_location': '-122.4194,37.7749'}, + {'id': 'user_queries_docs:01JMJJPEYCQ89ZQW6QR27J72WT', + 'score': 1.8181817787737895, + 'vector_distance': '0', + 'user': 'john', + 'credit_score': 'high', + 'age': '18', + 'job': 'engineer', + 'office_location': '-122.4194,37.7749'}, + {'id': 'user_queries_docs:01JMJJPEYD544WB1TKDBJ3Z3J9', + 'score': 0.0, + 'vector_distance': '0', + 'user': 'derrick', + 'credit_score': 'low', + 'age': '14', + 'job': 'doctor', + 'office_location': '-122.4194,37.7749'}, + {'id': 'user_queries_docs:01JMJJHE28B5R6T00DH37A7KSJ', + 'score': 1.8181817787737895, + 'vector_distance': '0.109129190445', + 'user': 'tyler', + 'credit_score': 'high', + 'age': '100', + 'job': 'engineer', + 'office_location': '-122.0839,37.3861'}, + {'id': 'user_queries_docs:01JMJJPEYDPF9S5328WHCQN0ND', + 'score': 1.8181817787737895, + 'vector_distance': '0.109129190445', + 'user': 'tyler', + 'credit_score': 'high', + 'age': '100', + 'job': 'engineer', + 'office_location': '-122.0839,37.3861'}, + {'id': 'user_queries_docs:01JMJJHE28G5F943YGWMB1ZX1V', + 'score': 0.0, + 'vector_distance': '0.158808946609', + 'user': 'tim', + 'credit_score': 'high', + 'age': '12', + 'job': 'dermatologist', + 'office_location': '-122.0839,37.3861'}, + {'id': 'user_queries_docs:01JMJJPEYDKA9ARKHRK1D7KPXQ', + 'score': 0.0, + 'vector_distance': '0.158808946609', + 'user': 'tim', + 'credit_score': 'high', + 'age': '12', + 'job': 'dermatologist', + 'office_location': '-122.0839,37.3861'}, + {'id': 'user_queries_docs:01JMJJHE28NR7KF0EZEA433T2J', + 'score': 0.0, + 'vector_distance': '0.217882037163', + 'user': 'taimur', + 'credit_score': 'low', + 'age': '15', + 'job': 'CEO', + 'office_location': '-122.0839,37.3861'}, + {'id': 'user_queries_docs:01JMJJPEYD9EAVGJ2AZ8K9VX7Q', + 'score': 0.0, + 'vector_distance': '0.217882037163', + 'user': 'taimur', + 'credit_score': 'low', + 'age': '15', + 'job': 'CEO', + 'office_location': '-122.0839,37.3861'}] + + + +### Geographic Filters + +Geographic filters are filters that are applied to geographic fields. These filters are used to find results that are within a certain distance of a given point. The distance is specified in kilometers, miles, meters, or feet. A radius can also be specified to find results within a certain radius of a given point. + + +```python +from redisvl.query.filter import Geo, GeoRadius + +# within 10 km of San Francisco office +geo_filter = Geo("office_location") == GeoRadius(-122.4194, 37.7749, 10, "km") + +v.set_filter(geo_filter) +result_print(index.query(v)) +``` + + +
scorevector_distanceusercredit_scoreagejoboffice_location
0.45454544469344740johnhigh18engineer-122.4194,37.7749
0.45454544469344740derricklow14doctor-122.4194,37.7749
0.45454544469344740johnhigh18engineer-122.4194,37.7749
0.45454544469344740derricklow14doctor-122.4194,37.7749
0.45454544469344740.266666650772nancyhigh94doctor-122.4194,37.7749
0.45454544469344740.266666650772nancyhigh94doctor-122.4194,37.7749
+ + + +```python +# within 100 km Radius of San Francisco office +geo_filter = Geo("office_location") == GeoRadius(-122.4194, 37.7749, 100, "km") + +v.set_filter(geo_filter) +result_print(index.query(v)) +``` + + +
scorevector_distanceusercredit_scoreagejoboffice_location
0.45454544469344740johnhigh18engineer-122.4194,37.7749
0.45454544469344740derricklow14doctor-122.4194,37.7749
0.45454544469344740johnhigh18engineer-122.4194,37.7749
0.45454544469344740derricklow14doctor-122.4194,37.7749
0.45454544469344740.109129190445tylerhigh100engineer-122.0839,37.3861
0.45454544469344740.109129190445tylerhigh100engineer-122.0839,37.3861
0.45454544469344740.158808946609timhigh12dermatologist-122.0839,37.3861
0.45454544469344740.158808946609timhigh12dermatologist-122.0839,37.3861
0.45454544469344740.217882037163taimurlow15CEO-122.0839,37.3861
0.45454544469344740.217882037163taimurlow15CEO-122.0839,37.3861
+ + + +```python +# not within 10 km Radius of San Francisco office +geo_filter = Geo("office_location") != GeoRadius(-122.4194, 37.7749, 10, "km") + +v.set_filter(geo_filter) +result_print(index.query(v)) +``` + + +
| score | vector_distance | user | credit_score | age | job | office_location |
|---|---|---|---|---|---|---|
| 0.0 | 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
| 0.0 | 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
| 0.0 | 0.158808946609 | tim | high | 12 | dermatologist | -122.0839,37.3861 |
| 0.0 | 0.158808946609 | tim | high | 12 | dermatologist | -122.0839,37.3861 |
| 0.0 | 0.217882037163 | taimur | low | 15 | CEO | -122.0839,37.3861 |
| 0.0 | 0.217882037163 | taimur | low | 15 | CEO | -122.0839,37.3861 |
| 0.0 | 0.653301358223 | joe | medium | 35 | dentist | -122.0839,37.3861 |
| 0.0 | 0.653301358223 | joe | medium | 35 | dentist | -122.0839,37.3861 |
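Like any other `FilterExpression`, a geographic filter can be cast to a string to inspect the raw Redis query syntax it produces (the same technique is shown in the Raw Redis Query String section later in this guide). A minimal sketch, reusing the filter from above:

```python
from redisvl.query.filter import Geo, GeoRadius

# within 10 km of the San Francisco office
geo_filter = Geo("office_location") == GeoRadius(-122.4194, 37.7749, 10, "km")

# inspect the raw Redis geo query syntax (@field:[lon lat radius unit])
print(str(geo_filter))
```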
+ + +## Combining Filters + +In this example, we combine a tag filter with numeric and timestamp filters: users with a high credit score, aged between 18 and 100, whose records were last updated after a given timestamp. + +### Intersection ("and") + + +```python +t = Tag("credit_score") == "high" +low = Num("age") >= 18 +high = Num("age") <= 100 +ts = Timestamp("last_updated") > datetime(2025, 3, 16, 13, 45, 39, 132589) + +combined = t & low & high & ts + +v = VectorQuery([0.1, 0.1, 0.5], + "user_embedding", + return_fields=["user", "credit_score", "age", "job", "office_location"], + filter_expression=combined) + + +result_print(index.query(v)) +``` + + +
| vector_distance | user | credit_score | age | job | office_location |
|---|---|---|---|---|---|
| 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
+ + +### Union ("or") + +The union of two queries is the set of all results that are returned by either of the two queries. The union of two queries is performed using the `|` operator. + + +```python +low = Num("age") < 18 +high = Num("age") > 93 + +combined = low | high + +v.set_filter(combined) +result_print(index.query(v)) +``` + + +
| vector_distance | user | credit_score | age | job | office_location |
|---|---|---|---|---|---|
| 0 | derrick | low | 14 | doctor | -122.4194,37.7749 |
| 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
| 0.158808946609 | tim | high | 12 | dermatologist | -122.0839,37.3861 |
| 0.217882037163 | taimur | low | 15 | CEO | -122.0839,37.3861 |
| 0.266666650772 | nancy | high | 94 | doctor | -122.4194,37.7749 |
+ + +### Dynamic Combination + +There are often situations where you may or may not want to use a filter in a +given query. As shown above, filters will accept the ``None`` type and revert +to a wildcard filter, essentially returning all results. + +The same goes for filter combinations, which enables rapid reuse of filters in +requests with different parameters as shown below. This removes the need for +a number of "if-then" conditionals to test for the empty case. + + + + +```python +def make_filter(age=None, credit=None, job=None): + flexible_filter = ( + (Num("age") > age) & + (Tag("credit_score") == credit) & + (Text("job") % job) + ) + return flexible_filter + +``` + + +```python +# all parameters +combined = make_filter(age=18, credit="high", job="engineer") +v.set_filter(combined) +result_print(index.query(v)) +``` + + +
| vector_distance | user | credit_score | age | job | office_location |
|---|---|---|---|---|---|
| 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
| 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
+ + + +```python +# just age and credit_score +combined = make_filter(age=18, credit="high") +v.set_filter(combined) +result_print(index.query(v)) +``` + + +
| vector_distance | user | credit_score | age | job | office_location |
|---|---|---|---|---|---|
| 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
| 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
| 0.266666650772 | nancy | high | 94 | doctor | -122.4194,37.7749 |
| 0.266666650772 | nancy | high | 94 | doctor | -122.4194,37.7749 |
+ + + +```python +# just age +combined = make_filter(age=18) +v.set_filter(combined) +result_print(index.query(v)) +``` + + +
| vector_distance | user | credit_score | age | job | office_location |
|---|---|---|---|---|---|
| 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
| 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
| 0.266666650772 | nancy | high | 94 | doctor | -122.4194,37.7749 |
| 0.266666650772 | nancy | high | 94 | doctor | -122.4194,37.7749 |
| 0.653301358223 | joe | medium | 35 | dentist | -122.0839,37.3861 |
| 0.653301358223 | joe | medium | 35 | dentist | -122.0839,37.3861 |
+ + + +```python +# no filters +combined = make_filter() +v.set_filter(combined) +result_print(index.query(v)) +``` + + +
| vector_distance | user | credit_score | age | job | office_location |
|---|---|---|---|---|---|
| 0 | john | high | 18 | engineer | -122.4194,37.7749 |
| 0 | derrick | low | 14 | doctor | -122.4194,37.7749 |
| 0 | john | high | 18 | engineer | -122.4194,37.7749 |
| 0 | derrick | low | 14 | doctor | -122.4194,37.7749 |
| 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
| 0.109129190445 | tyler | high | 100 | engineer | -122.0839,37.3861 |
| 0.158808946609 | tim | high | 12 | dermatologist | -122.0839,37.3861 |
| 0.158808946609 | tim | high | 12 | dermatologist | -122.0839,37.3861 |
| 0.217882037163 | taimur | low | 15 | CEO | -122.0839,37.3861 |
| 0.217882037163 | taimur | low | 15 | CEO | -122.0839,37.3861 |
+ + +## Non-vector Queries + +In some cases, you may not want to run a vector query, but just use a ``FilterExpression`` similar to a SQL query. The ``FilterQuery`` class enables this functionality. It is similar to the ``VectorQuery`` class but solely takes a ``FilterExpression``. + + +```python +from redisvl.query import FilterQuery + +has_low_credit = Tag("credit_score") == "low" + +filter_query = FilterQuery( + return_fields=["user", "credit_score", "age", "job", "location"], + filter_expression=has_low_credit +) + +results = index.query(filter_query) + +result_print(results) +``` + + +
| user | credit_score | age | job |
|---|---|---|---|
| derrick | low | 14 | doctor |
| taimur | low | 15 | CEO |
| derrick | low | 14 | doctor |
| taimur | low | 15 | CEO |
+ + +## Count Queries + +In some cases, you may need to use a ``FilterExpression`` to execute a ``CountQuery`` that simply returns the number of entities that match the filter. It is similar to the ``FilterQuery`` class but does not return the values of the underlying data. + + +```python +from redisvl.query import CountQuery + +has_low_credit = Tag("credit_score") == "low" + +filter_query = CountQuery(filter_expression=has_low_credit) + +count = index.query(filter_query) + +print(f"{count} records match the filter expression {str(has_low_credit)} for the given index.") +``` + + 4 records match the filter expression @credit_score:{low} for the given index. + + +## Range Queries + +Range queries are a useful way to perform a vector search where only results within a vector ``distance_threshold`` are returned. This enables the user to find all records within their dataset that are similar to a query vector, where "similar" is defined by a quantitative value. + + +```python +from redisvl.query import RangeQuery + +range_query = RangeQuery( + vector=[0.1, 0.1, 0.5], + vector_field_name="user_embedding", + return_fields=["user", "credit_score", "age", "job", "location"], + distance_threshold=0.2 +) + +# same as the vector query or filter query +results = index.query(range_query) + +result_print(results) +``` + + +
| vector_distance | user | credit_score | age | job |
|---|---|---|---|---|
| 0 | john | high | 18 | engineer |
| 0 | derrick | low | 14 | doctor |
| 0 | john | high | 18 | engineer |
| 0 | derrick | low | 14 | doctor |
| 0.109129190445 | tyler | high | 100 | engineer |
| 0.109129190445 | tyler | high | 100 | engineer |
| 0.158808946609 | tim | high | 12 | dermatologist |
| 0.158808946609 | tim | high | 12 | dermatologist |
+ + +We can also change the distance threshold of the query object between uses if we like. Here we will set ``distance_threshold`` to ``0.1``. This means that the query will return all matches that are within a vector distance of 0.1 from the query vector. This is a small distance, so we expect to get fewer matches than before. + + +```python +range_query.set_distance_threshold(0.1) + +result_print(index.query(range_query)) +``` + + +
| vector_distance | user | credit_score | age | job |
|---|---|---|---|---|
| 0 | john | high | 18 | engineer |
| 0 | derrick | low | 14 | doctor |
| 0 | john | high | 18 | engineer |
| 0 | derrick | low | 14 | doctor |
+ + +Range queries can also be used with filters like any other query type. The following limits the results to only include records with a ``job`` of ``engineer`` while also being within the vector range (aka distance). + + +```python +is_engineer = Text("job") == "engineer" + +range_query.set_filter(is_engineer) + +result_print(index.query(range_query)) +``` + + +
| vector_distance | user | credit_score | age | job |
|---|---|---|---|---|
| 0 | john | high | 18 | engineer |
| 0 | john | high | 18 | engineer |
+ + +## Advanced Query Modifiers + +See all modifier options available on the query API docs: https://redis.io/docs/latest/integrate/redisvl/api/query + + +```python +# Sort by a different field and change dialect +v = VectorQuery( + vector=[0.1, 0.1, 0.5], + vector_field_name="user_embedding", + return_fields=["user", "credit_score", "age", "job", "office_location"], + num_results=5, + filter_expression=is_engineer +).sort_by("age", asc=False).dialect(3) + +result = index.query(v) +result_print(result) +``` + + +
| vector_distance | age | user | credit_score | job | office_location |
|---|---|---|---|---|---|
| 0.109129190445 | 100 | tyler | high | engineer | -122.0839,37.3861 |
| 0.109129190445 | 100 | tyler | high | engineer | -122.0839,37.3861 |
| 0 | 18 | john | high | engineer | -122.4194,37.7749 |
| 0 | 18 | john | high | engineer | -122.4194,37.7749 |
+ + +### Raw Redis Query String + +Sometimes it's helpful to convert these classes into their raw Redis query strings. + + +```python +# check out the complex query from above +str(v) +``` + + + + + '@job:("engineer")=>[KNN 5 @user_embedding $vector AS vector_distance] RETURN 6 user credit_score age job office_location vector_distance SORTBY age DESC DIALECT 3 LIMIT 0 5' + + + + +```python +t = Tag("credit_score") == "high" + +str(t) +``` + + + + + '@credit_score:{high}' + + + + +```python +t = Tag("credit_score") == "high" +low = Num("age") >= 18 +high = Num("age") <= 100 + +combined = t & low & high + +str(combined) +``` + + + + + '((@credit_score:{high} @age:[18 +inf]) @age:[-inf 100])' + + + +The RedisVL `SearchIndex` class exposes a `search()` method which is a simple wrapper around the `FT.SEARCH` API. +Provide any valid Redis query string. + + +```python +results = index.search(str(t)) +for r in results.docs: + print(r.__dict__) +``` + + {'id': 'user_queries_docs:01JMJJHE28G5F943YGWMB1ZX1V', 'payload': None, 'user': 'tim', 'age': '12', 'job': 'dermatologist', 'credit_score': 'high', 'office_location': '-122.0839,37.3861', 'user_embedding': '>>\x00\x00\x00?'} + {'id': 'user_queries_docs:01JMJJHE28ZW4F33ZNRKXRHYCS', 'payload': None, 'user': 'john', 'age': '18', 'job': 'engineer', 'credit_score': 'high', 'office_location': '-122.4194,37.7749', 'user_embedding': '==\x00\x00\x00?'} + {'id': 'user_queries_docs:01JMJJHE28B5R6T00DH37A7KSJ', 'payload': None, 'user': 'tyler', 'age': '100', 'job': 'engineer', 'credit_score': 'high', 'office_location': '-122.0839,37.3861', 'user_embedding': '=>\x00\x00\x00?'} + {'id': 'user_queries_docs:01JMJJHE28EX13NEE7BGBM8FH3', 'payload': None, 'user': 'nancy', 'age': '94', 'job': 'doctor', 'credit_score': 'high', 'office_location': '-122.4194,37.7749', 'user_embedding': '333?=\x00\x00\x00?'} + {'id': 'user_queries_docs:01JMJJPEYCQ89ZQW6QR27J72WT', 'payload': None, 'user': 'john', 'age': '18', 'job': 'engineer', 'credit_score': 'high', 'office_location': '-122.4194,37.7749', 'user_embedding': '==\x00\x00\x00?'} + {'id': 'user_queries_docs:01JMJJPEYDAN0M3V7EQEVPS6HX', 'payload': None, 'user': 'nancy', 'age': '94', 'job': 'doctor', 'credit_score': 'high', 'office_location': '-122.4194,37.7749', 'user_embedding': '333?=\x00\x00\x00?'} + {'id': 'user_queries_docs:01JMJJPEYDPF9S5328WHCQN0ND', 'payload': None, 'user': 'tyler', 'age': '100', 'job': 'engineer', 'credit_score': 'high', 'office_location': '-122.0839,37.3861', 'user_embedding': '=>\x00\x00\x00?'} + {'id': 'user_queries_docs:01JMJJPEYDKA9ARKHRK1D7KPXQ', 'payload': None, 'user': 'tim', 'age': '12', 'job': 'dermatologist', 'credit_score': 'high', 'office_location': '-122.0839,37.3861', 'user_embedding': '>>\x00\x00\x00?'} + + + +```python +# Cleanup +index.delete() +``` +--- +linkTitle: Caching embeddings +title: Caching Embeddings +type: integration +weight: 10 +--- + + +RedisVL provides an `EmbeddingsCache` that makes it easy to store and retrieve embedding vectors with their associated text and metadata. This cache is particularly useful for applications that frequently compute the same embeddings, enabling you to: + +- Reduce computational costs by reusing previously computed embeddings +- Decrease latency in applications that rely on embeddings +- Store additional metadata alongside embeddings for richer applications + +This notebook will show you how to use the `EmbeddingsCache` effectively in your applications. + +## Setup + +First, let's import the necessary libraries. 
We'll use a text embedding model from HuggingFace to generate our embeddings. + + +```python +import os +import time +import numpy as np + +# Disable tokenizers parallelism to avoid deadlocks +os.environ["TOKENIZERS_PARALLELISM"] = "False" + +# Import the EmbeddingsCache +from redisvl.extensions.cache.embeddings import EmbeddingsCache +from redisvl.utils.vectorize import HFTextVectorizer +``` + +Let's create a vectorizer to generate embeddings for our texts: + + +```python +# Initialize the vectorizer +vectorizer = HFTextVectorizer( + model="redis/langcache-embed-v1", + cache_folder=os.getenv("SENTENCE_TRANSFORMERS_HOME") +) +``` + + /Users/tyler.hutcherson/Library/Caches/pypoetry/virtualenvs/redisvl-VnTEShF2-py3.13/lib/python3.13/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html + from .autonotebook import tqdm as notebook_tqdm + Compiling the model with `torch.compile` and using a `torch.mps` device is not supported. Falling back to non-compiled mode. + + +## Initializing the EmbeddingsCache + +Now let's initialize our `EmbeddingsCache`. The cache requires a Redis connection to store the embeddings and their associated data. + + +```python +# Initialize the embeddings cache +cache = EmbeddingsCache( + name="embedcache", # name prefix for Redis keys + redis_url="redis://localhost:6379", # Redis connection URL + ttl=None # Optional TTL in seconds (None means no expiration) +) +``` + +## Basic Usage + +### Storing Embeddings + +Let's store some text with its embedding in the cache. The `set` method takes the following parameters: +- `text`: The input text that was embedded +- `model_name`: The name of the embedding model used +- `embedding`: The embedding vector +- `metadata`: Optional metadata associated with the embedding +- `ttl`: Optional time-to-live override for this specific entry + + +```python +# Text to embed +text = "What is machine learning?" +model_name = "redis/langcache-embed-v1" + +# Generate the embedding +embedding = vectorizer.embed(text) + +# Optional metadata +metadata = {"category": "ai", "source": "user_query"} + +# Store in cache +key = cache.set( + text=text, + model_name=model_name, + embedding=embedding, + metadata=metadata +) + +print(f"Stored with key: {key[:15]}...") +``` + + Stored with key: embedcache:909f... + + +### Retrieving Embeddings + +To retrieve an embedding from the cache, use the `get` method with the original text and model name: + + +```python +# Retrieve from cache + +if result := cache.get(text=text, model_name=model_name): + print(f"Found in cache: {result['text']}") + print(f"Model: {result['model_name']}") + print(f"Metadata: {result['metadata']}") + print(f"Embedding shape: {np.array(result['embedding']).shape}") +else: + print("Not found in cache.") +``` + + Found in cache: What is machine learning? + Model: redis/langcache-embed-v1 + Metadata: {'category': 'ai', 'source': 'user_query'} + Embedding shape: (768,) + + +### Checking Existence + +You can check if an embedding exists in the cache without retrieving it using the `exists` method: + + +```python +# Check if existing text is in cache +exists = cache.exists(text=text, model_name=model_name) +print(f"First query exists in cache: {exists}") + +# Check if a new text is in cache +new_text = "What is deep learning?" 
+exists = cache.exists(text=new_text, model_name=model_name) +print(f"New query exists in cache: {exists}") +``` + + First query exists in cache: True + New query exists in cache: False + + +### Removing Entries + +To remove an entry from the cache, use the `drop` method: + + +```python +# Remove from cache +cache.drop(text=text, model_name=model_name) + +# Verify it's gone +exists = cache.exists(text=text, model_name=model_name) +print(f"After dropping: {exists}") +``` + + After dropping: False + + +## Advanced Usage + +### Key-Based Operations + +The `EmbeddingsCache` also provides methods that work directly with Redis keys, which can be useful for advanced use cases: + + +```python +# Store an entry again +key = cache.set( + text=text, + model_name=model_name, + embedding=embedding, + metadata=metadata +) +print(f"Stored with key: {key[:15]}...") + +# Check existence by key +exists_by_key = cache.exists_by_key(key) +print(f"Exists by key: {exists_by_key}") + +# Retrieve by key +result_by_key = cache.get_by_key(key) +print(f"Retrieved by key: {result_by_key['text']}") + +# Drop by key +cache.drop_by_key(key) +``` + + Stored with key: embedcache:909f... + Exists by key: True + Retrieved by key: What is machine learning? + + +### Batch Operations + +When working with multiple embeddings, batch operations can significantly improve performance by reducing network roundtrips. The `EmbeddingsCache` provides methods prefixed with `m` (for "multi") that handle batches efficiently. + + +```python +# Create multiple embeddings +texts = [ + "What is machine learning?", + "How do neural networks work?", + "What is deep learning?" +] +embeddings = [vectorizer.embed(t) for t in texts] + +# Prepare batch items as dictionaries +batch_items = [ + { + "text": texts[0], + "model_name": model_name, + "embedding": embeddings[0], + "metadata": {"category": "ai", "type": "question"} + }, + { + "text": texts[1], + "model_name": model_name, + "embedding": embeddings[1], + "metadata": {"category": "ai", "type": "question"} + }, + { + "text": texts[2], + "model_name": model_name, + "embedding": embeddings[2], + "metadata": {"category": "ai", "type": "question"} + } +] + +# Store multiple embeddings in one operation +keys = cache.mset(batch_items) +print(f"Stored {len(keys)} embeddings with batch operation") + +# Check if multiple embeddings exist in one operation +exist_results = cache.mexists(texts, model_name) +print(f"All embeddings exist: {all(exist_results)}") + +# Retrieve multiple embeddings in one operation +results = cache.mget(texts, model_name) +print(f"Retrieved {len(results)} embeddings in one operation") + +# Delete multiple embeddings in one operation +cache.mdrop(texts, model_name) + +# Alternative: key-based batch operations +# cache.mget_by_keys(keys) # Retrieve by keys +# cache.mexists_by_keys(keys) # Check existence by keys +# cache.mdrop_by_keys(keys) # Delete by keys +``` + + Stored 3 embeddings with batch operation + All embeddings exist: True + Retrieved 3 embeddings in one operation + + +Batch operations are particularly beneficial when working with large numbers of embeddings. They provide the same functionality as individual operations but with better performance by reducing network roundtrips. + +For asynchronous applications, async versions of all batch methods are also available with the `am` prefix (e.g., `amset`, `amget`, `amexists`, `amdrop`). 
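For illustration, here is a minimal sketch of those async batch methods, assuming they mirror the signatures of their synchronous counterparts and that the `cache`, `batch_items`, `texts`, and `model_name` variables from the cell above are still in scope:

```python
async def async_batch_demo():
    # Store, check, fetch, and delete the batch without blocking the event loop
    keys = await cache.amset(batch_items)
    exist = await cache.amexists(texts, model_name)
    results = await cache.amget(texts, model_name)
    await cache.amdrop(texts, model_name)
    print(f"Stored {len(keys)}, all exist: {all(exist)}, retrieved {len(results)}")

await async_batch_demo()
```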
+ +### Working with TTL (Time-To-Live) + +You can set a global TTL when initializing the cache, or specify TTL for individual entries: + + +```python +# Create a cache with a default 5-second TTL +ttl_cache = EmbeddingsCache( + name="ttl_cache", + redis_url="redis://localhost:6379", + ttl=5 # 5 second TTL +) + +# Store an entry +key = ttl_cache.set( + text=text, + model_name=model_name, + embedding=embedding +) + +# Check if it exists +exists = ttl_cache.exists_by_key(key) +print(f"Immediately after setting: {exists}") + +# Wait for it to expire +time.sleep(6) + +# Check again +exists = ttl_cache.exists_by_key(key) +print(f"After waiting: {exists}") +``` + + Immediately after setting: True + After waiting: False + + +You can also override the default TTL for individual entries: + + +```python +# Store an entry with a custom 1-second TTL +key1 = ttl_cache.set( + text="Short-lived entry", + model_name=model_name, + embedding=embedding, + ttl=1 # Override with 1 second TTL +) + +# Store another entry with the default TTL (5 seconds) +key2 = ttl_cache.set( + text="Default TTL entry", + model_name=model_name, + embedding=embedding + # No TTL specified = uses the default 5 seconds +) + +# Wait for 2 seconds +time.sleep(2) + +# Check both entries +exists1 = ttl_cache.exists_by_key(key1) +exists2 = ttl_cache.exists_by_key(key2) + +print(f"Entry with custom TTL after 2 seconds: {exists1}") +print(f"Entry with default TTL after 2 seconds: {exists2}") + +# Cleanup +ttl_cache.drop_by_key(key2) +``` + + Entry with custom TTL after 2 seconds: False + Entry with default TTL after 2 seconds: True + + +## Async Support + +The `EmbeddingsCache` provides async versions of all methods for use in async applications. The async methods are prefixed with `a` (e.g., `aset`, `aget`, `aexists`, `adrop`). + + +```python +async def async_cache_demo(): + # Store an entry asynchronously + key = await cache.aset( + text="Async embedding", + model_name=model_name, + embedding=embedding, + metadata={"async": True} + ) + + # Check if it exists + exists = await cache.aexists_by_key(key) + print(f"Async set successful? {exists}") + + # Retrieve it + result = await cache.aget_by_key(key) + success = result is not None and result["text"] == "Async embedding" + print(f"Async get successful? {success}") + + # Remove it + await cache.adrop_by_key(key) + +# Run the async demo +await async_cache_demo() +``` + + Async set successful? True + Async get successful? True + + +## Real-World Example + +Let's build a simple embeddings caching system for a text classification task. We'll check the cache before computing new embeddings to save computation time. + + +```python +# Create a fresh cache for this example +example_cache = EmbeddingsCache( + name="example_cache", + redis_url="redis://localhost:6379", + ttl=3600 # 1 hour TTL +) + +vectorizer = HFTextVectorizer( + model=model_name, + cache=example_cache, + cache_folder=os.getenv("SENTENCE_TRANSFORMERS_HOME") +) + +# Simulate processing a stream of queries +queries = [ + "What is artificial intelligence?", + "How does machine learning work?", + "What is artificial intelligence?", # Repeated query + "What are neural networks?", + "How does machine learning work?" 
# Repeated query +] + +# Process the queries and track statistics +total_queries = 0 +cache_hits = 0 + +for query in queries: + total_queries += 1 + + # Check cache before computing + before = example_cache.exists(text=query, model_name=model_name) + if before: + cache_hits += 1 + + # Get embedding (will compute or use cache) + embedding = vectorizer.embed(query) + +# Report statistics +cache_misses = total_queries - cache_hits +hit_rate = (cache_hits / total_queries) * 100 + +print("\nStatistics:") +print(f"Total queries: {total_queries}") +print(f"Cache hits: {cache_hits}") +print(f"Cache misses: {cache_misses}") +print(f"Cache hit rate: {hit_rate:.1f}%") + +# Cleanup +for query in set(queries): # Use set to get unique queries + example_cache.drop(text=query, model_name=model_name) +``` + + + Statistics: + Total queries: 5 + Cache hits: 2 + Cache misses: 3 + Cache hit rate: 40.0% + + +## Performance Benchmark + +Let's run benchmarks to compare the performance of embedding with and without caching, as well as batch versus individual operations. + + +```python +# Text to use for benchmarking +benchmark_text = "This is a benchmark text to measure the performance of embedding caching." + +# Create a fresh cache for benchmarking +benchmark_cache = EmbeddingsCache( + name="benchmark_cache", + redis_url="redis://localhost:6379", + ttl=3600 # 1 hour TTL +) +vectorizer.cache = benchmark_cache + +# Number of iterations for the benchmark +n_iterations = 10 + +# Benchmark without caching +print("Benchmarking without caching:") +start_time = time.time() +for _ in range(n_iterations): + embedding = vectorizer.embed(text, skip_cache=True) +no_cache_time = time.time() - start_time +print(f"Time taken without caching: {no_cache_time:.4f} seconds") +print(f"Average time per embedding: {no_cache_time/n_iterations:.4f} seconds") + +# Benchmark with caching +print("\nBenchmarking with caching:") +start_time = time.time() +for _ in range(n_iterations): + embedding = vectorizer.embed(text) +cache_time = time.time() - start_time +print(f"Time taken with caching: {cache_time:.4f} seconds") +print(f"Average time per embedding: {cache_time/n_iterations:.4f} seconds") + +# Compare performance +speedup = no_cache_time / cache_time +latency_reduction = (no_cache_time/n_iterations) - (cache_time/n_iterations) +print(f"\nPerformance comparison:") +print(f"Speedup with caching: {speedup:.2f}x faster") +print(f"Time saved: {no_cache_time - cache_time:.4f} seconds ({(1 - cache_time/no_cache_time) * 100:.1f}%)") +print(f"Latency reduction: {latency_reduction:.4f} seconds per query") +``` + + Benchmarking without caching: + Time taken without caching: 0.4735 seconds + Average time per embedding: 0.0474 seconds + + Benchmarking with caching: + Time taken with caching: 0.0663 seconds + Average time per embedding: 0.0066 seconds + + Performance comparison: + Speedup with caching: 7.14x faster + Time saved: 0.4073 seconds (86.0%) + Latency reduction: 0.0407 seconds per query + + +## Common Use Cases for Embedding Caching + +Embedding caching is particularly useful in the following scenarios: + +1. **Search applications**: Cache embeddings for frequently searched queries to reduce latency +2. **Content recommendation systems**: Cache embeddings for content items to speed up similarity calculations +3. **API services**: Reduce costs and improve response times when generating embeddings through paid APIs +4. **Batch processing**: Speed up processing of datasets that contain duplicate texts +5. 
**Chatbots and virtual assistants**: Cache embeddings for common user queries to provide faster responses +6. **Development** workflows + +## Cleanup + +Let's clean up our caches to avoid leaving data in Redis: + + +```python +# Clean up all caches +cache.clear() +ttl_cache.clear() +example_cache.clear() +benchmark_cache.clear() +``` + +## Summary + +The `EmbeddingsCache` provides an efficient way to store and retrieve embeddings with their associated text and metadata. Key features include: + +- Simple API for storing and retrieving individual embeddings (`set`/`get`) +- Batch operations for working with multiple embeddings efficiently (`mset`/`mget`/`mexists`/`mdrop`) +- Support for metadata storage alongside embeddings +- Configurable time-to-live (TTL) for cache entries +- Key-based operations for advanced use cases +- Async support for use in asynchronous applications +- Significant performance improvements (15-20x faster with batch operations) + +By using the `EmbeddingsCache`, you can reduce computational costs and improve the performance of applications that rely on embeddings. +--- +linkTitle: User guides +title: User Guides +type: integration +weight: 4 +hideListLinks: true +--- + + +User guides provide helpful resources for using RedisVL and its different components. + + + +* [Getting Started with RedisVL](getting_started/) + * [Define an `IndexSchema`](getting_started/#define-an-indexschema) + * [Sample Dataset Preparation](getting_started/#sample-dataset-preparation) + * [Create a `SearchIndex`](getting_started/#create-a-searchindex) + * [Inspect with the `rvl` CLI](getting_started/#inspect-with-the-rvl-cli) + * [Load Data to `SearchIndex`](getting_started/#load-data-to-searchindex) + * [Creating `VectorQuery` Objects](getting_started/#creating-vectorquery-objects) + * [Using an Asynchronous Redis Client](getting_started/#using-an-asynchronous-redis-client) + * [Updating a schema](getting_started/#updating-a-schema) + * [Check Index Stats](getting_started/#check-index-stats) + * [Cleanup](getting_started/#cleanup) +* [Querying with RedisVL](hybrid_queries/) + * [Hybrid Queries](hybrid_queries/#hybrid-queries) + * [Combining Filters](hybrid_queries/#combining-filters) + * [Non-vector Queries](hybrid_queries/#non-vector-queries) + * [Count Queries](hybrid_queries/#count-queries) + * [Range Queries](hybrid_queries/#range-queries) + * [Advanced Query Modifiers](hybrid_queries/#advanced-query-modifiers) +* [Semantic Caching for LLMs](llmcache/) + * [Initializing `SemanticCache`](llmcache/#initializing-semanticcache) + * [Basic Cache Usage](llmcache/#basic-cache-usage) + * [Customize the Distance Threshhold](llmcache/#customize-the-distance-threshhold) + * [Utilize TTL](llmcache/#utilize-ttl) + * [Simple Performance Testing](llmcache/#simple-performance-testing) + * [Cache Access Controls, Tags & Filters](llmcache/#cache-access-controls-tags-filters) +* [Caching Embeddings](embeddings_cache/) + * [Setup](embeddings_cache/#setup) + * [Initializing the EmbeddingsCache](embeddings_cache/#initializing-the-embeddingscache) + * [Basic Usage](embeddings_cache/#basic-usage) + * [Advanced Usage](embeddings_cache/#advanced-usage) + * [Async Support](embeddings_cache/#async-support) + * [Real-World Example](embeddings_cache/#real-world-example) + * [Performance Benchmark](embeddings_cache/#performance-benchmark) + * [Common Use Cases for Embedding Caching](embeddings_cache/#common-use-cases-for-embedding-caching) + * [Cleanup](embeddings_cache/#cleanup) + * [Summary](embeddings_cache/#summary) 
+* [Vectorizers](vectorizers/) + * [Creating Text Embeddings](vectorizers/#creating-text-embeddings) + * [Search with Provider Embeddings](vectorizers/#search-with-provider-embeddings) + * [Selecting your float data type](vectorizers/#selecting-your-float-data-type) +* [Hash vs JSON Storage](hash_vs_json/) + * [Hash or JSON – how to choose?](hash_vs_json/#hash-or-json-how-to-choose) + * [Cleanup](hash_vs_json/#cleanup) +* [Working with nested data in JSON](hash_vs_json/#working-with-nested-data-in-json) + * [Full JSON Path support](hash_vs_json/#full-json-path-support) + * [As an example:](hash_vs_json/#as-an-example) +* [Cleanup](hash_vs_json/#id1) +* [Rerankers](rerankers/) + * [Simple Reranking](rerankers/#simple-reranking) +* [LLM Message History](message_history/) + * [Managing multiple users and conversations](message_history/#managing-multiple-users-and-conversations) + * [Semantic message history](message_history/#semantic-message-history) + * [Conversation control](message_history/#conversation-control) +* [Semantic Routing](semantic_router/) + * [Define the Routes](semantic_router/#define-the-routes) + * [Initialize the SemanticRouter](semantic_router/#initialize-the-semanticrouter) + * [Simple routing](semantic_router/#simple-routing) + * [Update the routing config](semantic_router/#update-the-routing-config) + * [Router serialization](semantic_router/#router-serialization) +* [Add route references](semantic_router/#add-route-references) +* [Get route references](semantic_router/#get-route-references) +* [Delete route references](semantic_router/#delete-route-references) + * [Clean up the router](semantic_router/#clean-up-the-router) +* [Threshold Optimization](threshold_optimization/) +* [CacheThresholdOptimizer](threshold_optimization/#cachethresholdoptimizer) + * [Define test_data and optimize](threshold_optimization/#define-test-data-and-optimize) +* [RouterThresholdOptimizer](threshold_optimization/#routerthresholdoptimizer) + * [Define the routes](threshold_optimization/#define-the-routes) + * [Initialize the SemanticRouter](threshold_optimization/#initialize-the-semanticrouter) + * [Provide test_data](threshold_optimization/#provide-test-data) + * [Optimize](threshold_optimization/#optimize) + * [Test it out](threshold_optimization/#test-it-out) + * [Cleanup](threshold_optimization/#cleanup) +* [Release Guides](release_guide/) + * [0.5.1 Feature Overview](release_guide/0_5_0_release/) + * [HybridQuery class](release_guide/0_5_0_release/#hybridquery-class) + * [TextQueries](release_guide/0_5_0_release/#textqueries) + * [Threshold optimization](release_guide/0_5_0_release/#threshold-optimization) + * [Schema validation](release_guide/0_5_0_release/#schema-validation) + * [Timestamp filters](release_guide/0_5_0_release/#timestamp-filters) + * [Batch search](release_guide/0_5_0_release/#batch-search) + * [Vector normalization](release_guide/0_5_0_release/#vector-normalization) + * [Hybrid policy on knn with filters](release_guide/0_5_0_release/#hybrid-policy-on-knn-with-filters) +--- +linkTitle: Threshold optimization +title: Threshold Optimization +type: integration +weight: 09 +--- + + +After setting up `SemanticRouter` or `SemanticCache` it's best to tune the `distance_threshold` to get the most performance out of your system. RedisVL provides helper classes to make this light weight optimization easy. 
+ +**Note:** Threshold optimization relies on `python > 3.9.` + +# CacheThresholdOptimizer + +Let's say you setup the following semantic cache with a distance_threshold of `X` and store the entries: + +- prompt: `what is the capital of france?` response: `paris` +- prompt: `what is the capital of morocco?` response: `rabat` + + +```python +from redisvl.extensions.cache.llm import SemanticCache +from redisvl.utils.vectorize import HFTextVectorizer + +sem_cache = SemanticCache( + name="sem_cache", # underlying search index name + redis_url="redis://localhost:6379", # redis connection url string + distance_threshold=0.5, # semantic cache distance threshold + vectorizer=HFTextVectorizer("redis/langcache-embed-v1") # embedding model +) + +paris_key = sem_cache.store(prompt="what is the capital of france?", response="paris") +rabat_key = sem_cache.store(prompt="what is the capital of morocco?", response="rabat") + +``` + + /Users/justin.cechmanek/.pyenv/versions/3.13/envs/redisvl-dev/lib/python3.13/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html + from .autonotebook import tqdm as notebook_tqdm + + + 16:16:11 sentence_transformers.SentenceTransformer INFO Use pytorch device_name: mps + 16:16:11 sentence_transformers.SentenceTransformer INFO Load pretrained SentenceTransformer: redis/langcache-embed-v1 + + + Batches: 0%| | 0/1 [00:00=3.8) environment using the `pip` command: + +```shell +pip install redisvl +``` + +Then make sure to have a Redis instance with the Redis Query Engine features enabled on Redis Cloud or locally in docker with Redis Stack: + +```shell +docker run -d --name redis -p 6379:6379 -p 8001:8001 redis/redis-stack:latest +``` + +After running the previous command, the Redis Insight GUI will be available at http://localhost:8001. +--- +linkTitle: Filter +title: Filter +type: integration +--- + + + + +## FilterExpression + +### `class FilterExpression(_filter=None, operator=None, left=None, right=None)` + +A FilterExpression is a logical combination of filters in RedisVL. + +FilterExpressions can be combined using the & and | operators to create +complex expressions that evaluate to the Redis Query language. + +This presents an interface by which users can create complex queries +without having to know the Redis Query language. + +```python +from redisvl.query.filter import Tag, Num + +brand_is_nike = Tag("brand") == "nike" +price_is_over_100 = Num("price") < 100 +f = brand_is_nike & price_is_over_100 + +print(str(f)) + +>> (@brand:{nike} @price:[-inf (100)]) +``` + +This can be combined with the VectorQuery class to create a query: + +```python +from redisvl.query import VectorQuery + +v = VectorQuery( + vector=[0.1, 0.1, 0.5, ...], + vector_field_name="product_embedding", + return_fields=["product_id", "brand", "price"], + filter_expression=f, +) +``` + +#### `NOTE` +Filter expressions are typically not called directly. Instead they are +built by combining filter statements using the & and | operators. + +* **Parameters:** + * **\_filter** (*str* *|* *None*) + * **operator** (*FilterOperator* *|* *None*) + * **left** ([FilterExpression](#filterexpression) *|* *None*) + * **right** ([FilterExpression](#filterexpression) *|* *None*) + +## Tag + +### `class Tag(field)` + +A Tag filter can be applied to Tag fields + +* **Parameters:** + **field** (*str*) + +#### `__eq__(other)` + +Create a Tag equality filter expression. 
+ +* **Parameters:** + **other** (*Union* *[* *List* *[* *str* *]* *,* *str* *]*) – The tag(s) to filter on. +* **Return type:** + [FilterExpression](#filterexpression) + +```python +from redisvl.query.filter import Tag + +f = Tag("brand") == "nike" +``` + +#### `__ne__(other)` + +Create a Tag inequality filter expression. + +* **Parameters:** + **other** (*Union* *[* *List* *[* *str* *]* *,* *str* *]*) – The tag(s) to filter on. +* **Return type:** + [FilterExpression](#filterexpression) + +```python +from redisvl.query.filter import Tag +f = Tag("brand") != "nike" +``` + +#### `__str__()` + +Return the Redis Query string for the Tag filter + +* **Return type:** + str + +## Text + +### `class Text(field)` + +A Text is a FilterField representing a text field in a Redis index. + +* **Parameters:** + **field** (*str*) + +#### `__eq__(other)` + +Create a Text equality filter expression. These expressions yield +filters that enforce an exact match on the supplied term(s). + +* **Parameters:** + **other** (*str*) – The text value to filter on. +* **Return type:** + [FilterExpression](#filterexpression) + +```python +from redisvl.query.filter import Text + +f = Text("job") == "engineer" +``` + +#### `__mod__(other)` + +Create a Text “LIKE” filter expression. A flexible expression that +yields filters that can use a variety of additional operators like +wildcards (\*), fuzzy matches (%%), or combinatorics (|) of the supplied +term(s). + +* **Parameters:** + **other** (*str*) – The text value to filter on. +* **Return type:** + [FilterExpression](#filterexpression) + +```python +from redisvl.query.filter import Text + +f = Text("job") % "engine*" # suffix wild card match +f = Text("job") % "%%engine%%" # fuzzy match w/ Levenshtein Distance +f = Text("job") % "engineer|doctor" # contains either term in field +f = Text("job") % "engineer doctor" # contains both terms in field +``` + +#### `__ne__(other)` + +Create a Text inequality filter expression. These expressions yield +negated filters on exact matches on the supplied term(s). Opposite of an +equality filter expression. + +* **Parameters:** + **other** (*str*) – The text value to filter on. +* **Return type:** + [FilterExpression](#filterexpression) + +```python +from redisvl.query.filter import Text + +f = Text("job") != "engineer" +``` + +#### `__str__()` + +Return the Redis Query string for the Text filter + +* **Return type:** + str + +## Num + +### `class Num(field)` + +A Num is a FilterField representing a numeric field in a Redis index. + +* **Parameters:** + **field** (*str*) + +#### `__eq__(other)` + +Create a Numeric equality filter expression. + +* **Parameters:** + **other** (*int*) – The value to filter on. +* **Return type:** + [FilterExpression](#filterexpression) + +```python +from redisvl.query.filter import Num +f = Num("zipcode") == 90210 +``` + +#### `__ge__(other)` + +Create a Numeric greater than or equal to filter expression. + +* **Parameters:** + **other** (*int*) – The value to filter on. +* **Return type:** + [FilterExpression](#filterexpression) + +```python +from redisvl.query.filter import Num + +f = Num("age") >= 18 +``` + +#### `__gt__(other)` + +Create a Numeric greater than filter expression. + +* **Parameters:** + **other** (*int*) – The value to filter on. +* **Return type:** + [FilterExpression](#filterexpression) + +```python +from redisvl.query.filter import Num + +f = Num("age") > 18 +``` + +#### `__le__(other)` + +Create a Numeric less than or equal to filter expression. 
+ +* **Parameters:** + **other** (*int*) – The value to filter on. +* **Return type:** + [FilterExpression](#filterexpression) + +```python +from redisvl.query.filter import Num + +f = Num("age") <= 18 +``` + +#### `__lt__(other)` + +Create a Numeric less than filter expression. + +* **Parameters:** + **other** (*int*) – The value to filter on. +* **Return type:** + [FilterExpression](#filterexpression) + +```python +from redisvl.query.filter import Num + +f = Num("age") < 18 +``` + +#### `__ne__(other)` + +Create a Numeric inequality filter expression. + +* **Parameters:** + **other** (*int*) – The value to filter on. +* **Return type:** + [FilterExpression](#filterexpression) + +```python +from redisvl.query.filter import Num + +f = Num("zipcode") != 90210 +``` + +#### `__str__()` + +Return the Redis Query string for the Numeric filter + +* **Return type:** + str + +#### `between(start, end, inclusive='both')` + +Operator for searching values between two numeric values. + +* **Parameters:** + * **start** (*int*) + * **end** (*int*) + * **inclusive** (*str*) +* **Return type:** + [FilterExpression](#filterexpression) + +## Geo + +### `class Geo(field)` + +A Geo is a FilterField representing a geographic (lat/lon) field in a +Redis index. + +* **Parameters:** + **field** (*str*) + +#### `__eq__(other)` + +Create a geographic filter within a specified GeoRadius. + +* **Parameters:** + **other** ([GeoRadius](#georadius)) – The geographic spec to filter on. +* **Return type:** + [FilterExpression](#filterexpression) + +```python +from redisvl.query.filter import Geo, GeoRadius + +f = Geo("location") == GeoRadius(-122.4194, 37.7749, 1, unit="m") +``` + +#### `__ne__(other)` + +Create a geographic filter outside of a specified GeoRadius. + +* **Parameters:** + **other** ([GeoRadius](#georadius)) – The geographic spec to filter on. +* **Return type:** + [FilterExpression](#filterexpression) + +```python +from redisvl.query.filter import Geo, GeoRadius + +f = Geo("location") != GeoRadius(-122.4194, 37.7749, 1, unit="m") +``` + +#### `__str__()` + +Return the Redis Query string for the Geo filter + +* **Return type:** + str + +## GeoRadius + +### `class GeoRadius(longitude, latitude, radius=1, unit='km')` + +A GeoRadius is a GeoSpec representing a geographic radius. + +Create a GeoRadius specification (GeoSpec) + +* **Parameters:** + * **longitude** (*float*) – The longitude of the center of the radius. + * **latitude** (*float*) – The latitude of the center of the radius. + * **radius** (*int* *,* *optional*) – The radius of the circle. Defaults to 1. + * **unit** (*str* *,* *optional*) – The unit of the radius. Defaults to “km”. +* **Raises:** + **ValueError** – If the unit is not one of “m”, “km”, “mi”, or “ft”. + +#### `__init__(longitude, latitude, radius=1, unit='km')` + +Create a GeoRadius specification (GeoSpec) + +* **Parameters:** + * **longitude** (*float*) – The longitude of the center of the radius. + * **latitude** (*float*) – The latitude of the center of the radius. + * **radius** (*int* *,* *optional*) – The radius of the circle. Defaults to 1. + * **unit** (*str* *,* *optional*) – The unit of the radius. Defaults to “km”. +* **Raises:** + **ValueError** – If the unit is not one of “m”, “km”, “mi”, or “ft”. 
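The `Num.between` operator documented above has no inline example; a minimal sketch of its usage, following the same pattern as the other `Num` examples on this page, looks like this:

```python
from redisvl.query.filter import Num

# match ages from 18 through 65, inclusive on both ends (the default)
f = Num("age").between(18, 65)
```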
+--- +linkTitle: Schema +title: Schema +type: integration +--- + + +Schema in RedisVL provides a structured format to define index settings and +field configurations using the following three components: + +| Component | Description | +|-------------|------------------------------------------------------------------------------------| +| version | The version of the schema spec. Current supported version is 0.1.0. | +| index | Index specific settings like name, key prefix, key separator, and storage type. | +| fields | Subset of fields within your data to include in the index and any custom settings. | + +## IndexSchema + + + +### `class IndexSchema(*, index, fields={}, version='0.1.0')` + +A schema definition for a search index in Redis, used in RedisVL for +configuring index settings and organizing vector and metadata fields. + +The class offers methods to create an index schema from a YAML file or a +Python dictionary, supporting flexible schema definitions and easy +integration into various workflows. + +An example schema.yaml file might look like this: + +```yaml +version: '0.1.0' + +index: + name: user-index + prefix: user + key_separator: ":" + storage_type: json + +fields: + - name: user + type: tag + - name: credit_score + type: tag + - name: embedding + type: vector + attrs: + algorithm: flat + dims: 3 + distance_metric: cosine + datatype: float32 +``` + +Loading the schema for RedisVL from yaml is as simple as: + +```python +from redisvl.schema import IndexSchema + +schema = IndexSchema.from_yaml("schema.yaml") +``` + +Loading the schema for RedisVL from dict is as simple as: + +```python +from redisvl.schema import IndexSchema + +schema = IndexSchema.from_dict({ + "index": { + "name": "user-index", + "prefix": "user", + "key_separator": ":", + "storage_type": "json", + }, + "fields": [ + {"name": "user", "type": "tag"}, + {"name": "credit_score", "type": "tag"}, + { + "name": "embedding", + "type": "vector", + "attrs": { + "algorithm": "flat", + "dims": 3, + "distance_metric": "cosine", + "datatype": "float32" + } + } + ] +}) +``` + +#### `NOTE` +The fields attribute in the schema must contain unique field names to ensure +correct and unambiguous field references. + +Create a new model by parsing and validating input data from keyword arguments. + +Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be +validated to form a valid model. + +self is explicitly positional-only to allow self as a field name. + +* **Parameters:** + * **index** (*IndexInfo*) + * **fields** (*Dict* *[* *str* *,* *BaseField* *]*) + * **version** (*Literal* *[* *'0.1.0'* *]*) + +#### `add_field(field_inputs)` + +Adds a single field to the index schema based on the specified field +type and attributes. + +This method allows for the addition of individual fields to the schema, +providing flexibility in defining the structure of the index. + +* **Parameters:** + **field_inputs** (*Dict* *[* *str* *,* *Any* *]*) – A field to add. +* **Raises:** + **ValueError** – If the field name or type are not provided or if the name + already exists within the schema. + +```python +# Add a tag field +schema.add_field({"name": "user", "type": "tag}) + +# Add a vector field +schema.add_field({ + "name": "user-embedding", + "type": "vector", + "attrs": { + "dims": 1024, + "algorithm": "flat", + "datatype": "float32" + } +}) +``` + +#### `add_fields(fields)` + +Extends the schema with additional fields. + +This method allows dynamically adding new fields to the index schema. 
It +processes a list of field definitions. + +* **Parameters:** + **fields** (*List* *[* *Dict* *[* *str* *,* *Any* *]* *]*) – A list of fields to add. +* **Raises:** + **ValueError** – If a field with the same name already exists in the + schema. + +```python +schema.add_fields([ + {"name": "user", "type": "tag"}, + {"name": "bio", "type": "text"}, + { + "name": "user-embedding", + "type": "vector", + "attrs": { + "dims": 1024, + "algorithm": "flat", + "datatype": "float32" + } + } +]) +``` + +#### `classmethod from_dict(data)` + +Create an IndexSchema from a dictionary. + +* **Parameters:** + **data** (*Dict* *[* *str* *,* *Any* *]*) – The index schema data. +* **Returns:** + The index schema. +* **Return type:** + [IndexSchema](#indexschema) + +```python +from redisvl.schema import IndexSchema + +schema = IndexSchema.from_dict({ + "index": { + "name": "docs-index", + "prefix": "docs", + "storage_type": "hash", + }, + "fields": [ + { + "name": "doc-id", + "type": "tag" + }, + { + "name": "doc-embedding", + "type": "vector", + "attrs": { + "algorithm": "flat", + "dims": 1536 + } + } + ] +}) +``` + +#### `classmethod from_yaml(file_path)` + +Create an IndexSchema from a YAML file. + +* **Parameters:** + **file_path** (*str*) – The path to the YAML file. +* **Returns:** + The index schema. +* **Return type:** + [IndexSchema](#indexschema) + +```python +from redisvl.schema import IndexSchema +schema = IndexSchema.from_yaml("schema.yaml") +``` + +#### `remove_field(field_name)` + +Removes a field from the schema based on the specified name. + +This method is useful for dynamically altering the schema by removing +existing fields. + +* **Parameters:** + **field_name** (*str*) – The name of the field to be removed. + +#### `to_dict()` + +Serialize the index schema model to a dictionary, handling Enums +and other special cases properly. + +* **Returns:** + The index schema as a dictionary. +* **Return type:** + Dict[str, Any] + +#### `to_yaml(file_path, overwrite=True)` + +Write the index schema to a YAML file. + +* **Parameters:** + * **file_path** (*str*) – The path to the YAML file. + * **overwrite** (*bool*) – Whether to overwrite the file if it already exists. +* **Raises:** + **FileExistsError** – If the file already exists and overwrite is False. +* **Return type:** + None + +#### `property field_names: List[str]` + +A list of field names associated with the index schema. + +* **Returns:** + A list of field names from the schema. +* **Return type:** + List[str] + +#### `fields: Dict[str, BaseField]` + +Fields associated with the search index and their properties + +#### `index: IndexInfo` + +Details of the basic index configurations. + +#### `model_config: ClassVar[ConfigDict] = {}` + +Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. + +#### `version: Literal['0.1.0']` + +Version of the underlying index schema. + +## Defining Fields + +Fields in the schema can be defined in YAML format or as a Python dictionary, specifying a name, type, an optional path, and attributes for customization. + +**YAML Example**: + +```yaml +- name: title + type: text + path: $.document.title + attrs: + weight: 1.0 + no_stem: false + withsuffixtrie: true +``` + +**Python Dictionary Example**: + +```python +{ + "name": "location", + "type": "geo", + "attrs": { + "sortable": true + } +} +``` + +## Supported Field Types and Attributes + +Each field type supports specific attributes that customize its behavior. 
Below are the field types and their available attributes: + +**Text Field Attributes**: + +- weight: Importance of the field in result calculation. +- no_stem: Disables stemming during indexing. +- withsuffixtrie: Optimizes queries by maintaining a suffix trie. +- phonetic_matcher: Enables phonetic matching. +- sortable: Allows sorting on this field. + +**Tag Field Attributes**: + +- separator: Character for splitting text into individual tags. +- case_sensitive: Case sensitivity in tag matching. +- withsuffixtrie: Suffix trie optimization for queries. +- sortable: Enables sorting based on the tag field. + +**Numeric and Geo Field Attributes**: + +- Both numeric and geo fields support the sortable attribute, enabling sorting on these fields. + +**Common Vector Field Attributes**: + +- dims: Dimensionality of the vector. +- algorithm: Indexing algorithm (flat or hnsw). +- datatype: Float datatype of the vector (bfloat16, float16, float32, float64). +- distance_metric: Metric for measuring query relevance (COSINE, L2, IP). + +**HNSW Vector Field Specific Attributes**: + +- m: Max outgoing edges per node in each layer. +- ef_construction: Max edge candidates during build time. +- ef_runtime: Max top candidates during search. +- epsilon: Range search boundary factor. + +Note: +: See fully documented Redis-supported fields and options here: [https://redis.io/commands/ft.create/](https://redis.io/commands/ft.create/) +--- +linkTitle: Semantic router +title: Semantic Router +type: integration +--- + + + + +## Semantic Router + +### `class SemanticRouter(name, routes, vectorizer=None, routing_config=None, redis_client=None, redis_url='redis://localhost:6379', overwrite=False, connection_kwargs={})` + +Semantic Router for managing and querying route vectors. + +Initialize the SemanticRouter. + +* **Parameters:** + * **name** (*str*) – The name of the semantic router. + * **routes** (*List* *[*[Route](#route) *]*) – List of Route objects. + * **vectorizer** (*BaseVectorizer* *,* *optional*) – The vectorizer used to embed route references. Defaults to default HFTextVectorizer. + * **routing_config** ([RoutingConfig](#routingconfig) *,* *optional*) – Configuration for routing behavior. Defaults to the default RoutingConfig. + * **redis_client** (*Optional* *[* *Redis* *]* *,* *optional*) – Redis client for connection. Defaults to None. + * **redis_url** (*str* *,* *optional*) – The redis url. Defaults to redis://localhost:6379. + * **overwrite** (*bool* *,* *optional*) – Whether to overwrite existing index. Defaults to False. + * **connection_kwargs** (*Dict* *[* *str* *,* *Any* *]*) – The connection arguments + for the redis client. Defaults to empty {}. + +#### `add_route_references(route_name, references)` + +Add a reference(s) to an existing route. + +* **Parameters:** + * **router_name** (*str*) – The name of the router. + * **references** (*Union* *[* *str* *,* *List* *[* *str* *]* *]*) – The reference or list of references to add. + * **route_name** (*str*) +* **Returns:** + The list of added references keys. +* **Return type:** + List[str] + +#### `clear()` + +Flush all routes from the semantic router index. + +* **Return type:** + None + +#### `delete()` + +Delete the semantic router index. + +* **Return type:** + None + +#### `delete_route_references(route_name='', reference_ids=[], keys=[])` + +Get references for an existing semantic router route. + +* **Parameters:** + * **Optional** (*keys*) – The name of the router. + * **Optional** – The reference or list of references to delete. 
+ * **Optional** – List of fully qualified keys (prefix:router:reference_id) to delete. + * **route_name** (*str*) + * **reference_ids** (*List* *[* *str* *]*) + * **keys** (*List* *[* *str* *]*) +* **Returns:** + Number of objects deleted +* **Return type:** + int + +#### `classmethod from_dict(data, **kwargs)` + +Create a SemanticRouter from a dictionary. + +* **Parameters:** + **data** (*Dict* *[* *str* *,* *Any* *]*) – The dictionary containing the semantic router data. +* **Returns:** + The semantic router instance. +* **Return type:** + [SemanticRouter](#semanticrouter) +* **Raises:** + **ValueError** – If required data is missing or invalid. + +```python +from redisvl.extensions.router import SemanticRouter +router_data = { + "name": "example_router", + "routes": [{"name": "route1", "references": ["ref1"], "distance_threshold": 0.5}], + "vectorizer": {"type": "openai", "model": "text-embedding-ada-002"}, +} +router = SemanticRouter.from_dict(router_data) +``` + +#### `classmethod from_existing(name, redis_client=None, redis_url='redis://localhost:6379', **kwargs)` + +Return SemanticRouter instance from existing index. + +* **Parameters:** + * **name** (*str*) + * **redis_client** (*Redis* *|* *None*) + * **redis_url** (*str*) +* **Return type:** + [SemanticRouter](#semanticrouter) + +#### `classmethod from_yaml(file_path, **kwargs)` + +Create a SemanticRouter from a YAML file. + +* **Parameters:** + **file_path** (*str*) – The path to the YAML file. +* **Returns:** + The semantic router instance. +* **Return type:** + [SemanticRouter](#semanticrouter) +* **Raises:** + * **ValueError** – If the file path is invalid. + * **FileNotFoundError** – If the file does not exist. + +```python +from redisvl.extensions.router import SemanticRouter +router = SemanticRouter.from_yaml("router.yaml", redis_url="redis://localhost:6379") +``` + +#### `get(route_name)` + +Get a route by its name. + +* **Parameters:** + **route_name** (*str*) – Name of the route. +* **Returns:** + The selected Route object or None if not found. +* **Return type:** + Optional[[Route](#route)] + +#### `get_route_references(route_name='', reference_ids=[], keys=[])` + +Get references for an existing route route. + +* **Parameters:** + * **router_name** (*str*) – The name of the router. + * **references** (*Union* *[* *str* *,* *List* *[* *str* *]* *]*) – The reference or list of references to add. + * **route_name** (*str*) + * **reference_ids** (*List* *[* *str* *]*) + * **keys** (*List* *[* *str* *]*) +* **Returns:** + Reference objects stored +* **Return type:** + List[Dict[str, Any]]] + +#### `model_post_init(context, /)` + +This function is meant to behave like a BaseModel method to initialise private attributes. + +It takes context as an argument since that’s what pydantic-core passes when calling it. + +* **Parameters:** + * **self** (*BaseModel*) – The BaseModel instance. + * **context** (*Any*) – The context. +* **Return type:** + None + +#### `remove_route(route_name)` + +Remove a route and all references from the semantic router. + +* **Parameters:** + **route_name** (*str*) – Name of the route to remove. +* **Return type:** + None + +#### `route_many(statement=None, vector=None, max_k=None, distance_threshold=None, aggregation_method=None)` + +Query the semantic router with a given statement or vector for multiple matches. + +* **Parameters:** + * **statement** (*Optional* *[* *str* *]*) – The input statement to be queried. 
+ * **vector** (*Optional* *[* *List* *[* *float* *]* *]*) – The input vector to be queried. + * **max_k** (*Optional* *[* *int* *]*) – The maximum number of top matches to return. + * **distance_threshold** (*Optional* *[* *float* *]*) – The threshold for semantic distance. + * **aggregation_method** (*Optional* *[*[DistanceAggregationMethod](#distanceaggregationmethod) *]*) – The aggregation method used for vector distances. +* **Returns:** + The matching routes and their details. +* **Return type:** + List[[RouteMatch](#routematch)] + +#### `to_dict()` + +Convert the SemanticRouter instance to a dictionary. + +* **Returns:** + The dictionary representation of the SemanticRouter. +* **Return type:** + Dict[str, Any] + +```python +from redisvl.extensions.router import SemanticRouter +router = SemanticRouter(name="example_router", routes=[], redis_url="redis://localhost:6379") +router_dict = router.to_dict() +``` + +#### `to_yaml(file_path, overwrite=True)` + +Write the semantic router to a YAML file. + +* **Parameters:** + * **file_path** (*str*) – The path to the YAML file. + * **overwrite** (*bool*) – Whether to overwrite the file if it already exists. +* **Raises:** + **FileExistsError** – If the file already exists and overwrite is False. +* **Return type:** + None + +```python +from redisvl.extensions.router import SemanticRouter +router = SemanticRouter( + name="example_router", + routes=[], + redis_url="redis://localhost:6379" +) +router.to_yaml("router.yaml") +``` + +#### `update_route_thresholds(route_thresholds)` + +Update the distance thresholds for each route. + +* **Parameters:** + **route_thresholds** (*Dict* *[* *str* *,* *float* *]*) – Dictionary of route names and their distance thresholds. + +#### `update_routing_config(routing_config)` + +Update the routing configuration. + +* **Parameters:** + **routing_config** ([RoutingConfig](#routingconfig)) – The new routing configuration. + +#### `model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}` + +Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. + +#### `name: str` + +The name of the semantic router. + +#### `property route_names: List[str]` + +Get the list of route names. + +* **Returns:** + List of route names. +* **Return type:** + List[str] + +#### `property route_thresholds: Dict[str, float | None]` + +Get the distance thresholds for each route. + +* **Returns:** + Dictionary of route names and their distance thresholds. +* **Return type:** + Dict[str, float] + +#### `routes: `List[[Route](#route)] + +List of Route objects. + +#### `routing_config: `[RoutingConfig](#routingconfig) + +Configuration for routing behavior. + +#### `vectorizer: BaseVectorizer` + +The vectorizer used to embed route references. + +## Routing Config + +### `class RoutingConfig(*, max_k=1, aggregation_method=DistanceAggregationMethod.avg)` + +Configuration for routing behavior. + +Create a new model by parsing and validating input data from keyword arguments. + +Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be +validated to form a valid model. + +self is explicitly positional-only to allow self as a field name. 
+
+* **Parameters:**
+  * **max_k** (*Annotated* *[* *int* *,* *FieldInfo* *(* *annotation=NoneType* *,* *required=True* *,* *metadata=* *[* *Strict* *(* *strict=True* *)* *,* *Gt* *(* *gt=0* *)* *]* *)* *]*)
+  * **aggregation_method** ([DistanceAggregationMethod](#distanceaggregationmethod))
+
+#### `max_k: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Strict(strict=True), Gt(gt=0)])]`
+
+The maximum number of top matching routes to return for a query.
+
+#### `model_config: ClassVar[ConfigDict] = {'extra': 'ignore'}`
+
+Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
+
+## Route
+
+### `class Route(*, name, references, metadata={}, distance_threshold=0.5)`
+
+Model representing a routing path with associated metadata and thresholds.
+
+Create a new model by parsing and validating input data from keyword arguments.
+
+Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be
+validated to form a valid model.
+
+self is explicitly positional-only to allow self as a field name.
+
+* **Parameters:**
+  * **name** (*str*)
+  * **references** (*List* *[* *str* *]*)
+  * **metadata** (*Dict* *[* *str* *,* *Any* *]*)
+  * **distance_threshold** (*Annotated* *[* *float* *,* *FieldInfo* *(* *annotation=NoneType* *,* *required=True* *,* *metadata=* *[* *Strict* *(* *strict=True* *)* *,* *Gt* *(* *gt=0* *)* *,* *Le* *(* *le=2* *)* *]* *)* *]*)
+
+#### `distance_threshold: Annotated[float, FieldInfo(annotation=NoneType, required=True, metadata=[Strict(strict=True), Gt(gt=0), Le(le=2)])]`
+
+Distance threshold for matching the route.
+
+#### `metadata: Dict[str, Any]`
+
+Metadata associated with the route.
+
+#### `model_config: ClassVar[ConfigDict] = {}`
+
+Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
+
+#### `name: str`
+
+The name of the route.
+
+#### `references: List[str]`
+
+List of reference phrases for the route.
+
+## Route Match
+
+### `class RouteMatch(*, name=None, distance=None)`
+
+Model representing a matched route with distance information.
+
+Create a new model by parsing and validating input data from keyword arguments.
+
+Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be
+validated to form a valid model.
+
+self is explicitly positional-only to allow self as a field name.
+
+* **Parameters:**
+  * **name** (*str* *|* *None*)
+  * **distance** (*float* *|* *None*)
+
+#### `distance: float | None`
+
+The vector distance between the statement and the matched route.
+
+#### `model_config: ClassVar[ConfigDict] = {}`
+
+Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
+
+#### `name: str | None`
+
+The matched route name.
+
+## Distance Aggregation Method
+
+### `class DistanceAggregationMethod(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)`
+
+Enumeration for distance aggregation methods.
+
+#### `avg = 'avg'`
+
+Compute the average of the vector distances.
+
+#### `min = 'min'`
+
+Compute the minimum of the vector distances.
+
+#### `sum = 'sum'`
+
+Compute the sum of the vector distances.
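+
+The following is a minimal usage sketch that ties these pieces together. It assumes a router named `example_router` already exists (for example, one created with `SemanticRouter.from_dict` above), that Redis is running locally, and that `RoutingConfig` and `DistanceAggregationMethod` are importable from `redisvl.extensions.router.schema`:
+
+```python
+from redisvl.extensions.router import SemanticRouter
+from redisvl.extensions.router.schema import DistanceAggregationMethod, RoutingConfig
+
+# Load a router that was previously created and stored in Redis.
+router = SemanticRouter.from_existing(
+    name="example_router",
+    redis_url="redis://localhost:6379",
+)
+
+# Return up to three routes per query and aggregate the distances of a
+# route's references using the minimum instead of the default average.
+router.update_routing_config(
+    RoutingConfig(max_k=3, aggregation_method=DistanceAggregationMethod.min)
+)
+
+# Each match is a RouteMatch with a route name and a vector distance.
+for match in router.route_many("how do I reset my password?"):
+    print(match.name, match.distance)
+```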
+--- +linkTitle: LLM message history +title: LLM Message History +type: integration +--- + + +## SemanticMessageHistory + + + +### `class SemanticMessageHistory(name, session_tag=None, prefix=None, vectorizer=None, distance_threshold=0.3, redis_client=None, redis_url='redis://localhost:6379', connection_kwargs={}, overwrite=False, **kwargs)` + +Bases: `BaseMessageHistory` + +Initialize message history with index + +Semantic Message History stores the current and previous user text prompts +and LLM responses to allow for enriching future prompts with session +context. Message history is stored in individual user or LLM prompts and +responses. + +* **Parameters:** + * **name** (*str*) – The name of the message history index. + * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific + conversation session. Defaults to instance ULID. + * **prefix** (*Optional* *[* *str* *]*) – Prefix for the keys for this message data. + Defaults to None and will be replaced with the index name. + * **vectorizer** (*Optional* *[* *BaseVectorizer* *]*) – The vectorizer used to create embeddings. + * **distance_threshold** (*float*) – The maximum semantic distance to be + included in the context. Defaults to 0.3. + * **redis_client** (*Optional* *[* *Redis* *]*) – A Redis client instance. Defaults to + None. + * **redis_url** (*str* *,* *optional*) – The redis url. Defaults to redis://localhost:6379. + * **connection_kwargs** (*Dict* *[* *str* *,* *Any* *]*) – The connection arguments + for the redis client. Defaults to empty {}. + * **overwrite** (*bool*) – Whether or not to force overwrite the schema for + the semantic message index. Defaults to false. + +The proposed schema will support a single vector embedding constructed +from either the prompt or response in a single string. + +#### `add_message(message, session_tag=None)` + +Insert a single prompt or response into the message history. +A timestamp is associated with it so that it can be later sorted +in sequential ordering after retrieval. + +* **Parameters:** + * **message** (*Dict* *[* *str* *,**str* *]*) – The user prompt or LLM response. + * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entry to link to a specific + conversation session. Defaults to instance ULID. +* **Return type:** + None + +#### `add_messages(messages, session_tag=None)` + +Insert a list of prompts and responses into the session memory. +A timestamp is associated with each so that they can be later sorted +in sequential ordering after retrieval. + +* **Parameters:** + * **messages** (*List* *[* *Dict* *[* *str* *,* *str* *]* *]*) – The list of user prompts and LLM responses. + * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific + conversation session. Defaults to instance ULID. +* **Return type:** + None + +#### `clear()` + +Clears the message history. + +* **Return type:** + None + +#### `delete()` + +Clear all message keys and remove the search index. + +* **Return type:** + None + +#### `drop(id=None)` + +Remove a specific exchange from the message history. + +* **Parameters:** + **id** (*Optional* *[* *str* *]*) – The id of the message entry to delete. + If None then the last entry is deleted. +* **Return type:** + None + +#### `get_recent(top_k=5, as_text=False, raw=False, session_tag=None)` + +Retreive the recent message history in sequential order. + +* **Parameters:** + * **top_k** (*int*) – The number of previous exchanges to return. Default is 5. 
+ * **as_text** (*bool*) – Whether to return the conversation as a single string, + or list of alternating prompts and responses. + * **raw** (*bool*) – Whether to return the full Redis hash entry or just the + prompt and response + * **session_tag** (*Optional* *[* *str* *]*) – Tag of the entries linked to a specific + conversation session. Defaults to instance ULID. +* **Returns:** + A single string transcription of the session + : or list of strings if as_text is false. +* **Return type:** + Union[str, List[str]] +* **Raises:** + **ValueError** – if top_k is not an integer greater than or equal to 0. + +#### `get_relevant(prompt, as_text=False, top_k=5, fall_back=False, session_tag=None, raw=False, distance_threshold=None)` + +Searches the message history for information semantically related to +the specified prompt. + +This method uses vector similarity search with a text prompt as input. +It checks for semantically similar prompts and responses and gets +the top k most relevant previous prompts or responses to include as +context to the next LLM call. + +* **Parameters:** + * **prompt** (*str*) – The message text to search for in message history + * **as_text** (*bool*) – Whether to return the prompts and responses as text + * **JSON.** (*or as*) + * **top_k** (*int*) – The number of previous messages to return. Default is 5. + * **session_tag** (*Optional* *[* *str* *]*) – Tag of the entries linked to a specific + conversation session. Defaults to instance ULID. + * **distance_threshold** (*Optional* *[* *float* *]*) – The threshold for semantic + vector distance. + * **fall_back** (*bool*) – Whether to drop back to recent conversation history + if no relevant context is found. + * **raw** (*bool*) – Whether to return the full Redis hash entry or just the + message. +* **Returns:** + Either a list of strings, or a + list of prompts and responses in JSON containing the most relevant. +* **Return type:** + Union[List[str], List[Dict[str,str]] + +Raises ValueError: if top_k is not an integer greater or equal to 0. + +#### `store(prompt, response, session_tag=None)` + +Insert a prompt:response pair into the message history. A timestamp +is associated with each message so that they can be later sorted +in sequential ordering after retrieval. + +* **Parameters:** + * **prompt** (*str*) – The user prompt to the LLM. + * **response** (*str*) – The corresponding LLM response. + * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific + conversation session. Defaults to instance ULID. +* **Return type:** + None + +#### `property messages: List[str] | List[Dict[str, str]]` + +Returns the full message history. + +## MessageHistory + + + +### `class MessageHistory(name, session_tag=None, prefix=None, redis_client=None, redis_url='redis://localhost:6379', connection_kwargs={}, **kwargs)` + +Bases: `BaseMessageHistory` + +Initialize message history + +Message History stores the current and previous user text prompts and +LLM responses to allow for enriching future prompts with session +context. Message history is stored in individual user or LLM prompts and +responses. + +* **Parameters:** + * **name** (*str*) – The name of the message history index. + * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific + conversation session. Defaults to instance ULID. + * **prefix** (*Optional* *[* *str* *]*) – Prefix for the keys for this conversation data. + Defaults to None and will be replaced with the index name. 
+ * **redis_client** (*Optional* *[* *Redis* *]*) – A Redis client instance. Defaults to + None. + * **redis_url** (*str* *,* *optional*) – The redis url. Defaults to redis://localhost:6379. + * **connection_kwargs** (*Dict* *[* *str* *,* *Any* *]*) – The connection arguments + for the redis client. Defaults to empty {}. + +#### `add_message(message, session_tag=None)` + +Insert a single prompt or response into the message history. +A timestamp is associated with it so that it can be later sorted +in sequential ordering after retrieval. + +* **Parameters:** + * **message** (*Dict* *[* *str* *,**str* *]*) – The user prompt or LLM response. + * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific + conversation session. Defaults to instance ULID. +* **Return type:** + None + +#### `add_messages(messages, session_tag=None)` + +Insert a list of prompts and responses into the message history. +A timestamp is associated with each so that they can be later sorted +in sequential ordering after retrieval. + +* **Parameters:** + * **messages** (*List* *[* *Dict* *[* *str* *,* *str* *]* *]*) – The list of user prompts and LLM responses. + * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific + conversation session. Defaults to instance ULID. +* **Return type:** + None + +#### `clear()` + +Clears the conversation message history. + +* **Return type:** + None + +#### `delete()` + +Clear all conversation keys and remove the search index. + +* **Return type:** + None + +#### `drop(id=None)` + +Remove a specific exchange from the conversation history. + +* **Parameters:** + **id** (*Optional* *[* *str* *]*) – The id of the message entry to delete. + If None then the last entry is deleted. +* **Return type:** + None + +#### `get_recent(top_k=5, as_text=False, raw=False, session_tag=None)` + +Retrieve the recent message history in sequential order. + +* **Parameters:** + * **top_k** (*int*) – The number of previous messages to return. Default is 5. + * **as_text** (*bool*) – Whether to return the conversation as a single string, + or list of alternating prompts and responses. + * **raw** (*bool*) – Whether to return the full Redis hash entry or just the + prompt and response. + * **session_tag** (*Optional* *[* *str* *]*) – Tag of the entries linked to a specific + conversation session. Defaults to instance ULID. +* **Returns:** + A single string transcription of the messages + : or list of strings if as_text is false. +* **Return type:** + Union[str, List[str]] +* **Raises:** + **ValueError** – if top_k is not an integer greater than or equal to 0. + +#### `store(prompt, response, session_tag=None)` + +Insert a prompt:response pair into the message history. A timestamp +is associated with each exchange so that they can be later sorted +in sequential ordering after retrieval. + +* **Parameters:** + * **prompt** (*str*) – The user prompt to the LLM. + * **response** (*str*) – The corresponding LLM response. + * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific + conversation session. Defaults to instance ULID. +* **Return type:** + None + +#### `property messages: List[str] | List[Dict[str, str]]` + +Returns the full message history. 
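+
+The sketch below shows a typical `MessageHistory` round trip. The import path and the role/content message format are assumptions based on the class and method signatures above; the index name and messages are placeholders:
+
+```python
+from redisvl.extensions.message_history import MessageHistory
+
+# Assumes Redis is running locally.
+chat_history = MessageHistory(name="support_chat", redis_url="redis://localhost:6379")
+
+# Store a full prompt/response exchange in one call...
+chat_history.store(
+    prompt="What is the capital of France?",
+    response="The capital of France is Paris.",
+)
+
+# ...or add individual entries (assuming a role/content message format).
+chat_history.add_message({"role": "user", "content": "And what about Germany?"})
+
+# Fetch the most recent entries in conversational order.
+for entry in chat_history.get_recent(top_k=5):
+    print(entry)
+
+# Remove the stored messages when the session is finished.
+chat_history.clear()
+```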
+--- +linkTitle: Threshold optimizers +title: Threshold Optimizers +type: integration +--- + + +## CacheThresholdOptimizer + + + +## RouterThresholdOptimizer +--- +linkTitle: Query +title: Query +type: integration +--- + + +Query classes in RedisVL provide a structured way to define simple or complex +queries for different use cases. Each query class wraps the `redis-py` Query module +[https://github.com/redis/redis-py/blob/master/redis/commands/search/query.py](https://github.com/redis/redis-py/blob/master/redis/commands/search/query.py) with extended functionality for ease-of-use. + +## VectorQuery + +### `class VectorQuery(vector, vector_field_name, return_fields=None, filter_expression=None, dtype='float32', num_results=10, return_score=True, dialect=2, sort_by=None, in_order=False, hybrid_policy=None, batch_size=None, ef_runtime=None, normalize_vector_distance=False)` + +Bases: `BaseVectorQuery`, `BaseQuery` + +A query for running a vector search along with an optional filter +expression. + +* **Parameters:** + * **vector** (*List* *[* *float* *]*) – The vector to perform the vector search with. + * **vector_field_name** (*str*) – The name of the vector field to search + against in the database. + * **return_fields** (*List* *[* *str* *]*) – The declared fields to return with search + results. + * **filter_expression** (*Union* *[* *str* *,* [*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]* *,* *optional*) – A filter to apply + along with the vector search. Defaults to None. + * **dtype** (*str* *,* *optional*) – The dtype of the vector. Defaults to + “float32”. + * **num_results** (*int* *,* *optional*) – The top k results to return from the + vector search. Defaults to 10. + * **return_score** (*bool* *,* *optional*) – Whether to return the vector + distance. Defaults to True. + * **dialect** (*int* *,* *optional*) – The RediSearch query dialect. + Defaults to 2. + * **sort_by** (*Optional* *[* *str* *]*) – The field to order the results by. Defaults + to None. Results will be ordered by vector distance. + * **in_order** (*bool*) – Requires the terms in the field to have + the same order as the terms in the query filter, regardless of + the offsets between them. Defaults to False. + * **hybrid_policy** (*Optional* *[* *str* *]*) – Controls how filters are applied during vector search. + Options are “BATCHES” (paginates through small batches of nearest neighbors) or + “ADHOC_BF” (computes scores for all vectors passing the filter). + “BATCHES” mode is typically faster for queries with selective filters. + “ADHOC_BF” mode is better when filters match a large portion of the dataset. + Defaults to None, which lets Redis auto-select the optimal policy. + * **batch_size** (*Optional* *[* *int* *]*) – When hybrid_policy is “BATCHES”, controls the number + of vectors to fetch in each batch. Larger values may improve performance + at the cost of memory usage. Only applies when hybrid_policy=”BATCHES”. + Defaults to None, which lets Redis auto-select an appropriate batch size. + * **ef_runtime** (*Optional* *[* *int* *]*) – Controls the size of the dynamic candidate list for HNSW + algorithm at query time. Higher values improve recall at the expense of + slower search performance. Defaults to None, which uses the index-defined value. + * **normalize_vector_distance** (*bool*) – Redis supports 3 distance metrics: L2 (euclidean), + IP (inner product), and COSINE. By default, L2 distance returns an unbounded value. + COSINE distance returns a value between 0 and 2. 
IP returns a value determined by + the magnitude of the vector. Setting this flag to true converts COSINE and L2 distance + to a similarity score between 0 and 1. Note: setting this flag to true for IP will + throw a warning since by definition COSINE similarity is normalized IP. +* **Raises:** + **TypeError** – If filter_expression is not of type redisvl.query.FilterExpression + +#### `NOTE` +Learn more about vector queries in Redis: [https://redis.io/docs/interact/search-and-query/search/vectors/#knn-search](https://redis.io/docs/interact/search-and-query/search/vectors/#knn-search) + +#### `dialect(dialect)` + +Add a dialect field to the query. + +- **dialect** - dialect version to execute the query under + +* **Parameters:** + **dialect** (*int*) +* **Return type:** + *Query* + +#### `expander(expander)` + +Add a expander field to the query. + +- **expander** - the name of the expander + +* **Parameters:** + **expander** (*str*) +* **Return type:** + *Query* + +#### `in_order()` + +Match only documents where the query terms appear in +the same order in the document. +i.e. for the query “hello world”, we do not match “world hello” + +* **Return type:** + *Query* + +#### `language(language)` + +Analyze the query as being in the specified language. + +* **Parameters:** + **language** (*str*) – The language (e.g. chinese or english) +* **Return type:** + *Query* + +#### `limit_fields(*fields)` + +Limit the search to specific TEXT fields only. + +- **fields**: A list of strings, case sensitive field names + +from the defined schema. + +* **Parameters:** + **fields** (*List* *[* *str* *]*) +* **Return type:** + *Query* + +#### `limit_ids(*ids)` + +Limit the results to a specific set of pre-known document +ids of any length. + +* **Return type:** + *Query* + +#### `no_content()` + +Set the query to only return ids and not the document content. + +* **Return type:** + *Query* + +#### `no_stopwords()` + +Prevent the query from being filtered for stopwords. +Only useful in very big queries that you are certain contain +no stopwords. + +* **Return type:** + *Query* + +#### `paging(offset, num)` + +Set the paging for the query (defaults to 0..10). + +- **offset**: Paging offset for the results. Defaults to 0 +- **num**: How many results do we want + +* **Parameters:** + * **offset** (*int*) + * **num** (*int*) +* **Return type:** + *Query* + +#### `query_string()` + +Return the query string of this query only. + +* **Return type:** + str + +#### `return_fields(*fields)` + +Add fields to return fields. + +* **Return type:** + *Query* + +#### `scorer(scorer)` + +Use a different scoring function to evaluate document relevance. +Default is TFIDF. + +* **Parameters:** + **scorer** (*str*) – The scoring function to use + (e.g. TFIDF.DOCNORM or BM25) +* **Return type:** + *Query* + +#### `set_batch_size(batch_size)` + +Set the batch size for the query. + +* **Parameters:** + **batch_size** (*int*) – The batch size to use when hybrid_policy is “BATCHES”. +* **Raises:** + * **TypeError** – If batch_size is not an integer + * **ValueError** – If batch_size is not positive + +#### `set_ef_runtime(ef_runtime)` + +Set the EF_RUNTIME parameter for the query. + +* **Parameters:** + **ef_runtime** (*int*) – The EF_RUNTIME value to use for HNSW algorithm. + Higher values improve recall at the expense of slower search. 
+* **Raises:** + * **TypeError** – If ef_runtime is not an integer + * **ValueError** – If ef_runtime is not positive + +#### `set_filter(filter_expression=None)` + +Set the filter expression for the query. + +* **Parameters:** + **filter_expression** (*Optional* *[* *Union* *[* *str* *,* [*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]* *]* *,* *optional*) – The filter + expression or query string to use on the query. +* **Raises:** + **TypeError** – If filter_expression is not a valid FilterExpression or string. + +#### `set_hybrid_policy(hybrid_policy)` + +Set the hybrid policy for the query. + +* **Parameters:** + **hybrid_policy** (*str*) – The hybrid policy to use. Options are “BATCHES” + or “ADHOC_BF”. +* **Raises:** + **ValueError** – If hybrid_policy is not one of the valid options + +#### `slop(slop)` + +Allow a maximum of N intervening non matched terms between +phrase terms (0 means exact phrase). + +* **Parameters:** + **slop** (*int*) +* **Return type:** + *Query* + +#### `sort_by(field, asc=True)` + +Add a sortby field to the query. + +- **field** - the name of the field to sort by +- **asc** - when True, sorting will be done in asceding order + +* **Parameters:** + * **field** (*str*) + * **asc** (*bool*) +* **Return type:** + *Query* + +#### `timeout(timeout)` + +overrides the timeout parameter of the module + +* **Parameters:** + **timeout** (*float*) +* **Return type:** + *Query* + +#### `verbatim()` + +Set the query to be verbatim, i.e. use no query expansion +or stemming. + +* **Return type:** + *Query* + +#### `with_payloads()` + +Ask the engine to return document payloads. + +* **Return type:** + *Query* + +#### `with_scores()` + +Ask the engine to return document search scores. + +* **Return type:** + *Query* + +#### `property batch_size: int | None` + +Return the batch size for the query. + +* **Returns:** + The batch size for the query. +* **Return type:** + Optional[int] + +#### `property ef_runtime: int | None` + +Return the EF_RUNTIME parameter for the query. + +* **Returns:** + The EF_RUNTIME value for the query. +* **Return type:** + Optional[int] + +#### `property filter: str | `[`FilterExpression`]({{< relref "filter/#filterexpression" >}})` ` + +The filter expression for the query. + +#### `property hybrid_policy: str | None` + +Return the hybrid policy for the query. + +* **Returns:** + The hybrid policy for the query. +* **Return type:** + Optional[str] + +#### `property params: Dict[str, Any]` + +Return the parameters for the query. + +* **Returns:** + The parameters for the query. +* **Return type:** + Dict[str, Any] + +#### `property query: BaseQuery` + +Return self as the query object. + +## VectorRangeQuery + +### `class VectorRangeQuery(vector, vector_field_name, return_fields=None, filter_expression=None, dtype='float32', distance_threshold=0.2, epsilon=None, num_results=10, return_score=True, dialect=2, sort_by=None, in_order=False, hybrid_policy=None, batch_size=None, normalize_vector_distance=False)` + +Bases: `BaseVectorQuery`, `BaseQuery` + +A query for running a filtered vector search based on semantic +distance threshold. + +* **Parameters:** + * **vector** (*List* *[* *float* *]*) – The vector to perform the range query with. + * **vector_field_name** (*str*) – The name of the vector field to search + against in the database. + * **return_fields** (*List* *[* *str* *]*) – The declared fields to return with search + results. 
+ * **filter_expression** (*Union* *[* *str* *,* [*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]* *,* *optional*) – A filter to apply + along with the range query. Defaults to None. + * **dtype** (*str* *,* *optional*) – The dtype of the vector. Defaults to + “float32”. + * **distance_threshold** (*float*) – The threshold for vector distance. + A smaller threshold indicates a stricter semantic search. + Defaults to 0.2. + * **epsilon** (*Optional* *[* *float* *]*) – The relative factor for vector range queries, + setting boundaries for candidates within radius \* (1 + epsilon). + This controls how extensive the search is beyond the specified radius. + Higher values increase recall at the expense of performance. + Defaults to None, which uses the index-defined epsilon (typically 0.01). + * **num_results** (*int*) – The MAX number of results to return. + Defaults to 10. + * **return_score** (*bool* *,* *optional*) – Whether to return the vector + distance. Defaults to True. + * **dialect** (*int* *,* *optional*) – The RediSearch query dialect. + Defaults to 2. + * **sort_by** (*Optional* *[* *str* *]*) – The field to order the results by. Defaults + to None. Results will be ordered by vector distance. + * **in_order** (*bool*) – Requires the terms in the field to have + the same order as the terms in the query filter, regardless of + the offsets between them. Defaults to False. + * **hybrid_policy** (*Optional* *[* *str* *]*) – Controls how filters are applied during vector search. + Options are “BATCHES” (paginates through small batches of nearest neighbors) or + “ADHOC_BF” (computes scores for all vectors passing the filter). + “BATCHES” mode is typically faster for queries with selective filters. + “ADHOC_BF” mode is better when filters match a large portion of the dataset. + Defaults to None, which lets Redis auto-select the optimal policy. + * **batch_size** (*Optional* *[* *int* *]*) – When hybrid_policy is “BATCHES”, controls the number + of vectors to fetch in each batch. Larger values may improve performance + at the cost of memory usage. Only applies when hybrid_policy=”BATCHES”. + Defaults to None, which lets Redis auto-select an appropriate batch size. + * **normalize_vector_distance** (*bool*) – Redis supports 3 distance metrics: L2 (euclidean), + IP (inner product), and COSINE. By default, L2 distance returns an unbounded value. + COSINE distance returns a value between 0 and 2. IP returns a value determined by + the magnitude of the vector. Setting this flag to true converts COSINE and L2 distance + to a similarity score between 0 and 1. Note: setting this flag to true for IP will + throw a warning since by definition COSINE similarity is normalized IP. +* **Raises:** + **TypeError** – If filter_expression is not of type redisvl.query.FilterExpression + +#### `NOTE` +Learn more about vector range queries: [https://redis.io/docs/interact/search-and-query/search/vectors/#range-query](https://redis.io/docs/interact/search-and-query/search/vectors/#range-query) + +#### `dialect(dialect)` + +Add a dialect field to the query. + +- **dialect** - dialect version to execute the query under + +* **Parameters:** + **dialect** (*int*) +* **Return type:** + *Query* + +#### `expander(expander)` + +Add a expander field to the query. + +- **expander** - the name of the expander + +* **Parameters:** + **expander** (*str*) +* **Return type:** + *Query* + +#### `in_order()` + +Match only documents where the query terms appear in +the same order in the document. +i.e. 
for the query “hello world”, we do not match “world hello” + +* **Return type:** + *Query* + +#### `language(language)` + +Analyze the query as being in the specified language. + +* **Parameters:** + **language** (*str*) – The language (e.g. chinese or english) +* **Return type:** + *Query* + +#### `limit_fields(*fields)` + +Limit the search to specific TEXT fields only. + +- **fields**: A list of strings, case sensitive field names + +from the defined schema. + +* **Parameters:** + **fields** (*List* *[* *str* *]*) +* **Return type:** + *Query* + +#### `limit_ids(*ids)` + +Limit the results to a specific set of pre-known document +ids of any length. + +* **Return type:** + *Query* + +#### `no_content()` + +Set the query to only return ids and not the document content. + +* **Return type:** + *Query* + +#### `no_stopwords()` + +Prevent the query from being filtered for stopwords. +Only useful in very big queries that you are certain contain +no stopwords. + +* **Return type:** + *Query* + +#### `paging(offset, num)` + +Set the paging for the query (defaults to 0..10). + +- **offset**: Paging offset for the results. Defaults to 0 +- **num**: How many results do we want + +* **Parameters:** + * **offset** (*int*) + * **num** (*int*) +* **Return type:** + *Query* + +#### `query_string()` + +Return the query string of this query only. + +* **Return type:** + str + +#### `return_fields(*fields)` + +Add fields to return fields. + +* **Return type:** + *Query* + +#### `scorer(scorer)` + +Use a different scoring function to evaluate document relevance. +Default is TFIDF. + +* **Parameters:** + **scorer** (*str*) – The scoring function to use + (e.g. TFIDF.DOCNORM or BM25) +* **Return type:** + *Query* + +#### `set_batch_size(batch_size)` + +Set the batch size for the query. + +* **Parameters:** + **batch_size** (*int*) – The batch size to use when hybrid_policy is “BATCHES”. +* **Raises:** + * **TypeError** – If batch_size is not an integer + * **ValueError** – If batch_size is not positive + +#### `set_distance_threshold(distance_threshold)` + +Set the distance threshold for the query. + +* **Parameters:** + **distance_threshold** (*float*) – Vector distance threshold. +* **Raises:** + * **TypeError** – If distance_threshold is not a float or int + * **ValueError** – If distance_threshold is negative + +#### `set_epsilon(epsilon)` + +Set the epsilon parameter for the range query. + +* **Parameters:** + **epsilon** (*float*) – The relative factor for vector range queries, + setting boundaries for candidates within radius \* (1 + epsilon). +* **Raises:** + * **TypeError** – If epsilon is not a float or int + * **ValueError** – If epsilon is negative + +#### `set_filter(filter_expression=None)` + +Set the filter expression for the query. + +* **Parameters:** + **filter_expression** (*Optional* *[* *Union* *[* *str* *,* [*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]* *]* *,* *optional*) – The filter + expression or query string to use on the query. +* **Raises:** + **TypeError** – If filter_expression is not a valid FilterExpression or string. + +#### `set_hybrid_policy(hybrid_policy)` + +Set the hybrid policy for the query. + +* **Parameters:** + **hybrid_policy** (*str*) – The hybrid policy to use. Options are “BATCHES” + or “ADHOC_BF”. +* **Raises:** + **ValueError** – If hybrid_policy is not one of the valid options + +#### `slop(slop)` + +Allow a maximum of N intervening non matched terms between +phrase terms (0 means exact phrase). 
+ +* **Parameters:** + **slop** (*int*) +* **Return type:** + *Query* + +#### `sort_by(field, asc=True)` + +Add a sortby field to the query. + +- **field** - the name of the field to sort by +- **asc** - when True, sorting will be done in asceding order + +* **Parameters:** + * **field** (*str*) + * **asc** (*bool*) +* **Return type:** + *Query* + +#### `timeout(timeout)` + +overrides the timeout parameter of the module + +* **Parameters:** + **timeout** (*float*) +* **Return type:** + *Query* + +#### `verbatim()` + +Set the query to be verbatim, i.e. use no query expansion +or stemming. + +* **Return type:** + *Query* + +#### `with_payloads()` + +Ask the engine to return document payloads. + +* **Return type:** + *Query* + +#### `with_scores()` + +Ask the engine to return document search scores. + +* **Return type:** + *Query* + +#### `property batch_size: int | None` + +Return the batch size for the query. + +* **Returns:** + The batch size for the query. +* **Return type:** + Optional[int] + +#### `property distance_threshold: float` + +Return the distance threshold for the query. + +* **Returns:** + The distance threshold for the query. +* **Return type:** + float + +#### `property epsilon: float | None` + +Return the epsilon for the query. + +* **Returns:** + The epsilon for the query, or None if not set. +* **Return type:** + Optional[float] + +#### `property filter: str | `[`FilterExpression`]({{< relref "filter/#filterexpression" >}})` ` + +The filter expression for the query. + +#### `property hybrid_policy: str | None` + +Return the hybrid policy for the query. + +* **Returns:** + The hybrid policy for the query. +* **Return type:** + Optional[str] + +#### `property params: Dict[str, Any]` + +Return the parameters for the query. + +* **Returns:** + The parameters for the query. +* **Return type:** + Dict[str, Any] + +#### `property query: BaseQuery` + +Return self as the query object. + +## HybridQuery + +### `class HybridQuery(text, text_field_name, vector, vector_field_name, text_scorer='BM25STD', filter_expression=None, alpha=0.7, dtype='float32', num_results=10, return_fields=None, stopwords='english', dialect=2)` + +Bases: `AggregationQuery` + +HybridQuery combines text and vector search in Redis. +It allows you to perform a hybrid search using both text and vector similarity. +It scores documents based on a weighted combination of text and vector similarity. + +```python +from redisvl.query import HybridQuery +from redisvl.index import SearchIndex + +index = SearchIndex.from_yaml("path/to/index.yaml") + +query = HybridQuery( + text="example text", + text_field_name="text_field", + vector=[0.1, 0.2, 0.3], + vector_field_name="vector_field", + text_scorer="BM25STD", + filter_expression=None, + alpha=0.7, + dtype="float32", + num_results=10, + return_fields=["field1", "field2"], + stopwords="english", + dialect=2, +) + +results = index.query(query) +``` + +Instantiates a HybridQuery object. + +* **Parameters:** + * **text** (*str*) – The text to search for. + * **text_field_name** (*str*) – The text field name to search in. + * **vector** (*Union* *[* *bytes* *,* *List* *[* *float* *]* *]*) – The vector to perform vector similarity search. + * **vector_field_name** (*str*) – The vector field name to search in. + * **text_scorer** (*str* *,* *optional*) – The text scorer to use. Options are {TFIDF, TFIDF.DOCNORM, + BM25, DISMAX, DOCSCORE, BM25STD}. Defaults to “BM25STD”. 
+ * **filter_expression** (*Optional* *[*[*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]* *,* *optional*) – The filter expression to use. + Defaults to None. + * **alpha** (*float* *,* *optional*) – The weight of the vector similarity. Documents will be scored + as: hybrid_score = (alpha) \* vector_score + (1-alpha) \* text_score. + Defaults to 0.7. + * **dtype** (*str* *,* *optional*) – The data type of the vector. Defaults to “float32”. + * **num_results** (*int* *,* *optional*) – The number of results to return. Defaults to 10. + * **return_fields** (*Optional* *[* *List* *[* *str* *]* *]* *,* *optional*) – The fields to return. Defaults to None. + * **stopwords** (*Optional* *[* *Union* *[* *str* *,* *Set* *[* *str* *]* *]* *]* *,* *optional*) – The stopwords to remove from the + provided text prior to searchuse. If a string such as “english” “german” is + provided then a default set of stopwords for that language will be used. if a list, + set, or tuple of strings is provided then those will be used as stopwords. + Defaults to “english”. if set to “None” then no stopwords will be removed. + * **dialect** (*int* *,* *optional*) – The Redis dialect version. Defaults to 2. +* **Raises:** + * **ValueError** – If the text string is empty, or if the text string becomes empty after + stopwords are removed. + * **TypeError** – If the stopwords are not a set, list, or tuple of strings. + +#### `add_scores()` + +If set, includes the score as an ordinary field of the row. + +* **Return type:** + *AggregateRequest* + +#### `apply(**kwexpr)` + +Specify one or more projection expressions to add to each result + +### `Parameters` + +- **kwexpr**: One or more key-value pairs for a projection. The key is + : the alias for the projection, and the value is the projection + expression itself, for example apply(square_root=”sqrt(@foo)”) + +* **Return type:** + *AggregateRequest* + +#### `dialect(dialect)` + +Add a dialect field to the aggregate command. + +- **dialect** - dialect version to execute the query under + +* **Parameters:** + **dialect** (*int*) +* **Return type:** + *AggregateRequest* + +#### `filter(expressions)` + +Specify filter for post-query results using predicates relating to +values in the result set. + +### `Parameters` + +- **fields**: Fields to group by. This can either be a single string, + : or a list of strings. + +* **Parameters:** + **expressions** (*str* *|* *List* *[* *str* *]*) +* **Return type:** + *AggregateRequest* + +#### `group_by(fields, *reducers)` + +Specify by which fields to group the aggregation. + +### `Parameters` + +- **fields**: Fields to group by. This can either be a single string, + : or a list of strings. both cases, the field should be specified as + @field. +- **reducers**: One or more reducers. Reducers may be found in the + : aggregation module. + +* **Parameters:** + * **fields** (*List* *[* *str* *]*) + * **reducers** (*Reducer* *|* *List* *[* *Reducer* *]*) +* **Return type:** + *AggregateRequest* + +#### `limit(offset, num)` + +Sets the limit for the most recent group or query. + +If no group has been defined yet (via group_by()) then this sets +the limit for the initial pool of results from the query. Otherwise, +this limits the number of items operated on from the previous group. + +Setting a limit on the initial search results may be useful when +attempting to execute an aggregation on a sample of a large data set. 
+ +### `Parameters` + +- **offset**: Result offset from which to begin paging +- **num**: Number of results to return + +Example of sorting the initial results: + +`` +AggregateRequest("@sale_amount:[10000, inf]") .limit(0, 10) .group_by("@state", r.count()) +`` + +Will only group by the states found in the first 10 results of the +query @sale_amount:[10000, inf]. On the other hand, + +`` +AggregateRequest("@sale_amount:[10000, inf]") .limit(0, 1000) .group_by("@state", r.count() .limit(0, 10) +`` + +Will group all the results matching the query, but only return the +first 10 groups. + +If you only wish to return a *top-N* style query, consider using +sort_by() instead. + +* **Parameters:** + * **offset** (*int*) + * **num** (*int*) +* **Return type:** + *AggregateRequest* + +#### `load(*fields)` + +Indicate the fields to be returned in the response. These fields are +returned in addition to any others implicitly specified. + +### `Parameters` + +- **fields**: If fields not specified, all the fields will be loaded. + +Otherwise, fields should be given in the format of @field. + +* **Parameters:** + **fields** (*List* *[* *str* *]*) +* **Return type:** + *AggregateRequest* + +#### `scorer(scorer)` + +Use a different scoring function to evaluate document relevance. +Default is TFIDF. + +* **Parameters:** + **scorer** (*str*) – The scoring function to use + (e.g. TFIDF.DOCNORM or BM25) +* **Return type:** + *AggregateRequest* + +#### `sort_by(*fields, **kwargs)` + +Indicate how the results should be sorted. This can also be used for +*top-N* style queries + +### `Parameters` + +- **fields**: The fields by which to sort. This can be either a single + : field or a list of fields. If you wish to specify order, you can + use the Asc or Desc wrapper classes. +- **max**: Maximum number of results to return. This can be + : used instead of LIMIT and is also faster. + +Example of sorting by foo ascending and bar descending: + +`` +sort_by(Asc("@foo"), Desc("@bar")) +`` + +Return the top 10 customers: + +`` +AggregateRequest() .group_by("@customer", r.sum("@paid").alias(FIELDNAME)) .sort_by(Desc("@paid"), max=10) +`` + +* **Parameters:** + **fields** (*List* *[* *str* *]*) +* **Return type:** + *AggregateRequest* + +#### `with_schema()` + +If set, the schema property will contain a list of [field, type] +entries in the result object. + +* **Return type:** + *AggregateRequest* + +#### `property params: Dict[str, Any]` + +Return the parameters for the aggregation. + +* **Returns:** + The parameters for the aggregation. +* **Return type:** + Dict[str, Any] + +#### `property stopwords: Set[str]` + +Return the stopwords used in the query. +:returns: The stopwords used in the query. +:rtype: Set[str] + +## TextQuery + +### `class TextQuery(text, text_field_name, text_scorer='BM25STD', filter_expression=None, return_fields=None, num_results=10, return_score=True, dialect=2, sort_by=None, in_order=False, params=None, stopwords='english')` + +Bases: `BaseQuery` + +TextQuery is a query for running a full text search, along with an optional filter expression. 
+ +```python +from redisvl.query import TextQuery +from redisvl.index import SearchIndex + +index = SearchIndex.from_yaml(index.yaml) + +query = TextQuery( + text="example text", + text_field_name="text_field", + text_scorer="BM25STD", + filter_expression=None, + num_results=10, + return_fields=["field1", "field2"], + stopwords="english", + dialect=2, +) + +results = index.query(query) +``` + +A query for running a full text search, along with an optional filter expression. + +* **Parameters:** + * **text** (*str*) – The text string to perform the text search with. + * **text_field_name** (*str*) – The name of the document field to perform text search on. + * **text_scorer** (*str* *,* *optional*) – The text scoring algorithm to use. + Defaults to BM25STD. Options are {TFIDF, BM25STD, BM25, TFIDF.DOCNORM, DISMAX, DOCSCORE}. + See [https://redis.io/docs/latest/develop/interact/search-and-query/advanced-concepts/scoring/](https://redis.io/docs/latest/develop/interact/search-and-query/advanced-concepts/scoring/) + * **filter_expression** (*Union* *[* *str* *,* [*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]* *,* *optional*) – A filter to apply + along with the text search. Defaults to None. + * **return_fields** (*List* *[* *str* *]*) – The declared fields to return with search + results. + * **num_results** (*int* *,* *optional*) – The top k results to return from the + search. Defaults to 10. + * **return_score** (*bool* *,* *optional*) – Whether to return the text score. + Defaults to True. + * **dialect** (*int* *,* *optional*) – The RediSearch query dialect. + Defaults to 2. + * **sort_by** (*Optional* *[* *str* *]*) – The field to order the results by. Defaults + to None. Results will be ordered by text score. + * **in_order** (*bool*) – Requires the terms in the field to have + the same order as the terms in the query filter, regardless of + the offsets between them. Defaults to False. + * **params** (*Optional* *[* *Dict* *[* *str* *,* *Any* *]* *]* *,* *optional*) – The parameters for the query. + Defaults to None. + * **stopwords** (*Optional* *[* *Union* *[* *str* *,* *Set* *[* *str* *]* *]*) – The set of stop words to remove + from the query text. If a language like ‘english’ or ‘spanish’ is provided + a default set of stopwords for that language will be used. Users may specify + their own stop words by providing a List or Set of words. if set to None, + then no words will be removed. Defaults to ‘english’. +* **Raises:** + * **ValueError** – if stopwords language string cannot be loaded. + * **TypeError** – If stopwords is not a valid iterable set of strings. + +#### `dialect(dialect)` + +Add a dialect field to the query. + +- **dialect** - dialect version to execute the query under + +* **Parameters:** + **dialect** (*int*) +* **Return type:** + *Query* + +#### `expander(expander)` + +Add a expander field to the query. + +- **expander** - the name of the expander + +* **Parameters:** + **expander** (*str*) +* **Return type:** + *Query* + +#### `in_order()` + +Match only documents where the query terms appear in +the same order in the document. +i.e. for the query “hello world”, we do not match “world hello” + +* **Return type:** + *Query* + +#### `language(language)` + +Analyze the query as being in the specified language. + +* **Parameters:** + **language** (*str*) – The language (e.g. chinese or english) +* **Return type:** + *Query* + +#### `limit_fields(*fields)` + +Limit the search to specific TEXT fields only. 
+ +- **fields**: A list of strings, case sensitive field names + +from the defined schema. + +* **Parameters:** + **fields** (*List* *[* *str* *]*) +* **Return type:** + *Query* + +#### `limit_ids(*ids)` + +Limit the results to a specific set of pre-known document +ids of any length. + +* **Return type:** + *Query* + +#### `no_content()` + +Set the query to only return ids and not the document content. + +* **Return type:** + *Query* + +#### `no_stopwords()` + +Prevent the query from being filtered for stopwords. +Only useful in very big queries that you are certain contain +no stopwords. + +* **Return type:** + *Query* + +#### `paging(offset, num)` + +Set the paging for the query (defaults to 0..10). + +- **offset**: Paging offset for the results. Defaults to 0 +- **num**: How many results do we want + +* **Parameters:** + * **offset** (*int*) + * **num** (*int*) +* **Return type:** + *Query* + +#### `query_string()` + +Return the query string of this query only. + +* **Return type:** + str + +#### `return_fields(*fields)` + +Add fields to return fields. + +* **Return type:** + *Query* + +#### `scorer(scorer)` + +Use a different scoring function to evaluate document relevance. +Default is TFIDF. + +* **Parameters:** + **scorer** (*str*) – The scoring function to use + (e.g. TFIDF.DOCNORM or BM25) +* **Return type:** + *Query* + +#### `set_filter(filter_expression=None)` + +Set the filter expression for the query. + +* **Parameters:** + **filter_expression** (*Optional* *[* *Union* *[* *str* *,* [*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]* *]* *,* *optional*) – The filter + expression or query string to use on the query. +* **Raises:** + **TypeError** – If filter_expression is not a valid FilterExpression or string. + +#### `slop(slop)` + +Allow a maximum of N intervening non matched terms between +phrase terms (0 means exact phrase). + +* **Parameters:** + **slop** (*int*) +* **Return type:** + *Query* + +#### `sort_by(field, asc=True)` + +Add a sortby field to the query. + +- **field** - the name of the field to sort by +- **asc** - when True, sorting will be done in asceding order + +* **Parameters:** + * **field** (*str*) + * **asc** (*bool*) +* **Return type:** + *Query* + +#### `timeout(timeout)` + +overrides the timeout parameter of the module + +* **Parameters:** + **timeout** (*float*) +* **Return type:** + *Query* + +#### `verbatim()` + +Set the query to be verbatim, i.e. use no query expansion +or stemming. + +* **Return type:** + *Query* + +#### `with_payloads()` + +Ask the engine to return document payloads. + +* **Return type:** + *Query* + +#### `with_scores()` + +Ask the engine to return document search scores. + +* **Return type:** + *Query* + +#### `property filter: str | `[`FilterExpression`]({{< relref "filter/#filterexpression" >}})` ` + +The filter expression for the query. + +#### `property params: Dict[str, Any]` + +Return the query parameters. + +#### `property query: BaseQuery` + +Return self as the query object. + +## FilterQuery + +### `class FilterQuery(filter_expression=None, return_fields=None, num_results=10, dialect=2, sort_by=None, in_order=False, params=None)` + +Bases: `BaseQuery` + +A query for running a filtered search with a filter expression. + +* **Parameters:** + * **filter_expression** (*Optional* *[* *Union* *[* *str* *,* [*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]* *]*) – The optional filter + expression to query with. Defaults to ‘\*’. 
+ * **return_fields** (*Optional* *[* *List* *[* *str* *]* *]* *,* *optional*) – The fields to return. + * **num_results** (*Optional* *[* *int* *]* *,* *optional*) – The number of results to return. Defaults to 10. + * **dialect** (*int* *,* *optional*) – The query dialect. Defaults to 2. + * **sort_by** (*Optional* *[* *str* *]* *,* *optional*) – The field to order the results by. Defaults to None. + * **in_order** (*bool* *,* *optional*) – Requires the terms in the field to have the same order as the + terms in the query filter. Defaults to False. + * **params** (*Optional* *[* *Dict* *[* *str* *,* *Any* *]* *]* *,* *optional*) – The parameters for the query. Defaults to None. +* **Raises:** + **TypeError** – If filter_expression is not of type redisvl.query.FilterExpression + +#### `dialect(dialect)` + +Add a dialect field to the query. + +- **dialect** - dialect version to execute the query under + +* **Parameters:** + **dialect** (*int*) +* **Return type:** + *Query* + +#### `expander(expander)` + +Add a expander field to the query. + +- **expander** - the name of the expander + +* **Parameters:** + **expander** (*str*) +* **Return type:** + *Query* + +#### `in_order()` + +Match only documents where the query terms appear in +the same order in the document. +i.e. for the query “hello world”, we do not match “world hello” + +* **Return type:** + *Query* + +#### `language(language)` + +Analyze the query as being in the specified language. + +* **Parameters:** + **language** (*str*) – The language (e.g. chinese or english) +* **Return type:** + *Query* + +#### `limit_fields(*fields)` + +Limit the search to specific TEXT fields only. + +- **fields**: A list of strings, case sensitive field names + +from the defined schema. + +* **Parameters:** + **fields** (*List* *[* *str* *]*) +* **Return type:** + *Query* + +#### `limit_ids(*ids)` + +Limit the results to a specific set of pre-known document +ids of any length. + +* **Return type:** + *Query* + +#### `no_content()` + +Set the query to only return ids and not the document content. + +* **Return type:** + *Query* + +#### `no_stopwords()` + +Prevent the query from being filtered for stopwords. +Only useful in very big queries that you are certain contain +no stopwords. + +* **Return type:** + *Query* + +#### `paging(offset, num)` + +Set the paging for the query (defaults to 0..10). + +- **offset**: Paging offset for the results. Defaults to 0 +- **num**: How many results do we want + +* **Parameters:** + * **offset** (*int*) + * **num** (*int*) +* **Return type:** + *Query* + +#### `query_string()` + +Return the query string of this query only. + +* **Return type:** + str + +#### `return_fields(*fields)` + +Add fields to return fields. + +* **Return type:** + *Query* + +#### `scorer(scorer)` + +Use a different scoring function to evaluate document relevance. +Default is TFIDF. + +* **Parameters:** + **scorer** (*str*) – The scoring function to use + (e.g. TFIDF.DOCNORM or BM25) +* **Return type:** + *Query* + +#### `set_filter(filter_expression=None)` + +Set the filter expression for the query. + +* **Parameters:** + **filter_expression** (*Optional* *[* *Union* *[* *str* *,* [*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]* *]* *,* *optional*) – The filter + expression or query string to use on the query. +* **Raises:** + **TypeError** – If filter_expression is not a valid FilterExpression or string. + +#### `slop(slop)` + +Allow a maximum of N intervening non matched terms between +phrase terms (0 means exact phrase). 
+ +* **Parameters:** + **slop** (*int*) +* **Return type:** + *Query* + +#### `sort_by(field, asc=True)` + +Add a sortby field to the query. + +- **field** - the name of the field to sort by +- **asc** - when True, sorting will be done in asceding order + +* **Parameters:** + * **field** (*str*) + * **asc** (*bool*) +* **Return type:** + *Query* + +#### `timeout(timeout)` + +overrides the timeout parameter of the module + +* **Parameters:** + **timeout** (*float*) +* **Return type:** + *Query* + +#### `verbatim()` + +Set the query to be verbatim, i.e. use no query expansion +or stemming. + +* **Return type:** + *Query* + +#### `with_payloads()` + +Ask the engine to return document payloads. + +* **Return type:** + *Query* + +#### `with_scores()` + +Ask the engine to return document search scores. + +* **Return type:** + *Query* + +#### `property filter: str | `[`FilterExpression`]({{< relref "filter/#filterexpression" >}})` ` + +The filter expression for the query. + +#### `property params: Dict[str, Any]` + +Return the query parameters. + +#### `property query: BaseQuery` + +Return self as the query object. + +## CountQuery + +### `class CountQuery(filter_expression=None, dialect=2, params=None)` + +Bases: `BaseQuery` + +A query for a simple count operation provided some filter expression. + +* **Parameters:** + * **filter_expression** (*Optional* *[* *Union* *[* *str* *,* [*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]* *]*) – The filter expression to + query with. Defaults to None. + * **params** (*Optional* *[* *Dict* *[* *str* *,* *Any* *]* *]* *,* *optional*) – The parameters for the query. Defaults to None. + * **dialect** (*int*) +* **Raises:** + **TypeError** – If filter_expression is not of type redisvl.query.FilterExpression + +```python +from redisvl.query import CountQuery +from redisvl.query.filter import Tag + +t = Tag("brand") == "Nike" +query = CountQuery(filter_expression=t) + +count = index.query(query) +``` + +#### `dialect(dialect)` + +Add a dialect field to the query. + +- **dialect** - dialect version to execute the query under + +* **Parameters:** + **dialect** (*int*) +* **Return type:** + *Query* + +#### `expander(expander)` + +Add a expander field to the query. + +- **expander** - the name of the expander + +* **Parameters:** + **expander** (*str*) +* **Return type:** + *Query* + +#### `in_order()` + +Match only documents where the query terms appear in +the same order in the document. +i.e. for the query “hello world”, we do not match “world hello” + +* **Return type:** + *Query* + +#### `language(language)` + +Analyze the query as being in the specified language. + +* **Parameters:** + **language** (*str*) – The language (e.g. chinese or english) +* **Return type:** + *Query* + +#### `limit_fields(*fields)` + +Limit the search to specific TEXT fields only. + +- **fields**: A list of strings, case sensitive field names + +from the defined schema. + +* **Parameters:** + **fields** (*List* *[* *str* *]*) +* **Return type:** + *Query* + +#### `limit_ids(*ids)` + +Limit the results to a specific set of pre-known document +ids of any length. + +* **Return type:** + *Query* + +#### `no_content()` + +Set the query to only return ids and not the document content. + +* **Return type:** + *Query* + +#### `no_stopwords()` + +Prevent the query from being filtered for stopwords. +Only useful in very big queries that you are certain contain +no stopwords. 
+ +* **Return type:** + *Query* + +#### `paging(offset, num)` + +Set the paging for the query (defaults to 0..10). + +- **offset**: Paging offset for the results. Defaults to 0 +- **num**: How many results do we want + +* **Parameters:** + * **offset** (*int*) + * **num** (*int*) +* **Return type:** + *Query* + +#### `query_string()` + +Return the query string of this query only. + +* **Return type:** + str + +#### `return_fields(*fields)` + +Add fields to return fields. + +* **Return type:** + *Query* + +#### `scorer(scorer)` + +Use a different scoring function to evaluate document relevance. +Default is TFIDF. + +* **Parameters:** + **scorer** (*str*) – The scoring function to use + (e.g. TFIDF.DOCNORM or BM25) +* **Return type:** + *Query* + +#### `set_filter(filter_expression=None)` + +Set the filter expression for the query. + +* **Parameters:** + **filter_expression** (*Optional* *[* *Union* *[* *str* *,* [*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]* *]* *,* *optional*) – The filter + expression or query string to use on the query. +* **Raises:** + **TypeError** – If filter_expression is not a valid FilterExpression or string. + +#### `slop(slop)` + +Allow a maximum of N intervening non matched terms between +phrase terms (0 means exact phrase). + +* **Parameters:** + **slop** (*int*) +* **Return type:** + *Query* + +#### `sort_by(field, asc=True)` + +Add a sortby field to the query. + +- **field** - the name of the field to sort by +- **asc** - when True, sorting will be done in asceding order + +* **Parameters:** + * **field** (*str*) + * **asc** (*bool*) +* **Return type:** + *Query* + +#### `timeout(timeout)` + +overrides the timeout parameter of the module + +* **Parameters:** + **timeout** (*float*) +* **Return type:** + *Query* + +#### `verbatim()` + +Set the query to be verbatim, i.e. use no query expansion +or stemming. + +* **Return type:** + *Query* + +#### `with_payloads()` + +Ask the engine to return document payloads. + +* **Return type:** + *Query* + +#### `with_scores()` + +Ask the engine to return document search scores. + +* **Return type:** + *Query* + +#### `property filter: str | `[`FilterExpression`]({{< relref "filter/#filterexpression" >}})` ` + +The filter expression for the query. + +#### `property params: Dict[str, Any]` + +Return the query parameters. + +#### `property query: BaseQuery` + +Return self as the query object. +--- +linkTitle: Rerankers +title: Rerankers +type: integration +--- + + +## CohereReranker + + + +### `class CohereReranker(model='rerank-english-v3.0', rank_by=None, limit=5, return_score=True, api_config=None)` + +Bases: `BaseReranker` + +The CohereReranker class uses Cohere’s API to rerank documents based on an +input query. + +This reranker is designed to interact with Cohere’s /rerank API, +requiring an API key for authentication. The key can be provided +directly in the api_config dictionary or through the COHERE_API_KEY +environment variable. User must obtain an API key from Cohere’s website +([https://dashboard.cohere.com/](https://dashboard.cohere.com/)). Additionally, the cohere python +client must be installed with pip install cohere. 
+ +```python +from redisvl.utils.rerank import CohereReranker + +# set up the Cohere reranker with some configuration +reranker = CohereReranker(rank_by=["content"], limit=2) +# rerank raw search results based on user input/query +results = reranker.rank( + query="your input query text here", + docs=[ + {"content": "document 1"}, + {"content": "document 2"}, + {"content": "document 3"} + ] +) +``` + +Initialize the CohereReranker with specified model, ranking criteria, +and API configuration. + +* **Parameters:** + * **model** (*str*) – The identifier for the Cohere model used for reranking. + Defaults to ‘rerank-english-v3.0’. + * **rank_by** (*Optional* *[* *List* *[* *str* *]* *]*) – Optional list of keys specifying the + attributes in the documents that should be considered for + ranking. None means ranking will rely on the model’s default + behavior. + * **limit** (*int*) – The maximum number of results to return after + reranking. Must be a positive integer. + * **return_score** (*bool*) – Whether to return scores alongside the + reranked results. + * **api_config** (*Optional* *[* *Dict* *]* *,* *optional*) – Dictionary containing the API key. + Defaults to None. +* **Raises:** + * **ImportError** – If the cohere library is not installed. + * **ValueError** – If the API key is not provided. + +#### `async arank(query, docs, **kwargs)` + +Rerank documents based on the provided query using the Cohere rerank API. + +This method processes the user’s query and the provided documents to +rerank them in a manner that is potentially more relevant to the +query’s context. + +* **Parameters:** + * **query** (*str*) – The user’s search query. + * **docs** (*Union* *[* *List* *[* *Dict* *[* *str* *,* *Any* *]* *]* *,* *List* *[* *str* *]* *]*) – The list of documents + to be ranked, either as dictionaries or strings. +* **Returns:** + The reranked list of documents and optionally associated scores. +* **Return type:** + Union[Tuple[Union[List[Dict[str, Any]], List[str]], float], List[Dict[str, Any]]] + +#### `model_post_init(context, /)` + +This function is meant to behave like a BaseModel method to initialise private attributes. + +It takes context as an argument since that’s what pydantic-core passes when calling it. + +* **Parameters:** + * **self** (*BaseModel*) – The BaseModel instance. + * **context** (*Any*) – The context. +* **Return type:** + None + +#### `rank(query, docs, **kwargs)` + +Rerank documents based on the provided query using the Cohere rerank API. + +This method processes the user’s query and the provided documents to +rerank them in a manner that is potentially more relevant to the +query’s context. + +* **Parameters:** + * **query** (*str*) – The user’s search query. + * **docs** (*Union* *[* *List* *[* *Dict* *[* *str* *,* *Any* *]* *]* *,* *List* *[* *str* *]* *]*) – The list of documents + to be ranked, either as dictionaries or strings. +* **Returns:** + The reranked list of documents and optionally associated scores. +* **Return type:** + Union[Tuple[Union[List[Dict[str, Any]], List[str]], float], List[Dict[str, Any]]] + +#### `model_config: ClassVar[ConfigDict] = {}` + +Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. 
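+
+For async applications, the `arank()` method mirrors the synchronous `rank()` example above. The following is a minimal sketch: it assumes the `COHERE_API_KEY` environment variable is set and that, with `return_score=True`, the call returns a pair of reranked documents and scores (as the return type above suggests); the query and document contents are placeholders.
+
+```python
+import asyncio
+
+from redisvl.utils.rerank import CohereReranker
+
+
+async def main():
+    # assumes COHERE_API_KEY is exported; otherwise pass api_config={"api_key": "..."}
+    reranker = CohereReranker(rank_by=["content"], limit=2, return_score=True)
+
+    # assuming return_score=True yields (reranked documents, scores)
+    docs, scores = await reranker.arank(
+        query="your input query text here",
+        docs=[
+            {"content": "document 1"},
+            {"content": "document 2"},
+            {"content": "document 3"},
+        ],
+    )
+    for doc, score in zip(docs, scores):
+        print(score, doc)
+
+
+asyncio.run(main())
+```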
+ +## HFCrossEncoderReranker + + + +### `class HFCrossEncoderReranker(model='cross-encoder/ms-marco-MiniLM-L-6-v2', limit=3, return_score=True, *, rank_by=None)` + +Bases: `BaseReranker` + +The HFCrossEncoderReranker class uses a cross-encoder models from Hugging Face +to rerank documents based on an input query. + +This reranker loads a cross-encoder model using the CrossEncoder class +from the sentence_transformers library. It requires the +sentence_transformers library to be installed. + +```python +from redisvl.utils.rerank import HFCrossEncoderReranker + +# set up the HFCrossEncoderReranker with a specific model +reranker = HFCrossEncoderReranker(model_name="cross-encoder/ms-marco-MiniLM-L-6-v2", limit=3) +# rerank raw search results based on user input/query +results = reranker.rank( + query="your input query text here", + docs=[ + {"content": "document 1"}, + {"content": "document 2"}, + {"content": "document 3"} + ] +) +``` + +Initialize the HFCrossEncoderReranker with a specified model and ranking criteria. + +* **Parameters:** + * **model** (*str*) – The name or path of the cross-encoder model to use for reranking. + Defaults to ‘cross-encoder/ms-marco-MiniLM-L-6-v2’. + * **limit** (*int*) – The maximum number of results to return after reranking. Must be a positive integer. + * **return_score** (*bool*) – Whether to return scores alongside the reranked results. + * **rank_by** (*List* *[* *str* *]* *|* *None*) + +#### `async arank(query, docs, **kwargs)` + +Asynchronously rerank documents based on the provided query using the loaded cross-encoder model. + +This method processes the user’s query and the provided documents to rerank them +in a manner that is potentially more relevant to the query’s context. + +* **Parameters:** + * **query** (*str*) – The user’s search query. + * **docs** (*Union* *[* *List* *[* *Dict* *[* *str* *,* *Any* *]* *]* *,* *List* *[* *str* *]* *]*) – The list of documents to be ranked, + either as dictionaries or strings. +* **Returns:** + The reranked list of documents and optionally associated scores. +* **Return type:** + Union[Tuple[List[Dict[str, Any]], List[float]], List[Dict[str, Any]]] + +#### `model_post_init(context, /)` + +This function is meant to behave like a BaseModel method to initialise private attributes. + +It takes context as an argument since that’s what pydantic-core passes when calling it. + +* **Parameters:** + * **self** (*BaseModel*) – The BaseModel instance. + * **context** (*Any*) – The context. +* **Return type:** + None + +#### `rank(query, docs, **kwargs)` + +Rerank documents based on the provided query using the loaded cross-encoder model. + +This method processes the user’s query and the provided documents to rerank them +in a manner that is potentially more relevant to the query’s context. + +* **Parameters:** + * **query** (*str*) – The user’s search query. + * **docs** (*Union* *[* *List* *[* *Dict* *[* *str* *,* *Any* *]* *]* *,* *List* *[* *str* *]* *]*) – The list of documents to be ranked, + either as dictionaries or strings. +* **Returns:** + The reranked list of documents and optionally associated scores. +* **Return type:** + Union[Tuple[List[Dict[str, Any]], List[float]], List[Dict[str, Any]]] + +#### `model_config: ClassVar[ConfigDict] = {}` + +Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. 
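+
+Because the cross-encoder model runs locally, no API key is required. The following is a minimal sketch: it assumes the `sentence-transformers` package is installed and that `rank()` with `return_score=True` returns a pair of reranked documents and scores (as the return type above suggests); the query and document strings are placeholders.
+
+```python
+from redisvl.utils.rerank import HFCrossEncoderReranker
+
+# requires: pip install sentence-transformers
+reranker = HFCrossEncoderReranker(limit=2, return_score=True)
+
+docs = [
+    "Redis is an in-memory data store.",
+    "The weather is sunny today.",
+    "RedisVL provides vector search helpers on top of redis-py.",
+]
+
+# assuming return_score=True yields (reranked documents, scores)
+reranked, scores = reranker.rank(query="What is RedisVL?", docs=docs)
+for doc, score in zip(reranked, scores):
+    print(f"{score:.4f}  {doc}")
+```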
+ +## VoyageAIReranker + + + +### `class VoyageAIReranker(model, rank_by=None, limit=5, return_score=True, api_config=None)` + +Bases: `BaseReranker` + +The VoyageAIReranker class uses VoyageAI’s API to rerank documents based on an +input query. + +This reranker is designed to interact with VoyageAI’s /rerank API, +requiring an API key for authentication. The key can be provided +directly in the api_config dictionary or through the VOYAGE_API_KEY +environment variable. User must obtain an API key from VoyageAI’s website +([https://dash.voyageai.com/](https://dash.voyageai.com/)). Additionally, the voyageai python +client must be installed with pip install voyageai. + +```python +from redisvl.utils.rerank import VoyageAIReranker + +# set up the VoyageAI reranker with some configuration +reranker = VoyageAIReranker(rank_by=["content"], limit=2) +# rerank raw search results based on user input/query +results = reranker.rank( + query="your input query text here", + docs=[ + {"content": "document 1"}, + {"content": "document 2"}, + {"content": "document 3"} + ] +) +``` + +Initialize the VoyageAIReranker with specified model, ranking criteria, +and API configuration. + +* **Parameters:** + * **model** (*str*) – The identifier for the VoyageAI model used for reranking. + * **rank_by** (*Optional* *[* *List* *[* *str* *]* *]*) – Optional list of keys specifying the + attributes in the documents that should be considered for + ranking. None means ranking will rely on the model’s default + behavior. + * **limit** (*int*) – The maximum number of results to return after + reranking. Must be a positive integer. + * **return_score** (*bool*) – Whether to return scores alongside the + reranked results. + * **api_config** (*Optional* *[* *Dict* *]* *,* *optional*) – Dictionary containing the API key. + Defaults to None. +* **Raises:** + * **ImportError** – If the voyageai library is not installed. + * **ValueError** – If the API key is not provided. + +#### `async arank(query, docs, **kwargs)` + +Rerank documents based on the provided query using the VoyageAI rerank API. + +This method processes the user’s query and the provided documents to +rerank them in a manner that is potentially more relevant to the +query’s context. + +* **Parameters:** + * **query** (*str*) – The user’s search query. + * **docs** (*Union* *[* *List* *[* *Dict* *[* *str* *,* *Any* *]* *]* *,* *List* *[* *str* *]* *]*) – The list of documents + to be ranked, either as dictionaries or strings. +* **Returns:** + The reranked list of documents and optionally associated scores. +* **Return type:** + Union[Tuple[Union[List[Dict[str, Any]], List[str]], float], List[Dict[str, Any]]] + +#### `model_post_init(context, /)` + +This function is meant to behave like a BaseModel method to initialise private attributes. + +It takes context as an argument since that’s what pydantic-core passes when calling it. + +* **Parameters:** + * **self** (*BaseModel*) – The BaseModel instance. + * **context** (*Any*) – The context. +* **Return type:** + None + +#### `rank(query, docs, **kwargs)` + +Rerank documents based on the provided query using the VoyageAI rerank API. + +This method processes the user’s query and the provided documents to +rerank them in a manner that is potentially more relevant to the +query’s context. + +* **Parameters:** + * **query** (*str*) – The user’s search query. 
+ * **docs** (*Union* *[* *List* *[* *Dict* *[* *str* *,* *Any* *]* *]* *,* *List* *[* *str* *]* *]*) – The list of documents + to be ranked, either as dictionaries or strings. +* **Returns:** + The reranked list of documents and optionally associated scores. +* **Return type:** + Union[Tuple[Union[List[Dict[str, Any]], List[str]], float], List[Dict[str, Any]]] + +#### `model_config: ClassVar[ConfigDict] = {}` + +Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. +--- +linkTitle: Search index classes +title: Search Index Classes +type: integration +--- + + +| Class | Description | +|-------------------------------------------|----------------------------------------------------------------------------------------------| +| [SearchIndex](#searchindex-api) | Primary class to write, read, and search across data structures in Redis. | +| [AsyncSearchIndex](#asyncsearchindex-api) | Async version of the SearchIndex to write, read, and search across data structures in Redis. | + + + +## SearchIndex + +### `class SearchIndex(schema, redis_client=None, redis_url=None, connection_kwargs=None, validate_on_load=False, **kwargs)` + +A search index class for interacting with Redis as a vector database. + +The SearchIndex is instantiated with a reference to a Redis database and an +IndexSchema (YAML path or dictionary object) that describes the various +settings and field configurations. + +```python +from redisvl.index import SearchIndex + +# initialize the index object with schema from file +index = SearchIndex.from_yaml( + "schemas/schema.yaml", + redis_url="redis://localhost:6379", + validate_on_load=True +) + +# create the index +index.create(overwrite=True, drop=False) + +# data is an iterable of dictionaries +index.load(data) + +# delete index and data +index.delete(drop=True) +``` + +Initialize the RedisVL search index with a schema, Redis client +(or URL string with other connection args), connection_args, and other +kwargs. + +* **Parameters:** + * **schema** ([*IndexSchema*]({{< relref "schema/#indexschema" >}})) – Index schema object. + * **redis_client** (*Optional* *[* *redis.Redis* *]*) – An + instantiated redis client. + * **redis_url** (*Optional* *[* *str* *]*) – The URL of the Redis server to + connect to. + * **connection_kwargs** (*Dict* *[* *str* *,* *Any* *]* *,* *optional*) – Redis client connection + args. + * **validate_on_load** (*bool* *,* *optional*) – Whether to validate data against schema + when loading. Defaults to False. + +#### `aggregate(*args, **kwargs)` + +Perform an aggregation operation against the index. + +Wrapper around the aggregation API that adds the index name +to the query and passes along the rest of the arguments +to the redis-py ft().aggregate() method. + +* **Returns:** + Raw Redis aggregation results. +* **Return type:** + Result + +#### `batch_query(queries, batch_size=10)` + +Execute a batch of queries and process results. + +* **Parameters:** + * **queries** (*Sequence* *[* *BaseQuery* *]*) + * **batch_size** (*int*) +* **Return type:** + *List*[*List*[*Dict*[str, *Any*]]] + +#### `batch_search(queries, batch_size=10)` + +Perform a search against the index for multiple queries. + +This method takes a list of queries and optionally query params and +returns a list of Result objects for each query. Results are +returned in the same order as the queries. + +* **Parameters:** + * **queries** (*List* *[* *SearchParams* *]*) – The queries to search for. 
batch_size + * **(* ***int** – The number of queries to search for at a time. + Defaults to 10. + * **optional****)** – The number of queries to search for at a time. + Defaults to 10. + * **batch_size** (*int*) +* **Returns:** + The search results for each query. +* **Return type:** + List[Result] + +#### `clear()` + +Clear all keys in Redis associated with the index, leaving the index +available and in-place for future insertions or updates. + +* **Returns:** + Count of records deleted from Redis. +* **Return type:** + int + +#### `connect(redis_url=None, **kwargs)` + +Connect to a Redis instance using the provided redis_url, falling +back to the REDIS_URL environment variable (if available). + +Note: Additional keyword arguments (\*\*kwargs) can be used to provide +extra options specific to the Redis connection. + +* **Parameters:** + **redis_url** (*Optional* *[* *str* *]* *,* *optional*) – The URL of the Redis server to + connect to. +* **Raises:** + * **redis.exceptions.ConnectionError** – If the connection to the Redis + server fails. + * **ValueError** – If the Redis URL is not provided nor accessible + through the REDIS_URL environment variable. + * **ModuleNotFoundError** – If required Redis modules are not installed. + +#### `create(overwrite=False, drop=False)` + +Create an index in Redis with the current schema and properties. + +* **Parameters:** + * **overwrite** (*bool* *,* *optional*) – Whether to overwrite the index if it + already exists. Defaults to False. + * **drop** (*bool* *,* *optional*) – Whether to drop all keys associated with the + index in the case of overwriting. Defaults to False. +* **Raises:** + * **RuntimeError** – If the index already exists and ‘overwrite’ is False. + * **ValueError** – If no fields are defined for the index. +* **Return type:** + None + +```python +# create an index in Redis; only if one does not exist with given name +index.create() + +# overwrite an index in Redis without dropping associated data +index.create(overwrite=True) + +# overwrite an index in Redis; drop associated data (clean slate) +index.create(overwrite=True, drop=True) +``` + +#### `delete(drop=True)` + +Delete the search index while optionally dropping all keys associated +with the index. + +* **Parameters:** + **drop** (*bool* *,* *optional*) – Delete the key / documents pairs in the + index. Defaults to True. +* **Raises:** + **redis.exceptions.ResponseError** – If the index does not exist. + +#### `disconnect()` + +Disconnect from the Redis database. + +#### `drop_documents(ids)` + +Remove documents from the index by their document IDs. + +This method converts document IDs to Redis keys automatically by applying +the index’s key prefix and separator configuration. + +* **Parameters:** + **ids** (*Union* *[* *str* *,* *List* *[* *str* *]* *]*) – The document ID or IDs to remove from the index. +* **Returns:** + Count of documents deleted from Redis. +* **Return type:** + int + +#### `drop_keys(keys)` + +Remove a specific entry or entries from the index by it’s key ID. + +* **Parameters:** + **keys** (*Union* *[* *str* *,* *List* *[* *str* *]* *]*) – The document ID or IDs to remove from the index. +* **Returns:** + Count of records deleted from Redis. +* **Return type:** + int + +#### `exists()` + +Check if the index exists in Redis. + +* **Returns:** + True if the index exists, False otherwise. +* **Return type:** + bool + +#### `expire_keys(keys, ttl)` + +Set the expiration time for a specific entry or entries in Redis. 
+ +* **Parameters:** + * **keys** (*Union* *[* *str* *,* *List* *[* *str* *]* *]*) – The entry ID or IDs to set the expiration for. + * **ttl** (*int*) – The time-to-live in seconds. +* **Return type:** + int | *List*[int] + +#### `fetch(id)` + +Fetch an object from Redis by id. + +The id is typically either a unique identifier, +or derived from some domain-specific metadata combination +(like a document id or chunk id). + +* **Parameters:** + **id** (*str*) – The specified unique identifier for a particular + document indexed in Redis. +* **Returns:** + The fetched object. +* **Return type:** + Dict[str, Any] + +#### `classmethod from_dict(schema_dict, **kwargs)` + +Create a SearchIndex from a dictionary. + +* **Parameters:** + **schema_dict** (*Dict* *[* *str* *,* *Any* *]*) – A dictionary containing the schema. +* **Returns:** + A RedisVL SearchIndex object. +* **Return type:** + [SearchIndex](#searchindex) + +```python +from redisvl.index import SearchIndex + +index = SearchIndex.from_dict({ + "index": { + "name": "my-index", + "prefix": "rvl", + "storage_type": "hash", + }, + "fields": [ + {"name": "doc-id", "type": "tag"} + ] +}, redis_url="redis://localhost:6379") +``` + +#### `classmethod from_existing(name, redis_client=None, redis_url=None, **kwargs)` + +Initialize from an existing search index in Redis by index name. + +* **Parameters:** + * **name** (*str*) – Name of the search index in Redis. + * **redis_client** (*Optional* *[* *redis.Redis* *]*) – An + instantiated redis client. + * **redis_url** (*Optional* *[* *str* *]*) – The URL of the Redis server to + connect to. +* **Raises:** + * **ValueError** – If redis_url or redis_client is not provided. + * **RedisModuleVersionError** – If required Redis modules are not installed. + +#### `classmethod from_yaml(schema_path, **kwargs)` + +Create a SearchIndex from a YAML schema file. + +* **Parameters:** + **schema_path** (*str*) – Path to the YAML schema file. +* **Returns:** + A RedisVL SearchIndex object. +* **Return type:** + [SearchIndex](#searchindex) + +```python +from redisvl.index import SearchIndex + +index = SearchIndex.from_yaml("schemas/schema.yaml", redis_url="redis://localhost:6379") +``` + +#### `info(name=None)` + +Get information about the index. + +* **Parameters:** + **name** (*str* *,* *optional*) – Index name to fetch info about. + Defaults to None. +* **Returns:** + A dictionary containing the information about the index. +* **Return type:** + dict + +#### `key(id)` + +Construct a redis key as a combination of an index key prefix (optional) +and specified id. + +The id is typically either a unique identifier, or +derived from some domain-specific metadata combination (like a document +id or chunk id). + +* **Parameters:** + **id** (*str*) – The specified unique identifier for a particular + document indexed in Redis. +* **Returns:** + The full Redis key including key prefix and value as a string. +* **Return type:** + str + +#### `listall()` + +List all search indices in Redis database. + +* **Returns:** + The list of indices in the database. +* **Return type:** + List[str] + +#### `load(data, id_field=None, keys=None, ttl=None, preprocess=None, batch_size=None)` + +Load objects to the Redis database. Returns the list of keys loaded +to Redis. + +RedisVL automatically handles constructing the object keys, batching, +optional preprocessing steps, and setting optional expiration +(TTL policies) on keys. + +* **Parameters:** + * **data** (*Iterable* *[* *Any* *]*) – An iterable of objects to store. 
+ * **id_field** (*Optional* *[* *str* *]* *,* *optional*) – Specified field used as the id + portion of the redis key (after the prefix) for each + object. Defaults to None. + * **keys** (*Optional* *[* *Iterable* *[* *str* *]* *]* *,* *optional*) – Optional iterable of keys. + Must match the length of objects if provided. Defaults to None. + * **ttl** (*Optional* *[* *int* *]* *,* *optional*) – Time-to-live in seconds for each key. + Defaults to None. + * **preprocess** (*Optional* *[* *Callable* *]* *,* *optional*) – A function to preprocess + objects before storage. Defaults to None. + * **batch_size** (*Optional* *[* *int* *]* *,* *optional*) – Number of objects to write in + a single Redis pipeline execution. Defaults to class’s + default batch size. +* **Returns:** + List of keys loaded to Redis. +* **Return type:** + List[str] +* **Raises:** + * **SchemaValidationError** – If validation fails when validate_on_load is enabled. + * **RedisVLError** – If there’s an error loading data to Redis. + +#### `paginate(query, page_size=30)` + +Execute a given query against the index and return results in +paginated batches. + +This method accepts a RedisVL query instance, enabling pagination of +results which allows for subsequent processing over each batch with a +generator. + +* **Parameters:** + * **query** (*BaseQuery*) – The search query to be executed. + * **page_size** (*int* *,* *optional*) – The number of results to return in each + batch. Defaults to 30. +* **Yields:** + A generator yielding batches of search results. +* **Raises:** + * **TypeError** – If the page_size argument is not of type int. + * **ValueError** – If the page_size argument is less than or equal to zero. +* **Return type:** + *Generator* + +```python +# Iterate over paginated search results in batches of 10 +for result_batch in index.paginate(query, page_size=10): + # Process each batch of results + pass +``` + +#### `NOTE` +The page_size parameter controls the number of items each result +batch contains. Adjust this value based on performance +considerations and the expected volume of search results. + +#### `query(query)` + +Execute a query on the index. + +This method takes a BaseQuery or AggregationQuery object directly, and +handles post-processing of the search. + +* **Parameters:** + **query** (*Union* *[* *BaseQuery* *,* *AggregateQuery* *]*) – The query to run. +* **Returns:** + A list of search results. +* **Return type:** + List[Result] + +```python +from redisvl.query import VectorQuery + +query = VectorQuery( + vector=[0.16, -0.34, 0.98, 0.23], + vector_field_name="embedding", + num_results=3 +) + +results = index.query(query) +``` + +#### `search(*args, **kwargs)` + +Perform a search against the index. + +Wrapper around the search API that adds the index name +to the query and passes along the rest of the arguments +to the redis-py ft().search() method. + +* **Returns:** + Raw Redis search results. +* **Return type:** + Result + +#### `set_client(redis_client, **kwargs)` + +Manually set the Redis client to use with the search index. + +This method configures the search index to use a specific Redis or +Async Redis client. It is useful for cases where an external, +custom-configured client is preferred instead of creating a new one. + +* **Parameters:** + **redis_client** (*redis.Redis*) – A Redis or Async Redis + client instance to be used for the connection. +* **Raises:** + **TypeError** – If the provided client is not valid. 
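+
+The following is a minimal sketch of pairing `set_client()` with an externally configured client; the Redis URL and schema path are placeholders.
+
+```python
+from redis import Redis
+
+from redisvl.index import SearchIndex
+
+# reuse a client configured elsewhere (e.g. custom pooling or TLS options)
+client = Redis.from_url("redis://localhost:6379")
+
+index = SearchIndex.from_yaml("schemas/schema.yaml")
+index.set_client(client)
+
+# create the index only if it does not already exist
+if not index.exists():
+    index.create()
+```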
+ +#### `property client: Redis | None` + +The underlying redis-py client object. + +#### `property key_separator: str` + +The optional separator between a defined prefix and key value in +forming a Redis key. + +#### `property name: str` + +The name of the Redis search index. + +#### `property prefix: str` + +The optional key prefix that comes before a unique key value in +forming a Redis key. + +#### `property storage_type: StorageType` + +The underlying storage type for the search index; either +hash or json. + + + +## AsyncSearchIndex + +### `class AsyncSearchIndex(schema, *, redis_url=None, redis_client=None, connection_kwargs=None, validate_on_load=False, **kwargs)` + +A search index class for interacting with Redis as a vector database in +async-mode. + +The AsyncSearchIndex is instantiated with a reference to a Redis database +and an IndexSchema (YAML path or dictionary object) that describes the +various settings and field configurations. + +```python +from redisvl.index import AsyncSearchIndex + +# initialize the index object with schema from file +index = AsyncSearchIndex.from_yaml( + "schemas/schema.yaml", + redis_url="redis://localhost:6379", + validate_on_load=True +) + +# create the index +await index.create(overwrite=True, drop=False) + +# data is an iterable of dictionaries +await index.load(data) + +# delete index and data +await index.delete(drop=True) +``` + +Initialize the RedisVL async search index with a schema. + +* **Parameters:** + * **schema** ([*IndexSchema*]({{< relref "schema/#indexschema" >}})) – Index schema object. + * **redis_url** (*Optional* *[* *str* *]* *,* *optional*) – The URL of the Redis server to + connect to. + * **redis_client** (*Optional* *[* *aredis.Redis* *]*) – An + instantiated redis client. + * **connection_kwargs** (*Optional* *[* *Dict* *[* *str* *,* *Any* *]* *]*) – Redis client connection + args. + * **validate_on_load** (*bool* *,* *optional*) – Whether to validate data against schema + when loading. Defaults to False. + +#### `async aggregate(*args, **kwargs)` + +Perform an aggregation operation against the index. + +Wrapper around the aggregation API that adds the index name +to the query and passes along the rest of the arguments +to the redis-py ft().aggregate() method. + +* **Returns:** + Raw Redis aggregation results. +* **Return type:** + Result + +#### `async batch_query(queries, batch_size=10)` + +Asynchronously execute a batch of queries and process results. + +* **Parameters:** + * **queries** (*List* *[* *BaseQuery* *]*) + * **batch_size** (*int*) +* **Return type:** + *List*[*List*[*Dict*[str, *Any*]]] + +#### `async batch_search(queries, batch_size=10)` + +Perform a search against the index for multiple queries. + +This method takes a list of queries and returns a list of Result objects +for each query. Results are returned in the same order as the queries. + +* **Parameters:** + * **queries** (*List* *[* *SearchParams* *]*) – The queries to search for. batch_size + * **(* ***int** – The number of queries to search for at a time. + Defaults to 10. + * **optional****)** – The number of queries to search for at a time. + Defaults to 10. + * **batch_size** (*int*) +* **Returns:** + The search results for each query. +* **Return type:** + List[Result] + +#### `async clear()` + +Clear all keys in Redis associated with the index, leaving the index +available and in-place for future insertions or updates. + +* **Returns:** + Count of records deleted from Redis. 
+* **Return type:** + int + +#### `connect(redis_url=None, **kwargs)` + +[DEPRECATED] Connect to a Redis instance. Use connection parameters in \_\_init_\_. + +* **Parameters:** + **redis_url** (*str* *|* *None*) + +#### `async create(overwrite=False, drop=False)` + +Asynchronously create an index in Redis with the current schema +: and properties. + +* **Parameters:** + * **overwrite** (*bool* *,* *optional*) – Whether to overwrite the index if it + already exists. Defaults to False. + * **drop** (*bool* *,* *optional*) – Whether to drop all keys associated with the + index in the case of overwriting. Defaults to False. +* **Raises:** + * **RuntimeError** – If the index already exists and ‘overwrite’ is False. + * **ValueError** – If no fields are defined for the index. +* **Return type:** + None + +```python +# create an index in Redis; only if one does not exist with given name +await index.create() + +# overwrite an index in Redis without dropping associated data +await index.create(overwrite=True) + +# overwrite an index in Redis; drop associated data (clean slate) +await index.create(overwrite=True, drop=True) +``` + +#### `async delete(drop=True)` + +Delete the search index. + +* **Parameters:** + **drop** (*bool* *,* *optional*) – Delete the documents in the index. + Defaults to True. +* **Raises:** + **redis.exceptions.ResponseError** – If the index does not exist. + +#### `async disconnect()` + +Disconnect from the Redis database. + +#### `async drop_documents(ids)` + +Remove documents from the index by their document IDs. + +This method converts document IDs to Redis keys automatically by applying +the index’s key prefix and separator configuration. + +* **Parameters:** + **ids** (*Union* *[* *str* *,* *List* *[* *str* *]* *]*) – The document ID or IDs to remove from the index. +* **Returns:** + Count of documents deleted from Redis. +* **Return type:** + int + +#### `async drop_keys(keys)` + +Remove a specific entry or entries from the index by it’s key ID. + +* **Parameters:** + **keys** (*Union* *[* *str* *,* *List* *[* *str* *]* *]*) – The document ID or IDs to remove from the index. +* **Returns:** + Count of records deleted from Redis. +* **Return type:** + int + +#### `async exists()` + +Check if the index exists in Redis. + +* **Returns:** + True if the index exists, False otherwise. +* **Return type:** + bool + +#### `async expire_keys(keys, ttl)` + +Set the expiration time for a specific entry or entries in Redis. + +* **Parameters:** + * **keys** (*Union* *[* *str* *,* *List* *[* *str* *]* *]*) – The entry ID or IDs to set the expiration for. + * **ttl** (*int*) – The time-to-live in seconds. +* **Return type:** + int | *List*[int] + +#### `async fetch(id)` + +Asynchronously etch an object from Redis by id. The id is typically +either a unique identifier, or derived from some domain-specific +metadata combination (like a document id or chunk id). + +* **Parameters:** + **id** (*str*) – The specified unique identifier for a particular + document indexed in Redis. +* **Returns:** + The fetched object. +* **Return type:** + Dict[str, Any] + +#### `classmethod from_dict(schema_dict, **kwargs)` + +Create a SearchIndex from a dictionary. + +* **Parameters:** + **schema_dict** (*Dict* *[* *str* *,* *Any* *]*) – A dictionary containing the schema. +* **Returns:** + A RedisVL SearchIndex object. 
+* **Return type:** + [SearchIndex](#searchindex) + +```python +from redisvl.index import SearchIndex + +index = SearchIndex.from_dict({ + "index": { + "name": "my-index", + "prefix": "rvl", + "storage_type": "hash", + }, + "fields": [ + {"name": "doc-id", "type": "tag"} + ] +}, redis_url="redis://localhost:6379") +``` + +#### `async classmethod* from_existing(name, redis_client=None, redis_url=None, **kwargs)` + +Initialize from an existing search index in Redis by index name. + +* **Parameters:** + * **name** (*str*) – Name of the search index in Redis. + * **redis_client** (*Optional* *[* *redis.Redis* *]*) – An + instantiated redis client. + * **redis_url** (*Optional* *[* *str* *]*) – The URL of the Redis server to + connect to. + +#### `classmethod from_yaml(schema_path, **kwargs)` + +Create a SearchIndex from a YAML schema file. + +* **Parameters:** + **schema_path** (*str*) – Path to the YAML schema file. +* **Returns:** + A RedisVL SearchIndex object. +* **Return type:** + [SearchIndex](#searchindex) + +```python +from redisvl.index import SearchIndex + +index = SearchIndex.from_yaml("schemas/schema.yaml", redis_url="redis://localhost:6379") +``` + +#### `async info(name=None)` + +Get information about the index. + +* **Parameters:** + **name** (*str* *,* *optional*) – Index name to fetch info about. + Defaults to None. +* **Returns:** + A dictionary containing the information about the index. +* **Return type:** + dict + +#### `key(id)` + +Construct a redis key as a combination of an index key prefix (optional) +and specified id. + +The id is typically either a unique identifier, or +derived from some domain-specific metadata combination (like a document +id or chunk id). + +* **Parameters:** + **id** (*str*) – The specified unique identifier for a particular + document indexed in Redis. +* **Returns:** + The full Redis key including key prefix and value as a string. +* **Return type:** + str + +#### `async listall()` + +List all search indices in Redis database. + +* **Returns:** + The list of indices in the database. +* **Return type:** + List[str] + +#### `load(data, id_field=None, keys=None, ttl=None, preprocess=None, concurrency=None, batch_size=None)` + +Asynchronously load objects to Redis. Returns the list of keys loaded +to Redis. + +RedisVL automatically handles constructing the object keys, batching, +optional preprocessing steps, and setting optional expiration +(TTL policies) on keys. + +* **Parameters:** + * **data** (*Iterable* *[* *Any* *]*) – An iterable of objects to store. + * **id_field** (*Optional* *[* *str* *]* *,* *optional*) – Specified field used as the id + portion of the redis key (after the prefix) for each + object. Defaults to None. + * **keys** (*Optional* *[* *Iterable* *[* *str* *]* *]* *,* *optional*) – Optional iterable of keys. + Must match the length of objects if provided. Defaults to None. + * **ttl** (*Optional* *[* *int* *]* *,* *optional*) – Time-to-live in seconds for each key. + Defaults to None. + * **preprocess** (*Optional* *[* *Callable* *]* *,* *optional*) – A function to + preprocess objects before storage. Defaults to None. + * **batch_size** (*Optional* *[* *int* *]* *,* *optional*) – Number of objects to write in + a single Redis pipeline execution. Defaults to class’s + default batch size. + * **concurrency** (*int* *|* *None*) +* **Returns:** + List of keys loaded to Redis. +* **Return type:** + List[str] +* **Raises:** + * **SchemaValidationError** – If validation fails when validate_on_load is enabled. 
+ * **RedisVLError** – If there’s an error loading data to Redis. + +```python +data = [{"test": "foo"}, {"test": "bar"}] + +# simple case +keys = await index.load(data) + +# set 360 second ttl policy on data +keys = await index.load(data, ttl=360) + +# load data with predefined keys +keys = await index.load(data, keys=["rvl:foo", "rvl:bar"]) + +# load data with preprocessing step +def add_field(d): + d["new_field"] = 123 + return d +keys = await index.load(data, preprocess=add_field) +``` + +#### `async paginate(query, page_size=30)` + +Execute a given query against the index and return results in +paginated batches. + +This method accepts a RedisVL query instance, enabling async pagination +of results which allows for subsequent processing over each batch with a +generator. + +* **Parameters:** + * **query** (*BaseQuery*) – The search query to be executed. + * **page_size** (*int* *,* *optional*) – The number of results to return in each + batch. Defaults to 30. +* **Yields:** + An async generator yielding batches of search results. +* **Raises:** + * **TypeError** – If the page_size argument is not of type int. + * **ValueError** – If the page_size argument is less than or equal to zero. +* **Return type:** + *AsyncGenerator* + +```python +# Iterate over paginated search results in batches of 10 +async for result_batch in index.paginate(query, page_size=10): + # Process each batch of results + pass +``` + +#### `NOTE` +The page_size parameter controls the number of items each result +batch contains. Adjust this value based on performance +considerations and the expected volume of search results. + +#### `async query(query)` + +Asynchronously execute a query on the index. + +This method takes a BaseQuery or AggregationQuery object directly, runs +the search, and handles post-processing of the search. + +* **Parameters:** + **query** (*Union* *[* *BaseQuery* *,* *AggregateQuery* *]*) – The query to run. +* **Returns:** + A list of search results. +* **Return type:** + List[Result] + +```python +from redisvl.query import VectorQuery + +query = VectorQuery( + vector=[0.16, -0.34, 0.98, 0.23], + vector_field_name="embedding", + num_results=3 +) + +results = await index.query(query) +``` + +#### `async search(*args, **kwargs)` + +Perform a search on this index. + +Wrapper around redis.search.Search that adds the index name +to the search query and passes along the rest of the arguments +to the redis-py ft.search() method. + +* **Returns:** + Raw Redis search results. +* **Return type:** + Result + +#### `set_client(redis_client)` + +[DEPRECATED] Manually set the Redis client to use with the search index. +This method is deprecated; please provide connection parameters in \_\_init_\_. + +* **Parameters:** + **redis_client** (*Redis* *|* *Redis*) + +#### `property client: Redis | None` + +The underlying redis-py client object. + +#### `property key_separator: str` + +The optional separator between a defined prefix and key value in +forming a Redis key. + +#### `property name: str` + +The name of the Redis search index. + +#### `property prefix: str` + +The optional key prefix that comes before a unique key value in +forming a Redis key. + +#### `property storage_type: StorageType` + +The underlying storage type for the search index; either +hash or json. 
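+
+The following is a minimal sketch tying the async methods above together. It assumes an index named `my-index` with an `embedding` vector field already exists; the Redis URL and query vector are placeholders.
+
+```python
+import asyncio
+
+from redisvl.index import AsyncSearchIndex
+from redisvl.query import VectorQuery
+
+
+async def main():
+    # attach to an index that was created elsewhere
+    index = await AsyncSearchIndex.from_existing(
+        "my-index", redis_url="redis://localhost:6379"
+    )
+
+    query = VectorQuery(
+        vector=[0.16, -0.34, 0.98, 0.23],
+        vector_field_name="embedding",
+        num_results=3,
+    )
+
+    # stream results in pages of 10 instead of fetching everything at once
+    async for batch in index.paginate(query, page_size=10):
+        for doc in batch:
+            print(doc)
+
+    await index.disconnect()
+
+
+asyncio.run(main())
+```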
+--- +linkTitle: LLM cache +title: LLM Cache +type: integration +--- + + +## SemanticCache + + + +### `class SemanticCache(name='llmcache', distance_threshold=0.1, ttl=None, vectorizer=None, filterable_fields=None, redis_client=None, redis_url='redis://localhost:6379', connection_kwargs={}, overwrite=False, **kwargs)` + +Bases: `BaseLLMCache` + +Semantic Cache for Large Language Models. + +Semantic Cache for Large Language Models. + +* **Parameters:** + * **name** (*str* *,* *optional*) – The name of the semantic cache search index. + Defaults to “llmcache”. + * **distance_threshold** (*float* *,* *optional*) – Semantic threshold for the + cache. Defaults to 0.1. + * **ttl** (*Optional* *[* *int* *]* *,* *optional*) – The time-to-live for records cached + in Redis. Defaults to None. + * **vectorizer** (*Optional* *[* *BaseVectorizer* *]* *,* *optional*) – The vectorizer for the cache. + Defaults to HFTextVectorizer. + * **filterable_fields** (*Optional* *[* *List* *[* *Dict* *[* *str* *,* *Any* *]* *]* *]*) – An optional list of RedisVL fields + that can be used to customize cache retrieval with filters. + * **redis_client** (*Optional* *[* *Redis* *]* *,* *optional*) – A redis client connection instance. + Defaults to None. + * **redis_url** (*str* *,* *optional*) – The redis url. Defaults to redis://localhost:6379. + * **connection_kwargs** (*Dict* *[* *str* *,* *Any* *]*) – The connection arguments + for the redis client. Defaults to empty {}. + * **overwrite** (*bool*) – Whether or not to force overwrite the schema for + the semantic cache index. Defaults to false. +* **Raises:** + * **TypeError** – If an invalid vectorizer is provided. + * **TypeError** – If the TTL value is not an int. + * **ValueError** – If the threshold is not between 0 and 1. + * **ValueError** – If existing schema does not match new schema and overwrite is False. + +#### `async acheck(prompt=None, vector=None, num_results=1, return_fields=None, filter_expression=None, distance_threshold=None)` + +Async check the semantic cache for results similar to the specified prompt +or vector. + +This method searches the cache using vector similarity with +either a raw text prompt (converted to a vector) or a provided vector as +input. It checks for semantically similar prompts and fetches the cached +LLM responses. + +* **Parameters:** + * **prompt** (*Optional* *[* *str* *]* *,* *optional*) – The text prompt to search for in + the cache. + * **vector** (*Optional* *[* *List* *[* *float* *]* *]* *,* *optional*) – The vector representation + of the prompt to search for in the cache. + * **num_results** (*int* *,* *optional*) – The number of cached results to return. + Defaults to 1. + * **return_fields** (*Optional* *[* *List* *[* *str* *]* *]* *,* *optional*) – The fields to include + in each returned result. If None, defaults to all available + fields in the cached entry. + * **filter_expression** (*Optional* *[*[*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]*) – Optional filter expression + that can be used to filter cache results. Defaults to None and + the full cache will be searched. + * **distance_threshold** (*Optional* *[* *float* *]*) – The threshold for semantic + vector distance. +* **Returns:** + A list of dicts containing the requested + : return fields for each similar cached response. +* **Return type:** + List[Dict[str, Any]] +* **Raises:** + * **ValueError** – If neither a prompt nor a vector is specified. + * **ValueError** – if ‘vector’ has incorrect dimensions. 
+ * **TypeError** – If return_fields is not a list when provided. + +```python +response = await cache.acheck( + prompt="What is the captial city of France?" +) +``` + +#### `async aclear()` + +Async clear the cache of all keys. + +* **Return type:** + None + +#### `async adelete()` + +Async delete the cache and its index entirely. + +* **Return type:** + None + +#### `async adisconnect()` + +Asynchronously disconnect from Redis and search index. + +Closes all Redis connections and index connections. + +#### `async adrop(ids=None, keys=None)` + +Async drop specific entries from the cache by ID or Redis key. + +* **Parameters:** + * **ids** (*Optional* *[* *List* *[* *str* *]* *]*) – List of entry IDs to remove from the cache. + Entry IDs are the unique identifiers without the cache prefix. + * **keys** (*Optional* *[* *List* *[* *str* *]* *]*) – List of full Redis keys to remove from the cache. + Keys are the complete Redis keys including the cache prefix. +* **Return type:** + None + +#### `NOTE` +At least one of ids or keys must be provided. + +* **Raises:** + **ValueError** – If neither ids nor keys is provided. +* **Parameters:** + * **ids** (*List* *[* *str* *]* *|* *None*) + * **keys** (*List* *[* *str* *]* *|* *None*) +* **Return type:** + None + +#### `async aexpire(key, ttl=None)` + +Asynchronously set or refresh the expiration time for a key in the cache. + +* **Parameters:** + * **key** (*str*) – The Redis key to set the expiration on. + * **ttl** (*Optional* *[* *int* *]* *,* *optional*) – The time-to-live in seconds. If None, + uses the default TTL configured for this cache instance. + Defaults to None. +* **Return type:** + None + +#### `NOTE` +If neither the provided TTL nor the default TTL is set (both are None), +this method will have no effect. + +#### `async astore(prompt, response, vector=None, metadata=None, filters=None, ttl=None)` + +Async stores the specified key-value pair in the cache along with metadata. + +* **Parameters:** + * **prompt** (*str*) – The user prompt to cache. + * **response** (*str*) – The LLM response to cache. + * **vector** (*Optional* *[* *List* *[* *float* *]* *]* *,* *optional*) – The prompt vector to + cache. Defaults to None, and the prompt vector is generated on + demand. + * **metadata** (*Optional* *[* *Dict* *[* *str* *,* *Any* *]* *]* *,* *optional*) – The optional metadata to cache + alongside the prompt and response. Defaults to None. + * **filters** (*Optional* *[* *Dict* *[* *str* *,* *Any* *]* *]*) – The optional tag to assign to the cache entry. + Defaults to None. + * **ttl** (*Optional* *[* *int* *]*) – The optional TTL override to use on this individual cache + entry. Defaults to the global TTL setting. +* **Returns:** + The Redis key for the entries added to the semantic cache. +* **Return type:** + str +* **Raises:** + * **ValueError** – If neither prompt nor vector is specified. + * **ValueError** – if vector has incorrect dimensions. + * **TypeError** – If provided metadata is not a dictionary. + +```python +key = await cache.astore( + prompt="What is the captial city of France?", + response="Paris", + metadata={"city": "Paris", "country": "France"} +) +``` + +#### `async aupdate(key, **kwargs)` + +Async update specific fields within an existing cache entry. If no fields +are passed, then only the document TTL is refreshed. + +* **Parameters:** + **key** (*str*) – the key of the document to update using kwargs. 
+* **Raises:** + * **ValueError if an incorrect mapping is provided as a kwarg.** – + * **TypeError if metadata is provided and not** **of** **type dict.** – +* **Return type:** + None + +```python +key = await cache.astore('this is a prompt', 'this is a response') +await cache.aupdate( + key, + metadata={"hit_count": 1, "model_name": "Llama-2-7b"} +) +``` + +#### `check(prompt=None, vector=None, num_results=1, return_fields=None, filter_expression=None, distance_threshold=None)` + +Checks the semantic cache for results similar to the specified prompt +or vector. + +This method searches the cache using vector similarity with +either a raw text prompt (converted to a vector) or a provided vector as +input. It checks for semantically similar prompts and fetches the cached +LLM responses. + +* **Parameters:** + * **prompt** (*Optional* *[* *str* *]* *,* *optional*) – The text prompt to search for in + the cache. + * **vector** (*Optional* *[* *List* *[* *float* *]* *]* *,* *optional*) – The vector representation + of the prompt to search for in the cache. + * **num_results** (*int* *,* *optional*) – The number of cached results to return. + Defaults to 1. + * **return_fields** (*Optional* *[* *List* *[* *str* *]* *]* *,* *optional*) – The fields to include + in each returned result. If None, defaults to all available + fields in the cached entry. + * **filter_expression** (*Optional* *[*[*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]*) – Optional filter expression + that can be used to filter cache results. Defaults to None and + the full cache will be searched. + * **distance_threshold** (*Optional* *[* *float* *]*) – The threshold for semantic + vector distance. +* **Returns:** + A list of dicts containing the requested + : return fields for each similar cached response. +* **Return type:** + List[Dict[str, Any]] +* **Raises:** + * **ValueError** – If neither a prompt nor a vector is specified. + * **ValueError** – if ‘vector’ has incorrect dimensions. + * **TypeError** – If return_fields is not a list when provided. + +```python +response = cache.check( + prompt="What is the captial city of France?" +) +``` + +#### `clear()` + +Clear the cache of all keys. + +* **Return type:** + None + +#### `delete()` + +Delete the cache and its index entirely. + +* **Return type:** + None + +#### `disconnect()` + +Disconnect from Redis and search index. + +Closes all Redis connections and index connections. + +#### `drop(ids=None, keys=None)` + +Drop specific entries from the cache by ID or Redis key. + +* **Parameters:** + * **ids** (*Optional* *[* *List* *[* *str* *]* *]*) – List of entry IDs to remove from the cache. + Entry IDs are the unique identifiers without the cache prefix. + * **keys** (*Optional* *[* *List* *[* *str* *]* *]*) – List of full Redis keys to remove from the cache. + Keys are the complete Redis keys including the cache prefix. +* **Return type:** + None + +#### `NOTE` +At least one of ids or keys must be provided. + +* **Raises:** + **ValueError** – If neither ids nor keys is provided. +* **Parameters:** + * **ids** (*List* *[* *str* *]* *|* *None*) + * **keys** (*List* *[* *str* *]* *|* *None*) +* **Return type:** + None + +#### `expire(key, ttl=None)` + +Set or refresh the expiration time for a key in the cache. + +* **Parameters:** + * **key** (*str*) – The Redis key to set the expiration on. + * **ttl** (*Optional* *[* *int* *]* *,* *optional*) – The time-to-live in seconds. If None, + uses the default TTL configured for this cache instance. 
+ Defaults to None. +* **Return type:** + None + +#### `NOTE` +If neither the provided TTL nor the default TTL is set (both are None), +this method will have no effect. + +#### `set_threshold(distance_threshold)` + +Sets the semantic distance threshold for the cache. + +* **Parameters:** + **distance_threshold** (*float*) – The semantic distance threshold for + the cache. +* **Raises:** + **ValueError** – If the threshold is not between 0 and 1. +* **Return type:** + None + +#### `set_ttl(ttl=None)` + +Set the default TTL, in seconds, for entries in the cache. + +* **Parameters:** + **ttl** (*Optional* *[* *int* *]* *,* *optional*) – The optional time-to-live expiration + for the cache, in seconds. +* **Raises:** + **ValueError** – If the time-to-live value is not an integer. +* **Return type:** + None + +#### `store(prompt, response, vector=None, metadata=None, filters=None, ttl=None)` + +Stores the specified key-value pair in the cache along with metadata. + +* **Parameters:** + * **prompt** (*str*) – The user prompt to cache. + * **response** (*str*) – The LLM response to cache. + * **vector** (*Optional* *[* *List* *[* *float* *]* *]* *,* *optional*) – The prompt vector to + cache. Defaults to None, and the prompt vector is generated on + demand. + * **metadata** (*Optional* *[* *Dict* *[* *str* *,* *Any* *]* *]* *,* *optional*) – The optional metadata to cache + alongside the prompt and response. Defaults to None. + * **filters** (*Optional* *[* *Dict* *[* *str* *,* *Any* *]* *]*) – The optional tag to assign to the cache entry. + Defaults to None. + * **ttl** (*Optional* *[* *int* *]*) – The optional TTL override to use on this individual cache + entry. Defaults to the global TTL setting. +* **Returns:** + The Redis key for the entries added to the semantic cache. +* **Return type:** + str +* **Raises:** + * **ValueError** – If neither prompt nor vector is specified. + * **ValueError** – if vector has incorrect dimensions. + * **TypeError** – If provided metadata is not a dictionary. + +```python +key = cache.store( + prompt="What is the captial city of France?", + response="Paris", + metadata={"city": "Paris", "country": "France"} +) +``` + +#### `update(key, **kwargs)` + +Update specific fields within an existing cache entry. If no fields +are passed, then only the document TTL is refreshed. + +* **Parameters:** + **key** (*str*) – the key of the document to update using kwargs. +* **Raises:** + * **ValueError if an incorrect mapping is provided as a kwarg.** – + * **TypeError if metadata is provided and not** **of** **type dict.** – +* **Return type:** + None + +```python +key = cache.store('this is a prompt', 'this is a response') +cache.update(key, metadata={"hit_count": 1, "model_name": "Llama-2-7b"}) +``` + +#### `property aindex: `[`AsyncSearchIndex`]({{< relref "searchindex/#asyncsearchindex" >}})` | None` + +The underlying AsyncSearchIndex for the cache. + +* **Returns:** + The async search index. +* **Return type:** + [AsyncSearchIndex]({{< relref "searchindex/#asyncsearchindex" >}}) + +#### `property distance_threshold: float` + +The semantic distance threshold for the cache. + +* **Returns:** + The semantic distance threshold. +* **Return type:** + float + +#### `property index: `[`SearchIndex`]({{< relref "searchindex/#searchindex" >}})` ` + +The underlying SearchIndex for the cache. + +* **Returns:** + The search index. 
+* **Return type:** + [SearchIndex]({{< relref "searchindex/#searchindex" >}}) + +#### `property ttl: int | None` + +The default TTL, in seconds, for entries in the cache. + +# Embeddings Cache + +## EmbeddingsCache + + + +### `class EmbeddingsCache(name='embedcache', ttl=None, redis_client=None, redis_url='redis://localhost:6379', connection_kwargs={})` + +Bases: `BaseCache` + +Embeddings Cache for storing embedding vectors with exact key matching. + +Initialize an embeddings cache. + +* **Parameters:** + * **name** (*str*) – The name of the cache. Defaults to “embedcache”. + * **ttl** (*Optional* *[* *int* *]*) – The time-to-live for cached embeddings. Defaults to None. + * **redis_client** (*Optional* *[* *Redis* *]*) – Redis client instance. Defaults to None. + * **redis_url** (*str*) – Redis URL for connection. Defaults to “redis://localhost:6379”. + * **connection_kwargs** (*Dict* *[* *str* *,* *Any* *]*) – Redis connection arguments. Defaults to {}. +* **Raises:** + **ValueError** – If vector dimensions are invalid + +```python +cache = EmbeddingsCache( + name="my_embeddings_cache", + ttl=3600, # 1 hour + redis_url="redis://localhost:6379" +) +``` + +#### `async aclear()` + +Async clear the cache of all keys. + +* **Return type:** + None + +#### `async adisconnect()` + +Async disconnect from Redis. + +* **Return type:** + None + +#### `async adrop(text, model_name)` + +Async remove an embedding from the cache. + +Asynchronously removes an embedding from the cache. + +* **Parameters:** + * **text** (*str*) – The text input that was embedded. + * **model_name** (*str*) – The name of the embedding model. +* **Return type:** + None + +```python +await cache.adrop( + text="What is machine learning?", + model_name="text-embedding-ada-002" +) +``` + +#### `async adrop_by_key(key)` + +Async remove an embedding from the cache by its Redis key. + +Asynchronously removes an embedding from the cache by its Redis key. + +* **Parameters:** + **key** (*str*) – The full Redis key for the embedding. +* **Return type:** + None + +```python +await cache.adrop_by_key("embedcache:1234567890abcdef") +``` + +#### `async aexists(text, model_name)` + +Async check if an embedding exists. + +Asynchronously checks if an embedding exists for the given text and model. + +* **Parameters:** + * **text** (*str*) – The text input that was embedded. + * **model_name** (*str*) – The name of the embedding model. +* **Returns:** + True if the embedding exists in the cache, False otherwise. +* **Return type:** + bool + +```python +if await cache.aexists("What is machine learning?", "text-embedding-ada-002"): + print("Embedding is in cache") +``` + +#### `async aexists_by_key(key)` + +Async check if an embedding exists for the given Redis key. + +Asynchronously checks if an embedding exists for the given Redis key. + +* **Parameters:** + **key** (*str*) – The full Redis key for the embedding. +* **Returns:** + True if the embedding exists in the cache, False otherwise. +* **Return type:** + bool + +```python +if await cache.aexists_by_key("embedcache:1234567890abcdef"): + print("Embedding is in cache") +``` + +#### `async aexpire(key, ttl=None)` + +Asynchronously set or refresh the expiration time for a key in the cache. + +* **Parameters:** + * **key** (*str*) – The Redis key to set the expiration on. + * **ttl** (*Optional* *[* *int* *]* *,* *optional*) – The time-to-live in seconds. If None, + uses the default TTL configured for this cache instance. + Defaults to None. 
+* **Return type:** + None + +#### `NOTE` +If neither the provided TTL nor the default TTL is set (both are None), +this method will have no effect. + +#### `async aget(text, model_name)` + +Async get embedding by text and model name. + +Asynchronously retrieves a cached embedding for the given text and model name. +If found, refreshes the TTL of the entry. + +* **Parameters:** + * **text** (*str*) – The text input that was embedded. + * **model_name** (*str*) – The name of the embedding model. +* **Returns:** + Embedding cache entry or None if not found. +* **Return type:** + Optional[Dict[str, Any]] + +```python +embedding_data = await cache.aget( + text="What is machine learning?", + model_name="text-embedding-ada-002" +) +``` + +#### `async aget_by_key(key)` + +Async get embedding by its full Redis key. + +Asynchronously retrieves a cached embedding for the given Redis key. +If found, refreshes the TTL of the entry. + +* **Parameters:** + **key** (*str*) – The full Redis key for the embedding. +* **Returns:** + Embedding cache entry or None if not found. +* **Return type:** + Optional[Dict[str, Any]] + +```python +embedding_data = await cache.aget_by_key("embedcache:1234567890abcdef") +``` + +#### `async amdrop(texts, model_name)` + +Async remove multiple embeddings from the cache by their texts and model name. + +Asynchronously removes multiple embeddings in a single operation. + +* **Parameters:** + * **texts** (*List* *[* *str* *]*) – List of text inputs that were embedded. + * **model_name** (*str*) – The name of the embedding model. +* **Return type:** + None + +```python +# Remove multiple embeddings asynchronously +await cache.amdrop( + texts=["What is machine learning?", "What is deep learning?"], + model_name="text-embedding-ada-002" +) +``` + +#### `async amdrop_by_keys(keys)` + +Async remove multiple embeddings from the cache by their Redis keys. + +Asynchronously removes multiple embeddings in a single operation. + +* **Parameters:** + **keys** (*List* *[* *str* *]*) – List of Redis keys to remove. +* **Return type:** + None + +```python +# Remove multiple embeddings asynchronously +await cache.amdrop_by_keys(["embedcache:key1", "embedcache:key2"]) +``` + +#### `async amexists(texts, model_name)` + +Async check if multiple embeddings exist by their texts and model name. + +Asynchronously checks existence of multiple embeddings in a single operation. + +* **Parameters:** + * **texts** (*List* *[* *str* *]*) – List of text inputs that were embedded. + * **model_name** (*str*) – The name of the embedding model. +* **Returns:** + List of boolean values indicating whether each embedding exists. +* **Return type:** + List[bool] + +```python +# Check if multiple embeddings exist asynchronously +exists_results = await cache.amexists( + texts=["What is machine learning?", "What is deep learning?"], + model_name="text-embedding-ada-002" +) +``` + +#### `async amexists_by_keys(keys)` + +Async check if multiple embeddings exist by their Redis keys. + +Asynchronously checks existence of multiple keys in a single operation. + +* **Parameters:** + **keys** (*List* *[* *str* *]*) – List of Redis keys to check. +* **Returns:** + List of boolean values indicating whether each key exists. + The order matches the input keys order. 
+* **Return type:** + List[bool] + +```python +# Check if multiple keys exist asynchronously +exists_results = await cache.amexists_by_keys(["embedcache:key1", "embedcache:key2"]) +``` + +#### `async amget(texts, model_name)` + +Async get multiple embeddings by their texts and model name. + +Asynchronously retrieves multiple cached embeddings in a single operation. +If found, refreshes the TTL of each entry. + +* **Parameters:** + * **texts** (*List* *[* *str* *]*) – List of text inputs that were embedded. + * **model_name** (*str*) – The name of the embedding model. +* **Returns:** + List of embedding cache entries or None for texts not found. +* **Return type:** + List[Optional[Dict[str, Any]]] + +```python +# Get multiple embeddings asynchronously +embedding_data = await cache.amget( + texts=["What is machine learning?", "What is deep learning?"], + model_name="text-embedding-ada-002" +) +``` + +#### `async amget_by_keys(keys)` + +Async get multiple embeddings by their Redis keys. + +Asynchronously retrieves multiple cached embeddings in a single network roundtrip. +If found, refreshes the TTL of each entry. + +* **Parameters:** + **keys** (*List* *[* *str* *]*) – List of Redis keys to retrieve. +* **Returns:** + List of embedding cache entries or None for keys not found. + The order matches the input keys order. +* **Return type:** + List[Optional[Dict[str, Any]]] + +```python +# Get multiple embeddings asynchronously +embedding_data = await cache.amget_by_keys([ + "embedcache:key1", + "embedcache:key2" +]) +``` + +#### `async amset(items, ttl=None)` + +Async store multiple embeddings in a batch operation. + +Each item in the input list should be a dictionary with the following fields: +- ‘text’: The text input that was embedded +- ‘model_name’: The name of the embedding model +- ‘embedding’: The embedding vector +- ‘metadata’: Optional metadata to store with the embedding + +* **Parameters:** + * **items** (*List* *[* *Dict* *[* *str* *,* *Any* *]* *]*) – List of dictionaries, each containing text, model_name, embedding, and optional metadata. + * **ttl** (*int* *|* *None*) – Optional TTL override for these entries. +* **Returns:** + List of Redis keys where the embeddings were stored. +* **Return type:** + List[str] + +```python +# Store multiple embeddings asynchronously +keys = await cache.amset([ + { + "text": "What is ML?", + "model_name": "text-embedding-ada-002", + "embedding": [0.1, 0.2, 0.3], + "metadata": {"source": "user"} + }, + { + "text": "What is AI?", + "model_name": "text-embedding-ada-002", + "embedding": [0.4, 0.5, 0.6], + "metadata": {"source": "docs"} + } +]) +``` + +#### `async aset(text, model_name, embedding, metadata=None, ttl=None)` + +Async store an embedding with its text and model name. + +Asynchronously stores an embedding with its text and model name. + +* **Parameters:** + * **text** (*str*) – The text input that was embedded. + * **model_name** (*str*) – The name of the embedding model. + * **embedding** (*List* *[* *float* *]*) – The embedding vector to store. + * **metadata** (*Optional* *[* *Dict* *[* *str* *,* *Any* *]* *]*) – Optional metadata to store with the embedding. + * **ttl** (*Optional* *[* *int* *]*) – Optional TTL override for this specific entry. +* **Returns:** + The Redis key where the embedding was stored. 
+* **Return type:** + str + +```python +key = await cache.aset( + text="What is machine learning?", + model_name="text-embedding-ada-002", + embedding=[0.1, 0.2, 0.3, ...], + metadata={"source": "user_query"} +) +``` + +#### `clear()` + +Clear the cache of all keys. + +* **Return type:** + None + +#### `disconnect()` + +Disconnect from Redis. + +* **Return type:** + None + +#### `drop(text, model_name)` + +Remove an embedding from the cache. + +* **Parameters:** + * **text** (*str*) – The text input that was embedded. + * **model_name** (*str*) – The name of the embedding model. +* **Return type:** + None + +```python +cache.drop( + text="What is machine learning?", + model_name="text-embedding-ada-002" +) +``` + +#### `drop_by_key(key)` + +Remove an embedding from the cache by its Redis key. + +* **Parameters:** + **key** (*str*) – The full Redis key for the embedding. +* **Return type:** + None + +```python +cache.drop_by_key("embedcache:1234567890abcdef") +``` + +#### `exists(text, model_name)` + +Check if an embedding exists for the given text and model. + +* **Parameters:** + * **text** (*str*) – The text input that was embedded. + * **model_name** (*str*) – The name of the embedding model. +* **Returns:** + True if the embedding exists in the cache, False otherwise. +* **Return type:** + bool + +```python +if cache.exists("What is machine learning?", "text-embedding-ada-002"): + print("Embedding is in cache") +``` + +#### `exists_by_key(key)` + +Check if an embedding exists for the given Redis key. + +* **Parameters:** + **key** (*str*) – The full Redis key for the embedding. +* **Returns:** + True if the embedding exists in the cache, False otherwise. +* **Return type:** + bool + +```python +if cache.exists_by_key("embedcache:1234567890abcdef"): + print("Embedding is in cache") +``` + +#### `expire(key, ttl=None)` + +Set or refresh the expiration time for a key in the cache. + +* **Parameters:** + * **key** (*str*) – The Redis key to set the expiration on. + * **ttl** (*Optional* *[* *int* *]* *,* *optional*) – The time-to-live in seconds. If None, + uses the default TTL configured for this cache instance. + Defaults to None. +* **Return type:** + None + +#### `NOTE` +If neither the provided TTL nor the default TTL is set (both are None), +this method will have no effect. + +#### `get(text, model_name)` + +Get embedding by text and model name. + +Retrieves a cached embedding for the given text and model name. +If found, refreshes the TTL of the entry. + +* **Parameters:** + * **text** (*str*) – The text input that was embedded. + * **model_name** (*str*) – The name of the embedding model. +* **Returns:** + Embedding cache entry or None if not found. +* **Return type:** + Optional[Dict[str, Any]] + +```python +embedding_data = cache.get( + text="What is machine learning?", + model_name="text-embedding-ada-002" +) +``` + +#### `get_by_key(key)` + +Get embedding by its full Redis key. + +Retrieves a cached embedding for the given Redis key. +If found, refreshes the TTL of the entry. + +* **Parameters:** + **key** (*str*) – The full Redis key for the embedding. +* **Returns:** + Embedding cache entry or None if not found. +* **Return type:** + Optional[Dict[str, Any]] + +```python +embedding_data = cache.get_by_key("embedcache:1234567890abcdef") +``` + +#### `mdrop(texts, model_name)` + +Remove multiple embeddings from the cache by their texts and model name. + +Efficiently removes multiple embeddings in a single operation. 
+ +* **Parameters:** + * **texts** (*List* *[* *str* *]*) – List of text inputs that were embedded. + * **model_name** (*str*) – The name of the embedding model. +* **Return type:** + None + +```python +# Remove multiple embeddings +cache.mdrop( + texts=["What is machine learning?", "What is deep learning?"], + model_name="text-embedding-ada-002" +) +``` + +#### `mdrop_by_keys(keys)` + +Remove multiple embeddings from the cache by their Redis keys. + +Efficiently removes multiple embeddings in a single operation. + +* **Parameters:** + **keys** (*List* *[* *str* *]*) – List of Redis keys to remove. +* **Return type:** + None + +```python +# Remove multiple embeddings +cache.mdrop_by_keys(["embedcache:key1", "embedcache:key2"]) +``` + +#### `mexists(texts, model_name)` + +Check if multiple embeddings exist by their texts and model name. + +Efficiently checks existence of multiple embeddings in a single operation. + +* **Parameters:** + * **texts** (*List* *[* *str* *]*) – List of text inputs that were embedded. + * **model_name** (*str*) – The name of the embedding model. +* **Returns:** + List of boolean values indicating whether each embedding exists. +* **Return type:** + List[bool] + +```python +# Check if multiple embeddings exist +exists_results = cache.mexists( + texts=["What is machine learning?", "What is deep learning?"], + model_name="text-embedding-ada-002" +) +``` + +#### `mexists_by_keys(keys)` + +Check if multiple embeddings exist by their Redis keys. + +Efficiently checks existence of multiple keys in a single operation. + +* **Parameters:** + **keys** (*List* *[* *str* *]*) – List of Redis keys to check. +* **Returns:** + List of boolean values indicating whether each key exists. + The order matches the input keys order. +* **Return type:** + List[bool] + +```python +# Check if multiple keys exist +exists_results = cache.mexists_by_keys(["embedcache:key1", "embedcache:key2"]) +``` + +#### `mget(texts, model_name)` + +Get multiple embeddings by their texts and model name. + +Efficiently retrieves multiple cached embeddings in a single operation. +If found, refreshes the TTL of each entry. + +* **Parameters:** + * **texts** (*List* *[* *str* *]*) – List of text inputs that were embedded. + * **model_name** (*str*) – The name of the embedding model. +* **Returns:** + List of embedding cache entries or None for texts not found. +* **Return type:** + List[Optional[Dict[str, Any]]] + +```python +# Get multiple embeddings +embedding_data = cache.mget( + texts=["What is machine learning?", "What is deep learning?"], + model_name="text-embedding-ada-002" +) +``` + +#### `mget_by_keys(keys)` + +Get multiple embeddings by their Redis keys. + +Efficiently retrieves multiple cached embeddings in a single network roundtrip. +If found, refreshes the TTL of each entry. + +* **Parameters:** + **keys** (*List* *[* *str* *]*) – List of Redis keys to retrieve. +* **Returns:** + List of embedding cache entries or None for keys not found. + The order matches the input keys order. +* **Return type:** + List[Optional[Dict[str, Any]]] + +```python +# Get multiple embeddings +embedding_data = cache.mget_by_keys([ + "embedcache:key1", + "embedcache:key2" +]) +``` + +#### `mset(items, ttl=None)` + +Store multiple embeddings in a batch operation. 
+ +Each item in the input list should be a dictionary with the following fields: +- ‘text’: The text input that was embedded +- ‘model_name’: The name of the embedding model +- ‘embedding’: The embedding vector +- ‘metadata’: Optional metadata to store with the embedding + +* **Parameters:** + * **items** (*List* *[* *Dict* *[* *str* *,* *Any* *]* *]*) – List of dictionaries, each containing text, model_name, embedding, and optional metadata. + * **ttl** (*int* *|* *None*) – Optional TTL override for these entries. +* **Returns:** + List of Redis keys where the embeddings were stored. +* **Return type:** + List[str] + +```python +# Store multiple embeddings +keys = cache.mset([ + { + "text": "What is ML?", + "model_name": "text-embedding-ada-002", + "embedding": [0.1, 0.2, 0.3], + "metadata": {"source": "user"} + }, + { + "text": "What is AI?", + "model_name": "text-embedding-ada-002", + "embedding": [0.4, 0.5, 0.6], + "metadata": {"source": "docs"} + } +]) +``` + +#### `set(text, model_name, embedding, metadata=None, ttl=None)` + +Store an embedding with its text and model name. + +* **Parameters:** + * **text** (*str*) – The text input that was embedded. + * **model_name** (*str*) – The name of the embedding model. + * **embedding** (*List* *[* *float* *]*) – The embedding vector to store. + * **metadata** (*Optional* *[* *Dict* *[* *str* *,* *Any* *]* *]*) – Optional metadata to store with the embedding. + * **ttl** (*Optional* *[* *int* *]*) – Optional TTL override for this specific entry. +* **Returns:** + The Redis key where the embedding was stored. +* **Return type:** + str + +```python +key = cache.set( + text="What is machine learning?", + model_name="text-embedding-ada-002", + embedding=[0.1, 0.2, 0.3, ...], + metadata={"source": "user_query"} +) +``` + +#### `set_ttl(ttl=None)` + +Set the default TTL, in seconds, for entries in the cache. + +* **Parameters:** + **ttl** (*Optional* *[* *int* *]* *,* *optional*) – The optional time-to-live expiration + for the cache, in seconds. +* **Raises:** + **ValueError** – If the time-to-live value is not an integer. +* **Return type:** + None + +#### `property ttl: int | None` + +The default TTL, in seconds, for entries in the cache. +--- +linkTitle: LLM session manager +title: LLM Session Manager +type: integration +--- + + +## SemanticSessionManager + + + +### `class SemanticSessionManager(name, session_tag=None, prefix=None, vectorizer=None, distance_threshold=0.3, redis_client=None, redis_url='redis://localhost:6379', connection_kwargs={}, overwrite=False, **kwargs)` + +Bases: `BaseSessionManager` + +Initialize session memory with index + +Session Manager stores the current and previous user text prompts and +LLM responses to allow for enriching future prompts with session +context. Session history is stored in individual user or LLM prompts and +responses. + +* **Parameters:** + * **name** (*str*) – The name of the session manager index. + * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific + session. Defaults to instance ULID. + * **prefix** (*Optional* *[* *str* *]*) – Prefix for the keys for this session data. + Defaults to None and will be replaced with the index name. + * **vectorizer** (*Optional* *[* *BaseVectorizer* *]*) – The vectorizer used to create embeddings. + * **distance_threshold** (*float*) – The maximum semantic distance to be + included in the context. Defaults to 0.3. + * **redis_client** (*Optional* *[* *Redis* *]*) – A Redis client instance. Defaults to + None. 
+ * **redis_url** (*str* *,* *optional*) – The redis url. Defaults to redis://localhost:6379.
  * **connection_kwargs** (*Dict* *[* *str* *,* *Any* *]*) – The connection arguments
    for the redis client. Defaults to empty {}.
  * **overwrite** (*bool*) – Whether or not to force overwrite the schema for
    the semantic session index. Defaults to False.

The proposed schema will support a single vector embedding constructed
from either the prompt or response in a single string.

#### `add_message(message, session_tag=None)`

Insert a single prompt or response into the session memory.
A timestamp is associated with it so that it can be later sorted
in sequential ordering after retrieval.

* **Parameters:**
  * **message** (*Dict* *[* *str* *,**str* *]*) – The user prompt or LLM response.
  * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific
    session. Defaults to instance ULID.
* **Return type:**
  None

#### `add_messages(messages, session_tag=None)`

Insert a list of prompts and responses into the session memory.
A timestamp is associated with each so that they can be later sorted
in sequential ordering after retrieval.

* **Parameters:**
  * **messages** (*List* *[* *Dict* *[* *str* *,* *str* *]* *]*) – The list of user prompts and LLM responses.
  * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific
    session. Defaults to instance ULID.
* **Return type:**
  None

#### `clear()`

Clears the chat session history.

* **Return type:**
  None

#### `delete()`

Clear all conversation keys and remove the search index.

* **Return type:**
  None

#### `drop(id=None)`

Remove a specific exchange from the conversation history.

* **Parameters:**
  **id** (*Optional* *[* *str* *]*) – The id of the session entry to delete.
  If None then the last entry is deleted.
* **Return type:**
  None

#### `get_recent(top_k=5, as_text=False, raw=False, session_tag=None)`

Retrieve the recent conversation history in sequential order.

* **Parameters:**
  * **top_k** (*int*) – The number of previous exchanges to return. Default is 5.
  * **as_text** (*bool*) – Whether to return the conversation as a single string,
    or list of alternating prompts and responses.
  * **raw** (*bool*) – Whether to return the full Redis hash entry or just the
    prompt and response
  * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific
    session. Defaults to instance ULID.
* **Returns:**
  A single string transcription of the session
  : or list of strings if as_text is false.
* **Return type:**
  Union[str, List[str]]
* **Raises:**
  **ValueError** – if top_k is not an integer greater than or equal to 0.

#### `get_relevant(prompt, as_text=False, top_k=5, fall_back=False, session_tag=None, raw=False, distance_threshold=None)`

Searches the chat history for information semantically related to
the specified prompt.

This method uses vector similarity search with a text prompt as input.
It checks for semantically similar prompts and responses and gets
the top k most relevant previous prompts or responses to include as
context to the next LLM call.

* **Parameters:**
  * **prompt** (*str*) – The message text to search for in session memory.
  * **as_text** (*bool*) – Whether to return the prompts and responses as text
    or as JSON.
  * **top_k** (*int*) – The number of previous messages to return. Default is 5.
+ * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific
    session. Defaults to instance ULID.
  * **distance_threshold** (*Optional* *[* *float* *]*) – The threshold for semantic
    vector distance.
  * **fall_back** (*bool*) – Whether to drop back to recent conversation history
    if no relevant context is found.
  * **raw** (*bool*) – Whether to return the full Redis hash entry or just the
    message.
* **Returns:**
  Either a list of strings, or a
  list of prompts and responses in JSON containing the most relevant.
* **Return type:**
  Union[List[str], List[Dict[str, str]]]
* **Raises:**
  **ValueError** – if top_k is not an integer greater than or equal to 0.

#### `store(prompt, response, session_tag=None)`

Insert a prompt:response pair into the session memory. A timestamp
is associated with each message so that they can be later sorted
in sequential ordering after retrieval.

* **Parameters:**
  * **prompt** (*str*) – The user prompt to the LLM.
  * **response** (*str*) – The corresponding LLM response.
  * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific
    session. Defaults to instance ULID.
* **Return type:**
  None

#### `property messages: List[str] | List[Dict[str, str]]`

Returns the full chat history.

## StandardSessionManager



### `class StandardSessionManager(name, session_tag=None, prefix=None, redis_client=None, redis_url='redis://localhost:6379', connection_kwargs={}, **kwargs)`

Bases: `BaseSessionManager`

Initialize session memory

Session Manager stores the current and previous user text prompts and
LLM responses to allow for enriching future prompts with session
context. Session history is stored in individual user or LLM prompts and
responses.

* **Parameters:**
  * **name** (*str*) – The name of the session manager index.
  * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific
    session. Defaults to instance ULID.
  * **prefix** (*Optional* *[* *str* *]*) – Prefix for the keys for this session data.
    Defaults to None and will be replaced with the index name.
  * **redis_client** (*Optional* *[* *Redis* *]*) – A Redis client instance. Defaults to
    None.
  * **redis_url** (*str* *,* *optional*) – The redis url. Defaults to redis://localhost:6379.
  * **connection_kwargs** (*Dict* *[* *str* *,* *Any* *]*) – The connection arguments
    for the redis client. Defaults to empty {}.

The proposed schema will support a single combined vector embedding
constructed from the prompt & response in a single string.

#### `add_message(message, session_tag=None)`

Insert a single prompt or response into the session memory.
A timestamp is associated with it so that it can be later sorted
in sequential ordering after retrieval.

* **Parameters:**
  * **message** (*Dict* *[* *str* *,**str* *]*) – The user prompt or LLM response.
  * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific
    session. Defaults to instance ULID.
* **Return type:**
  None

#### `add_messages(messages, session_tag=None)`

Insert a list of prompts and responses into the session memory.
A timestamp is associated with each so that they can be later sorted
in sequential ordering after retrieval.

* **Parameters:**
  * **messages** (*List* *[* *Dict* *[* *str* *,* *str* *]* *]*) – The list of user prompts and LLM responses.
+ * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific + session. Defaults to instance ULID. +* **Return type:** + None + +#### `clear()` + +Clears the chat session history. + +* **Return type:** + None + +#### `delete()` + +Clear all conversation keys and remove the search index. + +* **Return type:** + None + +#### `drop(id=None)` + +Remove a specific exchange from the conversation history. + +* **Parameters:** + **id** (*Optional* *[* *str* *]*) – The id of the session entry to delete. + If None then the last entry is deleted. +* **Return type:** + None + +#### `get_recent(top_k=5, as_text=False, raw=False, session_tag=None)` + +Retrieve the recent conversation history in sequential order. + +* **Parameters:** + * **top_k** (*int*) – The number of previous messages to return. Default is 5. + * **as_text** (*bool*) – Whether to return the conversation as a single string, + or list of alternating prompts and responses. + * **raw** (*bool*) – Whether to return the full Redis hash entry or just the + prompt and response + * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific + session. Defaults to instance ULID. +* **Returns:** + A single string transcription of the session + : or list of strings if as_text is false. +* **Return type:** + Union[str, List[str]] +* **Raises:** + **ValueError** – if top_k is not an integer greater than or equal to 0. + +#### `store(prompt, response, session_tag=None)` + +Insert a prompt:response pair into the session memory. A timestamp +is associated with each exchange so that they can be later sorted +in sequential ordering after retrieval. + +* **Parameters:** + * **prompt** (*str*) – The user prompt to the LLM. + * **response** (*str*) – The corresponding LLM response. + * **session_tag** (*Optional* *[* *str* *]*) – Tag to be added to entries to link to a specific + session. Defaults to instance ULID. +* **Return type:** + None + +#### `property messages: List[str] | List[Dict[str, str]]` + +Returns the full chat history. +--- +linkTitle: Vectorizers +title: Vectorizers +type: integration +--- + + +## HFTextVectorizer + + + +### `class HFTextVectorizer(model='sentence-transformers/all-mpnet-base-v2', dtype='float32', cache=None, *, dims=None)` + +Bases: `BaseVectorizer` + +The HFTextVectorizer class leverages Hugging Face’s Sentence Transformers +for generating vector embeddings from text input. + +This vectorizer is particularly useful in scenarios where advanced natural language +processing and understanding are required, and ideal for running on your own +hardware without usage fees. + +You can optionally enable caching to improve performance when generating +embeddings for repeated text inputs. + +Utilizing this vectorizer involves specifying a pre-trained model from +Hugging Face’s vast collection of Sentence Transformers. These models are +trained on a variety of datasets and tasks, ensuring versatility and +robust performance across different embedding needs. + +Requirements: +: - The sentence-transformers library must be installed with pip. 
+ +```python +# Basic usage +vectorizer = HFTextVectorizer(model="sentence-transformers/all-mpnet-base-v2") +embedding = vectorizer.embed("Hello, world!") + +# With caching enabled +from redisvl.extensions.cache.embeddings import EmbeddingsCache +cache = EmbeddingsCache(name="my_embeddings_cache") + +vectorizer = HFTextVectorizer( + model="sentence-transformers/all-mpnet-base-v2", + cache=cache +) + +# First call will compute and cache the embedding +embedding1 = vectorizer.embed("Hello, world!") + +# Second call will retrieve from cache +embedding2 = vectorizer.embed("Hello, world!") + +# Batch processing +embeddings = vectorizer.embed_many( + ["Hello, world!", "How are you?"], + batch_size=2 +) +``` + +Initialize the Hugging Face text vectorizer. + +* **Parameters:** + * **model** (*str*) – The pre-trained model from Hugging Face’s Sentence + Transformers to be used for embedding. Defaults to + ‘sentence-transformers/all-mpnet-base-v2’. + * **dtype** (*str*) – the default datatype to use when embedding text as byte arrays. + Used when setting as_buffer=True in calls to embed() and embed_many(). + Defaults to ‘float32’. + * **cache** (*Optional* *[*[*EmbeddingsCache*]({{< relref "cache/#embeddingscache" >}}) *]*) – Optional EmbeddingsCache instance to cache embeddings for + better performance with repeated texts. Defaults to None. + * **\*\*kwargs** – Additional parameters to pass to the SentenceTransformer + constructor. + * **dims** (*Annotated* *[* *int* *|* *None* *,* *FieldInfo* *(* *annotation=NoneType* *,* *required=True* *,* *metadata=* *[* *Strict* *(* *strict=True* *)* *,* *Gt* *(* *gt=0* *)* *]* *)* *]*) +* **Raises:** + * **ImportError** – If the sentence-transformers library is not installed. + * **ValueError** – If there is an error setting the embedding model dimensions. + * **ValueError** – If an invalid dtype is provided. + +#### `model_post_init(context, /)` + +This function is meant to behave like a BaseModel method to initialise private attributes. + +It takes context as an argument since that’s what pydantic-core passes when calling it. + +* **Parameters:** + * **self** (*BaseModel*) – The BaseModel instance. + * **context** (*Any*) – The context. +* **Return type:** + None + +#### `model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}` + +Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. + +#### `property type: str` + +Return the type of vectorizer. + +## OpenAITextVectorizer + + + +### `class OpenAITextVectorizer(model='text-embedding-ada-002', api_config=None, dtype='float32', cache=None, *, dims=None)` + +Bases: `BaseVectorizer` + +The OpenAITextVectorizer class utilizes OpenAI’s API to generate +embeddings for text data. + +This vectorizer is designed to interact with OpenAI’s embeddings API, +requiring an API key for authentication. The key can be provided directly +in the api_config dictionary or through the OPENAI_API_KEY environment +variable. Users must obtain an API key from OpenAI’s website +([https://api.openai.com/](https://api.openai.com/)). Additionally, the openai python client must be +installed with pip install openai>=1.13.0. + +The vectorizer supports both synchronous and asynchronous operations, +allowing for batch processing of texts and flexibility in handling +preprocessing tasks. + +You can optionally enable caching to improve performance when generating +embeddings for repeated text inputs. 
+ +```python +# Basic usage with OpenAI embeddings +vectorizer = OpenAITextVectorizer( + model="text-embedding-ada-002", + api_config={"api_key": "your_api_key"} # OR set OPENAI_API_KEY in your env +) +embedding = vectorizer.embed("Hello, world!") + +# With caching enabled +from redisvl.extensions.cache.embeddings import EmbeddingsCache +cache = EmbeddingsCache(name="openai_embeddings_cache") + +vectorizer = OpenAITextVectorizer( + model="text-embedding-ada-002", + api_config={"api_key": "your_api_key"}, + cache=cache +) + +# First call will compute and cache the embedding +embedding1 = vectorizer.embed("Hello, world!") + +# Second call will retrieve from cache +embedding2 = vectorizer.embed("Hello, world!") + +# Asynchronous batch embedding of multiple texts +embeddings = await vectorizer.aembed_many( + ["Hello, world!", "How are you?"], + batch_size=2 +) +``` + +Initialize the OpenAI vectorizer. + +* **Parameters:** + * **model** (*str*) – Model to use for embedding. Defaults to + ‘text-embedding-ada-002’. + * **api_config** (*Optional* *[* *Dict* *]* *,* *optional*) – Dictionary containing the + API key and any additional OpenAI API options. Defaults to None. + * **dtype** (*str*) – the default datatype to use when embedding text as byte arrays. + Used when setting as_buffer=True in calls to embed() and embed_many(). + Defaults to ‘float32’. + * **cache** (*Optional* *[*[*EmbeddingsCache*]({{< relref "cache/#embeddingscache" >}}) *]*) – Optional EmbeddingsCache instance to cache embeddings for + better performance with repeated texts. Defaults to None. + * **dims** (*Annotated* *[* *int* *|* *None* *,* *FieldInfo* *(* *annotation=NoneType* *,* *required=True* *,* *metadata=* *[* *Strict* *(* *strict=True* *)* *,* *Gt* *(* *gt=0* *)* *]* *)* *]*) +* **Raises:** + * **ImportError** – If the openai library is not installed. + * **ValueError** – If the OpenAI API key is not provided. + * **ValueError** – If an invalid dtype is provided. + +#### `model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}` + +Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. + +#### `property type: str` + +Return the type of vectorizer. + +## AzureOpenAITextVectorizer + + + +### `class AzureOpenAITextVectorizer(model='text-embedding-ada-002', api_config=None, dtype='float32', cache=None, *, dims=None)` + +Bases: `BaseVectorizer` + +The AzureOpenAITextVectorizer class utilizes AzureOpenAI’s API to generate +embeddings for text data. + +This vectorizer is designed to interact with AzureOpenAI’s embeddings API, +requiring an API key, an AzureOpenAI deployment endpoint and API version. +These values can be provided directly in the api_config dictionary with +the parameters ‘azure_endpoint’, ‘api_version’ and ‘api_key’ or through the +environment variables ‘AZURE_OPENAI_ENDPOINT’, ‘OPENAI_API_VERSION’, and ‘AZURE_OPENAI_API_KEY’. +Users must obtain these values from the ‘Keys and Endpoints’ section in their Azure OpenAI service. +Additionally, the openai python client must be installed with pip install openai>=1.13.0. + +The vectorizer supports both synchronous and asynchronous operations, +allowing for batch processing of texts and flexibility in handling +preprocessing tasks. + +You can optionally enable caching to improve performance when generating +embeddings for repeated text inputs. 
+ +```python +# Basic usage +vectorizer = AzureOpenAITextVectorizer( + model="text-embedding-ada-002", + api_config={ + "api_key": "your_api_key", # OR set AZURE_OPENAI_API_KEY in your env + "api_version": "your_api_version", # OR set OPENAI_API_VERSION in your env + "azure_endpoint": "your_azure_endpoint", # OR set AZURE_OPENAI_ENDPOINT in your env + } +) +embedding = vectorizer.embed("Hello, world!") + +# With caching enabled +from redisvl.extensions.cache.embeddings import EmbeddingsCache +cache = EmbeddingsCache(name="azureopenai_embeddings_cache") + +vectorizer = AzureOpenAITextVectorizer( + model="text-embedding-ada-002", + api_config={ + "api_key": "your_api_key", + "api_version": "your_api_version", + "azure_endpoint": "your_azure_endpoint", + }, + cache=cache +) + +# First call will compute and cache the embedding +embedding1 = vectorizer.embed("Hello, world!") + +# Second call will retrieve from cache +embedding2 = vectorizer.embed("Hello, world!") + +# Asynchronous batch embedding of multiple texts +embeddings = await vectorizer.aembed_many( + ["Hello, world!", "How are you?"], + batch_size=2 +) +``` + +Initialize the AzureOpenAI vectorizer. + +* **Parameters:** + * **model** (*str*) – Deployment to use for embedding. Must be the + ‘Deployment name’ not the ‘Model name’. Defaults to + ‘text-embedding-ada-002’. + * **api_config** (*Optional* *[* *Dict* *]* *,* *optional*) – Dictionary containing the + API key, API version, Azure endpoint, and any other API options. + Defaults to None. + * **dtype** (*str*) – the default datatype to use when embedding text as byte arrays. + Used when setting as_buffer=True in calls to embed() and embed_many(). + Defaults to ‘float32’. + * **cache** (*Optional* *[*[*EmbeddingsCache*]({{< relref "cache/#embeddingscache" >}}) *]*) – Optional EmbeddingsCache instance to cache embeddings for + better performance with repeated texts. Defaults to None. + * **dims** (*Annotated* *[* *int* *|* *None* *,* *FieldInfo* *(* *annotation=NoneType* *,* *required=True* *,* *metadata=* *[* *Strict* *(* *strict=True* *)* *,* *Gt* *(* *gt=0* *)* *]* *)* *]*) +* **Raises:** + * **ImportError** – If the openai library is not installed. + * **ValueError** – If the AzureOpenAI API key, version, or endpoint are not provided. + * **ValueError** – If an invalid dtype is provided. + +#### `model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}` + +Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. + +#### `property type: str` + +Return the type of vectorizer. + +## VertexAITextVectorizer + + + +### `class VertexAITextVectorizer(model='textembedding-gecko', api_config=None, dtype='float32', cache=None, *, dims=None)` + +Bases: `BaseVectorizer` + +The VertexAITextVectorizer uses Google’s VertexAI Palm 2 embedding model +API to create text embeddings. + +This vectorizer is tailored for use in +environments where integration with Google Cloud Platform (GCP) services is +a key requirement. + +Utilizing this vectorizer requires an active GCP project and location +(region), along with appropriate application credentials. These can be +provided through the api_config dictionary or set the GOOGLE_APPLICATION_CREDENTIALS +env var. Additionally, the vertexai python client must be +installed with pip install google-cloud-aiplatform>=1.26. + +You can optionally enable caching to improve performance when generating +embeddings for repeated text inputs. 
+ +```python +# Basic usage +vectorizer = VertexAITextVectorizer( + model="textembedding-gecko", + api_config={ + "project_id": "your_gcp_project_id", # OR set GCP_PROJECT_ID + "location": "your_gcp_location", # OR set GCP_LOCATION + }) +embedding = vectorizer.embed("Hello, world!") + +# With caching enabled +from redisvl.extensions.cache.embeddings import EmbeddingsCache +cache = EmbeddingsCache(name="vertexai_embeddings_cache") + +vectorizer = VertexAITextVectorizer( + model="textembedding-gecko", + api_config={ + "project_id": "your_gcp_project_id", + "location": "your_gcp_location", + }, + cache=cache +) + +# First call will compute and cache the embedding +embedding1 = vectorizer.embed("Hello, world!") + +# Second call will retrieve from cache +embedding2 = vectorizer.embed("Hello, world!") + +# Batch embedding of multiple texts +embeddings = vectorizer.embed_many( + ["Hello, world!", "Goodbye, world!"], + batch_size=2 +) +``` + +Initialize the VertexAI vectorizer. + +* **Parameters:** + * **model** (*str*) – Model to use for embedding. Defaults to + ‘textembedding-gecko’. + * **api_config** (*Optional* *[* *Dict* *]* *,* *optional*) – Dictionary containing the + API config details. Defaults to None. + * **dtype** (*str*) – the default datatype to use when embedding text as byte arrays. + Used when setting as_buffer=True in calls to embed() and embed_many(). + Defaults to ‘float32’. + * **cache** (*Optional* *[*[*EmbeddingsCache*]({{< relref "cache/#embeddingscache" >}}) *]*) – Optional EmbeddingsCache instance to cache embeddings for + better performance with repeated texts. Defaults to None. + * **dims** (*Annotated* *[* *int* *|* *None* *,* *FieldInfo* *(* *annotation=NoneType* *,* *required=True* *,* *metadata=* *[* *Strict* *(* *strict=True* *)* *,* *Gt* *(* *gt=0* *)* *]* *)* *]*) +* **Raises:** + * **ImportError** – If the google-cloud-aiplatform library is not installed. + * **ValueError** – If the API key is not provided. + * **ValueError** – If an invalid dtype is provided. + +#### `model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}` + +Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. + +#### `property type: str` + +Return the type of vectorizer. + +## CohereTextVectorizer + + + +### `class CohereTextVectorizer(model='embed-english-v3.0', api_config=None, dtype='float32', cache=None, *, dims=None)` + +Bases: `BaseVectorizer` + +The CohereTextVectorizer class utilizes Cohere’s API to generate +embeddings for text data. + +This vectorizer is designed to interact with Cohere’s /embed API, +requiring an API key for authentication. The key can be provided +directly in the api_config dictionary or through the COHERE_API_KEY +environment variable. User must obtain an API key from Cohere’s website +([https://dashboard.cohere.com/](https://dashboard.cohere.com/)). Additionally, the cohere python +client must be installed with pip install cohere. + +The vectorizer supports only synchronous operations, allows for batch +processing of texts and flexibility in handling preprocessing tasks. + +You can optionally enable caching to improve performance when generating +embeddings for repeated text inputs. 
+ +```python +from redisvl.utils.vectorize import CohereTextVectorizer + +# Basic usage +vectorizer = CohereTextVectorizer( + model="embed-english-v3.0", + api_config={"api_key": "your-cohere-api-key"} # OR set COHERE_API_KEY in your env +) +query_embedding = vectorizer.embed( + text="your input query text here", + input_type="search_query" +) +doc_embeddings = vectorizer.embed_many( + texts=["your document text", "more document text"], + input_type="search_document" +) + +# With caching enabled +from redisvl.extensions.cache.embeddings import EmbeddingsCache +cache = EmbeddingsCache(name="cohere_embeddings_cache") + +vectorizer = CohereTextVectorizer( + model="embed-english-v3.0", + api_config={"api_key": "your-cohere-api-key"}, + cache=cache +) + +# First call will compute and cache the embedding +embedding1 = vectorizer.embed( + text="your input query text here", + input_type="search_query" +) + +# Second call will retrieve from cache +embedding2 = vectorizer.embed( + text="your input query text here", + input_type="search_query" +) +``` + +Initialize the Cohere vectorizer. + +Visit [https://cohere.ai/embed](https://cohere.ai/embed) to learn about embeddings. + +* **Parameters:** + * **model** (*str*) – Model to use for embedding. Defaults to ‘embed-english-v3.0’. + * **api_config** (*Optional* *[* *Dict* *]* *,* *optional*) – Dictionary containing the API key. + Defaults to None. + * **dtype** (*str*) – the default datatype to use when embedding text as byte arrays. + Used when setting as_buffer=True in calls to embed() and embed_many(). + ‘float32’ will use Cohere’s float embeddings, ‘int8’ and ‘uint8’ will map + to Cohere’s corresponding embedding types. Defaults to ‘float32’. + * **cache** (*Optional* *[*[*EmbeddingsCache*]({{< relref "cache/#embeddingscache" >}}) *]*) – Optional EmbeddingsCache instance to cache embeddings for + better performance with repeated texts. Defaults to None. + * **dims** (*Annotated* *[* *int* *|* *None* *,* *FieldInfo* *(* *annotation=NoneType* *,* *required=True* *,* *metadata=* *[* *Strict* *(* *strict=True* *)* *,* *Gt* *(* *gt=0* *)* *]* *)* *]*) +* **Raises:** + * **ImportError** – If the cohere library is not installed. + * **ValueError** – If the API key is not provided. + * **ValueError** – If an invalid dtype is provided. + +#### `model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}` + +Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. + +#### `property type: str` + +Return the type of vectorizer. + +## BedrockTextVectorizer + + + +### `class BedrockTextVectorizer(model='amazon.titan-embed-text-v2:0', api_config=None, dtype='float32', cache=None, *, dims=None)` + +Bases: `BaseVectorizer` + +The AmazonBedrockTextVectorizer class utilizes Amazon Bedrock’s API to generate +embeddings for text data. + +This vectorizer is designed to interact with Amazon Bedrock API, +requiring AWS credentials for authentication. The credentials can be provided +directly in the api_config dictionary or through environment variables: +- AWS_ACCESS_KEY_ID +- AWS_SECRET_ACCESS_KEY +- AWS_REGION (defaults to us-east-1) + +The vectorizer supports synchronous operations with batch processing and +preprocessing capabilities. + +You can optionally enable caching to improve performance when generating +embeddings for repeated text inputs. 
+
```python
# Basic usage with explicit credentials
vectorizer = AmazonBedrockTextVectorizer(
    model="amazon.titan-embed-text-v2:0",
    api_config={
        "aws_access_key_id": "your_access_key",
        "aws_secret_access_key": "your_secret_key",
        "aws_region": "us-east-1"
    }
)

# With environment variables and caching
from redisvl.extensions.cache.embeddings import EmbeddingsCache
cache = EmbeddingsCache(name="bedrock_embeddings_cache")

vectorizer = AmazonBedrockTextVectorizer(
    model="amazon.titan-embed-text-v2:0",
    cache=cache
)

# First call will compute and cache the embedding
embedding1 = vectorizer.embed("Hello, world!")

# Second call will retrieve from cache
embedding2 = vectorizer.embed("Hello, world!")

# Generate batch embeddings
embeddings = vectorizer.embed_many(["Hello", "World"], batch_size=2)
```

Initialize the AWS Bedrock Vectorizer.

* **Parameters:**
  * **model** (*str*) – The Bedrock model ID to use. Defaults to amazon.titan-embed-text-v2:0.
  * **api_config** (*Optional* *[* *Dict* *[* *str* *,* *str* *]* *]*) – AWS credentials and config.
    Can include: aws_access_key_id, aws_secret_access_key, aws_region.
    If not provided, will use environment variables.
  * **dtype** (*str*) – the default datatype to use when embedding text as byte arrays.
    Used when setting as_buffer=True in calls to embed() and embed_many().
    Defaults to ‘float32’.
  * **cache** (*Optional* *[*[*EmbeddingsCache*]({{< relref "cache/#embeddingscache" >}}) *]*) – Optional EmbeddingsCache instance to cache embeddings for
    better performance with repeated texts. Defaults to None.
  * **dims** (*Annotated* *[* *int* *|* *None* *,* *FieldInfo* *(* *annotation=NoneType* *,* *required=True* *,* *metadata=* *[* *Strict* *(* *strict=True* *)* *,* *Gt* *(* *gt=0* *)* *]* *)* *]*)
* **Raises:**
  * **ValueError** – If credentials are not provided in config or environment.
  * **ImportError** – If boto3 is not installed.
  * **ValueError** – If an invalid dtype is provided.

#### `model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}`

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

#### `property type: str`

Return the type of vectorizer.

## CustomTextVectorizer



### `class CustomTextVectorizer(embed, embed_many=None, aembed=None, aembed_many=None, dtype='float32', cache=None)`

Bases: `BaseVectorizer`

The CustomTextVectorizer class wraps user-defined embedding methods to create
embeddings for text data.

This vectorizer is designed to accept a provided callable text vectorizer and
provides a class definition to allow for compatibility with RedisVL.
The vectorizer may support both synchronous and asynchronous operations which
allows for batch processing of texts, but at a minimum only synchronous embedding
is required to satisfy the ‘embed()’ method.

You can optionally enable caching to improve performance when generating
embeddings for repeated text inputs.
+
```python
# Basic usage with a custom embedding function
vectorizer = CustomTextVectorizer(
    embed = my_vectorizer.generate_embedding
)
embedding = vectorizer.embed("Hello, world!")

# With caching enabled
from redisvl.extensions.cache.embeddings import EmbeddingsCache
cache = EmbeddingsCache(name="my_embeddings_cache")

vectorizer = CustomTextVectorizer(
    embed=my_vectorizer.generate_embedding,
    cache=cache
)

# First call will compute and cache the embedding
embedding1 = vectorizer.embed("Hello, world!")

# Second call will retrieve from cache
embedding2 = vectorizer.embed("Hello, world!")

# Asynchronous batch embedding of multiple texts
embeddings = await vectorizer.aembed_many(
    ["Hello, world!", "How are you?"],
    batch_size=2
)
```

Initialize the Custom vectorizer.

* **Parameters:**
  * **embed** (*Callable*) – a Callable function that accepts a string object and returns a list of floats.
  * **embed_many** (*Optional* *[* *Callable* *]*) – a Callable function that accepts a list of string objects and returns a list containing lists of floats. Defaults to None.
  * **aembed** (*Optional* *[* *Callable* *]*) – an asynchronous Callable function that accepts a string object and returns a list of floats. Defaults to None.
  * **aembed_many** (*Optional* *[* *Callable* *]*) – an asynchronous Callable function that accepts a list of string objects and returns a list containing lists of floats. Defaults to None.
  * **dtype** (*str*) – the default datatype to use when embedding text as byte arrays.
    Used when setting as_buffer=True in calls to embed() and embed_many().
    Defaults to ‘float32’.
  * **cache** (*Optional* *[*[*EmbeddingsCache*]({{< relref "cache/#embeddingscache" >}}) *]*) – Optional EmbeddingsCache instance to cache embeddings for
    better performance with repeated texts. Defaults to None.
* **Raises:**
  **ValueError** – if embedding validation fails.

#### `model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}`

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

#### `property type: str`

Return the type of vectorizer.

## VoyageAITextVectorizer



### `class VoyageAITextVectorizer(model='voyage-large-2', api_config=None, dtype='float32', cache=None, *, dims=None)`

Bases: `BaseVectorizer`

The VoyageAITextVectorizer class utilizes VoyageAI’s API to generate
embeddings for text data.

This vectorizer is designed to interact with VoyageAI’s /embed API,
requiring an API key for authentication. The key can be provided
directly in the api_config dictionary or through the VOYAGE_API_KEY
environment variable. Users must obtain an API key from VoyageAI’s website
([https://dash.voyageai.com/](https://dash.voyageai.com/)). Additionally, the voyageai python
client must be installed with pip install voyageai.

The vectorizer supports both synchronous and asynchronous operations, allowing for batch
processing of texts and flexibility in handling preprocessing tasks.

You can optionally enable caching to improve performance when generating
embeddings for repeated text inputs.
+ +```python +from redisvl.utils.vectorize import VoyageAITextVectorizer + +# Basic usage +vectorizer = VoyageAITextVectorizer( + model="voyage-large-2", + api_config={"api_key": "your-voyageai-api-key"} # OR set VOYAGE_API_KEY in your env +) +query_embedding = vectorizer.embed( + text="your input query text here", + input_type="query" +) +doc_embeddings = vectorizer.embed_many( + texts=["your document text", "more document text"], + input_type="document" +) + +# With caching enabled +from redisvl.extensions.cache.embeddings import EmbeddingsCache +cache = EmbeddingsCache(name="voyageai_embeddings_cache") + +vectorizer = VoyageAITextVectorizer( + model="voyage-large-2", + api_config={"api_key": "your-voyageai-api-key"}, + cache=cache +) + +# First call will compute and cache the embedding +embedding1 = vectorizer.embed( + text="your input query text here", + input_type="query" +) + +# Second call will retrieve from cache +embedding2 = vectorizer.embed( + text="your input query text here", + input_type="query" +) +``` + +Initialize the VoyageAI vectorizer. + +Visit [https://docs.voyageai.com/docs/embeddings](https://docs.voyageai.com/docs/embeddings) to learn about embeddings and check the available models. + +* **Parameters:** + * **model** (*str*) – Model to use for embedding. Defaults to “voyage-large-2”. + * **api_config** (*Optional* *[* *Dict* *]* *,* *optional*) – Dictionary containing the API key. + Defaults to None. + * **dtype** (*str*) – the default datatype to use when embedding text as byte arrays. + Used when setting as_buffer=True in calls to embed() and embed_many(). + Defaults to ‘float32’. + * **cache** (*Optional* *[*[*EmbeddingsCache*]({{< relref "cache/#embeddingscache" >}}) *]*) – Optional EmbeddingsCache instance to cache embeddings for + better performance with repeated texts. Defaults to None. + * **dims** (*Annotated* *[* *int* *|* *None* *,* *FieldInfo* *(* *annotation=NoneType* *,* *required=True* *,* *metadata=* *[* *Strict* *(* *strict=True* *)* *,* *Gt* *(* *gt=0* *)* *]* *)* *]*) +* **Raises:** + * **ImportError** – If the voyageai library is not installed. + * **ValueError** – If the API key is not provided. + +#### `model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}` + +Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. + +#### `property type: str` + +Return the type of vectorizer. +--- +linkTitle: RedisVL API +title: RedisVL API +type: integration +weight: 5 +hideListLinks: true +--- + + +Reference documentation for the RedisVL API. 
+ + + +* [Schema](schema/) + * [IndexSchema](schema/#indexschema) + * [Defining Fields](schema/#defining-fields) + * [Supported Field Types and Attributes](schema/#supported-field-types-and-attributes) +* [Search Index Classes](searchindex/) + * [SearchIndex](searchindex/#searchindex) + * [AsyncSearchIndex](searchindex/#asyncsearchindex) +* [Query](query/) + * [VectorQuery](query/#vectorquery) + * [VectorRangeQuery](query/#vectorrangequery) + * [HybridQuery](query/#hybridquery) + * [TextQuery](query/#textquery) + * [FilterQuery](query/#filterquery) + * [CountQuery](query/#countquery) +* [Filter](filter/) + * [FilterExpression](filter/#filterexpression) + * [Tag](filter/#tag) + * [Text](filter/#text) + * [Num](filter/#num) + * [Geo](filter/#geo) + * [GeoRadius](filter/#georadius) +* [Vectorizers](vectorizer/) + * [HFTextVectorizer](vectorizer/#hftextvectorizer) + * [OpenAITextVectorizer](vectorizer/#openaitextvectorizer) + * [AzureOpenAITextVectorizer](vectorizer/#azureopenaitextvectorizer) + * [VertexAITextVectorizer](vectorizer/#vertexaitextvectorizer) + * [CohereTextVectorizer](vectorizer/#coheretextvectorizer) + * [BedrockTextVectorizer](vectorizer/#bedrocktextvectorizer) + * [CustomTextVectorizer](vectorizer/#customtextvectorizer) + * [VoyageAITextVectorizer](vectorizer/#voyageaitextvectorizer) +* [Rerankers](reranker/) + * [CohereReranker](reranker/#coherereranker) + * [HFCrossEncoderReranker](reranker/#hfcrossencoderreranker) + * [VoyageAIReranker](reranker/#voyageaireranker) +* [LLM Cache](cache/) + * [SemanticCache](cache/#semanticcache) +* [Embeddings Cache](cache/#embeddings-cache) + * [EmbeddingsCache](cache/#embeddingscache) +* [LLM Message History](message_history/) + * [SemanticMessageHistory](message_history/#semanticmessagehistory) + * [MessageHistory](message_history/#messagehistory) +* [Semantic Router](router/) + * [Semantic Router](router/#semantic-router-api) + * [Routing Config](router/#routing-config) + * [Route](router/#route) + * [Route Match](router/#route-match) + * [Distance Aggregation Method](router/#distance-aggregation-method) +* [Threshold Optimizers](threshold_optimizer/) + * [CacheThresholdOptimizer](threshold_optimizer/#cachethresholdoptimizer) + * [RouterThresholdOptimizer](threshold_optimizer/#routerthresholdoptimizer) +--- +linkTitle: The RedisVL CLI +title: The RedisVL CLI +type: integration +--- + + +RedisVL is a Python library with a dedicated CLI to help load and create vector search indices within Redis. + +This notebook will walk through how to use the Redis Vector Library CLI (``rvl``). + +Before running this notebook, be sure to +1. Have installed ``redisvl`` and have that environment active for this notebook. +2. Have a running Redis instance with the Search and Query capability + + +```python +# First, see if the rvl tool is installed +!rvl version +``` + + 19:16:18 [RedisVL] INFO RedisVL version 0.5.2 + + +## Commands +Here's a table of all the rvl commands and options. We'll go into each one in detail below. 
+ +| Command | Options | Description | +|---------------|--------------------------|-------------| +| `rvl version` | | display the redisvl library version| +| `rvl index` | `create --schema` or `-s `| create a redis index from the specified schema file| +| `rvl index` | `listall` | list all the existing search indices| +| `rvl index` | `info --index` or ` -i ` | display the index definition in tabular format| +| `rvl index` | `delete --index` or `-i ` | remove the specified index, leaving the data still in Redis| +| `rvl index` | `destroy --index` or `-i `| remove the specified index, as well as the associated data| +| `rvl stats` | `--index` or `-i ` | display the index statistics, including number of docs, average bytes per record, indexing time, etc| +| `rvl stats` | `--schema` or `-s ` | display the index statistics of a schema defined in . The index must have already been created within Redis| + +## Index + +The ``rvl index`` command can be used for a number of tasks related to creating and managing indices. Whether you are working in Python or another language, this cli tool can still be useful for managing and inspecting your indices. + +First, we will create an index from a yaml schema that looks like the following: + + + +```python +%%writefile schema.yaml + +version: '0.1.0' + +index: + name: vectorizers + prefix: doc + storage_type: hash + +fields: + - name: sentence + type: text + - name: embedding + type: vector + attrs: + dims: 768 + algorithm: flat + distance_metric: cosine +``` + + Overwriting schema.yaml + + + +```python +# Create an index from a yaml schema +!rvl index create -s schema.yaml +``` + + 19:16:21 [RedisVL] INFO Index created successfully + + + +```python +# list the indices that are available +!rvl index listall +``` + + 19:16:24 [RedisVL] INFO Indices: + 19:16:24 [RedisVL] INFO 1. 
vectorizers + + + +```python +# inspect the index fields +!rvl index info -i vectorizers +``` + + + + Index Information: + ╭───────────────┬───────────────┬───────────────┬───────────────┬───────────────╮ + │ Index Name │ Storage Type │ Prefixes │ Index Options │ Indexing │ + ├───────────────┼───────────────┼───────────────┼───────────────┼───────────────┤ + | vectorizers | HASH | ['doc'] | [] | 0 | + ╰───────────────┴───────────────┴───────────────┴───────────────┴───────────────╯ + Index Fields: + ╭─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────┬─────────────────╮ + │ Name │ Attribute │ Type │ Field Option │ Option Value │ Field Option │ Option Value │ Field Option │ Option Value │ Field Option │ Option Value │ + ├─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┼─────────────────┤ + │ sentence │ sentence │ TEXT │ WEIGHT │ 1 │ │ │ │ │ │ │ + │ embedding │ embedding │ VECTOR │ algorithm │ FLAT │ data_type │ FLOAT32 │ dim │ 768 │ distance_metric │ COSINE │ + ╰─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────┴─────────────────╯ + + + +```python +# delete an index without deleting the data within it +!rvl index delete -i vectorizers +``` + + 19:16:29 [RedisVL] INFO Index deleted successfully + + + +```python +# see the indices that still exist +!rvl index listall +``` + + 19:16:32 [RedisVL] INFO Indices: + + +## Stats + +The ``rvl stats`` command will return some basic information about the index. This is useful for checking the status of an index, or for getting information about the index to use in other commands. + + +```python +# create a new index with the same schema +# recreating the index will reindex the documents +!rvl index create -s schema.yaml +``` + + 19:16:35 [RedisVL] INFO Index created successfully + + + +```python +# list the indices that are available +!rvl index listall +``` + + 19:16:38 [RedisVL] INFO Indices: + 19:16:38 [RedisVL] INFO 1. vectorizers + + + +```python +# see all the stats for the index +!rvl stats -i vectorizers +``` + + + Statistics: + ╭─────────────────────────────┬────────────╮ + │ Stat Key │ Value │ + ├─────────────────────────────┼────────────┤ + │ num_docs │ 0 │ + │ num_terms │ 0 │ + │ max_doc_id │ 0 │ + │ num_records │ 0 │ + │ percent_indexed │ 1 │ + │ hash_indexing_failures │ 0 │ + │ number_of_uses │ 1 │ + │ bytes_per_record_avg │ nan │ + │ doc_table_size_mb │ 0 │ + │ inverted_sz_mb │ 0 │ + │ key_table_size_mb │ 0 │ + │ offset_bits_per_record_avg │ nan │ + │ offset_vectors_sz_mb │ 0 │ + │ offsets_per_term_avg │ nan │ + │ records_per_doc_avg │ nan │ + │ sortable_values_size_mb │ 0 │ + │ total_indexing_time │ 0 │ + │ total_inverted_index_blocks │ 0 │ + │ vector_index_sz_mb │ 0.00818634 │ + ╰─────────────────────────────┴────────────╯ + + +## Optional arguments +You can modify these commands with the below optional arguments + +| Argument | Description | Default | +|----------------|-------------|---------| +| `-u --url` | The full Redis URL to connec to | `redis://localhost:6379` | +| `--host` | Redis host to connect to | `localhost` | +| `-p --port` | Redis port to connect to. 
Must be an integer | `6379` |
+| `--user` | Redis username, if one is required | `default` |
+| `--ssl` | Boolean flag indicating whether SSL is required. If set, the Redis base URL changes to `rediss://` | None |
+| `-a --password`| Redis password, if one is required| `""` |
+
+### Choosing your Redis instance
+By default, rvl first checks whether the `REDIS_URL` environment variable is defined and tries to connect to that. If not, it falls back to `localhost:6379`, unless you pass the `--host` or `--port` arguments.
+
+
+```python
+# specify your Redis instance to connect to
+!rvl index listall --host localhost --port 6379
+```
+
+    19:16:43 [RedisVL] INFO   Indices:
+    19:16:43 [RedisVL] INFO   1. vectorizers
+
+
+### Using SSL encryption
+If your Redis instance is configured to use SSL encryption, set the `--ssl` flag.
+You can similarly specify the username and password to construct the full Redis URL.
+
+
+```python
+# connect to rediss://jane_doe:password123@localhost:6379
+!rvl index listall --user jane_doe -a password123 --ssl
+```
+
+    19:16:46 [RedisVL] ERROR   Error 8 connecting to rediss:6379. nodename nor servname provided, or not known.
+
+
+
+```python
+!rvl index destroy -i vectorizers
+```
+
+    19:16:49 [RedisVL] INFO   Index deleted successfully
+
+---
+linkTitle: Install RedisVL
+title: Install RedisVL
+type: integration
+---
+
+
+There are a few ways to install RedisVL. The easiest way is to use pip.
+
+## Install RedisVL with Pip
+
+Install `redisvl` into your Python (>=3.8) environment using `pip`:
+
+```bash
+$ pip install -U redisvl
+```
+
+RedisVL comes with a few dependencies that are automatically installed; however, a few dependencies
+are optional and can be installed separately if needed:
+
+```bash
+$ pip install redisvl[all] # install vectorizer dependencies
+$ pip install redisvl[dev] # install dev dependencies
+```
+
+If you use ZSH, remember to escape the brackets:
+
+```bash
+$ pip install redisvl\[all\]
+```
+
+This library also supports the use of `hiredis`, which you can install by running:
+
+```bash
+$ pip install redisvl[hiredis]
+```
+
+## Install RedisVL from Source
+
+To install RedisVL from source, clone the repository and install the package using `pip`:
+
+```bash
+$ git clone https://github.com/redis/redis-vl-python.git && cd redisvl
+$ pip install .
+
+# or for an editable installation (for developers of RedisVL)
+$ pip install -e .
+```
+
+## Installing Redis
+
+RedisVL requires a distribution of Redis that supports the [Search and Query](https://redis.com/modules/redis-search/) capability, of which there are three:
+
+1. [Redis Cloud](https://redis.io/cloud), a fully managed cloud offering
+2. [Redis Stack](https://redis.io/docs/getting-started/install-stack/docker/), a local docker image for testing and development
+3. [Redis Enterprise](https://redis.com/redis-enterprise/), a commercial self-hosted offering
+
+### Redis Cloud
+
+Redis Cloud is the easiest way to get started with RedisVL. You can sign up for a free account [here](https://redis.io/cloud). Make sure to have the `Search and Query`
+capability enabled when creating your database.
+
+### Redis Stack (local development)
+
+For local development and testing, Redis Stack can be used. We recommend running Redis
+in a docker container. To do so, run the following command:
+
+```bash
+docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
+```
+
+This will also spin up the [Redis Insight GUI](https://redis.io/insight/) at `http://localhost:8001`.
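+
+Before creating indices, you can optionally confirm that the container is up and includes the Search and Query capability. This is a minimal sketch that assumes `redis-py` is available (it is installed as a dependency of `redisvl`) and that the container above is listening on `localhost:6379`:
+
+```python
+import redis
+
+# Connect to the local Redis Stack container started above.
+client = redis.Redis(host="localhost", port=6379)
+
+print(client.ping())          # True if the server is reachable
+print(client.module_list())   # the list should include the "search" module that RedisVL relies on
+```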
+ +### Redis Enterprise (self-hosted) + +Redis Enterprise is a commercial offering that can be self-hosted. You can download the latest version [here](https://redis.io/downloads/). + +If you are considering a self-hosted Redis Enterprise deployment on Kubernetes, there is the [Redis Enterprise Operator](https://docs.redis.com/latest/kubernetes/) for Kubernetes. This will allow you to easily deploy and manage a Redis Enterprise cluster on Kubernetes. +--- +linkTitle: Overview +title: Overview +type: integration +weight: 3 +hideListLinks: true +--- + + + + +* [Install RedisVL](installation/) + * [Install RedisVL with Pip](installation/#install-redisvl-with-pip) + * [Install RedisVL from Source](installation/#install-redisvl-from-source) + * [Installing Redis](installation/#installing-redis) +* [The RedisVL CLI](cli/) + * [Commands](cli/#commands) + * [Index](cli/#index) + * [Stats](cli/#stats) + * [Optional arguments](cli/#optional-arguments) +--- +aliases: +- /integrate/redisvl/api +- /integrate/redisvl/api/cache +- /integrate/redisvl/api/filter +- /integrate/redisvl/api/query +- /integrate/redisvl/api/schema +- /integrate/redisvl/api/searchindex +- /integrate/redisvl/api/vectorizer +- /integrate/redisvl/overview +- /integrate/redisvl/overview/cli +- /integrate/redisvl/user-guide +- /integrate/redisvl/user-guide/get-started +- /integrate/redisvl/user-guide/json-v-hashes +- /integrate/redisvl/user-guide/query-filter +- /integrate/redisvl/user-guide/semantic-caching +- /integrate/redisvl/user-guide/vectorizers +categories: +- docs +- integrate +- stack +- oss +- rs +- rc +- oss +- clients +description: This is the Redis vector library (RedisVL). +group: library +hidden: false +linkTitle: RedisVL +summary: RedisVL provides a powerful, dedicated Python client library for using Redis + as a vector database. Leverage Redis's speed, reliability, and vector-based semantic + search capabilities to supercharge your application. +title: RedisVL +type: integration +weight: 1 +--- +RedisVL is a powerful, dedicated Python client library for Redis that enables seamless integration and management of high-dimensional vector data. +Built to support machine learning and artificial intelligence workflows, RedisVL simplifies the process of storing, searching, and analyzing vector embeddings, which are commonly used for tasks like recommendation systems, semantic search, and anomaly detection. + +Key features of RedisVL include: + +- Vector Similarity Search: Efficiently find nearest neighbors in high-dimensional spaces using algorithms like HNSW (Hierarchical Navigable Small World). +- Integration with AI Frameworks: RedisVL works seamlessly with popular frameworks such as TensorFlow, PyTorch, and Hugging Face, making it easy to deploy AI models. +- Scalable and Fast: Leveraging Redis's in-memory architecture, RedisVL provides low-latency access to vector data, even at scale. +- By bridging the gap between data storage and AI model deployment, RedisVL empowers developers to build intelligent, real-time applications with minimal infrastructure complexity. +--- +LinkTitle: Nagios with Redis Enterprise +Title: Nagios with Redis Enterprise +alwaysopen: false +categories: +- docs +- integrate +- rs +description: "The Redis Enterprise Software (RS)\_Nagios plugin enables you to monitor\ + \ the status of RS\_related objects and alerts. The RS\_alerts can be related to\ + \ the cluster, nodes, or databases." 
+group: observability +summary: "This\_Nagios plugin enables you to monitor the status of Redis Enterprise\_\ + related components and alerts." +title: Nagios integration with Redis Enterprise Software +type: integration +weight: 7 +--- + +{{}} +The Nagios plugin has been retired as of Redis Enterprise Software version 7.2.4. +{{}} + +The Redis Enterprise Software (RS) Nagios plugin enables you to monitor the status of RS related +objects and alerts. The RS alerts can be related to the cluster, nodes, +or databases. + +The alerts that can be monitored via Nagios are the same alerts that can +be configured in the RS UI in the Settings ­\> Alerts page, or the +specific Database ­\> Configuration page. + +All alert configurations (active / not active, setting thresholds, etc') +can only be done through the RS UI, they cannot be configured in Nagios. +Through Nagios you can only view the status and information of the +alerts. + +The full list of alerts can be found in the plugin package itself (in +"/rlec_obj/rlec_services.cfg" file, more details below). + +RS Nagios plugin support API password retrieval from Gnome keyring, +KWallet, Windows credential vault, Mac OS X Keychain, if present, or +otherwise Linux Secret Service compatible password store. With no +keyring service available, the password is saved with base64 encoding, +under the user home directory. + +## Configuring the Nagios plugin + +In order to configure the Nagios plugin you need to copy the files that +come with the package into your Nagios environment and place them in a +Nagios configuration directory. Or, alternatively you can copy parts of +the package configuration into your existing Nagios configuration. + +If Keyring capabilities are needed to store the password, python keyring +package should be installed and used by following the below steps from +the operating system CLI on Nagios machine: + +1. pip install keyring ­to install the package (See + https://pip.pypa.io/en/stable/installing/ on how to install python + pip if needed). +1. keyring set RS-Nagios ­\ to set the password. + User email should be identical to the email used in Nagios + configuration and the password should be set using the same user + that run the Nagios server. + +Then, you need to update the local parameters, such as hostnames, +addresses, and object IDs, to the values relevant for your +RS deployment. + +Finally, you need to set the configuration for each node and database +you would like to monitor. More details below. + +The RS Nagios package includes two components: + +- The plugin itself ­- with suffix "rlec_nagios_plugin" +- Configuration files - with suffix "rlec_nagios_conf" + +Below is the list of files included in these packages and instructions +regarding what updates need to be made to these flies. + +Note : The instructions below assume you are running on Ubuntu, have a +clean Nagios installation, and the base Nagios directory is +"/usr/local/nagios/" + +### Step 1 + +Copy the folder named "libexec" from the plugin folder and its contents +to "/usr/local/nagios/" + +These files included in it are: + +- check_rlec_alert +- check_rlec_node +- check_rlec_bdb +- email_stub +- rlecdigest.py + +Note : The check_rlec_alert, check_rlec_node, check_rlec_bdb files +are the actual plugin implementation. You can run each of them with a +"­h" switch in order to retrieve their documentation and their expected +parameters. 
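+
+If you want to sanity-check the plugin scripts before wiring them into Nagios, you can invoke them directly from the shell. This is an optional step; each script documents its own expected parameters:
+
+```bash
+# Run from the Nagios base directory; make sure the scripts are executable,
+# then print the built-in documentation for one of them with the -h switch.
+cd /usr/local/nagios/libexec
+chmod +x check_rlec_alert check_rlec_node check_rlec_bdb
+./check_rlec_node -h
+```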
+ +### Step 2 + +Add the following lines to your "nagios.cfg": + +- cfg_dir=/usr/local/nagios/etc/rlec_obj +- cfg_dir=/usr/local/nagios/etc/rlec_local +- resource_file=/usr/local/nagios/etc/rlec_resource.cfg + +### Step 3 + +Copy the configuration files along with their folders to +"/usr/local/nagios/etc" and make the required updates, as detailed +below. + +1. Under the "/etc" folder: + 1. "rlec_resource.cfg " ­ holds global variables definitions for + the user and password to use to connect to RS. You should update + the variables to the relevant user and password for your + deployment. + 1. "rlec_local " folder + 1. "rlec_obj" folder +1. Under the "/rlec_local" folder: + 1. "cluster.cfg " ­ holds configuration details at the cluster + level. If you would like to monitor more than one cluster then + you need to duplicate the two existing entries in the file for + each cluster. + 1. The first "define host" section defines a variable for the + IP address of the cluster that is used in other + configuration files. + 1. Update the "address" to the Cluster Name (FQDN) as + defined in DNS, or the IP address of one of the nodes in + the cluster. + 1. If you are configuring more than one RS then when + duplicating this section you should make sure: + 1. The "name" is unique. + 1. In the second "define host" section: + 1. The "host_name " in each entry must be unique. + 1. The "display_name" in each entry can be updated to a + user-friendly name that are shown in Nagios UI. + 1. "contacts.cfg " ­ holds configuration details who to send emails + to. It should be updated to values relevant for your deployment. + If this file already exists in your existing Nagios environment + then you should update it accordingly. + 1. "databases.cfg" ­ holds configuration details of the databases + to monitor. The "define host" section should be duplicated for + every database to monitor. + 1. "host_name" should be a unique value. + 1. "display_name " should be updated to a user-friendly name + to show in the UI. + 1. "_RLECID " should be the database's internal ID that can + be retrieved from + [`rladmin status`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status" >}}) command output. + 1. "nodes.cfg " ­ holds configuration details of the nodes in the + cluster. The "define host" section should be duplicated for + every node in the cluster. + 1. "host_name" should be a unique value. + 1. "display_name " should be updated to a user-friendly name + to show in the UI. + 1. "address" should be updated to the DNS name mapped to the + IP address of the node, or to the IP address itself. + 1. "_RLECID " should be the node's internal ID that can be + retrieved + from [`rladmin status`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status" >}}) command output. + 1. Under the "/rlec_obj" folder: + 1. "rlec_cmd.cfg" ­ holds configuration details of how to + activate the plugin. No need to make any updates to it. + 1. "rlec_groups.cfg" holds definitions of host groups. No need + to make any updates to it. + 1. "rlec_services.cfg" holds definitions of all alerts that + are monitored. No need to make any updates to it. + 1. "rlec_templates.cfg" holds general RS Nagios definitions. + No need to make any updates to it. 
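+
+As an illustration of the entries described in Step 3, a duplicated database section in "databases.cfg" might look roughly like the sketch below. The template name and all values are placeholders for this example; keep the directives that ship with the package and only change the values for your deployment:
+
+```
+define host {
+    use             rlec-bdb        ; host template name is illustrative; use the one shipped with the package
+    host_name       rlec-db-1       ; must be unique
+    display_name    Sessions cache  ; user-friendly name shown in the Nagios UI
+    _RLECID         3               ; database internal ID taken from `rladmin status` output
+}
+```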
+--- +LinkTitle: Datadog with Redis Cloud +Title: Datadog with Redis Cloud +alwaysopen: false +categories: +- docs +- integrate +- rs +description: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect Datadog to your Redis Cloud cluster using the + Redis Datadog Integration. +group: observability +summary: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect Datadog to your Redis Cloud cluster using the + Redis Datadog Integration. +type: integration +weight: 7 +--- + + +[Datadog](https://www.datadoghq.com/) is used by organizations of all sizes and across a wide range of industries to +enable digital transformation and cloud migration, drive collaboration among development, operations, security and +business teams, accelerate time to market for applications, reduce time to problem resolution, secure applications and +infrastructure, understand user behavior, and track key business metrics. + +The Datadog Integration for Redis Cloud uses the Datadog Integration API to connect to Redis metrics exporters. +The integration is based on Datadog's +[OpenMetrics integration](https://datadoghq.dev/integrations-core/base/openmetrics/) in their core API. This integration +enables Redis Cloud users to export metrics directly to Datadog for analysis, and includes Redis-designed +dashboards for use in monitoring Redis Cloud clusters. + +This integration makes it possible to: +- Collect and display metrics not available in the admin console +- Set up automatic alerts for node or cluster events +- Display these metrics alongside data from other systems + +{{< image filename="/images/rc/redis-cloud-datadog.png" alt="screenshot of datadog dashboard">}} +## Install Redis' Datadog Integration for Redis Cloud + +Installing the Datadog integration is a two-step process. Firstly, the installation must be part of your configuration. +Select 'Integrations' from the menu in the Datadog portal and then enter 'Redis' in the search bar, then select +'Redis Cloud by Redis, Inc.'. Next click 'Install Integration' in the top-right corner of the overview page. + +If you have not already created a VPC between the Redis Cloud cluster and the network in which the machine hosting the +Datadog agent lives you should do so now. Please visit [VPC Peering](https://redis.io/docs/latest/operate/rc/security/vpc-peering/) +and follow the instructions for the cloud platform of your choice. + +Returning to the Datadog console, open the 'Configure' tab of the integration and follow the instructions for installing +the integration on the local machine. After it has been installed follow the instruction for adding an instance to the +conf.yaml in /etc/datadog-agent/conf.d/redis_cloud.d. + +After you have edited the conf.yaml file please restart the service and check its status: + +```shell +sudo service datadog-agent restart +``` + +followed by: + +```shell +sudo service datadog-agent status +``` + +to be certain that the service itself is running and did not encounter any problems. Next, check the output of the +service; in the terminal on the Datadog agent host run the following command: + +```shell +tail -f /var/log/datadog/agent.log +``` + +It will take several minutes for data to reach Datadog. Finally, check the Datadog console by selecting +Infrastructure -> Host Map from the menu and then finding the host that is monitoring the Redis Cloud instance. 
The host +should be present, and in its list of components there should be a section called 'rdsc', which is the namespace used by +the Redis Cloud integration, although this can take several minutes to appear. It is also possible to verify the metrics +by choosing Metrics -> Explorer from the menu and entering 'rdsc.bdb_up'. + +## View metrics + +The Redis Cloud Integration for Datadog contains pre-defined dashboards to aid in monitoring your Redis Cloud deployment. + +The following dashboards are currently available: + +- Overview +- Database +- Network + +A number of additional dashboards will be included in the next release (v1.1.0). + +## Monitor metrics + +See [Observability and monitoring guidance]({{< relref "/integrate/prometheus-with-redis-enterprise/observability" >}}) for monitoring details. +--- +LinkTitle: NRedisStack +Title: C#/.NET client for Redis +categories: +- docs +- integrate +- oss +- rs +- rc +description: Learn how to build with Redis and C#/.NET +group: library +stack: true +summary: NRedisStack is a C#/.NET library for Redis. +title: NRedisStack +type: integration +weight: 2 +--- + +Connect your C#/.NET application to a Redis database using the NRedisStack client library. + +Refer to the complete [C#/.NET guide]({{< relref "/develop/clients/dotnet" >}}) to install, connect, and use NRedisStack. +--- +LinkTitle: node-redis +Title: Node.js client for Redis +categories: +- docs +- integrate +- oss +- rs +- rc +description: Learn how to build with Redis and Node.js +group: library +stack: true +summary: node-redis is a Node.js client library for Redis. +title: node-redis +type: integration +weight: 2 +--- + +Connect your Node.js application to a Redis database using the node-redis client library. + +Refer to the complete [Node.js guide]({{< relref "/develop/clients/nodejs" >}}) to install, connect, and use node-redis. +--- +LinkTitle: New Relic with Redis Cloud +Title: New Relic with Redis Cloud +alwaysopen: false +categories: +- docs +- integrate +- rs +description: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect New Relic to your Redis Cloud cluster using + the Redis New Relic Integration. +group: observability +summary: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect New Relic to your Redis Cloud cluster using + the Redis New Relic Integration. +type: integration +weight: 7 +--- + + +[New Relic](https://newrelic.com/?customer-bypass=true) is used by organizations of all sizes and across a wide range of industries to +enable digital transformation and cloud migration, drive collaboration among development, operations, security and +business teams, accelerate time to market for applications, reduce time to problem resolution, secure applications and +infrastructure, understand user behavior, and track key business metrics. + +The New Relic Integration for Redis Cloud uses Prometheus remote write functionality to connect Prometheus data +sources to New Relic. This integration enables Redis Cloud users to export metrics to New Relic for analysis, +and includes Redis-designed dashboards for use in monitoring Redis Cloud clusters. 
+ +This integration makes it possible to: +- Collect and display metrics not available in the admin console +- Set up automatic alerts for node or cluster events +- Display these metrics alongside data from other systems + +{{< image filename="/images/rc/redis-cloud-newrelic.png" >}} +## Install Redis' New Relic Integration for Redis Cloud + +The New Relic Integration for Redis is based on a feature of the Prometheus data source. Prometheus can forward metrics on to +another destination using remote writes. This will require a Prometheus installation inside the same datacenter as the +Redis Cloud deployment. + +If you have not already created a VPC between the Redis Cloud cluster and the network in which the machine hosting +Prometheus lives you should do so now. Please visit [VPC Peering](https://redis.io/docs/latest/operate/rc/security/vpc-peering/) +and follow the instructions for the cloud platform of your choice. + +Finally, the Prometheus installation must be configured to pull metrics from Redis Cloud and write them to New Relic. There +are two sections, first the pull from Redis and second the write to New Relic. + +Get metrics from Redis Cloud: + +```yaml + - job_name: "redis-cloud" + scrape_interval: 30s + scrape_timeout: 30s + metrics_path: / + scheme: https + tls_config: + insecure_skip_verify: true + static_configs: + # The default Redis Cloud Prometheus port is 8070. + # Replace REDIS_CLOUD_HOST with your cluster's hostname. + - targets: ["REDIS_CLOUD_HOST:8070"] +``` + +Write them to New Relic: + +```yaml +# Remote write configuration for New Relic. +# - Replace REDIS_CLOUD_SERVICE NAME with any name you'd like to use to refer to this data source. +# - Replace NEW_RELIC_BEARER_TOKEN with the token you generated on the New Relic Administration -> API Keys page. +remote_write: +- url: https://metric-api.newrelic.com/prometheus/v1/write?prometheus_server=REDIS_CLOUD_SERVICE_NAME + authorization: + credentials: NEW_RELIC_BEARER_TOKEN +``` + +## View metrics + +The Redis Cloud Integration for New Relic contains pre-defined dashboards to aid in monitoring your Redis Enterprise deployment. + +The following dashboards are currently available: + +- Cluster: top-level statistics indicating the general health of the cluster +- Database: performance metrics at the database level +- Node +- Shard: low-level details of an individual shard +- Active-Active: replication and performance for geo-replicated clusters +- Proxy: network and command information regarding the proxy +- Proxy Threads: processor usage information regarding the proxy's component threads + +## Monitor metrics + +New Relic dashboards can be filtered using the text area. For example, when viewing a cluster dashboard it is possible to +filter the display to show data for only one cluster by typing 'cluster' in the text area and waiting for the system to +retrieve the relevant data before choosing one of the options in the 'cluster' section. + +Certain types of data do not know the name of the database from which they were drawn. The dashboard should have a list +of database names and ids; use the id value when filtering input to the dashboard. 
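+
+Before relying on the scrape and remote-write configuration above, it can help to confirm that the Prometheus host can actually reach the Redis Cloud metrics endpoint. A minimal check, assuming the default metrics port and using a placeholder hostname, is:
+
+```bash
+# The cluster endpoint typically presents a certificate that is not publicly trusted,
+# which is why the scrape config sets insecure_skip_verify; -k mirrors that here.
+curl -sk https://REDIS_CLOUD_HOST:8070/ | head
+```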
+ + + + +--- +Title: Write-behind architecture +aliases: /integrate/redis-data-integration/write-behind/architecture/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Discover the main components of Write-behind +group: di +headerRange: '[2]' +linkTitle: Architecture +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Write-behind lets you integrate Redis Enterprise (as the source of changes to data) and downstream databases or datastores. +Write-behind captures any changes to a selected set of key patterns in a Redis keyspace and asynchronously writes them in small batches to the downstream database. This means that your app doesn't need to handle the data remodeling or manage the connection with the downstream database. + +Write-behind can normalize a key in Redis to several records in one or more tables at the target. +To learn more about write-behind declarative jobs and normalization, see the +[write-behind quick start guide]({{< relref "/integrate/write-behind/quickstart/write-behind-guide" >}}). + +## Write-behind topology + +Write-behind's CLI and engine are shipped as one product that can run both ingest and write-behind pipelines. +However, the two different types of pipeline have different topologies. + +The Write-behind engine is installed on the Redis database that contains the application data and not on a separate staging Redis database. The Write-behind data streams and its control plane add only a small footprint of a few hundred MB to the Redis database. In the write-behind topology, Write-behind processes data in parallel on each shard and establishes a single connection from each shard to the downstream database. + +### Model translation + +Write-behind can track changes to the following Redis types: + +- [Hash]({{< relref "/develop/data-types/hashes" >}}) +- [JSON]({{< relref "/develop/data-types/json/" >}}) +- [Set]({{< relref "/develop/data-types/sets" >}}) +- [Sorted Set]({{< relref "/develop/data-types/sorted-sets" >}}) + +Unlike the ingest scenario, write-behind has no default behavior for model translation. You must always +create a declarative job to specify the mapping between Redis keys and target database records. +The job configuration has `keys` and `mapping` sections that help make this an easy task. + +## Write-behind components + +### Write-behind CLI + +Write-behind's Python-based CLI is highly intuitive to use and performs validations to help you avoid mistakes. +The CLI makes it easy to set up Write-behind and manage it over its entire lifecycle. + +### Write-behind Engine + +Write-behind uses Redis Gears as its runtime environment. The Gears and Write-behind engine logic are installed +on all source Redis Enterprise database shards, but only primary shards process events and handle the pipeline. + +The Write-behind Engine reads Redis change events from Redis Streams (one for each tracked key-pattern), +processes them, and translates them to SQL or whatever other language the target database uses. + +Write-behind writes changes to the target in small batches using transactions. Write-behind guarantees +*at-least once* delivery. If any network problems, disconnections, or other temporary failures occur, +Write-behind will keep attempting to write the changes to the target. If a hard reject occurs, Write-behind keeps the reject +record and the reason in a *dead letter queue (DLQ)*. 
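+
+To make the model translation described above concrete, the sketch below shows the general shape of such a declarative job. All key patterns, table names, and fields are illustrative; the full job structure and options are covered in the write-behind quick start guide:
+
+```yaml
+source:
+  redis:
+    key_pattern: invoice:*        # track changes to keys matching this pattern
+    trigger: write-behind
+output:
+  - uses: relational.write
+    with:
+      connection: my-postgres     # a connection defined in config.yaml
+      schema: public
+      table: invoices
+      keys:
+        - invoice_id              # unique constraint in the target table
+      mapping:
+        - amount
+        - status
+```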
+
+### Write-behind configuration
+
+The Write-behind configuration is persisted at the cluster level. The configuration is written by the CLI
+[`deploy`]({{< relref "/integrate/write-behind/reference/cli/redis-di-deploy" >}})
+command, which saves all changes to the configuration file. This mechanism allows for automatic configuration of new shards
+whenever you need them, and it can survive shard and node failure.
+---
+Title: Install RedisGears for Redis Data Integration
+aliases: null
+alwaysopen: false
+categories:
+- docs
+- integrate
+- rs
+- rdi
+description: Install and set up RedisGears for a Write-behind deployment
+group: di
+linkTitle: Install RedisGears
+summary: Redis Data Integration keeps Redis in sync with the primary database in near
+  real time.
+type: integration
+weight: 70
+---
+
+Write-behind requires the [RedisGears](https://redis.com/modules/redis-gears) module with the [Python plugin](https://docs.redis.com/latest/modules/redisgears/python/) to be installed on the Redis Enterprise cluster.
+
+The Python plugin can be installed explicitly or alongside the [JVM plugin](https://docs.redis.com/latest/modules/redisgears/jvm/) if the latter is needed on the cluster for other purposes.
+
+Use the [`redis-di create`]({{< relref "/integrate/write-behind/reference/cli/redis-di-create.md" >}}) command in the Write-behind CLI to install RedisGears.
+
+## Download RedisGears
+
+Download RedisGears based on the Linux distribution on which Redis Enterprise is installed.
+
+### Ubuntu 20.04
+
+```bash
+curl -s --tlsv1.3 https://redismodules.s3.amazonaws.com/redisgears/redisgears.Linux-ubuntu20.04-x86_64.{{}}-withdeps.zip -o /tmp/redis-gears.zip
+```
+
+### Ubuntu 18.04
+
+```bash
+curl -s --tlsv1.3 https://redismodules.s3.amazonaws.com/redisgears/redisgears.Linux-ubuntu18.04-x86_64.{{}}-withdeps.zip -o /tmp/redis-gears.zip
+```
+
+### RHEL 8
+
+```bash
+curl -s https://redismodules.s3.amazonaws.com/redisgears/redisgears.Linux-rhel8-x86_64.{{}}-withdeps.zip -o /tmp/redis-gears.zip
+```
+
+### RHEL 7
+
+```bash
+curl -s https://redismodules.s3.amazonaws.com/redisgears/redisgears.Linux-rhel7-x86_64.{{}}-withdeps.zip -o /tmp/redis-gears.zip
+```
+---
+Title: Install Write-behind CLI
+aliases: null
+alwaysopen: false
+categories:
+- docs
+- integrate
+- rs
+- rdi
+description: Install Write-behind CLI
+group: di
+linkTitle: Install Write-behind CLI
+summary: Redis Data Integration keeps Redis in sync with the primary database in near
+  real time.
+type: integration
+weight: 10
+---
+
+The following installation instructions install the Write-behind CLI on a local workstation.
+
+Write-behind installation is done via the Write-behind CLI. The CLI must have network access to the Redis Enterprise cluster API (port 9443 by default).
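+
+Before downloading, you can optionally confirm that the workstation can reach the cluster API port. The hostname below is a placeholder for your cluster's fully qualified domain name:
+
+```bash
+# Checks TCP connectivity to the Redis Enterprise cluster API (default port 9443).
+nc -zv cluster.example.com 9443
+```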
+ +### Download Write-behind CLI + +#### Ubuntu 20.04 + +```bash +wget https://qa-onprem.s3.amazonaws.com/redis-di/{{}}/redis-di-ubuntu20.04-{{}}.tar.gz -O /tmp/redis-di.tar.gz +``` + +#### Ubuntu 18.04 + +```bash +wget https://qa-onprem.s3.amazonaws.com/redis-di/{{}}/redis-di-ubuntu18.04-{{}}.tar.gz -O /tmp/redis-di.tar.gz +``` + +#### RHEL 8 + +```bash +wget https://qa-onprem.s3.amazonaws.com/redis-di/{{}}/redis-di-rhel8-{{}}.tar.gz -O /tmp/redis-di.tar.gz +``` + +#### RHEL 7 + +```bash +wget https://qa-onprem.s3.amazonaws.com/redis-di/{{}}/redis-di-rhel7-{{}}.tar.gz -O /tmp/redis-di.tar.gz +``` + +## Install Write-behind CLI + +Unpack the downloaded `redis-di.tar.gz` into the `/usr/local/bin/` directory: + +```bash +sudo tar xvf /tmp/redis-di.tar.gz -C /usr/local/bin/ +``` + +> Note: Non-root users should unpack to a directory with write permission and run `redis-di` directly from it. +--- +Title: Installation +aliases: null +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Learn how to install Write-behind +group: di +hideListLinks: false +linkTitle: Installation +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 8 +--- + +--- +Title: Monitoring guide +aliases: /integrate/redis-data-integration/write-behind/monitoring-guide/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Monitor Write-behind engine and data processing jobs +group: di +linkTitle: Monitoring +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 70 +--- + +Write-behind accumulates operating statistics that you can: + +- Observe and analyze to discover various types of problems. +- Use for optimization purposes. + +## Console metrics + +Write-behind can display its operating metrics in the console using the +[`redis-di status`]({{< relref "/integrate/write-behind/reference/cli/redis-di-status" >}}) +command. The command provides information about the current Write-behind engine status, target database configuration, and processing statistics broken down by stream. This tool is intended to be used by Operator to get the current snapshot of the system as well as monitoring ongoing data processing (when used in live mode). + +## Prometheus integration + +Write-behind allows collecting and exporting its metrics to [Prometheus](https://prometheus.io/) and visualizing them in [Grafana](https://grafana.com/). Operator can start the built-in exporter using the +[`redis-di monitor`]({{< relref "/integrate/write-behind/reference/cli/redis-di-monitor" >}}) +command. The diagram describes this flow and the components involved: + +{{< image filename="/images/rdi/monitoring-diagram.png" >}} + +> Note: The host names and ports above are examples only and can be changed as needed. + +### Test Write-behind metrics exporter + +Start the Write-behind metrics exporter using the command below: + +```bash +redis-di monitor +``` + +> Note: The default port for the exporter is `9121`. If you need to change it, use the `--exporter-port` option. The default metrics collection interval is 5 seconds. If you need to change it, use the `--collect-interval` option. + +Then navigate to `http://localhost:9121/` to see the exported metrics. 
You should be able to see the following metric: + +``` +rdi_engine_state{state="RUNNING",sync_mode="UNKNOWN"} 1.0 +``` + +> Note: The actual value of the metric above can be 0 if you haven't started Write-behind engine yet (in which case, the `state` label should indicate that as well). You must have the Write-behind database created and configured before observing any metrics. If you are not seeing it or getting an error value instead, this indicates that the Write-behind database is not properly configured. + +## Configure Prometheus + +Next, configure the Prometheus scraper. Edit the `prometheus.yml` file to add the following scraper config: + +```yaml +scrape_configs: + # scrape Write-behind metrics exporter + - job_name: rdi-exporter + static_configs: + - targets: ["redis-exporter:9121"] +``` + +> Notes: + +- Make sure the `targets` value above points to the host and port you configured to run the Write-behind metrics exporter. +- The `scrape_interval` setting in Prometheus should be the same or higher than the `collect_interval` setting for the exporter. For example, if the `collect_interval` is set to 5 seconds, the `scrape_interval` should also be set to 5 seconds or more. If the `scrape_interval` is set to less than the `collect_interval`, Prometheus will scrape the exporter before it has a chance to collect and refresh metrics, and you will see duplicated values in Prometheus. + +## Test the Prometheus scraper + +After the scraper config is added to the Prometheus configuration, you should now be able to navigate to `http://:9090/graph` (replace `` with a valid Prometheus hostname or IP address). + +Explore Write-behind metrics using the [expression browser](https://prometheus.io/docs/visualization/browser/). + +In the expression box, type in a metric name (for example, `rdi_engine_state`) and select `Enter` or the `Execute` button to see the following result: + +``` +rdi_engine_state{instance="redis-exporter:9121", job="rdi-exporter", status="RUNNING", sync_mode="UNKNOWN"} 1 +``` + +> Note: You may see more than just one Write-behind metric if Write-behind engine has already processed some data. If you do not see any metrics, check your scraper job configuration in Prometheus. + +## Use Grafana to analyze metrics + +Optionally, you may deploy the sample Grafana dashboard to monitor the status of Write-behind engine and its individual jobs. Follow these steps to import the dashboard: + +1. Download the **dashboard file** to your local machine. + +1. Log into Grafana and navigate to the list of dashboards, then choose **New > Import**: + +{{< image filename="/images/rdi/monitoring-grafana-new-dash.png" >}} + +1. On the next screen, select **Upload JSON file** and upload the file you downloaded in step 1. Make sure you select the data source that is connected to the Write-behind metrics exporter: + +{{< image filename="/images/rdi/monitoring-grafana-dash-configure.png" >}} + +1. 
Select **Import** and make sure you choose the jobs to monitor from the drop-down list (this will be empty if you don't have any jobs running yet): + +{{< image filename="/images/rdi/monitoring-grafana-dash-running.png" >}} + +## Write-behind metrics + +This list shows exported Write-behind metrics along with their descriptions: + +| Metric Name | Labels | Values | Description | +| --------------------------- | ---------------------------------------------------------------------------------------------------------- | -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| rdi_engine_state | {status=RUNNING \| STOPPED, sync_mode=SNAPSHOT \| STREAMING \| UNKNOWN} | 0, 1 | Status of Write-behind engine. 0 - Write-behind engine is stopped, 1 - Write-behind engine is running. Sync mode label indicates the last reported ingest synchronization mode. | +| rdi_incoming_entries | {data_source=``, operation=pending \| inserted \| updated \| deleted \| filtered \| rejected} | `` | Counters, indicating the number of operations performed for each stream. | +| rdi_stream_event_latency_ms | {data_source=``} | 0 - ∞ | Latency calculated for each stream. Indicates the time in milliseconds the first available record has spent in the stream waiting to be processed by Write-behind engine. If no records pending it will always return zero. | +--- +Title: Quickstart +aliases: null +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Get started creating a write-behind pipeline +draft: null +group: di +hidden: false +linkTitle: Quickstart +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 5 +--- + +This guide takes you through the creation of a write-behind pipeline. + + +## Concepts + +**Write-behind** is a processing pipeline used to synchronize data in a Redis database with a downstream data store. +You can think about it as a pipeline that starts with change data capture (CDC) events for a Redis database and then filters, transforms, and maps the data to the target data store data structures. + +The **target** data store to which the write-behind pipeline connects and writes data. + +The write-behind pipeline is composed of one or more **jobs**. Each job is responsible for capturing change for one key pattern in Redis and mapping it to one or more tables in the downstream data store. Each job is defined in a YAML file. + +{{< image filename="/images/rdi/redis-di-write-behind.png" >}} + +## Supported data stores + +Write-behind currently supports these target data stores: + +| Data Store | +| ---------- | +| Cassandra | +| MariaDB | +| MySQL | +| Oracle | +| PostgreSQL | +| Redis Enterprise | +| SQL Server | + +## Prerequisites + +The only prerequisite for running Write-behind is [Redis Gears Python](https://redis.com/modules/redis-gears/) >= 1.2.6 installed on the Redis Enterprise Cluster and enabled for the database you want to mirror to the downstream data store. +For more information, see +[RedisGears installation]({{< relref "/integrate/write-behind/installation/install-redis-gears" >}}). + +## Preparing the write-behind pipeline + +- Install [Write-behind CLI]({{< relref "/integrate/write-behind/installation/install-rdi-cli" >}}) on a Linux host that has connectivity to your Redis Enterprise Cluster. 
+- Run the [`configure`]({{< relref "/integrate/write-behind/reference/cli/redis-di-configure" >}}) command to install the Write-behind Engine on your Redis database, if you have not used this Redis database with Write-behind before. +- Run the [`scaffold`]({{< relref "/integrate/write-behind/reference/cli/redis-di-scaffold" >}}) command with the type of data store you want to use, for example: + + ```bash + redis-di scaffold --strategy write_behind --dir . --db-type mysql + ``` + + This creates a template `config.yaml` file and a folder named `jobs` under the current directory. + You can specify any folder name with `--dir` or use the `--preview config.yaml` option, if your Write-behind CLI is deployed inside a Kubernetes (K8s) pod, to get the `config.yaml` template to the terminal. + +- Add the connections required for downstream targets in the `connections` section of `config.yaml`, for example: + + ```yaml + connections: + my-postgres: + type: postgresql + host: 172.17.0.3 + port: 5432 + database: postgres + user: postgres + password: postgres + #query_args: + # sslmode: verify-ca + # sslrootcert: /opt/work/ssl/ca.crt + # sslkey: /opt/work/ssl/client.key + # sslcert: /opt/work/ssl/client.crt + my-mysql: + type: mysql + host: 172.17.0.4 + port: 3306 + database: test + user: test + password: test + #connect_args: + # ssl_ca: /opt/ssl/ca.crt + # ssl_cert: /opt/ssl/client.crt + # ssl_key: /opt/ssl/client.key + ``` + + This is the first section of the `config.yaml` file and typically the only one you'll need to edit. The `connections` section is designed to have many target connections. In the previous example, there are two downstream connections named `my-postgres` and `my-mysql`. + + To obtain a secured connection using TLS, you can add more `connect_args` or `query_args`, depending on the specific target database terminology, to the connection definition. + + The name can be any arbitrary name as long as it is: + + - Unique for this Write-behind engine + - Referenced correctly by the jobs in the respective YAML files + +In order to prepare the pipeline, fill in the correct information for the target data store. Secrets can be provided using a reference to a secret ([see below](#how-to-provide-targets-secrets)) or by specifying a path. + +The `applier` section has information about the batch size and frequency used to write data to the target. + +Some of the `applier` attributes such as `target_data_type`, `wait_enabled`, and `retry_on_replica_failure` are specific for the Write-behind ingest pipeline and can be ignored. + +### Write-behind jobs + +Write-behind jobs are a mandatory part of the write-behind pipeline configuration. +Under the `jobs` directory (parallel to `config.yaml`) you should have a job definition in a YAML file for every key pattern you want to write to a downstream database table. + +The YAML file can be named using the destination table name or another naming convention, but it has to have a unique name. + +Job definition has the following structure: + +```yaml +source: + redis: + key_pattern: emp:* + trigger: write-behind + exclude_commands: ["json.del"] +transform: + - uses: rename_field + with: + from_field: after.country + to_field: after.my_country +output: + - uses: relational.write + with: + connection: my-connection + schema: my-schema + table: my-table + keys: + - first_name + - last_name + mapping: + - first_name + - last_name + - address + - gender +``` + +### Source section + +The `source` section describes the source of data in the pipeline. 
+ +The `redis` section is common for every pipeline initiated by an event in Redis, such as applying changes to data. In the case of write-behind, it has the information required to activate a pipeline dealing with changes to data. It includes the following attributes: + +- The `key_pattern` attribute specifies the pattern of Redis keys to listen on. The pattern must correspond to keys that are of Hash or JSON type. + +- The `exclude_commands` attribute specifies which commands to ignore. For example, if you listen on a key pattern with Hash values, you can exclude the `HDEL` command so no data deletions will propagate to the downstream database. If you don't specify this attribute, Write-behind acts on all relevant commands. +- The `trigger` attribute is mandatory and must be set to `write-behind`. + +- The `row_format` attribute can be used with the value `full` to receive both the `before` and `after` sections of the payload. Note that for write-behind events the `before` value of the key is never provided. + + > Note: Write-behind does not support the [`expired`]({{< relref "/develop/use/keyspace-notifications" >}}#events-generated-by-different-commands) event. Therefore, keys that are expired in Redis will not be deleted from the target database automatically. +> Notes: The `redis` attribute is a breaking change replacing the `keyspace` attribute. The `key_pattern` attribute replaces the `pattern` attribute. The `exclude_commands` attributes replaces the `exclude-commands` attribute. If you upgrade to version 0.105 and beyond, you must edit your existing jobs and redeploy them. + +### Output section + +The `output` section is critical. It specifies a reference to a connection from the `config.yaml` `connections` section: + +- The `uses` attribute specifies the type of **writer** Write-behind will use to prepare and write the data to the target. + In this example, it is `relational.write`, a writer that translates the data into a SQL statement with the specific dialect of the downstream relational database. + For a full list of supported writers, see + [data transformation block types]({{< relref "/integrate/write-behind/reference/data-transformation-block-types" >}}). + +- The `schema` attribute specifies the schema/database to use (different database have different names for schema in the object hierarchy). + +- The `table` attribute specifies the downstream table to use. + +- The `keys` section specifies the field(s) in the table that are the unique constraints in that table. + +- The `mapping` section is used to map database columns to Redis fields with different names or to expressions. The mapping can be all Redis data fields or a subset of them. + +> Note: The columns used in `keys` will be automatically included, so there's no need to repeat them in the `mapping` section. + +### Apply filters and transformations to write-behind + +The Write-behind jobs can apply filters and transformations to the data before it is written to the target. Specify the filters and transformations under the `transform` section. + +#### Filters + +Use filters to skip some of the data and not apply it to target. +Filters can apply simple or complex expressions that take as arguments the Redis entry key, fields, and even the change op code (create, delete, update, etc.). +See [Filter]({{< relref "/integrate/write-behind/reference/data-transformation-block-types/filter" >}}) for more information. 
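+
+For example, a filter step that keeps only records whose `status` field is `active` might look like the sketch below. This assumes the filter block follows the same `uses`/`with` pattern and `expression`/`language` attributes used by the other transformation blocks in this guide; the field name and expression are illustrative:
+
+```yaml
+transform:
+  - uses: filter
+    with:
+      language: jmespath
+      expression: status == 'active'   # records that do not match are skipped
+```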
+ +#### Transformations + +Transformations manipulate the data in one of the following ways: + +- Renaming a field +- Adding a field +- Removing a field +- Mapping source fields to use in output + +To learn more about transformations, see +[data transformation pipeline]({{< relref "/integrate/write-behind/data-transformation/data-transformation-pipeline" >}}). + +## Provide target's secrets + +The target's secrets (such as TLS certificates) can be read from a path on the Redis node's file system. This allows the consumption of secrets injected from secret stores. + +## Deploy the write-behind pipeline + +To start the pipeline, run the +[`deploy`]({{< relref "/integrate/write-behind/reference/cli/redis-di-deploy" >}}) command: + +```bash +redis-di deploy +``` + +You can check that the pipeline is running, receiving, and writing data using the +[`status`]({{< relref "/integrate/write-behind/reference/cli/redis-di-status" >}}) command: + +```bash +redis-di status +``` + +## Monitor the write-behind pipeline + +The Write-behind pipeline collects the following metrics: + +| Metric Description | Metric in [Prometheus](https://prometheus.io/) | +| ----------------------------------- | --------------------------------------------------------------------------------------------------- | +| Total incoming events by stream | Calculated as a Prometheus DB query: `sum(pending, rejected, filtered, inserted, updated, deleted)` | +| Created incoming events by stream | `rdi_metrics_incoming_entries{data_source:"…",operation="inserted"}` | +| Updated incoming events by stream | `rdi_metrics_incoming_entries{data_source:"…",operation="updated"}` | +| Deleted incoming events by stream | `rdi_metrics_incoming_entries{data_source:"…",operation="deleted"}` | +| Filtered incoming events by stream | `rdi_metrics_incoming_entries{data_source:"…",operation="filtered"}` | +| Malformed incoming events by stream | `rdi_metrics_incoming_entries{data_source:"…",operation="rejected"}` | +| Total events per stream (snapshot) | `rdi_metrics_stream_size{data_source:""}` | +| Time in stream (snapshot) | `rdi_metrics_stream_last_latency_ms{data_source:"…"}` | + +To use the metrics you can either: + +- Run the [`status`]({{< relref "/integrate/write-behind/reference/cli/redis-di-status" >}}) command: + + ```bash + redis-di status + ``` + +- Scrape the metrics using Write-behind's Prometheus exporter + +## Upgrading + +If you need to upgrade Write-behind, you should use the +[`upgrade`]({{< relref "/integrate/write-behind/reference/cli/redis-di-upgrade" >}}) command that provides for a zero downtime upgrade: + +```bash +redis-di upgrade ... +``` +--- +Title: Write-behind configuration guide +aliases: /integrate/redis-data-integration/write-behind/configuration-guide/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Configure write-behind to your database +draft: null +group: di +hidden: false +linkTitle: Configuration +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +This guide shows you how to configure write-behind target connections. + +## Overview +Write-behind target connections are the connections established between a Write-behind instance and a target database in a +[write-behind scenario]({{< relref "/integrate/write-behind/quickstart/write-behind-guide" >}}). +Write-behind is used to replicate changes captured in a Write-behind-enabled Redis Enterprise database to a target database. 
+The connections must be configured in the `config.yaml` before deploying any jobs and must follow one of the formats shown below. Multiple connections can be specified in the `connections` section. + +### For relational datastores + +```yaml +connections: + my-sql-datastore: + type: # mysql | oracle | postgresql | sqlserver + host: # IP address or FQDN of a database host and instance + port: # database port + database: # name of the database + user: # database user + password: # database password + # connect_args: # optional connection parameters passed to the driver - these are driver specific + # query_args: # optional parameters for SQL query execution - typically not required for Write-behind operation +``` + +### For non-relational datastores + +```yaml +connections: + my-nosql-datastore: + type: # cassandra + hosts: # array of IP addresses or host names of a datastore nodes + port: # database port + database: # name of the database + user: # database user + password: # database password +``` + +## Microsoft SQL Server + +Microsoft SQL Server supports different authentication mechanisms (SQL Server Authentication and Integrated Windows Authentication) and protocols (NTLM and Kerberos). Write-behind can use all of them. However, systems that use Kerberos may require some additional configuration. + +### Account permissions + +To enable Write-behind to work with a SQL Server database, check that the account you specify was assigned at least the `db_datawriter` role. + +### SQL Server authentication + +To use SQL Server authentication mode, create a user with login credentials and then assign the necessary permissions for the target database to that user. + +```yaml +connections: + mssql2019-sqlauth: + type: sqlserver + host: ip-10-0-0-5.internal + port: 1433 + database: rdi_wb_database + user: rdi_user + password: secret +``` + +### Windows authentication + +To use Windows authentication mode, you need to create a Windows or Active Directory account that has the necessary permissions to access the target database, and is able to log into SQL Server. The Linux machine hosting Write-behind can be configured to support the NTLM authentication protocol. + +For NTLM: + +```yaml +connections: + mssql2019-ntlm: + type: sqlserver + host: ip-10-0-0-5.internal + port: 1433 + database: rdi_wb_database + user: MYDOMAIN\rdi_service_account # company-domain\service-account + password: secret # NTLM requires to provide a password +``` + +> Note: User must be specified with the domain name for Windows Authentication to work correctly. + +After you configure the Write-behind connection and deploy the write-behind job, run the following SQL query to have the operator check if Write-behind is using the expected authentication mechanism and protocol. Note: this operation may require the `sysadmin` role. + +```sql +SELECT session_id, auth_scheme FROM sys.dm_exec_connections; +``` + +The results indicate which `auth_scheme` is used by each session and may take values `SQL`, `NTLM`, and `Kerberos`.--- +Title: Data transformation pipeline +aliases: null +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Learn how to transform data to Redis types +group: di +linkTitle: Data transformation pipeline +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 20 +--- + +Write-behind's data transformation capabilities allow users to transform their data beyond the default translation of source types to Redis types. 
The transformation involves no coding. Instead, it is described in a set of human readable YAML files, one per source table. + +The ingested format and types are different from one source to another. Currently, the only supported source is [Debezium](https://debezium.io/). The first transformation from Debezium types to native JSON with Redis types is done automatically without any need for user instructions. Then, this JSON is passed on to the user defined transformation pipeline. + +Each job describes the transformation logic to perform on data from a single source. The source is typically a database table or collection and is specified as the full name of this table/collection. The job may include filtering logic to skip data that matches a condition. Other logical steps in the job will transform the data into the desired output that will be stored in Redis as hashes or JSON. + +{{< image filename="/images/rdi/data-transformation-pipeline.png" >}} + +## Default job +In situations where there is a need to perform a transformation on all ingested records without creating a specific job for specific tables, the default job is used. The transformation associated with this job will be applied to all tables that lack their own explicitly defined jobs. The default job must have a table name of "*", and only one instance of this type of job is permitted. + +For example, the default job can streamline tasks such as adding a prefix or postfix to Redis keys, or adding fields to new hashes and JSONs without customizing each source table. + +Currently, the default job is supported for ingest pipelines only. + +### Example +This example demonstrates the process of adding an `app_code` field with a value of `foo` using the [add_field]({{}}) block to all tables that lack explicitly defined jobs. Additionally, it appends an `aws` prefix and a `gcp` postfix to every generated hash key. + +default.yaml +```yaml +source: + table: "*" + row_format: full +transform: + - uses: add_field + with: + fields: + - field: after.app_code + expression: "`foo`" + language: jmespath +output: + - uses: redis.write + with: + data_type: hash + key: + expression: concat(['aws', '#', table, '#', keys(key)[0], '#', values(key)[0], '#gcp']) + language: jmespath +``` + +## Jobs + +Each job is defined in a separate YAML file. All of these files will be uploaded to Write-behind using the `deploy` command. +For more information, see [deploy configuration](#deploy-configuration)). If you are using the +[scaffold]({{< relref "/integrate/write-behind/reference/cli/redis-di-scaffold" >}}) command, +place the job files in the `jobs` folder. + +### Job YAML structure + +#### Fields + +- `source`: + + This section describes the table that the job operates on: + + - `server_name`: logical server name (optional). Corresponds to the `debezium.source.topic.prefix` property specified in Debezium Server's `application.properties` config file + - `db`: database name (optional) + - `schema`: database schema (optional) + - `table`: database table name + - `row_format`: format of the data to be transformed: `data_only` (default) - only payload, full - complete change record + +> Note: Any reference to the properties `server_name`, `db`, `schema`, and `table` will be treated by default as case insensitive. This can be changed by setting `case_insensitive` to `false`. + +> Cassandra only: In Cassandra, a `keyspace` is roughly the equivalent to a `schema` in other databases. 
Write-behind uses the `schema` property declared in a job file to match the `keyspace` attribute of the incoming change record. + +> MongoDB only: In MongoDB, a `replica set` is a cluster of shards with data and can be regarded as roughly equivalent to a `schema` in a relational database. A MongoDB `collection` is similar to a `table` in other databases. Write-behind uses the `schema` and `table` properties declared in a job file to match the `replica set` and `collection` attributes of the incoming change record, respectively. + +- `transform`: + + This section includes a series of blocks that define how the data will be transformed. + For more information, see + [supported blocks]({{< relref "/integrate/write-behind/reference/data-transformation-block-types" >}}) + and [JMESPath custom functions]({{< relref "/integrate/write-behind/reference/jmespath-custom-functions.md" >}}). + +- `output`: + + This section defines the output targets for processed data: + + - Cassandra: + - `uses`: `cassandra.write`: write into a Cassandra data store + - `with`: + - `connection`: connection name + - `keyspace`: keyspace + - `table`: target table + - `keys`: array of key columns + - `mapping`: array of mapping columns + - `opcode_field`: the name of the field in the payload that holds the operation (c - create, d - delete, u - update) for this record in the database + - Redis: + - `uses`: `redis.write`: write to a Redis data structure. Multiple blocks of this type are allowed in the same job + - `with`: + - `connection`: connection name as defined in `config.yaml` (by default, the connection named 'target' is used) + - `data_type`: target data structure when writing data to Redis (hash, json, set and stream are supported values) + - `key`: this allows you to override the key of the record by applying custom logic: + - `expression`: expression to execute + - `language`: expression language, JMESPath or SQL + - `expire`: positive integer value indicating a number of seconds for the key to expire. If not set, the key will never expire + - SQL: + - `uses`: `relational.write`: write into a SQL-compatible data store + - `with`: + - `connection`: connection name + - `schema`: schema + - `table`: target table name + - `keys`: array of key columns + - `mapping`: array of mapping columns + - `opcode_field`: the name of the field in the payload that holds the operation (c - create, d - delete, u - update) for this record in the database + +#### Notes + +- `source` is required. +- Either `transform`, `key`, or both should be specified. + +#### Using key in transformations + +To access the Redis key (for example in a write-behind job) you will need to take the following steps: + +- Set `row_format: full` to allow access to the key that is part of the full data entry. +- Use the expression `key.key` to get the Redis key as a string. + +#### Before and after values + +Update events typically report `before` and `after` sections, providing access to the data state before and after the update. +To access the "before" values explicitly, you will need to: + +- Set `row_format: full` to allow access to the key that is part of the full data entry. +- Use the `before.` pattern. + +### Example + +This example shows how to rename the `fname` field to `first_name` in the table `emp` using the `rename_field` block. It also demonstrates how to set the key of this record instead of relying on the default logic. 
+ +redislabs.dbo.emp.yaml + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: rename_field + with: + from_field: fname + to_field: first_name +output: + - uses: redis.write + with: + connection: target + key: + expression: concat(['emp:fname:',fname,':lname:',lname]) + language: jmespath +``` + +### Deploy configuration + +In order to deploy your jobs to the remote Write-behind database, run: + +```bash +redis-di deploy +``` + +### Deploy configuration on Kubernetes + +If the Write-behind CLI is deployed as a pod in a Kubernetes cluster, perform these steps to deploy your jobs: + +- Create a ConfigMap from the YAML files in your `jobs` folder: + + ```bash + kubectl create configmap redis-di-jobs --from-file=jobs/ + ``` + +- Deploy your jobs: + + ```bash + kubectl exec -it pod/redis-di-cli -- redis-di deploy + ``` + +> Note: A delay occurs between creating/modifying the ConfigMap and its availability in the `redis-di-cli` pod. Wait around 30 seconds before running the `redis-di deploy` command. + +You have two options to update the ConfigMap: + +- For smaller changes, you can edit the ConfigMap directly with this command: + + ```bash + kubectl edit configmap redis-di-jobs + ``` + +- For bigger changes such as adding another job file, edit the files in your local `jobs` folder and then run this command: + + ```bash + kubectl create configmap redis-di-jobs --from-file=jobs/ --dry-run=client -o yaml | kubectl apply -f - + ``` + +> Note: You need to run `kubectl exec -it pod/redis-di-cli -- redis-di deploy` after updating the ConfigMap with either option. +--- +Title: Write-behind to Redis Enterprise target example +aliases: null +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +linkTitle: Write-behind to Redis Enterprise target +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- + +The `redis.write` block can be used in the `output` section of the write-behind job in order to enable writing data to the Redis Enterprise target database. Multiple blocks can be used at the same time to write data to different data types. 
The following example captures data modified in the `address:*` keyspace, then creates a new JSON-like string field named `json_addr`, and, finally, writes the results to multiple keys in target Redis Enteprise database: + +```yaml +source: + redis: + trigger: write-behind + key_pattern: address:* +transform: + - uses: add_field + with: + fields: + - field: "json_addr" + expression: concat(['{"city":', city, ', "zip":', zip, '}']) + language: jmespath +output: + - uses: redis.write + with: + data_type: hash + connection: target + key: + expression: concat(['addr:org_id:', org_id, ':hash']) + language: jmespath + - uses: redis.write + with: + data_type: json + key: + expression: concat(['addr:org_id:', org_id, ':json']) + language: jmespath + on_update: merge + - uses: redis.write + with: + data_type: set + key: + expression: concat(['addresses:', country]) + language: jmespath + args: + member: json_addr + - uses: redis.write + with: + data_type: sorted_set + key: + expression: concat(['addrs_withscores:', country]) + language: jmespath + args: + score: zip + member: json_addr + - uses: redis.write + with: + data_type: stream + key: + expression: "`addresses:events`" + language: jmespath + mapping: + - org_id: message_id + - zip: zip_code + - country +``` + +Run the following command in the source Redis Enterprise database to test the job: + +```shell +127.0.0.1:12005> hset address:1 city Austin zip 78901 org_id 1 country USA +``` + +The result is five keys will be created in the target Redis Enterprise database (hash, JSON, set, sorted set, and stream): + +```shell +127.0.0.1:12000> keys * +1) "addr:org_id:1:hash" +2) "addr:org_id:1:json" +3) "addresses:USA" +4) "addrs_withscores:USA" +5) "addresses:events" +``` +--- +Title: Write-behind foreach example +aliases: null +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +linkTitle: Write-behind foreach +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- + + +The `foreach` section is used to explode a list of objects or arrays to rows in a selected target. +The `foreach` expression is structured as `:`. +The following example uses the `add_field` transformation to prepare the input JSON to the desired structure. Then, it applies `foreach` to write each `order` object as a relational database record using `keys` and `mapping`. +In this example, the `JMESPath` function `to_string` is used to flatten an array of objects, `specs`, to a string. 
+ +```yaml +source: + redis: + key_pattern: orderdetail:* + trigger: write-behind + exclude_commands: ["json.del"] +transform: + - uses: add_field + with: + fields: + - field: my_orders + language: jmespath + expression: | + orders[*].{ + code: code + periodStartTime: periodStartTime + periodEndTime: periodEndTime + specs: to_string(specs) + } +output: + - uses: relational.write + with: + connection: mssql + schema: dbo + table: OrderMaster + keys: + - Code: orderDetail.code + mapping: + - DeliveryDate: orderDetail.deliveryDate + - ProductCode: orderDetail.productCode + - CountryCode: orderDetail.countryCode + - uses: relational.write + with: + connection: mssql + schema: dbo + table: Order + foreach: "order: my_orders[]" + keys: + - Code: order.code + mapping: + - OrderDetailCode: orderDetail.code + - PeriodStartTime: order.periodStartTime + - PeriodEndTime: order.periodEndTime + - Specs: order.specs + +```--- +Title: Write-behind transformation examples +aliases: null +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: See examples of write-behind transform job configurations +group: di +linkTitle: Transformation examples +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- +--- +Title: Data transformation +aliases: null +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +hideListLinks: false +linkTitle: Data transformation +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- + +The key functionality that Write-behind performs is mapping the data coming from [Debezium Server](https://debezium.io/documentation/reference/stable/operations/debezium-server.html) (representing a Source Database row data or row state change) into a Redis key with a value of [Hash]({{< relref "/develop/data-types/hashes" >}}) or [JSON]({{< relref "/develop/data-types/json/" >}}). +There are two types of data transformations in Write-behind: + +1. By default, each source row is converted into one hash or one JSON key in Redis. + This conversion uses the Debezium schema-based conversion. The incoming data includes the schema and Write-behind uses a set of handlers to automatically convert each source column to a Redis Hash field or JSON type based on the Debezium type in the schema. See + [data type conversion]({{< relref "/integrate/write-behind/reference/data-types-conversion" >}}) + for a full reference on these conversions. + +1. If the user wants to add or modify this default mapping, Write-behind provides declarative data transformations. These transformations are represented in YAML files. Each file contains a job, which is a set of transformations per source table. See +[declarative transformations]({{< relref "/integrate/write-behind/data-transformation/data-transformation-pipeline" >}}) for more information. + +{{< image filename="/images/rdi/data-transformation-flow.png" >}} + +## More info + +--- +Title: Data type conversion +aliases: /integrate/redis-data-integration/write-behind/reference/data-types-conversion/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Data type conversion reference +group: di +linkTitle: Data type conversion +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. 
+type: integration +weight: 10 +--- + +This document describes the default conversions from the data types of the supported source databases to the target Redis data types. + +## General ANSI SQL data types + +| Source data type | Target data type for HASH | Example for HASH | +| :-- | :-- | :-- | +| array | not supported | | +| bigint | string | 2147483648 will be saved as '2147483648' | +| binary | bytes string
bytes or base64-encoded String, or a hex-encoded String, based on the connector configuration property setting `binary.handling.mode` | When `binary.handling.mode = bytes` the string `'hello'` will be added to the table as the binary string `'0x68656C6C6F'`,will be converted by Debezium to `'aGVsbG8AAAAAAAAAAAAAAAAAAAA='` and will be stored in Redis target database as: `'aGVsbG8AAAAAAAAAAAAAAAAAAAA='` | +| bit string | not supported | | +| blob | bytes string
bytes or base64-encoded String, or a hex-encoded String, based on the connector configuration property setting `binary.handling.mode` | When `binary.handling.mode = bytes` the binary image file ,which was loaded into the table ,will be converted by Debezium to bytes and will be stored in Redis target database as bytes string | +| boolean | string | The boolean value: false will be converted by the Applier to 0 and will be saved in Redis target database as the string '0' | +| char | string | When PostgreSQL data type is char(14) 'hello world' will be saved as 'hello world   ' | +| date | string
Mapped to ms.microsec since epoch | PG field value: `'2021-01-29'` will be converted by Debezium to the int `18656` (number of days since epoch), which will be converted to the number of ms since epoch and will be stored in Redis target database as: `'1611878400000'` | +| integer | string | `2147483640` will be saved as `'2147483640'` | +| interval | not supported | | +| null | | The field with `null` value will be sent by Debezium as `null` and will not be stored in Redis target database | +| numeric | string
Debezium configuration parameter `decimal.handling.mode` determines how the connector maps decimal values. When `decimal.handling.mode = 'precision'` the binary string received by Debezium will be converted to its corresponding numeric value and will be stored in Redis target database as string | PG field value: `4521398.56` will be converted by Debezium to binary string `'GvMbUA=='`, which will be converted to numeric value and will be stored in Redis target database as: `'4521398.56'` | +| smallint | string | 32767 will be saved as '32767' | +| text | string | 'This is a very long text for the PostgreSQL text column' | +| time | string
mapped to number of seconds past midnight | `'14:23:46'` will be converted to `'51826000000'` sec | +| PostgreSQL, Oracle, Cassandra: timestamp
MySQL, SQL Server: datetime | string
mapped to ms.microsec since epoch.
SQL Server datetime format: `YYYY-MM-DD hh:mm:ss[.nnn]`,
range: `1753-01-01` through `9999-12-31` | PG field value: `'2018-06-20 15:13:16.945104'` will be converted by Debezium to `'1529507596945104'` (microseconds) and will be stored in Redis target database as `'1529507596945.104'` | +| PostgreSQL: timestamptz
Oracle: timestamp with local timezone
MySQL: timestamp | string
converted to UTC and stored as number of ms.microsec since epoch | | +| timetz | string
converted to UTC and stored as number of seconds past midnight | PG field value: `'14:23:46.12345'` will be converted by Debezium to the string `'12:23:46.12345Z'` and will be stored in Redis target database as: `'44638.345'` | +| varbinary | bytes string
bytes or base64-encoded String, or a hex-encoded String, based on the connector configuration property setting `binary.handling.mode` | When `binary.handling.mode = bytes` the string `'hello'` will be added to the table as the binary string `'0x68656C6C6F'`,will be converted by Debezium to `'aGVsbG8='` and will be stored in Redis target database as: `'aGVsbG8='` | +| varchar | string | 'hello world' will be saved as 'hello world' | +| xml | string | + +| Source data type | Target data type for JSON | Example for JSON | +| :---------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| array | not supported | | +| bigint | number | 2147483648 will be saved as 2147483648 | +| binary | bytes string | When `binary.handling.mode = bytes` the string `'hello'` will be added to the table as the binary string `'0x68656C6C6F'`,will be converted by Debezium to `'aGVsbG8AAAAAAAAAAAAAAAAAAAA='` and will be stored in Redis target database as: `'aGVsbG8AAAAAAAAAAAAAAAAAAAA='` | +| bit string | not supported | | +| blob | bytes string | When `binary.handling.mode = bytes` the binary image file ,which was loaded into the table ,will be converted by Debezium to bytes and will be stored in Redis target database as bytes string | +| boolean | boolean | The boolean value:true will be saved in Redis target database as boolean data type with value True | +| char | string | When PostgreSQL data type is char(14) 'hello world' will be saved as 'hello world   ' | +| date | number
Mapped to ms.microsec since epoch | PG field value: `2021-01-29` will be converted by Debezium to the int 18656 (number of days since epoch), which will be converted to the number of ms since epoch and will be stored in Redis target database as: `1611878400000` | +| integer | number | `2147483640` will be saved as `2147483640` | +| interval | not supported | | +| null | | The field with `null` value will be sent by Debezium as `null` and will be stored in Redis target database with the value `null` | +| numeric | Debezium configuration parameter `decimal.handling.mode` determines how the connector maps decimal values. When `decimal.handling.mode = 'precision'` the binary string received by Debezium will be converted to its corresponding numeric value and will be stored in Redis target database as number. When `decimal.handling.mode = 'string'` the string received by Debezium will be stored in Redis target database as string. When `decimal.handling.mode = 'double'` the double value received by Debezium will be stored in Redis target database as number. | `decimal.handling.mode = string`: PG field value: `4521398.56` will be received by Debezium as the string `'4521398.56'` and will be stored in Redis target database as the string '`4521398.56`'. | +| smallint | number | 32767 will be saved as 32767 | +| text | string | | +| time | number
mapped to the number of seconds past midnight | `'14:23:46'` will be converted to `51826000000` sec | +| PostgreSQL, Oracle, Cassandra: timestamp
MySQL, SQL Server: datetime | decimal
mapped to ms.microsec since epoch.
SQL Server datetime format: `YYYY-MM-DD hh:mm:ss[.nnn]`,
range: `1753-01-01` through `9999-12-31` | PG field value: `'2018-06-20 15:13:16.945104'` will be converted by Debezium to `'1529507596945104'` (microseconds) and will be stored in Redis target database as `1529507596945.104` | +| PostgreSQL: timestamptz
Oracle: timestamp with local timezone
MySQL: timestamp | number
converted to UTC and stored as number of ms.microsec since epoch | | +| timetz | number
converted to UTC and stored as number of ms.microsec since epoch | PG field value: `'14:23:46.12345'` will be converted by Debezium to the string `'12:23:46.12345Z'` and will be stored in Redis target database as: `44638.345` | +| varbinary | bytes string
bytes or base64-encoded String, or a hex-encoded String, based on the connector configuration property setting `binary.handling.mode` | When `binary.handling.mode = bytes` the string `'hello'` will be added to the table as the binary string `'0x68656C6C6F'`,will be converted by Debezium to `'aGVsbG8='` and will be stored in Redis target database as: `'aGVsbG8='` | +| varchar | string | 'hello world' will be saved as 'hello world’ | +| xml | string | | + +## Cassandra data types + +| Source data type | Target data type for HASH | Example for HASH | +| :-------------------- | :------------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| ascii | string | | +| counter (64-bit long) | string | `2` | +| date | not supported | | +| decimal | not supported | | +| double | string | '1.007308023' | +| duration | not supported | | +| float | string | The number `-3.4E+38` will be received by debezium as `-3.4E38`, will be converted by the Applier to `-340000000000000000000000000000000000000` and will be saved in Redis target database as `'-340000000000000000000000000000000000000'` | +| frozen | string | `{'10.10.11.1', '10.10.10.1', '10.10.12.1'}` will be saved in Redis target database as a string: `'{'10.10.11.1', '10.10.10.1', '10.10.12.1'}'` | +| frozen udt | string | `{'city': 'City','street': 'Street','streetno': 2,'zipcode': '02-212'}` will be saved in Redis target database as a string: `'{'city': 'City','street': 'Street','streetno': 2,'zipcode': '02-212'}'` | +| inet | string
IPv4 and IPv6 network addresses | The IP address `4.35.221.243` will be converted by debezium to `'/4.35.221.243'` and will be saved in Redis target database as `'/4.35.221.243'` | +| list | string | The list `['New York', 'Paris','London','New York']` will be sent by debezium as array of strings: `['New York', 'Paris','London','New York']` and will be saved in Redis target database as the string `"['New York', 'Paris','London','New York']"` | +| map | string | `{'fruit' : 'Apple', 'band' : 'Beatles'}` will be saved in Redis target database as a string: `'{'fruit' : 'Apple', 'band' : 'Beatles'}'` | +| set | string | The set `{'y','n'}` will be saved in Redis target database as the string `'{'y','n'}' | +| tinyint | string | | +| uuid | string | b9980b96-a85b-411c-b8e7-4be55c123793 | +| tuple | string | The tuple `{ "field1": 1, "field2": "testing tuple", "field3": 2 }` will be saved in Redis target database as a string: `' { "field1": 1, "field2": "testing tuple", "field3": 2 }'` | +| varint | not supported | | + +| Source data type | Target data type for JSON | Example for JSON | +| :-------------------- | :------------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| ascii | string | | +| counter (64-bit long) | number | 2 | +| date | not supported | | +| decimal | not supported | | +| double | number | 1.007308023 | +| duration | not supported | | +| float | number | The number -3.4E+38 will be received by debezium as `-3.4E38` and will be saved in Redis target database as `-1e+38` | +| frozen | array | `{'10.10.11.1', '10.10.10.1', '10.10.12.1'}` will be saved in Redis target database as an array: `{'/10.10.11.1', '/10.10.10.1', '/10.10.12.1'}` | +| frozen udt | object | `{'city': 'City','street': 'Street','streetno': 2,'zipcode': '02-212'}` will be saved in Redis target database as string will be saved in Redis target database as an object: `{'city': 'City','street': 'Street','streetno': 2,'zipcode': '02-212'}` | +| inet | string
IPv4 and IPv6 network addresses | The IP address `4.35.221.243` will be converted by debezium to `'/4.35.221.243'` and will be saved in Redis target database as `'/4.35.221.243'` | +| list | array | The list `['New York', 'Paris','London','New York']` will be sent by debezium as array of strings: `['New York', 'Paris','London','New York']` and will be saved in Redis target database as an array: `['New York', 'Paris','London','New York']` | +| map | object | `{'fruit' : 'Apple', 'band' : 'Beatles'}` will be saved in Redis target database as an object: `{'fruit' : 'Apple', 'band' : 'Beatles'}` | +| set | array | The set `{'y','n'}` will be saved in Redis target database as an array: `{'y','n'}` | +| tinyint | | | +| uuid | string | b9980b96-a85b-411c-b8e7-4be55c123793 | +| tuple | object | The tuple `{ "field1": 1, "field2": "testing tuple", "field3": 2 }` will be saved in Redis target database as an object: `{ "field1": 1, "field2": "testing tuple", "field3": 2 }` | +| varint | not supported | | + +## MySQL and MariaDB data types + +| Source data type | Source data type for HASH | Example for Hash | +| ------------------ | ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| enum | string | MySQL enum value `'cat'` will be stored in Redis target database as `'cat'` | +| geometry | not supported | | +| geometrycollection | not supported | | +| json | string | {"pid": 102, "name": "name2"} | +| linestring | not supported | | +| multilinestring | not supported | | +| multipoint | not supported | | +| multypolygon | not supported | | +| polygon | not supported | | +| set | string | `'1,2,3'` will be stored in Redis target database as: `'1,2,3'` | +| year | string | The value `'55'` will be stored in the database as `2055` and will be sent by Debezium as int32 data type with value `2055`. It will be stored in Redis target database as the string `'2055'` as well | + +| Source data type | Target data type for JSON | Example for JSON | +| ------------------ | ------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| enum | string | MySQL enum value `'cat'` will be stored in Redis target database as `'cat'` | +| geometry | not supported | | +| geometrycollection | not supported | | +| json | object | {"pid": 102, "name": "name2"} | +| linestring | not supported | | +| multilinestring | not supported | | +| multipoint | not supported | | +| multypolygon | not supported | | +| polygon | not supported | | +| set | string | `'1,2,3'` will be stored in Redis target database as: `'1,2,3'` | +| year | number | The value `'55'` will be stored in the database as `2055` and will be sent by Debezium as int32 data type with value `2055`. 
It will be stored in Redis target database as the number `2055` as well | + +## Oracle data types + +| Source data type | Source data type for HASH | Example for Hash | +| ---------------------------------------------------------------------------------- | --------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| bfile | not supported | | +| binary_double | string | '1.7E+308' | +| binary_float | string | '3.40282E38' | +| clob | string | large block of text | +| float,real,double precision
real = FLOAT(63),double precision = FLOAT(126). | string | The value `-3.402E+38` will be saved in Redis target database as the string `'-340200000000000000000000000000000000000'` when Debezium configuration parameter `decimal.handling.mode = 'double'` | +| long raw | not supported | | +| nchar | string - is Unicode data type that can store Unicode characters | The string `'testing hebrew שלום'` will be stored in Redis target database as '`testing hebrew \xd7\xa9\xd7\x9c\xd7\x95\xd7\x9d`         ' | +| nclob | not supported | | +| number(p,s) | string | '10385274000.32' | +| nvarchar | string - is Unicode data type that can store Unicode characters | The string `testing hebrew שלום'` will be stored in Redis target database as '`testing hebrew \xd7\xa9\xd7\x9c\xd7\x95\xd7\x9d`' | +| raw | not supported | | +| rowid | string | AAAR1QAAOAAAACFAAA | +| timestamp with tz | string | `'2021-12-30 14:23:46'` will be converted by Debezium to the string `'2021-12-30T14:23:46+02:00'` and will be stored in Redis target database as the string: `'1611878400000'` which is the number of ms since epoch | +| urowid | not supported | + +| Source data type | Target data type for JSON | Example for JSON | +| ---------------------------------------------------------------------------------- | --------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| bfile | not supported | | +| binary_double | number | 1.7E+308 | +| binary_float | number | 3.40282E38 | +| clob | string | large block of text | +| float,real,double precision
real = FLOAT(63),double precision = FLOAT(126). | number/string | the value `-3.402E+38` will be saved in Redis target database as the number `-340200000000000000000000000000000000000` when Debezium configuration parameter `decimal.handling.mode = 'double'` | +| long raw | not supported | | +| nchar | string - is Unicode data type that can store Unicode characters | The string `'testing hebrew שלום'` will be stored in Redis target database as '`testing hebrew \xd7\xa9\xd7\x9c\xd7\x95\xd7\x9d`         ' | +| nclob | not supported | | +| number(p,s) | number | 10385274000.32 | +| nvarchar | string - is Unicode data type that can store Unicode characters | The string `testing hebrew שלום'` will be stored in Redis target database as '`testing hebrew \xd7\xa9\xd7\x9c\xd7\x95\xd7\x9d`' | +| raw | not supported | | +| rowid | string | AAAR1QAAOAAAACFAAA | +| timestamp with tz | number | `'2021-12-30 14:23:46'` will be converted by Debezium to the string `'2021-12-30T14:23:46+02:00'` and will be stored in Redis target database as the number: `1611878400000` which is the number of ms since epoch | +| urowid | not supported | | + +## PostgreSQL data types + +| Source data type | Source data type for HASH | Example for Hash | +| ---------------- | --------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| box | not supported | | +| cidr | string
IPv4 and IPv6 networks addresses. | '4.35.221.243/32' | +| circle | not supported | | +| domain | string | | +| hstore | string | '{"pages":"368","author":"Katherine Dunn","category":"fiction"}' | +| inet | string
IPv4 and IPv6 network addresses | '4.35.221.243' | +| json | string | "{"guid":  "9c36adc1-7fb5-4d5b-83b4-90356a46061a",
"name": "Angela Barton",
"is_active": null,
"company": "Magnafone"
"address": \"178 Howard Place, Gulf, Washington,702",
"registered": "2009-11-07T08:53:22 +08:00",
"latitude": 19.793713,
"longitude": 86.513373,
"tags": ["enim","aliquip","qui\" ]}" | +| line | not supported | | +| macaddr | string
mac addresses | '08:00:2b:01:02:03' | +| money | string | When `decimal.handling.mode = 'double'` the money value `-8793780.01` will be received by Debezium as `-8793780.01` with double data type, and will be stored in Redis target database as the string'`-8793780.01`' | +| path | not supported | | +| point | not supported | | +| polygon | not supported | | +| uuid | string | 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11' | +| | | | + +| Source data type | Target data type for JSON | Example for JSON | +| ---------------- | --------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| box | not supported | | +| cidr | string
IPv4 and IPv6 networks addresses. | '4.35.221.243/32' | +| circle | not supported | | +| domain | string | | +| hstore | string | '{"pages":"368","author":"Katherine Dunn","category":"fiction"} | +| inet | string
IPv4 and IPv6 network addresses | '4.35.221.243' | +| json | object | {"guid":  "9c36adc1-7fb5-4d5b-83b4-90356a46061a",
"name": "Angela Barton",
"is_active": null,
"company": "Magnafone"
"address": \"178 Howard Place, Gulf, Washington,702",
"registered": "2009-11-07T08:53:22 +08:00",
"latitude": 19.793713,
"longitude": 86.513373,
"tags": ["enim","aliquip","qui\" ]} | +| line | not supported | | +| macaddr | string
mac addresses | '08:00:2b:01:02:03' | +| money | string | When `decimal.handling.mode = 'double'` the money value `-8793780.01` will be received by Debezium as `-8793780.01`, with double data type and will be stored in Redis target database as the number `-8793780.01` | +| path | not supported | | +| point | not supported | | +| polygon | not supported | | +| uuid | string | 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11' | +| | | | + +## SQL server data types + +| Source data type | Source data type for HASH | Example for Hash | +| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| bit | string | When value `>0` it will be converted by Debezium to true and will be saved in Redis target database as '`1`' while when value `= 0` it will be converted by Debezium to false and will be saved in Redis target database as '`0`' | +| datetime2 | string
Represents the number of milliseconds since the epoch, and does not include timezone information. | When Debezium configuration parameter `time.precision.mode='connect'`, the value '`2018-06-20 15:13:16.945104`' will be converted by Debezium to the value '`1529507596945104`' and will be saved in Redis target database as the string '`1529507596945.104`' | +| datetimeoffset | string | When Debezium configuration parameter `decimal.handling.mode = 'precision'`, the datetimeoffset datatype value '`12-10-25 12:32:10 +01:00`' will be converted to the string '`2025-12-10T12:32:10+01:00`' and will be saved in Redis target database as `1765366330000` | +| decimal,float,real | string
range of values: decimal-10^38 +1 to 10^38,float-1.79E+308 to -2.23E-308, 0 and 2.23E-308 to 1.79E+308,real:- 3.40E + 38 to -1.18E - 38, 0 and 1.18E - 38 to 3.40E + 38 | When Debezium configuration parameter `decimal.handling.mode = 'precision'` the value '`-3.402E+38`' will be converted by Debezium to the binary string '`/wAP3QCzc/wpiIGe8AAAAAA=`' and will be saved in Redis target database as the string '`-340200000000000000000000000000000000000`' | +| image | string
Variable-length binary data from 0 through 2,147,483,647 bytes. | | +| money | string
range of values: -922,337,203,685,477,5808 to 922,337,203,685,477.5807 | When Debezium configuration parameter `decimal.handling.mode = 'precision'` the value `922337203685477.5807` will be converted by Debezium to the binary '`f/////////8=`' string and will be saved in Redis target database as the string '`922337203685477.5807`' | +| nchar | string - fixed-size string data , Unicode data type that can store Unicode characters | | +| nvarchar | string - variable-size string data, Unicode data type that can store Unicode characters | | +| numeric | string
range of values - 10^38 +1 to 10^38 | When Debezium configuration parameter `time.precision.mode = 'connect'` and `decimal.handling.mode = 'precision'` , the value `1.00E +33` will be converted by Debezium to the binary string `'SztMqFqGw1MAAAAAAAAAAA=='` and will be saved in Redis target database as the string `'1000000000000000000000000000000000'` | +| rowversion | string
data type that exposes automatically generated, unique binary numbers within a database. rowversion is generally used as a mechanism for version-stamping table rows. | 0x00000000000007D0 | +| smalldatetime | string
represents the number of milliseconds past the epoch, and does not include timezone information. | `'2018-06-20 15:13:16`' will be converted by Debezium to `1529507580000` ms past the epoch and will be saved in Write-behind as the string '`1529507580000`'.
The number of seconds (16) will not be included in the conversion and will not be saved in Redis target database | +| smallmoney | string
range of values: - 214,748.3648 to 214,748.3647 | When Debezium configuration parameter `decimal.handling.mode = 'string'` the value `-214748.3648` will be converted by Debezium to the string `'-214748.3648'` and will be saved in Redis target database as '`-214748.3648`' | +| Spatial Geometry Types | not supported | | +| Spatial Geography Types | not supported | | +| table | not supported | | +| text | Variable-length Unicode data | | +| uniqueidentifier | string | 06BEEF00-F859-406B-9A60-0A56AB81A97 | + +| Source data type | Target data type for JSON | Example for JSON | +| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| bit | boolean | When value `>0` it will be converted by Debezium to true and will be saved in Redis target database as `True` while when value `= 0` it will be converted by Debezium to false and will be saved in Redis target database as `False` | +| datetime2 | number
Represents the number of milliseconds since the epoch, and does not include timezone information. | When Debezium configuration parameter `time.precision.mode='connect'`, the value '`2018-06-20 15:13:16.945104`' will be converted by Debezium to the value '`1529507596945104`' and will be saved in Redis target database as the number `1529507596945.104` | +| datetimeoffset | number | When Debezium configuration parameter `decimal.handling.mode = 'precision'`, the datetimeoffset datatype value '`12-10-25 12:32:10 +01:00`' will be converted to the string '`2025-12-10T12:32:10+01:00`' and will be saved in Redis target database as the number `1765366330000` | +| decimal,float,real | number/string | When Debezium configuration parameter `decimal.handling.mode = 'precision'` the value '`-3.402E+38`' will be converted by Debezium to the binary string '`/wAP3QCzc/wpiIGe8AAAAAA=`' and will be saved in Redis target database as the number `-340200000000000000000000000000000000000` | +| image | string | | +| money | number/string depending on the value of decimal.handling.mode | When Debezium configuration parameter `decimal.handling.mode = 'precision'` the value `922337203685477.5807` will be converted by Debezium to the binary '`f/////////8=`' string and will be saved in Redis target database as the number `922337203685477.5807` | +| nchar | string | | +| nvarchar | string - variable-size string data, Unicode data type that can store Unicode characters | | +| numeric | number | When Debezium configuration parameter `time.precision.mode = 'connect'` and `decimal.handling.mode = 'precision'`, the value `1.00E +33` will be converted by Debezium to the binary string `'SztMqFqGw1MAAAAAAAAAAA=='` and will be saved in Redis target database as the number `1000000000000000000000000000000000` | +| rowversion | string
data type that exposes automatically generated, unique binary numbers within a database. rowversion is generally used as a mechanism for version-stamping table rows. | 0x00000000000007D0 | +| smalldatetime | number
represents the number of milliseconds past the epoch, and does not include timezone information. | `'2018-06-20 15:13:16`' will be converted by Debezium to `1529507580000` ms past the epoch and will be saved in Write-behind as the number `1529507580000`.
The number of seconds (16) will not be included in the conversion and will not be saved in Redis target database | +| smallmoney | number | When Debezium configuration parameter `decimal.handling.mode = 'string'` the value `-214748.3648` will be converted by Debezium to the string `'-214748.3648'` and will be saved in Redis target database as '`-214748.3648`' | +| Spatial Geometry Types | not supported | | +| Spatial Geography Types | not supported | | +| table | not supported | | +| text | | | +| uniqueidentifier | string | 06BEEF00-F859-406B-9A60-0A56AB81A97 | + +\* fields with "not supported" data type will not appear in target hash. +--- +Title: JMESPath custom functions +aliases: /integrate/redis-data-integration/write-behind/reference/jmespath-custom-functions/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: JMESPath custom function reference +group: di +linkTitle: JMESPath custom functions +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 40 +--- + +| Function | Description | Example | Comments | +| --- | --- | --- | --- | +| `base64_decode` | Decodes a base64(RFC 4648) encoded string | Input: `{"encoded": "SGVsbG8gV29ybGQh"}`
Expression: `base64_decode(encoded)`
Output: `Hello World!` | | +| `capitalize` | Capitalizes all the words in the string | Input: `{"name": "john doe"}`
Expression: `capitalize(name)`
Output: `John Doe` | | +| `concat` | Concatenates an array of variables or literals | Input: `{"fname": "john", "lname": "doe"}`
Expression: `concat([fname, ' ' ,lname])`
Output: `john doe` | This is equivalent to the more verbose built-in expression: `' '.join([fname,lname])` | +| `filter_entries` | Filters entries in a dictionary (object) using the given JMESPath predicate | Input: `{ "name": "John", "age": 30, "country": "US", "score": 15}`
Expression: `` filter_entries(@, `key == 'name' \|\| key == 'age'`)``
Output:`{"name": "John", "age": 30 }` | | +| `from_entries` | Converts an array of objects with `key` and `value` properties into a single object | Input: `[{"key": "name", "value": "John"}, {"key": "age", "value": 30}, {"key": "city", "value": null}]`
Expression: `from_entries(@)`
Output: `{"name": "John", "age": 30, "city": null}` | | +| `hash` | Calculates a hash using the `hash_name` hash function and returns its hexadecimal representation | Input: `{"some_str": "some_value"}`
Expression: `hash(some_str, `sha1`)`
Output: `8c818171573b03feeae08b0b4ffeb6999e3afc05` | Supported algorithms: sha1 (default), sha256, md5, sha384, sha3_384, blake2b, sha512, sha3_224, sha224, sha3_256, sha3_512, blake2s | +| `in` | Checks if an element matches any value in a list of values | Input: `{"el": "b"}`
Expression: `in(el, `["a", "b", "c"]`)`
Output: `True` | | +| `left` | Returns a specified number of characters from the start of a given text string | Input: `{"greeting": "hello world!"}`
Expression: `left(greeting, `5`)`
Output: `hello` | | +| `lower` | Converts all uppercase characters in a string into lowercase characters | Input: `{"fname": "John"}`
Expression: `lower(fname)`
Output: `john` | | +| `mid` | Returns a specified number of characters from the middle of a given text string | Input: `{"greeting": "hello world!"}`
Expression: `mid(greeting, `4`, `3`)`
Output: `o w` | | +| `json_parse` | Returns parsed object from the given json string | Input: `{"data": '{"greeting": "hello world!"}'}`
Expression: `parse_json(data)`
Output: `{"greeting": "hello world!"}` | | +| `regex_replace` | Replaces a string that matches a regular expression | Input: `{"text": "Banana Bannnana"}`
Expression: `regex_replace(text, 'Ban\w+', 'Apple Apple')`
Output: `Apple Apple` | | +| `replace` | Replaces all the occurrences of a substring with a new one | Input: `{"sentence": "one four three four!"}`
Expression: `replace(sentence, 'four', 'two')`
Output: `one two three two!` | | +| `right` | Returns a specified number of characters from the end of a given text string | Input: `{"greeting": "hello world!"}`
Expression: `right(greeting, `6`)`
Output: `world!` | | +| `split` | Splits a string into a list of strings after breaking the given string by the specified delimiter (comma by default) | Input: `{"departments": "finance,hr,r&d"}`
Expression: `split(departments)`
Output: `['finance', 'hr', 'r&d']` | Default delimiter is comma - a different delimiter can be passed to the function as the second argument, for example: `split(departments, ';')` | +| `time_delta_days` | Returns the number of days between a given `dt` and now (positive) or the number of days that have passed from now (negative) | Input: `{"dt": '2021-10-06T18:56:16.701670+00:00'}`
Expression: `time_delta_days(dt)`
Output: `365` | If `dt` is a string, ISO datetime (2011-11-04T00:05:23+04:00, for example) is assumed. If `dt` is a number, Unix timestamp (1320365123, for example) is assumed. | +| `time_delta_seconds` | Returns the number of seconds between a given `dt` and now (positive) or the number of seconds that have passed from now (negative) | Input: `{"dt": '2021-10-06T18:56:16.701670+00:00'}`
Expression: `time_delta_seconds(dt)`
Output: `31557600` | If `dt` is a string, ISO datetime (2011-11-04T00:05:23+04:00, for example) is assumed. If `dt` is a number, Unix timestamp (1320365123, for example) is assumed. | +| `to_entries` | Converts a given object into an array of objects with `key` and `value` properties | Input: `{"name": "John", "age": 30, "city": null}`
Expression: `to_entries(@)`
Output: `[{"key": "name", "value": "John"}, {"key": "age", "value": 30}, {"key": "city", "value": null}]` | | +| `upper` | Converts all lowercase characters in a string into uppercase characters | Input: `{"fname": "john"}`
Expression: `upper(fname)`
Output: `JOHN` | | +| `uuid` | Generates a random UUID4 and returns it as a string in standard format | Input: None
Expression: `uuid()`
Output: `3264b35c-ff5d-44a8-8bc7-9be409dac2b7` | | +--- +Title: redis-di delete-all-contexts +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Deletes all contexts +group: di +linkTitle: redis-di delete-all-contexts +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di delete-all-contexts [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `force`: + + - Type: BOOL + - Default: `false` + - Usage: `--force +-f` + + Force operation. skips verification prompts + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di delete-all-contexts [OPTIONS] + + Deletes all contexts + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + -f, --force Force operation. skips verification prompts + --help Show this message and exit. +``` +--- +Title: redis-di create +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Creates the Write-behind Database instance +group: di +linkTitle: redis-di create +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di create [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `silent`: + + - Type: BOOL + - Default: `false` + - Usage: `--silent` + + Silent install. Do not prompt to enter missing parameters + +- `no_configure`: + + - Type: BOOL + - Default: `false` + - Usage: `--no-configure` + + Do not install Write-behind Engine to the Write-behind Database + +- `cluster_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--cluster-host` + + Host/IP of Redis Enterprise Cluster (service name in case of k8s) + +- `cluster_api_port` (REQUIRED): + + - Type: + - Default: `9443` + - Usage: `--cluster-api-port` + + API Port of Redis Enterprise Cluster + +- `cluster_user` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--cluster-user` + + Redis Enterprise Cluster username with either DB Member, Cluster Member or Cluster Admin roles + +- `cluster_password`: + + - Type: STRING + - Default: `none` + - Usage: `--cluster-password` + + Redis Enterprise Cluster Password + +- `rdi_port`: + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port for the new Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `` + - Usage: `--rdi-password` + + Password for the new Write-behind Database (alphanumeric characters with zero or more of the following: ! 
& # $ ^ < > -) + +- `rdi_memory`: + + - Type: =30> + - Default: `100` + - Usage: `--rdi-memory` + + Memory for Write-behind Database (in MB) + +- `rdi_shards`: + + - Type: =1> + - Default: `1` + - Usage: `--rdi-shards` + + Number of database server-side shards + +- `replication`: + + - Type: BOOL + - Default: `false` + - Usage: `--replication` + + In-memory database replication + +- `redisgears_module`: + + - Type: STRING + - Default: `` + - Usage: `--redisgears-module` + + RedisGears module file + +- `with_rejson`: + + - Type: BOOL + - Default: `false` + - Usage: `--with-rejson` + + Include ReJSON in the Write-behind Database + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di create [OPTIONS] + + Creates the Write-behind Database instance + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --silent Silent install. Do not prompt to enter + missing parameters + --no-configure Do not install Write-behind Engine to the Write-behind + Database + --cluster-host TEXT Host/IP of Redis Enterprise Cluster (service + name in case of k8s) [required] + --cluster-api-port INTEGER RANGE + API Port of Redis Enterprise Cluster + [default: 9443; 1000<=x<=65535; required] + --cluster-user TEXT Redis Enterprise Cluster username with + either DB Member, Cluster Member or Cluster + Admin roles [required] + --cluster-password TEXT Redis Enterprise Cluster Password + --rdi-port INTEGER RANGE Port for the new Write-behind Database + [1000<=x<=65535] + --rdi-password TEXT Password for the new Write-behind Database + (alphanumeric characters with zero or more + of the following: ! & # $ ^ < > -) + --rdi-memory INTEGER RANGE Memory for Write-behind Database (in MB) [x>=30] + --rdi-shards INTEGER RANGE Number of database server-side shards + [x>=1] + --replication In-memory database replication + --redisgears-module TEXT RedisGears module file + --help Show this message and exit. +``` +--- +Title: redis-di deploy +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Deploys the Write-behind configurations including target +group: di +linkTitle: redis-di deploy +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. 
+type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di deploy [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `directory`: + + - Type: STRING + - Default: `.` + - Usage: `--dir` + + Directory containing Write-behind configuration + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di deploy [OPTIONS] + + Deploys the Write-behind configurations including target + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --dir TEXT Directory containing Write-behind configuration + [default: .] + --help Show this message and exit. +``` +--- +Title: redis-di list-jobs +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Lists transformation engine's jobs +group: di +linkTitle: redis-di list-jobs +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. 
+type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di list-jobs [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di list-jobs [OPTIONS] + + Lists transformation engine's jobs + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --help Show this message and exit. +``` +--- +Title: redis-di stop +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Stops the pipeline +group: di +linkTitle: redis-di stop +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di stop [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. 
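+
+For example, a typical invocation might look like the following (a minimal sketch; the host, port, and password values are placeholders for your own deployment):
+
+```bash
+redis-di stop --rdi-host localhost --rdi-port 12001 --rdi-password secret
+```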
+ +## CLI help + +``` +Usage: redis-di stop [OPTIONS] + + Stops the pipeline + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --help Show this message and exit. +``` +--- +Title: redis-di get-rejected +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Returns all the stored rejected entries +group: di +linkTitle: redis-di get-rejected +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di get-rejected [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `max_records`: + + - Type: =1> + - Default: `none` + - Usage: `--max-records` + + Maximum rejected records per DLQ + +- `oldest`: + + - Type: BOOL + - Default: `false` + - Usage: `--oldest +-o` + + Displays the oldest rejected records. If omitted, most resent records will be retrieved + +- `dlq_name`: + + - Type: STRING + - Default: `none` + - Usage: `--dlq-name` + + Only prints the rejected records for the specified DLQ (Dead Letter Queue) name + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di get-rejected [OPTIONS] + + Returns all the stored rejected entries + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --max-records INTEGER RANGE Maximum rejected records per DLQ [x>=1] + -o, --oldest Displays the oldest rejected records. 
If + omitted, most resent records will be + retrieved + --dlq-name TEXT Only prints the rejected records for the + specified DLQ (Dead Letter Queue) name + --help Show this message and exit. +``` +--- +Title: redis-di delete +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Deletes Write-behind database permanently +group: di +linkTitle: redis-di delete +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di delete [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `cluster_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--cluster-host` + + Host/IP of Redis Enterprise Cluster (service name in case of k8s) + +- `cluster_api_port` (REQUIRED): + + - Type: + - Default: `9443` + - Usage: `--cluster-api-port` + + API Port of Redis Enterprise Cluster + +- `cluster_user` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--cluster-user` + + Redis Enterprise Cluster username with either DB Member, Cluster Member or Cluster Admin roles + +- `cluster_password`: + + - Type: STRING + - Default: `none` + - Usage: `--cluster-password` + + Redis Enterprise Cluster Password + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `force`: + + - Type: BOOL + - Default: `false` + - Usage: `--force +-f` + + Force operation. skips verification prompts + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. 
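+
+For example, a hypothetical `redis-di delete` invocation might look like the following, where the cluster and database connection details are placeholder values. Add `-f`/`--force` to skip the verification prompt:
+
+```
+redis-di delete --cluster-host cluster.example.com --cluster-user admin@example.com --rdi-host localhost --rdi-port 12001
+```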
+ +## CLI help + +``` +Usage: redis-di delete [OPTIONS] + + Deletes Write-behind database permanently + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --cluster-host TEXT Host/IP of Redis Enterprise Cluster (service + name in case of k8s) [required] + --cluster-api-port INTEGER RANGE + API Port of Redis Enterprise Cluster + [default: 9443; 1000<=x<=65535; required] + --cluster-user TEXT Redis Enterprise Cluster username with + either DB Member, Cluster Member or Cluster + Admin roles [required] + --cluster-password TEXT Redis Enterprise Cluster Password + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + -f, --force Force operation. skips verification prompts + --help Show this message and exit. +``` +--- +Title: redis-di configure +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Configures the Write-behind Database so it is ready to process data +group: di +linkTitle: redis-di configure +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di configure [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di configure [OPTIONS] + + Configures the Write-behind Database so it is ready to process data + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --help Show this message and exit. 
+``` +--- +Title: redis-di dump-support-package +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Dumps Write-behind support package +group: di +linkTitle: redis-di dump-support-package +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di dump-support-package [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `directory`: + + - Type: STRING + - Default: `.` + - Usage: `--dir` + + Directory where the support file should be generated + +- `dump_rejected`: + + - Type: INT + - Default: `none` + - Usage: `--dump-rejected` + + Dumps rejected records + +- `trace_timeout`: + + - Type: + - Default: `none` + - Usage: `--trace-timeout` + + Stops the trace after exceeding this timeout (in seconds) + +- `max_change_records`: + + - Type: =1> + - Default: `10` + - Usage: `--max-change-records` + + Maximum traced change records per shard + +- `trace_only_rejected`: + + - Type: BOOL + - Default: `false` + - Usage: `--trace-only-rejected` + + Trace only rejected change records + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di dump-support-package [OPTIONS] + + Dumps Write-behind support package + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --dir TEXT Directory where the support file should be + generated [default: .] + --dump-rejected INTEGER Dumps rejected records + --trace-timeout INTEGER RANGE Stops the trace after exceeding this timeout + (in seconds) [1<=x<=600] + --max-change-records INTEGER RANGE + Maximum traced change records per shard + [x>=1] + --trace-only-rejected Trace only rejected change records + --help Show this message and exit. 
+``` +--- +Title: redis-di list-contexts +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Lists all saved contexts +group: di +linkTitle: redis-di list-contexts +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di list-contexts [OPTIONS] +``` + +## Options + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di list-contexts [OPTIONS] + + Lists all saved contexts + +Options: + --help Show this message and exit. +``` +--- +Title: redis-di set-secret +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Writes a secret to Redis secret store +group: di +linkTitle: redis-di set-secret +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di set-secret [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `cluster_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--cluster-host` + + Host/IP of Redis Enterprise Cluster (service name in case of k8s) + +- `cluster_api_port` (REQUIRED): + + - Type: + - Default: `9443` + - Usage: `--cluster-api-port` + + API Port of Redis Enterprise Cluster + +- `cluster_user` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--cluster-user` + + Redis Enterprise Cluster username with either DB Member, Cluster Member or Cluster Admin roles + +- `cluster_password`: + + - Type: STRING + - Default: `none` + - Usage: `--cluster-password` + + Redis Enterprise Cluster Password + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `secret_name` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--secret-name` + + The name of the secret + +- `secret_value` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--secret-value` + + The value of the secret + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. 
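+
+For example, a hypothetical `redis-di set-secret` invocation that stores a secret named `source-db-password` might look like the following, where the connection details and the secret value are placeholder values:
+
+```
+redis-di set-secret --cluster-host cluster.example.com --cluster-user admin@example.com --rdi-host localhost --rdi-port 12001 --secret-name source-db-password --secret-value <secret-value>
+```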
+ +## CLI help + +``` +Usage: redis-di set-secret [OPTIONS] + + Writes a secret to Redis secret store + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --cluster-host TEXT Host/IP of Redis Enterprise Cluster (service + name in case of k8s) [required] + --cluster-api-port INTEGER RANGE + API Port of Redis Enterprise Cluster + [default: 9443; 1000<=x<=65535; required] + --cluster-user TEXT Redis Enterprise Cluster username with + either DB Member, Cluster Member or Cluster + Admin roles [required] + --cluster-password TEXT Redis Enterprise Cluster Password + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --secret-name TEXT The name of the secret [required] + --secret-value TEXT The value of the secret [required] + --help Show this message and exit. +``` +--- +Title: redis-di add-context +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Adds a new context +group: di +linkTitle: redis-di add-context +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di add-context [OPTIONS] CONTEXT_NAME +``` + +## Options + +- `context_name` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `context-name` + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `cluster_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--cluster-host` + + Host/IP of Redis Enterprise Cluster (service name in case of k8s) + +- `cluster_api_port` (REQUIRED): + + - Type: + - Default: `9443` + - Usage: `--cluster-api-port` + + API Port of Redis Enterprise Cluster + +- `cluster_user` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--cluster-user` + + Redis Enterprise Cluster username with either DB Member, Cluster Member or Cluster Admin roles + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. 
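+
+For example, a hypothetical `redis-di add-context` invocation that saves a context named `production` might look like the following, where the cluster and database connection details are placeholder values:
+
+```
+redis-di add-context production --cluster-host cluster.example.com --cluster-user admin@example.com --rdi-host localhost --rdi-port 12001
+```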
+ +## CLI help + +``` +Usage: redis-di add-context [OPTIONS] CONTEXT_NAME + + Adds a new context + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --cluster-host TEXT Host/IP of Redis Enterprise Cluster (service + name in case of k8s) [required] + --cluster-api-port INTEGER RANGE + API Port of Redis Enterprise Cluster + [default: 9443; 1000<=x<=65535; required] + --cluster-user TEXT Redis Enterprise Cluster username with + either DB Member, Cluster Member or Cluster + Admin roles [required] + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --help Show this message and exit. +``` +--- +Title: redis-di delete-context +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Deletes a context +group: di +linkTitle: redis-di delete-context +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di delete-context [OPTIONS] CONTEXT_NAME +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `context_name` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `context-name` + +- `force`: + + - Type: BOOL + - Default: `false` + - Usage: `--force +-f` + + Force operation. skips verification prompts + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di delete-context [OPTIONS] CONTEXT_NAME + + Deletes a context + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + -f, --force Force operation. skips verification prompts + --help Show this message and exit. +``` +--- +Title: redis-di set-context +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Sets a context to be the active one +group: di +linkTitle: redis-di set-context +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di set-context [OPTIONS] CONTEXT_NAME +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `context_name` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `context-name` + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di set-context [OPTIONS] CONTEXT_NAME + + Sets a context to be the active one + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --help Show this message and exit. +``` +--- +Title: redis-di start +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Starts the pipeline +group: di +linkTitle: redis-di start +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. 
+type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di start [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di start [OPTIONS] + + Starts the pipeline + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --help Show this message and exit. +``` +--- +Title: redis-di monitor +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Monitors Write-behind by collecting metrics and exporting to Prometheus +group: di +linkTitle: redis-di monitor +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. 
+type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di monitor [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `exporter_port`: + + - Type: + - Default: `9121` + - Usage: `--exporter-port` + + HTTP port to start the exporter on + +- `collect_interval`: + + - Type: + - Default: `5` + - Usage: `--collect-interval` + + Metrics collection interval (seconds) + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di monitor [OPTIONS] + + Monitors Write-behind by collecting metrics and exporting to Prometheus + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --exporter-port INTEGER RANGE HTTP port to start the exporter on + [1000<=x<=65535] + --collect-interval INTEGER RANGE + Metrics collection interval (seconds) + [1<=x<=60] + --help Show this message and exit. +``` +--- +Title: redis-di reset +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Resets the pipeline into initial full sync mode +group: di +linkTitle: redis-di reset +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. 
+type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di reset [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `force`: + + - Type: BOOL + - Default: `false` + - Usage: `--force +-f` + + Force operation. skips verification prompts + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di reset [OPTIONS] + + Resets the pipeline into initial full sync mode + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + -f, --force Force operation. skips verification prompts + --help Show this message and exit. +``` +--- +Title: redis-di upgrade +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Upgrades Write-behind Engine without losing data or downtime +group: di +linkTitle: redis-di upgrade +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. 
+type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di upgrade [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `cluster_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--cluster-host` + + Host/IP of Redis Enterprise Cluster (service name in case of k8s) + +- `cluster_api_port` (REQUIRED): + + - Type: + - Default: `9443` + - Usage: `--cluster-api-port` + + API Port of Redis Enterprise Cluster + +- `cluster_user` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--cluster-user` + + Redis Enterprise Cluster username with either DB Member, Cluster Member or Cluster Admin roles + +- `cluster_password`: + + - Type: STRING + - Default: `none` + - Usage: `--cluster-password` + + Redis Enterprise Cluster Password + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `force`: + + - Type: BOOL + - Default: `false` + - Usage: `--force +-f` + + Force upgrade/downgrade. skips verification prompts + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di upgrade [OPTIONS] + + Upgrades Write-behind Engine without losing data or downtime + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --cluster-host TEXT Host/IP of Redis Enterprise Cluster (service + name in case of k8s) [required] + --cluster-api-port INTEGER RANGE + API Port of Redis Enterprise Cluster + [default: 9443; 1000<=x<=65535; required] + --cluster-user TEXT Redis Enterprise Cluster username with + either DB Member, Cluster Member or Cluster + Admin roles [required] + --cluster-password TEXT Redis Enterprise Cluster Password + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + -f, --force Force upgrade/downgrade. skips verification + prompts + --help Show this message and exit. +``` +--- +Title: redis-di status +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Displays the status of the pipeline end to end +group: di +linkTitle: redis-di status +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. 
+type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di status [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `live`: + + - Type: BOOL + - Default: `false` + - Usage: `--live +-l` + + Live data flow monitoring + +- `page_number`: + + - Type: =1> + - Default: `none` + - Usage: `--page-number +-p` + + Set the page number (live mode only) + +- `page_size`: + + - Type: =1> + - Default: `none` + - Usage: `--page-size +-s` + + Set the page size (live mode only) + +- `ingested_only`: + + - Type: BOOL + - Default: `false` + - Usage: `--ingested-only +-i` + + Display ingested data streams (live mode only) + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di status [OPTIONS] + + Displays the status of the pipeline end to end + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + -l, --live Live data flow monitoring + -p, --page-number INTEGER RANGE + Set the page number (live mode only) [x>=1] + -s, --page-size INTEGER RANGE Set the page size (live mode only) [x>=1] + -i, --ingested-only Display ingested data streams (live mode + only) + --help Show this message and exit. +``` +--- +Title: redis-di +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: A command line tool to manage & configure Write-behind +group: di +linkTitle: redis-di +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di [OPTIONS] COMMAND [ARGS]... +``` + +## Options + +- `version`: + + - Type: BOOL + - Default: `false` + - Usage: `--version` + + Show the version and exit. + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di [OPTIONS] COMMAND [ARGS]... + + A command line tool to manage & configure Write-behind + +Options: + --version Show the version and exit. + --help Show this message and exit. 
+ +Commands: + add-context Adds a new context + configure Configures the Write-behind Database so it is ready to... + create Creates the Write-behind Database instance + delete Deletes Write-behind database permanently + delete-all-contexts Deletes all contexts + delete-context Deletes a context + deploy Deploys the Write-behind configurations including target + describe-job Describes a transformation engine's job + dump-support-package Dumps Write-behind support package + get-rejected Returns all the stored rejected entries + list-contexts Lists all saved contexts + list-jobs Lists transformation engine's jobs + monitor Monitors Write-behind by collecting metrics and exporting... + reset Resets the pipeline into initial full sync mode + scaffold Generates configuration files for Write-behind and... + set-context Sets a context to be the active one + set-secret Writes a secret to Redis secret store + start Starts the pipeline + status Displays the status of the pipeline end to end + stop Stops the pipeline + trace Starts a trace session for troubleshooting data... + upgrade Upgrades Write-behind Engine without losing data or downtime +``` +--- +Title: redis-di describe-job +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Describes a transformation engine's job +group: di +linkTitle: redis-di describe-job +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di describe-job [OPTIONS] JOB_NAME +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `job_name` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `job-name` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di describe-job [OPTIONS] JOB_NAME + + Describes a transformation engine's job + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --help Show this message and exit. 
+``` +--- +Title: redis-di scaffold +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: + Generates configuration files for Write-behind and Debezium (when ingesting data + to Redis) +group: di +linkTitle: redis-di scaffold +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di scaffold [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `db_type` (REQUIRED): + + - Type: Choice([, , , , ]) + - Default: `none` + - Usage: `--db-type` + + DB type + +- `strategy`: + + - Type: Choice([, ]) + - Default: `ingest` + - Usage: `--strategy` + + Strategy + + Output to directory or stdout + +- `directory`: + + - Type: STRING + - Default: `none` + - Usage: `--dir` + + Directory containing Write-behind configuration + +- `preview`: + + - Type: Choice(['debezium/application.properties', 'config.yaml']) + - Default: `none` + - Usage: `--preview` + + Print the content of specified config file to CLI output + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di scaffold [OPTIONS] + + Generates configuration files for Write-behind and Debezium (when ingesting data to + Redis) + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --db-type [mysql|oracle|postgresql|redis|sqlserver] + DB type [required] + --strategy [ingest|write_behind] + Strategy [default: ingest] + Output formats: [mutually_exclusive, required] + Output to directory or stdout + --dir TEXT Directory containing Write-behind configuration + --preview [debezium/application.properties|config.yaml] + Print the content of specified config file + to CLI output + --help Show this message and exit. +``` +--- +Title: CLI +aliases: /integrate/redis-data-integration/write-behind/reference/cli/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Write-behind CLI reference +group: di +hideListLinks: false +linkTitle: CLI +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- +--- +Title: redis-di trace +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Starts a trace session for troubleshooting data transformation +group: di +linkTitle: redis-di trace +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. 
+type: integration +weight: 10 +--- + +## Usage + +``` +Usage: redis-di trace [OPTIONS] +``` + +## Options + +- `loglevel`: + + - Type: Choice(['DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--loglevel +-log-level` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of Write-behind Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of Write-behind Database + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + Write-behind Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `max_change_records`: + + - Type: =1> + - Default: `10` + - Usage: `--max-change-records` + + Maximum traced change records per shard + +- `timeout` (REQUIRED): + + - Type: + - Default: `20` + - Usage: `--timeout` + + Stops the trace after exceeding this timeout (in seconds) + +- `trace_only_rejected`: + + - Type: BOOL + - Default: `false` + - Usage: `--trace-only-rejected` + + Trace only rejected change records + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di trace [OPTIONS] + + Starts a trace session for troubleshooting data transformation + +Options: + -log-level, --loglevel [DEBUG|INFO|WARN|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of Write-behind Database [required] + --rdi-port INTEGER RANGE Port of Write-behind Database [1000<=x<=65535; + required] + --rdi-password TEXT Write-behind Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --max-change-records INTEGER RANGE + Maximum traced change records per shard + [x>=1] + --timeout INTEGER RANGE Stops the trace after exceeding this timeout + (in seconds) [default: 20; 1<=x<=600; + required] + --trace-only-rejected Trace only rejected change records + --help Show this message and exit. +``` +--- +Title: filter +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Filter records +group: di +linkTitle: filter +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Filter records + +**Properties** + +| Name | Type | Description | Required | +| -------------- | -------- | --------------------------------------------- | -------- | +| **expression** | `string` | Expression
| yes |
+| **language** | `string` | Language. Enum: `"jmespath"`, `"sql"`
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: filter + with: + language: sql + expression: age>20 +``` +--- +Title: redis.write +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Write to a Redis Enterprise database +group: di +linkTitle: redis.write +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Write to a Redis Enterprise database + +**Properties** + +| Name | Type | Description | Required | +| ----------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------- | +| **connection** | `string` | Name of Redis connection specified in `config.yaml`.
Defaults to connection named `target`. | no |
+| **data_type** | `string` | Type of Redis target data structure. Enum: `hash` (default), `json`, `set`, `sorted_set`, `stream`, `string`. Takes precedence over system property `target_data_type`. | no |
+| [**nest**](#nest) | `object` | Nest (embed) object within a different key. If nesting is specified, the following parameters are ignored: `key`, `args` and `on_update`. | no |
+| **key** | `object` | Definition of the target Redis key. | yes |
+| ∟ **expression** | `string` | Expression used to calculate the target key. | yes |
+| ∟ **language** | `string` | Language used to define the expression. Enum: `jmespath`, `sql`. | yes |
+| [**args**](#args) | `object` | Arguments for modifying the target key. Specific to the data type. | no |
+| **mapping** | `array` | Array of fields (or `field: alias` pairs) to be written to a Redis key. Supported for hashes, json documents and streams only. | no |
+| **on_update** | `string` | Target key update strategy. Enum: `merge`, `replace` (default). | no |
+| **expire** | `integer` | TTL in seconds for the modified key to expire.
If not specified (or `expire: 0`), the target key will never expire. | no | + +> Notes: + +- Job parameters always override system properties. In particular, `data_type` will override `target_data_type` and `on_update` will override `json_update_strategy` properties respectively. +- Mapping for **JSON documents** supports nested paths (e.g. `path.to.field`) which results in creating a nested element in Redis key. When a dot is used in a field name, it must be escaped with a backslash (e.g. `path\.to\.field`). Nested paths are not supported for hashes and streams. +- For **strings** Write-behind will automatically assume `on_update: replace` regardless of what was declared in the job file. Appends and increments are not currently supported. +- For **streams** Write-behind will ignore `on_update` property since they are append only. + +> Notes: + +- Job parameters always override system properties. In particular, `data_type` will override `target_data_type` and `on_update` will override `json_update_strategy` properties respectively. +- Mapping for JSON documents supports nested paths (e.g. `path.to.field`) which results in creating a nested element in Redis key. When a dot is used in a field name, it must be escaped with a backslash (e.g. `path\.to\.field`). Nested paths are not supported for hashes and streams. + +**Example** + +```yaml +source: + server_name: chinook + schema: public + table: invoice +output: + # this block will use the default connection: target - since no explicit connection is specified, + # the data will be written in a JSON format as the data_type: json is specified for the block + - uses: redis.write + with: + data_type: json + key: + expression: concat(['invoice_id:', InvoiceId]) + language: jmespath + mapping: # only the fields listed below will be written to a JSON document + - InvoiceId: id # this will create an element with a different name + - InvoiceDate: date + - BillingAddress: address.primary.street # this will create a nested element in the JSON document + - BillingCity: "address.primary.city name" # this will create a nested element with a space in the name + - BillingState: address.primary.state + - BillingPostalCode: "address.primary.zip\\.code" # this will create a nested element with a dot in the name + - Total # this will create an element with the same name as the original field + on_update: merge + # this block will use the explicitly specified connection: target1 - it must be defined in config.yaml + # the data will be written to the corresponding Redis set, based on a value of the key expression + - uses: redis.write + with: + connection: target + data_type: set + key: + expression: concat(['invoices:', BillingCountry]) + language: jmespath + args: + member: InvoiceId + # this block will use the explicitly specified connection: target1 - it must be defined in config.yaml + # the data will be written to the Redis sorted set named invoices:sorted as specified in the key expression + - uses: redis.write + with: + connection: target1 + data_type: sorted_set + key: + expression: "`invoices:sorted`" + language: jmespath + args: + score: Total + member: InvoiceId + # this block will use the specified connection: target2 - this, again, has to be defined in config.yaml + # the data will be written to a Redis stream named invoice:events as specified in the key expression + - uses: redis.write + with: + connection: target2 + data_type: stream + key: + expression: "`invoice:events`" + language: jmespath + mapping: # only the fields listed below will be 
written to a stream message, with two of them renamed as message_id and country + - InvoiceId: message_id + - BillingCountry: country + - Total + # this block will use the default connection: target - since no explicit connection is specified, + # the data will be written to a Redis string as the data_type: string is specified for the block + - uses: redis.write + with: + data_type: string + key: + expression: concat(['Invoice:', InvoiceId]) + language: jmespath + args: + value: Total # only the Total field will be written to a string + expire: 100 # the key will expire in 100 seconds +``` + + + +## args: object + +Arguments for modifying the target key + +**Properties** + +| Name | Type | Description | Required | +| ---------- | -------- | ---------------------------------------------------------------------------------------------- | -------- | +| **score** | `string` | Field name used as a score for sorted sets.
Valid for sorted sets only. | yes |
+| **member** | `string` | Field name used as a member for sets and sorted sets. Valid for sets and sorted sets only. | yes |
+| **value** | `string` | Field name used as a value for strings.
Valid for strings only. | yes | + + + +## nest: object + +Nest (embed) object within a different key + +**Properties** + +| Name | Type | Description | Required | +| ------------------------ | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | +| **parent** | `object` | Parent object definition. | yes | +| ∟ **server_name** | `string` | Server name. | no | +| ∟ **schema** | `string` | Schema name. | no | +| ∟ **table** | `string` | Parent table. | yes | +| **parent_key** | `string` | Field name used to identify the parent key (usually FK). | yes | +| **child_key** | `string` | Optional field name used to identify the parent key value (if different from **parent_key**) in the child record. If not specified, then the field name defined in **parent_key** is used to lookup the value. | no | +| **nesting_key** | `string` | Field name used to create the nesting key (usually PK). | yes | +| **path** | `string` | Path, where the nested object should reside in a parent document.
Must start with the root (e.g. `$.`) | yes | +| **structure** | `string` | Data structure used to represent the object in a parent document (`map` is the only supported value). | no | + +> Notes: + +- When `nest` object is defined, Write-behind will automatically assume `data_type: json` and `on_update: merge` regardless of what was declared in the job file. +- Nesting job cannot be used together with any of the these properties: `key`, `args`. The key is automatically calculated based on the following template: `::`. +- When `expire` is specified, it will be applied to the **parent** key. Therefore all nested objects will expire together with the parent key. + +**Example** + +```yaml +source: + server_name: chinook + schema: public + table: InvoiceLine +output: + - uses: redis.write + with: + nest: + parent: + # server_name: chinook + # schema: public + table: Invoice + nesting_key: InvoiceLineId + parent_key: InvoiceId + # child_key: ParentInvoiceId + path: $.InvoiceLineItems + # structure: map +``` + +> Note: In the example above `child_key` is not needed, because the FK in the child table `InvoiceLine` is defined using the same field name `InvoiceId` as in the parent table `Invoice`. If instead a FK was defined differently (e.g. InvoiceLine.ParentInvoivceId = Invoice.InvoiceId), then `child_key` parameter would be required to describe this relationship in the child job. +--- +Title: map +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Map a record into a new output based on expressions +group: di +linkTitle: map +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Map a record into a new output based on expressions + +**Properties** + +| Name | Type | Description | Required | +| ----------------------------- | ------------------ | --------------------------------------------- | -------- | +| [**expression**](#expression) | `object`, `string` | Expression
| yes |
+| **language** | `string` | Language. Enum: `"jmespath"`, `"sql"`
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: map + with: + expression: + first_name: first_name + last_name: last_name + greeting: >- + 'Hello ' || CASE WHEN gender = 'F' THEN 'Ms.' WHEN gender = 'M' THEN 'Mr.' + ELSE 'N/A' END || ' ' || full_name + country: country + full_name: full_name + language: sql +``` + +**Example** + +```yaml +source: + table: customer +transform: + - uses: map + with: + expression: | + { + "CustomerId": customer_id, + "FirstName": first_name, + "LastName": last_name, + "Company": company, + "Location": + { + "Street": address, + "City": city, + "State": state, + "Country": country, + "PostalCode": postal_code + }, + "Phone": phone, + "Fax": fax, + "Email": email + } + language: jmespath +``` + + + +## expression: object + +Expression + +**No properties.** +--- +Title: relational.write +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Write into a SQL-compatible data store +group: di +linkTitle: relational.write +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Write into a SQL-compatible data store + +**Properties** + +| Name | Type | Description | Required | +| --------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | +| **connection**
(The connection to use for loading) | `string` | Logical connection name as defined in the connections.yaml
| yes | +| **schema**
(The table schema of the target table) | `string` | If left blank, the default schema of this connection will be used as defined in the connections.yaml
| yes | +| **table**
(The target table name) | `string` | Target table name
| yes | +| [**keys**](#keys)
(Business keys to use in case of \`load_strategy\` is UPSERT or working with \`opcode_field\`) | `array` | | no | +| [**mapping**](#mapping)
(Fields to write) | `array` | | no | +| **foreach**
(Split a column into multiple records with a JMESPath expression) | `string` | Use a JMESPath expression to split a column into multiple records. The expression should be in the format `column: expression`.
Pattern: `^(?!:).*:.*(? | no | +| **opcode_field** | `string` | Name of the field in the payload that holds the operation (c - create, d - delete, u - update) for this record in the DB
| no |
+| **load_strategy** | `string` | Target load strategy
Default: `"APPEND"`
Enum: `"APPEND"`, `"REPLACE"`, `"UPSERT"`, `"TYPE2"`
| no | +| **active_record_indicator** | `string` | Used for `TYPE2` load_strategy. An SQL expression used to identify which rows are active
| no | +| [**inactive_record_mapping**](#inactive_record_mapping)
(Used for \`TYPE2\` load_strategy\. The columns mapping to use to close out an active record) | `array` | A list of columns to use. Use any valid SQL expression for the source. If 'target' is omitted, will default to the name of the source column
Default:
| no | + +**Additional Properties:** not allowed + +**No properties.** + +**Not [required1]:** +**No properties.** + +**Example** + +```yaml +id: load_snowflake +type: relational.write +properties: + connection: eu_datalake + table: employees + schema: dbo + load_strategy: APPEND +``` + + + +## keys\[\]: Business keys to use in case of \`load_strategy\` is UPSERT or working with \`opcode_field\` + +**Items: name of column** + +**No properties.** + +**Example** + +```yaml +- fname +- lname: last_name +``` + + + +## mapping\[\]: Fields to write + +**Items: name of column** + +**No properties.** + +**Example** + +```yaml +- fname +- lname: last_name +- address +- gender +``` + + + +## inactive_record_mapping\[\]: Used for \`TYPE2\` load_strategy\. The columns mapping to use to close out an active record + +A list of columns to use. Use any valid SQL expression for the source. If 'target' is omitted, will default to the name of the source column + +**No properties.** + +**Example** + +```yaml +- source: CURRENT_DATE + target: deletedAt +- source: "'Y'" + target: is_active +``` +--- +Title: remove_field +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Remove fields +group: di +linkTitle: remove_field +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Remove fields + +**Option 1 (alternative):** +Remove multiple fields + +**Properties** + +| Name | Type | Description | Required | +| ---------------------------- | ---------- | ----------- | -------- | +| [**fields**](#option1fields) | `object[]` | Fields
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: remove_field + with: + fields: + - field: credit_card + - field: name.mname +``` + +**Option 2 (alternative):** +Remove one field + +**Properties** + +| Name | Type | Description | Required | +| --------- | -------- | ----------- | -------- | +| **field** | `string` | Field
| yes | + +**Additional Properties:** not allowed +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: remove_field + with: + field: credit_card +``` + + + +## Option 1: fields\[\]: array + +Fields + +**Items** + +**Item Properties** + +| Name | Type | Description | Required | +| --------- | -------- | ----------- | -------- | +| **field** | `string` | Field
| yes | + +**Item Additional Properties:** not allowed + +**Example** + +```yaml +- {} +``` +--- +Title: rename_field +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Rename fields. All other fields remain unchanged. +group: di +linkTitle: rename_field +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Rename fields. All other fields remain unchanged. + +**Option 1 (alternative):** +Rename multiple fields + +**Properties** + +| Name | Type | Description | Required | +| ---------------------------- | ---------- | ----------- | -------- | +| [**fields**](#option1fields) | `object[]` | Fields
| yes | + +**Additional Properties:** not allowed +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: rename_field + with: + fields: + - from_field: name.lname + to_field: name.last_name + - from_field: name.fname + to_field: name.first_name +``` + +**Option 2 (alternative):** +Rename one field + +**Properties** + +| Name | Type | Description | Required | +| -------------- | -------- | --------------- | -------- | +| **from_field** | `string` | From field
| yes | +| **to_field** | `string` | To field
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: rename_field + with: + from_field: name.lname + to_field: name.last_name +``` + + + +## Option 1: fields\[\]: array + +Fields + +**Items** + +**Item Properties** + +| Name | Type | Description | Required | +| -------------- | -------- | --------------- | -------- | +| **from_field** | `string` | From field
| yes | +| **to_field** | `string` | To field
| yes | + +**Item Additional Properties:** not allowed + +**Example** + +```yaml +- {} +``` +--- +Title: cassandra.write +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Write into a Cassandra data store +group: di +linkTitle: cassandra.write +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Write into a Cassandra data store + +**Properties** + +| Name | Type | Description | Required | +| ------------------------------------------------------ | -------- | ----------------------------------------------------------------------------------------------------------------------------- | -------- | +| **connection**
(The connection to use for loading) | `string` | Logical connection name as defined in the connections.yaml
| yes | +| **keyspace** | `string` | Keyspace
| yes | +| **table**
(The target table name) | `string` | Target table name
| yes | +| [**keys**](#keys)
(Business keys) | `array` | | yes | +| [**mapping**](#mapping)
(Fields to write) | `array` | | yes | +| **opcode_field** | `string` | Name of the field in the payload that holds the operation (c - create, d - delete, u - update) for this record in the DB
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +id: load_snowflake +type: relational.write +properties: + connection: eu_datalake + table: employees + schema: dbo + load_strategy: APPEND +``` + + + +## keys\[\]: Business keys + +**Items: name of column** + +**No properties.** + +**Example** + +```yaml +- fname +- lname: last_name +``` + + + +## mapping\[\]: Fields to write + +**Items: name of column** + +**No properties.** + +**Example** + +```yaml +- fname +- lname: last_name +- address +- gender +``` +--- +Title: key +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Set the Redis key for this data entry +group: di +linkTitle: key +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Set the Redis key for this data entry + +**Properties** + +| Name | Type | Description | Required | +| -------------- | -------- | --------------------------------------------- | -------- | +| **expression** | `string` | Expression
| yes | +| **language** | `string` | Language
Enum: `"jmespath"`, `"sql"`
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +key: + expression: concat([InvoiceId, '.', CustomerId]) + language: jmespath +``` +--- +Title: add_field +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Add fields to a record +group: di +linkTitle: add_field +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Add fields to a record + +**Option 1 (alternative):** +Add multiple fields + +**Properties** + +| Name | Type | Description | Required | +| ---------------------------- | ---------- | ----------- | -------- | +| [**fields**](#option1fields) | `object[]` | Fields
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: add_field + with: + fields: + - field: name.full_name + language: jmespath + expression: concat([name.fname, ' ', name.lname]) + - field: name.fname_upper + language: jmespath + expression: upper(name.fname) +``` + +**Option 2 (alternative):** +Add one field + +**Properties** + +| Name | Type | Description | Required | +| -------------- | -------- | --------------------------------------------- | -------- | +| **field** | `string` | Field
| yes | +| **expression** | `string` | Expression
| yes | +| **language** | `string` | Language
Enum: `"jmespath"`, `"sql"`
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: add_field + with: + field: country + language: sql + expression: country_code || ' - ' || UPPER(country_name) +``` + + + +## Option 1: fields\[\]: array + +Fields + +**Items** + +**Item Properties** + +| Name | Type | Description | Required | +| -------------- | -------- | --------------------------------------------- | -------- | +| **field** | `string` | Field
| yes | +| **expression** | `string` | Expression
| yes | +| **language** | `string` | Language
Enum: `"jmespath"`, `"sql"`
| yes | + +**Item Additional Properties:** not allowed + +**Example** + +```yaml +- {} +``` +--- +Title: Data transformation block types +aliases: /integrate/redis-data-integration/write-behind/reference/data-transformation-block-types/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Data transformation block type reference +group: di +hideListLinks: false +linkTitle: Data transformation block types +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- +--- +Title: Write-behind configuration file +aliases: /integrate/redis-data-integration/write-behind/reference/config-yaml-reference/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Write-behind configuration file reference +group: di +linkTitle: Write-behind configuration file +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +## Write-behind Configuration File + +**Properties** + +| Name | Type | Description | Required | +| ------------------------------------------------------------------------------------------ | ---------------- | ----------- | -------- | +| [**applier**](#applier)
(Configuration details of Write-behind Applier Gear) | `object`, `null` | | | +| [**connections**](#connections) | `object` | | | + + + +### applier: Configuration details of Write-behind Applier Gear + +**Properties** + +| Name | Type | Description | Required | +| ------------------------------------------------------------------------------------------------------------------------------------ | ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ------------ | --- | +| **on_failed_retry_interval**
(Interval \(in seconds\) on which to perform retry on failure) | `integer`, `string` | Default: `5`
Pattern: `^\${.*}$`
Minimum: `1`
| | +| **read_batch_size**
(The batch size for reading data from source database) | `integer`, `string` | Default: `2000`
Pattern: `^\${.*}$`
Minimum: `1`
| | +| **debezium_lob_encoded_placeholder**
(Enable Debezium LOB placeholders) | `string` | Default: `"X19kZWJleml1bV91bmF2YWlsYWJsZV92YWx1ZQ=="`
| | +| **dedup**
(Enable deduplication mechanism) | `boolean` | Default: `false`
| | +| **dedup_max_size**
(Max size of the deduplication set) | `integer` | Default: `1024`
Minimum: `1`
| | +| **duration**
(Time \(in ms\) after which data will be read from stream even if read_batch_size was not reached) | `integer`, `string` | Default: `100`
Pattern: `^\${.*}$`
Minimum: `1`
| | +| **write_batch_size**
(The batch size for writing data to the target Redis database\. Should be less than or equal to the read_batch_size) | `integer`, `string` | Default: `200`
Pattern: `^\${.*}$`
Minimum: `1`
| | +| **error_handling**
(Error handling strategy: ignore \- skip, dlq \- store rejected messages in a dead letter queue) | `string` | Default: `"dlq"`
Pattern: ``^\${.\*}$ | ignore | dlq``
| | +| **dlq_max_messages**
(Dead letter queue max messages per stream) | `integer`, `string` | Default: `1000`
Pattern: `^\${.*}$`
Minimum: `1`
| | +| **target_data_type**
(Target data type: hash/json \- RedisJSON module must be in use in the target DB) | `string` | Default: `"hash"`
Pattern: ``^\${.\*}$ | hash | json``
| | +| **json_update_strategy**
(Target update strategy: replace/merge \- RedisJSON module must be in use in the target DB) | `string` | (DEPRECATED)
Property 'json_update_strategy' will be deprecated in future releases. Use 'on_update' job-level property to define the json update strategy.
Default: `"replace"`
Pattern: ``^\${.\*}$ | replace | merge``
| | +| **initial_sync_processes**
(Number of processes Write-behind Engine creates to process the initial sync with the source) | `integer`, `string` | Default: `4`
Pattern: `^\${.*}$`
Minimum: `1`
Maximum: `32`
| | +| **wait_enabled**
(Checks if the data has been written to the replica shard) | `boolean` | Default: `false`
| | +| **wait_timeout**
(Timeout in milliseconds when checking write to the replica shard) | `integer`, `string` | Default: `1000`
Pattern: `^\${.*}$`
Minimum: `1`
| | +| **retry_on_replica_failure**
(Ensures that the data has been written to the replica shard and keeps retrying if not) | `boolean` | Default: `true`
| | + +**Additional Properties:** not allowed + + +### connections: Connections + +**Properties (Pattern)** + +| Name | Type | Description | Required | +| ------------------------ | ---- | ----------- | -------- | +| **\.\*** | | | | +| **additionalProperties** | | | | +--- +Title: Write-behind reference +aliases: /integrate/redis-data-integration/write-behind/reference/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Collects reference material for Write-behind +group: di +hideListLinks: false +linkTitle: Reference +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 90 +--- +--- +Title: Write-behind configuration for mysql +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: + Describes the `application.properties` settings that configure Debezium + Server for mysql +group: di +linkTitle: mysql +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: null +--- + +## application.properties + +```properties +debezium.sink.type=redis +debezium.sink.redis.message.format=extended +debezium.sink.redis.address=: +# Comment the following line if not using a password for Write-behind. +debezium.sink.redis.password= +debezium.sink.redis.memory.limit.mb=80 +# Redis SSL/TLS +#debezium.sink.redis.ssl.enabled=true +# When Redis is configured with a replica shard, these properties allow to verify that the data has been written to the replica. +#debezium.sink.redis.wait.enabled=true +#debezium.sink.redis.wait.timeout.ms=1000 +#debezium.sink.redis.wait.retry.enabled=true +#debezium.sink.redis.wait.retry.delay.ms=1000 +#debezium.source.database.history.redis.ssl.enabled=true +# Location of the Java keystore file containing an application process' own certificate and private key. +#javax.net.ssl.keyStore= +# Password to access the private key from the keystore file specified by javax.net.ssl.keyStore. This password is used twice: To unlock the keystore file (store password), and To decrypt the private key stored in the keystore (key password). +#javax.net.ssl.keyStorePassword= +# Location of the Java keystore file containing the collection of CA certificates trusted by this application process (trust store). +#javax.net.ssl.trustStore= +# Password to unlock the keystore file (store password) specified by javax.net.ssl.trustStore. +#javax.net.ssl.trustStorePassword= + +debezium.source.connector.class=io.debezium.connector.mysql.MySqlConnector +# A numeric ID of this database client, which must be unique across all currently-running database processes in the MySQL cluster. +debezium.source.database.server.id=1 +debezium.source.offset.storage=io.debezium.storage.redis.offset.RedisOffsetBackingStore +debezium.source.topic.prefix= + +debezium.source.database.hostname= +debezium.source.database.port= +debezium.source.database.user= +debezium.source.database.password= +debezium.source.include.schema.changes=false +# Determines whether the connector should omit publishing change events when there are no modifications in the included columns. 
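+# Note: this setting generally only takes effect when a column include/exclude list is configured below,
+# and for MySQL it typically requires binlog_row_image=FULL so that unchanged column values are available for comparison.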
+debezium.source.skip.messages.without.change=true +debezium.source.offset.flush.interval.ms=1000 +debezium.source.tombstones.on.delete=false +debezium.source.schema.history.internal=io.debezium.storage.redis.history.RedisSchemaHistory + +# Important: Do NOT use `include` and `exclude` database lists at the same time, use either `include` or `exclude`. +# An optional, comma-separated list of regular expressions that match database names to be monitored. +# By default, all databases are monitored. +#debezium.source.database.include.list=,... +# An optional, comma-separated list of regular expressions that match database names for which you do not want to capture changes. +#debezium.source.database.exclude.list=,... +# Important: Do NOT use `include` and `exclude` table lists at the same time, use either `include` or `exclude`. +# An optional, comma-separated list of regular expressions that match fully-qualified table identifiers of tables whose changes you want to capture. +#debezium.source.table.include.list=,... +# An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you do not want to capture. +#debezium.source.table.exclude.list=,... + +# Important: Do NOT use include and exclude column lists at the same time, use either include or exclude. +# An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to include in change event record values. +#debezium.source.column.include.list=,... +# An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values. +#debezium.source.column.exclude.list=,... + +# Records only DDL statements that are relevant to tables whose changes are being captured by Debezium. +# In case of changing the captured tables, run `redis-di reset`. +debezium.source.schema.history.internal.store.only.captured.tables.ddl=true + +# Whether to include the detailed schema information generated by Debezium in each record written to RDI. +# Note: Including the schema reduces the initial sync throughput and is not recommended for large data sets. +debezium.source.key.converter.schemas.enable=false +debezium.source.value.converter.schemas.enable=false +# When detailed schema information is excluded, handle decimal numeric types as strings. +debezium.source.decimal.handling.mode=string + +debezium.transforms=AddPrefix +debezium.transforms.AddPrefix.type=org.apache.kafka.connect.transforms.RegexRouter +debezium.transforms.AddPrefix.regex=.* +debezium.transforms.AddPrefix.replacement=data:$0 + +# Logging +# Uncomment the following lines if running Debezium Server as a Java standalone process (non-containerized). +#quarkus.log.file.enable=true +#quarkus.log.file.path= +#quarkus.log.file.rotation.max-file-size=100M +#quarkus.log.file.rotation.rotate-on-boot=true +#quarkus.log.file.rotation.file-suffix=.yyyy-MM-dd.gz +#quarkus.log.file.rotation.max-backup-index=3 + +# The default minimum log level for every log category, change only quarkus.log.level when needed. +quarkus.log.min-level=TRACE +# The default log level for every log category. +quarkus.log.level=INFO +# Determine whether to enable the JSON console formatting extension, which disables "normal" console formatting. +quarkus.log.console.json=false +# The port on which Debezium exposes Microprofile Health endpoint and other exposed status information. 
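+# For example, once Debezium Server is running, a liveness check against this port might look like
+# (the exact health path can vary between Debezium Server versions):
+#   curl http://localhost:8088/q/health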
+quarkus.http.port=8088 +``` +--- +Title: Write-behind configuration for oracle +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: + Describes the `application.properties` settings that configure Debezium + Server for oracle +group: di +linkTitle: oracle +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: null +--- + +## application.properties + +```properties +debezium.sink.type=redis +debezium.sink.redis.message.format=extended +debezium.sink.redis.address=: +# Comment the following line if not using a password for Write-behind. +debezium.sink.redis.password= +debezium.sink.redis.memory.limit.mb=80 +# Redis SSL/TLS +#debezium.sink.redis.ssl.enabled=true +# When Redis is configured with a replica shard, these properties allow to verify that the data has been written to the replica. +#debezium.sink.redis.wait.enabled=true +#debezium.sink.redis.wait.timeout.ms=1000 +#debezium.sink.redis.wait.retry.enabled=true +#debezium.sink.redis.wait.retry.delay.ms=1000 +#debezium.source.database.history.redis.ssl.enabled=true +# Location of the Java keystore file containing an application process' own certificate and private key. +#javax.net.ssl.keyStore= +# Password to access the private key from the keystore file specified by javax.net.ssl.keyStore. This password is used twice: To unlock the keystore file (store password), and To decrypt the private key stored in the keystore (key password). +#javax.net.ssl.keyStorePassword= +# Location of the Java keystore file containing the collection of CA certificates trusted by this application process (trust store). +#javax.net.ssl.trustStore= +# Password to unlock the keystore file (store password) specified by javax.net.ssl.trustStore. +#javax.net.ssl.trustStorePassword= + +debezium.source.connector.class=io.debezium.connector.oracle.OracleConnector +debezium.source.log.mining.strategy=online_catalog +debezium.source.log.mining.transaction.retention.ms=180000 +# This mode creates a JDBC query that filters not only operation types at the database level, but also schema, table, and username include/exclude lists. +debezium.source.log.mining.query.filter.mode=in +# The name of the Oracle Pluggable Database that the connector captures changes from. +# For non-CDB installation, do not specify this property. +#debezium.source.database.pdb.name=ORCLPDB1 +# Enables capturing and serialization of large object (CLOB, NCLOB, and BLOB) column values in change events. +#debezium.source.lob.enabled=true +# Specifies the constant that the connector provides to indicate that the original value is unchanged and not provided by the database. +#debezium.source.unavailable.value.placeholder=__debezium_unavailable_value +debezium.source.offset.storage=io.debezium.storage.redis.offset.RedisOffsetBackingStore +debezium.source.topic.prefix= +debezium.source.database.dbname= + +debezium.source.database.hostname= +debezium.source.database.port= +debezium.source.database.user= +debezium.source.database.password= +debezium.source.include.schema.changes=false +# Determines whether the connector should omit publishing change events when there are no modifications in the included columns. 
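+# Note: for Oracle, detecting unchanged columns generally relies on the captured tables having
+# supplemental logging enabled for all columns, as in the usual Debezium Oracle setup.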
+debezium.source.skip.messages.without.change=true +debezium.source.offset.flush.interval.ms=1000 +debezium.source.tombstones.on.delete=false +debezium.source.schema.history.internal=io.debezium.storage.redis.history.RedisSchemaHistory + +# Important: Do NOT use `include` and `exclude` table lists at the same time, use either `include` or `exclude`. +# An optional, comma-separated list of regular expressions that match fully-qualified table identifiers of tables whose changes you want to capture. +#debezium.source.table.include.list=,... +# An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you do not want to capture. +#debezium.source.table.exclude.list=,... + +# Important: Do NOT use include and exclude column lists at the same time, use either include or exclude. +# An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to include in change event record values. +#debezium.source.column.include.list=,... +# An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values. +#debezium.source.column.exclude.list=,... + +# Records only DDL statements that are relevant to tables whose changes are being captured by Debezium. +# In case of changing the captured tables, run `redis-di reset`. +debezium.source.schema.history.internal.store.only.captured.tables.ddl=true + +# Whether to include the detailed schema information generated by Debezium in each record written to RDI. +# Note: Including the schema reduces the initial sync throughput and is not recommended for large data sets. +debezium.source.key.converter.schemas.enable=false +debezium.source.value.converter.schemas.enable=false +# When detailed schema information is excluded, handle decimal numeric types as strings. +debezium.source.decimal.handling.mode=string + +debezium.transforms=AddPrefix +debezium.transforms.AddPrefix.type=org.apache.kafka.connect.transforms.RegexRouter +debezium.transforms.AddPrefix.regex=.* +debezium.transforms.AddPrefix.replacement=data:$0 + +# Logging +# Uncomment the following lines if running Debezium Server as a Java standalone process (non-containerized). +#quarkus.log.file.enable=true +#quarkus.log.file.path= +#quarkus.log.file.rotation.max-file-size=100M +#quarkus.log.file.rotation.rotate-on-boot=true +#quarkus.log.file.rotation.file-suffix=.yyyy-MM-dd.gz +#quarkus.log.file.rotation.max-backup-index=3 + +# The default minimum log level for every log category, change only quarkus.log.level when needed. +quarkus.log.min-level=TRACE +# The default log level for every log category. +quarkus.log.level=INFO +# Determine whether to enable the JSON console formatting extension, which disables "normal" console formatting. +quarkus.log.console.json=false +# The port on which Debezium exposes Microprofile Health endpoint and other exposed status information. +quarkus.http.port=8088 +``` +--- +Title: Write-behind configuration for sqlserver +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: + Describes the `application.properties` settings that configure Debezium + Server for sqlserver +group: di +linkTitle: sqlserver +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. 
+type: integration +weight: null +--- + +## application.properties + +```properties +debezium.sink.type=redis +debezium.sink.redis.message.format=extended +debezium.sink.redis.address=: +# Comment the following line if not using a password for Write-behind. +debezium.sink.redis.password= +debezium.sink.redis.memory.limit.mb=80 +# Redis SSL/TLS +#debezium.sink.redis.ssl.enabled=true +# When Redis is configured with a replica shard, these properties allow to verify that the data has been written to the replica. +#debezium.sink.redis.wait.enabled=true +#debezium.sink.redis.wait.timeout.ms=1000 +#debezium.sink.redis.wait.retry.enabled=true +#debezium.sink.redis.wait.retry.delay.ms=1000 +#debezium.source.database.history.redis.ssl.enabled=true +# Location of the Java keystore file containing an application process' own certificate and private key. +#javax.net.ssl.keyStore= +# Password to access the private key from the keystore file specified by javax.net.ssl.keyStore. This password is used twice: To unlock the keystore file (store password), and To decrypt the private key stored in the keystore (key password). +#javax.net.ssl.keyStorePassword= +# Location of the Java keystore file containing the collection of CA certificates trusted by this application process (trust store). +#javax.net.ssl.trustStore= +# Password to unlock the keystore file (store password) specified by javax.net.ssl.trustStore. +#javax.net.ssl.trustStorePassword= + +debezium.source.connector.class=io.debezium.connector.sqlserver.SqlServerConnector +debezium.source.database.names= +debezium.source.database.encrypt=false +debezium.source.offset.storage=io.debezium.storage.redis.offset.RedisOffsetBackingStore +debezium.source.topic.prefix= + +debezium.source.database.hostname= +debezium.source.database.port= +debezium.source.database.user= +debezium.source.database.password= +debezium.source.include.schema.changes=false +# Determines whether the connector should omit publishing change events when there are no modifications in the included columns. +debezium.source.skip.messages.without.change=true +debezium.source.offset.flush.interval.ms=1000 +debezium.source.tombstones.on.delete=false +debezium.source.schema.history.internal=io.debezium.storage.redis.history.RedisSchemaHistory + +# Important: Do NOT use `include` and `exclude` table lists at the same time, use either `include` or `exclude`. +# An optional, comma-separated list of regular expressions that match fully-qualified table identifiers of tables whose changes you want to capture. +#debezium.source.table.include.list=,... +# An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you do not want to capture. +#debezium.source.table.exclude.list=,... + +# Important: Do NOT use include and exclude column lists at the same time, use either include or exclude. +# An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to include in change event record values. +#debezium.source.column.include.list=,... +# An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values. +#debezium.source.column.exclude.list=,... + +# Whether to include the detailed schema information generated by Debezium in each record written to RDI. +# Note: Including the schema reduces the initial sync throughput and is not recommended for large data sets. 
+debezium.source.key.converter.schemas.enable=false +debezium.source.value.converter.schemas.enable=false +# When detailed schema information is excluded, handle decimal numeric types as strings. +debezium.source.decimal.handling.mode=string + +debezium.transforms=AddPrefix +debezium.transforms.AddPrefix.type=org.apache.kafka.connect.transforms.RegexRouter +debezium.transforms.AddPrefix.regex=.* +debezium.transforms.AddPrefix.replacement=data:$0 + +# Logging +# Uncomment the following lines if running Debezium Server as a Java standalone process (non-containerized). +#quarkus.log.file.enable=true +#quarkus.log.file.path= +#quarkus.log.file.rotation.max-file-size=100M +#quarkus.log.file.rotation.rotate-on-boot=true +#quarkus.log.file.rotation.file-suffix=.yyyy-MM-dd.gz +#quarkus.log.file.rotation.max-backup-index=3 + +# The default minimum log level for every log category, change only quarkus.log.level when needed. +quarkus.log.min-level=TRACE +# The default log level for every log category. +quarkus.log.level=INFO +# Determine whether to enable the JSON console formatting extension, which disables "normal" console formatting. +quarkus.log.console.json=false +# The port on which Debezium exposes Microprofile Health endpoint and other exposed status information. +quarkus.http.port=8088 +``` +--- +Title: Write-behind configuration for postgresql +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: + Describes the `application.properties` settings that configure Debezium + Server for postgresql +group: di +linkTitle: postgresql +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: null +--- + +## application.properties + +```properties +debezium.sink.type=redis +debezium.sink.redis.message.format=extended +debezium.sink.redis.address=: +# Comment the following line if not using a password for Write-behind. +debezium.sink.redis.password= +debezium.sink.redis.memory.limit.mb=80 +# Redis SSL/TLS +#debezium.sink.redis.ssl.enabled=true +# When Redis is configured with a replica shard, these properties allow to verify that the data has been written to the replica. +#debezium.sink.redis.wait.enabled=true +#debezium.sink.redis.wait.timeout.ms=1000 +#debezium.sink.redis.wait.retry.enabled=true +#debezium.sink.redis.wait.retry.delay.ms=1000 +#debezium.source.database.history.redis.ssl.enabled=true +# Location of the Java keystore file containing an application process' own certificate and private key. +#javax.net.ssl.keyStore= +# Password to access the private key from the keystore file specified by javax.net.ssl.keyStore. This password is used twice: To unlock the keystore file (store password), and To decrypt the private key stored in the keystore (key password). +#javax.net.ssl.keyStorePassword= +# Location of the Java keystore file containing the collection of CA certificates trusted by this application process (trust store). +#javax.net.ssl.trustStore= +# Password to unlock the keystore file (store password) specified by javax.net.ssl.trustStore. 
+#javax.net.ssl.trustStorePassword= + +debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector +debezium.source.plugin.name=pgoutput +debezium.source.offset.storage=io.debezium.storage.redis.offset.RedisOffsetBackingStore +debezium.source.topic.prefix= +debezium.source.database.dbname= + +debezium.source.database.hostname= +debezium.source.database.port= +debezium.source.database.user= +debezium.source.database.password= +debezium.source.include.schema.changes=false +# Determines whether the connector should omit publishing change events when there are no modifications in the included columns. +# This property takes effect when the `REPLICA IDENTITY` of the table is set to `FULL`. +debezium.source.skip.messages.without.change=true +debezium.source.offset.flush.interval.ms=1000 +debezium.source.tombstones.on.delete=false +debezium.source.schema.history.internal=io.debezium.storage.redis.history.RedisSchemaHistory + +# Important: Do NOT use `include` and `exclude` table lists at the same time, use either `include` or `exclude`. +# An optional, comma-separated list of regular expressions that match fully-qualified table identifiers of tables whose changes you want to capture. +#debezium.source.table.include.list=,... +# An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you do not want to capture. +#debezium.source.table.exclude.list=,... + +# Important: Do NOT use include and exclude column lists at the same time, use either include or exclude. +# An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to include in change event record values. +#debezium.source.column.include.list=,... +# An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values. +#debezium.source.column.exclude.list=,... + +# Whether to include the detailed schema information generated by Debezium in each record written to RDI. +# Note: Including the schema reduces the initial sync throughput and is not recommended for large data sets. +debezium.source.key.converter.schemas.enable=false +debezium.source.value.converter.schemas.enable=false +# When detailed schema information is excluded, handle decimal numeric types as strings. +debezium.source.decimal.handling.mode=string + +debezium.transforms=AddPrefix +debezium.transforms.AddPrefix.type=org.apache.kafka.connect.transforms.RegexRouter +debezium.transforms.AddPrefix.regex=.* +debezium.transforms.AddPrefix.replacement=data:$0 + +# Logging +# Uncomment the following lines if running Debezium Server as a Java standalone process (non-containerized). +#quarkus.log.file.enable=true +#quarkus.log.file.path= +#quarkus.log.file.rotation.max-file-size=100M +#quarkus.log.file.rotation.rotate-on-boot=true +#quarkus.log.file.rotation.file-suffix=.yyyy-MM-dd.gz +#quarkus.log.file.rotation.max-backup-index=3 + +# The default minimum log level for every log category, change only quarkus.log.level when needed. +quarkus.log.min-level=TRACE +# The default log level for every log category. +quarkus.log.level=INFO +# Determine whether to enable the JSON console formatting extension, which disables "normal" console formatting. +quarkus.log.console.json=false +# The port on which Debezium exposes Microprofile Health endpoint and other exposed status information. 
+quarkus.http.port=8088 +``` +--- +Title: Write-behind configuration for cassandra +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: + Describes the `application.properties` settings that configure Debezium + Server for cassandra +group: di +linkTitle: cassandra +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: null +--- + +## application.properties + +```properties +debezium.sink.type=redis +debezium.sink.redis.message.format=extended +debezium.sink.redis.address=: +# Comment the following line if not using a password for Write-behind. +debezium.sink.redis.password= +debezium.sink.redis.memory.limit.mb=80 +# Redis SSL/TLS +#debezium.sink.redis.ssl.enabled=true +# When Redis is configured with a replica shard, these properties allow to verify that the data has been written to the replica. +#debezium.sink.redis.wait.enabled=true +#debezium.sink.redis.wait.timeout.ms=1000 +#debezium.sink.redis.wait.retry.enabled=true +#debezium.sink.redis.wait.retry.delay.ms=1000 +#debezium.source.database.history.redis.ssl.enabled=true +# Location of the Java keystore file containing an application process' own certificate and private key. +#javax.net.ssl.keyStore= +# Password to access the private key from the keystore file specified by javax.net.ssl.keyStore. This password is used twice: To unlock the keystore file (store password), and To decrypt the private key stored in the keystore (key password). +#javax.net.ssl.keyStorePassword= +# Location of the Java keystore file containing the collection of CA certificates trusted by this application process (trust store). +#javax.net.ssl.trustStore= +# Password to unlock the keystore file (store password) specified by javax.net.ssl.trustStore. +#javax.net.ssl.trustStorePassword= + +debezium.source.connector.class=io.debezium.connector.cassandra.Cassandra4Connector +debezium.source.snapshot.consistency=ONE +debezium.source.offset.storage=io.debezium.storage.redis.offset.RedisOffsetBackingStore +debezium.source.topic.prefix= +debezium.source.cassandra.node.id= +debezium.source.cassandra.hosts= +debezium.source.cassandra.port= +debezium.source.cassandra.config= +debezium.source.commit.log.relocation.dir= +debezium.source.commit.log.real.time.processing.enabled=true +debezium.source.commit.marked.complete.poll.interval.ms=1000 +debezium.source.http.port=8040 + +# Whether to include the detailed schema information generated by Debezium in each record written to RDI. +# Note: Including the schema reduces the initial sync throughput and is not recommended for large data sets. +debezium.source.key.converter.schemas.enable=false +debezium.source.value.converter.schemas.enable=false +# When detailed schema information is excluded, handle decimal numeric types as strings. +debezium.source.decimal.handling.mode=string + +debezium.transforms=AddPrefix +debezium.transforms.AddPrefix.type=org.apache.kafka.connect.transforms.RegexRouter +debezium.transforms.AddPrefix.regex=.* +debezium.transforms.AddPrefix.replacement=data:$0 + +# Logging +# Uncomment the following lines if running Debezium Server as a Java standalone process (non-containerized). 
+#quarkus.log.file.enable=true +#quarkus.log.file.path= +#quarkus.log.file.rotation.max-file-size=100M +#quarkus.log.file.rotation.rotate-on-boot=true +#quarkus.log.file.rotation.file-suffix=.yyyy-MM-dd.gz +#quarkus.log.file.rotation.max-backup-index=3 + +# The default minimum log level for every log category, change only quarkus.log.level when needed. +quarkus.log.min-level=TRACE +# The default log level for every log category. +quarkus.log.level=INFO +# Determine whether to enable the JSON console formatting extension, which disables "normal" console formatting. +quarkus.log.console.json=false +# The port on which Debezium exposes Microprofile Health endpoint and other exposed status information. +quarkus.http.port=8088 +``` +--- +Title: Write-behind configuration for mongodb +aliases: null +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: + Describes the `application.properties` settings that configure Debezium + Server for mongodb +group: di +linkTitle: mongodb +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: null +--- + +## application.properties + +```properties +debezium.sink.type=redis +debezium.sink.redis.message.format=extended +debezium.sink.redis.address=: +# Comment the following line if not using a password for Write-behind. +debezium.sink.redis.password= +debezium.sink.redis.memory.limit.mb=80 +# Redis SSL/TLS +#debezium.sink.redis.ssl.enabled=true +# When Redis is configured with a replica shard, these properties allow to verify that the data has been written to the replica. +#debezium.sink.redis.wait.enabled=true +#debezium.sink.redis.wait.timeout.ms=1000 +#debezium.sink.redis.wait.retry.enabled=true +#debezium.sink.redis.wait.retry.delay.ms=1000 +#debezium.source.database.history.redis.ssl.enabled=true +# Location of the Java keystore file containing an application process' own certificate and private key. +#javax.net.ssl.keyStore= +# Password to access the private key from the keystore file specified by javax.net.ssl.keyStore. This password is used twice: To unlock the keystore file (store password), and To decrypt the private key stored in the keystore (key password). +#javax.net.ssl.keyStorePassword= +# Location of the Java keystore file containing the collection of CA certificates trusted by this application process (trust store). +#javax.net.ssl.trustStore= +# Password to unlock the keystore file (store password) specified by javax.net.ssl.trustStore. +#javax.net.ssl.trustStorePassword= + +debezium.source.connector.class=io.debezium.connector.mongodb.MongoDbConnector +debezium.source.mongodb.hosts=/: +debezium.source.mongodb.connection.mode=replica_set +debezium.source.mongodb.user= +debezium.source.mongodb.password= +debezium.source.offset.storage=io.debezium.storage.redis.offset.RedisOffsetBackingStore +debezium.source.topic.prefix= + +debezium.source.offset.flush.interval.ms=1000 +debezium.source.tombstones.on.delete=false +debezium.source.schema.history.internal=io.debezium.storage.redis.history.RedisSchemaHistory + +# Important: Do NOT use `include` and `exclude` database lists at the same time, use either `include` or `exclude`. +# An optional, comma-separated list of regular expressions that match database names to be monitored. +# By default, all databases are monitored. +#debezium.source.database.include.list=,... +# An optional, comma-separated list of regular expressions that match database names for which you do not want to capture changes. 
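+# For example (hypothetical database names), to skip the MongoDB system databases:
+#debezium.source.database.exclude.list=admin,local,config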
+#debezium.source.database.exclude.list=,... +# Important: Do NOT use `include` and `exclude` collection lists at the same time, use either `include` or `exclude`. +# An optional, comma-separated list of regular expressions that match collection names to be monitored. +#debezium.source.collection.include.list=,... +# An optional, comma-separated list of regular expressions that match collection names for which you do not want to capture changes. +#debezium.source.collection.exclude.list=,... + +#An optional, comma-separated list of regular expressions that match field names for which you do not want to capture changes. +#debezium.source.field_exclude_list=, + +# Whether to include the detailed schema information generated by Debezium in each record written to RDI. +# Note: Including the schema reduces the initial sync throughput and is not recommended for large data sets. +debezium.source.key.converter.schemas.enable=false +debezium.source.value.converter.schemas.enable=false +# When detailed schema information is excluded, handle decimal numeric types as strings. +debezium.source.decimal.handling.mode=string + +debezium.transforms=AddPrefix +debezium.transforms.AddPrefix.type=org.apache.kafka.connect.transforms.RegexRouter +debezium.transforms.AddPrefix.regex=.* +debezium.transforms.AddPrefix.replacement=data:$0 + +# Logging +# Uncomment the following lines if running Debezium Server as a Java standalone process (non-containerized). +#quarkus.log.file.enable=true +#quarkus.log.file.path= +#quarkus.log.file.rotation.max-file-size=100M +#quarkus.log.file.rotation.rotate-on-boot=true +#quarkus.log.file.rotation.file-suffix=.yyyy-MM-dd.gz +#quarkus.log.file.rotation.max-backup-index=3 + +# The default minimum log level for every log category, change only quarkus.log.level when needed. +quarkus.log.min-level=TRACE +# The default log level for every log category. +quarkus.log.level=INFO +# Determine whether to enable the JSON console formatting extension, which disables "normal" console formatting. +quarkus.log.console.json=false +# The port on which Debezium exposes Microprofile Health endpoint and other exposed status information. +quarkus.http.port=8088 +``` +--- +Title: Debezium Server configuration file +aliases: /integrate/redis-data-integration/write-behind/reference/debezium/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: + Application properties settings used to configure Debezim Server for + source database servers +group: di +hideListLinks: false +linkTitle: Debezium Server configuration +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 50 +--- + +The `application.properties` file configures Debezium Server configuration to support source databases. It contains sections that define the sink connector (Redis) configuration and the source connector configuration. +This file needs to be saved in the host running Debezium Server. + +The following topics describe `application.properties` for specific database servers: +--- +Title: Write-behind (preview) +aliases: /integrate/redis-data-integration/write-behind/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +hideListLinks: false +linkTitle: Write-behind (preview) +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. 
+type: integration +weight: 2 +--- + +--- +LinkTitle: Dynatrace with Redis Enterprise +Title: Dynatrace with Redis Enterprise +alwaysopen: false +categories: +- docs +- integrate +- rs +description: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect Dynatrace to your Redis Enterprise cluster using + the Redis Dynatrace Integration. +group: observability +summary: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect Dynatrace to your Redis Enterprise cluster using + the Redis Dynatrace Integration. +type: integration +weight: 7 +--- + + +[Dynatrace](https://www.dynatrace.com/) is used by organizations of all sizes and across a wide range of industries to +enable digital transformation and cloud migration, drive collaboration among development, operations, security and +business teams, accelerate time to market for applications, reduce time to problem resolution, secure applications and +infrastructure, understand user behavior, and track key business metrics. + +The Dynatrace Integration for Redis Enterprise uses Prometheus remote write functionality to connect Prometheus data +sources to Dynatrace. This integration enables Redis Enterprise users to export metrics to Dynatrace for analysis, +and includes Redis-designed dashboards for use in monitoring Redis Enterprise clusters. + +This integration makes it possible to: +- Collect and display metrics not available in the admin console +- Set up automatic alerts for node or cluster events +- Display these metrics alongside data from other systems + +{{< image filename="/images/rs/redis-enterprise-dynatrace.png" >}} +## Install Redis' Dynatrace Integration for Redis Enterprise + +At the present time the Dynatrace integration is not signed by Dynatrace, meaning that it will be necessary to download +the source configuration and dashboards and assemble them and sign them cryptologically with a certificate that you have +created. The instructions for this procedure can be found on the Dynatrace +[site](https://docs.dynatrace.com/docs/extend-dynatrace/extensions20/sign-extension). Please note that the instructions +would have you place the dashboards next to the src folder; this is incorrect, the dashboards should be located inside +the src folder. + +## View metrics + +The Redis Enterprise Integration for Dynatrace contains pre-defined dashboards to aid in monitoring your Redis Enterprise deployment. + +The following dashboards are currently available: + +- Cluster: top-level statistics indicating the general health of the cluster +- Database: performance metrics at the database level +- Node: machine performance statistics +- Shard: low-level details of an individual shard +- Active-Active: replication and performance for geo-replicated clusters +- Proxy: network and command information regarding the proxy +- Proxy Threads: processor usage information regarding the proxy's component threads + + +## Monitor metrics + +Dynatrace dashboards can be filtered using the text area. For example, when viewing a cluster dashboard it is possible to +filter the display to show data for only one cluster by typing 'cluster' in the text area and waiting for the system to +retrieve the relevant data before choosing one of the options in the 'cluster' section. + +Certain types of data do not know the name of the database from which they were drawn. 
The dashboard should have a list +of database names and ids; use the id value when filtering input to the dashboard. + + +--- +LinkTitle: Prometheus & Grafana with Redis Cloud +Title: Prometheus and Grafana with Redis Cloud +alwaysopen: false +categories: +- docs +- integrate +- rc +description: Use Prometheus and Grafana to collect and visualize Redis Cloud metrics. +group: observability +summary: You can use Prometheus and Grafana to collect and visualize your Redis Cloud + metrics. +type: integration +weight: 6 +aliases: + - /operate/rc/cloud-integrations/prometheus-integration +--- + +You can use Prometheus and Grafana to collect and visualize your Redis Cloud metrics. + +- [Prometheus](https://prometheus.io/) is an open source systems monitoring and alerting toolkit that can scrape metrics from different sources. +- [Grafana](https://grafana.com/) is an open source metrics visualization tool that can process Prometheus data. + +Redis Cloud exposes its metrics through a Prometheus endpoint. You can configure your Prometheus server to scrape metrics from your Redis Cloud subscription on port 8070. + +The Redis Cloud Prometheus endpoint is exposed on Redis Cloud's internal network. To access this network, enable [VPC peering]({{< relref "/operate/rc/security/vpc-peering" >}}) or [Private Service Connect]({{< relref "/operate/rc/security/private-service-connect" >}}). Both options are only available with Redis Cloud Pro. You cannot use Prometheus and Grafana with Redis Cloud Essentials. + +For more information on how Prometheus communicates with Redis Enterprise clusters, see [Prometheus integration with Redis Enterprise Software]({{< relref "/integrate/prometheus-with-redis-enterprise/" >}}). + +## Quick start + +You can quickly set up Prometheus and Grafana for testing using the Prometheus and Grafana Docker images. + +### Prerequisites + +1. Create a [Redis Cloud Pro database]({{< relref "/operate/rc/databases/create-database/create-pro-database-new" >}}). + +1. Set up [VPC peering]({{< relref "/operate/rc/security/vpc-peering" >}}). + +1. Extract the Prometheus endpoint from the private endpoint to your database. The private endpoint is in the [Redis Cloud console](https://cloud.redis.io/) under the [Configuration tab]({{< relref "/operate/rc/databases/view-edit-database#configuration-tab" >}}) of your database. The Prometheus endpoint is on port 8070 of the internal server. + + For example, if your private endpoint is: + + ```sh + redis-12345.internal.:12345 + ``` + + The Prometheus endpoint is: + + ```sh + internal.:8070 + ``` + +1. Create an instance to run Prometheus and Grafana on the same cloud provider as your Redis Cloud subscription (for example, Amazon Web Services or Google Cloud). This instance must: + - Exist in the same region as your Redis Cloud subscription. + - Connect to the VPC subnet that is peered with your Redis Cloud subscription. + - Allow outbound connections to port 8070, so that Prometheus can scrape the Redis Cloud server for data. + - Allow inbound connections to port 9090 for Prometheus and port 3000 for Grafana. + - Be located in one of the CIDR ranges of the RFC-1918 internal IP standard, which is comprised of three CIDR ranges: + + - 10.0.0.0/8 + - 172.16.0.0/12 + - 192.168.0.0/16 + + The Prometheus endpoint is subject to a whitelist according to this standard. + +### Set up Prometheus + +To get started with custom monitoring with Prometheus on Docker: + +1. 
Create a directory on the Prometheus instance called `prometheus` and create a `prometheus.yml` file in that directory. + +1. Add the following contents to `prometheus.yml`. Replace `` with the Prometheus endpoint. + + ```yml + global: + scrape_interval: 15s + evaluation_interval: 15s + + # Attach these labels to any time series or alerts when communicating with + # external systems (federation, remote storage, Alertmanager). + external_labels: + monitor: "prometheus-stack-monitor" + + # Load and evaluate rules in this file every 'evaluation_interval' seconds. + #rule_files: + # - "first.rules" + # - "second.rules" + + scrape_configs: + # scrape Prometheus itself + - job_name: prometheus + scrape_interval: 10s + scrape_timeout: 5s + static_configs: + - targets: ["localhost:9090"] + + # scrape Redis Cloud + - job_name: redis-cloud + scrape_interval: 30s + scrape_timeout: 30s + metrics_path: / # For v2, use /v2 + scheme: https + static_configs: + - targets: [":8070"] + ``` + +1. Create a `docker-compose.yml` file with instructions to set up the Prometheus and Grafana Docker images. + + ```yml + version: '3' + services: + prometheus-server: + image: prom/prometheus + ports: + - 9090:9090 + volumes: + - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml + + grafana-ui: + image: grafana/grafana + ports: + - 3000:3000 + environment: + - GF_SECURITY_ADMIN_PASSWORD=secret + links: + - prometheus-server:prometheus + ``` + +1. To start the containers, run: + + ```sh + $ docker compose up -d + ``` + +1. To check that all the containers are up, run: `docker ps` +1. In your browser, sign in to Prometheus at `http://localhost:9090` to make sure the server is running. +1. Select **Status** and then **Targets** to check that Prometheus is collecting data from the Redis Cloud cluster. + + {{The Redis Enterprise target showing that Prometheus is connected to the Redis Enterprise Cluster.}} + + If Prometheus is connected to the cluster, you can type **node_up** in the Expression field on the Prometheus home page to see the cluster metrics. + +See [Prometheus Metrics]({{< relref "/integrate/prometheus-with-redis-enterprise/prometheus-metrics-definitions" >}}) for a list of metrics that Prometheus collects from Redis Enterprise clusters. + +### Set up Grafana + +Once the Prometheus and Grafana Docker containers are running, and Prometheus is connected to your Redis Cloud subscription, you can set up your Grafana dashboards. + +1. Sign in to Grafana. If you installed Grafana with Docker, go to `http://localhost:3000` and sign in with: + + - Username: `admin` + - Password: `secret` + +1. In the Grafana configuration menu, select **Data Sources**. + +1. Select **Add data source**. + +1. Select **Prometheus** from the list of data source types. + + {{The Prometheus data source in the list of data sources on Grafana.}} + +1. Enter the Prometheus configuration information: + + - Name: `redis-cloud` + - URL: `http://prometheus-server:9090` + - Access: `Server` + + {{The Prometheus connection form in Grafana.}} + + {{< note >}} + +- If the network port is not accessible to the Grafana server, select the **Browser** option from the Access menu. +- In a testing environment, you can select **Skip TLS verification**. + + {{< /note >}} + +1. Add dashboards for your subscription and database metrics. + To add preconfigured dashboards: + 1. In the Grafana dashboards menu, select **Manage**. + 1. Select **Import**. + 1. 
Add the [subscription status](https://grafana.com/grafana/dashboards/18406-subscription-status-dashboard/) and [database status](https://grafana.com/grafana/dashboards/18407-database-status-dashboard/) dashboards. + +### Grafana dashboards for Redis Cloud + +Redis publishes preconfigured dashboards for Redis Cloud and Grafana: + +* The [subscription status dashboard](https://grafana.com/grafana/dashboards/18406-subscription-status-dashboard/) provides an overview of your Redis Cloud subscriptions. +* The [database status dashboard](https://grafana.com/grafana/dashboards/18407-database-status-dashboard/) displays specific database metrics, including latency, memory usage, ops/second, and key count. +* The [Active-Active dashboard](https://github.com/redis-field-engineering/redis-enterprise-observability/blob/main/grafana/dashboards/grafana_v9-11/cloud/basic/redis-cloud-active-active-dashboard_v9-11.json) displays metrics specific to [Active-Active databases]({{< relref "/operate/rc/databases/configuration/active-active-redis" >}}). + +These dashboards are open source. For additional dashboard options, or to file an issue, see the [Redis Enterprise observability Github repository](https://github.com/redis-field-engineering/redis-enterprise-observability/tree/main/grafana). + +For more information about configuring Grafana dashboards, see the [Grafana documentation](https://grafana.com/docs/). +--- +LinkTitle: Set up Redis +Title: Set up Redis for Bedrock +alwaysopen: false +categories: +- docs +- integrate +- oss +- rs +- rc +description: Shows how to set up your Redis database for Amazon Bedrock. +group: cloud-service +summary: With Amazon Bedrock, users can access foundational AI models from a variety + of vendors through a single API, streamlining the process of leveraging generative + artificial intelligence. +type: integration +weight: 1 +--- + +You need to set up your Redis Cloud database before you can set it as the vector database in Amazon Bedrock. To do this, you need to: + +1. [Sign up for Redis Cloud and create a database](#sign-up-create-subscription) +1. [Enable Transport Layer Security (TLS) for the database and save the certificates](#get-certs) +1. [Store database credentials in AWS secrets manager](#store-secret) +1. [Create a vector index in your database](#create-vector-index) for Bedrock to use + +After you set up the database, you can use the database information to set it as your knowledge base database when you [create a knowledge base]({{< relref "/integrate/amazon-bedrock/create-knowledge-base" >}}). + +## Sign up and create a database {#sign-up-create-subscription} + +To set up a Redis Cloud instance for Bedrock, you need to: + +1. [Sign up for Redis Cloud](#sign-up) if you do not already have an account. +1. [Create a database](#create-sub) to use for your Bedrock knowledge base. + +### Sign up for Redis Cloud using AWS Marketplace {#sign-up} + +1. Select the [Redis Cloud](https://aws.amazon.com/marketplace/pp/prodview-mwscixe4ujhkq?sr=0-1&ref_=beagle&applicationId=AWSMPContessa) AWS marketplace link from Bedrock to be taken to the Redis Cloud plan listing. + + {{The Redis Cloud listing on AWS Marketplace}} + +1. Subscribe to Redis Cloud listing, locate the **Set Up Your Account** button, and then select it to begin mapping your Redis Cloud account with your AWS Marketplace account. + + {{Use the Set Up Your Account button after subscribing to Redis Cloud with your AWS Marketplace account.}} + +1. Sign in to the [Redis Cloud console](https://cloud.redis.io). + +1. 
Select the Redis account to be mapped to your AWS Marketplace account and confirm that your payment method will change and that the connection cannot be undone.
+
+   {{Use the AWS Marketplace dialog to map your Redis Cloud account to your AWS Marketplace account.}}
+
+1. Use the **Map account** button to confirm your choice.
+
+1. Once your Redis account is mapped to your AWS Marketplace account, a message appears in the upper-left corner of the account panel.
+
+   {{The AWS Marketplace badge appears when your Redis Cloud account is mapped to an AWS Marketplace account.}}
+
+   In addition, AWS Marketplace is reported as the selected payment method.
+
+### Create a database {#create-sub}
+
+1. In the [Redis Cloud console](https://cloud.redis.io/), select **New database**.
+
+   {{The New Database button creates a new database.}}
+
+1. When the **New database** page appears, select **Pro** to create a Pro plan.
+
+   {{The Subscription selection panel with Pro selected.}}
+
+1. After you select **Pro**, the **Database settings** section will appear. For this guide, continue with **Easy create** to get started faster.
+
+   {{The database settings section.}}
+
+   If you'd like to select all of the configuration options yourself, select **Custom settings**. See [Create a Redis Cloud Pro database]({{< relref "/operate/rc/databases/create-database/create-pro-database-new#custom-settings" >}}) for more details.
+
+1. Redis will generate a database name for you. If you want to change it, you can do so in the **Database name** field.
+
+   {{The database name, cloud vendor and region settings.}}
+
+1. Select **Amazon Web Services** as the cloud vendor and select a region.
+
+1. In the **Optimal database settings** section:
+
+   {{The Dataset size, throughput, and High availability settings.}}
+
+   - Turn on [**High-availability**]({{< relref "/operate/rc/databases/configuration/high-availability" >}}).
+   - Set the Dataset size of your database based on the amount of data that Bedrock will pull from your Simple Storage Service (S3) [bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html). See [Find out the size of your S3 buckets](https://aws.amazon.com/blogs/storage/find-out-the-size-of-your-amazon-s3-buckets/) to find out how much knowledge base data is stored in your S3 bucket and pick the closest size, rounded up, from the table below.
+
+   | Total Size of Documents in S3 | Database size without replication | Database size with replication |
+   |-------------------------------|-----------------------------------|--------------------------------|
+   | 10,000 KB | 135 MB | 270 MB |
+   | 100,000 KB | 1.35 GB | 2.7 GB |
+   | 1,000,000 KB | 13.5 GB | 27 GB |
+   | 10,000,000 KB | 135 GB | 270 GB |
+
+   For more information on sizing, see the [Bedrock integration blog post](https://redis.io/blog/amazon-bedrock-integration-with-redis-enterprise/#right-size-your-database-for-amazon-bedrock).
+
+1. Select **View all settings** to review the database settings that we selected for you.
+
+   {{The optimal database settings.}}
+
+   If you want to change these settings, select [**Switch to custom settings**]({{< relref "/operate/rc/databases/create-database/create-pro-database-new#custom-settings" >}}).
+
+1. You will not need to enter a payment method, as it's automatically assigned to your AWS Marketplace account. Select **Confirm & pay** to create your new database.
+
+   {{Select Confirm & pay to create your new database.}}
+
+   Note that databases are created in the background. 
While they are provisioning, you aren't allowed to make changes. (The process generally takes 10-15 minutes.) + + Use the **Databases list** to check the status of your subscription. You will also receive an email when your database is ready to use. + +## Enable TLS and get certificates {#get-certs} + +For your database to be fully secure, you must enable [Transport Layer Security (TLS)]({{< relref "/operate/rc/security/database-security/tls-ssl#enable-tls" >}}) for your database with client authentication. + +1. Select **Databases** from the [Redis Cloud console](https://cloud.redis.io/) menu and then select your database from the list. + +1. From the database's **Configuration** screen, select the **Edit** button: + + {{The Edit database button lets you change selected database properties.}} + +1. In the **Security** section, use the **Transport layer security (TLS)** toggle to enable TLS: + + {{Use the Transport Layer Security toggle to enable TLS.}} + +1. Select **Download server certificate** to download the Redis Cloud certificate bundle `redis_ca.pem`: + + {{Use the Download server certificate button to download the Redis Cloud CA certificates.}} + +1. Select the **Mutual TLS (require client authentication)** checkbox to require client authentication. + +1. Select **Add client certificate** to add a certificate. + + {{The Add client certificate button.}} + +1. Either provide an [X.509 client certificate](https://en.wikipedia.org/wiki/X.509) or chain in PEM format for your client or select **Generate** to create one: + + {{Provide or generate a certificate for Mutual TLS.}} + + - If you generate your certificate from the Redis Cloud console, a **Download certificate** button will appear after it is generated. Select it to download the certificate. + + {{The Download certificate button.}} + + The download contains: + + - `redis-db-.crt` – the certificate's public key. + + - `redis-db-.key` – the certificate's private key. + + {{}} +You must download the certificate using the button at this point. After your changes have been applied, the full bundle of public and private keys will no longer be available for download. + {{}} + + - If you provide a client certificate, you will see the certificate details before you save your changes. + + {{The Download certificate button.}} + +1. To apply your changes and enable TLS, select the **Save database** button: + + {{Use the Save database button to save database changes.}} + +## Store database credentials in AWS secrets manager {#store-secret} + +In the [AWS Management Console](https://console.aws.amazon.com/), use the **Services** menu to locate and select **Security, Identity, and Compliance** > **Secrets Manager**. [Create a secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html) of type **Other type of secret** with the following key/value fields: + +- `username`: Database username +- `password`: Database password +- `serverCertificate`: Contents of the [server certificate]({{< relref "/operate/rc/security/database-security/tls-ssl#download-certificates" >}}) (`redis_ca.pem`) +- `clientCertificate`: Contents of the client certificate (`redis_user.crt`) +- `clientPrivateKey`: Contents of the client private key (`redis_user_private.key`) + +After you store this secret, you can view and copy the [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_iam-permissions.html#iam-resources) of your secret on the secret details page. 
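+
+If you prefer to script this step, you can create the same secret with the AWS CLI. The following is only a minimal sketch: the secret name `bedrock/redis-credentials` is an example, and each placeholder value (including the PEM contents, with newlines escaped as `\n` inside the JSON) is yours to fill in.
+
+```sh
+# Create the secret that Bedrock will read the Redis Cloud credentials from.
+aws secretsmanager create-secret \
+  --name bedrock/redis-credentials \
+  --description "Redis Cloud credentials for the Bedrock knowledge base" \
+  --secret-string '{
+    "username": "<database username>",
+    "password": "<database password>",
+    "serverCertificate": "<contents of redis_ca.pem>",
+    "clientCertificate": "<contents of redis_user.crt>",
+    "clientPrivateKey": "<contents of redis_user_private.key>"
+  }'
+```
+
+The command's output includes the secret's ARN, which is the value you will need when you create the knowledge base.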
+ +## Create a vector index in your database {#create-vector-index} + +After your Redis Cloud database is set up, create a search index with a vector field using [FT.CREATE]({{< relref "commands/ft.create" >}}) as your knowledge base for Amazon Bedrock. You can accomplish this using **Redis Insight** or `redis-cli`. + +### Redis Insight + +[Redis Insight]({{< relref "/develop/tools/insight" >}}) is a free Redis GUI that allows you to visualize and optimize your data in Redis. + +To create your vector index in Redis Insight: + +1. [Download and install Redis Insight](https://redis.io/insight/) if you don't have it already. + +1. In the [Redis Cloud console](https://cloud.redis.io/), in your database's **Configuration** tab, select the **Connect** button next to your database to open the connection wizard. + + {{< image filename="/images/rc/button-connect.png#no-click" alt="Connect button." >}} + +1. In the connection wizard, under **Redis Insight Desktop**, select **Public Endpoint**. Select **Open with Redis Insight** to connect to the database with Redis Insight. + +1. Select **Use TLS**. In the **CA Certificate** section, select **Add new CA certificate**. Give the certificate a name in the **Name** field, and enter the contents of `redis_ca.pem` into the **Certificate** field. + + {{The Redis Insight Add CA Certificate section.}} + +1. Select **Requires TLS Client Authentication**. In the **Client Certificate** section, select **Add new certificate**. Give the certificate a name in the **Name** field. Enter the contents of `redis_user.crt` into the **Certificate** field, and the contents of `redis_user_private.key` into the **Private Key** field. + + {{The Redis Insight Add Client Certificate section.}} + +1. Select **Add Redis Database** to connect to the database. + +1. Select your database alias to connect to your database. Select the **Workbench** icon to go to the workbench. + + {{The Redis Insight workbench icon.}} + +1. Enter the [FT.CREATE]({{< relref "commands/ft.create" >}}) command to create an index. + + ```text + FT.CREATE + ON HASH + SCHEMA + "" TEXT + "" TEXT + "" VECTOR FLAT + 6 + "TYPE" "FLOAT32" + "DIM" 1536 + "DISTANCE_METRIC" "COSINE" + ``` + + Replace the following fields: + + - `` with the vector index name + - `` with the text field name + - `` with the metadata field name + - `` with the vector field name + +1. Select **Run** to create the index. + + {{The Redis Insight run button.}} + +### `redis-cli` + +The [`redis-cli`]({{< relref "/develop/tools/cli" >}}) command-line utility lets you connect and run Redis commands directly from the command line. To use `redis-cli`, you can [install Redis]({{< relref "/operate/oss_and_stack/stack-with-enterprise/install/" >}}). + +Public endpoint and port details are available from the **Databases** list or the database's **Configuration** screen. Select **Connect** to view how to connect to your database with `redis-cli`. + +```sh +redis-cli -h -p --tls --cacert redis_ca.pem \ + --cert redis_user.crt --key redis_user_private.key +``` + +After you are connected with `redis-cli`, create an index using [FT.CREATE]({{< relref "commands/ft.create" >}}). 
+ +```text +FT.CREATE + ON HASH + SCHEMA + "" TEXT + "" TEXT + "" VECTOR FLAT + 6 + "TYPE" "FLOAT32" + "DIM" 1536 + "DISTANCE_METRIC" "COSINE" +``` + +Replace the following fields: +- `` with the vector index name +- `` with the text field name +- `` with the metadata field name +- `` with the vector field name + +## Next steps + +After your Redis database is set up, you can use it to [create a knowledge base]({{< relref "/integrate/amazon-bedrock/create-knowledge-base" >}}) in Amazon Bedrock.--- +LinkTitle: Create Bedrock knowledge base +Title: Create a Bedrock knowledge base +alwaysopen: false +categories: +- docs +- integrate +- oss +- rs +- rc +description: Shows how to set up your Knowledge base in Amazon Bedrock. +group: cloud-service +summary: With Amazon Bedrock, users can access foundational AI models from a variety + of vendors through a single API, streamlining the process of leveraging generative + artificial intelligence. +type: integration +weight: 2 +--- + +After you have set up a vector database with Redis Cloud, you can use it to create a knowledge base for your models. + +Before you begin this guide, you will need: + +- An [AWS S3 Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html) with text data that you want to use to train your models. + +- An [AWS IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) with permissions for the Bedrock knowledge base. + +- A Redis database that is [set up for Amazon Bedrock]({{< relref "/integrate/amazon-bedrock/set-up-redis" >}}) + +## Create knowledge base + +To use your Redis database to create a knowledge base on Amazon Bedrock: + +1. Sign in to the [AWS console](https://console.aws.amazon.com/). + +1. Use the **Services** menu to locate and select **Machine Learning** > **Amazon Bedrock**. This takes you to the Amazon Bedrock admin panel. + +1. Select **Knowledge base** > **Create knowledge base** to create your knowledge base. + + {{The Create knowledge base button.}} + +1. In the **Knowledge base details** section, enter a name and description for your knowledge base. + +1. Select the IAM role for the Bedrock knowledge base in the **IAM Permissions** section. Select **Next** to add the data source. + +1. Enter a name for the data source and connect your S3 bucket in the **Data source** section. + +1. In the **Vector database** section, select **Redis Cloud** and select the checkbox to agree with the legal disclaimer. + + {{The Redis Cloud selection for your vector database.}} + + Fill in the fields with the following information: + + - **Endpoint URL**: Public endpoint of your database. This can be found in the [Redis Cloud console](https://cloud.redis.io/) from the database list or from the **General** section of the **Configuration** tab for the source database. + - **Credentials Secret ARN**: [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_iam-permissions.html#iam-resources) of your [database credentials secret]({{< relref "/integrate/amazon-bedrock/set-up-redis#store-secret" >}}). 
+ - **Vector Index name**: Name of the [vector index]({{< relref "/integrate/amazon-bedrock/set-up-redis#create-vector-index" >}}) + - **Vector field**: Name of the [vector field]({{< relref "/integrate/amazon-bedrock/set-up-redis#create-vector-index" >}}) of the vector index + - **Text field**: Name of the [text field]({{< relref "/integrate/amazon-bedrock/set-up-redis#create-vector-index" >}}) of the vector index + - **Metadata field**: Name of the [metadata field]({{< relref "/integrate/amazon-bedrock/set-up-redis#create-vector-index" >}}) of the vector index + + Select **Next** to review your settings. + +1. Review your knowledge base before you create it. Select **Create knowledge base** to finish creation. + + {{The Create knowledge base button.}} + +Amazon Bedrock will sync the data from the S3 bucket and load it into your Redis database. This will take some time. + +Your knowledge base will have a status of **Ready** when it is ready to be connected to an Agent. + +{{A Bedrock knowledge base with a Ready status.}} + +Select the name of your knowledge base to view the syncing status of your data sources. The data source will have a status of **Ready** when it is synced to the vector database. + +{{A Bedrock data source with a Ready status.}} + +After the knowledge base is ready, you can use it to [Create an agent]({{< relref "/integrate/amazon-bedrock/create-agent" >}}). +--- +LinkTitle: Create Bedrock agent +Title: Create a Bedrock agent +alwaysopen: false +categories: +- docs +- integrate +- oss +- rs +- rc +description: Shows how to set up your Agent in Amazon Bedrock. +group: cloud-service +summary: With Amazon Bedrock, users can access foundational AI models from a variety + of vendors through a single API, streamlining the process of leveraging generative + artificial intelligence. +type: integration +weight: 3 +--- + +After you have [created a knowledge base]({{< relref "/integrate/amazon-bedrock/create-knowledge-base" >}}), you can use it to create an agent on Amazon Bedrock. + +Before you begin this guide, you will need: + +- An [AWS IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) with [permissions for the Bedrock agent](https://docs.aws.amazon.com/bedrock/latest/userguide/security_iam_id-based-policy-examples.html). + +- A [Bedrock knowledge base]({{< relref "/integrate/amazon-bedrock/create-knowledge-base" >}}) connected to a [Redis Cloud vector database]({{< relref "/integrate/amazon-bedrock/set-up-redis" >}}). + +## Create an agent + +1. Sign in to the [AWS console](https://console.aws.amazon.com/). + +1. Use the **Services** menu to locate and select **Machine Learning** > **Amazon Bedrock**. This takes you to the Amazon Bedrock admin panel. + +1. Select **Agents** > **Create Agent** to create your knowledge base. + + {{The Create Agent button.}} + +1. In the **Agent name** section, enter a name and description for your agent. + +1. Select whether or not you want the agent to be able to ask for additional information in the **User input** section. + +1. Select the IAM role for the Bedrock agent in the **IAM Permissions** section. + +1. Choose how long you want your idle session timeout to be in the **Idle session timeout** section. Select **Next** to continue. + +1. In the **Model details** section, choose which model you want to use and enter the instructions for your agent. Select **Next** to continue. + +1. In the **Action groups** section, you may specify any tasks you would like the agent to perform. 
Select **Next** to continue. + +1. Select the [knowledge base](#create-a-knowledge-base) you created and summarize the information in the knowledge base in the **Knowledge base instructions for Agent** form. Select **Add another knowledge base** if you would like to add multiple knowledge bases. + + {{The Add another knowledge base button.}} + + Select **Next** to continue. + +1. Review your agent before you create it. Select **Create Agent** to finish creation. + + {{The Create Agent button.}} + +Amazon Bedrock will create your agent and link it to your knowledge base. This will take some time. + +Your agent will have a status of **Ready** when it is ready to be tested. + +{{A Bedrock agent with a Ready status.}} + +Select the name of your agent to view the versions and draft aliases of your agent. You can also test your agent by entering prompts in the **Enter your message here** field. --- +LinkTitle: Amazon Bedrock +Title: Amazon Bedrock +alwaysopen: false +categories: +- docs +- integrate +- oss +- rs +- rc +description: Shows how to use your Redis database with Amazon Bedrock to customize + foundational models. +group: cloud-service +hideListLinks: true +summary: With Amazon Bedrock, users can access foundational AI models from a variety + of vendors through a single API, streamlining the process of leveraging generative + artificial intelligence. +type: integration +weight: 3 +--- + +[Amazon Bedrock](https://aws.amazon.com/bedrock/) streamlines GenAI deployment by offering foundational models (FMs) as a unified API, eliminating complex infrastructure management. It lets you create AI-powered [Agents](https://aws.amazon.com/bedrock/agents/) that execute complex tasks. Through [Knowledge Bases](https://aws.amazon.com/bedrock/knowledge-bases/) within Amazon Bedrock, you can seamlessly tether FMs to your proprietary data sources using retrieval-augmented generation (RAG). This direct integration amplifies the FM's intelligence based on your organization's resources. + +Amazon Bedrock lets you choose Redis Cloud as the [vector database](https://redis.io/solutions/vector-search/) for your agent's Knowledge Base. Once Redis Cloud is integrated with Amazon Bedrock, it automatically reads text documents from your Amazon Simple Storage Service (S3) buckets. This process lets the large language model (LLM) pinpoint and extract pertinent context in response to user queries, ensuring your AI agents are well-informed and grounded in their responses. + +For more information about the Redis integration with Amazon Bedrock, see the [Amazon Bedrock integration blog post](https://redis.io/blog/amazon-bedrock-integration-with-redis-enterprise/). + +To fully set up Bedrock with Redis Cloud, you will need to do the following: + +1. [Set up a Redis Cloud subscription and vector database]({{< relref "/integrate/amazon-bedrock/set-up-redis" >}}) for Bedrock. + +1. [Create a knowledge base]({{< relref "/integrate/amazon-bedrock/create-knowledge-base" >}}) connected to your vector database. + +1. [Create an agent]({{< relref "/integrate/amazon-bedrock/create-agent" >}}) connected to your knowledge base. 
+ +## More info + +- [Amazon Bedrock integration blog post](https://redis.io/blog/amazon-bedrock-integration-with-redis-enterprise/) +- [Detailed steps](https://github.com/redis-applied-ai/aws-redis-bedrock-stack/blob/main/README.md) +--- +Title: Architecture +aliases: /integrate/redis-data-integration/ingest/architecture/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Discover the main components of RDI +group: di +headerRange: '[2]' +linkTitle: Architecture +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 3 +--- + +## Overview + +RDI implements a [change data capture](https://en.wikipedia.org/wiki/Change_data_capture) (CDC) pattern that tracks changes to the data in a +non-Redis *source* database and makes corresponding changes to a Redis +*target* database. You can use the target as a cache to improve performance +because it will typically handle read queries much faster than the source. + +To use RDI, you define a *dataset* that specifies which data items +you want to capture from the source and how you want to +represent them in the target. For example, if the source is a +relational database then you specify which table columns you want +to capture but you don't need to store them in an equivalent table +structure in the target. This means you can choose whatever target +representation is most suitable for your app. To convert from the +source to the target representation, RDI applies *transformations* +to the data after capture. + +RDI synchronizes the dataset between the source and target using +a *data pipeline* that implements several processing steps +in sequence: + +1. A *CDC collector* captures changes to the source database. RDI + currently uses an open source collector called + [Debezium](https://debezium.io/) for this step. + +1. The collector records the captured changes using Redis streams + in the RDI database. + +1. A *stream processor* reads data from the streams and applies + any transformations that you have defined (if you don't need + any custom transformations then it uses defaults). + It then writes the data to the target database for your app to use. + +Note that the RDI control processes run on dedicated virtual machines (VMs) +outside the Redis +Enterprise cluster where the target database is kept. However, RDI keeps +its state and configuration data and also the change data streams in a Redis database on the same cluster as the target. The following diagram shows the pipeline steps and the path the data takes on its way from the source to the target: + +{{< image filename="images/rdi/ingest/ingest-dataflow.webp" >}} + +When you first start RDI, the target database is empty and so all +of the data in the source database is essentially "change" data. +RDI collects this data in a phase called *initial cache loading*, +which can take minutes or hours to finish, depending on the size +of the source data. Once the initial cache loading is complete, +there is a *snapshot* dataset in the target that will gradually +change when new data gets captured from the source. At this point, +RDI automatically enters a second phase called *change streaming*, where +changes in the data are captured as they happen. Changes are usually +added to the target within a few seconds after capture. + +## Backpressure mechanism + +Sometimes, data records can get added to the streams faster than RDI can +process them. 
This can happen if the target is slowed or disconnected +or simply if the source quickly generates a lot of change data. +If this continues, then the streams will eventually occupy all the +available memory. When RDI detects this situation, it applies a +*backpressure* mechanism to slow or stop the flow of incoming data. +Change data is held at the source until RDI clears the backlog and has +enough free memory to resume streaming. + +{{}}The Debezium log sometimes reports that RDI has run out +of memory (usually while creating the initial snapshot). This is not +an error, just an informative message to note that RDI has applied +the backpressure mechanism. +{{}} + +### Supported sources + +RDI supports the following database sources using [Debezium Server](https://debezium.io/documentation/reference/stable/operations/debezium-server.html) connectors: + +{{< embed-md "rdi-supported-source-versions.md" >}} + +## How RDI is deployed + +RDI is designed with three *planes* that provide its services. + +The *control plane* contains the processes that keep RDI active. +It includes: + +- An *API server* process that exposes a REST API to observe and control RDI. +- An *operator* process that manages the *data plane* processes. +- A *metrics exporter* process that reads metrics from the RDI database + and exports them as [Prometheus](https://prometheus.io/) metrics. + +The *data plane* contains the processes that actually move the data. +It includes the *CDC collector* and the *stream processor* that implement +the two phases of the pipeline lifecycle (initial cache loading and change streaming). + +The *management plane* provides tools that let you interact +with the control plane. + +- Use the CLI tool to install and administer RDI and to deploy + and manage a pipeline. +- Use the pipeline editor included in Redis Insight to design + or edit a pipeline. + +The diagram below shows all RDI components and the interactions between them: + +{{< image filename="images/rdi/ingest/ingest-control-plane.webp" >}} + +The following sections describe the VM configurations you can use to +deploy RDI. + +### RDI on your own VMs + +For this deployment, you must provide two VMs. The collector and stream processor +are active on one VM, while on the other they are in standby to provide high availability. +The two operators running on both VMs use a leader election algorithm to decide which +VM is the active one (the "leader"). +The diagram below shows this configuration: + +{{< image filename="images/rdi/ingest/ingest-active-passive-vms.webp" >}} + +See [Install on VMs]({{< relref "/integrate/redis-data-integration/installation/install-vm" >}}) +for more information. + +### RDI on Kubernetes + +You can use the RDI [Helm chart](https://helm.sh/docs/topics/charts/) to install +on [Kubernetes (K8s)](https://kubernetes.io/), including Red Hat +[OpenShift](https://docs.openshift.com/). This creates: + +- A K8s [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) named `rdi`. + You can also use a different namespace name if you prefer. +- [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) and + [services](https://kubernetes.io/docs/concepts/services-networking/service/) for the + [RDI operator]({{< relref "/integrate/redis-data-integration/architecture#how-rdi-is-deployed" >}}), + [metrics exporter]({{< relref "/integrate/redis-data-integration/observability" >}}), and API server. 
+- A [service account](https://kubernetes.io/docs/concepts/security/service-accounts/) + and [RBAC resources](https://kubernetes.io/docs/reference/access-authn-authz/rbac) for the RDI operator. +- A [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) with RDI database details. +- [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) + with the RDI database credentials and TLS certificates. +- Other optional K8s resources such as [ingresses](https://kubernetes.io/docs/concepts/services-networking/ingress/) + that can be enabled depending on your K8s environment and needs. + +See [Install on Kubernetes]({{< relref "/integrate/redis-data-integration/installation/install-k8s" >}}) +for more information. + +### Secrets and security considerations + +The credentials for the database connections, as well as the certificates +for [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) and +[mTLS](https://en.wikipedia.org/wiki/Mutual_authentication#mTLS) are saved in K8s secrets. +RDI stores all state and configuration data inside the Redis Enterprise cluster +and does not store any other data on your RDI VMs or anywhere else outside the cluster. +--- +Title: Troubleshooting +aliases: /integrate/redis-data-integration/ingest/troubleshooting/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Solve and report simple problems with RDI +group: di +hideListLinks: false +linkTitle: Troubleshooting +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 50 +--- + +The following sections explain how you can get extra information from +Redis Data Integration (RDI) to help you solve problems that you may encounter. Redis support may +also ask you to provide this information to help you resolve issues. + +## Debug information during installation {#install-debug} + +If the installer fails with an error, then try installing again with the +log level set to `DEBUG`: + +```bash +./install.sh -l DEBUG # Installer script +redis-di install -l DEBUG # Install command +``` + +This gives you more detail about the installation steps and can often +help you to pinpoint the source of the error. + +## RDI logs + +By default, RDI records the following logs in the host VM file system at +`/opt/rdi/logs` (or whichever path you specified during installation); + +| Filename | Phase | +| :-- | :-- | +| `rdi_collector-collector-initializer.log` | Initializing the collector. | +| `rdi_collector-debezium-ssl-init.log` | Establishing the connector SSL connections to the source and RDI database (if you are using SSL). | +| `rdi_collector-collector-source.log` | Collector [change data capture (CDC)]({{< relref "/integrate/redis-data-integration/architecture" >}}) operations. | +| `rdi_rdi-rdi-operator.log` | Main [RDI control plane]({{< relref "/integrate/redis-data-integration/architecture#how-rdi-is-deployed" >}}) component. | +| `rdi_processor-processor.log` | RDI stream processing. | + +Logs are recorded at the minimum `INFO` level in a simple format that +log analysis tools can use. + +{{< note >}}Often during the initial sync phase, the collector source log will contain a message +saying RDI is out of +memory. This is not an error but an informative message to say that RDI +is applying *backpressure* to the collector. See +[Backpressure mechanism]({{< relref "/integrate/redis-data-integration/architecture#backpressure-mechanism" >}}) +in the Architecture guide for more information. 
+{{< /note >}} + +## Dump support package + +If you need to send a comprehensive set of forensics data to Redis support, +run the +[`redis-di dump-support-package`]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-dump-support-package" >}}) +command from the CLI. + +This command gathers the following data: + +- All the internal RDI components and their status +- All internal RDI configuration +- List of secret names used by RDI components (but not the secrets themselves) +- RDI logs +- RDI component versions +- Output from the [`redis-di status`]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-status" >}}) command +- Text of the `config.yaml` file +- Text of the Job configuration files +- [optional] RDI DLQ streams content +- Rejected records along with the reason for their rejection (should not exist in production) +--- +Title: Quickstart +linkTitle: Quickstart +description: Get started with a simple pipeline example +weight: 1 +alwaysopen: false +categories: ["redis-di"] +aliases: /integrate/redis-data-integration/ingest/quick-start-guide/ +--- + +In this tutorial you will learn how to install RDI and set up a pipeline to ingest live data from a [PostgreSQL](https://www.postgresql.org/) database into a Redis database. + +## Prerequisites + +- A Redis Enterprise database that will serve as the pipeline target. The dataset that will be ingested is + quite small in size, so a single shard database should be enough. RDI also needs to maintain its + own database on the cluster to store state information. *This requires Redis Enterprise v6.4 or greater*. +- [Redis Insight]({{< relref "/develop/tools/insight" >}}) + to edit your pipeline +- A virtual machine (VM) with one of the following operating systems: + {{< embed-md "rdi-os-reqs.md" >}} + +## Overview + +The following diagram shows the structure of the pipeline we will create (see +the [architecture overview]({{< relref "/integrate/redis-data-integration/architecture#overview" >}}) to learn how the pipeline works): + +{{< image filename="images/rdi/ingest/ingest-qsg.webp" >}} + +Here, the RDI *collector* tracks changes in PostgreSQL and writes them to streams in the +RDI database in Redis. The *stream processor* then reads data records from the RDI +database streams, processes them, and writes them to the target. + +### Install PostgreSQL + +We provide a [Docker](https://www.docker.com/) image for an example PostgreSQL +database that we will use for the tutorial. Follow the +[instructions on our Github page](https://github.com/Redislabs-Solution-Architects/rdi-quickstart-postgres/tree/main) +to download the image and start serving the database. The database, which is +called `chinook`, has the [schema and data](https://www.kaggle.com/datasets/samaxtech/chinook-music-store-data?select=schema_diagram.png) for an imaginary online music store +and is already set up for the RDI collector to use. + +### Install RDI + +Install RDI using the instructions in the +[VM installation guide]({{< relref "/integrate/redis-data-integration/installation/install-vm" >}}). + +RDI will create the pipeline template for your chosen source database type at +`/opt/rdi/config`. You will need this pathname later when you prepare the pipeline for deployment +(see [Prepare the pipeline](#prepare-the-pipeline) below). + +At the end of the installation, RDI CLI will prompt you to set the access secrets +for both the source PostgreSQL database and the target Redis database. RDI needs these to +run the pipeline. 
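+
+Before you enter the secrets, it can help to confirm that the example `chinook` database is reachable from the RDI VM. A minimal check with `psql` might look like the following sketch; the `postgres` username is an assumption, so substitute whatever credentials the example image defines:
+
+```bash
+# List the tables in the chinook database; you should see Track among them.
+psql -h localhost -p 5432 -U postgres -d chinook -c '\dt'
+```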
+ +Use the Redis Enterprise Cluster Manager UI to create the RDI database with the following requirements: + +{{< embed-md "rdi-db-reqs.md" >}} + +### Prepare the pipeline + +During the installation, RDI placed the pipeline templates at `/opt/rdi/config`. +If you go to that folder and run the `ll` command, you will see the pipeline +configuration file, `config.yaml`, and the `jobs` folder (see the page about +[Pipelines]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines" >}}) for more information). Use Redis Insight to open +the `config.yaml` file and then edit the following settings: + +- Set the `host` to `localhost` and the `port` to 5432. +- Under `tables`, specify the `Track` table from the source database. +- Add the details of your target database to the `target` section. + +At this point, the pipeline is ready to deploy. + +### Create a context (optional) {#create-context} + +To manage and inspect RDI, you can use the +[`redis-di`]({{< relref "/integrate/redis-data-integration/reference/cli" >}}) +CLI command, which has several subcommands for different purposes. Most of these commands require you +to pass at least two options, `--rdi-host` and `--rdi-port`, to specify the host and port of your +RDI installation. You can avoid typing these options repeatedly by saving the +information in a *context*. + +When you activate a context, the saved values of +`--rdi-host`, `--rdi-port`, and a few other options are passed automatically whenever +you use `redis-di`. If you have more than one RDI installation, you can create a context +for each of them and select the one you want to be active using its unique name. + +To create a context, use the +[`redis-di add-context`]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-add-context" >}}) +command: + +```bash +redis-di add-context --rdi-host --rdi-port +``` + +These options are required but there are also a few others you can save, such as TLS credentials, if +you are using them (see the +[reference page]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-add-context" >}}) +for details). When you have created a context, use +[`redis-di set-context`]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-set-context" >}}) +to activate it: + +```bash +redis-di set-context +``` + +There are also subcommands to +[list]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-list-contexts" >}}) +and [delete]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-delete-context" >}}) +contexts. + +### Deploy the pipeline + +You can use [Redis Insight]({{< relref "/develop/tools/insight/rdi-connector" >}}) +to deploy the pipeline by adding a connection to the RDI API +endpoint (which has the same hostname or IP address as your RDI VM and uses the default HTTPS port 443) and then clicking the **Deploy** button. You can also deploy it with the following command: + +```bash +redis-di deploy --dir +``` + +where the path is the one you supplied earlier during the installation. (You may also need +to supply `--rdi-host` and `--rdi-port` options if you are not using a +[context](#create-context) as described above.) RDI first +validates your pipeline and then deploys it if the configuration is correct. + +Once the pipeline is running, you can use Redis Insight to view the data flow using the +pipeline metrics. You can also connect to your target database to see the keys that RDI has written there. 
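+
+For example, a quick way to confirm that data has arrived is to scan the target database for keys created from the `Track` table. The pattern below assumes the default key naming, where keys are prefixed with the table name; adjust it if your pipeline uses different key settings:
+
+```bash
+# List keys written by RDI for the Track table (press Ctrl+C to stop a long listing).
+redis-cli -h <target host> -p <target port> --scan --pattern 'Track*'
+```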
+ +See [Deploy a pipeline]({{< relref "/integrate/redis-data-integration/data-pipelines/deploy" >}}) +for more information about deployment settings. + +### View RDI's response to data changes + +Once the pipeline has loaded a *snapshot* of all the existing data from the source, +it enters *change data capture (CDC)* mode (see the +[architecture overview]({{< relref "/integrate/redis-data-integration/architecture#overview" >}}) +and the +[ingest pipeline lifecycle]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines#ingest-pipeline-lifecycle" >}}) +for more information +). + +To see the RDI pipeline working in CDC mode: + +- Create a simulated load on the source database + (see [Generating load on the database](https://github.com/Redislabs-Solution-Architects/rdi-quickstart-postgres?tab=readme-ov-file#generating-load-on-the-database) + to learn how to do this). +- Run + [`redis-di status --live`]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-status" >}}) + to see the flow of records. +- Use [Redis Insight]({{< relref "/develop/tools/insight" >}}) to look at the data in the target database. +--- +Title: FAQ +aliases: /integrate/redis-data-integration/ingest/faq/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Find answers to common questions about RDI +group: di +hideListLinks: false +linkTitle: FAQ +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 50 +--- + +## Which license does RDI use? + +You must purchase a commercial license for RDI with Redis Enterprise. This includes two extra +Redis Enterprise shards (primary and replica) for the staging database. + +## How does RDI track data changes in the source database? + +RDI uses mechanisms that are specific for each of the supported +source databases: + +- **Oracle**: RDI uses `logminer` to parse the Oracle `binary log` and `archive logs`. This + lists any changes in a database view that RDI can query. +- **MySQL/MariaDB**: RDI uses `binary log replication` to get all the commits. +- **PostgreSQL**: RDI uses the `pgoutput` plugin. +- **SQL Server**: RDI uses the CDC mechanism. + +## How much data can RDI process? + +RDI uses the concept of *processing units*. Each processing unit uses 1 CPU core and can process +about 10,000 records per second, assuming the records have a size of about 1KB each. This throughput +might change slightly depending on the number of columns, the number of data transformations, +and the speed of the network. Typically, one processing unit is enough for RDI to deal with the +traffic from a relational database. + +## Can RDI work with any Redis database? + +No. RDI is designed and tested to work only with Redis Enterprise. The staging database can +only use version 6.4 or above. The target Redis database can be of any version and can be a +replica of an Active-Active replication setup or an Auto tiering database. + +## Can RDI automatically track changes to the source database schema? + +If you don't configure RDI to capture a specific set of tables in the schema then it will +detect any new tables when they are added. Similarly, RDI will capture new table columns +and changes to column names unless you configure it for a specific set of columns. +Bear in mind that the Redis keys in the target database will change to reflect the +new or renamed tables and columns. + +## Should I be concerned when the log says RDI is out of memory? 
{#rdi-oom} + +Sometimes the Debezium log will contain a message saying that RDI is out of +memory. This is not an error but an informative message to say that RDI +is applying *backpressure* to Debezium. See +[Backpressure mechanism]({{< relref "/integrate/redis-data-integration/architecture#backpressure-mechanism" >}}) +in the Architecture guide for more information. + +## What happens when RDI can't write to the target Redis database? + +RDI will keep attempting to write the changes to the target and will also attempt +to reconnect to it, if necessary. While the target is disconnected, RDI +will keep capturing change events from the source database and adding them to its +streams in the staging database. This continues until the staging database gets +low on space to store new events. When RDI detects this, it applies a "back pressure" +mechanism to capture data from the source less frequently, which reduces the risk of running +out of space altogether. The systems that the source databases use to record changes can +retain the change data for at least a few hours, and RDI can catch up with the +changes as soon as the target connection recovers or the staging database has +more space available. + +## What does RDI do if the data is corrupted or invalid? + +The collector reports the data to RDI in a structured JSON format. If +the structure of the JSON data is invalid or if there is a fatal bug in the transformation +job then RDI can't transform the data. When this happens, RDI will store the original data +in a "dead letter queue" along with a message to say why it was rejected. The dead letter +queue is stored as a capped stream in the RDI staging database. You can see its contents +with Redis Insight or with the +[`redis-di get-rejected`]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-get-rejected" >}}) +command from the CLI. +--- +Title: Install on VMs +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Learn how to install RDI on one or more VMs +group: di +hideListLinks: false +linkTitle: Install on VMs +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +This guide explains how to install Redis Data Integration (RDI) on one or more VMs and integrate it with +your source database. You can also +[Install RDI on Kubernetes]({{< relref "/integrate/redis-data-integration/installation/install-k8s" >}}). + +{{< note >}}We recommend you always use the latest version, which is RDI v{{< rdi-version >}}. +{{< /note >}} + +## Create the RDI database + +RDI uses a database on your Redis Enterprise cluster to store its state +information. Use the Redis Enterprise Cluster Manager UI to create the RDI database with the following +requirements: + +{{< embed-md "rdi-db-reqs.md" >}} + +## Hardware sizing + +RDI is mainly CPU and network bound. +Each of the RDI VMs should have at least: + +{{< embed-md "rdi-vm-reqs.md" >}} + +## VM Installation Requirements + +You would normally install RDI on two VMs for High Availability (HA) but you can also install +just one VM if you don't need this. For example, you might not need HA during +development and testing. + +{{< note >}}You can't install RDI on a host where a Redis Enterprise cluster +is also installed, due to incompatible network rules. 
If you want to install RDI on a +host that you have previously used for Redis Enterprise then you must +use [`iptables`](https://www.netfilter.org/projects/iptables/index.html) to +"clean" the host before installation with the following command line: + +```bash + sudo iptables-save | awk '/^[*]/ { print $1 } + /^:[A-Z]+ [^-]/ { print $1 " ACCEPT" ; } + /COMMIT/ { print $0; }' | sudo iptables-restore +``` + +You may encounter problems if you use `iptables` v1.6.1 and earlier in +`nftables` mode. Use `iptables` versions later than v1.6.1 or enable the `iptables` +legacy mode with the following commands: + +```bash +sudo update-alternatives --set iptables /usr/sbin/iptables-legacy +sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy +``` + +Also, `iptables` versions 1.8.0-1.8.4 have known issues that can prevent RDI +from working, especially on RHEL 8. Ideally, use `iptables` v1.8.8, which is +known to work correctly with RDI. +{{< /note >}} + +The supported OS versions for RDI are: + +{{< embed-md "rdi-os-reqs.md" >}} + +You must run the RDI installer as a privileged user because it installs +[containerd](https://containerd.io/) and registers services. However, you don't +need any special privileges to run RDI processes for normal operation. + +RDI has a few +requirements for cloud VMs that you must implement before running the +RDI installer, or else installation will fail. The following sections +give full pre-installation instructions for [RHEL](#firewall-rhel) and +[Ubuntu](#firewall-ubuntu). + +### RHEL {#firewall-rhel} + +We recommend you turn off +[`firewalld`](https://firewalld.org/documentation/) +before installation using the command: + +```bash +sudo systemctl disable firewalld --now +``` + +However, if you do need to use `firewalld`, you must add the following rules: + +```bash +sudo firewall-cmd --permanent --add-port=443/tcp # RDI API +sudo firewall-cmd --permanent --add-port=6443/tcp # kube-apiserver +sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16 # Kubernetes pods +sudo firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16 # Kubernetes services +sudo firewall-cmd --reload +``` + +If you have `nm-cloud-setup.service` enabled, you must disable it and reboot the +node with the following commands: + +```bash +sudo systemctl disable nm-cloud-setup.service nm-cloud-setup.timer +sudo reboot +``` + +### Ubuntu {#firewall-ubuntu} + +We recommend you turn off +[Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (`ufw`) +before installation with the command: + +```bash +sudo ufw disable +``` + +However, if you do need to use `ufw`, you must add the following rules: + +```bash +sudo ufw allow 443/tcp # RDI API +sudo ufw allow 6443/tcp # kube-apiserver +sudo ufw allow from 10.42.0.0/16 to any # Kubernetes pods +sudo ufw allow from 10.43.0.0/16 to any # Kubernetes services +sudo ufw reload +``` + +## Installation steps + +Follow the steps below for each of your VMs: + +1. Download the RDI installer from the + [Redis download center](https://redis-enterprise-software-downloads.s3.amazonaws.com/redis-di/rdi-installation-{{< rdi-version >}}.tar.gz) + (from the *Modules, Tools & Integration* category) and extract it to your preferred installation + folder. + + ```bash + export RDI_VERSION={{< rdi-version >}} + wget https://redis-enterprise-software-downloads.s3.amazonaws.com/redis-di/rdi-installation-$RDI_VERSION.tar.gz + tar -xvf rdi-installation-$RDI_VERSION.tar.gz + ``` + +1. 
Go to the installation folder: + + ```bash + cd rdi_install/$RDI_VERSION + ``` + +1. Run the `install.sh` script as a privileged user: + + ```bash + sudo ./install.sh + ``` + + {{< note >}}RDI uses [K3s](https://k3s.io/) as part of its implementation. + By default, the installer installs K3s in the `/var/lib` directory, + but this might be a problem if you have limited space in `/var` + or your company policy forbids you to install there. You can + select a different directory for the K3s installation using the + `--installation-dir` option with `install.sh`: + + ```bash + sudo ./install.sh --installation-dir + ``` + {{< /note >}} + +The RDI installer collects all necessary configuration details and alerts you to potential issues, +offering options to abort, apply fixes, or provide additional information. +Once complete, it guides you through creating secrets and setting up your pipeline. + +{{< note >}}It is strongly recommended to specify a hostname rather than an IP address for +connecting to your RDI database, for the following reasons: + +- Any DNS resolution issues will be detected during the installation rather than + later during pipeline deployment. +- If you use TLS, your RDI database CA certificate must contain the hostname you specified + either as a common name (CN) or as a subject alternative name (SAN). CA certificates + usually don't contain IP addresses. +{{< /note >}} + +{{< note >}}If you specify `localhost` as the address of the RDI database server during +installation then the connection will fail if the actual IP address changes for the local +VM. For this reason, we recommend that you don't use `localhost` for the address. However, +if you do encounter this problem, you can fix it using the following commands on the VM +that is running RDI itself: + +```bash +sudo k3s kubectl delete nodes --all +sudo service k3s restart +``` +{{< /note >}} + +After the installation is finished, RDI is ready for use. + +### Supply cloud DNS information + +{{< note >}}This section is only relevant if you are installing RDI +on VMs in a cloud environment. +{{< /note >}} + +If you are using [Amazon Route 53](https://aws.amazon.com/route53/), +[Google Cloud DNS](https://cloud.google.com/dns?hl=en), or +[Azure DNS](https://azure.microsoft.com/en-gb/products/dns) +then you must supply the installer with the nameserver IP address +during installation. The table below +shows the appropriate IP address for each cloud provider: + +| Platform | Nameserver IP | +| :-- | :-- | +| [Amazon Route 53](https://aws.amazon.com/route53/) | 169.254.169.253 | +| [Google Cloud DNS](https://cloud.google.com/dns?hl=en) | 169.254.169.254 | +| [Azure DNS](https://azure.microsoft.com/en-gb/products/dns) | 168.63.129.16 | + +If you are using Route 53, you should first check that your VPC +is configured to allow it. See +[DNS attributes in your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/AmazonDNS-concepts.html#vpc-dns-support) +in the Amazon docs for more information. + +### Installing with High Availability + +To install RDI with High Availability (HA), perform the [Installation steps](#installation-steps) +on two different VMs. The first VM will automatically become the active (primary) instance, +while the second VM will become the passive (secondary) one. +When starting the RDI installation on the second VM, the installer will detect that the RDI +database is already in use and ask you to confirm that you intend to install RDI with HA. 
+ +After the installation is complete, you must set the source and target database secrets +on both VMs as described in [Deploy a pipeline]({{< relref "/integrate/redis-data-integration/data-pipelines/deploy" >}}). If you use `redis-di` to deploy your configuration, you only need to do this on one of the VMs, not both. + +In a High Availability setup, the RDI pipeline is only active on the primary instance (VM). +The two RDI instances will use the RDI database for leader election. If the primary instance fails +to renew the lease in the RDI database, it will lose the leadership and a failover to the secondary instance +will take place. After the failover, the secondary instance will become the primary one, +and the RDI pipeline will be active on that VM. + +## Prepare your source database + +Before deploying a pipeline, you must configure your source database to enable CDC. See the +[Prepare source databases]({{< relref "/integrate/redis-data-integration/data-pipelines/prepare-dbs" >}}) +section to learn how to do this. + +## Deploy a pipeline + +When the installation is complete, and you have prepared the source database for CDC, +you are ready to start using RDI. See the guides on how to +[configure]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines" >}}) and +[deploy]({{< relref "/integrate/redis-data-integration/data-pipelines/deploy" >}}) +RDI pipelines for more information. You can also configure and deploy a pipeline +using [Redis Insight]({{< relref "/develop/tools/insight/rdi-connector" >}}). + +## Uninstall RDI + +If you want to remove your RDI installation, go to the installation folder and run +the uninstall script as a privileged user: + +```bash +sudo ./uninstall.sh +``` + +The script will ask if you are sure before proceeding: + +``` +This will uninstall RDI and its dependencies, are you sure? [y, N] +``` + +If you type anything other than "y" here, the script will abort without making any changes +to RDI or your source database. +--- +Title: Requirements summary +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Requirements and recommendations for RDI installations. +group: di +hideListLinks: false +linkTitle: Requirements summary +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 5 +--- + +The sections below summarize the software and hardware requirements for +an RDI installation. + +## Hardware requirements for VM installation + +{{< embed-md "rdi-vm-reqs.md" >}} + +## OS requirements for VM installation + +{{< embed-md "rdi-os-reqs.md" >}} + +## Kubernetes/OpenShift supported versions + +{{< embed-md "rdi-k8s-reqs.md" >}} + +## RDI database requirements + +{{< embed-md "rdi-db-reqs.md" >}} + +## Supported source databases + +{{< embed-md "rdi-supported-source-versions.md" >}} +--- +Title: Install on Kubernetes +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Learn how to install RDI on Kubernetes +group: di +hideListLinks: false +linkTitle: Install on K8s +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 20 +--- + +This guide explains how to use the RDI [Helm chart](https://helm.sh/docs/topics/charts/) +to install on [Kubernetes](https://kubernetes.io/) (K8s). You can also +[Install RDI on VMs]({{< relref "/integrate/redis-data-integration/installation/install-vm" >}}). 
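+
+In outline, the installation consists of scaffolding a values file from the chart, editing it, and then running a standard `helm install` command, for example as sketched below. The release name `rdi` and the `rdi` namespace are only examples; the detailed steps follow later in this guide.
+
+```bash
+helm install rdi rdi-<version>.tgz -f rdi-values.yaml -n rdi --create-namespace
+```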
+ +The installation creates the following K8s objects: + +- A K8s [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) named `rdi`. + You can also use a different namespace name if you prefer. +- [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) and + [services](https://kubernetes.io/docs/concepts/services-networking/service/) for the + [RDI operator]({{< relref "/integrate/redis-data-integration/architecture#how-rdi-is-deployed" >}}), + [metrics exporter]({{< relref "/integrate/redis-data-integration/observability" >}}), and API server. +- A [service account](https://kubernetes.io/docs/concepts/security/service-accounts/) + and [RBAC resources](https://kubernetes.io/docs/reference/access-authn-authz/rbac) for the RDI operator. +- A [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) with RDI database details. +- [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) + with the RDI database credentials and TLS certificates. +- Other optional K8s resources such as [ingresses](https://kubernetes.io/docs/concepts/services-networking/ingress/) + that can be enabled depending on your K8s environment and needs. + +You can use this installation on [OpenShift](https://docs.openshift.com/) and other K8s distributions +including cloud providers' K8s managed clusters. + +You can configure the RDI Helm chart to pull the RDI images from [dockerhub](https://hub.docker.com/u/redis) +or from your own [private image registry](#using-a-private-image-registry). + +## Before you install + +Complete the following steps before installing the RDI Helm chart: + +- [Create the RDI database](#create-the-rdi-database) on your Redis Enterprise cluster. + +- Create a [user]({{< relref "/operate/rs/security/access-control/create-users" >}}) + for the RDI database if you prefer not to use the default password (see + [Access control]({{< relref "/operate/rs/security/access-control" >}}) for + more information). + +- Download the RDI Helm chart tar file from the + [Redis download center](https://redis-enterprise-software-downloads.s3.amazonaws.com/redis-di/rdi-{{< rdi-version >}}.tgz) (in the *Modules, Tools & Integration* category) . + + ```bash + export RDI_VERSION={{< rdi-version >}} + wget https://redis-enterprise-software-downloads.s3.amazonaws.com/redis-di/rdi-$RDI_VERSION.tgz + ``` + +- If you want to use a private image registry, + [prepare it with the RDI images](#using-a-private-image-registry). + +### Create the RDI database + +RDI uses a database on your Redis Enterprise cluster to store its state +information. Use the Redis Enterprise Cluster Manager UI to create the RDI database with the following +requirements: + +{{< embed-md "rdi-db-reqs.md" >}} + +You should then provide the details of this database in the [`values.yaml`](#the-valuesyaml-file) +file as described below. + +### Using a private image registry + +Add the RDI images from [dockerhub](https://hub.docker.com/u/redis) to your local registry. 
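+
+For example, one way to copy each of the images listed below into your own registry is to pull,
+re-tag, and push them with the Docker CLI. The following is only a sketch: `registry.example.com/redis`
+is a placeholder for your own registry and repository, and it assumes the RDI images are tagged with
+the RDI version you are installing (the third-party images keep their own tags, as listed below):
+
+```bash
+# Pull the image from Docker Hub, re-tag it for your private registry, and push it.
+docker pull redis/rdi-operator:$RDI_VERSION
+docker tag redis/rdi-operator:$RDI_VERSION registry.example.com/redis/rdi-operator:$RDI_VERSION
+docker push registry.example.com/redis/rdi-operator:$RDI_VERSION
+```
+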
+You need the following RDI images with tags matching the RDI version you want to install:
+
+- [redis/rdi-api](https://hub.docker.com/r/redis/rdi-api)
+- [redis/rdi-operator](https://hub.docker.com/r/redis/rdi-operator)
+- [redis/rdi-monitor](https://hub.docker.com/r/redis/rdi-monitor)
+- [redis/rdi-processor](https://hub.docker.com/r/redis/rdi-processor)
+- [redis/rdi-collector-api](https://hub.docker.com/r/redis/rdi-collector-api)
+- [redis/rdi-collector-initializer](https://hub.docker.com/r/redis/rdi-collector-initializer)
+
+In addition, the RDI Helm chart uses the following third-party images:
+
+- [redislabs/debezium-server:3.0.8.Final-rdi.1](https://hub.docker.com/r/redislabs/debezium-server),
+  based on `quay.io/debezium/server:3.0.8.Final` with minor modifications:
+  [Debezium](https://debezium.io/), an open source distributed platform for change data capture.
+- [redis/reloader:v1.1.0](https://hub.docker.com/r/redis/reloader), originally `ghcr.io/stakater/reloader:v1.1.0`:
+  [Reloader](https://github.com/stakater/Reloader), a K8s controller to watch changes to ConfigMaps
+  and Secrets and do rolling upgrades.
+- [redis/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6](https://hub.docker.com/r/redis/kube-webhook-certgen),
+  originally `registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6`:
+  [kube-webhook-certgen](https://github.com/jet/kube-webhook-certgen), a K8s webhook certificate generator and patcher.
+
+The example below shows how to specify the registry and image pull secret in your
+[`rdi-values.yaml`](#the-valuesyaml-file) file for the Helm chart:
+
+```yaml
+global:
+  # Global image settings.
+  # If using a private image registry, update the default values accordingly.
+  image:
+    registry: your-registry
+    repository: your-repository # If different from "redis"
+
+  # Image pull secrets to be used when using a private image registry.
+  imagePullSecrets:
+    - name: your-secret-name
+```
+
+To pull images from a private image registry, you must provide the image pull secret and, in some cases, also set the appropriate permissions. Follow the links below to learn how to use a private registry with:
+
+- [Rancher](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/kubernetes-resources-setup/kubernetes-and-docker-registries#using-a-private-registry)
+- [OpenShift](https://docs.openshift.com/container-platform/4.17/openshift_images/managing_images/using-image-pull-secrets.html)
+- [Amazon Elastic Kubernetes Service (EKS)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_on_EKS.html)
+- [Google Kubernetes Engine (GKE)](https://cloud.google.com/artifact-registry/docs/pull-cached-dockerhub-images)
+- [Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/cluster-container-registry-integration?tabs=azure-cli)
+
+## Supported versions of Kubernetes and OpenShift
+
+{{< embed-md "rdi-k8s-reqs.md" >}}
+
+## Install the RDI Helm chart
+
+1. Scaffold the default `values.yaml` file from the chart into a local
+   `rdi-values.yaml` file:
+
+    ```bash
+    helm show values rdi-<version>.tar.gz > rdi-values.yaml
+    ```
+
+1. Open the `rdi-values.yaml` file you just created, change or add the appropriate
+   values for your installation, and delete any values you have not changed so that
+   their defaults are used.
+   See [The `values.yaml` file](#the-valuesyaml-file) for more details.
+
+1. Run the `helm upgrade --install` command:
+
+    ```bash
+    helm upgrade --install rdi rdi-<version>.tar.gz -f rdi-values.yaml -n rdi --create-namespace
+    ```
+
+    {{< note >}}The above command installs RDI in a namespace called
+    `rdi`. If you want to use a different namespace, pass the option
+    `-n <namespace>` to the `helm upgrade --install` command instead.
+    {{< /note >}}
+
+### The `values.yaml` file
+
+The [`values.yaml`](https://helm.sh/docs/topics/charts/#templates-and-values) file inside the
+Helm chart contains the values you can set for the RDI Helm installation.
+See the comments next to each value for more information about the values you may need to add or change
+depending on your use case.
+
+At a minimum, you must set the values of `connection.host`, `connection.port`, and `connection.password`
+to enable the basic connection to the RDI database.
+You must also set `api.jwtKey`; RDI uses this value to encrypt the
+[JSON web token (JWT)](https://jwt.io/) used by the RDI API. Best practice is
+to generate a value containing 32 random bytes of data (equivalent to 256
+bits) and then encode this value as ASCII characters. Use the following
+command to generate the random key from the
+[`urandom` special file](https://en.wikipedia.org/wiki//dev/random):
+
+```bash
+head -c 32 /dev/urandom | base64
+```
+
+If you use TLS to connect to the RDI database, you must set the
+CA certificate content in `connection.ssl.cacert`. In addition, if you
+also use mTLS, you must set the client certificate and private key contents in
+`connection.ssl.cert` and `connection.ssl.key`.
+
+- You can add the certificate content directly in the `rdi-values.yaml` file
+  as follows:
+
+  ```yaml
+  connection:
+    ssl:
+      enabled: true
+      cacert: |
+        -----BEGIN CERTIFICATE-----
+        ...
+        -----END CERTIFICATE-----
+      cert: |
+        -----BEGIN CERTIFICATE-----
+        ...
+        -----END CERTIFICATE-----
+      key: |
+        -----BEGIN PRIVATE KEY-----
+        ...
+        -----END PRIVATE KEY-----
+  ```
+
+- Alternatively, you can use the `--set-file` argument to set these values to
+  the content of your certificate files as follows:
+
+  ```bash
+  helm upgrade --install rdi rdi-<version>.tar.gz -f rdi-values.yaml -n rdi --create-namespace \
+    --set connection.ssl.enabled=true \
+    --set-file connection.ssl.cacert=<path-to-ca-certificate> \
+    --set-file connection.ssl.cert=<path-to-client-certificate> \
+    --set-file connection.ssl.key=<path-to-private-key>
+  ```
+
+## Check the installation
+
+To verify the status of the K8s deployment, run the following command:
+
+```bash
+helm list -n rdi
+```
+
+The output looks like the following. Check that the `rdi` release is listed.
+With RDI 1.8.0 or later, check that the `default` release is also listed.
+
+```
+NAME    NAMESPACE REVISION UPDATED        STATUS   CHART          APP VERSION
+default rdi       1        2025-05-08 ... deployed pipeline-0.1.0
+rdi     rdi       3        2025-05-08 ... deployed rdi-1.0.0
+```
+
+Also, check that all pods have `Running` status:
+
+```bash
+kubectl get pod -n rdi
+
+NAME                       READY STATUS  RESTARTS AGE
+collector-api-<id>         1/1   Running 0        29m
+rdi-api-<id>               1/1   Running 0        29m
+rdi-metric-exporter-<id>   1/1   Running 0        29m
+rdi-operator-<id>          1/1   Running 0        29m
+rdi-reloader-<id>          1/1   Running 0        29m
+```
+
+You can verify that the RDI API works by adding a connection to the RDI API server to
+[Redis Insight]({{< relref "/develop/tools/insight/rdi-connector" >}}).
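+
+If you have not yet exposed the RDI API outside the cluster (see the section on ingress
+controllers below), one way to reach it for a quick check is to forward the API service to your
+local machine. This is only a sketch: it assumes the API service is named `rdi-api` and listens on
+port 8080, so check the actual service name and port with `kubectl get svc -n rdi` first:
+
+```bash
+# Forward local port 8080 to the RDI API service in the rdi namespace.
+kubectl port-forward service/rdi-api 8080:8080 -n rdi
+```
+
+While the port-forward is running, you can point Redis Insight at `localhost:8080` to test the connection.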
+ +## Using ingress controllers + +You must ensure that an appropriate +[ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) +is available in your K8s cluster to expose the RDI API service via the K8s +[`Ingress`](https://kubernetes.io/docs/concepts/services-networking/ingress/) +resource. Follow the documentation of your cloud provider or of +the ingress controller to install the controller correctly. + +### Using the `nginx` ingress controller on AKS + +On AKS, if you want to use the open source +[`nginx`](https://nginx.org/) +[ingress controller](https://github.com/kubernetes/ingress-nginx/blob/main/README.md#readme) +rather than the +[AKS application routing add-on](https://learn.microsoft.com/en-us/azure/aks/app-routing), +follow the AKS documentation for +[creating an unmanaged ingress controller](https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/load-bal-ingress-c/create-unmanaged-ingress-controller?tabs=azure-cli). +Specifically, ensure that one or both of the following Helm chart values is set: + +- `controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz` +- `controller.service.externalTrafficPolicy=Local` + +## Prepare your source database + +Before deploying a pipeline, you must configure your source database to enable CDC. See the +[Prepare source databases]({{< relref "/integrate/redis-data-integration/data-pipelines/prepare-dbs" >}}) +section to learn how to do this. + +## Deploy a pipeline + +When the Helm installation is complete and you have prepared the source database for CDC, +you are ready to start using RDI. +Use [Redis Insight]({{< relref "/develop/tools/insight/rdi-connector" >}}) to +[configure]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines" >}}) and +[deploy]({{< relref "/integrate/redis-data-integration/data-pipelines/deploy" >}}) +your pipeline. + +## Uninstall RDI + +If you want to remove your RDI K8s installation, first run +the following commands. (If you installed RDI into a custom namespace then +replace `rdi` with the name of your namespace.) + +```bash +kubectl delete pipeline default -n rdi +helm uninstall rdi -n rdi +kubectl delete namespace rdi +``` + +{{< note >}}The line `kubectl delete pipeline default -n rdi` is only needed for RDI 1.8.0 or above. +{{< /note >}} + +If you also want to delete the keys from your RDI database, connect to it with +[`redis-cli`]({{< relref "/develop/tools/cli" >}}) and run a +[`FLUSHALL`]({{< relref "/commands/flushall" >}}) command. +--- +Title: Upgrading RDI +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Learn how to upgrade an existing RDI installation +group: di +hideListLinks: false +linkTitle: Upgrade +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- + +## Upgrading a VM installation + +Follow the steps below to upgrade an existing +[VM installation]({{< relref "/integrate/redis-data-integration/installation/install-vm" >}}) +of RDI: + +1. Download the RDI installer from the [Redis download center](https://redis-enterprise-software-downloads.s3.amazonaws.com/redis-di/rdi-installation-{{< rdi-version >}}.tar.gz) + (in the *Modules, Tools & Integration* category) and extract it to your + preferred installation folder. 
+
+    ```bash
+    export RDI_VERSION={{< rdi-version >}}
+    wget https://redis-enterprise-software-downloads.s3.amazonaws.com/redis-di/rdi-installation-$RDI_VERSION.tar.gz
+    tar -xvf rdi-installation-$RDI_VERSION.tar.gz
+    ```
+
+1. Go to the installation folder:
+
+    ```bash
+    cd rdi_install/$RDI_VERSION
+    ```
+
+1. Run the `upgrade.sh` script as a privileged user. Note that you must pass
+   your RDI password to the script unless the password is empty.
+
+    ```bash
+    sudo ./upgrade.sh --rdi-password <password>
+    ```
+
+### Recovering from failure during a VM upgrade
+
+If the previous version is v1.4.4 or later, go to the `rdi_install/<version>`
+directory and run `sudo ./upgrade.sh` to revert to that version, as described in the section
+[Upgrading a VM installation](#upgrading-a-vm-installation) above.
+
+If the version you are replacing is earlier than v1.4.4, follow these steps:
+
+1. Run `redis-di --version` to check the current version.
+
+   If the version is the new one, copy the previous version
+   of the RDI CLI to `/usr/local/bin` with the following command:
+
+    ```bash
+    sudo cp rdi_install/<version>/deps/rdi-cli/<os>/redis-di /usr/local/bin
+    ```
+
+1. Check that the CLI version is correct by running `redis-di --version`.
+
+   Then, go to the `rdi_install/<version>` directory and run the
+   following command:
+
+    ```bash
+    sudo redis-di upgrade --rdi-host <host> --rdi-port <port>
+    ```
+
+{{< note >}}If the `collector-source` or the `processor` pods are not in the `Running` state after
+the upgrade, you must run `redis-di deploy` and check again that they are both in the
+`Running` state.
+{{< /note >}}
+
+### Upgrading a VM installation with High Availability
+
+If there is an active pipeline, upgrade RDI on the active VM first.
+This will cause a short pipeline downtime of up to two minutes.
+Afterwards, upgrade RDI on the passive VM. This will not cause any downtime.
+
+## Upgrading a Kubernetes installation
+
+Follow the steps below to upgrade an existing
+[Kubernetes]({{< relref "/integrate/redis-data-integration/installation/install-k8s" >}})
+installation of RDI:
+
+1. If you are using a private registry, pull the new versions of all images listed in
+   [Using a private image registry]({{< relref "/integrate/redis-data-integration/installation/install-k8s#using-a-private-image-registry" >}})
+   and add them to your local registry.
+
+1. Download the RDI Helm chart tar file from the [Redis download center](https://redis-enterprise-software-downloads.s3.amazonaws.com/redis-di/rdi-{{< rdi-version >}}.tgz)
+   (in the *Modules, Tools & Integration* category).
+
+    ```bash
+    export RDI_VERSION={{< rdi-version >}}
+    wget https://redis-enterprise-software-downloads.s3.amazonaws.com/redis-di/rdi-$RDI_VERSION.tgz
+    ```
+
+1. Adapt your `rdi-values.yaml` file to any changes in the new RDI version if needed.
+   See also [Upgrading to RDI 1.8.0 or later from an earlier version](#upgrading-to-rdi-180-or-later-from-an-earlier-version).
+   Before making any changes, save your existing `rdi-values.yaml` in case you need to revert
+   to the old RDI version for any reason.
+
+1. Run the `helm upgrade` command:
+
+    ```bash
+    helm upgrade --install rdi rdi-<version>.tar.gz -f rdi-values.yaml -n rdi
+    ```
+
+Note that you don't need to
+[deploy]({{< relref "/integrate/redis-data-integration/data-pipelines/deploy" >}})
+the RDI configuration again after this step.
+
+### Upgrading to RDI 1.8.0 or later from an earlier version
+
+When upgrading to RDI 1.8.0 or later from an earlier version,
+you must adapt your `rdi-values.yaml` file to the following changes:
+
+- All collector and processor values that were previously under `collector`,
+  `collectorSourceMetricsExporter`, and `processor` have been moved to
+  `operator.dataPlane.collector` and `operator.dataPlane.processor`.
+- `global.collectorApiEnabled` has been moved to `operator.dataPlane.collectorApi.enabled`,
+  and is now a boolean value, not `"0"` or `"1"`.
+- `api.authEnabled` is also now a boolean value, not `"0"` or `"1"`.
+- The following values have been deprecated: `rdiMetricsExporter.service.protocol`,
+  `rdiMetricsExporter.service.port`, `rdiMetricsExporter.serviceMonitor.path`,
+  `api.service.name`.
+
+### Verifying the upgrade
+
+Check that all pods have `Running` status:
+
+```bash
+kubectl get all -n rdi
+```
+
+If you find that the upgrade did not work as expected for any reason,
+then run the `helm upgrade` command again (as described in the section
+[Upgrading a Kubernetes installation](#upgrading-a-kubernetes-installation) above),
+but this time with the previous version you were upgrading from, and using
+your saved `rdi-values.yaml` for that version. This will restore your previous working state.
+
+{{< note >}}Downgrading from RDI 1.8.0 or later to an earlier version using `helm upgrade`
+will not work. If you need to perform such a downgrade, uninstall RDI completely first as
+described in [Uninstall RDI]({{< relref "/integrate/redis-data-integration/installation/install-k8s#uninstall-rdi" >}}),
+and then install the old version.
+{{< /note >}}
+
+## What happens during the upgrade?
+
+The upgrade process replaces the current RDI components with their new versions:
+
+- First, the control plane components are replaced. At this point, the pipeline
+  is still active but monitoring will be disconnected.
+- Second, the pipeline data plane components are replaced.
+  If a pipeline is active while upgrading, the `collector-source` and `processor`
+  pods will be restarted. The pipeline will pause for up to two minutes but it
+  will catch up very quickly after restarting.
+  The pipeline data and state are both stored in Redis, so data will not
+  be lost during the upgrade.
+---
+Title: Install and upgrade
+aliases: /integrate/redis-data-integration/ingest/installation/
+alwaysopen: false
+categories:
+- docs
+- integrate
+- rs
+- rdi
+description: Learn how to install and upgrade RDI
+group: di
+hideListLinks: false
+linkTitle: Install/upgrade
+summary: Redis Data Integration keeps Redis in sync with the primary database in near
+  real time.
+type: integration
+weight: 2
+---
+
+The guides in this section explain the options you have for installing and upgrading RDI.
+Before you use RDI, you must also configure your source database to enable CDC. See the
+[Prepare source databases]({{< relref "/integrate/redis-data-integration/data-pipelines/prepare-dbs" >}})
+section to learn how to do this.
+---
+Title: Redis Data Integration release notes 1.2 (June 2024)
+alwaysopen: false
+aliases: /integrate/redis-data-integration/ingest/release-notes/rdi-1-2/
+categories:
+- docs
+- operate
+- rs
+description: API server with a set of APIs to support Redis Insight with creating, testing, deploying & monitoring RDI pipelines. RDI Installer verifies all components are running at the end of installation.
+linkTitle: 1.2 (June 2024) +toc: 'true' +weight: 999 +--- + +> This minor release replaces the 1.0 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a +[_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient +and fast data structures that match your app's requirements. You specify the +transformations using a configuration system, so no coding is required. + +## Headlines + +- API server with a set of APIs to support Redis Insight with creating, testing, deploying & monitoring RDI pipelines. +- RDI Installer verifies all components are running at the end of installation. + +## Fixed Bugs + +- Support for source database TLS & mTLS was fixed. Certificate and file names used with the redis-di set-secret command can have any file names +- Mismatch between Reloader version provided and the one required. + +## Limitations + +- RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +- RDI write-behind (which is currently in preview) should not be used on the same data set that RDI ingest is writing to Redis. This would either cause an infinite loop or would harm the data integrity, since both ingest and write-behind are asynchronous, eventually-consistent processes. +--- +Title: Redis Data Integration release notes 1.6.2 (March 2025) +alwaysopen: false +categories: +- docs +- operate +- rs +description: Installation on Kubernetes with a Helm chart. Improvements for installation on VMs. +linkTitle: 1.6.2 (March 2025) +toc: 'true' +weight: 990 +--- + +> This maintenance release replaces the 1.6.1 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. 
+ +## Headlines + +- Fix: With an RDI namespace that contains dashes, `rdi-metrics-exporter` crashes immediately upon startup +- Fix reported security vulnerabilities +- Fix: Connection to PostgreSQL from Debezium fails with "could not read SSL key file" when using mTLS + +## Fixes & Improvements + +- **Resolved startup crash in `rdi-metrics-exporter`** + Fixed an issue where the exporter would crash on startup if the RDI namespace included dashes (e.g., `my-namespace`) that are not allowed in prometheus labels. + +- **Security Vulnerabilities Patched** + Addressed the following reported CVEs: + + - CVE-2019-14250 + - CVE-2019-17543 + - CVE-2023-32665 + - CVE-2024-52533 + - CVE-2020-17049 + - CVE-2024-47874 + +- **Improved mTLS compatibility in `collector-initializer`** + DER-formatted keys are now skipped during initialization, resolving a PostgreSQL connection error with Debezium (`could not read SSL key file`) when using mTLS. + +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.4.4 (January 2025) +alwaysopen: false +categories: +- docs +- operate +- rs +description: Installation on Kubernetes with a Helm chart. Improvements for installation on VMs. +linkTitle: 1.4.4 (January 2025) +toc: 'true' +weight: 993 +--- + +> This maintenance release replaces the 1.4.3 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- Installation on [Kubernetes]({{< relref "/integrate/redis-data-integration/installation/install-k8s" >}}) using a [Helm chart](https://helm.sh/docs/). You can install on [OpenShift](https://docs.openshift.com/) or other flavours of K8s using Helm. + +- Improvements for installation on VMs: + - Adding an `upgrade.sh` script + - Uninstall script removes RDI CLI + - Adding an ingress to the `collector-source` metrics exporter for VM installs + - Fix RDI version issues during upgrade + - Ensure `KUBECONFIG` is set during upgrade + - Upgrade [`datayoga`](https://github.com/datayoga-io/datayoga) to 1.127.0 - add_field block was rejected for one or more records but the entire batch was marked as rejected. 
making troubleshooting difficult + - Fix installer not setting `RDI_REDIS_SSL` properly if SSL is enabled + +## Issues fixed + +- **RDSC-3103**: If a record's transformation is rejected, the entire Processor batch ends up in DLQ +- **RDSC-3130**: Failed to install 1.4.3, but successful install with 1.2.8 +- **RDSC-3141**: There is no way to reach metrics when running RDI on VM +- **RDSC-3142**: `redis-di upgrade` fails with error message - CRITICAL - Error while attempting to upgrade RDI: Could not get the current version of the RDI instance +- **RDSC-3143**: RDI on VM upgrade error message - Get "http://localhost:8080/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused +- **RDSC-3156**: Add script to execute `redis-di upgrade` correctly + +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.8.0 (May 2025) +alwaysopen: false +categories: +- docs +- operate +- rs +description: | + Enhanced RDI operator for better pipeline orchestration, resiliency, observability and flexibility; + External collector support; + Labels and annotations to RDI data plane pods; + Custom Debezium image; + Calculated TTL for target database. +linkTitle: 1.8.0 (May 2025) +toc: 'true' +weight: 984 +--- + +{{< note >}}This minor release replaces the 1.6.7 release.{{< /note >}} + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- Enhanced RDI operator for better pipeline orchestration, resiliency, observability, and flexibility. It + will also enable many new features in the near future. +- You can now use an external collector that is not managed by RDI but writes into RDI streams + (Debezium compatible). +- You can now add labels and annotations to RDI data plane pods, for example to control service + mesh features. +- RDI now uses a custom image of Debezium (based on `3.0.8.Final`) to address known vulnerabilities. +- Added support for calculated TTL for target database keys via `expire` expressions. + +## Detailed changes + +### Helm chart changes + +- All collector and processor values that were previously under `collector`, `collectorSourceMetricsExporter`, and `processor` have been moved to `operator.dataPlane.collector` and `operator.dataPlane.processor`. +- `global.collectorApiEnabled` has been moved to `operator.dataPlane.collectorApi.enabled`, and is now a boolean value (`true` or `false`), not `"0"` or `"1"`. +- `api.authEnabled` is also now a boolean value, not `"0"` or `"1"`. 
+- The following values have been deprecated: `rdiMetricsExporter.service.protocol`, `rdiMetricsExporter.service.port`, `rdiMetricsExporter.serviceMonitor.path`, `api.service.name` +- You can now add custom labels and annotations to all RDI components. +- You can now disable the creation of the RDI system secrets. + +### Operator Improvements + +The RDI operator has been significantly enhanced in the following areas: + +- **Resilience**: The operator now always maintains the desired pipeline state. Manual changes or random disruptions are reverted automatically. +- **Automatic recovery**: When a configuration issue is resolved, the entire pipeline starts automatically, eliminating the need for manual redeployment. +- **Consistency**: A pipeline that has been stopped with `stop` will remain stopped after `deploy` or `reset`, until explicitly started again. +- **Enhanced configuration**: You can now configure data plane components in ways that were previously not supported, such as adding labels and annotations. +- **External collector support**: No collector resources are created for sources of type `external`. +- **Enhanced troubleshooting**: You can now gain extra insight into the pipeline state by examining the `Pipeline` and `PipelineRelease` custom K8s resources. + +### Other Features, Improvements and Enhancements + +- Added `expire` expression for target output in transformation jobs. +- Addressed security vulnerabilities: TLS certificate hostname verification is now ON by default. +- Improved Helm default values while preserving `values.yaml` formatting. +- Enhanced Helm values and templates for better configuration. +- Added a script to create or update secrets when using Helm (`rdi-secret.sh` in the Helm zip file). +- Improved validation schema and ensured backward compatibility. +- Fixed compatibility issues with newer versions of `requests` and `urllib3`. +- Improved error messages for JSON schema validation. +- Improved PostgreSQL documentation for mTLS. +- Added timestamps to the `status` command. +- Fixed issues with `primary_key` and `unique_constraint` attributes in Oracle metadata. +- Added `capture.mode` to MongoDB scaffolding. +- Improved Helm TLS setup for RDI database connections. +- Enhanced error handling and validation for transformation jobs. +- Improved documentation for supported platforms and configurations. + +### Fixes + +- Fixed HTTP 500 error when querying columns with tables parameter. +- Improved Helm TLS setup for RDI database connections. +- Fixed keystore overwrite when using mTLS on both source and RDI DBs in the collector. + +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.6.6 (April 2025) +alwaysopen: false +categories: +- docs +- operate +- rs +description: Installation on Kubernetes with a Helm chart. Improvements for installation on VMs. +linkTitle: 1.6.6 (April 2025) +toc: 'true' +weight: 986 +--- + +> This maintenance release replaces the 1.6.5 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. 
+- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- Fixed a Redis Insight issue when accessing the RDI API `/strategies` endpoint. +- Added a new script to the RDI Helm chart for managing the source and target database secrets. See + [Deploy a pipeline]({{< relref "/integrate/redis-data-integration/data-pipelines/deploy" >}}) + for more information. +- Fixed an issue when connecting from Redis Insight to the RDI API `/login` endpoint with a non-default user. +- Updated the RDI API `/schemas` endpoint to return a list of databases for MySQL and MariaDB, instead of an empty list of schemas. + +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.4.0 (October 2024) +alwaysopen: false +categories: +- docs +- operate +- rs +description: Installation on Kubernetes with a Helm chart. Improvements for installation on VMs. +linkTitle: 1.4.0 (October 2024) +toc: 'true' +weight: 997 +--- + +> This is a GA version of Redis Data Integration (RDI) that improves the installation of RDI. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- Installation on [Kubernetes]({{< relref "/integrate/redis-data-integration/installation/install-k8s" >}}) using a [Helm chart](https://helm.sh/docs/). You can install on [OpenShift](https://docs.openshift.com/) or other flavours of K8s using Helm. + +- Improvements for installation on VMs: + - Installer checks if the OS firewall is enabled on Ubuntu and RHEL. + - Installer verifies DNS resolution from RDI components. + - Installer provides log lines from components that failed during RDI deployment if a problem occurs. + - Improved verification of RDI installation. + - Installer verifies if the RDI database is in use by another instance of RDI. + - Installer checks and warns if any [`iptables`](https://www.netfilter.org/projects/iptables/index.html) rules are set. + - Improved message when RDI tries to connect to its Redis database with invalid TLS keys. 
+ +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.6.7 (May 2025) +alwaysopen: false +categories: +- docs +- operate +- rs +description: Installation on Kubernetes with a Helm chart. Improvements for installation on VMs. +linkTitle: 1.6.7 (May 2025) +toc: 'true' +weight: 985 +--- + +> This maintenance release replaces the 1.6.6 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- Update the RDI API `/tables` and `/metadata` endpoints to filter the results by schema for MySQL and + MariaDB. Schema and Database are the same for these databases. + +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.4.1 (November 2024) +alwaysopen: false +categories: +- docs +- operate +- rs +description: Installation on Kubernetes with a Helm chart. Improvements for installation on VMs. +linkTitle: 1.4.1 (November 2024) +toc: 'true' +weight: 996 +--- + +> This maintenance release replaces the 1.4.0 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- Installation on [Kubernetes]({{< relref "/integrate/redis-data-integration/installation/install-k8s" >}}) using a [Helm chart](https://helm.sh/docs/). You can install on [OpenShift](https://docs.openshift.com/) or other flavours of K8s using Helm. + +- Improvements for installation on VMs: + - Installer checks if the OS firewall is enabled on Ubuntu and RHEL. 
+ - Installer verifies DNS resolution from RDI components. + - Installer provides log lines from components that failed during RDI deployment if a problem occurs. + - Improved verification of RDI installation. + - Installer verifies if the RDI database is in use by another instance of RDI. + - Installer checks and warns if any [`iptables`](https://www.netfilter.org/projects/iptables/index.html) rules are set. + - Improved message when RDI tries to connect to its Redis database with invalid TLS keys. + +## Issues fixed + +- **RDSC-2806**: Remove incorrectly created deployment. +- **RDSC-2792**: Disable `kubectl run` checks. +- **RDSC-2782**: Fix `coredns` issue. + +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.6.3 (March 2025) +alwaysopen: false +categories: +- docs +- operate +- rs +description: Installation on Kubernetes with a Helm chart. Improvements for installation on VMs. +linkTitle: 1.6.3 (March 2025) +toc: 'true' +weight: 989 +--- + +> This maintenance release replaces the 1.6.2 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- Fix: RDI 1.6.2 VM Installer not working on RHEL-8 OS. + +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.6.4 (April 2025) +alwaysopen: false +categories: +- docs +- operate +- rs +description: Installation on Kubernetes with a Helm chart. Improvements for installation on VMs. +linkTitle: 1.6.4 (April 2025) +toc: 'true' +weight: 988 +--- + +> This maintenance release replaces the 1.6.3 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. 
+It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- Feature: Add support for Debezium 3.0.8 + +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.4.2 (November 2024) +alwaysopen: false +categories: +- docs +- operate +- rs +description: Installation on Kubernetes with a Helm chart. Improvements for installation on VMs. +linkTitle: 1.4.2 (November 2024) +toc: 'true' +weight: 995 +--- + +> This maintenance release replaces the 1.4.1 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- Installation on [Kubernetes]({{< relref "/integrate/redis-data-integration/installation/install-k8s" >}}) using a [Helm chart](https://helm.sh/docs/). You can install on [OpenShift](https://docs.openshift.com/) or other flavours of K8s using Helm. + +- Improvements for installation on VMs: + - Installer checks if the OS firewall is enabled on Ubuntu and RHEL. + - Installer verifies DNS resolution from RDI components. + - Installer provides log lines from components that failed during RDI deployment if a problem occurs. + - Improved verification of RDI installation. + - Installer verifies if the RDI database is in use by another instance of RDI. + - Installer checks and warns if any [`iptables`](https://www.netfilter.org/projects/iptables/index.html) rules are set. + - Improved message when RDI tries to connect to its Redis database with invalid TLS keys. + +## Issues fixed + +- **RDSC-2802**: Reintroduce checks for DNS resolution and network connectivity from inside a K8s pod. +- **RDSC-2804**: RDI installation previously failed if the user specified a non-default HTTPS port. + The user can now specify the port correctly. + +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.6.0 (February 2025) +alwaysopen: false +categories: +- docs +- operate +- rs +description: Installation on Kubernetes with a Helm chart. Improvements for installation on VMs. 
+linkTitle: 1.6.0 (February 2025) +toc: 'true' +weight: 992 +--- + +> This maintenance release replaces the 1.4.4 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- RDI now requires the RDI database to have the following properties set, otherwise + RDI will not start: + - `maxmemory_policy`: `noeviction`, + - `aof_enabled`: `1` + +- Allow RDI to run in any K8s namespace + +- Fix metadata API to support Oracle and SQL Server + +- Added [denormalisation lookup block]({{< relref "/integrate/redis-data-integration/reference/data-transformation/lookup" >}}) + +- Many bug fixes + +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.2.8 (August 2024) +alwaysopen: false +aliases: /integrate/redis-data-integration/ingest/release-notes/rdi-1-2-8/ +categories: +- docs +- operate +- rs +description: API server with a set of APIs to support Redis Insight with creating, testing, deploying & monitoring RDI pipelines. RDI Installer verifies all components are running at the end of installation. +linkTitle: 1.2.8 (August 2024) +toc: 'true' +weight: 998 +--- + +# Redis Data Integration 1.2.8 GA + +> This maintenance release replaces the 1.2 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- API server with a set of APIs to support Redis Insight with creating, testing, deploying & monitoring RDI pipelines. +- RDI Installer verifies all components are running at the end of installation. + +## Fixed Bugs + +- Fixed issue with [CoreDNS](https://coredns.io/) configuration. Previously, it could only resolve from the host DNS. +- Fixed incorrect validation schema for jobs in [Redis Insight](https://redis.io/docs/latest/develop/connect/insight/). 
+- The pipeline advanced settings were missing when downloading the pipeline from RDI but this is now fixed. + +## Limitations + +- RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +- RDI write-behind (which is currently in preview) should not be used on the same data set that RDI ingest is writing to Redis. This would either cause an infinite loop or would harm the data integrity, since both ingest and write-behind are asynchronous, eventually-consistent processes. +--- +Title: Redis Data Integration release notes 1.6.1 (March 2025) +alwaysopen: false +categories: +- docs +- operate +- rs +description: Installation on Kubernetes with a Helm chart. Improvements for installation on VMs. +linkTitle: 1.6.1 (March 2025) +toc: 'true' +weight: 991 +--- + +> This release replaces the 1.6.0 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- Fixed missing metrics and incorrect status for SQL Server. +- Improved Redis Insight autocomplete using schema description property. +- Restricted Redis connections to targets and non-Redis connections to sources. +- Enhanced scaffolded `config.yaml` with examples for each database type. +- VM Installer: Captured output of all sub-processes. +- Improved error message for critical errors when sending commands to Redis cluster fails. +- Fixed issue where pipeline status did not return to streaming after stopping and starting. +- Added RDI API logging, validation examples, and bug fixes. +- Disabled strict parsing of collector application properties to allow overriding. +- Fixed HTTP 500 error in `GET /pipelines/sources/{source}/columns` when passing tables parameter. +- VM setup: Fixed incorrect output status and incomplete upgrades in `upgrade.sh` script. +- Ensured `collector-api` service force upgrades when `Operator` restarts. +- Corrected typos in CLI option help messages. +- Generated and published [API reference page]({{< relref "/integrate/redis-data-integration/reference/api-reference" >}}). +- Resolved error in `collector-api` when Source secret contains special characters. +- Fixed RDI 1.6.0 installation failure for HA due to `rdi-operator` deployment timeout. +- Added support for downloading Debezium image from a private registry. +- VM Installer now supports Ubuntu 22.04 and Ubuntu 24.04. +- Resolved performance degradation in 1.6.0 that caused the initial sync to be 3-5 times slower. + +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. 
Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.6.5 (April 2025) +alwaysopen: false +categories: +- docs +- operate +- rs +description: Installation on Kubernetes with a Helm chart. Improvements for installation on VMs. +linkTitle: 1.6.5 (April 2025) +toc: 'true' +weight: 987 +--- + +> This maintenance release replaces the 1.6.4 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- Vulnerabilities: Resolve vulnerabilities in the `collector-api` component. + - CVE-2025-27363 + - CVE-2021-23336 + - CVE-2025-27113 + - CVE-2024-52533 + - CVE-2025-1632 + - CVE-2024-47535 + +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.4.3 (December 2024) +alwaysopen: false +categories: +- docs +- operate +- rs +description: Installation on Kubernetes with a Helm chart. Improvements for installation on VMs. +linkTitle: 1.4.3 (December 2024) +toc: 'true' +weight: 994 +--- + +> This maintenance release replaces the 1.4.2 release. + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required. + +## Headlines + +- Installation on [Kubernetes]({{< relref "/integrate/redis-data-integration/installation/install-k8s" >}}) using a [Helm chart](https://helm.sh/docs/). You can install on [OpenShift](https://docs.openshift.com/) or other flavours of K8s using Helm. + +- Improvements for installation on VMs: + - Installer checks if the OS firewall is enabled on Ubuntu and RHEL. + - Installer verifies DNS resolution from RDI components. + - Installer provides log lines from components that failed during RDI deployment if a problem occurs. 
+ - Improved verification of RDI installation. + - Installer verifies if the RDI database is in use by another instance of RDI. + - Installer checks and warns if any [`iptables`](https://www.netfilter.org/projects/iptables/index.html) rules are set. + - Improved message when RDI tries to connect to its Redis database with invalid TLS keys. + +## Issues fixed + +- **RDSC-2963**: Helm chart `rdiSysSecret` does not create an empty secret if you are not using a password. +- **RDSC-2729**: Use Debezium Server 2.7.3 and remove Prometheus. +- **RDSC-2333**: Ensure the installer creates the context file correctly. +- **RDSC-2806**: Remove incorrectly created deployment. +- **RDSC-2729**: Fix `processors` `null` instead of `{}`. +- **RDSC-2905**: Fix DNS check with multiple IP addresses. +- **RDSC-2845**: RDI Helm chart release is set with `tag: 0.0.0`, but this should be the current release. + +## Limitations + +RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +--- +Title: Redis Data Integration release notes 1.0 (June 2024) +alwaysopen: false +aliases: /integrate/redis-data-integration/ingest/release-notes/rdi-1-0/ +categories: +- docs +- operate +- rs +description: Changes to the processing mode. Simple installation. Silent installation. Pipeline orchestration. Logging. Monitoring. High availability mechanism. +linkTitle: 1.0 (June 2024) +toc: 'true' +weight: 1000 +--- + +This is the first General Availability version of Redis Data Integration (RDI). + +RDI’s mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +RDI keeps the Redis cache up to date with changes in the primary database, using a +[_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you _transform_ the data from relational tables into convenient +and fast data structures that match your app's requirements. You specify the +transformations using a configuration system, so no coding is required. + +## Headlines + +- Changes to the processing mode: The preview versions of RDI processed data inside the Redis Enterprise database using the shard CPU. The GA version moves the processing of data outside the cluster. RDI is now deployed on VMs or on Kubernetes (K8s). +- Simple installation: RDI ships with all of its dependencies. A simple interactive installer provides a streamlined process that takes a few minutes. +- Silent installation: RDI can be installed by software using a script and an input file. +- Pipeline orchestration: The preview versions of RDI required you to manually install and configure the Debezium server. In this version, we add support for source database configuration to the pipeline configuration and orchestration of all pipeline components including the Debezium server (RDI Collector). +- Logging: All RDI component logs are now shipped to a central folder and get rotated by RDI's logging mechanism. 
+- Monitoring: RDI comes with two Prometheus exporters, one For Debezium Server and one for RDI's pipeline data processing. +- High availability mechanism: The preview versions of RDI used an external clustering dependency to provide active-passive deployment of the Debezium server. The GA version has a Redis-based built-in fail-over mechanism between an active VM and a passive VM. Kubernetes deployments rely on K8s probes that are included in RDI components. + +## Limitations + +- RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits. +- RDI write-behind (which is currently in preview) should not be used on the same data set that RDI ingest is writing to Redis. This would either cause an infinite loop or would harm the data integrity, since both ingest and write-behind are asynchronous, eventually-consistent processes.--- +Title: Release notes +aliases: /integrate/redis-data-integration/ingest/release-notes/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +hideListLinks: true +linkTitle: Release notes +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 100 +--- + +Here's what changed recently in Redis Data Integration (RDI): + +{{< table-children columnNames="Version (Release date) ,Major changes" columnSources="LinkTitle,Description" enableLinks="LinkTitle" >}} +--- +Title: Preview version +alwaysopen: false +categories: +- docs +- operate +- rc +description: Describes where to view the preview version for RDI products +linkTitle: Preview +weight: 999 +--- + +RDI is now in general availability but you can still access an +[archived version of the docs for the preview version](https://docs.redis.com/rdi-preview/rdi/) +if you need to refer to them. Note that these docs will not be updated and +information in the current docs supersedes the content of the preview docs. + +There is also another RDI product, **Write-behind**, that is still in preview. +See the [Write-behind]({{< relref "/integrate/write-behind" >}}) docs for +more information. +--- +Title: Deploy a pipeline +aliases: /integrate/redis-data-integration/ingest/data-pipelines/data-type-handling/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Learn how to deploy an RDI pipeline +group: di +linkTitle: Deploy +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 2 +--- + +The sections below explain how to deploy a pipeline after you have created the required +[configuration]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines" >}}). + +## Set secrets + +Before you deploy your pipeline, you must set the authentication secrets for the +source and target databases. Each secret has a name that you can pass to the +[`redis-di set-secret`]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-set-secret" >}}) +command (VM deployment) or the `rdi-secret.sh` script (K8s deployment) to set the secret value. 
+You can then refer to these secrets in the `config.yaml` file using the syntax "`${SECRET_NAME}`" +(the sample [config.yaml file]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines#the-configyaml-file" >}}) shows these secrets in use). + +The table below lists all valid secret names. Note that the +username and password are required for the source and target, but the other +secrets are only relevant for TLS/mTLS connections. + +| Secret name | Description | +| :-- | :-- | +| `SOURCE_DB_USERNAME` | Username for the source database | +| `SOURCE_DB_PASSWORD` | Password for the source database | +| `SOURCE_DB_CACERT` | (For TLS only) Source database CA certificate | +| `SOURCE_DB_CERT` | (For mTLS only) Source database client certificate | +| `SOURCE_DB_KEY` | (For mTLS only) Source database private key | +| `SOURCE_DB_KEY_PASSWORD` | (For mTLS only) Source database private key password | +| `TARGET_DB_USERNAME` | Username for the target database | +| `TARGET_DB_PASSWORD` | Password for the target database | +| `TARGET_DB_CACERT` | (For TLS only) Target database CA certificate | +| `TARGET_DB_CERT` | (For mTLS only) Target database client certificate | +| `TARGET_DB_KEY` | (For mTLS only) Target database private key | +| `TARGET_DB_KEY_PASSWORD` | (For mTLS only) Target database private key password | + +{{< note >}}When creating secrets for TLS or mTLS, ensure that all certificates and keys are in `PEM` format. The only exception to this is that for PostgreSQL, the private key `SOURCE_DB_KEY` secret must be in `DER` format. If you have a key in `PEM` format, you must convert it to `DER` before creating the `SOURCE_DB_KEY` secret using the command: + +```bash +openssl pkcs8 -topk8 -inform PEM -outform DER -in /path/to/myclient.pem -out /path/to/myclient.pk8 -nocrypt +``` + +This command assumes that the private key is not encrypted. See the [`openssl` documentation](https://docs.openssl.org/master/) to learn how to convert an encrypted private key. +{{< /note >}} + +### Set secrets for VM deployment + +Use [`redis-di set-secret`]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-set-secret" >}}) +to set secrets for a VM deployment. + +The specific command lines for source secrets are as follows: + +```bash +# For username and password +redis-di set-secret SOURCE_DB_USERNAME yourUsername +redis-di set-secret SOURCE_DB_PASSWORD yourPassword + +# With source TLS, in addition to the above +redis-di set-secret SOURCE_DB_CACERT /path/to/myca.crt + +# With source mTLS, in addition to the above +redis-di set-secret SOURCE_DB_CERT /path/to/myclient.crt +redis-di set-secret SOURCE_DB_KEY /path/to/myclient.key +# Use this only if SOURCE_DB_KEY is password-protected +redis-di set-secret SOURCE_DB_KEY_PASSWORD yourKeyPassword +``` + +The corresponding command lines for target secrets are: + +```bash +# For username and password +redis-di set-secret TARGET_DB_USERNAME yourUsername +redis-di set-secret TARGET_DB_PASSWORD yourPassword + +# With target TLS, in addition to the above +redis-di set-secret TARGET_DB_CACERT /path/to/myca.crt + +# With target mTLS, in addition to the above +redis-di set-secret TARGET_DB_CERT /path/to/myclient.crt +redis-di set-secret TARGET_DB_KEY /path/to/myclient.key +# Use this only if TARGET_DB_KEY is password-protected +redis-di set-secret TARGET_DB_KEY_PASSWORD yourKeyPassword +``` + +### Set secrets for K8s/Helm deployment using the rdi-secret.sh script + +Use the `rdi-secret.sh` script to set secrets for a K8s/Helm deployment. 
To use this script, unzip the archive that contains the RDI Helm chart and navigate to the resulting folder. The `rdi-secret.sh` script is located in the `scripts` subfolder. The general pattern for using this script is:
+
+```bash
+scripts/rdi-secret.sh set <secret-name> <secret-value>
+```
+
+The script also lets you retrieve a specific secret or list all the secrets that have been set:
+
+```bash
+# Get specific secret
+scripts/rdi-secret.sh get <secret-name>
+
+# List all secrets
+scripts/rdi-secret.sh list
+```
+
+The specific command lines for source secrets are as follows:
+
+```bash
+# For username and password
+scripts/rdi-secret.sh set SOURCE_DB_USERNAME yourUsername
+scripts/rdi-secret.sh set SOURCE_DB_PASSWORD yourPassword
+
+# With source TLS, in addition to the above
+scripts/rdi-secret.sh set SOURCE_DB_CACERT /path/to/myca.crt
+
+# With source mTLS, in addition to the above
+scripts/rdi-secret.sh set SOURCE_DB_CERT /path/to/myclient.crt
+scripts/rdi-secret.sh set SOURCE_DB_KEY /path/to/myclient.key
+# Use this only if SOURCE_DB_KEY is password-protected
+scripts/rdi-secret.sh set SOURCE_DB_KEY_PASSWORD yourKeyPassword
+```
+
+The corresponding command lines for target secrets are:
+
+```bash
+# For username and password
+scripts/rdi-secret.sh set TARGET_DB_USERNAME yourUsername
+scripts/rdi-secret.sh set TARGET_DB_PASSWORD yourPassword
+
+# With target TLS, in addition to the above
+scripts/rdi-secret.sh set TARGET_DB_CACERT /path/to/myca.crt
+
+# With target mTLS, in addition to the above
+scripts/rdi-secret.sh set TARGET_DB_CERT /path/to/myclient.crt
+scripts/rdi-secret.sh set TARGET_DB_KEY /path/to/myclient.key
+# Use this only if TARGET_DB_KEY is password-protected
+scripts/rdi-secret.sh set TARGET_DB_KEY_PASSWORD yourKeyPassword
+```
+
+### Set secrets for K8s/Helm deployment using the kubectl command
+
+In some scenarios, you may prefer to use [`kubectl create secret generic`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_generic/)
+to set secrets for a K8s/Helm deployment. The general pattern of the commands is:
+
+```bash
+kubectl create secret generic <secret-name> \
+--namespace=rdi \
+--from-literal=<secret-key>=<secret-value>
+```
+
+Where `<secret-name>` is either `source-db` for source secrets or `target-db` for target secrets.
+
+If you use TLS or mTLS for either the source or target databases, you also need to create the `source-db-ssl` and/or `target-db-ssl` K8s secrets that contain the certificates used to establish secure connections.
The general pattern of the commands is:
+
+```bash
+kubectl create secret generic <secret-name>-ssl \
+--namespace=rdi \
+--from-file=<certificate-file-name>=<path-to-certificate-file>
+```
+
+The specific command lines for source secrets are as follows:
+
+```bash
+# Without source TLS
+# Create or update source-db secret
+kubectl create secret generic source-db --namespace=rdi \
+--from-literal=SOURCE_DB_USERNAME=yourUsername \
+--from-literal=SOURCE_DB_PASSWORD=yourPassword \
+--save-config --dry-run=client -o yaml | kubectl apply -f -
+
+# With source TLS
+# Create or update source-db secret
+kubectl create secret generic source-db --namespace=rdi \
+--from-literal=SOURCE_DB_USERNAME=yourUsername \
+--from-literal=SOURCE_DB_PASSWORD=yourPassword \
+--from-literal=SOURCE_DB_CACERT=/etc/certificates/source_db/ca.crt \
+--save-config --dry-run=client -o yaml | kubectl apply -f -
+# Create or update source-db-ssl secret
+kubectl create secret generic source-db-ssl --namespace=rdi \
+--from-file=ca.crt=/path/to/myca.crt \
+--save-config --dry-run=client -o yaml | kubectl apply -f -
+
+# With source mTLS
+# (Include the SOURCE_DB_KEY_PASSWORD line only if SOURCE_DB_KEY is password-protected.)
+# Create or update source-db secret
+kubectl create secret generic source-db --namespace=rdi \
+--from-literal=SOURCE_DB_USERNAME=yourUsername \
+--from-literal=SOURCE_DB_PASSWORD=yourPassword \
+--from-literal=SOURCE_DB_CACERT=/etc/certificates/source_db/ca.crt \
+--from-literal=SOURCE_DB_CERT=/etc/certificates/source_db/client.crt \
+--from-literal=SOURCE_DB_KEY=/etc/certificates/source_db/client.key \
+--from-literal=SOURCE_DB_KEY_PASSWORD=yourKeyPassword \
+--save-config --dry-run=client -o yaml | kubectl apply -f -
+# Create or update source-db-ssl secret
+kubectl create secret generic source-db-ssl --namespace=rdi \
+--from-file=ca.crt=/path/to/myca.crt \
+--from-file=client.crt=/path/to/myclient.crt \
+--from-file=client.key=/path/to/myclient.key \
+--save-config --dry-run=client -o yaml | kubectl apply -f -
+```
+
+The corresponding command lines for target secrets are:
+
+```bash
+# Without target TLS
+# Create or update target-db secret
+kubectl create secret generic target-db --namespace=rdi \
+--from-literal=TARGET_DB_USERNAME=yourUsername \
+--from-literal=TARGET_DB_PASSWORD=yourPassword \
+--save-config --dry-run=client -o yaml | kubectl apply -f -
+
+# With target TLS
+# Create or update target-db secret
+kubectl create secret generic target-db --namespace=rdi \
+--from-literal=TARGET_DB_USERNAME=yourUsername \
+--from-literal=TARGET_DB_PASSWORD=yourPassword \
+--from-literal=TARGET_DB_CACERT=/etc/certificates/target_db/ca.crt \
+--save-config --dry-run=client -o yaml | kubectl apply -f -
+# Create or update target-db-ssl secret
+kubectl create secret generic target-db-ssl --namespace=rdi \
+--from-file=ca.crt=/path/to/myca.crt \
+--save-config --dry-run=client -o yaml | kubectl apply -f -
+
+# With target mTLS
+# (Include the TARGET_DB_KEY_PASSWORD line only if TARGET_DB_KEY is password-protected.)
+# Create or update target-db secret
+kubectl create secret generic target-db --namespace=rdi \
+--from-literal=TARGET_DB_USERNAME=yourUsername \
+--from-literal=TARGET_DB_PASSWORD=yourPassword \
+--from-literal=TARGET_DB_CACERT=/etc/certificates/target_db/ca.crt \
+--from-literal=TARGET_DB_CERT=/etc/certificates/target_db/client.crt \
+--from-literal=TARGET_DB_KEY=/etc/certificates/target_db/client.key \
+--from-literal=TARGET_DB_KEY_PASSWORD=yourKeyPassword \
+--save-config --dry-run=client -o yaml | kubectl apply -f -
+# Create or update target-db-ssl secret
+kubectl create secret generic target-db-ssl --namespace=rdi 
\ +--from-file=ca.crt=/path/to/myca.crt \ +--from-file=client.crt=/path/to/myclient.crt \ +--from-file=client.key=/path/to/myclient.key \ +--save-config --dry-run=client -o yaml | kubectl apply -f - +``` + +Note that the certificate paths contained in the secrets `SOURCE_DB_CACERT`, `SOURCE_DB_CERT`, and `SOURCE_DB_KEY` (for the source database) and `TARGET_DB_CACERT`, `TARGET_DB_CERT`, and `TARGET_DB_KEY` (for the target database) are internal to RDI, so you *must* use the values shown in the example above. You should only change the certificate paths when you create the `source-db-ssl` and `target-db-ssl` secrets. + +## Deploy a pipeline + +When you have created your configuration, including the [jobs]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines#job-files" >}}), you are +ready to deploy. Use [Redis Insight]({{< relref "/develop/tools/insight/rdi-connector" >}}) +to configure and deploy pipelines for both VM and K8s installations. + +For VM installations, you can also use the +[`redis-di deploy`]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-deploy" >}}) +command to deploy a pipeline: + +```bash +redis-di deploy --dir +```--- +Title: Data type handling +aliases: /integrate/redis-data-integration/ingest/data-pipelines/data-type-handling/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Learn how relational data types are converted to Redis data types +group: di +linkTitle: Data type handling +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 20 +--- + +RDI automatically converts data that has a Debezium JSON schema into Redis types. +Some Debezium types require special conversion. For example: + +- Date and Time types are converted to epoch time. +- Decimal numeric types are converted to strings so your app can use them + without losing precision. + +The following Debezium logical types are supported: + +- double +- float +- io.debezium.data.Bits +- io.debezium.data.Json +- io.debezium.data.VariableScaleDecimal +- io.debezium.time.Date +- io.debezium.time.NanoTime +- io.debezium.time.NanoTimestamp +- io.debezium.time.MicroTime +- io.debezium.time.MicroTimestamp +- io.debezium.time.ZonedTime +- io.debezium.time.ZonedTimestamp +- org.apache.kafka.connect.data.Date +- org.apache.kafka.connect.data.Decimal +- org.apache.kafka.connect.data.Time + +These types are **not** supported and will return "Unsupported Error": + +- io.debezium.time.interval + +All other values are treated as plain strings. +--- +Title: Prepare AWS Aurora and PostgreSQL for RDI +aliases: /integrate/redis-data-integration/ingest/data-pipelines/prepare-dbs/my-sql-mariadb/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Prepare AWS Aurora/PostgreSQL databases to work with RDI +group: di +linkTitle: Prepare AWS Aurora/PostgreSQL +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 5 +--- + +Follow the steps in the sections below to prepare an +[AWS Aurora PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_GettingStartedAurora.CreatingConnecting.AuroraPostgreSQL.html) +database to work with RDI. + +## 1. Create a parameter group + +In the [Relational Database Service (RDS) console](https://console.aws.amazon.com/rds/), +navigate to **Parameter groups > Create parameter group**. 
You will see the panel shown +below: + +{{Create parameter group panel}} + +Enter the following information: + +| Name | Value | +| :-- | :-- | +| **Parameter group name** | rdi-aurora-pg | +| **Description** | Enable logical replication for RDI | +| **Engine Type** | Aurora PostgreSQL | +| **Parameter group family** | aurora-postgresql15 | +| **Type** | DB Cluster Parameter Group | + +Select **Create** to create the parameter group. + +## 2. Edit the parameter group + +Navigate to **Parameter groups** in the console. Select the `rdi-aurora-pg` +group you have just created and then select **Edit** . You will see this panel: + +{{Edit parameter group panel}} + +Search for the `rds.logical_replication` parameter and set its value to 1. Then, +select **Save Changes**. + +## 3. Select the new parameter group + +Go back to your target database on the RDS console, select **Modify** and then +scroll down to **Additional Configuration**. Set +the **DB Cluster Parameter Group** to the value `rdi-aurora-pg` that you have just added: + +{{Additional Configuration panel}} +--- +Title: Prepare MySQL/MariaDB for RDI +aliases: /integrate/redis-data-integration/ingest/data-pipelines/prepare-dbs/my-sql-mariadb/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Prepare MySQL and MariaDB databases to work with RDI +group: di +linkTitle: Prepare MySQL/MariaDB +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 2 +--- + +Follow the steps in the sections below to set up a MySQL or MariaDB +database for CDC with Debezium. + +## 1. Create a CDC user + +The Debezium connector needs a user account to connect to MySQL/MariaDB. This +user must have appropriate permissions on all databases where you want Debezium +to capture changes. + +Run the [MySQL CLI client](https://dev.mysql.com/doc/refman/8.3/en/mysql.html) +and then run the following commands: + +1. Create the CDC user: + + ```sql + mysql> CREATE USER 'user'@'localhost' IDENTIFIED BY 'password'; + ``` + +1. Grant the required permissions to the user: + + ```sql + # MySQL GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'user' IDENTIFIED BY 'password'; + + # MySQL v8.0 and above + mysql> GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'user'@'localhost'; + ``` + +1. Finalize the user's permissions: + + ```sql + mysql> FLUSH PRIVILEGES; + ``` + +## 2. Enable the binlog + +You must enable binary logging for MySQL replication. The binary logs record transaction +updates so that replication tools can propagate changes. You will need administrator +privileges to do this. + +First, you should check whether the `log-bin` option is already set to `ON`, using +the following query: + +```sql +// for MySql 5.x +mysql> SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::" +FROM information_schema.global_variables WHERE variable_name='log_bin'; +// for MySql 8.x +mysql> SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::" +FROM performance_schema.global_variables WHERE variable_name='log_bin'; +``` + +If `log-bin` is `OFF` then add the following properties to your +server configuration file: + +``` +server-id = 223344 # Querying variable is called server_id, e.g. 
SELECT variable_value FROM information_schema.global_variables WHERE variable_name='server_id'; +log_bin = mysql-bin +binlog_format = ROW +binlog_row_image = FULL +binlog_expire_logs_seconds = 864000 +``` + +You can run the query above again to check that `log-bin` is now `ON`. + +{{< note >}}If you are using [Amazon RDS for MySQL](https://aws.amazon.com/rds/mysql/) then +you must enable automated backups for your database before it can use binary logging. +If you don't enable automated backups first then the settings above will have no +effect.{{< /note >}} + +## 3. Enable GTIDs + +*Global transaction identifiers (GTIDs)* uniquely identify the transactions that occur +on a server within a cluster. You don't strictly need to use them with a Debezium MySQL +connector, but you might find it helpful to enable them. +Use GTIDs to simplify replication and to confirm that the primary and replica servers are +consistent. + +GTIDs are available in MySQL 5.6.5 and later. See the +[MySQL documentation about GTIDs](https://dev.mysql.com/doc/refman/8.0/en/replication-options-gtids.html#option_mysqld_gtid-mode) for more information. + +Follow the steps below to enable GTIDs. You will need access to the MySQL configuration file +to do this. + +1. Enable `gtid_mode`: + + ```sql + mysql> gtid_mode=ON + ``` + +1. Enable `enforce_gtid_consistency`: + + ```sql + mysql> enforce_gtid_consistency=ON + ``` + +1. Confirm the changes: + + ```sql + mysql> show global variables like '%GTID%'; + + >>> Result: + + +--------------------------+-------+ + | Variable_name | Value | + +--------------------------+-------+ + | enforce_gtid_consistency | ON | + | gtid_mode | ON | + +--------------------------+-------+ + ``` + +## 4. Configure session timeouts + +RDI captures an initial *snapshot* of the source database when it begins +the CDC process (see the +[architecture overview]({{< relref "/integrate/redis-data-integration/architecture#overview" >}}) +for more information). If your database is large then the connection could time out +while RDI is reading the data for the snapshot. You can prevent this using the +`interactive_timeout` and `wait_timeout` settings in your MySQL configuration file: + +``` +mysql> interactive_timeout= +mysql> wait_timeout= +``` + +## 5. Enable query log events + +If you want to see the original SQL statement for each binlog event then you should +enable `binlog_rows_query_log_events` (MySQL configuration) or +`binlog_annotate_row_events` (MariaDB configuration): + +``` +mysql> binlog_rows_query_log_events=ON + +mariadb> binlog_annotate_row_events=ON +``` + +This option is available in MySQL 5.6 and later. + +## 6. Check `binlog_row_value_options` + +You should check the value of the `binlog_row_value_options` variable +to ensure it is not set to `PARTIAL_JSON`. If it *is* set to +`PARTIAL_JSON` then Debezium might not be able to see `UPDATE` events. + +Check the current value of the variable with the following command: + +```sql +mysql> show global variables where variable_name = 'binlog_row_value_options'; + +>>> Result: + ++--------------------------+-------+ +| Variable_name | Value | ++--------------------------+-------+ +| binlog_row_value_options | | ++--------------------------+-------+ +``` + +If the value is `PARTIAL_JSON` then you should unset the variable: + +```sql +mysql> set @@global.binlog_row_value_options="" ; +``` + +## 7. Configuration is complete + +After following the steps above, your MySQL/MariaDB database is ready +for Debezium to use. 
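+
+As an optional final check (a convenience only, not a required step, and assuming you applied the settings above), you can confirm the key replication variables from the MySQL CLI before you connect RDI:
+
+```sql
+-- Expected values if the steps above were applied:
+-- log_bin = ON, binlog_format = ROW, binlog_row_image = FULL,
+-- and gtid_mode = ON if you enabled GTIDs in step 3.
+mysql> SHOW GLOBAL VARIABLES
+       WHERE variable_name IN ('log_bin', 'binlog_format', 'binlog_row_image', 'gtid_mode');
+```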
+--- +Title: Prepare Oracle and Oracle RAC for RDI +aliases: /integrate/redis-data-integration/ingest/data-pipelines/prepare-dbs/oracle/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Prepare Oracle and Oracle RAC databases to work with RDI +group: di +linkTitle: Prepare Oracle +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 1 +--- + +The Oracle Debezium connector uses +[Oracle LogMiner](https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-logminer-utility.html) +to get data from the commitlog to a view inside the database. Follow the +steps below to configure LogMiner and prepare your database for use with +RDI. + +## 1. Configure Oracle LogMiner + +The following example shows the configuration for Oracle LogMiner. + +{{< note >}}[Amazon RDS for Oracle](https://aws.amazon.com/rds/oracle/) +doesn't let you execute the commands +in the example below or let you log in as `sysdba`. See the +separate example below to [configure Amazon RDS for Oracle](#config-aws). +{{< /note >}} + +```sql +ORACLE_SID=ORACLCDB dbz_oracle sqlplus /nolog + +CONNECT sys/top_secret AS SYSDBA +alter system set db_recovery_file_dest_size = 10G; +alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile; +shutdown immediate +startup mount +alter database archivelog; +alter database open; +-- You should now see "Database log mode: Archive Mode" +archive log list + +exit; +``` + +### Configure Amazon RDS for Oracle {#config-aws} + +AWS provides its own set of commands to configure LogMiner. + +{{< note >}}Before executing these commands, +you must enable backups on your Oracle AWS RDS instance. +{{< /note >}} + +Check that Oracle has backups enabled with the following command: + +```sql +SQL> SELECT LOG_MODE FROM V$DATABASE; + +LOG_MODE +------------ +ARCHIVELOG +``` + +The `LOG_MODE` should be set to `ARCHIVELOG`. If it isn't then you +should reboot your Oracle AWS RDS instance. + +Once `LOG_MODE` is correctly set to ARCHIVELOG, execute the following +commands to complete the LogMiner configuration. The first command enables +archive logging and the second adds [supplemental logging](#supp-logging). + +```sql +exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours',24); +exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD'); +``` + +## 2. Enable supplemental logging {#supp-logging} + +You must enable supplemental logging for the tables you want to capture or +for the entire database. This lets Debezium capture the state of +database rows before and after changes occur. + +The following example shows how to configure supplemental logging for all columns +in a single table called `inventory.customers`: + +```sql +ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS; +``` + +{{< note >}}If you enable supplemental logging for *all* table columns, you will +probably see the size of the Oracle redo logs increase dramatically. Avoid this +by using supplemental logging only when you need it. {{< /note >}} + +You must also enable minimal supplemental logging at the database level with +the following command: + +```sql +ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; +``` + +## 3. Check the redo log sizing + +Before you use the Debezium connector, you should check with your +database administrator that there are enough +redo logs with enough capacity to store the data dictionary for your +database. 
In general, the size of the data dictionary increases with the number +of tables and columns in the database. If you don't have enough capacity in +the logs then you might see performance problems with both the database and +the Debezium connector. + +## 4. Set the Archive log destination + +You can configure up to 31 different destinations for archive logs +(you must have administrator privileges to do this). You can set parameters for +each destination to specify its purpose, such as log shipping for physical +standbys, or external storage to allow for extended log retention. Oracle reports +details about archive log destinations in the `V$ARCHIVE_DEST_STATUS` view. + +The Debezium Oracle connector only uses destinations that have a status of +`VALID` and a type of `LOCAL`. If you only have one destination with these +settings then Debezium will use it automatically. +If you have more than one destination with these settings, +then you should consult your database administrator about which one to +choose for Debezium. + +Use the `log.mining.archive.destination.name` property in the connector configuration +to select the archive log destination for Debezium. + +For example, suppose you have two archive destinations, `LOG_ARCHIVE_DEST_2` and +`LOG_ARCHIVE_DEST_3`, and they both have status set to `VALID` and type set to +`LOCAL`. Debezium could use either of these destinations, so you must select one +of them explicitly in the configuration. To select `LOG_ARCHIVE_DEST_3`, you would +use the following setting: + +```json +{ + "log.mining.archive.destination.name": "LOG_ARCHIVE_DEST_3" +} +``` + +## 5. Create a user for the connector {#create-dbz-user} + +The Debezium Oracle connector must run as an Oracle LogMiner user with +specific permissions. The following example shows some SQL that creates +an Oracle user account for the connector in a multi-tenant database model: + +```sql +sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba +CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/logminer_tbs.dbf' + SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; +exit; + +sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba +CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/logminer_tbs.dbf' + SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; +exit; + +sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba + +CREATE USER c##dbzuser IDENTIFIED BY dbz + DEFAULT TABLESPACE logminer_tbs + QUOTA UNLIMITED ON logminer_tbs + CONTAINER=ALL; + +GRANT CREATE SESSION TO c##dbzuser CONTAINER=ALL; +GRANT SET CONTAINER TO c##dbzuser CONTAINER=ALL; +GRANT SELECT ON V_$DATABASE to c##dbzuser CONTAINER=ALL; + +-- See `Limiting privileges` below if the privileges +-- granted by these two commands raise security concerns. +GRANT FLASHBACK ANY TABLE TO c##dbzuser CONTAINER=ALL; +GRANT SELECT ANY TABLE TO c##dbzuser CONTAINER=ALL; +-- + +GRANT SELECT_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL; +GRANT EXECUTE_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL; +GRANT SELECT ANY TRANSACTION TO c##dbzuser CONTAINER=ALL; +GRANT LOGMINING TO c##dbzuser CONTAINER=ALL; + +-- See `Limiting privileges` below if the privileges +-- granted by these two commands raise security concerns. 
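+-- (For context: the connector only needs CREATE TABLE once, to create its
+-- LOG_MINING_FLUSH table on first connection; you can revoke it again afterwards.)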
+GRANT CREATE TABLE TO c##dbzuser CONTAINER=ALL; +GRANT LOCK ANY TABLE TO c##dbzuser CONTAINER=ALL; +-- + +GRANT CREATE SEQUENCE TO c##dbzuser CONTAINER=ALL; + +GRANT EXECUTE ON DBMS_LOGMNR TO c##dbzuser CONTAINER=ALL; +GRANT EXECUTE ON DBMS_LOGMNR_D TO c##dbzuser CONTAINER=ALL; + +GRANT SELECT ON V_$LOG TO c##dbzuser CONTAINER=ALL; +GRANT SELECT ON V_$LOG_HISTORY TO c##dbzuser CONTAINER=ALL; +GRANT SELECT ON V_$LOGMNR_LOGS TO c##dbzuser CONTAINER=ALL; +GRANT SELECT ON V_$LOGMNR_CONTENTS TO c##dbzuser CONTAINER=ALL; +GRANT SELECT ON V_$LOGMNR_PARAMETERS TO c##dbzuser CONTAINER=ALL; +GRANT SELECT ON V_$LOGFILE TO c##dbzuser CONTAINER=ALL; +GRANT SELECT ON V_$ARCHIVED_LOG TO c##dbzuser CONTAINER=ALL; +GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO c##dbzuser CONTAINER=ALL; +GRANT SELECT ON V_$TRANSACTION TO c##dbzuser CONTAINER=ALL; + +GRANT SELECT ON V_$MYSTAT TO c##dbzuser CONTAINER=ALL; +GRANT SELECT ON V_$STATNAME TO c##dbzuser CONTAINER=ALL; + +exit; +``` + +### Limiting privileges + +The privileges granted in the example above are convenient, +but you may prefer to restrict them further to improve security. In particular, +you might want to prevent the Debezium user from creating tables, or +selecting or locking any table. + +The Debezium user needs the `CREATE TABLE` privilege to create the +`LOG_MINING_FLUSH` table when it connects for the first +time. After this point, it doesn't need to create any more tables, +so you can safely revoke this privilege with the following command: + +```sql +REVOKE CREATE TABLE FROM c##dbzuser container=all; +``` + +[The example above](#create-dbz-user) grants the `SELECT ANY TABLE` and +`FLASHBACK ANY TABLE` privileges for convenience, but only the tables synced to RDI +and the `V_$XXX` tables strictly need these privileges. +You can replace the `GRANT SELECT ANY TABLE` command with explicit +commands for each table. For example, you would use commands like the +following for the tables in our sample +[`chinook`](https://github.com/Redislabs-Solution-Architects/rdi-quickstart-postgres) +database. (Note that Oracle 19c requires you to run a separate `GRANT` +command for each table individually.) + +```sql +GRANT SELECT ON chinook.album TO c##dbzuser; +GRANT SELECT ON chinook.artist TO c##dbzuser; +GRANT SELECT ON chinook.customer TO c##dbzuser; +... +``` + +Similarly, instead of `GRANT FLASHBACK ANY TABLE`, you would use the following +commands: + +```sql +GRANT FLASHBACK ON chinook.album TO c##dbzuser; +GRANT FLASHBACK ON chinook.artist TO c##dbzuser; +GRANT FLASHBACK ON chinook.customer TO c##dbzuser; +... +``` + +The `LOCK` privilege is automatically granted by the `SELECT` +privilege, so you can omit this command if you have granted `SELECT` +on specific tables. + +### Revoking existing privileges + +If you initially set the Debezium user's privileges on all tables, +but you now want to restrict them, you can revoke the existing +privileges before resetting them as described in the +[Limiting privileges](#limiting-privileges) section. 
+ +Use the following commands to revoke and reset the `SELECT` privileges: + +```sql +REVOKE SELECT ANY TABLE FROM c##dbzuser container=all; +ALTER SESSION SET container=orclpdb1; + +GRANT SELECT ON chinook.album TO c##dbzuser; +-- ...etc +``` + +The equivalent commands for `FLASHBACK` are: + +```sql +REVOKE FLASHBACK ANY TABLE FROM c##dbzuser container=all; +ALTER SESSION SET container=orclpdb1; +GRANT FLASHBACK ON chinook.album TO c##dbzuser; +``` + +The `SELECT` privilege automatically includes the `LOCK` +privilege, so when you grant `SELECT` for specific tables +you should also revoke `LOCK` on all tables: + +```sql +REVOKE LOCK ANY TABLE FROM c##dbzuser container=all; +``` + +## 6. Configuration is complete + +Once you have followed the steps above, your Oracle database is ready +for Debezium to use. +--- +Title: Prepare PostgreSQL for RDI +aliases: /integrate/redis-data-integration/ingest/data-pipelines/prepare-dbs/postgresql/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Prepare PostgreSQL databases to work with RDI +group: di +linkTitle: Prepare PostgreSQL +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 2 +--- + +PostgreSQL supports several +[logical decoding plug-ins](https://wiki.postgresql.org/wiki/Logical_Decoding_Plugins) +to enable CDC. If you don't want to use the native `pgoutput` logical replication stream support +then you must install your preferred plug-in into the PostgreSQL server. Once you have done this, +you must enable a replication slot, and configure a user with privileges to perform the replication. + +If you are using a service like [Heroku Postgres](https://www.heroku.com/postgres) to host +your database then this might restrict the plug-ins you can use. If you can't use your preferred +plug-in then could try the `pgoutput` decoder if you are using PostgreSQL 10 or above. +If this doesn't work for you then you won't be able to use RDI with your database. + +### Amazon RDS for PostgreSQL + +Follow the steps below to enable CDC with [Amazon RDS for PostgreSQL](https://aws.amazon.com/rds/postgresql/): + +1. Set the instance parameter `rds.logical_replication` to 1. + +1. Check that the `wal_level` parameter is set to `logical` by running the query `SHOW wal_level` + as the database RDS master user. The parameter might not have this value in multi-zone replication + setups. You can't change the value manually but it should change automatically when you set the + `rds.logical_replication` parameter to 1. If it doesn't change then you probably just need to + restart your database instance. You can restart manually or wait until a restart occurs + during your maintenance window. + +1. Set the Debezium `plugin.name` parameter to `pgoutput`. + +1. Initiate logical replication from an AWS account that has the `rds_replication` role. The role grants + permissions to manage logical slots and to stream data using logical slots. By default, only the master user account on AWS has the `rds_replication` role on Amazon RDS, but if you have administrator privileges, + you can grant the role to other accounts using a query like the following: + + ```sql + GRANT rds_replication TO + ``` + + To enable accounts other than the master account to create an initial snapshot, you must grant `SELECT` + permission to the accounts on the tables to be captured. 
See the documentation about + [security for PostgreSQL logical replication](https://www.postgresql.org/docs/current/logical-replication-security.html) + for more information. + +## Install the logical decoding output plug-in + +As of PostgreSQL 9.4, the only way to read changes to the write-ahead-log is to +[install a logical decoding output plug-in](https://debezium.io/documentation/reference/2.6/postgres-plugins.html). +These plug-ins are written in C using PostgreSQL-specific APIs, as described in the +[PostgreSQL documentation](https://www.postgresql.org/docs/current/logicaldecoding-output-plugin.html). +The PostgreSQL connector uses one of Debezium’s supported logical decoding +plug-ins to receive change events from the database in either the default +[`pgoutput`](https://github.com/postgres/postgres/blob/master/src/backend/replication/pgoutput/pgoutput.c) format (supplied with PostgreSQL) or the +[`Protobuf`](https://github.com/protocolbuffers/protobuf) format. +See the +[decoderbufs Protobuf plug-in documentation](https://github.com/debezium/postgres-decoderbufs) +for more details about how to compile it and also its requirements and limitations. + +For simplicity, Debezium also provides a container image that compiles and installs the plug-ins +on top of the upstream PostgreSQL server image. Use this image as an example of the steps +involved in the installation. + +{{< note >}} The Debezium logical decoding plug-ins have been tested on Linux machines, but if you are +using Windows or other operating systems, the installation steps might be different from +those listed here. {{< /note >}} + +### Plug-in differences + +Plug-ins don't all behave in exactly the same way. All of them refresh information about +the database schema when they detect that it has changed, but the `pgoutput` plug-in is +more "eager" than some other plug-ins to do this. For example, `pgoutput` will refresh +when it detects a change to the default value of a column but other plug-ins won't +notice this until another, more significant change happens (such as adding a new table +column). + +The Debezium project maintains a +[Java class](https://github.com/debezium/debezium/blob/main/debezium-connector-postgres/src/test/java/io/debezium/connector/postgresql/DecoderDifferences.java) that tracks the known differences between plug-ins. + + +## Configure the PostgreSQL server + +If you want to use a logical decoding plug-in other than the default `pgoutput` then +you must first configure it in the `postgresql.conf` file. Set the `shared_preload_libraries` +parameter to load your plug-in at startup. For example, to load the `decoderbufs` +plug-in, you would add the following line: + +``` +# MODULES +shared_preload_libraries = 'decoderbufs' +``` + +Add the line below to configure the replication slot (for any plug-in). +This instructs the server to use logical decoding with the write-ahead log. + +``` +# REPLICATION +wal_level = logical +``` + +You can also set other PostgreSQL streaming replication parameters if you need them. +For example, you can use `max_wal_senders` and `max_replication_slots` to increase +the number of connectors that can access the sending server concurrently, +and `wal_keep_size` to limit the maximum WAL size that a replication slot retains. +The +[configuration parameters](https://www.postgresql.org/docs/current/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-SENDER) +documentation describes all the parameters you can use. + +PostgreSQL’s logical decoding uses replication slots. 
These are guaranteed to retain all the WAL +segments that Debezium needs even when Debezium suffers an outage. You should monitor replication +slots carefully to avoid excessive disk consumption and other conditions such as catalog bloat that can arise +if a replication slot is used infrequently. See the PostgreSQL documentation about +[replication slots](https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS) +for more information. +If you are using a `synchronous_commit` setting other than `on`, then you should set `wal_writer_delay` +to a value of about 10 milliseconds to ensure a low latency for change events. If you don't set this then +the default value of about 200 milliseconds will apply. + +{{< note >}}This guide summarizes the operation of the PostgreSQL write-ahead log, but we strongly +recommend you consult the [PostgreSQL write-ahead log](https://www.postgresql.org/docs/current/wal-configuration.html) +documentation to get a better understanding.{{< /note >}} + +## Set up permissions + +The Debezium connector needs a database user that has the REPLICATION and LOGIN roles so that it +can perform replications. By default, a superuser has these roles but for security reasons, you +should give the minimum necessary permissions to the Debezium user rather than full superuser +permissions. + +If you have administrator privileges then you can create a role for your Debezium user +using a query like the following. Note that these are the *minimum* permissions the user +needs to perform replications, but you might also need to grant other permissions. + +```sql +CREATE ROLE REPLICATION LOGIN; +``` + +## Set privileges for Debezium to create PostgreSQL publications with `pgoutput` + +The Debezium user needs specific permissions to work with the `pgoutput` plug-in. +The plug-in captures change events from the +[*publications*](https://www.postgresql.org/docs/current/logical-replication-publication.html) +that PostgreSQL produces for your chosen source tables. A publication contains change events from +one or more tables that are filtered using criteria from a *publication specification*. + +If you have administrator privileges, you can create the publication specification +manually or you can grant the Debezium user the privileges to create the specification +automatically. The required privileges are: + +- Replication privileges in the database to add the table to a publication. +- `CREATE` privileges on the database to add publications. +- `SELECT` privileges on the tables to copy the initial table data. Table owners + automatically have `SELECT` permission for the table. + +To add a table to a publication, the user must be an owner of the table. However, in +this case, the source table already exists, so you must use a PostgreSQL replication +group to share ownership between the Debezium user and the original owner. Configure +the replication group using the following commands: + +1. Create the replication group (the name `replication_group` here is + just an example): + + ```sql + CREATE ROLE replication_group; + ``` +1. Add the original owner of the table to the group: + + ```sql + GRANT replication_group TO original_owner; + ``` + +1. Add the Debezium replication user to the group: + + ```sql + GRANT replication_group TO replication_user; + ``` +1. 
Transfer ownership of the table to `replication_group`: + + ```sql + ALTER TABLE table_name OWNER TO replication_group; + ``` + +You must also set the value of the `publication.autocreate.mode` parameter to `filtered` +to allow Debezium to specify the publication configuration. See the +[Debezium documentation for `publication.autocreate.mode`](https://debezium.io/documentation/reference/2.6/connectors/postgresql.html#postgresql-publication-autocreate-mode) +to learn more about this setting. + +## Configure PostgreSQL for replication with the Debezium connector host + +You must configure the database to allow replication with the host that runs +the PostgreSQL Debezium connector. To do this, add an entry to the +host-based authentication file, `pg_hba.conf`, for each client that needs to +use replication. For example, to enable replication for `` locally, +on the server machine, you would add a line like the following: + +``` +local replication trust +``` + +To allow `` on localhost to receive replication changes using IPV4, +add the line: + +``` +host replication 127.0.0.1/32 trust +``` + +To allow `` on localhost to receive replication changes using IPV6, +add the line: + +``` +host replication ::1/128 trust +``` + +Find out more from the PostgreSQL pages about +[`pg_hba.conf`](https://www.postgresql.org/docs/10/auth-pg-hba-conf.html) +and +[network address types](https://www.postgresql.org/docs/current/datatype-net-types.html). + +## Supported PostgreSQL topologies + +You can use the Debezium PostgreSQL connector with a standalone PostgreSQL server or +with a cluster of servers. +For versions 12 and below, PostgreSQL supports logical replication slots on only primary servers. +This means that Debezium can only connect to a primary server for CDC and the connection will +stop if this server fails. If the same server is promoted to primary when service resumes +then you can simply restart the Debezium connector. However, if a different server is +promoted to primary, then you must reconfigure Debezium to use the new server +before restarting. Also, make sure the new server has the correct plug-in and configuration +for Debezium. +--- +Title: Prepare SQL Server for RDI +aliases: /integrate/redis-data-integration/ingest/data-pipelines/prepare-dbs/sql-server/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Prepare SQL Server databases to work with RDI +group: di +linkTitle: Prepare SQL Server +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 2 +--- + +To prepare your SQL Server database for Debezium, you must first create a dedicated Debezium user, +run a script to enable CDC globally, and then separately enable CDC for each table you want to +capture. You need administrator privileges to do this. + +Once you enable CDC, it captures all of the INSERT, UPDATE, and DELETE operations +on your chosen tables. The Debezium connector can then emit these events to RDI. + +## 1. Create a Debezium user + +It is strongly recommended to create a dedicated Debezium user for the connection between RDI +and the source database. When using an existing user, ensure that the required +permissions are granted and that the user is added to the CDC role. + +1. 
Create a user with the Transact-SQL below: + + ```sql + USE master + GO + CREATE LOGIN MyUser WITH PASSWORD = 'My_Password' + GO + USE MyDB + GO + CREATE USER MyUser FOR LOGIN MyUser + GO + ``` + + Replace `MyUser`, `My_Password` and `MyDB` with your chosen values. + +1. Grant required permissions: + + ```sql + USE master + GO + GRANT VIEW SERVER STATE TO MyUser + GO + USE MyDB + GO + EXEC sp_addrolemember N'db_datareader', N'MyUser' + GO + ``` + +## 2. Enable CDC on the database + +There are two system stored procedures to enable CDC (you need +administrator privileges to run these). Use `sys.sp_cdc_enable_db` +to enable CDC for the whole database and then `sys.sp_cdc_enable_table` to enable CDC for individual tables. + +Before running the procedures, ensure that: + +- You are a member of the `sysadmin` fixed server role for the SQL Server. +- You are a `db_owner` of the database. +- The SQL Server Agent is running. + +Then, assuming your database is called `MyDB`, run the script below to enable CDC: + +```sql +USE MyDB +GO +EXEC sys.sp_cdc_enable_db +GO +``` + +{{< note >}}For SQL Server on AWS RDS, you must use a different stored procedure: +```sql +EXEC msdb.dbo.rds_cdc_enable_db 'Chinook' +GO +``` +{{< /note >}} + +When you enable CDC for the database, it creates a schema called `cdc` and also +a CDC user, metadata tables, and other system objects. + +## 3. Enable CDC for the tables you want to capture + +1. You must also enable CDC on the tables you want Debezium to capture using the +following commands (again, you need administrator privileges for this): + + ```sql + USE MyDB + GO + + EXEC sys.sp_cdc_enable_table + @source_schema = N'dbo', + @source_name = N'MyTable', + @role_name = N'MyRole', + @supports_net_changes = 0 + GO + ``` + + Repeat this for every table you want to capture. + + {{< note >}}The value for `@role_name` can’t be a fixed database role, such as `db_datareader`. + Specifying a new name will create a corresponding database role that has full access to the + captured change data. + {{< /note >}} + +1. Add the Debezium user to the CDC role: + + ```sql + USE MyDB + GO + EXEC sp_addrolemember N'MyRole', N'MyUser' + GO + ``` + +## 4. Check that you have access to the CDC table + +You can use another stored procedure `sys.sp_cdc_help_change_data_capture` +to query the CDC information for the database and check you have enabled +it correctly. To do this, connect as the Debezium user you created previously (`MyUser`). + +1. Run the `sys.sp_cdc_help_change_data_capture` stored procedure to query + the CDC configuration. For example, if your database was called `MyDB` then you would + run the following: + + ```sql + USE MyDB; + GO + EXEC sys.sp_cdc_help_change_data_capture + GO + ``` + +1. The query returns configuration information for each table in the database that + has CDC enabled and that contains change data that you are authorized to + access. If the result is empty then you should check that you have privileges + to access both the capture instance and the CDC tables. + +### Troubleshooting + +If no CDC is happening then it might mean that SQL Server Agent is down. 
You can check for this using the SQL query shown below: + +```sql +IF EXISTS (SELECT 1 + FROM master.dbo.sysprocesses + WHERE program_name = N'SQLAgent - Generic Refresher') +BEGIN + SELECT @@SERVERNAME AS 'InstanceName', 1 AS 'SQLServerAgentRunning' +END +ELSE +BEGIN + SELECT @@SERVERNAME AS 'InstanceName', 0 AS 'SQLServerAgentRunning' +END +``` + +If the query returns a result of 0, you need to need to start SQL Server Agent using the following commands: + +```sql +EXEC xp_servicecontrol N'START',N'SQLServerAGENT'; +GO +``` + +## SQL Server capture job agent configuration parameters + +In SQL Server, the parameters that control the behavior of the capture job agent +are defined in the SQL Server table `msdb.dbo.cdc_jobs`. If you experience performance +problems while running the capture job agent then you can adjust the capture jobs +settings to reduce CPU load. To do this, run the `sys.sp_cdc_change_job` stored procedure +with your new parameter values. + +{{< note >}}A full guide to configuring the SQL Server capture job agent parameters +is outside the scope of the Redis documentation.{{< /note >}} + +The following parameters are the most important ones for modifying the capture agent behavior +of the Debezium SQL Server connector: + +* `pollinginterval`: This specifies the number of seconds that the capture agent + waits between log scan cycles. A higher value reduces the load on the database + host, but increases latency. A value of 0 specifies no wait between scans. + The default value is 5. +* `maxtrans`: This specifies the maximum number of transactions to process during + each log scan cycle. After the capture job processes the specified number of + transactions, it pauses for the length of time that `pollinginterval` specifies + before the next scan begins. A lower value reduces the load on the database host, + but increases latency. The default value is 500. +* `maxscans`: This specifies a limit on the number of scan cycles that the capture + job can attempt when capturing the full contents of the database transaction log. + If the continuous parameter is set to 1, the job pauses for the length of time + that the `pollinginterval` specifies before it resumes scanning. A lower values + reduces the load on the database host, but increases latency. The default value is 10. + +See the SQL Server documentation for more information about capture agent parameters. + +## SQL Server on Azure + +You can also use the Debezium SQL Server connector with SQL Server on Azure. +See Microsoft's guide to +[configuring SQL Server on Azure for CDC with Debezium](https://learn.microsoft.com/en-us/samples/azure-samples/azure-sql-db-change-stream-debezium/azure-sql%2D%2Dsql-server-change-stream-with-debezium/) +for more information. + +## Handling changes to the schema + +RDI can't adapt automatically when you change the schema of a CDC table in SQL Server. For example, +if you add a new column to a table you are capturing then RDI will generate errors +instead of capturing the changes correctly. See Debezium's +[SQL Server schema evolution](https://debezium.io/documentation/reference/stable/connectors/sqlserver.html#sqlserver-schema-evolution) +docs for more information. + +If you have administrator privileges, you can follow the steps below to update RDI after +a schema change and resume CDC. See the +[online schema updates](https://debezium.io/documentation/reference/stable/connectors/sqlserver.html#online-schema-updates) +documentation for further details. + +1. 
Make your changes to the source table schema. + +1. Create a new capture table for the updated source table by running the `sys.sp_cdc_enable_table` stored + procedure with a new, unique value for the parameter `@capture_instance`. For example, if the old value + was `dbo_MyTable`, you could replace it with `dbo_MyTable_v2` (you can see the existing values by running + stored procedure `sys.sp_cdc_help_change_data_capture`): + + ```sql + EXEC sys.sp_cdc_enable_table + @source_schema = N'dbo', + @source_name = N'MyTable', + @role_name = N'MyRole', + @capture_instance = N'dbo_MyTable_v2', + @supports_net_changes = 0 + GO + ``` + +1. When Debezium starts streaming from the new capture table, drop the old capture table by running + the `sys.sp_cdc_disable_table` stored procedure with the parameter `@capture_instance` set to the old + capture instance name, `dbo_MyTable`: + + ```sql + EXEC sys.sp_cdc_disable_table + @source_schema = N'dbo', + @source_name = N'MyTable', + @capture_instance = N'dbo_MyTable' + GO + ``` + +{{< note >}}RDI will *not* correctly capture changes that happen in the time gap between changing +the source schema (step 1 above) and updating the value of `@capture_instance` (step 2). +Try to keep the gap as short as possible or perform the update at a time when you expect +few changes to the data.{{< /note >}}--- +Title: Prepare source databases +aliases: /integrate/redis-data-integration/ingest/data-pipelines/prepare-dbs/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Enable CDC features in your source databases +group: di +hideListLinks: false +linkTitle: Prepare source databases +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- + +Each database uses a different mechanism to track changes to its data and +generally, these mechanisms are not switched on by default. +RDI's Debezium collector uses these mechanisms for change data capture (CDC), +so you must prepare your source database before you can use it with RDI. + +RDI supports the following source databases: + +{{< embed-md "rdi-supported-source-versions.md" >}} + +The pages in this section give detailed instructions to get your source +database ready for Debezium to use: +--- +Title: Data denormalization +aliases: /integrate/redis-data-integration/ingest/data-pipelines/data-denormalization/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Learn about denormalization strategies +group: di +linkTitle: Data denormalization +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- + +The data in the source database is often +[*normalized*](https://en.wikipedia.org/wiki/Database_normalization). +This means that columns can't have composite values (such as arrays) and relationships between entities +are expressed as mappings of primary keys to foreign keys between different tables. +Normalized data models reduce redundancy and improve data integrity for write queries but this comes +at the expense of speed. +A Redis cache, on the other hand, is focused on making *read* queries fast, so RDI provides data +*denormalization* to help with this. + +## Nest strategy + +*Nesting* is the strategy RDI uses to denormalize many-to-one relationships in the source database. 
+It does this by representing the
+parent object (the "one") as a JSON document with the children (the "many") nested inside a JSON map
+attribute in the parent. The diagram below shows a nesting with the child objects in a map
+called `InvoiceLineItems`:
+
+{{< image filename="/images/rdi/ingest/nest-flow.webp" width="500px" >}}
+
+You configure nesting with a `nest` block in the child entities' RDI job, as shown in this example:
+
+```yaml
+source:
+  server_name: chinook
+  schema: public
+  table: InvoiceLine
+output:
+  - uses: redis.write
+    with:
+      nest: # cannot co-exist with other parameters such as 'key'
+        parent:
+          # server_name: chinook
+          # schema: public
+          table: Invoice
+        nesting_key: InvoiceLineId # cannot be composite
+        parent_key: InvoiceId # cannot be composite
+        path: $.InvoiceLineItems # path must start from document root ($)
+        structure: map # only map supported for now
+        on_update: merge # only merge supported for now
+        data_type: json # only json supported for now
+```
+
+The job has a `with` section under `output` that includes the `nest` block.
+The job must include the following attributes in the `nest` block:
+
+- `parent`: This specifies the RDI data stream for the parent entities. Typically, you only
+  need to supply the parent `table` name, unless you are nesting children under a parent that comes from
+  a different source database. If you do this then you must also specify `server_name` and
+  `schema` attributes. Note that this attribute refers to a Redis *key* that will be added to the target
+  database, not to a table you can access from the pipeline. See [Using nesting](#using-nesting) below
+  for the format of the key that is generated.
+- `nesting_key`: The field of the child entity that stores the unique ID (primary key) of the child entity.
+- `parent_key`: The field in the parent entity that stores the unique ID (primary key) of the parent entity.
+- `child_key`: The field in the child entity that stores the unique ID (foreign key) of the parent entity.
+  You only need to add this attribute if the name of the child's foreign key field is different from the parent's.
+- `path`: The [JSONPath](https://goessner.net/articles/JsonPath/)
+  for the map where you want to store the child entities. The path must start with the `$` character, which denotes
+  the document root.
+- `structure`: (Optional) The type of JSON nesting structure for the child entities. Currently, only a JSON map
+  is supported so if you supply this attribute then the value must be `map`.
+
+## Using nesting
+
+There are several important things to note when you use nesting:
+
+- When you specify `nest` in the job, you must also set the `data_type` attribute to `json` and
+  the `on_update` attribute to `merge` in the surrounding `output` block.
+- Key expressions are *not* supported for the `nest` output blocks. The parent key is always calculated
+  using the following template:
+
+  ```bash
+  <parent-table>:<parent-key>:<parent-key-value>
+  ```
+
+  For example:
+
+  ```bash
+  Invoice:InvoiceId:1
+  ```
+
+- If you specify `expire` in the `nest` output block then this will set the expiration on the *parent* object.
+- You can only use one level of nesting.
+- If you are using PostgreSQL then you must make the following change for all child tables that you want to nest:
+
+  ```sql
+  ALTER TABLE <table_name> REPLICA IDENTITY FULL;
+  ```
+
+  This configuration affects the information written to the write-ahead log (WAL) and whether it is available
+  for RDI to capture.
By default, PostgreSQL only records
+  modified fields in the log, which means that it might omit the `parent_key`. This can cause incorrect updates to the
+  Redis key in the destination database.
+  See the
+  [Debezium PostgreSQL Connector Documentation](https://debezium.io/documentation/reference/connectors/postgresql.html#postgresql-replica-identity)
+  for more information about this.
+---
+Title: Configure data pipelines
+linkTitle: Configure
+description: Learn how to configure ingest pipelines for data transformation
+weight: 1
+alwaysopen: false
+categories: ["redis-di"]
+aliases: /integrate/redis-data-integration/ingest/data-pipelines/data-pipelines/
+---
+
+RDI implements
+[change data capture](https://en.wikipedia.org/wiki/Change_data_capture) (CDC)
+with *pipelines*. (See the
+[architecture overview]({{< relref "/integrate/redis-data-integration/architecture#overview" >}})
+for an introduction to pipelines.)
+
+## Overview
+
+An RDI pipeline captures change data records from the source database, and transforms them
+into Redis data structures. It writes each of these new structures to a Redis target
+database under its own key.
+
+By default, RDI transforms the source data into
+[hashes]({{< relref "/develop/data-types/hashes" >}}) or
+[JSON objects]({{< relref "/develop/data-types/json" >}}) for the target with a
+standard data mapping and a standard format for the key.
+However, you can also provide your own custom transformation [jobs](#job-files)
+for each source table, using your own data mapping and key pattern. You specify these
+jobs declaratively with YAML configuration files that require no coding.
+
+The data transformation involves two separate stages. First, the data ingested
+during CDC is automatically transformed to a JSON format. Then,
+this JSON data gets passed on to your custom transformation for further processing.
+
+You can provide a job file for each source table you want to transform, but you
+can also add a *default job* for any tables that don't have their own.
+You must specify the full name of the source table in the job file (or the special
+name "*" in the default job) and you
+can also include filtering logic to skip data that matches a particular condition.
+As part of the transformation, you can specify whether you want to store the
+data in Redis as
+[JSON objects]({{< relref "/develop/data-types/json" >}}),
+[hashes]({{< relref "/develop/data-types/hashes" >}}),
+[sets]({{< relref "/develop/data-types/sets" >}}),
+[streams]({{< relref "/develop/data-types/streams" >}}),
+[sorted sets]({{< relref "/develop/data-types/sorted-sets" >}}), or
+[strings]({{< relref "/develop/data-types/strings" >}}).
+
+The diagram below shows the flow of data through the pipeline:
+
+{{< image filename="/images/rdi/ingest/RDIPipeDataflow.webp" >}}
+
+## Pipeline configuration
+
+RDI uses a set of [YAML](https://en.wikipedia.org/wiki/YAML)
+files to configure each pipeline. The following diagram shows the folder
+structure of the configuration:
+
+{{< image filename="images/rdi/ingest/ingest-config-folders.webp" width="600px" >}}
+
+The main configuration for the pipeline is in the `config.yaml` file.
+This specifies the connection details for the source database (such
+as host, username, and password) and also the queries that RDI will use
+to extract the required data. You should place job configurations in the `Jobs`
+folder if you want to specify your own data transformations.
+
+The sections below describe the two types of configuration file in more detail.
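+
+For a concrete picture of this layout, here is a minimal sketch of a pipeline configuration folder
+(the directory and job file names are illustrative only; see [Job files](#job-files) below for how
+the job files are used):
+
+```
+my-pipeline/
+    config.yaml
+    jobs/
+        default-job.yaml
+        invoice.yaml
+```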
+ +## The `config.yaml` file + +Here is an example of a `config.yaml` file. Note that the values of the +form "`${name}`" refer to secrets that you should set as described in +[Set secrets]({{< relref "/integrate/redis-data-integration/data-pipelines/deploy#set-secrets" >}}). +In particular, you should normally use secrets as shown to set the source +and target username and password rather than storing them in plain text in this file. + +```yaml +sources: + mysql: + type: cdc + logging: + level: info + connection: + type: mysql + host: # e.g. localhost + port: 3306 + # User and password are injected from the secrets. + user: ${SOURCE_DB_USERNAME} + password: ${SOURCE_DB_PASSWORD} + # Additional properties for the source collector: + # List of databases to include (optional). + # databases: + # - database1 + # - database2 + + # List of tables to be synced (optional). + # tables: + # If only one database is specified in the databases property above, + # then tables can be defined without the database prefix. + # .: + # List of columns to be synced (optional). + # columns: + # - + # - + # List of columns to be used as keys (optional). + # keys: + # - + + # Example: Sync specific tables. + # tables: + # Sync a specific table with all its columns: + # redislabscdc.account: {} + # Sync a specific table with selected columns: + # redislabscdc.emp: + # columns: + # - empno + # - fname + # - lname + + # Advanced collector properties (optional): + # advanced: + # Sink collector properties - see the full list at + # https://debezium.io/documentation/reference/stable/operations/debezium-server.html#_redis_stream + # sink: + # Optional hard limits on memory usage of RDI streams. + # redis.memory.limit.mb: 300 + # redis.memory.threshold.percentage: 85 + + # Uncomment for production so RDI Collector will wait on replica + # when writing entries. + # redis.wait.enabled: true + # redis.wait.timeout.ms: 1000 + # redis.wait.retry.enabled: true + # redis.wait.retry.delay.ms: 1000 + + # Source specific properties - see the full list at + # https://debezium.io/documentation/reference/stable/connectors/ + # source: + # snapshot.mode: initial + # Uncomment if you want a snapshot to include only a subset of the rows + # in a table. This property affects snapshots only. + # snapshot.select.statement.overrides: . + # The specified SELECT statement determines the subset of table rows to + # include in the snapshot. + # snapshot.select.statement.overrides..: + + # Example: Snapshot filtering by order status. + # To include only orders with non-pending status from customers.orders + # table: + # snapshot.select.statement.overrides: customer.orders + # snapshot.select.statement.overrides.customer.orders: SELECT * FROM customers.orders WHERE status != 'pending' ORDER BY order_id DESC + + # Quarkus framework properties - see the full list at + # https://quarkus.io/guides/all-config + # quarkus: + # banner.enabled: "false" + +targets: + # Redis target database connections. + # The default connection must be named 'target' and is used when no + # connection is specified in jobs or no jobs + # are deployed. However multiple connections can be defined here and used + # in the job definition output blocks: + # (e.g. target1, my-cloud-redis-db2, etc.) + target: + connection: + type: redis + # Host of the Redis database to which RDI will + # write the processed data. + host: # e.g. localhost + # Port for the Redis database to which RDI will + # write the processed data. + port: # e.g. 
12000 + # User of the Redis database to which RDI will write the processed data. + # Uncomment if you are not using the default user. + # user: ${TARGET_DB_USERNAME} + # Password for Redis target database. + password: ${TARGET_DB_PASSWORD} + # SSL/TLS configuration: Uncomment to enable secure connections. + # key: ${TARGET_DB_KEY} + # key_password: ${TARGET_DB_KEY_PASSWORD} + # cert: ${TARGET_DB_CERT} + # cacert: ${TARGET_DB_CACERT} +processors: + # Interval (in seconds) on which to perform retry on failure. + # on_failed_retry_interval: 5 + # The batch size for reading data from the source database. + # read_batch_size: 2000 + # Time (in ms) after which data will be read from stream even if + # read_batch_size was not reached. + # duration: 100 + # The batch size for writing data to the target Redis database. Should be + # less than or equal to the read_batch_size. + # write_batch_size: 200 + # Enable deduplication mechanism (default: false). + # dedup: + # Max size of the deduplication set (default: 1024). + # dedup_max_size: + # Error handling strategy: ignore - skip, dlq - store rejected messages + # in a dead letter queue + # error_handling: dlq +``` + +The main sections of the file configure [`sources`](#sources) and [`targets`](#targets). + +### Sources + +The `sources` section has a subsection for the source that +you need to configure. The source section starts with a unique name +to identify the source (in the example we have a source +called `mysql` but you can choose any name you like). The example +configuration contains the following data: + +- `type`: The type of collector to use for the pipeline. + Currently, the only types we support are `cdc` and `external`. + If the source type is set to `external`, no collector resources will be created by the operator, + and all other source sections should be empty or not specified at all. +- `connection`: The connection details for the source database: `type`, `host`, `port`, + and credentials (`username` and `password`). + - `type` is the source database type, one of `mariadb`, `mysql`, `oracle`, `postgresql`, or `sqlserver`. + - If you use [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security)/ + or [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication#mTLS) to connect + to the source database, you may need to specify additional properties in the + `advanced` section with references to the corresponding certificates depending + on the source database type. Note that these properties **must** be references to + secrets that you should set as described in [Set secrets]({{< relref "/integrate/redis-data-integration/data-pipelines/deploy#set-secrets" >}}). +- `databases`: List of all databases to collect data from for source database types + that support multiple databases, such as `mysql` and `mariadb`. +- `schemas`: List of all schemas to collect data from for source database types + that support multiple schemas, such as `oracle`, `postgresql`, and `sqlserver`. +- `tables`: List of all tables to collect data from. Each table is identified by its + full name, including a database or schema prefix. If there is a single + database or schema, this prefix can be omitted. 
+ For each table, you can specify: + - `columns`: A list of the columns you are interested in (the default is to + include all columns) + - `keys`: A list of columns to create a composite key if your table + doesn't already have a [`PRIMARY KEY`](https://www.w3schools.com/sql/sql_primarykey.asp) or + [`UNIQUE`](https://www.w3schools.com/sql/sql_unique.asp) constraint. + - `snapshot_sql`: A query to be used when performing the initial snapshot. + By default, a query that contains all listed columns of all listed tables will be used. +- `advanced`: These optional properties configure other Debezium-specific features. + The available sub-sections are: + - `source`: Properties for reading from the source database. + See the Debezium [Source connectors](https://debezium.io/documentation/reference/stable/connectors/) + pages for more information about the properties available for each database type. + - `sink`: Properties for writing to Redis streams in the RDI database. + See the Debezium [Redis stream properties](https://debezium.io/documentation/reference/stable/operations/debezium-server.html#_redis_stream) + page for the full set of available properties. + - `quarkus`: Properties for the Debezium server, such as the log level. See the + Quarkus [Configuration options](https://quarkus.io/guides/all-config) + docs for the full set of available properties. + +### Targets + +Use this section to provide the connection details for the target Redis +database(s). As with the sources, you should start each target section +with a unique name that you are free to choose (here, we have used +`target` as an example). In the `connection` section, you can specify the +`type` of the target database, which must be `redis`, along with +connection details such as `host`, `port`, and credentials (`username` and `password`). +If you use [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security)/ +or [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication#mTLS) to connect +to the target database, you must specify the CA certificate (for TLS), +and the client certificate and private key (for mTLS) in `cacert`, `cert`, and `key`. +Note that these certificates **must** be references to secrets +that you should set as described in [Set secrets]({{< relref "/integrate/redis-data-integration/data-pipelines/deploy#set-secrets" >}}) +(it is not possible to include these certificates as plain text in the file). + +{{< note >}}If you specify `localhost` as the address of either the source or target server during +installation then the connection will fail if the actual IP address changes for the local +VM. For this reason, we recommend that you don't use `localhost` for the address. However, +if you do encounter this problem, you can fix it using the following commands on the VM +that is running RDI itself: + +```bash +sudo k3s kubectl delete nodes --all +sudo service k3s restart +``` +{{< /note >}} + +## Job files + +You can optionally supply one or more job files that specify how you want to +transform the captured data before writing it to the target. +Each job file contains a YAML +configuration that controls the transformation for a particular table from the source +database. You can also add a `default-job.yaml` file to provide +a default transformation for tables that don't have a specific job file of their own. + +The job files have a structure like the following example. 
This configures a default +job that: + +- Writes the data to a Redis hash +- Adds a field `app_code` to the hash with a value of `foo` +- Adds a prefix of `aws` and a suffix of `gcp` to the key + +```yaml +source: + table: "*" + row_format: full +transform: + - uses: add_field + with: + fields: + - field: after.app_code + expression: "`foo`" + language: jmespath +output: + - uses: redis.write + with: + data_type: hash + key: + expression: concat(['aws', '#', table, '#', keys(key)[0], '#', values(key)[0], '#gcp']) + language: jmespath +``` + +The main sections of these files are: + +- `source`: This is a mandatory section that specifies the data items that you want to + use. You can add the following properties here: + - `server_name`: Logical server name (optional). + - `db`: Database name (optional) + - `schema`: Database schema (optional) + - `table`: Database table name. This refers to a table name you supplied in `config.yaml`. The default + job doesn't apply to a specific table, so use "*" in place of the table name for this job only. + - `row_format`: Format of the data to be transformed. This can take the values `data_only` (default) to + use only the payload data, or `full` to use the complete change record. See the `transform` section below + for details of the extra data you can access when you use the `full` option. + - `case_insensitive`: This applies to the `server_name`, `db`, `schema`, and `table` properties + and is set to `true` by default. Set it to `false` if you need to use case-sensitive values for these + properties. + +- `transform`: This is an optional section describing the transformation that the pipeline + applies to the data before writing it to the target. The `uses` property specifies a + *transformation block* that will use the parameters supplied in the `with` section. See the + [data transformation reference]({{< relref "/integrate/redis-data-integration/reference/data-transformation" >}}) + for more details about the supported transformation blocks, and also the + [JMESPath custom functions]({{< relref "/integrate/redis-data-integration/reference/jmespath-custom-functions" >}}) reference. You can test your transformation logic using the [dry run]({{< relref "/integrate/redis-data-integration/reference/api-reference/#tag/secure/operation/job_dry_run_api_v1_pipelines_jobs_dry_run_post" >}}) feature in the API. + + {{< note >}}If you set `row_format` to `full` under the `source` settings, you can access extra data from the + change record in the transformation: + - Use the expression `key.key` to get the generated Redis key as a string. + - Use `before.` to get the value of a field *before* it was updated in the source database + (the field name by itself gives you the value *after* the update).{{< /note >}} + +- `output`: This is a mandatory section to specify the data structure(s) that + RDI will write to + the target along with the text pattern for the key(s) that will access it. + Note that you can map one record to more than one key in Redis or nest + a record as a field of a JSON structure (see + [Data denormalization]({{< relref "/integrate/redis-data-integration/data-pipelines/data-denormalization" >}}) + for more information about nesting). You can add the following properties in the `output` section: + - `uses`: This must have the value `redis.write` to specify writing to a Redis data + structure. You can add more than one block of this type in the same job. 
+ - `with`: + - `connection`: Connection name as defined in `config.yaml` (by default, the connection named `target` is used). + - `data_type`: Target data structure when writing data to Redis. The supported types are `hash`, `json`, `set`, + `sorted_set`, `stream` and `string`. + - `key`: This lets you override the default key for the data structure with custom logic: + - `expression`: Expression to generate the key. + - `language`: Expression language, which must be `jmespath` or `sql`. + - `expire`: Positive integer value indicating a number of seconds for the key to expire. + If you don't specify this property, the key will never expire. + +{{< note >}}In a job file, the `transform` section is optional, but if you don't specify +a `transform`, you must specify custom key logic in `output.with.key`. You can include +both of these sections if you want both a custom transform and a custom key.{{< /note >}} + +Another example below shows how you can rename the `fname` field to `first_name` in the table `emp` +using the +[`rename_field`]({{< relref "/integrate/redis-data-integration/reference/data-transformation/rename_field" >}}) block. It also demonstrates how you can set the key of this record instead of relying on +the default logic. (See the +[Transformation examples]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples" >}}) +section for more examples of job files.) + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: rename_field + with: + from_field: fname + to_field: first_name +output: + - uses: redis.write + with: + connection: target + key: + expression: concat(['emp:fname:',fname,':lname:',lname]) + language: jmespath +``` + +See the +[RDI configuration file]({{< relref "/integrate/redis-data-integration/reference/config-yaml-reference" >}}) +reference for full details about the +available source, transform, and target configuration options and see +also the +[data transformation reference]({{< relref "/integrate/redis-data-integration/reference/data-transformation" >}}) +for details of all the available transformation blocks. + +## Source preparation + +Before using the pipeline you must first prepare your source database to use +the Debezium connector for *change data capture (CDC)*. See the +[architecture overview]({{< relref "/integrate/redis-data-integration/architecture#overview" >}}) +for more information about CDC. +Each database type has a different set of preparation steps. You can +find the preparation guides for the databases that RDI supports in the +[Prepare source databases]({{< relref "/integrate/redis-data-integration/data-pipelines/prepare-dbs" >}}) +section. + +## Deploy a pipeline + +When your configuration is ready, you must deploy it to start using the pipeline. See +[Deploy a pipeline]({{< relref "/integrate/redis-data-integration/data-pipelines/deploy" >}}) +to learn how to do this. + +## Pipeline lifecycle + +A pipeline goes through the following phases: + +1. *Deploy* - when you deploy the pipeline, RDI first validates it before use. +Then, the [operator]({{< relref "/integrate/redis-data-integration/architecture#how-rdi-is-deployed">}}) creates and configures the collector and stream processor that will run the pipeline. +1. *Snapshot* - The collector starts the pipeline by creating a snapshot of the full +dataset. This involves reading all the relevant source data, transforming it and then +writing it into the Redis target. 
You should expect this phase to take minutes or +hours to complete if you have a lot of data. +1. *CDC* - Once the snapshot is complete, the collector starts listening for updates to +the source data. Whenever a change is committed to the source, the collector captures +it and adds it to the target through the pipeline. This phase continues indefinitely +unless you change the pipeline configuration. +1. *Update* - If you update the pipeline configuration, the operator applies it +to the collector and the stream processor. Note that the changes only affect newly-captured +data unless you reset the pipeline completely. Once RDI has accepted the updates, the +pipeline returns to the CDC phase with the new configuration. +1. *Reset* - There are circumstances where you might want to rebuild the dataset +completely. For example, you might want to apply a new transformation to all the source +data or refresh the dataset if RDI is disconnected from the +source for a long time. In situations like these, you can *reset* the pipeline back +to the snapshot phase. When this is complete, the pipeline continues with CDC as usual. +--- +Title: Write to a Redis stream +aliases: /integrate/redis-data-integration/ingest/data-pipelines/transform-examples/redis-stream-example/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +linkTitle: Write to a Redis stream +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- + +In the example below, data is captured from the source table named `invoice` and is written to a Redis stream. The `connection` is an optional parameter that refers to the corresponding connection name defined in `config.yaml`. +When you specify the `data_type` parameter for the job, it overrides the system-wide setting `target_data_type` defined in `config.yaml`. + +When writing to streams, you can use the optional parameter `mapping` to limit the number of fields sent in a message and to provide aliases for them. If you don't use the `mapping` parameter, all fields captured in the source will be passed as the message payload. + +Note that streams are different from other data structures because existing messages are never updated or deleted. Any operation in the source will generate a new message with the corresponding operation code (`op_code` field) that is automatically added to the message payload. + +In this case, the result will be a Redis stream with the name based on the key expression (for example, `invoice:events`) and with an expiration of 100 seconds for the whole stream. If you don't supply an `expire` parameter, the keys will never expire. + +In the example, only three original fields are passed in the message payload: `InvoiceId` (as `message_id`), `BillingCountry` (as `country`), `Total` (as `Total`, no alias provided) and `op_code`, which is implicitly added to all messages sent to streams. 
+ +```yaml +source: + server_name: chinook + schema: public + table: invoice +output: + - uses: redis.write + with: + connection: target + data_type: stream + key: + expression: "`invoice:events`" + language: jmespath + mapping: + - InvoiceId: message_id + - BillingCountry: country + - Total + expire: 100 +```--- +Title: Write to a Redis set +aliases: /integrate/redis-data-integration/ingest/data-pipelines/transform-examples/redis-set-example/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +linkTitle: Write to a Redis set +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- + +In the example below, data is captured from the source table named `invoice` and is written to a Redis set. The `connection` is an optional parameter that refers to the corresponding connection name defined in `config.yaml`. When you specify the +`data_type` parameter for the job, it overrides the system-wide setting `target_data_type` defined in `config.yaml`. + +When writing to a set, you must supply an extra argument, `member`, which specifies the field that will be written. In this case, the result will be a Redis set with key names based on the key expression (for example, `invoices:Germany`, `invoices:USA`) and with an expiration of 100 seconds. If you don't supply an `expire` parameter, the keys will never expire. + +```yaml +source: + server_name: chinook + schema: public + table: invoice +output: + - uses: redis.write + with: + connection: target + data_type: set + key: + expression: concat(['invoices:', BillingCountry]) + language: jmespath + args: + member: InvoiceId + expire: 100 +```--- +Title: Remove fields from a key +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +linkTitle: Remove fields +summary: Redis Data Integration keeps Redis in sync with a primary database in near + real time. +type: integration +weight: 40 +--- + +By default, RDI adds fields to +[hash]({{< relref "/develop/data-types/hashes" >}}) or +[JSON]({{< relref "/develop/data-types/json" >}}) objects in the target +database for each of the columns of the source table. +The examples below show how to omit some of those fields from the target data with the +[`remove_field`]({{< relref "/integrate/redis-data-integration/reference/data-transformation/remove_field" >}}) transformation. + +## Remove a single field + +The first example removes a single field from the data. +The `source` section selects the `employee` table of the +[`chinook`](https://github.com/Redislabs-Solution-Architects/rdi-quickstart-postgres) +database (the optional `db` field here corresponds to the +`sources..connection.database` field defined in +[`config.yaml`]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines#the-configyaml-file" >}})). + +In the `transform` section, the `remove_field` transformation removes the +`hiredate` field. + +The `output` section specifies `hash` as the `data_type` to write to the target, which +overrides the default setting of `target_data_type` defined in `config.yaml`. Also, the +`output.with.key` section specifies a custom key format of the form `emp:`. +Note that any fields you remove in the `transform` section are not available for +the key calculation in the `output` section. 
+ +The full example is shown below: + +```yaml +source: + db: chinook + table: employee +transform: + - uses: remove_field + with: + field: hiredate +output: + - uses: redis.write + with: + connection: target + data_type: hash + key: + expression: concat(['emp:', employeeid]) + language: jmespath +``` + +If you queried the generated target data from the default transformation +using [`redis-cli`]({{< relref "/develop/tools/cli" >}}), you would +see something like the following: + +```bash +> hgetall emp:8 + 1) "employeeid" + 2) "8" + 3) "lastname" + 4) "Callahan" + 5) "firstname" + 6) "Laura" + 7) "title" + 8) "IT Staff" + 9) "reportsto" +10) "6" +11) "birthdate" +12) "-62467200000000" +13) "hiredate" +14) "1078358400000000" +15) "address" +16) "923 7 ST NW" +. +. +``` + +Using the job file above, the data omits the `hiredate` field: + +```bash + > hgetall emp:8 + 1) "employeeid" + 2) "8" + 3) "lastname" + 4) "Callahan" + 5) "firstname" + 6) "Laura" + 7) "title" + 8) "IT Staff" + 9) "reportsto" +10) "6" +11) "birthdate" +12) "-62467200000000" +13) "address" +14) "923 7 ST NW" +. +. +``` + +## Remove multiple fields + +The `remove_field` transformation can also remove multiple fields at the same time +if you specify them under a `fields` subsection. The example below is similar +to the previous one but also removes the `birthdate` field: + +```yaml +source: + db: chinook + table: employee +transform: + - uses: remove_field + with: + fields: + - field: hiredate + - field: birthdate +output: + - uses: redis.write + with: + connection: target + data_type: hash + key: + expression: concat(['emp:', employeeid]) + language: jmespath +``` + +If you query the data, you can see that it also omits the +`birthdate` field: + +```bash +> hgetall emp:8 + 1) "employeeid" + 2) "8" + 3) "lastname" + 4) "Callahan" + 5) "firstname" + 6) "Laura" + 7) "title" + 8) "IT Staff" + 9) "reportsto" +10) "6" +11) "address" +12) "923 7 ST NW" +. +. +``` + +## Using `remove_field` with `add_field` + +The `remove_field` transformation is very useful in combination with +[`add_field`]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples/redis-add-field-example" >}}). +For example, if you use `add_field` to concatenate a person's first +and last names, you may not need separate `firstname` and `lastname` +fields, so you can use `remove_field` to omit them. +See [Using `add_field` with `remove_field`]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples/redis-add-field-example#using-add_field-with-remove_field" >}}) +for an example of how to do this. +--- +Title: Write to a Redis string +aliases: null +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +linkTitle: Write to a Redis string +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- + +The string data type is useful for capturing a string representation of a single column from +a source table. + +In the example job below, the `title` column is captured from the `invoice` table in the source. +The `title` is then written to the Redis target database as a string under a custom key of the +form `AlbumTitle:42`, where the `42` is the primary key value of the table (the `albumid` column). 
+ +The `connection` is an optional parameter that refers to the corresponding connection name defined in +[`config.yaml`]({{< relref "integrate/redis-data-integration/data-pipelines/data-pipelines#the-configyaml-file" >}}). +When you specify the `data_type` parameter for the job, it overrides the system-wide setting `target_data_type` defined in `config.yaml`. Here, the `string` data type also requires an `args` subsection +with a `value` argument that specifies the column you want to capture from the source table. + +The optional `expire` parameter sets the length of time, in seconds, that a new key will +persist for after it is created (here, it is 86400 seconds, which equals one day). +After this time, the key will be deleted automatically. +If you don't supply an `expire` parameter, the keys will never expire. + +```yaml +source: + server_name: chinook + table: album + row_format: full +output: + - uses: redis.write + with: + connection: target + data_type: string + key: + expression: concat(['AlbumTitle:', values(key)[0]]) + language: jmespath + args: + value: title + expire: 86400 +```--- +Title: Add new fields to a key +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +linkTitle: Add new fields +summary: Redis Data Integration keeps Redis in sync with a primary database in near + real time. +type: integration +weight: 40 +--- + +By default, RDI adds fields to +[hash]({{< relref "/develop/data-types/hashes" >}}) or +[JSON]({{< relref "/develop/data-types/json" >}}) objects in the target +database that match the columns of the source table. +The examples below show how to add extra fields to the target data with the +[`add_field`]({{< relref "/integrate/redis-data-integration/reference/data-transformation/add_field" >}}) transformation. + +## Add a single field + +The first example adds a single field to the data. +The `source` section selects the `customer` table of the +[`chinook`](https://github.com/Redislabs-Solution-Architects/rdi-quickstart-postgres) +database (the optional `db` value here corresponds to the +`sources..connection.database` value defined in +[`config.yaml`]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines#the-configyaml-file" >}})). + +In the `transform` section, the `add_field` transformation adds an extra field called `localphone` +to the object, which is created by removing the country and area code from the `phone` +field with the +[JMESPath]({{< relref "/integrate/redis-data-integration/reference/jmespath-custom-functions" >}}) function `regex_replace()`. +You can also specify `sql` as the `language` if you prefer to create the new +field with an [SQL](https://en.wikipedia.org/wiki/SQL) expression. + +The `output` section specifies `hash` as the `data_type` to write to the target, which +overrides the default setting of `target_data_type` defined in `config.yaml`. Also, the +`output.with.key` section specifies a custom key format of the form `cust:` where +the `id` part is generated by the `uuid()` function. 
+ +The full example is shown below: + +```yaml +source: + db: chinook + table: customer +transform: + - uses: add_field + with: + expression: regex_replace(phone, '\+[0-9]+ (\([0-9]+\) )?', '') + field: localphone + language: jmespath +output: + - uses: redis.write + with: + connection: target + data_type: hash + key: + expression: concat(['cust:', uuid()]) + language: jmespath +``` + +If you queried the generated target data from the default transformation +using [`redis-cli`]({{< relref "/develop/tools/cli" >}}), you would +see something like the following: + +``` + 1) "customerid" + 2) "27" + 3) "firstname" + 4) "Patrick" + 5) "lastname" + 6) "Gray" +. +. +17) "phone" +18) "+1 (520) 622-4200" +. +. +``` + +Using the job file above, the data also includes the new `localphone` field: + +``` + 1) "customerid" + 2) "27" + 3) "firstname" + 4) "Patrick" + 5) "lastname" + 6) "Gray" + . + . +23) "localphone" +24) "622-4200" +``` + +## Add multiple fields + +The `add_field` transformation can also add multiple fields at the same time +if you specify them under a `fields` subsection. The example below adds two +fields to the `track` objects. The first new field, `seconds`, is created using a SQL +expression to calculate the duration of the track in seconds from the +`milliseconds` field. +The second new field, `composerlist`, adds a JSON array using the `split()` function +to split the `composer` string field wherever it contains a comma. + +```yaml +source: + db: chinook + table: track +transform: + - uses: add_field + with: + fields: + - expression: floor(milliseconds / 1000) + field: seconds + language: sql + - expression: split(composer) + field: composerlist + language: jmespath +output: + - uses: redis.write + with: + connection: target + data_type: json + key: + expression: concat(['track:', trackid]) + language: jmespath +``` + +You can query the target database to see the new fields in +the JSON object: + +```bash +> JSON.GET track:1 $ + +"[{\"trackid\":1,\"name\":\"For Those About To Rock (We Salute You)\",\"albumid\":1,\"mediatypeid\":1,\"genreid\":1,\"composer\":\"Angus Young, Malcolm Young, Brian Johnson\",\"milliseconds\":343719,\"bytes\":11170334,\"unitprice\":\"0.99\",\"seconds\":343,\"composerlist\":[\"Angus Young\",\" Malcolm Young\",\" Brian Johnson\"]}]" +``` + +## Using `add_field` with `remove_field` + +You can use the `add_field` and +[`remove_field`]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples/redis-remove-field-example" >}}) +transformations together to completely replace fields from the source. For example, +if you add a new `fullname` field, you might not need the separate `firstname` and +`lastname` fields. 
You can remove them with a job file like the following: + +```yaml +source: + db: chinook + table: customer +transform: + - uses: add_field + with: + expression: concat(firstname, ' ', lastname) + field: fullname + language: sql + - uses: remove_field + with: + fields: + - field: firstname + - field: lastname +output: + - uses: redis.write + with: + connection: target + data_type: hash + key: + expression: concat(['cust:', customerid]) + language: jmespath +``` +--- +Title: Write to a Redis JSON document +aliases: /integrate/redis-data-integration/ingest/data-pipelines/transform-examples/redis-json-example/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +linkTitle: Write to a Redis JSON document +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- + +{{}} +You must enable the [RedisJSON]({{< relref "/develop/data-types/json" >}}) module in the target Redis +database to use this feature. +{{}} + +In the example below, the data is captured from the source table named `invoice` and is written to the Redis database as a JSON document. The `connection` is an optional parameter that refers to the corresponding connection name defined in `config.yaml`. When you specify the `data_type` parameter for the job, it overrides the system-wide setting `target_data_type` defined in `config.yaml`. + +Another optional parameter, `on_update`, specifies the writing strategy. You can set this to either `replace` (the default) or `merge`. This affects the way the document is written to the target. Replacing the document will overwrite it completely, while merging will update it with the fields captured in the source, keeping the rest of the document intact. The `replace` option is usually more performant, while `merge` allows other jobs and applications to set extra fields in the same JSON documents. + +In this case, the result will be Redis JSON documents with key names based on the key expression (for example, `invoice_id:1`) and with an expiration of 100 seconds. If you don't supply an `expire` parameter, the keys will never expire. + +```yaml +source: + server_name: chinook + schema: public + table: invoice +output: + - uses: redis.write + with: + connection: target + data_type: json + key: + expression: concat(['invoice_id:', InvoiceId]) + language: jmespath + on_update: replace + expire: 100 +```--- +Title: Add the opcode to the Redis output +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +linkTitle: Add the opcode to the Redis output +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 100 +--- + +In the example below, the data is captured from the source table named `employee` and is written to the Redis database as a JSON document. When you specify the `data_type` parameter for the job, it overrides the system-wide setting `target_data_type` defined in `config.yaml`. + +Here, the result will be Redis JSON documents with fields captured from the source table +(`employeeid`, `firstname`, `lastname`) and also with +an extra field `my_opcode` added using the `merge` update strategy (see the +[JSON job example]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples/redis-json-example" >}}) +for more information). The `opcode` expression refers to the operation code captured from +the source. 
This is a database-specific value that indicates which type of operation generated +the change (insert, update, etc). + +```yaml +source: + schema: public + table: employee + row_format: full +transform: + - uses: add_field + with: + field: after.my_opcode + expression: opcode + language: jmespath +output: + - uses: redis.write + with: + data_type: json + mapping: + - employeeid + - firstname + - lastname + - my_opcode + connection: target + on_update: merge +```--- +Title: Write to a Redis hash +aliases: /integrate/redis-data-integration/ingest/data-pipelines/transform-examples/redis-hash-example/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +linkTitle: Write to a Redis hash +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- + +In the following example, the data is captured from the source table named `invoice` and is written to the Redis database as hash keys. The `connection` is an optional parameter that refers to the corresponding connection name defined in `config.yaml`. +When you specify the `data_type` parameter for the job, it overrides the system-wide setting `target_data_type` defined in `config.yaml`. + +In this case, the result will be Redis hashes with key names based on the key expression (for example, `invoice_id:1`) and with an expiration of 100 seconds. +If you don't supply an `expire` parameter, the keys will never expire. + +```yaml +source: + server_name: chinook + schema: public + table: invoice +output: + - uses: redis.write + with: + connection: target + data_type: hash + key: + expression: concat(['invoice_id:', InvoiceId]) + language: jmespath + expire: 100 +```--- +Title: Write to a Redis sorted set +aliases: /integrate/redis-data-integration/ingest/data-pipelines/transform-examples/redis-sorted-set-example/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +linkTitle: Write to a Redis sorted set +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- + +In the example below, data is captured from the source table named `invoice` and is written to a Redis sorted set. The `connection` is an optional parameter that refers to the corresponding connection name defined in `config.yaml`. When +you specify the `data_type` parameter for the job, it overrides the system-wide setting `target_data_type` defined in `config.yaml`. + +When writing to sorted sets, you must provide two additional arguments, `member` and `score`. These specify the field names that will be used as a member and a score to add an element to a sorted set. In this case, the result will be a Redis sorted set named `invoices:sorted` based on the key expression and with an expiration of 100 seconds for each set member. If you don't supply an `expire` parameter, the keys will never expire. 
+ +```yaml +source: + server_name: chinook + schema: public + table: invoice +output: + - uses: redis.write + with: + connection: target + data_type: sorted_set + key: + expression: "`invoices:sorted`" + language: jmespath + args: + score: Total + member: InvoiceId + expire: 100 +``` + +Since sorted sets in Redis are inherently sorted, you can easily get the top N invoices by total invoice amount using the command below (the range 0..9 gets the top 10 invoices): + +``` +ZREVRANGE invoices:sorted 0 9 WITHSCORES +```--- +Title: Transformation examples +aliases: /integrate/redis-data-integration/ingest/data-pipelines/transform-examples/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Explore some examples of common RDI transformations +group: di +hideListLinks: false +linkTitle: Transformation examples +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- +--- +Title: Restructure JSON or hash objects +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +linkTitle: Restructure objects +summary: Redis Data Integration keeps Redis in sync with a primary database in near + real time. +type: integration +weight: 40 +--- + +By default, RDI adds fields to +[hash]({{< relref "/develop/data-types/hashes" >}}) or +[JSON]({{< relref "/develop/data-types/json" >}}) objects in the target +database that closely match the columns of the source table. +The examples below show how you can create a completely new object structure +from existing fields using the +[`map`]({{< relref "/integrate/redis-data-integration/reference/data-transformation/map" >}}) +transformation. + +## Map to a new JSON structure + +The first +[job file]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines#job-files" >}}) +example creates a new [JSON]({{< relref "/develop/data-types/json" >}}) +object structure to write to the target. +The `source` section selects the `employee` table of the +[`chinook`](https://github.com/Redislabs-Solution-Architects/rdi-quickstart-postgres) +database (the optional `db` value here corresponds to the +`sources..connection.database` value defined in +[`config.yaml`]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines#the-configyaml-file" >}})). + +In the `transform` section, the `map` transformation uses a [JMESPath](https://jmespath.org/) +expression to specify the new JSON format. (Note that the vertical bar "|" in the `expression` +line indicates that the following indented lines should be interpreted as a single string.) +The expression resembles JSON notation but with data values supplied from +table fields and +[JMESPath functions]({{< relref "/integrate/redis-data-integration/reference/jmespath-custom-functions" >}}). + +Here, we rename the +`employeeid` field to `id` and create two nested objects for the `address` +and `contact` information. The `name` field is the concatenation of the existing +`firstname` and `lastname` fields, with `lastname` converted to uppercase. +In the `contact` subobject, the `email` address is obfuscated slightly, using the +`replace()` function to hide the '@' sign and dots. + +In the `output` section of the job file, we specify that we want to write +to a JSON object with a custom key. Note that in the `output` section, you must refer to +fields defined in the `map` transformation, so we use the new name `id` +for the key instead of `employeeid`. 
+ +The full example is shown below: + +```yaml +source: + db: chinook + table: employee +transform: + - uses: map + with: + expression: | + { + "id": employeeid, + "name": concat([firstname, ' ', upper(lastname)]), + "address": { + "street": address, + "city": city, + "state": state, + "postalCode": postalcode, + "country": country + }, + "contact": { + "phone": phone, + "safeEmail": replace(replace(email, '@', '_at_'), '.', '_dot_') + } + } + language: jmespath +output: + - uses: redis.write + with: + connection: target + data_type: json + key: + expression: concat(['emp:', id]) + language: jmespath +``` + +If you query one of the new JSON objects, you see output like the following: + +```bash +> JSON.GET emp:1 $ +"[{\"id\":1,\"name\":\"Andrew ADAMS\",\"address\":{\"street\":\"11120 Jasper Ave NW\",\"city\":\"Edmonton\",\"state\":\"AB\",\"postalCode\":\"T5K 2N1\",\"country\":\"Canada\"},\"contact\":{\"phone\":\"+1 (780) 428-9482\",\"safeEmail\":\"andrew_at_chinookcorp_dot_com\"}}]" +``` + +Formatted in the usual JSON style, the output looks like the sample below: + +```json +{ + "id": 1, + "name": "Andrew ADAMS", + "address": { + "street": "11120 Jasper Ave NW", + "city": "Edmonton", + "state": "AB", + "postalCode": "T5K 2N1", + "country": "Canada" + }, + "contact": { + "phone": "+1 (780) 428-9482", + "safeEmail": "andrew_at_chinookcorp_dot_com" + } +} +``` + +## Map to a hash structure + +This example creates a new [hash]({{< relref "/develop/data-types/hashes" >}}) +object structure for items from the `track` table. Here, the `map` transformation uses +[SQL](https://en.wikipedia.org/wiki/SQL) for the expression because this is often +more suitable for hashes or "flat" +JSON objects without subobjects or arrays. The expression renames some of the fields. +It also calculates more human-friendly representations for the track duration (originally +stored in the `milliseconds` field) and the storage size (originally stored in the +`bytes` field). + +The full example is shown below: + +```yaml +source: + db: chinook + table: track +transform: + - uses: map + with: + expression: + id: trackid + name: name + duration: concat(floor(milliseconds / 60000), ':', floor(mod(milliseconds / 1000, 60))) + storagesize: concat(round(bytes / 1048576.0, 2), 'MB') + language: sql +output: + - uses: redis.write + with: + connection: target + data_type: hash + key: + expression: concat('track:', id) + language: sql +``` + +If you query the data for one of the `track` hash objects, you see output +like the following: + +```bash +> hgetall track:16 +1) "id" +2) "16" +3) "name" +4) "Dog Eat Dog" +5) "duration" +6) "3:35.0" +7) "storagesize" +8) "6.71MB" +```--- +Title: Data pipelines +aliases: /integrate/redis-data-integration/ingest/data-pipelines/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Learn how an RDI pipeline can transform source data before writing +group: di +hideListLinks: false +linkTitle: Data pipelines +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 30 +--- +--- +Title: Observability +aliases: /integrate/redis-data-integration/ingest/observability/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Learn how to monitor RDI +group: di +hideListLinks: false +linkTitle: Observability +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. 
+type: integration +weight: 40 +--- + +RDI reports metrics about its operation using +[Prometheus exporter endpoints](https://prometheus.io/docs/instrumenting/exporters/). +You can connect to the endpoints with +[Prometheus](https://prometheus.io/docs/prometheus/latest/getting_started/) +to query the metrics and plot simple graphs or with +[Grafana](https://grafana.com/) to produce more complex visualizations and +dashboards. + +RDI exposes two endpoints, one for [CDC collector metrics](#collector-metrics) and +another for [stream processor metrics](#stream-processor-metrics). +The sections below explain these sets of metrics in more detail. +See the +[architecture overview]({{< relref "/integrate/redis-data-integration/architecture#overview" >}}) +for an introduction to these concepts. + +{{< note >}}If you don't use Prometheus or Grafana, you can still see +RDI metrics with the RDI monitoring screen in Redis Insight or with the +[`redis-di status`]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-status" >}}) +command from the CLI.{{< /note >}} + +## Collector metrics + +The endpoint for the collector metrics is `https:///metrics/collector-source` + +These metrics are divided into three groups: + +- **Pipeline state**: metrics about the pipeline mode and connectivity +- **Data flow counters**: counters for data breakdown per source table +- **Processing performance**: processing speed of RDI micro batches + +## Stream processor metrics + +The endpoint for the stream processor metrics is `https:///metrics/rdi` + +RDI reports metrics during the two main phases of the ingest pipeline, the *snapshot* +phase and the *change data capture (CDC)* phase. (See the +[pipeline lifecycle]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines" >}}) +docs for more information). The table below shows the full set of metrics that +RDI reports. + +| Metric | Phase | +|:-- |:-- | +| CapturedTables | Both | +| Connected | CDC | +| LastEvent | Both | +| LastTransactionId | CDC | +| MilliSecondsBehindSource | CDC | +| MilliSecondsSinceLastEvent | Both | +| NumberOfCommittedTransactions | CDC | +| NumberOfEventsFiltered | Both | +| QueueRemainingCapacity | Both | +| QueueTotalCapacity | Both | +| RemainingTableCount | Snapshot | +| RowsScanned | Snapshot | +| SnapshotAborted | Snapshot | +| SnapshotCompleted | Snapshot | +| SnapshotDurationInSeconds | Snapshot | +| SnapshotPaused | Snapshot | +| SnapshotPausedDurationInSeconds | Snapshot | +| SnapshotRunning | Snapshot | +| SourceEventPosition | CDC | +| TotalNumberOfCreateEventsSeen | CDC | +| TotalNumberOfDeleteEventsSeen | CDC | +| TotalNumberOfEventsSeen | Both | +| TotalNumberOfUpdateEventsSeen | CDC | +| TotalTableCount | Snapshot | + +## RDI logs + +RDI uses [fluentd](https://www.fluentd.org/) and +[logrotate](https://linux.die.net/man/8/logrotate) to ship and rotate logs +for its Kubernetes (K8s) components. +So whenever a containerized component is removed by the RDI operator process or by K8s, +the logs are available for you to inspect. +By default, RDI stores logs in the host VM file system at `/opt/rdi/logs`. +The logs are recorded at the minimum `INFO` level and get rotated when they reach a size of 100MB. +RDI retains the last five log rotated files by default. +Logs are in a straightforward text format, which lets you analyze them with several different observability tools. 
+You can change the default log settings using the +[`redis-di configure-rdi`]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-configure-rdi" >}}) +command. + +## Dump support package + +If you ever need to send a comprehensive set of forensics data to Redis support then you should +run the +[`redis-di dump-support-package`]({{< relref "/integrate/redis-data-integration/reference/cli/redis-di-dump-support-package" >}}) +command from the CLI. See +[Troubleshooting]({{< relref "/integrate/redis-data-integration/troubleshooting#dump-support-package" >}}) +for more information. +--- +Title: JMESPath custom functions +aliases: /integrate/redis-data-integration/ingest/reference/jmespath-custom-functions/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: JMESPath custom function reference +group: di +linkTitle: JMESPath custom functions +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 40 +--- + +| Function | Description | Example | Comments | +| -------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `base64_decode` | Decodes a base64(RFC 4648) encoded string | Input: `{"encoded": "SGVsbG8gV29ybGQh"}`
Expression: `base64_decode(encoded)`
Output: `Hello World!` | | +| `capitalize` | Capitalizes all the words in the string | Input: `{"name": "john doe"}`
Expression: `capitalize(name)`
Output: `John Doe` | | +| `concat` | Concatenates an array of variables or literals | Input: `{"fname": "john", "lname": "doe"}`
Expression: `concat([fname, ' ' ,lname])`
Output: `john doe` | This is equivalent to the more verbose built-in expression: `' '.join([fname,lname])` | +| `filter_entries` | Filters entries in a dictionary (object) using the given JMESPath predicate | Input: `{ "name": "John", "age": 30, "country": "US", "score": 15}`
Expression: `` filter_entries(@, `key == 'name' \|\| key == 'age'`)``
Output:`{"name": "John", "age": 30 }` | | +| `from_entries` | Converts an array of objects with `key` and `value` properties into a single object | Input: `[{"key": "name", "value": "John"}, {"key": "age", "value": 30}, {"key": "city", "value": null}]`
Expression: `from_entries(@)`
Output: `{"name": "John", "age": 30, "city": null}` | | +| `hash` | Calculates a hash using the `hash_name` hash function and returns its hexadecimal representation | Input: `{"some_str": "some_value"}`
Expression: `hash(some_str, `sha1`)`
Output: `8c818171573b03feeae08b0b4ffeb6999e3afc05` | Supported algorithms: sha1 (default), sha256, md5, sha384, sha3_384, blake2b, sha512, sha3_224, sha224, sha3_256, sha3_512, blake2s | +| `in` | Checks if an element matches any value in a list of values | Input: `{"el": "b"}`
Expression: `in(el, `["a", "b", "c"]`)`
Output: `True` | | +| `left` | Returns a specified number of characters from the start of a given text string | Input: `{"greeting": "hello world!"}`
Expression: `left(greeting, `5`)`
Output: `hello` | | +| `lower` | Converts all uppercase characters in a string into lowercase characters | Input: `{"fname": "John"}`
Expression: `lower(fname)`
Output: `john` | | +| `mid` | Returns a specified number of characters from the middle of a given text string | Input: `{"greeting": "hello world!"}`
Expression: `mid(greeting, `4`, `3`)`
Output: `o w` | | +| `json_parse` | Returns parsed object from the given json string | Input: `{"data": '{"greeting": "hello world!"}'}`
Expression: `json_parse(data)`
Output: `{"greeting": "hello world!"}` | | +| `regex_replace` | Replaces a string that matches a regular expression | Input: `{"text": "Banana Bannnana"}`
Expression: `regex_replace(text, 'Ban\w+', 'Apple')`
Output: `Apple Apple` | | +| `replace` | Replaces all the occurrences of a substring with a new one | Input: `{"sentence": "one four three four!"}`
Expression: `replace(sentence, 'four', 'two')`
Output: `one two three two!` | | +| `right` | Returns a specified number of characters from the end of a given text string | Input: `{"greeting": "hello world!"}`
Expression: `right(greeting, `6`)`
Output: `world!` | | +| `split` | Splits a string into a list of strings after breaking the given string by the specified delimiter (comma by default) | Input: `{"departments": "finance,hr,r&d"}`
Expression: `split(departments)`
Output: `['finance', 'hr', 'r&d']` | Default delimiter is comma - a different delimiter can be passed to the function as the second argument, for example: `split(departments, ';')` | +| `time_delta_days` | Returns the number of days between a given `dt` and now (positive) or the number of days that have passed from now (negative) | Input: `{"dt": '2021-10-06T18:56:16.701670+00:00'}`
Expression: `time_delta_days(dt)`
Output: `365` | If `dt` is a string, ISO datetime (2011-11-04T00:05:23+04:00, for example) is assumed. If `dt` is a number, Unix timestamp (1320365123, for example) is assumed. | +| `time_delta_seconds` | Returns the number of seconds between a given `dt` and now (positive) or the number of seconds that have passed from now (negative) | Input: `{"dt": '2021-10-06T18:56:16.701670+00:00'}`
Expression: `time_delta_seconds(dt)`
Output: `31557600` | If `dt` is a string, ISO datetime (2011-11-04T00:05:23+04:00, for example) is assumed. If `dt` is a number, Unix timestamp (1320365123, for example) is assumed. | +| `to_entries` | Converts a given object into an array of objects with `key` and `value` properties | Input: `{"name": "John", "age": 30, "city": null}`
Expression: `to_entries(@)`
Output: `[{"key": "name", "value": "John"}, {"key": "age", "value": 30}, {"key": "city", "value": null}]` | | +| `upper` | Converts all lowercase characters in a string into uppercase characters | Input: `{"fname": "john"}`
Expression: `upper(fname)`
Output: `JOHN` | | +| `uuid` | Generates a random UUID4 and returns it as a string in standard format | Input: None
Expression: `uuid()`
Output: `3264b35c-ff5d-44a8-8bc7-9be409dac2b7` | | +--- +Title: redis-di delete-all-contexts +linkTitle: redis-di delete-all-contexts +description: Deletes all contexts +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di delete-all-contexts [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `force`: + + - Type: BOOL + - Default: `false` + - Usage: `--force +-f` + + Force operation. Skips verification prompts + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di delete-all-contexts [OPTIONS] + + Deletes all contexts + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + -f, --force Force operation. Skips verification prompts + --help Show this message and exit. +``` +--- +Title: redis-di deploy +linkTitle: redis-di deploy +description: Deploys the RDI configurations including target +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di deploy [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of RDI Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of RDI Database + +- `rdi_user`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-user` + + RDI Database Username + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + RDI Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `directory`: + + - Type: STRING + - Default: `.` + - Usage: `--dir` + + Directory containing RDI configuration + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di deploy [OPTIONS] + + Deploys the RDI configurations including target + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of RDI Database [required] + --rdi-port INTEGER RANGE Port of RDI Database [1<=x<=65535; + required] + --rdi-user TEXT RDI Database Username + --rdi-password TEXT RDI Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --dir TEXT Directory containing RDI configuration + [default: .] + --help Show this message and exit. 
+``` +--- +Title: redis-di list-jobs +linkTitle: redis-di list-jobs +description: Lists transformation engine's jobs +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di list-jobs [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of RDI Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of RDI Database + +- `rdi_user`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-user` + + RDI Database Username + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + RDI Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di list-jobs [OPTIONS] + + Lists transformation engine's jobs + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of RDI Database [required] + --rdi-port INTEGER RANGE Port of RDI Database [1<=x<=65535; + required] + --rdi-user TEXT RDI Database Username + --rdi-password TEXT RDI Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --help Show this message and exit. 
+``` +--- +Title: redis-di stop +linkTitle: redis-di stop +description: Stops the pipeline +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di stop [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of RDI Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of RDI Database + +- `rdi_user`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-user` + + RDI Database Username + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + RDI Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di stop [OPTIONS] + + Stops the pipeline + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of RDI Database [required] + --rdi-port INTEGER RANGE Port of RDI Database [1<=x<=65535; + required] + --rdi-user TEXT RDI Database Username + --rdi-password TEXT RDI Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --help Show this message and exit. 
+``` +--- +Title: redis-di get-rejected +linkTitle: redis-di get-rejected +description: Returns all the stored rejected entries +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di get-rejected [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of RDI Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of RDI Database + +- `rdi_user`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-user` + + RDI Database Username + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + RDI Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `max_records`: + + - Type: =1> + - Default: `none` + - Usage: `--max-records` + + Maximum rejected records per DLQ + +- `oldest`: + + - Type: BOOL + - Default: `false` + - Usage: `--oldest +-o` + + Displays the oldest rejected records. If omitted, most recent records will be retrieved + +- `dlq_name`: + + - Type: STRING + - Default: `none` + - Usage: `--dlq-name` + + Only prints the rejected records for the specified DLQ (Dead Letter Queue) name + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di get-rejected [OPTIONS] + + Returns all the stored rejected entries + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of RDI Database [required] + --rdi-port INTEGER RANGE Port of RDI Database [1<=x<=65535; + required] + --rdi-user TEXT RDI Database Username + --rdi-password TEXT RDI Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --max-records INTEGER RANGE Maximum rejected records per DLQ [x>=1] + -o, --oldest Displays the oldest rejected records. If + omitted, most recent records will be + retrieved + --dlq-name TEXT Only prints the rejected records for the + specified DLQ (Dead Letter Queue) name + --help Show this message and exit. 
+``` +--- +Title: redis-di dump-support-package +linkTitle: redis-di dump-support-package +description: Dumps RDI support package +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di dump-support-package [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `rdi_namespace`: + + - Type: STRING + - Default: `rdi` + - Usage: `--rdi-namespace` + + RDI Kubernetes namespace + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of RDI Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of RDI Database + +- `rdi_user`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-user` + + RDI Database Username + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + RDI Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `directory`: + + - Type: STRING + - Default: `.` + - Usage: `--dir` + + Directory where the support file should be generated + +- `dump_rejected`: + + - Type: INT + - Default: `none` + - Usage: `--dump-rejected` + + Dumps rejected records + +- `trace_timeout`: + + - Type: + - Default: `none` + - Usage: `--trace-timeout` + + Stops the trace after exceeding this timeout (in seconds) + +- `max_change_records`: + + - Type: =1> + - Default: `10` + - Usage: `--max-change-records` + + Maximum traced change records + +- `trace_only_rejected`: + + - Type: BOOL + - Default: `false` + - Usage: `--trace-only-rejected` + + Trace only rejected change records + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di dump-support-package [OPTIONS] + + Dumps RDI support package + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-namespace TEXT RDI Kubernetes namespace [default: rdi] + --rdi-host TEXT Host/IP of RDI Database [required] + --rdi-port INTEGER RANGE Port of RDI Database [1<=x<=65535; + required] + --rdi-user TEXT RDI Database Username + --rdi-password TEXT RDI Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --dir TEXT Directory where the support file should be + generated [default: .] + --dump-rejected INTEGER Dumps rejected records + --trace-timeout INTEGER RANGE Stops the trace after exceeding this timeout + (in seconds) [1<=x<=600] + --max-change-records INTEGER RANGE + Maximum traced change records [x>=1] + --trace-only-rejected Trace only rejected change records + --help Show this message and exit. 
+``` +--- +Title: redis-di list-contexts +linkTitle: redis-di list-contexts +description: Lists all saved contexts +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di list-contexts [OPTIONS] +``` + +## Options + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di list-contexts [OPTIONS] + + Lists all saved contexts + +Options: + --help Show this message and exit. +``` +--- +Title: redis-di set-secret +linkTitle: redis-di set-secret +description: Creates a secret of a specified key +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di set-secret [OPTIONS] {RDI_REDIS_USERNAME|RDI_REDIS_PASSWORD|RD + I_REDIS_CACERT|RDI_REDIS_CERT|RDI_REDIS_KEY|RDI_RED + IS_KEY_PASSPHRASE|SOURCE_DB_USERNAME|SOURCE_DB_PASS + WORD|SOURCE_DB_CACERT|SOURCE_DB_CERT|SOURCE_DB_KEY| + SOURCE_DB_KEY_PASSWORD|TARGET_DB_USERNAME|TARGET_DB + _PASSWORD|TARGET_DB_CACERT|TARGET_DB_CERT|TARGET_DB + _KEY|TARGET_DB_KEY_PASSWORD|JWT_SECRET_KEY} [VALUE] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `rdi_namespace`: + + - Type: STRING + - Default: `rdi` + - Usage: `--rdi-namespace` + + RDI Kubernetes namespace + +- `key` (REQUIRED): + + - Type: Choice(['RDI_REDIS_USERNAME', 'RDI_REDIS_PASSWORD', 'RDI_REDIS_CACERT', 'RDI_REDIS_CERT', 'RDI_REDIS_KEY', 'RDI_REDIS_KEY_PASSPHRASE', 'SOURCE_DB_USERNAME', 'SOURCE_DB_PASSWORD', 'SOURCE_DB_CACERT', 'SOURCE_DB_CERT', 'SOURCE_DB_KEY', 'SOURCE_DB_KEY_PASSWORD', 'TARGET_DB_USERNAME', 'TARGET_DB_PASSWORD', 'TARGET_DB_CACERT', 'TARGET_DB_CERT', 'TARGET_DB_KEY', 'TARGET_DB_KEY_PASSWORD', 'JWT_SECRET_KEY']) + - Default: `none` + - Usage: `key` + +- `value`: + + - Type: STRING + - Default: `none` + - Usage: `value` + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di set-secret [OPTIONS] {RDI_REDIS_USERNAME|RDI_REDIS_PASSWORD|RD + I_REDIS_CACERT|RDI_REDIS_CERT|RDI_REDIS_KEY|RDI_RED + IS_KEY_PASSPHRASE|SOURCE_DB_USERNAME|SOURCE_DB_PASS + WORD|SOURCE_DB_CACERT|SOURCE_DB_CERT|SOURCE_DB_KEY| + SOURCE_DB_KEY_PASSWORD|TARGET_DB_USERNAME|TARGET_DB + _PASSWORD|TARGET_DB_CACERT|TARGET_DB_CERT|TARGET_DB + _KEY|TARGET_DB_KEY_PASSWORD|JWT_SECRET_KEY} [VALUE] + + Creates a secret of a specified key + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-namespace TEXT RDI Kubernetes namespace [default: rdi] + --help Show this message and exit. 
+``` +--- +Title: redis-di add-context +linkTitle: redis-di add-context +description: Adds a new context +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di add-context [OPTIONS] CONTEXT_NAME +``` + +## Options + +- `context_name` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `context-name` + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `rdi_namespace`: + + - Type: STRING + - Default: `rdi` + - Usage: `--rdi-namespace` + + RDI Kubernetes namespace + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of RDI Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of RDI Database + +- `rdi_user`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-user` + + RDI Database Username + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di add-context [OPTIONS] CONTEXT_NAME + + Adds a new context + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-namespace TEXT RDI Kubernetes namespace [default: rdi] + --rdi-host TEXT Host/IP of RDI Database [required] + --rdi-port INTEGER RANGE Port of RDI Database [1<=x<=65535; + required] + --rdi-user TEXT RDI Database Username + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --help Show this message and exit. +``` +--- +Title: redis-di delete-context +linkTitle: redis-di delete-context +description: Deletes a context +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di delete-context [OPTIONS] CONTEXT_NAME +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `context_name` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `context-name` + +- `force`: + + - Type: BOOL + - Default: `false` + - Usage: `--force +-f` + + Force operation. Skips verification prompts + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di delete-context [OPTIONS] CONTEXT_NAME + + Deletes a context + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + -f, --force Force operation. Skips verification prompts + --help Show this message and exit. 
+``` +--- +Title: redis-di install +linkTitle: redis-di install +description: Installs RDI +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di install [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `warning` + - Usage: `--log-level +-l` + +- `file`: + + - Type: + - Default: `none` + - Usage: `-f +--file` + + Path to a YAML configuration file for silent installation + +- `installation_dir`: + + - Type: + - Default: `none` + - Usage: `--installation-dir` + + Custom installation directory + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di install [OPTIONS] + + Installs RDI + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: WARNING] + -f, --file FILE Path to a YAML configuration file for silent + installation + --installation-dir DIRECTORY Custom installation directory + --help Show this message and exit. +``` +--- +Title: redis-di set-context +linkTitle: redis-di set-context +description: Sets a context to be the active one +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di set-context [OPTIONS] CONTEXT_NAME +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `context_name` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `context-name` + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di set-context [OPTIONS] CONTEXT_NAME + + Sets a context to be the active one + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --help Show this message and exit. 
+``` +--- +Title: redis-di configure-rdi +linkTitle: redis-di configure-rdi +description: Configures RDI db connection credentials +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di configure-rdi [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `rdi_namespace`: + + - Type: STRING + - Default: `rdi` + - Usage: `--rdi-namespace` + + RDI Kubernetes namespace + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of RDI Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of RDI Database + +- `rdi_user`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-user` + + RDI Database Username + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + RDI Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `rdi_log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `none` + - Usage: `--rdi-log-level` + + Log level for RDI components + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di configure-rdi [OPTIONS] + + Configures RDI db connection credentials + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-namespace TEXT RDI Kubernetes namespace [default: rdi] + --rdi-host TEXT Host/IP of RDI Database [required] + --rdi-port INTEGER RANGE Port of RDI Database [1<=x<=65535; + required] + --rdi-user TEXT RDI Database Username + --rdi-password TEXT RDI Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --rdi-log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + Log level for RDI components + --help Show this message and exit. 
+``` +--- +Title: redis-di start +linkTitle: redis-di start +description: Starts the pipeline +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di start [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of RDI Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of RDI Database + +- `rdi_user`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-user` + + RDI Database Username + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + RDI Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di start [OPTIONS] + + Starts the pipeline + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of RDI Database [required] + --rdi-port INTEGER RANGE Port of RDI Database [1<=x<=65535; + required] + --rdi-user TEXT RDI Database Username + --rdi-password TEXT RDI Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --help Show this message and exit. 
+``` +--- +Title: redis-di reset +linkTitle: redis-di reset +description: Resets the pipeline into initial full sync mode +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di reset [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of RDI Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of RDI Database + +- `rdi_user`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-user` + + RDI Database Username + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + RDI Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `force`: + + - Type: BOOL + - Default: `false` + - Usage: `--force +-f` + + Force operation. Skips verification prompts + +- `pause_for_confirmation`: + + - Type: BOOL + - Default: `false` + - Usage: `--pause-for-confirmation` + + Pause for user confirmation if manual shutdown of collector required + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di reset [OPTIONS] + + Resets the pipeline into initial full sync mode + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of RDI Database [required] + --rdi-port INTEGER RANGE Port of RDI Database [1<=x<=65535; + required] + --rdi-user TEXT RDI Database Username + --rdi-password TEXT RDI Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + -f, --force Force operation. Skips verification prompts + --pause-for-confirmation Pause for user confirmation if manual + shutdown of collector required + --help Show this message and exit. 
+``` +--- +Title: redis-di upgrade +linkTitle: redis-di upgrade +description: Upgrades RDI without losing data or downtime +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di upgrade [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `rdi_namespace`: + + - Type: STRING + - Default: `rdi` + - Usage: `--rdi-namespace` + + RDI Kubernetes namespace + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of RDI Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of RDI Database + +- `rdi_user`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-user` + + RDI Database Username + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + RDI Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `force`: + + - Type: BOOL + - Default: `false` + - Usage: `--force +-f` + + Force operation. Skips verification prompts + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di upgrade [OPTIONS] + + Upgrades RDI without losing data or downtime + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-namespace TEXT RDI Kubernetes namespace [default: rdi] + --rdi-host TEXT Host/IP of RDI Database [required] + --rdi-port INTEGER RANGE Port of RDI Database [1<=x<=65535; + required] + --rdi-user TEXT RDI Database Username + --rdi-password TEXT RDI Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + -f, --force Force operation. Skips verification prompts + --help Show this message and exit. 
+``` +--- +Title: redis-di status +linkTitle: redis-di status +description: Displays the status of the pipeline end to end +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di status [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `rdi_namespace`: + + - Type: STRING + - Default: `rdi` + - Usage: `--rdi-namespace` + + RDI Kubernetes namespace + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of RDI Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of RDI Database + +- `rdi_user`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-user` + + RDI Database Username + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + RDI Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `live`: + + - Type: BOOL + - Default: `false` + - Usage: `--live +-l` + + Live data flow monitoring + +- `page_number`: + + - Type: =1> + - Default: `none` + - Usage: `--page-number +-p` + + Set the page number (live mode only) + +- `page_size`: + + - Type: =1> + - Default: `none` + - Usage: `--page-size +-s` + + Set the page size (live mode only) + +- `ingested_only`: + + - Type: BOOL + - Default: `false` + - Usage: `--ingested-only +-i` + + Display ingested data streams (live mode only) + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di status [OPTIONS] + + Displays the status of the pipeline end to end + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-namespace TEXT RDI Kubernetes namespace [default: rdi] + --rdi-host TEXT Host/IP of RDI Database [required] + --rdi-port INTEGER RANGE Port of RDI Database [1<=x<=65535; + required] + --rdi-user TEXT RDI Database Username + --rdi-password TEXT RDI Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + -l, --live Live data flow monitoring + -p, --page-number INTEGER RANGE + Set the page number (live mode only) [x>=1] + -s, --page-size INTEGER RANGE Set the page size (live mode only) [x>=1] + -i, --ingested-only Display ingested data streams (live mode + only) + --help Show this message and exit. +``` +--- +Title: redis-di +linkTitle: redis-di +description: A command line tool to manage & configure Redis Data Integration +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di [OPTIONS] COMMAND [ARGS]... +``` + +## Options + +- `version`: + + - Type: BOOL + - Default: `false` + - Usage: `--version` + + Show the version and exit. 
+ +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di [OPTIONS] COMMAND [ARGS]... + + A command line tool to manage & configure Redis Data Integration + +Options: + --version Show the version and exit. + --help Show this message and exit. + +Commands: + add-context Adds a new context + configure-rdi Configures RDI db connection credentials + delete-all-contexts Deletes all contexts + delete-context Deletes a context + deploy Deploys the RDI configurations including target + describe-job Describes a transformation engine's job + dump-support-package Dumps RDI support package + get-rejected Returns all the stored rejected entries + install Installs RDI + list-contexts Lists all saved contexts + list-jobs Lists transformation engine's jobs + reset Resets the pipeline into initial full sync mode + scaffold Generates configuration files for RDI + set-context Sets a context to be the active one + set-secret Creates a secret of a specified key + start Starts the pipeline + status Displays the status of the pipeline end to end + stop Stops the pipeline + trace Starts a trace session for troubleshooting data... + upgrade Upgrades RDI without losing data or downtime +``` +--- +Title: redis-di describe-job +linkTitle: redis-di describe-job +description: Describes a transformation engine's job +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di describe-job [OPTIONS] JOB_NAME +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `job_name` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `job-name` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of RDI Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of RDI Database + +- `rdi_user`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-user` + + RDI Database Username + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + RDI Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. 
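+For illustration, a typical invocation might look like the following sketch. The job
+name, host, and port shown here are placeholder values for your own deployment:
+
+```bash
+# Describe the transformation job named "emp-job" (hypothetical), connecting to
+# the RDI database at the given host and port.
+redis-di describe-job --rdi-host localhost --rdi-port 12001 emp-job
+```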
+ +## CLI help + +``` +Usage: redis-di describe-job [OPTIONS] JOB_NAME + + Describes a transformation engine's job + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of RDI Database [required] + --rdi-port INTEGER RANGE Port of RDI Database [1<=x<=65535; + required] + --rdi-user TEXT RDI Database Username + --rdi-password TEXT RDI Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --help Show this message and exit. +``` +--- +Title: redis-di scaffold +linkTitle: redis-di scaffold +description: Generates configuration files for RDI +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di scaffold [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `db_type` (REQUIRED): + + - Type: Choice([, , , , , , ]) + - Default: `none` + - Usage: `--db-type` + + DB type + + Output to directory or stdout + +- `directory`: + + - Type: STRING + - Default: `none` + - Usage: `--dir` + + Directory containing RDI configuration + +- `preview`: + + - Type: STRING + - Default: `none` + - Usage: `--preview` + + Print the content of the scaffolded config file to CLI output + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di scaffold [OPTIONS] + + Generates configuration files for RDI + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --db-type [cassandra|mariadb|mongodb|mysql|oracle|postgresql|sqlserver] + DB type [required] + Output formats: [mutually_exclusive, required] + Output to directory or stdout + --dir TEXT Directory containing RDI configuration + --preview TEXT Print the content of the scaffolded config + file to CLI output + --help Show this message and exit. +``` +--- +Title: CLI reference +aliases: /integrate/redis-data-integration/ingest/reference/cli/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Reference for the RDI CLI commands +group: di +hideListLinks: false +linkTitle: CLI commands +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. 
+type: integration +weight: 60 +--- +--- +Title: redis-di trace +linkTitle: redis-di trace +description: Starts a trace session for troubleshooting data transformation +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +## Usage + +``` +Usage: redis-di trace [OPTIONS] +``` + +## Options + +- `log_level`: + + - Type: Choice(['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']) + - Default: `info` + - Usage: `--log-level +-l` + +- `rdi_host` (REQUIRED): + + - Type: STRING + - Default: `none` + - Usage: `--rdi-host` + + Host/IP of RDI Database + +- `rdi_port` (REQUIRED): + + - Type: + - Default: `none` + - Usage: `--rdi-port` + + Port of RDI Database + +- `rdi_user`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-user` + + RDI Database Username + +- `rdi_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-password` + + RDI Database Password + +- `rdi_key`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key` + + Private key file to authenticate with + +- `rdi_cert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cert` + + Client certificate file to authenticate with + +- `rdi_cacert`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-cacert` + + CA certificate file to verify with + +- `rdi_key_password`: + + - Type: STRING + - Default: `none` + - Usage: `--rdi-key-password` + + Password for unlocking an encrypted private key + +- `max_change_records`: + + - Type: =1> + - Default: `10` + - Usage: `--max-change-records` + + Maximum traced change records + +- `timeout` (REQUIRED): + + - Type: + - Default: `20` + - Usage: `--timeout` + + Stops the trace after exceeding this timeout (in seconds) + +- `trace_only_rejected`: + + - Type: BOOL + - Default: `false` + - Usage: `--trace-only-rejected` + + Trace only rejected change records + +- `help`: + + - Type: BOOL + - Default: `false` + - Usage: `--help` + + Show this message and exit. + +## CLI help + +``` +Usage: redis-di trace [OPTIONS] + + Starts a trace session for troubleshooting data transformation + +Options: + -l, --log-level [TRACE|DEBUG|INFO|WARNING|ERROR|CRITICAL] + [default: INFO] + --rdi-host TEXT Host/IP of RDI Database [required] + --rdi-port INTEGER RANGE Port of RDI Database [1<=x<=65535; + required] + --rdi-user TEXT RDI Database Username + --rdi-password TEXT RDI Database Password + --rdi-key TEXT Private key file to authenticate with + --rdi-cert TEXT Client certificate file to authenticate with + --rdi-cacert TEXT CA certificate file to verify with + --rdi-key-password TEXT Password for unlocking an encrypted private + key + --max-change-records INTEGER RANGE + Maximum traced change records [x>=1] + --timeout INTEGER RANGE Stops the trace after exceeding this timeout + (in seconds) [default: 20; 1<=x<=600; + required] + --trace-only-rejected Trace only rejected change records + --help Show this message and exit. 
+``` +--- +Title: Redis Data Integration configuration file +linkTitle: RDI configuration file +description: Redis Data Integration configuration file reference +weight: 10 +alwaysopen: false +categories: ["redis-di"] +aliases: +--- + +Configuration file for Redis Data Integration (RDI) source collectors and target connections + +**Properties** + +| Name | Type | Description | Required | +| ----------------------------------------------------------------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | +| [**sources**](#sources)
(Source collectors) | `object` | Defines source collectors and their configurations. Each key represents a unique source identifier, and its value contains specific configuration for that collector
| | +| [**processors**](#processors)
(Data processing configuration) | `object`, `null` | Configuration settings that control how data is processed, including batch sizes, error handling, and performance tuning
| | +| [**targets**](#targets)
(Target connections) | `object` | Configuration for target Redis databases where processed data will be written
| | +| [**secret\-providers**](#secret-providers)
(Secret providers) | `object` | | | + + + +## sources: Source collectors + +Defines source collectors and their configurations. Each key represents a unique source identifier, and its value contains specific configuration for that collector + +**Properties (Pattern)** + +| Name | Type | Description | Required | +| -------- | ---- | ----------- | -------- | +| **\.\*** | | | | + + + +## processors: Data processing configuration + +Configuration settings that control how data is processed, including batch sizes, error handling, and performance tuning + +**Properties** + +| Name | Type | Description | Required | +| -------------------------------------------------------------------- | ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------- | +| **on_failed_retry_interval**
(Retry interval on failure) | `integer`, `string` | Number of seconds to wait before retrying a failed operation
Default: `5`
Pattern: `^\${.*}$`
Minimum: `1`
| | +| **read_batch_size** | `integer`, `string` | Maximum number of records to process in a single batch
Default: `2000`
Pattern: `^\${.*}$`
Minimum: `1`
| | +| **debezium_lob_encoded_placeholder**
(Debezium LOB placeholder) | `string` | Placeholder value for LOB fields in Debezium
Default: `"__debezium_unavailable_value"`
| | +| **dedup**
(Enable deduplication) | `boolean` | Enable the deduplication mechanism to handle duplicate records
Default: `false`
| | +| **dedup_max_size**
(Deduplication set size) | `integer` | Maximum number of entries to store in the deduplication set
Default: `1024`
Minimum: `1`
| | +| **dedup_strategy**
(Deduplication strategy) | `string` | (DEPRECATED)
Property 'dedup_strategy' is now deprecated. The only supported strategy is 'ignore'. Please remove from the configuration.
Default: `"ignore"`
Enum: `"reject"`, `"ignore"`
| | +| **duration**
(Batch duration limit) | `integer`, `string` | Maximum time in milliseconds to wait for a batch to fill before processing
Default: `100`
Pattern: `^\${.*}$`
Minimum: `1`
| | +| **write_batch_size** | `integer`, `string` | Maximum number of records to write to target Redis database in a single batch
Default: `200`
Pattern: `^\${.*}$`
Minimum: `1`
| | +| **error_handling**
(Error handling strategy) | `string` | Strategy for handling errors: ignore to skip errors, dlq to store rejected messages in dead letter queue
Default: `"dlq"`
Pattern: `^\${.*}$\|ignore\|dlq`
| | +| **dlq_max_messages**
(DLQ message limit) | `integer`, `string` | Maximum number of messages to store in dead letter queue per stream
Default: `1000`
Pattern: `^\${.*}$`
Minimum: `1`
| | +| **target_data_type**
(Target Redis data type) | `string` | Data type to use in Redis: hash for Redis Hash, json for RedisJSON (requires RedisJSON module)
Default: `"hash"`
Pattern: `^\${.*}$\|hash\|json`
| | +| **json_update_strategy** | `string` | (DEPRECATED)
Property 'json_update_strategy' will be deprecated in future releases. Use 'on_update' job-level property to define the json update strategy.
Default: `"replace"`
Pattern: `^\${.*}$\|replace\|merge`
| | +| **initial_sync_processes** | `integer`, `string` | Number of parallel processes for performing initial data synchronization
Default: `4`
Pattern: `^\${.*}$`
Minimum: `1`
Maximum: `32`
| | +| **idle_sleep_time_ms**
(Idle sleep interval) | `integer`, `string` | Time in milliseconds to sleep between processing batches when idle
Default: `200`
Pattern: `^\${.*}$`
Minimum: `1`
Maximum: `999999`
| | +| **idle_streams_check_interval_ms**
(Idle streams check interval) | `integer`, `string` | Time in milliseconds between checking for new streams when processor is idle
Default: `1000`
Pattern: `^\${.*}$`
Minimum: `1`
Maximum: `999999`
| | +| **busy_streams_check_interval_ms**
(Busy streams check interval) | `integer`, `string` | Time in milliseconds between checking for new streams when processor is busy
Default: `5000`
Pattern: `^\${.*}$`
Minimum: `1`
Maximum: `999999`
| | +| **wait_enabled**
(Enable replica wait) | `boolean` | Enable verification that data has been written to replica shards
Default: `false`
| | +| **wait_timeout**
(Replica wait timeout) | `integer`, `string` | Maximum time in milliseconds to wait for replica write verification
Default: `1000`
Pattern: `^\${.*}$`
Minimum: `1`
| | +| **retry_on_replica_failure** | `boolean` | Continue retrying writes until successful replication to replica shards is confirmed
Default: `true`
| | + +**Additional Properties:** not allowed + + +## targets: Target connections + +Configuration for target Redis databases where processed data will be written + +**Properties (Pattern)** + +| Name | Type | Description | Required | +| -------- | ---- | ----------- | -------- | +| **\.\*** | | | | + + + +## secret\-providers: Secret providers + +**Properties (Pattern)** + +| Name | Type | Description | Required | +| --------------------------------------------------------- | -------- | ----------- | -------- | +| [**\.\***](#secret-providers)
(Secret provider entry) | `object` | | yes | + + + +#### secret\-providers\.\.\*: Secret provider entry + +**Properties** + +| Name | Type | Description | Required | +| ----------------------------------------------------------------------- | -------- | ----------------------------- | -------- | +| **type**
(Provider type) | `string` | Enum: `"aws"`, `"vault"`
| yes | +| [**parameters**](#secret-providersparameters)
(Provider parameters) | `object` | | yes | + +**Additional Properties:** not allowed +**Example** + +```yaml +parameters: + objects: + - {} +``` + + + +##### secret\-providers\.\.\*\.parameters: Provider parameters + +**Properties** + +| Name | Type | Description | Required | +| ----------------------------------------------------------------------------- | ---------- | ----------- | -------- | +| [**objects**](#secret-providersparametersobjects)
(Secrets objects array) | `object[]` | | yes | + +**Example** + +```yaml +objects: + - {} +``` + + + +###### secret\-providers\.\.\*\.parameters\.objects\[\]: Secrets objects array + +**Items: Secret object** + +**No properties.** + +**Example** + +```yaml +- {} +``` +--- +Title: filter +aliases: /integrate/redis-data-integration/ingest/reference/data-transformation/filter/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Filter records +group: di +linkTitle: filter +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Filter records + +**Properties** + +| Name | Type | Description | Required | +| -------------- | -------- | --------------------------------------------- | -------- | +| **expression** | `string` | Expression
| yes | +| **language** | `string` | Language
Enum: `"jmespath"`, `"sql"`
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: filter + with: + language: sql + expression: age>20 +``` +--- +Title: map +aliases: /integrate/redis-data-integration/ingest/reference/data-transformation/map/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Map a record into a new output based on expressions +group: di +linkTitle: map +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Map a record into a new output based on expressions + +**Properties** + +| Name | Type | Description | Required | +| ----------------------------- | ------------------ | --------------------------------------------- | -------- | +| [**expression**](#expression) | `object`, `string` | Expression
| yes | +| **language** | `string` | Language
Enum: `"jmespath"`, `"sql"`
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: map + with: + expression: + first_name: first_name + last_name: last_name + greeting: >- + 'Hello ' || CASE WHEN gender = 'F' THEN 'Ms.' WHEN gender = 'M' THEN 'Mr.' + ELSE 'N/A' END || ' ' || full_name + country: country + full_name: full_name + language: sql +``` + +**Example** + +```yaml +source: + table: customer +transform: + - uses: map + with: + expression: | + { + "CustomerId": customer_id, + "FirstName": first_name, + "LastName": last_name, + "Company": company, + "Location": + { + "Street": address, + "City": city, + "State": state, + "Country": country, + "PostalCode": postal_code + }, + "Phone": phone, + "Fax": fax, + "Email": email + } + language: jmespath +``` + + + +## expression: object + +Expression + +**No properties.** +--- +Title: remove_field +aliases: /integrate/redis-data-integration/ingest/reference/data-transformation/remove_field/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Remove fields +group: di +linkTitle: remove_field +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Remove fields + +**Option 1 (alternative):** +Remove multiple fields + +**Properties** + +| Name | Type | Description | Required | +| ---------------------------- | ---------- | ----------- | -------- | +| [**fields**](#option1fields) | `object[]` | Fields
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: remove_field + with: + fields: + - field: credit_card + - field: name.mname +``` + +**Option 2 (alternative):** +Remove one field + +**Properties** + +| Name | Type | Description | Required | +| --------- | -------- | ----------- | -------- | +| **field** | `string` | Field
| yes | + +**Additional Properties:** not allowed +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: remove_field + with: + field: credit_card +``` + + + +## Option 1: fields\[\]: array + +Fields + +**Items** + +**Item Properties** + +| Name | Type | Description | Required | +| --------- | -------- | ----------- | -------- | +| **field** | `string` | Field
| yes | + +**Item Additional Properties:** not allowed + +**Example** + +```yaml +- {} +``` +--- +Title: rename_field +aliases: /integrate/redis-data-integration/ingest/reference/data-transformation/rename_field/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Rename fields. All other fields remain unchanged. +group: di +linkTitle: rename_field +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Rename fields. All other fields remain unchanged. + +**Option 1 (alternative):** +Rename multiple fields + +**Properties** + +| Name | Type | Description | Required | +| ---------------------------- | ---------- | ----------- | -------- | +| [**fields**](#option1fields) | `object[]` | Fields
| yes | + +**Additional Properties:** not allowed +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: rename_field + with: + fields: + - from_field: name.lname + to_field: name.last_name + - from_field: name.fname + to_field: name.first_name +``` + +**Option 2 (alternative):** +Rename one field + +**Properties** + +| Name | Type | Description | Required | +| -------------- | -------- | --------------- | -------- | +| **from_field** | `string` | From field
| yes | +| **to_field** | `string` | To field
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: rename_field + with: + from_field: name.lname + to_field: name.last_name +``` + + + +## Option 1: fields\[\]: array + +Fields + +**Items** + +**Item Properties** + +| Name | Type | Description | Required | +| -------------- | -------- | --------------- | -------- | +| **from_field** | `string` | From field
| yes | +| **to_field** | `string` | To field
| yes | + +**Item Additional Properties:** not allowed + +**Example** + +```yaml +- {} +``` +--- +Title: add_field +aliases: /integrate/redis-data-integration/ingest/reference/data-transformation/add_field/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Add fields to a record +group: di +linkTitle: add_field +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +Add fields to a record + +**Option 1 (alternative):** +Add multiple fields + +**Properties** + +| Name | Type | Description | Required | +| ---------------------------- | ---------- | ----------- | -------- | +| [**fields**](#option1fields) | `object[]` | Fields
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: add_field + with: + fields: + - field: name.full_name + language: jmespath + expression: concat([name.fname, ' ', name.lname]) + - field: name.fname_upper + language: jmespath + expression: upper(name.fname) +``` + +**Option 2 (alternative):** +Add one field + +**Properties** + +| Name | Type | Description | Required | +| -------------- | -------- | --------------------------------------------- | -------- | +| **field** | `string` | Field
| yes | +| **expression** | `string` | Expression
| yes | +| **language** | `string` | Language
Enum: `"jmespath"`, `"sql"`
| yes | + +**Additional Properties:** not allowed + +**Example** + +```yaml +source: + server_name: redislabs + schema: dbo + table: emp +transform: + - uses: add_field + with: + field: country + language: sql + expression: country_code || ' - ' || UPPER(country_name) +``` + + + +## Option 1: fields\[\]: array + +Fields + +**Items** + +**Item Properties** + +| Name | Type | Description | Required | +| -------------- | -------- | --------------------------------------------- | -------- | +| **field** | `string` | Field
| yes | +| **expression** | `string` | Expression
| yes | +| **language** | `string` | Language
Enum: `"jmespath"`, `"sql"`
| yes | + +**Item Additional Properties:** not allowed + +**Example** + +```yaml +- {} +``` +--- +Title: redis.lookup +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: Lookup data from Redis using the given command and key +group: di +linkTitle: redis.lookup +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 10 +--- + +**Properties** + +| Name | Type | Description | Required | +| ----------------- | ---------- | --------------------------------------------- | -------- | +| **connection** | `string` | Connection name | yes | +| **cmd** | `string` | The command to execute | yes | +| [**args**](#args) | `string[]` | Redis command arguments | yes | +| **language** | `string` | Language
Enum: `"jmespath"`, `"sql"`
| yes | +| **field** | `string` | The target field to write the result to
| yes | + +**Additional Properties:** not allowed + +## args\[\]: Redis command arguments {#args} + +The list of expressions that produce arguments. + +**Items** + +**Item Type:** `string` +--- +Title: Data transformation reference +aliases: /integrate/redis-data-integration/ingest/reference/data-transformation/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: View reference material for RDI data transformations +group: di +hideListLinks: false +linkTitle: Data transformation +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 60 +--- +--- +Title: Reference +aliases: /integrate/redis-data-integration/ingest/reference/ +alwaysopen: false +categories: + - docs + - integrate + - rs + - rdi +description: View reference material for Redis Data Integration +group: di +hideListLinks: false +linkTitle: Reference +summary: + Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 60 +--- +--- +Title: RDI API Reference +layout: rdiapireference +type: page +--- +--- +Title: Redis Data Integration +aliases: /integrate/redis-data-integration/ingest +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: null +group: di +hideListLinks: false +linkTitle: Redis Data Integration +summary: Redis Data Integration keeps Redis in sync with the primary database in near + real time. +type: integration +weight: 1 +--- + +This is the first General Availability version of Redis Data Integration (RDI). + +RDI's purpose is to help Redis customers sync Redis Enterprise with live data from their slow disk based databases in order to: + +- Meet the required speed and scale of read queries and provide an excellent and predictable user experience. +- Save resources and time when building pipelines and coding data transformations. +- Reduce the total cost of ownership by saving money on expensive database read replicas. + +If you use a relational database as the system of record for your app, +you may eventually find +that its performance doesn't scale well as your userbase grows. It may be +acceptable for a few thousand users but for a few million, it can become a +major problem. If you don't have the option of abandoning the relational +database, you should consider using a fast +database, such as Redis, to cache data from read queries. Since read queries +are typically many times more common than writes, the cache will greatly +improve performance and let your app scale without a major redesign. + +RDI keeps a Redis cache up to date with changes in the primary database, using a +[*Change Data Capture (CDC)*](https://en.wikipedia.org/wiki/Change_data_capture) mechanism. +It also lets you *transform* the data from relational tables into convenient +and fast data structures that match your app's requirements. You specify the +transformations using a configuration system, so no coding is necessary. + +{{}} +RDI is supported with Redis database or [CRDB](https://redis.io/active-active/) (Active Active Replication) targets. +{{}} + +## Features + +RDI provides enterprise-grade streaming data pipelines with the following features: + +- **Near realtime pipeline** - The CDC system captures changes in very short time intervals, + then ships and processes them in *micro-batches* to provide near real time updates to Redis. 
+- **At least once guarantee** - RDI will deliver any change to the selected data set at least + once to the target Redis database. +- **Data integrity** - RDI keeps the data change order per source table or unique key. +- **High availability** - All stateless components have hot failover or quick automatic recovery. + RDI state is always highly available using Redis Enterprise replication. +- **Easy to install and operate** - Use a self-documenting command line interface (CLI) + for all installation and day-two operations. +- **No coding needed** - Create and test your pipelines using Redis Insight. +- **Data-in-transit encryption** - RDI never persists data to disk. All data in-flight is + protected using TLS or mTLS connections. +- **Observability - Metrics** - RDI collects data processing counters at source table granularity + along with data processing performance metrics. These are available via GUI, CLI and + [Prometheus](https://prometheus.io/) endpoints. +- **Observability - logs** - RDI saves rotating logs to a single folder. They are in a JSON format, + so you can collect and process them with your favorite observability tool. +- **Backpressure mechanism** - RDI is designed to backoff writing data when the cache gets + disconnected, which prevents cascading failure. Since the change data is persisted in the source + database and Redis is very fast, RDI can easily catch up with missed changes after a short period of + disconnection. See [Backpressure mechanism]({{< relref "/integrate/redis-data-integration/architecture#backpressure-mechanism">}}) for more information. +- **Recovering from full failure** - If the cache fails or gets disconnected for a long time, + RDI can reconstruct the cache data in Redis using a full snapshot of the defined dataset. +- **High throughput** - Because RDI uses Redis for staging and writes to Redis as a target, + it has very high throughput. With a single processor core and records of about 1KB in size, + RDI processes around 10,000 records per second. While taking the initial full *snapshot* of + the source database, RDI automatically scales to a configurable number of processing units, + to fill the cache as fast as possible. + +## When to use RDI + +RDI is designed to support apps that must use a disk based database as the system of record +but must also be fast and scalable. This is a common requirement for mobile and web +apps with a rapidly-growing number of users; the performance of the main database is fine at first +but it will soon struggle to handle the increasing demand without a cache. + +You should use RDI when: + +- You must use a slow database as the system of record for the app . +- The app must always *write* its data to the slow database. +- You already intend to use Redis for the app cache. +- The data changes frequently in small increments. +- Your app can tolerate *eventual* consistency of data in the Redis cache. + +You should *not* use RDI when: + +- You are migrating an existing data set into Redis only once. +- The data is updated infrequently and in big batches. +- Your app needs *immediate* cache consistency rather than *eventual* consistency. +- The data is ingested from two replicas of Active-Active at the same time. +- The app must *write* data to the Redis cache, which then updates the source database. +- Your data set will only ever be small. 
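Although RDI needs no custom code, it can help to see what the configuration-driven transformations described above look like in practice. The following job sketch is illustrative only — the table and column names are placeholders — and simply combines the `filter` and `add_field` blocks documented in the data transformation reference:

```yaml
source:
  server_name: redislabs
  schema: dbo
  table: emp
transform:
  # Keep only the rows that match the filter expression
  - uses: filter
    with:
      language: sql
      expression: age>20
  # Add a computed field to each remaining record
  - uses: add_field
    with:
      field: full_name
      language: jmespath
      expression: concat([first_name, ' ', last_name])
```

A complete pipeline also defines the target Redis database connection to which the transformed records are written.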
+ +## Supported source databases + +RDI can capture data from any of the following sources: + +{{< embed-md "rdi-supported-source-versions.md" >}} + +## Documentation + +Learn more about RDI from the other pages in this section:--- +LinkTitle: Datadog with Redis Enterprise +Title: Datadog with Redis Enterprise +alwaysopen: false +categories: +- docs +- integrate +- rs +description: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect Datadog to your Redis Enterprise cluster using + the Redis Datadog Integration. +group: observability +summary: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect Datadog to your Redis Enterprise cluster using + the Redis Datadog Integration. +type: integration +weight: 7 +--- + + +[Datadog](https://www.datadoghq.com/) is used by organizations of all sizes and across a wide range of industries to +enable digital transformation and cloud migration, drive collaboration among development, operations, security and +business teams, accelerate time to market for applications, reduce time to problem resolution, secure applications and +infrastructure, understand user behavior, and track key business metrics. + +The Datadog Integration for Redis Enterprise uses Datadog's Integration API to connect to Redis metrics exporters. +The integration is based on the Datadog +[OpenMetrics integration](https://datadoghq.dev/integrations-core/base/openmetrics/) in their core API. This integration +enables Redis Enterprise users to export metrics directly to Datadog for analysis, and includes Redis-designed +dashboards for use in monitoring Redis Enterprise clusters. + +This integration makes it possible to: +- Collect and display metrics not available in the admin console +- Set up automatic alerts for node or cluster events +- Display these metrics alongside data from other systems + +{{< image filename="/images/rc/redis-cloud-datadog.png" >}} +## Install Redis' Datadog Integration for Redis Enterprise + +Installing the Datadog integration is a two-step process. Firstly, the installation must be part of your configuration. +Select 'Integrations' from the menu in the Datadog portal and then enter 'Redis' in the search bar, then select +'Redis Enterprise by Redis, Inc.'. Next click 'Install Integration' in the top-right corner of the overview page. +Once it has been installed follow the instructions for adding an instance to the conf.yaml in +/etc/datadog-agent/conf.d/redis_cloud.d. + +After you have edited the conf.yaml file please restart the service and check its status: + +```shell +sudo service datadog-agent restart +``` + +followed by: + +```shell +sudo service datadog-agent status +``` + +to be certain that the service itself is running and did not encounter any problems. Next, check the output of the +service; in the terminal on the Datadog agent host run the following command: + +```shell +tail -f /var/log/datadog/agent.log +``` + +It will take several minutes for data to reach Datadog. Finally, check the Datadog console by selecting +Infrastructure -> Host Map from the menu and then finding the host that is monitoring the Redis Enterprise instance. The host +should be present, and in its list of components there should be a section called 'rdse', which is the namespace used by +the Redis Enterprise integration, although this can take several minutes to appear. 
It is also possible to verify the metrics +by choosing Metrics -> Explorer from the menu and entering 'rdse.bdb_up'. + +## View metrics + +The Redis Enterprise Integration for Datadog contains pre-defined dashboards to aid in monitoring your Redis Enterprise deployment. + +The following dashboards are currently available: + +- Overview +- Database +- Node +- Shard +- Active-Active +- Proxy +- Proxy Threads + + +## Monitor metrics + +See [Observability and monitoring guidance]({{< relref "/integrate/prometheus-with-redis-enterprise/observability" >}}) for monitoring details. + +--- +LinkTitle: redis-py +Title: Python client for Redis +categories: +- docs +- integrate +- oss +- rs +- rc +description: Learn how to build with Redis and Python +group: library +stack: true +summary: redis-py is a Python library for Redis. +title: redis-py +type: integration +weight: 1 +--- + +Connect your Python application to a Redis database using the redis-py client library. + +Refer to the complete [Python guide]({{< relref "/develop/clients/redis-py" >}}) to install, connect, and use redis-py. +--- +Title: Dynatrace with Redis Cloud +LinkTitle: Dynatrace with Redis Cloud +categories: +- docs +- integrate +- rs +description: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect Dynatrace to your Redis Cloud cluster using + the Redis Dynatrace Integration. +group: observability +summary: To collect, view, and monitor metrics data from your databases and other + cluster components, you can connect Dynatrace to your Redis Cloud cluster using + the Redis Dynatrace Integration. +type: integration +weight: 7 +--- + + +[Dynatrace](https://www.dynatrace.com/) is used by organizations of all sizes and across a wide range of industries to +enable digital transformation and cloud migration, drive collaboration among development, operations, security and +business teams, accelerate time to market for applications, reduce time to problem resolution, secure applications and +infrastructure, understand user behavior, and track key business metrics. + +The Redis Dynatrace Integration for Redis Cloud uses Prometheus remote write functionality to connect Prometheus data +sources to Dynatrace. This integration enables Redis Cloud users to export metrics to Dynatrace for analysis, +and includes Redis-designed dashboards for use in monitoring Redis Cloud clusters. + +This integration makes it possible to: +- Collect and display metrics not available in the admin console +- Set up automatic alerts for node or cluster events +- Display these metrics alongside data from other systems + +{{< image filename="/images/rc/redis-cloud-dynatrace.png" >}} +## Install Redis' Dynatrace Integration for Redis Cloud + +The Dynatrace Integration is based on a feature of the Prometheus data source. Prometheus can forward metrics on to +another destination using remote writes. This will require a Prometheus installation inside the same datacenter as the +Redis Cloud deployment. + +If you have not already created a VPC between the Redis Cloud cluster and the network in which the machine hosting +Prometheus lives you should do so now. Please visit [VPC Peering](https://redis.io/docs/latest/operate/rc/security/vpc-peering/) +and follow the instructions for the cloud platform of your choice. + + + +## View metrics + +The Redis Cloud Integration for Dynatrace contains pre-defined dashboards to aid in monitoring your Redis Enterprise deployment. 
+ +The following dashboards are currently available: + +- Cluster: top-level statistics indicating the general health of the cluster +- Database: performance metrics at the database level +- Shard: low-level details of an individual shard +- Active-Active: replication and performance for geo-replicated clusters +- Proxy: network and command information regarding the proxy +- Proxy Threads: processor usage information regarding the proxy's component threads + +## Monitor metrics + +Dynatrace dashboards can be filtered using the text area. For example, when viewing a cluster dashboard it is possible +filter the display to show data for only one cluster by typing 'cluster' in the text area and waiting for the system to +retrieve the relevant data before choosing one of the options in the 'cluster' section. + +Certain types of data do not know the name of the database from which they were drawn. The dashboard should have a list +of database names and ids; use the id value when filtering input to the dashboard. + + + +--- +LinkTitle: jedis +Title: Java client for Redis +categories: +- docs +- integrate +- oss +- rs +- rc +description: Learn how to build with Redis and Java +group: library +stack: true +summary: jedis is a Java library for Redis. +title: jedis +type: integration +weight: 2 +--- + +Connect your Java application to a Redis database using the Jedis client library. + +Refer to the complete [Jedis guide]({{< relref "/develop/clients/jedis" >}}) to install, connect, and use Jedis. +--- +title: Libraries and tools +description: +linkTitle: Integrate +--- +--- +categories: +- docs +- operate +- redisinsight +description: 'How to install Redis Insight on AWS EC2 + + ' +linkTitle: Install on AWS EC2 +title: Install on AWS EC2 +weight: 4 +--- +This tutorial shows you how to install Redis Insight on an AWS EC2 instance and manage ElastiCache Redis instances using Redis Insight. To complete this tutorial you must have access to the AWS Console and permissions to launch EC2 instances. + +Step 1: Launch EC2 Instance +-------------- + +Next, launch an EC2 instance. + +1. Navigate to EC2 under AWS Console. +1. Click Launch Instance. +1. Choose 64-bit Amazon Linux AMI. +1. Choose at least a t2.medium instance. The size of the instance depends on the memory used by your ElastiCache instance that you want to analyze. +1. Under Configure Instance: + * Choose the VPC that has your ElastiCache instances. + * Choose a subnet that has network access to your ElastiCache instances. + * Ensure that your EC2 instance has a public IP Address. + * Assign the IAM role that you created in Step 1. +1. Under the storage section, allocate at least 100 GiB storage. +1. Under security group, ensure that: + * Incoming traffic is allowed on port 5540 + * Incoming traffic is allowed on port 22 only during installation +1. Review and launch the ec2 instance. + +Step 2: Verify permissions and connectivity +---------- + +Next, verify that the EC2 instance has the required IAM permissions and can connect to ElastiCache Redis instances. + +1. SSH into the newly launched EC2 instance. +1. Open a command prompt. +1. Run the command `aws s3 ls`. This should list all S3 buckets. + 1. If the `aws` command cannot be found, make sure your EC2 instance is based of Amazon Linux. +1. Next, find the hostname of the ElastiCache instance you want to analyze and run the command `echo info | nc 6379`. +1. If you see some details about the ElastiCache Redis instance, you can proceed to the next step. +1. 
If you cannot connect to redis, you should review your VPC, subnet, and security group settings. + +Step 3: Install Docker on EC2 +------- + +Next, install Docker on the EC2 instance. Run the following commands: + +1. `sudo yum update -y` +1. `sudo yum install -y docker` +1. `sudo service docker start` +1. `sudo usermod -a -G docker ec2-user` +1. Log out and log back in again to pick up the new docker group permissions. +1. To verify, run `docker ps`. You should see some output without having to run `sudo`. + +Step 4: Run Redis Insight in the Docker container +------- + +Finally, install Redis Insight using one of the options described below. + +1. If you do not want to persist your Redis Insight data: + +```bash +docker run -d --name redisinsight -p 5540:5540 redis/redisinsight:latest +``` +2. If you want to persist your Redis Insight data, first attach the Docker volume to the `/data` path and then run the following command: + +```bash +docker run -d --name redisinsight -p 5540:5540 redis/redisinsight:latest -v redisinsight:/data +``` + +If the previous command returns a permission error, ensure that the user with `ID = 1000` has the necessary permission to access the volume provided (`redisinsight` in the command above). + +Find the IP Address of your EC2 instances and launch your browser at `http://:5540`. Accept the EULA and start using Redis Insight. + +Redis Insight also provides a health check endpoint at `http://:5540/api/health/` to monitor the health of the running container. + +Summary +------ + +In this guide, we installed Redis Insight on an AWS EC2 instance running Docker. As a next step, you should add an ElastiCache Redis Instance and then run the memory analysis. +--- +categories: +- docs +- operate +- redisinsight +description: How to install Redis Insight on the desktop +linkTitle: Install on desktop +title: Install on desktop +weight: 1 +--- +## Supported operating systems + +Redis Insight is supported on multiple operating systems: + +| Operating System | Supported Versions [^1] | +|:--- |:--- | +| **Windows** | Windows 11 | +| | Windows 10 | +| **macOS** | macOS 15 | +| | macOS 14 | +| | macOS 13 | +| | macOS 12 | +| | macOS 11 | +| | macOS 10.15 | +| **Ubuntu Linux** | Ubuntu 24.04 | +| | Ubuntu 23.10 | +| | Ubuntu 22.04 | +| | Ubuntu 20.04 | +| **Debian Linux** | Debian 12 | +| | Debian 11 | + +[^1]: Includes later versions of same major or major.minor release. + +## Install + +Redis Insight is available for download for free from this [web site](https://redis.io/insight/?utm_source=redisinsight&utm_medium=website&utm_campaign=install_redisinsight#insight-form). + +It is also available on: +- Microsoft Store +- Apple Store +- Snapcraft +- Flathub +- [Docker Hub]({{< relref "/operate/redisinsight/install/install-on-docker" >}}). + +After installation, run the Redis Insight application in the same was as you would run other desktop applications. + +## Build + +Alternatively, you can also build Redis Insight from source. See the [wiki](https://github.com/RedisInsight/RedisInsight#build) for instructions. +--- +categories: +- docs +- operate +- redisinsight +description: How to install Redis Insight on Kubernetes +linkTitle: Install on Kubernetes +title: Install on Kubernetes +weight: 3 +--- +This tutorial shows how to install Redis Insight on [Kubernetes](https://kubernetes.io/) (K8s). +This is an easy way to use Redis Insight with a [Redis Enterprise K8s deployment]({{< relref "operate/kubernetes/" >}}). 
+ +## Create the Redis Insight deployment and service + +Below is an annotated YAML file that will create a Redis Insight +deployment and a service in a K8s cluster. + +1. Create a new file named `redisinsight.yaml` with the content below. + +```yaml +# Redis Insight service with name 'redisinsight-service' +apiVersion: v1 +kind: Service +metadata: + name: redisinsight-service # name should not be 'redisinsight' + # since the service creates + # environment variables that + # conflicts with redisinsight + # application's environment + # variables `RI_APP_HOST` and + # `RI_APP_PORT` +spec: + type: LoadBalancer + ports: + - port: 80 + targetPort: 5540 + selector: + app: redisinsight +--- +# Redis Insight deployment with name 'redisinsight' +apiVersion: apps/v1 +kind: Deployment +metadata: + name: redisinsight #deployment name + labels: + app: redisinsight #deployment label +spec: + replicas: 1 #a single replica pod + selector: + matchLabels: + app: redisinsight #which pods is the deployment managing, as defined by the pod template + template: #pod template + metadata: + labels: + app: redisinsight #label for pod/s + spec: + containers: + + - name: redisinsight #Container name (DNS_LABEL, unique) + image: redis/redisinsight:latest #repo/image + imagePullPolicy: IfNotPresent #Installs the latest Redis Insight version + volumeMounts: + - name: redisinsight #Pod volumes to mount into the container's filesystem. Cannot be updated. + mountPath: /data + ports: + - containerPort: 5540 #exposed container port and protocol + protocol: TCP + volumes: + - name: redisinsight + emptyDir: {} # node-ephemeral volume https://kubernetes.io/docs/concepts/storage/volumes/#emptydir +``` + +2. Create the Redis Insight deployment and service: + +```sh +kubectl apply -f redisinsight.yaml +``` + +3. Once the deployment and service are successfully applied and complete, access Redis Insight. This can be accomplished by using the `` of the service we created to reach Redis Insight. + +```sh +$ kubectl get svc redisinsight-service +NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE +redisinsight-service 80:32143/TCP 1m +``` + +4. If you are using minikube, run `minikube list` to list the service and access Redis Insight at `http://:`. +``` +$ minikube list +|-------------|----------------------|--------------|---------------------------------------------| +| NAMESPACE | NAME | TARGET PORT | URL | +|-------------|----------------------|--------------|---------------------------------------------| +| default | kubernetes | No node port | | +| default | redisinsight-service | 80 | http://: | +| kube-system | kube-dns | No node port | | +|-------------|----------------------|--------------|---------------------------------------------| +``` + +## Create the Redis Insight deployment with persistant storage + +Below is an annotated YAML file that will create a Redis Insight +deployment in a K8s cluster. It will assign a peristent volume created from a volume claim template. +Write access to the container is configured in an init container. When using deployments +with persistent writeable volumes, it's best to set the strategy to `Recreate`. Otherwise you may find yourself +with two pods trying to use the same volume. + +1. Create a new file `redisinsight.yaml` with the content below. 
+ +```yaml +# Redis Insight service with name 'redisinsight-service' +apiVersion: v1 +kind: Service +metadata: + name: redisinsight-service # name should not be 'redisinsight' + # since the service creates + # environment variables that + # conflicts with redisinsight + # application's environment + # variables `RI_APP_HOST` and + # `RI_APP_PORT` +spec: + type: LoadBalancer + ports: + - port: 80 + targetPort: 5540 + selector: + app: redisinsight +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: redisinsight-pv-claim + labels: + app: redisinsight +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 2Gi + storageClassName: default +--- +# Redis Insight deployment with name 'redisinsight' +apiVersion: apps/v1 +kind: Deployment +metadata: + name: redisinsight #deployment name + labels: + app: redisinsight #deployment label +spec: + replicas: 1 #a single replica pod + strategy: + type: Recreate + selector: + matchLabels: + app: redisinsight #which pods is the deployment managing, as defined by the pod template + template: #pod template + metadata: + labels: + app: redisinsight #label for pod/s + spec: + volumes: + - name: redisinsight + persistentVolumeClaim: + claimName: redisinsight-pv-claim + initContainers: + - name: init + image: busybox + command: + - /bin/sh + - '-c' + - | + chown -R 1000 /data + resources: {} + volumeMounts: + - name: redisinsight + mountPath: /data + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + containers: + - name: redisinsight #Container name (DNS_LABEL, unique) + image: redis/redisinsight:latest #repo/image + imagePullPolicy: IfNotPresent #Always pull image + volumeMounts: + - name: redisinsight #Pod volumes to mount into the container's filesystem. Cannot be updated. + mountPath: /data + ports: + - containerPort: 5540 #exposed container port and protocol + protocol: TCP +``` + +2. Create the Redis Insight deployment and service. + +```sh +kubectl apply -f redisinsight.yaml +``` + +## Create the Redis Insight deployment without a service. + +Below is an annotated YAML file that will create a Redis Insight +deployment in a K8s cluster. + +1. Create a new file redisinsight.yaml with the content below + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: redisinsight # deployment name + labels: + app: redisinsight # deployment label +spec: + replicas: 1 # a single replica pod + selector: + matchLabels: + app: redisinsight # which pods is the deployment managing, as defined by the pod template + template: # pod template + metadata: + labels: + app: redisinsight # label for pod/s + spec: + containers: + - name: redisinsight # Container name (DNS_LABEL, unique) + image: redis/redisinsight:latest # repo/image + imagePullPolicy: IfNotPresent # Always pull image + env: + # If there's a service named 'redisinsight' that exposes the + # deployment, we manually set `RI_APP_HOST` and + # `RI_APP_PORT` to override the service environment + # variables. + - name: RI_APP_HOST + value: "0.0.0.0" + - name: RI_APP_PORT + value: "5540" + volumeMounts: + - name: redisinsight # Pod volumes to mount into the container's filesystem. Cannot be updated. 
+ mountPath: /data + ports: + - containerPort: 5540 # exposed container port and protocol + protocol: TCP + livenessProbe: # Probe to check container health + httpGet: + path: /healthcheck/ # exposed RI endpoint for healthcheck + port: 5540 # exposed container port + initialDelaySeconds: 5 # number of seconds to wait after the container starts to perform liveness probe + periodSeconds: 5 # period in seconds after which liveness probe is performed + failureThreshold: 1 # number of liveness probe failures after which container restarts + volumes: + - name: redisinsight + emptyDir: {} # node-ephemeral volume https://kubernetes.io/docs/concepts/storage/volumes/#emptydir +``` + +2. Create the Redis Insight deployment + +```sh +kubectl apply -f redisinsight.yaml +``` + +{{< alert title="Note" >}} +If the deployment will be exposed by a service whose name is 'redisinsight', set `RI_APP_HOST` and `RI_APP_PORT` environment variables to override the environment variables created by the service. +{{< /alert >}} + +## Run Redis Insight + +Once the deployment has been successfully applied and the deployment is complete, access Redis Insight. This can be accomplished by exposing the deployment as a K8s Service or by using port forwarding, as in the example below: + +```sh +kubectl port-forward deployment/redisinsight 5540 +``` + +Open your browser and point to +--- +categories: +- docs +- operate +- redisinsight +description: How to install Redis Insight on Docker +linkTitle: Install on Docker +title: Install on Docker +weight: 2 +--- +This tutorial shows how to install Redis Insight on [Docker](https://www.docker.com/) so you can use Redis Insight in development. +See a separate guide for installing [Redis Insight on AWS]({{< relref "/operate/redisinsight/install/install-on-aws" >}}). + +## Install Docker + +The first step is to [install Docker for your operating system](https://docs.docker.com/install/). + +## Run Redis Insight Docker image + +You can install Redis Insight using one of the options described below. + +1. If you do not want to persist your Redis Insight data: + +```bash +docker run -d --name redisinsight -p 5540:5540 redis/redisinsight:latest +``` +2. If you want to persist your Redis Insight data, first attach the Docker volume to the `/data` path and then run the following command: + +```bash +docker run -d --name redisinsight -p 5540:5540 redis/redisinsight:latest -v redisinsight:/data +``` + +If the previous command returns a permission error, ensure that the user with `ID = 1000` has the necessary permissions to access the volume provided (`redisinsight` in the command above). + +Next, point your browser to `http://localhost:5540`. + +Redis Insight also provides a health check endpoint at `http://localhost:5540/api/health/` to monitor the health of the running container. +--- +categories: +- docs +- operate +- redisinsight +description: Install Redis Insight on AWS, Docker, Kubernetes, and desktop +linkTitle: Install Redis Insight +title: Install Redis Insight +weight: 3 +--- + +This is a an installation guide. You'll learn how to install Redis Insight on Amazon Web Services (AWS), Docker, and Kubernetes.--- +categories: +- docs +- operate +- redisinsight +linkTitle: Configuration settings +title: Redis Insight configuration settings +weight: 5 +--- +## Configuration environment variables + +| Variable | Purpose | Default | Additional info | +| --- | --- | --- | --- | +| RI_APP_PORT | The port that Redis Insight listens on. |
  • Docker: 5540
  • desktop: 5530
| See [Express Documentation](https://expressjs.com/en/api.html#app.listen)| +| RI_APP_HOST | The host that Redis Insight listens on. |
  • Docker: 0.0.0.0
  • desktop: 127.0.0.1
| See [Express Documentation](https://expressjs.com/en/api.html#app.listen)| +| RI_SERVER_TLS_KEY | Private key for HTTPS. | n/a | Private key in [PEM format](https://www.ssl.com/guide/pem-der-crt-and-cer-x-509-encodings-and-conversions/#ftoc-heading-3). Can be a path to a file or a string in PEM format.| +| RI_SERVER_TLS_CERT | Certificate for supplied private key. | n/a | Public certificate in [PEM format](https://www.ssl.com/guide/pem-der-crt-and-cer-x-509-encodings-and-conversions/#ftoc-heading-3). Can be a path to a file or a string in PEM format.| +| RI_ENCRYPTION_KEY | Key to encrypt data with. | n/a | Available only for Docker.
Redis Insight stores sensitive information (database passwords, Workbench history, etc.) locally (using [sqlite3](https://github.com/TryGhost/node-sqlite3)). This variable allows you to store that sensitive information encrypted using the specified encryption key.
Note: The same encryption key should be provided for subsequent `docker run` commands with the same volume attached to decrypt the information. | +| RI_LOG_LEVEL | Configures the log level of the application. | `info` | Supported logging levels are prioritized from highest to lowest:
  • error
  • warn
  • info
  • http
  • verbose
  • debug
  • silly
| +| RI_FILES_LOGGER | Logs to file. | `true` | By default, you can find log files in the following folders:
  • Docker: `/data/logs`
  • desktop: `<user-home-dir>/.redisinsight-app/logs`
| +| RI_STDOUT_LOGGER | Logs to STDOUT. | `true` | | +| RI_PROXY_PATH | Configures a subpath for a proxy. | n/a | Available only for Docker. | +| RI_DATABASE_MANAGEMENT | When set to `false`, this disables the ability to manage database connections (adding, editing, or deleting). | `true` | | + +## Preconfigure database connections +Redis Insight allows you to preconfigure database connections using environment variables or a JSON file, enabling centralized and efficient configuration. +There are two ways to preconfigure database connections in Redis Insight Electron and Docker: +1. Use environment variables. +1. Use a JSON file. + +### Preconfigure database connections using environment variables +Redis Insight allows you to preconfigure database connections using environment variables. + +**NOTES**: +- To configure multiple database connections, replace the asterisk (*) in each environment variable with a unique identifier for each database connection. If setting up only one connection, you can omit the asterisk, and Redis Insight will default to using 0 as the ID. +- If you modify environment variables, the changes will take effect after restarting Redis Insight. +- If you restart Redis Insight without these environment variables, all previously added database connections will be removed. + +| Variable | Purpose | Default | Additional info | +| --- | --- | --- | --- | +| RI_REDIS_HOST* | Host of a Redis database. | N/A | | +| RI_REDIS_PORT* | Port of a Redis database. | `6379` | | +| RI_REDIS_ALIAS* | Alias of a database connection. | `{host}:{port}` | | +| RI_REDIS_USERNAME* | Username to connect to a Redis database. | `default` | | +| RI_REDIS_PASSWORD* | Password to connect to a Redis database. | No password | | +| RI_REDIS_TLS* | Indicates whether TLS certificates should be used to connect. | `FALSE` | Accepts `TRUE` or `FALSE` | +| RI_REDIS_TLS_CA_BASE64* | CA certificate in base64 format. | N/A | Specify a CA certificate in this environment variable or provide a file path using `RI_REDIS_TLS_CA_PATH*`. | +| RI_REDIS_TLS_CA_PATH* | Path to the CA certificate file. | N/A | | +| RI_REDIS_TLS_CERT_BASE64* | Client certificate in base64 format. | N/A | Specify a client certificate in this environment variable or provide a file path using `RI_REDIS_TLS_CERT_PATH*`. | +| RI_REDIS_TLS_CERT_PATH* | Path to the Client certificate file. | N/A | | +| RI_REDIS_TLS_KEY_BASE64* | Private key for the client certificate in base64 format. | N/A | Indicate a private key in this environment variable or use another variable to get it from a file. | +| RI_REDIS_TLS_KEY_PATH* | Path to private key file. | N/A | | + +### Preconfigure database connections using a JSON file +Redis Insight also allows you to preconfigure database connections using a JSON file. + +**NOTES** +- The JSON file format should match the one used when exporting database connections from Redis Insight. +- The `id` field in the JSON file should include unique identifiers to avoid conflicts for database connections. +- Changes to the JSON file will take effect after restarting Redis Insight. +- If the JSON file is removed, all database connections added via the file will be removed. 
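For Docker deployments, one way to supply such a file is to mount it into the container and point the `RI_PRE_SETUP_DATABASES_PATH` variable (described in the table below) at it. The following Docker Compose sketch is illustrative only; the file name and mount path are placeholders:

```yaml
services:
  redisinsight:
    image: redis/redisinsight:latest
    ports:
      - "5540:5540"
    environment:
      # Path inside the container where the mounted JSON file can be found
      RI_PRE_SETUP_DATABASES_PATH: /data/connections.json
    volumes:
      # Mount a JSON file that follows the Redis Insight connection export format
      - ./connections.json:/data/connections.json:ro
```

Mounting the file read-only keeps the container from modifying your source file.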
+ +| Variable | Purpose | Default | Additional info | +| --- | --- | --- | --- | +| RI_PRE_SETUP_DATABASES_PATH | Path to a JSON file containing the database connections to preconfigure | | + +## Use Redis Insight behind a reverse proxy + +When you configure Redis Insight to run behind a reverse proxy like [NGINX](https://www.nginx.com/), set the request timeout to over 30 seconds on the reverse proxy because some requests can be long-running. + +Redis Insight also allows you to manage its connection timeout on the form to configure the connection details. The default timeout is 30 seconds. + +Hosting Redis Insight behind a prefix path (path-rewriting) is not supported. +--- +categories: +- docs +- operate +- redisinsight +linkTitle: Proxy settings +title: Subpath proxy +weight: 7 +--- + +{{}} +Subpath proxy is available only on the Docker version. +{{}} + +You can enable the subpath proxy by setting the `RI_PROXY_PATH` environment variable. + + +When `RI_PROXY_PATH` is being set with a path, Redis Insight is +accessible only on that subpath. The default routes are given the +provided prefix subpath. There isn’t any way to add another proxy behind +this one unless the same subpath is used for the new one. + +{{}} +Once you set the static subpath environment variable, Redis Insight is only reachable on the provided subpath. The default endpoint won't work. +{{}} + +## Using Redis Insight behind a reverse proxy + +When you configure Redis Insight to run behind a reverse proxy like NGINX, set the request timeout to over 30 seconds on the reverse proxy because some requests can be long-running. + +Redis Insight also allows you to manage its connection timeout on the form to configure the connection details. The default timeout is 30 seconds. + +Hosting Redis Insight behind a prefix path (path-rewriting) is not supported. 
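As a minimal sketch of enabling the subpath itself (separate from the full reverse-proxy example below), you can set `RI_PROXY_PATH` on the Docker container; the `/redisinsight` value here is only an example:

```yaml
services:
  redisinsight:
    image: redis/redisinsight:latest
    ports:
      - "5540:5540"
    environment:
      # Serve all Redis Insight routes under this subpath (Docker only)
      RI_PROXY_PATH: /redisinsight
```

With this setting, Redis Insight responds only under the configured subpath, as described above.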
+ + +## Example + +### Docker compose file + +```yaml +version: "3.7" +services: + redis-stack: + image: redis/redis-stack-server + networks: + - redis-network + + redisinsight: + image: redis/redisinsight + environment: + - RIPORT=${RIPORT:-5540} + - RITRUSTEDORIGINS=http://localhost:9000 + depends_on: + - redis-stack + networks: + - redis-network + + nginx-basicauth: + image: nginx + volumes: + - ./nginx-basic-auth.conf.template:/etc/nginx/templates/nginx-basic-auth.conf.template + ports: + - "${NGINX_PORT:-9000}:${NGINX_PORT:-9000}" + environment: + - FORWARD_HOST=redisinsight + - FORWARD_PORT=${RIPORT:-5540} + - NGINX_PORT=${NGINX_PORT:-9000} + - BASIC_USERNAME=${BASIC_USERNAME:-redis} + - BASIC_PASSWORD=${BASIC_PASSWORD:-password} + command: + - bash + - -c + - | + printf "$$BASIC_USERNAME:$$(openssl passwd -1 $$BASIC_PASSWORD)\n" >> /etc/nginx/.htpasswd + /docker-entrypoint.sh nginx -g "daemon off;" + depends_on: + - redisinsight + networks: + - redis-network +``` + +### nginx config + +``` +server { + listen ${NGINX_PORT} default_server; + + location / { + auth_basic "redisinsight"; + auth_basic_user_file .htpasswd; + + proxy_pass http://${FORWARD_HOST}:${FORWARD_PORT}; + proxy_read_timeout 900; + } +} + +``` + +### Login page + +{{< image filename="/images/ri/ri-reverse-proxy-login.png" alt="RedisInsight login page" >}} + + +### After login + +{{< image filename="/images/ri/ri-reverse-proxy-post-login.png" alt="RedisInsight after login" >}} + +--- +title: Redis Insight +description: Install and manage Redis Insight +linkTitle: Redis Insight +categories: +- docs +- operate +- redisinsight +weight: 50 +--- + +For information on using Redis Insight, see [these pages]({{< relref "/develop/tools/insight" >}}). + +--- +Title: Considerations for planning Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: Information about Active-Active database to take into consideration while + planning a deployment, such as compatibility, limitations, and special configuration +linktitle: Planning considerations +weight: 22 +--- + +In Redis Enterprise, Active-Active geo-distribution is based on [conflict-free replicated data type (CRDT) technology](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type). Compared to databases without geo-distribution, Active-Active databases have more complex replication and networking, as well as a different data type. + +Because of the complexities of Active-Active databases, there are special considerations to keep in mind while planning your Active-Active database. + +See [Active-Active Redis]({{< relref "/operate/rs/databases/active-active/" >}}) for more information about geo-distributed replication. For more info on other high availability features, see [Durability and high availability]({{< relref "/operate/rs/databases/durability-ha/" >}}). + +## Participating clusters + +You need at least [two participating clusters]({{< relref "/operate/rs/clusters/new-cluster-setup" >}}) for an Active-Active database. If your database requires more than ten participating clusters, contact Redis support. You can [add or remove participating clusters]({{< relref "/operate/rs/databases/active-active/manage#participating-clusters/" >}}) after database creation. + +{{}} +If an Active-Active database [runs on flash memory]({{}}), you cannot add participating clusters that run on RAM only. +{{}} + +Changes made from the Cluster Manager UI to an Active-Active database configuration only apply to the cluster you are editing. 
For global configuration changes across all clusters, use the `crdb-cli` command-line utility. + +## Memory limits + +Database memory limits define the maximum size of your database across all database replicas and [shards]({{< relref "/operate/rs/references/terminology.md#redis-instance-shard" >}}) on the cluster. Your memory limit also determines the number of shards. + +Besides your dataset, the memory limit must also account for replication, Active-Active metadata, and module overhead. These features can increase your database size, sometimes increasing it by two times or more. + +Factors to consider when sizing your database: + +- **dataset size**: you want your limit to be above your dataset size to leave room for overhead. +- **database throughput**: high throughput needs more shards, leading to a higher memory limit. +- [**modules**]({{< relref "/operate/oss_and_stack/stack-with-enterprise" >}}): using modules with your database can consume more memory. +- [**database clustering**]({{< relref "/operate/rs/databases/durability-ha/clustering.md" >}}): enables you to spread your data into shards across multiple nodes (scale out). +- [**database replication**]({{< relref "/operate/rs/databases/durability-ha/replication.md" >}}): enabling replication doubles memory consumption +- [**Active-Active replication**]({{< relref "/operate/rs/databases/active-active/_index.md" >}}): enabling Active-Active replication requires double the memory of regular replication, which can be up to two times (2x) the original data size per instance. +- [**database replication backlog**]({{< relref "/operate/rs/databases/active-active/manage#replication-backlog/" >}}) for synchronization between shards. By default, this is set to 1% of the database size. +- [**Active-Active replication backlog**]({{< relref "/operate/rs/databases/active-active/manage.md" >}}) for synchronization between clusters. By default, this is set to 1% of the database size. + +It's also important to know Active-Active databases have a lower threshold for activating the eviction policy, because it requires propagation to all participating clusters. The eviction policy starts to evict keys when one of the Active-Active instances reaches 80% of its memory limit. + +For more information on memory limits, see [Memory and performance]({{< relref "/operate/rs/databases/memory-performance/" >}}) or [Database memory limits]({{< relref "/operate/rs/databases/memory-performance/memory-limit.md" >}}). + +## Networking + +Network requirements for Active-Active databases include: + +- A VPN between each network that hosts a cluster with an instance (if your database spans WAN). +- A network connection to [several ports](#network-ports) on each cluster from all nodes in all participating clusters. +- A [network time service](#network-time-service) running on each node in all clusters. + +Networking between the clusters must be configured before creating an Active-Active database. The setup will fail if there is no connectivity between the clusters. + +### Network ports + +Every node must have access to the REST API ports of every other node as well as other ports for proxies, VPNs, and the Cluster Manager UI. See [Network port configurations]({{< relref "/operate/rs/networking/port-configurations.md" >}}) for more details. These ports should be allowed through firewalls that may be positioned between the clusters. 
+ +### Network Time Service {#network-time-service} + +Active-Active databases require a time service like NTP or Chrony to make sure the clocks on all cluster nodes are synchronized. +This is critical to avoid problems with internal cluster communications that can impact your data integrity. + +See [Synchronizing cluster node clocks]({{< relref "/operate/rs/clusters/configure/sync-clocks.md" >}}) for more information. + +## Redis modules {#redis-modules} + +Several Redis modules are compatible with Active-Active databases. Find the list of [compatible Redis modules]({{< relref "/operate/oss_and_stack/stack-with-enterprise/enterprise-capabilities" >}}). +{{< note >}} +Starting with v6.2.18, you can index, query, and perform full-text searches of nested JSON documents in Active-Active databases by combining RedisJSON and RediSearch. +{{< /note >}} + +## Limitations + +Active-Active databases have the following limitations: + +- An existing database can't be changed into an Active-Active database. To move data from an existing database to an Active-Active database, you must [create a new Active-Active database]({{< relref "/operate/rs/databases/active-active/create.md" >}}) and [migrate the data]({{< relref "/operate/rs/databases/import-export/migrate-to-active-active.md" >}}). +- [Discovery service]({{< relref "/operate/rs/databases/durability-ha/discovery-service.md" >}}) is not supported with Active-Active databases. Active-Active databases require FQDNs or [mDNS]({{< relref "/operate/rs/networking/mdns.md" >}}). +- The `FLUSH` command is not supported from the CLI. To flush your database, use the API or Cluster Manager UI. +- The `UNLINK` command is a blocking command for all types of keys. +- Cross slot multi commands (such as `MSET`) are not supported with Active-Active databases. +- The hashing policy can't be changed after database creation. +- If an Active-Active database [runs on flash memory]({{}}), you cannot add participating clusters that run on RAM only. +--- +Title: Application failover with Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: How to failover your application to connect to a remote replica. +linkTitle: App failover +weight: 99 +--- +Active-Active Redis deployments don't have a built-in failover or failback mechanism for application connections. +An application deployed with an Active-Active database connects to a replica of the database that is geographically nearby. +If that replica is not available, the application can failover to a remote replica, and failback again if necessary. +In this article we explain how this process works. + +Active-Active connection failover can improve data availability, but can negatively impact data consistency. +Active-Active replication, like Redis replication, is asynchronous. +An application that fails over to another replica can miss write operations. +If the failed replica saved the write operations in persistent storage, +then the write operations are processed when the failed replica recovers. + +## Detecting Failure + +Your application can detect two types of failure: + +1. **Local failures** - The local replica is down or otherwise unavailable +1. **Replication failures** - The local replica is available but fails to replicate to or from remote replicas + +### Local Failures + +Local failure is detected when the application is unable to connect to the database endpoint for any reason. 
Reasons for a local failure can include: multiple node failures, configuration errors, connection refused, connection timed out, unexpected protocol level errors. + +### Replication Failures + +Replication failures are more difficult to detect reliably without causing false positives. Replication failures can include: network split, replication configuration issues, remote replica failures. + +The most reliable method for health-checking replication is by using the Redis publish/subscribe (pub/sub) mechanism. + +{{< note >}} +Note that this document does not suggest that Redis pub/sub is reliable in the common sense. Messages can get lost in certain conditions, but that is acceptable in this case because typically the application determines that replication is down only after not being able to deliver a number of messages over a period of time. +{{< /note >}} + +When you use the pub/sub data type to detect failures, the application: + +1. Connects to all replicas and subscribes to a dedicated channel for each replica. +1. Connects to all replicas and periodically publishes a uniquely identifiable message. +1. Monitors received messages and ensures that it is able to receive its own messages within a predetermined window of time. + +You can also use known dataset changes to monitor the reliability of the replication stream, +but pub/sub is preferred method because: + +1. It does not involve dataset changes. +1. It does not make any assumptions about the dataset. +1. Pub/sub messages are delivered as replicated effects and are a more reliable indicator of a live replication link. In certain cases, dataset keys may appear to be modified even if the replication link fails. This happens because keys may receive updates through full-state replication (re-sync) or through online replication of effects. + +## Impact of sharding on failure detection + +If your sharding configuration is symmetric, make sure to use at least one key (PUB/SUB channels or real dataset key) per shard. Shards are replicated individually and are vulnerable to failure. Symmetric sharding configurations have the same number of shards and hash slots for all replicas. +We do not recommend an asymmetric sharding configuration, which requires at least one key per hash slot that intersects with a pair of shards. + +To make sure that there is at least one key per shard, the application should: + +1. Use the Cluster API to retrieve the database sharding configuration. +1. Compute a number of key names, such that there is one key per shard. +1. Use those key names as channel names for the pub/sub mechanism. + +### Failing over + +When the application needs to failover to another replica, it should simply re-establish its connections with the endpoint on the remote replica. Because Active/Active and Redis replication are asynchronous, the remote endpoint may not have all of the locally performed and acknowledged writes. + +It's best if your application doesn't read its own recent writes. Those writes can be either: + +1. Lost forever, if the local replica has an event such as a double failure or loss of persistent files. +1. Temporarily unavailable, but will be available at a later time if the local replica's failure is temporary. + + + +## Failback decision + +Your application can use the same checks described above to continue monitoring the state of the failed replica after failover. 
To monitor the state of a replica during the failback process, you must make sure the replica is available, re-synced with the remote replicas, and not in stale mode. The pub/sub mechanism is an effective way to monitor this.

Dataset-based mechanisms are potentially less reliable for several reasons:
1. To determine that a local replica is not stale, it is not enough to simply read keys from it. You must also attempt to write to it.
1. As stated above, remote writes for some keys appear in the local replica before the replication link is back up and while the replica is still in stale mode.
1. A replica that was never written to never becomes stale, so on startup it is immediately ready but can serve stale data for a longer period of time.

## Replica Configuration Changes

All failover and failback operations should be done strictly on the application side, and should not involve changes to the Active-Active configuration.
The only valid case for re-configuring the Active-Active deployment and removing a replica is when memory consumption becomes too high because garbage collection cannot be performed.
Once a replica is removed, it can only be re-joined as a new replica, and it loses any writes that were not converged.
---
alwaysopen: false
categories:
- docs
- operate
- rs
- rc
description: Overview of how developing applications differs for Active-Active databases from standalone Redis databases.
linkTitle: Develop for Active-Active
title: Develop applications with Active-Active databases
weight: 10
---
Developing geo-distributed, multi-master applications can be difficult.
Application developers may have to understand a large number of race
conditions between updates to various sites, as well as network and cluster
failures that can reorder events and change the outcome of updates
performed across geo-distributed writes.

Active-Active databases (formerly known as CRDB) are geo-distributed databases that span multiple Redis Enterprise Software (RS) clusters.
Active-Active databases depend on multi-master replication (MMR) and Conflict-free
Replicated Data Types (CRDTs) to power a simple development experience
for geo-distributed applications. Active-Active databases allow developers to use existing
Redis data types and commands, while understanding the developer's intent and
automatically handling conflicting concurrent writes to the same key
across multiple geographies. For example, developers can simply use the
INCR or INCRBY commands in all instances of the geo-distributed
application, and Active-Active databases handle the additive nature of INCR to reflect the
correct final value. The following example displays a sequence of events
over time: t1 to t9. This Active-Active database has two member Active-Active databases: member CRDB1 and
member CRDB2. The local operations executed in each member Active-Active database are
listed under the member Active-Active database name. The "Sync" event represents the moment
when synchronization catches up and distributes all local member Active-Active database
updates to other participating clusters and other member Active-Active databases.

| **Time** | **Member CRDB1** | **Member CRDB2** |
| :------: | :------: | :------: |
| t1 | INCRBY key1 7 | |
| t2 | | INCRBY key1 3 |
| t3 | GET key1<br/>7 | GET key1<br/>3 |
| t4 | — Sync — | — Sync — |
| t5 | GET key1<br/>10 | GET key1<br/>10 |
| t6 | DECRBY key1 3 | |
| t7 | | INCRBY key1 6 |
| t8 | — Sync — | — Sync — |
| t9 | GET key1<br/>13 | GET key1
13 | + +Databases provide various approaches to address some of these concerns: + +- Active-Passive Geo-distributed deployments: With active-passive + distributions, all writes go to an active cluster. Redis Enterprise + provides a "Replica Of" capability that provides a similar approach. + This can be employed when the workload is heavily balanced towards + read and few writes. However, WAN performance and availability + is quite flaky and traveling large distances for writes take away + from application performance and availability. +- Two-phase Commit (2PC): This approach is designed around a protocol + that commits a transaction across multiple transaction managers. + Two-phase commit provides a consistent transactional write across + regions but fails transactions unless all participating transaction + managers are "available" at the time of the transaction. The number + of messages exchanged and its cross-regional availability + requirement make two-phase commit unsuitable for even moderate + throughputs and cross-geo writes that go over WANs. +- Sync update with Quorum-based writes: This approach synchronously + coordinates a write across majority number of replicas across + clusters spanning multiple regions. However, just like two-phase + commit, number of messages exchanged and its cross-regional + availability requirement make geo-distributed quorum writes + unsuitable for moderate throughputs and cross geo writes that go + over WANs. +- Last-Writer-Wins (LWW) Conflict Resolution: Some systems provide + simplistic conflict resolution for all types of writes where the + system clocks are used to determine the winner across conflicting + writes. LWW is lightweight and can be suitable for simpler data. + However, LWW can be destructive to updates that are not necessarily + conflicting. For example adding a new element to a set across two + geographies concurrently would result in only one of these new + elements appearing in the final result with LWW. +- MVCC (multi-version concurrency control): MVCC systems maintain + multiple versions of data and may expose ways for applications to + resolve conflicts. Even though MVCC system can provide a flexible + way to resolve conflicting writes, it comes at a cost of great + complexity in the development of a solution. + +Even though types and commands in Active-Active databases look identical to standard Redis +types and commands, the underlying types in RS are enhanced to maintain +more metadata to create the conflict-free data type experience. This +section explains what you need to know about developing with Active-Active databases on +Redis Enterprise Software. + +## Lua scripts + +Active-Active databases support Lua scripts, but unlike standard Redis, Lua scripts always +execute in effects replication mode. There is currently no way to +execute them in script-replication mode. + +## Eviction + +The default policy for Active-Active databases is _noeviction_ mode. Redis Enterprise version 6.0.20 and later support all eviction policies for Active-Active databases, unless [Auto Tiering]({{< relref "/operate/rs/databases/auto-tiering" >}})(previously known as Redis on Flash) is enabled. +For details, see [eviction for Active-Active databases]({{< relref "/operate/rs/databases/memory-performance/eviction-policy#active-active-database-eviction" >}}). + + +## Expiration + +Expiration is supported with special multi-master semantics. 
+ +If a key's expiration time is changed at the same time on different +members of the Active-Active database, the longer extended time set via TTL on a key is +preserved. As an example: + +If this command was performed on key1 on cluster #1 + +```sh +127.0.0.1:6379> EXPIRE key1 10 +``` + +And if this command was performed on key1 on cluster #2 + +```sh +127.0.0.1:6379> EXPIRE key1 50 +``` + +The EXPIRE command setting the key to 50 would win. + +And if this command was performed on key1 on cluster #3: + +```sh +127.0.0.1:6379> PERSIST key1 +``` + +It would win out of the three clusters hosting the Active-Active database as it sets the +TTL on key1 to an infinite time. + +The replica responsible for the "winning" expire value is also +responsible to expire the key and propagate a DEL effect when this +happens. A "losing" replica is from this point on not responsible +for expiring the key, unless another EXPIRE command resets the TTL. +Furthermore, a replica that is NOT the "owner" of the expired value: + +- Silently ignores the key if a user attempts to access it in READ + mode, e.g. treating it as if it was expired but not propagating a + DEL. +- Expires it (sending a DEL) before making any modifications if a user + attempts to access it in WRITE mode. + + {{< note >}} +Expiration values are in the range of [0, 2^49] for Active-Active databases and [0, 2^64] for non Active-Active databases. + {{< /note >}} + +## Out-of-Memory (OOM) {#outofmemory-oom} + +If a member Active-Active database is in an out of memory situation, that member is marked +"inconsistent" by RS, the member stops responding to user traffic, and +the syncer initiates full reconciliation with other peers in the Active-Active database. + +## Active-Active Database Key Counts + +Keys are counted differently for Active-Active databases: + +- DBSIZE (in `shard-cli dbsize`) reports key header instances + that represent multiple potential values of a key before a replication conflict is resolved. +- expired_keys (in `bdb-cli info`) can be more than the keys count in DBSIZE (in `shard-cli dbsize`) + because expires are not always removed when a key becomes a tombstone. + A tombstone is a key that is logically deleted but still takes memory + until it is collected by the garbage collector. +- The Expires average TTL (in `bdb-cli info`) is computed for local expires only. + +## INFO + +The INFO command has an additional crdt section which provides advanced +troubleshooting information (applicable to support etc.): + +| **Section** | **Field** | **Description** | +| ------ | ------ | ------ | +| **CRDT Context** | crdt_config_version | Currently active Active-Active database configuration version. | +| | crdt_slots | Hash slots assigned and reported by this shard. | +| | crdt_replid | Unique Replica/Shard IDs. | +| | crdt_clock | Clock value of local vector clock. | +| | crdt_ovc | Locally observed Active-Active database vector clock. | +| **Peers** | A list of currently connected Peer Replication peers. This is similar to the slaves list reported by Redis. | | +| **Backlogs** | A list of Peer Replication backlogs currently maintained. Typically in a full mesh topology only a single backlog is used for all peers, as the requested Ids are identical. | | +| **CRDT Stats** | crdt_sync_full | Number of inbound full synchronization processes performed. | +| | crdt_sync_partial_ok | Number of partial (backlog based) re-synchronization processes performed. 
| +| | crdt_sync_partial-err | Number of partial re-synchronization processes failed due to exhausted backlog. | +| | crdt_merge_reqs | Number of inbound merge requests processed. | +| | crdt_effect_reqs | Number of inbound effect requests processed. | +| | crdt_ovc_filtered_effect_reqs | Number of inbound effect requests filtered due to old vector clock. | +| | crdt_gc_pending | Number of elements pending garbage collection. | +| | crdt_gc_attempted | Number of attempts to garbage collect tombstones. | +| | crdt_gc_collected | Number of tombstones garbaged collected successfully. | +| | crdt_gc_gvc_min | The minimal globally observed vector clock, as computed locally from all received observed clocks. | +| | crdt_stale_released_with_merge | Indicates last stale flag transition was a result of a complete full sync. | +| **CRDT Replicas** | A list of crdt_replica \ entries, each describes the known state of a remote instance with the following fields: | | +| | config_version | Last configuration version reported. | +| | shards | Number of shards. | +| | slots | Total number of hash slots. | +| | slot_coverage | A flag indicating remote shards provide full coverage (i.e. all shards are alive). | +| | max_ops_lag | Number of local operations not yet observed by the least updated remote shard | +| | min_ops_lag | Number of local operations not yet observed by the most updated remote shard | +--- +Title: Sorted sets in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using sorted sets with an Active-Active database. +linkTitle: Sorted sets +weight: $weight +--- +{{< note >}} +[Redis Geospatial (Geo)]({{< relref "/commands/GEOADD" >}}) is based on Sorted Sets, so the same Active-Active database development instructions apply to Geo. +{{< /note >}} + +Similar to Redis Sets, Redis Sorted Sets are non-repeating collections +of Strings. The difference between the two is that every member of a +Sorted Set is associated with a score used to order the Sorted Set from +lowest to highest. While members are unique, they may have the same +score. + +With Sorted Sets, you can quickly add, remove or update elements as +well as get ranges by score or by rank (position). Sorted Sets in Active-Active databases +behave the same and maintain additional metadata to handle concurrent +conflicting writes. Conflict resolution is done in two +phases: + +1. First, the database resolves conflict at the set level using "OR + Set" (Observed-Remove Set). With OR-Set behavior, writes across + multiple Active-Active database instances are typically unioned except in cases of + conflicts. Conflicting writes can happen when an Active-Active database instance + deletes an element while the other adds or updates the same element. + In this case, an observed Remove rule is followed, and only + instances it has already seen are removed. In all other cases, the + Add / Update element wins. +1. Second, the database resolves conflict at the score level. In this + case, the score is treated as a counter and applies the same + conflict resolution as regular counters. 
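In practice, this two-phase rule means that concurrent ZADD calls for the same member resolve to a single winning score, while concurrent ZINCRBY calls accumulate across instances. Here is a minimal sketch, assuming the `redis-py` client; the endpoints and key names are placeholders, and the `sleep` calls simply give the background synchronization time to run:

```python
import time

import redis

# Hypothetical endpoints of two instances of the same Active-Active database.
instance1 = redis.Redis(host="redis-east.example.com", port=12000, decode_responses=True)
instance2 = redis.Redis(host="redis-west.example.com", port=12000, decode_responses=True)

# Create the member from one instance and let replication propagate it.
instance1.zadd("leaderboard", {"player:1": 1.0})
time.sleep(1)  # Active-Active replication is asynchronous and runs in the background.

# Concurrent increments from both instances are treated as counter operations,
# so they accumulate instead of overwriting each other.
instance1.zincrby("leaderboard", 1.0, "player:1")
instance2.zincrby("leaderboard", 2.0, "player:1")
time.sleep(1)

# Once both instances have synced, they converge on the same score
# (1.0 + 1.0 + 2.0 = 4.0 in this example).
print(instance1.zscore("leaderboard", "player:1"))
print(instance2.zscore("leaderboard", "player:1"))
```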
+ +See the following examples to get familiar with Sorted Sets' +behavior in Active-Active database: + +Example of Simple Sorted Set with No +Conflict: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | ZADD Z 1.1 x | | +| t2 | — Sync — | — Sync — | +| t3 | | ZADD Z 1.2 y | +| t4 | — Sync — | — Sync — | +| t5 | ZRANGE Z 0 -1 => x y | ZRANGE Z 0 -1 => x y | + +**Explanation**: +When adding two different elements to a Sorted Set from different +replicas (in this example, x with score 1.1 was added by Instance 1 to +Sorted Set Z, and y with score 1.2 was added by Instance 2 to Sorted Set +Z) in a non-concurrent manner (i.e. each operation happened separately +and after both instances were in sync), the end result is a Sorted +Set including both elements in each Active-Active database instance. +Example of Sorted Set and Concurrent +Add: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | ZADD Z 1.1 x | | +| t2 | | ZADD Z 2.1 x | +| t3 | ZSCORE Z x => 1.1 | ZSCORE Z x => 2.1 | +| t4 | — Sync — | — Sync — | +| t5 | ZSCORE Z x => 2.1 | ZSCORE Z x => 2.1 | + +**Explanation**: +When concurrently adding an element x to a Sorted Set Z by two different +Active-Active database instances (Instance 1 added score 1.1 and Instance 2 added score +2.1), the Active-Active database implements Last Write Win (LWW) to determine the score of +x. In this scenario, Instance 2 performed the ZADD operation at time +t2\>t1 and therefore the Active-Active database sets the score 2.1 to +x. + +Example of Sorted Set with Concurrent Add Happening at the Exact Same +Time: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | ZADD Z 1.1 x | ZADD Z 2.1 x | +| t2 | ZSCORE Z x => 1.1 | ZSCORE Z x => 2.1 | +| t3 | — Sync — | — Sync — | +| t4 | ZSCORE Z x => 1.1 | ZSCORE Z x => 1.1 | + +**Explanation**: +The example above shows a relatively rare situation, in which two Active-Active database +instances concurrently added the same element x to a Sorted Set at the +same exact time but with a different score, i.e. Instance 1 added x with +a 1.1 score and Instance 2 added x with a 2.1 score. After syncing, the +Active-Active database realized that both operations happened at the same time and +resolved the conflict by arbitrarily (but consistently across all Active-Active database +instances) giving precedence to Instance 1. +Example of Sorted Set with Concurrent Counter +Increment: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | ZADD Z 1.1 x | | +| t2 | — Sync — | — Sync — | +| t3 | ZINCRBY Z 1.0 x | ZINCRBY Z 1.0 x | +| t4 | — Sync — | — Sync — | +| t5 | ZSCORE Z x => 3.1 | ZSCORE Z x => 3.1 | + +**Explanation**: +The result is the sum of all +ZINCRBY +operations performed by all Active-Active database instances. + +Example of Removing an Element from a Sorted +Set: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | ZADD Z 4.1 x | | +| t2 | — Sync — | — Sync — | +| t3 | ZSCORE Z x => 4.1 | ZSCORE Z x => 4.1 | +| t4 | ZREM Z x | ZINCRBY Z 2.0 x | +| t5 | ZSCORE Z x => nill | ZSCORE Z x => 6.1 | +| t6 | — Sync — | — Sync — | +| t7 | ZSCORE Z x => 2.0 | ZSCORE Z x => 2.0 | + +**Explanation**: +At t4 - t5, concurrent ZREM and ZINCRBY operations ran on Instance 1 +and Instance 2 respectively. 
Before the instances were in sync, the ZREM +operation could only delete what had been seen by Instance 1, so +Instance 2 was not affected. Therefore, the ZSCORE operation shows the +local effect on x. At t7, after both instances were in-sync, the Active-Active database +resolved the conflict by subtracting 4.1 (the value of element x in +Instance 1) from 6.1 (the value of element x in Instance 2). +--- +Title: Strings and bitfields in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using strings and bitfields with an Active-Active database. +linkTitle: Strings and bitfields +weight: $weight +--- +Active-Active databases support both strings and bitfields. + +{{}} +Active-Active **bitfield** support was added in RS version 6.0.20. +{{}} + +Changes to both of these data structures will be replicated across Active-Active member databases. + +## Replication semantics + +Except in the case of [string counters]({{< relref "#string-counter-support" >}}) (see below), both strings and bitfields are replicated using a "last write wins" approach. The reason for this is that strings and bitfields are effectively binary objects. So, unlike with lists, sets, and hashes, the conflict resolution semantics of a given operation on a string or bitfield are undefined. + +### How "last write wins" works + +A wall-clock timestamp (OS time) is stored in the metadata of every string +and bitfield operation. If the replication syncer cannot determine the order of operations, +the value with the latest timestamp wins. This is the only case with Active-Active databases where OS time is used to resolve a conflict. + +Here's an example where an update happening to the same key at a later +time (t2) wins over the update at t1. + +| **Time** | **Region 1** | **Region 2** | +| :------: | :------: | :------: | +| t1 | SET text “a” | | +| t2 | | SET text “b” | +| t3 | — Sync — | — Sync — | +| t4 | SET text “c” | | +| t5 | — Sync — | — Sync — | +| t6 | | SET text “d” | + +### String counter support + +When you're using a string as counter (for instance, with the [INCR]({{< relref "/commands/incr" >}}) or [INCRBY]({{< relref "/commands/incrby" >}}) commands), +then conflicts will be resolved semantically. + +On conflicting writes, counters accumulate the total counter operations +across all member Active-Active databases in each sync. + +Here's an example of how counter +values works when synced between two member Active-Active databases. With +each sync, the counter value accumulates the private increment and +decrements of each site and maintain an accurate counter across +concurrent writes. + +| **Time** | **Region 1** | **Region 2** | +| :------: | :------: | :------: | +| t1 | INCRBY counter 7 | | +| t2 | | INCRBY counter 3 | +| t3 | GET counter
7 | GET counter
3 | +| t4 | — Sync — | — Sync — | +| t5 | GET counter
10 | GET counter
10 | +| t6 | DECRBY counter 3 | | +| t7 | | INCRBY counter 6 | +| t8 | — Sync — | — Sync — | +| t9 | GET counter
13 | GET counter
13 |

{{< note >}}
Active-Active databases support 59-bit counters.
This limitation is to protect against overflowing a counter in a concurrent operation.
{{< /note >}}
---
Title: Hashes in Active-Active databases
alwaysopen: false
categories:
- docs
- operate
- rs
- rc
description: Information about using hashes with an Active-Active database.
linkTitle: Hashes
weight: $weight
---
Hashes are great for structured data that contains a map of fields and
values. They are used for managing distributed user or app session
state, user preferences, form data, and so on. Hash fields contain string
values, and those values operate just like the standard Redis string types
when it comes to CRDTs. Fields in hashes can be initialized as strings
using HSET or HMSET, or as counter types that are numeric integers
using HINCRBY or floats using HINCRBYFLOAT.

Hashes in Active-Active databases behave the same and maintain additional metadata to
achieve an "OR-Set" behavior to handle concurrent conflicting writes.
With the OR-Set behavior, writes to add new fields across multiple Active-Active database
instances are typically unioned except in cases of conflicts.
Conflicting instance writes can happen when an Active-Active database instance deletes a
field while another adds the same field. In this case, an observed-remove
rule is followed. That is, a remove can only remove fields it has
already seen, and in all other cases the field add/update wins.

Field values behave just like CRDT strings. A field value can be a plain
string or a counter, depending on the command used to initialize the
field value. See "String Data Type in Active-Active databases" and "String Data Type
with Counter Value in Active-Active databases" for more details.

Here is an example of an "add wins" case:

| **Time** | **CRDB Instance1** | **CRDB Instance2** |
| ------: | :------: | :------: |
| t1 | HSET key1 field1 “a” | |
| t2 | | HSET key1 field2 “b” |
| t4 | - Sync - | - Sync - |
| t5 | HGETALL key1<br/>1) “field2”<br/>2) “b”<br/>3) “field1”<br/>4) “a” | HGETALL key1<br/>1) “field2”<br/>2) “b”<br/>3) “field1”
4) “a” | +--- +Title: JSON in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using JSON with an Active-Active database. +linkTitle: JSON +weight: $weight +tocEmbedHeaders: true +--- +Active-Active databases support JSON data structures. + +The design is based on [A Conflict-Free Replicated JSON Datatype](https://arxiv.org/abs/1608.03960) by Kleppmann and Beresford, but the implementation includes some changes. Several [conflict resolution rule](#conflict-resolution-rules) examples were adapted from this paper as well. + +## Prerequisites + +To use JSON in an Active-Active database, you must enable JSON during database creation. + +Active-Active Redis Cloud databases add JSON by default. See [Create an Active-Active subscription]({{< relref "/operate/rc/databases/create-database/create-active-active-database#select-capabilities" >}}) in the Redis Cloud documentation for details. + +In Redis Enterprise Software, JSON is not enabled by default for Active-Active databases. See [Create an Active-Active JSON database]({{< relref "/operate/oss_and_stack/stack-with-enterprise/json/active-active#create-an-active-active-json-database" >}}) in the Redis Stack and Redis Enterprise documentation for instructions. + +{{}} + +{{}}--- +Title: Sets in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using sets with an Active-Active database. +linkTitle: Sets +weight: $weight +--- +A Redis set is an unordered collection of strings. It is possible to +add, remove, and test for the existence of members with Redis commands. +A Redis set maintains a unique collection of elements. Sets can be great +for maintaining a list of events (click streams), users (in a group +conversation), products (in recommendation lists), engagements (likes, +shares) and so on. + +Sets in Active-Active databases behave the same and maintain additional metadata to +achieve an "OR-Set" behavior to handle concurrent conflicting +writes. With the OR-Set behavior, writes across multiple Active-Active database instances +are typically unioned except in cases of conflicts. Conflicting instance +writes can happen when a Active-Active database instance deletes an element while the +other adds the same element. In this case and observed remove rule is +followed. That is, remove can only remove instances it has already seen +and in all other cases element add wins. + +Here is an example of an "add wins" case: + +| **Time** | **CRDB Instance1** | **CRDB Instance2** | +| ------: | :------: | :------: | +| t1 | SADD key1 “a” | | +| t2 | | SADD key1 “b” | +| t3 | SMEMBERS key1 “a” | SMEMBERS key1 “b” | +| t4 | — Sync — | — Sync — | +| t3 | SMEMBERS key1 “a” “b” | SMEMBERS key1 “a” “b” | + +Here is an example of an "observed remove" case. + +| **Time** | **CRDB Instance1** | **CRDB Instance2** | +| ------: | :------: | :------: | +| t1 | SMEMBERS key1 “a” “b” | SMEMBERS key1 “a” “b” | +| t2 | SREM key1 “a” | SADD key1 “c” | +| t3 | SREM key1 “c” | | +| t4 | — Sync — | — Sync — | +| t3 | SMEMBERS key1 “c” “b” | SMEMBERS key1 “c” “b” | +--- +Title: Streams in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using streams with an Active-Active database. +linkTitle: Streams +weight: $weight +--- +A [Redis Stream]({{< relref "/develop/data-types/streams" >}}) is a data structure that acts like an append-only log. 
Each stream entry consists of:

- A unique, monotonically increasing ID
- A payload consisting of a series of key-value pairs

You add entries to a stream with the XADD command. You access stream entries using the XRANGE, XREADGROUP, and XREAD commands (however, see the caveat about XREAD below).

## Streams and Active-Active

Active-Active databases allow you to write to the same logical stream from more than one region.
Streams are synchronized across the regions of an Active-Active database.

In the example below, we write to a stream concurrently from two regions. Notice that after syncing, both regions have identical streams:
| Time | Region 1 | Region 2 |
| ---- | -------- | -------- |
| t1 | XADD messages * text hello | XADD messages * text goodbye |
| t2 | XRANGE messages - +<br/>→ [1589929244828-1] | XRANGE messages - +<br/>→ [1589929246795-2] |
| t3 | — Sync — | — Sync — |
| t4 | XRANGE messages - +<br/>→ [1589929244828-1, 1589929246795-2] | XRANGE messages - +<br/>→ [1589929244828-1, 1589929246795-2] |
Notice also that the synchronized streams contain no duplicate IDs. As long as you allow the database to generate your stream IDs, you'll never have more than one stream entry with the same ID.

{{< note >}}
Redis Open Source uses one radix tree (referred to as `rax` in the code base) to implement each stream. However, Active-Active databases implement a single logical stream using one `rax` per region.
Each region adds entries only to its associated `rax` (but can remove entries from all `rax` trees).
This means that XREAD and XREADGROUP iterate simultaneously over all `rax` trees and return the appropriate entry by comparing the entry IDs from each `rax`.
{{< /note >}}

### Conflict resolution

Active-Active databases use an "observed-remove" approach to automatically resolve potential conflicts.

With this approach, a delete only affects the locally observable data.

In the example below, the stream `messages` is created at _t1_. At _t3_, the stream exists in two regions.
| Time | Region 1 | Region 2 |
| ---- | -------- | -------- |
| t1 | XADD messages * text hello | |
| t2 | — Sync — | — Sync — |
| t3 | XRANGE messages - +<br/>→ [1589929244828-1] | XRANGE messages - +<br/>→ [1589929244828-1] |
| t4 | DEL messages | XADD messages * text goodbye |
| t5 | — Sync — | — Sync — |
| t6 | XRANGE messages - +<br/>→ [1589929246795-2] | XRANGE messages - +<br/>→ [1589929246795-2] |
At _t4_, the stream is deleted from Region 1. At the same time, a new entry is added to the same stream at Region 2. After the sync, at _t6_, that new entry exists in both regions. This is because it was not visible when the local stream was deleted at _t4_.

### ID generation modes

Usually, you should allow Redis to generate stream entry IDs for you. You do this by specifying `*` as the ID in calls to XADD. However, you _can_ provide your own custom ID when adding entries to a stream.

Because Active-Active databases replicate asynchronously, providing your own IDs can create streams with duplicate IDs. This can occur when you write to the same stream from multiple regions.

| Time | Region 1 | Region 2 |
| ---- | ------------------------------- | ------------------------------- |
| _t1_ | `XADD x 100-1 f1 v1` | `XADD x 100-1 f1 v1` |
| _t2_ | _— Sync —_ | _— Sync —_ |
| _t3_ | `XRANGE x - +`
**→ [100-1, 100-1]** | `XRANGE x - +`
**→ [100-1, 100-1]** | + +In this scenario, two entries with the ID `100-1` are added at _t1_. After syncing, the stream `x` contains two entries with the same ID. + +{{< note >}} +Stream IDs in Redis Open Source consist of two integers separated by a dash ('-'). When the server generates the ID, the first integer is the current time in milliseconds, and the second integer is a sequence number. So, the format for stream IDs is MS-SEQ. +{{< /note >}} + +To prevent duplicate IDs and to comply with the original Redis streams design, Active-Active databases provide three ID modes for XADD: + +1. **Strict**: In _strict_ mode, XADD allows server-generated IDs (using the '`*`' ID specifier) or IDs consisting only of the millisecond (MS) portion. When the millisecond portion of the ID is provided, the ID's sequence number is calculated using the database's region ID. This prevents duplicate IDs in the stream. Strict mode rejects full IDs (that is, IDs containing both milliseconds and a sequence number). +1. **Semi-strict**: _Semi-strict_ mode is just like _strict_ mode except that it allows full IDs (MS-SEQ). Because it allows full IDs, duplicate IDs are possible in this mode. +1. **Liberal**: XADD allows any monotonically ascending ID. When given the millisecond portion of the ID, the sequence number will be set to `0`. This mode may also lead to duplicate IDs. + +The default and recommended mode is _strict_, which prevents duplicate IDs. + +{{% warning %}} +Why do you want to prevent duplicate IDs? First, XDEL, XCLAIM, and other commands can affect more than one entry when duplicate IDs are present in a stream. Second, duplicate entries may be removed if a database is exported or renamed. +{{% /warning %}} + +To change XADD's ID generation mode, use the `rladmin` command-line utility: + +Set _strict_ mode: +```sh +rladmin tune db crdb crdt_xadd_id_uniqueness_mode strict +``` + +Set _semi-strict_ mode: +```sh +rladmin tune db crdb crdt_xadd_id_uniqueness_mode semi-strict +``` + +Set _liberal_ mode: +```sh +rladmin tune db crdb crdt_xadd_id_uniqueness_mode liberal +``` + +### Iterating a stream with XREAD + +In Redis Open Source and in non-Active-Active databases, you can use XREAD to iterate over the entries in a Redis Stream. However, with an Active-Active database, XREAD may skip entries. This can happen when multiple regions write to the same stream. + +In the example below, XREAD skips entry `115-2`. + +| Time | Region 1 | Region 2 | +| ---- | -------------------------------------------------- | -------------------------------------------------- | +| _t1_ | `XADD x 110 f1 v1` | `XADD x 115 f1 v1` | +| _t2_ | `XADD x 120 f1 v1` | | +| _t3_ | `XADD x 130 f1 v1` | | +| _t4_ | `XREAD COUNT 2 STREAMS x 0`
**→ [110-1, 120-1]** | | +| _t5_ | _— Sync —_ | _— Sync —_ | +| _t6_ | `XREAD COUNT 2 STREAMS x 120-1`
**→ [130-1]** | | +| _t7_ | `XREAD STREAMS x 0`
**→[110-1, 115-2, 120-1, 130-1]** | `XREAD STREAMS x 0`
**→[110-1, 115-2, 120-1, 130-1]** | + + +You can use XREAD to reliably consume a stream only if all writes to the stream originate from a single region. Otherwise, you should use XREADGROUP, which always guarantees reliable stream consumption. + +## Consumer groups + +Active-Active databases fully support consumer groups with Redis Streams. Here is an example of creating two consumer groups concurrently: + +| Time | Region 1 | Region 2 | +| ---- | --------------------------- | --------------------------- | +| _t1_ | `XGROUP CREATE x group1 0` | `XGROUP CREATE x group2 0` | +| _t2_ | `XINFO GROUPS x`
**→ [group1]** | `XINFO GROUPS x`
**→ [group2]** | +| _t3_ | _— Sync —_ | — Sync — | +| _t4_ | `XINFO GROUPS x`
**→ [group1, group2]** | `XINFO GROUPS x`
**→ [group1, group2]** | + + +{{< note >}} +Redis Open Source uses one radix tree (`rax`) to hold the global pending entries list and another `rax` for each consumer's PEL. +The global PEL is a unification of all consumer PELs, which are disjoint. + +An Active-Active database stream maintains a global PEL and a per-consumer PEL for each region. + +When given an ID different from the special ">" ID, XREADGROUP iterates simultaneously over all of the PELs for all consumers. +It returns the next entry by comparing entry IDs from the different PELs. +{{< /note >}} + +### Conflict resolution + +The "delete wins" approach is a way to automatically resolve conflicts with consumer groups. +In case of concurrent consumer group operations, a delete will "win" over other concurrent operations on the same group. + +In this example, the DEL at _t4_ deletes both the observed `group1` and the non-observed `group2`: + +| Time | Region 1 | Region 2 | +| ---- | ----------------------- | ----------------------- | +| _t1_ | `XGROUP CREATE x group1 0` | | +| _t2_ | _— Sync —_ | _— Sync —_ | +| _t3_ | `XINFO GROUPS x`
**→ [group1]** | `XINFO GROUPS x`
**→ [group1]** | +| _t4_ | `DEL x` | `XGROUP CREATE x group2 0` | +| _t5_ | _— Sync —_ | _— Sync —_ | +| _t6_ | `EXISTS x`
**→ 0** | `EXISTS x`
**→ 0** | + +In this example, the XGROUP DESTROY at _t4_ affects both the observed `group1` created in Region 1 and the non-observed `group1` created in Region 3: + +| time | Region 1 | Region 2 | Region 3 | +| ---- | ----------------------- | ----------------------- | --------------------- | +| _t1_ | `XGROUP CREATE x group1 0` | | | +| _t2_ | _— Sync —_ | _— Sync —_ | | +| _t3_ | `XINFO GROUPS x`
**→ [group1]** | `XINFO GROUPS x`
**→ [group1]** | `XINFO GROUPS x`
**→ []** | +| _t4_ | | `XGROUP DESTROY x group1` | `XGROUP CREATE x group1 0` | +| _t5_ | _— Sync —_ | _— Sync — | — Sync — | +| _t6_ | `EXISTS x`
**→ 0** | `EXISTS x`
**→ 0** | `EXISTS x`
**→ 0** | + +### Group replication + +Calls to XREADGROUP and XACK change the state of a consumer group or consumer. However, it's not efficient to replicate every change to a consumer or consumer group. + +To maintain consumer groups in Active-Active databases with optimal performance: + +1. Group existence (CREATE/DESTROY) is replicated. +1. Most XACK operations are replicated. +1. Other operations, such as XGROUP, SETID, DELCONSUMER, are not replicated. + +For example: + +| Time | Region 1 | Region 2 | +| ---- | ------------------------------------------------- | ------------------------ | +| _t1_ | `XADD messages 110 text hello` | | +| _t2_ | `XGROUP CREATE messages group1 0` | | +| _t3_ | `XREADGROUP GROUP group1 Alice STREAMS messages >`
**→ [110-1]** | | +| _t4_ | _— Sync —_ | _— Sync —_ | +| _t5_ | `XRANGE messages - +`
**→ [110-1]** | XRANGE messages - +
**→ [110-1]** | +| _t6_ | `XINFO GROUPS messages`
**→ [group1]** | XINFO GROUPS messages
**→ [group1]** | +| _t7_ | `XINFO CONSUMERS messages group1`
**→ [Alice]** | XINFO CONSUMERS messages group1
**→ []** | +| _t8_ | `XPENDING messages group1 - + 1`
**→ [110-1]** | XPENDING messages group1 - + 1
**→ []** | + +Using XREADGROUP across regions can result in regions reading the same entries. +This is due to the fact that Active-Active Streams is designed for at-least-once reads or a single consumer. +As shown in the previous example, Region 2 is not aware of any consumer group activity, so redirecting the XREADGROUP traffic from Region 1 to Region 2 results in reading entries that have already been read. + +### Replication performance optimizations + +Consumers acknowledge messages using the XACK command. Each ack effectively records the last consumed message. This can result in a lot of cross-region traffic. To reduce this traffic, we replicate XACK messages only when all of the read entries are acknowledged. + +| Time | Region 1 | Region 2 | Explanation | +| ---- | --------------------------------------------------------------- | ------------ | --------------------------------------------------------------------------------------------------------------- | +| _t1_ | `XADD x 110-0 f1 v1` | | | +| _t2_ | `XADD x 120-0 f1 v1` | | | +| _t3_ | `XADD x 130-0 f1 v1` | | | +| _t4_ | `XGROUP CREATE x group1 0` | | | +| _t5_ | `XREADGROUP GROUP group1 Alice STREAMS x >`
**→ [110-0, 120-0, 130-0]** | | | +| _t6_ | `XACK x group1 110-0` | | | +| _t7_ | _— Sync —_ | _— Sync —_ | 110-0 and its preceding entries (none) were acknowledged. We replicate an XACK effect for 110-0. | +| _t8_ | `XACK x group1 130-0` | | | +| _t9_ | _— Sync —_ | _— Sync —_ | 130-0 was acknowledged, but not its preceding entries (120-0). We DO NOT replicate an XACK effect for 130-0 | +| _t10_ | `XACK x group1 120-0` | | | +| _t11_ | _— Sync —_ | _— Sync —_ | 120-0 and its preceding entries (110-0 through 130-0) were acknowledged. We replicate an XACK effect for 130-0. | + +In this scenario, if we redirect the XREADGROUP traffic from Region 1 to Region 2 we do not re-read entries 110-0, 120-0 and 130-0. +This means that the XREADGROUP does not return already-acknowledged entries. + +### Guarantees + +Unlike XREAD, XREADGOUP will never skip stream entries. +In traffic redirection, XREADGROUP may return entries that have been read but not acknowledged. It may also even return entries that have already been acknowledged. + +## Summary + +With Active-Active streams, you can write to the same logical stream from multiple regions. As a result, the behavior of Active-Active streams differs somewhat from the behavior you get with Redis Open Source. This is summarized below: + +### Stream commands + +1. When using the _strict_ ID generation mode, XADD does not permit full stream entry IDs (that is, an ID containing both MS and SEQ). +1. XREAD may skip entries when iterating a stream that is concurrently written to from more than one region. For reliable stream iteration, use XREADGROUP instead. +1. XSETID fails when the new ID is less than current ID. + +### Consumer group notes + +The following consumer group operations are replicated: + +1. Consecutive XACK operations +1. Consumer group creation and deletion (that is, XGROUP CREATE and XGROUP DESTROY) + +All other consumer group metadata is not replicated. + +A few other notes: + +1. XGROUP SETID and DELCONSUMER are not replicated. +1. Consumers exist locally (XREADGROUP creates a consumer implicitly). +1. Renaming a stream (using RENAME) deletes all consumer group information. +--- +Title: Lists in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using list with an Active-Active database. +linkTitle: Lists +weight: $weight +--- +Redis lists are simply lists of strings, sorted by insertion order. It +is possible to add elements to a Redis List that push new elements to +the head (on the left) or to the tail (on the right) of the list. Redis +lists can be used to easily implement queues (using LPUSH and RPOP, for +example) and stacks (using LPUSH and LPOP, for +example). + +Lists in Active-Active databases are just the same as regular Redis Lists. See the +following examples to get familiar with Lists' behavior in an +Active-Active database. + +Simple Lists +example: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | LPUSH mylist “hello” | | +| t2 | — Sync — | — Sync — | +| t3 | | LPUSH mylist “world” | +| t4 | — Sync — | — Sync — | +| t5 | LRANGE mylist 0 -1 =>“world” “hello” | LRANGE mylist 0 -1 => “world” “hello” | + +**Explanation**: +The final list contains both the "world" and "hello" elements, in that +order (Instance 2 observed "hello" when it added +"world"). 
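From the application's point of view this is ordinary list usage; the merge happens transparently in the background. A minimal sketch of the scenario above, assuming the `redis-py` client and two hypothetical instance endpoints (the `sleep` calls only give the asynchronous sync time to run):

```python
import time

import redis

# Hypothetical endpoints of two instances of the same Active-Active database.
instance1 = redis.Redis(host="redis-east.example.com", port=12000, decode_responses=True)
instance2 = redis.Redis(host="redis-west.example.com", port=12000, decode_responses=True)

instance1.lpush("mylist", "hello")
time.sleep(1)  # Let the sync propagate "hello" to the other instance.

instance2.lpush("mylist", "world")
time.sleep(1)  # Let the sync propagate "world" back.

# Both instances converge to the same order: ['world', 'hello'].
print(instance1.lrange("mylist", 0, -1))
print(instance2.lrange("mylist", 0, -1))
```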
+ +Example of Lists with Concurrent +Insertions: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | LPUSH L x | | +| t2 | — Sync — | — Sync — | +| t3 | LINSERT L AFTER x y1 | | +| t4 | | LINSERT L AFTER x y2 | +| t5 | LRANGE L 0 -1 => x y1 | LRANGE L 0 -1 => x y2 | +| t6 | — Sync — | — Sync — | +| t7 | LRANGE L 0 -1 => x y1 y2 | LRANGE L 0 -1 => x y1 y2 | + +**Explanation**: +Instance 1 added an element y1 after x, and then Instance 2 added element y2 after x. +The final List contains all three elements: x is the first element, after it y1 and then y2. +The Active-Active database resolves the conflict arbitrarily but applies the resolution consistently across all Active-Active database instances. + +Example of Deleting a List while Pushing a New +Element: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | LPUSH L x | | +| t2 | — Sync — | — Sync — | +| t3 | LRANGE L 0 -1 => x | LRANGE L 0 -1 => x | +| t4 | LPUSH L y | DEL L | +| t5 | — Sync — | — Sync — | +| t6 | LRANGE L 0 -1 => y | LRANGE L 0 -1 => y | + +**Explanation** +At t4 - t6, DEL deletes only observed elements. This is why L still +contains y. + +Example of Popping Elements from a +List: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | LPUSH L x y z | | +| t2 | — Sync — | — Sync — | +| t3 | | RPOP L => x | +| t4 | — Sync — | — Sync — | +| t5 | RPOP L => y | | +| t6 | — Sync — | — Sync — | +| t7 | RPOP L => z | RPOP L => z | + +**Explanation**: +At t1, the operation pushes elements x, y, z to List L. At t3, the +sequential pops behave as expected from a queue. At t7, the concurrent +pop in both instances might show the same result. The instance was not +able to sync regarding the z removal so, from the point of view of each +instance, z is located in the List and can be popped. After syncing, +both lists are empty. + +Be aware of the behavior of Lists in Active-Active databases when using List as a stack +or queue. As seen in the above example, two parallel RPOP operations +performed by two different Active-Active database instances can get the same element in +the case of a concurrent operation. Lists in Active-Active databases guarantee that each +element is POP-ed at least once, but cannot guarantee that each +element is POP-ed only once. Such behavior should be taken into +account when, for example, using Lists in Active-Active databases as building blocks for +inter-process communication systems. + +In that case, if the same element cannot be handled twice by the +applications, it's recommended that the POP operations be performed by +one Active-Active database instance, whereas the PUSH operations can be performed by +multiple instances. +--- +Title: HyperLogLog in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using hyperloglog with an Active-Active database. +linkTitle: HyperLogLog +weight: $weight +--- +**HyperLogLog** is an algorithm that addresses the [count-distinct problem](https://en.wikipedia.org/wiki/Count-distinct_problem). +To do this it approximates the numbers of items in a [set](https://en.wikipedia.org/wiki/Multiset). +Determining the _exact_ cardinality of a set requires memory according to the cardinality of the set. +Because it estimates the cardinality by probability, the HyperLogLog algorithm can run with more reasonable memory requirements. 
+ +## HyperLogLog in Redis + +Redis Open source implements [HyperLogLog](https://redislabs.com/redis-best-practices/counting/hyperloglog/) (HLL) as a native data structure. +It supports adding elements ([PFADD]({{< relref "/commands/pfadd" >}}) to an HLL, counting elements ([PFCOUNT]({{< relref "/commands/pfcount" >}}) of HLLs, and merging of ([PFMERGE]({{< relref "/commands/pfmerge" >}}) HLLs. + +Here is an example of a simple write case: + +| Time | Replica 1 | Replica 2 | +| ---- | ----------------- | ----------------- | +| t1 | PFADD hll x | | +| t2 | --- sync --- | | +| t3 | | PFADD hll y | +| t4 | --- sync --- | | +| t5 | PFCOUNT hll --> 2 | PFCOUNT hll --> 2 | + +Here is an example of a concurrent add case: + +| Time | Replica 1 | Replica 2 | +| ---- | ----------------- | ----------------- | +| t1 | PFADD hll x | PFADD hll y | +| t2 | PFCOUNT hll --> 1 | PFCOUNT hll --> 1 | +| t3 | --- sync --- | | +| t4 | PFCOUNT hll --> 2 | PFCOUNT hll --> 2 | + +## The DEL-wins approach + +Other collections in the Redis-CRDT implementation use the observed remove method to resolve conflicts. +The CRDT-HLL uses the DEL-wins method. +If a DEL request is received at the same time as any other request (ADD/MERGE/EXPIRE) on the HLL-key +the replicas consistently converge to delete key. +In the observed remove method used by other collections (sets, lists, sorted-sets and hashes), +only the replica that received the DEL request removes the elements, but elements added concurrently in other replicas exist in the consistently converged collection. +We chose to use the DEL-wins method for the CRDT-HLL to maintain the original time and space complexity of the HLL in Redis Open source. + +Here is an example of a DEL-wins case: + +| HLL | | | \| | Set | | | +| ---- | --------------- | --------------- | --- | ---- | ------------------- | ------------------- | +| | | | \| | | | | +| Time | Replica 1 | Replica 2 | \| | Time | Replica 1 | Replica 2 | +| | | | \| | | | | +| t1 | PFADD h e1 | | \| | t1 | SADD s e1 | | +| t2 | --- sync --- | | \| | t2 | --- sync --- | | +| t3 | PFCOUNT h --> 1 | PFCOUNT h --> 1 | \| | t3 | SCARD s --> 1 | SCARD s --> 1 | +| t4 | PFADD h e2 | Del h | \| | t4 | SADD s e2 | Del S | +| t5 | PFCOUNT h --> 2 | PFCOUNT h --> 0 | \| | t5 | SCARD s --> 2 | SCARD s --> 0 | +| t6 | --- sync --- | | \| | t6 | --- sync --- | | +| t7 | PFCOUNT h --> 0 | PFCOUNT h --> 0 | \| | t7 | SCARD s --> 1 | SCARD s --> 1 | +| t8 | Exists h --> 0 | Exists h --> 0 | \| | t8 | Exists s --> 1 | Exists s --> 1 | +| | | | \| | t9 | SMEMBERS s --> {e2} | SMEMBERS s --> {e2} | + +## HLL in Active-Active databases versus HLL in Redis Open source + +In Active-Active databases, we implemented HLL within the CRDT on the basis of the Redis implementation with a few exceptions: + +- Redis keeps the HLL data structure as an encoded string object + such that you can potentially run any string request can on a key that contains an HLL. In CRDT, only get and set are supported for HLL. +- In CRDT, if you do SET on a key that contains a value encoded as an HLL, then the value will remain an HLL. If the value is not encoded as HLL, then it will be a register. +--- +Title: Data types for Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Introduction to differences in data types between standalone and Active-Active + Redis databases. +hideListLinks: true +linktitle: Data types +weight: 90 +--- + + +Active-Active databases use conflict-free replicated data types (CRDTs). 
From a developer perspective, most supported data types work the same for Active-Active and standard Redis databases. However, a few methods also come with specific requirements in Active-Active databases. + +Even though they look identical to standard Redis data types, there are specific rules that govern the handling of +conflicting concurrent writes for each data type. + +As conflict handling rules differ between data types, some commands have slightly different requirements in Active-Active databases versus standard Redis databases. + +See the following articles for more information + +--- +Title: Active-Active Redis applications +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: General information to keep in mind while developing applications for + an Active-Active database. +hideListLinks: true +linktitle: Develop applications +weight: 99 +--- +Developing globally distributed applications can be challenging, as +developers have to think about race conditions and complex combinations +of events under geo-failovers and cross-region write conflicts. In Redis Enterprise Software (RS), Active-Active databases +simplify developing such applications by directly using built-in smarts +for handling conflicting writes based on the data type in use. Instead +of depending on just simplistic "last-writer-wins" type conflict +resolution, geo-distributed Active-Active databases (formerly known as CRDBs) combines techniques defined in CRDT +(conflict-free replicated data types) research with Redis types to +provide smart and automatic conflict resolution based on the data types +intent. + +An Active-Active database is a globally distributed database that spans multiple Redis +Enterprise Software clusters. Each Active-Active database can have many Active-Active database instances +that come with added smarts for handling globally distributed writes +using the proven +[CRDT](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type) +approach. +[CRDT](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type) +research describes a set of techniques for creating systems that can +handle conflicting writes. CRDBs are powered by Multi-Master Replication +(MMR) provides a straightforward and effective way to replicate your +data between regions and simplify development of complex applications +that can maintain correctness under geo-failovers and concurrent +cross-region writes to the same data. + +{{< image filename="/images/rs/crdbs.png" alt="Geo-replication world map">}} + +Active-Active databases replicate data between multiple Redis Enterprise Software +clusters. Common uses for Active-Active databases include disaster recovery, +geographically redundant applications, and keeping data closer to your +user's locations. MMR is always multi-directional amongst the clusters +configured in the Active-Active database. For unidirectional replication, please see the +Replica Of capabilities in Redis Enterprise Software. + +## Example of synchronization + +In the example below, database writes are concurrent at the point in +times t1 and t2 and happen before a sync can communicate the changes. +However, writes at times t4 and t6 are not concurrent as a sync happened +in between. 
+ +| **Time** | **CRDB Instance1** | **CRDB Instance2** | +| ------: | :------: | :------: | +| t1 | SET key1 “a” | | +| t2 | | SET key1 “b” | +| t3 | — Sync — | — Sync — | +| t4 | SET key1 “c” | | +| t5 | — Sync — | — Sync — | +| t6 | | SET key1 “d” | + +[Learn more about +synchronization]({{< relref "/operate/rs/databases/active-active" >}}) for +each supported data type and [how to develop]({{< relref "/operate/rs/databases/active-active/develop/develop-for-aa.md" >}}) with them on Redis Enterprise Software. +--- +Title: Configure distributed synchronization +alwaysopen: false +categories: +- docs +- operate +- rs +description: How to configure distributed synchronization so that any available proxy + endpoint can manage synchronization traffic. +linktitle: Distributed synchronization +weight: 80 +--- +Replicated databases, such as [Replica Of]({{< relref "/operate/rs/databases/import-export/replica-of/" >}}) and [Active-Active]({{< relref "/operate/rs/databases/active-active" >}}) databases, +use proxy endpoints to synchronize database changes with the databases on other participating clusters. + +To improve the throughput and lower the latency for synchronization traffic, +you can configure a replicated database to use distributed synchronization where any available proxy endpoint can manage synchronization traffic. + +Every database by default has one proxy endpoint that manages client and synchronization communication with the database shards, +and that proxy endpoint is used for database synchronization. +This is called centralized synchronization. + +To prepare a database to use distributed synchronization you must first make sure that the database [proxy policy]({{< relref "/operate/rs/databases/configure/proxy-policy.md" >}}) +is defined so that either each node has a proxy endpoint or each primary (master) shard has a proxy endpoint. +After you have multiple proxies for the database, +you can configure the database synchronization to use distributed synchronization. + +## Configure distributed synchronization + +{{< note >}} +You may use the database name in place of `db:` in the following `rladmin` commands. +{{< /note >}} + +To configure distributed synchronization: + +1. To check the proxy policy for the database, run: `rladmin status` + + The output of the status command shows the list of endpoints on the cluster and the proxy policy for the endpoint. + + ```sh + ENDPOINTS: + DB:ID NAME ID NODE ROLE SSL + db:1 db endpoint:1:1 node:1 all-master-shards No + ``` + + If the proxy policy (also known as a _role_) is `single`, configure the policy to `all-nodes` or `all-master-shards` according to your needs with the command: + + ```sh + rladmin bind db db: endpoint policy + ``` + +1. To configure the database to use distributed synchronization, run: + + ```sh + rladmin tune db db: syncer_mode distributed + ``` + + To change back to centralized synchronization, run: + + ```sh + rladmin tune db db: syncer_mode centralized + ``` + +## Verify database synchronization + +Use `rladmin` to verify a database synchronization role: + +```sh +rladmin info db db: +``` + +The current database role is reported as the `syncer_mode` value: + +```sh +$ rladmin info db db: +db: []: + // (Other settings removed) + syncer_mode: centralized +``` +--- +Title: Manage Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage your Active-Active database settings. 
+linktitle: Manage +weight: 30 +--- + +You can configure and manage your Active-Active database from either the Cluster Manager UI or the command line. + +To change the global configuration of the Active-Active database, use [`crdb-cli`]({{< relref "/operate/rs/references/cli-utilities/crdb-cli" >}}). + +If you need to apply changes locally to one database instance, you use the Cluster Manager UI or [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}). + +## Database settings + +Many Active-Active database settings can be changed after database creation. One notable exception is database clustering. Database clustering can't be turned on or off after the database has been created. + +## Participating clusters + +You can add and remove participating clusters of an Active-Active database to change the topology. +To manage the changes to Active-Active topology, use [`crdb-cli`]({{< relref "/operate/rs/references/cli-utilities/crdb-cli/" >}}) or the participating clusters list in the Cluster Manager UI. + +### Add participating clusters + +All existing participating clusters must be online and in a syncing state when you add new participating clusters. + +New participating clusters create the Active-Active database instance based on the global Active-Active database configuration. +After you add new participating clusters to an existing Active-Active database, +the new database instance can accept connections and read operations. +The new instance does not accept write operations until it is in the syncing state. + +{{}} +If an Active-Active database [runs on flash memory]({{}}), you cannot add participating clusters that run on RAM only. +{{}} + +To add a new participating cluster to an existing Active-Active configuration using the Cluster Manager UI: + +1. Select the Active-Active database from the **Databases** list and go to its **Configuration** screen. + +1. Click **Edit**. + +1. In the **Participating clusters** section, go to **Other participating clusters** and click **+ Add cluster**. + +1. In the **Add cluster** configuration panel, enter the new cluster's URL, port number, and the admin username and password for the new participating cluster: + + {{Add cluster panel.}} + +1. Click **Join cluster** to add the cluster to the list of participating clusters. + +1. Click **Save**. + + +### Remove participating clusters + +All existing participating clusters must be online and in a syncing state when you remove an online participating cluster. +If you must remove offline participating clusters, you can forcefully remove them. +If a forcefully removed participating cluster tries to rejoin the cluster, +its Active-Active database membership will be out of date. +The joined participating clusters reject updates sent from the removed participating cluster. +To prevent rejoin attempts, purge the forcefully removed instance from the participating cluster. + +To remove a participating cluster using the Cluster Manager UI: + +1. Select the Active-Active database from the **Databases** list and go to its **Configuration** screen. + +1. Click **Edit**. + +1. In the **Participating clusters** section, point to the cluster you want to delete in the **Other participating clusters** list: + + {{Edit and delete buttons appear when you point to an entry in the Other participating clusters list.}} + +1. Click {{< image filename="/images/rs/buttons/delete-button.png#no-click" alt="The Delete button" width="25px" class="inline" >}} to remove the cluster. + +1. Click **Save**. 
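You can also script these topology changes with `crdb-cli` instead of the Cluster Manager UI. Treat the following as a sketch only: the option names are assumptions to verify against `crdb-cli crdb remove-instance --help` and the `crdb-cli` reference for your Redis Enterprise Software version.

```sh
# List Active-Active databases and find the GUID of the one to change.
crdb-cli crdb list

# Remove the instance hosted on a participating cluster (all clusters online and syncing).
crdb-cli crdb remove-instance --crdb-guid <guid> --instance-id <instance-id>

# Forcefully remove an unreachable instance, then purge it so it cannot rejoin.
crdb-cli crdb remove-instance --crdb-guid <guid> --instance-id <instance-id> --force
crdb-cli crdb purge-instance --crdb-guid <guid> --instance-id <instance-id>
```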
+ +## Replication backlog + +Redis databases that use [replication for high availability]({{< relref "/operate/rs/databases/durability-ha/replication.md" >}}) maintain a replication backlog (per shard) to synchronize the primary and replica shards of a database. In addition to the database replication backlog, Active-Active databases maintain a backlog (per shard) to synchronize the database instances between clusters. + +By default, both the database and Active-Active replication backlogs are set to one percent (1%) of the database size divided by the number of shards. This can range between 1MB to 250MB per shard for each backlog. + +### Change the replication backlog size + +Use the [`crdb-cli`]({{< relref "/operate/rs/references/cli-utilities/crdb-cli" >}}) utility to control the size of the replication backlogs. You can set it to `auto` or set a specific size. + +Update the database replication backlog configuration with the `crdb-cli` command shown below. + +```text +crdb-cli crdb update --crdb-guid --default-db-config "{\"repl_backlog_size\": }" +``` + +Update the Active-Active (CRDT) replication backlog with the command shown below: + +```text +crdb-cli crdb update --crdb-guid --default-db-config "{\"crdt_repl_backlog_size\": }" +``` + +## Data persistence + +Active-Active supports AOF (Append-Only File) data persistence only. Snapshot persistence is _not_ supported for Active-Active databases and should not be used. + +If an Active-Active database is currently using snapshot data persistence, use `crdb-cli` to switch to AOF persistence: +```text + crdb-cli crdb update --crdb-guid --default-db-config '{"data_persistence": "aof", "aof_policy":"appendfsync-every-sec"}' +``` + + +--- +Title: Enable causal consistency +alwaysopen: false +categories: +- docs +- operate +- rs +description: Enable causal consistency in an Active-Active database. +linkTitle: Causal consistency +weight: 70 +--- +When you enable causal consistency in Active-Active databases, +the order of operations on a specific key are maintained across all Active-Active database instances. + +For example, if operations A and B were applied on the same key and the effect of A was observed by the instance that initiated B before B was applied to the key. +All instances of an Active-Active database would then observe the effect of A before observing the effect of B. +This way, any causal relationship between operations on the same key is also observed and maintained by every replica. + +### Enable causal consistency + +When you create an Active-Active database, you can enable causal consistency in the Cluster Manager UI: + +1. In the **Participating clusters** section of the **Create Active-Active database** screen, locate **Causal Consistency**: + + {{The Participating clusters section of the Create Active-Active database screen.}} + +1. Click **Change** to open the **Causal Consistency** dialog. + +1. Select **Enabled**: + + {{Enabled is selected in the Causal Consistency dialog.}} + +1. Click **Change** to confirm your selection. + +After database creation, you can only turn causal consistency on or off using the REST API or `crdb-cli`. +The updated setting only affects commands and operations received after the change. + +### Causal consistency side effects + +When the causal consistency option is enabled, each instance maintains the order of operations it received from another instance +and relays that information to all other N-2 instances, +where N represents the number of instances used by the Active-Active database. 
+ +As a result, network traffic is increased by a factor of (N-2). +The memory consumed by each instance and overall performance are also impacted when causal consistency is activated. + +--- +Title: Connect to your Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: How to connect to an Active-Active database using redis-cli or a sample + Python application. +linkTitle: Connect +weight: 26 +--- + +With the Redis database created, you are ready to connect to your +database to store data. You can use one of the following ways to test +connectivity to your database: + +- Connect with redis-cli, the built-in command-line tool +- Connect with a _Hello World_ application written in Python + +Remember we have two member Active-Active databases that are available for connections and +concurrent reads and writes. The member Active-Active databases are using bi-directional +replication to for the global Active-Active database. + +{{< image filename="/images/rs/crdb-diagram.png" >}} + +### Connecting using redis-cli {#connecting-using-rediscli} + +redis-cli is a simple command-line tool to interact with redis database. + +1. To use redis-cli on port 12000 from the node 1 terminal, run: + + ```sh + redis-cli -p 12000 + ``` + +1. Store and retrieve a key in the database to test the connection with these + commands: + + - `set key1 123` + - `get key1` + + The output of the command looks like this: + + ```sh + 127.0.0.1:12000> set key1 123 + OK + 127.0.0.1:12000> get key1 + "123" + ``` + +1. Enter the terminal of node 1 in cluster 2, run the redis-cli, and + retrieve key1. + + The output of the commands looks like this: + + ```sh + $ redis-cli -p 12000 + 127.0.0.1:12000> get key1 + "123" + ``` + +### Connecting using _Hello World_ application in Python + +A simple python application running on the host machine can also connect +to the database. + +{{< note >}} +Before you continue, you must have python and +[redis-py](https://github.com/andymccurdy/redis-py#installation) +(python library for connecting to Redis) configured on the host machine +running the container. +{{< /note >}} + +1. In the command-line terminal, create a new file called "redis_test.py" + + ```sh + vi redis_test.py + ``` + +1. Paste this code into the "redis_test.py" file. + + This application stores a value in key1 in cluster 1, gets that value from + key1 in cluster 1, and gets the value from key1 in cluster 2. + + ```py + import redis + rp1 = redis.StrictRedis(host='localhost', port=12000, db=0) + rp2 = redis.StrictRedis(host='localhost', port=12002, db=0) + print ("set key1 123 in cluster 1") + print (rp1.set('key1', '123')) + print ("get key1 cluster 1") + print (rp1.get('key1')) + print ("get key1 from cluster 2") + print (rp2.get('key1')) + ``` + +1. To run the "redis_test.py" application, run: + + ```sh + python redis_test.py + ``` + + If the connection is successful, the output of the application looks like: + + ```sh + set key1 123 in cluster 1 + True + get key1 cluster 1 + "123" + get key1 from cluster 2 + "123" + ```--- +Title: Create an Active-Active geo-replicated database +alwaysopen: false +categories: +- docs +- operate +- rs +description: How to create an Active-Active database and things to consider when setting + it up. +linkTitle: Create +weight: 25 +--- +[Active-Active geo-replicated databases]({{< relref "/operate/rs/databases/active-active" >}}) (formerly known as CRDBs) give applications write access +to replicas of the dataset in different geographical locations. 
+ +The participating Redis Enterprise Software clusters that host the instances can be distributed in different geographic locations. +Every instance of an Active-Active database can receive write operations, and all operations are [synchronized]({{< relref "/operate/rs/databases/active-active/develop#example-of-synchronization" >}}) to all instances without conflict. + +## Steps to create an Active-Active database + +1. **Create a service account** - On each participating cluster, create a dedicated user account with the Admin role. +1. **Confirm connectivity** - Confirm network connectivity between the participating clusters. +1. **Create Active-Active database** - Connect to one of your clusters and create a new Active-Active database. +1. **Add participating clusters** - Add the participating clusters to the Active-Active database with the user credentials for the service account. +1. **Verify creation** - Log in to each of the participating clusters and verify your Active-Active database was created on them. +1. **Confirm Active-Active database synchronization** - Test writing to one cluster and reading from a different cluster. + +## Prerequisites + +- Two or more machines with the same version of Redis Enterprise Software installed +- Network connectivity and cluster FQDN name resolution between all participating clusters +- [Network time service]({{< relref "/operate/rs/databases/active-active#network-time-service-ntp-or-chrony" >}}) listener (ntpd) configured and running on each node in all clusters + +## Create an Active-Active database + +1. Create service accounts on each participating cluster: + + 1. In a browser, open the Cluster Manager UI for the participating cluster. + + The default address is: `https://:8443` + + 1. Go to the **Access Control > Users** tab: + + {{Add role with name}} + + 1. Click **+ Add user**. + + 1. Enter the username, email, and password for the user. + + 1. Select the **Admin** role. + + 1. Click **Save**. + +1. To verify network connectivity between participating clusters, + run the following `telnet` command from each participating cluster to all other participating clusters: + + ```sh + telnet 9443 + ``` + +1. In a browser, open the Cluster Manager UI of the cluster where you want to create the Active-Active database. + + The default address is: `https://:8443` + +1. Open the **Create database** menu with one of the following methods: + + - Click the **+** button next to **Databases** in the navigation menu: + + {{Create database menu has two options: Single Region and Active-Active database.}} + + - Go to the **Databases** screen and select **Create database**: + + {{Create database menu has two options: Single Region and Active-Active database.}} + +1. Select **Active-Active database**. + +1. Enter the cluster's local admin credentials, then click **Save**: + + {{Enter the cluster's admin username and password.}} + +1. Add participating clusters that will host instances of the Active-Active database: + + 1. In the **Participating clusters** section, go to **Other participating clusters** and click **+ Add cluster**. + + 1. In the **Add cluster** configuration panel, enter the new cluster's URL, port number, and the admin username and password for the new participating cluster: + + {{Add cluster panel.}} + + {{}} +If an Active-Active database [runs on flash memory]({{}}), you cannot add participating clusters that run on RAM only. + {{}} + + 1. Click **Join cluster** to add the cluster to the list of participating clusters. + +1. 
Enter a **Database name**. + +1. If your cluster supports [Auto Tiering]({{< relref "/operate/rs/databases/auto-tiering/" >}}), in **Runs on** you can select **Flash** so that your database uses Flash memory. We recommend that you use AOF every 1 sec for the best performance during the initial Active-Active database sync of a new replica. + +1. To configure additional database settings, expand each relevant section to make changes. + + See [Configuration settings](#configuration-settings) for more information about each setting. + +1. Click **Create**. + +## Configuration settings + +- **Database version** - The Redis version used by your database. + +- **Database name** - The database name requirements are: + + - Maximum of 63 characters + + - Only letters, numbers, or hyphens (-) are valid characters + + - Must start and end with a letter or digit + + - Case-sensitive + +- **Port** - You can define the port number that clients use to connect to the database. Otherwise, a port is randomly selected. + + {{< note >}} +You cannot change the [port number]({{< relref "/operate/rs/networking/port-configurations.md" >}}) +after the database is created. + {{< /note >}} + +- **Memory limit** - [Database memory limits]({{< relref "/operate/rs/databases/memory-performance/memory-limit.md" >}}) include all database replicas and shards, including replica shards in database replication and database shards in database clustering. + + If the total size of the database in the cluster reaches the memory limit, the data eviction policy for the database is enforced. + + {{< note >}} +If you create a database with Auto Tiering enabled, you also need to set the RAM-to-Flash ratio +for this database. Minimum RAM is 10%. Maximum RAM is 50%. + {{< /note >}} + +- **Memory eviction** - The default [eviction policy]({{}}) for Active-Active databases is `noeviction`. Redis Enterprise versions 6.0.20 and later support all eviction policies for Active-Active databases, unless [Auto Tiering]({{}}) is enabled. + +- [**Capabilities**]({{< relref "/operate/oss_and_stack/stack-with-enterprise" >}}) (previously **Modules**) - When you create a new in-memory database, you can enable multiple Redis Stack capabilities in the database. For Auto Tiering databases, you can enable capabilities that support Auto Tiering. See [Redis Enterprise and Redis Stack feature compatibility +]({{< relref "/operate/oss_and_stack/stack-with-enterprise/enterprise-capabilities" >}}) for compatibility details. + + {{}} +To use Redis Stack capabilities, enable them when you create a new database. +You cannot enable them after database creation. + {{}} + + To add capabilities to the database: + + 1. In the **Capabilities** section, select one or more capabilities. + + 1. To customize capabilities, select **Parameters** and enter the optional custom configuration. + + 1. Select **Done**. + +### TLS + +If you enable TLS when you create the Active-Active database, the nodes use the TLS mode **Require TLS for CRDB communication only** to require TLS authentication and encryption for communications between participating clusters. + +After you create the Active-Active database, you can set the TLS mode to **Require TLS for all communications** so client communication from applications are also authenticated and encryption. + +### High availability + +- [**Replication**]({{< relref "/operate/rs/databases/durability-ha/replication" >}}) - We recommend that all Active-Active database use replication for best intercluster synchronization performance. 
+ + When replication is enabled, every Active-Active database master shard is replicated to a corresponding replica shard. The replica shards are then used to synchronize data between the instances, and the master shards are dedicated to handling client requests. + +- [**Replica high availability**]({{< relref "/operate/rs/databases/configure/replica-ha" >}}) - We also recommend that you enable replica high availability to ensure replica shards are highly-available for this synchronization. + +### Clustering + +- In the [**Clustering**]({{}}) section, you can either: + + - **Enable sharding** and select the number of shards you want to have in the database. When database clustering is enabled, databases have limitations for [multi-key operations]({{}}). + + You can increase the number of shards in the database at any time. + + - Clear the **Enable sharding** option to use only one shard, which allows you to use [multi-key operations]({{}}) without the limitations. + + {{}} +You cannot enable or turn off database clustering after the Active-Active database is created. + {{}} + +- [**OSS Cluster API**]({{< relref "/operate/rs/databases/configure/oss-cluster-api.md" >}}) - The OSS Cluster API configuration allows access to multiple endpoints for increased throughput. The OSS Cluster API setting applies to all instances of the Active-Active database across participating clusters. + + This configuration requires clients to connect to the primary node to retrieve the cluster topology before they can connect directly to proxies on each node. + + When you enable the OSS Cluster API, shard placement changes to _Sparse_, and the database proxy policy changes to _All primary shards_ automatically. + +### Durability + +To protect against loss of data stored in RAM, you can enable [**Persistence**]({{}}) to store a copy of the data on disk. + +Active-Active databases support append-only file (AOF) persistence only. Snapshot persistence is not supported for Active-Active databases. + +### Access control + +- **Unauthenticated access** - You can access the database as the default user without providing credentials. + +- **Password-only authentication** - When you configure a password for your database's default user, all connections to the database must authenticate with the [AUTH command]({{< relref "/commands/auth" >}}. + + If you also configure an access control list, connections can specify other users for authentication, and requests are allowed according to the Redis ACLs specified for that user. + + Creating a database without ACLs enables a *default* user with full access to the database. You can secure default user access by requiring a password. + +- **Access Control List** - You can specify the [user roles]({{< relref "/operate/rs/security/access-control/create-db-roles" >}}) that have access to the database and the [Redis ACLs]({{< relref "/operate/rs/security/access-control/redis-acl-overview" >}}) that apply to those connections. + + You can only configure access control after the Active-Active database is created. In each participating cluster, add ACLs after database creation. + + To define an access control list for a database: + + 1. In **Security > Access Control > Access Control List**, select **+ Add ACL**. + + 1. Select a [role]({{< relref "/operate/rs/security/access-control/create-db-roles" >}}) to grant database access. + + 1. Associate a [Redis ACL]({{< relref "/operate/rs/security/access-control/create-db-roles" >}}) with the role and database. + + 1. 
Select the check mark to add the ACL. + +### Causal consistency + +[**Causal consistency**]({{< relref "/operate/rs/databases/active-active/causal-consistency" >}}) in an Active-Active database guarantees that the order of operations on a specific key is maintained across all instances of an Active-Active database. + +To enable causal consistency for an existing Active-Active database, use the REST API. + + +## Test Active-Active database connections + +With the Redis database created, you are ready to connect to your database. See [Connect to Active-Active databases]({{< relref "/operate/rs/databases/active-active/connect.md" >}}) for tutorials and examples of multiple connection methods. +--- +Title: Delete Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: Considerations while deleting Active-Active databases. +linktitle: Delete +weight: 35 +--- + +When you delete an Active-Active database (formerly known as CRDB), +all instances of the Active-Active database are deleted from all participating clusters. + +{{% warning %}} +This action is immediate, non-reversible, and has no rollback. +{{% /warning %}} + +Because Active-Active databases are made up of instances on multiple participating clusters, +to restore a deleted Active-Active database you must create the database again with all of its instances +and then restore the data to the database from backup. + +We recommended that you: + +- Back up your data and test the restore on another database before you delete an Active-Active database. +- Consider [flushing the data]({{< relref "/operate/rs/databases/import-export/flush.md" >}}) from the database + so that you can keep the Active-Active database configuration and restore the data to it if necessary. +--- +Title: Syncer process +alwaysopen: false +categories: +- docs +- operate +- rs +description: Detailed information about the syncer process and its role in distributed + databases. +linktitle: Syncer process +weight: 90 +--- + +## Syncer process + +Each node in a cluster containing an instance of an Active-Active database hosts a process called the syncer. +The syncer process: + +1. Connects to the proxy on another participating cluster +1. Reads data from that database instance +1. Writes the data to the local cluster's primary(master) shard + +Some replication capabilities are also included in [Redis Open Source]({{< relref "/operate/oss_and_stack/management/replication" >}}). + +The primary (also known as master) shard at the top of the primary-replica tree creates a replication ID. +This replication ID is identical for all replicas in that tree. +When a new primary is appointed, the replication ID changes, but a partial sync from the previous ID is still possible. + + +In a partial sync, the backlog of operations since the offset are transferred as raw operations. +In a full sync, the data from the primary is transferred to the replica as an RDB file which is followed by a partial sync. + +Partial synchronization requires a backlog large enough to store the data operations until connection is restored. See [replication backlog]({{< relref "/operate/rs/databases/active-active/manage#replication-backlog" >}}) for more info on changing the replication backlog size. 
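
If syncer connections between participating clusters are interrupted for long periods, a larger backlog reduces the chance that a partial sync falls back to a full sync. As a minimal sketch, following the `crdb-cli` command form shown in the manage section of this document (the GUID is a placeholder and the size is an example value; check the accepted value format for your version):

```sh
# Set the Active-Active (CRDT) replication backlog to roughly 100 MB per shard
# instead of the automatic 1% sizing. <crdb-guid> is a placeholder.
crdb-cli crdb update --crdb-guid <crdb-guid> \
  --default-db-config '{"crdt_repl_backlog_size": "104857600"}'

# Revert to automatic sizing.
crdb-cli crdb update --crdb-guid <crdb-guid> \
  --default-db-config '{"crdt_repl_backlog_size": "auto"}'
```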
+ +### Syncer in Active-Active replication + +In the case of an Active-Active database: + +- Multiple past replication IDs and offsets are stored to allow for multiple syncs +- The [Active-Active replication backlog]({{< relref "/operate/rs/databases/active-active/manage#replication-backlog" >}}) is also sent to the replica during a full sync. + +{{< warning >}} +Full sync triggers heavy data transfers between geo-replicated instances of an Active-Active database. +{{< /warning >}} + +An Active-Active database uses partial synchronization in the following situations: + +- Failover of primary shard to replica shard +- Restart or crash of replica shard that requires sync from primary +- Migrate replica shard to another node +- Migrate primary shard to another node as a replica using failover and replica migration +- Migrate primary shard and preserve roles using failover, replica migration, and second failover to return shard to primary + +{{< note >}} +Synchronization of data from the primary shard to the replica shard is always a full synchronization. +{{< /note >}}--- +Title: Get started with Redis Enterprise Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: Quick start guide to create an Active-Active database for test and development. +linktitle: Get started +weight: 20 +--- + +To get started, this article will help you set up an Active-Active database, formerly known as CRDB (conflict-free replicated database), spanning across two Redis Enterprise Software +clusters for test and development environments. Here are the steps: + +1. Run two Redis Enterprise Software Docker containers. + +1. Set up each container as a cluster. + +1. Create a new Redis Enterprise Active-Active database. + +1. Test connectivity to the Active-Active database. + +To run an Active-Active database on installations from the [Redis Enterprise Software download package]({{< relref "/operate/rs/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}), +set up two Redis Enterprise Software installations and continue from Step 2. + +{{}} +This getting started guide is for development or demonstration environments. +For production environments, see [Create an Active-Active geo-replicated database]({{< relref "/operate/rs/databases/active-active/create" >}}) for instructions. +{{}} + +## Run two containers + +To spin up two Redis Enterprise Software containers, run these commands: + +```sh +docker run -d --cap-add sys_resource -h rs1_node1 --name rs1_node1 -p 8443:8443 -p 9443:9443 -p 12000:12000 redislabs/redis +``` + +```sh +docker run -d --cap-add sys_resource -h rs2_node1 --name rs2_node1 -p 8445:8443 -p 9445:9443 -p 12002:12000 redislabs/redis +``` + +The **-p** options map the Cluster Manager UI port (8443), REST API port (9443), and +database access port differently for each container to make sure that all +containers can be accessed from the host OS that is running the containers. + +## Set up two clusters + +1. For cluster 1, go to `https://localhost:8443` in a browser on the +host machine to access the Redis Enterprise Software Cluster Manager UI. + + {{}} +Depending on your browser, you may see a certificate error. Continue to the website. + {{}} + +1. Click **Create new cluster**: + + {{When you first install Redis Enterprise Software, you need to set up a cluster.}} + +1. Enter an email and password for the administrator account, then click **Next** to proceed to cluster setup: + + {{Set the credentials for your admin user.}} + +1. 
Enter your cluster license key if you have one. Otherwise, a trial version is installed. + + {{Enter your cluster license key if you have one.}} + +1. In the **Configuration** section of the **Cluster** settings page, enter a cluster FQDN, for example `cluster1.local`: + + {{Configure the cluster FQDN.}} + +1. On the node setup screen, keep the default settings and click **Create cluster**: + + {{Configure the node specific settings.}} + +1. Click **OK** to confirm that you are aware of the replacement of the HTTPS SSL/TLS + certificate on the node, and proceed through the browser warning. + +1. Repeat the previous steps for cluster 2 with these differences: + + - In your web browser, go to `https://localhost:8445` to set up the cluster 2. + + - For the **Cluster name (FQDN)**, enter a different name, such as `cluster2.local`. + +Now you have two Redis Enterprise Software clusters with FQDNs +`cluster1.local` and `cluster2.local`. + +{{}} +Each Active-Active instance must have a unique fully-qualified domain name (FQDN). +{{}} + +## Create an Active-Active database + +1. Sign in to cluster1.local's Cluster Manager UI at `https://localhost:8443` + +1. Open the **Create database** menu with one of the following methods: + + - Click the **+** button next to **Databases** in the navigation menu: + + {{Create database menu has two options: Single Region and Active-Active database.}} + + - Go to the **Databases** screen and select **Create database**: + + {{Create database menu has two options: Single Region and Active-Active database.}} + +1. Select **Active-Active database**. + +1. Enter the cluster's local admin credentials, then click **Save**: + + {{Enter the cluster's admin username and password.}} + +1. Add participating clusters that will host instances of the Active-Active database: + + 1. In the **Participating clusters** section, go to **Other participating clusters** and click **+ Add cluster**. + + 1. In the **Add cluster** configuration panel, enter the new cluster's URL, port number, and the admin username and password for the new participating cluster: + + In the **Other participating clusters** list, add the address and admin credentials for the other cluster: `https://cluster2.local:9443` + + {{Add cluster panel.}} + + 1. Click **Join cluster** to add the cluster to the list of participating clusters. + +1. Enter `database1` for **Database name** and `12000` for **Port**: + + {{Database name and port text boxes.}} + +1. Configure additional settings: + + 1. In the **High availability** section, turn off **Replication** since each cluster has only one node in this setup: + + {{Turn off replication in the High availability section.}} + + + 1. In the **Clustering** section, either: + + - Make sure that **Sharding** is enabled and select the number of shards you want to have in the database. When database clustering is enabled, + databases are subject to limitations on [Multi-key commands]({{< relref "/operate/rs/databases/durability-ha/clustering" >}}). + You can increase the number of shards in the database at any time. + + - Turn off **Sharding** to use only one shard and avoid [Multi-key command]({{< relref "/operate/rs/databases/durability-ha/clustering" >}}) limitations. + + {{< note >}} +You cannot enable or turn off database clustering after the Active-Active database is created. + {{< /note >}} + +1. Click **Create**. + + {{< note >}} +{{< embed-md "docker-memory-limitation.md" >}} + {{< /note >}} + +1. 
After the Active-Active database is created, sign in to the Cluster Manager UIs for cluster 1 at `https://localhost:8443` and cluster 2 at `https://localhost:8445`. + +1. Make sure each cluster has an Active-Active database member database with the name `database1`. + + In a real-world deployment, cluster 1 and cluster 2 would most likely be + in separate data centers in different regions. However, for + local testing we created the scale-minimized deployment using two + local clusters running on the same host. + + +## Test connection + +With the Redis database created, you are ready to connect to your +database. See [Connect to Active-Active databases]({{< relref "/operate/rs/databases/active-active/connect" >}}) for tutorials and examples of multiple connection methods. +--- +Title: Active-Active geo-distributed Redis +alwaysopen: false +categories: +- docs +- operate +- rs +- kubernetes +description: Overview of the Active-Active database in Redis Enterprise Software +hideListLinks: true +linktitle: Active-Active +weight: 40 +--- +In Redis Enterprise, Active-Active geo-distribution is based on [CRDT technology](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type). +The Redis Enterprise implementation of CRDT is called an Active-Active database (formerly known as CRDB). +With Active-Active databases, applications can read and write to the same data set from different geographical locations seamlessly and with latency less than one millisecond (ms), +without changing the way the application connects to the database. + +Active-Active databases also provide disaster recovery and accelerated data read-access for geographically distributed users. + + +## High availability + +The [high availability]({{< relref "/operate/rs/databases/durability-ha/" >}}) that Active-Active replication provides is built upon a number of Redis Enterprise Software features (such as [clustering]({{< relref "/operate/rs/databases/durability-ha/clustering.md" >}}), [replication]({{< relref "/operate/rs/databases/durability-ha/replication.md" >}}), and [replica HA]({{< relref "/operate/rs/databases/configure/replica-ha.md" >}})) as well as some features unique to Active-Active ([multi-primary replication]({{}}), [automatic conflict resolution]({{}}), and [strong eventual consistency]({{}})). + +Clustering and replication are used together in Active-Active databases to distribute multiple copies of the dataset across multiple nodes and multiple clusters. As a result, a node or cluster is less likely to become a single point of failure. If a primary node or primary shard fails, a replica is automatically promoted to primary. To avoid having one node hold all copies of certain data, the [replica HA]({{< relref "/operate/rs/databases/configure/replica-ha.md" >}}) feature (enabled by default) automatically migrates replica shards to available nodes. + +## Multi-primary replication + +In Redis Enterprise Software, replication copies data from primary shards to replica shards. Active-Active geo-distributed replication also copies both primary and replica shards to other clusters. Each Active-Active database needs to span at least two clusters; these are called participating clusters. + +Each participating cluster hosts an instance of your database, and each instance has its own primary node. Having multiple primary nodes means you can connect to the proxy in any of your participating clusters. Connecting to the closest cluster geographically enables near-local latency. 
Multi-primary replication (previously referred to as multi-master replication) also means that your users still have access to the database if one of the participating clusters fails. + +{{< note >}} +Active-Active databases do not replicate the entire database, only the data. +Database configurations, LUA scripts, and other support info are not replicated. +{{< /note >}} + +## Syncer + +Keeping multiple copies of the dataset consistent across multiple clusters is no small task. To achieve consistency between participating clusters, Redis Active-Active replication uses a process called the [syncer]({{< relref "/operate/rs/databases/active-active/syncer" >}}). + +The syncer keeps a [replication backlog]({{< relref "/operate/rs/databases/active-active/manage#replication-backlog/" >}}), which stores changes to the dataset that the syncer sends to other participating clusters. The syncer uses partial syncs to keep replicas up to date with changes, or a full sync in the event a replica or primary is lost. + +## Conflict resolution + +Because you can connect to any participating cluster to perform a write operation, concurrent and conflicting writes are always possible. Conflict resolution is an important part of the Active-Active technology. Active-Active databases only use [conflict-free replicated data types (CRDTs)](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type). These data types provide a predictable conflict resolution and don't require any additional work from the application or client side. + +When developing with CRDTs for Active-Active databases, you need to consider some important differences. See [Develop applications with Active-Active databases]({{< relref "/operate/rs/databases/active-active/develop/_index.md" >}}) for related information. + + +## Strong eventual consistency + +Maintaining strong consistency for replicated databases comes with tradeoffs in scalability and availability. Redis Active-Active databases use a strong eventual consistency model, which means that local values may differ across replicas for short periods of time, but they all eventually converge to one consistent state. Redis uses vector clocks and the CRDT conflict resolution to strengthen consistency between replicas. You can also enable the causal consistency feature to preserve the order of operations as they are synchronized among replicas. + +Other Redis Enterprise Software features can also be used to enhance the performance, scalability, or durability of your Active-Active database. These include [data persistence]({{< relref "/operate/rs/databases/configure/database-persistence.md" >}}), [multiple active proxies]({{< relref "/operate/rs/databases/configure/proxy-policy.md" >}}), [distributed synchronization]({{< relref "/operate/rs/databases/active-active/synchronization-mode.md" >}}), [OSS Cluster API]({{< relref "/operate/rs/databases/configure/oss-cluster-api.md" >}}), and [rack-zone awareness]({{< relref "/operate/rs/clusters/configure/rack-zone-awareness.md" >}}). 
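
To see strong eventual consistency in practice, you can write to the same key from two instances and watch the values converge after synchronization. The sketch below assumes the two-cluster local setup and example ports (12000 and 12002) used elsewhere in this document; with a CRDT counter, concurrent increments are merged by summing rather than by one write overwriting the other:

```sh
# Increment the same counter concurrently on two instances of an
# Active-Active database.
redis-cli -p 12000 INCRBY page:hits 10
redis-cli -p 12002 INCRBY page:hits 5

# After the instances synchronize, both converge to the combined total.
redis-cli -p 12000 GET page:hits   # "15"
redis-cli -p 12002 GET page:hits   # "15"
```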
+ +## Next steps + +- [Plan your Active-Active deployment]({{< relref "/operate/rs/databases/active-active/planning.md" >}}) +- [Get started with Active-Active]({{< relref "/operate/rs/databases/active-active/get-started.md" >}}) +- [Create an Active-Active database]({{< relref "/operate/rs/databases/active-active/create.md" >}})--- +Title: Configure high availability for replica shards +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure high availability for replica shards so that the cluster automatically + migrates the replica shards to an available node. +linkTitle: Replica high availability +weight: 50 +--- + +When you enable [database replication]({{< relref "/operate/rs/databases/durability-ha/replication.md" >}}), +Redis Enterprise Software creates a replica of each primary shard. The replica shard will always be +located on a different node than the primary shard to make your data highly available. If the primary shard +fails or if the node hosting the primary shard fails, then the replica is promoted to primary. + +Without replica high availability (_replica\_ha_) enabled, the promoted primary shard becomes a single point of failure +as the only copy of the data. + +Enabling _replica\_ha_ configures the cluster to automatically replicate the promoted replica on an available node. +This automatically returns the database to a state where there are two copies of the data: +the former replica shard which has been promoted to primary and a new replica shard. + +An available node: + +1. Meets replica migration requirements, such as [rack-awareness]({{< relref "/operate/rs/clusters/configure/rack-zone-awareness.md" >}}). +1. Has enough available RAM to store the replica shard. +1. Does not also contain the primary shard. + +In practice, replica migration creates a new replica shard and copies the data from the primary shard to the new replica shard. + +For example: + +1. Node:2 has a primary shard and node:3 has the corresponding replica shard. +1. Either: + + - Node:2 fails and the replica shard on node:3 is promoted to primary. + - Node:3 fails and the primary shard is no longer replicated to the replica shard on the failed node. + +1. If replica HA is enabled, a new replica shard is created on an available node. +1. The data from the primary shard is replicated to the new replica shard. + +{{< note >}} +- Replica HA follows all prerequisites of replica migration, such as [rack-awareness]({{< relref "/operate/rs/clusters/configure/rack-zone-awareness.md" >}}). +- Replica HA migrates as many shards as possible based on available DRAM in the target node. When no DRAM is available, replica HA stops migrating replica shards to that node. +{{< /note >}} + +## Configure high availability for replica shards + +If replica high availability is enabled for both the cluster and a database, +the database's replica shards automatically migrate to another node when a primary or replica shard fails. +If replica HA is not enabled at the cluster level, +replica HA will not migrate replica shards even if replica HA is enabled for a database. + +Replica high availability is enabled for the cluster by default. + +When you create a database using the Cluster Manager UI, replica high availability is enabled for the database by default if you enable replication. 
+ +{{When you select the Replication checkbox in the High availability section of the database configuration screen, the Replica high availability checkbox is also selected by default.}} + +To use replication without replication high availability, clear the **Replica high availability** checkbox. + +You can also enable or turn off replica high availability for a database using `rladmin` or the REST API. + +{{< note >}} +For Active-Active databases, replica HA is enabled for the database by default to make sure that replica shards are available for Active-Active replication. +{{< /note >}} + +### Configure cluster policy for replica HA + +To enable or turn off replica high availability by default for the entire cluster, use one of the following methods: + +- [rladmin tune cluster]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster slave_ha { enabled | disabled } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "slave_ha": } + ``` + +### Turn off replica HA for a database + +To turn off replica high availability for a specific database using `rladmin`, run: + +``` text +rladmin tune db db: slave_ha disabled +``` + +You can use the database name in place of `db:` in the preceding command. + + +## Configuration options + +You can see the current configuration options for replica HA with: + +``` text +rladmin info cluster +``` + +### Grace period + +By default, replica HA has a 10-minute grace period after node failure and before new replica shards are created. + +{{}}The default grace period is 30 minutes for containerized applications using [Redis Enterprise Software for Kubernetes]({{< relref "/operate/kubernetes/" >}}).{{}} + +To configure this grace period from rladmin, run: + +``` text +rladmin tune cluster slave_ha_grace_period +``` + + +### Shard priority + +Replica shard migration is based on priority. When memory resources are limited, the most important replica shards are migrated first: + +1. `slave_ha_priority` - Replica shards with higher + integer values are migrated before shards with lower values. + + To assign priority to a database, run: + + ``` text + rladmin tune db db: slave_ha_priority + ``` + + You can use the database name in place of `db:` in the preceding command. + +1. Active-Active databases - Active-Active database synchronization uses replica shards to synchronize between the replicas. +1. Database size - It is easier and more efficient to move replica shards of smaller databases. +1. Database UID - The replica shards of databases with a higher UID are moved first. + +### Cooldown periods + +Both the cluster and the database have cooldown periods. + +After node failure, the cluster cooldown period (`slave_ha_cooldown_period`) prevents another replica migration due to another node failure for any +database in the cluster until the cooldown period ends. The default is one hour. + +After a database is migrated with replica HA, +it cannot go through another migration due to another node failure until the cooldown period for the database (`slave_ha_bdb_cooldown_period`) ends. The default is two hours. 
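
Before changing the grace or cooldown periods, you can check the values currently in effect with `rladmin info cluster`. The sketch below is illustrative only: the parameter names are the ones described in this section, the values shown are the documented defaults (in seconds), and the exact output format may vary by version:

```sh
$ rladmin info cluster | grep slave_ha
   slave_ha: enabled
   slave_ha_grace_period: 600
   slave_ha_cooldown_period: 3600
   slave_ha_bdb_cooldown_period: 7200
```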
+ +To configure cooldown periods, use [`rladmin tune cluster`]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + +- For the cluster: + + ``` text + rladmin tune cluster slave_ha_cooldown_period + ``` + +- For all databases in the cluster: + + ``` text + rladmin tune cluster slave_ha_bdb_cooldown_period + ``` + +### Alerts + +The following alerts are sent during replica HA activation: + +- Shard migration begins after the grace period. +- Shard migration fails because there is no available node (sent hourly). +- Shard migration is delayed because of the cooldown period. +--- +Title: Change database upgrade configuration +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure cluster-wide policies that affect default database upgrades. +linkTitle: Upgrade configuration +toc: 'true' +weight: 15 +--- + +Database upgrade configuration includes cluster-wide policies that affect default database upgrades. + +## Edit upgrade configuration + +To edit database upgrade configuration using the Cluster Manager UI: + +1. On the **Databases** screen, select {{< image filename="/images/rs/buttons/button-toggle-actions-vertical.png#no-click" alt="Toggle actions button" width="22px" class="inline" >}} to open a list of additional actions. + +1. Select **Upgrade configuration**. + +1. Change database [upgrade configuration settings](#upgrade-config-settings). + +1. Select **Save**. + +## Upgrade configuration settings {#upgrade-config-settings} + +### Database shard parallel upgrade + +To change the number of shards upgraded in parallel during database upgrades, use one of the following methods: + +- Cluster Manager UI – Edit **Database shard parallel upgrade** in [**Upgrade configuration**](#edit-upgrade-configuration) + +- [rladmin tune cluster]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster parallel_shards_upgrade { all | } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "parallel_shards_upgrade": } + ``` + +### RESP3 support + +The cluster-wide option `resp3_default` determines the default value of the `resp3` option, which enables or deactivates RESP3 for a database, upon upgrading a database to version 7.2 or later. `resp3_default` is set to `enabled` by default. + +To change `resp3_default` to `disabled`, use one of the following methods: + +- Cluster Manager UI – Edit **RESP3 support** in [**Upgrade configuration**](#edit-upgrade-configuration) + +- [rladmin tune cluster]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster resp3_default { enabled | disabled } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "resp3_default": } + ``` +--- +Title: Enable OSS Cluster API +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linkTitle: OSS Cluster API +weight: 20 +--- + +Review [Redis OSS Cluster API]({{< relref "/operate/rs/clusters/optimize/oss-cluster-api" >}}) to determine if you should enable this feature for your database. + +## Prerequisites + +The Redis OSS Cluster API is supported only when a database meets specific criteria. 
+ +The database must: + +- Use the standard [hashing policy]({{< relref "/operate/rs/databases/durability-ha/clustering#supported-hashing-policies" >}}). +- Have the [proxy policy]({{< relref "/operate/rs/databases/configure/proxy-policy" >}}) set to either _All primary shards_ or _All nodes_. + +In addition, the database must _not_: + +- Use node `include` or `exclude` in the proxy policy. +- Use [RediSearch]({{< relref "/operate/oss_and_stack/stack-with-enterprise/search" >}}), [RedisTimeSeries]({{< relref "/operate/oss_and_stack/stack-with-enterprise/timeseries" >}}), or [RedisGears]({{< relref "/operate/oss_and_stack/stack-with-enterprise/gears-v1" >}}) modules. + +The OSS Cluster API setting applies to individual databases instead of the entire cluster. + +## Enable OSS Cluster API support + +You can use the Cluster Manager UI, the `rladmin` utility, or the REST API to enable OSS Cluster API support for a database. + +When you enable OSS Cluster API support for an existing database, the change applies to new connections but does not affect existing connections. Clients must close existing connections and reconnect to apply the change. + +### Cluster Manager UI method + +When you use the Cluster Manager UI to enable the OSS Cluster API, it automatically configures the [prerequisites]({{< relref "/operate/rs/databases/configure/oss-cluster-api#prerequisites" >}}). + +To enable the OSS Cluster API for an existing database in the Cluster Manager UI: + +1. From the database's **Configuration** tab, select **Edit**. + +1. Expand the **Clustering** section. + +1. Select **Enable sharding**. + +1. Select **OSS Cluster API**. + + {{Use the *OSS Cluster API* setting to enable the API for the selected database.}} + +1. Select **Save**. + +You can also use the Cluster Manager UI to enable the setting when creating a new database. + +### Command-line method + +You can use the [`rladmin` utility]({{< relref "/operate/rs/references/cli-utilities/rladmin/" >}}) to enable the OSS Cluster API for Redis Enterprise Software databases, including Replica Of databases. + +For Active-Active (CRDB) databases, [use the crdb-cli utility](#active-active-databases). + +Ensure the [prerequisites]({{< relref "/operate/rs/databases/configure/oss-cluster-api#prerequisites" >}}) have been configured. Then, enable the OSS Cluster API for a Redis database from the command line: + +```sh +$ rladmin tune db oss_cluster enabled +``` + +To determine the current setting for a database from the command line, use `rladmin info db` to return the value of the `oss_cluster` setting. + +```sh +$ rladmin info db test | grep oss_cluster: + oss_cluster: enabled +``` + +The OSS Cluster API setting applies to the specified database only; it does not apply to the cluster. + +### REST API method + +You can enable the OSS Cluster API when you [create a database]({{}}) using the REST API: + +```sh +POST /v1/bdbs +{ + "oss_cluster": true, + // Other database configuration parameters +} +``` + +To enable the OSS Cluster API for an existing database, you can use an [update database configuration]({{}}) REST API request: + +```sh +PUT /v1/bdbs/ +{ "oss_cluster": true } +``` + +### Active-Active databases + +The OSS Cluster API setting applies to all instances of the Active-Active database across participating clusters. To enable the OSS Cluster API for Active-Active databases, use the [Cluster Manager UI](#cluster-manager-ui) or the [`crdb-cli`]({{}}) utility. 
+ +To create an Active-Active database and enable the OSS Cluster API with `crdb-cli`: + +```sh +$ crdb-cli crdb create --name \ + --memory-size 10g --port \ + --sharding true --shards-count 2 \ + --replication true --oss-cluster true --proxy-policy all-master-shards \ + --instance fqdn=,username=,password= \ + --instance fqdn=,username=,password= \ + --instance fqdn=,username=,password= +``` + +See the [`crdb-cli crdb create`]({{}}) reference for more options. + +To enable the OSS Cluster API for an existing Active-Active database with `crdb-cli`: + +1. Obtain the `CRDB-GUID` for the new database: + + ```sh + $ crdb-cli crdb list + CRDB-GUID NAME REPL-ID CLUSTER-FQDN + Test 4 cluster1.local + ``` + +1. Use the `CRDB-GUID` to enable the OSS Cluster API: + + ```sh + $ crdb-cli crdb update --crdb-guid \ + --oss-cluster true + ``` + +## Change preferred IP type + +By default, using [`CLUSTER SLOTS`]({{}}) and [`CLUSTER SHARDS`]({{}}) in a Redis Enterprise Software cluster exposes the internal IP addresses for databases with the OSS Cluster API enabled. + +To use external IP addresses instead of internal IP addresses, run the following [`rladmin tune db`]({{}}) command for each affected database: + +```sh +$ rladmin tune db db: oss_cluster_api_preferred_ip_type external +``` + +## Turn off OSS Cluster API support + +To deactivate OSS Cluster API support for a database, either: + +- Use the Cluster Manager UI to turn off the **OSS Cluster API** in the **Clustering** section of the database **Configuration** settings. + +- Use the appropriate utility to deactivate the OSS Cluster API setting. + + For standard databases, including Replica Of, use `rladmin`: + + ```sh + $ rladmin tune db oss_cluster disabled + ``` + + For Active-Active databases, use the Cluster Manager UI or `crdb-cli`: + + ```sh + $ crdb-cli crdb update --crdb-guid \ + --oss-cluster false + ``` + +When you turn off OSS Cluster API support for an existing database, the change applies to new connections but does not affect existing connections. Clients must close existing connections and reconnect to apply the change. + +## Multi-key command support + +When you enable the OSS Cluster API for a database, +[multi-key commands]({{< relref "/operate/rc/databases/configuration/clustering#multikey-operations" >}}) are only allowed when all keys are mapped to the same slot. + +To verify that your database meets this requirement, make sure that the `CLUSTER KEYSLOT` reply is the same for all keys affected by the multi-key command. To learn more, see [multi-key operations]({{< relref "/operate/rs/databases/durability-ha/clustering#multikey-operations" >}}). +--- +Title: Configure shard placement +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure shard placement to improve performance. +linktitle: Shard placement +weight: 60 +--- +In Redis Enterprise Software , the location of master and replica shards on the cluster nodes can impact the database and node performance. +Master shards and their corresponding replica shards are always placed on separate nodes for data resiliency. +The [shard placement policy]({{< relref "/operate/rs/databases/memory-performance/shard-placement-policy.md" >}}) helps to maintain optimal performance and resiliency. + +{{< embed-md "shard-placement-intro.md" >}} + +## Default shard placement policy + +When you create a new cluster, the cluster configuration has a `dense` default shard placement policy. +When you create a database, this default policy is applied to the new database. 
+ +To see the current default shard placement policy, run `rladmin info cluster`: + +{{< image filename="/images/rs/shard_placement_info_cluster.png" >}} + +To change the default shard placement policy so that new databases are created with the `sparse` shard placement policy, run: + +```sh +rladmin tune cluster default_shards_placement [ dense | sparse ] +``` + +## Shard placement policy for a database + +To see the shard placement policy for a database in `rladmin status`. + +{{< image filename="/images/rs/shard_placement_rladmin_status.png" >}} + +To change the shard placement policy for a database, run: + +```sh +rladmin placement db [ database name | database ID ] [ dense | sparse ] +``` +--- +Title: Configure proxy policy +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linktitle: Proxy policy +weight: 40 +--- +Redis Software provides high-performance data access +through a proxy process that manages and optimizes access to shards +within the Redis Software cluster. Each node contains a single proxy process. +Each proxy can be active and take incoming traffic or it can be passive +and wait for failovers. + +## Proxy policies + +A database can have one of the following proxy policies: + +| Proxy policy | Description | Recommended use cases | Advantages | Disadvantages | +|--------------|-------------|-----------------------|-----------|-----------------| +| Single | Only a single proxy is bound to the database. This is the default database configuration. | Most use cases without high traffic or load | Lower resource usage, fewer application-to-cluster connections | Higher latency, more network hops | +| All primary shards | Multiple proxies are bound to the database, one on each node that hosts a database primary shard. | Most use cases that require multiple endpoints, such as when using the [OSS Cluster API]({{}}) | Lower latency, fewer network hops, higher throughput | Higher resource usage, more application-to-proxy connections | +| All nodes | Multiple proxies are bound to the database, one on each node in the cluster, regardless of whether or not there is a shard from this database on the node. | When using [load balancers]({{}}) for environments without DNS | Higher throughput | Highest resource usage | + +## View proxy policy + +You can use the Cluster Manager UI, [`rladmin`]({{}}), or the [REST API]({{}}) to view proxy configuration settings. + +The [`rladmin info cluster`]({{}}) command returns the current proxy policy for sharded and non-sharded (single shard) databases. + +```sh +$ rladmin info cluster +cluster configuration: +   ... + default_non_sharded_proxy_policy: single +   default_sharded_proxy_policy: single + ... +``` + +## Configure database proxy policy + +You can use the [Cluster Manager UI](#cluster-manager-ui-method), the [REST API](#rest-api-method), or [`rladmin`](#command-line-method) to configure a database's proxy policy. + +{{}} +Any configuration update that unbinds existing proxies can disconnect existing client connections. +{{}} + +### Cluster Manager UI method + +You can change a database's proxy policy when you [create]({{}}) or [edit]({{}}) a database using the Cluster Manager UI: + +1. While in edit mode on the database's configuration screen, expand the **Clustering** section. + +1. Select a policy from the **Database proxy** list. + +1. Click **Create** or **Save**. 
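
After you save the change, you can confirm that the new policy is in effect by listing the database endpoints; the `ROLE` column reflects the bound proxy policy. The output below is an abbreviated, illustrative sketch that follows the `rladmin status` endpoint listing shown earlier in this document:

```sh
$ rladmin status
ENDPOINTS:
DB:ID       NAME      ID             NODE       ROLE                SSL
db:1        db1       endpoint:1:1   node:1     all-master-shards   No
```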
+ +### REST API method + +You can specify a proxy policy when you [create a database]({{}}) using the REST API: + +```sh +POST /v1/bdbs +{ + "proxy_policy": "single | all-master-shards | all-nodes", + // Other database configuration parameters +} +``` + +To change an existing database's proxy policy, you can use an [update database configuration]({{}}) REST API request: + +```sh +PUT /v1/bdbs/ +{ "proxy_policy": "single | all-master-shards | all-nodes" } +``` + +### Command-line method + +You can configure a database's proxy policy using [`rladmin bind`]({{}}). + +The following example changes the bind policy for a database named "db1" with an endpoint ID "1:1" to "All primary shards" proxy policy: + +```sh +rladmin bind db db1 endpoint 1:1 policy all-master-shards +``` + +The next command performs the same task using the database ID instead of the name. The ID of this database is "1". + +```sh +rladmin bind db db:1 endpoint 1:1 policy all-master-shards +``` + +{{< note >}} +You can find the endpoint ID for the endpoint argument by running `rladmin status`. Look for the endpoint ID information under +the `ENDPOINT` section of the output. +{{< /note >}} + +### Reapply policies after topology changes + +If you want to reapply the policy after topology changes, such as node restarts, +failovers and migrations, run this command to reset the policy: + +```sh +rladmin bind db db: endpoint policy +``` + +This is not required with single policies. + +#### Other implications + +During the regular operation of the cluster different actions might take +place, such as automatic migration or automatic failover, which change +what proxy needs to be bound to what database. When such actions take +place the cluster attempts, as much as possible, to automatically change +proxy bindings to adhere to the defined policies. That said, the cluster +attempts to prevent any existing client connections from being +disconnected, and hence might not entirely enforce the policies. In such +cases, you can enforce the policy using the appropriate rladmin +commands. + +## Multiple active proxies + +Each database you create in a Redis Software cluster has an endpoint, which consists of a unique URL and port on the FQDN. This endpoint receives all the traffic for all operations for that database. By default, Redis Software binds this database endpoint to one of the proxies on a single node in the cluster. This proxy becomes an active proxy and receives all the operations for the given database. If the node with the active proxy fails, a new proxy on another node takes over as part of the failover process automatically. + +In most cases, a single proxy can handle a large number of operations +without consuming additional resources. However, under high load, +network bandwidth, or a high rate of packets per second (PPS) on the +single active proxy can become a bottleneck to how fast database +operations can be performed. In such cases, having multiple active proxies across multiple nodes, mapped to the same external database +endpoint, can significantly improve throughput. + +You can configure a database to have multiple internal proxies, which can improve performance in some cases. +Even though multiple active proxies can help improve the throughput of database +operations, configuring multiple active proxies may cause additional +latency in operations as the shards and proxies are spread across +multiple nodes in the cluster. 
+ +{{< note >}} +When the network on a single active proxy becomes the bottleneck, consider enabling multiple NIC support in Redis Software. With nodes that have multiple physical NICs (Network Interface Cards), you can configure Redis Software to separate internal and external traffic onto independent physical NICs. For more details, refer to [Multi-IP & IPv6]({{< relref "/operate/rs/networking/multi-ip-ipv6.md" >}}). +{{< /note >}} + +Having multiple proxies for a database can improve Redis Software's ability for fast failover in case of proxy or node failure. With multiple proxies for a database, a client doesn't need to wait for the cluster to spin up another proxy and a DNS change in most cases. Instead, the client uses the next IP address in the list to connect to another proxy. +--- +Title: Configure database persistence +alwaysopen: false +categories: +- docs +- operate +- rs +description: How to configure database persistence with either an append-only file + (AOF) or snapshots. +linktitle: Persistence +weight: 30 +--- + +Data is stored in RAM or a combination of RAM and flash memory ([Auto Tiering]({{< relref "/operate/rs/databases/auto-tiering/" >}})), which risks data loss during process or server failures. Redis Enterprise Software supports multiple methods to persist data to disk on a per-database basis to ensure data durability. + +You can configure [persistence](https://redis.com/redis-enterprise/technology/durable-redis/) during database creation or by editing an existing database. Although the persistence model can be changed dynamically, the switch can take time depending on the database size and the models being switched. + +## Configure database persistence + +You can configure persistence when you [create a database]({{< relref "/operate/rs/databases/create" >}}), or you can edit an existing database's configuration: + +1. From the **Databases** list, select the database, then select **Configuration**. + +1. Select **Edit**. + +1. Expand the **Durability** section. + +1. For **Persistence**, select an [option](#data-persistence-options) from the list. + +1. Select **Save**. + +## Data persistence options + +There are six options for persistence in Redis Enterprise Software: + +| **Options** | **Description** | +| ------ | ------ | +| None | Data is not persisted to disk at all. | +| Append-only file (AOF) - fsync every write | Data is fsynced to disk with every write. | +| Append-only file (AOF) - fsync every 1 sec | Data is fsynced to disk every second. | +| Snapshot, every 1 hour | A snapshot of the database is created every hour. | +| Snapshot, every 6 hours | A snapshot of the database is created every 6 hours. | +| Snapshot, every 12 hours | A snapshot of the database is created every 12 hours. | + +## Select a persistence strategy + +When selecting your persistence strategy, you should take into account your tolerance for data loss and performance needs. There will always be tradeoffs between the two. +The fsync() system call syncs data from file buffers to disk. You can configure how often Redis performs an fsync() to most effectively make tradeoffs between performance and durability for your use case. +Redis supports three fsync policies: every write, every second, and disabled. + +Redis also allows snapshots through RDB files for persistence. Within Redis Enterprise, you can configure both snapshots and fsync policies. + +For any high availability needs, use replication to further reduce the risk of data loss. 
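+
+You can also set persistence programmatically. The following REST API sketch assumes a database with ID `1`; the `data_persistence` and `aof_policy` values mirror those used in the `crdb-cli` example later on this page:
+
+```sh
+# Switch an existing database to AOF persistence, fsynced every second
+PUT /v1/bdbs/1
+{ "data_persistence": "aof", "aof_policy": "appendfsync-every-sec" }
+```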
+
+**For use cases where data loss has a high cost:**
+
+Append-only file (AOF) - fsync every write - Redis Enterprise sets the Redis directive `appendfsync always`. With this policy, Redis waits for the write and the fsync to complete before sending an acknowledgement to the client that the data has been written. This introduces the performance overhead of the fsync in addition to the execution of the command. The fsync policy always favors durability over performance and should be used when there is a high cost for data loss.
+
+**For use cases where only limited data loss is tolerable:**
+
+Append-only file (AOF) - fsync every 1 sec - Redis fsyncs any newly written data every second. This policy balances performance and durability and should be used when minimal data loss is acceptable in the event of a failure. This is the default Redis policy. This policy can result in one to two seconds' worth of data loss, but on average it is closer to one second.
+
+{{< note >}}
+If you use AOF for persistence, enable replication to improve performance. When both features are enabled for a database, the replica handles persistence, which prevents any performance impact on the master.
+{{< /note >}}
+
+**For use cases where data loss is tolerable or recoverable for extended periods of time:**
+
+- Snapshot, every 1 hour - Performs a full backup every hour.
+- Snapshot, every 6 hours - Performs a full backup every 6 hours.
+- Snapshot, every 12 hours - Performs a full backup every 12 hours.
+- None - Does not back up or persist data at all.
+
+## Append-only file (AOF) vs snapshot (RDB)
+
+To help you decide which option is right for your use case, the following table compares the two:
+
+| **Append-only File (AOF)** | **Snapshot (RDB)** |
+|------------|-----------------|
+| More resource intensive | Less resource intensive |
+| Provides better durability (recover the latest point in time) | Less durable |
+| Slower time to recover (larger files) | Faster recovery time |
+| More disk space required (files tend to grow large and require compaction) | Requires fewer resources (I/O once every several hours and no compaction required) |
+
+## Active-Active data persistence
+
+Active-Active databases support AOF persistence only. Snapshot persistence is not supported for Active-Active databases.
+
+If an Active-Active database is using snapshot persistence, use `crdb-cli` to switch to AOF persistence:
+
+```text
+crdb-cli crdb update --crdb-guid --default-db-config \
+ '{"data_persistence": "aof", "aof_policy":"appendfsync-every-sec"}'
+```
+
+## Auto Tiering data persistence
+
+Auto Tiering flash storage is not considered persistent storage.
+
+Flash-based databases are expected to hold larger datasets, and shard repair times can take longer after node failures. To better protect the database against node failures with longer repair times, consider enabling master and replica dual data persistence.
+
+However, dual data persistence with replication adds some processor and network overhead, especially for cloud configurations with network-attached persistent storage, such as EBS-backed volumes in AWS.
+
+There may be times when performance is critical for your use case and you don't want to risk data persistence adding latency.
+ +You can enable or turn off data persistence on the master shards using the +following `rladmin` command: + +```sh +rladmin tune db master_persistence +``` +--- +Title: Manage database tags +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage tags for databases in a Redis Software cluster. +linkTitle: Database tags +toc: 'true' +weight: 17 +--- + +You can create custom tags to categorize databases in a Redis Software cluster. + +The **Databases** screen shows tags for each database in the list. + +{{The databases screen includes tags for each database.}} + +## Add database tags + +You can add tags when you [create a database]({{}}) or [edit an existing database's configuration]({{}}). + +To add tags to a database using the Cluster Manager UI: + +1. While in edit mode on the database's configuration screen, click **Add tags**. + + {{The Add tags button on the database configuration screen.}} + +1. Enter a key and value for the tag. Keys and values previously used by existing tags will appear as suggestions. + + {{The Manage tags dialog lets you add, edit, or delete tags.}} + +1. To add additional tags, click **Add tag**. + +1. After you finish adding tags, click **Done** to close the tag manager. + +1. Click **Create** or **Save**. + +## Edit database tags + +To edit a database's existing tags using the Cluster Manager UI: + +1. Go to the database's **Configuration** screen, then click **Edit**. + +1. Next to the existing **Tags**, click {{< image filename="/images/rs/buttons/edit-db-tags-button.png#no-click" alt="Edit tags button" width="22px" class="inline" >}}. + + {{The Edit tags button on the database configuration screen.}} + +1. Edit or delete existing tags, or click **Add tag** to add new tags. + +1. After you finish editing tags, click **Done** to close the tag manager. + +1. Click **Save**. +--- +Title: Configure database defaults +alwaysopen: false +categories: +- docs +- operate +- rs +description: Cluster-wide policies that determine default settings when creating new + databases. +linkTitle: Database defaults +toc: 'true' +weight: 10 +--- + +Database defaults are cluster-wide policies that determine default settings when creating new databases. + +## Edit database defaults + +To edit default database configuration using the Cluster Manager UI: + +1. On the **Databases** screen, select {{< image filename="/images/rs/buttons/button-toggle-actions-vertical.png#no-click" alt="Toggle actions button" width="22px" class="inline" >}} to open a list of additional actions. + +1. Select **Database defaults**. + +1. Configure [database defaults](#db-defaults). + + {{Database defaults configuration panel.}} + +1. Select **Save**. + +## Database defaults {#db-defaults} + +### Endpoint configuration + +You can choose a predefined endpoint configuration to use the recommended database proxy and shards placement policies for your use case. If you want to set these policies manually instead, select **Custom** endpoint configuration. + +| Endpoint configuration | Database proxy | Shards placement | Description | +|-----------|------------|----------------|------------------|------------| +| Enterprise clustering | Single | Dense | Sets up a single endpoint that uses DNS to automatically reflect IP address updates after failover or topology changes. | +| Using a load balancer | All nodes | Sparse | Configure Redis with a load balancer like HAProxy or Nginx for environments without DNS. 
| +| Multiple endpoints | All primary shards | Sparse | To set up multiple endpoints, enable **OSS Cluster API** in the database settings and ensure client support. Clients initially connect to the primary node to retrieve the cluster topology, which allows direct connections to individual Redis proxies on each node. | +| Custom | Single, all primary shards, or all nodes | Dense or sparse | Manually choose default database proxy and shards placement policies. | + +### Database proxy + +Redis Enterprise Software uses [proxies]({{< relref "/operate/rs/references/terminology#proxy" >}}) to manage and optimize access to database shards. Each node in the cluster runs a single proxy process, which can be active (receives incoming traffic) or passive (waits for failovers). + +You can configure default [proxy policies]({{< relref "/operate/rs/databases/configure/proxy-policy" >}}) to determine which nodes' proxies are active and bound to new databases by default. + +To configure the default database proxy policy using the Cluster Manager UI: + +1. [**Edit database defaults**](#edit-database-defaults). + +1. Select a predefined [**Endpoint Configuration**](#endpoint-configuration) to use a recommended database proxy policy, or choose **Custom** to set the policy manually. Changing the database proxy default in the Cluster Manager UI affects both sharded and non-sharded proxy policies. + + {{The Database defaults panel lets you select Database proxy and Shards placement if Endpoint Configuration is set to Custom.}} + +#### Non-sharded proxy policy + +To configure the default proxy policy for non-sharded databases, use one of the following methods: + +- [rladmin tune cluster]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster default_non_sharded_proxy_policy { single | all-master-shards | all-nodes } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "default_non_sharded_proxy_policy": "single | all-master-shards | all-nodes" } + ``` + +#### Sharded proxy policy + +To configure the default proxy policy for sharded databases, use one of the following methods: + +- [rladmin tune cluster]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster default_sharded_proxy_policy { single | all-master-shards | all-nodes } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "default_sharded_proxy_policy": "single | all-master-shards | all-nodes" } + ``` + +### Shards placement + +The default [shard placement policy]({{< relref "/operate/rs/databases/memory-performance/shard-placement-policy" >}}) determines the distribution of database shards across nodes in the cluster. + +Shard placement policies include: + +- `dense`: places shards on the smallest number of nodes. + +- `sparse`: spreads shards across many nodes. + +To configure default shard placement, use one of the following methods: + +- Cluster Manager UI: + + 1. [**Edit database defaults**](#edit-database-defaults). + + 1. Select a predefined [**Endpoint Configuration**](#endpoint-configuration) to use a recommended shards placement policy, or choose **Custom** to set the policy manually. 
+ + {{The Database defaults panel lets you select Database proxy and Shards placement if Endpoint Configuration is set to Custom.}} + +- [rladmin tune cluster]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster default_shards_placement { dense | sparse } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "default_shards_placement": "dense | sparse" } + ``` + +### Database version + +New databases use the default Redis database version unless you select a different **Database version** when you [create a database]({{}}) in the Cluster Manager UI or specify the `redis_version` in a [create database REST API request]({{< relref "/operate/rs/references/rest-api/requests/bdbs" >}}). + +To configure the Redis database version, use one of the following methods: + +- Cluster Manager UI: Edit **Database version** in [**Database defaults**](#edit-database-defaults) + + +- [rladmin tune cluster]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster default_redis_version + ``` + +- [Update cluster policy]({{< relref "/operate/rs/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "default_provisioned_redis_version": "x.y" } + ``` + +### Internode encryption + +Enable [internode encryption]({{< relref "/operate/rs/security/encryption/internode-encryption" >}}) to encrypt data in transit between nodes for new databases by default. + +To enable or turn off internode encryption by default, use one of the following methods: + +- Cluster Manager UI: Edit **Internode Encryption** in [**Database defaults**](#edit-database-defaults) + +- [rladmin tune cluster]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster data_internode_encryption { enabled | disabled } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "data_internode_encryption": } + ``` +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure settings specific to each database. +hideListLinks: true +linktitle: Configure +title: Configure database settings +toc: 'true' +weight: 20 +--- + +You can manage your Redis Enterprise Software databases with several tools: + +- [Cluster Manager UI](#edit-database-settings) (the web-based user interface) + +- Command-line tools: + + - [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) for standalone database configuration + + - [`crdb-cli`]({{< relref "/operate/rs/references/cli-utilities/crdb-cli" >}}) for Active-Active database configuration + + - [`redis-cli`]({{< relref "/develop/tools/cli" >}}) for Redis Open Source configuration + +- [REST API]({{< relref "/operate/rs/references/rest-api/_index.md" >}}) + +## Edit database settings + +You can change the configuration of a Redis Enterprise Software database at any time. + +To edit the configuration of a database using the Cluster Manager UI: + +1. On the **Databases** screen, select the database you want to edit. + +1. From the **Configuration** tab, select **Edit**. + +1. Change any [configurable database settings](#config-settings). 
+ + {{< note >}} +For [Active-Active database instances]({{< relref "/operate/rs/databases/active-active" >}}), most database settings only apply to the instance that you are editing. + {{< /note >}} + +1. Select **Save**. + +## Configuration settings {#config-settings} + +### General + +- [**Tags**]({{}}) - Add custom tags to categorize the database. + +- **Database version** - Select the Redis version when you create a database. + +- **Database name** - The database name requirements are: + + - Maximum of 63 characters + + - Only letters, numbers, or hyphens (-) are valid characters + + - Must start and end with a letter or digit + + - Case-sensitive + +- **Endpoint port number** - You can define the port number that clients use to connect to the database. Otherwise, a port is randomly selected. + + {{< note >}} +You cannot change the [port number]({{< relref "/operate/rs/networking/port-configurations.md" >}}) +after the database is created. + {{< /note >}} + +### Capacity + +- **Memory limit** - [Database memory limits]({{< relref "/operate/rs/databases/memory-performance/memory-limit.md" >}}) include all database replicas and shards, including replica shards in database replication and database shards in database clustering. + + If the total size of the database in the cluster reaches the memory limit, the memory eviction policy for the database is enforced. + +- **RAM limit** - If you create a database with Auto Tiering enabled, you also need to set the RAM-to-Flash ratio. Minimum RAM is 10%. Maximum RAM is 50%. + +- [**Memory eviction**]({{}}) - By default, when the total size of the database reaches its memory limit, the database evicts keys according to the least recently used keys out of all keys with an "expire" field set to make room for new keys. You can select a different eviction policy. + +### Capabilities + +When you create a new in-memory database, you can enable multiple Redis Stack [**Capabilities**]({{}}). + +For Auto Tiering databases, you can enable capabilities that support Auto Tiering. See [Redis Enterprise and Redis Stack feature compatibility +]({{< relref "/operate/oss_and_stack/stack-with-enterprise/enterprise-capabilities" >}}) for compatibility details. + +{{}} +To use Redis Stack capabilities, enable them when you create a new database. +You cannot enable them after database creation. +{{}} + +To add capabilities to the database: + +1. In the **Capabilities** section, select one or more capabilities. + +1. To customize capabilities, click **Parameters** and enter the optional custom configuration. + +1. Click **Done**. + +To change capabilities' parameters for an existing database using the Cluster Manager UI: + + 1. In the **Capabilities** section, click **Edit Parameters**. + + 1. After you finish editing the module's configuration parameters, click **Done** to close the parameter editor. + +### High Availability + +- [**Replication**]({{< relref "/operate/rs/databases/durability-ha/replication.md" >}}) - We recommend you use intra-cluster replication to create replica shards for each database for high availability. + + If the cluster is configured to support [rack-zone awareness]({{< relref "/operate/rs/clusters/configure/rack-zone-awareness.md" >}}), you can also enable rack-zone awareness for the database. + +- [**Replica high availability**]({{< relref "/operate/rs/databases/configure/replica-ha" >}}) - Automatically migrates replica shards to an available node if a replica node fails or is promoted to primary. 
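+
+Replication and replica high availability can also be enabled programmatically. The following REST API sketch assumes a database with ID `1`; the field names (`replication`, `slave_ha`) are the underlying configuration parameters, so verify them against the REST API reference for your version:
+
+```sh
+# Enable in-cluster replication and automatic replica shard migration
+PUT /v1/bdbs/1
+{ "replication": true, "slave_ha": true }
+```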
+ +### Clustering + +- **Enable sharding** - You can either: + + - Turn on sharding to enable [database clustering]({{< relref "/operate/rs/databases/durability-ha/clustering.md" >}}) and select the number of database shards. + + When database clustering is enabled, databases are subject to limitations on [Multi-key commands]({{< relref "/operate/rs/databases/durability-ha/clustering.md" >}}). + + You can increase the number of shards in the database at any time. + + - Turn off sharding to use only one shard so that you can use [Multi-key commands]({{< relref "/operate/rs/databases/durability-ha/clustering.md" >}}) without the limitations. + +- [**Shards placement**]({{< relref "/operate/rs/databases/memory-performance/shard-placement-policy" >}}) - Determines how to distribute database shards across nodes in the cluster. + + - _Dense_ places shards on the smallest number of nodes. + + - _Sparse_ spreads shards across many nodes. + +- [**OSS Cluster API**]({{< relref "/operate/rs/databases/configure/oss-cluster-api.md" >}}) - The OSS Cluster API configuration allows access to multiple endpoints for increased throughput. + + This configuration requires clients to connect to the primary node to retrieve the cluster topology before they can connect directly to proxies on each node. + + When you enable the OSS Cluster API, shard placement changes to _Sparse_, and the database proxy policy changes to _All primary shards_ automatically. + + {{}} +You must use a client that supports the cluster API to connect to a database that has the cluster API enabled. + {{}} + +- **Hashing policy** - You can accept the [standard hashing policy]({{}}), which is compatible with Redis Open Source, or define a [custom hashing policy]({{}}) to define where keys are located in the clustered database. + +- [**Database proxy**]({{< relref "/operate/rs/databases/configure/proxy-policy" >}}) - Determines the number and location of active proxies, which manage incoming database operation requests. + +### Durability + +- [**Persistence**]({{}}) - To protect against loss of data stored in RAM, you can enable data persistence and store a copy of the data on disk with snapshots or an append-only file. + +- **Scheduled backup** - You can configure [periodic backups]({{}}) of the database, including the interval and backup location parameters. + +### TLS + +You can require [**TLS**]({{< relref "/operate/rs/security/encryption/tls/" >}}) encryption and authentication for all communications, TLS encryption and authentication for Replica Of communication only, and TLS authentication for clients. + +### Access control + +- **Unauthenticated access** - You can access the database as the default user without providing credentials. + +- **Password-only authentication** - When you configure a password for your database's default user, all connections to the database must authenticate with the [AUTH command]({{< relref "/commands/auth" >}}). + + If you also configure an access control list, connections can specify other users for authentication, and requests are allowed according to the Redis ACLs specified for that user. + + Creating a database without ACLs enables a *default* user with full access to the database. You can secure default user access by requiring a password. 
+ +- **Access Control List** - You can specify the [user roles]({{< relref "/operate/rs/security/access-control/create-db-roles" >}}) that have access to the database and the [Redis ACLs]({{< relref "/operate/rs/security/access-control/redis-acl-overview" >}}) that apply to those connections. + + To define an access control list for a database: + + 1. In **Security > Access Control > Access Control List**, select **+ Add ACL**. + + 1. Select a [role]({{< relref "/operate/rs/security/access-control/create-db-roles" >}}) to grant database access. + + 1. Associate a [Redis ACL]({{< relref "/operate/rs/security/access-control/create-db-roles" >}}) with the role and database. + + 1. Select the check mark to add the ACL. + +### Alerts + +Select [alerts]({{}}) to show in the database status and configure their thresholds. + +You can also choose to [send alerts by email]({{}}) to relevant users. + +### Replica Of + +With [**Replica Of**]({{}}), you can make the database a repository for keys from other databases. + +### RESP3 support + +[RESP]({{}}) (Redis Serialization Protocol) is the protocol clients use to communicate with Redis databases. If you enable RESP3 support, the database will support the RESP3 protocol in addition to RESP2. + +For more information about Redis Software's compatibility with RESP3, see [RESP compatibility with Redis Enterprise]({{}}). + +### Internode encryption + +Enable **Internode encryption** to encrypt data in transit between nodes for this database. See [Internode encryption]({{< relref "/operate/rs/security/encryption/internode-encryption" >}}) for more information. + +--- +Title: Recover a failed database +alwaysopen: false +categories: +- docs +- operate +- rs +- kubernetes +description: Recover a database after the cluster fails or the database is corrupted. +linktitle: Recover +weight: 35 +--- +When a cluster fails or a database is corrupted, you must: + +1. [Restore the cluster configuration]({{< relref "/operate/rs/clusters/cluster-recovery.md" >}}) from the CCS files +1. Recover the databases with their previous configuration and data + +To restore data to databases in the new cluster, +you must restore the database persistence files (backup, AOF, or snapshot files) to the databases. +These files are stored in the [persistence storage location]({{< relref "/operate/rs/installing-upgrading/install/plan-deployment/persistent-ephemeral-storage" >}}). + +The database recovery process includes: + +1. If the cluster failed, [recover the cluster]({{< relref "/operate/rs/clusters/cluster-recovery.md" >}}). +1. Identify recoverable databases. +1. Restore the database data. +1. Verify that the databases are active. + +## Prerequisites + +- Before you start database recovery, make sure that the cluster that hosts the database is healthy. + In the case of a cluster failure, + you must [recover the cluster]({{< relref "/operate/rs/clusters/cluster-recovery.md" >}}) before you recover the databases. + +- We recommend that you allocate new persistent storage drives for the new cluster nodes. + If you use the original storage drives, + make sure to back up all files on the old persistent storage drives to another location. + +## Recover databases + +After you prepare the cluster that hosts the database, +you can run the recovery process from the [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) +command-line interface (CLI). + +To recover the database: + +1. Mount the persistent storage drives with the recovery files to the new nodes. 
+ These drives must contain the cluster configuration backup files and database persistence files. + + {{< note >}} +Make sure that the user `redislabs` has permissions to access the storage location +of the configuration and persistence files on each of the nodes. + {{< /note >}} + + If you use local persistent storage, place all of the recovery files on each of the cluster nodes. + +1. To see which databases are recoverable, run: + + ```sh + rladmin recover list + ``` + + The status for each database can be either ready for recovery or missing files. + An indication of missing files in any of the databases can result from: + + - The storage location is not found - Make sure the recovery path is set correctly on all nodes in the cluster. + - Files are not found in the storage location - Move the files to the storage location. + - No permission to read the files - Change the file permissions so that redislabs:redislabs has 640 permissions. + - Files are corrupted - Locate copies of the files that are not corrupted. + + If you cannot resolve the issues, contact [Redis support](https://redis.com/company/support/). + +1. Recover the database using one of the following [`rladmin recover`]({{< relref "/operate/rs/references/cli-utilities/rladmin/recover" >}}) commands: + + - Recover all databases from the persistence files located in the persistent storage drives: + + ```sh + rladmin recover all + ``` + + - Recover a single database from the persistence files located in the persistent storage drives: + + - By database ID: + + ```sh + rladmin recover db db: + ``` + + - By database name: + + ```sh + rladmin recover db + ``` + + - Recover only the database configuration for a single database (without the data): + + ```sh + rladmin recover db only_configuration + ``` + + {{< note >}} +- If persistence was not configured for the database, the database is restored empty. +- For Active-Active databases that still have live instances, we recommend that you recover the configuration for the failed instances and let the data update from the other instances. +- For Active-Active databases where all instances need to be recovered, we recommend you recover one instance with the data and only recover the configuration for the other instances. + The empty instances then update from the recovered data. +- If the persistence files of the databases from the old cluster are not stored in the persistent storage location of the new node, + you must first map the recovery path of each node to the location of the old persistence files. + To do this, run the `node recovery_path set` command in rladmin. + The persistence files for each database are located in the persistent storage path of the nodes from the old cluster, usually under `/var/opt/redislabs/persist/redis`. + {{< /note >}} + +1. To verify that the recovered databases are now active, run: + + ```sh + rladmin status + ``` + +After the databases are recovered, make sure your Redis clients can successfully connect to the databases. + +## Configure automatic recovery + +If you enable the automatic recovery cluster policy, Redis Enterprise tries to quickly recover as much data as possible from before the disaster. + +To enable automatic recovery, [update the cluster policy]({{< relref "/operate/rs/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) using the REST API: + +```sh +PUT /v1/cluster/policy +{ + "auto_recovery": true +} +``` + +Redis Enterprise tries to recover databases from the best existing persistence files. 
If a persistence file isn't available, which can happen if its host node is down, the automatic recovery process waits for it to become available. + +For each database, you can set the `recovery_wait_time` to define how many seconds the database waits for a persistence file to become available before recovery. After the wait time elapses, the recovery process continues, which can result in partial or full data loss. The default value is `-1`, which means to wait forever. Short wait times can increase the risk of potential data loss. + +To change `recovery_wait_time` for an existing database using the REST API: + +```sh +PUT /v1/bdbs/ +{ + "recovery_wait_time": 3600 +} +``` + +You can also set `recovery_wait_time` when you [create a database]({{< relref "/operate/rs/references/rest-api/requests/bdbs#post-bdbs-v1" >}}) using the REST API. +--- +Title: Database replication +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linktitle: Replication +weight: 40 +--- +Database replication helps ensure high availability. +When replication is enabled, your dataset is replicated to a replica shard, +which is constantly synchronized with the primary shard. If the primary +shard fails, an automatic failover happens and the replica shard is promoted. That is, it becomes the new primary shard. + +When the old primary shard recovers, it becomes +the replica shard of the new primary shard. This auto-failover mechanism +guarantees that data is served with minimal interruption. + +You can tune your high availability configuration with: + +- [Rack/Zone +Awareness]({{< relref "/operate/rs/clusters/configure/rack-zone-awareness.md" >}}) - When rack-zone awareness is used additional logic ensures that master and replica shards never share the same rack, thus ensuring availability even under loss of an entire rack. +- [High Availability for Replica Shards]({{< relref "/operate/rs/databases/configure/replica-ha.md" >}}) - When high availability +for replica shards is used, the replica shard is automatically migrated on node failover to maintain high availability. + +{{< warning >}} +Enabling replication has implications for the total database size, +as explained in [Database memory limits]({{< relref "/operate/rs/databases/memory-performance/memory-limit.md" >}}). +{{< /warning >}} + +## Auto Tiering replication considerations + +We recommend that you set the sequential replication feature using +`rladmin`. This is due to the potential for relatively slow replication +times that can occur with Auto Tiering enabled databases. In some +cases, if sequential replication is not set up, you may run out of memory. + +While it does not cause data loss on the +primary shards, the replication to replica shards may not succeed as long +as there is high write-rate traffic on the primary and multiple +replications at the same time. + +The following `rladmin` command sets the number of primary shards eligible to +be replicated from the same cluster node, as well as the number of replica +shards on the same cluster node that can run the replication process at +any given time. + +The recommended sequential replication configuration is two, i.e.: + +```sh +rladmin tune cluster max_redis_forks 1 max_slave_full_syncs 1 +``` + +{{< note >}} +This means that at any given time, +only one primary and one replica can be part of a full sync replication process. 
+{{< /note >}} + +## Database replication backlog + +Redis databases that use [replication for high availability]({{< relref "/operate/rs/databases/durability-ha/replication.md" >}}) maintain a replication backlog (per shard) to synchronize the primary and replica shards of a database. +By default, the replication backlog is set to one percent (1%) of the database size divided by the database number of shards and ranges between 1MB to 250MB per shard. +Use the [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) and the [`crdb-cli`]({{< relref "/operate/rs/references/cli-utilities/crdb-cli" >}}) utilities to control the size of the replication backlog. You can set it to `auto` or set a specific size. + +The syntax varies between regular and Active-Active databases. + +For a regular Redis database: +```text +rladmin tune db repl_backlog +``` + +For an Active-Active database: +```text +crdb-cli crdb update --crdb-guid --default-db-config "{\"repl_backlog_size\": }" +``` + +### Active-Active replication backlog + +In addition to the database replication backlog, Active-Active databases maintain a backlog (per shard) to synchronize the database instances between clusters. +By default, the Active-Active replication backlog is set to one percent (1%) of the database size divided by the database number of shards, and ranges between 1MB to 250MB per shard. +Use the [`crdb-cli`]({{< relref "/operate/rs/references/cli-utilities/crdb-cli" >}}) utility to control the size of the CRDT replication backlog. You can set it to `auto` or set a specific size: + +```text +crdb-cli crdb update --crdb-guid --default-db-config "{\"crdt_repl_backlog_size\": }" +``` + +**For Redis Software versions earlier than 6.0.20:** +The replication backlog and the CRDT replication backlog defaults are set to 1MB and cannot be set dynamically with 'auto' mode. +To control the size of the replication log, use [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) to tune the local database instance in each cluster. +```text +rladmin tune db repl_backlog +```--- +Title: Discovery service +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linktitle: Discovery service +weight: 30 +--- +The Discovery Service provides an IP-based connection management service +used when connecting to Redis Enterprise Software databases. When used +in conjunction with Redis Enterprise Software's other high availability +features, the Discovery Service assists an application scope with +topology changes such as adding, removing of nodes, node failovers and +so on. It does this by providing your application with the ability to +easily discover which node hosts the database endpoint. The API used for +discovery service is compliant with the Redis Sentinel API. + +Discovery Service is an alternative for applications that do not want to +depend on DNS name resolution for their connectivity. Discovery Service +and DNS based connectivity are not mutually exclusive. They can be used +side by side in a given cluster where some clients can use Discovery +Service based connection while others can use DNS name resolution when +connecting to databases. + +## How discovery service works + +The Discovery Service is available for querying on each node of the +cluster, listening on port 8001. 
To employ it, your application utilizes +a [Redis Sentinel enabled client +library]({{< relref "/operate/rs/databases/connect/supported-clients-browsers.md" >}}) +to connect to the Discovery Service and request the endpoint for the +given database. The Discovery Service replies with the database's +endpoint for that database. In case of a node failure, the Discovery +Service is updated by the cluster manager with the new endpoint and +clients unable to connect to the database endpoint due to the failover, +can re-query the discovery service for the new endpoint for the +database. + +The Discovery Service can return either the internal or external +endpoint for a database. If you query the discovery service for the +endpoint of a database named "db1", the Discovery Service returns +the external endpoint information by default. If only an internal +endpoint exists with no external endpoint the default behavior is to +return the internal endpoint. The "\@internal" is added to the end of +the database name to explicitly ask for the internal endpoint. to query +the internal endpoint explicitly with database name "db1", you can pass +in the database name as "db1\@internal". + +If you'd like to examine the metadata returned from Redis Enterprise +Software Discovery Service you can connect to port 8001 with redis-cli +utility and execute "SENTINEL masters". Following is a sample output +from one of the nodes of a Redis Enterprise Software cluster: + +```sh +$ ./redis-cli -p 8001 +127.0.0.1:8001> SENTINEL masters +1) 1) "name" +2) "db1@internal" +3) "ip" +4) "10.0.0.45" +5) "port" +6) "12000" +7) "flags" +8) "master,disconnected" +9) "num-other-sentinels" +10) "0" +2) 1) "name" +2) "db1" +3) "ip" +4) "10.0.0.45" +5) "port" +6) "12000" +7) "flags" +8) "master,disconnected" +9) "num-other-sentinels" +10) "0" +``` + +It is important to note that, the Discovery Service is not a full +implementation of the [Redis Sentinel +protocol]({{< relref "/operate/oss_and_stack/management/sentinel" >}}). There are aspects of the +protocol that are not applicable or would be duplication with existing +technology in Redis Enterprise Software. The Discovery Service +implements only the parts required to provide applications with easy +High Availability, be compatible with the protocol, and not rely on DNS +to derive which node in the cluster to communicate with. + +{{< note >}} +To use Redis Sentinel, every database name must be unique across the cluster. +{{< /note >}} + +## Redis client support + +We recommend these clients that are tested for use with the [Discovery Service]({{< relref "/operate/rs/databases/durability-ha/discovery-service.md" >}}) that uses the Redis Sentinel API: + +{{< embed-md "discovery-clients.md" >}} + +{{< note >}} +Redis Sentinel API can return endpoints for both master and replica +endpoints. +Discovery Service only supports master endpoints and does not +support returning replica endpoints for a database. +{{< /note >}} +--- +Title: Database clustering +alwaysopen: false +categories: +- docs +- operate +- rs +description: Clustering to allow customers to spread the load of a Redis process over + multiple cores and the RAM of multiple servers. +linktitle: Clustering +weight: 10 +--- +Source available [Redis](https://redislabs.com/redis-features/redis) is a single-threaded process +to provide speed and simplicity. +A single Redis process is bound by the CPU core that it is running on and available memory on the server. 
+ +Redis Enterprise Software supports database clustering to allow customers +to spread the load of a Redis process over multiple cores and the RAM of multiple servers. +A database cluster is a set of Redis processes where each process manages a subset of the database keyspace. + +The keyspace of a Redis Enterprise cluster is partitioned into database shards. +Each shard resides on a single node and is managed by that node. +Each node in a Redis database cluster can manage multiple shards. +The key space in the shards is divided into hash slots. +The slot of a key is determined by a hash of the key name or part of the key name. + +Database clustering is transparent to the Redis client that connects to the database. +The Redis client accesses the database through a single endpoint that automatically routes all operations to the relevant shards. +You can connect an application to a single Redis process or a clustered database without any difference in the application logic. + +## Terminology + +In clustering, these terms are commonly used: + +- Tag or Hash Tag - A part of the key that is used in the hash calculation. +- Slot or Hash Slot - The result of the hash calculation. +- Shard - Redis process that is part of the Redis clustered database. + +## When to use clustering (sharding) + +Clustering is an efficient way of scaling Redis that should be used when: + +- The dataset is large enough to benefit from using the RAM resources of more than one node. + When a dataset is more than 25 GB (50 GB for RoF), we recommend that you enable clustering to create multiple shards of the database + and spread the data requests across nodes. +- The operations performed against the database are CPU-intensive, resulting in performance degradation. + By having multiple CPU cores manage the database's shards, the load of operations is distributed among them. + +## Number of shards + +When enabling database clustering, you can set the number of database +shards. The minimum number of shards per database is 2 and the maximum +depends on the subscription you purchased. + +After you enable database clustering and set the number of shards, you cannot deactivate database clustering or reduce the number of +shards. You can only increase the number of shards by a multiple of the +current number of shards. For example, if the current number of shards +is 3, you can increase the number of shards to 6, 9, or 12. + +## Supported hashing policies + +### Standard hashing policy + +When using the standard hashing policy, a clustered Redis Enterprise database behaves similarly to a standard [Redis Open Source cluster]({{< relref "/operate/oss_and_stack/reference/cluster-spec" >}}#hash-tags), except when using multiple hash tags in a key's name. We recommend using only a single hash tag in a key name for hashing in Redis Enterprise. + +- **Keys with a hash tag**: a key's hash tag is any substring between + `{` and `}` in the key's name. When a key's name + includes the pattern `{...}`, the hash tag is used as input for the + hashing function. + + For example, the following key names have the same + hash tag and map to the same hash slot: `foo{bar}`, + `{bar}baz`, and `foo{bar}baz`. + +- **Keys without a hash tag**: when a key does not contain the `{...}` + pattern, the entire key's name is used for hashing. + +You can use a hash tag to store related keys in the same hash +slot so multi-key operations can run on these keys. 
If you do not use a hash tag in the key's name, the keys are distributed evenly across the keyspace's shards. +If your application does not perform multi-key operations, you do not +need to use hash tags. + +### Custom hashing policy + +You can configure a custom hashing policy for a clustered database. A +custom hashing policy is required when different keys need to be kept +together on the same shard to allow multi-key operations. The custom +hashing policy is provided through a set of Perl Compatible Regular +Expressions (PCRE) rules that describe the dataset's key name patterns. + +To configure a custom hashing policy, enter the regular expression +(RegEx) rules that identify the substring in the key's name - hash tag +-- on which hashing is done. The hash tag is denoted in the +RegEx by the use of the \`tag\` named subpattern. Different keys that +have the same hash tag are stored and managed in the same slot. + +After you enable the custom hashing policy, the following default RegEx +rules are implemented. Update these rules to fit your specific logic: + +| RegEx Rule | Description | +| ------ | ------ | +| .\*{(?\.\*)}.\* | Hashing is done on the substring between the curly braces. | +| (?\.\*) | The entire key's name is used for hashing. | + +You can modify existing rules, add new ones, delete rules, or change +their order to suit your application's requirements. + +### Custom hashing policy notes and limitations + +1. You can define up to 32 RegEx rules, each up to 256 characters. +2. RegEx rules are evaluated in order, and the first rule matched + is used. Therefore, you should place common key name patterns at the + beginning of the rule list. +3. Key names that do not match any of the RegEx rules trigger an + error. +4. The '.\*(?\)' RegEx rule forces keys into a single slot + because the hash key is always empty. Therefore, when used, + this should be the last, catch-all rule. +5. The following flag is enabled in the regular expression parser: + PCRE_ANCHORED: the pattern is constrained to match only at the + start of the string being searched. + +## Change the hashing policy + +The hashing policy of a clustered database can be changed. However, +most hashing policy changes trigger the deletion (FLUSHDB) of the +data before they can be applied. + +Examples of such changes include: + +- Changing the hashing policy from standard to custom or conversely, + custom to standard. +- Changing the order of custom hashing policy rules. +- Adding new rules in the custom hashing policy. +- Deleting rules from the custom hashing policy. + +{{< note >}} +The recommended workaround for updates that are not enabled, +or require flushing the database, +is to back up the database and import the data to a newly configured database. +{{< /note >}} + +## Multi-key operations {#multikey-operations} + +Operations on multiple keys in a clustered database are supported with +the following limitations: + +- **Multi-key commands**: Redis offers several commands that accept + multiple keys as arguments. In a clustered database, most multi-key + commands are not allowed across slots. The following multi-key + commands **are allowed** across slots: DEL, MSET, MGET, EXISTS, UNLINK, TOUCH + + In Active-Active databases, multi-key write commands (DEL, MSET, UNLINK) can only be run on keys that are in the same slot. However, the following multi-key commands **are allowed** across slots in Active-Active databases: MGET, EXISTS, and TOUCH. 
+ + Commands that affect all keys or keys that match a specified pattern are allowed + in a clustered database, for example: FLUSHDB, FLUSHALL, KEYS + + {{< note >}} +When using these commands in a sharded setup, +the command is distributed across multiple shards +and the responses from all shards are combined into a single response. + {{< /note >}} + +- **Geo commands**: For the [GEORADIUS]({{< relref "/commands/georadius" >}}) and + [GEORADIUSBYMEMBER]({{< relref "/commands/georadiusbymember" >}}) commands, the + STORE and STOREDIST options can only be used when all affected keys + reside in the same slot. +- **Transactions**: All operations within a WATCH / MULTI / EXEC block + should be performed on keys that are mapped to the same slot. +- **Lua scripts**: All keys used by a Lua script must be mapped to the same + slot and must be provided as arguments to the EVAL / EVALSHA commands + (as per the Redis specification). Using keys in a Lua script that + were not provided as arguments might violate the sharding concept + but do not result in the proper violation error being returned. +- **Renaming/Copy keys**: The use of the RENAME / RENAMENX / COPY commands is + allowed only when the key's original and new values are mapped to + the same slot. +--- +Title: Durability and high availability +alwaysopen: false +categories: +- docs +- operate +- rs +description: Overview of Redis Enterprise durability features such as replication, + clustering, and rack-zone awareness. +hideListLinks: true +linktitle: Durability and availability +weight: 60 +--- +Redis Enterprise Software comes with several features that make your data more durable and accessible. The following features can help protect your data in cases of failures or outages and help keep your data available when you need it. + +## Replication + +When you [replicate your database]({{}}), each database instance (primary shard) is copied to a replica shard. When a primary shard fails, the cluster automatically promotes a replica shard to primary. + +## Clustering + +[Clustering]({{}}) (or sharding) breaks your database into individual instances (shards) and spreads them across several nodes. Clustering lets you add resources to your cluster to scale your database and prevents node failures from causing availability loss. + +## Database persistence + +[Database persistence]({{}}) gives your database durability against process or server failures by saving data to disk at set intervals. + +## Active-Active geo-distributed replication + +[Active-Active Redis Enterprise databases]({{}}) allow reading and writing to the same dataset from multiple clusters in different geographic locations. This increases the durability of your database by reducing the likelihood of data or availability loss. It also reduces data access latency by serving requests from the nearest cluster. + +## Rack-zone awareness + +[Rack-zone awareness]({{}}) maps each node in your Redis Enterprise cluster to a physical rack or logical zone. The cluster uses this information to distribute primary shards and their replica shards in different racks or zones. This ensures data availability if a rack or zone fails. + +## Discovery service + +The [discovery service]({{}}) provides an IP-based connection management service used when connecting to Redis Enterprise Software databases. It lets your application discover which node hosts the database endpoint. 
The discovery service API complies with the [Redis Sentinel API]({{< relref "/operate/oss_and_stack/management/sentinel" >}}#sentinel-api).--- +Title: Consistency during replication +alwaysopen: false +categories: +- docs +- operate +- rs +description: Explains the order write operations are communicated from app to proxy to shards for both non-blocking Redis write operations and blocking write operations on replication. +linkTitle: Consistency +weight: 20 +--- +Redis Enterprise Software comes with the ability to replicate data +to another database instance for high availability and persist in-memory data on +disk permanently for durability. With the [`WAIT`]({{}}) command, you can +control the consistency and durability guarantees for the replicated and +persisted database. + +## Non-blocking Redis write operation + +Any updates that are issued to the database are typically performed with the following flow: + +1. The application issues a write. +2. The proxy communicates with the correct primary (also known as master) shard in the system that contains the given key. +3. The shard writes the data and sends an acknowledgment to the proxy. +4. The proxy sends the acknowledgment back to the application. +5. The write is communicated from the primary shard to the replica. +6. The replica acknowledges the write back to the primary shard. +7. The write to a replica is persisted to disk. +8. The write is acknowledged within the replica. + +{{< image filename="/images/rs/weak-consistency.png" >}} + +## Blocking write operation on replication + +With the [`WAIT`]({{}}) or [`WAITAOF`]({{}}) commands, applications can ask to wait for +acknowledgments only after replication or persistence is confirmed on +the replica. The flow of a write operation with `WAIT` or `WAITAOF` is: + +1. The application issues a write. +2. The proxy communicates with the correct primary shard in the system that contains the given key. +3. Replication communicates the update to the replica shard. +4. If using `WAITAOF` and the AOF every write setting, the replica persists the update to disk before sending the acknowledgment. +5. The acknowledgment is sent back from the replica all the way to the proxy with steps 5 to 8. + +The application only gets the acknowledgment from the write after durability is achieved with replication to the replica for `WAIT` or `WAITAOF` and to the persistent storage for `WAITAOF` only. + +{{< image filename="/images/rs/strong-consistency.png" >}} + +The `WAIT` command always returns the number of replicas that acknowledged the write commands sent by the current client before the `WAIT` command, both in the case where the specified number of replicas are reached, or when the timeout is reached. In Redis Enterprise Software, the number of replicas for HA enabled databases is always 1. + +See the [`WAITAOF`]({{}}) command for details for enhanced data safety and durability capabilities introduced with Redis 7.2. +--- +Title: Flush database data +alwaysopen: false +categories: +- docs +- operate +- rs +description: To delete the data in a database without deleting the database, you can + use Redis CLI to flush it from the database. You can also use Redis CLI, the admin + console, and the Redis Software REST API to flush data from Active-Active databases. +linkTitle: Flush database +weight: 40 +--- +To delete the data in a database without deleting the database configuration, +you can flush the data from the database. + +You can use the Cluster Manager UI to flush data from Active-Active databases. 
+ +{{< warning title="Data Loss Warning" >}} +The flush command deletes ALL in-memory and persistence data in the database. +We recommend that you [back up your database]({{< relref "/operate/rs/databases/import-export/schedule-backups.md" >}}) before you flush the data. +{{< /warning >}} + +## Flush data from a database + +From the command line, you can flush a database with the redis-cli command or with your favorite Redis client. + +To flush data from a database with the redis-cli, run: + +```sh +redis-cli -h -p -a flushall +``` + +Example: + +```sh +redis-cli -h redis-12345.cluster.local -p 9443 -a xyz flushall +``` + +{{< note >}} +Port 9443 is the default [port configuration]({{< relref "/operate/rs/networking/port-configurations#https://docs.redis.com/latest/rs/networking/port-configurations#ports-and-port-ranges-used-by-redis-enterprise-software" >}}). +{{< /note >}} + + +## Flush data from an Active-Active database + +When you flush an Active-Active database (formerly known as CRDB), all of the replicas flush their data at the same time. + +To flush data from an Active-Active database, use one of the following methods: + +- Cluster Manager UI + + 1. On the **Databases** screen, select the database from the list, then click **Configuration**. + + 1. Click {{< image filename="/images/rs/buttons/button-toggle-actions-vertical.png#no-click" alt="Toggle actions button" width="22px" class="inline" >}} to open a list of additional actions. + + 1. Select **Flush database**. + + 1. Enter the name of the Active-Active database to confirm that you want to flush the data. + + 1. Click **Flush**. + +- Command line + + 1. To find the ID of the Active-Active database, run: + + ```sh + crdb-cli crdb list + ``` + + For example: + + ```sh + $ crdb-cli crdb list + CRDB-GUID NAME REPL-ID CLUSTER-FQDN + a16fe643-4a7b-4380-a5b2-96109d2e8bca crdb1 1 cluster1.local + a16fe643-4a7b-4380-a5b2-96109d2e8bca crdb1 2 cluster2.local + a16fe643-4a7b-4380-a5b2-96109d2e8bca crdb1 3 cluster3.local + ``` + + 1. To flush the Active-Active database, run: + + ```sh + crdb-cli crdb flush --crdb-guid + ``` + + The command output contains the task ID of the flush task, for example: + + ```sh + $ crdb-cli crdb flush --crdb-guid a16fe643-4a7b-4380-a5b2-96109d2e8bca + Task 63239280-d060-4639-9bba-fc6a242c19fc created + ---> Status changed: queued -> started + ``` + + 1. To check the status of the flush task, run: + + ```sh + crdb-cli task status --task-id + ``` + + For example: + + ```sh + $ crdb-cli task status --task-id 63239280-d060-4639-9bba-fc6a242c19fc + Task-ID: 63239280-d060-4639-9bba-fc6a242c19fc + CRDB-GUID: - + Status: finished + ``` + +- REST API + + 1. To find the ID of the Active-Active database, use [`GET /v1/crdbs`]({{< relref "/operate/rs/references/rest-api/requests/crdbs#get-all-crdbs" >}}): + + ```sh + GET https://[host][:port]/v1/crdbs + ``` + + 1. To flush the Active-Active database, use [`PUT /v1/crdbs/{guid}/flush`]({{< relref "/operate/rs/references/rest-api/requests/crdbs/flush#put-crdbs-flush" >}}): + + ```sh + PUT https://[host][:port]/v1/crdbs//flush + ``` + + The command output contains the task ID of the flush task. + + 1. 
To check the status of the flush task, use [`GET /v1/crdb_tasks`]({{< relref "/operate/rs/references/rest-api/requests/crdb_tasks#get-crdb_task" >}}): + + ```sh + GET https://[host][:port]/v1/crdb_tasks/ + ``` +--- +Title: Migrate a database to Active-Active +alwaysopen: false +categories: +- docs +- operate +- rs +description: Use Replica Of to migrate your database to an Active-Active database. +linktitle: Migrate to Active-Active +weight: $weight +--- + +If you have data in a single-region Redis Enterprise Software database that you want to migrate to an [Active-Active database]({{< relref "/operate/rs/databases/active-active" >}}), +you'll need to create a new Active-Active database and migrate the data into the new database as a [Replica Of]({{< relref "/operate/rs/databases/import-export/replica-of/" >}}) the existing database. +This process will gradually populate the data in the Active-Active database. + +Before data migration starts, all data is flushed from the Active-Active database. +The data is migrated to the Active-Active instance where you configured migration, and the data from that instance is copied to the other Active-Active instances. + +When data migration is finished, turn off migration and connect your applications to the Active-Active database. + +{{Active-Active data migration process}} + +## Prerequisites + +- During the migration, any applications that connect to the Active-Active database must be **read-only** to ensure the dataset is identical to the source database during the migration process. However, you can continue to write to the source database during the migration process. + +- If you used the mDNS protocol for the cluster name (FQDN), +the [client mDNS prerequisites]({{< relref "/operate/rs/networking/mdns" >}}) must be met in order to communicate with other clusters. + +## Migrate from a Redis Enterprise cluster + +You can migrate a Redis Enterprise database from the [same cluster](#migrate-from-the-same-cluster) or a [different cluster](#migrate-from-a-different-cluster). + +### Migrate from the same cluster + +To migrate a database to Active-Active in the same Redis Enterprise cluster: + +1. Create a new Active-Active database. For prerequisites and detailed instructions, see [Create an Active-Active geo-replicated database]({{< relref "/operate/rs/databases/active-active/create" >}}). + +1. After the Active-Active database is active, click **Edit** on the **Configuration** screen. + +1. Expand the **Migrate to Active-Active** section: + + {{Migrate to Active-Active section.}} + +1. Click **+ Add source database**. + +1. In the **Migrate to Active-Active** dialog, select **Current cluster**: + + {{Migrate to Active-Active dialog with Current cluster tab selected.}} + +1. Select the source database from the list. + +1. Click **Add source**. + +1. Click **Save**. + +### Migrate from a different cluster + +{{< note >}} +For a source database on a different Redis Enterprise Software cluster, +you can [compress the replication data]({{< relref "/operate/rs/databases/import-export/replica-of#data-compression-for-replica-of" >}}) to save bandwidth. +{{< /note >}} + +To migrate a database to Active-Active in different Redis Enterprise clusters: + +1. Sign in to the Cluster Manager UI of the cluster hosting the source database. + + 1. In **Databases**, select the source database and then select the **Configuration** tab. + + 1. In the **Replica Of** section, select **Use this database as a source for another database**. + + 1. 
Copy the Replica Of source URL. + + {{Copy the Replica Of source URL from the Connection link to destination dialog.}} + + To change the internal password, select **Regenerate password**. + + If you regenerate the password, replication to existing destinations fails until their credentials are updated with the new password. + +1. Sign in to the Cluster Manager UI of the destination database’s cluster. + +1. Create a new Active-Active database. For prerequisites and detailed instructions, see [Create an Active-Active geo-replicated database]({{< relref "/operate/rs/databases/active-active/create" >}}). + +1. After the Active-Active database is active, click **Edit** on the **Configuration** screen. + +1. Expand the **Migrate to Active-Active** section: + + {{Migrate to Active-Active section.}} + +1. Click **+ Add source database**. + +1. In the **Migrate to Active-Active** dialog, select **External**: + + {{Migrate to Active-Active dialog with External tab selected.}} + +1. For **Source database URL**, enter the Replica Of source URL you copied in step 1. + +1. Click **Add source**. + +1. Click **Save**. + +## Migrate from Redis Open Source + +To migrate a Redis Open Source database to Active-Active: + +1. Create a new Active-Active database. For prerequisites and detailed instructions, see [Create an Active-Active geo-replicated database]({{< relref "/operate/rs/databases/active-active/create" >}}). + +1. After the Active-Active database is active, click **Edit** on the **Configuration** screen. + +1. Expand the **Migrate to Active-Active** section: + + {{Migrate to Active-Active section.}} + +1. Click **+ Add source database**. + +1. In the **Migrate to Active-Active** dialog, select **External**: + + {{Migrate to Active-Active dialog with External tab selected.}} + +1. Enter the **Source database URL**: + + - If the database has a password: + + ```sh + redis://:@: + ``` + + Where the password is the Redis password represented with URL encoding escape characters. + + - If the database does not have a password: + + ```sh + redis://: + ``` + +1. Click **Add source**. + +1. Click **Save**. + +## Stop sync after migration + +1. Wait until the migration is complete, indicated by the **Status** _Synced_. + + {{}} +Migration can take minutes to hours to complete depending on the dataset size and network quality. + {{}} + +1. On the Active-Active database's **Configuration** screen, click **Edit**. + +1. In the **Migrate to Active-Active** section, click **Stop sync**: + + {{The Migrate to Active-Active section shows the Active-Active database is synced with the source database.}} + +1. In the **Stop synchronization** dialog, click **Stop** to proceed. + +1. Redirect client connections to the Active-Active database after **Status** changes to _Sync stopped_: + + {{The Migrate to Active-Active section shows the Active-Active database stopped syncing with the source database.}} +--- +Title: Schedule periodic backups +alwaysopen: false +categories: +- docs +- operate +- rs +description: Schedule backups of your databases to make sure you always have valid backups. +linktitle: Schedule backups +weight: 40 +--- + +Periodic backups provide a way to restore data with minimal data loss. With Redis Enterprise Software, you can schedule periodic backups to occur once a day (every 24 hours), twice a day (every twelve hours), every four hours, or every hour. + +As of v6.2.8, you can specify the start time in UTC for 24-hour or 12-hour backups. 
+ +To make an on-demand backup, [export your data]({{< relref "/operate/rs/databases/import-export/export-data.md" >}}). + +You can schedule backups to a variety of locations, including: + +- FTP server +- SFTP server +- Local mount point +- Amazon Simple Storage Service (S3) +- Azure Blob Storage +- Google Cloud Storage + +The backup process creates compressed (.gz) RDB files that you can [import into a database]({{< relref "/operate/rs/databases/import-export/import-data.md" >}}). If the database name is longer than 30 characters, only the first 30 are used in backup file names. + +When you back up a database configured for database clustering, +Redis Enterprise Software creates a backup file for each shard in the configuration. All backup files are copied to the storage location. + +{{< note >}} + +- Make sure that you have enough space available in your storage location. + If there is not enough space in the backup location, the backup fails. +- The backup configuration only applies to the database it is configured on. +- To limit the parallel backup for shards, set both [`tune cluster max_simultaneous_backups`]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}) and [`tune node max_redis_forks`]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-node" >}}). `max_simultaneous_backups` is set to 4 by default. + +{{< /note >}} + +## Schedule periodic backups + +Before scheduling periodic backups, verify that your storage location exists and is available to the user running Redis Enterprise Software (`redislabs` by default). You should verify that: + +- Permissions are set correctly. +- The user running Redis Enterprise Software is authorized to access the storage location. +- The authorization credentials work. + +Storage location access is verified before periodic backups are scheduled. + +To schedule periodic backups for a database: + +1. Sign in to the Redis Enterprise Software Cluster Manager UI using admin credentials. + +1. From the **Databases** list, select the database, then select **Configuration**. + +1. Select the **Edit** button. + +1. Expand the **Durability** section. + +1. In the **Scheduled backup** section, click **Add backup path** to open the **Path configuration** dialog. + +1. Select the tab that corresponds to your storage location type, enter the location details, and select **Done**. + + See [Supported storage locations](#supported-storage-locations) for more information about each storage location type. + +1. Set the backup **Interval** and **Starting time**. + + | Setting | Description | + |--------------|-------------| + | **Interval** | Specifies the frequency of the backup; that is, the time between each backup snapshot.

Supported values include _Every 24 hours_, _Every 12 hours_, _Every 4 hours_, and _Every hour_. | + | **Starting time** | _v6.2.8 or later:_ Specifies the start time in UTC for the backup; available when **Interval** is set to _Every 24 hours_ or _Every 12 hours_.

If not specified, defaults to a time selected by Redis Enterprise Software. | + +7. Select **Save**. + +Access to the storage location is verified when you apply your updates. This means the location, credentials, and other details must exist and function before you can enable periodic backups. + +## Default backup start time + +If you do _not_ specify a start time for twenty-four or twelve hour backups, Redis Enterprise Software chooses a random starting time in UTC for you. + +This choice assumes that your database is deployed to a multi-tenant cluster containing multiple databases. This means that default start times are staggered (offset) to ensure availability. This is done by calculating a random offset which specifies a number of seconds added to the start time. + +Here's how it works: + +- Assume you're enabling the backup at 4:00 pm (1600 hours). +- You choose to back up your database every 12 hours. +- Because you didn't set a start time, the cluster randomly chooses an offset of 4,320 seconds (or 72 minutes). + +This means your first periodic backup occurs 72 minutes after the time you enabled periodic backups (4:00 pm + 72 minutes). Backups repeat every twelve hours at roughly same time. + +The backup time is imprecise because they're started by a trigger process that runs every five minutes. When the process wakes, it compares the current time to the scheduled backup time. If that time has passed, it triggers a backup. + +If the previous backup fails, the trigger process retries the backup until it succeeds. + +In addition, throttling and resource limits also affect backup times. + +For help with specific backup issues, [contact support](https://redis.com/company/support/). + + +## Supported storage locations {#supported-storage-locations} + +Database backups can be saved to a local mount point, transferred to [a URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier) using FTP/SFTP, or stored on cloud provider storage. + +When saved to a local mount point or a cloud provider, backup locations need to be available to [the group and user]({{< relref "/operate/rs/installing-upgrading/install/customize-user-and-group.md" >}}) running Redis Enterprise Software, `redislabs:redislabs` by default. + +Redis Enterprise Software needs the ability to view permissions and update objects in the storage location. Implementation details vary according to the provider and your configuration. To learn more, consult the provider's documentation. + +The following sections provide general guidelines. Because provider features change frequently, use your provider's documentation for the latest info. + +### FTP server + +Before enabling backups to an FTP server, verify that: + +- Your Redis Enterprise cluster can connect and authenticate to the FTP server. +- The user specified in the FTP server location has read and write privileges. + +To store your backups on an FTP server, set its **Backup Path** using the following syntax: + +`ftp://[username]:[password]@[host]:[port]/[path]/` + +Where: + +- *protocol*: the server's protocol, can be either `ftp` or `ftps`. +- *username*: your username, if needed. +- *password*: your password, if needed. +- *hostname*: the hostname or IP address of the server. +- *port*: the port number of the server, if needed. +- *path*: the backup path, if needed. + +Example: `ftp://username:password@10.1.1.1/home/backups/` + +The user account needs permission to write files to the server. 
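If you want to confirm write access before scheduling backups, you can upload a small test file with any FTP client. The following is a minimal sketch, assuming `curl` is available on the node; the credentials, host, and path are the same placeholder values as the example above:

```sh
# Upload a throwaway file to the backup path to confirm write permissions.
# Replace the placeholder credentials, host, and path with your own values.
echo "backup path test" > /tmp/backup-path-test.txt
curl -T /tmp/backup-path-test.txt "ftp://username:password@10.1.1.1/home/backups/"
```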
+ +### SFTP server + +Before enabling backups to an SFTP server, make sure that: + +- Your Redis Enterprise cluster can connect and authenticate to the SFTP server. +- The user specified in the SFTP server location has read and write privileges. +- The SSH private keys are specified correctly. You can use the key generated by the cluster or specify a custom key. + + To use the cluster auto generated key: + + 1. Go to **Cluster > Security > Certificates**. + + 1. Expand **Cluster SSH Public Key**. + + 1. Download or copy the cluster SSH public key to the appropriate location on the SFTP server. + + Use the server documentation to determine the appropriate location for the SSH public key. + +To backup to an SFTP server, enter the SFTP server location in the format: + +```sh +sftp://user:password@host<:custom_port>/path/ +``` + +For example: `sftp://username:password@10.1.1.1/home/backups/` + +### Local mount point + +Before enabling periodic backups to a local mount point, verify that: + +- The node can connect to the destination server, the one hosting the mount point. +- The `redislabs:redislabs` user has read and write privileges on the local mount point +and on the destination server. +- The backup location has enough disk space for your backup files. Backup files +are saved with filenames that include the timestamp, which means that earlier backups are not overwritten. + +To back up to a local mount point: + +1. On each node in the cluster, create the mount point: + 1. Connect to a shell running on Redis Enterprise Software server hosting the node. + 1. Mount the remote storage to a local mount point. + + For example: + + ```sh + sudo mount -t nfs 192.168.10.204:/DataVolume/Public /mnt/Public + ``` + +1. In the path for the backup location, enter the mount point. + + For example: `/mnt/Public` + +1. Verify that the user running Redis Enterprise Software has permissions to access and update files in the mount location. + +### AWS Simple Storage Service + +To store backups in an Amazon Web Services (AWS) Simple Storage Service (S3) [bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html): + +1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/). + +1. [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html) if you do not already have one. + +1. [Create an IAM User](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) with permission to add objects to the bucket. + +1. [Create an access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) for that user if you do not already have one. + +1. In the Redis Enterprise Software Cluster Manager UI, when you enter the backup location details: + + - Select the **AWS S3** tab on the **Path configuration** dialog. + + - In the **Path** field, enter the path of your bucket. + + - In the **Access Key ID** field, enter the access key ID. + + - In the **Secret Access Key** field, enter the secret access key. + +You can also connect to a storage service that uses the S3 protocol but is not hosted by Amazon AWS. The storage service must have a valid SSL certificate. + +To connect to an S3-compatible storage location: + +1. Configure the S3 URL with [`rladmin cluster config`]({{}}): + + ```sh + rladmin cluster config s3_url + ``` + + Replace `` with the hostname or IP address of the S3-compatible storage location. + +1. 
Configure the S3 CA certificate: + + ```sh + rladmin cluster config s3_ca_cert <filepath> + ``` + + Replace `<filepath>` with the location of the S3 CA certificate `ca.pem`. + +### Google Cloud Storage + +For [Google Cloud](https://developers.google.com/console/) subscriptions, store your backups in a Google Cloud Storage bucket: + +1. Sign in to the Google Cloud Platform console. + +1. [Create a JSON service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating) if you do not already have one. + +1. [Create a bucket](https://cloud.google.com/storage/docs/creating-buckets#create_a_new_bucket) if you do not already have one. + +1. [Add a principal](https://cloud.google.com/storage/docs/access-control/using-iam-permissions#bucket-add) to your bucket: + + - In the **New principals** field, add the `client_email` from the service account key. + + - Select "Storage Legacy Bucket Writer" from the **Role** list. + +1. In the Redis Enterprise Software Cluster Manager UI, when you enter the backup location details: + + - Select the **Google Cloud Storage** tab on the **Path configuration** dialog. + + - In the **Path** field, enter the path of your bucket. + + - In the **Client ID** field, enter the `client_id` from the service account key. + + - In the **Client Email** field, enter the `client_email` from the service account key. + + - In the **Private Key ID** field, enter the `private_key_id` from the service account key. + + - In the **Private Key** field, enter the `private_key` from the service account key. + Replace `\n` with new lines. + +### Azure Blob Storage + +To store your backups in Microsoft Azure Blob Storage, sign in to the Azure portal and then: + +1. [Create an Azure Storage account](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create) if you do not already have one. + +1. [Create a container](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container) if you do not already have one. + +1. [Manage storage account access keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage) to find the storage account name and account keys. + +1. In the Redis Enterprise Software Cluster Manager UI, when you enter the backup location details: + + - Select the **Azure Blob Storage** tab on the **Path configuration** dialog. + + - In the **Path** field, enter the path of your bucket. + + - In the **Azure Account Name** field, enter your storage account name. + + - In the **Azure Account Key** field, enter the storage account key. + +To learn more, see [Authorizing access to data in Azure Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth). +--- +Title: Export data from a database +alwaysopen: false +categories: +- docs +- operate +- rs +description: You can export data to import it into a new database or to make a backup. This + article shows how to do so. +linktitle: Export data +weight: 20 +--- + +You can export the data from a specific database at any time. The following destinations are supported: + +- FTP server +- SFTP server +- Amazon AWS S3 +- Local mount point +- Azure Blob Storage +- Google Cloud Storage + +If you export a database configured for database clustering, export files are created for each shard. 
+ +## Storage space requirements + +Before exporting data, verify that you have enough space available in the storage destination and on the local storage associated with the node hosting the database. + +Export is a two-step process: a temporary copy of the data is saved to the local storage of the node and then copied to the storage destination. (The temporary file is removed after the copy operation.) + +Export fails when there isn't enough space for either step. + +## Export database data + +To export data from a database using the Cluster Manager UI: + +1. On the **Databases** screen, select the database from the list, then select **Configuration**. + +1. Click {{< image filename="/images/rs/buttons/button-toggle-actions-vertical.png#no-click" alt="Toggle actions button" width="22px" class="inline" >}} to open a list of additional actions. + +1. Select **Export**. + +1. Select the tab that corresponds to your storage location type and enter the location details. + + See [Supported storage locations](#supported-storage-locations) for more information about each storage location type. + +1. Select **Export**. + +## Supported storage locations {#supported-storage-locations} + +Data can be exported to a local mount point, transferred to [a URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier) using FTP/SFTP, or stored on cloud provider storage. + +When saved to a local mount point or a cloud provider, export locations need to be available to [the group and user]({{< relref "/operate/rs/installing-upgrading/install/customize-user-and-group.md" >}}) running Redis Enterprise Software, `redislabs:redislabs` by default. + +Redis Enterprise Software needs the ability to view permissions and update objects in the storage location. Implementation details vary according to the provider and your configuration. To learn more, consult the provider's documentation. + +The following sections provide general guidelines. Because provider features change frequently, use your provider's documentation for the latest info. + +### FTP server + +Before exporting data to an FTP server, verify that: + +- Your Redis Enterprise cluster can connect and authenticate to the FTP server. +- The user specified in the FTP server location has permission to read and write files to the server. + +To export data to an FTP server, set **Path** using the following syntax: + +```sh +[protocol]://[username]:[password]@[host]:[port]/[path]/ +``` + +Where: + +- *protocol*: the server's protocol, can be either `ftp` or `ftps`. +- *username*: your username, if needed. +- *password*: your password, if needed. +- *hostname*: the hostname or IP address of the server. +- *port*: the port number of the server, if needed. +- *path*: the export destination path, if needed. + +Example: `ftp://username:password@10.1.1.1/home/exports/` + +### Local mount point + +Before exporting data to a local mount point, verify that: + +- The node can connect to the server hosting the mount point. +- The `redislabs:redislabs` user has permission to read and write files to the local mount point and to the destination server. +- The export location has enough disk space for your exported data. + +To export to a local mount point: + +1. On each node in the cluster, create the mount point: + 1. Connect to the node's terminal. + 1. Mount the remote storage to a local mount point. + + For example: + + ```sh + sudo mount -t nfs 192.168.10.204:/DataVolume/Public /mnt/Public + ``` + +1. In the path for the export location, enter the mount point. 
+ + For example: `/mnt/Public` + +### SFTP server + +Before exporting data to an SFTP server, make sure that: + +- Your Redis Enterprise cluster can connect and authenticate to the SFTP server. +- The user specified in the SFTP server location has permission to read and write files to the server. +- The SSH private keys are specified correctly. You can use the key generated by the cluster or specify a custom key. + + To use the cluster auto generated key: + + 1. Go to **Cluster > Security > Certificates**. + + 1. Expand **Cluster SSH Public Key**. + + 1. Download or copy the cluster SSH public key to the appropriate location on the SFTP server. + + Use the server documentation to determine the appropriate location for the SSH public key. + +To export data to an SFTP server, enter the SFTP server location in the format: + +```sh +sftp://[username]:[password]@[host]:[port]/[path]/ +``` + +Where: + +- *username*: your username, if needed. +- *password*: your password, if needed. +- *hostname*: the hostname or IP address of the server. +- *port*: the port number of the server, if needed. +- *path*: the export destination path, if needed. + +For example: `sftp://username:password@10.1.1.1/home/exports/` + +### AWS Simple Storage Service + +To export data to an [Amazon Web Services](https://aws.amazon.com/) (AWS) Simple Storage Service (S3) bucket: + +1. Sign in to the [AWS console](https://console.aws.amazon.com/). + +1. [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html) if you do not already have one. + +1. [Create an IAM User](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) with permission to add objects to the bucket. + +1. [Create an access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) for that user if you do not already have one. + +1. In the Redis Enterprise Software Cluster Manager UI, when you enter the export location details: + + - Select **AWS S3**. + + - In the **Path** field, enter the path of your bucket. + + - In the **Access key ID** field, enter the access key ID. + + - In the **Secret access key** field, enter the secret access key. + +You can also connect to a storage service that uses the S3 protocol but is not hosted by Amazon AWS. The storage service must have a valid SSL certificate. + +To connect to an S3-compatible storage location: + +1. Configure the S3 URL with [`rladmin cluster config`]({{}}): + + ```sh + rladmin cluster config s3_url + ``` + + Replace `` with the hostname or IP address of the S3-compatible storage location. + +1. Configure the S3 CA certificate: + + ```sh + rladmin cluster config s3_ca_cert + ``` + + Replace `` with the location of the S3 CA certificate `ca.pem`. + +### Google Cloud Storage + +To export to a [Google Cloud](https://developers.google.com/console/) storage bucket: + +1. Sign in to the Google Cloud console. + +1. [Create a JSON service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating) if you do not already have one. + +1. [Create a bucket](https://cloud.google.com/storage/docs/creating-buckets#create_a_new_bucket) if you do not already have one. + +1. [Add a principal](https://cloud.google.com/storage/docs/access-control/using-iam-permissions#bucket-add) to your bucket: + + - In the **New principals** field, add the `client_email` from the service account key. + + - Select "Storage Legacy Bucket Writer" from the **Role** list. + +1. 
In the Redis Enterprise Software Cluster Manager UI, when you enter the export location details: + + - Select **Google Cloud Storage**. + + - In the **Path** field, enter the path of your bucket. + + - In the **Client ID** field, enter the `client_id` from the service account key. + + - In the **Client Email** field, enter the `client_email` from the service account key. + + - In the **Private Key ID** field, enter the `private_key_id` from the service account key. + + - In the **Private key** field, enter the `private_key` from the service account key. + Replace `\n` with new lines. + + +### Azure Blob Storage + +To export to Microsoft Azure Blob Storage, sign in to the Azure portal and then: + +1. [Create an Azure Storage account](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create) if you do not already have one. + +1. [Create a container](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container) if you do not already have one. + +1. [Manage storage account access keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage) to find the storage account name and account keys. + +1. In the Redis Enterprise Software Cluster Manager UI, when you enter the export location details: + + - Select **Azure Blob Storage**. + + - In the **Path** field, enter the path of your bucket. + + - In the **Account name** field, enter your storage account name. + + - In the **Account key** field, enter the storage account key. + +To learn more, see [Authorizing access to data in Azure Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth). +--- +Title: Import data into a database +alwaysopen: false +categories: +- docs +- operate +- rs +description: You can import export or backup files of a specific Redis Enterprise + Software database to restore data. You can either import from a single file or from + multiple files, such as when you want to import from a backup of a clustered database. +linktitle: Import data +weight: 10 +--- +You can import, [export]({{< relref "/operate/rs/databases/import-export/export-data" >}}), +or [backup]({{< relref "/operate/rs/databases/import-export/schedule-backups" >}}) +files of a specific Redis Enterprise Software database to restore data. +You can either import from a single file or from multiple files, +such as when you want to import from a backup of a clustered database. + +{{< warning >}} +Importing data erases all existing content in the database. +{{< /warning >}} + +## Import data into a database + +To import data into a database using the Cluster Manager UI: + +1. On the **Databases** screen, select the database from the list, then select **Configuration**. +1. Click {{< image filename="/images/rs/buttons/button-toggle-actions-vertical.png#no-click" alt="Toggle actions button" width="22px" class="inline" >}} to open a list of additional actions. +1. Select **Import**. +1. Select the tab that corresponds to your storage location type and enter the location details. + + See [Supported storage locations](#supported-storage-locations) for more information about each storage location type. +1. Select **Import**. + +## Supported storage locations {#supported-storage-services} + +Data can be imported from a local mount point, transferred to [a URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier) using FTP/SFTP, or stored on cloud provider storage. 
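For files on a local mount point, a quick sanity check is to confirm that the operating-system user running Redis Enterprise Software (see the requirements below) can read the import files. A minimal sketch, assuming the default `redislabs` user and a hypothetical mount point and file name:

```sh
# List the import path and read the file header as the redislabs user
# to confirm read access. The path and file name are examples only.
sudo -u redislabs ls -l /mnt/Public/
sudo -u redislabs head -c 16 /mnt/Public/database-backup.rdb > /dev/null && echo "readable"
```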
+ +When importing from a local mount point or a cloud provider, import locations need to be available to [the group and user]({{< relref "/operate/rs/installing-upgrading/install/customize-user-and-group.md" >}}) running Redis Enterprise Software, `redislabs:redislabs` by default. + +Redis Enterprise Software needs the ability to view objects in the storage location. Implementation details vary according to the provider and your configuration. To learn more, consult the provider's documentation. + +The following sections provide general guidelines. Because provider features change frequently, use your provider's documentation for the latest info. + +### FTP server + +Before importing data from an FTP server, make sure that: + +- Your Redis Enterprise cluster can connect and authenticate to the FTP server. +- The user that you specify in the FTP server location has permission to read files from the server. + +To import data from an FTP server, set **RDB file path/s** using the following syntax: + +```sh +[protocol]://[username]:[password]@[host]:[port]/[path]/[filename].rdb +``` + +Where: + +- *protocol*: the server's protocol, can be either `ftp` or `ftps`. +- *username*: your username, if needed. +- *password*: your password, if needed. +- *hostname*: the hostname or IP address of the server. +- *port*: the port number of the server, if needed. +- *path*: the file's location path. +- *filename*: the name of the file. + +Example: `ftp://username:password@10.1.1.1/home/backups/.rdb` + +Select **Add path** to add another import file path. + +### Local mount point + +Before importing data from a local mount point, make sure that: + +- The node can connect to the server hosting the mount point. + +- The `redislabs:redislabs` user has permission to read files on the local mount point and on the destination server. + +- You must mount the storage in the same path on all cluster nodes. You can also use local storage, but you must copy the imported files manually to all nodes because the import source folders on the nodes are not synchronized. + +To import from a local mount point: + +1. On each node in the cluster, create the mount point: + 1. Connect to the node's terminal. + 1. Mount the remote storage to a local mount point. + + For example: + + ```sh + sudo mount -t nfs 192.168.10.204:/DataVolume/Public /mnt/Public + ``` + +1. In the path for the import location, enter the mount point. + + For example: `/mnt/Public/.rdb` + +As of version 6.2.12, Redis Enterprise reads files directly from the mount point using a [symbolic link](https://en.wikipedia.org/wiki/Symbolic_link) (symlink) instead of copying them to a temporary directory on the node. + +Select **Add path** to add another import file path. + +### SFTP server + +Before importing data from an SFTP server, make sure that: + +- Your Redis Enterprise cluster can connect and authenticate to the SFTP server. +- The user that you specify in the SFTP server location has permission to read files from the server. +- The SSH private keys are specified correctly. You can use the key generated by the cluster or specify a custom key. + + To use the cluster auto generated key: + + 1. Go to **Cluster > Security > Certificates**. + + 1. Expand **Cluster SSH Public Key**. + + 1. Download or copy the cluster SSH public key to the appropriate location on the SFTP server. + + Use the server documentation to determine the appropriate location for the SSH public key. 
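On many Linux-based SFTP servers, the appropriate location is the `authorized_keys` file of the account you connect with. A minimal sketch, assuming you saved the downloaded cluster public key as `cluster_ssh_key.pub`; the user name and host are hypothetical, so check your server's documentation before relying on it:

```sh
# Install the cluster's SSH public key for the SFTP user.
# "sftpuser" and "10.1.1.1" are placeholder values.
ssh-copy-id -i cluster_ssh_key.pub sftpuser@10.1.1.1
```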
+ +To import data from an SFTP server, enter the SFTP server location in the format: + +```sh +[protocol]://[username]:[password]@[host]:[port]/[path]/[filename].rdb +``` + +Where: + +- *protocol*: the server's protocol; use `sftp`. +- *username*: your username, if needed. +- *password*: your password, if needed. +- *hostname*: the hostname or IP address of the server. +- *port*: the port number of the server, if needed. +- *path*: the file's location path. +- *filename*: the name of the file. + +Example: `sftp://username:password@10.1.1.1/home/backups/[filename].rdb` + +Select **Add path** to add another import file path. + +### AWS Simple Storage Service {#aws-s3} + +Before you choose to import data from an [Amazon Web Services](https://aws.amazon.com/) (AWS) Simple Storage Service (S3) bucket, make sure you have: + +- The path to the file in your bucket in the format: `s3://[bucketname]/[path]/[filename].rdb` +- [Access key ID and Secret access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) for an [IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) with permission to read files from the bucket. + +In the Redis Enterprise Software Cluster Manager UI, when you enter the import location details: + +- Select **AWS S3**. + +- In the **RDB file path/s** field, enter the path of your bucket. Select **Add path** to add another import file path. + +- In the **Access key ID** field, enter the access key ID. + +- In the **Secret access key** field, enter the secret access key. + +You can also connect to a storage service that uses the S3 protocol but is not hosted by Amazon AWS. The storage service must have a valid SSL certificate. + +To connect to an S3-compatible storage location: + +1. Configure the S3 URL with [`rladmin cluster config`]({{}}): + + ```sh + rladmin cluster config s3_url <URL> + ``` + + Replace `<URL>` with the hostname or IP address of the S3-compatible storage location. + +1. Configure the S3 CA certificate: + + ```sh + rladmin cluster config s3_ca_cert <filepath> + ``` + + Replace `<filepath>` with the location of the S3 CA certificate `ca.pem`. + +### Google Cloud Storage + +Before you import data from a [Google Cloud](https://developers.google.com/console/) storage bucket, make sure you have: + +- Storage location path in the format: `/bucket_name/[path]/[filename].rdb` +- A [JSON service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating) for your account +- A [principal](https://cloud.google.com/storage/docs/access-control/using-iam-permissions#bucket-add) for your bucket with the `client_email` from the service account key and a [role](https://cloud.google.com/storage/docs/access-control/iam-roles) with permissions to get files from the bucket (such as the **Storage Legacy Object Reader** role, which grants `storage.objects.get` permissions) + +In the Redis Enterprise Software Cluster Manager UI, when you enter the import location details: + +- Select **Google Cloud Storage**. + +- In the **RDB file path/s** field, enter the path of your file. Select **Add path** to add another import file path. + +- In the **Client ID** field, enter the `client_id` from the service account key. + +- In the **Client email** field, enter the `client_email` from the service account key. + +- In the **Private key id** field, enter the `private_key_id` from the service account key. 
+ +- In the **Private key** field, enter the `private_key` from the service account key. + Replace `\n` with new lines. + +### Azure Blob Storage + +Before you choose to import from Azure Blob Storage, make sure that you have: + +- Storage location path in the format: `/container_name/[path/]/.rdb` +- Account name +- An authentication token, either an account key or an Azure [shared access signature](https://docs.microsoft.com/en-us/rest/api/storageservices/delegate-access-with-shared-access-signature) (SAS). + + To find the account name and account key, see [Manage storage account access keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage). + + Azure SAS support requires Redis Software version 6.0.20. To learn more about Azure SAS, see [Grant limited access to Azure Storage resources using shared access signatures](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview). + +In the Redis Enterprise Software Cluster Manager UI, when you enter the import location details: + +- Select **Azure Blob Storage**. + +- In the **RDB file path/s** field, enter the path of your file. Select **Add path** to add another import file path. + +- In the **Azure Account Name** field, enter your storage account name. + +- In the **Azure Account Key** field, enter the storage account key. + +## Importing into an Active-Active database + +When importing data into an Active-Active database, there are two options: + +- [Flush all data]({{< relref "/operate/rs/databases/import-export/flush#flush-data-from-an-active-active-database" >}}) from the Active-Active database, then import the data into the database. +- Import data but merge it into the existing database. + +Because Active-Active databases have a numeric counter data type, +when you merge the imported data into the existing data RS increments counters by the value that is in the imported data. +The import through the Redis Enterprise Cluster Manager UI handles these data types for you. + +You can import data into an Active-Active database [from the Cluster Manager UI](#import-data-into-a-database). +When you import data into an Active-Active database, there is a special prompt warning that the imported data will be merged into the existing database. + +## Continue learning with Redis University + +{{< university-links >}} +--- +Title: Import and export data +alwaysopen: false +categories: +- docs +- operate +- rs +description: How to import, export, flush, and migrate your data. +hideListLinks: false +linkTitle: Import and export +weight: 30 +--- +--- +Title: Create a database with Replica Of +alwaysopen: false +categories: +- docs +- operate +- rs +description: Create Replica Of database +linkTitle: Create Replica Of database +weight: 10 +--- +Replica databases copy data from source databases (previously known as _master_), which enable read-only connections from apps and clients located in different geographic locations. + +To create a replica connection, you define a database as a replica of a source database. Replica Of databases (also known as _Active-Passive databases_) synchronize in the background. + +Sources databases can be: + +- Located in the same Redis Enterprise Software cluster +- Located in a different Redis Enterprise cluster +- Hosted by a different deployment, e.g. Redis Cloud +- Redis Open Source databases + +Your apps can connect to the source database to read and write data; they can also use any replica for read-only access. 
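For example, an application can send writes to the source database and serve reads from a nearby replica. A minimal sketch with `redis-cli`, using hypothetical endpoints and ports:

```sh
# Write to the source database (read-write endpoint).
redis-cli -h source-db.cluster1.example.com -p 12000 SET greeting "hello"

# Read the replicated key from a Replica Of destination (read-only).
redis-cli -h replica-db.cluster2.example.com -p 12000 GET greeting
```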
+ +Replica Of can model a variety of data relationships, including: + +- One-to-many relationships, where multiple replicas copy a single source database. +- Many-to-one relationships, where a single replica collects data from multiple source databases. + +When you change the replica status of a database by adding, removing, or changing sources, the replica database is synchronized to the new sources. + +## Configure Replica Of + +You can configure a database as a Replica Of, where the source database is in one of the following clusters: + +- [Same Redis Enterprise cluster](#same-cluster) + +- [Different Redis Enterprise cluster](#different-cluster) + +- [Redis Open Source cluster](#source-available-cluster) + +The order of the multiple Replica Of sources has no material impact on replication. + +For best results when using the [Multicast DNS](https://en.wikipedia.org/wiki/Multicast_DNS) (mDNS) protocol to resolve the fully-qualified domain name (FQDN) of the cluster, verify that your client connections meet the [client mDNS prerequisites]({{< relref "/operate/rs/networking/mdns.md" >}}). + +{{< note >}} +As long as Replica Of is enabled, data in the target database will not expire and will not be evicted regardless of the set [data eviction policy]({{< relref "/operate/rs/databases/memory-performance/eviction-policy.md" >}}). +{{< /note >}} + +### Same Redis Enterprise cluster {#same-cluster} + +To configure a Replica Of database in the same Redis Enterprise cluster as the source database: + +1. [Create a new database]({{< relref "/operate/rs/databases/create" >}}) or select an existing database from the **Databases** screen. + +1. For an existing database, select **Edit** from the **Configuration** tab. + +1. Expand the **Replica Of** section. + +1. Select **+ Add source database**. + +1. In the **Connect a Replica Of source database** dialog, select **Current cluster**. + +1. Select the source database from the list. + +1. Select **Add source**. + +1. Select **Save**. + +### Different Redis Enterprise cluster {#different-cluster} + +To configure a Replica Of database in a different Redis Enterprise cluster from the source database: + +1. Sign in to the Cluster Manager UI of the cluster hosting the source database. + + 1. In **Databases**, select the source database and then select the **Configuration** tab. + + 1. In the **Replica Of** section, select **Use this database as a source for another database**. + + 1. Copy the Replica Of source URL. + + {{Copy the Replica Of source URL from the Connection link to destination dialog.}} + + To change the internal password, select **Regenerate password**. + + If you regenerate the password, replication to existing destinations fails until their credentials are updated with the new password. + +1. Sign in to the Cluster Manager UI of the destination database's cluster. + +1. [Create a new database]({{< relref "/operate/rs/databases/create" >}}) or select an existing database from the **Databases** screen. + +1. For an existing database, select **Edit** from the **Configuration** tab. + +1. Expand the **Replica Of** section. + +1. Select **+ Add source database**. + +1. In the **Connect a Replica Of source database** dialog, select **External**. + +1. Enter the URL of the source database endpoint. + +1. Select **Add source**. + +1. Select **Save**. + +For source databases on different clusters, you can [compress replication data]({{< relref "/operate/rs/databases/import-export/replica-of/#data-compression-for-replica-of" >}}) to save bandwidth. 
+ +### Redis Open Source cluster {#source-available-cluster} + +To use a database from a Redis Open Source cluster as a Replica Of source: + +1. [Create a new database]({{< relref "/operate/rs/databases/create" >}}) or select an existing database from the **Databases** screen. + +1. For an existing database, select **Edit** from the **Configuration** tab. + +1. Expand the **Replica Of** section. + +1. Select **+ Add source database**. + +1. In the **Connect a Replica Of source database** dialog, select **External**. + +1. Enter the URL of the source endpoint in one of the following formats: + + - For databases with passwords: + + ```sh + redis://:@: + ``` + + Where the password is the Redis password represented with URL encoding escape characters. + + - For databases without passwords: + + ```sh + redis://: + ``` + +1. Select **Add source**. + +1. Select **Save**. + +## Configure TLS for Replica Of + +When you enable TLS for Replica Of, the Replica Of synchronization traffic uses TLS certificates to authenticate the communication between the source and destination clusters. + +To encrypt Replica Of synchronization traffic, configure encryption for the [source database](#encrypt-source-database-traffic) and the destination [replica database](#encrypt-replica-database-traffic). + +### Encrypt source database traffic + +{{}} + +### Encrypt replica database traffic + +To enable TLS for Replica Of in the destination database: + +1. From the Cluster Manager UI of the cluster hosting the source database: + + 1. Go to **Cluster > Security > Certificates**. + + 1. Expand the **Server authentication (Proxy certificate)** section. + + {{Proxy certificate for server authentication.}} + + 1. Download or copy the proxy certificate. + +1. From the **Configuration** tab of the Replica Of destination database, select **Edit**. + +1. Expand the **Replica Of** section. + +1. Point to the source database entry and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit it. + +1. Paste or upload the source proxy certificate, then select **Done**. + +1. Select **Save**. +--- +Title: Replica Of Repeatedly Fails +alwaysopen: false +categories: +- docs +- operate +- rs +description: Troubleshoot when the Replica Of process repeatedly fails and restarts. +linktitle: Troubleshoot repeat failures +weight: 20 +--- +**Problem**: The Replica Of process repeatedly fails and restarts + +**Diagnostic**: A log entry in the Redis log of the source database shows repeated failures and restarts. + +**Cause**: The Redis "client-output-buffer-limit" setting on the source database +is configured to a relatively small value, which causes the connection drop. 
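If the source is an open source Redis database, you can check the current limit before changing it. A minimal sketch using `redis-cli` against a hypothetical source endpoint:

```sh
# Inspect the current client output buffer limits on the source database.
# The reply includes a "slave <hard-limit> <soft-limit> <soft-seconds>" segment.
redis-cli -h source-db.example.com -p 6379 CONFIG GET client-output-buffer-limit
```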
+ +**Resolution**: Reconfigure the buffer on the source database to a bigger value: + +- If the source is a Redis database on a Redis Enterprise Software cluster, + increase the replica buffer size of the **source database** with: + + `rladmin tune db < db:id | name > slave_buffer < value >` + +- If the source is a Redis database not on a Redis Enterprise Software cluster, + use the [config set](http://redis.io/commands/config-set) command through + `redis-cli` to increase the client output buffer size of the **source database** with: + + `config set client-output-buffer-limit "slave "` + +**Additional information**: [Top Redis Headaches for DevOps - Replication Buffer](https://redislabs.com/blog/top-redis-headaches-for-devops-replication-buffer) +--- +Title: Replica Of geo-distributed Redis +alwaysopen: false +categories: +- docs +- operate +- rs +description: Replica Of provides read-only access to replicas of the dataset from different geographical locations. +hideListLinks: true +linkTitle: Replica Of +weight: $weight +aliases: /operate/rs/administering/active-passive/ +--- +In Redis Enterprise, the Replica Of feature provides active-passive geo-distribution to applications for read-only access +to replicas of the dataset from different geographical locations. +The Redis Enterprise implementation of active-passive replication is called Replica Of. + +In Replica Of, an administrator designates a database as a replica (destination) of one or more databases (sources). +After the initial data load from source to destination is completed, +all write commands are synchronized from the sources to the destination. +Replica Of lets you distribute the read load of your application across multiple databases or +synchronize the database, either within Redis Enterprise or external to Redis Enterprise, to another database. + +You can [create Active-Passive]({{< relref "/operate/rs/databases/import-export/replica-of/create.md" >}}) databases on Redis Enterprise Software or Redis Cloud. + +[Active-Active Geo-Distribution (CRDB)]({{< relref "/operate/rs/databases/active-active" >}}) +provides these benefits and also provides write access to all of the database replicas. + +{{< warning >}} +Configuring a database as a replica of the database that it replicates +creates a cyclical replication and is not supported. +{{< /warning >}} + +The Replica Of is defined in the context of the destination database +by specifying the source databases. + +A destination database can have a maximum of thirty-two (32) source +databases. + +If only one source is defined, then the command execution order in the +source is kept in the destination. However, when multiple sources are +defined, commands that are replicated from the source databases are +executed in the order in which they reach the destination database. As a +result, commands that were executed in a certain order when compared +across source databases might be executed in a different order on the +destination database. + +{{< note >}} +The Replica Of feature should not be confused with the +in-memory [Database +replication]({{< relref "/operate/rs/databases/durability-ha/replication.md" >}}) +feature, which is used for creating a master / replica configuration that +enables ensuring database high-availability. +{{< /note >}} + +## Replication process + +When a database is defined as a replica of another database, all its +existing data is deleted and replaced by data that is loaded from the +source database. 
+ +Once the initial data load is completed, an ongoing synchronization +process takes place to keep the destination always synchronized with its +source. During the ongoing synchronization process, there is a certain +delay between the time when a command was executed on the source and +when it is executed on the destination. This delay is referred to as the +**Lag**. + +When there is a **synchronization error**, **the process might stop** or +it might continue running on the assumption that the error automatically +resolves. The result depends on the error type. See more details below. + +In addition, **the user can manually stop the synchronization process**. + +When the process is in the stopped state - whether stopped by the user +or by the system - the user can restart the process. **Restarting the +process causes the synchronization process to flush the DB and restart +the process from the beginning**. + +### Replica Of status + +The replication process can have the following statuses: + +- **Syncing** - indicates that the synchronization process has + started from scratch. Progress is indicated in percentages (%). +- **Synced** - indicates that the initial synchronization process was + completed and the destination is synchronizing changes on an ongoing + basis. The **Lag** delay in synchronization with the source is + indicated as a time duration. +- **Sync stopped** - indicates that the synchronization process is + currently not running and the user needs to restart it in order for + it to continue running. This status happens if the user stops the + process, or if certain errors arose that prevent synchronization + from continuing without manual intervention. See more details below. + +The statuses above are shown for the source database. In addition, a +timestamp is shown on the source indicating when the last command from +the source was executed on the destination. + +The system also displays the destination database status as an aggregate +of the statuses of all the sources. + +{{< note >}} +If you encounter issues with the Replica Of process, refer +to the troubleshooting section [Replica Of repeatedly +fails]({{< relref "/operate/rs/databases/import-export/replica-of/replicaof-repeatedly-fails.md" >}}). +{{< /note >}} + +### Synchronization errors + +Certain errors that occur during the synchronization process require +user intervention for their resolution. When such errors occur, the +synchronization process is automatically stopped. + +For other errors, the synchronization process continues running on the +assumption that the error automatically resolves. + +Examples of errors that require user intervention for their resolution +and that stop the synchronization process include: + +- Error authenticating with the source database. +- Cross slot violation error while executing a command on a sharded + destination database. +- Out-of-memory error on a source or on the destination + database. + +Example of an error that does not cause the synchronization process to +stop: + +- Connection error with the source database. A connection error might + occur occasionally, for example as result of temporary network + issues that get resolved. Depending on the connection error and its + duration the process might be able to start syncing from the last + point it reached (partial sync) or require a complete + resynchronization from scratch across all sources (full sync). 
+ +## Encryption + +Replica Of supports the ability to encrypt uni-directional replication +communications between source and destination clusters utilizing TLS 1.2 +based encryption. + +## Data compression for Replica Of + +When the Replica Of is defined across different Redis Enterprise +Software clusters, it may be beneficial to compress the data that flows +through the network (depending on where the clusters physically reside +and the available network). + +Compressing the data reduces the traffic and can help: + +- Resolve throughput issues +- Reduce network traffic costs + +Compressing the data does have trade-offs, which is why it should not +always be turned on by default. For example: + +- It uses CPU and disk resources to compress the data before sending + it to the network and decompress it on the other side. +- It takes time to compress and decompress the data which can increase + latency. +- Replication is disk-based and done gradually, shard by shard in the + case of a multi-shard database. This may have an impact on + replication times depending on the speed of the disks and load on + the database. +- If traffic is too fast and the compression takes too much time it + can cause the replication process to fail and be restarted. + +It is advised that you test compression out in a lower environment +before enabling it in production. + +In the Redis Enterprise Software management UI, when designating a +Replica Of source from a different Redis Enterprise Software cluster, +there is also an option to enable compression. When enabled, gzip +compression with level -6 is utilized. + +## Database clustering (sharding) implications + +If a **source** database is sharded, that entire database is treated as +a single source for the destination database. + +If the **destination** database is sharded, when the commands replicated +from the source are executed on the destination database, the +destination database's hashing function is executed to determine to +which shard/s the command refers. + +The source and destination can have different shard counts and functions +for placement of keys. + +### Synchronization in Active-Passive Replication + +In Active-Passive databases, one cluster hosts the source database that receives read-write operations +and the other clusters host destination databases that receive synchronization updates from the source database. + +When there is a significant difference between the source and destination databases, +the destination database flushes all of the data from its memory and starts synchronizing the data again. +This process is called a **full sync**. + +For example, if the database updates for the destination databases +that are stored by the destination database in a synchronization backlog exceed their allocated memory, +the source database starts a full sync. + +{{% warning %}} +When you failover to the destination database for write operations, +make sure that you disable **Replica Of** before you direct clients to the destination database. +This avoids a full sync that can overwrite your data. +{{% /warning %}} + +## Active-Passive replication backlog + +In addition to the [database replication backlog]({{< relref "/operate/rs/databases/durability-ha/replication#database-replication-backlog" >}}), active-passive databases maintain a replication backlog (per shard) to synchronize the database instances between clusters. 
+
By default, the replication backlog is set to one percent (1%) of the database size divided by the database's number of shards, and ranges from 1MB to 250MB per shard.
Use the [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) utility to control the size of the replication backlog. You can set it to `auto` or set a specific size.

For an Active-Passive database:
```text
rladmin tune db <db:id | name> repl_backlog <size in MB | auto>
```

{{}}
On an Active-Passive database, the replication backlog configuration applies to both the replication backlog for shard synchronization and for synchronization of database instances between clusters.
{{}}
---
alwaysopen: false
categories:
- docs
- operate
- rs
db_type: database
description: Create a database with Redis Enterprise Software.
linkTitle: Create a database
title: Create a Redis Enterprise Software database
toc: 'true'
weight: 10
---
Redis Enterprise Software lets you create databases and distribute them across a cluster of nodes.

To create a new database:

1. Sign in to the Cluster Manager UI at `https://<hostname>:8443`

1. Use one of the following methods to create a new database:

    - [Quick database](#quick-database)

    - [Create database](#create-database) with additional configuration

1. If you did not specify a port number for the database, you can find the port number in the **Endpoint** field in the **Databases > Configuration > General** section.

1. [Test client connectivity]({{< relref "/operate/rs/databases/connect/test-client-connectivity" >}}).


{{< note >}}
For databases with Active-Active replication for geo-distributed locations,
see [Create an Active-Active database]({{< relref "/operate/rs/databases/active-active/create.md" >}}). To create and manage Active-Active databases, use the legacy UI.
{{< /note >}}

## Quick database

To quickly create a database and skip additional configuration options during initial creation:

1. On the **Databases** screen, select **Quick database**.

1. Select a Redis version from the **Database version** list.

1. Configure settings that are required for database creation but can be changed later:

    - Database name

    - Memory limit (GB)

1. Configure optional settings that can't be changed after database creation:

    - Endpoint port (set by the cluster if not set manually)

    - Capabilities (previously modules) to enable

1. Optionally select **Full options** to configure [additional settings]({{< relref "/operate/rs/databases/configure#config-settings" >}}).

1. Select **Create**.

## Create database

To create a new database and configure additional settings:

1. Open the **Create database** menu with one of the following methods:

    - Click the **+** button next to **Databases** in the navigation menu:

      {{Create database menu has two options: Single Region and Active-Active database.}}

    - Go to the **Databases** screen and select **Create database**:

      {{Create database menu has two options: Single Region and Active-Active database.}}

1. Select the database type:

    - **Single Region**

    - **Active-Active database** - Multiple participating Redis Enterprise clusters can host instances of the same [Active-Active database]({{< relref "/operate/rs/databases/active-active" >}}) in different geographic locations. Every instance can receive write operations, which are synchronized across all instances without conflict.
+ + {{}} +For Active-Active databases, see [Create an Active-Active geo-replicated database]({{< relref "/operate/rs/databases/active-active/create" >}}). + {{}} + +1. Select a Redis version from the **Database version** list. + +1. Enter a **Database name**. + + - Maximum of 63 characters + + - Only letters, numbers, or hyphens (-) are valid characters + + - Must start and end with a letter or digit + + - Case-sensitive + +1. To configure additional database settings, expand each relevant section to make changes. + + See [Configuration settings]({{< relref "/operate/rs/databases/configure#config-settings" >}}) for more information about each setting. + +1. Select **Create**. + +## Continue learning with Redis University + +{{< university-links >}}--- +Title: Delete databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: Delete a database from the Cluster Manager UI. +linktitle: Delete +weight: 36 +--- + +When you delete a database, both the database configuration and data are removed. + +To delete a database from the Cluster Manager UI: + +1. From the **Databases** list, select the database, then select **Configuration**. + +1. Select {{< image filename="/images/rs/icons/delete-icon.png#no-click" alt="Delete button" width="22px" class="inline" >}} **Delete**. + +1. In the **Delete database** dialog, confirm deletion. +--- +Title: Auto Tiering quick start +alwaysopen: false +categories: +- docs +- operate +- rs +description: Get started with Auto Tiering quickly, creating a cluster and database + using flash storage. +linkTitle: Quick start +weight: 80 +--- +This page guides you through a quick setup of [Auto Tiering]({{< relref "/operate/rs/databases/auto-tiering/" >}}) with a single node for testing and demo purposes. + +For production environments, you can find more detailed installation instructions in the [install and setup]({{< relref "/operate/rs/installing-upgrading" >}}) section. + +The steps to set up a Redis Enterprise Software cluster using Auto Tiering +with a single node are: + +1. Install Redis Enterprise Software or run it in a Docker + container. +1. Set up a Redis Enterprise Software cluster with Auto Tiering. +1. Create a new database with Auto Tiering enabled. +1. Connect to your new database. + +## Install Redis Enterprise Software + +### Bare metal, VM, Cloud instance + +To install on bare metal, a virtual machine, or an instance: + +1. Download the binaries from the [Redis Enterprise download center](https://cloud.redis.io/#/sign-up/software?direct=true). + +1. Upload the binaries to a Linux-based operating system. + +1. Extract the image: + + ```sh + tar -vxf + ``` + +1. After the `tar` command completes, you can find a new `install.sh` script in the current directory: + + ```sh + sudo ./install.sh -y + ``` + +### Docker-based installation {#dockerbased-installation} + +For testing purposes, you can run a Redis Enterprise Software +Docker container on Windows, MacOS, and Linux. + +```sh +docker run -d --cap-add sys_resource --name rp -p 8443:8443 -p 12000:12000 redislabs/redis:latest +``` + +## Prepare and format flash memory + +After you [install Redis Enterprise Software](#install-redis-enterprise-software), use the `prepare_flash` script to prepare and format flash memory: + +```sh +sudo /opt/redislabs/sbin/prepare_flash.sh +``` + +This script finds unformatted disks and mounts them as RAID partitions in `/var/opt/redislabs/flash`. 
+

To verify the disk configuration, run:

```sh
sudo lsblk
```

## Set up a cluster and enable Auto Tiering

1. Direct your browser to `https://localhost:8443` on the host machine to
see the Redis Enterprise Software Cluster Manager UI.

    {{}}
Depending on your browser, you may see a certificate error.
Choose "continue to the website" to go to the setup screen.
    {{}}

1. Select **Create new cluster**.

1. Set up account credentials for a cluster administrator, then select **Next** to proceed to cluster setup.

1. Enter your cluster license key if you have one. Otherwise, the cluster uses the trial version.

1. Provide a cluster FQDN such as `mycluster.local`, then select **Next**.

1. In the **Storage configuration** section, turn on the **Enable flash storage** toggle.

1. Select **Create cluster**.

1. Select **OK** to confirm that you are aware of the replacement of the HTTPS TLS
certificate on the node, and proceed through the browser warning.

## Create a database

On the **Databases** screen:

1. Select **Quick database**.

1. Verify **Flash** is selected for **Runs on**.

    {{Create a quick database with Runs on Flash selected.}}

1. Enter `12000` for the endpoint **Port** number.

1. _(Optional)_ Select **Full options** to see available alerts.

1. Select **Create**.

You now have a database with Auto Tiering enabled!

## Connect to your database

After you create the database, you can connect to it and store data. See [Test client connection]({{< relref "/operate/rs/databases/connect/test-client-connectivity" >}}) for connection options and examples.

## Next steps

To see the true performance and scale of Auto Tiering, tune your I/O path and set the flash path to the mounted path of the SSD or NVMe flash memory it is designed to run on. For more information, see [Auto Tiering]({{< relref "/operate/rs/databases/auto-tiering/" >}}).
---
Title: Manage Auto Tiering storage engine
alwaysopen: false
categories:
- docs
- operate
- rs
description: Manage the storage engine used for your database with auto tiering enabled.
linkTitle: Manage storage engine
weight: 100
---

## Manage the storage engine

Redis Enterprise Auto Tiering supports two storage engines:

* [Speedb](https://www.speedb.io/) (default, recommended)
* [RocksDB](https://rocksdb.org/)

{{}}Switching between storage engines requires guidance from Redis Support or your Account Manager.{{}}

### Change the storage engine

1. Change the cluster-level configuration for the default storage engine.

    * API:

      ```sh
      curl -k -u "<username>:<password>" -X PUT -H "Content-Type: application/json" -d '{"bigstore_driver":"speedb"}' https://localhost:9443/v1/cluster
      ```

    * CLI:

      ```sh
      rladmin cluster config bigstore_driver {speedb | rocksdb}
      ```

2. Restart each database on the cluster, one by one.

    ```sh
    rladmin restart db { db:<id> | <name> }
    ```

{{}} We recommend restarting your database at times with low usage and avoiding peak hours. For databases without persistence enabled, we also recommend using export to back up your database first.{{}}

## Monitor the storage engine

To get the current cluster-level default storage engine, use one of the following:

* Use the `rladmin info cluster` command and look for `bigstore_driver`.
+ +* Use the REST API: + + ```sh + curl -k -u : -X GET -H "Content-Type: application/json" https://localhost:9443/v1/cluster + ``` + +Versions of Redis Enterprise 7.2 and later provide a metric called `bdb_bigstore_shard_count` to help track the shard count per database, filtered by `bdb_id` and by storage engine as shown below: + + + ```sh + bdb_bigstore_shard_count{bdb="1",cluster="mycluster.local",driver="rocksdb"} 1.0 + bdb_bigstore_shard_count{bdb="1",cluster="mycluster.local",driver="speedb"} 2.0 + ``` + +For more about metrics for Redis Enterprise’s integration with Prometheus, see [Prometheus integration]({{< relref "/integrate/prometheus-with-redis-enterprise/prometheus-metrics-definitions" >}}). +--- +Title: Auto Tiering +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Auto Tiering enables your data to span both RAM and dedicated flash memory. +hideListLinks: true +linktitle: Auto Tiering +weight: 50 +--- +Redis Enterprise's auto tiering offers users the unique ability to use solid state drives (SSDs) to extend databases beyond DRAM capacity. +Developers can build applications that require large datasets using the same Redis API. +Using SSDs can significantly reduce the infrastructure costs compared to only DRAM deployments. + +Frequently used data, called hot data, belongs in the fastest memory level to deliver a real-time user experience. +Data that is accessed less frequently, called warm data, can be kept in a slightly slower memory tier. +Redis Enterprise’s Auto tiering maintains hot data in DRAM, keeps warm data in SSDs, and transfers data between tiers automatically. + +Redis Enterprise’s auto tiering is based on a high-performance storage engine (Speedb) that manages the complexity of using SSDs and DRAM as the total available memory for databases in a Redis Enterprise cluster. This implementation offers a performance boost of up to 10k operations per second per core of the database, doubling the performance of Redis on Flash. + +Just like all-RAM databases, databases with Auto Tiering enabled are compatible with existing Redis applications. + +Auto Tiering is also supported on [Redis Cloud]({{< relref "/operate/rc/" >}}) and [Redis Enterprise Software for Kubernetes]({{< relref "/operate/rs/" >}}). + +## Use cases + +The benefits associated with Auto Tiering are dependent on the use case. + +Auto Tiering is ideal when your: + +- working set is significantly smaller than your dataset (high RAM hit rate) +- average key size is smaller than average value size (all key names are stored in RAM) +- most recent data is the most frequently used (high RAM hit rate) + +Auto Tiering is not recommended for: + +- Long key names (all key names are stored in RAM) +- Broad access patterns (any value could be pulled into RAM) +- Large working sets (working set is stored in RAM) +- Frequently moved data (moving to and from RAM too often can impact performance) + +Auto Tiering is not intended to be used for persistent storage. Redis Enterprise Software database persistent and ephemeral storage should be on different disks, either local or attached. + +## Where is my data? + +When using Auto Tiering, RAM storage holds: +- All keys (names) +- Key indexes +- Dictionaries +- Hot data (working set) + +All data is accessed through RAM. If a value in flash memory is accessed, it becomes part of the working set and is moved to RAM. These values are referred to as “hot data”. 
+

Inactive or infrequently accessed data is referred to as “warm data” and stored in flash memory. When more space is needed in RAM, warm data is moved from RAM to flash storage.

{{}} When using Auto Tiering with RediSearch, it’s important to note that RediSearch indexes are also stored in RAM.{{}}

## RAM to Flash ratio

Redis Enterprise Software allows you to configure and tune the ratio of RAM-to-Flash for each database independently, optimizing performance for your specific use case.
While this is an online operation requiring no downtime for your database, we recommend performing it during maintenance windows, as data might move between tiers (RAM <-> Flash).

The RAM limit cannot be smaller than 10% of the total memory. We recommend you keep at least 20% of all values in RAM. Do not set the RAM limit to 100%.

## Flash memory

Implementing Auto Tiering requires advance planning around memory and sizing. Considerations and requirements for Auto Tiering include:

- Flash memory must be locally attached. Using network-attached storage (NAS), storage area networks (SAN), or solutions such as AWS Elastic Block Storage (EBS) is not supported.
- Flash memory must be dedicated to Auto Tiering and not shared with other parts of the database, such as durability, binaries, or persistence.
- For the best performance, the SSDs should be NVMe based, but SATA can also be used.
- The available flash space must be greater than or equal to the total database size (RAM+Flash). The extra space accounts for write buffers and [write amplification](https://en.wikipedia.org/wiki/Write_amplification).

{{}} The Redis Enterprise Software database persistent and ephemeral storage should be on different disks, either local or attached. {{}}

Once these requirements are met, you can create and manage both databases with Auto Tiering enabled and
all-RAM databases in the same cluster.

When you begin planning the deployment of an Auto Tiering enabled database in production,
we recommend working closely with the Redis technical team for sizing and performance tuning.

### Cloud environments

When running in a cloud environment:

- Flash memory is on the ephemeral SSDs of the cloud instance (for example, the local NVMe of AWS i4i instances and the Azure Lsv2 and Lsv3 series).
- Persistent database storage needs to be network attached (for example, EBS on AWS).

{{}}
We specifically recommend "[Storage Optimized I4i - High I/O Instances](https://aws.amazon.com/ec2/instance-types/#storage-optimized)" because of the performance of NVMe for flash memory. {{}}

### On-premises environments

When you begin planning the deployment of Auto Tiering in production, we recommend working closely with the Redis technical team for sizing and performance tuning.

On-premises environments support more deployment options than other environments, such as:

- Using Redis Stack features:
  - [Search and query]({{< relref "/operate/oss_and_stack/stack-with-enterprise/search" >}})
  - [JSON]({{< relref "/operate/oss_and_stack/stack-with-enterprise/json" >}})
  - [Time series]({{< relref "/operate/oss_and_stack/stack-with-enterprise/timeseries" >}})
  - [Probabilistic data structures]({{< relref "/operate/oss_and_stack/stack-with-enterprise/bloom" >}})

{{}} Enabling Auto Tiering for Active-Active distributed databases requires validation and approval from the Redis technical team first.
{{}} + +{{}} Auto Tiering is not supported running on network attached storage (NAS), storage area network (SAN), or with local HDD drives. {{}} + +## Next steps + +- [Auto Tiering metrics]({{< relref "/operate/rs/references/metrics/auto-tiering" >}}) +- [Auto Tiering quick start]({{< relref "/operate/rs/databases/auto-tiering/quickstart.md" >}}) + +- [Ephemeral and persistent storage]({{< relref "/operate/rs/installing-upgrading/install/plan-deployment/persistent-ephemeral-storage" >}}) +- [Hardware requirements]({{< relref "/operate/rs/installing-upgrading/install/plan-deployment/hardware-requirements.md" >}}) +--- +Title: Troubleshooting pocket guide for Redis Enterprise Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: Troubleshoot issues with Redis Enterprise Software, including connectivity + issues between the database and clients or applications. +linktitle: Troubleshoot +toc: 'true' +weight: 90 +--- + +If your client or application cannot connect to your database, verify the following. + +## Identify Redis host issues + +#### Check resource usage + +- Used disk space should be less than `90%`. To check the host machine's disk usage, run the [`df`](https://man7.org/linux/man-pages/man1/df.1.html) command: + + ```sh + $ df -h + Filesystem Size Used Avail Use% Mounted on + overlay 59G 23G 33G 41% / + /dev/vda1 59G 23G 33G 41% /etc/hosts + ``` + +- RAM and CPU utilization should be less than `80%`, and host resources must be available exclusively for Redis Enterprise Software. You should also make sure that swap memory is not being used or is not configured. + + 1. Run the [`free`](https://man7.org/linux/man-pages/man1/free.1.html) command to check memory usage: + + ```sh + $ free + total used free shared buff/cache available + Mem: 6087028 1954664 993756 409196 3138608 3440856 + Swap: 1048572 0 1048572 + ``` + + 1. Used CPU should be less than `80%`. To check CPU usage, use `top` or `vmstat`. + + Run [`top`](https://man7.org/linux/man-pages/man1/top.1.html): + + ```sh + $ top + Tasks: 54 total, 1 running, 53 sleeping, 0 stopped, 0 zombie + %Cpu(s): 1.7 us, 1.4 sy, 0.0 ni, 96.8 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st + KiB Mem : 6087028 total, 988672 free, 1958060 used, 3140296 buff/cache + KiB Swap: 1048572 total, 1048572 free, 0 used. 3437460 avail Mem + ``` + + Run [`vmstat`](https://man7.org/linux/man-pages/man8/vmstat.8.html): + + ```sh + $ vmstat + procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- + r b swpd free buff cache si so bi bo in cs us sy id wa st + 2 0 0 988868 177588 2962876 0 0 0 6 7 12 1 1 99 0 0 + ``` + + 1. If CPU or RAM usage is greater than 80%, ask your system administrator which process is the culprit. If the process is not related to Redis, terminate it. + +#### Sync clock with time server + +It is recommended to sync the host clock with a time server. + +Verify that time is synchronized with the time server using one of the following commands: + +- `ntpq -p` + +- `chronyc sources` + +- [`timedatectl`](https://man7.org/linux/man-pages/man1/timedatectl.1.html) + +#### Remove https_proxy and http_proxy variables + +1. Run [`printenv`](https://man7.org/linux/man-pages/man1/printenv.1.html) and check if `https_proxy` and `http_proxy` are configured as environment variables: + + ```sh + printenv | grep -i proxy + ``` + +1. 
If `https_proxy` or `http_proxy` exist, remove them: + + ```sh + unset https_proxy + ``` + ```sh + unset http_proxy + ``` + +#### Review system logs + +Review system logs including the syslog or journal for any error messages, warnings, or critical events. See [Logging]({{< relref "/operate/rs/clusters/logging" >}}) for more information. + +## Identify issues caused by security hardening + +- Temporarily deactivate any security hardening tools (such as selinux, cylance, McAfee, or dynatrace), and check if the problem is resolved. + +- The user `redislabs` must have read and write access to `/tmp` directory. Run the following commands to verify. + + 1. Create a test file in `/tmp` as the `redislabs` user: + ```sh + $ su - redislabs -s /bin/bash -c 'touch /tmp/test' + ``` + + 1. Verify the file was created successfully: + ```sh + $ ls -l /tmp/test + -rw-rw-r-- 1 redislabs redislabs 0 Aug 12 02:06 /tmp/test + ``` + +- Using a non-permissive file mode creation mask (`umask`) can cause issues. + + 1. Check the output of `umask`: + + ```sh + $ umask + 0022 + ``` + + 1. If `umask`'s output differs from the default value `0022`, it might prevent normal operation. Consult your system administrator and revert to the default `umask` setting. + +## Identify cluster issues + +- Use `supervisorctl status` to verify all processes are in a `RUNNING` state: + + ```sh + supervisorctl status + ``` + +- Run `rlcheck` and verify no errors appear: + + ```sh + rlcheck + ``` + +- Run [`rladmin status issues_only`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status" >}}) and verify that no issues appear: + + ```sh + $ rladmin status issues_only + CLUSTER NODES: + NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS + + DATABASES: + DB:ID NAME TYPE STATUS SHARDS PLACEMENT REPLICATION PERSISTENCE ENDPOINT + + ENDPOINTS: + DB:ID NAME ID NODE ROLE SSL + + SHARDS: + DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS + + ``` + +- Run [`rladmin status shards`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-shards" >}}). For each shard, `USED_MEMORY` should be less than 25 GB. + + ```sh + $ rladmin status shards + SHARDS: + DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS + db:1 db1 redis:1 node:1 master 0-16383 2.13MB OK + ``` + +- Run [`rladmin cluster running_actions`]({{< relref "/operate/rs/references/cli-utilities/rladmin/cluster/running_actions" >}}) and confirm that no tasks are currently running (active): + + ```sh + $ rladmin cluster running_actions + No active tasks + ``` + +## Troubleshoot connectivity + +#### Database endpoint resolution + +1. On the client machine, check if the database endpoint can be resolved: + + ```sh + dig + ``` + +1. If endpoint resolution fails on the client machine, check on one of the cluster nodes: + + ```sh + dig @localhost + ``` + +1. If endpoint resolution succeeds on the cluster node but fails on the client machine, review the DNS configuration and fix any errors. + +1. If the endpoint can’t be resolved on the cluster node, [contact support](https://redis.com/company/support/). + +#### Client application issues + +1. 
To identify possible client application issues, test connectivity from the client machine to the database using [`redis-cli`]({{< relref "/operate/rs/references/cli-utilities/redis-cli" >}}):

    [`INFO`]({{< relref "/commands/info" >}}):

    ```sh
    redis-cli -h <endpoint> -p <port> -a <password> INFO
    ```

    [`PING`]({{< relref "/commands/ping" >}}):

    ```sh
    redis-cli -h <endpoint> -p <port> -a <password> PING
    ```

    or if TLS is enabled:

    ```sh
    redis-cli -h <endpoint> -p <port> -a <password> --tls --insecure --cert <certificate file> --key <private key file> PING
    ```

1. If the client machine cannot connect, try to connect to the database from one of the cluster nodes:

    ```sh
    redis-cli -h <endpoint> -p <port> -a <password> PING
    ```

1. If the cluster node is also unable to connect to the database, [contact Redis support](https://redis.com/company/support/).

1. If the client fails to connect, but the cluster node succeeds, perform health checks on the client and network.

#### Firewall access

1. Run one of the following commands to verify that database access is not blocked by a firewall on the client machine or cluster:

    ```sh
    iptables -L
    ```

    ```sh
    ufw status
    ```

    ```sh
    firewall-cmd --list-all
    ```

1. To resolve firewall issues:

    1. If a firewall is configured for your database, add the client IP address to the firewall rules.

    1. Configure third-party firewalls and external proxies to allow the cluster FQDN, database endpoint IP address, and database ports.

## Troubleshoot latency

#### Server-side latency

- Make sure the database's used memory does not reach the configured database max memory limit. For more details, see [Database memory limits]({{< relref "/operate/rs/databases/memory-performance/memory-limit" >}}).

- Try to correlate the time of the latency with any surge in the following metrics:

    - Number of connections

    - Used memory

    - Evicted keys

    - Expired keys

- Run [`SLOWLOG GET`]({{< relref "/commands/slowlog-get" >}}) using [`redis-cli`]({{< relref "/operate/rs/references/cli-utilities/redis-cli" >}}) to identify slow commands such as [`KEYS`]({{< relref "/commands/keys" >}}) or [`HGETALL`]({{< relref "/commands/hgetall" >}}):

    ```sh
    redis-cli -h <endpoint> -p <port> -a <password> SLOWLOG GET
    ```

    Consider using alternative commands such as [`SCAN`]({{< relref "/commands/scan" >}}), [`SSCAN`]({{< relref "/commands/sscan" >}}), [`HSCAN`]({{< relref "/commands/hscan" >}}), and [`ZSCAN`]({{< relref "/commands/zscan" >}}).

- Keys with large memory footprints can cause latency. To identify such keys, compare the keys returned by [`SLOWLOG GET`]({{< relref "/commands/slowlog-get" >}}) with the output of the following commands:

    ```sh
    redis-cli -h <endpoint> -p <port> -a <password> --memkeys
    ```

    ```sh
    redis-cli -h <endpoint> -p <port> -a <password> --bigkeys
    ```

- For additional diagnostics, see:

    - [Diagnosing latency issues]({{< relref "/operate/oss_and_stack/management/optimization/latency" >}})

    - [View Redis slow log]({{< relref "/operate/rs/clusters/logging/redis-slow-log" >}})

#### Client-side latency

Verify the following:

- There is no memory or CPU pressure on the client host.

- The client uses a connection pool instead of frequently opening and closing connections.

- The client does not erroneously open multiple connections that can pressure the client or server.
---
Title: Test client connection
alwaysopen: false
categories:
- docs
- operate
- rs
description: null
linktitle: Test connection
weight: 20
---
In various scenarios, such as after creating a new cluster or upgrading
the cluster, you should verify that clients can connect to the
database.
+

To test client connectivity:

1. After you [create a Redis database]({{< relref "/operate/rs/databases/create" >}}), copy the database endpoint, which contains the cluster name (FQDN).

    To view and copy endpoints for a database in the cluster, see the database’s **Configuration > General** section in the Cluster Manager UI:

    {{View public and private endpoints from the General section of the database's Configuration screen.}}

1. Try to connect to the database endpoint from your client of choice,
   and run database commands.

1. If the database does not respond, try to connect to the database
   endpoint using the IP address rather than the FQDN. If you
   succeed, then DNS is not properly configured. For
   additional details, see
   [Configure cluster DNS]({{< relref "/operate/rs/networking/cluster-dns" >}}).

If any issues occur when testing database connections, [contact
support](https://redis.com/company/support/).

## Test database connections

After you create a Redis database, you can connect to your
database and store data using one of the following methods:

- [`redis-cli`]({{< relref "/operate/rs/references/cli-utilities/redis-cli" >}}), the built-in command-line tool

- [Redis Insight](https://redis.com/redis-enterprise/redis-insight/), a free Redis GUI that is available for macOS, Windows, and Linux

- An application using a Redis client library, such as [`redis-py`](https://github.com/redis/redis-py) for Python. See the [client list]({{< relref "/develop/clients" >}}) to view all Redis clients by language.

### Connect with redis-cli

Connect to your database with `redis-cli` (located in the `/opt/redislabs/bin` directory), then store and retrieve a key:

```sh
$ redis-cli -h <endpoint> -p <port>
127.0.0.1:16653> set key1 123
OK
127.0.0.1:16653> get key1
"123"
```

For more `redis-cli` connection examples, see the [`redis-cli` reference]({{< relref "/operate/rs/references/cli-utilities/redis-cli" >}}).

### Connect with Redis Insight

Redis Insight is a free Redis GUI that is available for macOS, Windows, and Linux.

1. [Install Redis Insight]({{< relref "/develop/tools/insight" >}}).

1. Open Redis Insight and select **Add Redis Database**.

1. Enter the host and port in the **Host** and **Port** fields.

1. Select **Use TLS** if [TLS]({{< relref "/operate/rs/security/encryption/tls" >}}) is set up.

1. Select **Add Redis Database** to connect to the database.

See the [Redis Insight documentation]({{< relref "/develop/tools/insight" >}}) for more information.

### Connect with Python

Python applications can connect
to the database using the `redis-py` client library. For installation instructions, see the
[`redis-py` README](https://github.com/redis/redis-py#readme) on GitHub.

1. From the command line, create a new file called
`redis_test.py`:

    ```sh
    vi redis_test.py
    ```

1. Paste the following code in `redis_test.py`, and replace `<endpoint>` and `<port>` with your database's endpoint details:

    ```python
    import redis

    # Connect to the database
    r = redis.Redis(host='<endpoint>', port=<port>)

    # Store a key
    print("set key1 123")
    print(r.set('key1', '123'))

    # Retrieve the key
    print("get key1")
    print(r.get('key1'))
    ```

1. Run the application:

    ```sh
    python redis_test.py
    ```

1. 
If the application successfully connects to your database, it outputs: + + ```sh + set key1 123 + True + get key1 + 123 + ``` +### Connect with discovery service + +You can also connect a Python application to the database using the discovery service, which complies with the Redis Sentinel API. + +In the IP-based connection method, you only need the database name, not the port number. +The following example uses the discovery service that listens on port 8001 on all nodes of the cluster +to discover the endpoint for the database named "db1". + +```python +from redis.sentinel import Sentinel + +# with IP based connections, a list of known node IP addresses is constructed +# to allow connection even if any one of the nodes in the list is unavailable. +sentinel_list = [ +('10.0.0.44', 8001), +('10.0.0.45', 8001), +('10.0.0.46', 8001) +] + +# change this to the db name you want to connect +db_name = 'db1' + +sentinel = Sentinel(sentinel_list, socket_timeout=0.1) +r = sentinel.master_for(db_name, socket_timeout=0.1) + +# set key "foo" to value "bar" +print(r.set('foo', 'bar')) +# set value for key "foo" +print(r.get('foo')) +``` + +For more `redis-py` connection examples, see the [`redis-py` developer documentation](https://redis-py.readthedocs.io/en/stable/examples/connection_examples.html). +--- +Title: Supported connection clients +categories: +- docs +- operate +- rs +description: Info about Redis client libraries and supported clients when using the + discovery service. +weight: 10 +--- +You can connect to Redis Enterprise Software databases programmatically using client libraries. + +## Redis client libraries + +To connect an application to a Redis database hosted by Redis Enterprise Software, use a [client library]({{< relref "/develop/clients" >}}) appropriate for your programming language. + +You can also use the `redis-cli` utility to connect to a database from the command line. + +For examples of each approach, see the [Redis Enterprise Software quickstart]({{< relref "/operate/rs/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}). + +Note: You cannot use client libraries to configure Redis Enterprise Software. Instead, use: + +- The Redis Enterprise Software [Cluster Manager UI]({{< relref "/operate/rs/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}) +- The [REST API]({{< relref "/operate/rs/references/rest-api" >}}) +- Command-line utilities, such as [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) + +### Discovery service + +We recommend the following clients when using a [discovery service]({{< relref "/operate/rs/databases/durability-ha/discovery-service.md" >}}) based on the Redis Sentinel API: + +- [redis-py]({{< relref "/develop/clients/redis-py" >}}) (Python client) +- [NRedisStack]({{< relref "/develop/clients/dotnet" >}}) (.NET client) +- [Jedis]({{< relref "/develop/clients/jedis" >}}) (synchronous Java client) +- [Lettuce]({{< relref "/develop/clients/lettuce" >}}) (asynchronous Java client) +- [go-redis]({{< relref "/develop/clients/go" >}}) (Go client) +- [Hiredis](https://github.com/redis/hiredis) (C client) + +If you need to use another client, you can use [Sentinel Tunnel](https://github.com/RedisLabs/sentinel_tunnel) +to discover the current Redis master with Sentinel and create a TCP tunnel between a local port on the client and the master. 
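Since the discovery service implements the Redis Sentinel API, you can also query it directly with `redis-cli` to look up a database's current endpoint before configuring an application. This minimal sketch assumes the discovery service is reachable on port 8001 of a cluster node and that the database is named `db1`:

```sh
# Ask the discovery service (Sentinel API) which address currently serves db1.
redis-cli -h <node IP address> -p 8001 SENTINEL get-master-addr-by-name db1
```

The reply contains the host and port that clients should connect to; Sentinel-aware client libraries perform this lookup automatically.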
+

---
Title: Connect to a database
categories:
- docs
- operate
- rs
description: Learn how to connect your application to a Redis database hosted by Redis Enterprise Software and test your connection.
hideListLinks: true
linkTitle: Connect
weight: 20
---

After you [set up a cluster]({{< relref "/operate/rs/clusters/new-cluster-setup" >}}) and [create a Redis database]({{< relref "/operate/rs/databases/create" >}}), you can connect to your database.

To connect to your database, you need the database endpoint, which includes the cluster name (FQDN) and the database port. To view and copy public and private endpoints for a database in the cluster, see the database’s **Configuration > General** section in the Cluster Manager UI.

{{View public and private endpoints from the General section of the database's Configuration screen.}}

If you try to connect with the FQDN, and the database does not respond, try connecting with the IP address. If this succeeds, DNS is not properly configured. To set up DNS, see [Configure cluster DNS]({{< relref "/operate/rs/networking/cluster-dns" >}}).

If you want to secure your connection, set up [TLS]({{< relref "/operate/rs/security/encryption/tls/" >}}).

## Connect to a database

Use one of the following connection methods to connect to your database:

- [`redis-cli`]({{< relref "/operate/rs/references/cli-utilities/redis-cli/" >}}) utility

- [Redis Insight](https://redis.com/redis-enterprise/redis-insight/)

- [Redis client]({{< relref "/develop/clients" >}}) for your preferred programming language

For examples, see [Test client connection]({{< relref "/operate/rs/databases/connect/test-client-connectivity" >}}).

## Continue learning with Redis University

See the [Connect to a database on Redis Software](https://university.redis.io/course/zyxx6fdkcm5ahd) course at Redis University.
---
Title: Eviction policy
alwaysOpen: false
categories:
- docs
- operate
- rs
- kubernetes
description: The eviction policy determines what happens when a database reaches its memory limit.
linkTitle: Eviction policy
weight: 10
---

The eviction policy determines what happens when a database reaches its memory limit.

To make room for new data, older data is _evicted_ (removed) according to the selected policy.

To prevent this from happening, make sure your database is large enough to hold all desired keys.

| **Eviction Policy** | **Description** |
|------------|-----------------|
|  noeviction | New values aren't saved when memory limit is reached<br/>When a database uses replication, this applies to the primary database |
|  allkeys-lru | Keeps most recently used keys; removes least recently used (LRU) keys |
|  allkeys-lfu | Keeps frequently used keys; removes least frequently used (LFU) keys |
|  allkeys-random | Randomly removes keys |
|  volatile-lru | Removes least recently used keys with `expire` field set to true |
|  volatile-lfu | Removes least frequently used keys with `expire` field set to true |
|  volatile-random | Randomly removes keys with `expire` field set to true |
|  volatile-ttl | Removes keys with `expire` field set to true and the shortest remaining time-to-live (TTL) value |

## Eviction policy defaults

`volatile-lru` is the default eviction policy for most databases.

The default policy for [Active-Active databases]({{< relref "/operate/rs/databases/active-active" >}}) is _noeviction_.

## Active-Active database eviction

The eviction policy mechanism for Active-Active databases kicks in earlier than for standalone databases because it requires propagation to all participating clusters.
The eviction policy starts to evict keys when one of the Active-Active instances reaches 80% of its memory limit. If memory usage continues to rise while the keys are being evicted, the rate of eviction will increase to prevent reaching the Out-of-Memory state.
As with standalone Redis Enterprise databases, Active-Active eviction is calculated per shard.
To prevent over-eviction, internal heuristics might prevent keys from being evicted when the shard reaches the 80% memory limit. In such cases, keys will get evicted only when shard memory reaches 100%.

In case of network issues between Active-Active instances, memory can be freed only when all instances are in sync. If there is no communication between participating clusters, it can result in eviction of all keys and the instance reaching an Out-of-Memory state.

{{< note >}}
Data eviction policies are not supported for Active-Active databases with Auto Tiering.
{{< /note >}}

## Avoid data eviction

To avoid data eviction, make sure your database is large enough to hold required values.

For larger databases, consider using [Auto Tiering]({{< relref "/operate/rs/databases/auto-tiering/" >}}).

Auto Tiering stores actively used data (also known as _hot data_) in RAM and the remaining data in flash memory (SSD).
This lets you retain more data while ensuring the fastest access to the most critical data.
---
Title: Database memory limits
alwaysopen: false
categories:
- docs
- operate
- rs
- rc
description: When you set a database's memory limit, you define the maximum size the database can reach.
linkTitle: Memory limits
weight: 20
---
When you set a database's memory limit, you define the maximum size the
database can reach in the cluster, across all database replicas and
shards, including both primary and replica shards.

If the total size of the database in the cluster reaches the memory
limit, the data eviction policy is applied.

## Factors for sizing

Factors to consider when sizing your database:

- **dataset size**: you want your limit to be above your dataset size to leave room for overhead.
- **database throughput**: high throughput needs more shards, leading to a higher memory limit.
- [**modules**]({{< relref "/operate/oss_and_stack/stack-with-enterprise" >}}): using modules with your database consumes more memory.
+- [**database clustering**]({{< relref "/operate/rs/databases/durability-ha/clustering.md" >}}): enables you to spread your data into shards across multiple nodes. +- [**database replication**]({{< relref "/operate/rs/databases/durability-ha/replication.md" >}}): enabling replication doubles memory consumption. + +Additional factors for Active-Active databases: + +- [**Active-Active replication**]({{< relref "/operate/rs/databases/active-active/_index.md" >}}): enabling Active-Active replication requires double the memory of regular replication, which can be up to two times (2x) the original data size per instance. +- [**database replication backlog**]({{< relref "/operate/rs/databases/active-active/manage#replication-backlog/" >}}) for synchronization between shards. By default, this is set to 1% of the database size. +- [**Active-Active replication backlog**]({{< relref "/operate/rs/databases/active-active/manage.md" >}}) for synchronization between clusters. By default, this is set to 1% of the database size. + + It's also important to know Active-Active databases have a lower threshold for activating the eviction policy, because it requires propagation to all participating clusters. The eviction policy starts to evict keys when one of the Active-Active instances reaches 80% of its memory limit. + +Additional factors for databases with Auto Tiering enabled: + +- The available flash space must be greater than or equal to the total database size (RAM+Flash). The extra space accounts for write buffers and [write amplification](https://en.wikipedia.org/wiki/Write_amplification). + +- [**database persistence**]({{< relref "/operate/rs/databases/configure/database-persistence.md" >}}): Auto Tiering uses dual database persistence where both the primary and replica shards persist to disk. This may add some processor and network overhead, especially in cloud configurations with network attached storage. + +## What happens when Redis Enterprise Software is low on RAM? + +Redis Enterprise Software manages node memory so that data is entirely in RAM (unless using Auto Tiering). If not enough RAM is available, Redis Enterprise prevents adding more data into the databases. + +Redis Enterprise Software protects the existing data and prevents the database from being able to store data into the shards. + +You can configure the cluster to move the data to another node, or even discard it according to the [eviction policy]({{< relref "/operate/rs/databases/memory-performance/eviction-policy.md" >}}) set on each database by the administrator. + +[Auto Tiering]({{< relref "/operate/rs/databases/auto-tiering/" >}}) +manages memory so that you can also use flash memory (SSD) to store data. + +### Order of events for low RAM + +1. If there are other nodes available, your shards migrate to other nodes. +2. If the eviction policy allows eviction, shards start to release memory, +which can result in data loss. +3. If the eviction policy does not allow eviction, you'll receive +out of memory (OOM) messages. +4. If shards can't free memory, Redis Enterprise relies on the OS processes to stop replicas, +but tries to avoid stopping primary shards. + +We recommend that you have a [monitoring platform]({{< relref "/operate/rs/monitoring/" >}}) that alerts you before a system gets low on RAM. +You must maintain sufficient free memory to make sure that you have a healthy Redis Enterprise installation. + +## Memory metrics + +The Cluster Manager UI provides metrics that can help you evaluate your memory use. 
+
- Free RAM
- RAM fragmentation
- Used memory
- Memory usage
- Memory limit

See [console metrics]({{< relref "/operate/rs/references/metrics" >}}) for more detailed information.

## Related info

- [Memory and performance]({{< relref "/operate/rs/databases/memory-performance" >}})
- [Disk sizing for heavy write scenarios]({{< relref "/operate/rs/clusters/optimize/disk-sizing-heavy-write-scenarios.md" >}})
- [Turn off services to free system memory]({{< relref "/operate/rs/clusters/optimize/turn-off-services.md" >}})
- [Eviction policy]({{< relref "/operate/rs/databases/memory-performance/eviction-policy.md" >}})
- [Shard placement policy]({{< relref "/operate/rs/databases/memory-performance/shard-placement-policy.md" >}})
- [Database persistence]({{< relref "/operate/rs/databases/configure/database-persistence.md" >}})
---
Title: Shard placement policy
alwaysopen: false
categories:
- docs
- operate
- rs
description: Detailed info about the shard placement policy.
linkTitle: Shard placement policy
weight: 30
---
In Redis Enterprise Software, the location of master and replica shards on the cluster nodes can impact the database and node performance.
Master shards and their corresponding replica shards are always placed on separate nodes for data resiliency.
The shard placement policy helps to maintain optimal performance and resiliency.

{{< embed-md "shard-placement-intro.md" >}}

## Shard placement policies

### Dense shard placement policy

In the dense policy, the cluster places the database shards on as few nodes as possible.
When the node is not able to host all of the shards, some shards are moved to another node to maintain optimal node health.

For example, for a database with two master and two replica shards on a cluster with three nodes and a dense shard placement policy,
the two master shards are hosted on one node and the two replica shards are hosted on another node.

For Redis on RAM databases without the OSS cluster API enabled, use the dense policy to optimize performance.

{{< image filename="/images/rs/dense_placement.png" >}}

*Figure: Three nodes with two master shards (red) and two replica shards (white) with a dense placement policy*

### Sparse shard placement policy

In the sparse policy, the cluster places shards on as many nodes as possible to distribute the shards of a database across all available nodes.
When all nodes have database shards, the shards are distributed evenly across the nodes to maintain optimal node health.

For example, for a database with two master and two replica shards on a cluster with three nodes and a sparse shard placement policy:

- Node 1 hosts one of the master shards
- Node 2 hosts the replica for the first master shard
- Node 3 hosts the second master shard
- Node 1 hosts the replica shard for master shard 2

For Redis on RAM databases with the OSS cluster API enabled and for databases with Auto Tiering enabled, use the sparse policy to optimize performance.

{{< image filename="/images/rs/sparse_placement.png" >}}

*Figure: Three nodes with two master shards (red) and two replica shards (white) with a sparse placement policy*

## Related articles

You can [configure the shard placement policy]({{< relref "/operate/rs/databases/configure/shard-placement.md" >}}) for each database.
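The placement policy can also be changed from the command line. As a usage sketch, assuming a database named `db1`, you could run the following; check the [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) reference for the exact `placement` syntax in your version:

```sh
# Switch db1 to the sparse placement policy, then review where its shards run.
rladmin placement db db1 sparse
rladmin status shards
```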
+
---
Title: Memory and performance
alwaysopen: false
categories:
- docs
- operate
- rs
description: Learn more about managing your memory and optimizing performance for your database.
hideListLinks: true
linktitle: Memory and performance
weight: 70
---
Redis Enterprise Software has multiple mechanisms in its
architecture to help optimize storage and performance.

## Memory limits

Database memory limits define the maximum size your database can reach across all database replicas and [shards]({{< relref "/glossary#letter-s" >}}) on the cluster. Your memory limit will also determine the number of shards you'll need.

Besides your dataset, the memory limit must also account for replication, Active-Active overhead, module overhead, and a number of other factors. These can significantly increase your database size, sometimes increasing it by four times or more.

For more information on memory limits, see [Database memory limits]({{< relref "/operate/rs/databases/memory-performance/memory-limit.md" >}}).

## Eviction policies

When a database exceeds its memory limit, eviction policies determine which data is removed. The eviction policy removes keys based on frequency of use, how recently used, randomly, expiration date, or a combination of these factors. The policy can also be set to `noeviction` to return a memory limit error when trying to insert more data.

The default eviction policy for databases is `volatile-lru`, which evicts the least recently used keys out of all keys with the `expire` field set. The default for Active-Active databases is `noeviction`.

For more information, see [eviction policies]({{< relref "/operate/rs/databases/memory-performance/eviction-policy.md" >}}).

## Database persistence

Both RAM memory and flash memory are at risk of data loss if a server or process fails. Persisting your data to disk helps protect it against loss in those situations. You can configure persistence at the time of database creation, or by editing the database’s configuration.

There are two main types of persistence strategies in Redis Enterprise Software: append-only files (AoF) and snapshots.

Append-only files (AoF) keep a record of data changes and write each change to the end of a file, allowing you to recover the dataset by replaying the writes in the append-only log.

Snapshots capture all the data as it exists in one moment in time and write it to disk, allowing you to recover the entire dataset as it existed at that moment in time.

For more info on data persistence, see [Database persistence with Redis Enterprise Software]({{< relref "/operate/rs/databases/configure/database-persistence.md" >}}) or [Durable Redis](https://redis.com/redis-enterprise/technology/durable-redis/).

## Auto Tiering

By default, Redis Enterprise Software stores your data entirely in [RAM](https://en.wikipedia.org/wiki/Random-access_memory) for improved performance. The [Auto Tiering]({{< relref "/operate/rs/databases/auto-tiering/" >}}) feature enables your data to span both RAM and [SSD](https://en.wikipedia.org/wiki/Solid-state_drive) storage ([flash memory](https://en.wikipedia.org/wiki/Flash_memory)). Keys are always stored in RAM, but Auto Tiering manages the location of their values. Frequently used (hot) values are stored in RAM, but infrequently used (warm) values are moved to flash memory. This saves on expensive RAM space, which gives you comparable performance at a lower cost for large datasets.
+

For more info, see [Auto Tiering]({{< relref "/operate/rs/databases/auto-tiering/" >}}).

## Shard placement

The location of the primary and replica shards on the cluster nodes can impact your database performance.
Primary shards and their corresponding replica shards are always placed on separate nodes for data resiliency and high availability.
The shard placement policy helps to maintain optimal performance and resiliency.

Redis Enterprise Software has two shard placement policies available:

- **dense**: puts as many shards as possible on the smallest number of nodes
- **sparse**: spreads the shards across as many nodes as possible

For more info about the shard placement policy, see [Shard placement policy]({{< relref "/operate/rs/databases/memory-performance/shard-placement-policy.md" >}}).

## Metrics

From the Redis Enterprise Software Cluster Manager UI, you can monitor the performance of your clusters, nodes, databases, and shards with real-time metrics. You can also enable alerts for node, cluster, or database events such as high memory usage or throughput.

With the Redis Enterprise Software API, you can also integrate Redis Enterprise metrics into other monitoring environments, such as Prometheus.

For more info about monitoring with Redis Enterprise Software, see [Monitoring with metrics and alerts]({{< relref "/operate/rs/monitoring" >}}) and [Memory statistics]({{< relref "/operate/rs/databases/memory-performance/memory-limit#memory-metrics" >}}).

## Scaling databases

Each Redis Enterprise cluster can contain multiple databases. In Redis,
databases represent data that belongs to a single application, tenant, or
microservice. Redis Enterprise is built to scale to 100s of databases
per cluster to provide flexible and efficient multi-tenancy models.

Each database can contain a few or many Redis shards. Sharding is
transparent to Redis applications. Master shards in the database process
data operations for a given subset of keys. The number of shards per
database is configurable and depends on the throughput needs of the
applications. Databases in Redis Enterprise can be resharded into more
Redis shards to scale throughput while maintaining sub-millisecond
latencies. Resharding is performed without downtime.

{{< image filename="/images/rs/sharding.png" >}}

Redis Enterprise places master shards and replicas on separate
nodes, racks, and zones, and uses in-memory replication to protect data
against failures.

In Redis Enterprise, each database has a quota of RAM. The quota cannot
exceed the limits of the RAM available on the node. However, with Redis
Enterprise Flash, RAM is extended to the local flash drive (SATA, NVMe
SSDs, etc.). The total quota of the database can take advantage of both
RAM and the flash drive. The administrator can choose the RAM vs Flash ratio
and adjust it at any time during the lifetime of the database without
downtime.

With Auto Tiering, instead of storing all keys and data for a
given shard in RAM, less frequently accessed values are pushed to flash.
If applications need to access a value that is in flash, Redis
Enterprise automatically brings the value into RAM. Depending on the
flash hardware in use, applications experience slightly higher latency
when bringing values back into RAM from flash. However, subsequent
accesses to the same value are fast once the value is in RAM.
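For example, a database's memory quota can be changed while it is running through the REST API. This is only a sketch, assuming a database with ID 1; `memory_size` is the bdbs object field for the memory limit in bytes, and the credentials and FQDN are placeholders:

```sh
# Raise the memory limit of database 1 to 2 GB (memory_size is in bytes).
curl -k -u "<username>:<password>" -X PUT \
  -H "Content-Type: application/json" \
  -d '{"memory_size": 2147483648}' \
  https://<cluster-fqdn>:9443/v1/bdbs/1
```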
+ +## Client-side caching + +Client-side caching allows Redis clients to store a subset of data in a local cache and avoid sending repeated requests to the Redis database. When used to cache frequently accessed data, this technique can improve performance by decreasing network traffic, latency, and load on the database. For more information about client-side caching, see the [client-side caching introduction]({{}}). + +Redis Software supports client-side caching for databases with Redis versions 7.4 and later. See [Client-side caching compatibility with Redis Software]({{}}) for more information about compatibility and configuration options. +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: This page will help you find database management information in the Databases + section. +hideListLinks: false +linktitle: Databases +title: Manage databases +weight: 37 +--- + +You can manage your Redis Enterprise Software databases with several different tools: + +- Cluster Manager UI (the web-based user interface) +- Command-line tools ([`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}), [`redis-cli`]({{< relref "/develop/tools/cli" >}}), [`crdb-cli`]({{< relref "/operate/rs/references/cli-utilities/crdb-cli" >}})) +- [REST API]({{< relref "/operate/rs/references/rest-api/_index.md" >}}) + + +--- +alwaysopen: false +categories: +- docs +- operate +- rs +db_type: database +description: How to migrate database shards to other nodes in a Redis Software cluster. +linkTitle: Migrate shards +title: Migrate database shards +toc: 'true' +weight: 32 +--- + +To migrate database shards to other nodes in the cluster, you can use the [`rladmin migrate`]({{}}) command or [REST API requests]({{}}). + +## Use cases for shard migration + +Migrate database shards to a different node in the following scenarios: + +- Before node removal. + +- To balance the database manually in case of latency issues or uneven load distribution across nodes. + +- To manage node resources, such as memory usage. + +## Considerations for shard migration + +For databases with replication: + +- Migrating a shard will not cause disruptions since a primary shard will still be available. + +- If you try to migrate a primary shard, it will be demoted to a replica shard and a replica shard will be promoted to primary before the migration. If you set `"preserve_roles": true` in the request, a second failover will occur after the migration finishes to change the migrated shard's role back to primary. + +For databases without replication, the migrated shard will not be available until the migration is done. + +Connected clients shouldn't be disconnected in either case. + +If too many primary shards are placed on the same node, it can impact database performance. + +## Migrate specific shard + +To migrate a specific database shard, use one of the following methods: + +- [`rladmin migrate shard`]({{}}): + + ```sh + rladmin migrate shard target_node + ``` + +- [Migrate shard]({{}}) REST API request: + + Specify the ID of the shard to migrate in the request path and the destination node's ID as the `target_node_uid` in the request body. See the [request reference]({{}}) for more options. + + ```sh + POST /v1/shards//actions/migrate + { + "target_node_uid": + } + ``` + + Example JSON response body: + + ```json + { + "action_uid": "", + "description": "Migrate was triggered" + } + ``` + + You can track the action's progress with a [`GET /v1/actions/`]({{}}) request. 
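For example, assuming the migrate request above returned an `action_uid`, you can poll the action until it completes; the credentials and cluster FQDN below are placeholders:

```sh
# Poll the status of the migration action returned by the migrate request.
curl -k -u "<username>:<password>" \
  https://<cluster-fqdn>:9443/v1/actions/<action_uid>
```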
+ +## Migrate multiple shards + +To migrate multiple database shards, use one of the following methods: + +- [`rladmin migrate shard`]({{}}): + + ```sh + rladmin migrate shard target_node + ``` + +- [Migrate multiple shards]({{}}) REST API request: + + Specify the IDs of the shards to migrate in the `shard_uids` list and the destination node's ID as the `target_node_uid` in the request body. See the [request reference]({{}}) for more options. + + ```sh + POST /v1/shards/actions/migrate + { + "shard_uids": ["","",""], + "target_node_uid": + } + ``` + + Example JSON response body: + + ```json + { + "action_uid": "", + "description": "Migrate was triggered" + } + ``` + + You can track the action's progress with a [`GET /v1/actions/`]({{}}) request. + +## Migrate all shards from a node + +To migrate all shards from a specific node to another node, run [`rladmin migrate all_shards`]({{}}): + +```sh +rladmin migrate node all_shards target_node +``` + +## Migrate primary shards + +You can use the [`rladmin migrate all_master_shards`]({{}}) command to migrate all primary shards for a specific database or node to another node in the cluster. + +To migrate a specific database's primary shards: + +```sh +rladmin migrate db db: all_master_shards target_node +``` + +To migrate all primary shards from a specific node: + +```sh +rladmin migrate node all_master_shards target_node +``` + +## Migrate replica shards + +You can use the [`rladmin migrate all_slave_shards`]({{}}) command to migrate all replica shards for a specific database or node to another node in the cluster. + +To migrate a specific database's replica shards: + +```sh +rladmin migrate db db: all_slave_shards target_node +``` + +To migrate all replica shards from a specific node: + +```sh +rladmin migrate node all_slave_shards target_node +``` +--- +Title: Considerations for planning Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: Information about Active-Active database to take into consideration while + planning a deployment, such as compatibility, limitations, and special configuration +linktitle: Planning considerations +weight: 22 +url: '/operate/rs/7.4/databases/active-active/planning/' +--- + +In Redis Enterprise, Active-Active geo-distribution is based on [conflict-free replicated data type (CRDT) technology](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type). Compared to databases without geo-distribution, Active-Active databases have more complex replication and networking, as well as a different data type. + +Because of the complexities of Active-Active databases, there are special considerations to keep in mind while planning your Active-Active database. + +See [Active-Active Redis]({{< relref "/operate/rs/7.4/databases/active-active/" >}}) for more information about geo-distributed replication. For more info on other high availability features, see [Durability and high availability]({{< relref "/operate/rs/7.4/databases/durability-ha/" >}}). + +## Participating clusters + +You need at least [two participating clusters]({{< relref "/operate/rs/7.4/clusters/new-cluster-setup" >}}) for an Active-Active database. If your database requires more than ten participating clusters, contact Redis support. You can [add or remove participating clusters]({{< relref "/operate/rs/7.4/databases/active-active/manage#participating-clusters/" >}}) after database creation. 
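+For example, adding a participating cluster to an existing Active-Active database from the command line is typically done with the `crdb-cli` utility. The following is only a sketch; the GUID, FQDN, and credentials are placeholders, and the available flags can vary between versions:
+
+```sh
+# Join an additional cluster to an existing Active-Active database (all values are placeholders)
+crdb-cli crdb add-instance \
+  --crdb-guid <crdb-guid> \
+  --instance fqdn=<new-cluster-fqdn>,username=<admin-user>,password=<admin-password>
+```
+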
+ +{{}} +If an Active-Active database [runs on flash memory]({{}}), you cannot add participating clusters that run on RAM only. +{{}} + +Changes made from the Cluster Manager UI to an Active-Active database configuration only apply to the cluster you are editing. For global configuration changes across all clusters, use the `crdb-cli` command-line utility. + +## Memory limits + +Database memory limits define the maximum size of your database across all database replicas and [shards]({{< relref "/operate/rs/7.4/references/terminology.md#redis-instance-shard" >}}) on the cluster. Your memory limit also determines the number of shards. + +Besides your dataset, the memory limit must also account for replication, Active-Active metadata, and module overhead. These features can increase your database size, sometimes increasing it by two times or more. + +Factors to consider when sizing your database: + +- **dataset size**: you want your limit to be above your dataset size to leave room for overhead. +- **database throughput**: high throughput needs more shards, leading to a higher memory limit. +- [**modules**]({{< relref "/operate/oss_and_stack/stack-with-enterprise" >}}): using modules with your database can consume more memory. +- [**database clustering**]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering.md" >}}): enables you to spread your data into shards across multiple nodes (scale out). +- [**database replication**]({{< relref "/operate/rs/7.4/databases/durability-ha/replication.md" >}}): enabling replication doubles memory consumption +- [**Active-Active replication**]({{< relref "/operate/rs/7.4/databases/active-active/_index.md" >}}): enabling Active-Active replication requires double the memory of regular replication, which can be up to two times (2x) the original data size per instance. +- [**database replication backlog**]({{< relref "/operate/rs/7.4/databases/active-active/manage#replication-backlog/" >}}) for synchronization between shards. By default, this is set to 1% of the database size. +- [**Active-Active replication backlog**]({{< relref "/operate/rs/7.4/databases/active-active/manage.md" >}}) for synchronization between clusters. By default, this is set to 1% of the database size. + +It's also important to know Active-Active databases have a lower threshold for activating the eviction policy, because it requires propagation to all participating clusters. The eviction policy starts to evict keys when one of the Active-Active instances reaches 80% of its memory limit. + +For more information on memory limits, see [Memory and performance]({{< relref "/operate/rs/7.4/databases/memory-performance/" >}}) or [Database memory limits]({{< relref "/operate/rs/7.4/databases/memory-performance/memory-limit.md" >}}). + +## Networking + +Network requirements for Active-Active databases include: + +- A VPN between each network that hosts a cluster with an instance (if your database spans WAN). +- A network connection to [several ports](#network-ports) on each cluster from all nodes in all participating clusters. +- A [network time service](#network-time-service) running on each node in all clusters. + +Networking between the clusters must be configured before creating an Active-Active database. The setup will fail if there is no connectivity between the clusters. + +### Network ports + +Every node must have access to the REST API ports of every other node as well as other ports for proxies, VPNs, and the Cluster Manager UI. 
See [Network port configurations]({{< relref "/operate/rs/7.4/networking/port-configurations.md" >}}) for more details. These ports should be allowed through firewalls that may be positioned between the clusters. + +### Network Time Service {#network-time-service} + +Active-Active databases require a time service like NTP or Chrony to make sure the clocks on all cluster nodes are synchronized. +This is critical to avoid problems with internal cluster communications that can impact your data integrity. + +See [Synchronizing cluster node clocks]({{< relref "/operate/rs/7.4/clusters/configure/sync-clocks.md" >}}) for more information. + +## Redis modules {#redis-modules} + +Several Redis modules are compatible with Active-Active databases. Find the list of [compatible Redis modules]({{< relref "/operate/oss_and_stack/stack-with-enterprise/enterprise-capabilities" >}}). +{{< note >}} +Starting with v6.2.18, you can index, query, and perform full-text searches of nested JSON documents in Active-Active databases by combining RedisJSON and RediSearch. +{{< /note >}} + +## Limitations + +Active-Active databases have the following limitations: + +- An existing database can't be changed into an Active-Active database. To move data from an existing database to an Active-Active database, you must [create a new Active-Active database]({{< relref "/operate/rs/7.4/databases/active-active/create.md" >}}) and [migrate the data]({{< relref "/operate/rs/7.4/databases/import-export/migrate-to-active-active.md" >}}). +- [Discovery service]({{< relref "/operate/rs/7.4/databases/durability-ha/discovery-service.md" >}}) is not supported with Active-Active databases. Active-Active databases require FQDNs or [mDNS]({{< relref "/operate/rs/7.4/networking/mdns.md" >}}). +- The `FLUSH` command is not supported from the CLI. To flush your database, use the API or Cluster Manager UI. +- The `UNLINK` command is a blocking command for all types of keys. +- Cross slot multi commands (such as `MSET`) are not supported with Active-Active databases. +- The hashing policy can't be changed after database creation. +- If an Active-Active database [runs on flash memory]({{}}), you cannot add participating clusters that run on RAM only. +--- +Title: Application failover with Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: How to failover your application to connect to a remote replica. +linkTitle: App failover +weight: 99 +url: '/operate/rs/7.4/databases/active-active/develop/app-failover-active-active/' +--- +Active-Active Redis deployments don't have a built-in failover or failback mechanism for application connections. +An application deployed with an Active-Active database connects to a replica of the database that is geographically nearby. +If that replica is not available, the application can failover to a remote replica, and failback again if necessary. +In this article we explain how this process works. + +Active-Active connection failover can improve data availability, but can negatively impact data consistency. +Active-Active replication, like Redis replication, is asynchronous. +An application that fails over to another replica can miss write operations. +If the failed replica saved the write operations in persistent storage, +then the write operations are processed when the failed replica recovers. + +## Detecting Failure + +Your application can detect two types of failure: + +1. **Local failures** - The local replica is down or otherwise unavailable +1. 
**Replication failures** - The local replica is available but fails to replicate to or from remote replicas + +### Local Failures + +Local failure is detected when the application is unable to connect to the database endpoint for any reason. Reasons for a local failure can include: multiple node failures, configuration errors, connection refused, connection timed out, unexpected protocol level errors. + +### Replication Failures + +Replication failures are more difficult to detect reliably without causing false positives. Replication failures can include: network split, replication configuration issues, remote replica failures. + +The most reliable method for health-checking replication is by using the Redis publish/subscribe (pub/sub) mechanism. + +{{< note >}} +Note that this document does not suggest that Redis pub/sub is reliable in the common sense. Messages can get lost in certain conditions, but that is acceptable in this case because typically the application determines that replication is down only after not being able to deliver a number of messages over a period of time. +{{< /note >}} + +When you use the pub/sub data type to detect failures, the application: + +1. Connects to all replicas and subscribes to a dedicated channel for each replica. +1. Connects to all replicas and periodically publishes a uniquely identifiable message. +1. Monitors received messages and ensures that it is able to receive its own messages within a predetermined window of time. + +You can also use known dataset changes to monitor the reliability of the replication stream, +but pub/sub is preferred method because: + +1. It does not involve dataset changes. +1. It does not make any assumptions about the dataset. +1. Pub/sub messages are delivered as replicated effects and are a more reliable indicator of a live replication link. In certain cases, dataset keys may appear to be modified even if the replication link fails. This happens because keys may receive updates through full-state replication (re-sync) or through online replication of effects. + +## Impact of sharding on failure detection + +If your sharding configuration is symmetric, make sure to use at least one key (PUB/SUB channels or real dataset key) per shard. Shards are replicated individually and are vulnerable to failure. Symmetric sharding configurations have the same number of shards and hash slots for all replicas. +We do not recommend an asymmetric sharding configuration, which requires at least one key per hash slot that intersects with a pair of shards. + +To make sure that there is at least one key per shard, the application should: + +1. Use the Cluster API to retrieve the database sharding configuration. +1. Compute a number of key names, such that there is one key per shard. +1. Use those key names as channel names for the pub/sub mechanism. + +### Failing over + +When the application needs to failover to another replica, it should simply re-establish its connections with the endpoint on the remote replica. Because Active/Active and Redis replication are asynchronous, the remote endpoint may not have all of the locally performed and acknowledged writes. + +It's best if your application doesn't read its own recent writes. Those writes can be either: + +1. Lost forever, if the local replica has an event such as a double failure or loss of persistent files. +1. Temporarily unavailable, but will be available at a later time if the local replica's failure is temporary. 
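+As a rough illustration, the pub/sub health check described earlier can be emulated with `redis-cli`. The endpoints, port, and channel name below are placeholders; a real application would hold a subscription on every replica and verify that its own probe messages arrive within a bounded time window:
+
+```sh
+# Subscribe to a dedicated health-check channel on one replica (placeholder endpoint)
+redis-cli -h redis-east.example.com -p 12000 SUBSCRIBE app:replication-health
+
+# From another connection, periodically publish a uniquely identifiable probe to each replica
+redis-cli -h redis-west.example.com -p 12000 PUBLISH app:replication-health "probe-$(hostname)-$(date +%s)"
+```
+
+If the subscriber stops receiving its own probes from a remote replica within the expected window, the application can treat that replication link as failed and start the failover described above.
+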
+ + + +## Failback decision + +Your application can use the same checks described above to continue monitoring the state of the failed replica after failover. + +To monitor the state of a replica during the failback process, you must make sure the replica is available, re-synced with the remote replicas, and not in stale mode. The PUB/SUB mechanism is an effective way to monitor this. + +Dataset-based mechanisms are potentially less reliable for several reasons: +1. In order to determine that a local replica is not stale, it is not enough to simply read keys from it. You must also attempt to write to it. +1. As stated above, remote writes for some keys appear in the local replica before the replication link is back up and while the replica is still in stale mode. +1. A replica that was never written to never becomes stale, so on startup it is immediately ready but serves stale data for a longer period of time. + +## Replica Configuration Changes + +All failover and failback operations should be done strictly on the application side, and should not involve changes to the Active-Active configuration. +The only valid case for re-configuring the Active-Active deployment and removing a replica is when memory consumption becomes too high because garbage collection cannot be performed. +Once a replica is removed, it can only be re-joined as a new replica and it loses any writes that were not converged. +--- +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Overview of how developing applications differs for Active-Active databases + from standalone Redis databases. +linkTitle: Develop for Active-Active +title: Develop applications with Active-Active databases +weight: 10 +url: '/operate/rs/7.4/databases/active-active/develop/develop-for-aa/' +--- +Developing geo-distributed, multi-master applications can be difficult. +Application developers may have to understand a large number of race +conditions between updates to various sites, network, and cluster +failures that could reorder the events and change the outcome of the +updates performed across geo-distributed writes. + +Active-Active databases (formerly known as CRDB) are geo-distributed databases that span multiple Redis Enterprise Software (RS) clusters. +Active-Active databases depend on multi-master replication (MMR) and Conflict-free +Replicated Data Types (CRDTs) to power a simple development experience +for geo-distributed applications. Active-Active databases allow developers to use existing +Redis data types and commands, but understand the developers intent and +automatically handle conflicting concurrent writes to the same key +across multiple geographies. For example, developers can simply use the +INCR or INCRBY method in Redis in all instances of the geo-distributed +application, and Active-Active databases handle the additive nature of INCR to reflect the +correct final value. The following example displays a sequence of events +over time : t1 to t9. This Active-Active database has two member Active-Active databases : member CRDB1 and +member CRDB2. The local operations executing in each member Active-Active database is +listed under the member Active-Active database name. The "Sync" even represent the moment +where synchronization catches up to distribute all local member Active-Active database +updates to other participating clusters and other member Active-Active databases. 
+ +| **Time** | **Member CRDB1** | **Member CRDB2** | +| :------: | :------: | :------: | +| t1 | INCRBY key1 7 | | +| t2 | | INCRBY key1 3 | +| t3 | GET key1
7 | GET key1
3 | +| t4 | — Sync — | — Sync — | +| t5 | GET key1
10 | GET key1
10 | +| t6 | DECRBY key1 3 | | +| t7 | | INCRBY key1 6 | +| t8 | — Sync — | — Sync — | +| t9 | GET key1
13 | GET key1
13 | + +Databases provide various approaches to address some of these concerns: + +- Active-Passive Geo-distributed deployments: With active-passive + distributions, all writes go to an active cluster. Redis Enterprise + provides a "Replica Of" capability that provides a similar approach. + This can be employed when the workload is heavily balanced towards + read and few writes. However, WAN performance and availability + is quite flaky and traveling large distances for writes take away + from application performance and availability. +- Two-phase Commit (2PC): This approach is designed around a protocol + that commits a transaction across multiple transaction managers. + Two-phase commit provides a consistent transactional write across + regions but fails transactions unless all participating transaction + managers are "available" at the time of the transaction. The number + of messages exchanged and its cross-regional availability + requirement make two-phase commit unsuitable for even moderate + throughputs and cross-geo writes that go over WANs. +- Sync update with Quorum-based writes: This approach synchronously + coordinates a write across majority number of replicas across + clusters spanning multiple regions. However, just like two-phase + commit, number of messages exchanged and its cross-regional + availability requirement make geo-distributed quorum writes + unsuitable for moderate throughputs and cross geo writes that go + over WANs. +- Last-Writer-Wins (LWW) Conflict Resolution: Some systems provide + simplistic conflict resolution for all types of writes where the + system clocks are used to determine the winner across conflicting + writes. LWW is lightweight and can be suitable for simpler data. + However, LWW can be destructive to updates that are not necessarily + conflicting. For example adding a new element to a set across two + geographies concurrently would result in only one of these new + elements appearing in the final result with LWW. +- MVCC (multi-version concurrency control): MVCC systems maintain + multiple versions of data and may expose ways for applications to + resolve conflicts. Even though MVCC system can provide a flexible + way to resolve conflicting writes, it comes at a cost of great + complexity in the development of a solution. + +Even though types and commands in Active-Active databases look identical to standard Redis +types and commands, the underlying types in RS are enhanced to maintain +more metadata to create the conflict-free data type experience. This +section explains what you need to know about developing with Active-Active databases on +Redis Enterprise Software. + +## Lua scripts + +Active-Active databases support Lua scripts, but unlike standard Redis, Lua scripts always +execute in effects replication mode. There is currently no way to +execute them in script-replication mode. + +## Eviction + +The default policy for Active-Active databases is _noeviction_ mode. Redis Enterprise version 6.0.20 and later support all eviction policies for Active-Active databases, unless [Auto Tiering]({{< relref "/operate/rs/7.4/databases/auto-tiering" >}})(previously known as Redis on Flash) is enabled. +For details, see [eviction for Active-Active databases]({{< relref "/operate/rs/7.4/databases/memory-performance/eviction-policy#active-active-database-eviction" >}}). + + +## Expiration + +Expiration is supported with special multi-master semantics. 
+ +If a key's expiration time is changed at the same time on different +members of the Active-Active database, the longer extended time set via TTL on a key is +preserved. As an example: + +If this command was performed on key1 on cluster #1 + +```sh +127.0.0.1:6379> EXPIRE key1 10 +``` + +And if this command was performed on key1 on cluster #2 + +```sh +127.0.0.1:6379> EXPIRE key1 50 +``` + +The EXPIRE command setting the key to 50 would win. + +And if this command was performed on key1 on cluster #3: + +```sh +127.0.0.1:6379> PERSIST key1 +``` + +It would win out of the three clusters hosting the Active-Active database as it sets the +TTL on key1 to an infinite time. + +The replica responsible for the "winning" expire value is also +responsible to expire the key and propagate a DEL effect when this +happens. A "losing" replica is from this point on not responsible +for expiring the key, unless another EXPIRE command resets the TTL. +Furthermore, a replica that is NOT the "owner" of the expired value: + +- Silently ignores the key if a user attempts to access it in READ + mode, e.g. treating it as if it was expired but not propagating a + DEL. +- Expires it (sending a DEL) before making any modifications if a user + attempts to access it in WRITE mode. + + {{< note >}} +Expiration values are in the range of [0, 2^49] for Active-Active databases and [0, 2^64] for non Active-Active databases. + {{< /note >}} + +## Out-of-Memory (OOM) {#outofmemory-oom} + +If a member Active-Active database is in an out of memory situation, that member is marked +"inconsistent" by RS, the member stops responding to user traffic, and +the syncer initiates full reconciliation with other peers in the Active-Active database. + +## Active-Active Database Key Counts + +Keys are counted differently for Active-Active databases: + +- DBSIZE (in `shard-cli dbsize`) reports key header instances + that represent multiple potential values of a key before a replication conflict is resolved. +- expired_keys (in `bdb-cli info`) can be more than the keys count in DBSIZE (in `shard-cli dbsize`) + because expires are not always removed when a key becomes a tombstone. + A tombstone is a key that is logically deleted but still takes memory + until it is collected by the garbage collector. +- The Expires average TTL (in `bdb-cli info`) is computed for local expires only. + +## INFO + +The INFO command has an additional crdt section which provides advanced +troubleshooting information (applicable to support etc.): + +| **Section** | **Field** | **Description** | +| ------ | ------ | ------ | +| **CRDT Context** | crdt_config_version | Currently active Active-Active database configuration version. | +| | crdt_slots | Hash slots assigned and reported by this shard. | +| | crdt_replid | Unique Replica/Shard IDs. | +| | crdt_clock | Clock value of local vector clock. | +| | crdt_ovc | Locally observed Active-Active database vector clock. | +| **Peers** | A list of currently connected Peer Replication peers. This is similar to the slaves list reported by Redis. | | +| **Backlogs** | A list of Peer Replication backlogs currently maintained. Typically in a full mesh topology only a single backlog is used for all peers, as the requested Ids are identical. | | +| **CRDT Stats** | crdt_sync_full | Number of inbound full synchronization processes performed. | +| | crdt_sync_partial_ok | Number of partial (backlog based) re-synchronization processes performed. 
| +| | crdt_sync_partial-err | Number of partial re-synchronization processes failed due to exhausted backlog. | +| | crdt_merge_reqs | Number of inbound merge requests processed. | +| | crdt_effect_reqs | Number of inbound effect requests processed. | +| | crdt_ovc_filtered_effect_reqs | Number of inbound effect requests filtered due to old vector clock. | +| | crdt_gc_pending | Number of elements pending garbage collection. | +| | crdt_gc_attempted | Number of attempts to garbage collect tombstones. | +| | crdt_gc_collected | Number of tombstones garbaged collected successfully. | +| | crdt_gc_gvc_min | The minimal globally observed vector clock, as computed locally from all received observed clocks. | +| | crdt_stale_released_with_merge | Indicates last stale flag transition was a result of a complete full sync. | +| **CRDT Replicas** | A list of crdt_replica \ entries, each describes the known state of a remote instance with the following fields: | | +| | config_version | Last configuration version reported. | +| | shards | Number of shards. | +| | slots | Total number of hash slots. | +| | slot_coverage | A flag indicating remote shards provide full coverage (i.e. all shards are alive). | +| | max_ops_lag | Number of local operations not yet observed by the least updated remote shard | +| | min_ops_lag | Number of local operations not yet observed by the most updated remote shard | +--- +Title: Sorted sets in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using sorted sets with an Active-Active database. +linkTitle: Sorted sets +weight: $weight +url: '/operate/rs/7.4/databases/active-active/develop/data-types/sorted-sets/' +--- +{{< note >}} +[Redis Geospatial (Geo)]({{< relref "/commands/GEOADD" >}}) is based on Sorted Sets, so the same Active-Active database development instructions apply to Geo. +{{< /note >}} + +Similar to Redis Sets, Redis Sorted Sets are non-repeating collections +of Strings. The difference between the two is that every member of a +Sorted Set is associated with a score used to order the Sorted Set from +lowest to highest. While members are unique, they may have the same +score. + +With Sorted Sets, you can quickly add, remove or update elements as +well as get ranges by score or by rank (position). Sorted Sets in Active-Active databases +behave the same and maintain additional metadata to handle concurrent +conflicting writes. Conflict resolution is done in two +phases: + +1. First, the database resolves conflict at the set level using "OR + Set" (Observed-Remove Set). With OR-Set behavior, writes across + multiple Active-Active database instances are typically unioned except in cases of + conflicts. Conflicting writes can happen when an Active-Active database instance + deletes an element while the other adds or updates the same element. + In this case, an observed Remove rule is followed, and only + instances it has already seen are removed. In all other cases, the + Add / Update element wins. +1. Second, the database resolves conflict at the score level. In this + case, the score is treated as a counter and applies the same + conflict resolution as regular counters. 
+ +See the following examples to get familiar with Sorted Sets' +behavior in Active-Active database: + +Example of Simple Sorted Set with No +Conflict: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | ZADD Z 1.1 x | | +| t2 | — Sync — | — Sync — | +| t3 | | ZADD Z 1.2 y | +| t4 | — Sync — | — Sync — | +| t5 | ZRANGE Z 0 -1 => x y | ZRANGE Z 0 -1 => x y | + +**Explanation**: +When adding two different elements to a Sorted Set from different +replicas (in this example, x with score 1.1 was added by Instance 1 to +Sorted Set Z, and y with score 1.2 was added by Instance 2 to Sorted Set +Z) in a non-concurrent manner (i.e. each operation happened separately +and after both instances were in sync), the end result is a Sorted +Set including both elements in each Active-Active database instance. +Example of Sorted Set and Concurrent +Add: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | ZADD Z 1.1 x | | +| t2 | | ZADD Z 2.1 x | +| t3 | ZSCORE Z x => 1.1 | ZSCORE Z x => 2.1 | +| t4 | — Sync — | — Sync — | +| t5 | ZSCORE Z x => 2.1 | ZSCORE Z x => 2.1 | + +**Explanation**: +When concurrently adding an element x to a Sorted Set Z by two different +Active-Active database instances (Instance 1 added score 1.1 and Instance 2 added score +2.1), the Active-Active database implements Last Write Win (LWW) to determine the score of +x. In this scenario, Instance 2 performed the ZADD operation at time +t2\>t1 and therefore the Active-Active database sets the score 2.1 to +x. + +Example of Sorted Set with Concurrent Add Happening at the Exact Same +Time: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | ZADD Z 1.1 x | ZADD Z 2.1 x | +| t2 | ZSCORE Z x => 1.1 | ZSCORE Z x => 2.1 | +| t3 | — Sync — | — Sync — | +| t4 | ZSCORE Z x => 1.1 | ZSCORE Z x => 1.1 | + +**Explanation**: +The example above shows a relatively rare situation, in which two Active-Active database +instances concurrently added the same element x to a Sorted Set at the +same exact time but with a different score, i.e. Instance 1 added x with +a 1.1 score and Instance 2 added x with a 2.1 score. After syncing, the +Active-Active database realized that both operations happened at the same time and +resolved the conflict by arbitrarily (but consistently across all Active-Active database +instances) giving precedence to Instance 1. +Example of Sorted Set with Concurrent Counter +Increment: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | ZADD Z 1.1 x | | +| t2 | — Sync — | — Sync — | +| t3 | ZINCRBY Z 1.0 x | ZINCRBY Z 1.0 x | +| t4 | — Sync — | — Sync — | +| t5 | ZSCORE Z x => 3.1 | ZSCORE Z x => 3.1 | + +**Explanation**: +The result is the sum of all +ZINCRBY +operations performed by all Active-Active database instances. + +Example of Removing an Element from a Sorted +Set: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | ZADD Z 4.1 x | | +| t2 | — Sync — | — Sync — | +| t3 | ZSCORE Z x => 4.1 | ZSCORE Z x => 4.1 | +| t4 | ZREM Z x | ZINCRBY Z 2.0 x | +| t5 | ZSCORE Z x => nill | ZSCORE Z x => 6.1 | +| t6 | — Sync — | — Sync — | +| t7 | ZSCORE Z x => 2.0 | ZSCORE Z x => 2.0 | + +**Explanation**: +At t4 - t5, concurrent ZREM and ZINCRBY operations ran on Instance 1 +and Instance 2 respectively. 
Before the instances were in sync, the ZREM +operation could only delete what had been seen by Instance 1, so +Instance 2 was not affected. Therefore, the ZSCORE operation shows the +local effect on x. At t7, after both instances were in-sync, the Active-Active database +resolved the conflict by subtracting 4.1 (the value of element x in +Instance 1) from 6.1 (the value of element x in Instance 2). +--- +Title: Strings and bitfields in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using strings and bitfields with an Active-Active database. +linkTitle: Strings and bitfields +weight: $weight +url: '/operate/rs/7.4/databases/active-active/develop/data-types/strings/' +--- +Active-Active databases support both strings and bitfields. + +{{}} +Active-Active **bitfield** support was added in RS version 6.0.20. +{{}} + +Changes to both of these data structures will be replicated across Active-Active member databases. + +## Replication semantics + +Except in the case of [string counters]({{< relref "#string-counter-support" >}}) (see below), both strings and bitfields are replicated using a "last write wins" approach. The reason for this is that strings and bitfields are effectively binary objects. So, unlike with lists, sets, and hashes, the conflict resolution semantics of a given operation on a string or bitfield are undefined. + +### How "last write wins" works + +A wall-clock timestamp (OS time) is stored in the metadata of every string +and bitfield operation. If the replication syncer cannot determine the order of operations, +the value with the latest timestamp wins. This is the only case with Active-Active databases where OS time is used to resolve a conflict. + +Here's an example where an update happening to the same key at a later +time (t2) wins over the update at t1. + +| **Time** | **Region 1** | **Region 2** | +| :------: | :------: | :------: | +| t1 | SET text “a” | | +| t2 | | SET text “b” | +| t3 | — Sync — | — Sync — | +| t4 | SET text “c” | | +| t5 | — Sync — | — Sync — | +| t6 | | SET text “d” | + +### String counter support + +When you're using a string as counter (for instance, with the [INCR]({{< relref "/commands/incr" >}}) or [INCRBY]({{< relref "/commands/incrby" >}}) commands), +then conflicts will be resolved semantically. + +On conflicting writes, counters accumulate the total counter operations +across all member Active-Active databases in each sync. + +Here's an example of how counter +values works when synced between two member Active-Active databases. With +each sync, the counter value accumulates the private increment and +decrements of each site and maintain an accurate counter across +concurrent writes. + +| **Time** | **Region 1** | **Region 2** | +| :------: | :------: | :------: | +| t1 | INCRBY counter 7 | | +| t2 | | INCRBY counter 3 | +| t3 | GET counter
7 | GET counter
3 | +| t4 | — Sync — | — Sync — | +| t5 | GET counter
10 | GET counter
10 | +| t6 | DECRBY counter 3 | | +| t7 | | INCRBY counter 6 | +| t8 | — Sync — | — Sync — | +| t9 | GET counter
13 | GET counter
13 | + +{{< note >}} +Active-Active databases support 59-bit counters. +This limitation is to protect from overflowing a counter in a concurrent operation. +{{< /note >}} +--- +Title: Hashes in an Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using hashes with an Active-Active database. +linkTitle: Hashes +weight: $weight +url: '/operate/rs/7.4/databases/active-active/develop/data-types/hashes/' +--- +Hashes are great for structured data that contain a map of fields and +values. They are used for managing distributed user or app session +state, user preferences, form data and so on. Hash fields contain string +type and string types operate just like the standard Redis string types +when it comes to CRDTs. Fields in hashes can be initialized as a string +using HSET or HMSET or can be used to initialize counter types that are +numeric integers using HINCRBY or floats using HINCRBYFLOAT. + +Hashes in Active-Active databases behave the same and maintain additional metadata to +achieve an "OR-Set" behavior to handle concurrent conflicting writes. +With the OR-Set behavior, writes to add new fields across multiple Active-Active database +instances are typically unioned except in cases of conflicts. +Conflicting instance writes can happen when an Active-Active database instance deletes a +field while the other adds the same field. In this case and observed +remove rule is followed. That is, remove can only remove fields it has +already seen and in all other cases element add/update wins. + +Field values behave just like CRDT strings. String values can be types +string, counter integer based on the command used for initialization of +the field value. See "String Data Type in Active-Active databases" and "String Data Type +with Counter Value in Active-Active databases" for more details. + +Here is an example of an "add wins" case: + +| **Time** | **CRDB Instance1** | **CRDB Instance2** | +| ------: | :------: | :------: | +| t1 | HSET key1 field1 “a” | | +| t2 | | HSET key1 field2 “b” | +| t4 | - Sync - | - Sync - | +| t5 | HGETALL key1
1) “field2”
2) “b”
3) “field1”
4) “a” | HGETALL key1
1) “field2”
2) “b”
3) “field1”
4) “a” | +--- +Title: JSON in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using JSON with an Active-Active database. +linkTitle: JSON +weight: $weight +tocEmbedHeaders: true +url: '/operate/rs/7.4/databases/active-active/develop/data-types/json/' +--- +Active-Active databases support JSON data structures. + +The design is based on [A Conflict-Free Replicated JSON Datatype](https://arxiv.org/abs/1608.03960) by Kleppmann and Beresford, but the implementation includes some changes. Several [conflict resolution rule](#conflict-resolution-rules) examples were adapted from this paper as well. + +## Prerequisites + +To use JSON in an Active-Active database, you must enable JSON during database creation. + +Active-Active Redis Cloud databases add JSON by default. See [Create an Active-Active subscription]({{< relref "/operate/rc/databases/create-database/create-active-active-database#select-capabilities" >}}) in the Redis Cloud documentation for details. + +In Redis Enterprise Software, JSON is not enabled by default for Active-Active databases. See [Create an Active-Active JSON database]({{< relref "/operate/oss_and_stack/stack-with-enterprise/json/active-active#create-an-active-active-json-database" >}}) in the Redis Stack and Redis Enterprise documentation for instructions. + +{{}} + +{{}} +--- +Title: Sets in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using sets with an Active-Active database. +linkTitle: Sets +weight: $weight +url: '/operate/rs/7.4/databases/active-active/develop/data-types/sets/' +--- +A Redis set is an unordered collection of strings. It is possible to +add, remove, and test for the existence of members with Redis commands. +A Redis set maintains a unique collection of elements. Sets can be great +for maintaining a list of events (click streams), users (in a group +conversation), products (in recommendation lists), engagements (likes, +shares) and so on. + +Sets in Active-Active databases behave the same and maintain additional metadata to +achieve an "OR-Set" behavior to handle concurrent conflicting +writes. With the OR-Set behavior, writes across multiple Active-Active database instances +are typically unioned except in cases of conflicts. Conflicting instance +writes can happen when a Active-Active database instance deletes an element while the +other adds the same element. In this case and observed remove rule is +followed. That is, remove can only remove instances it has already seen +and in all other cases element add wins. + +Here is an example of an "add wins" case: + +| **Time** | **CRDB Instance1** | **CRDB Instance2** | +| ------: | :------: | :------: | +| t1 | SADD key1 “a” | | +| t2 | | SADD key1 “b” | +| t3 | SMEMBERS key1 “a” | SMEMBERS key1 “b” | +| t4 | — Sync — | — Sync — | +| t3 | SMEMBERS key1 “a” “b” | SMEMBERS key1 “a” “b” | + +Here is an example of an "observed remove" case. + +| **Time** | **CRDB Instance1** | **CRDB Instance2** | +| ------: | :------: | :------: | +| t1 | SMEMBERS key1 “a” “b” | SMEMBERS key1 “a” “b” | +| t2 | SREM key1 “a” | SADD key1 “c” | +| t3 | SREM key1 “c” | | +| t4 | — Sync — | — Sync — | +| t3 | SMEMBERS key1 “c” “b” | SMEMBERS key1 “c” “b” | +--- +Title: Streams in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using streams with an Active-Active database. 
+linkTitle: Streams +weight: $weight +url: '/operate/rs/7.4/databases/active-active/develop/data-types/streams/' +--- +A [Redis Stream]({{< relref "/develop/data-types/streams" >}}) is a data structure that acts like an append-only log. +Each stream entry consists of: + +- A unique, monotonically increasing ID +- A payload consisting of a series key-value pairs + +You add entries to a stream with the XADD command. You access stream entries using the XRANGE, XREADGROUP, and XREAD commands (however, see the caveat about XREAD below). + +## Streams and Active-Active + +Active-Active databases allow you to write to the same logical stream from more than one region. +Streams are synchronized across the regions of an Active-Active database. + +In the example below, we write to a stream concurrently from two regions. Notice that after syncing, both regions have identical streams: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| Time | Region 1 | Region 2 |
+| ---- | -------- | -------- |
+| _t1_ | `XADD messages * text hello` | `XADD messages * text goodbye` |
+| _t2_ | `XRANGE messages - +` → [1589929244828-1] | `XRANGE messages - +` → [1589929246795-2] |
+| _t3_ | _— Sync —_ | _— Sync —_ |
+| _t4_ | `XRANGE messages - +` → [1589929244828-1, 1589929246795-2] | `XRANGE messages - +` → [1589929244828-1, 1589929246795-2] |
+
+Notice also that the synchronized streams contain no duplicate IDs. As long as you allow the database to generate your stream IDs, you'll never have more than one stream entry with the same ID.
+
+{{< note >}}
+Redis Open Source uses one radix tree (referred to as `rax` in the code base) to implement each stream. However, Active-Active databases implement a single logical stream using one `rax` per region.
+Each region adds entries only to its associated `rax` (but can remove entries from all `rax` trees).
+This means that XREAD and XREADGROUP iterate simultaneously over all `rax` trees and return the appropriate entry by comparing the entry IDs from each `rax`.
+{{< /note >}}
+
+### Conflict resolution
+
+Active-Active databases use an "observed-remove" approach to automatically resolve potential conflicts.
+
+With this approach, a delete only affects the locally observable data.
+
+In the example below, the stream `messages` is created at _t1_. At _t3_, the stream exists in two regions.
+
+| Time | Region 1 | Region 2 |
+| ---- | -------- | -------- |
+| _t1_ | `XADD messages * text hello` | |
+| _t2_ | _— Sync —_ | _— Sync —_ |
+| _t3_ | `XRANGE messages - +` → [1589929244828-1] | `XRANGE messages - +` → [1589929244828-1] |
+| _t4_ | `DEL messages` | `XADD messages * text goodbye` |
+| _t5_ | _— Sync —_ | _— Sync —_ |
+| _t6_ | `XRANGE messages - +` → [1589929246795-2] | `XRANGE messages - +` → [1589929246795-2] |
+
+At _t4_, the stream is deleted from Region 1. At the same time, a new entry (the one with ID `1589929246795-2` in the example) is added to the same stream at Region 2. After the sync, at _t6_, that entry exists in both regions. This is because the entry was not visible when the local stream was deleted at _t4_.
+
+### ID generation modes
+
+Usually, you should let the Redis stream generate its own entry IDs. You do this by specifying `*` as the ID in calls to XADD. However, you _can_ provide your own custom ID when adding entries to a stream.
+
+Because Active-Active databases replicate asynchronously, providing your own IDs can create streams with duplicate IDs. This can occur when you write to the same stream from multiple regions.
+
+| Time | Region 1 | Region 2 |
+| ---- | ------------------------------- | ------------------------------- |
+| _t1_ | `XADD x 100-1 f1 v1` | `XADD x 100-1 f1 v1` |
+| _t2_ | _— Sync —_ | _— Sync —_ |
+| _t3_ | `XRANGE x - +`
**→ [100-1, 100-1]** | `XRANGE x - +`
**→ [100-1, 100-1]** | + +In this scenario, two entries with the ID `100-1` are added at _t1_. After syncing, the stream `x` contains two entries with the same ID. + +{{< note >}} +Stream IDs in Redis Open Source consist of two integers separated by a dash ('-'). When the server generates the ID, the first integer is the current time in milliseconds, and the second integer is a sequence number. So, the format for stream IDs is MS-SEQ. +{{< /note >}} + +To prevent duplicate IDs and to comply with the original Redis streams design, Active-Active databases provide three ID modes for XADD: + +1. **Strict**: In _strict_ mode, XADD allows server-generated IDs (using the '`*`' ID specifier) or IDs consisting only of the millisecond (MS) portion. When the millisecond portion of the ID is provided, the ID's sequence number is calculated using the database's region ID. This prevents duplicate IDs in the stream. Strict mode rejects full IDs (that is, IDs containing both milliseconds and a sequence number). +1. **Semi-strict**: _Semi-strict_ mode is just like _strict_ mode except that it allows full IDs (MS-SEQ). Because it allows full IDs, duplicate IDs are possible in this mode. +1. **Liberal**: XADD allows any monotonically ascending ID. When given the millisecond portion of the ID, the sequence number will be set to `0`. This mode may also lead to duplicate IDs. + +The default and recommended mode is _strict_, which prevents duplicate IDs. + +{{% warning %}} +Why do you want to prevent duplicate IDs? First, XDEL, XCLAIM, and other commands can affect more than one entry when duplicate IDs are present in a stream. Second, duplicate entries may be removed if a database is exported or renamed. +{{% /warning %}} + +To change XADD's ID generation mode, use the `rladmin` command-line utility: + +Set _strict_ mode: +```sh +rladmin tune db crdb crdt_xadd_id_uniqueness_mode strict +``` + +Set _semi-strict_ mode: +```sh +rladmin tune db crdb crdt_xadd_id_uniqueness_mode semi-strict +``` + +Set _liberal_ mode: +```sh +rladmin tune db crdb crdt_xadd_id_uniqueness_mode liberal +``` + +### Iterating a stream with XREAD + +In Redis Open Source and in non-Active-Active databases, you can use XREAD to iterate over the entries in a Redis Stream. However, with an Active-Active database, XREAD may skip entries. This can happen when multiple regions write to the same stream. + +In the example below, XREAD skips entry `115-2`. + +| Time | Region 1 | Region 2 | +| ---- | -------------------------------------------------- | -------------------------------------------------- | +| _t1_ | `XADD x 110 f1 v1` | `XADD x 115 f1 v1` | +| _t2_ | `XADD x 120 f1 v1` | | +| _t3_ | `XADD x 130 f1 v1` | | +| _t4_ | `XREAD COUNT 2 STREAMS x 0`
**→ [110-1, 120-1]** | | +| _t5_ | _— Sync —_ | _— Sync —_ | +| _t6_ | `XREAD COUNT 2 STREAMS x 120-1`
**→ [130-1]** | | +| _t7_ | `XREAD STREAMS x 0`
**→[110-1, 115-2, 120-1, 130-1]** | `XREAD STREAMS x 0`
**→[110-1, 115-2, 120-1, 130-1]** | + + +You can use XREAD to reliably consume a stream only if all writes to the stream originate from a single region. Otherwise, you should use XREADGROUP, which always guarantees reliable stream consumption. + +## Consumer groups + +Active-Active databases fully support consumer groups with Redis Streams. Here is an example of creating two consumer groups concurrently: + +| Time | Region 1 | Region 2 | +| ---- | --------------------------- | --------------------------- | +| _t1_ | `XGROUP CREATE x group1 0` | `XGROUP CREATE x group2 0` | +| _t2_ | `XINFO GROUPS x`
**→ [group1]** | `XINFO GROUPS x`
**→ [group2]** | +| _t3_ | _— Sync —_ | — Sync — | +| _t4_ | `XINFO GROUPS x`
**→ [group1, group2]** | `XINFO GROUPS x`
**→ [group1, group2]** | + + +{{< note >}} +Redis Open Source uses one radix tree (`rax`) to hold the global pending entries list and another `rax` for each consumer's PEL. +The global PEL is a unification of all consumer PELs, which are disjoint. + +An Active-Active database stream maintains a global PEL and a per-consumer PEL for each region. + +When given an ID different from the special ">" ID, XREADGROUP iterates simultaneously over all of the PELs for all consumers. +It returns the next entry by comparing entry IDs from the different PELs. +{{< /note >}} + +### Conflict resolution + +The "delete wins" approach is a way to automatically resolve conflicts with consumer groups. +In case of concurrent consumer group operations, a delete will "win" over other concurrent operations on the same group. + +In this example, the DEL at _t4_ deletes both the observed `group1` and the non-observed `group2`: + +| Time | Region 1 | Region 2 | +| ---- | ----------------------- | ----------------------- | +| _t1_ | `XGROUP CREATE x group1 0` | | +| _t2_ | _— Sync —_ | _— Sync —_ | +| _t3_ | `XINFO GROUPS x`
**→ [group1]** | `XINFO GROUPS x`
**→ [group1]** | +| _t4_ | `DEL x` | `XGROUP CREATE x group2 0` | +| _t5_ | _— Sync —_ | _— Sync —_ | +| _t6_ | `EXISTS x`
**→ 0** | `EXISTS x`
**→ 0** | + +In this example, the XGROUP DESTROY at _t4_ affects both the observed `group1` created in Region 1 and the non-observed `group1` created in Region 3: + +| time | Region 1 | Region 2 | Region 3 | +| ---- | ----------------------- | ----------------------- | --------------------- | +| _t1_ | `XGROUP CREATE x group1 0` | | | +| _t2_ | _— Sync —_ | _— Sync —_ | | +| _t3_ | `XINFO GROUPS x`
**→ [group1]** | `XINFO GROUPS x`
**→ [group1]** | `XINFO GROUPS x`
**→ []** | +| _t4_ | | `XGROUP DESTROY x group1` | `XGROUP CREATE x group1 0` | +| _t5_ | _— Sync —_ | _— Sync — | — Sync — | +| _t6_ | `EXISTS x`
**→ 0** | `EXISTS x`
**→ 0** | `EXISTS x`
**→ 0** | + +### Group replication + +Calls to XREADGROUP and XACK change the state of a consumer group or consumer. However, it's not efficient to replicate every change to a consumer or consumer group. + +To maintain consumer groups in Active-Active databases with optimal performance: + +1. Group existence (CREATE/DESTROY) is replicated. +1. Most XACK operations are replicated. +1. Other operations, such as XGROUP, SETID, DELCONSUMER, are not replicated. + +For example: + +| Time | Region 1 | Region 2 | +| ---- | ------------------------------------------------- | ------------------------ | +| _t1_ | `XADD messages 110 text hello` | | +| _t2_ | `XGROUP CREATE messages group1 0` | | +| _t3_ | `XREADGROUP GROUP group1 Alice STREAMS messages >`
**→ [110-1]** | | +| _t4_ | _— Sync —_ | _— Sync —_ | +| _t5_ | `XRANGE messages - +`
**→ [110-1]** | XRANGE messages - +
**→ [110-1]** | +| _t6_ | `XINFO GROUPS messages`
**→ [group1]** | XINFO GROUPS messages
**→ [group1]** | +| _t7_ | `XINFO CONSUMERS messages group1`
**→ [Alice]** | XINFO CONSUMERS messages group1
**→ []** | +| _t8_ | `XPENDING messages group1 - + 1`
**→ [110-1]** | XPENDING messages group1 - + 1
**→ []** | + +Using XREADGROUP across regions can result in regions reading the same entries. +This is due to the fact that Active-Active Streams is designed for at-least-once reads or a single consumer. +As shown in the previous example, Region 2 is not aware of any consumer group activity, so redirecting the XREADGROUP traffic from Region 1 to Region 2 results in reading entries that have already been read. + +### Replication performance optimizations + +Consumers acknowledge messages using the XACK command. Each ack effectively records the last consumed message. This can result in a lot of cross-region traffic. To reduce this traffic, we replicate XACK messages only when all of the read entries are acknowledged. + +| Time | Region 1 | Region 2 | Explanation | +| ---- | --------------------------------------------------------------- | ------------ | --------------------------------------------------------------------------------------------------------------- | +| _t1_ | `XADD x 110-0 f1 v1` | | | +| _t2_ | `XADD x 120-0 f1 v1` | | | +| _t3_ | `XADD x 130-0 f1 v1` | | | +| _t4_ | `XGROUP CREATE x group1 0` | | | +| _t5_ | `XREADGROUP GROUP group1 Alice STREAMS x >`
**→ [110-0, 120-0, 130-0]** | | | +| _t6_ | `XACK x group1 110-0` | | | +| _t7_ | _— Sync —_ | _— Sync —_ | 110-0 and its preceding entries (none) were acknowledged. We replicate an XACK effect for 110-0. | +| _t8_ | `XACK x group1 130-0` | | | +| _t9_ | _— Sync —_ | _— Sync —_ | 130-0 was acknowledged, but not its preceding entries (120-0). We DO NOT replicate an XACK effect for 130-0 | +| _t10_ | `XACK x group1 120-0` | | | +| _t11_ | _— Sync —_ | _— Sync —_ | 120-0 and its preceding entries (110-0 through 130-0) were acknowledged. We replicate an XACK effect for 130-0. | + +In this scenario, if we redirect the XREADGROUP traffic from Region 1 to Region 2 we do not re-read entries 110-0, 120-0 and 130-0. +This means that the XREADGROUP does not return already-acknowledged entries. + +### Guarantees + +Unlike XREAD, XREADGOUP will never skip stream entries. +In traffic redirection, XREADGROUP may return entries that have been read but not acknowledged. It may also even return entries that have already been acknowledged. + +## Summary + +With Active-Active streams, you can write to the same logical stream from multiple regions. As a result, the behavior of Active-Active streams differs somewhat from the behavior you get with Redis Open Source. This is summarized below: + +### Stream commands + +1. When using the _strict_ ID generation mode, XADD does not permit full stream entry IDs (that is, an ID containing both MS and SEQ). +1. XREAD may skip entries when iterating a stream that is concurrently written to from more than one region. For reliable stream iteration, use XREADGROUP instead. +1. XSETID fails when the new ID is less than current ID. + +### Consumer group notes + +The following consumer group operations are replicated: + +1. Consecutive XACK operations +1. Consumer group creation and deletion (that is, XGROUP CREATE and XGROUP DESTROY) + +All other consumer group metadata is not replicated. + +A few other notes: + +1. XGROUP SETID and DELCONSUMER are not replicated. +1. Consumers exist locally (XREADGROUP creates a consumer implicitly). +1. Renaming a stream (using RENAME) deletes all consumer group information. +--- +Title: Lists in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using list with an Active-Active database. +linkTitle: Lists +weight: $weight +url: '/operate/rs/7.4/databases/active-active/develop/data-types/lists/' +--- +Redis lists are simply lists of strings, sorted by insertion order. It +is possible to add elements to a Redis List that push new elements to +the head (on the left) or to the tail (on the right) of the list. Redis +lists can be used to easily implement queues (using LPUSH and RPOP, for +example) and stacks (using LPUSH and LPOP, for +example). + +Lists in Active-Active databases are just the same as regular Redis Lists. See the +following examples to get familiar with Lists' behavior in an +Active-Active database. + +Simple Lists +example: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | LPUSH mylist “hello” | | +| t2 | — Sync — | — Sync — | +| t3 | | LPUSH mylist “world” | +| t4 | — Sync — | — Sync — | +| t5 | LRANGE mylist 0 -1 =>“world” “hello” | LRANGE mylist 0 -1 => “world” “hello” | + +**Explanation**: +The final list contains both the "world" and "hello" elements, in that +order (Instance 2 observed "hello" when it added +"world"). 
+ +Example of Lists with Concurrent +Insertions: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | LPUSH L x | | +| t2 | — Sync — | — Sync — | +| t3 | LINSERT L AFTER x y1 | | +| t4 | | LINSERT L AFTER x y2 | +| t5 | LRANGE L 0 -1 => x y1 | LRANGE L 0 -1 => x y2 | +| t6 | — Sync — | — Sync — | +| t7 | LRANGE L 0 -1 => x y1 y2 | LRANGE L 0 -1 => x y1 y2 | + +**Explanation**: +Instance 1 added an element y1 after x, and then Instance 2 added element y2 after x. +The final List contains all three elements: x is the first element, after it y1 and then y2. +The Active-Active database resolves the conflict arbitrarily but applies the resolution consistently across all Active-Active database instances. + +Example of Deleting a List while Pushing a New +Element: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | LPUSH L x | | +| t2 | — Sync — | — Sync — | +| t3 | LRANGE L 0 -1 => x | LRANGE L 0 -1 => x | +| t4 | LPUSH L y | DEL L | +| t5 | — Sync — | — Sync — | +| t6 | LRANGE L 0 -1 => y | LRANGE L 0 -1 => y | + +**Explanation** +At t4 - t6, DEL deletes only observed elements. This is why L still +contains y. + +Example of Popping Elements from a +List: + +| **Time** | **CRDB Instance 1** | **CRDB Instance 2** | +| ------: | :------: | :------: | +| t1 | LPUSH L x y z | | +| t2 | — Sync — | — Sync — | +| t3 | | RPOP L => x | +| t4 | — Sync — | — Sync — | +| t5 | RPOP L => y | | +| t6 | — Sync — | — Sync — | +| t7 | RPOP L => z | RPOP L => z | + +**Explanation**: +At t1, the operation pushes elements x, y, z to List L. At t3, the +sequential pops behave as expected from a queue. At t7, the concurrent +pop in both instances might show the same result. The instance was not +able to sync regarding the z removal so, from the point of view of each +instance, z is located in the List and can be popped. After syncing, +both lists are empty. + +Be aware of the behavior of Lists in Active-Active databases when using List as a stack +or queue. As seen in the above example, two parallel RPOP operations +performed by two different Active-Active database instances can get the same element in +the case of a concurrent operation. Lists in Active-Active databases guarantee that each +element is POP-ed at least once, but cannot guarantee that each +element is POP-ed only once. Such behavior should be taken into +account when, for example, using Lists in Active-Active databases as building blocks for +inter-process communication systems. + +In that case, if the same element cannot be handled twice by the +applications, it's recommended that the POP operations be performed by +one Active-Active database instance, whereas the PUSH operations can be performed by +multiple instances. +--- +Title: HyperLogLog in Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Information about using hyperloglog with an Active-Active database. +linkTitle: HyperLogLog +weight: $weight +url: '/operate/rs/7.4/databases/active-active/develop/data-types/hyperloglog/' +--- +**HyperLogLog** is an algorithm that addresses the [count-distinct problem](https://en.wikipedia.org/wiki/Count-distinct_problem). +To do this it approximates the numbers of items in a [set](https://en.wikipedia.org/wiki/Multiset). +Determining the _exact_ cardinality of a set requires memory according to the cardinality of the set. 
+Because it estimates the cardinality by probability, the HyperLogLog algorithm can run with more reasonable memory requirements.
+
+## HyperLogLog in Redis
+
+Redis Open Source implements [HyperLogLog](https://redislabs.com/redis-best-practices/counting/hyperloglog/) (HLL) as a native data structure.
+It supports adding elements ([PFADD]({{< relref "/commands/pfadd" >}})) to an HLL, counting elements ([PFCOUNT]({{< relref "/commands/pfcount" >}})) of HLLs, and merging ([PFMERGE]({{< relref "/commands/pfmerge" >}})) HLLs.
+
+Here is an example of a simple write case:
+
+| Time | Replica 1 | Replica 2 |
+| ---- | ----------------- | ----------------- |
+| t1 | PFADD hll x | |
+| t2 | --- sync --- | |
+| t3 | | PFADD hll y |
+| t4 | --- sync --- | |
+| t5 | PFCOUNT hll --> 2 | PFCOUNT hll --> 2 |
+
+Here is an example of a concurrent add case:
+
+| Time | Replica 1 | Replica 2 |
+| ---- | ----------------- | ----------------- |
+| t1 | PFADD hll x | PFADD hll y |
+| t2 | PFCOUNT hll --> 1 | PFCOUNT hll --> 1 |
+| t3 | --- sync --- | |
+| t4 | PFCOUNT hll --> 2 | PFCOUNT hll --> 2 |
+
+## The DEL-wins approach
+
+Other collections in the Redis-CRDT implementation use the observed remove method to resolve conflicts.
+The CRDT-HLL uses the DEL-wins method.
+If a DEL request is received at the same time as any other request (ADD/MERGE/EXPIRE) on the HLL key,
+the replicas consistently converge to deleting the key.
+In the observed remove method used by other collections (sets, lists, sorted sets, and hashes),
+only the replica that received the DEL request removes the elements, but elements added concurrently in other replicas exist in the consistently converged collection.
+We chose to use the DEL-wins method for the CRDT-HLL to maintain the original time and space complexity of the HLL in Redis Open Source.
+
+Here is an example of a DEL-wins case:
+
+| HLL | | | \| | Set | | |
+| ---- | --------------- | --------------- | --- | ---- | ------------------- | ------------------- |
+| | | | \| | | | |
+| Time | Replica 1 | Replica 2 | \| | Time | Replica 1 | Replica 2 |
+| | | | \| | | | |
+| t1 | PFADD h e1 | | \| | t1 | SADD s e1 | |
+| t2 | --- sync --- | | \| | t2 | --- sync --- | |
+| t3 | PFCOUNT h --> 1 | PFCOUNT h --> 1 | \| | t3 | SCARD s --> 1 | SCARD s --> 1 |
+| t4 | PFADD h e2 | DEL h | \| | t4 | SADD s e2 | DEL s |
+| t5 | PFCOUNT h --> 2 | PFCOUNT h --> 0 | \| | t5 | SCARD s --> 2 | SCARD s --> 0 |
+| t6 | --- sync --- | | \| | t6 | --- sync --- | |
+| t7 | PFCOUNT h --> 0 | PFCOUNT h --> 0 | \| | t7 | SCARD s --> 1 | SCARD s --> 1 |
+| t8 | EXISTS h --> 0 | EXISTS h --> 0 | \| | t8 | EXISTS s --> 1 | EXISTS s --> 1 |
+| | | | \| | t9 | SMEMBERS s --> {e2} | SMEMBERS s --> {e2} |
+
+## HLL in Active-Active databases versus HLL in Redis Open Source
+
+In Active-Active databases, we implemented HLL within the CRDT on the basis of the Redis implementation, with a few exceptions:
+
+- Redis keeps the HLL data structure as an encoded string object,
+  such that you can potentially run any string request on a key that contains an HLL. In CRDT, only GET and SET are supported for HLL.
+- In CRDT, if you do SET on a key that contains a value encoded as an HLL, then the value will remain an HLL. If the value is not encoded as an HLL, then it will be a register.
+---
+Title: Data types for Active-Active databases
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+- rc
+description: Introduction to differences in data types between standalone and Active-Active
+  Redis databases.
+hideListLinks: true
+linktitle: Data types
+weight: 90
+url: '/operate/rs/7.4/databases/active-active/develop/data-types/'
+---
+
+
+Active-Active databases use conflict-free replicated data types (CRDTs). From a developer perspective, most supported data types work the same for Active-Active and standard Redis databases. However, a few commands come with specific requirements in Active-Active databases.
+
+Even though they look identical to standard Redis data types, there are specific rules that govern the handling of
+conflicting concurrent writes for each data type.
+
+As conflict handling rules differ between data types, some commands have slightly different requirements in Active-Active databases versus standard Redis databases.
+
+See the following articles for more information.
+
+---
+Title: Active-Active Redis applications
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+- rc
+description: General information to keep in mind while developing applications for
+  an Active-Active database.
+hideListLinks: true
+linktitle: Develop applications
+weight: 99
+url: '/operate/rs/7.4/databases/active-active/develop/'
+---
+Developing globally distributed applications can be challenging, as
+developers have to think about race conditions and complex combinations
+of events under geo-failovers and cross-region write conflicts. In Redis Enterprise Software (RS), Active-Active databases
+simplify developing such applications by directly using built-in smarts
+for handling conflicting writes based on the data type in use. Instead
+of depending on simplistic "last-writer-wins" conflict
+resolution, geo-distributed Active-Active databases (formerly known as CRDBs) combine techniques defined in CRDT
+(conflict-free replicated data types) research with Redis types to
+provide smart and automatic conflict resolution based on each data
+type's intent.
+
+An Active-Active database is a globally distributed database that spans multiple Redis
+Enterprise Software clusters. Each Active-Active database can have many Active-Active database instances
+that come with added smarts for handling globally distributed writes
+using the proven
+[CRDT](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type)
+approach.
+[CRDT](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type)
+research describes a set of techniques for creating systems that can
+handle conflicting writes. CRDBs are powered by Multi-Master Replication
+(MMR), which provides a straightforward and effective way to replicate your
+data between regions and simplifies development of complex applications
+that can maintain correctness under geo-failovers and concurrent
+cross-region writes to the same data.
+
+{{< image filename="/images/rs/crdbs.png" alt="Geo-replication world map">}}
+
+Active-Active databases replicate data between multiple Redis Enterprise Software
+clusters. Common uses for Active-Active databases include disaster recovery,
+geographically redundant applications, and keeping data closer to your
+users' locations. MMR is always multi-directional amongst the clusters
+configured in the Active-Active database. For unidirectional replication, see the
+Replica Of capabilities in Redis Enterprise Software.
+
+## Example of synchronization
+
+In the example below, the database writes at times t1 and t2 are concurrent
+and happen before a sync can communicate the changes.
+However, the writes at times t4 and t6 are not concurrent, as a sync happened
+in between.
+ +| **Time** | **CRDB Instance1** | **CRDB Instance2** | +| ------: | :------: | :------: | +| t1 | SET key1 “a” | | +| t2 | | SET key1 “b” | +| t3 | — Sync — | — Sync — | +| t4 | SET key1 “c” | | +| t5 | — Sync — | — Sync — | +| t6 | | SET key1 “d” | + +[Learn more about +synchronization]({{< relref "/operate/rs/7.4/databases/active-active" >}}) for +each supported data type and [how to develop]({{< relref "/operate/rs/7.4/databases/active-active/develop/develop-for-aa.md" >}}) with them on Redis Enterprise Software. +--- +Title: Configure distributed synchronization +alwaysopen: false +categories: +- docs +- operate +- rs +description: How to configure distributed synchronization so that any available proxy + endpoint can manage synchronization traffic. +linktitle: Distributed synchronization +weight: 80 +url: '/operate/rs/7.4/databases/active-active/synchronization-mode/' +--- +Replicated databases, such as [Replica Of]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/" >}}) and [Active-Active]({{< relref "/operate/rs/7.4/databases/active-active" >}}) databases, +use proxy endpoints to synchronize database changes with the databases on other participating clusters. + +To improve the throughput and lower the latency for synchronization traffic, +you can configure a replicated database to use distributed synchronization where any available proxy endpoint can manage synchronization traffic. + +Every database by default has one proxy endpoint that manages client and synchronization communication with the database shards, +and that proxy endpoint is used for database synchronization. +This is called centralized synchronization. + +To prepare a database to use distributed synchronization you must first make sure that the database [proxy policy]({{< relref "/operate/rs/7.4/databases/configure/proxy-policy.md" >}}) +is defined so that either each node has a proxy endpoint or each primary (master) shard has a proxy endpoint. +After you have multiple proxies for the database, +you can configure the database synchronization to use distributed synchronization. + +## Configure distributed synchronization + +{{< note >}} +You may use the database name in place of `db:` in the following `rladmin` commands. +{{< /note >}} + +To configure distributed synchronization: + +1. To check the proxy policy for the database, run: `rladmin status` + + The output of the status command shows the list of endpoints on the cluster and the proxy policy for the endpoint. + + ```sh + ENDPOINTS: + DB:ID NAME ID NODE ROLE SSL + db:1 db endpoint:1:1 node:1 all-master-shards No + ``` + + If the proxy policy (also known as a _role_) is `single`, configure the policy to `all-nodes` or `all-master-shards` according to your needs with the command: + + ```sh + rladmin bind db db: endpoint policy + ``` + +1. To configure the database to use distributed synchronization, run: + + ```sh + rladmin tune db db: syncer_mode distributed + ``` + + To change back to centralized synchronization, run: + + ```sh + rladmin tune db db: syncer_mode centralized + ``` + +## Verify database synchronization + +Use `rladmin` to verify a database synchronization role: + +```sh +rladmin info db db: +``` + +The current database role is reported as the `syncer_mode` value: + +```sh +$ rladmin info db db: +db: []: + // (Other settings removed) + syncer_mode: centralized +``` +--- +Title: Manage Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage your Active-Active database settings. 
+linktitle: Manage +weight: 30 +url: '/operate/rs/7.4/databases/active-active/manage/' +--- + +You can configure and manage your Active-Active database from either the Cluster Manager UI or the command line. + +To change the global configuration of the Active-Active database, use [`crdb-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli" >}}). + +If you need to apply changes locally to one database instance, you use the Cluster Manager UI or [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}). + +## Database settings + +Many Active-Active database settings can be changed after database creation. One notable exception is database clustering. Database clustering can't be turned on or off after the database has been created. + +## Participating clusters + +You can add and remove participating clusters of an Active-Active database to change the topology. +To manage the changes to Active-Active topology, use [`crdb-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli/" >}}) or the participating clusters list in the Cluster Manager UI. + +### Add participating clusters + +All existing participating clusters must be online and in a syncing state when you add new participating clusters. + +New participating clusters create the Active-Active database instance based on the global Active-Active database configuration. +After you add new participating clusters to an existing Active-Active database, +the new database instance can accept connections and read operations. +The new instance does not accept write operations until it is in the syncing state. + +{{}} +If an Active-Active database [runs on flash memory]({{}}), you cannot add participating clusters that run on RAM only. +{{}} + +To add a new participating cluster to an existing Active-Active configuration using the Cluster Manager UI: + +1. Select the Active-Active database from the **Databases** list and go to its **Configuration** screen. + +1. Click **Edit**. + +1. In the **Participating clusters** section, go to **Other participating clusters** and click **+ Add cluster**. + +1. In the **Add cluster** configuration panel, enter the new cluster's URL, port number, and the admin username and password for the new participating cluster: + + {{Add cluster panel.}} + +1. Click **Join cluster** to add the cluster to the list of participating clusters. + +1. Click **Save**. + + +### Remove participating clusters + +All existing participating clusters must be online and in a syncing state when you remove an online participating cluster. +If you must remove offline participating clusters, you can forcefully remove them. +If a forcefully removed participating cluster tries to rejoin the cluster, +its Active-Active database membership will be out of date. +The joined participating clusters reject updates sent from the removed participating cluster. +To prevent rejoin attempts, purge the forcefully removed instance from the participating cluster. + +To remove a participating cluster using the Cluster Manager UI: + +1. Select the Active-Active database from the **Databases** list and go to its **Configuration** screen. + +1. Click **Edit**. + +1. In the **Participating clusters** section, point to the cluster you want to delete in the **Other participating clusters** list: + + {{Edit and delete buttons appear when you point to an entry in the Other participating clusters list.}} + +1. 
Click {{< image filename="/images/rs/buttons/delete-button.png#no-click" alt="The Delete button" width="25px" class="inline" >}} to remove the cluster. + +1. Click **Save**. + +## Replication backlog + +Redis databases that use [replication for high availability]({{< relref "/operate/rs/7.4/databases/durability-ha/replication.md" >}}) maintain a replication backlog (per shard) to synchronize the primary and replica shards of a database. In addition to the database replication backlog, Active-Active databases maintain a backlog (per shard) to synchronize the database instances between clusters. + +By default, both the database and Active-Active replication backlogs are set to one percent (1%) of the database size divided by the number of shards. This can range between 1MB to 250MB per shard for each backlog. + +### Change the replication backlog size + +Use the [`crdb-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli" >}}) utility to control the size of the replication backlogs. You can set it to `auto` or set a specific size. + +Update the database replication backlog configuration with the `crdb-cli` command shown below. + +```text +crdb-cli crdb update --crdb-guid --default-db-config "{\"repl_backlog_size\": }" +``` + +Update the Active-Active (CRDT) replication backlog with the command shown below: + +```text +crdb-cli crdb update --crdb-guid --default-db-config "{\"crdt_repl_backlog_size\": }" +``` + +## Data persistence + +Active-Active supports AOF (Append-Only File) data persistence only. Snapshot persistence is _not_ supported for Active-Active databases and should not be used. + +If an Active-Active database is currently using snapshot data persistence, use `crdb-cli` to switch to AOF persistence: +```text + crdb-cli crdb update --crdb-guid --default-db-config '{"data_persistence": "aof", "aof_policy":"appendfsync-every-sec"}' +``` + + +--- +Title: Enable causal consistency +alwaysopen: false +categories: +- docs +- operate +- rs +description: Enable causal consistency in an Active-Active database. +linkTitle: Causal consistency +weight: 70 +url: '/operate/rs/7.4/databases/active-active/causal-consistency/' +--- +When you enable causal consistency in Active-Active databases, +the order of operations on a specific key are maintained across all Active-Active database instances. + +For example, if operations A and B were applied on the same key and the effect of A was observed by the instance that initiated B before B was applied to the key. +All instances of an Active-Active database would then observe the effect of A before observing the effect of B. +This way, any causal relationship between operations on the same key is also observed and maintained by every replica. + +### Enable causal consistency + +When you create an Active-Active database, you can enable causal consistency in the Cluster Manager UI: + +1. In the **Participating clusters** section of the **Create Active-Active database** screen, locate **Causal Consistency**: + + {{The Participating clusters section of the Create Active-Active database screen.}} + +1. Click **Change** to open the **Causal Consistency** dialog. + +1. Select **Enabled**: + + {{Enabled is selected in the Causal Consistency dialog.}} + +1. Click **Change** to confirm your selection. + +After database creation, you can only turn causal consistency on or off using the REST API or `crdb-cli`. +The updated setting only affects commands and operations received after the change. 
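+
+To make the guarantee concrete, here is a minimal Python sketch (using `redis-py`) of the A-before-B scenario described at the start of this article. The endpoints and key name are assumptions used only for illustration:
+
+```py
+import redis
+
+# Assumed endpoints: two instances of the same Active-Active database.
+instance1 = redis.Redis(host="localhost", port=12000, decode_responses=True)
+instance2 = redis.Redis(host="localhost", port=12002, decode_responses=True)
+
+instance1.set("status", "A")        # Operation A, applied on instance 1
+
+# ... instance 2 syncs and observes the effect of A ...
+assert instance2.get("status") == "A"
+
+instance2.set("status", "B")        # Operation B, initiated after observing A
+
+# With causal consistency enabled, every instance of the Active-Active
+# database observes the effect of A before it observes the effect of B.
+```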
+ +### Causal consistency side effects + +When the causal consistency option is enabled, each instance maintains the order of operations it received from another instance +and relays that information to all other N-2 instances, +where N represents the number of instances used by the Active-Active database. + +As a result, network traffic is increased by a factor of (N-2). +The memory consumed by each instance and overall performance are also impacted when causal consistency is activated. + +--- +Title: Connect to your Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: How to connect to an Active-Active database using redis-cli or a sample + Python application. +linkTitle: Connect +weight: 26 +url: '/operate/rs/7.4/databases/active-active/connect/' +--- + +With the Redis database created, you are ready to connect to your +database to store data. You can use one of the following ways to test +connectivity to your database: + +- Connect with redis-cli, the built-in command-line tool +- Connect with a _Hello World_ application written in Python + +Remember we have two member Active-Active databases that are available for connections and +concurrent reads and writes. The member Active-Active databases are using bi-directional +replication to for the global Active-Active database. + +{{< image filename="/images/rs/crdb-diagram.png" >}} + +### Connecting using redis-cli {#connecting-using-rediscli} + +redis-cli is a simple command-line tool to interact with redis database. + +1. To use redis-cli on port 12000 from the node 1 terminal, run: + + ```sh + redis-cli -p 12000 + ``` + +1. Store and retrieve a key in the database to test the connection with these + commands: + + - `set key1 123` + - `get key1` + + The output of the command looks like this: + + ```sh + 127.0.0.1:12000> set key1 123 + OK + 127.0.0.1:12000> get key1 + "123" + ``` + +1. Enter the terminal of node 1 in cluster 2, run the redis-cli, and + retrieve key1. + + The output of the commands looks like this: + + ```sh + $ redis-cli -p 12000 + 127.0.0.1:12000> get key1 + "123" + ``` + +### Connecting using _Hello World_ application in Python + +A simple python application running on the host machine can also connect +to the database. + +{{< note >}} +Before you continue, you must have python and +[redis-py](https://github.com/andymccurdy/redis-py#installation) +(python library for connecting to Redis) configured on the host machine +running the container. +{{< /note >}} + +1. In the command-line terminal, create a new file called "redis_test.py" + + ```sh + vi redis_test.py + ``` + +1. Paste this code into the "redis_test.py" file. + + This application stores a value in key1 in cluster 1, gets that value from + key1 in cluster 1, and gets the value from key1 in cluster 2. + + ```py + import redis + rp1 = redis.StrictRedis(host='localhost', port=12000, db=0) + rp2 = redis.StrictRedis(host='localhost', port=12002, db=0) + print ("set key1 123 in cluster 1") + print (rp1.set('key1', '123')) + print ("get key1 cluster 1") + print (rp1.get('key1')) + print ("get key1 from cluster 2") + print (rp2.get('key1')) + ``` + +1. 
To run the "redis_test.py" application, run: + + ```sh + python redis_test.py + ``` + + If the connection is successful, the output of the application looks like: + + ```sh + set key1 123 in cluster 1 + True + get key1 cluster 1 + "123" + get key1 from cluster 2 + "123" + ``` +--- +Title: Create an Active-Active geo-replicated database +alwaysopen: false +categories: +- docs +- operate +- rs +description: How to create an Active-Active database and things to consider when setting + it up. +linkTitle: Create +weight: 25 +url: '/operate/rs/7.4/databases/active-active/create/' +--- +[Active-Active geo-replicated databases]({{< relref "/operate/rs/7.4/databases/active-active" >}}) (formerly known as CRDBs) give applications write access +to replicas of the dataset in different geographical locations. + +The participating Redis Enterprise Software clusters that host the instances can be distributed in different geographic locations. +Every instance of an Active-Active database can receive write operations, and all operations are [synchronized]({{< relref "/operate/rs/7.4/databases/active-active/develop#example-of-synchronization" >}}) to all instances without conflict. + +## Steps to create an Active-Active database + +1. **Create a service account** - On each participating cluster, create a dedicated user account with the Admin role. +1. **Confirm connectivity** - Confirm network connectivity between the participating clusters. +1. **Create Active-Active database** - Connect to one of your clusters and create a new Active-Active database. +1. **Add participating clusters** - Add the participating clusters to the Active-Active database with the user credentials for the service account. +1. **Verify creation** - Log in to each of the participating clusters and verify your Active-Active database was created on them. +1. **Confirm Active-Active database synchronization** - Test writing to one cluster and reading from a different cluster. + +## Prerequisites + +- Two or more machines with the same version of Redis Enterprise Software installed +- Network connectivity and cluster FQDN name resolution between all participating clusters +- [Network time service]({{< relref "/operate/rs/7.4/databases/active-active#network-time-service-ntp-or-chrony" >}}) listener (ntpd) configured and running on each node in all clusters + +## Create an Active-Active database + +1. Create service accounts on each participating cluster: + + 1. In a browser, open the Cluster Manager UI for the participating cluster. + + The default address is: `https://:8443` + + 1. Go to the **Access Control > Users** tab: + + {{Add role with name}} + + 1. Click **+ Add user**. + + 1. Enter the username, email, and password for the user. + + 1. Select the **Admin** role. + + 1. Click **Save**. + +1. To verify network connectivity between participating clusters, + run the following `telnet` command from each participating cluster to all other participating clusters: + + ```sh + telnet 9443 + ``` + +1. In a browser, open the Cluster Manager UI of the cluster where you want to create the Active-Active database. + + The default address is: `https://:8443` + +1. 
Open the **Create database** menu with one of the following methods: + + - Click the **+** button next to **Databases** in the navigation menu: + + {{Create database menu has two options: Single Region and Active-Active database.}} + + - Go to the **Databases** screen and select **Create database**: + + {{Create database menu has two options: Single Region and Active-Active database.}} + +1. Select **Active-Active database**. + +1. Enter the cluster's local admin credentials, then click **Save**: + + {{Enter the cluster's admin username and password.}} + +1. Add participating clusters that will host instances of the Active-Active database: + + 1. In the **Participating clusters** section, go to **Other participating clusters** and click **+ Add cluster**. + + 1. In the **Add cluster** configuration panel, enter the new cluster's URL, port number, and the admin username and password for the new participating cluster: + + {{Add cluster panel.}} + + {{}} +If an Active-Active database [runs on flash memory]({{}}), you cannot add participating clusters that run on RAM only. + {{}} + + 1. Click **Join cluster** to add the cluster to the list of participating clusters. + +1. Enter a **Database name**. + +1. If your cluster supports [Auto Tiering]({{< relref "/operate/rs/7.4/databases/auto-tiering/" >}}), in **Runs on** you can select **Flash** so that your database uses Flash memory. We recommend that you use AOF every 1 sec for the best performance during the initial Active-Active database sync of a new replica. + +1. To configure additional database settings, expand each relevant section to make changes. + + See [Configuration settings](#configuration-settings) for more information about each setting. + +1. Click **Create**. + +## Configuration settings + +- **Database version** - The Redis version used by your database. + +- **Database name** - The database name requirements are: + + - Maximum of 63 characters + + - Only letters, numbers, or hyphens (-) are valid characters + + - Must start and end with a letter or digit + + - Case-sensitive + +- **Port** - You can define the port number that clients use to connect to the database. Otherwise, a port is randomly selected. + + {{< note >}} +You cannot change the [port number]({{< relref "/operate/rs/7.4/networking/port-configurations.md" >}}) +after the database is created. + {{< /note >}} + +- **Memory limit** - [Database memory limits]({{< relref "/operate/rs/7.4/databases/memory-performance/memory-limit.md" >}}) include all database replicas and shards, including replica shards in database replication and database shards in database clustering. + + If the total size of the database in the cluster reaches the memory limit, the data eviction policy for the database is enforced. + + {{< note >}} +If you create a database with Auto Tiering enabled, you also need to set the RAM-to-Flash ratio +for this database. Minimum RAM is 10%. Maximum RAM is 50%. + {{< /note >}} + +- [**Capabilities**]({{< relref "/operate/oss_and_stack/stack-with-enterprise" >}}) (previously **Modules**) - When you create a new in-memory database, you can enable multiple Redis Stack capabilities in the database. For Auto Tiering databases, you can enable capabilities that support Auto Tiering. See [Redis Enterprise and Redis Stack feature compatibility +]({{< relref "/operate/oss_and_stack/stack-with-enterprise/enterprise-capabilities" >}}) for compatibility details. + + {{}} +To use Redis Stack capabilities, enable them when you create a new database. 
+You cannot enable them after database creation.
+  {{}}
+
+    To add capabilities to the database:
+
+    1. In the **Capabilities** section, select one or more capabilities.
+
+    1. To customize capabilities, select **Parameters** and enter the optional custom configuration.
+
+    1. Select **Done**.
+
+### TLS
+
+If you enable TLS when you create the Active-Active database, the nodes use the TLS mode **Require TLS for CRDB communication only** to require TLS authentication and encryption for communications between participating clusters.
+
+After you create the Active-Active database, you can set the TLS mode to **Require TLS for all communications** so that client communication from applications is also authenticated and encrypted.
+
+### High availability & durability
+
+- [**Replication**]({{< relref "/operate/rs/7.4/databases/durability-ha/replication" >}}) - We recommend that all Active-Active databases use replication for the best intercluster synchronization performance.
+
+  When replication is enabled, every Active-Active database master shard is replicated to a corresponding replica shard. The replica shards are then used to synchronize data between the instances, and the master shards are dedicated to handling client requests.
+
+  We also recommend that you enable [replica HA]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}) to ensure that the replica shards are highly available for this synchronization.
+
+- [**Data persistence**]({{< relref "/operate/rs/7.4/databases/configure/database-persistence.md" >}}) - To protect against loss of data stored in RAM, you can enable data persistence to store a copy of the data on disk.
+
+  Active-Active databases support append-only file (AOF) persistence only. Snapshot persistence is not supported for Active-Active databases.
+
+- **Eviction policy** - The default eviction policy for Active-Active databases is `noeviction`. Redis Enterprise version 6.0.20 and later support all eviction policies for Active-Active databases, unless [Auto Tiering]({{< relref "/operate/rs/7.4/databases/auto-tiering" >}}) is enabled.
+
+### Clustering
+
+- In the **Database clustering** option, you can either:
+
+  - Make sure **Database clustering** is enabled and select the number of shards
+    that you want to have in the database. When database clustering is enabled,
+    databases are subject to limitations on [Multi-key commands]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering.md" >}}).
+    You can increase the number of shards in the database at any time.
+
+  - Clear the **Database clustering** option to use only one shard so that you
+    can use [Multi-key commands]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering.md" >}})
+    without the limitations.
+
+  {{}}
+You cannot enable or turn off database clustering after the Active-Active database is created.
+  {{}}
+
+- [**OSS Cluster API**]({{< relref "/operate/rs/7.4/databases/configure/oss-cluster-api.md" >}}) - {{< embed-md "oss-cluster-api-intro.md" >}}
+
+### Access control
+
+- **Unauthenticated access** - You can access the database as the default user without providing credentials.
+
+- **Password-only authentication** - When you configure a password for your database's default user, all connections to the database must authenticate with the [AUTH command]({{< relref "/commands/auth" >}}).
+
+  If you also configure an access control list, connections can specify other users for authentication, and requests are allowed according to the Redis ACLs specified for that user.
+ + Creating a database without ACLs enables a *default* user with full access to the database. You can secure default user access by requiring a password. + +- **Access Control List** - You can specify the [user roles]({{< relref "/operate/rs/7.4/security/access-control/create-db-roles" >}}) that have access to the database and the [Redis ACLs]({{< relref "/operate/rs/7.4/security/access-control/redis-acl-overview" >}}) that apply to those connections. + + You can only configure access control after the Active-Active database is created. In each participating cluster, add ACLs after database creation. + + To define an access control list for a database: + + 1. In **Security > Access Control > Access Control List**, select **+ Add ACL**. + + 1. Select a [role]({{< relref "/operate/rs/7.4/security/access-control/create-db-roles" >}}) to grant database access. + + 1. Associate a [Redis ACL]({{< relref "/operate/rs/7.4/security/access-control/create-db-roles" >}}) with the role and database. + + 1. Select the check mark to add the ACL. + +### Causal consistency + +[**Causal consistency**]({{< relref "/operate/rs/7.4/databases/active-active/causal-consistency" >}}) in an Active-Active database guarantees that the order of operations on a specific key is maintained across all instances of an Active-Active database. + +To enable causal consistency for an existing Active-Active database, use the REST API. + + +## Test Active-Active database connections + +With the Redis database created, you are ready to connect to your database. See [Connect to Active-Active databases]({{< relref "/operate/rs/7.4/databases/active-active/connect.md" >}}) for tutorials and examples of multiple connection methods. +--- +Title: Delete Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: Considerations while deleting Active-Active databases. +linktitle: Delete +weight: 35 +url: '/operate/rs/7.4/databases/active-active/delete/' +--- + +When you delete an Active-Active database (formerly known as CRDB), +all instances of the Active-Active database are deleted from all participating clusters. + +{{% warning %}} +This action is immediate, non-reversible, and has no rollback. +{{% /warning %}} + +Because Active-Active databases are made up of instances on multiple participating clusters, +to restore a deleted Active-Active database you must create the database again with all of its instances +and then restore the data to the database from backup. + +We recommended that you: + +- Back up your data and test the restore on another database before you delete an Active-Active database. +- Consider [flushing the data]({{< relref "/operate/rs/7.4/databases/import-export/flush.md" >}}) from the database + so that you can keep the Active-Active database configuration and restore the data to it if necessary. +--- +Title: Syncer process +alwaysopen: false +categories: +- docs +- operate +- rs +description: Detailed information about the syncer process and its role in distributed + databases. +linktitle: Syncer process +weight: 90 +url: '/operate/rs/7.4/databases/active-active/syncer/' +--- + +## Syncer process + +Each node in a cluster containing an instance of an Active-Active database hosts a process called the syncer. +The syncer process: + +1. Connects to the proxy on another participating cluster +1. Reads data from that database instance +1. 
Writes the data to the local cluster's primary(master) shard + +Some replication capabilities are also included in [Redis Open Source]({{< relref "/operate/oss_and_stack/management/replication" >}}). + +The primary (also known as master) shard at the top of the primary-replica tree creates a replication ID. +This replication ID is identical for all replicas in that tree. +When a new primary is appointed, the replication ID changes, but a partial sync from the previous ID is still possible. + + +In a partial sync, the backlog of operations since the offset are transferred as raw operations. +In a full sync, the data from the primary is transferred to the replica as an RDB file which is followed by a partial sync. + +Partial synchronization requires a backlog large enough to store the data operations until connection is restored. See [replication backlog]({{< relref "/operate/rs/7.4/databases/active-active/manage#replication-backlog" >}}) for more info on changing the replication backlog size. + +### Syncer in Active-Active replication + +In the case of an Active-Active database: + +- Multiple past replication IDs and offsets are stored to allow for multiple syncs +- The [Active-Active replication backlog]({{< relref "/operate/rs/7.4/databases/active-active/manage#replication-backlog" >}}) is also sent to the replica during a full sync. + +{{< warning >}} +Full sync triggers heavy data transfers between geo-replicated instances of an Active-Active database. +{{< /warning >}} + +An Active-Active database uses partial synchronization in the following situations: + +- Failover of primary shard to replica shard +- Restart or crash of replica shard that requires sync from primary +- Migrate replica shard to another node +- Migrate primary shard to another node as a replica using failover and replica migration +- Migrate primary shard and preserve roles using failover, replica migration, and second failover to return shard to primary + +{{< note >}} +Synchronization of data from the primary shard to the replica shard is always a full synchronization. +{{< /note >}} +--- +Title: Get started with Redis Enterprise Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: Quick start guide to create an Active-Active database for test and development. +linktitle: Get started +weight: 20 +url: '/operate/rs/7.4/databases/active-active/get-started/' +--- + +To get started, this article will help you set up an Active-Active database, formerly known as CRDB (conflict-free replicated database), spanning across two Redis Enterprise Software +clusters for test and development environments. Here are the steps: + +1. Run two Redis Enterprise Software Docker containers. + +1. Set up each container as a cluster. + +1. Create a new Redis Enterprise Active-Active database. + +1. Test connectivity to the Active-Active database. + +To run an Active-Active database on installations from the [Redis Enterprise Software download package]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}), +set up two Redis Enterprise Software installations and continue from Step 2. + +{{}} +This getting started guide is for development or demonstration environments. +For production environments, see [Create an Active-Active geo-replicated database]({{< relref "/operate/rs/7.4/databases/active-active/create" >}}) for instructions. 
+{{}} + +## Run two containers + +To spin up two Redis Enterprise Software containers, run these commands: + +```sh +docker run -d --cap-add sys_resource -h rs1_node1 --name rs1_node1 -p 8443:8443 -p 9443:9443 -p 12000:12000 redislabs/redis +``` + +```sh +docker run -d --cap-add sys_resource -h rs2_node1 --name rs2_node1 -p 8445:8443 -p 9445:9443 -p 12002:12000 redislabs/redis +``` + +The **-p** options map the Cluster Manager UI port (8443), REST API port (9443), and +database access port differently for each container to make sure that all +containers can be accessed from the host OS that is running the containers. + +## Set up two clusters + +1. For cluster 1, go to `https://localhost:8443` in a browser on the +host machine to access the Redis Enterprise Software Cluster Manager UI. + + {{}} +Depending on your browser, you may see a certificate error. Continue to the website. + {{}} + +1. Click **Create new cluster**: + + {{When you first install Redis Enterprise Software, you need to set up a cluster.}} + +1. Enter an email and password for the administrator account, then click **Next** to proceed to cluster setup: + + {{Set the credentials for your admin user.}} + +1. Enter your cluster license key if you have one. Otherwise, a trial version is installed. + + {{Enter your cluster license key if you have one.}} + +1. In the **Configuration** section of the **Cluster** settings page, enter a cluster FQDN, for example `cluster1.local`: + + {{Configure the cluster FQDN.}} + +1. On the node setup screen, keep the default settings and click **Create cluster**: + + {{Configure the node specific settings.}} + +1. Click **OK** to confirm that you are aware of the replacement of the HTTPS SSL/TLS + certificate on the node, and proceed through the browser warning. + +1. Repeat the previous steps for cluster 2 with these differences: + + - In your web browser, go to `https://localhost:8445` to set up the cluster 2. + + - For the **Cluster name (FQDN)**, enter a different name, such as `cluster2.local`. + +Now you have two Redis Enterprise Software clusters with FQDNs +`cluster1.local` and `cluster2.local`. + +{{}} +Each Active-Active instance must have a unique fully-qualified domain name (FQDN). +{{}} + +## Create an Active-Active database + +1. Sign in to cluster1.local's Cluster Manager UI at `https://localhost:8443` + +1. Open the **Create database** menu with one of the following methods: + + - Click the **+** button next to **Databases** in the navigation menu: + + {{Create database menu has two options: Single Region and Active-Active database.}} + + - Go to the **Databases** screen and select **Create database**: + + {{Create database menu has two options: Single Region and Active-Active database.}} + +1. Select **Active-Active database**. + +1. Enter the cluster's local admin credentials, then click **Save**: + + {{Enter the cluster's admin username and password.}} + +1. Add participating clusters that will host instances of the Active-Active database: + + 1. In the **Participating clusters** section, go to **Other participating clusters** and click **+ Add cluster**. + + 1. In the **Add cluster** configuration panel, enter the new cluster's URL, port number, and the admin username and password for the new participating cluster: + + In the **Other participating clusters** list, add the address and admin credentials for the other cluster: `https://cluster2.local:9443` + + {{Add cluster panel.}} + + 1. Click **Join cluster** to add the cluster to the list of participating clusters. + +1. 
Enter `database1` for **Database name** and `12000` for **Port**: + + {{Database name and port text boxes.}} + +1. Configure additional settings: + + 1. In the **High availability & durability** section, turn off **Replication** since each cluster has only one node in this setup: + + {{Turn off replication in the High availability & durability section.}} + + + 1. In the **Clustering** section, either: + + - Make sure that **Sharding** is enabled and select the number of shards you want to have in the database. When database clustering is enabled, + databases are subject to limitations on [Multi-key commands]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering" >}}). + You can increase the number of shards in the database at any time. + + - Turn off **Sharding** to use only one shard and avoid [Multi-key command]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering" >}}) limitations. + + {{< note >}} +You cannot enable or turn off database clustering after the Active-Active database is created. + {{< /note >}} + +1. Click **Create**. + + {{< note >}} +{{< embed-md "docker-memory-limitation.md" >}} + {{< /note >}} + +1. After the Active-Active database is created, sign in to the Cluster Manager UIs for cluster 1 at `https://localhost:8443` and cluster 2 at `https://localhost:8445`. + +1. Make sure each cluster has an Active-Active database member database with the name `database1`. + + In a real-world deployment, cluster 1 and cluster 2 would most likely be + in separate data centers in different regions. However, for + local testing we created the scale-minimized deployment using two + local clusters running on the same host. + + +## Test connection + +With the Redis database created, you are ready to connect to your +database. See [Connect to Active-Active databases]({{< relref "/operate/rs/7.4/databases/active-active/connect" >}}) for tutorials and examples of multiple connection methods. +--- +Title: Active-Active geo-distributed Redis +alwaysopen: false +categories: +- docs +- operate +- rs +- kubernetes +description: Overview of the Active-Active database in Redis Enterprise Software +hideListLinks: true +linktitle: Active-Active +weight: 40 +url: '/operate/rs/7.4/databases/active-active/' +--- +In Redis Enterprise, Active-Active geo-distribution is based on [CRDT technology](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type). +The Redis Enterprise implementation of CRDT is called an Active-Active database (formerly known as CRDB). +With Active-Active databases, applications can read and write to the same data set from different geographical locations seamlessly and with latency less than one millisecond (ms), +without changing the way the application connects to the database. + +Active-Active databases also provide disaster recovery and accelerated data read-access for geographically distributed users. + + +## High availability + +The [high availability]({{< relref "/operate/rs/7.4/databases/durability-ha/" >}}) that Active-Active replication provides is built upon a number of Redis Enterprise Software features (such as [clustering]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering.md" >}}), [replication]({{< relref "/operate/rs/7.4/databases/durability-ha/replication.md" >}}), and [replica HA]({{< relref "/operate/rs/7.4/databases/configure/replica-ha.md" >}})) as well as some features unique to Active-Active ([multi-primary replication]({{}}), [automatic conflict resolution]({{}}), and [strong eventual consistency]({{}})). 
+ +Clustering and replication are used together in Active-Active databases to distribute multiple copies of the dataset across multiple nodes and multiple clusters. As a result, a node or cluster is less likely to become a single point of failure. If a primary node or primary shard fails, a replica is automatically promoted to primary. To avoid having one node hold all copies of certain data, the [replica HA]({{< relref "/operate/rs/7.4/databases/configure/replica-ha.md" >}}) feature (enabled by default) automatically migrates replica shards to available nodes. + +## Multi-primary replication + +In Redis Enterprise Software, replication copies data from primary shards to replica shards. Active-Active geo-distributed replication also copies both primary and replica shards to other clusters. Each Active-Active database needs to span at least two clusters; these are called participating clusters. + +Each participating cluster hosts an instance of your database, and each instance has its own primary node. Having multiple primary nodes means you can connect to the proxy in any of your participating clusters. Connecting to the closest cluster geographically enables near-local latency. Multi-primary replication (previously referred to as multi-master replication) also means that your users still have access to the database if one of the participating clusters fails. + +{{< note >}} +Active-Active databases do not replicate the entire database, only the data. +Database configurations, LUA scripts, and other support info are not replicated. +{{< /note >}} + +## Syncer + +Keeping multiple copies of the dataset consistent across multiple clusters is no small task. To achieve consistency between participating clusters, Redis Active-Active replication uses a process called the [syncer]({{< relref "/operate/rs/7.4/databases/active-active/syncer" >}}). + +The syncer keeps a [replication backlog]({{< relref "/operate/rs/7.4/databases/active-active/manage#replication-backlog/" >}}), which stores changes to the dataset that the syncer sends to other participating clusters. The syncer uses partial syncs to keep replicas up to date with changes, or a full sync in the event a replica or primary is lost. + +## Conflict resolution + +Because you can connect to any participating cluster to perform a write operation, concurrent and conflicting writes are always possible. Conflict resolution is an important part of the Active-Active technology. Active-Active databases only use [conflict-free replicated data types (CRDTs)](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type). These data types provide a predictable conflict resolution and don't require any additional work from the application or client side. + +When developing with CRDTs for Active-Active databases, you need to consider some important differences. See [Develop applications with Active-Active databases]({{< relref "/operate/rs/7.4/databases/active-active/develop/_index.md" >}}) for related information. + + +## Strong eventual consistency + +Maintaining strong consistency for replicated databases comes with tradeoffs in scalability and availability. Redis Active-Active databases use a strong eventual consistency model, which means that local values may differ across replicas for short periods of time, but they all eventually converge to one consistent state. Redis uses vector clocks and the CRDT conflict resolution to strengthen consistency between replicas. 
You can also enable the causal consistency feature to preserve the order of operations as they are synchronized among replicas. + +Other Redis Enterprise Software features can also be used to enhance the performance, scalability, or durability of your Active-Active database. These include [data persistence]({{< relref "/operate/rs/7.4/databases/configure/database-persistence.md" >}}), [multiple active proxies]({{< relref "/operate/rs/7.4/databases/configure/proxy-policy.md" >}}), [distributed synchronization]({{< relref "/operate/rs/7.4/databases/active-active/synchronization-mode.md" >}}), [OSS Cluster API]({{< relref "/operate/rs/7.4/databases/configure/oss-cluster-api.md" >}}), and [rack-zone awareness]({{< relref "/operate/rs/7.4/clusters/configure/rack-zone-awareness.md" >}}). + +## Next steps + +- [Plan your Active-Active deployment]({{< relref "/operate/rs/7.4/databases/active-active/planning.md" >}}) +- [Get started with Active-Active]({{< relref "/operate/rs/7.4/databases/active-active/get-started.md" >}}) +- [Create an Active-Active database]({{< relref "/operate/rs/7.4/databases/active-active/create.md" >}}) +--- +Title: Configure high availability for replica shards +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure high availability for replica shards so that the cluster automatically + migrates the replica shards to an available node. +linkTitle: Replica high availability +weight: 50 +url: '/operate/rs/7.4/databases/configure/replica-ha/' +--- + +When you enable [database replication]({{< relref "/operate/rs/7.4/databases/durability-ha/replication.md" >}}), +Redis Enterprise Software creates a replica of each primary (master) shard. The replica shard will always be +located on a different node than the primary shard to make your data highly available. If the primary shard +fails or if the node hosting the primary shard fails, then the replica is promoted to primary. + +Without replica high availability (_replica\_ha_) enabled, the promoted primary shard becomes a single point of failure +as the only copy of the data. + +Enabling _replica\_ha_ configures the cluster to automatically replicate the promoted replica on an available node. +This automatically returns the database to a state where there are two copies of the data: +the former replica shard which has been promoted to primary and a new replica shard. + +An available node: + +1. Meets replica migration requirements, such as [rack-awareness]({{< relref "/operate/rs/7.4/clusters/configure/rack-zone-awareness.md" >}}). +1. Has enough available RAM to store the replica shard. +1. Does not also contain the master shard. + +In practice, replica migration creates a new replica shard and copies the data from the master shard to the new replica shard. + +For example: + +1. Node:2 has a master shard and node:3 has the corresponding replica shard. +1. Either: + + - Node:2 fails and the replica shard on node:3 is promoted to master. + - Node:3 fails and the master shard is no longer replicated to the replica shard on the failed node. + +1. If replica HA is enabled, a new replica shard is created on an available node. +1. The data from the master shard is replicated to the new replica shard. + +{{< note >}} +- Replica HA follows all prerequisites of replica migration, such as [rack-awareness]({{< relref "/operate/rs/7.4/clusters/configure/rack-zone-awareness.md" >}}). +- Replica HA migrates as many shards as possible based on available DRAM in the target node. 
When no DRAM is available, replica HA stops migrating replica shards to that node. +{{< /note >}} + +## Configure high availability for replica shards + +If replica high availability is enabled for both the cluster and a database, +the database's replica shards automatically migrate to another node when a master or replica shard fails. +If replica HA is not enabled at the cluster level, +replica HA will not migrate replica shards even if replica HA is enabled for a database. + +Replica high availability is enabled for the cluster by default. + +When you create a database using the Cluster Manager UI, replica high availability is enabled for the database by default if you enable replication. + +{{When you select the Replication checkbox in the High availability & durability section of the database configuration screen, the Replica high availability checkbox is also selected by default.}} + +To use replication without replication high availability, clear the **Replica high availability** checkbox. + +You can also enable or turn off replica high availability for a database using `rladmin` or the REST API. + +{{< note >}} +For Active-Active databases, replica HA is enabled for the database by default to make sure that replica shards are available for Active-Active replication. +{{< /note >}} + +### Configure cluster policy for replica HA + +{{}} +The replica HA cluster policy is deprecated as of Redis Enterprise Software version 7.2.4. +{{}} + +To enable or turn off replica high availability by default for the entire cluster, use one of the following methods: + +- [rladmin tune cluster]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster slave_ha { enabled | disabled } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "slave_ha": } + ``` + +### Turn off replica HA for a database + +To turn off replica high availability for a specific database using `rladmin`, run: + +``` text +rladmin tune db db: slave_ha disabled +``` + +You can use the database name in place of `db:` in the preceding command. + + +## Configuration options + +You can see the current configuration options for replica HA with: + +``` text +rladmin info cluster +``` + +### Grace period + +By default, replica HA has a 10-minute grace period after node failure and before new replica shards are created. + +{{}}The default grace period is 30 minutes for containerized applications using [Redis Enterprise Software for Kubernetes]({{< relref "/operate/kubernetes/" >}}).{{}} + +To configure this grace period from rladmin, run: + +``` text +rladmin tune cluster slave_ha_grace_period +``` + + +### Shard priority + +Replica shard migration is based on priority. When memory resources are limited, the most important replica shards are migrated first: + +1. `slave_ha_priority` - Replica shards with higher + integer values are migrated before shards with lower values. + + To assign priority to a database, run: + + ``` text + rladmin tune db db: slave_ha_priority + ``` + + You can use the database name in place of `db:` in the preceding command. + +1. Active-Active databases - Active-Active database synchronization uses replica shards to synchronize between the replicas. +1. Database size - It is easier and more efficient to move replica shards of smaller databases. +1. 
Database UID - The replica shards of databases with a higher UID are moved first. + +### Cooldown periods + +Both the cluster and the database have cooldown periods. + +After node failure, the cluster cooldown period (`slave_ha_cooldown_period`) prevents another replica migration due to another node failure for any +database in the cluster until the cooldown period ends. The default is one hour. + +After a database is migrated with replica HA, +it cannot go through another migration due to another node failure until the cooldown period for the database (`slave_ha_bdb_cooldown_period`) ends. The default is two hours. + +To configure cooldown periods, use [`rladmin tune cluster`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + +- For the cluster: + + ``` text + rladmin tune cluster slave_ha_cooldown_period + ``` + +- For all databases in the cluster: + + ``` text + rladmin tune cluster slave_ha_bdb_cooldown_period + ``` + +### Alerts + +The following alerts are sent during replica HA activation: + +- Shard migration begins after the grace period. +- Shard migration fails because there is no available node (sent hourly). +- Shard migration is delayed because of the cooldown period. +--- +Title: Change database upgrade configuration +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure cluster-wide policies that affect default database upgrades. +linkTitle: Upgrade configuration +toc: 'true' +weight: 15 +url: '/operate/rs/7.4/databases/configure/db-upgrade/' +--- + +Database upgrade configuration includes cluster-wide policies that affect default database upgrades. + +## Edit upgrade configuration + +To edit database upgrade configuration using the Cluster Manager UI: + +1. On the **Databases** screen, select {{< image filename="/images/rs/buttons/button-toggle-actions-vertical.png#no-click" alt="Toggle actions button" width="22px" class="inline" >}} to open a list of additional actions. + +1. Select **Upgrade configuration**. + +1. Change database [upgrade configuration settings](#upgrade-config-settings). + +1. Select **Save**. + +## Upgrade configuration settings {#upgrade-config-settings} + +### Database shard parallel upgrade + +To change the number of shards upgraded in parallel during database upgrades, use one of the following methods: + +- Cluster Manager UI – Edit **Database shard parallel upgrade** in [**Upgrade configuration**](#edit-upgrade-configuration) + +- [rladmin tune cluster]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster parallel_shards_upgrade { all | } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "parallel_shards_upgrade": } + ``` + +### RESP3 support + +The cluster-wide option `resp3_default` determines the default value of the `resp3` option, which enables or deactivates RESP3 for a database, upon upgrading a database to version 7.2 or later. `resp3_default` is set to `enabled` by default. 
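+
+When RESP3 is enabled for a database, clients opt in to the protocol on a per-connection basis. As an illustration, here is a minimal `redis-py` sketch; the connection details are assumptions, and the `protocol=3` option requires redis-py 5.x or later:
+
+```py
+import redis
+
+# Assumed connection details for a database with RESP3 enabled.
+r = redis.Redis(host="localhost", port=12000, protocol=3, decode_responses=True)
+
+r.hset("user:1", mapping={"name": "Ada", "role": "engineer"})
+print(r.hgetall("user:1"))  # With RESP3, the hash is returned as a map reply
+```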
+ +To change `resp3_default` to `disabled`, use one of the following methods: + +- Cluster Manager UI – Edit **RESP3 support** in [**Upgrade configuration**](#edit-upgrade-configuration) + +- [rladmin tune cluster]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster resp3_default { enabled | disabled } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "resp3_default": } + ``` +--- +Title: Enable OSS Cluster API +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linkTitle: OSS Cluster API +weight: 20 +aliases: + - /operate/rs/concepts/data-access/oss-cluster-api +url: '/operate/rs/7.4/databases/configure/oss-cluster-api/' +--- + +Review [Redis OSS Cluster API]({{< relref "/operate/rs/7.4/clusters/optimize/oss-cluster-api" >}}) to determine if you should enable this feature for your database. + +## Prerequisites + +The Redis OSS Cluster API is supported only when a database meets specific criteria. + +The database must: + +- Use the standard [hashing policy]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering#supported-hashing-policies" >}}). +- Have the [proxy policy]({{< relref "/operate/rs/7.4/databases/configure/proxy-policy" >}}) set to either `all-master-shards` or `all-nodes`. + +In addition, the database must _not_: + +- Use node `include` or `exclude` in the proxy policy. +- Use [RediSearch]({{< relref "/operate/oss_and_stack/stack-with-enterprise/search" >}}), [RedisTimeSeries]({{< relref "/operate/oss_and_stack/stack-with-enterprise/timeseries" >}}), or [RedisGears]({{< relref "/operate/oss_and_stack/stack-with-enterprise/gears-v1" >}}) modules. + +The OSS Cluster API setting applies to individual databases instead of the entire cluster. + +## Enable OSS Cluster API support + +You can use the Cluster Manager UI or the `rladmin` utility to enable OSS Cluster API support for a database. + +### Cluster Manager UI + +When you use the Cluster Manager UI to enable the OSS Cluster API, it automatically configures the [prerequisites]({{< relref "/operate/rs/7.4/databases/configure/oss-cluster-api#prerequisites" >}}). + +To enable the OSS Cluster API for an existing database in the Cluster Manager UI: + +1. From the database's **Configuration** tab, select **Edit**. + +1. Expand the **Clustering** section. + +1. Turn on the **OSS Cluster API** toggle. + + {{Use the *OSS Cluster API* setting to enable the API for the selected database.}} + +1. Select **Save**. + +You can also use the Cluster Manager UI to enable the setting when creating a new database. + +### Command line (`rladmin`) + +You can use the [`rladmin` utility]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/" >}}) to enable the OSS Cluster API for Redis Enterprise Software databases, including Replica Of databases. + +For Active-Active (CRDB) databases, [use the crdb-cli utility](#active-active-databases). + +Ensure the [prerequisites]({{< relref "/operate/rs/7.4/databases/configure/oss-cluster-api#prerequisites" >}}) have been configured. Then, enable the OSS Cluster API for a Redis database from the command line: + +```sh +$ rladmin tune db oss_cluster enabled +``` + +To determine the current setting for a database from the command line, use `rladmin info db` to return the value of the `oss_cluster` setting. 
+ +```sh +$ rladmin info db test | grep oss_cluster: + oss_cluster: enabled +``` + +The OSS Cluster API setting applies to the specified database only; it does not apply to the cluster. + +### Active-Active databases + +Ensure the [prerequisites]({{< relref "/operate/rs/7.4/databases/configure/oss-cluster-api#prerequisites" >}}) have been configured. Then, use the `crdb-cli` utility to enable the OSS Cluster API for Active-Active databases: + +```sh +$ crdb-cli crdb update --crdb-guid --oss-cluster true +``` + +For best results, you should do this when you first create the database. + +Here's the basic process: + +1. Create the Active-Active database: + + ```sh + $ crdb-cli crdb create --name \ + --memory-size 10g --port \ + --sharding true --shards-count 2 \ + --replication true --oss-cluster true --proxy-policy all-master-shards \ + --instance fqdn=,username=,password= \ + --instance fqdn=,username=,password= \ + --instance fqdn=,username=,password= + ``` + +1. Obtain the CRDB-GUID ID for the new database: + + ```sh + $ crdb-cli crdb list + CRDB-GUID NAME REPL-ID CLUSTER-FQDN + Test 4 cluster1.local + ``` + +1. Use the CRDB-GUID ID to enable the OSS Cluster API: + + ```sh + $ crdb-cli crdb update --crdb-guid \ + --oss-cluster true + ``` + +The OSS Cluster API setting applies to all of the instances of the Active-Active database. + +## Turn off OSS Cluster API support + +To deactivate OSS Cluster API support for a database, either: + +- Use the Cluster Manager UI to turn off the **OSS Cluster API** toggle from the database **Configuration** settings. + +- Use the appropriate utility to deactivate the OSS Cluster API setting. + + For standard databases, including Replica Of, use `rladmin`: + + ```sh + $ rladmin tune db oss_cluster disabled + ``` + + For Active-Active databases, use `crdb-cli`: + + ```sh + $ crdb-cli crdb update --crdb-guid \ + --oss-cluster false + ``` + +## Multi-key command support + +When you enable the OSS Cluster API for a database, +[multi-key commands]({{< relref "/operate/rc/databases/configuration/clustering#multikey-operations" >}}) are only allowed when all keys are mapped to the same slot. + +To verify that your database meets this requirement, make sure that the `CLUSTER KEYSLOT` reply is the same for all keys affected by the multi-key command. To learn more, see [multi-key operations]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering#multikey-operations" >}}). +--- +Title: Configure shard placement +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure shard placement to improve performance. +linktitle: Shard placement +weight: 60 +url: '/operate/rs/7.4/databases/configure/shard-placement/' +--- +In Redis Enterprise Software , the location of master and replica shards on the cluster nodes can impact the database and node performance. +Master shards and their corresponding replica shards are always placed on separate nodes for data resiliency. +The [shard placement policy]({{< relref "/operate/rs/7.4/databases/memory-performance/shard-placement-policy.md" >}}) helps to maintain optimal performance and resiliency. + +{{< embed-md "shard-placement-intro.md" >}} + +## Default shard placement policy + +When you create a new cluster, the cluster configuration has a `dense` default shard placement policy. +When you create a database, this default policy is applied to the new database. 
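+
+The same cluster-wide default is also exposed through the REST API as the `default_shards_placement` field of the cluster policy object, so you can manage it without `rladmin` if you prefer. A minimal sketch (the value shown is just an example):
+
+```sh
+PUT /v1/cluster/policy
+{ "default_shards_placement": "sparse" }
+```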
+
+To see the current default shard placement policy, run `rladmin info cluster`:
+
+{{< image filename="/images/rs/shard_placement_info_cluster.png" >}}
+
+To change the default shard placement policy so that new databases are created with the `sparse` shard placement policy, run:
+
+```sh
+rladmin tune cluster default_shards_placement [ dense | sparse ]
+```
+
+## Shard placement policy for a database
+
+To see the shard placement policy for a database, run `rladmin status`:
+
+{{< image filename="/images/rs/shard_placement_rladmin_status.png" >}}
+
+To change the shard placement policy for a database, run:
+
+```sh
+rladmin placement db [ database name | database ID ] [ dense | sparse ]
+```
+---
+Title: Configure proxy policy
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: null
+linktitle: Proxy policy
+weight: 40
+url: '/operate/rs/7.4/databases/configure/proxy-policy/'
+---
+Redis Enterprise Software (RS) provides high-performance data access
+through a proxy process that manages and optimizes access to shards
+within the RS cluster. Each node contains a single proxy process.
+Each proxy can be active and take incoming traffic or it can be passive
+and wait for failovers.
+
+## Proxy policies
+
+A database can have one of these proxy policies:
+
+| **Proxy Policy** | **Description** |
+|------------|-----------------|
+| Single | There is only a single proxy that is bound to the database. This is the default database configuration and preferable in most use cases. |
+| All Master Shards | There are multiple proxies that are bound to the database, one on each node that hosts a database master shard. This mode fits most use cases that require multiple proxies. |
+| All Nodes | There are multiple proxies that are bound to the database, one on each node in the cluster, regardless of whether or not there is a shard from this database on the node. This mode should be used only in special cases, such as [using a load balancer]({{< relref "/operate/rs/7.4/networking/cluster-lba-setup.md" >}}). |
+
+{{< note >}}
+Manual intervention is also available via the `rladmin bind` add and
+remove commands.
+{{< /note >}}
+
+## Database configuration
+
+A database can be configured with a proxy policy using `rladmin bind`.
+
+Warning: Any configuration update that unbinds existing proxies can
+cause existing client connections to be disconnected.
+
+You can run `rladmin` to control and view the existing settings for proxy
+configuration.
+
+The **info** command on cluster returns the existing proxy policy for
+sharded and non-sharded (single shard) databases.
+
+```sh
+$ rladmin info cluster
+cluster configuration:
+   repl_diskless: enabled
+   default_non_sharded_proxy_policy: single
+   default_sharded_proxy_policy: single
+   default_shards_placement: dense
+   default_shards_overbooking: disabled
+   default_fork_evict_ram: enabled
+   default_redis_version: 3.2
+   redis_migrate_node_threshold: 0KB (0 bytes)
+   redis_migrate_node_threshold_percent: 8 (%)
+   redis_provision_node_threshold: 0KB (0 bytes)
+   redis_provision_node_threshold_percent: 12 (%)
+   max_simultaneous_backups: 4
+   watchdog profile: local-network
+```
+
+You can configure the proxy policy using the `bind` command in
+rladmin. The following command is an example that changes the bind
+policy for a database named "db1" with an endpoint id "1:1" to the "All
+Master Shards" proxy policy.
+ +```sh +rladmin bind db db1 endpoint 1:1 policy all-master-shards +``` + +The next command performs the same task using the database id in place of the name. The id of this database is "1". + +```sh +rladmin bind db db:1 endpoint 1:1 policy all-master-shards +``` + +{{< note >}} +You can find the endpoint id for the endpoint argument by running +*status* command for rladmin. Look for the endpoint id information under +the *ENDPOINT* section of the output. +{{< /note >}} + +### Reapply policies after topology changes + +If you want to reapply the policy after topology changes, such as node restarts, +failovers and migrations, run this command to reset the policy: + +```sh +rladmin bind db db: endpoint policy +``` + +This is not required with single policies. + +#### Other implications + +During the regular operation of the cluster different actions might take +place, such as automatic migration or automatic failover, which change +what proxy needs to be bound to what database. When such actions take +place the cluster attempts, as much as possible, to automatically change +proxy bindings to adhere to the defined policies. That said, the cluster +attempts to prevent any existing client connections from being +disconnected, and hence might not entirely enforce the policies. In such +cases, you can enforce the policy using the appropriate rladmin +commands. + +## About multiple active proxy support + +RS allows multiple databases to be created. Each database gets an +endpoint (a unique URL and port on the FQDN). This endpoint receives all +the traffic for all operations for that database. By default, RS binds +this database endpoint to one of the proxies on a single node in the +cluster. This proxy becomes an active proxy and receives all the +operations for the given database. (note that if the node with the +active proxy fails, a new proxy on another node takes over as part of +the failover process automatically). + +In most cases, a single proxy can handle a large number of operations +without consuming additional resources. However, under high load, +network bandwidth or a high rate of packets per second (PPS) on the +single active proxy can become a bottleneck to how fast database +operation can be performed. In such cases, having multiple active +proxies, across multiple nodes, mapped to the same external database +endpoint, can significantly improve throughput. + +With the multiple active proxies capability, RS enables you to configure +a database to have multiple internal proxies in order to improve +performance, in some cases. It is important to note that, even though +multiple active proxies can help improve the throughput of database +operations, configuring multiple active proxies may cause additional +latency in operations as the shards and proxies are spread across +multiple nodes in the cluster. + +{{< note >}} +When the network on a single active proxy becomes the bottleneck, +you might also look into enabling the multiple NIC support in RS. With +nodes that have multiple physical NICs (Network Interface Cards), you +can configure RS to separate internal and external traffic onto +independent physical NICs. For more details, refer to [Manage IP addresses]({{< relref "/operate/rs/7.4/networking/multi-ip-ipv6" >}}). +{{< /note >}} + +Having multiple proxies for a database can improve RS's ability for fast +failover in case of proxy and/or node failure. 
With multiple proxies for a database, in most cases there is no need for a client to wait for the cluster to spin up another proxy and for a DNS change; the client just uses the next IP in the list to connect to another proxy.
+---
+Title: Configure database persistence
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: How to configure database persistence with either an append-only file
+  (AOF) or snapshots.
+linktitle: Persistence
+weight: 30
+url: '/operate/rs/7.4/databases/configure/database-persistence/'
+---
+All data is stored and managed exclusively in either RAM or RAM + flash memory ([Auto Tiering]({{< relref "/operate/rs/7.4/databases/auto-tiering/" >}})) and is therefore at risk of being lost upon a process or server
+failure. As Redis Enterprise Software is not just a caching solution, but also a full-fledged database, [persistence](https://redis.com/redis-enterprise/technology/durable-redis/) to disk
+is critical. Therefore, Redis Enterprise Software supports persisting data to disk on a per-database basis and in multiple ways.
+
+[Persistence](https://redis.com/redis-enterprise/technology/durable-redis/) can be configured either during database creation or by editing an existing
+database's configuration. While the persistence model can be changed dynamically, it can take time for your database to switch from one persistence model to the other. The time depends on what you are switching from and to, as well as the size of your database.
+
+## Configure database persistence
+
+You can configure persistence when you [create a database]({{< relref "/operate/rs/7.4/databases/create" >}}), or you can edit an existing database's configuration:
+
+1. From the **Databases** list, select the database, then select **Configuration**.
+
+1. Select **Edit**.
+
+1. Expand the **High Availability** section.
+
+1. For **Persistence**, select an [option](#data-persistence-options) from the list.
+
+1. Select **Save**.
+
+## Data persistence options
+
+There are six options for persistence in Redis Enterprise Software:
+
+| **Options** | **Description** |
+| ------ | ------ |
+| None | Data is not persisted to disk at all. |
+| Append-only file (AOF) - fsync every write | Data is fsynced to disk with every write. |
+| Append-only file (AOF) - fsync every 1 sec | Data is fsynced to disk every second. |
+| Snapshot, every 1 hour | A snapshot of the database is created every hour. |
+| Snapshot, every 6 hours | A snapshot of the database is created every 6 hours. |
+| Snapshot, every 12 hours | A snapshot of the database is created every 12 hours. |
+
+## Select a persistence strategy
+
+When selecting your persistence strategy, you should take into account your tolerance for data loss and your performance needs. There will always be tradeoffs between the two.
+The fsync() system call syncs data from file buffers to disk. You can configure how often Redis performs an fsync() to most effectively make tradeoffs between performance and durability for your use case.
+Redis supports three fsync policies: every write, every second, and disabled.
+
+Redis also allows snapshots through RDB files for persistence. Within Redis Enterprise, you can configure both snapshots and fsync policies.
+
+For any high availability needs, use replication to further reduce the risk of data loss.
+
+**For use cases where data loss has a high cost:**
+
+Append-only file (AOF) - fsync every write - Redis Enterprise sets the Redis directive `appendfsync always`.
With this policy, Redis will wait for the write and the fsync to complete prior to sending an acknowledgement to the client that the data has been written. This introduces the performance overhead of the fsync in addition to the execution of the command. The `always` fsync policy favors durability over performance and should be used when there is a high cost for data loss.
+
+**For use cases where only limited data loss is tolerable:**
+
+Append-only file (AOF) - fsync every 1 sec - Redis will fsync any newly written data every second. This policy balances performance and durability and should be used when minimal data loss is acceptable in the event of a failure. This is the default Redis policy. This policy could result in between 1 and 2 seconds' worth of data loss, but on average it will be closer to one second.
+
+{{< note >}}
+If you use AOF for persistence, enable replication to improve performance. When both features are enabled for a database, the replica handles persistence, which prevents any performance impact on the master.
+{{< /note >}}
+
+**For use cases where data loss is tolerable or recoverable for extended periods of time:**
+
+- Snapshot, every 1 hour - Performs a full backup every hour.
+- Snapshot, every 6 hours - Performs a full backup every 6 hours.
+- Snapshot, every 12 hours - Performs a full backup every 12 hours.
+- None - Does not back up or persist data at all.
+
+## Append-only file (AOF) vs snapshot (RDB)
+
+Now that you know the available options, here is a comparison to help you decide which option is right for your use case:
+
+| **Append-only File (AOF)** | **Snapshot (RDB)** |
+|------------|-----------------|
+| More resource intensive | Less resource intensive |
+| Provides better durability (recover the latest point in time) | Less durable |
+| Slower time to recover (larger files) | Faster recovery time |
+| More disk space required (files tend to grow large and require compaction) | Requires fewer resources (I/O once every several hours and no compaction required) |
+
+## Active-Active data persistence
+
+Active-Active databases support AOF persistence only. Snapshot persistence is not supported for Active-Active databases.
+
+If an Active-Active database is using snapshot persistence, use `crdb-cli` to switch to AOF persistence:
+
+```text
+crdb-cli crdb update --crdb-guid --default-db-config \
+ '{"data_persistence": "aof", "aof_policy":"appendfsync-every-sec"}'
+```
+
+## Auto Tiering data persistence
+
+Auto Tiering flash storage is not considered persistent storage.
+
+Flash-based databases are expected to hold larger datasets, and shard repair times can take longer after node failures. To better protect the database against node failures with longer repair times, consider enabling master and replica dual data persistence.
+
+However, dual data persistence with replication adds some processor
+and network overhead, especially for cloud configurations
+with network-attached persistent storage, such as EBS-backed
+volumes in AWS.
+
+There may be times when performance is critical for your use case and
+you don't want to risk data persistence adding latency.
+
+You can enable or turn off data persistence on the master shards using the
+following `rladmin` command:
+
+```sh
+rladmin tune db master_persistence
+```
+---
+Title: Configure database defaults
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: Cluster-wide policies that determine default settings when creating new
+  databases.
+linkTitle: Database defaults +toc: 'true' +weight: 10 +url: '/operate/rs/7.4/databases/configure/db-defaults/' +--- + +Database defaults are cluster-wide policies that determine default settings when creating new databases. + +## Edit database defaults + +To edit default database configuration using the Cluster Manager UI: + +1. On the **Databases** screen, select {{< image filename="/images/rs/buttons/button-toggle-actions-vertical.png#no-click" alt="Toggle actions button" width="22px" class="inline" >}} to open a list of additional actions. + +1. Select **Database defaults**. + +1. Configure [database defaults](#db-defaults). + + {{Database defaults configuration panel.}} + +1. Select **Save**. + +## Database defaults {#db-defaults} + +### Endpoint configuration + +You can choose a predefined endpoint configuration to use the recommended database proxy and shards placement policies for your use case. If you want to set these policies manually instead, select **Custom** endpoint configuration. + +| Endpoint configuration | Database proxy | Shards placement | Description | +|-----------|------------|----------------|------------------|------------| +| Enterprise clustering | Single | Dense | Sets up a single endpoint that uses DNS to automatically reflect IP address updates after failover or topology changes. | +| Using a load balancer | All nodes | Sparse | Configure Redis with a load balancer like HAProxy or Nginx for environments without DNS. | +| Multiple endpoints | All primary shards | Sparse | To set up multiple endpoints, enable **OSS Cluster API** in the database settings and ensure client support. Clients initially connect to the primary node to retrieve the cluster topology, which allows direct connections to individual Redis proxies on each node. | +| Custom | Single, all primary shards, or all nodes | Dense or sparse | Manually choose default database proxy and shards placement policies. | + +### Database proxy + +Redis Enterprise Software uses [proxies]({{< relref "/operate/rs/7.4/references/terminology#proxy" >}}) to manage and optimize access to database shards. Each node in the cluster runs a single proxy process, which can be active (receives incoming traffic) or passive (waits for failovers). + +You can configure default [proxy policies]({{< relref "/operate/rs/7.4/databases/configure/proxy-policy" >}}) to determine which nodes' proxies are active and bound to new databases by default. + +To configure the default database proxy policy using the Cluster Manager UI: + +1. [**Edit database defaults**](#edit-database-defaults). + +1. Select a predefined [**Endpoint Configuration**](#endpoint-configuration) to use a recommended database proxy policy, or choose **Custom** to set the policy manually. Changing the database proxy default in the Cluster Manager UI affects both sharded and non-sharded proxy policies. 
+ + {{The Database defaults panel lets you select Database proxy and Shards placement if Endpoint Configuration is set to Custom.}} + +#### Non-sharded proxy policy + +To configure the default proxy policy for non-sharded databases, use one of the following methods: + +- [rladmin tune cluster]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster default_non_sharded_proxy_policy { single | all-master-shards | all-nodes } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "default_non_sharded_proxy_policy": "single | all-master-shards | all-nodes" } + ``` + +#### Sharded proxy policy + +To configure the default proxy policy for sharded databases, use one of the following methods: + +- [rladmin tune cluster]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster default_sharded_proxy_policy { single | all-master-shards | all-nodes } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "default_sharded_proxy_policy": "single | all-master-shards | all-nodes" } + ``` + +### Shards placement + +The default [shard placement policy]({{< relref "/operate/rs/7.4/databases/memory-performance/shard-placement-policy" >}}) determines the distribution of database shards across nodes in the cluster. + +Shard placement policies include: + +- `dense`: places shards on the smallest number of nodes. + +- `sparse`: spreads shards across many nodes. + +To configure default shard placement, use one of the following methods: + +- Cluster Manager UI: + + 1. [**Edit database defaults**](#edit-database-defaults). + + 1. Select a predefined [**Endpoint Configuration**](#endpoint-configuration) to use a recommended shards placement policy, or choose **Custom** to set the policy manually. + + {{The Database defaults panel lets you select Database proxy and Shards placement if Endpoint Configuration is set to Custom.}} + +- [rladmin tune cluster]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster default_shards_placement { dense | sparse } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "default_shards_placement": "dense | sparse" } + ``` + +### Database version + +New databases use the default Redis database version unless you select a different **Database version** when you [create a database]({{}}) in the Cluster Manager UI or specify the `redis_version` in a [create database REST API request]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs" >}}). 
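+
+For example, a create database request that overrides the default version might look like the following sketch (the database name, memory size, and version are placeholders, and other fields may be required in your environment):
+
+```sh
+POST /v1/bdbs
+{ "name": "db1", "memory_size": 1073741824, "redis_version": "7.2" }
+```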
+ +To configure the Redis database version, use one of the following methods: + +- Cluster Manager UI: Edit **Database version** in [**Database defaults**](#edit-database-defaults) + + +- [rladmin tune cluster]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster default_redis_version + ``` + +- [Update cluster policy]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "default_provisioned_redis_version": "x.y" } + ``` + +### Internode encryption + +Enable [internode encryption]({{< relref "/operate/rs/7.4/security/encryption/internode-encryption" >}}) to encrypt data in transit between nodes for new databases by default. + +To enable or turn off internode encryption by default, use one of the following methods: + +- Cluster Manager UI: Edit **Internode Encryption** in [**Database defaults**](#edit-database-defaults) + +- [rladmin tune cluster]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster data_internode_encryption { enabled | disabled } + ``` + +- [Update cluster policy]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "data_internode_encryption": } + ``` +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure settings specific to each database. +hideListLinks: true +linktitle: Configure +title: Configure database settings +toc: 'true' +weight: 20 +url: '/operate/rs/7.4/databases/configure/' +--- + +You can manage your Redis Enterprise Software databases with several tools: + +- [Cluster Manager UI](#edit-database-settings) (the web-based user interface) + +- Command-line tools: + + - [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) for standalone database configuration + + - [`crdb-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli" >}}) for Active-Active database configuration + + - [`redis-cli`]({{< relref "/develop/tools/cli" >}}) for Redis Open Source configuration + +- [REST API]({{< relref "/operate/rs/7.4/references/rest-api/_index.md" >}}) + +## Edit database settings + +You can change the configuration of a Redis Enterprise Software database at any time. + +To edit the configuration of a database using the Cluster Manager UI: + +1. On the **Databases** screen, select the database you want to edit. + +1. From the **Configuration** tab, select **Edit**. + +1. Change any [configurable database settings](#config-settings). + + {{< note >}} +For [Active-Active database instances]({{< relref "/operate/rs/7.4/databases/active-active" >}}), most database settings only apply to the instance that you are editing. + {{< /note >}} + +1. Select **Save**. + +## Configuration settings {#config-settings} + +- **Database version** - Select the Redis version when you create a database. + +- **Name** - The database name requirements are: + + - Maximum of 63 characters + + - Only letters, numbers, or hyphens (-) are valid characters + + - Must start and end with a letter or digit + + - Case-sensitive + +- **Endpoint port number** - You can define the port number that clients use to connect to the database. Otherwise, a port is randomly selected. 
+ + {{< note >}} +You cannot change the [port number]({{< relref "/operate/rs/7.4/networking/port-configurations.md" >}}) +after the database is created. + {{< /note >}} + +- **Memory limit** - [Database memory limits]({{< relref "/operate/rs/7.4/databases/memory-performance/memory-limit.md" >}}) include all database replicas and shards, including replica shards in database replication and database shards in database clustering. + + If the total size of the database in the cluster reaches the memory limit, the data eviction policy for the database is enforced. + + {{< note >}} +If you create a database with Auto Tiering enabled, you also need to set the RAM-to-Flash ratio +for this database. Minimum RAM is 10%. Maximum RAM is 50%. + {{< /note >}} + +- [**Capabilities**]({{< relref "/operate/oss_and_stack/stack-with-enterprise" >}}) (previously **Modules**) - When you create a new in-memory database, you can enable multiple Redis Stack capabilities in the database. For Auto Tiering databases, you can enable capabilities that support Auto Tiering. See [Redis Enterprise and Redis Stack feature compatibility +]({{< relref "/operate/oss_and_stack/stack-with-enterprise/enterprise-capabilities" >}}) for compatibility details. + + {{< note >}} +To use Redis Stack capabilities, enable them when you create a new database. +You cannot enable them after database creation. + {{< /note >}} + + To add capabilities to the database: + + 1. In the **Capabilities** section, select one or more capabilities. + + 1. To customize capabilities, select **Parameters** and enter the optional custom configuration. + + 1. Select **Done**. + +### High availability & durability + +- [**Replication**]({{< relref "/operate/rs/7.4/databases/durability-ha/replication.md" >}}) - We recommend you use intra-cluster replication to create replica shards for each database for high availability. + + If the cluster is configured to support [rack-zone awareness]({{< relref "/operate/rs/7.4/clusters/configure/rack-zone-awareness.md" >}}), you can also enable rack-zone awareness for the database. + +- [**Replica high availability**]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}) - Automatically migrates replica shards to an available node if a replica node fails or is promoted to primary. + +- [**Persistence**]({{< relref "/operate/rs/7.4/databases/configure/database-persistence.md" >}}) - To protect against loss of data stored in RAM, you can enable data persistence and store a copy of the data on disk with snapshots or an Append Only File. + +- [**Data eviction policy**]({{< relref "/operate/rs/7.4/databases/memory-performance/eviction-policy.md" >}}) - By default, when the total size of the database reaches its memory limit the database evicts keys according to the least recently used keys out of all keys with an "expire" field set in order to make room for new keys. You can select a different data eviction policy. + +### Clustering + +- **Sharding** - You can either: + - Turn on **Sharding** to enable [database clustering]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering.md" >}}) and select the number of database shards. + + When database clustering is enabled, databases are subject to limitations on [Multi-key commands]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering.md" >}}). + + You can increase the number of shards in the database at any time. 
+ + You can accept the [standard hashing policy]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering#standard-hashing-policy" >}}), which is compatible with Redis Open Source, or define a [custom hashing policy]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering#custom-hashing-policy" >}}) to define where keys are located in the clustered database. + + - Turn off **Sharding** to use only one shard so that you can use [Multi-key commands]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering.md" >}}) without the limitations. + +- [**OSS Cluster API**]({{< relref "/operate/rs/7.4/databases/configure/oss-cluster-api.md" >}}) - The OSS Cluster API configuration allows access to multiple endpoints for increased throughput. + + This configuration requires clients to connect to the primary node to retrieve the cluster topology before they can connect directly to proxies on each node. + + When you enable the OSS Cluster API, shard placement changes to _Sparse_, and the database proxy policy changes to _All primary shards_ automatically. + + {{}} +You must use a client that supports the cluster API to connect to a database that has the cluster API enabled. + {{}} + +- [**Shards placement**]({{< relref "/operate/rs/7.4/databases/memory-performance/shard-placement-policy" >}}) - Determines how to distribute database shards across nodes in the cluster. + + - _Dense_ places shards on the smallest number of nodes. + + - _Sparse_ spreads shards across many nodes. + +- [**Database proxy**]({{< relref "/operate/rs/7.4/databases/configure/proxy-policy" >}}) - Determines the number and location of active proxies, which manage incoming database operation requests. + +### Replica Of + +With [**Replica Of**]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/create.md" >}}), you can make the database a repository for keys from other databases. + +### Scheduled backup + +You can configure [periodic backups]({{< relref "/operate/rs/7.4/databases/import-export/schedule-backups" >}}) of the database, including the interval and backup location parameters. + +### Alerts + +Select [alerts]({{< relref "/operate/rs/7.4/clusters/monitoring#database-alerts" >}}) to show in the database status and configure their thresholds. + +You can also choose to [send alerts by email]({{< relref "/operate/rs/7.4/clusters/monitoring#send-alerts-by-email" >}}) to relevant users. + +### TLS + +You can require [**TLS**]({{< relref "/operate/rs/7.4/security/encryption/tls/" >}}) encryption and authentication for all communications, TLS encryption and authentication for Replica Of communication only, and TLS authentication for clients. + +### Access control + +- **Unauthenticated access** - You can access the database as the default user without providing credentials. + +- **Password-only authentication** - When you configure a password for your database's default user, all connections to the database must authenticate with the [AUTH command]({{< relref "/commands/auth" >}}). + + If you also configure an access control list, connections can specify other users for authentication, and requests are allowed according to the Redis ACLs specified for that user. + + Creating a database without ACLs enables a *default* user with full access to the database. You can secure default user access by requiring a password. 
+ +- **Access Control List** - You can specify the [user roles]({{< relref "/operate/rs/7.4/security/access-control/create-db-roles" >}}) that have access to the database and the [Redis ACLs]({{< relref "/operate/rs/7.4/security/access-control/redis-acl-overview" >}}) that apply to those connections. + + To define an access control list for a database: + + 1. In **Security > Access Control > Access Control List**, select **+ Add ACL**. + + 1. Select a [role]({{< relref "/operate/rs/7.4/security/access-control/create-db-roles" >}}) to grant database access. + + 1. Associate a [Redis ACL]({{< relref "/operate/rs/7.4/security/access-control/create-db-roles" >}}) with the role and database. + + 1. Select the check mark to add the ACL. + +### Internode encryption + +Enable **Internode encryption** to encrypt data in transit between nodes for this database. See [Internode encryption]({{< relref "/operate/rs/7.4/security/encryption/internode-encryption" >}}) for more information. + +--- +Title: Recover a failed database +alwaysopen: false +categories: +- docs +- operate +- rs +- kubernetes +description: Recover a database after the cluster fails or the database is corrupted. +linktitle: Recover +weight: 35 +url: '/operate/rs/7.4/databases/recover/' +--- +When a cluster fails or a database is corrupted, you must: + +1. [Restore the cluster configuration]({{< relref "/operate/rs/7.4/clusters/cluster-recovery.md" >}}) from the CCS files +1. Recover the databases with their previous configuration and data + +To restore data to databases in the new cluster, +you must restore the database persistence files (backup, AOF, or snapshot files) to the databases. +These files are stored in the [persistence storage location]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/persistent-ephemeral-storage" >}}). + +The database recovery process includes: + +1. If the cluster failed, [recover the cluster]({{< relref "/operate/rs/7.4/clusters/cluster-recovery.md" >}}). +1. Identify recoverable databases. +1. Restore the database data. +1. Verify that the databases are active. + +## Prerequisites + +- Before you start database recovery, make sure that the cluster that hosts the database is healthy. + In the case of a cluster failure, + you must [recover the cluster]({{< relref "/operate/rs/7.4/clusters/cluster-recovery.md" >}}) before you recover the databases. + +- We recommend that you allocate new persistent storage drives for the new cluster nodes. + If you use the original storage drives, + make sure to back up all files on the old persistent storage drives to another location. + +## Recover databases + +After you prepare the cluster that hosts the database, +you can run the recovery process from the [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) +command-line interface (CLI). + +To recover the database: + +1. Mount the persistent storage drives with the recovery files to the new nodes. + These drives must contain the cluster configuration backup files and database persistence files. + + {{< note >}} +Make sure that the user `redislabs` has permissions to access the storage location +of the configuration and persistence files on each of the nodes. + {{< /note >}} + + If you use local persistent storage, place all of the recovery files on each of the cluster nodes. + +1. To see which databases are recoverable, run: + + ```sh + rladmin recover list + ``` + + The status for each database can be either ready for recovery or missing files. 
+ An indication of missing files in any of the databases can result from: + + - The storage location is not found - Make sure the recovery path is set correctly on all nodes in the cluster. + - Files are not found in the storage location - Move the files to the storage location. + - No permission to read the files - Change the file permissions so that redislabs:redislabs has 640 permissions. + - Files are corrupted - Locate copies of the files that are not corrupted. + + If you cannot resolve the issues, contact [Redis support](https://redis.com/company/support/). + +1. Recover the database using one of the following [`rladmin recover`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/recover" >}}) commands: + + - Recover all databases from the persistence files located in the persistent storage drives: + + ```sh + rladmin recover all + ``` + + - Recover a single database from the persistence files located in the persistent storage drives: + + - By database ID: + + ```sh + rladmin recover db db: + ``` + + - By database name: + + ```sh + rladmin recover db + ``` + + - Recover only the database configuration for a single database (without the data): + + ```sh + rladmin recover db only_configuration + ``` + + {{< note >}} +- If persistence was not configured for the database, the database is restored empty. +- For Active-Active databases that still have live instances, we recommend that you recover the configuration for the failed instances and let the data update from the other instances. +- For Active-Active databases where all instances need to be recovered, we recommend you recover one instance with the data and only recover the configuration for the other instances. + The empty instances then update from the recovered data. +- If the persistence files of the databases from the old cluster are not stored in the persistent storage location of the new node, + you must first map the recovery path of each node to the location of the old persistence files. + To do this, run the `node recovery_path set` command in rladmin. + The persistence files for each database are located in the persistent storage path of the nodes from the old cluster, usually under `/var/opt/redislabs/persist/redis`. + {{< /note >}} + +1. To verify that the recovered databases are now active, run: + + ```sh + rladmin status + ``` + +After the databases are recovered, make sure your Redis clients can successfully connect to the databases. + +## Configure automatic recovery + +If you enable the automatic recovery cluster policy, Redis Enterprise tries to quickly recover as much data as possible from before the disaster. + +To enable automatic recovery, [update the cluster policy]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) using the REST API: + +```sh +PUT /v1/cluster/policy +{ + "auto_recovery": true +} +``` + +Redis Enterprise tries to recover databases from the best existing persistence files. If a persistence file isn't available, which can happen if its host node is down, the automatic recovery process waits for it to become available. + +For each database, you can set the `recovery_wait_time` to define how many seconds the database waits for a persistence file to become available before recovery. After the wait time elapses, the recovery process continues, which can result in partial or full data loss. The default value is `-1`, which means to wait forever. Short wait times can increase the risk of potential data loss. 
+ +To change `recovery_wait_time` for an existing database using the REST API: + +```sh +PUT /v1/bdbs/ +{ + "recovery_wait_time": 3600 +} +``` + +You can also set `recovery_wait_time` when you [create a database]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs#post-bdbs-v1" >}}) using the REST API. +--- +Title: Database replication +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linktitle: Replication +weight: 40 +url: '/operate/rs/7.4/databases/durability-ha/replication/' +--- +Database replication helps ensure high availability. +When replication is enabled, your dataset is replicated to a replica shard, +which is constantly synchronized with the primary shard. If the primary +shard fails, an automatic failover happens and the replica shard is promoted. That is, it becomes the new primary shard. + +When the old primary shard recovers, it becomes +the replica shard of the new primary shard. This auto-failover mechanism +guarantees that data is served with minimal interruption. + +You can tune your high availability configuration with: + +- [Rack/Zone +Awareness]({{< relref "/operate/rs/7.4/clusters/configure/rack-zone-awareness.md" >}}) - When rack-zone awareness is used additional logic ensures that master and replica shards never share the same rack, thus ensuring availability even under loss of an entire rack. +- [High Availability for Replica Shards]({{< relref "/operate/rs/7.4/databases/configure/replica-ha.md" >}}) - When high availability +for replica shards is used, the replica shard is automatically migrated on node failover to maintain high availability. + +{{< warning >}} +Enabling replication has implications for the total database size, +as explained in [Database memory limits]({{< relref "/operate/rs/7.4/databases/memory-performance/memory-limit.md" >}}). +{{< /warning >}} + +## Auto Tiering replication considerations + +We recommend that you set the sequential replication feature using +`rladmin`. This is due to the potential for relatively slow replication +times that can occur with Auto Tiering enabled databases. In some +cases, if sequential replication is not set up, you may run out of memory. + +While it does not cause data loss on the +primary shards, the replication to replica shards may not succeed as long +as there is high write-rate traffic on the primary and multiple +replications at the same time. + +The following `rladmin` command sets the number of primary shards eligible to +be replicated from the same cluster node, as well as the number of replica +shards on the same cluster node that can run the replication process at +any given time. + +The recommended sequential replication configuration is two, i.e.: + +```sh +rladmin tune cluster max_redis_forks 1 max_slave_full_syncs 1 +``` + +{{< note >}} +This means that at any given time, +only one primary and one replica can be part of a full sync replication process. +{{< /note >}} + +## Database replication backlog + +Redis databases that use [replication for high availability]({{< relref "/operate/rs/7.4/databases/durability-ha/replication.md" >}}) maintain a replication backlog (per shard) to synchronize the primary and replica shards of a database. +By default, the replication backlog is set to one percent (1%) of the database size divided by the database number of shards and ranges between 1MB to 250MB per shard. 
+Use the [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) and the [`crdb-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli" >}}) utilities to control the size of the replication backlog. You can set it to `auto` or set a specific size.
+
+The syntax varies between regular and Active-Active databases.
+
+For a regular Redis database:
+```text
+rladmin tune db repl_backlog
+```
+
+For an Active-Active database:
+```text
+crdb-cli crdb update --crdb-guid --default-db-config "{\"repl_backlog_size\": }"
+```
+
+### Active-Active replication backlog
+
+In addition to the database replication backlog, Active-Active databases maintain a backlog (per shard) to synchronize the database instances between clusters.
+By default, the Active-Active replication backlog is set to one percent (1%) of the database size divided by the database number of shards, and ranges between 1MB to 250MB per shard.
+Use the [`crdb-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli" >}}) utility to control the size of the CRDT replication backlog. You can set it to `auto` or set a specific size:
+
+```text
+crdb-cli crdb update --crdb-guid --default-db-config "{\"crdt_repl_backlog_size\": }"
+```
+
+**For Redis Software versions earlier than 6.0.20:**
+The replication backlog and the CRDT replication backlog defaults are set to 1MB and cannot be set dynamically with 'auto' mode.
+To control the size of the replication log, use [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) to tune the local database instance in each cluster.
+```text
+rladmin tune db repl_backlog
+```
+---
+Title: Discovery service
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: null
+linktitle: Discovery service
+weight: 30
+url: '/operate/rs/7.4/databases/durability-ha/discovery-service/'
+---
+The Discovery Service provides an IP-based connection management service
+used when connecting to Redis Enterprise Software databases. When used
+in conjunction with Redis Enterprise Software's other high availability
+features, the Discovery Service helps your application cope with
+topology changes such as adding or removing nodes, node failovers, and
+so on. It does this by providing your application with the ability to
+easily discover which node hosts the database endpoint. The API used by
+the Discovery Service is compliant with the Redis Sentinel API.
+
+The Discovery Service is an alternative for applications that do not want to
+depend on DNS name resolution for their connectivity. The Discovery Service
+and DNS-based connectivity are not mutually exclusive. They can be used
+side by side in a given cluster, where some clients use Discovery
+Service-based connections while others use DNS name resolution when
+connecting to databases.
+
+## How discovery service works
+
+The Discovery Service is available for querying on each node of the
+cluster, listening on port 8001. To employ it, your application uses
+a [Redis Sentinel-enabled client
+library]({{< relref "/operate/rs/7.4/databases/connect/supported-clients-browsers.md" >}})
+to connect to the Discovery Service and request the endpoint for the
+given database. The Discovery Service replies with the database's
+endpoint for that database.
In case of a node failure, the Discovery
+Service is updated by the cluster manager with the new endpoint, and
+clients that are unable to connect to the database endpoint due to the failover
+can re-query the Discovery Service for the new endpoint for the
+database.
+
+The Discovery Service can return either the internal or external
+endpoint for a database. If you query the Discovery Service for the
+endpoint of a database named "db1", the Discovery Service returns
+the external endpoint information by default. If only an internal
+endpoint exists and there is no external endpoint, the default behavior is to
+return the internal endpoint. Add "\@internal" to the end of
+the database name to explicitly request the internal endpoint. For example, to query
+the internal endpoint of the database named "db1", pass
+in the database name as "db1\@internal".
+
+If you'd like to examine the metadata returned by the Redis Enterprise
+Software Discovery Service, you can connect to port 8001 with the redis-cli
+utility and run "SENTINEL masters". Following is a sample output
+from one of the nodes of a Redis Enterprise Software cluster:
+
+```sh
+$ ./redis-cli -p 8001
+127.0.0.1:8001> SENTINEL masters
+1) 1) "name"
+   2) "db1@internal"
+   3) "ip"
+   4) "10.0.0.45"
+   5) "port"
+   6) "12000"
+   7) "flags"
+   8) "master,disconnected"
+   9) "num-other-sentinels"
+   10) "0"
+2) 1) "name"
+   2) "db1"
+   3) "ip"
+   4) "10.0.0.45"
+   5) "port"
+   6) "12000"
+   7) "flags"
+   8) "master,disconnected"
+   9) "num-other-sentinels"
+   10) "0"
+```
+
+It is important to note that the Discovery Service is not a full
+implementation of the [Redis Sentinel
+protocol]({{< relref "/operate/oss_and_stack/management/sentinel" >}}). There are aspects of the
+protocol that are not applicable or that would duplicate existing
+technology in Redis Enterprise Software. The Discovery Service
+implements only the parts required to provide applications with easy
+high availability, remain compatible with the protocol, and avoid relying on DNS
+to determine which node in the cluster to communicate with.
+
+{{< note >}}
+To use Redis Sentinel, every database name must be unique across the cluster.
+{{< /note >}}
+
+## Redis client support
+
+We recommend these clients that are tested for use with the [Discovery Service]({{< relref "/operate/rs/7.4/databases/durability-ha/discovery-service.md" >}}), which uses the Redis Sentinel API:
+
+{{< embed-md "discovery-clients.md" >}}
+
+{{< note >}}
+The Redis Sentinel API can return both master and replica
+endpoints.
+The Discovery Service only supports master endpoints and does not
+support returning replica endpoints for a database.
+{{< /note >}}
+---
+Title: Database clustering
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: Clustering to allow customers to spread the load of a Redis process over
+  multiple cores and the RAM of multiple servers.
+linktitle: Clustering
+weight: 10
+url: '/operate/rs/7.4/databases/durability-ha/clustering/'
+---
+Source-available [Redis](https://redislabs.com/redis-features/redis) runs as a single-threaded process
+to provide speed and simplicity.
+A single Redis process is bound by the CPU core that it is running on and the available memory on the server.
+
+Redis Enterprise Software supports database clustering to allow customers
+to spread the load of a Redis process over multiple cores and the RAM of multiple servers.
+A database cluster is a set of Redis processes where each process manages a subset of the database keyspace.
+ +The keyspace of a Redis Enterprise cluster is partitioned into database shards. +Each shard resides on a single node and is managed by that node. +Each node in a Redis database cluster can manage multiple shards. +The key space in the shards is divided into hash slots. +The slot of a key is determined by a hash of the key name or part of the key name. + +Database clustering is transparent to the Redis client that connects to the database. +The Redis client accesses the database through a single endpoint that automatically routes all operations to the relevant shards. +You can connect an application to a single Redis process or a clustered database without any difference in the application logic. + +## Terminology + +In clustering, these terms are commonly used: + +- Tag or Hash Tag - A part of the key that is used in the hash calculation. +- Slot or Hash Slot - The result of the hash calculation. +- Shard - Redis process that is part of the Redis clustered database. + +## When to use clustering (sharding) + +Clustering is an efficient way of scaling Redis that should be used when: + +- The dataset is large enough to benefit from using the RAM resources of more than one node. + When a dataset is more than 25 GB (50 GB for RoF), we recommend that you enable clustering to create multiple shards of the database + and spread the data requests across nodes. +- The operations performed against the database are CPU-intensive, resulting in performance degradation. + By having multiple CPU cores manage the database's shards, the load of operations is distributed among them. + +## Number of shards + +When enabling database clustering, you can set the number of database +shards. The minimum number of shards per database is 2 and the maximum +depends on the subscription you purchased. + +After you enable database clustering and set the number of shards, you cannot deactivate database clustering or reduce the number of +shards. You can only increase the number of shards by a multiple of the +current number of shards. For example, if the current number of shards +is 3, you can increase the number of shards to 6, 9, or 12. + +## Supported hashing policies + +### Standard hashing policy + +When using the standard hashing policy, a clustered Redis Enterprise database behaves similarly to a standard [Redis Open Source cluster]({{< relref "/operate/oss_and_stack/reference/cluster-spec" >}}#hash-tags), except when using multiple hash tags in a key's name. We recommend using only a single hash tag in a key name for hashing in Redis Enterprise. + +- **Keys with a hash tag**: a key's hash tag is any substring between + `{` and `}` in the key's name. When a key's name + includes the pattern `{...}`, the hash tag is used as input for the + hashing function. + + For example, the following key names have the same + hash tag and map to the same hash slot: `foo{bar}`, + `{bar}baz`, and `foo{bar}baz`. + +- **Keys without a hash tag**: when a key does not contain the `{...}` + pattern, the entire key's name is used for hashing. + +You can use a hash tag to store related keys in the same hash +slot so multi-key operations can run on these keys. If you do not use a hash tag in the key's name, the keys are distributed evenly across the keyspace's shards. +If your application does not perform multi-key operations, you do not +need to use hash tags. + +### Custom hashing policy + +You can configure a custom hashing policy for a clustered database. 
A
+custom hashing policy is required when different keys need to be kept
+together on the same shard to allow multi-key operations. The custom
+hashing policy is provided through a set of Perl Compatible Regular
+Expressions (PCRE) rules that describe the dataset's key name patterns.
+
+To configure a custom hashing policy, enter the regular expression
+(RegEx) rules that identify the substring in the key's name (the hash
+tag) on which hashing is done. The hash tag is denoted in the
+RegEx by the use of the `tag` named subpattern. Different keys that
+have the same hash tag are stored and managed in the same slot.
+
+After you enable the custom hashing policy, the following default RegEx
+rules are implemented. Update these rules to fit your specific logic:
+
+| RegEx Rule | Description |
+| ------ | ------ |
+| `.*{(?<tag>.*)}.*` | Hashing is done on the substring between the curly braces. |
+| `(?<tag>.*)` | The entire key's name is used for hashing. |
+
+You can modify existing rules, add new ones, delete rules, or change
+their order to suit your application's requirements.
+
+### Custom hashing policy notes and limitations
+
+1. You can define up to 32 RegEx rules, each up to 256 characters.
+2. RegEx rules are evaluated in order, and the first rule matched
+   is used. Therefore, you should place common key name patterns at the
+   beginning of the rule list.
+3. Key names that do not match any of the RegEx rules trigger an
+   error.
+4. The `.*(?<tag>)` RegEx rule forces keys into a single slot
+   because the hash tag is always empty. Therefore, when used,
+   this should be the last, catch-all rule.
+5. The following flag is enabled in the regular expression parser:
+   PCRE_ANCHORED, which constrains the pattern to match only at the
+   start of the string being searched.
+
+## Change the hashing policy
+
+The hashing policy of a clustered database can be changed. However,
+most hashing policy changes trigger the deletion (FLUSHDB) of the
+data before they can be applied.
+
+Examples of such changes include:
+
+- Changing the hashing policy from standard to custom or conversely,
+  custom to standard.
+- Changing the order of custom hashing policy rules.
+- Adding new rules in the custom hashing policy.
+- Deleting rules from the custom hashing policy.
+
+{{< note >}}
+The recommended workaround for updates that are not enabled,
+or require flushing the database,
+is to back up the database and import the data to a newly configured database.
+{{< /note >}}
+
+## Multi-key operations {#multikey-operations}
+
+Operations on multiple keys in a clustered database are supported with
+the following limitations:
+
+- **Multi-key commands**: Redis offers several commands that accept
+  multiple keys as arguments. In a clustered database, most multi-key
+  commands are not allowed across slots. The following multi-key
+  commands **are allowed** across slots: DEL, MSET, MGET, EXISTS, UNLINK, TOUCH
+
+  In Active-Active databases, multi-key write commands (DEL, MSET, UNLINK) can only be run on keys that are in the same slot. However, the following multi-key commands **are allowed** across slots in Active-Active databases: MGET, EXISTS, and TOUCH.
+
+  Commands that affect all keys or keys that match a specified pattern are allowed
+  in a clustered database, for example: FLUSHDB, FLUSHALL, KEYS
+
+  {{< note >}}
+When using these commands in a sharded setup,
+the command is distributed across multiple shards
+and the responses from all shards are combined into a single response.
+ {{< /note >}} + +- **Geo commands**: For the [GEORADIUS]({{< relref "/commands/georadius" >}}) and + [GEORADIUSBYMEMBER]({{< relref "/commands/georadiusbymember" >}}) commands, the + STORE and STOREDIST options can only be used when all affected keys + reside in the same slot. +- **Transactions**: All operations within a WATCH / MULTI / EXEC block + should be performed on keys that are mapped to the same slot. +- **Lua scripts**: All keys used by a Lua script must be mapped to the same + slot and must be provided as arguments to the EVAL / EVALSHA commands + (as per the Redis specification). Using keys in a Lua script that + were not provided as arguments might violate the sharding concept + but do not result in the proper violation error being returned. +- **Renaming/Copy keys**: The use of the RENAME / RENAMENX / COPY commands is + allowed only when the key's original and new values are mapped to + the same slot. +--- +Title: Durability and high availability +alwaysopen: false +categories: +- docs +- operate +- rs +description: Overview of Redis Enterprise durability features such as replication, + clustering, and rack-zone awareness. +hideListLinks: true +linktitle: Durability and availability +weight: 60 +url: '/operate/rs/7.4/databases/durability-ha/' +--- +Redis Enterprise Software comes with several features that make your data more durable and accessible. The following features can help protect your data in cases of failures or outages and help keep your data available when you need it. + +## Replication + +When you [replicate your database]({{}}), each database instance (shard) is copied one or more times. Your database will have one primary shard and one or more replica shards. When a primary shard fails, Redis Enterprise automatically promotes a replica shard to primary. + +## Clustering + +[Clustering]({{}}) (or sharding) breaks your database into individual instances (shards) and spreads them across several nodes. Clustering lets you add resources to your cluster to scale your database and prevents node failures from causing availability loss. + +## Database persistence + +[Database persistence]({{}}) gives your database durability against process or server failures by saving data to disk at set intervals. + +## Active-Active geo-distributed replication + +[Active-Active Redis Enterprise databases]({{}}) distribute your replicated data across multiple nodes and availability zones. This increases the durability of your database by reducing the likelihood of data or availability loss. It also reduces data access latency. + +## Rack-zone awareness + +[Rack-zone awareness]({{}}) maps each node in your Redis Enterprise cluster to a physical rack or logical zone. The cluster uses this information to distribute primary shards and their replica shards in different racks or zones. This ensures data availability if a rack or zone fails. + +## Discovery service + +The [discovery service]({{}}) provides an IP-based connection management service used when connecting to Redis Enterprise Software databases. It lets your application discover which node hosts the database endpoint. The discovery service API complies with the [Redis Sentinel API]({{< relref "/operate/oss_and_stack/management/sentinel" >}}#sentinel-api). 
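+
+Because the discovery service exposes a Sentinel-compatible API on port 8001, a Sentinel-aware client or `redis-cli` can ask it for a database's current endpoint. A minimal sketch, assuming a database named `db1` exists on the cluster and that the `get-master-addr-by-name` part of the Sentinel API is among the supported commands:
+
+```sh
+# Query the discovery service (Sentinel-compatible API on port 8001)
+# for the current master endpoint of the database named "db1".
+redis-cli -p 8001 SENTINEL get-master-addr-by-name db1
+
+# Example reply: the address and port of the database endpoint, e.g.
+# 1) "10.0.0.45"
+# 2) "12000"
+```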
+--- +Title: Consistency during replication +alwaysopen: false +categories: +- docs +- operate +- rs +description: Explains the order write operations are communicated from app to proxy to shards for both non-blocking Redis write operations and blocking write operations on replication. +linkTitle: Consistency +weight: 20 +url: '/operate/rs/7.4/databases/durability-ha/consistency/' +--- +Redis Enterprise Software comes with the ability to replicate data +to another database instance for high availability and persist in-memory data on +disk permanently for durability. With the [`WAIT`]({{}}) command, you can +control the consistency and durability guarantees for the replicated and +persisted database. + +## Non-blocking Redis write operation + +Any updates that are issued to the database are typically performed with the following flow: + +1. The application issues a write. +2. The proxy communicates with the correct primary (also known as master) shard in the system that contains the given key. +3. The shard writes the data and sends an acknowledgment to the proxy. +4. The proxy sends the acknowledgment back to the application. +5. The write is communicated from the primary shard to the replica. +6. The replica acknowledges the write back to the primary shard. +7. The write to a replica is persisted to disk. +8. The write is acknowledged within the replica. + +{{< image filename="/images/rs/weak-consistency.png" >}} + +## Blocking write operation on replication + +With the [`WAIT`]({{}}) or [`WAITAOF`]({{}}) commands, applications can ask to wait for +acknowledgments only after replication or persistence is confirmed on +the replica. The flow of a write operation with `WAIT` or `WAITAOF` is: + +1. The application issues a write. +2. The proxy communicates with the correct primary shard in the system that contains the given key. +3. Replication communicates the update to the replica shard. +4. If using `WAITAOF` and the AOF every write setting, the replica persists the update to disk before sending the acknowledgment. +5. The acknowledgment is sent back from the replica all the way to the proxy with steps 5 to 8. + +The application only gets the acknowledgment from the write after durability is achieved with replication to the replica for `WAIT` or `WAITAOF` and to the persistent storage for `WAITAOF` only. + +{{< image filename="/images/rs/strong-consistency.png" >}} + +The `WAIT` command always returns the number of replicas that acknowledged the write commands sent by the current client before the `WAIT` command, both in the case where the specified number of replicas are reached, or when the timeout is reached. In Redis Enterprise Software, the number of replicas for HA enabled databases is always 1. + +See the [`WAITAOF`]({{}}) command for details for enhanced data safety and durability capabilities introduced with Redis 7.2. +--- +Title: Flush database data +alwaysopen: false +categories: +- docs +- operate +- rs +description: To delete the data in a database without deleting the database, you can + use Redis CLI to flush it from the database. You can also use Redis CLI, the admin + console, and the Redis Software REST API to flush data from Active-Active databases. +linkTitle: Flush database +weight: 40 +url: '/operate/rs/7.4/databases/import-export/flush/' +--- +To delete the data in a database without deleting the database configuration, +you can flush the data from the database. + +You can use the Cluster Manager UI to flush data from Active-Active databases. 
+
+{{< warning title="Data Loss Warning" >}}
+The flush command deletes ALL in-memory and persistence data in the database.
+We recommend that you [back up your database]({{< relref "/operate/rs/7.4/databases/import-export/schedule-backups.md" >}}) before you flush the data.
+{{< /warning >}}
+
+## Flush data from a database
+
+From the command line, you can flush a database with the redis-cli command or with your favorite Redis client.
+
+To flush data from a database with the redis-cli, run:
+
+```sh
+redis-cli -h <host> -p <port> -a <password> flushall
+```
+
+Example:
+
+```sh
+redis-cli -h redis-12345.cluster.local -p 9443 -a xyz flushall
+```
+
+{{< note >}}
+Port 9443 is the default [port configuration]({{< relref "/operate/rs/7.4/networking/port-configurations#ports-and-port-ranges-used-by-redis-enterprise-software" >}}).
+{{< /note >}}
+
+
+## Flush data from an Active-Active database
+
+When you flush an Active-Active database (formerly known as CRDB), all of the replicas flush their data at the same time.
+
+To flush data from an Active-Active database:
+
+- Cluster Manager UI
+
+   1. If you are using the new Cluster Manager UI, switch to the legacy admin console.
+
+      {{Select switch to legacy admin console from the dropdown.}}
+
+   1. Go to **database** and select the Active-Active database that you want to flush.
+   1. Go to **configuration** and click **Flush** at the bottom of the page.
+   1. Enter the name of the Active-Active database to confirm that you want to flush the data.
+
+- Command line
+
+   1. To find the ID of the Active-Active database, run:
+
+      ```sh
+      crdb-cli crdb list
+      ```
+
+      For example:
+
+      ```sh
+      $ crdb-cli crdb list
+      CRDB-GUID                            NAME  REPL-ID CLUSTER-FQDN
+      a16fe643-4a7b-4380-a5b2-96109d2e8bca crdb1 1       cluster1.local
+      a16fe643-4a7b-4380-a5b2-96109d2e8bca crdb1 2       cluster2.local
+      a16fe643-4a7b-4380-a5b2-96109d2e8bca crdb1 3       cluster3.local
+      ```
+
+   1. To flush the Active-Active database, run:
+
+      ```sh
+      crdb-cli crdb flush --crdb-guid <crdb-guid>
+      ```
+
+      The command output contains the task ID of the flush task, for example:
+
+      ```sh
+      $ crdb-cli crdb flush --crdb-guid a16fe643-4a7b-4380-a5b2-96109d2e8bca
+      Task 63239280-d060-4639-9bba-fc6a242c19fc created
+      ---> Status changed: queued -> started
+      ```
+
+   1. To check the status of the flush task, run:
+
+      ```sh
+      crdb-cli task status --task-id <task-id>
+      ```
+
+      For example:
+
+      ```sh
+      $ crdb-cli task status --task-id 63239280-d060-4639-9bba-fc6a242c19fc
+      Task-ID: 63239280-d060-4639-9bba-fc6a242c19fc
+      CRDB-GUID: -
+      Status: finished
+      ```
+
+- REST API
+
+   1. To find the ID of the Active-Active database, use [`GET /v1/crdbs`]({{< relref "/operate/rs/7.4/references/rest-api/requests/crdbs#get-all-crdbs" >}}):
+
+      ```sh
+      GET https://[host][:port]/v1/crdbs
+      ```
+
+   1. To flush the Active-Active database, use [`PUT /v1/crdbs/{guid}/flush`]({{< relref "/operate/rs/7.4/references/rest-api/requests/crdbs/flush#put-crdbs-flush" >}}):
+
+      ```sh
+      PUT https://[host][:port]/v1/crdbs/<crdb-guid>/flush
+      ```
+
+      The command output contains the task ID of the flush task.
+
+   1. To check the status of the flush task, use [`GET /v1/crdb_tasks`]({{< relref "/operate/rs/7.4/references/rest-api/requests/crdb_tasks#get-crdb_task" >}}):
+
+      ```sh
+      GET https://[host][:port]/v1/crdb_tasks/<task-id>
+      ```
+---
+Title: Migrate a database to Active-Active
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: Use Replica Of to migrate your database to an Active-Active database.
+linktitle: Migrate to Active-Active +weight: $weight +url: '/operate/rs/7.4/databases/import-export/migrate-to-active-active/' +--- + +If you have data in a single-region Redis Enterprise Software database that you want to migrate to an [Active-Active database]({{< relref "/operate/rs/7.4/databases/active-active" >}}), +you'll need to create a new Active-Active database and migrate the data into the new database as a [Replica Of]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/" >}}) the existing database. +This process will gradually populate the data in the Active-Active database. + +Before data migration starts, all data is flushed from the Active-Active database. +The data is migrated to the Active-Active instance where you configured migration, and the data from that instance is copied to the other Active-Active instances. + +When data migration is finished, turn off migration and connect your applications to the Active-Active database. + +{{Active-Active data migration process}} + +## Prerequisites + +- During the migration, any applications that connect to the Active-Active database must be **read-only** to ensure the dataset is identical to the source database during the migration process. However, you can continue to write to the source database during the migration process. + +- If you used the mDNS protocol for the cluster name (FQDN), +the [client mDNS prerequisites]({{< relref "/operate/rs/7.4/networking/mdns" >}}) must be met in order to communicate with other clusters. + +## Migrate from a Redis Enterprise cluster + +You can migrate a Redis Enterprise database from the [same cluster](#migrate-from-the-same-cluster) or a [different cluster](#migrate-from-a-different-cluster). + +### Migrate from the same cluster + +To migrate a database to Active-Active in the same Redis Enterprise cluster: + +1. Create a new Active-Active database. For prerequisites and detailed instructions, see [Create an Active-Active geo-replicated database]({{< relref "/operate/rs/7.4/databases/active-active/create" >}}). + +1. After the Active-Active database is active, click **Edit** on the **Configuration** screen. + +1. Expand the **Migrate to Active-Active** section: + + {{Migrate to Active-Active section.}} + +1. Click **+ Add source database**. + +1. In the **Migrate to Active-Active** dialog, select **Current cluster**: + + {{Migrate to Active-Active dialog with Current cluster tab selected.}} + +1. Select the source database from the list. + +1. Click **Add source**. + +1. Click **Save**. + +### Migrate from a different cluster + +{{< note >}} +For a source database on a different Redis Enterprise Software cluster, +you can [compress the replication data]({{< relref "/operate/rs/7.4/databases/import-export/replica-of#data-compression-for-replica-of" >}}) to save bandwidth. +{{< /note >}} + +To migrate a database to Active-Active in different Redis Enterprise clusters: + +1. Sign in to the Cluster Manager UI of the cluster hosting the source database. + + 1. In **Databases**, select the source database and then select the **Configuration** tab. + + 1. In the **Replica Of** section, select **Use this database as a source for another database**. + + 1. Copy the Replica Of source URL. + + {{Copy the Replica Of source URL from the Connection link to destination dialog.}} + + To change the internal password, select **Regenerate password**. + + If you regenerate the password, replication to existing destinations fails until their credentials are updated with the new password. + +1. 
Sign in to the Cluster Manager UI of the destination database’s cluster. + +1. Create a new Active-Active database. For prerequisites and detailed instructions, see [Create an Active-Active geo-replicated database]({{< relref "/operate/rs/7.4/databases/active-active/create" >}}). + +1. After the Active-Active database is active, click **Edit** on the **Configuration** screen. + +1. Expand the **Migrate to Active-Active** section: + + {{Migrate to Active-Active section.}} + +1. Click **+ Add source database**. + +1. In the **Migrate to Active-Active** dialog, select **External**: + + {{Migrate to Active-Active dialog with External tab selected.}} + +1. For **Source database URL**, enter the Replica Of source URL you copied in step 1. + +1. Click **Add source**. + +1. Click **Save**. + +## Migrate from Redis Open Source + +To migrate a Redis Open Source database to Active-Active: + +1. Create a new Active-Active database. For prerequisites and detailed instructions, see [Create an Active-Active geo-replicated database]({{< relref "/operate/rs/7.4/databases/active-active/create" >}}). + +1. After the Active-Active database is active, click **Edit** on the **Configuration** screen. + +1. Expand the **Migrate to Active-Active** section: + + {{Migrate to Active-Active section.}} + +1. Click **+ Add source database**. + +1. In the **Migrate to Active-Active** dialog, select **External**: + + {{Migrate to Active-Active dialog with External tab selected.}} + +1. Enter the **Source database URL**: + + - If the database has a password: + + ```sh + redis://:@: + ``` + + Where the password is the Redis password represented with URL encoding escape characters. + + - If the database does not have a password: + + ```sh + redis://: + ``` + +1. Click **Add source**. + +1. Click **Save**. + +## Stop sync after migration + +1. Wait until the migration is complete, indicated by the **Status** _Synced_. + + {{}} +Migration can take minutes to hours to complete depending on the dataset size and network quality. + {{}} + +1. On the Active-Active database's **Configuration** screen, click **Edit**. + +1. In the **Migrate to Active-Active** section, click **Stop sync**: + + {{The Migrate to Active-Active section shows the Active-Active database is synced with the source database.}} + +1. In the **Stop synchronization** dialog, click **Stop** to proceed. + +1. Redirect client connections to the Active-Active database after **Status** changes to _Sync stopped_: + + {{The Migrate to Active-Active section shows the Active-Active database stopped syncing with the source database.}} +--- +Title: Schedule periodic backups +alwaysopen: false +categories: +- docs +- operate +- rs +description: Schedule backups of your databases to make sure you always have valid backups. +linktitle: Schedule backups +weight: 40 +url: '/operate/rs/7.4/databases/import-export/schedule-backups/' +--- + +Periodic backups provide a way to restore data with minimal data loss. With Redis Enterprise Software, you can schedule periodic backups to occur once a day (every 24 hours), twice a day (every twelve hours), every four hours, or every hour. + +As of v6.2.8, you can specify the start time in UTC for 24-hour or 12-hour backups. + +To make an on-demand backup, [export your data]({{< relref "/operate/rs/7.4/databases/import-export/export-data.md" >}}). 
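+
+If you script cluster operations, you can also inspect a database's current backup settings through the Redis Enterprise Software REST API. A rough sketch, assuming the database has UID `1` and that the relevant BDB fields are `backup`, `backup_interval` (in seconds), and `backup_location` (check the REST API reference for the exact schema in your version):
+
+```sh
+# Read the backup-related fields of database 1. The cluster address and
+# credentials below are placeholders; -k skips certificate verification
+# and is only suitable for a lab setup.
+curl -k -u "admin@example.com:password" \
+  https://cluster.example.com:9443/v1/bdbs/1 | jq '{backup, backup_interval, backup_location}'
+```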
+ +You can schedule backups to a variety of locations, including: + +- FTP server +- SFTP server +- Local mount point +- Amazon Simple Storage Service (S3) +- Azure Blob Storage +- Google Cloud Storage + +The backup process creates compressed (.gz) RDB files that you can [import into a database]({{< relref "/operate/rs/7.4/databases/import-export/import-data.md" >}}). + +When you back up a database configured for database clustering, +Redis Enterprise Software creates a backup file for each shard in the configuration. All backup files are copied to the storage location. + +{{< note >}} + +- Make sure that you have enough space available in your storage location. + If there is not enough space in the backup location, the backup fails. +- The backup configuration only applies to the database it is configured on. +- To limit the parallel backup for shards, set both [`tune cluster max_simultaneous_backups`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}) and [`tune node max_redis_forks`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-node" >}}). `max_simultaneous_backups` is set to 4 by default. + +{{< /note >}} + +## Schedule periodic backups + +Before scheduling periodic backups, verify that your storage location exists and is available to the user running Redis Enterprise Software (`redislabs` by default). You should verify that: + +- Permissions are set correctly. +- The user running Redis Enterprise Software is authorized to access the storage location. +- The authorization credentials work. + +Storage location access is verified before periodic backups are scheduled. + +To schedule periodic backups for a database: + +1. Sign in to the Redis Enterprise Software Cluster Manager UI using admin credentials. + +1. From the **Databases** list, select the database, then select **Configuration**. + +1. Select the **Edit** button. + +1. Expand the **Scheduled backup** section. + +1. Select **Add backup path** to open the **Path configuration** dialog. + +1. Select the tab that corresponds to your storage location type, enter the location details, and select **Done**. + + See [Supported storage locations](#supported-storage-locations) for more information about each storage location type. + +1. Set the backup **Interval** and **Starting time**. + + | Setting | Description | + |--------------|-------------| + | **Interval** | Specifies the frequency of the backup; that is, the time between each backup snapshot.

Supported values include _Every 24 hours_, _Every 12 hours_, _Every 4 hours_, and _Every hour_. |
+ | **Starting time** | _v6.2.8 or later:_ Specifies the start time in UTC for the backup; available when **Interval** is set to _Every 24 hours_ or _Every 12 hours_.

If not specified, defaults to a time selected by Redis Enterprise Software. | + +7. Select **Save**. + +Access to the storage location is verified when you apply your updates. This means the location, credentials, and other details must exist and function before you can enable periodic backups. + +## Default backup start time + +If you do _not_ specify a start time for twenty-four or twelve hour backups, Redis Enterprise Software chooses a random starting time in UTC for you. + +This choice assumes that your database is deployed to a multi-tenant cluster containing multiple databases. This means that default start times are staggered (offset) to ensure availability. This is done by calculating a random offset which specifies a number of seconds added to the start time. + +Here's how it works: + +- Assume you're enabling the backup at 4:00 pm (1600 hours). +- You choose to back up your database every 12 hours. +- Because you didn't set a start time, the cluster randomly chooses an offset of 4,320 seconds (or 72 minutes). + +This means your first periodic backup occurs 72 minutes after the time you enabled periodic backups (4:00 pm + 72 minutes). Backups repeat every twelve hours at roughly same time. + +The backup time is imprecise because they're started by a trigger process that runs every five minutes. When the process wakes, it compares the current time to the scheduled backup time. If that time has passed, it triggers a backup. + +If the previous backup fails, the trigger process retries the backup until it succeeds. + +In addition, throttling and resource limits also affect backup times. + +For help with specific backup issues, [contact support](https://redis.com/company/support/). + + +## Supported storage locations {#supported-storage-locations} + +Database backups can be saved to a local mount point, transferred to [a URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier) using FTP/SFTP, or stored on cloud provider storage. + +When saved to a local mount point or a cloud provider, backup locations need to be available to [the group and user]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-user-and-group.md" >}}) running Redis Enterprise Software, `redislabs:redislabs` by default. + +Redis Enterprise Software needs the ability to view permissions and update objects in the storage location. Implementation details vary according to the provider and your configuration. To learn more, consult the provider's documentation. + +The following sections provide general guidelines. Because provider features change frequently, use your provider's documentation for the latest info. + +### FTP server + +Before enabling backups to an FTP server, verify that: + +- Your Redis Enterprise cluster can connect and authenticate to the FTP server. +- The user specified in the FTP server location has read and write privileges. + +To store your backups on an FTP server, set its **Backup Path** using the following syntax: + +`ftp://[username]:[password]@[host]:[port]/[path]/` + +Where: + +- *protocol*: the server's protocol, can be either `ftp` or `ftps`. +- *username*: your username, if needed. +- *password*: your password, if needed. +- *hostname*: the hostname or IP address of the server. +- *port*: the port number of the server, if needed. +- *path*: the backup path, if needed. + +Example: `ftp://username:password@10.1.1.1/home/backups/` + +The user account needs permission to write files to the server. 
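+
+One way to check connectivity and credentials from a cluster node before scheduling the backup is to list the target directory with a standard FTP client, for example `curl` (the host, credentials, and path below are placeholders matching the example above):
+
+```sh
+# List the backup directory to confirm that the FTP URL, credentials,
+# and path are reachable from the node.
+curl --list-only "ftp://username:password@10.1.1.1/home/backups/"
+```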
+ +### SFTP server + +Before enabling backups to an SFTP server, make sure that: + +- Your Redis Enterprise cluster can connect and authenticate to the SFTP server. +- The user specified in the SFTP server location has read and write privileges. +- The SSH private keys are specified correctly. You can use the key generated by the cluster or specify a custom key. + + To use the cluster auto generated key: + + 1. Go to **Cluster > Security > Certificates**. + + 1. Expand **Cluster SSH Public Key**. + + 1. Download or copy the cluster SSH public key to the appropriate location on the SFTP server. + + Use the server documentation to determine the appropriate location for the SSH public key. + +To backup to an SFTP server, enter the SFTP server location in the format: + +```sh +sftp://user:password@host<:custom_port>/path/ +``` + +For example: `sftp://username:password@10.1.1.1/home/backups/` + +### Local mount point + +Before enabling periodic backups to a local mount point, verify that: + +- The node can connect to the destination server, the one hosting the mount point. +- The `redislabs:redislabs` user has read and write privileges on the local mount point +and on the destination server. +- The backup location has enough disk space for your backup files. Backup files +are saved with filenames that include the timestamp, which means that earlier backups are not overwritten. + +To back up to a local mount point: + +1. On each node in the cluster, create the mount point: + 1. Connect to a shell running on Redis Enterprise Software server hosting the node. + 1. Mount the remote storage to a local mount point. + + For example: + + ```sh + sudo mount -t nfs 192.168.10.204:/DataVolume/Public /mnt/Public + ``` + +1. In the path for the backup location, enter the mount point. + + For example: `/mnt/Public` + +1. Verify that the user running Redis Enterprise Software has permissions to access and update files in the mount location. + +### AWS Simple Storage Service + +To store backups in an Amazon Web Services (AWS) Simple Storage Service (S3) [bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html): + +1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/). + +1. [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html) if you do not already have one. + +1. [Create an IAM User](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) with permission to add objects to the bucket. + +1. [Create an access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) for that user if you do not already have one. + +1. In the Redis Enterprise Software Cluster Manager UI, when you enter the backup location details: + + - Select the **AWS S3** tab on the **Path configuration** dialog. + + - In the **Path** field, enter the path of your bucket. + + - In the **Access Key ID** field, enter the access key ID. + + - In the **Secret Access Key** field, enter the secret access key. + +You can also connect to a storage service that uses the S3 protocol but is not hosted by Amazon AWS. The storage service must have a valid SSL certificate. To connect to an S3-compatible storage location, run [`rladmin cluster config`]({{}}): + +```sh +rladmin cluster config s3_url +``` + +Replace `` with the hostname or IP address of the S3-compatible storage location. 
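+
+For example, to point the cluster at a hypothetical S3-compatible service reachable at `minio.example.com`:
+
+```sh
+# Configure the cluster-wide S3-compatible endpoint used for backups
+# (the hostname below is a placeholder).
+rladmin cluster config s3_url minio.example.com
+```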
+
+### Google Cloud Storage
+
+For [Google Cloud](https://developers.google.com/console/) subscriptions, store your backups in a Google Cloud Storage bucket:
+
+1. Sign in to the Google Cloud Platform console.
+
+1. [Create a JSON service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating) if you do not already have one.
+
+1. [Create a bucket](https://cloud.google.com/storage/docs/creating-buckets#create_a_new_bucket) if you do not already have one.
+
+1. [Add a principal](https://cloud.google.com/storage/docs/access-control/using-iam-permissions#bucket-add) to your bucket:
+
+    - In the **New principals** field, add the `client_email` from the service account key.
+
+    - Select "Storage Legacy Bucket Writer" from the **Role** list.
+
+1. In the Redis Enterprise Software Cluster Manager UI, when you enter the backup location details:
+
+    - Select the **Google Cloud Storage** tab on the **Path configuration** dialog.
+
+    - In the **Path** field, enter the path of your bucket.
+
+    - In the **Client ID** field, enter the `client_id` from the service account key.
+
+    - In the **Client Email** field, enter the `client_email` from the service account key.
+
+    - In the **Private Key ID** field, enter the `private_key_id` from the service account key.
+
+    - In the **Private Key** field, enter the `private_key` from the service account key.
+      Replace `\n` with new lines.
+
+### Azure Blob Storage
+
+To store your backup in Microsoft Azure Blob Storage, sign in to the Azure portal and then:
+
+1. [Create an Azure Storage account](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create) if you do not already have one.
+
+1. [Create a container](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container) if you do not already have one.
+
+1. [Manage storage account access keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage) to find the storage account name and account keys.
+
+1. In the Redis Enterprise Software Cluster Manager UI, when you enter the backup location details:
+
+    - Select the **Azure Blob Storage** tab on the **Path configuration** dialog.
+
+    - In the **Path** field, enter the path of your bucket.
+
+    - In the **Azure Account Name** field, enter your storage account name.
+
+    - In the **Azure Account Key** field, enter the storage account key.
+
+To learn more, see [Authorizing access to data in Azure Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth).
+---
+Title: Export data from a database
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: You can export data to import it into a new database or to make a backup. This
+ article shows how to do so.
+linktitle: Export data
+weight: 20
+url: '/operate/rs/7.4/databases/import-export/export-data/'
+---
+
+You can export the data from a specific database at any time. The following destinations are supported:
+
+- FTP server
+- SFTP server
+- Amazon AWS S3
+- Local mount point
+- Azure Blob Storage
+- Google Cloud Storage
+
+If you export a database configured for database clustering, export files are created for each shard.
+
+## Storage space requirements
+
+Before exporting data, verify that you have enough space available in the storage destination and on the local storage associated with the node hosting the database.
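+
+For example, a quick way to check the free space on a node before exporting, assuming the default Redis Enterprise Software installation path of `/var/opt/redislabs`:
+
+```sh
+# Show available disk space on the node's local storage used by Redis Enterprise.
+df -h /var/opt/redislabs
+```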
+ +Export is a two-step process: a temporary copy of the data is saved to the local storage of the node and then copied to the storage destination. (The temporary file is removed after the copy operation.) + +Export fails when there isn't enough space for either step. + +## Export database data + +To export data from a database using the Cluster Manager UI: + +1. On the **Databases** screen, select the database from the list, then select **Configuration**. + +1. Click {{< image filename="/images/rs/buttons/button-toggle-actions-vertical.png#no-click" alt="Toggle actions button" width="22px" class="inline" >}} to open a list of additional actions. + +1. Select **Export**. + +1. Select the tab that corresponds to your storage location type and enter the location details. + + See [Supported storage locations](#supported-storage-locations) for more information about each storage location type. + +1. Select **Export**. + +## Supported storage locations {#supported-storage-locations} + +Data can be exported to a local mount point, transferred to [a URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier) using FTP/SFTP, or stored on cloud provider storage. + +When saved to a local mount point or a cloud provider, export locations need to be available to [the group and user]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-user-and-group.md" >}}) running Redis Enterprise Software, `redislabs:redislabs` by default. + +Redis Enterprise Software needs the ability to view permissions and update objects in the storage location. Implementation details vary according to the provider and your configuration. To learn more, consult the provider's documentation. + +The following sections provide general guidelines. Because provider features change frequently, use your provider's documentation for the latest info. + +### FTP server + +Before exporting data to an FTP server, verify that: + +- Your Redis Enterprise cluster can connect and authenticate to the FTP server. +- The user specified in the FTP server location has permission to read and write files to the server. + +To export data to an FTP server, set **Path** using the following syntax: + +```sh +[protocol]://[username]:[password]@[host]:[port]/[path]/ +``` + +Where: + +- *protocol*: the server's protocol, can be either `ftp` or `ftps`. +- *username*: your username, if needed. +- *password*: your password, if needed. +- *hostname*: the hostname or IP address of the server. +- *port*: the port number of the server, if needed. +- *path*: the export destination path, if needed. + +Example: `ftp://username:password@10.1.1.1/home/exports/` + +### Local mount point + +Before exporting data to a local mount point, verify that: + +- The node can connect to the server hosting the mount point. +- The `redislabs:redislabs` user has permission to read and write files to the local mount point and to the destination server. +- The export location has enough disk space for your exported data. + +To export to a local mount point: + +1. On each node in the cluster, create the mount point: + 1. Connect to the node's terminal. + 1. Mount the remote storage to a local mount point. + + For example: + + ```sh + sudo mount -t nfs 192.168.10.204:/DataVolume/Public /mnt/Public + ``` + +1. In the path for the export location, enter the mount point. + + For example: `/mnt/Public` + +### SFTP server + +Before exporting data to an SFTP server, make sure that: + +- Your Redis Enterprise cluster can connect and authenticate to the SFTP server. 
+- The user specified in the SFTP server location has permission to read and write files to the server. +- The SSH private keys are specified correctly. You can use the key generated by the cluster or specify a custom key. + + To use the cluster auto generated key: + + 1. Go to **Cluster > Security > Certificates**. + + 1. Expand **Cluster SSH Public Key**. + + 1. Download or copy the cluster SSH public key to the appropriate location on the SFTP server. + + Use the server documentation to determine the appropriate location for the SSH public key. + +To export data to an SFTP server, enter the SFTP server location in the format: + +```sh +sftp://[username]:[password]@[host]:[port]/[path]/ +``` + +Where: + +- *username*: your username, if needed. +- *password*: your password, if needed. +- *hostname*: the hostname or IP address of the server. +- *port*: the port number of the server, if needed. +- *path*: the export destination path, if needed. + +For example: `sftp://username:password@10.1.1.1/home/exports/` + +### AWS Simple Storage Service + +To export data to an [Amazon Web Services](https://aws.amazon.com/) (AWS) Simple Storage Service (S3) bucket: + +1. Sign in to the [AWS console](https://console.aws.amazon.com/). + +1. [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html) if you do not already have one. + +1. [Create an IAM User](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) with permission to add objects to the bucket. + +1. [Create an access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) for that user if you do not already have one. + +1. In the Redis Enterprise Software Cluster Manager UI, when you enter the export location details: + + - Select **AWS S3**. + + - In the **Path** field, enter the path of your bucket. + + - In the **Access key ID** field, enter the access key ID. + + - In the **Secret access key** field, enter the secret access key. + +You can also connect to a storage service that uses the S3 protocol but is not hosted by Amazon AWS. The storage service must have a valid SSL certificate. To connect to an S3-compatible storage location, run [`rladmin cluster config`]({{}}): + +```sh +rladmin cluster config s3_url +``` + +Replace `` with the hostname or IP address of the S3-compatible storage location. + +### Google Cloud Storage + +To export to a [Google Cloud](https://developers.google.com/console/) storage bucket: + +1. Sign in to the Google Cloud console. + +1. [Create a JSON service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating) if you do not already have one. + +1. [Create a bucket](https://cloud.google.com/storage/docs/creating-buckets#create_a_new_bucket) if you do not already have one. + +1. [Add a principal](https://cloud.google.com/storage/docs/access-control/using-iam-permissions#bucket-add) to your bucket: + + - In the **New principals** field, add the `client_email` from the service account key. + + - Select "Storage Legacy Bucket Writer" from the **Role** list. + +1. In the Redis Enterprise Software Cluster Manager UI, when you enter the export location details: + + - Select **Google Cloud Storage**. + + - In the **Path** field, enter the path of your bucket. + + - In the **Client ID** field, enter the `client_id` from the service account key. + + - In the **Client Email** field, enter the `client_email` from the service account key. 
+ + - In the **Private Key ID** field, enter the `private_key_id` from the service account key. + + - In the **Private key** field, enter the `private_key` from the service account key. + Replace `\n` with new lines. + + +### Azure Blob Storage + +To export to Microsoft Azure Blob Storage, sign in to the Azure portal and then: + +1. [Create an Azure Storage account](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create) if you do not already have one. + +1. [Create a container](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container) if you do not already have one. + +1. [Manage storage account access keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage) to find the storage account name and account keys. + +1. In the Redis Enterprise Software Cluster Manager UI, when you enter the export location details: + + - Select **Azure Blob Storage**. + + - In the **Path** field, enter the path of your bucket. + + - In the **Account name** field, enter your storage account name. + + - In the **Account key** field, enter the storage account key. + +To learn more, see [Authorizing access to data in Azure Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth). +--- +Title: Import data into a database +alwaysopen: false +categories: +- docs +- operate +- rs +description: You can import export or backup files of a specific Redis Enterprise + Software database to restore data. You can either import from a single file or from + multiple files, such as when you want to import from a backup of a clustered database. +linktitle: Import data +weight: 10 +url: '/operate/rs/7.4/databases/import-export/import-data/' +--- +You can import, [export]({{< relref "/operate/rs/7.4/databases/import-export/export-data" >}}), +or [backup]({{< relref "/operate/rs/7.4/databases/import-export/schedule-backups" >}}) +files of a specific Redis Enterprise Software database to restore data. +You can either import from a single file or from multiple files, +such as when you want to import from a backup of a clustered database. + +{{< warning >}} +Importing data erases all existing content in the database. +{{< /warning >}} + +## Import data into a database + +To import data into a database using the Cluster Manager UI: + +1. On the **Databases** screen, select the database from the list, then select **Configuration**. +1. Click {{< image filename="/images/rs/buttons/button-toggle-actions-vertical.png#no-click" alt="Toggle actions button" width="22px" class="inline" >}} to open a list of additional actions. +1. Select **Import**. +1. Select the tab that corresponds to your storage location type and enter the location details. + + See [Supported storage locations](#supported-storage-locations) for more information about each storage location type. +1. Select **Import**. + +## Supported storage locations {#supported-storage-services} + +Data can be imported from a local mount point, transferred to [a URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier) using FTP/SFTP, or stored on cloud provider storage. + +When importing from a local mount point or a cloud provider, import locations need to be available to [the group and user]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-user-and-group.md" >}}) running Redis Enterprise Software, `redislabs:redislabs` by default. + +Redis Enterprise Software needs the ability to view objects in the storage location. 
Implementation details vary according to the provider and your configuration. To learn more, consult the provider's documentation. + +The following sections provide general guidelines. Because provider features change frequently, use your provider's documentation for the latest info. + +### FTP server + +Before importing data from an FTP server, make sure that: + +- Your Redis Enterprise cluster can connect and authenticate to the FTP server. +- The user that you specify in the FTP server location has permission to read files from the server. + +To import data from an FTP server, set **RDB file path/s** using the following syntax: + +```sh +[protocol]://[username]:[password]@[host]:[port]/[path]/[filename].rdb +``` + +Where: + +- *protocol*: the server's protocol, can be either `ftp` or `ftps`. +- *username*: your username, if needed. +- *password*: your password, if needed. +- *hostname*: the hostname or IP address of the server. +- *port*: the port number of the server, if needed. +- *path*: the file's location path. +- *filename*: the name of the file. + +Example: `ftp://username:password@10.1.1.1/home/backups/.rdb` + +Select **Add path** to add another import file path. + +### Local mount point + +Before importing data from a local mount point, make sure that: + +- The node can connect to the server hosting the mount point. + +- The `redislabs:redislabs` user has permission to read files on the local mount point and on the destination server. + +- You must mount the storage in the same path on all cluster nodes. You can also use local storage, but you must copy the imported files manually to all nodes because the import source folders on the nodes are not synchronized. + +To import from a local mount point: + +1. On each node in the cluster, create the mount point: + 1. Connect to the node's terminal. + 1. Mount the remote storage to a local mount point. + + For example: + + ```sh + sudo mount -t nfs 192.168.10.204:/DataVolume/Public /mnt/Public + ``` + +1. In the path for the import location, enter the mount point. + + For example: `/mnt/Public/.rdb` + +As of version 6.2.12, Redis Enterprise reads files directly from the mount point using a [symbolic link](https://en.wikipedia.org/wiki/Symbolic_link) (symlink) instead of copying them to a temporary directory on the node. + +Select **Add path** to add another import file path. + +### SFTP server + +Before importing data from an SFTP server, make sure that: + +- Your Redis Enterprise cluster can connect and authenticate to the SFTP server. +- The user that you specify in the SFTP server location has permission to read files from the server. +- The SSH private keys are specified correctly. You can use the key generated by the cluster or specify a custom key. + + To use the cluster auto generated key: + + 1. Go to **Cluster > Security > Certificates**. + + 1. Expand **Cluster SSH Public Key**. + + 1. Download or copy the cluster SSH public key to the appropriate location on the SFTP server. + + Use the server documentation to determine the appropriate location for the SSH public key. + +To import data from an SFTP server, enter the SFTP server location in the format: + +```sh +[protocol]://[username]:[password]@[host]:[port]/[path]/[filename].rdb +``` + +Where: + +- *protocol*: the server's protocol, can be either `ftp` or `ftps`. +- *username*: your username, if needed. +- *password*: your password, if needed. +- *hostname*: the hostname or IP address of the server. +- *port*: the port number of the server, if needed. 
+- *path*: the file's location path. +- *filename*: the name of the file. + +Example: `sftp://username:password@10.1.1.1/home/backups/[filename].rdb` + +Select **Add path** to add another import file path. + +### AWS Simple Storage Service {#aws-s3} + +Before you choose to import data from an [Amazon Web Services](https://aws.amazon.com/) (AWS) Simple Storage Service (S3) bucket, make sure you have: + +- The path to the file in your bucket in the format: `s3://[bucketname]/[path]/[filename].rdb` +- [Access key ID and Secret access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) for an [IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) with permission to read files from the bucket. + +In the Redis Enterprise Software Cluster Manager UI, when you enter the export location details: + +- Select **AWS S3**. + +- In the **RDB file path/s** field, enter the path of your bucket. Select **Add path** to add another import file path. + +- In the **Access key ID** field, enter the access key ID. + +- In the **Secret access key** field, enter the secret access key. + +You can also connect to a storage service that uses the S3 protocol but is not hosted by Amazon AWS. The storage service must have a valid SSL certificate. To connect to an S3-compatible storage location, run [`rladmin cluster config`]({{}}): + +```sh +rladmin cluster config s3_url +``` + +Replace `` with the hostname or IP address of the S3-compatible storage location. + +### Google Cloud Storage + +Before you import data from a [Google Cloud](https://developers.google.com/console/) storage bucket, make sure you have: + +- Storage location path in the format: `/bucket_name/[path]/[filename].rdb` +- A [JSON service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating) for your account +- A [principal](https://cloud.google.com/storage/docs/access-control/using-iam-permissions#bucket-add) for your bucket with the `client_email` from the service account key and a [role](https://cloud.google.com/storage/docs/access-control/iam-roles) with permissions to get files from the bucket (such as the **Storage Legacy Object Reader** role, which grants `storage.objects.get` permissions) + +In the Redis Enterprise Software Cluster Manager UI, when you enter the import location details: + +- Select **Google Cloud Storage**. + +- In the **RDB file path/s** field, enter the path of your file. Select **Add path** to add another import file path. + +- In the **Client ID** field, enter the `client_id` from the service account key. + +- In the **Client email** field, enter the `client_email` from the service account key. + +- In the **Private key id** field, enter the `private_key_id` from the service account key. + +- In the **Private key** field, enter the `private_key` from the service account key. + Replace `\n` with new lines. + +### Azure Blob Storage + +Before you choose to import from Azure Blob Storage, make sure that you have: + +- Storage location path in the format: `/container_name/[path/]/.rdb` +- Account name +- An authentication token, either an account key or an Azure [shared access signature](https://docs.microsoft.com/en-us/rest/api/storageservices/delegate-access-with-shared-access-signature) (SAS). + + To find the account name and account key, see [Manage storage account access keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage). 
+ + Azure SAS support requires Redis Software version 6.0.20. To learn more about Azure SAS, see [Grant limited access to Azure Storage resources using shared access signatures](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview). + +In the Redis Enterprise Software Cluster Manager UI, when you enter the import location details: + +- Select **Azure Blob Storage**. + +- In the **RDB file path/s** field, enter the path of your file. Select **Add path** to add another import file path. + +- In the **Azure Account Name** field, enter your storage account name. + +- In the **Azure Account Key** field, enter the storage account key. + +## Importing into an Active-Active database + +When importing data into an Active-Active database, there are two options: + +- [Flush all data]({{< relref "/operate/rs/7.4/databases/import-export/flush#flush-data-from-an-active-active-database" >}}) from the Active-Active database, then import the data into the database. +- Import data but merge it into the existing database. + +Because Active-Active databases have a numeric counter data type, +when you merge the imported data into the existing data RS increments counters by the value that is in the imported data. +The import through the Redis Enterprise Cluster Manager UI handles these data types for you. + +You can import data into an Active-Active database [from the Cluster Manager UI](#import-data-into-a-database). +When you import data into an Active-Active database, there is a special prompt warning that the imported data will be merged into the existing database. +--- +Title: Import and export data +alwaysopen: false +categories: +- docs +- operate +- rs +description: How to import, export, flush, and migrate your data. +hideListLinks: false +linkTitle: Import and export +weight: 30 +url: '/operate/rs/7.4/databases/import-export/' +--- +--- +Title: Create a database with Replica Of +alwaysopen: false +categories: +- docs +- operate +- rs +description: Create Replica Of database +linkTitle: Create Replica Of database +weight: 10 +url: '/operate/rs/7.4/databases/import-export/replica-of/create/' +--- +Replica databases copy data from source databases (previously known as _master_), which enable read-only connections from apps and clients located in different geographic locations. + +To create a replica connection, you define a database as a replica of a source database. Replica Of databases (also known as _Active-Passive databases_) synchronize in the background. + +Sources databases can be: + +- Located in the same Redis Enterprise Software cluster +- Located in a different Redis Enterprise cluster +- Hosted by a different deployment, e.g. Redis Cloud +- Redis Open Source databases + +Your apps can connect to the source database to read and write data; they can also use any replica for read-only access. + +Replica Of can model a variety of data relationships, including: + +- One-to-many relationships, where multiple replicas copy a single source database. +- Many-to-one relationships, where a single replica collects data from multiple source databases. + +When you change the replica status of a database by adding, removing, or changing sources, the replica database is synchronized to the new sources. 
+ +## Configure Replica Of + +You can configure a database as a Replica Of, where the source database is in one of the following clusters: + +- [Same Redis Enterprise cluster](#same-cluster) + +- [Different Redis Enterprise cluster](#different-cluster) + +- [Redis Open Source cluster](#source-available-cluster) + +The order of the multiple Replica Of sources has no material impact on replication. + +For best results when using the [Multicast DNS](https://en.wikipedia.org/wiki/Multicast_DNS) (mDNS) protocol to resolve the fully-qualified domain name (FQDN) of the cluster, verify that your client connections meet the [client mDNS prerequisites]({{< relref "/operate/rs/7.4/networking/mdns.md" >}}). + +{{< note >}} +As long as Replica Of is enabled, data in the target database will not expire and will not be evicted regardless of the set [data eviction policy]({{< relref "/operate/rs/7.4/databases/memory-performance/eviction-policy.md" >}}). +{{< /note >}} + +### Same Redis Enterprise cluster {#same-cluster} + +To configure a Replica Of database in the same Redis Enterprise cluster as the source database: + +1. [Create a new database]({{< relref "/operate/rs/7.4/databases/create" >}}) or select an existing database from the **Databases** screen. + +1. For an existing database, select **Edit** from the **Configuration** tab. + +1. Expand the **Replica Of** section. + +1. Select **+ Add source database**. + +1. In the **Connect a Replica Of source database** dialog, select **Current cluster**. + +1. Select the source database from the list. + +1. Select **Add source**. + +1. Select **Save**. + +### Different Redis Enterprise cluster {#different-cluster} + +To configure a Replica Of database in a different Redis Enterprise cluster from the source database: + +1. Sign in to the Cluster Manager UI of the cluster hosting the source database. + + 1. In **Databases**, select the source database and then select the **Configuration** tab. + + 1. In the **Replica Of** section, select **Use this database as a source for another database**. + + 1. Copy the Replica Of source URL. + + {{Copy the Replica Of source URL from the Connection link to destination dialog.}} + + To change the internal password, select **Regenerate password**. + + If you regenerate the password, replication to existing destinations fails until their credentials are updated with the new password. + +1. Sign in to the Cluster Manager UI of the destination database's cluster. + +1. [Create a new database]({{< relref "/operate/rs/7.4/databases/create" >}}) or select an existing database from the **Databases** screen. + +1. For an existing database, select **Edit** from the **Configuration** tab. + +1. Expand the **Replica Of** section. + +1. Select **+ Add source database**. + +1. In the **Connect a Replica Of source database** dialog, select **External**. + +1. Enter the URL of the source database endpoint. + +1. Select **Add source**. + +1. Select **Save**. + +For source databases on different clusters, you can [compress replication data]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/#data-compression-for-replica-of" >}}) to save bandwidth. + +### Redis Open Source cluster {#source-available-cluster} + +To use a database from a Redis Open Source cluster as a Replica Of source: + +1. [Create a new database]({{< relref "/operate/rs/7.4/databases/create" >}}) or select an existing database from the **Databases** screen. + +1. For an existing database, select **Edit** from the **Configuration** tab. + +1. 
Expand the **Replica Of** section. + +1. Select **+ Add source database**. + +1. In the **Connect a Replica Of source database** dialog, select **External**. + +1. Enter the URL of the source endpoint in one of the following formats: + + - For databases with passwords: + + ```sh + redis://:@: + ``` + + Where the password is the Redis password represented with URL encoding escape characters. + + - For databases without passwords: + + ```sh + redis://: + ``` + +1. Select **Add source**. + +1. Select **Save**. + +## Configure TLS for Replica Of + +When you enable TLS for Replica Of, the Replica Of synchronization traffic uses TLS certificates to authenticate the communication between the source and destination clusters. + +To encrypt Replica Of synchronization traffic, configure encryption for the [source database](#encrypt-source-database-traffic) and the destination [replica database](#encrypt-replica-database-traffic). + +### Encrypt source database traffic + +{{}} + +### Encrypt replica database traffic + +To enable TLS for Replica Of in the destination database: + +1. From the Cluster Manager UI of the cluster hosting the source database: + + 1. Go to **Cluster > Security > Certificates**. + + 1. Expand the **Server authentication (Proxy certificate)** section. + + {{Proxy certificate for server authentication.}} + + 1. Download or copy the proxy certificate. + +1. From the **Configuration** tab of the Replica Of destination database, select **Edit**. + +1. Expand the **Replica Of** section. + +1. Point to the source database entry and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit it. + +1. Paste or upload the source proxy certificate, then select **Done**. + +1. Select **Save**. +--- +Title: Replica Of Repeatedly Fails +alwaysopen: false +categories: +- docs +- operate +- rs +description: Troubleshoot when the Replica Of process repeatedly fails and restarts. +linktitle: Troubleshoot repeat failures +weight: 20 +url: '/operate/rs/7.4/databases/import-export/replica-of/replicaof-repeatedly-fails/' +--- +**Problem**: The Replica Of process repeatedly fails and restarts + +**Diagnostic**: A log entry in the Redis log of the source database shows repeated failures and restarts. + +**Cause**: The Redis "client-output-buffer-limit" setting on the source database +is configured to a relatively small value, which causes the connection drop. + +**Resolution**: Reconfigure the buffer on the source database to a bigger value: + +- If the source is a Redis database on a Redis Enterprise Software cluster, + increase the replica buffer size of the **source database** with: + + `rladmin tune db < db:id | name > slave_buffer < value >` + +- If the source is a Redis database not on a Redis Enterprise Software cluster, + use the [config set](http://redis.io/commands/config-set) command through + `redis-cli` to increase the client output buffer size of the **source database** with: + + `config set client-output-buffer-limit "slave "` + +**Additional information**: [Top Redis Headaches for DevOps - Replication Buffer](https://redislabs.com/blog/top-redis-headaches-for-devops-replication-buffer) +--- +Title: Replica Of geo-distributed Redis +alwaysopen: false +categories: +- docs +- operate +- rs +description: Replica Of provides read-only access to replicas of the dataset from different geographical locations. 
+hideListLinks: true +linkTitle: Replica Of +weight: $weight +url: '/operate/rs/7.4/databases/import-export/replica-of/' +--- +In Redis Enterprise, the Replica Of feature provides active-passive geo-distribution to applications for read-only access +to replicas of the dataset from different geographical locations. +The Redis Enterprise implementation of active-passive replication is called Replica Of. + +In Replica Of, an administrator designates a database as a replica (destination) of one or more databases (sources). +After the initial data load from source to destination is completed, +all write commands are synchronized from the sources to the destination. +Replica Of lets you distribute the read load of your application across multiple databases or +synchronize the database, either within Redis Enterprise or external to Redis Enterprise, to another database. + +You can [create Active-Passive]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/create.md" >}}) databases on Redis Enterprise Software or Redis Cloud. + +[Active-Active Geo-Distribution (CRDB)]({{< relref "/operate/rs/7.4/databases/active-active" >}}) +provides these benefits and also provides write access to all of the database replicas. + +{{< warning >}} +Configuring a database as a replica of the database that it replicates +creates a cyclical replication and is not supported. +{{< /warning >}} + +The Replica Of is defined in the context of the destination database +by specifying the source databases. + +A destination database can have a maximum of thirty-two (32) source +databases. + +If only one source is defined, then the command execution order in the +source is kept in the destination. However, when multiple sources are +defined, commands that are replicated from the source databases are +executed in the order in which they reach the destination database. As a +result, commands that were executed in a certain order when compared +across source databases might be executed in a different order on the +destination database. + +{{< note >}} +The Replica Of feature should not be confused with the +in-memory [Database +replication]({{< relref "/operate/rs/7.4/databases/durability-ha/replication.md" >}}) +feature, which is used for creating a master / replica configuration that +enables ensuring database high-availability. +{{< /note >}} + +## Replication process + +When a database is defined as a replica of another database, all its +existing data is deleted and replaced by data that is loaded from the +source database. + +Once the initial data load is completed, an ongoing synchronization +process takes place to keep the destination always synchronized with its +source. During the ongoing synchronization process, there is a certain +delay between the time when a command was executed on the source and +when it is executed on the destination. This delay is referred to as the +**Lag**. + +When there is a **synchronization error**, **the process might stop** or +it might continue running on the assumption that the error automatically +resolves. The result depends on the error type. See more details below. + +In addition, **the user can manually stop the synchronization process**. + +When the process is in the stopped state - whether stopped by the user +or by the system - the user can restart the process. **Restarting the +process causes the synchronization process to flush the DB and restart +the process from the beginning**. 
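+One way to get a rough, client-side estimate of the **Lag** described above is to write a marker key to the source and measure how long it takes to appear on the destination. The following minimal sketch uses the [`redis-py`]({{< relref "/develop/clients/redis-py" >}}) client library; the endpoints, port numbers, and key name are placeholders, and the measured value is only an approximation of the **Lag** shown in the Cluster Manager UI.
+
+```python
+import time
+import redis
+
+# Placeholders: replace with your own source and destination endpoints.
+source = redis.Redis(host='source-cluster.example.com', port=12000, decode_responses=True)
+destination = redis.Redis(host='destination-cluster.example.com', port=12010, decode_responses=True)
+
+# Write a marker key to the source, then poll the destination until it arrives.
+marker = str(time.time())
+source.set('replicaof:lag-probe', marker)
+
+start = time.time()
+deadline = start + 30  # give up after 30 seconds
+while destination.get('replicaof:lag-probe') != marker:
+    if time.time() > deadline:
+        raise TimeoutError('Marker key did not reach the destination; check the sync status')
+    time.sleep(0.01)
+
+print(f'Approximate lag: {time.time() - start:.3f} seconds')
+```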
+ +### Replica Of status + +The replication process can have the following statuses: + +- **Syncing** - indicates that the synchronization process has + started from scratch. Progress is indicated in percentages (%). +- **Synced** - indicates that the initial synchronization process was + completed and the destination is synchronizing changes on an ongoing + basis. The **Lag** delay in synchronization with the source is + indicated as a time duration. +- **Sync stopped** - indicates that the synchronization process is + currently not running and the user needs to restart it in order for + it to continue running. This status happens if the user stops the + process, or if certain errors arose that prevent synchronization + from continuing without manual intervention. See more details below. + +The statuses above are shown for the source database. In addition, a +timestamp is shown on the source indicating when the last command from +the source was executed on the destination. + +The system also displays the destination database status as an aggregate +of the statuses of all the sources. + +{{< note >}} +If you encounter issues with the Replica Of process, refer +to the troubleshooting section [Replica Of repeatedly +fails]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/replicaof-repeatedly-fails.md" >}}). +{{< /note >}} + +### Synchronization errors + +Certain errors that occur during the synchronization process require +user intervention for their resolution. When such errors occur, the +synchronization process is automatically stopped. + +For other errors, the synchronization process continues running on the +assumption that the error automatically resolves. + +Examples of errors that require user intervention for their resolution +and that stop the synchronization process include: + +- Error authenticating with the source database. +- Cross slot violation error while executing a command on a sharded + destination database. +- Out-of-memory error on a source or on the destination + database. + +Example of an error that does not cause the synchronization process to +stop: + +- Connection error with the source database. A connection error might + occur occasionally, for example as result of temporary network + issues that get resolved. Depending on the connection error and its + duration the process might be able to start syncing from the last + point it reached (partial sync) or require a complete + resynchronization from scratch across all sources (full sync). + +## Encryption + +Replica Of supports the ability to encrypt uni-directional replication +communications between source and destination clusters utilizing TLS 1.2 +based encryption. + +## Data compression for Replica Of + +When the Replica Of is defined across different Redis Enterprise +Software clusters, it may be beneficial to compress the data that flows +through the network (depending on where the clusters physically reside +and the available network). + +Compressing the data reduces the traffic and can help: + +- Resolve throughput issues +- Reduce network traffic costs + +Compressing the data does have trade-offs, which is why it should not +always be turned on by default. For example: + +- It uses CPU and disk resources to compress the data before sending + it to the network and decompress it on the other side. +- It takes time to compress and decompress the data which can increase + latency. +- Replication is disk-based and done gradually, shard by shard in the + case of a multi-shard database. 
This may have an impact on + replication times depending on the speed of the disks and load on + the database. +- If traffic is too fast and the compression takes too much time it + can cause the replication process to fail and be restarted. + +It is advised that you test compression out in a lower environment +before enabling it in production. + +In the Redis Enterprise Software management UI, when designating a +Replica Of source from a different Redis Enterprise Software cluster, +there is also an option to enable compression. When enabled, gzip +compression with level -6 is utilized. + +## Database clustering (sharding) implications + +If a **source** database is sharded, that entire database is treated as +a single source for the destination database. + +If the **destination** database is sharded, when the commands replicated +from the source are executed on the destination database, the +destination database's hashing function is executed to determine to +which shard/s the command refers. + +The source and destination can have different shard counts and functions +for placement of keys. + +### Synchronization in Active-Passive Replication + +In Active-Passive databases, one cluster hosts the source database that receives read-write operations +and the other clusters host destination databases that receive synchronization updates from the source database. + +When there is a significant difference between the source and destination databases, +the destination database flushes all of the data from its memory and starts synchronizing the data again. +This process is called a **full sync**. + +For example, if the database updates for the destination databases +that are stored by the destination database in a synchronization backlog exceed their allocated memory, +the source database starts a full sync. + +{{% warning %}} +When you failover to the destination database for write operations, +make sure that you disable **Replica Of** before you direct clients to the destination database. +This avoids a full sync that can overwrite your data. +{{% /warning %}} + +## Active-Passive replication backlog + +In addition to the [database replication backlog]({{< relref "/operate/rs/7.4/databases/durability-ha/replication#database-replication-backlog" >}}), active-passive databases maintain a replication backlog (per shard) to synchronize the database instances between clusters. +By default, the replication backlog is set to one percent (1%) of the database size divided by the database number of shards and ranges between 1MB to 250MB per shard. +Use the [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) utility to control the size of the replication backlog. You can set it to `auto` or set a specific size. + +For an Active-Passive database: +```text +rladmin tune db repl_backlog +``` + +{{}} +On an Active-Passive database, the replication backlog configuration applies to both the replication backlog for shards synchronization and for synchronization of database instances between clusters. +{{}} +--- +alwaysopen: false +categories: +- docs +- operate +- rs +db_type: database +description: Create a database with Redis Enterprise Software. +linkTitle: Create a database +title: Create a Redis Enterprise Software database +toc: 'true' +weight: 10 +url: '/operate/rs/7.4/databases/create/' +--- +Redis Enterprise Software lets you create databases and distribute them across a cluster of nodes. + +To create a new database: + +1. 
Sign in to the Cluster Manager UI at `https://:8443` + +1. Use one of the following methods to create a new database: + + - [Quick database](#quick-database) + + - [Create database](#create-database) with additional configuration + +1. If you did not specify a port number for the database, you can find the port number in the **Endpoint** field in the **Databases > Configuration > General** section. + +1. [Test client connectivity]({{< relref "/operate/rs/7.4/databases/connect/test-client-connectivity" >}}). + + +{{< note >}} +For databases with Active-Active replication for geo-distributed locations, +see [Create an Active-Active database]({{< relref "/operate/rs/7.4/databases/active-active/create.md" >}}). To create and manage Active-Active databases, use the legacy UI. +{{< /note >}} + +## Quick database + +To quickly create a database and skip additional configuration options during initial creation: + +1. On the **Databases** screen, select **Quick database**. + +1. Select a Redis version from the **Database version** list. + +1. Configure settings that are required for database creation but can be changed later: + + - Database name + + - Memory limit (GB) + +2. Configure optional settings that can't be changed after database creation: + + - Endpoint port (set by the cluster if not set manually) + + - Capabilities (previously modules) to enable + +1. Optionally select **Full options** to configure [additional settings]({{< relref "/operate/rs/7.4/databases/configure#config-settings" >}}). + +1. Select **Create**. + +## Create database + +To create a new database and configure additional settings: + +1. Open the **Create database** menu with one of the following methods: + + - Click the **+** button next to **Databases** in the navigation menu: + + {{Create database menu has two options: Single Region and Active-Active database.}} + + - Go to the **Databases** screen and select **Create database**: + + {{Create database menu has two options: Single Region and Active-Active database.}} + +1. Select the database type: + + - **Single Region** + + - **Active-Active database** - Multiple participating Redis Enterprise clusters can host instances of the same [Active-Active database]({{< relref "/operate/rs/7.4/databases/active-active" >}}) in different geographic locations. Every instance can receive write operations, which are synchronized across all instances without conflict. + + {{}} +For Active-Active databases, see [Create an Active-Active geo-replicated database]({{< relref "/operate/rs/7.4/databases/active-active/create" >}}). + {{}} + +1. Select a Redis version from the **Database version** list. + +1. Enter a **Database name**. + + - Maximum of 63 characters + + - Only letters, numbers, or hyphens (-) are valid characters + + - Must start and end with a letter or digit + + - Case-sensitive + +1. To configure additional database settings, expand each relevant section to make changes. + + See [Configuration settings]({{< relref "/operate/rs/7.4/databases/configure#config-settings" >}}) for more information about each setting. + +1. Select **Create**. +--- +Title: Delete databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: Delete a database from the Cluster Manager UI. +linktitle: Delete +weight: 36 +url: '/operate/rs/7.4/databases/delete/' +--- + +When you delete a database, both the database configuration and data are removed. + +To delete a database from the Cluster Manager UI: + +1. 
From the **Databases** list, select the database, then select **Configuration**. + +1. Select {{< image filename="/images/rs/icons/delete-icon.png#no-click" alt="Delete button" width="22px" class="inline" >}} **Delete**. + +1. In the **Delete database** dialog, confirm deletion. +--- +Title: Auto Tiering quick start +alwaysopen: false +categories: +- docs +- operate +- rs +description: Get started with Auto Tiering quickly, creating a cluster and database + using flash storage. +linkTitle: Quick start +weight: 80 +url: '/operate/rs/7.4/databases/auto-tiering/quickstart/' +--- +This page guides you through a quick setup of [Auto Tiering]({{< relref "/operate/rs/7.4/databases/auto-tiering/" >}}) with a single node for testing and demo purposes. + +For production environments, you can find more detailed installation instructions in the [install and setup]({{< relref "/operate/rs/7.4/installing-upgrading" >}}) section. + +The steps to set up a Redis Enterprise Software cluster using Auto Tiering +with a single node are: + +1. Install Redis Enterprise Software or run it in a Docker + container. +1. Set up a Redis Enterprise Software cluster with Auto Tiering. +1. Create a new database with Auto Tiering enabled. +1. Connect to your new database. + +## Install Redis Enterprise Software + +### Bare metal, VM, Cloud instance + +To install on bare metal, a virtual machine, or an instance: + +1. Download the binaries from the [Redis Enterprise download center](https://cloud.redis.io/#/sign-up/software?direct=true). + +1. Upload the binaries to a Linux-based operating system. + +1. Extract the image: + + ```sh + tar -vxf + ``` + +1. After the `tar` command completes, you can find a new `install.sh` script in the current directory: + + ```sh + sudo ./install.sh -y + ``` + +### Docker-based installation {#dockerbased-installation} + +For testing purposes, you can run a Redis Enterprise Software +Docker container on Windows, MacOS, and Linux. + +```sh +docker run -d --cap-add sys_resource --name rp -p 8443:8443 -p 12000:12000 redislabs/redis:latest +``` + +## Prepare and format flash memory + +After you [install Redis Enterprise Software](#install-redis-enterprise-software), use the `prepare_flash` script to prepare and format flash memory: + +```sh +sudo /opt/redislabs/sbin/prepare_flash.sh +``` + +This script finds unformatted disks and mounts them as RAID partitions in `/var/opt/redislabs/flash`. + +To verify the disk configuration, run: + +```sh +sudo lsblk +``` + +## Set up a cluster and enable Auto Tiering + +1. Direct your browser to `https://localhost:8443` on the host machine to +see the Redis Enterprise Software Cluster Manager UI. + + {{}} +Depending on your browser, you may see a certificate error. +Choose "continue to the website" to go to the setup screen. + {{}} + +1. Select **Create new cluster**. + +1. Set up account credentials for a cluster administrator, then select **Next** to proceed to cluster setup. + +1. Enter your cluster license key if you have one. Otherwise, the cluster uses the trial version. + +1. Provide a cluster FQDN such as `mycluster.local`, then select **Next**. + +1. In the **Storage configuration** section, turn on the **Enable flash storage** toggle. + +1. Select **Create cluster**. + +1. Select **OK** to confirm that you are aware of the replacement of the HTTPS TLS +certificate on the node, and proceed through the browser warning. + +## Create a database + +On the **Databases** screen: + +1. Select **Quick database**. + +1. 
Verify **Flash** is selected for **Runs on**. + + {{Create a quick database with Runs on Flash selected.}} + +1. Enter `12000` for the endpoint **Port** number. + +1. _(Optional)_ Select **Full options** to see available alerts. + +1. Select **Create**. + +You now have a database with Auto Tiering enabled! + +## Connect to your database + +You are ready to connect to your database to store data. See the [test connectivity]({{< relref "/operate/rs/7.4/databases/connect/test-client-connectivity.md" >}}) page to learn how to connect to your database. + +## Next steps + +If you want to generate load against the +database or add a bunch of data for cluster testing, see the [memtier_benchmark quick start]({{< relref "/operate/rs/7.4/clusters/optimize/memtier-benchmark.md" >}}) for help. + +To see the true performance and scale of Auto Tiering, you must tune your I/O path and set the flash path to the mounted path of SSD or NVMe flash memory as that is what it is designed to run on. For more information, see [Auto Tiering]({{< relref "/operate/rs/7.4/databases/auto-tiering/" >}}). +--- +Title: Manage Auto Tiering storage engine +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage the storage engine used for your database with auto tiering enabled. +linkTitle: Manage storage engine +weight: 100 +url: '/operate/rs/7.4/databases/auto-tiering/storage-engine/' +--- + +## Manage the storage engine + +Redis Enterprise Auto Tiering supports two storage engines: + +* [Speedb](https://www.speedb.io/) (default, recommended) +* [RocksDB](https://rocksdb.org/) + +{{}}Switching between storage engines requires guidance by Redis Support or your Account Manager.{{}} + +### Change the storage engine + +1. Change the cluster level configuration for default storage engine. + + * API: + + ``` sh + curl -k -u : -X PUT -H "Content-Type: application/json" -d '{"bigstore_driver":"speedb"}' https://localhost:9443/v1/cluster + ``` + + * CLI: + + ```sh + rladmin cluster config bigstore_driver {speedb | rocksdb} + ``` + +2. Restart the each database on the cluster one by one. + + ```sh + rladmin restart db { db: | } + ``` + +{{}} We recommend restarting your database at times with low usage and avoiding peak hours. For databases without persistence enabled, we also recommend using export to backup your database first.{{}} + +## Monitor the storage engine + +To get the current cluster level default storage engine run: + +* Use the `rladmin info cluster` command look for ‘bigstore_driver’. + +* Use the REST API: + + ```sh + curl -k -u : -X GET -H "Content-Type: application/json" https://localhost:9443/v1/cluster + ``` + +Versions of Redis Enterprise 7.2 and later provide a metric called `bdb_bigstore_shard_count` to help track the shard count per database, filtered by `bdb_id` and by storage engine as shown below: + + + ```sh + bdb_bigstore_shard_count{bdb="1",cluster="mycluster.local",driver="rocksdb"} 1.0 + bdb_bigstore_shard_count{bdb="1",cluster="mycluster.local",driver="speedb"} 2.0 + ``` + +For more about metrics for Redis Enterprise’s integration with Prometheus, see [Prometheus integration]({{< relref "/integrate/prometheus-with-redis-enterprise/prometheus-metrics-definitions" >}}). +--- +Title: Auto Tiering +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Auto Tiering enables your data to span both RAM and dedicated flash memory. 
+hideListLinks: true +linktitle: Auto Tiering +weight: 50 +url: '/operate/rs/7.4/databases/auto-tiering/' +--- +Redis Enterprise's auto tiering offers users the unique ability to use solid state drives (SSDs) to extend databases beyond DRAM capacity. +Developers can build applications that require large datasets using the same Redis API. +Using SSDs can significantly reduce the infrastructure costs compared to only DRAM deployments. + +Frequently used data, called hot data, belongs in the fastest memory level to deliver a real-time user experience. +Data that is accessed less frequently, called warm data, can be kept in a slightly slower memory tier. +Redis Enterprise’s Auto tiering maintains hot data in DRAM, keeps warm data in SSDs, and transfers data between tiers automatically. + +Redis Enterprise’s auto tiering is based on a high-performance storage engine (Speedb) that manages the complexity of using SSDs and DRAM as the total available memory for databases in a Redis Enterprise cluster. This implementation offers a performance boost of up to 10k operations per second per core of the database, doubling the performance of Redis on Flash. + +Just like all-RAM databases, databases with Auto Tiering enabled are compatible with existing Redis applications. + +Auto Tiering is also supported on [Redis Cloud]({{< relref "/operate/rc/" >}}) and [Redis Enterprise Software for Kubernetes]({{< relref "/operate/rs/" >}}). + +## Use cases + +The benefits associated with Auto Tiering are dependent on the use case. + +Auto Tiering is ideal when your: + +- working set is significantly smaller than your dataset (high RAM hit rate) +- average key size is smaller than average value size (all key names are stored in RAM) +- most recent data is the most frequently used (high RAM hit rate) + +Auto Tiering is not recommended for: + +- Long key names (all key names are stored in RAM) +- Broad access patterns (any value could be pulled into RAM) +- Large working sets (working set is stored in RAM) +- Frequently moved data (moving to and from RAM too often can impact performance) + +Auto Tiering is not intended to be used for persistent storage. Redis Enterprise Software database persistent and ephemeral storage should be on different disks, either local or attached. + +## Where is my data? + +When using Auto Tiering, RAM storage holds: +- All keys (names) +- Key indexes +- Dictionaries +- Hot data (working set) + +All data is accessed through RAM. If a value in flash memory is accessed, it becomes part of the working set and is moved to RAM. These values are referred to as “hot data”. + +Inactive or infrequently accessed data is referred to as “warm data” and stored in flash memory. When more space is needed in RAM, warm data is moved from RAM to flash storage. + +{{}} When using Auto Tiering with RediSearch, it’s important to note that RediSearch indexes are also stored in RAM.{{}} + +## RAM to Flash ratio + +Redis Enterprise Software allows you to configure and tune the ratio of RAM-to-Flash for each database independently, optimizing performance for your specific use case. +While this is an online operation requiring no downtime for your database, it is recommended to perform it during maintenance windows as data might move between tiers (RAM <-> Flash). + +The RAM limit cannot be smaller than 10% of the total memory. We recommend you keep at least 20% of all values in RAM. Do not set the RAM limit to 100%. + +## Flash memory + +Implementing Auto Tiering requires pre planning around memory and sizing. 
Considerations and requirements for Auto Tiering include: + +- Flash memory must be locally attached (as opposed to network attached storage (NAS) and storage area networks (SAN)). +- Flash memory must be dedicated to Auto Tiering and not shared with other parts of the database, such as durability, binaries, or persistence. +- For the best performance, the SSDs should be NVMe based, but SATA can also be used. +- The available flash space must be greater than or equal to the total database size (RAM+Flash). The extra space accounts for write buffers and [write amplification](https://en.wikipedia.org/wiki/Write_amplification). + +{{}} The Redis Enterprise Software database persistent and ephemeral storage should be on different disks, either local or attached. {{}} + +Once these requirements are met, you can create and manage both databases with Auto Tiering enabled and +all-RAM databases in the same cluster. + +When you begin planning the deployment of an Auto Tiering enabled database in production, +we recommend working closely with the Redis technical team for sizing and performance tuning. + +### Cloud environments + +When running in a cloud environment: + +- Flash memory is on the ephemeral SSDs of the cloud instance (for example the local NVMe of AWS i4i instnaces and Azure Lsv2 and Lsv3 series). +- Persistent database storage needs to be network attached (for example, AWS EBS for AWS). + +{{}} +We specifically recommend "[Storage Optimized I4i - High I/O Instances](https://aws.amazon.com/ec2/instance-types/#storage-optimized)" because of the performance of NVMe for flash memory. {{}} + +### On-premises environments + +When you begin planning the deployment of Auto Tiering in production, we recommend working closely with the Redis technical team for sizing and performance tuning. + +On-premises environments support more deployment options than other environments such as: + +- Using Redis Stack features: + - [Search and query]({{< relref "/operate/oss_and_stack/stack-with-enterprise/search" >}}) + - [JSON]({{< relref "/operate/oss_and_stack/stack-with-enterprise/json" >}}) + - [Time series]({{< relref "/operate/oss_and_stack/stack-with-enterprise/timeseries" >}}) + - [Probabilistic data structures]({{< relref "/operate/oss_and_stack/stack-with-enterprise/bloom" >}}) + +{{}} Enabling Auto Tiering for Active-Active distributed databases requires validating and getting the Redis technical team's approval first . {{}} + +{{}} Auto Tiering is not supported running on network attached storage (NAS), storage area network (SAN), or with local HDD drives. {{}} + +## Next steps + +- [Auto Tiering metrics]({{< relref "/operate/rs/7.4/references/metrics/auto-tiering" >}}) +- [Auto Tiering quick start]({{< relref "/operate/rs/7.4/databases/auto-tiering/quickstart.md" >}}) + +- [Ephemeral and persistent storage]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/persistent-ephemeral-storage" >}}) +- [Hardware requirements]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/hardware-requirements.md" >}}) +--- +Title: Troubleshooting pocket guide for Redis Enterprise Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: Troubleshoot issues with Redis Enterprise Software, including connectivity + issues between the database and clients or applications. 
+linktitle: Troubleshoot +toc: 'true' +weight: 90 +url: '/operate/rs/7.4/databases/connect/troubleshooting-guide/' +--- + +If your client or application cannot connect to your database, verify the following. + +## Identify Redis host issues + +#### Check resource usage + +- Used disk space should be less than `90%`. To check the host machine's disk usage, run the [`df`](https://man7.org/linux/man-pages/man1/df.1.html) command: + + ```sh + $ df -h + Filesystem Size Used Avail Use% Mounted on + overlay 59G 23G 33G 41% / + /dev/vda1 59G 23G 33G 41% /etc/hosts + ``` + +- RAM and CPU utilization should be less than `80%`, and host resources must be available exclusively for Redis Enterprise Software. You should also make sure that swap memory is not being used or is not configured. + + 1. Run the [`free`](https://man7.org/linux/man-pages/man1/free.1.html) command to check memory usage: + + ```sh + $ free + total used free shared buff/cache available + Mem: 6087028 1954664 993756 409196 3138608 3440856 + Swap: 1048572 0 1048572 + ``` + + 1. Used CPU should be less than `80%`. To check CPU usage, use `top` or `vmstat`. + + Run [`top`](https://man7.org/linux/man-pages/man1/top.1.html): + + ```sh + $ top + Tasks: 54 total, 1 running, 53 sleeping, 0 stopped, 0 zombie + %Cpu(s): 1.7 us, 1.4 sy, 0.0 ni, 96.8 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st + KiB Mem : 6087028 total, 988672 free, 1958060 used, 3140296 buff/cache + KiB Swap: 1048572 total, 1048572 free, 0 used. 3437460 avail Mem + ``` + + Run [`vmstat`](https://man7.org/linux/man-pages/man8/vmstat.8.html): + + ```sh + $ vmstat + procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- + r b swpd free buff cache si so bi bo in cs us sy id wa st + 2 0 0 988868 177588 2962876 0 0 0 6 7 12 1 1 99 0 0 + ``` + + 1. If CPU or RAM usage is greater than 80%, ask your system administrator which process is the culprit. If the process is not related to Redis, terminate it. + +#### Sync clock with time server + +It is recommended to sync the host clock with a time server. + +Verify that time is synchronized with the time server using one of the following commands: + +- `ntpq -p` + +- `chronyc sources` + +- [`timedatectl`](https://man7.org/linux/man-pages/man1/timedatectl.1.html) + +#### Remove https_proxy and http_proxy variables + +1. Run [`printenv`](https://man7.org/linux/man-pages/man1/printenv.1.html) and check if `https_proxy` and `http_proxy` are configured as environment variables: + + ```sh + printenv | grep -i proxy + ``` + +1. If `https_proxy` or `http_proxy` exist, remove them: + + ```sh + unset https_proxy + ``` + ```sh + unset http_proxy + ``` + +#### Review system logs + +Review system logs including the syslog or journal for any error messages, warnings, or critical events. See [Logging]({{< relref "/operate/rs/7.4/clusters/logging" >}}) for more information. + +## Identify issues caused by security hardening + +- Temporarily deactivate any security hardening tools (such as selinux, cylance, McAfee, or dynatrace), and check if the problem is resolved. + +- The user `redislabs` must have read and write access to `/tmp` directory. Run the following commands to verify. + + 1. Create a test file in `/tmp` as the `redislabs` user: + ```sh + $ su - redislabs -s /bin/bash -c 'touch /tmp/test' + ``` + + 1. Verify the file was created successfully: + ```sh + $ ls -l /tmp/test + -rw-rw-r-- 1 redislabs redislabs 0 Aug 12 02:06 /tmp/test + ``` + +- Using a non-permissive file mode creation mask (`umask`) can cause issues. + + 1. 
Check the output of `umask`: + + ```sh + $ umask + 0022 + ``` + + 1. If `umask`'s output differs from the default value `0022`, it might prevent normal operation. Consult your system administrator and revert to the default `umask` setting. + +## Identify cluster issues + +- Use `supervisorctl status` to verify all processes are in a `RUNNING` state: + + ```sh + supervisorctl status + ``` + +- Run `rlcheck` and verify no errors appear: + + ```sh + rlcheck + ``` + +- Run [`rladmin status issues_only`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status" >}}) and verify that no issues appear: + + ```sh + $ rladmin status issues_only + CLUSTER NODES: + NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS + + DATABASES: + DB:ID NAME TYPE STATUS SHARDS PLACEMENT REPLICATION PERSISTENCE ENDPOINT + + ENDPOINTS: + DB:ID NAME ID NODE ROLE SSL + + SHARDS: + DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS + + ``` + +- Run [`rladmin status shards`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-shards" >}}). For each shard, `USED_MEMORY` should be less than 25 GB. + + ```sh + $ rladmin status shards + SHARDS: + DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS + db:1 db1 redis:1 node:1 master 0-16383 2.13MB OK + ``` + +- Run [`rladmin cluster running_actions`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/running_actions" >}}) and confirm that no tasks are currently running (active): + + ```sh + $ rladmin cluster running_actions + No active tasks + ``` + +## Troubleshoot connectivity + +#### Database endpoint resolution + +1. On the client machine, check if the database endpoint can be resolved: + + ```sh + dig + ``` + +1. If endpoint resolution fails on the client machine, check on one of the cluster nodes: + + ```sh + dig @localhost + ``` + +1. If endpoint resolution succeeds on the cluster node but fails on the client machine, review the DNS configuration and fix any errors. + +1. If the endpoint can’t be resolved on the cluster node, [contact support](https://redis.com/company/support/). + +#### Client application issues + +1. To identify possible client application issues, test connectivity from the client machine to the database using [`redis-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/redis-cli" >}}): + + [`INFO`]({{< relref "/commands/info" >}}): + + ```sh + redis-cli -h -p -a INFO + ``` + + [`PING`]({{< relref "/commands/ping" >}}): + + ```sh + redis-cli -h -p -a PING + ``` + + or if TLS is enabled: + + ```sh + redis-cli -h -p -a --tls --insecure --cert --key PING + ``` + +1. If the client machine cannot connect, try to connect to the database from one of the cluster nodes: + + ```sh + redis-cli -h -p -a PING + ``` + +1. If the cluster node is also unable to connect to the database, [contact Redis support](https://redis.com/company/support/). + +1. If the client fails to connect, but the cluster node succeeds, perform health checks on the client and network. + +#### Firewall access + +1. Run one of the following commands to verify that database access is not blocked by a firewall on the client machine or cluster: + + ```sh + iptables -L + ``` + + ```sh + ufw status + ``` + + ```sh + firewall-cmd –list-all + ``` + +1. To resolve firewall issues: + + 1. If a firewall is configured for your database, add the client IP address to the firewall rules. + + 1. 
Configure third-party firewalls and external proxies to allow the cluster FQDN, database endpoint IP address, and database ports. + +## Troubleshoot latency + +#### Server-side latency + +- Make sure the database's used memory does not reach the configured database max memory limit. For more details, see [Database memory limits]({{< relref "/operate/rs/7.4/databases/memory-performance/memory-limit" >}}). + +- Try to correlate the time of the latency with any surge in the following metrics: + + - Number of connections + + - Used memory + + - Evicted keys + + - Expired keys + +- Run [`SLOWLOG GET`]({{< relref "/commands/slowlog-get" >}}) using [`redis-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/redis-cli" >}}) to identify slow commands such as [`KEYS`]({{< relref "/commands/keys" >}}) or [`HGETALL`]({{< relref "/commands/hgetall" >}}: + + ```sh + redis-cli -h -p -a SLOWLOG GET + ``` + + Consider using alternative commands such as [`SCAN`]({{< relref "/commands/scan" >}}), [`SSCAN`]({{< relref "/commands/sscan" >}}), [`HSCAN`]({{< relref "/commands/hscan" >}}) and [`ZSCAN`]({{< relref "/commands/zscan" >}}) + +- Keys with large memory footprints can cause latency. To identify such keys, compare the keys returned by [`SLOWLOG GET`]({{< relref "/commands/slowlog-get" >}}) with the output of the following commands: + + ```sh + redis-cli -h -p -a --memkeys + ``` + + ```sh + redis-cli -h -p -a --bigkeys + ``` + +- For additional diagnostics, see: + + - [Diagnosing latency issues]({{< relref "/operate/oss_and_stack/management/optimization/latency" >}}) + + - [View Redis slow log]({{< relref "/operate/rs/7.4/clusters/logging/redis-slow-log" >}}) + +#### Client-side latency + +Verify the following: + +- There is no memory or CPU pressure on the client host. + +- The client uses a connection pool instead of frequently opening and closing connections. + +- The client does not erroneously open multiple connections that can pressure the client or server. +--- +Title: Test client connection +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linktitle: Test connection +weight: 20 +url: '/operate/rs/7.4/databases/connect/test-client-connectivity/' +--- +In various scenarios, such as after creating a new cluster or upgrading +the cluster, you should verify clients can connect to the +database. + +To test client connectivity: + +1. After you [create a Redis database]({{< relref "/operate/rs/7.4/databases/create" >}}), copy the database endpoint, which contains the cluster name (FQDN). + + To view and copy endpoints for a database in the cluster, see the database’s **Configuration > General** section in the Cluster Manager UI: + + {{View public and private endpoints from the General section of the database's Configuration screen.}} + +1. Try to connect to the database endpoint from your client of choice, + and run database commands. + +1. If the database does not respond, try to connect to the database + endpoint using the IP address rather than the FQDN. If you + succeed, then DNS is not properly configured. For + additional details, see + [Configure cluster DNS]({{< relref "/operate/rs/7.4/networking/cluster-dns" >}}). + +If any issues occur when testing database connections, [contact +support](https://redis.com/company/support/). 
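+You can also script the FQDN-versus-IP check from the previous steps. The following minimal sketch uses Python's standard library together with the [`redis-py`]({{< relref "/develop/clients/redis-py" >}}) client library; the endpoint and port are placeholders for your own database, and you may need to add a password or TLS options. If the `PING` succeeds with the IP address but fails with the FQDN, DNS is not properly configured.
+
+```python
+import socket
+import redis
+
+endpoint = 'redis-12000.mycluster.example.com'  # placeholder database endpoint (FQDN)
+port = 12000                                    # placeholder database port
+
+# Step 1: check that the FQDN resolves. If it does not, fix DNS first.
+try:
+    ip_address = socket.gethostbyname(endpoint)
+    print(f'{endpoint} resolves to {ip_address}')
+except socket.gaierror as err:
+    raise SystemExit(f'DNS resolution failed: {err}')
+
+# Step 2: try PING against the FQDN and against the resolved IP address.
+for host in (endpoint, ip_address):
+    try:
+        redis.Redis(host=host, port=port, socket_timeout=3).ping()
+        print(f'PING via {host}: OK')
+    except redis.RedisError as err:
+        print(f'PING via {host} failed: {err}')
+```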
+ +## Test database connections + +After you create a Redis database, you can connect to your +database and store data using one of the following methods: + +- [`redis-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/redis-cli" >}}), the built-in command-line tool + +- [Redis Insight](https://redis.com/redis-enterprise/redis-insight/), a free Redis GUI that is available for macOS, Windows, and Linux + +- An application using a Redis client library, such as [`redis-py`](https://github.com/redis/redis-py) for Python. See the [client list]({{< relref "/develop/clients/" >}}) to view all Redis clients by language. + +### Connect with redis-cli + +Connect to your database with `redis-cli` (located in the `/opt/redislabs/bin` directory), then store and retrieve a key: + +```sh +$ redis-cli -h -p +127.0.0.1:16653> set key1 123 +OK +127.0.0.1:16653> get key1 +"123" +``` + +For more `redis-cli` connection examples, see the [`redis-cli` reference]({{< relref "/operate/rs/7.4/references/cli-utilities/redis-cli" >}}). + +### Connect with Redis Insight + +Redis Insight is a free Redis GUI that is available for macOS, Windows, and Linux. + +1. [Install Redis Insight]({{< relref "/develop/tools/insight/" >}}). + +1. Open Redis Insight and select **Add Redis Database**. + +1. Enter the host and port in the **Host** and **Port** fields. + +1. Select **Use TLS** if [TLS]({{< relref "/operate/rs/7.4/security/encryption/tls" >}}) is set up. + +1. Select **Add Redis Database** to connect to the database. + +See the [Redis Insight documentation]({{< relref "/develop/tools/insight/" >}}) for more information. + +### Connect with Python + +Python applications can connect +to the database using the `redis-py` client library. For installation instructions, see the +[`redis-py` README](https://github.com/redis/redis-py#readme) on GitHub. + +1. From the command line, create a new file called +`redis_test.py`: + + ```sh + vi redis_test.py + ``` + +1. Paste the following code in `redis_test.py`, and replace `` and `` with your database's endpoint details: + + ```python + import redis + + # Connect to the database + r = redis.Redis(host='', port=) + + # Store a key + print("set key1 123") + print(r.set('key1', '123')) + + # Retrieve the key + print("get key1") + print(r.get('key1')) + ``` + +1. Run the application: + + ```sh + python redis_test.py + ``` + +1. If the application successfully connects to your database, it outputs: + + ```sh + set key1 123 + True + get key1 + 123 + ``` +### Connect with discovery service + +You can also connect a Python application to the database using the discovery service, which complies with the Redis Sentinel API. + +In the IP-based connection method, you only need the database name, not the port number. +The following example uses the discovery service that listens on port 8001 on all nodes of the cluster +to discover the endpoint for the database named "db1". + +```python +from redis.sentinel import Sentinel + +# with IP based connections, a list of known node IP addresses is constructed +# to allow connection even if any one of the nodes in the list is unavailable. 
+sentinel_list = [ +('10.0.0.44', 8001), +('10.0.0.45', 8001), +('10.0.0.46', 8001) +] + +# change this to the db name you want to connect +db_name = 'db1' + +sentinel = Sentinel(sentinel_list, socket_timeout=0.1) +r = sentinel.master_for(db_name, socket_timeout=0.1) + +# set key "foo" to value "bar" +print(r.set('foo', 'bar')) +# set value for key "foo" +print(r.get('foo')) +``` + +For more `redis-py` connection examples, see the [`redis-py` developer documentation](https://redis-py.readthedocs.io/en/stable/examples/connection_examples.html). +--- +Title: Supported connection clients +categories: +- docs +- operate +- rs +description: Info about Redis client libraries and supported clients when using the + discovery service. +weight: 10 +url: '/operate/rs/7.4/databases/connect/supported-clients-browsers/' +--- +You can connect to Redis Enterprise Software databases programmatically using client libraries. + +## Redis client libraries + +To connect an application to a Redis database hosted by Redis Enterprise Software, use a [client library]({{< relref "/develop/clients/" >}}) appropriate for your programming language. + +You can also use the `redis-cli` utility to connect to a database from the command line. + +For examples of each approach, see the [Redis Enterprise Software quickstart]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}). + +Note: You cannot use client libraries to configure Redis Enterprise Software. Instead, use: + +- The Redis Enterprise Software [Cluster Manager UI]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}) +- The [REST API]({{< relref "/operate/rs/7.4/references/rest-api" >}}) +- Command-line utilities, such as [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) + +### Discovery service + +We recommend the following clients when using a [discovery service]({{< relref "/operate/rs/7.4/databases/durability-ha/discovery-service.md" >}}) based on the Redis Sentinel API: + +- [redis-py]({{< relref "/develop/clients/redis-py" >}}) (Python client) +- [NRedisStack]({{< relref "/develop/clients/dotnet" >}}) (.NET client) +- [Jedis]({{< relref "/develop/clients/jedis" >}}) (synchronous Java client) +- [Lettuce]({{< relref "/develop/clients/lettuce" >}}) (asynchronous Java client) +- [go-redis]({{< relref "/develop/clients/go" >}}) (Go client) +- [Hiredis](https://github.com/redis/hiredis) (C client) + +If you need to use another client, you can use [Sentinel Tunnel](https://github.com/RedisLabs/sentinel_tunnel) +to discover the current Redis master with Sentinel and create a TCP tunnel between a local port on the client and the master. + +--- +Title: Connect to a database +categories: +- docs +- operate +- rs +description: Learn how to connect your application to a Redis database hosted by Redis + Enterprise Software and test your connection. +hideListLinks: true +linkTitle: Connect +weight: 20 +url: '/operate/rs/7.4/databases/connect/' +--- + +After you [set up a cluster]({{< relref "/operate/rs/7.4/clusters/new-cluster-setup" >}}) and [create a Redis database]({{< relref "/operate/rs/7.4/databases/create" >}}), you can connect to your database. + +To connect to your database, you need the database endpoint, which includes the cluster name (FQDN) and the database port. To view and copy public and private endpoints for a database in the cluster, see the database’s **Configuration > General** section in the Cluster Manager UI. 
+ +{{View public and private endpoints from the General section of the database's Configuration screen.}} + +If you try to connect with the FQDN, and the database does not respond, try connecting with the IP address. If this succeeds, DNS is not properly configured. To set up DNS, see [Configure cluster DNS]({{< relref "/operate/rs/7.4/networking/cluster-dns" >}}). + +If you want to secure your connection, set up [TLS]({{< relref "/operate/rs/7.4/security/encryption/tls/" >}}). + +## Connect to a database + +Use one of the following connection methods to connect to your database: + +- [`redis-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/redis-cli/" >}}) utility + +- [Redis Insight](https://redis.com/redis-enterprise/redis-insight/) + +- [Redis client]({{< relref "/develop/clients/" >}}) for your preferred programming language + +For examples, see [Test client connection]({{< relref "/operate/rs/7.4/databases/connect/test-client-connectivity" >}}). +--- +Title: Eviction policy +alwaysOpen: false +categories: +- docs +- operate +- rs +- kubernetes +description: The eviction policy determines what happens when a database reaches its + memory limit. +linkTitle: Eviction policy +weight: 10 +url: '/operate/rs/7.4/databases/memory-performance/eviction-policy/' +--- + +The eviction policy determines what happens when a database reaches its memory limit. + +To make room for new data, older data is _evicted_ (removed) according to the selected policy. + +To prevent this from happening, make sure your database is large enough to hold all desired keys. + +| **Eviction Policy** | **Description** | +|------------|-----------------| +|  noeviction | New values aren't saved when memory limit is reached
When a database uses replication, this applies to the primary database | +|  allkeys-lru | Keeps most recently used keys; removes least recently used (LRU) keys | +|  allkeys-lfu | Keeps frequently used keys; removes least frequently used (LFU) keys | +|  allkeys-random | Randomly removes keys | +|  volatile-lru | Removes least recently used keys with `expire` field set to true | +|  volatile-lfu | Removes least frequently used keys with `expire` field set to true | +|  volatile-random | Randomly removes keys with `expire` field set to true | +|  volatile-ttl | Removes least frequently used keys with `expire` field set to true and the shortest remaining time-to-live (TTL) value | + +## Eviction policy defaults + +`volatile-lru` is the default eviction policy for most databases. + +The default policy for [Active-Active databases]({{< relref "/operate/rs/7.4/databases/active-active" >}}) is _noeviction_ policy. + +## Active-Active database eviction + +The eviction policy mechanism for Active-Active databases kicks in earlier than for standalone databases because it requires propagation to all participating clusters. +The eviction policy starts to evict keys when one of the Active-Active instances reaches 80% of its memory limit. If memory usage continues to rise while the keys are being evicted, the rate of eviction will increase to prevent reaching the Out-of-Memory state. +As with standalone Redis Enterprise databases, Active-Active eviction is calculated per shard. +To prevent over eviction, internal heuristics might prevent keys from being evicted when the shard reaches the 80% memory limit. In such cases, keys will get evicted only when shard memory reaches 100%. + +In case of network issues between Active-Active instances, memory can be freed only when all instances are in sync. If there is no communication between participating clusters, it can result in eviction of all keys and the instance reaching an Out-of-Memory state. + +{{< note >}} +Data eviction policies are not supported for Active-Active databases with Auto Tiering . +{{< /note >}} + +## Avoid data eviction + +To avoid data eviction, make sure your database is large enough to hold required values. + +For larger databases, consider using [Auto Tiering ]({{< relref "/operate/rs/7.4/databases/auto-tiering/" >}}). + +Auto Tiering stores actively-used data (also known as _hot data_) in RAM and the remaining data in flash memory (SSD). +This lets you retain more data while ensuring the fastest access to the most critical data. +--- +Title: Database memory limits +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: When you set a database's memory limit, you define the maximum size the + database can reach. +linkTitle: Memory limits +weight: 20 +url: '/operate/rs/7.4/databases/memory-performance/memory-limit/' +--- +When you set a database's memory limit, you define the maximum size the +database can reach in the cluster, across all database replicas and +shards, including both primary and replica shards. + +If the total size of the database in the cluster reaches the memory +limit, the data eviction policy is +applied. + +## Factors for sizing + +Factors to consider when sizing your database: + +- **dataset size**: you want your limit to be above your dataset size to leave room for overhead. +- **database throughput**: high throughput needs more shards, leading to a higher memory limit. 
+- [**modules**]({{< relref "/operate/oss_and_stack/stack-with-enterprise" >}}): using modules with your database consumes more memory. +- [**database clustering**]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering.md" >}}): enables you to spread your data into shards across multiple nodes. +- [**database replication**]({{< relref "/operate/rs/7.4/databases/durability-ha/replication.md" >}}): enabling replication doubles memory consumption. + +Additional factors for Active-Active databases: + +- [**Active-Active replication**]({{< relref "/operate/rs/7.4/databases/active-active/_index.md" >}}): enabling Active-Active replication requires double the memory of regular replication, which can be up to two times (2x) the original data size per instance. +- [**database replication backlog**]({{< relref "/operate/rs/7.4/databases/active-active/manage#replication-backlog/" >}}) for synchronization between shards. By default, this is set to 1% of the database size. +- [**Active-Active replication backlog**]({{< relref "/operate/rs/7.4/databases/active-active/manage.md" >}}) for synchronization between clusters. By default, this is set to 1% of the database size. + + It's also important to know Active-Active databases have a lower threshold for activating the eviction policy, because it requires propagation to all participating clusters. The eviction policy starts to evict keys when one of the Active-Active instances reaches 80% of its memory limit. + +Additional factors for databases with Auto Tiering enabled: + +- The available flash space must be greater than or equal to the total database size (RAM+Flash). The extra space accounts for write buffers and [write amplification](https://en.wikipedia.org/wiki/Write_amplification). + +- [**database persistence**]({{< relref "/operate/rs/7.4/databases/configure/database-persistence.md" >}}): Auto Tiering uses dual database persistence where both the primary and replica shards persist to disk. This may add some processor and network overhead, especially in cloud configurations with network attached storage. + +## What happens when Redis Enterprise Software is low on RAM? + +Redis Enterprise Software manages node memory so that data is entirely in RAM (unless using Auto Tiering). If not enough RAM is available, Redis Enterprise prevents adding more data into the databases. + +Redis Enterprise Software protects the existing data and prevents the database from being able to store data into the shards. + +You can configure the cluster to move the data to another node, or even discard it according to the [eviction policy]({{< relref "/operate/rs/7.4/databases/memory-performance/eviction-policy.md" >}}) set on each database by the administrator. + +[Auto Tiering]({{< relref "/operate/rs/7.4/databases/auto-tiering/" >}}) +manages memory so that you can also use flash memory (SSD) to store data. + +### Order of events for low RAM + +1. If there are other nodes available, your shards migrate to other nodes. +2. If the eviction policy allows eviction, shards start to release memory, +which can result in data loss. +3. If the eviction policy does not allow eviction, you'll receive +out of memory (OOM) messages. +4. If shards can't free memory, Redis Enterprise relies on the OS processes to stop replicas, +but tries to avoid stopping primary shards. + +We recommend that you have a [monitoring platform]({{< relref "/operate/rs/7.4/clusters/monitoring/" >}}) that alerts you before a system gets low on RAM. 
+You must maintain sufficient free memory to make sure that you have a healthy Redis Enterprise installation. + +## Memory metrics + +The Cluster Manager UI provides metrics that can help you evaluate your memory use. + +- Free RAM +- RAM fragmentation +- Used memory +- Memory usage +- Memory limit + +See [console metrics]({{< relref "/operate/rs/7.4/references/metrics" >}}) for more detailed information. + +## Related info + +- [Memory and performance]({{< relref "/operate/rs/7.4/databases/memory-performance" >}}) +- [Disk sizing for heavy write scenarios]({{< relref "/operate/rs/7.4/clusters/optimize/disk-sizing-heavy-write-scenarios.md" >}}) +- [Turn off services to free system memory]({{< relref "/operate/rs/7.4/clusters/optimize/turn-off-services.md" >}}) +- [Eviction policy]({{< relref "/operate/rs/7.4/databases/memory-performance/eviction-policy.md" >}}) +- [Shard placement policy]({{< relref "/operate/rs/7.4/databases/memory-performance/shard-placement-policy.md" >}}) +- [Database persistence]({{< relref "/operate/rs/7.4/databases/configure/database-persistence.md" >}}) +--- +Title: Shard placement policy +alwaysopen: false +categories: +- docs +- operate +- rs +description: Detailed info about the shard placement policy. +linkTitle: Shard placement policy +weight: 30 +url: '/operate/rs/7.4/databases/memory-performance/shard-placement-policy/' +--- +In Redis Enterprise Software, the location of master and replica shards on the cluster nodes can impact the database and node performance. +Master shards and their corresponding replica shards are always placed on separate nodes for data resiliency. +The shard placement policy helps to maintain optimal performance and resiliency. + +{{< embed-md "shard-placement-intro.md" >}} + +## Shard placement policies + +### Dense shard placement policy + +In the dense policy, the cluster places the database shards on as few nodes as possible. +When the node is not able to host all of the shards, some shards are moved to another node to maintain optimal node health. + +For example, for a database with two master and two replica shards on a cluster with three nodes and a dense shard placement policy, +the two master shards are hosted on one node and the two replica shards are hosted on another node. + +For Redis on RAM databases without the OSS cluster API enabled, use the dense policy to optimize performance. + +{{< image filename="/images/rs/dense_placement.png" >}} + +*Figure: Three nodes with two master shards (red) and two replica shards (white) with a dense placement policy* + +### Sparse shard placement policy + +In the sparse policy, the cluster places shards on as many nodes as possible to distribute the shards of a database across all available nodes. +When all nodes have database shards, the shards are distributed evenly across the nodes to maintain optimal node health. + +For example, for a database with two master and two replica shards on a cluster with three nodes and a sparse shard placement policy: + +- Node 1 hosts one of the master shards +- Node 2 hosts the replica for the first master shard +- Node 3 hosts the second master shard +- Node 1 hosts for the replica shard for master shard 2 + +For Redis on RAM databases with OSS cluster API enabled and for databases with Auto Tiering enabled, use the sparse policy to optimize performance. 
+ +{{< image filename="/images/rs/sparse_placement.png" >}} + +*Figure: Three nodes with two master shards (red) and two replica shards (white) with a sparse placement policy* + +## Related articles + +You can [configure the shard placement policy]({{< relref "/operate/rs/7.4/databases/configure/shard-placement.md" >}}) for each database. +--- +Title: Memory and performance +alwaysopen: false +categories: +- docs +- operate +- rs +description: Learn more about managing your memory and optimizing performance for + your database. +hideListLinks: true +linktitle: Memory and performance +weight: 70 +url: '/operate/rs/7.4/databases/memory-performance/' +--- +Redis Enterprise Software has multiple mechanisms in its +architecture to help optimize storage and performance. + +## Memory limits + +Database memory limits define the maximum size your database can reach across all database replicas and [shards]({{< relref "/glossary#letter-s" >}}) on the cluster. Your memory limit will also determine the number of shards you'll need. + +Besides your dataset, the memory limit must also account for replication, Active-Active overhead, and module overhead, and a number of other factors. These can significantly increase your database size, sometimes increasing it by four times or more. + +For more information on memory limits, see [Database memory limits]({{< relref "/operate/rs/7.4/databases/memory-performance/memory-limit.md" >}}). + +## Eviction policies + +When a database exceeds its memory limit, eviction policies determine which data is removed. The eviction policy removes keys based on frequency of use, how recently used, randomly, expiration date, or a combination of these factors. The policy can also be set to `noeviction` to return a memory limit error when trying to insert more data. + +The default eviction policy for databases is `volatile-lru` which evicts the least recently used keys out of all keys with the `expire` field set. The default for Active-Active databases is `noeviction`. + +For more information, see [eviction policies]({{< relref "/operate/rs/7.4/databases/memory-performance/eviction-policy.md" >}}). + +## Database persistence + +Both RAM memory and flash memory are at risk of data loss if a server or process fails. Persisting your data to disk helps protect it against loss in those situations. You can configure persistence at the time of database creation, or by editing the database’s configuration. + +There are two main types of persistence strategies in Redis Enterprise Software: append-only files (AoF) and snapshots. + +Append-only files (AoF) keep a record of data changes and writes each change to the end of a file, allowing you to recover the dataset by replaying the writes in the append-only log. + +Snapshots capture all the data as it exists in one moment in time and writes it to disk, allowing you to recover the entire dataset as it existed at that moment in time. + +For more info on data persistence see [Database persistence with Redis Enterprise Software]({{< relref "/operate/rs/7.4/databases/configure/database-persistence.md" >}}) or [Durable Redis](https://redis.com/redis-enterprise/technology/durable-redis/). + +## Auto Tiering + +By default, Redis Enterprise Software stores your data entirely in [RAM](https://en.wikipedia.org/wiki/Random-access_memory) for improved performance. 
The [Auto Tiering]({{< relref "/operate/rs/7.4/databases/auto-tiering/" >}}) feature enables your data to span both RAM and [SSD](https://en.wikipedia.org/wiki/Solid-state_drive) storage ([flash memory](https://en.wikipedia.org/wiki/Flash_memory)). Keys are always stored in RAM, but Auto Tiering manages the location of their values. Frequently used (hot) values are stored in RAM, but infrequently used (warm) values are moved to flash memory. This saves on expensive RAM space, which gives you comparable performance at a lower cost for large datasets.
+
+For more info, see [Auto Tiering]({{< relref "/operate/rs/7.4/databases/auto-tiering/" >}}).
+
+## Shard placement
+
+The location of the primary and replica shards on the cluster nodes can impact your database performance.
+Primary shards and their corresponding replica shards are always placed on separate nodes for data resiliency and high availability.
+The shard placement policy helps to maintain optimal performance and resiliency.
+
+Redis Enterprise Software has two shard placement policies available:
+
+- **dense**: puts as many shards as possible on the smallest number of nodes
+- **sparse**: spreads the shards across as many nodes as possible
+
+For more info about the shard placement policy, see [Shard placement policy]({{< relref "/operate/rs/7.4/databases/memory-performance/shard-placement-policy.md" >}}).
+
+## Metrics
+
+From the Redis Enterprise Software Cluster Manager UI, you can monitor the performance of your clusters, nodes, databases, and shards with real-time metrics. You can also enable alerts for node, cluster, or database events such as high memory usage or throughput.
+
+With the Redis Enterprise Software API, you can also integrate Redis Enterprise metrics into other monitoring environments, such as Prometheus.
+
+For more info about monitoring with Redis Enterprise Software, see [Monitoring with metrics and alerts]({{< relref "/operate/rs/7.4/clusters/monitoring/_index.md" >}}) and [Memory statistics]({{< relref "/operate/rs/7.4/databases/memory-performance/memory-limit#memory-metrics" >}}).
+
+## Scaling databases
+
+Each Redis Enterprise cluster can contain multiple databases. In Redis,
+databases represent data that belong to a single application, tenant, or
+microservice. Redis Enterprise is built to scale to 100s of databases
+per cluster to provide flexible and efficient multi-tenancy models.
+
+Each database can contain few or many Redis shards. Sharding is
+transparent to Redis applications. Master shards in the database process
+data operations for a given subset of keys. The number of shards per
+database is configurable and depends on the throughput needs of the
+applications. Databases in Redis Enterprise can be resharded into more
+Redis shards to scale throughput while maintaining sub-millisecond
+latencies. Resharding is performed without downtime.
+
+{{< image filename="/images/rs/sharding.png" >}}
+
+Redis Enterprise places master shards and replicas in separate
+nodes, racks, and zones, and uses in-memory replication to protect data
+against failures.
+
+In Redis Enterprise, each database has a quota of RAM. The quota cannot
+exceed the limits of the RAM available on the node. However, with Redis
+Enterprise Flash, RAM is extended to the local flash drive (SATA, NVMe
+SSDs, etc.). The total quota of the database can take advantage of both
+RAM and Flash drive. The administrator can choose the RAM vs Flash ratio
+and adjust that anytime in the lifetime of the database without
+downtime.
+
+With Auto Tiering, instead of storing all keys and data for a
+given shard in RAM, less frequently accessed values are pushed to flash.
+If applications need to access a value that is in flash, Redis
+Enterprise automatically brings the value into RAM. Depending on the
+flash hardware in use, applications experience slightly higher latency
+when bringing values back into RAM from flash. However, subsequent
+accesses to the same value are fast once the value is in RAM.
+---
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: This page will help you find database management information in the Databases
+  section.
+hideListLinks: false
+linktitle: Databases
+title: Manage databases
+weight: 37
+url: '/operate/rs/7.4/databases/'
+---
+
+You can manage your Redis Enterprise Software databases with several different tools:
+
+- Cluster Manager UI (the web-based user interface)
+- Command-line tools ([`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}), [`redis-cli`]({{< relref "/develop/tools/cli" >}}), [`crdb-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli" >}}))
+- [REST API]({{< relref "/operate/rs/7.4/references/rest-api/_index.md" >}})
+
+
+---
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+db_type: database
+description: How to migrate database shards to other nodes in a Redis Software cluster.
+linkTitle: Migrate shards
+title: Migrate database shards
+toc: 'true'
+weight: 32
+url: '/operate/rs/7.4/databases/migrate-shards/'
+---
+
+To migrate database shards to other nodes in the cluster, you can use the [`rladmin migrate`]({{}}) command or [REST API requests]({{}}).
+
+## Use cases for shard migration
+
+Migrate database shards to a different node in the following scenarios:
+
+- Before node removal.
+
+- To balance the database manually in case of latency issues or uneven load distribution across nodes.
+
+- To manage node resources, such as memory usage.
+
+## Considerations for shard migration
+
+For databases with replication:
+
+- Migrating a shard will not cause disruptions since a primary shard will still be available.
+
+- If you try to migrate a primary shard, it will be demoted to a replica shard and a replica shard will be promoted to primary before the migration. If you set `"preserve_roles": true` in the request, a second failover will occur after the migration finishes to change the migrated shard's role back to primary.
+
+For databases without replication, the migrated shard will not be available until the migration is done.
+
+Connected clients shouldn't be disconnected in either case.
+
+If too many primary shards are placed on the same node, it can impact database performance.
+
+## Migrate specific shard
+
+To migrate a specific database shard, use one of the following methods:
+
+- [`rladmin migrate shard`]({{}}):
+
+  ```sh
+  rladmin migrate shard <shard_uid> target_node <node_uid>
+  ```
+
+- [Migrate shard]({{}}) REST API request:
+
+  Specify the ID of the shard to migrate in the request path and the destination node's ID as the `target_node_uid` in the request body. See the [request reference]({{}}) for more options.
+
+  ```sh
+  POST /v1/shards/<shard_uid>/actions/migrate
+  {
+    "target_node_uid": <node_uid>
+  }
+  ```
+
+  Example JSON response body:
+
+  ```json
+  {
+    "action_uid": "<action_uid>",
+    "description": "Migrate was triggered"
+  }
+  ```
+
+  You can track the action's progress with a [`GET /v1/actions/<action_uid>`]({{}}) request. A `curl` example follows this list.
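+
+For example, the following `curl` sketch triggers a migration and then polls the returned action. The cluster address, the default REST API port `9443`, the credentials, and the shard and node IDs (`12` and `3`) are placeholders; adjust them for your deployment. The `-k` flag skips certificate verification and is only appropriate with self-signed certificates:
+
+```sh
+# Trigger migration of shard 12 to node 3 (replace the address, credentials, and IDs).
+curl -k -u "admin@example.com:<password>" \
+  -H "Content-Type: application/json" \
+  -X POST "https://cluster.example.com:9443/v1/shards/12/actions/migrate" \
+  -d '{ "target_node_uid": 3 }'
+
+# Poll the action returned in "action_uid" until it reports completion.
+curl -k -u "admin@example.com:<password>" \
+  "https://cluster.example.com:9443/v1/actions/<action_uid>"
+```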
+
+## Migrate multiple shards
+
+To migrate multiple database shards, use one of the following methods:
+
+- [`rladmin migrate shard`]({{}}):
+
+  ```sh
+  rladmin migrate shard <shard_uid1> <shard_uid2> target_node <node_uid>
+  ```
+
+- [Migrate multiple shards]({{}}) REST API request:
+
+  Specify the IDs of the shards to migrate in the `shard_uids` list and the destination node's ID as the `target_node_uid` in the request body. See the [request reference]({{}}) for more options.
+
+  ```sh
+  POST /v1/shards/actions/migrate
+  {
+    "shard_uids": ["<shard_uid1>","<shard_uid2>","<shard_uid3>"],
+    "target_node_uid": <node_uid>
+  }
+  ```
+
+  Example JSON response body:
+
+  ```json
+  {
+    "action_uid": "<action_uid>",
+    "description": "Migrate was triggered"
+  }
+  ```
+
+  You can track the action's progress with a [`GET /v1/actions/<action_uid>`]({{}}) request.
+
+## Migrate all shards from a node
+
+To migrate all shards from a specific node to another node, run [`rladmin migrate all_shards`]({{}}):
+
+```sh
+rladmin migrate node <origin_node_uid> all_shards target_node <target_node_uid>
+```
+
+## Migrate primary shards
+
+You can use the [`rladmin migrate all_master_shards`]({{}}) command to migrate all primary shards for a specific database or node to another node in the cluster.
+
+To migrate a specific database's primary shards:
+
+```sh
+rladmin migrate db db:<id> all_master_shards target_node <node_uid>
+```
+
+To migrate all primary shards from a specific node:
+
+```sh
+rladmin migrate node <origin_node_uid> all_master_shards target_node <target_node_uid>
+```
+
+## Migrate replica shards
+
+You can use the [`rladmin migrate all_slave_shards`]({{}}) command to migrate all replica shards for a specific database or node to another node in the cluster.
+
+To migrate a specific database's replica shards:
+
+```sh
+rladmin migrate db db:<id> all_slave_shards target_node <node_uid>
+```
+
+To migrate all replica shards from a specific node:
+
+```sh
+rladmin migrate node <origin_node_uid> all_slave_shards target_node <target_node_uid>
+```
+---
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: Describes internode encryption, which improves the security of data in
+  transit.
+linkTitle: Internode encryption
+title: Internode encryption
+weight: 15
+url: '/operate/rs/7.4/security/encryption/internode-encryption/'
+---
+As of v6.2.4, Redis Enterprise Software supports _internode encryption_, which encrypts internal communication between nodes. This improves the security of data as it travels within a cluster.
+
+Internode encryption is enabled for the _control plane_, which manages the cluster and its databases.
+
+Internode encryption is supported for the _data plane_, which encrypts communication used to replicate shards between nodes and proxy communication with shards located on different nodes.
+
+The following diagram shows how this works.
+
+{{A diagram showing the interaction between data internode encryption, control plane encryption, and various elements of a cluster.}}
+
+Data internode encryption is disabled by default for individual databases in order to optimize for performance. Encryption adds latency and overhead; the impact is measurable and varies according to the database, its field types, and the details of the underlying use case.
+
+You can enable data internode encryption for a database by changing the database configuration settings. This lets you choose when to favor performance and when to encrypt data.
+
+## Prerequisites
+
+Internode encryption requires certain prerequisites.
+
+You need to:
+
+- Upgrade all nodes in the cluster to v6.2.4 or later.
+
+- Open port 3342 for the TLS channel used for encrypted communication. An example of opening the port with `firewalld` follows this list.
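+
+For example, on a host that uses `firewalld`, you might open the port as follows. Other firewall tooling, security groups, or managed environments will differ:
+
+```sh
+# Allow the TLS channel used for internode encryption (TCP port 3342), then reload the rules.
+sudo firewall-cmd --permanent --add-port=3342/tcp
+sudo firewall-cmd --reload
+```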
+ + +## Enable data internode encryption + +To enable internode encryption for a database (also called _data internode encryption_), you need to enable the appropriate setting for each database you wish to encrypt. To do so, you can: + +- Use the Cluster Manager UI to enable the **Internode Encryption** setting from the database **Security** screen. + +- Use the `rladmin` command-line utility to set the [data_internode_encryption]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-db" >}}) setting for the database: + + ``` shell + rladmin tune db data_internode_encryption enabled + ``` + +- Use the Redis Enterprise Software REST API to set the `data_internode_encryption` setting for the database. + + ``` rest + put /v1/bdbs/${database_id} + { “data_internode_encryption” : true } + ``` + +When you change the data internode encryption setting for a database, all active remote client connections are disconnected. This restarts the internal (DMC) proxy and disconnects all client connections. + +## Change cluster policy + +To enable internode encryption for new databases by default, use one of the following methods: + +- Cluster Manager UI + + 1. On the **Databases** screen, select {{< image filename="/images/rs/buttons/button-toggle-actions-vertical.png#no-click" alt="Toggle actions button" width="22px" class="inline" >}} to open a list of additional actions. + + 1. Select **Database defaults**. + + 1. Go to **Internode Encryption** and click **Change**. + + 1. Select **Enabled** to enable internode encryption for new databases by default. + + 1. Click **Change**. + + 1. Select **Save**. + +- [rladmin tune cluster]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster data_internode_encryption enabled + ``` + +- [Update cluster policy]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "data_internode_encryption": true } + ``` + +## Encryption ciphers and settings + +To encrypt internode communications, Redis Enterprise Software uses TLS 1.2 and the following cipher suites: + +- ECDHE-RSA-AES256-GCM-SHA384 +- ECDHE-RSA-AES128-GCM-SHA256 + +As of Redis Enterprise Software v7.4, internode encryption also supports TLS 1.3 with the following cipher suites: + +- TLS_AES_128_GCM_SHA256 +- TLS_AES_256_GCM_SHA384 + +The TLS layer determines which TLS version to use. + +No configurable settings are exposed; internode encryption is used internally within a cluster and not exposed to any outside service. + +## Certificate authority and rotation + +Starting with v6.2.4, internode communication is managed, in part, by two certificates: one for the control plane and one for the data plane. These certificates are signed by a private certificate authority (CA). The CA is not exposed outside of the cluster, so it cannot be accessed by external processes or services. In addition, each cluster generates a unique CA that is not used anywhere else. + +The private CA is generated when a cluster is created or upgraded to 6.2.4. + +When nodes join the cluster, the cluster CA is used to generate certificates for the new node, one for each plane. Certificates signed by the private CA are not shared between clusters and they're not exposed outside the cluster. + +All certificates signed by the internal CA expire after ninety (90) days and automatically rotate every thirty (30) days. 
Alerts also monitor certificate expiration and trigger when certificate expiration falls below 45 days. If you receive such an alert, contact support. + +You can use the Redis Enterprise Software REST API to rotate certificates manually: + +``` rest +POST /v1/cluster/certificates/rotate +``` +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linkTitle: Configure TLS protocol +title: Configure TLS protocol +weight: 50 +url: '/operate/rs/7.4/security/encryption/tls/tls-protocols/' +--- + +You can change TLS protocols to improve the security of your Redis Enterprise cluster and databases. The default settings are in line with industry best practices, but you can customize them to match the security policy of your organization. + +## Configure TLS protocol + +The communications for which you can modify TLS protocols are: + +- Control plane - The TLS configuration for cluster administration. +- Data plane - The TLS configuration for the communication between applications and databases. +- Discovery service (Sentinel) - The TLS configuration for the [discovery service]({{< relref "/operate/rs/7.4/databases/durability-ha/discovery-service.md" >}}). + +You can configure TLS protocols with the [Cluster Manager UI](#edit-tls-ui), [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/config" >}}), or the [REST API]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster#put-cluster" >}}). + +{{}} +- After you set the minimum TLS version, Redis Enterprise Software does not accept communications with TLS versions older than the specified version. + +- If you set TLS 1.3 as the minimum TLS version, clients must support TLS 1.3 to connect to Redis Enterprise. +{{}} + +TLS support depends on the operating system. You cannot enable support for protocols or versions that aren't supported by the operating system running Redis Enterprise Software. In addition, updates to the operating system or to Redis Enterprise Software can impact protocol and version support. + +If you have trouble enabling specific versions of TLS, verify that they're supported by your operating system and that they're configured correctly. + +{{}} +TLSv1.2 is generally recommended as the minimum TLS version for encrypted communications. Check with your security team to confirm which TLS protocols meet your organization's policies. +{{}} + +### Edit TLS settings in the UI {#edit-tls-ui} + +To configure minimum TLS versions using the Cluster Manager UI: + +1. Go to **Cluster > Security**, then select the **TLS** tab. + +1. Click **Edit**. + +1. Select the minimum TLS version for cluster connections, database connections, and the discovery service: + + {{Cluster > Security > TLS settings in edit mode in the Cluster Manager UI.}} + +1. Select the TLS mode for the discovery service: + + - **Allowed** - Allows both TLS and non-TLS connections + - **Required** - Allows only TLS connections + - **Disabled** - Allows only non-TLS connections + +1. Click **Save**. 
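+
+After you save your changes, you can verify which protocol versions the cluster accepts from a client machine. The following sketch uses `openssl s_client` (assuming OpenSSL is installed on the client and built with the relevant protocol support) against the Cluster Manager UI port (`8443` by default) at a hypothetical cluster address; the same kind of check works against a TLS-enabled database endpoint for the data plane setting:
+
+```sh
+# Confirm that a TLS 1.2 handshake is accepted by the control plane.
+openssl s_client -connect cluster.example.com:8443 -tls1_2 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
+
+# If the minimum version is TLS 1.2, forcing TLS 1.1 should fail the handshake.
+openssl s_client -connect cluster.example.com:8443 -tls1_1 </dev/null
+```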
+ +### Control plane TLS + +To set the minimum TLS protocol for the control plane using `rladmin`: + +- Default minimum TLS protocol: TLSv1.2 +- Syntax: `rladmin cluster config min_control_TLS_version ` +- TLS versions available: + - For TLSv1.2 - 1.2 + - For TLSv1.3 - 1.3 + +For example: + +```sh +rladmin cluster config min_control_TLS_version 1.2 +``` + +### Data plane TLS + +To set the minimum TLS protocol for the data path using `rladmin`: + +- Default minimum TLS protocol: TLSv1.2 +- Syntax: `rladmin cluster config min_data_TLS_version ` +- TLS versions available: + - For TLSv1.2 - 1.2 + - For TLSv1.3 - 1.3 + +For example: + +```sh +rladmin cluster config min_data_TLS_version 1.2 +``` + + +### Discovery service TLS + +To enable TLS for the discovery service using `rladmin`: + +- Default: Allows both TLS and non-TLS connections +- Syntax: `rladmin cluster config sentinel_tls_mode ` +- `ssl_policy` values available: + - `allowed` - Allows both TLS and non-TLS connections + - `required` - Allows only TLS connections + - `disabled` - Allows only non-TLS connections + +To set the minimum TLS protocol for the discovery service using `rladmin`: + +- Default minimum TLS protocol: TLSv1.2 +- Syntax: `rladmin cluster config min_sentinel_TLS_version ` +- TLS versions available: + - For TLSv1.2 - 1.2 + - For TLSv1.3 - 1.3 + +To enforce a minimum TLS version for the discovery service, run the following commands: + +1. Allow only TLS connections: + + ```sh + rladmin cluster config sentinel_tls_mode required + ``` + +1. Set the minimal TLS version: + + ```sh + rladmin cluster config min_sentinel_TLS_version 1.2 + ``` + +1. Restart the discovery service on all cluster nodes to apply your changes: + + ```sh + supervisorctl restart sentinel_service + ``` +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows how to configure cipher suites. +linkTitle: Configure cipher suites +title: Configure cipher suites +weight: 60 +url: '/operate/rs/7.4/security/encryption/tls/ciphers/' +--- + +Ciphers are algorithms that help secure connections between clients and servers. You can change the ciphers to improve the security of your Redis Enterprise cluster and databases. The default settings are in line with industry best practices, but you can customize them to match the security policy of your organization. 
+ +## TLS 1.2 cipher suites + +| Name | Configurable | Description | +|------------|--------------|-------------| +| control_cipher_suites | ✅ Yes | Cipher list for TLS 1.2 communications for cluster administration (control plane) | +| data_cipher_list | ✅ Yes | Cipher list for TLS 1.2 communications between applications and databases (data plane) | +| sentinel_cipher_suites | ✅ Yes | Cipher list for [discovery service]({{< relref "/operate/rs/7.4/databases/durability-ha/discovery-service" >}}) (Sentinel) TLS 1.2 communications | + +## TLS 1.3 cipher suites + +| Name | Configurable | Description | +|------------|--------------|-------------| +| control_cipher_suites_tls_1_3 | ❌ No | Cipher list for TLS 1.3 communications for cluster administration (control plane) | +| data_cipher_suites_tls_1_3 | ✅ Yes | Cipher list for TLS 1.3 communications between applications and databases (data plane) | +| sentinel_cipher_suites_tls_1_3 | ❌ No | Cipher list for [discovery service]({{< relref "/operate/rs/7.4/databases/durability-ha/discovery-service" >}}) (Sentinel) TLS 1.3 communications | + +## Configure cipher suites + +You can configure ciphers with the [Cluster Manager UI](#edit-ciphers-ui), [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/config" >}}), or the [REST API]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster#put-cluster" >}}). + +{{}} +Configuring cipher suites overwrites existing ciphers rather than appending new ciphers to the list. +{{}} + +When you modify your cipher suites, make sure: + +- The configured TLS version matches the required cipher suites. +- The certificates in use are properly signed to support the required cipher suites. + +{{}} +- Redis Enterprise Software doesn't support static [Diffie–Hellman (`DH`) key exchange](https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange) ciphers. + +- Support for Ephemeral Diffie–Hellman (ECDHE) key exchange ciphers depends on the operating system version and security policy. +{{}} + +### Edit cipher suites in the UI {#edit-ciphers-ui} + +To configure cipher suites using the Cluster Manager UI: + +1. Go to **Cluster > Security**, then select the **TLS** tab. + +1. In the **Cipher suites lists** section, click **Configure**: + + {{Cipher suites lists as shown in the Cluster Manager UI.}} + +1. Edit the TLS cipher suites in the text boxes: + + {{Edit cipher suites drawer in the Cluster Manager UI.}} + +1. Click **Save**. + +### Control plane cipher suites {#control-plane-ciphers-tls-1-2} + +As of Redis Enterprise Software version 6.0.12, control plane cipher suites can use the BoringSSL library format for TLS connections to the Cluster Manager UI. See the BoringSSL documentation for a full list of available [BoringSSL configurations](https://github.com/google/boringssl/blob/master/ssl/test/runner/cipher_suites.go#L99-L131). 
+ +#### Configure TLS 1.2 control plane cipher suites + +To configure TLS 1.2 cipher suites for cluster communication, use the following [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) command syntax: + +```sh +rladmin cluster config control_cipher_suites +``` + +See the example below to configure cipher suites for the control plane: + +```sh +rladmin cluster config control_cipher_suites ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305 +``` +{{}} +- The deprecated 3DES and RC4 cipher suites are no longer supported. +{{}} + + +### Data plane cipher suites {#data-plane-ciphers-tls-1-2} + +Data plane cipher suites use the OpenSSL library format in Redis Enterprise Software version 6.0.20 or later. For a list of available OpenSSL configurations, see [Ciphers](https://www.openssl.org/docs/man1.1.1/man1/ciphers.html) (OpenSSL). + +#### Configure TLS 1.2 data plane cipher suites + +To configure TLS 1.2 cipher suites for communications between applications and databases, use the following [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) command syntax: + +```sh +rladmin cluster config data_cipher_list +``` + +See the example below to configure cipher suites for the data plane: + +```sh +rladmin cluster config data_cipher_list AES128-SHA:AES256-SHA +``` +{{}} +- The deprecated 3DES and RC4 cipher suites are no longer supported. +{{}} + +#### Configure TLS 1.3 data plane cipher suites + +To configure TLS 1.3 cipher suites for communications between applications and databases, use the following [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) command syntax: + +```sh +rladmin cluster config data_cipher_suites_tls_1_3 +``` + +The following example configures TLS 1.3 cipher suites for the data plane: + +```sh +rladmin cluster config data_cipher_suites_tls_1_3 TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256 +``` + +### Discovery service cipher suites {#discovery-service-ciphers-tls-1-2} + +Sentinel service cipher suites use the golang.org OpenSSL format for [discovery service]({{< relref "/operate/rs/7.4/databases/durability-ha/discovery-service" >}}) TLS connections in Redis Enterprise Software version 6.0.20 or later. See their documentation for a list of [available configurations](https://golang.org/src/crypto/tls/cipher_suites.go). + +#### Configure TLS 1.2 discovery service cipher suites + +To configure TLS 1.2 cipher suites for the discovery service cipher suites, use the following [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) command syntax: + +```sh +rladmin cluster config sentinel_cipher_suites +``` + +See the example below to configure cipher suites for the sentinel service: + +```sh +rladmin cluster config sentinel_cipher_suites TLS_RSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 +``` +--- +Title: Enable TLS +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows how to enable TLS. 
+linkTitle: Enable TLS +weight: 40 +url: '/operate/rs/7.4/security/encryption/tls/enable-tls/' +--- + +You can use TLS authentication for one or more of the following types of communication: + +- Communication from clients (applications) to your database +- Communication from your database to other clusters for replication using [Replica Of]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/" >}}) +- Communication to and from your database to other clusters for synchronization using [Active-Active]({{< relref "/operate/rs/7.4/databases/active-active/_index.md" >}}) + +{{}} +When you enable or turn off TLS, the change applies to new connections but does not affect existing connections. Clients must close existing connections and reconnect to apply the change. +{{}} + +## Enable TLS for client connections {#client} + +To enable TLS for client connections: + +1. From your database's **Security** tab, select **Edit**. + +1. In the **TLS - Transport Layer Security for secure connections** section, make sure the checkbox is selected. + +1. In the **Apply TLS for** section, select **Clients and databases + Between databases**. + +1. Select **Save**. + +To enable mutual TLS for client connections: + +1. Select **Mutual TLS (Client authentication)**. + + {{Mutual TLS authentication configuration.}} + +1. For each client certificate, select **+ Add certificate**, paste or upload the client certificate, then select **Done**. + + If your database uses Replica Of or Active-Active replication, you also need to add the syncer certificates for the participating clusters. See [Enable TLS for Replica Of cluster connections](#enable-tls-for-replica-of-cluster-connections) or [Enable TLS for Active-Active cluster connections](#enable-tls-for-active-active-cluster-connections) for instructions. + +1. You can configure **Additional certificate validations** to further limit connections to clients with valid certificates. + + Additional certificate validations occur only when loading a [certificate chain](https://en.wikipedia.org/wiki/Chain_of_trust#Computer_security) that includes the [root certificate](https://en.wikipedia.org/wiki/Root_certificate) and intermediate [CA](https://en.wikipedia.org/wiki/Certificate_authority) certificate but does not include a leaf (end-entity) certificate. If you include a leaf certificate, mutual client authentication skips any additional certificate validations. + + 1. Select a certificate validation option. + + | Validation option | Description | + |-------------------|-------------| + | _No validation_ | Authenticates clients with valid certificates. No additional validations are enforced. | + | _By Subject Alternative Name_ | A client certificate is valid only if its Common Name (CN) matches an entry in the list of valid subjects. Ignores other [`Subject`](https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.6) attributes. | + | _By full Subject Name_ | A client certificate is valid only if its [`Subject`](https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.6) attributes match an entry in the list of valid subjects. | + + 1. If you selected **No validation**, you can skip this step. Otherwise, select **+ Add validation** to create a new entry and then enter valid [`Subject`](https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.6) attributes for your client certificates. All `Subject` attributes are case-sensitive. + + | Subject attribute
(case-sensitive) | Description | + |-------------------|-------------| + | _Common Name (CN)_ | Name of the client authenticated by the certificate (_required_) | + | _Organization (O)_ | The client's organization or company name | + | _Organizational Unit (OU)_ | Name of the unit or department within the organization | + | _Locality (L)_ | The organization's city | + | _State / Province (ST)_ | The organization's state or province | + | _Country (C)_ | 2-letter code that represents the organization's country | + + You can only enter a single value for each field, except for the _Organizational Unit (OU)_ field. If your client certificate has a `Subject` with multiple _Organizational Unit (OU)_ values, press the `Enter` or `Return` key after entering each value to add multiple Organizational Units. + + {{An example that shows adding a certificate validation with multiple organizational units.}} + + **Breaking change:** If you use the [REST API]({{< relref "/operate/rs/7.4/references/rest-api" >}}) instead of the Cluster Manager UI to configure additional certificate validations, note that `authorized_names` is deprecated as of Redis Enterprise v6.4.2. Use `authorized_subjects` instead. See the [BDB object reference]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}) for more details. + +1. Select **Save**. + + {{< note >}} +By default, Redis Enterprise Software validates client certificate expiration dates. You can use `rladmin` to turn off this behavior. + +```sh +rladmin tune db < db:id | name > mtls_allow_outdated_certs enabled +``` + + {{< /note >}} + +## Enable TLS for Active-Active cluster connections + +To enable TLS for Active-Active cluster connections: + +1. If you are using the new Cluster Manager UI, switch to the legacy admin console. + + {{Select switch to legacy admin console from the dropdown.}} + +1. [Retrieve syncer certificates.](#retrieve-syncer-certificates) + +1. [Configure TLS certificates for Active-Active.](#configure-tls-certificates-for-active-active) + +1. [Configure TLS on all participating clusters.](#configure-tls-on-all-participating-clusters) + +{{< note >}} +You cannot enable or turn off TLS after the Active-Active database is created, but you can change the TLS configuration. +{{< /note >}} + +### Retrieve syncer certificates + +For each participating cluster, copy the syncer certificate from the **general** settings tab. + +{{< image filename="/images/rs/general-settings-syncer-cert.png" alt="general-settings-syncer-cert" >}} + +### Configure TLS certificates for Active-Active + +1. During database creation (see [Create an Active-Active Geo-Replicated Database]({{< relref "/operate/rs/7.4/databases/active-active/create.md" >}}), select **Edit** from the **configuration** tab. +1. Enable **TLS**. + - **Enforce client authentication** is selected by default. If you clear this option, you will still enforce encryption, but TLS client authentication will be deactivated. +1. Select **Require TLS for CRDB communication only** from the dropdown menu. + {{< image filename="/images/rs/crdb-tls-all.png" alt="crdb-tls-all" >}} +1. Select **Add** {{< image filename="/images/rs/icon_add.png#no-click" alt="Add" >}} +1. Paste a syncer certificate into the text box. + {{< image filename="/images/rs/database-tls-replica-certs.png" alt="Database TLS Configuration" >}} +1. Save the syncer certificate. {{< image filename="/images/rs/icon_save.png#no-click" alt="Save" >}} +1. Repeat this process, adding the syncer certificate for each participating cluster. 
+1. Optional: If also you want to require TLS for client connections, select **Require TLS for All Communications** from the dropdown and add client certificates as well. +1. Select **Update** at the bottom of the screen to save your configuration. + +### Configure TLS on all participating clusters + +Repeat this process on all participating clusters. + +To enforce TLS authentication, Active-Active databases require syncer certificates for each cluster connection. If every participating cluster doesn't have a syncer certificate for every other participating cluster, synchronization will fail. + +## Enable TLS for Replica Of cluster connections + +{{}} +--- +Title: Transport Layer Security (TLS) +alwaysopen: false +categories: +- docs +- operate +- rs +description: An overview of Transport Layer Security (TLS). +hideListLinks: true +linkTitle: TLS +weight: 10 +url: '/operate/rs/7.4/security/encryption/tls/' +--- +[Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), a successor to SSL, ensures the privacy of data sent between applications and Redis databases. TLS also secures connections between Redis Enterprise Software nodes. + +You can [use TLS authentication]({{< relref "/operate/rs/7.4/security/encryption/tls/enable-tls" >}}) for the following types of communication: + +- Communication from clients (applications) to your database +- Communication from your database to other clusters for replication using [Replica Of]({{< relref "/operate/rs/7.4/databases/import-export/replica-of" >}}) +- Communication to and from your database to other clusters for synchronization using [Active-Active]({{< relref "/operate/rs/7.4/databases/active-active/" >}}) + +## Protocols and ciphers + +TLS protocols and ciphers define the overall suite of algorithms that clients are able to connect to the servers with. + +You can change the [TLS protocols]({{< relref "/operate/rs/7.4/security/encryption/tls/tls-protocols" >}}) and [ciphers]({{< relref "/operate/rs/7.4/security/encryption/tls/ciphers" >}}) to improve the security of your Redis Enterprise cluster and databases. The default settings are in line with industry best practices, but you can customize them to match the security policy of your organization. +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Enable PEM encryption to encrypt all private keys on disk. +linkTitle: Encrypt private keys +title: Encrypt private keys +toc: 'true' +weight: 50 +url: '/operate/rs/7.4/security/encryption/pem-encryption/' +--- + +Enable PEM encryption to automatically encrypt all private keys on disk. Public keys (`.cert` files) are not encrypted. + +When certificates are rotated, the encrypted private keys are also rotated. + +## Enable PEM encryption + +To enable PEM encryption and encrypt private keys on the disk, use [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) or the [REST API]({{< relref "/operate/rs/7.4/references/rest-api" >}}). 
+ + +- [`rladmin cluster config`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/config" >}}): + + ```sh + rladmin cluster config encrypt_pkeys enabled + ``` + +- [Update cluster settings]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster#put-cluster" >}}) REST API request: + + ```sh + PUT /v1/cluster + { "encrypt_pkeys": true } + ``` + +## Deactivate PEM encryption + +To deactivate PEM encryption and decrypt private keys on the disk, use [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) or the [REST API]({{< relref "/operate/rs/7.4/references/rest-api" >}}). + +- [`rladmin cluster config`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/config" >}}): + + ```sh + rladmin cluster config encrypt_pkeys disabled + ``` + +- [Update cluster settings]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster#put-cluster" >}}) REST API request: + + ```sh + PUT /v1/cluster + { "encrypt_pkeys": false } + ``` +--- +Title: Encryption in Redis Enterprise Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: Encryption in Redis Enterprise Software. +hideListLinks: true +linkTitle: Encryption +toc: 'true' +weight: 60 +url: '/operate/rs/7.4/security/encryption/' +--- + +Redis Enterprise Software uses encryption to secure communications between clusters, nodes, databases, and clients and to protect [data in transit](https://en.wikipedia.org/wiki/Data_in_transit), [at rest](https://en.wikipedia.org/wiki/Data_at_rest), and [in use](https://en.wikipedia.org/wiki/Data_in_use). + +## Encrypt data in transit + +### TLS + +Redis Enterprise Software uses [Transport Layer Security (TLS)]({{}}) to encrypt communications for the following: + +- Cluster Manager UI + +- Command-line utilities + +- REST API + +- Internode communication + +You can also [enable TLS authentication]({{< relref "/operate/rs/7.4/security/encryption/tls/enable-tls" >}}) for the following: + +- Communication from clients or applications to your database + +- Communication from your database to other clusters for replication using [Replica Of]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/" >}}) + +- Communication to and from your database to other clusters for [Active-Active]({{< relref "/operate/rs/7.4/databases/active-active/_index.md" >}}) synchronization + +### Internode encryption + +[Internode encryption]({{}}) uses TLS to encrypt data in transit between cluster nodes. + +By default, internode encryption is enabled for the control plane, which manages the cluster and databases. If you also want to encrypt replication and proxy communications between database shards on different nodes, [enable data internode encryption]({{< relref "/operate/rs/7.4/security/encryption/internode-encryption#enable-data-internode-encryption" >}}). + +### Require HTTPS for REST API endpoints + +By default, the Redis Enterprise Software API supports communication over HTTP and HTTPS. However, you can [turn off HTTP support]({{< relref "/operate/rs/7.4/references/rest-api/encryption" >}}) to ensure that API requests are encrypted. + +## Encrypt data at rest + +### File system encryption + +To encrypt data stored on disk, use file system-based encryption capabilities available on Linux operating systems before you install Redis Enterprise Software. + +### Private key encryption + +Enable PEM encryption to [encrypt all private keys]({{< relref "/operate/rs/7.4/security/encryption/pem-encryption" >}}) on disk. 
+ +## Encrypt data in use + +### Client-side encryption + +Use client-side encryption to encrypt the data an application stores in a Redis database. The application decrypts the data when it retrieves it from the database. + +You can add client-side encryption logic to your application or use built-in client functions. + +Client-side encryption has the following limitations: + +- Operations that must operate on the data, such as increments, comparisons, and searches will not function properly. + +- Increases management overhead. + +- Reduces performance. +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes how to audit connection events. +linkTitle: Audit events +title: Audit connection events +weight: 15 +url: '/operate/rs/7.4/security/audit-events/' +--- + +Starting with version 6.2.18, Redis Enterprise Software lets you audit database connection and authentication events. This helps you track and troubleshoot connection activity. + +The following events are tracked: + +- Database connection attempts +- Authentication requests, including requests for new and existing connections +- Database disconnects + +When tracked events are triggered, notifications are sent via TCP to an address and port defined when auditing is enabled. Notifications appear in near real time and are intended to be consumed by an external listener, such as a TCP listener, third-party service, or related utility. + +For development and testing environments, notifications can be saved to a local file; however, this is neither supported nor intended for production environments. + +For performance reasons, auditing is not enabled by default. In addition, auditing occurs in the background (asynchronously) and is non-blocking by design. That is, the action that triggered the notification continues without regard to the status of the notification or the listening tool. + +## Enable audit notifications + +### Cluster audits + +To enable auditing for your cluster, use: + +- `rladmin` + + ``` + rladmin cluster config auditing db_conns \ + audit_protocol \ + audit_address
\ + audit_port \ + audit_reconnect_interval \ + audit_reconnect_max_attempts + ``` + + where: + + - _audit\_protocol_ indicates the protocol used to process notifications. For production systems, _TCP_ is the only value. + + - _audit\_address_ defines the TCP/IP address where one can listen for notifications + + - _audit\_port_ defines the port where one can listen for notifications + + - _audit\_reconnect\_interval_ defines the interval (in seconds) between attempts to reconnect to the listener. Default is 1 second. + + - _audit\_reconnect\_max\_attempts_ defines the maximum number of attempts to reconnect. Default is 0. (infinite) + + Development systems can set _audit\_protocol_ to `local` for testing and training purposes; however, this setting is _not_ supported for production use. + + When `audit_protocol` is set to `local`, `
` should be set to a [stream socket](https://man7.org/linux/man-pages/man7/unix.7.html) defined on the machine running Redis Enterprise and _``_ should not be specified: + + ``` + rladmin cluster config auditing db_conns \ + audit_protocol local audit_address + ``` + + The output file (and path) must be accessible by the user and group running Redis Enterprise Software. + +- the [REST API]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/auditing-db-conns#put-cluster-audit-db-conns" >}}) + + ``` + PUT /v1/cluster/auditing/db_conns + { + "audit_address": "
", + "audit_port": , + "audit_protocol": "TCP", + "audit_reconnect_interval": , + "audit_reconnect_max_attempts": + } + ``` + + where `
` is a string containing the TCP/IP address, `` is a numeric value representing the port, `` is a numeric value representing the interval in seconds, and `` is a numeric value representing the maximum number of attempts to execute. + +### Database audits + +Once auditing is enabled for your cluster, you can audit individual databases. To do so, use: + +- `rladmin` + + ``` + rladmin tune db db: db_conns_auditing enabled + ``` + + where the value of the _db:_ parameter is either the cluster ID of the database or the database name. + + To deactivate auditing, set `db_conns_auditing` to `disabled`. + + Use `rladmin info` to retrieve additional details: + + ``` + rladmin info db + rladmin info cluster + ``` + +- the [REST API]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs#put-bdbs" >}}) + + ``` + PUT /v1/bdbs/1 + { "db_conns_auditing": true } + ``` + + To deactivate auditing, set `db_conns_auditing` to `false`. + +You must enable auditing for your cluster before auditing a database; otherwise, an error appears: + +> _Error setting description: Unable to enable DB Connections Auditing before feature configurations are set. +> Error setting error_code: db_conns_auditing_config_missing_ + +To resolve this error, enable the protocol for your cluster _before_ attempting to audit a database. + +### Policy defaults for new databases + +To audit connections for new databases by default, use: + +- `rladmin` + + ``` + rladmin tune cluster db_conns_auditing enabled + ``` + + To deactivate this policy, set `db_conns_auditing` to `disabled`. + +- the [REST API]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) + + ``` + PUT /v1/cluster/policy + { "db_conns_auditing": true } + ``` + + To deactivate this policy, set `db_conns_auditing` to `false`. + +## Notification examples + +Audit event notifications are reported as JSON objects. + +### New connection + +This example reports a new connection for a database: + +``` json +{ + "ts":1655821384, + "new_conn": + { + "id":2285001002 , + "srcip":"127.0.0.1", + "srcp":"39338", + "trgip":"127.0.0.1", + "trgp":"12635", + "hname":"", + "bdb_name":"DB1", + "bdb_uid":"5" + } +} +``` + +### Authentication request + +Here is a sample authentication request for a database: + +``` json +{ + "ts":1655821384, + "action":"auth", + "id":2285001002 , + "srcip":"127.0.0.1", + "srcp":"39338", + "trgip":"127.0.0.1", + "trgp":"12635", + "hname":"", + "bdb_name":"DB1", + "bdb_uid":"5", + "status":2, + "username":"user_one", + "identity":"user:1", + "acl-rules":"~* +@all" +} +``` + +The `status` field reports the following: + +- Values of 2, 7, or 8 indicate success. + +- Values of 3 or 5 indicate that the client authentication is in progress and should conclude later. + +- Other values indicate failures. + +### Database disconnect + +Here's what's reported when a database connection is closed: + +``` json +{ + "ts":1655821384, + "close_conn": + { + "id":2285001002, + "srcip":"127.0.0.1", + "srcp":"39338", + "trgip":"127.0.0.1", + "trgp":"12635", + "hname":"", + "bdb_name":"DB1", + "bdb_uid":"5" + } +} +``` + +## Notification field reference + +The field value that appears immediately after the timestamp describes the action that triggered the notification. 
The following values may appear: + +- `new_conn` indicates a new external connection +- `new_int_conn` indicates a new internal connection +- `close_conn` occurs when a connection is closed +- `"action":"auth"` indicates an authentication request and can refer to new authentication requests or authorization checks on existing connections + +In addition, the following fields may also appear in audit event notifications: + +| Field name | Description | +|:---------:|-------------| +| `acl-rules` | ACL rules associated with the connection, which includes a rule for the `default` user. | +| `bdb_name` | Destination database name - The name of the database being accessed. | +| `bdb_uid` | Destination database ID - The cluster ID of the database being accessed. | +| `hname` | Client hostname - The hostname of the client. Currently empty; reserved for future use. | +| `id` | Connection ID - Unique connection ID assigned by the proxy. | +| `identity` | Identity - A unique ID the proxy assigned to the user for the current connection. | +| `srcip` | Source IP address - Source TCP/IP address of the client accessing the Redis database. | +| `srcp` | Source port - Port associated with the source IP address accessing the Redis database. Combine the port with the address to uniquely identify the socket. | +| `status` | Status result code - An integer representing the result of an authentication request. | +| `trgip` | Target IP address - The IP address of the destination being accessed by the action. | +| `trgp` | Target port - The port of the destination being accessed by the action. Combine the port with the destination IP address to uniquely identify the database being accessed. | +| `ts` | Timestamp - The date and time of the event, in [Coordinated Universal Time](https://en.wikipedia.org/wiki/Coordinated_Universal_Time) (UTC). Granularity is within one second. | +| `username` | Authentication username - Username associated with the connection; can include `default` for databases that allow default access. (Passwords are _not_ recorded). | + +## Status result codes + +The `status` field reports the results of an authentication request as an integer. Here's what different values mean: + +| Error value | Error code | Description | +|:-------------:|------------|-------------| +| `0` | AUTHENTICATION_FAILED | Invalid username and/or password. | +| `1` | AUTHENTICATION_FAILED_TOO_LONG | Username or password are too long. | +| `2` | AUTHENTICATION_NOT_REQUIRED | Client tried to authenticate, but authentication isn't necessary. | +| `3` | AUTHENTICATION_DIRECTORY_PENDING | Attempting to receive authentication info from the directory in async mode. | +| `4` | AUTHENTICATION_DIRECTORY_ERROR | Authentication attempt failed because there was a directory connection error. | +| `5` | AUTHENTICATION_SYNCER_IN_PROGRESS | Syncer SASL handshake. Return SASL response and wait for the next request. | +| `6` | AUTHENTICATION_SYNCER_FAILED | Syncer SASL handshake. Returned SASL response and closed the connection. | +| `7` | AUTHENTICATION_SYNCER_OK | Syncer authenticated. Returned SASL response. | +| `8` | AUTHENTICATION_OK | Client successfully authenticated. | + +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Monitor certificates on a Redis Enterprise cluster. +linkTitle: Monitor certificates +title: Monitor certificates +weight: 10 +url: '/operate/rs/7.4/security/certificates/monitor-certificates/' +--- + +You can monitor certificates used by Redis Enterprise Software. 
+ +### Monitor certificates with Prometheus + +Redis Enterprise Software exposes the expiration time (in seconds) of each certificate on each node. To learn how to monitor Redis Enterprise Software metrics using Prometheus, see the [Prometheus integration quick start]({{< relref "/integrate/prometheus-with-redis-enterprise/" >}}). + +Here are some examples of the `node_cert_expiration_seconds` metric: + +```sh +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="cm",node="1",path="/etc/opt/redislabs/cm_cert.pem"} 31104000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="api",node="1",path="/etc/opt/redislabs/api_cert.pem"} 31104000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="proxy",node="1",path="/etc/opt/redislabs/proxy_cert.pem"} 31104000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="metrics_exporter",node="1",path="/etc/opt/redislabs/metrics_exporter_cert.pem"} 31104000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="syncer",node="1",path="/etc/opt/redislabs/syncer_cert.pem"} 31104000.0 +``` + +The following certificates relate to [internode communication TLS encryption]({{< relref "/operate/rs/7.4/security/encryption/internode-encryption" >}}) and are automatically rotated by Redis Enterprise Software: + +```sh +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="ccs_internode_encryption",node="1",path="/etc/opt/redislabs/ccs_internode_encryption_cert.pem"} 2592000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="data_internode_encryption",node="1",path="/etc/opt/redislabs/data_internode_encryption_cert.pem"} 2592000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="mesh_ca_signed",node="1",path="/etc/opt/redislabs/mesh_ca_signed_cert.pem"} 2592000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="gossip_ca_signed",node="1",path="/etc/opt/redislabs/gossip_ca_signed_cert.pem"} 2592000.0 +``` +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Create self-signed certificates to install on a Redis Enterprise cluster. +linkTitle: Create certificates +title: Create certificates +weight: 10 +url: '/operate/rs/7.4/security/certificates/create-certificates/' +--- + +When you first install Redis Enterprise Software, self-signed certificates are created to enable encryption for Redis Enterprise endpoints. These certificates expire after a year (365 days) and must be renewed. + +You can renew these certificates by replacing them with new self-signed certificates or by replacing them with certificates signed by a [certificate authority](https://en.wikipedia.org/wiki/Certificate_authority) (CA). + +## Renew self-signed certificates + +As of [v6.2.18-70]({{< relref "/operate/rs/release-notes/rs-6-2-18-releases/rs-6-2-18-70" >}}), Redis Enterprise Software includes a script to generate self-signed certificates. + +By default, the `generate_self_signed_certs.sh` script is located in `/opt/redislabs/utils/`. + +Here, you learn how to use this script to generate new certificates and how to install them. + +### Step 1: Generate new certificates + +Sign in to the machine hosting the cluster's master node and then run the following command: + +``` bash +% sudo -u redislabs /opt/redislabs/utils/generate_self_signed_certs.sh \ + -f "" -d -t +``` + +where: + +- _\_ is the fully qualified domain name (FQDN) of the cluster. (This is the name given to the cluster when first created.) 
+- _\_ is an optional FQDN for the cluster. Multiple domain names are allowed, separated by whitespace. Quotation marks (`""`) should enclose the full set of names. +- _\_ is an integer specifying the number of days the certificate should be valid. We recommend against setting this longer than a year (365 days). + + _\_ is optional and defaults to `365`. + +- _\_ is a string identifying the name of the certificate to generate. + + The following values are supported: + + | Value | Description | + |-------|-------------| + | `api` | The REST API | + | `cm` | The Cluster Manager UI | + | `metrics` | The metrics exporter | + | `proxy` | The database endpoint | + | `syncer` | The synchronization process | + | `all` | Generates all certificates in a single operation | + + _Type_ is optional and defaults to `all`. + +When you run the script, it either reports success (`"Self signed cert generated successfully"`) or an error message. Use the error message to troubleshoot any issues. + +The following example generates all self signed certificates for `mycluster.example.com`; these certificates expire one year after the command is run: + +``` bash +$ sudo -u redislabs /opt/redislabs/utils/generate_self_signed_certs.sh \ + -f "mycluster.example.com"` +``` + +Suppose you want to create a Cluster Manager UI certificate to support two clusters for a period of two years. The following example shows how: + +``` bash +$ sudo -u redislabs /opt/redislabs/utils/generate_self_signed_certs.sh \ + -f "mycluster.example.com anothercluster.example.com" -d 730 -t cm +``` + +Here, a certificate file and certificate key are generated to support the following domains: + +``` text +mycluster.example.com +*.mycluster.example.com +anothercluster.example.com +*.anothercluster.example.com +``` + +### Step 2: Locate the new certificate files + +When successful, the script generates two .PEM files for each generated certificate: a certificate file and a certificate key, each named after the type of certificate generated (see earlier table for individual certificate names.) + +These files can be found in the `/tmp` directory. + +``` bash +$ ls -la /tmp/*.pem +``` + +### Step 3: Set permissions + +We recommend setting the permissions of your new certificate files to limit read and write access to the file owner and to set group and other user permissions to read access. + +``` bash +$ sudo chmod 644 /tmp/*.pem +``` + +### Step 4: Replace existing certificates {#replace-self-signed} + +You can use `rladmin` to replace the existing certificates with new certificates: + +``` console +$ rladmin cluster certificate set certificate_file \ + .pem key_file .pem +``` + +The following values are supported for the _\_ parameter: + +| Value | Description | +|-------|-------------| +| `api` | The REST API | +| `cm` | The Cluster Manager UI | +| `metrics_exporter` | The metrics exporter | +| `proxy` | The database endpoint | +| `syncer` | The synchronization process | + +You can also use the REST API. To learn more, see [Update certificates]({{< relref "/operate/rs/7.4/security/certificates/updating-certificates#how-to-update-certificates" >}}). + +## Create CA-signed certificates + +You can use certificates signed by a [certificate authority](https://en.wikipedia.org/wiki/Certificate_authority) (CA). + +For best results, use the following guidelines to create the certificates. + +### TLS certificate guidelines + +When you create certificates signed by a certificate authority, you need to create server certificates and client certificates. 
The following provide guidelines that apply to both certificates and guidance for each certificate type. + +#### Guidelines for server and client certificates + +1. Include the full [certificate chain](https://en.wikipedia.org/wiki/X.509#Certificate_chains_and_cross-certification) when creating certificate .PEM files for either server or client certificates. + +1. List (_chain_) certificates in the .PEM file in the following order: + + ``` text + -----BEGIN CERTIFICATE----- + Domain (leaf) certificate + -----END CERTIFICATE----- + -----BEGIN CERTIFICATE----- + Intermediate CA certificate + -----END CERTIFICATE---- + -----BEGIN CERTIFICATE----- + Trusted Root CA certificate + -----END CERTIFICATE----- + ``` + +#### Server certificate guidelines + +Server certificates support clusters. + +In addition to the general guidelines described earlier, the following guidelines apply to server certificates: + +1. Use the cluster's fully qualified domain name (FQDN) as the certificate Common Name (CN). + +1. Set the following values according to the values specified by your security team or certificate authority: + + - Country Name (C) + - State or Province Name (ST) + - Locality Name (L) + - Organization Name (O) + - Organization Unit (OU) + +1. The [Subject Alternative Name](https://en.wikipedia.org/wiki/Subject_Alternative_Name) (SAN) should include the following values based on the FQDN: + + ``` text + dns= + dns=*. + dns=internal. + dns=*.internal. + ``` + +1. The Extended Key Usage attribute should be set to `TLS Web Client Authentication` and `TLS Web Server Authentication`. + +1. We strongly recommend using a strong hash algorithm, such as SHA-256 or SHA-512. + + Individual operating systems might limit access to specific algorithms. For example, Ubuntu 20.04 [limits access](https://manpages.ubuntu.com/manpages/focal/man7/crypto-policies.7.html) to SHA-1. In such cases, Redis Enterprise Software is limited to the features supported by the underlying operating system. + + +#### Client certificate guidelines + +Client certificates support database connections. + +In addition to the general guidelines described earlier, the following guidelines apply to client certificates: + +1. The Extended Key Usage attribute should be set to `TLS Web Client Authentication`. + +1. We strongly recommend using a strong hash algorithm, such as SHA-256 or SHA-512. + + Individual operating systems might limit access to specific algorithms. For example, Ubuntu 20.04 [limits access](https://manpages.ubuntu.com/manpages/focal/man7/crypto-policies.7.html) to SHA-1. In such cases, Redis Enterprise Software is limited to the features supported by the underlying operating system. + +### Create certificates + +The actual process of creating CA-signed certificates varies according to the CA. In addition, your security team may have custom instructions that you need to follow. + +Here, we demonstrate the general process using OpenSSL. If your CA provides alternate tools, you should use those according to their instructions. + +However you choose to create the certificates, be sure to incorporate the guidelines described earlier. + +1. Create a private key. + + ``` bash + $ openssl genrsa -out .pem 2048 + ``` + +1. Create a certificate signing request. + + ``` bash + $ openssl req -new -key .pem -out \ + .csr -config .cnf + ``` + _Important: _ The .CNF file is a configuration file. Check with your security team or certificate authority for help creating a valid configuration file for your environment. + +3. 
Sign the private key using your certificate authority. + + ```sh + $ openssl x509 -req -in .csr -signkey .pem -out .pem + ``` + + The signing process varies for each organization and CA vendor. Consult your security team and certificate authority for specific instructions describing how to sign a certificate. + +4. Upload the certificate to your cluster. + + You can use [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/certificate" >}}) to replace the existing certificates with new certificates: + + ``` console + $ rladmin cluster certificate set certificate_file \ + .pem key_file .pem + ``` + + For a list of values supported by the `` parameter, see the [earlier table](#replace-self-signed). + + You can also use the REST API. To learn more, see [Update certificates]({{< relref "/operate/rs/7.4/security/certificates/updating-certificates#how-to-update-certificates" >}}). + +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Use OCSP stapling to verify certificates maintained by a third-party + CA and authenticate connection attempts between clients and servers. +linkTitle: Enable OCSP stapling +title: Enable OCSP stapling +weight: 50 +url: '/operate/rs/7.4/security/certificates/ocsp-stapling/' +--- + +OCSP ([Online Certificate Status Protocol](https://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol)) lets a client or server verify the status (`GOOD`, `REVOKED`, or `UNKNOWN`) of a certificate maintained by a third-party [certificate authority (CA)](https://en.wikipedia.org/wiki/Certificate_authority). + +To check whether a certificate is still valid or has been revoked, a client or server can send a request to the CA's OCSP server (also called an OCSP responder). The OCSP responder checks the certificate's status in the CA's [certificate revocation list](https://en.wikipedia.org/wiki/Certificate_revocation_list) and sends the status back as a signed and timestamped response. + +## OCSP stapling overview + + With OCSP enabled, the Redis Enterprise server regularly polls the CA's OCSP responder for the certificate's status. After it receives the response, the server caches this status until its next polling attempt. + + When a client tries to connect to the Redis Enterprise server, they perform a [TLS handshake](https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake) to authenticate the server and create a secure, encrypted connection. During the TLS handshake, [OCSP stapling](https://en.wikipedia.org/wiki/OCSP_stapling) lets the Redis Enterprise server send (or "staple") the cached certificate status to the client. + +If the stapled OCSP response confirms the certificate is still valid, the TLS handshake succeeds and the client connects to the server. + +The TLS handshake fails and the client blocks the connection to the server if the stapled OCSP response indicates either: + +- The certificate has been revoked. + +- The certificate's status is unknown. This can happen if the OCSP responder fails to send a response. + +## Set up OCSP stapling + +You can configure and enable OCSP stapling for your Redis Enterprise cluster with the [Cluster Manager UI](#cluster-manager-ui-method), the [REST API](#rest-api-method), or [`rladmin`](#rladmin-method). + +While OCSP is enabled, the server always staples the cached OCSP status when a client tries to connect. It is the client's responsibility to use the stapled OCSP status. 
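+To confirm from the client side that the server is stapling a status, you can inspect the TLS handshake directly; for example, with `openssl s_client` (the endpoint and port below are placeholders):
+
+```sh
+openssl s_client -connect redis-12345.example.com:12345 -status </dev/null 2>/dev/null | grep -i "OCSP"
+# A line such as "OCSP Response Status: successful" indicates a stapled response was sent
+```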
Some Redis clients, such as [Jedis](https://github.com/redis/jedis) and [redis-py](https://github.com/redis/redis-py), already support OCSP stapling, but others might require additional configuration. + +### Cluster Manager UI method + +To set up OCSP stapling with the Redis Enterprise Cluster Manager UI: + +1. Go to **Cluster > Security > OCSP**. + +1. In the **Responder URI** section, select **Replace Certificate** to update the proxy certificate. + +1. Provide the key and certificate signed by your third-party CA, then select **Save**. + +1. Configure query settings if you don't want to use their default values: + + | Name | Default value | Description | + |------|---------------|-------------| + | **Query frequency** | 1 hour | The time interval between OCSP queries to the responder URI. | + | **Response timeout** | 1 second | The time interval in seconds to wait for a response before timing out. | + | **Recovery frequency** | 1 minute | The time interval between retries after a failed query. | + | **Recovery maximum tries** | 5 | The number of retries before the validation query fails and invalidates the certificate. | + +1. Select **Enable** to turn on OCSP stapling. + +### REST API method + +To set up OCSP stapling with the [REST API]({{< relref "/operate/rs/7.4/references/rest-api" >}}): + +1. Use the REST API to [replace the proxy certificate]({{< relref "/operate/rs/7.4/security/certificates/updating-certificates#use-the-rest-api" >}}) with a certificate signed by your third-party CA. + +1. To configure and enable OCSP, send a [`PUT` request to the `/v1/ocsp`]({{< relref "/operate/rs/7.4/references/rest-api/requests/ocsp#put-ocsp" >}}) endpoint and include an [OCSP JSON object]({{< relref "/operate/rs/7.4/references/rest-api/objects/ocsp" >}}) in the request body: + + ```json + { + "ocsp_functionality": true, + "query_frequency": 3600, + "response_timeout": 1, + "recovery_frequency": 60, + "recovery_max_tries": 5 + } + ``` + +### `rladmin` method + +To set up OCSP stapling with the [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) command-line utility: + +1. Use [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/certificate" >}}) to [replace the proxy certificate]({{< relref "/operate/rs/7.4/security/certificates/updating-certificates#use-the-cli" >}}) with a certificate signed by your third-party CA. + +1. Update the cluster's OCSP settings with the [`rladmin cluster ocsp config`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/ocsp#ocsp-config" >}}) command if you don't want to use their default values. + + For example: + + ```sh + rladmin cluster ocsp config recovery_frequency set 30 + ``` + +1. Enable OCSP: + + ```sh + rladmin cluster ocsp config ocsp_functionality set enabled + ``` +--- +Title: Certificates +alwaysopen: false +categories: +- docs +- operate +- rs +description: An overview of certificates in Redis Enterprise Software. +hideListLinks: true +linkTitle: Certificates +weight: 60 +url: '/operate/rs/7.4/security/certificates/' +--- + +Redis Enterprise Software uses self-signed certificates by default to ensure that the product is secure. If using a self-signed certificate is not the right solution for you, you can import a certificate signed by a certificate authority of your choice. 
+ +Here's the list of self-signed certificates that create secure, encrypted connections to your Redis Enterprise cluster: + +| Certificate name | Description | +|------------------|-------------| +| `api` | Encrypts [REST API]({{< relref "/operate/rs/7.4/references/rest-api/" >}}) requests and responses. | +| `cm` | Secures connections to the Redis Enterprise Cluster Manager UI. | +| `ldap_client` | Secures connections between LDAP clients and LDAP servers. | +| `metrics_exporter` | Sends Redis Enterprise metrics to external [monitoring tools]({{< relref "/operate/rs/7.4/clusters/monitoring/" >}}) over a secure connection. | +| `proxy` | Creates secure, encrypted connections between clients and databases. | +| `syncer` | For [Active-Active]({{< relref "/operate/rs/7.4/databases/active-active/" >}}) or [Replica Of]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/" >}}) databases, encrypts data during the synchronization of participating clusters. | + +These self-signed certificates are generated on the first node of each Redis Enterprise Software installation and are copied to all other nodes added to the cluster. + +When you use the default self-signed certificates and you connect to the Cluster Manager UI over a web browser, you'll see an untrusted connection notification. + +Depending on your browser, you can allow the connection for each session or add an exception to trust the certificate for all future sessions. +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Update certificates in a Redis Enterprise cluster. +linkTitle: Update certificates +title: Update certificates +weight: 20 +url: '/operate/rs/7.4/security/certificates/updating-certificates/' +--- + +{{}} +When you update the certificates, the new certificate replaces the same certificates on all nodes in the cluster. +{{}} + +## How to update certificates + +You can use the [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) command-line interface (CLI) or the [REST API]({{< relref "/operate/rs/7.4/references/rest-api" >}}) to update certificates. The Cluster Manager UI lets you update proxy and syncer certificates on the **Cluster > Security > Certificates** screen. + +The new certificates are used the next time the clients connect to the database. + +When you upgrade Redis Enterprise Software, the upgrade process copies the certificates that are on the first upgraded node to all of the nodes in the cluster. + +{{}} +Don't manually overwrite the files located in `/etc/opt/redislabs`. Instead, upload new certificates to a temporary location on one of the cluster nodes, such as the `/tmp` directory. +{{}} + +### Use the Cluster Manager UI + +To replace proxy or syncer certificates using the Cluster Manager UI: + +1. Go to **Cluster > Security > Certificates**. + +1. Expand the section for the certificate you want to update: + - For the proxy certificate, expand **Server authentication**. + - For the syncer certificate, expand **Replica Of and Active-Active authentication**. + + {{Expanded proxy certificate for server authentication.}} + +1. Click **Replace Certificate** to open the dialog. + + {{Replace proxy certificate dialog.}} + +1. Upload the key file. + +1. Upload the new certificate. + +1. Click **Save**. 
+ +### Use the CLI + +To replace certificates with the `rladmin` CLI, run the [`cluster certificate set`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/certificate" >}}) command: + +```sh + rladmin cluster certificate set certificate_file .pem key_file .pem +``` + +Replace the following variables with your own values: + +- `` - The name of the certificate you want to replace. See the [certificates table]({{< relref "/operate/rs/7.4/security/certificates" >}}) for the list of valid certificate names. +- `` - The name of your certificate file +- `` - The name of your key file + +For example, to replace the Cluster Manager UI (`cm`) certificate with the private key `key.pem` and the certificate file `cluster.pem`: + +```sh +rladmin cluster certificate set cm certificate_file cluster.pem key_file key.pem +``` + +### Use the REST API + +To replace a certificate using the REST API, use [`PUT /v1/cluster/update_cert`]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/certificates#put-cluster-update_cert" >}}): + +```sh +PUT https://[host][:port]/v1/cluster/update_cert + '{ "name": "", "key": "", "certificate": "" }' +``` + +Replace the following variables with your own values: + +- `` - The name of the certificate to replace. See the [certificates table]({{< relref "/operate/rs/7.4/security/certificates" >}}) for the list of valid certificate names. +- `` - The contents of the \*\_key.pem file + + {{< tip >}} + + The key file contains `\n` end of line characters (EOL) that you cannot paste into the API call. + You can use `sed -z 's/\n/\\\n/g'` to escape the EOL characters. + {{< /tip >}} + +- `` - The contents of the \*\_cert.pem file + +## Replica Of database certificates + +This section describes how to update certificates for Replica Of databases. + +### Update proxy certificates {#update-ap-proxy-certs} + +To update the proxy certificate on clusters running Replica Of databases: + +1. Use the Cluster Manager UI, `rladmin`, or the REST API to update the proxy certificate on the source database cluster. + +1. From the Cluster Manager UI, update the destination database (_replica_) configuration with the [new certificate]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/create#encrypt-replica-database-traffic" >}}). + +{{}} +- Perform step 2 as quickly as possible after performing step 1. Connections using the previous certificate are rejected after applying the new certificate. Until both steps are performed, recovery of the database sync cannot be established. +{{}} + +## Active-Active database certificates + +### Update proxy certificates {#update-aa-proxy-certs} + +To update proxy certificate on clusters running Active-Active databases: + +1. Use the Cluster Manager UI, `rladmin`, or the REST API to update proxy certificates on a single cluster, multiple clusters, or all participating clusters. + +1. Use the [`crdb-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli" >}}) utility to update Active-Active database configuration from the command line. Run the following command once for each Active-Active database residing on the modified clusters: + + ```sh + crdb-cli crdb update --crdb-guid --force + ``` + +{{}} +- Perform step 2 as quickly as possible after performing step 1. Connections using the previous certificate are rejected after applying the new certificate. Until both steps are performed, recovery of the database sync cannot be established.
+- Do not run any other `crdb-cli crdb update` operations between the two steps. +{{
}}
+
+### Update syncer certificates {#update-aa-syncer-certs}
+
+To update your syncer certificate on clusters running Active-Active databases, follow these steps:
+
+1. Update your syncer certificate on one or more of the participating clusters using the Cluster Manager UI, `rladmin`, or the REST API. You can update a single cluster, multiple clusters, or all participating clusters.
+
+1. Update the Active-Active database configuration from the command line with the [`crdb-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli" >}}) utility. Run this command once for each Active-Active database that resides on the modified clusters:
+
+    ```sh
+    crdb-cli crdb update --crdb-guid <guid> --force
+    ```
+
+{{}}
+- Run step 2 as quickly as possible after step 1. Between the two steps, new syncer connections that use the ‘old’ certificate are rejected by the cluster that has been updated with the new certificate (in step 1).
+- Do not run any other `crdb-cli crdb update` operations between the two steps.
+- **Known limitation**: Updating the syncer certificate on versions prior to 6.0.20-81 restarts the proxy and syncer connections. In these cases, we recommend scheduling certificate replacement carefully to minimize customer impact.
+{{
}} +--- +Title: Recommended security practices +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linkTitle: Recommended security practices +hideListLinks: true +weight: 5 +aliases: + - /operate/rs/security/admin-console-security/ +url: '/operate/rs/7.4/security/recommended-security-practices/' +--- + +## Deployment security + +When deploying Redis Enterprise Software to production, we recommend the following practices: + +- **Deploy Redis Enterprise inside a trusted network**: Redis Enterprise is database software and should be deployed on a trusted network not accessible to the public internet. Deploying Redis Enterprise in a trusted network reduces the likelihood that someone can obtain unauthorized access to your data or the ability to manage your database configuration. + +- **Implement anti-virus exclusions**: To ensure that anti-virus solutions that scan files or intercept processes to protect memory do not interfere with Redis Enterprise software, you should ensure that anti-virus exclusions are implemented across all nodes in their Redis Enterprise cluster in a consistent policy. This helps ensure that anti-virus software does not impact the availability of your Redis Enterprise cluster. + + If you are replacing your existing antivirus solution or installing/supporting Redis Enterprise, make sure that the below paths are excluded: + + {{< note >}} +For antivirus solutions that intercept processes, binary files may have to be excluded directly depending on the requirements of your anti-virus vendor. + {{< /note >}} + + | **Path** | **Description** | + |------------|-----------------| + | /opt/redislabs | Main installation directory for all Redis Enterprise Software binaries | + | /opt/redislabs/bin | Binaries for all the utilities for command line access and managements such as "rladmin" or "redis-cli" | + | /opt/redislabs/config | System configuration files | + | /opt/redislabs/lib | System library files | + | /opt/redislabs/sbin | System binaries for tweaking provisioning | + +- **Send logs to a remote logging server**: Redis Enterprise is configured to send logs by default to syslog. To send these logs to a remote logging server you must [configure syslog]({{}}) based the requirements of the remote logging server vendor. Remote logging helps ensure that the logs are not deleted so that you can rotate the logs to prevent your server disk from filling up. + +- **Deploy clusters with an odd number of 3 or more nodes**: Redis is an available and partition-tolerant database. We recommend that Redis Enterprise be deployed in a cluster of an odd number of 3 or more nodes so that you are able to successfully failover in the event of a failure. + +- **Reboot nodes in a sequence rather than all at once**: It is best practice to frequently maintain reboot schedules. If you reboot too many servers at once, it is possible to cause a quorum failure that results in loss of availability of the database. We recommend that rebooting be done in a phased manner so that quorum is not lost. For example, to maintain quorum in a 3 node cluster, at least 2 nodes must be up at all times. Only one server should be rebooted at any given time to maintain quorum. + +- **Implement client-side encryption**: Client-side encryption, or the practice of encrypting data within an application before storing it in a database, such as Redis, is the most widely adopted method to achieve encryption in memory. Redis is an in-memory database and stores data in-memory. 
If you require encryption in memory, better known as encryption in use, then client side encryption may be the right solution for you. Please be aware that database functions that need to operate on data — such as simple searching functions, comparisons, and incremental operations — don’t work with client-side encryption. + +## Cluster security + +- **Control the level of access to your system**: Redis Enterprise lets you decide which users can access the cluster, which users can access databases, and which users can access both. We recommend preventing database users from accessing the cluster. See [Access control]({{}}) for more information. + +- **Enable LDAP authentication**: If your organization uses the Lightweight Directory Access Protocol (LDAP), we recommend enabling Redis Enterprise Software support for role-based LDAP authentication. + +- **Require HTTPS for API endpoints**: Redis Enterprise comes with a REST API to help automate tasks. This API is available in both an encrypted and unencrypted endpoint for backward compatibility. You can [disable the unencrypted endpoint]({{}}) with no loss in functionality. + +## Database security + +Redis Enterprise offers several database security controls to help protect your data against unauthorized access and to improve the operational security of your database. The following section details configurable security controls available for implementation. + +- **Use strong Redis passwords**: A frequent recommendation in the security industry is to use strong passwords to authenticate users. This helps to prevent brute force password guessing attacks against your database. Its important to check that your password aligns with your organizations security policy. + +- **Deactivate default user access**: Redis Enterprise comes with a "default" user for backwards compatibility with applications designed with versions of Redis prior to Redis Enterprise 6. The default user is turned on by default. This allows you to access the database without specifying a username and only using a shared secret. For applications designed to use access control lists, we recommend that you [deactivate default user access]({{}}). + +- **Configure Transport Layer Security (TLS)**: Similar to the control plane, you can also [configure TLS protocols]({{}}) to help support your security and compliance needs. + +- **Enable client certificate authentication**: To prevent unauthorized access to your data, Redis Enterprise databases support the [TLS protocol]({{}}), which includes authentication and encryption. Client certificate authentication can be used to ensure only authorized hosts can access the database. + +- **Install trusted certificates**: Redis implements self-signed certificates for the database proxy and replication service, but many organizations prefer to [use their own certificates]({{}}). + +- **Configure and verify database backups**: Implementing a disaster recovery strategy is an important part of data security. Redis Enterprise supports [database backups to many destinations]({{}}). +--- +Title: Rotate passwords +alwaysopen: false +categories: +- docs +- operate +- rs +description: Rotate user passwords. +linkTitle: Rotate passwords +toc: 'true' +weight: 70 +url: '/operate/rs/7.4/security/access-control/manage-passwords/rotate-passwords/' +--- + +Redis Enterprise Software lets you implement password rotation policies using the [REST API]({{< relref "/operate/rs/7.4/references/rest-api" >}}). 
+ +You can add a new password for a database user without immediately invalidating the old one (which might cause authentication errors in production). + +{{< note >}} +Password rotation does not work for the default user. [Add additional users]({{< relref "/operate/rs/7.4/security/access-control/create-users" >}}) to enable password rotation. +{{< /note >}} + +## Password rotation policies + +For user access to the Redis Enterprise Software Cluster Manager UI, +you can set a [password expiration policy]({{< relref "/operate/rs/7.4/security/access-control/manage-passwords/password-expiration" >}}) to prompt the user to change their password. + +However, for database connections that rely on password authentication, +you need to allow for authentication with the existing password while you roll out the new password to your systems. + +With the Redis Enterprise Software REST API, you can add additional passwords to a user account for authentication to the database or the Cluster Manager UI and API. + +After the old password is replaced in the database connections, you can delete the old password to finish the password rotation process. + +{{< warning >}} +Multiple passwords are only supported using the REST API. +If you reset the password for a user in the Cluster Manager UI, +the new password replaces all other passwords for that user. +{{< /warning >}} + +The new password cannot already exist as a password for the user and must meet the [password complexity]({{< relref "/operate/rs/7.4/security/access-control/manage-passwords/password-complexity-rules" >}}) requirements, if enabled. + +## Rotate password + +To rotate the password of a user account: + +1. Add an additional password to a user account with [`POST /v1/users/password`]({{< relref "/operate/rs/7.4/references/rest-api/requests/users/password#add-password" >}}): + + ```sh + POST https://[host][:port]/v1/users/password + '{"username":"", "old_password":"", "new_password":""}' + ``` + + After you send this request, you can authenticate with both the old and the new password. + +1. Update the password in all database connections that connect with the user account. +1. Delete the original password with [`DELETE /v1/users/password`]({{< relref "/operate/rs/7.4/references/rest-api/requests/users/password#update-password" >}}): + + ```sh + DELETE https://[host][:port]/v1/users/password + '{"username":"", "old_password":""}' + ``` + + If there is only one valid password for a user account, you cannot delete that password. + +## Replace all passwords + +You can also replace all existing passwords for a user account with a single password that does not match any existing passwords. +This can be helpful if you suspect that your passwords are compromised and you want to quickly resecure the account. + +To replace all existing passwords for a user account with a single new password, use [`PUT /v1/users/password`]({{< relref "/operate/rs/7.4/references/rest-api/requests/users/password#delete-password" >}}): + +```sh +PUT https://[host][:port]/v1/users/password + '{"username":"", "old_password":"", "new_password":""}' +``` + +All of the existing passwords are deleted and only the new password is valid. + +{{}} +If you send the above request without specifying it is a `PUT` request, the new password is added to the list of existing passwords. 
+{{}} +--- +Title: Configure password expiration +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure password expiration to enforce expiration of a user's password + after a specified number of days. +linkTitle: Password expiration +toc: 'true' +weight: 50 +url: '/operate/rs/7.4/security/access-control/manage-passwords/password-expiration/' +--- + +## Enable password expiration + +To enforce an expiration of a user's password after a specified number of days: + +- Use the Cluster Manager UI: + + 1. Go to **Cluster > Security > Preferences**, then select **Edit**. + + 1. In the **Password** section, turn on **Expiration**. + + 1. Enter the number of days before passwords expire. + + 1. Select **Save**. + +- Use the `cluster` endpoint of the REST API + + ``` REST + PUT https://[host][:port]/v1/cluster + {"password_expiration_duration":} + ``` + +## Deactivate password expiration + +To deactivate password expiration: + +- Use the Cluster Manager UI: + + 1. Go to **Cluster > Security > Preferences**, then select **Edit**. + + 1. In the **Password** section, turn off **Expiration**. + + 1. Select **Save**. + +- Use the `cluster` REST API endpoint to set `password_expiration_duration` to `0` (zero). +--- +Title: Update admin credentials for Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: Update admin credentials for Active-Active databases. +linkTitle: Update Active-Active admin credentials +weight: 90 +url: '/operate/rs/7.4/security/access-control/manage-passwords/active-active-admin-credentials/' +--- + +Active-Active databases use administrator credentials to manage operations. + +To update the administrator user password on a cluster with Active-Active databases: + +1. From the user management page, update the administrator user password on the clusters you want to update. + +1. For each participating cluster _and_ each Active-Active database, update the admin user credentials to match the changes in step 1. + +{{}} +Do not perform any management operations on the databases until these steps are complete. +{{}} +--- +Title: Enable password complexity rules +alwaysopen: false +categories: +- docs +- operate +- rs +description: Enable password complexity rules. +linkTitle: Password complexity rules +toc: 'true' +weight: 30 +url: '/operate/rs/7.4/security/access-control/manage-passwords/password-complexity-rules/' +--- + +Redis Enterprise Software provides optional password complexity rules that meet common requirements. When enabled, these rules require the password to have: + +- At least 8 characters +- At least one uppercase character +- At least one lowercase character +- At least one number +- At least one special character + +These requirements reflect v6.2.12 and later. Earlier versions did not support numbers or special characters as the first or the last character of a password. This restriction was removed in v6.2.12. + +In addition, the password: + +- Cannot contain the user's email address or the reverse of the email address. +- Cannot have more than three repeating characters. + +Password complexity rules apply when a new user account is created and when the password is changed. Password complexity rules are not applied to accounts authenticated by an external identity provider. + +You can use the Cluster Manager UI or the REST API to enable password complexity rules. + +## Enable using the Cluster Manager UI + +To enable password complexity rules using the Cluster Manager UI: + +1. 
Go to **Cluster > Security > Preferences**, then select **Edit**. + +1. In the **Password** section, turn on **Complexity rules**. + +1. Select **Save**. + +## Enable using the REST API + +To use the REST API to enable password complexity rules: + +``` REST +PUT https://[host][:port]/v1/cluster +{"password_complexity":true} +``` + +## Deactivate password complexity rules + +To deactivate password complexity rules: + +- Use the Cluster Manager UI: + + 1. Go to **Cluster > Security > Preferences**, then select **Edit**. + + 1. In the **Password** section, turn off **Complexity rules**. + + 1. Select **Save**. + +- Use the `cluster` REST API endpoint to set `password_complexity` to `false` +--- +Title: Set password policies +alwaysopen: false +categories: +- docs +- operate +- rs +description: Set password policies. +hideListLinks: true +linkTitle: Set password policies +toc: 'true' +weight: 30 +url: '/operate/rs/7.4/security/access-control/manage-passwords/' +--- + +Redis Enterprise Software provides several ways to manage the passwords of local accounts, including: + +- [Password complexity rules]({{< relref "/operate/rs/7.4/security/access-control/manage-passwords/password-complexity-rules" >}}) + +- [Password expiration]({{< relref "/operate/rs/7.4/security/access-control/manage-passwords/password-expiration" >}}) + +- [Password rotation]({{< relref "/operate/rs/7.4/security/access-control/manage-passwords/rotate-passwords" >}}) + +You can also manage a user's ability to [sign in]({{< relref "/operate/rs/7.4/security/access-control/manage-users/login-lockout#user-login-lockout" >}}) and control [session timeout]({{< relref "/operate/rs/7.4/security/access-control/manage-users/login-lockout#session-timeout" >}}). + +To enforce more advanced password policies, we recommend using [LDAP integration]({{< relref "/operate/rs/7.4/security/access-control/ldap" >}}) with an external identity provider, such as Active Directory. + +{{}} +Redis Enterprise Software stores all user passwords using the SHA-256 cryptographic hash function. +{{}} +--- +Title: Manage user login +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage user login lockout and session timeout. +linkTitle: Manage user login and session +toc: 'true' +weight: 40 +url: '/operate/rs/7.4/security/access-control/manage-users/login-lockout/' +--- + +Redis Enterprise Software secures user access in a few different ways, including automatically: + +- Locking user accounts after a series of authentication failures (invalid passwords) + +- Signing sessions out after a period of inactivity + +Here, you learn how to configure the relevant settings. + +## User login lockout + +By default, after 5 failed login attempts within 15 minutes, the user account is locked for 30 minutes. You can change the user login lockout settings in the Cluster Manager UI or with [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}). + +### View login lockout settings + +You can view the cluster's user login lockout settings from **Cluster > Security > Preferences > Lockout threshold** in the Cluster Manager UI or with [`rladmin info cluster`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/info#info-cluster" >}}): + +```sh +$ rladmin info cluster | grep login_lockout + login_lockout_counter_reset_after: 900 + login_lockout_duration: 1800 + login_lockout_threshold: 5 +``` + +### Configure user login lockout + +To change the user login lockout settings using the Cluster Manager UI: + +1. 
Go to **Cluster > Security > Preferences**, then select **Edit**. + +1. In the **Lockout threshold** section, make sure the checkbox is selected. + + {{The Lockout threshold configuration section}} + +1. Configure the following **Lockout threshold** settings: + + 1. **Log-in attempts until user is revoked** - The number of failed login attempts allowed before the user account is locked. + + 1. **Time between failed login attempts** in seconds, minutes, or hours - The amount of time during which failed login attempts are counted. + + 1. For **Unlock method**, select one of the following: + + - **Locked duration** to set how long the user account is locked after excessive failed login attempts. + + - **Only Admin can unlock the user by resetting the password**. + +1. Select **Save**. + +### Change allowed login attempts + +To change the number of failed login attempts allowed before the user account is locked, use one of the following methods: + +- [Cluster Manager UI](#configure-user-login-lockout) + +- [`rladmin tune cluster`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster login_lockout_threshold + ``` + + For example, to set the lockout threshold to 10 failed login attempts, run: + + ```sh + rladmin tune cluster login_lockout_threshold 10 + ``` + + If you set the lockout threshold to 0, it turns off account lockout, and the cluster settings show `login_lockout_threshold: disabled`. + + ```sh + rladmin tune cluster login_lockout_threshold 0 + ``` + +### Change time before login attempts reset + +To change the amount of time during which failed login attempts are counted, use one of the following methods: + +- [Cluster Manager UI](#configure-user-login-lockout) + +- [`rladmin tune cluster`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster login_lockout_counter_reset_after + ``` + + For example, to set the lockout reset to 1 hour, run: + + ```sh + rladmin tune cluster login_lockout_counter_reset_after 3600 + ``` + +### Change login lockout duration + +To change the amount of time that the user account is locked after excessive failed login attempts, use one of the following methods: + +- [Cluster Manager UI](#configure-user-login-lockout) + +- [`rladmin tune cluster`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster login_lockout_duration + ``` + + For example, to set the lockout duration to 1 hour, run: + + ```sh + rladmin tune cluster login_lockout_duration 3600 + ``` + + If you set the lockout duration to 0, then the account can be unlocked only when an administrator changes the account's password. + + ```sh + rladmin tune cluster login_lockout_duration 0 + ``` + + The cluster settings now show `login_lockout_duration: admin-release`. + +### Unlock locked user accounts + +To unlock a user account in the Cluster Manager UI: + +1. Go to **Access Control > Users**. Locked users have a "User is locked out" label: + + {{The Access Control > Users configuration screen in the Cluster Manager UI}} + +1. Point to the user you want to unlock, then click **Reset to unlock**: + + {{Reset to unlock button appears when you point to a locked user in the list}} + +1. In the **Reset user password** dialog, enter a new password for the user: + + {{Reset user password dialog}} + +1. Select **Save** to reset the user's password and unlock their account. 
+ +To unlock a user account or reset a user password with `rladmin`, run: + +```sh +rladmin cluster reset_password +``` + +To unlock a user account or reset a user password with the REST API, use [`PUT /v1/users`]({{< relref "/operate/rs/7.4/references/rest-api/requests/users#put-user" >}}): + +```sh +PUT /v1/users +{"password": ""} +``` + +### Turn off login lockout + +To turn off user login lockout and allow unlimited login attempts, use one of the following methods: + +- Cluster Manager UI: + + 1. Go to **Cluster > Security > Preferences**, then select **Edit**. + + 1. Clear the **Lockout threshold** checkbox. + + 1. Select **Save**. + +- [`rladmin tune cluster`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster login_lockout_threshold 0 + ``` + +The cluster settings show `login_lockout_threshold: disabled`. + +## Configure session timeout + +The Redis Enterprise Cluster Manager UI supports session timeouts. By default, users are automatically logged out after 15 minutes of inactivity. + +To customize the session timeout, use one of the following methods: + +- Cluster Manager UI: + + 1. Go to **Cluster > Security > Preferences**, then select **Edit**. + + 1. For **Session timeout**, select minutes or hours from the list and enter the timeout value. + + 1. Select **Save**. + +- [`rladmin cluster config`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/config" >}}): + + ```sh + rladmin cluster config cm_session_timeout_minutes + ``` + + The `` is the number of minutes after which sessions will time out. +--- +Title: Manage user security +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage user account security settings. +hideListLinks: false +linkTitle: Manage user security +weight: 20 +url: '/operate/rs/7.4/security/access-control/manage-users/' +--- + +Redis Enterprise supports the following user account security settings: + +- Password complexity +- Password expiration +- User lockouts +- Account inactivity timeout + +## Manage users and user security + +--- +Title: Manage default user +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage a database's default user. +linkTitle: Manage default user +toc: 'true' +weight: 60 +url: '/operate/rs/7.4/security/access-control/manage-users/default-user/' +--- + +When you [create a database]({{< relref "/operate/rs/7.4/databases/create" >}}), default user database access is enabled by default (**Unauthenticated access** is selected). This gives the default user full access to the database and enables compatibility with versions of Redis before Redis 6. + +Select **Password-only authentication**, then enter and confirm a default database password to require authentication for connections to the database. + +{{Select Password-only authentication to require a password to access the database.}} + +## Authenticate as default user + +When you configure a password for your database, all connections to the database must authenticate using the [AUTH]({{< relref "/commands/auth" >}}) command. See Redis security's [authentication]({{}}) section for more information. + +```sh +AUTH +``` + +## Change default database password + +To change the default user's password: + +1. From the database's **Security** tab, select **Edit**. + +1. In the **Access Control** section, select **Password-only authentication** as the **Access method**. + +1. Enter and re-enter the new password. + +1. Select **Save**. 
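+After you save the new password, you can confirm that clients can authenticate with it. For example, with `redis-cli` (the endpoint, port, and password below are placeholders):
+
+```sh
+redis-cli -h redis-12345.example.com -p 12345 -a <new-password> PING
+# A PONG reply confirms the new default password is accepted
+```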
+ +## Deactivate default user + +If you set up [role-based access control]({{< relref "/operate/rs/7.4/security/access-control" >}}) with [access control lists]({{< relref "/operate/rs/7.4/security/access-control/create-db-roles" >}}) (ACLs) for your database and don't require backwards compatibility with versions earlier than Redis 6, you can [deactivate the default user]({{< relref "/operate/rs/7.4/security/access-control/manage-users/default-user" >}}). + +{{}} +Before you deactivate default user access, make sure the role associated with the database is [assigned to a user]({{< relref "/operate/rs/7.4/security/access-control/create-users" >}}). Otherwise, the database will be inaccessible. +{{}} + +To deactivate the default user: + +1. From the database's **Security** tab, select **Edit**. + +1. In the **Access Control** section, select **Using ACL only** as the **Access method**. + + {{Select Using ACL only to deactivate default user access to the database.}} + +1. Choose at least one role and Redis ACL to access the database. + +1. Select **Save**. +--- +Title: Update database ACLs +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes how to use the Cluster Manager UI to update database access + control lists (ACLs) to authorize access to roles authorizing LDAP user access. +weight: 45 +url: '/operate/rs/7.4/security/access-control/ldap/update-database-acls/' +--- + +To grant LDAP users access to a database, assign the mapped access role to the access control list (ACL) for the database. + +1. In the Cluster Manager UI, go to **Databases**, then select the database from the list. + +1. From the **Security** tab, select the **Edit** button. + +1. In the **Access Control List** section, select **+ Add ACL**. + + {{Updating a database access control list (ACL)}} + +1. Select the appropriate roles and then save your changes. + +If you assign multiple roles to an ACL and a user is authorized by more than one of these roles, their access is determined by the first “matching” rule in the list. + +If the first rule gives them read access and the third rule authorizes write access, the user will only be able to read data. + +As a result, we recommend ordering roles so that higher access roles appear before roles with more limited access. + + +## More info + +- Enable and configure [role-based LDAP]({{< relref "/operate/rs/7.4/security/access-control/ldap/enable-role-based-ldap.md" >}}) +- Map LDAP groups to [access control roles]({{< relref "/operate/rs/7.4/security/access-control/ldap/map-ldap-groups-to-roles.md" >}}) +- Learn more about Redis Enterprise Software [security and practices]({{< relref "/operate/rs/7.4/security/" >}}) +--- +Title: Enable role-based LDAP +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes how to enable role-based LDAP authentication and authorization + using the Cluster Manager UI. +weight: 25 +url: '/operate/rs/7.4/security/access-control/ldap/enable-role-based-ldap/' +--- + +Redis Enterprise Software uses a role-based mechanism to enable LDAP authentication and authorization. + +When a user attempts to access Redis Enterprise resources using LDAP credentials, the credentials are passed to the LDAP server in a bind request. If the request succeeds, the user’s groups are searched for a group that authorizes access to the original resource. + +Role-based LDAP lets you authorize cluster management users (previously known as _external users_) and database users. 
As with any access control role, you can define the level of access authorized by the role. + +## Set up LDAP connection + +To configure and enable LDAP from the Cluster Manager UI: + +1. Go to **Access Control > LDAP > Configuration**. + +1. Select **+ Create**. + +1. In **Set LDAP**, configure [LDAP server settings](#ldap-server-settings), [bind credentials](#bind-credentials), [authentication query](#authentication-query), and [authorization query](#authorization-query). + + {{The LDAP configuration screen in the Cluster Manager UI}} + +1. Select **Save & Enable**. + +### LDAP server settings + +The **LDAP server** settings define the communication settings used for LDAP authentication and authorization. These include: + +| _Setting_ | _Description_ | +|:----------|:--------------| +| **Protocol type** | Underlying communication protocol; must be _LDAP_, _LDAPS_, or _STARTTLS_ | +| **Host** | URL of the LDAP server | +| **Port** | LDAP server port number | +| **Trusted CA certificate** | _(LDAPS or STARTTLS protocols only)_ Certificate for the trusted certificate authority (CA) | + +When defining multiple LDAP hosts, the organization tree structure must be identical for all hosts. + +### Bind credentials + +These settings define the credentials for the bind query: + +| _Setting_ | _Description_ | +|:----------|:--------------| +| **Distinguished Name** | Example: `cd=admin,dc=example,dc=org` | +| **Password** | Example: `admin1` | +| **Client certificate authentication** |_(LDAPS or STARTTLS protocols only)_ Place checkmark to enable | +| **Client public key** | _(LDAPS or STARTTLS protocols only)_ The client public key for authentication | +| **Client private key** | _(LDAPS or STARTTLS protocols only)_ The client private key for authentication | + +### Authentication query + +These settings define the authentication query: + +| _Setting_ | _Description_ | +|:----------|:--------------| +| **Search user by** | Either _Template_ or _Query_ | +| **Template** | _(template search)_ Example: `cn=%u,ou=dev,dc=example,dc=com` | +| **Base** | _(query search)_ Example: `ou=dev,dc=example,dc=com` | +| **Filter** | _(query search)_ Example: `(cn=%u)` | +| **Scope** | _(query search)_ Must be _baseObject_, _singleLevel_, or _wholeSubtree_ | + +In this example, `%u` is replaced by the username attempting to access the Redis Enterprise resource. + +### Authorization query + +These settings define the group authorization query: + +| _Setting_ | _Description_ | +|:----------|:--------------| +| **Search groups by** | Either _Attribute_ or _Query_ | +| **Attribute** | _(attribute search)_ Example: `memberOf` (case-sensitive) | +| **Base** | _(query search)_ Example: `ou=groups,dc=example,dc=com` | +| **Filter** | _(query search)_ Example: `(members=%D)` | +| **Scope** | _(query search)_ Must be _baseObject_, _singleLevel_, or _wholeSubtree_ | + +In this example, `%D` is replaced by the Distinguished Name of the user attempting to access the Redis Enterprise resource. + +### Authentication timeout + +The **Authentication timeout** setting determines the connection timeout to the LDAP server during user authentication. + +By default, the timeout is 5 seconds, which is recommended for most cases. + +However, if you enable multi-factor authentication (MFA) for your LDAP server, you might need to increase the timeout to provide enough time for MFA verification. You can set it to any integer in the range of 5-60 seconds. 
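+Before saving, it can help to verify your bind credentials and queries directly against the LDAP server. For example, the standard `ldapsearch` utility can run an equivalent authorization query; all values below are illustrative placeholders based on the examples above:
+
+```sh
+ldapsearch -x -H ldap://ldap.example.com:389 \
+  -D "cn=admin,dc=example,dc=org" -w admin1 \
+  -b "ou=groups,dc=example,dc=com" -s sub \
+  "(members=cn=jane,ou=dev,dc=example,dc=com)"
+```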
+ +## More info + +- Map LDAP groups to [access control roles]({{< relref "/operate/rs/7.4/security/access-control/ldap/map-ldap-groups-to-roles" >}}) +- Update database ACLs to [authorize LDAP access]({{< relref "/operate/rs/7.4/security/access-control/ldap/update-database-acls" >}}) +- Learn more about Redis Software [security and practices]({{< relref "/operate/rs/7.4/security/" >}}) +--- +Title: Map LDAP groups to roles +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes how to map LDAP authorization groups to Redis Enterprise roles + using the Cluster Manager UI. +weight: 35 +url: '/operate/rs/7.4/security/access-control/ldap/map-ldap-groups-to-roles/' +--- + +Redis Enterprise Software uses a role-based mechanism to enable LDAP authentication and authorization. + +Once LDAP is enabled, you need to map LDAP groups to Redis Enterprise access control roles. + +## Map LDAP groups to roles + +To map LDAP groups to access control roles in the Cluster Manager UI: + +1. Select **Access Control > LDAP > Mapping**. + + {{}} +You can map LDAP roles when LDAP configuration is not enabled, but they won't have any effect until you [configure and enable LDAP]({{< relref "/operate/rs/7.4/security/access-control/ldap/enable-role-based-ldap" >}}). + {{}} + + {{Enable LDAP mappings Panel}} + +1. Select the **+ Add LDAP Mapping** button to create a new mapping and then enter the following details: + + | _Setting_ | _Description_ | +|:----------|:--------------| +| **Name** | A descriptive, unique name for the mapping | +| **Distinguished Name** | The distinguished name of the LDAP group to be mapped.
Example: `cn=admins,ou=groups,dc=example,dc=com` | +| **Role** | The Redis Software access control role defined for this group | +| **Email** | _(Optional)_ An address to receive alerts| +| **Alerts** | Selections identifying the desired alerts. | + + {{Enable LDAP mappings Panel}} + +1. When finished, select the **Save** button. + +Create a mapping for each LDAP group used to authenticate and/or authorize access to Redis Enterprise Software resources. + +The scope of the authorization depends on the access control role: + +- If the role authorizes admin management, LDAP users are authorized as cluster management administrators. + +- If the role authorizes database access, LDAP users are authorized to use the database to the limits specified in the role. + +- To authorize LDAP users to specific databases, update the database access control lists (ACLs) to include the mapped LDAP role. + +## More info + +- Enable and configure [role-based LDAP]({{< relref "/operate/rs/7.4/security/access-control/ldap/enable-role-based-ldap" >}}) +- Update database ACLs to [authorize LDAP access]({{< relref "/operate/rs/7.4/security/access-control/ldap/update-database-acls" >}}) +- Learn more about Redis Enterprise Software [security and practices]({{< relref "/operate/rs/7.4/security/" >}}) +--- +Title: Migrate to role-based LDAP +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes how to migrate existing cluster-based LDAP deployments to role-based + LDAP. +weight: 55 +url: '/operate/rs/7.4/security/access-control/ldap/migrate-to-role-based-ldap/' +--- + +Redis Enterprise Software supports LDAP through a [role-based mechanism]({{< relref "/operate/rs/7.4/security/access-control/ldap/" >}}), first introduced [in v6.0.20]({{< relref "/operate/rs/release-notes/rs-6-0-20-april-2021" >}}). + +Earlier versions of Redis Enterprise Software supported a cluster-based mechanism; however, that mechanism was removed in v6.2.12. + +If you're using the cluster-based mechanism to enable LDAP authentication, you need to migrate to the role-based mechanism before upgrading to Redis Enterprise Software v6.2.12 or later. + +## Migration checklist + +This checklist covers the basic process: + +1. Identify accounts per app on the customer end. + +1. Create or identify an LDAP user account on the server that is responsible for LDAP authentication and authorization. + +1. Create or identify an LDAP group that contains the app team members. + +1. Verify or configure the Redis Enterprise ACLs. + +1. Configure each database ACL. + +1. Remove the earlier "external" (LDAP) users from Redis Enterprise. + +1. _(Recommended)_ Update cluster configuration to replace the cluster-based configuration file. + + You can use `rladmin` to update the cluster configuration: + + ``` bash + $ touch /tmp/saslauthd_empty.conf + $ rladmin cluster config saslauthd_ldap_conf \ + /tmp/saslauthd_empty.conf + ``` + + Here, a blank file replaces the earlier configuration. + +1. Use **Access Control > LDAP > Configuration** to enable role-based LDAP. + +1. Map your LDAP groups to access control roles. + +1. Test application connectivity using the LDAP credentials of an app team member. + +1. _(Recommended)_ Turn off default access for the database to avoid anonymous client connections. + + Because deployments and requirements vary, you’ll likely need to adjust these guidelines. 
+ +## Test LDAP access + +To test your LDAP integration, you can: + +- Connect with `redis-cli` and use the [`AUTH` command]({{< relref "/commands/auth" >}}) to test LDAP username/password credentials. + +- Sign in to the Cluster Manager UI using LDAP credentials authorized for admin access. + +- Use [Redis Insight]({{< relref "/develop/tools/insight/" >}}) to access a database using authorized LDAP credentials. + +- Use the [REST API]({{< relref "/operate/rs/7.4/references/rest-api" >}}) to connect using authorized LDAP credentials. + +## More info + +- Enable and configure [role-based LDAP]({{< relref "/operate/rs/7.4/security/access-control/ldap/enable-role-based-ldap" >}}) +- Map LDAP groups to [access control roles]({{< relref "/operate/rs/7.4/security/access-control/ldap/map-ldap-groups-to-roles" >}}) +- Update database ACLs to [authorize LDAP access]({{< relref "/operate/rs/7.4/security/access-control/ldap/update-database-acls" >}}) +- Learn more about Redis Enterprise Software [security and practices]({{< relref "/operate/rs/7.4/security/" >}}) +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes how Redis Enterprise Software integrates LDAP authentication + and authorization. Also describes how to enable LDAP for your deployment of Redis + Enterprise Software. +hideListLinks: true +linkTitle: LDAP authentication +title: LDAP authentication +weight: 50 +url: '/operate/rs/7.4/security/access-control/ldap/' +--- + +Redis Enterprise Software supports [Lightweight Directory Access Protocol](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol) (LDAP) authentication and authorization through its [role-based access controls]({{< relref "/operate/rs/7.4/security/access-control" >}}) (RBAC). You can use LDAP to authorize access to the Cluster Manager UI and to control database access. + +You can configure LDAP roles using the Redis Enterprise Cluster Manager UI or [REST API]({{< relref "/operate/rs/7.4/references/rest-api/requests/ldap_mappings/" >}}). + +## How it works + +Here's how role-based LDAP integration works: + +{{LDAP overview}} + +1. A user signs in with their LDAP credentials. + + Based on the LDAP configuration details, the username is mapped to an LDAP Distinguished Name. + +1. A simple [LDAP bind request](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol#Bind_(authenticate)) is attempted using the Distinguished Name and the password. The sign-in fails if the bind fails. + +1. Obtain the user’s LDAP group memberships. + + Using configured LDAP details, obtain a list of the user’s group memberships. + +1. Compare the user’s LDAP group memberships to those mapped to local roles. + +1. Determine if one of the user's groups is authorized to access the target resource. If so, the user is granted the level of access authorized to the role. + +To access the Cluster Manager UI, the user needs to belong to an LDAP group mapped to an administrative role. + +For database access, the user needs to belong to an LDAP group mapped to a role listed in the database’s access control list (ACL). The rights granted to the group determine the user's level of access. + +## Prerequisites + +Before you enable LDAP in Redis Enterprise, you need: + +1. The following LDAP details: + + - Server URI, including host, port, and protocol details. + - Certificate details for secure protocols. + - Bind credentials, including Distinguished Name, password, and (optionally) client public and private keys for certificate authentication. 
+ - Authentication query details, whether template or query. + - Authorization query details, whether attribute or query. + - The Distinguished Names of LDAP groups you’ll use to authorize access to Redis Enterprise resources. + +1. The LDAP groups that correspond to the levels of access you wish to authorize. Each LDAP group will be mapped to a Redis Enterprise access control role. + +1. A Redis Enterprise access control role for each LDAP group. Before you enable LDAP, you need to set up [role-based access controls]({{< relref "/operate/rs/7.4/security/access-control" >}}) (RBAC). + +## Enable LDAP + +To enable LDAP: + +1. From **Access Control > LDAP** in the Cluster Manager UI, select the **Configuration** tab and [enable LDAP access]({{< relref "/operate/rs/7.4/security/access-control/ldap/enable-role-based-ldap" >}}). + + {{Enable LDAP Panel}} + +2. Map LDAP groups to [access control roles]({{< relref "/operate/rs/7.4/security/access-control/ldap/map-ldap-groups-to-roles" >}}). + +3. Update database access control lists (ACLs) to [authorize role access]({{< relref "/operate/rs/7.4/security/access-control/ldap/update-database-acls" >}}). + +If you already have appropriate roles, you can update them to include LDAP groups. + +## More info + +- Enable and configure [role-based LDAP]({{< relref "/operate/rs/7.4/security/access-control/ldap/enable-role-based-ldap" >}}) +- Map LDAP groups to [access control roles]({{< relref "/operate/rs/7.4/security/access-control/ldap/map-ldap-groups-to-roles" >}}) +- Update database ACLs to [authorize LDAP access]({{< relref "/operate/rs/7.4/security/access-control/ldap/update-database-acls" >}}) +- Learn more about Redis Enterprise Software [security and practices]({{< relref "/operate/rs/7.4/security/" >}}) + +--- +Title: Create users +alwaysopen: false +categories: +- docs +- operate +- rs +description: Create users and assign access control roles. +linkTitle: Create users +weight: 10 +aliases: + - /operate/rs/security/access-control/manage-users/add-users/ + - /operate/rs/security/access-control/rbac/assign-user-role/ +url: '/operate/rs/7.4/security/access-control/create-users/' +--- + +## Prerequisites + +Before you create other users: + +1. Review the [access control overview]({{}}) to learn how to use role-based access control (RBAC) to manage users' cluster access and database access. + +1. Create roles you can assign to users. See [Create roles with cluster access only]({{}}), [Create roles with database access only]({{}}), or [Create roles with combined access]({{}}) for instructions. + +## Add users + +To add a user to the cluster: + +1. From the **Access Control > Users** tab in the Cluster Manager UI, select **+ Add user**. + + {{Add role with name}} + +1. Enter the name, email, and password of the new user. + + {{Add role with name}} + +1. Assign a **Role** to the user to grant permissions for cluster management and data access. + + {{Add role to user.}} + +1. Select the **Alerts** the user should receive by email: + + - **Receive alerts for databases** - The alerts that are enabled for the selected databases will be sent to the user. Choose **All databases** or **Customize** to select the individual databases to send alerts for. + + - **Receive cluster alerts** - The alerts that are enabled for the cluster in **Cluster > Alerts Settings** are sent to the user. + +1. Select **Save**. + +## Assign roles to users + +Assign a role, associated with specific databases and access control lists (ACLs), to a user to grant database access: + +1. 
From the **Access Control > Users** tab in the Cluster Manager UI, you can: + + - Point to an existing user and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit the user. + + - Select **+ Add user** to [create a new user]({{< relref "/operate/rs/7.4/security/access-control/create-users" >}}). + +1. Select a role to assign to the user. + + {{Add role to user.}} + +1. Select **Save**. + +## Next steps + +Depending on the type of the user's assigned role (cluster management role or data access role), the user can now: + +- [Connect to a database]({{< relref "/operate/rs/7.4/databases/connect" >}}) associated with the role and run limited Redis commands, depending on the role's Redis ACLs. + +- Sign in to the Redis Enterprise Software Cluster Manager UI. + +- Make a [REST API]({{< relref "/operate/rs/7.4/references/rest-api" >}}) request. +--- +Title: Create roles with cluster access only +alwaysopen: false +categories: +- docs +- operate +- rs +description: Create roles with cluster access only. +linkTitle: Create roles with cluster access only +weight: 14 +aliases: + - /operate/rs/security/access-control/admin-console-access/ +url: '/operate/rs/7.4/security/access-control/create-cluster-roles/' +--- + +Roles with cluster access allow access to the Cluster Management UI and REST API. + +## Default management roles + +Redis Enterprise Software includes five predefined roles that determine a user's level of access to the Cluster Manager UI and [REST API]({{}}). + +1. **DB Viewer** - Read database settings +1. **DB Member** - Administer databases +1. **Cluster Viewer** - Read cluster settings +1. **Cluster Member** - Administer the cluster +1. **Admin** - Full cluster access +1. **None** - For data access only - cannot access the Cluster Manager UI or use the REST API + +For more details about the privileges granted by each of these roles, see [Cluster Manager UI permissions](#cluster-manager-ui-permissions) or [REST API permissions]({{}}). + +## Cluster Manager UI permissions + +Here's a summary of the Cluster Manager UI actions permitted by each default management role: + +| Action | DB Viewer | DB Member | Cluster Viewer | Cluster Member | Admin | +|--------|:---------:|:---------:|:--------------:|:-----------:|:------:| +| Create support package | ❌ No | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes | +| Edit database configuration | ❌ No | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes | +| Reset slow log | ❌ No | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes | +| View cluster configuration | ❌ No | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes | +| View cluster logs | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes
| +| View cluster metrics | ❌ No | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes | +| View database configuration | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | +| View database metrics | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | +| View node configuration | ❌ No | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes | +| View node metrics | ❌ No | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes | +| View Redis database password | ❌ No | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes | +| View slow log | ❌ No | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes | +| View and edit cluster settings |❌ No | ❌ No | ❌ No | ❌ No | ✅ Yes | + +## Create roles for cluster access {#create-cluster-role} + +To create a role that grants cluster access but does not grant access to any databases: + +1. From **Access Control** > **Roles**, you can: + + - Point to a role and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit an existing role. + + - Select **+ Add role** to create a new role. + + {{Add role with name}} + +1. Enter a descriptive name for the role. + +1. Choose a **Cluster management role** to determine cluster management permissions. + + {{Select a cluster management role to set the level of cluster management permissions for the new role.}} + +1. To prevent database access when using this role, do not add any ACLs. + +1. Select **Save**. + +You can [assign the new role to users]({{}}) to grant cluster access. +--- +Title: Create roles with combined access +alwaysopen: false +categories: +- docs +- operate +- rs +description: Create roles with both cluster and database access. +linkTitle: Create roles with combined access +weight: 16 +url: '/operate/rs/7.4/security/access-control/create-combined-roles/' +--- + +To create a role that grants database access privileges and allows access to the Cluster Management UI and REST API: + +1. [Define Redis ACLs](#define-redis-acls) that determine database access privileges. + +1. [Create a role with ACLs](#create-role) added and choose a **Cluster management role** other than **None**. + +## Define Redis ACLs + +To define a Redis ACL rule that you can assign to a role: + +1. From **Access Control > Redis ACLs**, you can either: + + - Point to a Redis ACL and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit an existing Redis ACL. + + - Select **+ Add Redis ACL** to create a new Redis ACL. + +1. Enter a descriptive name for the Redis ACL. This will be used to associate the ACL rule with the role. + +1. Define the ACL rule. For more information about Redis ACL rules and syntax, see the [Redis ACL overview]({{}}). + + {{}} +The **ACL builder** does not support selectors and key permissions. Use **Free text command** to manually define them instead. + {{}} + +1. Select **Save**. + +{{}} +For multi-key commands on multi-slot keys, the return value is `failure`, but the command runs on the keys that are allowed. +{{}} + +## Create roles with ACLs and cluster access {#create-role} + +To create a role that grants database access privileges and allows access to the Cluster Management UI and REST API: + +1. From **Access Control** > **Roles**, you can: + + - Point to a role and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit an existing role. + + - Select **+ Add role** to create a new role. + + {{Add role with name}} + +1. Enter a descriptive name for the role. 
This will be used to reference the role when configuring users. + +1. Choose a **Cluster management role** other than **None**. For details about permissions granted by each role, see [Cluster Manager UI permissions]({{}}) and [REST API permissions]({{}}). + + {{Add role with name}} + +1. Select **+ Add ACL**. + + {{Add role database acl}} + +1. Choose a Redis ACL and databases to associate with the role. + + {{Add databases to access}} + +1. Select the check mark {{< image filename="/images/rs/buttons/checkmark-button.png#no-click" alt="The Check button" width="25px" class="inline" >}} to confirm. + +1. Select **Save**. + + {{Add databases to access}} + +You can [assign the new role to users]({{}}) to grant database access and access to the Cluster Manager UI and REST API. +--- +Title: Overview of Redis ACLs in Redis Enterprise Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: An overview of Redis ACLs, syntax, and ACL command support in Redis Enterprise Software. +linkTitle: Redis ACL overview +weight: 17 +aliases: + - /operate/rs/security/access-control/rbac/configure-acl/ +url: '/operate/rs/7.4/security/access-control/redis-acl-overview/' +--- + +Redis access control lists (Redis ACLs) allow you to define named permissions for specific Redis commands, keys, and pub/sub channels. You can use defined Redis ACLs for multiple databases and roles. + +## Predefined Redis ACLs + +Redis Enterprise Software provides one predefined Redis ACL named **Full Access**. This ACL allows all commands on all keys and cannot be edited. + +## Redis ACL syntax + +Redis ACLs are defined by a [Redis syntax]({{< relref "/operate/oss_and_stack/management/security/acl" >}}) where you specify the commands or command categories that are allowed for specific keys. + +### Commands and categories + +Redis ACL rules can allow or block specific [Redis commands]({{< relref "/commands" >}}) or [command categories]({{< relref "/operate/oss_and_stack/management/security/acl" >}}#command-categories). + +- `+` includes commands + +- `-` excludes commands + +- `+@` includes command categories + +- `-@` excludes command categories + +The following example allows all `read` commands and the `SET` command: + +```sh ++@read +SET +``` + +Module commands have several ACL limitations: + +- [Redis modules]({{< relref "/operate/oss_and_stack/stack-with-enterprise" >}}) do not have command categories. + +- Other [command category]({{< relref "/operate/oss_and_stack/management/security/acl" >}}#command-categories) ACLs, such as `+@read` and `+@write`, do not include Redis module commands. `+@all` is the only exception because it allows all Redis commands. + +- You have to include individual module commands in a Redis ACL rule to allow them. + + For example, the following Redis ACL rule allows read-only commands and the RediSearch commands `FT.INFO` and `FT.SEARCH`: + + ```sh + +@read +FT.INFO +FT.SEARCH + ``` + +### Key patterns + +To define access to specific keys or key patterns, use the following prefixes: + +- `~` or `%RW~` allows read and write access to keys. + +- `%R~` allows read access to keys. + +- `%W~` allows write access to keys. + +`%RW~`, `%R~`, and `%W~` are only supported for databases with Redis version 7.2 or later. 
+ +The following example allows read and write access to all keys that start with "app1" and read-only access to all keys that start with "app2": + +```sh +~app1* %R~app2* +``` + +### Pub/sub channels + +The `&` prefix allows access to [pub/sub channels]({{< relref "/develop/interact/pubsub" >}}) (only supported for databases with Redis version 6.2 or later). + +To limit access to specific channels, include `resetchannels` before the allowed channels: + +```sh +resetchannels &channel1 &channel2 +``` + +### Selectors + +[Selectors]({{< relref "/operate/oss_and_stack/management/security/acl" >}}#selectors) let you define multiple sets of rules in a single Redis ACL (only supported for databases with Redis version 7.2 or later). A command is allowed if it matches the base rule or any selector in the Redis ACL. + +- `()` creates a new selector. + +- `clearselectors` deletes all existing selectors for a user. This action does not delete the base ACL rule. + +In the following example, the base rule allows `GET key1` and the selector allows `SET key2`: + +```sh ++GET ~key1 (+SET ~key2) +``` + +## Default pub/sub permissions + +Redis database version 6.2 introduced pub/sub ACL rules that determine which [pub/sub channels]({{< relref "/develop/interact/pubsub" >}}) a user can access. + +The configuration option `acl-pubsub-default`, added in Redis Enterprise Software version 6.4.2, determines the cluster-wide default level of access for all pub/sub channels. Redis Enterprise Software uses the following pub/sub permissions by default: + +- For versions 6.4.2 and 7.2, `acl-pubsub-default` is permissive (`allchannels` or `&*`) by default to accommodate earlier Redis versions. + +- In future versions, `acl-pubsub-default` will change to restrictive (`resetchannels`). Restrictive permissions block all pub/sub channels by default, unless explicitly permitted by an ACL rule. + +If you use ACLs and pub/sub channels, you should review your databases and ACL settings and plan to transition your cluster to restrictive pub/sub permissions in preparation for future Redis Enterprise Software releases. + +### Prepare for restrictive pub/sub permissions + +To secure pub/sub channels and prepare your cluster for future Redis Enterprise Software releases that default to restrictive pub/sub permissions: + +1. Upgrade Redis databases: + + - For Redis Enterprise Software version 6.4.2, upgrade all databases in the cluster to Redis DB version 6.2. + + - For Redis Enterprise Software version 7.2, upgrade all databases in the cluster to Redis DB version 7.2 or 6.2. + +1. Create or update ACLs with permissions for specific channels using the `resetchannels &channel` format. + +1. Associate the ACLs with relevant databases. + +1. Set default pub/sub permissions (`acl-pubsub-default`) to restrictive. See [Change default pub/sub permissions](#change-default-pubsub-permissions) for details. + +1. If any issues occur, you can temporarily change the default pub/sub setting back to permissive. Resolve any problematic ACLs before making pub/sub permissions restrictive again. + +{{}} +When you change the cluster's default pub/sub permissions to restrictive, `&*` is added to the **Full Access** ACL. Before you make this change, consider the following: + +- Because pub/sub ACL syntax was added in Redis 6.2, you can't associate the **Full Access** ACL with database versions 6.0 or lower after this change. + +- The **Full Access** ACL is not reverted if you change `acl-pubsub-default` to permissive again. 
+ +- Every database with the default user enabled uses the **Full Access** ACL. +{{}} + +### Change default pub/sub permissions + +As of Redis Enterprise version 6.4.2, you can configure `acl_pubsub_default`, which determines the default pub/sub permissions for all databases in the cluster. You can set `acl_pubsub_default` to the following values: + +- `resetchannels` is restrictive and blocks access to all channels by default. + +- `allchannels` is permissive and allows access to all channels by default. + +To make default pub/sub permissions restrictive: + +1. [Upgrade all databases]({{< relref "/operate/rs/7.4/installing-upgrading/upgrading/upgrade-database" >}}) in the cluster to Redis version 6.2 or later. + +1. Set the default to restrictive (`resetchannels`) using one of the following methods: + + - New Cluster Manager UI (only available for Redis Enterprise versions 7.2 and later): + + 1. Navigate to **Access Control > Settings > Pub/Sub ACLs** and select **Edit**. + + 1. For **Default permissions for Pub/Sub ACLs**, select **Restrictive**, then **Save**. + + - [`rladmin tune cluster`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster acl_pubsub_default resetchannels + ``` + + - [Update cluster policy]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "acl_pubsub_default": "resetchannels" } + ``` + +## ACL command support + +Redis Enterprise Software does not support certain Redis ACL commands. Instead, you can manage access controls from the Cluster Manager UI. + +{{}} + +Redis ACLs also have the following differences in Redis Enterprise Software: + +- The `MULTI`, `EXEC`, `DISCARD` commands are always allowed, but ACLs are enforced on `MULTI` subcommands. + +- Nested selectors are not supported. + + For example, the following selectors are not valid in Redis Enterprise: `+GET ~key1 (+SET (+SET ~key2) ~key3)` + +- Key and pub/sub patterns do not allow the following characters: `'(', ')'` + +- The following password configuration syntax is not supported: `'>', '<', '#!', 'resetpass'` + + To configure passwords in Redis Enterprise Software, use one of the following methods: + + - [`rladmin cluster reset_password`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/reset_password" >}}): + + ```sh + rladmin cluster reset_password + ``` + + - REST API [`PUT /v1/users`]({{< relref "/operate/rs/7.4/references/rest-api/requests/users#put-user" >}}) request and provide `password` + +--- +Title: Create roles with database access only +alwaysopen: false +categories: +- docs +- operate +- rs +description: Create roles with database access only. +linkTitle: Create roles with database access only +weight: 15 +aliases: + - /operate/rs/security/access-control/database-access/ +url: '/operate/rs/7.4/security/access-control/create-db-roles/' +--- + +Roles with database access grant the ability to access and interact with a database's data. Database access privileges are determined by defining [Redis ACLs]({{}}) and adding them to roles. + +To create a role that grants database access without granting access to the Redis Enterprise Cluster Manager UI and REST API: + +1. [Define Redis ACLs](#define-redis-acls) that determine database access privileges. + +1. [Create a role with ACLs](#create-roles-with-acls) added and leave the **Cluster management role** as **None**. 
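+
+As a sketch of the first step, a Redis ACL rule for a read-only, data-access-only role could allow the read command category on a single key prefix. The `app1:` prefix is only an example:
+
+```sh
++@read ~app1:*
+```
+
+Save the rule under a descriptive name in the first step, then attach it to a role (with **Cluster management role** set to **None**) and the target databases in the second step.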
+ +## Define Redis ACLs + +To define a Redis ACL rule that you can assign to a role: + +1. From **Access Control > Redis ACLs**, you can either: + + - Point to a Redis ACL and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit an existing Redis ACL. + + - Select **+ Add Redis ACL** to create a new Redis ACL. + +1. Enter a descriptive name for the Redis ACL. This will be used to associate the ACL rule with the role. + +1. Define the ACL rule. For more information about Redis ACL rules and syntax, see the [Redis ACL overview]({{}}). + + {{}} +The **ACL builder** does not support selectors and key permissions. Use **Free text command** to manually define them instead. + {{}} + +1. Select **Save**. + +{{}} +For multi-key commands on multi-slot keys, the return value is `failure`, but the command runs on the keys that are allowed. +{{}} + +## Create roles with ACLs + +To create a role that grants database access to users but blocks access to the Redis Enterprise Cluster Manager UI and REST API, set the **Cluster management role** to **None**. + +To define a role for database access: + +1. From **Access Control** > **Roles**, you can: + + - Point to a role and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit an existing role. + + - Select **+ Add role** to create a new role. + + {{Add role with name}} + +1. Enter a descriptive name for the role. This will be used to reference the role when configuring users. + +1. Leave **Cluster management role** as the default **None**. + + {{Add role with name}} + +1. Select **+ Add ACL**. + + {{Add role database acl}} + +1. Choose a Redis ACL and databases to associate with the role. + + {{Add databases to access}} + +1. Select the check mark {{< image filename="/images/rs/buttons/checkmark-button.png#no-click" alt="The Check button" width="25px" class="inline" >}} to confirm. + +1. Select **Save**. + + {{Add databases to access}} + +You can [assign the new role to users]({{}}) to grant database access. +--- +Title: Access control +alwaysopen: false +categories: +- docs +- operate +- rs +description: An overview of access control in Redis Enterprise Software. +hideListLinks: false +linkTitle: Access control +weight: 10 +aliases: + - /operate/rs/security/access-control/rbac/ + - /operate/rs/security/access-control/rbac/create-roles/ +url: '/operate/rs/7.4/security/access-control/' +--- + +Redis Enterprise Software lets you use role-based access control (RBAC) to manage users' access privileges. RBAC requires you to do the following: + +1. Create roles and define each role's access privileges. + +1. Create users and assign roles to them. The assigned role determines the user's access privileges. + +## Cluster access versus database access + +Redis Enterprise allows two separate paths of access: + +- **Cluster access** allows performing management-related actions, such as creating databases and viewing statistics. + +- **Database access** allows performing data-related actions, like reading and writing data in a database. + +You can grant cluster access, database access, or both to each role. These roles let you differentiate between users who can access databases and users who can access cluster management, according to your organization's security needs. 
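+
+To make the distinction concrete, here is a minimal sketch of what each access path looks like in practice. The hostnames, ports, and credentials are placeholders, not values from your cluster:
+
+```sh
+# Cluster access: management operations, such as listing databases
+# through the REST API (port 9443 by default).
+curl -k -u "admin@example.com:cluster-password" \
+    https://cluster.example.com:9443/v1/bdbs
+
+# Database access: data operations over a client connection to a database
+# endpoint, authenticated as a user whose role is listed in the database ACL.
+redis-cli -h redis-12345.cluster.example.com -p 12345 \
+    --user app.user --pass 'app-password' GET mykey
+```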
+ +The following diagram shows three different options for roles and users: + +{{Role-based access control diagram.}} + +- Role A was created with permission to access the cluster and perform management-related actions. Because user A was assigned role A, they can access the cluster but cannot access databases. + +- Role B was created with permission to access one or more databases and perform data-related actions. Because user B was assigned role B, they cannot access the cluster but can access databases. + +- Role C was created with cluster access and database access permissions. Because user C was assigned role C, they can access the cluster and databases. + +## Default database access + +When you create a database, [default user access]({{< relref "/operate/rs/7.4/security/access-control/manage-users/default-user" >}}) is enabled automatically. + +If you set up role-based access controls for your database and don't require compatibility with versions earlier than Redis 6, you can [deactivate the default user]({{< relref "/operate/rs/7.4/security/access-control/manage-users/default-user" >}}). + +{{}} +Before you [deactivate default user access]({{< relref "/operate/rs/7.4/security/access-control/manage-users/default-user#deactivate-default-user" >}}), make sure the role associated with the database is [assigned to a user]({{< relref "/operate/rs/7.4/security/access-control/create-users#assign-roles-to-users" >}}). Otherwise, the database will be inaccessible. +{{}} + +## More info +--- +Title: Security +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +hideListLinks: true +weight: 60 +aliases: + - /operate/rs/administering/designing-production/security +url: '/operate/rs/7.4/security/' +--- + +Redis Enterprise Software provides various features to secure your Redis Enterprise Software deployment: + +| Login and passwords | Users and roles | Encryption and TLS | Certificates and audit | +|---------------------|-----------------|--------------------|-----------------------| +| [Password attempts and session timeout]({{}}) | [Cluster and database access explained]({{}}) | [Enable TLS]({{}}) | [Create certificates]({{}}) | +| [Password complexity]({{}}) | [Create users]({{}}) | [Configure TLS protocols]({{}}) | [Monitor certificates]({{}}) | +| [Password expiration]({{}}) | [Create roles]({{}}) | [Configure cipher suites]({{}}) | [Update certificates]({{}}) | +| [Default database access]({{}}) | [Redis ACLs]({{}}) | [Encrypt private keys on disk]({{}}) | [Enable OCSP stapling]({{}}) | +| [Rotate user passwords]({{}}) | [Integrate with LDAP]({{}}) | [Internode encryption]({{}}) | [Audit database connections]({{}}) | + +## Recommended security practices + +See [Recommended security practices]({{}}) to learn how to protect Redis Enterprise Software. + +## Redis Trust Center + +Visit our [Trust Center](https://trust.redis.io/) to learn more about Redis security policies. If you find a suspected security bug, you can [submit a report](https://hackerone.com/redis-vdp?type=team). +--- +Title: Compatibility with Redis Open Source configuration settings +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Redis Open Source configuration settings supported by Redis Enterprise. +linkTitle: Configuration settings +weight: 50 +url: '/operate/rs/7.4/references/compatibility/config-settings/' +--- + +Redis Enterprise Software and [Redis Cloud]({{< relref "/operate/rc" >}}) only support a subset of [Redis Open Source configuration settings]({{}}). 
Using [`CONFIG GET`]({{< relref "/commands/config-get" >}}) or [`CONFIG SET`]({{< relref "/commands/config-set" >}}) with unsupported configuration settings returns an error. + +| Setting | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| activerehashing | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| busy-reply-threshold | ✅ Standard
✅ Active-Active | ❌ Standard
❌ Active-Active | Value must be between 0 and 60000 milliseconds. | +| hash-max-listpack-entries | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| hash-max-listpack-value | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| hash-max-ziplist-entries | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| hash-max-ziplist-value | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| hll-sparse-max-bytes | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| list-compress-depth | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| list-max-listpack-size | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| list-max-ziplist-size | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| lua-time-limit | ✅ Standard
✅ Active-Active | ❌ Standard
❌ Active-Active | Value must be between 0 and 60000 milliseconds. | +| notify-keyspace-events | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| set-max-intset-entries | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| slowlog-log-slower-than | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Value must be larger than 1000 microseconds. | +| slowlog-max-len | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Value must be between 128 and 1024. | +| stream-node-max-bytes | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| stream-node-max-entries | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| zset-max-listpack-entries | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| zset-max-listpack-value | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| zset-max-ziplist-entries | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| zset-max-ziplist-value | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +--- +Title: Connection management commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Connection management commands compatibility. +linkTitle: Connection management +weight: 10 +url: '/operate/rs/7.4/references/compatibility/commands/connection/' +--- + +The following tables show which Redis Open Source [connection management commands]({{< relref "/commands" >}}?group=connection) are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [AUTH]({{< relref "/commands/auth" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT CACHING]({{< relref "/commands/client-caching" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLIENT GETNAME]({{< relref "/commands/client-getname" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT GETREDIR]({{< relref "/commands/client-getredir" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLIENT ID]({{< relref "/commands/client-id" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Because Redis Enterprise clustering allows [multiple active proxies]({{< relref "/operate/rs/7.4/databases/configure/proxy-policy" >}}), `CLIENT ID` cannot guarantee incremental IDs between clients that connect to different nodes under multi proxy policies. | +| [CLIENT INFO]({{< relref "/commands/client-info" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT KILL]({{< relref "/commands/client-kill" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT LIST]({{< relref "/commands/client-list" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT NO-EVICT]({{< relref "/commands/client-no-evict" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLIENT NO-TOUCH]({{< relref "/commands/client-no-touch" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT PAUSE]({{< relref "/commands/client-pause" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLIENT REPLY]({{< relref "/commands/client-reply" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLIENT SETINFO]({{< relref "/commands/client-setinfo" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT SETNAME]({{< relref "/commands/client-setname" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT TRACKING]({{< relref "/commands/client-tracking" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLIENT TRACKINGINFO]({{< relref "/commands/client-trackinginfo" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLIENT UNBLOCK]({{< relref "/commands/client-unblock" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT UNPAUSE]({{< relref "/commands/client-unpause" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ECHO]({{< relref "/commands/echo" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HELLO]({{< relref "/commands/hello" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PING]({{< relref "/commands/ping" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [QUIT]({{< relref "/commands/quit" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v7.2.0. | +| [RESET]({{< relref "/commands/reset" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [SELECT]({{< relref "/commands/select" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | Redis Enterprise does not support shared databases due to potential negative performance impacts and blocks any related commands. The `SELECT` command is supported solely for compatibility with Redis Open Source but does not perform any operations in Redis Enterprise. | +--- +Title: Cluster management commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Cluster management commands compatible with Redis Enterprise. +linkTitle: Cluster management +weight: 10 +url: '/operate/rs/7.4/references/compatibility/commands/cluster/' +--- + +[Clustering in Redis Enterprise Software]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering" >}}) and [Redis Cloud]({{< relref "/operate/rc/databases/configuration/clustering" >}}) differs from the [Redis Open Source cluster]({{}}) and works with all standard Redis clients. + +Redis Enterprise blocks most [cluster commands]({{< relref "/commands" >}}?group=cluster). If you try to use a blocked cluster command, it returns an error. + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [ASKING]({{< relref "/commands/asking" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER ADDSLOTS]({{< relref "/commands/cluster-addslots" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER ADDSLOTSRANGE]({{< relref "/commands/cluster-addslotsrange" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER BUMPEPOCH]({{< relref "/commands/cluster-bumpepoch" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER COUNT-FAILURE-REPORTS]({{< relref "/commands/cluster-count-failure-reports" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER COUNTKEYSINSLOT]({{< relref "/commands/cluster-countkeysinslot" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER DELSLOTS]({{< relref "/commands/cluster-delslots" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER DELSLOTSRANGE]({{< relref "/commands/cluster-delslotsrange" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER FAILOVER]({{< relref "/commands/cluster-failover" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER FLUSHSLOTS]({{< relref "/commands/cluster-flushslots" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER FORGET]({{< relref "/commands/cluster-forget" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER GETKEYSINSLOT]({{< relref "/commands/cluster-getkeysinslot" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER HELP]({{< relref "/commands/cluster-help" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Only supported with the [OSS cluster API]({{< relref "/operate/rs/7.4/databases/configure/oss-cluster-api" >}}). | +| [CLUSTER INFO]({{< relref "/commands/cluster-info" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Only supported with the [OSS cluster API]({{< relref "/operate/rs/7.4/databases/configure/oss-cluster-api" >}}). | +| [CLUSTER KEYSLOT]({{< relref "/commands/cluster-keyslot" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Only supported with the [OSS cluster API]({{< relref "/operate/rs/7.4/databases/configure/oss-cluster-api" >}}). | +| [CLUSTER LINKS]({{< relref "/commands/cluster-links" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER MEET]({{< relref "/commands/cluster-meet" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER MYID]({{< relref "/commands/cluster-myid" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER MYSHARDID]({{< relref "/commands/cluster-myshardid" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER NODES]({{< relref "/commands/cluster-nodes" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Only supported with the [OSS cluster API]({{< relref "/operate/rs/7.4/databases/configure/oss-cluster-api" >}}). | +| [CLUSTER REPLICAS]({{< relref "/commands/cluster-replicas" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER REPLICATE]({{< relref "/commands/cluster-replicate" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER RESET]({{< relref "/commands/cluster-reset" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER SAVECONFIG]({{< relref "/commands/cluster-saveconfig" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER SET-CONFIG-EPOCH]({{< relref "/commands/cluster-set-config-epoch" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER SETSLOT]({{< relref "/commands/cluster-setslot" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER SHARDS]({{< relref "/commands/cluster-shards" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER SLAVES]({{< relref "/commands/cluster-slaves" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | Deprecated as of Redis v5.0.0. | +| [CLUSTER SLOTS]({{< relref "/commands/cluster-slots" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Only supported with the [OSS cluster API]({{< relref "/operate/rs/7.4/databases/configure/oss-cluster-api" >}}). Deprecated as of Redis v7.0.0. | +| [READONLY]({{< relref "/commands/readonly" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [READWRITE]({{< relref "/commands/readwrite" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +--- +Title: Data type commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Data type commands compatibility (bitmaps, geospatial indices, hashes, + HyperLogLogs, lists, sets, sorted sets, streams, strings). +linkTitle: Data types +toc: 'true' +weight: 10 +url: '/operate/rs/7.4/references/compatibility/commands/data-types/' +--- + +The following tables show which Redis Open source data type commands are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + +## Bitmap commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [BITCOUNT]({{< relref "/commands/bitcount" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BITFIELD]({{< relref "/commands/bitfield" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BITFIELD_RO]({{< relref "/commands/bitfield_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BITOP]({{< relref "/commands/bitop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BITPOS]({{< relref "/commands/bitpos" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GETBIT]({{< relref "/commands/getbit" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SETBIT]({{< relref "/commands/setbit" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## Geospatial indices commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [GEOADD]({{< relref "/commands/geoadd" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GEODIST]({{< relref "/commands/geodist" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GEOHASH]({{< relref "/commands/geohash" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GEOPOS]({{< relref "/commands/geopos" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GEORADIUS]({{< relref "/commands/georadius" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [GEORADIUS_RO]({{< relref "/commands/georadius_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [GEORADIUSBYMEMBER]({{< relref "/commands/georadiusbymember" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [GEORADIUSBYMEMBER_RO]({{< relref "/commands/georadiusbymember_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [GEOSEARCH]({{< relref "/commands/geosearch" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GEOSEARCHSTORE]({{< relref "/commands/geosearchstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## Hash commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [HDEL]({{< relref "/commands/hdel" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HEXISTS]({{< relref "/commands/hexists" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HGET]({{< relref "/commands/hget" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HGETALL]({{< relref "/commands/hgetall" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HINCRBY]({{< relref "/commands/hincrby" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HINCRBYFLOAT]({{< relref "/commands/hincrbyfloat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HKEYS]({{< relref "/commands/hkeys" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HLEN]({{< relref "/commands/hlen" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HMGET]({{< relref "/commands/hmget" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HMSET]({{< relref "/commands/hmset" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v4.0.0. | +| [HRANDFIELD]({{< relref "/commands/hrandfield" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HSCAN]({{< relref "/commands/hscan" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HSET]({{< relref "/commands/hset" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HSETNX]({{< relref "/commands/hsetnx" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HSTRLEN]({{< relref "/commands/hstrlen" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HVALS]({{< relref "/commands/hvals" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## HyperLogLog commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [PFADD]({{< relref "/commands/pfadd" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PFCOUNT]({{< relref "/commands/pfcount" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PFDEBUG]({{< relref "/commands/pfdebug" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [PFMERGE]({{< relref "/commands/pfmerge" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PFSELFTEST]({{< relref "/commands/pfselftest" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | + + +## List commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [BLMOVE]({{< relref "/commands/blmove" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BLMPOP]({{< relref "/commands/blmpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BLPOP]({{< relref "/commands/blpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BRPOP]({{< relref "/commands/brpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BRPOPLPUSH]({{< relref "/commands/brpoplpush" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [LINDEX]({{< relref "/commands/lindex" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LINSERT]({{< relref "/commands/linsert" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LLEN]({{< relref "/commands/llen" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LMOVE]({{< relref "/commands/lmove" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LMPOP]({{< relref "/commands/lmpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LPOP]({{< relref "/commands/lpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LPOS]({{< relref "/commands/lpos" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LPUSH]({{< relref "/commands/lpush" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LPUSHX]({{< relref "/commands/lpushx" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LRANGE]({{< relref "/commands/lrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LREM]({{< relref "/commands/lrem" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LSET]({{< relref "/commands/lset" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LTRIM]({{< relref "/commands/ltrim" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [RPOP]({{< relref "/commands/rpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [RPOPLPUSH]({{< relref "/commands/rpoplpush" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [RPUSH]({{< relref "/commands/rpush" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [RPUSHX]({{< relref "/commands/rpushx" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## Set commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [SADD]({{< relref "/commands/sadd" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SCARD]({{< relref "/commands/scard" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SDIFF]({{< relref "/commands/sdiff" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SDIFFSTORE]({{< relref "/commands/sdiffstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SINTER]({{< relref "/commands/sinter" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SINTERCARD]({{< relref "/commands/sintercard" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SINTERSTORE]({{< relref "/commands/sinterstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SISMEMBER]({{< relref "/commands/sismember" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SMEMBERS]({{< relref "/commands/smembers" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | |
+| [SMISMEMBER]({{< relref "/commands/smismember" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SMOVE]({{< relref "/commands/smove" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SPOP]({{< relref "/commands/spop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SRANDMEMBER]({{< relref "/commands/srandmember" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SREM]({{< relref "/commands/srem" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SSCAN]({{< relref "/commands/sscan" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SUNION]({{< relref "/commands/sunion" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SUNIONSTORE]({{< relref "/commands/sunionstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## Sorted set commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [BZMPOP]({{< relref "/commands/bzmpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BZPOPMAX]({{< relref "/commands/bzpopmax" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BZPOPMIN]({{< relref "/commands/bzpopmin" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZADD]({{< relref "/commands/zadd" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZCARD]({{< relref "/commands/zcard" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZCOUNT]({{< relref "/commands/zcount" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZDIFF]({{< relref "/commands/zdiff" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZDIFFSTORE]({{< relref "/commands/zdiffstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZINCRBY]({{< relref "/commands/zincrby" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZINTER]({{< relref "/commands/zinter" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZINTERCARD]({{< relref "/commands/zintercard" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZINTERSTORE]({{< relref "/commands/zinterstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZLEXCOUNT]({{< relref "/commands/zlexcount" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZMPOP]({{< relref "/commands/zmpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZMSCORE]({{< relref "/commands/zmscore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZPOPMAX]({{< relref "/commands/zpopmax" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZPOPMIN]({{< relref "/commands/zpopmin" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZRANDMEMBER]({{< relref "/commands/zrandmember" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZRANGE]({{< relref "/commands/zrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZRANGEBYLEX]({{< relref "/commands/zrangebylex" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [ZRANGEBYSCORE]({{< relref "/commands/zrangebyscore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [ZRANGESTORE]({{< relref "/commands/zrangestore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZRANK]({{< relref "/commands/zrank" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZREM]({{< relref "/commands/zrem" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZREMRANGEBYLEX]({{< relref "/commands/zremrangebylex" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZREMRANGEBYRANK]({{< relref "/commands/zremrangebyrank" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZREMRANGEBYSCORE]({{< relref "/commands/zremrangebyscore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZREVRANGE]({{< relref "/commands/zrevrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [ZREVRANGEBYLEX]({{< relref "/commands/zrevrangebylex" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [ZREVRANGEBYSCORE]({{< relref "/commands/zrevrangebyscore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [ZREVRANK]({{< relref "/commands/zrevrank" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZSCAN]({{< relref "/commands/zscan" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZSCORE]({{< relref "/commands/zscore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZUNION]({{< relref "/commands/zunion" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZUNIONSTORE]({{< relref "/commands/zunionstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## Stream commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [XACK]({{< relref "/commands/xack" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XADD]({{< relref "/commands/xadd" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XAUTOCLAIM]({{< relref "/commands/xautoclaim" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XCLAIM]({{< relref "/commands/xclaim" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XDEL]({{< relref "/commands/xdel" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XGROUP]({{< relref "/commands/xgroup" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XINFO]({{< relref "/commands/xinfo" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XLEN]({{< relref "/commands/xlen" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XPENDING]({{< relref "/commands/xpending" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XRANGE]({{< relref "/commands/xrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XREAD]({{< relref "/commands/xread" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XREADGROUP]({{< relref "/commands/xreadgroup" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XREVRANGE]({{< relref "/commands/xrevrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XSETID]({{< relref "/commands/xsetid" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XTRIM]({{< relref "/commands/xtrim" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## String commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [APPEND]({{< relref "/commands/append" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [DECR]({{< relref "/commands/decr" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [DECRBY]({{< relref "/commands/decrby" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GET]({{< relref "/commands/get" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GETDEL]({{< relref "/commands/getdel" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GETEX]({{< relref "/commands/getex" >}}) | ✅ Standard
✅ Active-Active\* | ✅ Standard
✅ Active-Active\* | \*Not supported for HyperLogLog. | +| [GETRANGE]({{< relref "/commands/getrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GETSET]({{< relref "/commands/getset" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [INCR]({{< relref "/commands/incr" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [INCRBY]({{< relref "/commands/incrby" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [INCRBYFLOAT]({{< relref "/commands/incrbyfloat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LCS]({{< relref "/commands/lcs" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [MGET]({{< relref "/commands/mget" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [MSET]({{< relref "/commands/mset" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [MSETNX]({{< relref "/commands/msetnx" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PSETEX]({{< relref "/commands/psetex" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SET]({{< relref "/commands/set" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SETEX]({{< relref "/commands/setex" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SETNX]({{< relref "/commands/setnx" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SETRANGE]({{< relref "/commands/setrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| STRALGO | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | Deprecated as of Redis v7.0.0. | +| [STRLEN]({{< relref "/commands/strlen" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SUBSTR]({{< relref "/commands/substr" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | Deprecated as of Redis v2.0.0. | +--- +Title: Server management commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Server management commands compatibility. +linkTitle: Server management +toc: 'true' +weight: 10 +url: '/operate/rs/7.4/references/compatibility/commands/server/' +--- + +The following tables show which Redis Open Source [server management commands]({{< relref "/commands" >}}?group=server) are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + +## Access control commands + +Several access control list (ACL) commands are not available in Redis Enterprise. Instead, you can manage access controls from the [Redis Enterprise Software Cluster Manager UI]({{< relref "/operate/rs/7.4/security/access-control" >}}) and the [Redis Cloud console]({{< relref "/operate/rc/security/access-control/data-access-control/role-based-access-control.md" >}}). + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [ACL CAT]({{< relref "/commands/acl-cat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [ACL DELUSER]({{< relref "/commands/acl-deluser" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ACL DRYRUN]({{< relref "/commands/acl-dryrun" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Might reply with "unknown user" for LDAP users even if `AUTH` succeeds. | +| [ACL GENPASS]({{< relref "/commands/acl-genpass" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ACL GETUSER]({{< relref "/commands/acl-getuser" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [ACL HELP]({{< relref "/commands/acl-help" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [ACL LIST]({{< relref "/commands/acl-list" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [ACL LOAD]({{< relref "/commands/acl-load" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ACL LOG]({{< relref "/commands/acl-log" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ACL SAVE]({{< relref "/commands/acl-save" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ACL SETUSER]({{< relref "/commands/acl-setuser" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ACL USERS]({{< relref "/commands/acl-users" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [ACL WHOAMI]({{< relref "/commands/acl-whoami" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | + + +## Configuration commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [CONFIG GET]({{< relref "/commands/config-get" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | [Only supports a subset of configuration settings.]({{< relref "/operate/rs/7.4/references/compatibility/config-settings" >}}) | +| [CONFIG RESETSTAT]({{< relref "/commands/config-resetstat" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CONFIG REWRITE]({{< relref "/commands/config-rewrite" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CONFIG SET]({{< relref "/commands/config-set" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | [Only supports a subset of configuration settings.]({{< relref "/operate/rs/7.4/references/compatibility/config-settings" >}}) | + + +## General server commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [COMMAND]({{< relref "/commands/command" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND COUNT]({{< relref "/commands/command-count" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND DOCS]({{< relref "/commands/command-docs" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND GETKEYS]({{< relref "/commands/command-getkeys" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND GETKEYSANDFLAGS]({{< relref "/commands/command-getkeysandflags" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND HELP]({{< relref "/commands/command-help" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND INFO]({{< relref "/commands/command-info" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND LIST]({{< relref "/commands/command-list" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [DEBUG]({{< relref "/commands/debug" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [FLUSHALL]({{< relref "/commands/flushall" >}}) | ✅ Standard
❌ Active-Active\* | ✅ Standard
❌ Active-Active | \*Can use the [Active-Active flush API request]({{< relref "/operate/rs/7.4/references/rest-api/requests/crdbs/flush" >}}). | +| [FLUSHDB]({{< relref "/commands/flushdb" >}}) | ✅ Standard
❌ Active-Active\* | ✅ Standard
❌ Active-Active | \*Can use the [Active-Active flush API request]({{< relref "/operate/rs/7.4/references/rest-api/requests/crdbs/flush" >}}). | +| [LOLWUT]({{< relref "/commands/lolwut" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SHUTDOWN]({{< relref "/commands/shutdown" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [SWAPDB]({{< relref "/commands/swapdb" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [TIME]({{< relref "/commands/time" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## Module commands + +For Redis Enterprise Software, you can [manage Redis modules]({{< relref "/operate/oss_and_stack/stack-with-enterprise/install/" >}}) from the Cluster Manager UI or with [REST API requests]({{< relref "/operate/rs/7.4/references/rest-api/requests/modules" >}}). + +Redis Cloud manages modules for you and lets you [enable modules]({{< relref "/operate/rc/databases/create-database#modules" >}}) when you create a database. + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [MODULE HELP]({{< relref "/commands/module-help" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MODULE LIST]({{< relref "/commands/module-list" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [MODULE LOAD]({{< relref "/commands/module-load" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MODULE LOADEX]({{< relref "/commands/module-loadex" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MODULE UNLOAD]({{< relref "/commands/module-unload" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | + + +## Monitoring commands + +Although Redis Enterprise does not support certain monitoring commands, you can use the Cluster Manager UI to view Redis Enterprise Software [metrics]({{< relref "/operate/rs/7.4/clusters/monitoring" >}}) and [logs]({{< relref "/operate/rs/7.4/clusters/logging" >}}) or the Redis Cloud console to view Redis Cloud [metrics]({{< relref "/operate/rc/databases/monitor-performance" >}}) and [logs]({{< relref "/operate/rc/logs-reports/system-logs" >}}). + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [DBSIZE]({{< relref "/commands/dbsize" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [INFO]({{< relref "/commands/info" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | In Redis Enterprise, `INFO` returns a different set of fields than Redis Open Source.
Not supported for [scripts]({{}}). | +| [LATENCY DOCTOR]({{< relref "/commands/latency-doctor" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [LATENCY GRAPH]({{< relref "/commands/latency-graph" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [LATENCY HELP]({{< relref "/commands/latency-help" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [LATENCY HISTOGRAM]({{< relref "/commands/latency-histogram" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LATENCY HISTORY]({{< relref "/commands/latency-history" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [LATENCY LATEST]({{< relref "/commands/latency-latest" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [LATENCY RESET]({{< relref "/commands/latency-reset" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MEMORY DOCTOR]({{< relref "/commands/memory-doctor" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MEMORY HELP]({{< relref "/commands/memory-help" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}) in Redis versions earlier than 7. | +| [MEMORY MALLOC-STATS]({{< relref "/commands/memory-malloc-stats" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MEMORY PURGE]({{< relref "/commands/memory-purge" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MEMORY STATS]({{< relref "/commands/memory-stats" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MEMORY USAGE]({{< relref "/commands/memory-usage" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}) in Redis versions earlier than 7. | +| [MONITOR]({{< relref "/commands/monitor" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SLOWLOG GET]({{< relref "/commands/slowlog-get" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [SLOWLOG LEN]({{< relref "/commands/slowlog-len" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [SLOWLOG RESET]({{< relref "/commands/slowlog-reset" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | + + +## Persistence commands + +Data persistence and backup commands are not available in Redis Enterprise. Instead, you can [manage data persistence]({{< relref "/operate/rs/7.4/databases/configure/database-persistence" >}}) and [backups]({{< relref "/operate/rs/7.4/databases/import-export/schedule-backups" >}}) from the Redis Enterprise Software Cluster Manager UI and the [Redis Cloud console]({{< relref "/operate/rc/databases/view-edit-database#durability-section" >}}). + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [BGREWRITEAOF]({{< relref "/commands/bgrewriteaof" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [BGSAVE]({{< relref "/commands/bgsave" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [LASTSAVE]({{< relref "/commands/lastsave" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [SAVE]({{< relref "/commands/save" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | + + +## Replication commands + +Redis Enterprise automatically manages [replication]({{< relref "/operate/rs/7.4/databases/durability-ha/replication" >}}). + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [FAILOVER]({{< relref "/commands/failover" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MIGRATE]({{< relref "/commands/migrate" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [PSYNC]({{< relref "/commands/psync" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [REPLCONF]({{< relref "/commands/replconf" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [REPLICAOF]({{< relref "/commands/replicaof" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [RESTORE-ASKING]({{< relref "/commands/restore-asking" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ROLE]({{< relref "/commands/role" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [SLAVEOF]({{< relref "/commands/slaveof" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | Deprecated as of Redis v5.0.0. | +| [SYNC]({{< relref "/commands/sync" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +--- +Title: Pub/sub commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Pub/sub commands compatibility. +linkTitle: Pub/sub +weight: 10 +url: '/operate/rs/7.4/references/compatibility/commands/pub-sub/' +--- + +The following table shows which Redis Open Source [pub/sub commands]({{< relref "/commands" >}}?group=pubsub) are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [PSUBSCRIBE]({{< relref "/commands/psubscribe" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUBLISH]({{< relref "/commands/publish" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUBSUB CHANNELS]({{< relref "/commands/pubsub-channels" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUBSUB NUMPAT]({{< relref "/commands/pubsub-numpat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUBSUB NUMSUB]({{< relref "/commands/pubsub-numsub" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUBSUB SHARDCHANNELS]({{< relref "/commands/pubsub-shardchannels" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUBSUB SHARDNUMSUB]({{< relref "/commands/pubsub-shardnumsub" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUNSUBSCRIBE]({{< relref "/commands/punsubscribe" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SPUBLISH]({{< relref "/commands/spublish" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SSUBSCRIBE]({{< relref "/commands/ssubscribe" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SUBSCRIBE]({{< relref "/commands/subscribe" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SUNSUBSCRIBE]({{< relref "/commands/sunsubscribe" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [UNSUBSCRIBE]({{< relref "/commands/unsubscribe" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +--- +Title: Transaction commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Transaction commands compatibility. +linkTitle: Transactions +weight: 10 +url: '/operate/rs/7.4/references/compatibility/commands/transactions/' +--- + +The following table shows which Redis Open Source [transaction commands]({{< relref "/commands" >}}?group=transactions) are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [DISCARD]({{< relref "/commands/discard" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EXEC]({{< relref "/commands/exec" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [MULTI]({{< relref "/commands/multi" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [UNWATCH]({{< relref "/commands/unwatch" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [WATCH]({{< relref "/commands/watch" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +--- +Title: Compatibility with Redis Open Source commands +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Redis Open Source commands compatible with Redis Enterprise. +hideListLinks: true +linkTitle: Commands +weight: 30 +url: '/operate/rs/7.4/references/compatibility/commands/' +--- + +Learn which Redis Open Source commands are compatible with Redis Enterprise Software and [Redis Cloud]({{< relref "/operate/rc" >}}). + +Select a command group for more details about compatibility with standard and Active-Active Redis Enterprise. + +{{}} +--- +Title: Scripting commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Scripting and function commands compatibility. +linkTitle: Scripting +weight: 10 +url: '/operate/rs/7.4/references/compatibility/commands/scripting/' +--- + +The following table shows which Redis Open Source [scripting and function commands]({{< relref "/commands" >}}?group=scripting) are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + +## Function commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [FCALL]({{< relref "/commands/fcall" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FCALL_RO]({{< relref "/commands/fcall_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION DELETE]({{< relref "/commands/function-delete" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION DUMP]({{< relref "/commands/function-dump" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION FLUSH]({{< relref "/commands/function-flush" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION HELP]({{< relref "/commands/function-help" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION KILL]({{< relref "/commands/function-kill" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION LIST]({{< relref "/commands/function-list" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION LOAD]({{< relref "/commands/function-load" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION RESTORE]({{< relref "/commands/function-restore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION STATS]({{< relref "/commands/function-stats" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + +## Scripting commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [EVAL]({{< relref "/commands/eval" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EVAL_RO]({{< relref "/commands/eval_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EVALSHA]({{< relref "/commands/evalsha" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EVALSHA_RO]({{< relref "/commands/evalsha_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SCRIPT DEBUG]({{< relref "/commands/script-debug" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [SCRIPT EXISTS]({{< relref "/commands/script-exists" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SCRIPT FLUSH]({{< relref "/commands/script-flush" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SCRIPT KILL]({{< relref "/commands/script-kill" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SCRIPT LOAD]({{< relref "/commands/script-load" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +--- +Title: Key commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Generic key commands compatible with Redis Enterprise. +linkTitle: Keys (generic) +weight: 10 +url: '/operate/rs/7.4/references/compatibility/commands/generic/' +--- + +The following table shows which Redis Open Source [key (generic) commands]({{< relref "/commands" >}}?group=generic) are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [COPY]({{< relref "/commands/copy" >}}) | ✅ Standard
✅ Active-Active\* | ✅ Standard
✅ Active-Active\* | For Active-Active or clustered databases, the source and destination keys must be in the same hash slot.

\*Not supported for stream consumer group info. | +| [DEL]({{< relref "/commands/del" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [DUMP]({{< relref "/commands/dump" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EXISTS]({{< relref "/commands/exists" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EXPIRE]({{< relref "/commands/expire" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EXPIREAT]({{< relref "/commands/expireat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EXPIRETIME]({{< relref "/commands/expiretime" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [KEYS]({{< relref "/commands/keys" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [MIGRATE]({{< relref "/commands/migrate" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MOVE]({{< relref "/commands/move" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | Redis Enterprise does not support shared databases due to potential negative performance impacts and blocks any related commands. | +| [OBJECT ENCODING]({{< relref "/commands/object-encoding" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [OBJECT FREQ]({{< relref "/commands/object-freq" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [OBJECT IDLETIME]({{< relref "/commands/object-idletime" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [OBJECT REFCOUNT]({{< relref "/commands/object-refcount" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PERSIST]({{< relref "/commands/persist" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PEXPIRE]({{< relref "/commands/pexpire" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PEXPIREAT]({{< relref "/commands/pexpireat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PEXPIRETIME]({{< relref "/commands/pexpiretime" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PTTL]({{< relref "/commands/pttl" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [RANDOMKEY]({{< relref "/commands/randomkey" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [RENAME]({{< relref "/commands/rename" >}}) | ✅ Standard
✅ Active-Active\* | ✅ Standard
✅ Active-Active\* | For Active-Active or clustered databases, the original key and new key must be in the same hash slot.

\*Not supported for stream consumer group info. | +| [RENAMENX]({{< relref "/commands/renamenx" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | For Active-Active or clustered databases, the original key and new key must be in the same hash slot. | +| [RESTORE]({{< relref "/commands/restore" >}}) | ✅ Standard
❌ Active-Active\* | ✅ Standard
❌ Active-Active\* | \*Only supported for module keys. | +| [SCAN]({{< relref "/commands/scan" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SORT]({{< relref "/commands/sort" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SORT_RO]({{< relref "/commands/sort_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [TOUCH]({{< relref "/commands/touch" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [TTL]({{< relref "/commands/ttl" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [TYPE]({{< relref "/commands/type" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [UNLINK]({{< relref "/commands/unlink" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [WAIT]({{< relref "/commands/wait" >}}) | ✅ Standard
❌ Active-Active\* | ❌ Standard\*\*
❌ Active-Active | \*For Active-Active databases, `WAIT` commands are supported for primary and replica shard replication. You can contact support to enable `WAIT` for local replicas only. `WAIT` is not supported for cross-instance replication.

\*\*`WAIT` commands are supported on Redis Cloud Flexible subscriptions. | +| [WAITAOF]({{< relref "/commands/waitaof" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + +--- +Title: RESP compatibility with Redis Enterprise +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis Enterprise supports RESP2 and RESP3. +linkTitle: RESP +toc: 'true' +weight: 80 +url: '/operate/rs/7.4/references/compatibility/resp/' +--- + +RESP (Redis Serialization Protocol) is the protocol that clients use to communicate with Redis databases. See the [RESP protocol specification]({{< relref "/develop/reference/protocol-spec" >}}) for more information. + +## Supported RESP versions + +- RESP2 is supported by all Redis Enterprise versions. + +- RESP3 is supported by Redis Enterprise 7.2 and later. + +{{}} +Redis Enterprise versions that support RESP3 continue to support RESP2. +{{}} + + +## Enable RESP3 for a database {#enable-resp3} + +To use RESP3 with a Redis Enterprise Software database: + +1. Upgrade Redis servers to version 7.2 or later. + + For Active-Active and Replica Of databases: + + 1. Upgrade all participating clusters to Redis Enterprise version 7.2.x or later. + + 1. Upgrade all databases to version 7.x or later. + +1. Enable RESP3 support for your database (`enabled` by default): + + - [`rladmin tune db`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-db" >}}): + + ```sh + rladmin tune db db: resp3 enabled + ``` + + You can use the database name in place of `db:` in the preceding command. + + - [Update database configuration]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs#put-bdbs" >}}) REST API request: + + ```sh + PUT /v1/bdbs/ + { "resp3": true } + ``` + + ## Deactivate RESP3 for a database {#deactivate-resp3} + + To deactivate RESP3 support for a database: + +- [`rladmin tune db`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-db" >}}): + + ```sh + rladmin tune db db: resp3 disabled + ``` + + You can use the database name in place of `db:` in the preceding command. + +- [Update database configuration]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs#put-bdbs" >}}) REST API request: + + ```sh + PUT /v1/bdbs/ + { "resp3": false } + ``` + + When RESP3 is deactivated, connected clients that use RESP3 are disconnected from the database. + +{{}} +You cannot use sharded pub/sub if you deactivate RESP3 support. +{{}} + +## Change default RESP3 option + +The cluster-wide option `resp3_default` determines the default value of the `resp3` option, which enables or deactivates RESP3 for a database, upon upgrading a database to version 7.2. `resp3_default` is set to `enabled` by default. + +To change `resp3_default` to `disabled`, use one of the following methods: + +- Cluster Manager UI: + + 1. On the **Databases** screen, select {{< image filename="/images/rs/buttons/button-toggle-actions-vertical.png#no-click" alt="Toggle actions button" width="22px" class="inline" >}} to open a list of additional actions. + + 1. Select **Upgrade configuration**. + + 1. For **RESP3 support**, select **Disable**. + + 1. Click **Save**. 
+ +- [`rladmin tune cluster`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}) + + ```sh + rladmin tune cluster resp3_default disabled + ``` + +- [Update cluster policy]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "resp3_default": false } + ``` + +## Client prerequisites for Redis 7.2 upgrade + +The Redis clients [Go-Redis](https://redis.uptrace.dev/) version 9 and [Lettuce](https://redis.github.io/lettuce/) versions 6 and later use RESP3 by default. If you use either client to run Redis Stack commands, you should set the client's protocol version to RESP2 before upgrading your database to Redis version 7.2 to prevent potential application issues due to RESP3 breaking changes. + +### Go-Redis + +For applications using Go-Redis v9.0.5 or later, set the protocol version to RESP2: + +```go +client := redis.NewClient(&redis.Options{ + Addr: "", + Protocol: 2, // Pin the protocol version +}) +``` + +### Lettuce + +To set the protocol version to RESP2 with Lettuce v6 or later: + +```java +import io.lettuce.core.*; +import io.lettuce.core.api.*; +import io.lettuce.core.protocol.ProtocolVersion; + +// ... +RedisClient client = RedisClient.create(""); +client.setOptions(ClientOptions.builder() + .protocolVersion(ProtocolVersion.RESP2) // Pin the protocol version + .build()); +// ... +``` + +If you are using [LettuceMod](https://github.com/redis-developer/lettucemod/), you need to upgrade to [v3.6.0](https://github.com/redis-developer/lettucemod/releases/tag/v3.6.0). +--- +Title: Redis Enterprise compatibility with Redis Open Source +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Redis Enterprise compatibility with Redis Open Source. +hideListLinks: true +linkTitle: Redis Open Source compatibility +weight: $weight +tocEmbedHeaders: true +url: '/operate/rs/7.4/references/compatibility/' +--- +Both Redis Enterprise Software and [Redis Cloud]({{< relref "/operate/rc" >}}) are compatible with Redis Open Source. + +{{< embed-md "rc-rs-oss-compatibility.md" >}} + +## RESP compatibility + +Redis Enterprise Software and Redis Cloud support RESP2 and RESP3. See [RESP compatibility with Redis Enterprise]({{< relref "/operate/rs/7.4/references/compatibility/resp" >}}) for more information. + +## Compatibility with open source Redis Cluster API + +Redis Enterprise supports [Redis OSS Cluster API]({{< relref "/operate/rs/7.4/clusters/optimize/oss-cluster-api" >}}) if it is enabled for a database. For more information, see [Enable OSS Cluster API]({{< relref "/operate/rs/7.4/databases/configure/oss-cluster-api" >}}). +--- +Title: Resource usage metrics +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linkTitle: Resource usage +weight: $weight +url: '/operate/rs/7.4/references/metrics/resource-usage/' +--- + +## Connections + +Number of connections to the database. + +**Components measured**: Cluster, Node, and Database + +## CPU usage + +Percent of the node CPU used. + +**Components measured**: Cluster and Node + +### Main thread CPU usage + +Percent of the CPU used by the main thread. + +**Components measured**: Database and Shard + +### Fork CPU usage + +CPU usage of Redis child forks. + +**Components measured**: Database and Shard + +### Total CPU usage + +Percent usage of the CPU for all nodes. + +**Components measured**: Database + +## Free disk space + +Remaining unused disk space. 
+ +**Components measured**: Cluster and Node + +## Memory +### Used memory + +Total memory used by the database, including RAM, [Flash]({{< relref "/operate/rs/7.4/databases/auto-tiering" >}}) (if enabled), and [replication]({{< relref "/operate/rs/7.4/databases/durability-ha/replication" >}}) (if enabled). + +Used memory does not include: + +1. Fragmentation overhead - The ratio of memory seen by the operating system to memory allocated by Redis +2. Replication buffers at the primary nodes - Set to 10% of used memory and is between 64 MB and 2048 MB +3. Memory used by Lua scripts - Does not exceed 1 MB +4. Copy on Write (COW) operation that can be triggered by: + - A full replication process + - A database snapshot process + - AOF rewrite process + +Used memory is not measured during [shard migration]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}). + +**Components measured**: Database and Shard + +### Free RAM + +Available RAM for System use. + +**Components measured**: Cluster and Node + +### Memory limit + +Memory size limit of the database, enforced on the [used memory](#used-memory). + +**Components measured**: Database + +### Memory usage + +Percent of memory used by Redis out of the [memory limit](#memory-limit). + +**Components measured**: Database +## Traffic + +### Incoming traffic + +Total incoming traffic to the database in bytes/sec. + +All incoming traffic is not measured during [shard migration]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}). + +**Components measured**: Cluster, Node and Database + +#### Incoming traffic compressed + +Total incoming compressed traffic (in bytes/sec) per [Active-Active]({{< relref "/operate/rs/7.4/databases/active-active" >}}) replica database. + +#### Incoming traffic uncompressed + +Total incoming uncompressed traffic (in bytes/sec) per [Active-Active]({{< relref "/operate/rs/7.4/databases/active-active" >}}) replica database. + +### Outgoing traffic + +Total outgoing traffic from the database in bytes per second. + +Outgoing traffic is not measured during [shard migration]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}). + +**Components measured**: Cluster, Node and Database + + + + + + + + +--- +Title: Database operations metrics +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: null +linkTitle: Database operations +weight: $weight +url: '/operate/rs/7.4/references/metrics/database-operations/' +--- + +## Evicted objects/sec + +Number of objects evicted from the database per second. + +Objects are evicted from the database according to the [eviction policy]({{< relref "/operate/rs/7.4/databases/memory-performance/eviction-policy" >}}). + +Object information is not measured during [shard migration]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}). + +**Components measured**: Database and Shard + +## Expired objects/sec + +Number of expired objects per second. + +Object information is not measured during [shard migration]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}). + +**Components measured**: Database and Shard + +## Hit ratio + +Ratio of the number of operations on existing keys out of the total number of operations. + +**Components measured**: Database and Shard + +### Read misses/sec + +The number of [read operations](#readssec) per second on keys that do not exist. + +Read misses are not measured during [shard migration]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}). 
+ +**Components measured**: Database + +### Write misses/sec + +Number of [write operations](#writessec) per second on keys that do not exist. + +Write misses are not measured during [shard migration]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}). + +**Components measured**: Database and Shard + +## Latency + +The total amount of time between sending a Redis operation and receiving a response from the database. + +The graph shows average, minimum, maximum, and last latency values for all latency metrics. + +**Components measured**: Database + +### Reads latency + +[Latency](#latency) of [read operations](#readssec). + +**Components measured**: Database + +### Writes latency + +[Latency](#latency) per [write operation](#writessec). + +**Components measured**: Database + +### Other commands latency + +[Latency](#latency) of [other operations](#other-commandssec). + +**Components measured**: Database + +## Ops/sec + +Number of total operations per second, which includes [read operations](#readssec), [write operations](#writessec), and [other operations](#other-commandssec). + +**Components measured**: Cluster, Node, Database, and Shard + +### Reads/sec + +Number of total read operations per second. + +To find out which commands are read operations, run the following command with [`redis-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/redis-cli" >}}): + +```sh +ACL CAT read +``` + +**Components measured**: Database + +### Writes/sec + +Number of total write operations per second. + +To find out which commands are write operations, run the following command with [`redis-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/redis-cli" >}}): + +```sh +ACL CAT write +``` + +**Components measured**: Database + +#### Pending writes min + +Minimum number of write operations queued per [Active-Active]({{< relref "/operate/rs/7.4/databases/active-active" >}}) replica database. + +#### Pending writes max + +Maximum number of write operations queued per [Active-Active]({{< relref "/operate/rs/7.4/databases/active-active" >}}) replica database. + +### Other commands/sec + +Number of operations per second that are not [read operations](#readssec) or [write operations](#writessec). + +Examples of other operations include [PING]({{< relref "/commands/ping" >}}), [AUTH]({{< relref "/commands/auth" >}}, and [INFO]({{< relref "/commands/info" >}} + +**Components measured**: Database + +## Total keys + +Total number of keys in the dataset. + +Does not include replicated keys, even if [replication]({{< relref "/operate/rs/7.4/databases/durability-ha/replication" >}}) is enabled. + +Total keys is not measured during [shard migration]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}). + +**Components measured**: Database + + + + + + + + +--- +Title: Real-time metrics +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Documents the metrics that are tracked with Redis Enterprise Software. +hideListLinks: true +linkTitle: Metrics +weight: $weight +url: '/operate/rs/7.4/references/metrics/' +--- + +In the Redis Enterprise Cluster Manager UI, you can see real-time performance metrics for clusters, nodes, databases, and shards, and configure alerts that send notifications based on alert parameters. Select the **Metrics** tab to view the metrics for each component. For more information, see [Monitoring with metrics and alerts]({{< relref "/operate/rs/7.4/clusters/monitoring" >}}). 
+ +See the following topics for metrics definitions: +- [Database operations]({{< relref "/operate/rs/7.4/references/metrics/database-operations" >}}) for database metrics +- [Resource usage]({{< relref "/operate/rs/7.4/references/metrics/resource-usage" >}}) for resource and database usage metrics +- [Auto Tiering]({{< relref "/operate/rs/7.4/references/metrics/auto-tiering" >}}) for additional metrics for [Auto Tiering ]({{< relref "/operate/rs/7.4/databases/auto-tiering" >}}) databases + +## Prometheus metrics + +To collect and display metrics data from your databases and other cluster components, +you can connect your [Prometheus](https://prometheus.io/) and [Grafana](https://grafana.com/) server to your Redis Enterprise Software cluster. See [Metrics in Prometheus]({{< relref "/integrate/prometheus-with-redis-enterprise/prometheus-metrics-definitions" >}}) for a list of available metrics. + +We recommend you use Prometheus and Grafana to view metrics history and trends. + +See [Prometheus integration]({{< relref "/integrate/prometheus-with-redis-enterprise/" >}}) to learn how to connect Prometheus and Grafana to your Redis Enterprise database. + +## Limitations + +### Shard limit + +Metrics information is not shown for clusters with more than 128 shards. For large clusters, we recommend you use [Prometheus and Grafana]({{< relref "/integrate/prometheus-with-redis-enterprise/" >}}) to view metrics. + +### Metrics not shown during shard migration + +The following metrics are not measured during [shard migration]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}). If you view these metrics while resharding, the graph will be blank. + +- [Evicted objects/sec]({{< relref "/operate/rs/7.4/references/metrics/database-operations#evicted-objectssec" >}}) +- [Expired objects/sec]({{< relref "/operate/rs/7.4/references/metrics/database-operations#expired-objectssec" >}}) +- [Read misses/sec]({{< relref "/operate/rs/7.4/references/metrics/database-operations#read-missessec" >}}) +- [Write misses/sec]({{< relref "/operate/rs/7.4/references/metrics/database-operations#write-missessec" >}}) +- [Total keys]({{< relref "/operate/rs/7.4/references/metrics/database-operations#total-keys" >}}) +- [Incoming traffic]({{< relref "/operate/rs/7.4/references/metrics/resource-usage#incoming-traffic" >}}) +- [Outgoing traffic]({{< relref "/operate/rs/7.4/references/metrics/resource-usage#outgoing-traffic" >}}) +- [Used memory]({{< relref "/operate/rs/7.4/references/metrics/resource-usage#used-memory" >}}) +--- +Title: Auto Tiering Metrics +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linkTitle: Auto Tiering +weight: $weight +url: '/operate/rs/7.4/references/metrics/auto-tiering/' +--- + +These metrics are additional metrics for [Auto Tiering ]({{< relref "/operate/rs/7.4/databases/auto-tiering" >}}) databases. + +#### % Values in RAM + +Percent of keys whose values are stored in RAM. + +A low percentage alert means most of the RAM is used for holding keys and not much RAM is available for values. This can be due to a high number of small keys or a few large keys. Inserting more keys might cause the database to run out of memory. + +If the percent of values in RAM is low for a subset of the database's shards, it might also indicate an unbalanced database. + +**Components measured**: Database and Shard + +#### Values in flash + +Number of keys with values stored in flash, not including [replication]({{< relref "/operate/rs/7.4/databases/durability-ha/replication" >}}). 
+ +**Components measured**: Database and Shard + +#### Values in RAM + +Number of keys with values stored in RAM, not including [replication]({{< relref "/operate/rs/7.4/databases/durability-ha/replication" >}}). + +**Components measured**: Database and Shard + +#### Flash key-value operations + +Number of operations on flash key values (read + write + del) per second. + +**Components measured**: Node + +#### Flash bytes/sec + +Number of total bytes read and written per second on flash memory. + +**Components measured**: Cluster, Node, Database, and Shard + +#### Flash I/O operations/sec + +Number of input/output operations per second on the flash storage device. + +**Components measured**: Cluster and Node + +#### RAM:Flash access ratio + +Ratio between logical Redis key value operations and actual flash key value operations. + +**Components measured**: Database and Shard + +#### RAM hit ratio + +Ratio of requests processed directly from RAM to total number of requests processed. + +**Components measured**: Database and Shard + +#### Used flash + +Total amount of memory used to store values in flash. + +**Components measured**: Database and Shard + +#### Free flash + +Amount of free space on flash storage. + +**Components measured**: Cluster and Node + +#### Flash fragmentation + +Ratio between the used logical flash memory and the physical flash memory that is used. + +**Components measured**: Database and Shard + +#### Used RAM + +Total size of data stored in RAM, including keys, values, overheads, and [replication]({{< relref "/operate/rs/7.4/databases/durability-ha/replication" >}}) (if enabled). + +**Components measured**: Database and Shard + +#### RAM dataset overhead + +Percentage of the [RAM limit](#ram-limit) that is used for anything other than values, such as key names, dictionaries, and other overheads. + +**Components measured**: Database and Shard + +#### RAM limit + +Maximum amount of RAM that can be used in bytes. + +**Components measured**: Database + +#### RAM usage + +Percentage of the [RAM limit](#ram-limit) used. + +**Components measured**: Database + +#### Storage engine usage + +Total count of shards used, filtered by the sorage engine (Speedb / RockSB) per given database. + +**Components measured**: Database, Shards + + + +#### Calculated metrics + +These RoF statistics can be calculated from other metrics. + +- RoF average key size with overhead + + ([ram_dataset_overhead](#ram-dataset-overhead) * [used_ram](#used-ram)) + / ([total_keys]({{< relref "/operate/rs/7.4/references/metrics/database-operations#total-keys" >}}) * 2) + +- RoF average value size in RAM + + ((1 - [ram_dataset_overhead](#ram-dataset-overhead)) * [used_ram](#used-ram)) / ([values_in_ram](#values-in-ram) * 2) + +- RoF average value size in flash + + [used_flash](#used-flash) / [values_in_flash](#values-in-flash) +--- +Title: Supported platforms +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis Enterprise Software is supported on several operating systems, + cloud environments, and virtual environments. +linkTitle: Supported platforms +weight: 30 +tocEmbedHeaders: true +url: '/operate/rs/7.4/references/supported-platforms/' +--- +{{}} +--- +Title: Supported upgrade paths for Redis Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: Supported paths to upgrade a Redis Software cluster. 
+linkTitle: Upgrade paths +weight: 30 +tocEmbedHeaders: true +url: '/operate/rs/7.4/references/upgrade-paths/' +--- + +{{}} + +For detailed upgrade instructions, see [Upgrade a Redis Enterprise Software cluster]({{}}). + +See the [Redis Enterprise Software product lifecycle]({{}}) for more information about release numbers and the end-of-life schedule. +--- +Title: Connecting to Redis +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +draft: true +weight: null +url: '/operate/rs/7.4/references/connecting-to-redis/' +--- +To establish a connection to a Redis database, you'll need the following information: + +- The hostname or IP address of the Redis server +- The port number that the Redis server is listening at +- The database password (when configured with an authentication password which is **strongly recommended**) +- The SSL certificates (when configured with SSL authentication and encryption - see [this article](/kb/read-more-ssl) for more information) + +The combination of `hostname:port` is commonly referred to as the "endpoint." This information is readily obtainable from your Redis Enterprise Cluster and Redis Cloud admin consoles. Unless otherwise specified, our Redis databases are accessible via a single managed endpoint to ensure high availability. + +You can connect to a Redis database using a wide variety of tools and libraries depending on your needs. Here's a short list: + +- Use one of the many [clients for Redis](redis.io/clients) - see below for client-specific information and examples +- Code your own Redis client based on the [Redis Serialization Protocol (RESP)](http://redis.io/topics/protocol) +- Make friends with Redis' own command line tool - `redis-cli` - to quickly connect and manage any Redis database (**tip:** you can also use `telnet` instead) +- Use tools that provide a [GUI for Redis](/blog/so-youre-looking-for-the-redis-gui) + +## Basic connection troubleshooting + +Connecting to a remote server can be challenging. Here’s a quick checklist for common pitfalls: + +- Verify that the connection information was copy-pasted correctly <- more than 90% of connectivity issues are due to a single missing character. +- If you're using Redis in the cloud or not inside of a LAN, consider adjusting your client's timeout settings +- Try disabling any security measures that your database may have been set up with (e.g. Source IP/Subnet lists, Security Groups, SSL, etc...). +- Try using a command line tool to connect to the database from your server - it is possible that your host and/port are blocked by the network. +- If you've managed to open a connection, try sending the `INFO` command and act on its reply or error message. +- Redis Enterprise Software Redis databases only support connecting to the default database (0) and block some administrative commands. To learn more, see: + - Redis Enterprise Cluster: [REC compatibility](/redis-enterprise-documentation/rlec-compatibility) + - Redis Cloud FAQ: [Are you fully compatible with Redis Open Source](/faqs#are-you-fully-compatible-with-open-source-redis) + +If you encounter any difficulties or have questions please feel free to [contact our help desk](mailto:support@redislabs.com). 
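Before reaching out, a minimal sketch of the checklist above can often pinpoint the problem from a terminal. The hostname, port, and password below are placeholders, not real connection details; substitute the endpoint shown in your admin console.

```sh
# Placeholder endpoint and credentials -- replace with your own values.
HOST=redis-12345.example.redislabs.com
PORT=12345
PASSWORD=your-password

# 1. Confirm the network path and credentials; a healthy database replies with PONG.
redis-cli -h "$HOST" -p "$PORT" -a "$PASSWORD" PING

# 2. If PING works, ask the server to describe itself; errors here usually point
#    at blocked commands or security settings rather than networking.
redis-cli -h "$HOST" -p "$PORT" -a "$PASSWORD" INFO server

# 3. If redis-cli is unavailable, telnet at least proves the port is reachable.
telnet "$HOST" "$PORT"
```

If telnet connects but the `PING` step fails, the cause is usually authentication or TLS configuration rather than the network itself.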
+--- +Title: Clustering Redis +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +draft: true +weight: null +url: '/operate/rs/7.4/references/clustering-redis/' +--- +Joining multiple Redis servers into a Redis cluster is a challenging task, especially because Redis supports complex data structures and commands required by modern web applications, in high-throughput and low latency (sub-millisecond) conditions. Some of those challenges are: + +- Performing union and intersection operations over List/Set/Sorted Set + data types across multiple shards and nodes +- Maintaining consistency across multi-shard/multi-node architecture, + while running (a) a SORT command over a List of Hash keys; or (b) a + Redis transaction that includes multiple keys; or (c) a Lua script + with multiple keys +- Creating a simple abstraction layer that hides the complex cluster + architecture from the user’s application, without code modifications + and while supporting infinite scalability +- Maintaining a reliable and consistent infrastructure in a cluster + configuration + +There are several solutions to clustering Redis, most notable of which is the [Redis Open Source cluster](http://redis.io/topics/cluster-spec). + +Redis Enterprise Software and Redis Cloud were built from the ground up to provide a Redis cluster of any size while supporting all Redis commands. Your dataset is distributed across multiple shards in multiple nodes of the Redis cluster and is constantly monitored to ensure optimal performance. When needed, more shards and nodes can be added to your dataset so it can scale continuously and limitlessly. + +Redis Enterprise clusters provide a single endpoint to connect to, and do not require any code changes or special configuration from the application’s perspective. For more information on setting up and using Redis Enterprise clusters, see [Database clustering]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering/" >}}). +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Explains terms used in Redis Enterprise Software and its docs. +linkTitle: Terminology +title: Terminology in Redis Enterprise Software +weight: $weight +url: '/operate/rs/7.4/references/terminology/' +--- +Here are explanations of some of the terms used in Redis Enterprise Software. + +## Node + +A _node_ is a physical machine, virtual machine, container or cloud +instance on which the RS installation package was installed and the +setup process was run in order to make the machine part of the cluster. + +Each node is a container for running multiple Redis +instances, referred to as "shards". + +The recommended configuration for a production cluster is an uneven +number of nodes, with a minimum of three. Note that in some +configurations, certain functionalities might be blocked. For example, +if a cluster has only one node you cannot enable database replication, +which helps to achieve high availability. + +A node is made up of several components, as detailed below, and works +together with the other cluster nodes. + +## Redis instance (shard) + +As indicated above, each node serves as a container for hosting multiple +database instances, referred to as "shards". + +Redis Enterprise Software supports various database configurations: + +- **Standard Redis database** - A single Redis shard with no + replication or clustering. 
+- **Highly available Redis database** - Every database master shard + has a replica shard, so that if the master shard fails the + cluster can automatically fail over to the replica with minimal impact. Master and replica shards are always placed on separate + nodes to ensure high availability. +- **Clustered Redis database** - The data stored in the database is + split across several shards. The number of shards can be defined by + the user. Various performance optimization algorithms define where + shards are placed within the cluster. During the lifetime of the + cluster, these algorithms might migrate a shard between nodes. +- **Clustered and highly available Redis database** - Each master shard + in the clustered database has a replica shard, enabling failover if + the master shard fails. + +## Proxy + +Each node includes one zero-latency, multi-threaded proxy +(written in low-level C) that masks the underlying system complexity. The +proxy oversees forwarding Redis operations to the database shards on +behalf of a Redis client. + +The proxy simplifies the cluster operation, from the application or +Redis client point of view, by enabling the use of a standard Redis +client. The zero-latency proxy is built over a cut-through architecture +and employs various optimization methods. For example, to help ensure +high-throughput and low-latency performance, the proxy might use +instruction pipelining even if not instructed to do so by the client. + +## Database endpoint + +Each database is served by a database endpoint that is part of and +managed by the proxies. The endpoint oversees forwarding Redis +operations to specific database shards. + +If the master shard fails and the replica shard is promoted to master, the +master endpoint is updated to point to the new master shard. + +If the master endpoint fails, the replica endpoint is promoted to be the +new master endpoint and is updated to point to the master shard. + +Similarly, if both the master shard and the master endpoint fail, then +both the replica shard and the replica endpoint are promoted to be the new +master shard and master endpoint. + +Shards and their endpoints do not +have to reside within the same node in the cluster. + +In the case of a clustered database with multiple database shards, only +one master endpoint acts as the master endpoint for all master shards, +forwarding Redis operations to all shards as needed. + +## Cluster manager + +The cluster manager oversees all node management-related tasks, and the +cluster manager in the master node looks after all the cluster related +tasks. + +The cluster manager is designed in a way that is totally decoupled from +the Redis operation. This enables RS to react in a much faster and +accurate manner to failure events, so that, for example, a node failure +event triggers mass failover operations of all the master endpoints +and master shards that are hosted on the failed node. + +In addition, this architecture guarantees that each Redis shard is only +dealing with processing Redis commands in a shared-nothing architecture, +thus maintaining the inherent high-throughput and low-latency of each +Redis process. Lastly, this architecture guarantees that any change in +the cluster manager itself does not affect the Redis operation. 
+
+Some of the primary functionalities of the cluster manager include:
+
+- Deciding where shards are created
+- Deciding when shards are migrated and to where
+- Monitoring database size
+- Monitoring databases and endpoints across all nodes
+- Running the database resharding process
+- Running the database provisioning and de-provisioning processes
+- Gathering operational statistics
+- Enforcing license and subscription limitations
+
+---
+Title: redis-cli
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+- rc
+description: Run Redis commands.
+hideListLinks: true
+linkTitle: redis-cli (run Redis commands)
+toc: 'true'
+weight: $weight
+url: '/operate/rs/7.4/references/cli-utilities/redis-cli/'
+---
+
+The `redis-cli` command-line utility lets you interact with a Redis database. With `redis-cli`, you can run [Redis commands]({{< relref "/commands" >}}) directly from the command-line terminal or with [interactive mode](#interactive-mode).
+
+If you want to run Redis commands without `redis-cli`, you can [connect to a database with Redis Insight]({{< relref "/develop/tools/insight/" >}}) and use the built-in [CLI]({{< relref "/develop/tools/insight/" >}}) prompt instead.
+
+## Install `redis-cli`
+
+When you install Redis Enterprise Software or Redis Open Source, it also installs the `redis-cli` command-line utility.
+
+To learn how to install Redis and `redis-cli`, see the following installation guides:
+
+- [Redis Open Source]({{< relref "/operate/oss_and_stack/install/install-stack/" >}})
+
+- [Redis Enterprise Software]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}})
+
+- [Redis Enterprise Software with Docker]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/docker-quickstart" >}})
+
+## Connect to a database
+
+To run Redis commands with `redis-cli`, you need to connect to your Redis database.
+
+You can find endpoint and port details in the **Databases** list or the database’s **Configuration** screen.
+
+### Connect remotely
+
+If you have `redis-cli` installed on your local machine, you can use it to connect to a remote Redis database. You will need to provide the database's connection details, such as the hostname or IP address, port, and password.
+
+```sh
+$ redis-cli -h <host> -p <port> -a <password>
+```
+
+You can also provide the password with the `REDISCLI_AUTH` environment variable instead of the `-a` option:
+
+```sh
+$ export REDISCLI_AUTH=<password>
+$ redis-cli -h <host> -p <port>
+```
+
+### Connect over TLS
+
+To connect to a Redis Enterprise Software or Redis Cloud database over TLS:
+
+1. Download or copy the Redis Enterprise server (or proxy) certificates.
+
+   - For Redis Cloud, see [Download certificates]({{< relref "/operate/rc/security/database-security/tls-ssl#download-certificates" >}}) for detailed instructions on how to download the server certificates (`redis_ca.pem`) from the [Redis Cloud console](https://cloud.redis.io/).
+
+   - For Redis Enterprise Software, copy the proxy certificate from the Cluster Manager UI (**Cluster > Security > Certificates > Server authentication**) or from a cluster node (`/etc/opt/redislabs/proxy_cert.pem`).
+
+1. Copy the certificate to each client machine.
+
+1. If your database doesn't require client authentication, provide the Redis Enterprise server certificate (`redis_ca.pem` for Cloud or `proxy_cert.pem` for Software) when you connect:
+
+    ```sh
+    redis-cli -h <endpoint> -p <port> --tls --cacert <redis_ca or proxy_cert>.pem
+    ```
+
+1. If your database requires client authentication, provide your client's private and public keys along with the Redis Enterprise server certificate (`redis_ca.pem` for Cloud or `proxy_cert.pem` for Software) when you connect:
+
+    ```sh
+    redis-cli -h <endpoint> -p <port> --tls --cacert <redis_ca or proxy_cert>.pem \
+        --cert redis_user.crt --key redis_user_private.key
+    ```
+
+### Connect with Docker
+
+If your Redis database runs in a Docker container, you can use `docker exec` to run `redis-cli` commands:
+
+```sh
+$ docker exec -it <container name> redis-cli -p <port>
+```
+
+## Basic use
+
+You can run `redis-cli` commands directly from the command-line terminal:
+
+```sh
+$ redis-cli -h <endpoint> -p <port>
+```
+
+For example, you can use `redis-cli` to test your database connection and store a new Redis string in the database:
+
+```sh
+$ redis-cli -h <endpoint> -p 12000 PING
+PONG
+$ redis-cli -h <endpoint> -p 12000 SET mykey "Hello world"
+OK
+$ redis-cli -h <endpoint> -p 12000 GET mykey
+"Hello world"
+```
+
+For more information, see [Command line usage]({{< relref "/develop/tools/cli" >}}#command-line-usage).
+
+## Interactive mode
+
+In `redis-cli` [interactive mode]({{< relref "/develop/tools/cli" >}}#interactive-mode), you can:
+
+- Run any `redis-cli` command without prefacing it with `redis-cli`.
+- Enter `?` for more information about how to use the `HELP` command and [set `redis-cli` preferences]({{< relref "/develop/tools/cli" >}}#preferences).
+- Enter [`HELP`]({{< relref "/develop/tools/cli" >}}#showing-help-about-redis-commands) followed by the name of a command for more information about the command and its options.
+- Press the `Tab` key for command completion.
+- Enter `exit` or `quit` or press `Control+D` to exit interactive mode and return to the terminal prompt.
+
+This example shows how to start interactive mode and run Redis commands:
+
+```sh
+$ redis-cli -p 12000
+127.0.0.1:12000> PING
+PONG
+127.0.0.1:12000> SET mykey "Hello world"
+OK
+127.0.0.1:12000> GET mykey
+"Hello world"
+```
+
+## Examples
+
+### Check slowlog
+
+Run [`slowlog get`]({{< relref "/commands/slowlog-get" >}}) for a list of recent slow commands:
+
+```sh
+redis-cli -h <endpoint> -p <port> slowlog get
+```
+
+### Scan for big keys
+
+Scan the database for big keys:
+
+```sh
+redis-cli -h <endpoint> -p <port> --bigkeys
+```
+
+See [Scanning for big keys]({{< relref "/develop/tools/cli" >}}#scanning-for-big-keys) for more information.
+
+## More info
+
+- [Redis CLI documentation]({{< relref "/develop/tools/cli" >}})
+- [Redis commands reference]({{< relref "/commands/" >}})
+---
+Title: rlcheck
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: Verify nodes.
+hideListLinks: true
+linkTitle: rlcheck (verify nodes)
+weight: $weight
+url: '/operate/rs/7.4/references/cli-utilities/rlcheck/'
+---
+The `rlcheck` utility runs various [tests](#tests) to check the health of a Redis Enterprise Software node and reports any discovered issues.
+You can use this utility to confirm a successful installation or to verify that the node is functioning properly.
+
+To resolve issues reported by `rlcheck`, [contact Redis support](https://redis.com/company/support/).
+
+## Run rlcheck
+
+You can run `rlcheck` from the node host's command line.
+The output of `rlcheck` shows information specific to the host you run it on.
+
+To run `rlcheck` tests:
+
+1. Sign in to the Redis Enterprise Software host with an account that is a member of the **redislabs** operating system group.
+
+1. 
Run: + + ```sh + rlcheck + ``` + +## Options + +You can run `rlcheck` with the following options: + +| Option | Description | +|--------|-------------| +| `--suppress-tests TEXT` | Skip the specified, comma-delimited list of tests. See [Tests](#tests) for the list of tests and descriptions. | +| `--retry-delay INTEGER` | Delay between retries, in seconds. | +| `--retry INTEGER` | Number of retries after a failure. | +| `--file-path TEXT` | Custom path to `rlcheck.log`. | +| `--continue-on-error` | Continue to run all tests even if a test fails, then show all errors when complete. | +| `--help` | Return the list of `rlcheck` options. | + +## Tests + +`rlcheck` runs the following tests by default: + +| Test name | Description | +|-----------|-------------| +| verify_owner_and_group | Verifies the owner and group for Redis Enterprise Software files are correct. | +| verify_bootstrap_status | Verifies the local node's bootstrap process completed without errors. | +| verify_services | Verifies all Redis Enterprise Software services are running. | +| verify_port_range | Verifies the [`ip_local_port_range`](https://www.kernel.org/doc/html/latest/networking/ip-sysctl.html) doesn't conflict with the ports Redis Enterprise might assign to shards. | +| verify_pidfiles | Verifies all active local shards have PID files. | +| verify_capabilities | Verifies all binaries have the proper capability bits. | +| verify_existing_sockets | Verifies sockets exist for all processes that require them. | +| verify_host_settings | Verifies the following:
• Linux `overcommit_memory` setting is 1.
• `transparent_hugepage` is disabled.
• Socket maximum connections setting `somaxconn` is 1024. | +| verify_tcp_connectivity | Verifies this node can connect to all other alive nodes. | +| verify_encrypted_gossip | Verifies gossip communication is encrypted. | +--- +Title: rladmin tune +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configures parameters for databases, proxies, nodes, and clusters. +headerRange: '[1-2]' +linkTitle: tune +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/tune/' +--- + +Configures parameters for databases, proxies, nodes, and clusters. + +## `tune cluster` + +Configures cluster parameters. + +``` sh +rladmin tune cluster + [ repl_diskless { enabled | disabled } ] + [ redis_provision_node_threshold ] + [ redis_migrate_node_threshold ] + [ redis_provision_node_threshold_percent ] + [ redis_migrate_node_threshold_percent ] + [ max_simultaneous_backups ] + [ failure_detection_sensitivity { high | low } ] + [ watchdog_profile { cloud | local-network } ] + [ slave_ha { enabled | disabled } ] + [ slave_ha_grace_period ] + [ slave_ha_cooldown_period ] + [ slave_ha_bdb_cooldown_period ] + [ max_saved_events_per_type ] + [ parallel_shards_upgrade ] + [ default_concurrent_restore_actions ] + [ show_internals { enabled | disabled } ] + [ expose_hostnames_for_all_suffixes { enabled | disabled } ] + [ redis_upgrade_policy { latest | major } ] + [ default_redis_version ] + [ default_non_sharded_proxy_policy { single | all-master-shards | all-nodes } ] + [ default_sharded_proxy_policy { single | all-master-shards | all-nodes } ] + [ default_shards_placement { dense | sparse } ] + [ data_internode_encryption { enabled | disabled } ] + [ db_conns_auditing { enabled | disabled } ] + [ acl_pubsub_default { resetchannels | allchannels } ] + [ resp3_default { enabled | disabled } ] + [ automatic_node_offload { enabled | disabled } ] +``` + +### Parameters + +| Parameters | Type/Value | Description | +|----------------------------------------|-----------------------------------|------------------------------------------------------------------------------------------------------------------------------| +| acl_pubsub_default | `resetchannels`
`allchannels` | Default pub/sub ACL rule for all databases in the cluster:
• `resetchannels` blocks access to all channels (restrictive)
• `allchannels` allows access to all channels (permissive) |
+| automatic_node_offload | `enabled`
`disabled` | Define whether automatic node offload migration will take place | +| data_internode_encryption | `enabled`
`disabled` | Activates or deactivates [internode encryption]({{< relref "/operate/rs/7.4/security/encryption/internode-encryption" >}}) for new databases | +| db_conns_auditing | `enabled`
`disabled` | Activates or deactivates [connection auditing]({{< relref "/operate/rs/7.4/security/audit-events" >}}) by default for new databases of a cluster | +| default_concurrent_restore_actions | integer
`all` | Default number of concurrent actions when restoring a node from a snapshot (positive integer or "all") | +| default_non_sharded_proxy_policy | `single`

`all-master-shards`

`all-nodes` | Default [proxy policy]({{< relref "/operate/rs/7.4/databases/configure/proxy-policy" >}}) for newly created non-sharded databases' endpoints | +| default_redis_version | version number | The default Redis database compatibility version used to create new databases.

The value parameter should be a version number in the form of "x.y" where _x_ represents the major version number and _y_ represents the minor version number. The final value corresponds to the desired version of Redis.

You cannot set _default_redis_version_ to a value higher than that supported by the current _redis_upgrade_policy_ value. | +| default_sharded_proxy_policy | `single`

`all-master-shards`

`all-nodes` | Default [proxy policy]({{< relref "/operate/rs/7.4/databases/configure/proxy-policy" >}}) for newly created sharded databases' endpoints | +| default_shards_placement | `dense`
`sparse` | New databases place shards according to the default [shard placement policy]({{< relref "/operate/rs/7.4/databases/memory-performance/shard-placement-policy" >}}) | +| expose_hostnames_for_all_suffixes | `enabled`
`disabled` | Exposes hostnames for all DNS suffixes | +| failure_detection_sensitivity | `high`
`low` | Predefined thresholds and timeouts for failure detection (previously known as `watchdog_profile`)
• `high` (previously `local-network`) – high failure detection sensitivity, lower thresholds, faster failure detection and failover
• `low` (previously `cloud`) – low failure detection sensitivity, higher tolerance for latency variance (also called network jitter) | +| login_lockout_counter_reset_after | time in seconds | Time after failed login attempt before the counter resets to 0 | +| login_lockout_duration | time in seconds | Time a locked account remains locked ( "0" means only an admin can unlock the account) | +| login_lockout_threshold | integer | Number of failed sign-in attempts to trigger locking a user account ("0" means never lock the account) | +| max_saved_events_per_type | integer | Maximum number of events each type saved in CCS per object type | +| max_simultaneous_backups | integer (default: 4) | Number of database backups allowed to run at the same time. Combines with `max_redis_forks` (set by [`tune node`](#tune-node)) to determine the number of shard backups allowed to run simultaneously. | +| parallel_shards_upgrade | integer
`all` | Number of shards upgraded in parallel during DB upgrade (positive integer or "all") | +| redis_migrate_node_threshold | size in MB | Memory (in MBs by default or can be specified) needed to migrate a database between nodes | +| redis_migrate_node_threshold_percent | percentage | Memory (in percentage) needed to migrate a database between nodes | +| redis_provision_node_threshold | size in MB | Memory (in MBs by default or can be specified) needed to provision a new database | +| redis_provision_node_threshold_percent | percentage | Memory (in percentage) needed to provision a new database | +| redis_upgrade_policy | `latest`
`major` | When you upgrade or create a new Redis database, this policy determines which version of Redis database compatibility is used.

Supported values are:
  • `latest`, which applies the most recent Redis compatibility update (_effective default prior to v6.2.4_)

  • `major`, which applies the most recent major release compatibility update (_default as of v6.2.4_).
| +| repl_diskless | `enabled`
`disabled` | Activates or deactivates diskless replication (can be overridden per database) | +| resp3_default | `enabled`
`disabled` | Determines the default value of the `resp3` option upon upgrading a database to version 7.2 (defaults to `enabled`) | +| show_internals | `enabled`
`disabled` | Controls the visibility of internal databases that are only used for the cluster's management | +| slave_ha | `enabled`
`disabled` | Activates or deactivates [replica high availability]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}) in the cluster
(enabled by default; use [`rladmin tune db`](#tune-db) to change `slave_ha` for a specific database)

Deprecated as of Redis Enterprise Software v7.2.4. | +| slave_ha_bdb_cooldown_period | time in seconds (default: 7200) | Time (in seconds) a database must wait after its shards are relocated by [replica high availability]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}) before it can go through another shard migration if another node fails (default is 2 hours) | +| slave_ha_cooldown_period | time in seconds (default: 3600) | Time (in seconds) [replica high availability]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}) must wait after relocating shards due to node failure before performing another shard migration for any database in the cluster (default is 1 hour) | +| slave_ha_grace_period | time in seconds (default: 600) | Time (in seconds) between when a node fails and when [replica high availability]({{< relref "/operate/rs/7.4/databases/configure/replica-ha" >}}) starts relocating shards to another node | +| watchdog_profile | `cloud`
`local-network` | Watchdog profiles with preconfigured thresholds and timeouts (deprecated as of Redis Enterprise Software v6.4.2-69; use `failure_detection_sensitivity` instead)
• `cloud` is suitable for common cloud environments and has a higher tolerance for latency variance (also called network jitter).
• `local-network` is suitable for dedicated LANs and has better failure detection and failover times. | + +### Returns + +Returns `Finished successfully` if the cluster configuration was changed. Otherwise, it returns an error. + +Use [`rladmin info cluster`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/info#info-cluster" >}}) to verify the cluster configuration was changed. + +### Example + +``` sh +$ rladmin tune cluster slave_ha enabled +Finished successfully +$ rladmin info cluster | grep slave_ha + slave_ha: enabled +``` + +## `tune db` + +Configures database parameters. + +``` sh +rladmin tune db { db: | } + [ slave_buffer ] + [ client_buffer ] + [ repl_backlog ] + [ crdt_repl_backlog ] + [ repl_timeout ] + [ repl_diskless { enabled | disabled | default } ] + [ master_persistence { enabled | disabled } ] + [ maxclients ] + [ schedpolicy { cmp | mru | spread | mnp } ] + [ max_shard_pipeline ] + [ conns ] + [ conns_type ] + [ max_client_pipeline ] + [ max_connections ] + [ max_aof_file_size ] + [ max_aof_load_time ] + [ oss_cluster { enabled | disabled } ] + [ oss_cluster_api_preferred_ip_type ] + [ slave_ha { enabled | disabled } ] + [ slave_ha_priority ] + [ skip_import_analyze { enabled | disabled } ] + [ mkms { enabled | disabled } ] + [ continue_on_error ] + [ gradual_src_mode { enabled | disabled } ] + [ gradual_sync_mode { enabled | disabled | auto } ] + [ gradual_sync_max_shards_per_source ] + [ module_name ] [ module_config_params ] + [ crdt_xadd_id_uniqueness_mode { liberal | semi-strict | strict } ] + [ metrics_export_all { enabled | disabled } ] + [ syncer_mode { distributed | centralized }] + [ syncer_monitoring { enabled | disabled } ] + [ mtls_allow_weak_hashing { enabled | disabled } ] + [ mtls_allow_outdated_cert { enabled | disabled } ] + [ data_internode_encryption { enabled | disabled } ] + [ db_conns_auditing { enabled | disabled } ] + [ resp3 { enabled | disabled } ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|--------------------------------------|----------------------------------|---------------------------------------------------------------------------------------------------------------------------------------| +| db:id | integer | ID of the specified database | +| name | string | Name of the specified database | +| client_buffer | value in MB hard:soft:time | Redis client output buffer limits | +| conns | integer | Size of internal connection pool, specified per-thread or per-shard depending on conns_type | +| conns_type | `per-thread`
`per-shard` | Specifies connection pool size as either per-thread or per-shard | +| continue_on_error | | Flag that skips tuning shards that can't be reached | +| crdt_repl_backlog | value in MB
`auto` | Size of the Active-Active replication buffer | +| crdt_xadd_id_uniqueness_mode | `liberal`
`semi-strict`
`strict` | XADD's behavior in an Active-Active database, defined as liberal, semi-strict, or strict (see descriptions below) | +| data_internode_encryption | `enabled`
`disabled` | Activates or deactivates [internode encryption]({{< relref "/operate/rs/7.4/security/encryption/internode-encryption" >}}) for the database | +| db_conns_auditing | `enabled`
`disabled` | Activates or deactivates database [connection auditing]({{< relref "/operate/rs/7.4/security/audit-events" >}}) for a database | +| gradual_src_mode | `enabled`
`disabled` | Activates or deactivates gradual sync of sources | +| gradual_sync_max_shards_per_source | integer | Number of shards per sync source that can be replicated in parallel (positive integer) | +| gradual_sync_mode | `enabled`
`disabled`
`auto` | Activates, deactivates, or automatically determines gradual sync of source shards | +| master_persistence | `enabled`
`disabled` | If enabled, persists the primary shard in addition to replica shards in a replicated and persistent database. | +| max_aof_file_size | size in MB | Maximum size (in MB, if not specified) of [AoF]({{< relref "/glossary/_index.md#letter-a" >}}) file (minimum value is 10 GB) | +| max_aof_load_time | time in seconds | Time limit in seconds to load a shard from an append-only file (AOF). If exceeded, an AOF rewrite is initiated to decrease future load time.
Minimum: 2700 seconds (45 minutes)
Default: 3600 seconds (1 hour) | +| max_client_pipeline | integer | Maximum commands in the proxy's pipeline per client connection (max value is 2047, default value is 200) | +| max_connections | integer | Maximum client connections to the database's endpoint (default value is 0, which is unlimited) | +| max_shard_pipeline | integer | Maximum commands in the proxy's pipeline per shard connection (default value is 200) | +| maxclients | integer | Controls the maximum client connections between the proxy and shards (default value is 10000) | +| metrics_export_all | `enabled`
`disabled` | Activates the exporter to expose all shard metrics | +| mkms | `enabled`
`disabled` | Activates multi-key multi-slot commands | +| module_config_params | string | Configures module arguments at runtime. Enclose `module_config_params` within quotation marks. | +| module_name | `search`
`ReJSON`
`graph`
`timeseries`
`bf`
`rg` | The module to configure with `module_config_params` | +| mtls_allow_outdated_cert | `enabled`
`disabled` | Activates outdated certificates in mTLS connections | +| mtls_allow_weak_hashing | `enabled`
`disabled` | Activates weak hashing (less than 2048 bits) in mTLS connections | +| oss_cluster | `enabled`
`disabled` | Activates OSS cluster API | +| oss_cluster_api_preferred_ip_type | `internal`
`external` | IP type for the endpoint and database in the OSS cluster API (default is internal) | +| repl_backlog | size in MB
`auto` | Size of the replication buffer | +| repl_diskless | `enabled`
`disabled`
`default` | Activates or deactivates diskless replication (defaults to the cluster setting) | +| repl_timeout | time in seconds | Replication timeout (in seconds) | +| resp3 | `enabled`
`disabled` | Enables or deactivates RESP3 support (defaults to `enabled`) | +| schedpolicy | `cmp`
`mru`
`spread`
`mnp` | Controls how server-side connections are used when forwarding traffic to shards | +| skip_import_analyze | `enabled`
`disabled` | Skips the analyzing step when importing a database | +| slave_buffer | `auto`
value in MB
hard:soft:time | Redis replica output buffer limits
• `auto`: dynamically adjusts the buffer limit based on the shard’s current used memory
• value in MB: sets the buffer limit in MB
• hard:soft:time: sets the hard limit (maximum buffer size in MB), soft limit in MB, and the time in seconds that the soft limit can be exceeded | +| slave_ha | `enabled`
`disabled` | Activates or deactivates replica high availability (defaults to the cluster setting) | +| slave_ha_priority | integer | Priority of the database in the replica high-availability mechanism | +| syncer_mode | `distributed`
`centralized` | Configures syncer to run in distributed or centralized mode. For distributed syncer, the DMC policy must be all-nodes or all-master-nodes |
+| syncer_monitoring | `enabled`
`disabled` | Activates syncer monitoring | + +| XADD behavior mode | Description | +| - | - | +| liberal | XADD succeeds with any valid ID (not recommended, allows duplicate IDs) | +| semi-strict | Allows a full ID. Partial IDs are completed with the unique database instance ID (not recommended, allows duplicate IDs). | +| strict | XADD fails if a full ID is given. Partial IDs are completed using the unique database instance ID. | + +### Returns + +Returns `Finished successfully` if the database configuration was changed. Otherwise, it returns an error. + +Use [`rladmin info db`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/info#info-db" >}}) to verify the database configuration was changed. + +### Example + +``` sh +$ rladmin tune db db:4 repl_timeout 300 +Tuning database: o +Finished successfully +$ rladmin info db db:4 | grep repl_timeout + repl_timeout: 300 seconds +``` + +## `tune node` + +Configures node parameters. + +``` sh +tune node { | all } + [ max_listeners ] + [ max_redis_forks ] + [ max_redis_servers ] + [ max_slave_full_syncs ] + [ quorum_only { enabled | disabled } ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|----------------------|------------|----------------------------------------------------------------------------------------------------------------------------------| +| id | integer | ID of the specified node | +| all | | Configures settings for all nodes | +| max_listeners | integer | Maximum number of endpoints that may be bound to the node | +| max_redis_forks | integer | Maximum number of background processes forked from shards that may exist on the node at any given time | +| max_redis_servers | integer | Maximum number of shards allowed to reside on the node | +| max_slave_full_syncs | integer | Maximum number of simultaneous replica full-syncs that may be running at any given time (0: Unlimited, -1: Use cluster settings) | +| quorum_only | `enabled`
`disabled` | If activated, configures the node as a [quorum-only node]({{< relref "/glossary/_index.md#letter-p" >}}) | + +### Returns + +Returns `Finished successfully` if the node configuration was changed. Otherwise, it returns an error. + +Use [`rladmin info node`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/info#info-node" >}}) to verify the node configuration was changed. + +### Example + +``` sh +$ rladmin tune node 3 max_redis_servers 120 +Finished successfully +$ rladmin info node 3 | grep "max redis servers" + max redis servers: 120 +``` + +## `tune proxy` + +Configures proxy parameters. + +``` sh +rladmin tune proxy { | all } + [ mode { static | dynamic } ] + [ threads ] + [ max_threads ] + [ scale_threshold ] + [ scale_duration ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------|----------------------------|-------------------------------------------------------------------------------------| +| id | integer | ID of the specified proxy | +| all | | Configures settings for all proxies | +| max_threads | integer, (range: 1-255) | Maximum number of threads allowed | +| mode | `static`
`dynamic` | Determines if the proxy automatically adjusts the number of threads based on load size | +| scale_duration | time in seconds, (range: 10-300) | Time of scale_threshold CPU utilization before the automatic proxy automatically scales | +| scale_threshold | percentage, (range: 50-99) | CPU utilization threshold that triggers spawning new threads | +| threads | integer, (range: 1-255) | Initial number of threads created at startup | + +### Returns + +Returns `OK` if the proxy configuration was changed. Otherwise, it returns an error. + +Use [`rladmin info proxy`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/info#info-proxy" >}}) to verify the proxy configuration was changed. + +### Example + +``` sh +$ rladmin tune proxy 2 scale_threshold 75 +Configuring proxies: + - proxy:2: ok +$ rladmin info proxy 2 | grep scale_threshold + scale_threshold: 75 (%) +``` +--- +Title: rladmin cluster master +alwaysopen: false +categories: +- docs +- operate +- rs +description: Identifies or changes the cluster's master node. +headerRange: '[1-2]' +linkTitle: master +tags: +- configured +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/cluster/master/' +--- + +Identifies the cluster's master node. Use `set` to change the cluster's master to a different node. + +```sh +cluster master [ set ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| node_id | integer | Unique node ID | + +### Returns + +Returns the ID of the cluster's master node. Otherwise, it returns an error message. + +### Example + +Identify the cluster's master node: + +```sh +$ rladmin cluster master +Node 1 is the cluster master node +``` + +Change the cluster master to node 3: + +```sh +$ rladmin cluster master set 3 +Node 3 set to be the cluster master node +``` +--- +Title: rladmin cluster recover +alwaysopen: false +categories: +- docs +- operate +- rs +description: Recovers a cluster from a backup file. +headerRange: '[1-2]' +linkTitle: recover +tags: +- non-configured +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/cluster/recover/' +--- + +Recovers a cluster from a backup file. The default location of the configuration backup file is `/var/opt/redislabs/persist/ccs/ccs-redis.rdb`. + +```sh +rladmin cluster recover + filename + [ ephemeral_path ] + [ persistent_path ] + [ ccs_persistent_path ] + [ rack_id ] + [ override_rack_id ] + [ node_uid ] + [ flash_enabled ] + [ flash_path ] + [ addr ] + [ external_addr ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| addr | IP address | Sets a node's internal IP address. If not provided, the node sets the address automatically. (optional) | +| ccs_persistent_path | filepath | Path to the location of CCS snapshots (default is the same as persistent_path) (optional) | +| external_addr | IP address | Sets a node's external IP address. If not provided, the node sets the address automatically. 
(optional) | +| ephemeral_path | filepath (default: /var/opt/redislabs) | Path to an ephemeral storage location (optional) | +| filename | filepath | Backup file to use for recovery | +| flash_enabled | | Enables flash storage (optional) | +| flash_path | filepath (default: /var/opt/redislabs/flash) | Path to the flash storage location in case the node does not support CAPI (required if flash_enabled) | +| node_uid | integer (default: 1) | Specifies which node will recover first and become master (optional) | +| override_rack_id | | Changes to a new rack, specified by `rack_id` (optional) | +| persistent_path | filepath | Path to the persistent storage location (optional) | +| rack_id | string | Switches to the specified rack (optional) | + +### Returns + +Returns `ok` if the cluster recovered successfully. Otherwise, it returns an error message. + +### Example + +```sh +$ rladmin cluster recover filename /tmp/persist/ccs/ccs-redis.rdb node_uid 1 rack_id 5 +Initiating cluster recovery... ok +``` +--- +Title: rladmin cluster stats_archiver +alwaysopen: false +categories: +- docs +- operate +- rs +description: Enables/deactivates the stats archiver. +headerRange: '[1-2]' +linkTitle: stats_archiver +tags: +- configured +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/cluster/stats_archiver/' +--- + +Enables or deactivates the stats archiver, which logs statistics in CSV (comma-separated values) format. + +```sh +rladmin cluster stats_archiver { enabled | disabled } +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| enabled | Turn on the stats archiver | +| disabled | Turn off the stats archiver | + +### Returns + +Returns the updated status of the stats archiver. + +### Example + +```sh +$ rladmin cluster stats_archiver enabled +Status: enabled +``` +--- +Title: rladmin cluster reset_password +alwaysopen: false +categories: +- docs +- operate +- rs +description: Changes the password for a given email. +headerRange: '[1-2]' +linkTitle: reset_password +tags: +- configured +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/cluster/reset_password/' +--- + +Changes the password for the user associated with the specified email address. + +Enter a new password when prompted. Then enter the same password when prompted a second time to confirm the password change. + +```sh +rladmin cluster reset_password +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| user email | email address | The email address of the user that needs a password reset | + +### Returns + +Reports whether the password change succeeded or an error occurred. + +### Example + +```sh +$ rladmin cluster reset_password user@example.com +New password: +New password (again): +Password changed. +``` +--- +Title: rladmin cluster config +alwaysopen: false +categories: +- docs +- operate +- rs +description: Updates the cluster's configuration. +headerRange: '[1-2]' +linkTitle: config +tags: +- configured +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/cluster/config/' +--- + +Updates the cluster configuration. 
+ +```sh + rladmin cluster config + [ auditing db_conns audit_protocol { TCP | local } + audit_address audit_port ] + [bigstore_driver {speedb | rocksdb} ] + [ control_cipher_suites ] + [ cm_port ] + [ cm_session_timeout_minutes ] + [ cnm_http_port ] + [ cnm_https_port ] + [ crdb_coordinator_port ] + [ data_cipher_list ] + [ data_cipher_suites_tls_1_3 ] + [ debuginfo_path ] + [ encrypt_pkeys { enabled | disabled } ] + [ envoy_admin_port ] + [ envoy_mgmt_server_port ] + [ gossip_envoy_admin_port ] + [ handle_redirects { enabled | disabled } ] + [ handle_metrics_redirects { enabled | disabled } ] + [ http_support { enabled | disabled } ] + [ ipv6 { enabled | disabled } ] + [ min_control_TLS_version { 1.2 | 1.3 } ] + [ min_data_TLS_version { 1.2 | 1.3 } ] + [ min_sentinel_TLS_version { 1.2 | 1.3 } ] + [ reserved_ports ] + [ s3_url ] + [ s3_ca_cert ] + [ saslauthd_ldap_conf ] + [ sentinel_tls_mode { allowed | required | disabled } ] + [ sentinel_cipher_suites ] + [ services { cm_server | crdb_coordinator | crdb_worker | + mdns_server | pdns_server | saslauthd | + stats_archiver } { enabled | disabled } ] + [ upgrade_mode { enabled | disabled } ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| audit_address | string | TCP/IP address where a listener can capture [audit event notifications]({{< relref "/operate/rs/7.4/security/audit-events" >}}) | +| audit_port | string | Port where a listener can capture [audit event notifications]({{< relref "/operate/rs/7.4/security/audit-events" >}}) | +| audit_protocol | `tcp`
`local` | Protocol used for [audit event notifications]({{< relref "/operate/rs/7.4/security/audit-events" >}})
For production systems, only `tcp` is supported. | +| control_cipher_suites | list of ciphers | Cipher suites used for TLS connections to the Cluster Manager UI (specified in the format understood by the BoringSSL library)
(previously named `cipher_suites`) |
+| cm_port | integer | UI server listening port |
+| cm_session_timeout_minutes | integer | Timeout in minutes for the CM session |
+| cnm_http_port | integer | HTTP REST API server listening port |
+| cnm_https_port | integer | HTTPS REST API server listening port |
+| crdb_coordinator_port | integer, (range: 1024-65535) (default: 9081) | CRDB coordinator port |
+| data_cipher_list | list of ciphers | Cipher suites used by the data plane (specified in the format understood by the OpenSSL library) |
+| data_cipher_suites_tls_1_3 | list of ciphers | Specifies the enabled TLS 1.3 ciphers for the data plane |
+| debuginfo_path | filepath | Local directory to place generated support package files |
+| encrypt_pkeys | `enabled`
`disabled` | Enable or turn off encryption of private keys | +| envoy_admin_port | integer, (range: 1024-65535) | Envoy admin port. Changing this port during runtime might result in an empty response because envoy serves as the cluster gateway.| +| envoy_mgmt_server_port | integer, (range: 1024-65535) | Envoy management server port| +| gossip_envoy_admin_port | integer, (range: 1024-65535) | Gossip envoy admin port| +| handle_redirects | `enabled`
`disabled` | Enable or turn off handling DNS redirects when DNS is not configured and running behind a load balancer | +| handle_metrics_redirects | `enabled`
`disabled` | Enable or turn off handling cluster redirects internally for Metrics API | +| http_support | `enabled`
`disabled` | Enable or turn off using HTTP for REST API connections | +| ipv6 | `enabled`
`disabled` | Enable or turn off IPv6 connections to the Cluster Manager UI | +| min_control_TLS_version | `1.2`
`1.3` | The minimum TLS protocol version that is supported for the control path | +| min_data_TLS_version | `1.2`
`1.3` | The minimum TLS protocol version that is supported for the data path | +| min_sentinel_TLS_version | `1.2`
`1.3` | The minimum TLS protocol version that is supported for the discovery service |
+| reserved_ports | list of ports/port ranges | List of reserved ports and/or port ranges to avoid using for database endpoints (for example `reserved_ports 11000 13000-13010`) |
+| s3_url | string | The URL of S3 export and import |
+| s3_ca_cert | string | The CA certificate filepath for S3 export and import |
+| saslauthd_ldap_conf | filepath | Updates LDAP authentication configuration for the cluster |
+| sentinel_cipher_suites | list of ciphers | Cipher suites used by the discovery service (supported ciphers are implemented by the `cipher_suites.go` package) |
+| sentinel_tls_mode | `allowed`
`required`
`disabled` | Define the SSL policy for the discovery service
(previously named `sentinel_ssl_policy`) | +| services | `cm_server`
`crdb_coordinator`
`crdb_worker`
`mdns_server`
`pdns_server`
`saslauthd`
`stats_archiver`

`enabled`
`disabled` | Enable or turn off selected cluster services | +| upgrade_mode | `enabled`
`disabled` | Enable or turn off upgrade mode on the cluster | + +### Returns + +Reports whether the cluster was configured successfully. Displays an error message if the configuration attempt fails. + +### Example + +```sh +$ rladmin cluster config cm_session_timeout_minutes 20 +Cluster configured successfully +``` +--- +Title: rladmin cluster ocsp +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manages OCSP. +headerRange: '[1-2]' +linkTitle: ocsp +tags: +- configured +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/cluster/ocsp/' +--- + +Manages OCSP configuration and verifies the status of a server certificate maintained by a third-party [certificate authority (CA)](https://en.wikipedia.org/wiki/Certificate_authority). + +## `ocsp certificate_compatible` + +Checks if the proxy certificate contains an OCSP URI. + +```sh +rladmin cluster ocsp certificate_compatible +``` + +### Parameters + +None + +### Returns + +Returns the OCSP URI if it exists. Otherwise, it returns an error. + +### Example + +```sh +$ rladmin cluster ocsp certificate_compatible +Success. OCSP URI is http://responder.ocsp.url.com +``` + +## `ocsp config` + +Displays or updates OCSP configuration. Run the command without the `set` option to display the current configuration of a parameter. + +```sh +rladmin cluster ocsp config + [set ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|---------------|-------------| +| ocsp_functionality | enabled

disabled | Enables or turns off OCSP for the cluster | +| query_frequency | integer (range: 60-86400) (default: 3600) | The time interval in seconds between OCSP queries to check the certificate's status | +| recovery_frequency | integer (range: 60-86400) (default: 60) | The time interval in seconds between retries after a failed query | +| recovery_max_tries | integer (range: 1-100) (default: 5) | The number of retries before the validation query fails and invalidates the certificate | +| responder_url | string | The OCSP server URL embedded in the proxy certificate (you cannot manually set this parameter) | +| response_timeout | integer (range: 1-60) (default: 1) | The time interval in seconds to wait for a response before timing out | + +### Returns + +If you run the `ocsp config` command without the `set` option, it displays the specified parameter's current configuration. + +### Example + +```sh +$ rladmin cluster ocsp config recovery_frequency +Recovery frequency of the OCSP server is 60 seconds +$ rladmin cluster ocsp config recovery_frequency set 30 +$ rladmin cluster ocsp config recovery_frequency +Recovery frequency of the OCSP server is 30 seconds +``` + +## `ocsp status` + +Returns the latest cached status of the certificate's OCSP response. + +```sh +rladmin cluster ocsp status +``` +### Parameters + +None + +### Returns + +Returns the latest cached status of the certificate's OCSP response. + +### Example + +```sh +$ rladmin cluster ocsp status +OCSP certificate status is: REVOKED +produced_at: Wed, 22 Dec 2021 12:50:11 GMT +responder_url: http://responder.ocsp.url.com +revocation_time: Wed, 22 Dec 2021 12:50:04 GMT +this_update: Wed, 22 Dec 2021 12:50:11 GMT +``` + +## `ocsp test_certificate` + +Queries the OCSP server for the certificate's latest status, then caches and displays the response. + +```sh +rladmin cluster ocsp test_certificate +``` + +### Parameters + +None + +### Returns + +Returns the latest status of the certificate's OCSP response. + +### Example + +```sh +$ rladmin cluster ocsp test_certificate +Initiating a query to OCSP server +...OCSP certificate status is: REVOKED +produced_at: Wed, 22 Dec 2021 12:50:11 GMT +responder_url: http://responder.ocsp.url.com +revocation_time: Wed, 22 Dec 2021 12:50:04 GMT +this_update: Wed, 22 Dec 2021 12:50:11 GMT +``` +--- +Title: rladmin cluster certificate +alwaysopen: false +categories: +- docs +- operate +- rs +description: Sets the cluster certificate. +headerRange: '[1-2]' +linkTitle: certificate +tags: +- configured +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/cluster/certificate/' +--- + +Sets a cluster certificate to a specified PEM file. + +```sh +rladmin cluster certificate + set + certificate_file + [ key_file ] +``` + +To set a certificate for a specific service, use the corresponding certificate name. See the [certificates table]({{< relref "/operate/rs/7.4/security/certificates" >}}) for the list of cluster certificates and their descriptions. + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| certificate name | 'cm'
'api'
'proxy'
'syncer'
'metrics_exporter' | Name of the certificate to update | +| certificate_file | filepath | Path to the certificate file | +| key_file | filepath | Path to the key file (optional) | + +### Returns + +Reports that the certificate was set to the specified file. Returns an error message if the certificate fails to update. + +### Example + +```sh +$ rladmin cluster certificate set proxy \ + certificate_file /tmp/proxy.pem +Set proxy certificate to contents of file /tmp/proxy.pem +``` +--- +Title: rladmin cluster create +alwaysopen: false +categories: +- docs +- operate +- rs +description: Creates a new cluster. +headerRange: '[1-2]' +linkTitle: create +tags: +- non-configured +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/cluster/create/' +--- + +Creates a new cluster. The node where you run `rladmin cluster create` becomes the first node of the new cluster. + +```sh +cluster create + name + username + password + [ node_uid ] + [ rack_aware ] + [ rack_id ] + [ license_file ] + [ ephemeral_path ] + [ persistent_path ] + [ ccs_persistent_path ] + [ register_dns_suffix ] + [ flash_enabled ] + [ flash_path ] + [ addr ] + [ external_addr [ ... ] ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| addr | IP address | The node's internal IP address (optional) | +| ccs_persistent_path | filepath (default: /var/opt/redislabs/persist) | Path to the location of CCS snapshots (optional) | +| ephemeral_path | filepath (default: /var/opt/redislabs) | Path to the ephemeral storage location (optional) | +| external_addr | list of IP addresses | A space-delimited list of the node's external IP addresses (optional) | +| flash_enabled | | Enables flash storage (optional) | +| flash_path | filepath (default: /var/opt/redislabs/flash) | Path to the flash storage location (optional) | +| license_file | filepath | Path to the RLEC license file (optional) | +| name | string | Cluster name | +| node_uid | integer | Unique node ID (optional) | +| password | string | Admin user's password | +| persistent_path | filepath (default: /var/opt/redislabs/persist) | Path to the persistent storage location (optional) | +| rack_aware | | Activates or deactivates rack awareness (optional) | +| rack_id | string | The rack's unique identifier (optional) | +| register_dns_suffix | | Enables database mapping to both internal and external IP addresses (optional) | +| username | email address | Admin user's email address | + +### Returns + +Returns `ok` if the new cluster was created successfully. Otherwise, it returns an error message. + +### Example + +```sh +$ rladmin cluster create name cluster.local \ + username admin@example.com \ + password admin-password +Creating a new cluster... ok +``` +--- +Title: rladmin cluster running_actions +alwaysopen: false +categories: +- docs +- operate +- rs +description: Lists all active tasks. +headerRange: '[1-2]' +linkTitle: running_actions +tags: +- configured +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/cluster/running_actions/' +--- + +Lists all active tasks running on the cluster. + +```sh +rladmin cluster running_actions +``` + +### Parameters + +None + +### Returns + +Returns details about any active tasks running on the cluster. 
+ +### Example + +```sh +$ rladmin cluster running_actions +Got 1 tasks: +1) Task: maintenance_on (ce391d81-8d51-4ce2-8f63-729c7ac2589e) Node: 1 Status: running +``` +--- +Title: rladmin cluster debug_info +alwaysopen: false +categories: +- docs +- operate +- rs +description: Creates a support package. +headerRange: '[1-2]' +linkTitle: debug_info +tags: +- configured +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/cluster/debug_info/' +--- + +Downloads a support package to the specified path. If you do not specify a path, it downloads the package to the default path specified in the cluster configuration file. + +```sh +rladmin cluster debug_info + [ node ] + [ path ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| node | integer | Downloads a support package for the specified node | +| path | filepath | Specifies the location where the support package should download | + +### Returns + +Reports the progress of the support package download. + +### Example + +```sh +$ rladmin cluster debug_info node 1 +Preparing the debug info files package +Downloading... +[==================================================] +Downloading complete. File /tmp/debuginfo.20220511-215637.node-1.tar.gz is saved. +``` +--- +Title: rladmin cluster join +alwaysopen: false +categories: +- docs +- operate +- rs +description: Adds a node to an existing cluster. +headerRange: '[1-2]' +linkTitle: join +tags: +- non-configured +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/cluster/join/' +--- + +Adds a node to an existing cluster. + +```sh +rladmin cluster join + nodes + username + password + [ ephemeral_path ] + [ persistent_path ] + [ ccs_persistent_path ] + [ rack_id ] + [ override_rack_id ] + [ replace_node ] + [ flash_enabled ] + [ flash_path ] + [ addr ] + [ external_addr [ ... ] ] + [ override_repair ] + [ accept_servers { enabled | disabled } ] + [ cnm_http_port ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| accept_servers | 'enabled'
'disabled' | Allows allocation of resources on the new node when enabled (optional) | +| addr | IP address | Sets a node's internal IP address. If not provided, the node sets the address automatically. (optional) | +| ccs_persistent_path | filepath (default: /var/opt/redislabs/persist) | Path to the CCS snapshot location (the default is the same as persistent_path) (optional) | +| cnm_http_port | integer | Joins a cluster that has a non-default cnm_http_port (optional) | +| ephemeral_path | filepath | Path to the ephemeral storage location (optional) | +| external_addr | list of IP addresses | Sets a node's external IP addresses (space-delimited list). If not provided, the node sets the address automatically. (optional) | +| flash_enabled | | Enables flash capabilities for a database (optional) | +| flash_path | filepath (default: /var/opt/redislabs/flash) | Path to the flash storage location in case the node does not support CAPI (required if flash_enabled) | +| nodes | IP address | Internal IP address of an existing node in the cluster | +| override_rack_id | | Changes to a new rack, specified by `rack_id` (optional) | +| override_repair | | Enables joining a cluster with a dead node (optional) | +| password | string | Admin user's password | +| persistent_path | filepath (default: /var/opt/redislabs/persist) | Path to the persistent storage location (optional) | +| rack_id | string | Moves the node to the specified rack (optional) | +| replace_node | integer | Replaces the specified node with the new node (optional) | +| username | email address | Admin user's email address | + +### Returns + +Returns `ok` if the node joined the cluster successfully. Otherwise, it returns an error message. + +### Example + +```sh +$ rladmin cluster join nodes 192.0.2.2 \ + username admin@example.com \ + password admin-password +Joining cluster... ok +``` +--- +Title: rladmin cluster +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage cluster. +headerRange: '[1-2]' +hideListLinks: true +linkTitle: cluster +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/cluster/' +--- + +Manages cluster configuration and administration. Most `rladmin cluster` commands are only for clusters that are already configured, while a few others are only for new clusters that have not been configured. + +## Commands for configured clusters + +{{}} + +## Commands for non-configured clusters + +{{}} +--- +Title: rladmin bind +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manages the proxy policy for a specified database endpoint. +headerRange: '[1-2]' +linkTitle: bind +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/bind/' +--- + +Manages the proxy policy for a specific database endpoint. + +## `bind endpoint exclude` + +Defines a list of nodes to exclude from the proxy policy for a specific database endpoint. When you exclude a node, the endpoint cannot bind to the node's proxy. + +Each time you run an exclude command, it overwrites the previous list of excluded nodes. + +```sh +rladmin bind + [ db { db: | } ] + endpoint exclude + +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| db | db:\
name | Only allows endpoints for the specified database | +| endpoint | endpoint ID | Changes proxy settings for the specified endpoint | +| proxy | list of proxy IDs | Proxies to exclude | + +### Returns + +Returns `Finished successfully` if the list of excluded proxies was successfully changed. Otherwise, it returns an error. + +Use [`rladmin status endpoints`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-endpoints" >}}) to verify that the policy changed. + +### Example + +``` sh +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:2 all-nodes No +db:6 tr02 endpoint:6:1 node:1 all-nodes No +db:6 tr02 endpoint:6:1 node:3 all-nodes No +$ rladmin bind endpoint 6:1 exclude 2 +Executing bind endpoint: OOO. +Finished successfully +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:1 all-nodes -2 No +db:6 tr02 endpoint:6:1 node:3 all-nodes -2 No +``` + +## `bind endpoint include` + +Defines a list of nodes to include in the proxy policy for the specific database endpoint. + +Each time you run an include command, it overwrites the previous list of included nodes. + +```sh +rladmin bind + [ db { db: | } ] + endpoint include + +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| db | db:\
name | Only allows endpoints for the specified database | +| endpoint | endpoint ID | Changes proxy settings for the specified endpoint | +| proxy | list of proxy IDs | Proxies to include | + +### Returns + +Returns `Finished successfully` if the list of included proxies was successfully changed. Otherwise, it returns an error. + +Use [`rladmin status endpoints`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-endpoints" >}}) to verify that the policy changed. + +### Example + +``` sh +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:3 all-master-shards No +$ rladmin bind endpoint 6:1 include 3 +Executing bind endpoint: OOO. +Finished successfully +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:1 all-master-shards +3 No +db:6 tr02 endpoint:6:1 node:3 all-master-shards +3 No +``` + +## `bind endpoint policy` + +Changes the overall proxy policy for a specific database endpoint. + +```sh +rladmin bind + [ db { db: | } ] + endpoint + policy { single | all-master-shards | all-nodes } +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| db | db:\
name | Only allows endpoints for the specified database | +| endpoint | endpoint ID | Changes proxy settings for the specified endpoint | +| policy | 'all-master-shards'
'all-nodes'
'single' | Changes the [proxy policy](#proxy-policies) to the specified policy | + +| Proxy policy | Description | +| - | - | +| all-master-shards | Multiple proxies, one on each master node (best for high traffic and multiple master shards) | +| all-nodes | Multiple proxies, one on each node of the cluster (increases traffic in the cluster, only used in special cases) | +| single | All traffic flows through a single proxy bound to the database endpoint (preferable in most cases) | + +### Returns + +Returns `Finished successfully` if the proxy policy was successfully changed. Otherwise, it returns an error. + +Use [`rladmin status endpoints`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-endpoints" >}}) to verify that the policy changed. + +### Example + +``` sh +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:1 all-nodes -2 No +db:6 tr02 endpoint:6:1 node:3 all-nodes -2 No +$ rladmin bind endpoint 6:1 policy all-master-shards +Executing bind endpoint: OOO. +Finished successfully +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:3 all-master-shards No +``` +--- +Title: rladmin restart +alwaysopen: false +categories: +- docs +- operate +- rs +description: Restarts Redis Enterprise Software processes for a specific database. +headerRange: '[1-2]' +linkTitle: restart +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/restart/' +--- + +Schedules a restart of the Redis Enterprise Software processes on primary and replica instances of a specific database. + +``` sh +rladmin restart db { db: | } + [preserve_roles] + [discard_data] + [force_discard] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|----------------|--------------------------------|-----------------------------------------------------------------------| +| db | db:\
name | Restarts Redis Enterprise Software processes for the specified database | +| discard_data | | Allows discarding data if there is no persistence or replication | +| force_discard | | Forcibly discards data even if there is persistence or replication | +| preserve_roles | | Performs an additional failover to maintain shard roles | + +### Returns + +Returns `Done` if the restart completed successfully. Otherwise, it returns an error. + +### Example + +``` sh +$ rladmin restart db db:5 preserve_roles +Monitoring 1db07491-35da-4bb6-9bc1-56949f4c312a +active - SMUpgradeBDB init +active - SMUpgradeBDB stop_forwarding +active - SMUpgradeBDB stop_active_expire +active - SMUpgradeBDB check_slave +oactive - SMUpgradeBDB stop_active_expire +active - SMUpgradeBDB second_failover +completed - SMUpgradeBDB +Done +``` +--- +Title: rladmin migrate +alwaysopen: false +categories: +- docs +- operate +- rs +description: Moves Redis Enterprise Software shards or endpoints to a new node in + the same cluster. +headerRange: '[1-2]' +linkTitle: migrate +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/migrate/' +--- + +Moves Redis Enterprise shards or endpoints to a new node in the same cluster. + +For more information about shard migration use cases and considerations, see [Migrate database shards]({{}}). + +## `migrate all_master_shards` + +Moves all primary shards of a specified database or node to a new node in the same cluster. + +```sh +rladmin migrate { db { db: | } | node } + all_master_shards + target_node + [ override_policy ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-------------------------------|------------------------|---------------------------------------------------------------------------------| +| db | db:\
name | Limits migration to a specific database | +| node | integer | Limits migration to a specific origin node | +| target_node | integer | Migration target node | +| override_policy | | Overrides the rack aware policy and allows primary and replica shards on the same node | + +### Returns + +Returns `Done` if the migration completed successfully. Otherwise, returns an error. + +Use [`rladmin status shards`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-shards" >}}) to verify the migration completed. + +### Example + +```sh +$ rladmin status shards db db:6 sort ROLE +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:6 tr02 redis:14 node:3 master 0-4095 3.01MB OK +db:6 tr02 redis:16 node:3 master 4096-8191 3.2MB OK +db:6 tr02 redis:18 node:3 master 8192-12287 3.2MB OK +db:6 tr02 redis:20 node:3 master 12288-16383 3.01MB OK +$ rladmin migrate db db:6 all_master_shards target_node 1 +Monitoring 8b0f28e2-4342-427a-a8e3-a68cba653ffe +queued - migrate_shards +running - migrate_shards +Executing migrate_redis with shards_uids ['18', '14', '20', '16'] +Ocompleted - migrate_shards +Done +$ rladmin status shards node 1 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:6 tr02 redis:14 node:1 master 0-4095 3.22MB OK +db:6 tr02 redis:16 node:1 master 4096-8191 3.22MB OK +db:6 tr02 redis:18 node:1 master 8192-12287 3.22MB OK +db:6 tr02 redis:20 node:1 master 12288-16383 2.99MB OK +``` +## `migrate all_shards` + +Moves all shards on a specified node to a new node in the same cluster. + +``` sh +rladmin migrate node + [ max_concurrent_bdb_migrations ] + all_shards + target_node + [ override_policy ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-------------------------------|------------------------|---------------------------------------------------------------------------------| +| node | integer | Limits migration to a specific origin node | +| max_concurrent_bdb_migrations | integer | Sets the maximum number of concurrent endpoint migrations | +| override_policy | | Overrides the rack aware policy and allows primary and replica shards on the same node | + +### Returns + +Returns `Done` if the migration completed successfully. Otherwise, returns an error. + +Use [`rladmin status shards`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-shards" >}}) to verify the migration completed. 
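+
+As an illustrative sketch only (the node IDs and the concurrency value are examples), `max_concurrent_bdb_migrations` can be combined with `all_shards` to limit how many databases have their endpoints migrated at the same time while draining a node:
+
+```sh
+# Drain all shards from node 1 to node 2, allowing at most
+# two concurrent endpoint migrations (example values)
+rladmin migrate node 1 max_concurrent_bdb_migrations 2 all_shards target_node 2
+```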
+ +### Example + +```sh +$ rladmin status shards node 1 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:1 master 0-16383 3.04MB OK +db:6 tr02 redis:15 node:1 slave 0-4095 2.93MB OK +db:6 tr02 redis:17 node:1 slave 4096-8191 2.93MB OK +db:6 tr02 redis:19 node:1 slave 8192-12287 3.08MB OK +db:6 tr02 redis:21 node:1 slave 12288-16383 3.08MB OK +$ rladmin migrate node 1 all_shards target_node 2 +Monitoring 71a4f371-9264-4398-a454-ce3ff4858c09 +queued - migrate_shards +.running - migrate_shards +Executing migrate_redis with shards_uids ['21', '15', '17', '19'] +OExecuting migrate_redis with shards_uids ['12'] +Ocompleted - migrate_shards +Done +$ rladmin status shards node 2 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:2 master 0-16383 3.14MB OK +db:6 tr02 redis:15 node:2 slave 0-4095 2.96MB OK +db:6 tr02 redis:17 node:2 slave 4096-8191 2.96MB OK +db:6 tr02 redis:19 node:2 slave 8192-12287 2.96MB OK +db:6 tr02 redis:21 node:2 slave 12288-16383 2.96MB OK +``` + +## `migrate all_slave_shards` + +Moves all replica shards of a specified database or node to a new node in the same cluster. + +```sh +rladmin migrate { db { db: | } | node } + all_slave_shards + target_node + [ override_policy ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-------------------------------|------------------------|---------------------------------------------------------------------------------| +| db | db:\
name | Limits migration to a specific database | +| node | integer | Limits migration to a specific origin node | +| target_node | integer | Migration target node | +| override_policy | | Overrides the rack aware policy and allows primary and replica shards on the same node | + +### Returns + +Returns `Done` if the migration completed successfully. Otherwise, returns an error. + +Use [`rladmin status shards`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-shards" >}}) to verify the migration completed. + +### Example + +```sh +$ rladmin status shards db db:6 node 2 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:6 tr02 redis:15 node:2 slave 0-4095 3.06MB OK +db:6 tr02 redis:17 node:2 slave 4096-8191 3.06MB OK +db:6 tr02 redis:19 node:2 slave 8192-12287 3.06MB OK +db:6 tr02 redis:21 node:2 slave 12288-16383 3.06MB OK +$ rladmin migrate db db:6 all_slave_shards target_node 3 +Monitoring 5d36a98c-3dc8-435f-8ed9-35809ba017a4 +queued - migrate_shards +.running - migrate_shards +Executing migrate_redis with shards_uids ['15', '17', '21', '19'] +Ocompleted - migrate_shards +Done +$ rladmin status shards db db:6 node 3 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:6 tr02 redis:15 node:3 slave 0-4095 3.04MB OK +db:6 tr02 redis:17 node:3 slave 4096-8191 3.04MB OK +db:6 tr02 redis:19 node:3 slave 8192-12287 3.04MB OK +db:6 tr02 redis:21 node:3 slave 12288-16383 3.04MB OK +``` + +## `migrate endpoint_to_shards` + +Moves database endpoints to the node where the majority of primary shards are located. + +```sh +rladmin migrate [ db { db: | } ] + endpoint_to_shards + [ restrict_target_node ] + [ commit ] + [ max_concurrent_bdb_migrations ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-------------------------------|------------------------|---------------------------------------------------------------------------------| +| db | db:\
name | Limits migration to a specific database | +| restrict_target_node | integer | Moves the endpoint only if the target node matches the specified node | +| commit | | Performs endpoint movement | +| max_concurrent_bdb_migrations | integer | Sets the maximum number of concurrent endpoint migrations | + + +### Returns + +Returns a list of steps to perform the migration. If the `commit` flag is set, the steps will run and return `Finished successfully` if they were completed. Otherwise, returns an error. + +Use [`rladmin status endpoints`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-endpoints" >}}) to verify that the endpoints were moved. + +### Example + +```sh +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:3 all-master-shards No +$ rladmin migrate db db:6 endpoint_to_shards +* Going to bind endpoint:6:1 to node 1 +Dry-run completed, add 'commit' argument to execute +$ rladmin migrate db db:6 endpoint_to_shards commit +* Going to bind endpoint:6:1 to node 1 +Executing bind endpoint:6:1: OOO. +Finished successfully +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:1 all-master-shards No +``` + +## `migrate shard` + +Moves one or more shards to a new node in the same cluster. + +```sh +rladmin migrate shard + [ preserve_roles ] + target_node + [ override_policy ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-------------------------------|------------------------|---------------------------------------------------------------------------------| +| shard | list of shard IDs | Shards to migrate | +| preserve_roles | | Performs an additional failover to guarantee the primary shards' roles are preserved | +| target_node | integer | Migration target node | +| override_policy | | Overrides the rack aware policy and allows primary and replica shards on the same node | + +### Returns + +Returns `Done` if the migration completed successfully. Otherwise, returns an error. + +Use [`rladmin status shards`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-shards" >}}) to verify the migration completed. + +### Example + +```sh +$ rladmin status shards db db:5 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:2 master 0-16383 3.01MB OK +db:5 tr01 redis:13 node:3 slave 0-16383 3.1MB OK +$ rladmin migrate shard 13 target_node 1 +Monitoring d2637eea-9504-4e94-a70c-76df087efcb2 +queued - migrate_shards +.running - migrate_shards +Executing migrate_redis with shards_uids ['13'] +Ocompleted - migrate_shards +Done +$ rladmin status shards db db:5 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:2 master 0-16383 3.01MB OK +db:5 tr01 redis:13 node:1 slave 0-16383 3.04MB OK +``` +--- +Title: rladmin recover +alwaysopen: false +categories: +- docs +- operate +- rs +description: Recovers databases in recovery mode. +headerRange: '[1-2]' +linkTitle: recover +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/recover/' +--- + +Recovers databases in recovery mode after events such as cluster failure, and restores the databases' configurations and data from stored persistence files. See [Recover a failed database]({{< relref "/operate/rs/7.4/databases/recover" >}}) for detailed instructions. 
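+
+As an illustrative workflow (the subcommands are documented below), you would typically list the recoverable databases first and then recover them:
+
+```sh
+# List databases currently in recovery mode
+rladmin recover list
+# Recover all of them (or use 'recover db' for a single database)
+rladmin recover all
+```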
+ +Database persistence files are stored in `/var/opt/redislabs/persist/redis/` by default, but you can specify a different directory to use for database recovery with [`rladmin node recovery_path set `]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/node/recovery-path" >}}). + +## `recover all` + +Recovers all databases in recovery mode. + +```sh +rladmin recover all + [ only_configuration ] +``` + +### Parameters + +| Parameters | Type/Value | Description | +|--------------------|------------|---------------------------------------------| +| only_configuration | | Recover database configuration without data | + +### Returns + +Returns `Completed successfully` if the database was recovered. Otherwise, returns an error. + +### Example + +``` +$ rladmin recover all + 0% [ 0 recovered | 0 failed ] | | Elapsed Time: 0:00:00[first-db (db:1) recovery] Initiated.[second-db (db:2) recovery] Initiated. + 50% [ 0 recovered | 0 failed ] |### | Elapsed Time: 0:00:04[first-db (db:1) recovery] Completed successfully + 75% [ 1 recovered | 0 failed ] |###### | Elapsed Time: 0:00:06[second-db (db:2) recovery] Completed successfully +100% [ 2 recovered | 0 failed ] |#########| Elapsed Time: 0:00:08 +``` + +## `recover db` + +Recovers a specific database in recovery mode. + +```sh +rladmin recover db { db: | } + [ only_configuration ] +``` + +### Parameters + +| Parameters | Type/Value | Description | +|--------------------|----------------------|---------------------------------------------| +| db | db:\
name | Database to recover | +| only_configuration | | Recover database configuration without data | + +### Returns + +Returns `Completed successfully` if the database was recovered. Otherwise, returns an error. + +### Example + +``` +$ rladmin recover db db:1 + 0% [ 0 recovered | 0 failed ] | | Elapsed Time: 0:00:00[demo-db (db:1) recovery] Initiated. + 50% [ 0 recovered | 0 failed ] |### | Elapsed Time: 0:00:00[demo-db (db:1) recovery] Completed successfully +100% [ 1 recovered | 0 failed ] |######| Elapsed Time: 0:00:02 +``` + +## `recover list` + +Shows a list of all databases that are currently in recovery mode. + +```sh +rladmin recover list +``` + +### Parameters + +None + +### Returns + +Displays a list of all recoverable databases. If no databases are in recovery mode, returns `No recoverable databases found`. + +### Example + +```sh +$ rladmin recover list +DATABASES IN RECOVERY STATE: +DB:ID NAME TYPE SHARDS REPLICATION PERSISTENCE STATUS +db:5 tr01 redis 1 enabled aof missing-files +db:6 tr02 redis 4 enabled snapshot ready +``` + +## `recover s3_import` + +Imports current database snapshot files from an AWS S3 bucket to a directory on the node. + +```sh +rladmin recover s3_import + s3_bucket + [ s3_prefix ] + s3_access_key_id + s3_secret_access_key + import_path +``` + +### Parameters + +| Parameters | Type/Value | Description | +|----------------------|------------|------------------------------------------------------------------| +| s3_bucket | string | S3 bucket name | +| s3_prefix | string | S3 object prefix | +| s3_access_key_id | string | S3 access key ID | +| s3_secret_access_key | string | S3 secret access key | +| import_path | filepath | Local import path where all database snapshots will be imported | + +### Returns + +Returns `Completed successfully` if the database files were imported. Otherwise, returns an error. + +### Example + +```sh +rladmin recover s3_import s3_bucket s3_prefix / s3_access_key_id s3_secret_access_key import_path /tmp +``` +--- +Title: rladmin status +alwaysopen: false +categories: +- docs +- operate +- rs +description: Displays the current cluster status and topology information. +headerRange: '[1-2]' +linkTitle: status +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/status/' +--- + +Displays the current cluster status and topology information. + +## `status` + +Displays the current status of all nodes, databases, database endpoints, and shards on the cluster. + +``` sh +rladmin status + [ extra ] + [ issues_only] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| extra \ | Extra options that show more information | +| issues_only | Filters out all items that have an `OK` status | + +| Extra parameter | Description | +|-------------------|-------------| +| extra all | Shows all `extra` information | +| extra backups | Shows periodic backup status | +| extra frag | Shows fragmented memory available after the restart | +| extra nodestats | Shows shards per node | +| extra rack_id | Shows `rack_id` if customer is not `rack_aware` | +| extra redis_version | Shows Redis version of all databases in the cluster | +| extra state_machine | Shows execution of state machine information | +| extra watchdog | Shows watchdog status | + +### Returns + +Returns tables of the status of all nodes, databases, and database endpoints on the cluster. + +If `issues_only` is specified, it only shows instances that do not have an `OK` status. 
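+
+For a quick health check, you can combine the command with `issues_only` so that only items needing attention are reported. This is an illustrative sketch; the output depends on your cluster state:
+
+```sh
+# Show only nodes, databases, endpoints, and shards that are not OK
+rladmin status issues_only
+```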
+ +### Example + +``` sh +$ rladmin status extra all +CLUSTER: +OK. Cluster master: 1 (198.51.100.2) +Cluster health: OK, [1, 0.13333333333333333, 0.03333333333333333] +failures/minute - avg1 1.00, avg15 0.13, avg60 0.03. + +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME MASTERS SLAVES OVERBOOKING_DEPTH SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION SHA RACK-ID STATUS +node:1 master 198.51.100.2 3d99db1fdf4b 4 0 10.91GB 4/100 6 14.91GB/19.54GB 10.91GB/16.02GB 6.2.12-37 5c2106 - OK +node:2 slave 198.51.100.3 fc7a3d332458 0 0 11.4GB 0/100 6 14.91GB/19.54GB 11.4GB/16.02GB 6.2.12-37 5c2106 - OK +*node:3 slave 198.51.100.4 b87cc06c830f 0 0 11.4GB 0/100 6 14.91GB/19.54GB 11.4GB/16.02GB 6.2.12-37 5c2106 - OK + +DATABASES: +DB:ID NAME TYPE STATUS SHARDS PLACEMENT REPLICATION PERSISTENCE ENDPOINT EXEC_STATE EXEC_STATE_MACHINE BACKUP_PROGRESS MISSING_BACKUP_TIME REDIS_VERSION +db:3 database3 redis active 4 dense disabled disabled redis-11103.cluster.local:11103 N/A N/A N/A N/A 6.0.16 + +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL WATCHDOG_STATUS +db:3 database3 endpoint:3:1 node:1 single No OK + +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY BACKUP_PROGRESS RAM_FRAG WATCHDOG_STATUS STATUS +db:3 database3 redis:4 node:1 master 0-4095 2.08MB N/A 4.73MB OK OK +db:3 database3 redis:5 node:1 master 4096-8191 2.08MB N/A 4.62MB OK OK +db:3 database3 redis:6 node:1 master 8192-12287 2.08MB N/A 4.59MB OK OK +db:3 database3 redis:7 node:1 master 12288-16383 2.08MB N/A 4.66MB OK OK +``` + +## `status databases` + +Displays the current status of all databases on the cluster. + +``` sh +rladmin status databases + [ extra ] + [ sort ] + [ issues_only ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| extra \ | Extra options that show more information | +| sort \ | Sort results by specified column titles | +| issues_only | Filters out all items that have an `OK` status | + + +| Extra parameter | Description | +|-------------------|-------------| +| extra all | Shows all `extra` information | +| extra backups | Shows periodic backup status | +| extra frag | Shows fragmented memory available after the restart | +| extra nodestats | Shows shards per node | +| extra rack_id | Shows `rack_id` if customer is not `rack_aware` | +| extra redis_version | Shows Redis version of all databases in the cluster | +| extra state_machine | Shows execution of state machine information | +| extra watchdog | Shows watchdog status | + +### Returns + +Returns a table of the status of all databases on the cluster. + +If `sort ` is specified, the result is sorted by the specified table columns. + +If `issues_only` is specified, it only shows databases that do not have an `OK` status. + +### Example + +``` sh +$ rladmin status databases sort REPLICATION PERSISTENCE +DB:ID NAME TYPE STATUS SHARDS PLACEMENT REPLICATION PERSISTENCE ENDPOINT +db:1 database1 redis active 1 dense disabled disabled redis-10269.testdbd11169.localhost:10269 +db:2 database2 redis active 1 dense disabled snapshot redis-13897.testdbd11169.localhost:13897 +db:3 database3 redis active 1 dense enabled snapshot redis-19416.testdbd13186.localhost:19416 +``` + +## `status endpoints` + +Displays the current status of all endpoints on the cluster. 
+ +``` sh +rladmin status endpoints + [ node ] + [ db { db: | } ] + [ extra ] + [ sort ] + [ issues_only ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| node \ | Only show endpoints for the specified node ID | +| db db:\ | Only show endpoints for the specified database ID | +| db \ | Only show endpoints for the specified database name | +| extra \ | Extra options that show more information | +| sort \ | Sort results by specified column titles | +| issues_only | Filters out all items that have an `OK` status | + + +| Extra parameter | Description | +|-------------------|-------------| +| extra all | Shows all `extra` information | +| extra backups | Shows periodic backup status | +| extra frag | Shows fragmented memory available after the restart | +| extra nodestats | Shows shards per node | +| extra rack_id | Shows `rack_id` if customer is not `rack_aware` | +| extra redis_version | Shows Redis version of all endpoints in the cluster | +| extra state_machine | Shows execution of state machine information | +| extra watchdog | Shows watchdog status | + +### Returns + +Returns a table of the status of all endpoints on the cluster. + +If `sort ` is specified, the result is sorted by the specified table columns. + +If `issues_only` is specified, it only shows endpoints that do not have an `OK` status. + +### Example + +``` sh +$ rladmin status endpoints +DB:ID NAME ID NODE ROLE SSL +db:1 database1 endpoint:1:1 node:1 single No +db:2 database2 endpoint:2:1 node:2 single No +db:3 database3 endpoint:3:1 node:3 single No +``` + +## `status modules` + +Displays the current status of modules installed on the cluster and modules used by databases. This information is not included in the combined status report returned by [`rladmin status`](#status). + +``` sh +rladmin status modules + [ db { db: | } ... { db: | } ] + [ extra { all | min_redis_version | module_id } ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| db db:\ | Provide a list of database IDs to show only modules used by the specified databases
(for example: `rladmin status modules db db:1 db:2`) | +| db \ | Provide a list of database names to show only modules used by the specified databases
(for example: `rladmin status modules db name1 name2`) | +| extra all | Shows all extra information | +| extra module_id | Shows module IDs | +| extra min_redis_version | Shows the minimum compatible Redis database version for each module | + +### Returns + +Returns the status of modules installed on the cluster and modules used by databases. + +### Example + +```sh +$ rladmin status modules extra all +CLUSTER MODULES: +MODULE VERSION MIN_REDIS_VERSION ID +RedisBloom 2.4.5 6.0 1b895a180592cbcae5bd3bff6af24be2 +RedisBloom 2.6.8 7.1 95264e7c9ac9540268c115c86a94659b +RediSearch 2 2.6.12 6.0 2c000539f65272f7a2712ed3662c2b6b +RediSearch 2 2.8.9 7.1 dd9a75710db528afa691767e9310ac6f +RedisGears 2.0.15 7.1 18c83d024b8ee22e7caf030862026ca6 +RedisGraph 2.10.12 6.0 5a1f2fdedb8f6ca18f81371ea8d28f68 +RedisJSON 2.4.7 6.0 28308b101a0203c21fa460e7eeb9344a +RedisJSON 2.6.8 7.1 b631b6a863edde1b53b2f7a27a49c004 +RedisTimeSeries 1.8.11 6.0 8fe09b00f56afe5dba160d234a6606af +RedisTimeSeries 1.10.9 7.1 98a492a017ea6669a162fd3503bf31f3 + +DATABASE MODULES: +DB:ID NAME MODULE VERSION ARGS STATUS +db:1 search-json-db RediSearch 2 2.8.9 PARTITIONS AUTO OK +db:1 search-json-db RedisJSON 2.6.8 OK +db:2 timeseries-db RedisTimeSeries 1.10.9 OK +``` + +## `status nodes` + +Displays the current status of all nodes on the cluster. + +``` sh +rladmin status nodes + [ extra ] + [ sort ] + [ issues_only ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| extra \ | Extra options that show more information | +| sort \ | Sort results by specified column titles | +| issues_only | Filters out all items that have an `OK` status | + + +| Extra parameter | Description | +|-------------------|-------------| +| extra all | Shows all `extra` information | +| extra backups | Shows periodic backup status | +| extra frag | Shows fragmented memory available after the restart | +| extra nodestats | Shows shards per node | +| extra rack_id | Shows `rack_id` if customer is not `rack_aware` | +| extra redis_version | Shows Redis version of all nodes in the cluster | +| extra state_machine | Shows execution of state machine information | +| extra watchdog | Shows watchdog status | + +### Returns + +Returns a table of the status of all nodes on the cluster. + +If `sort ` is specified, the result is sorted by the specified table columns. + +If `issues_only` is specified, it only shows nodes that do not have an `OK` status. + +### Example + +``` sh +$ rladmin status nodes sort PROVISIONAL_RAM HOSTNAME +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +node:1 master 198.51.100.2 3d99db1fdf4b 4/100 6 14.74GB/19.54GB 10.73GB/16.02GB 6.2.12-37 OK +*node:3 slave 198.51.100.4 b87cc06c830f 0/100 6 14.74GB/19.54GB 11.22GB/16.02GB 6.2.12-37 OK +node:2 slave 198.51.100.3 fc7a3d332458 0/100 6 14.74GB/19.54GB 11.22GB/16.02GB 6.2.12-37 OK +``` + +## `status shards` + +Displays the current status of all shards on the cluster. 
+ +``` sh +rladmin status shards + [ node ] + [ db {db: | } ] + [ extra ] + [ sort ] + [ issues_only ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| node \ | Only show shards for the specified node ID | +| db db:\ | Only show shards for the specified database ID | +| db \ | Only show shards for the specified database name | +| extra \ | Extra options that show more information | +| sort \ | Sort results by specified column titles | +| issues_only | Filters out all items that have an `OK` status | + + +| Extra parameter | Description | +|-------------------|-------------| +| extra all | Shows all `extra` information | +| extra backups | Shows periodic backup status | +| extra frag | Shows fragmented memory available after the restart | +| extra shardstats | Shows shards per node | +| extra rack_id | Shows `rack_id` if customer is not `rack_aware` | +| extra redis_version | Shows Redis version of all shards in the cluster | +| extra state_machine | Shows execution of state machine information | +| extra watchdog | Shows watchdog status | + +### Returns + +Returns a table of the status of all shards on the cluster. + +If `sort ` is specified, the result is sorted by the specified table columns. + +If `issues_only` is specified, it only shows shards that do not have an `OK` status. + +### Example + +``` sh +$ rladmin status shards sort USED_MEMORY ID +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:3 database3 redis:6 node:1 master 8192-12287 2.04MB OK +db:3 database3 redis:4 node:1 master 0-4095 2.08MB OK +db:3 database3 redis:5 node:1 master 4096-8191 2.08MB OK +db:3 database3 redis:7 node:1 master 12288-16383 2.08MB OK +``` +--- +Title: rladmin verify +alwaysopen: false +categories: +- docs +- operate +- rs +description: Prints verification reports for the cluster. +headerRange: '[1-2]' +linkTitle: verify +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/verify/' +--- + +Prints verification reports for the cluster. + +## `verify balance` + +Prints a balance report that displays all of the unbalanced endpoints or nodes in the cluster. + +```sh +rladmin verify balance [ node ] +``` + +The [proxy policy]({{< relref "/operate/rs/7.4/databases/configure/proxy-policy#proxy-policies" >}}) determines which nodes or endpoints to report as unbalanced. + +A node is unbalanced if: +- `all-nodes` proxy policy and the node has no endpoint + +An endpoint is unbalanced in the following cases: +- `single` proxy policy and one of the following is true: + - Shard placement is [`sparse`]({{< relref "/operate/rs/7.4/databases/memory-performance/shard-placement-policy.md#sparse-shard-placement-policy" >}}) and none of the master shards are on the node + - Shard placement is [`dense`]({{< relref "/operate/rs/7.4/databases/memory-performance/shard-placement-policy.md#dense-shard-placement-policy" >}}) and some master shards are on a different node from the endpoint +- `all-master-shards` proxy policy and one of the following is true: + - None of the master shards are on the node + - Some master shards are on a different node from the endpoint + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| node | integer | Specify a node ID to return a balance table for that node only (optional) | + +### Returns + +Returns a table of unbalanced endpoints and nodes in the cluster. 
+ +### Examples + +Verify all nodes: + +```sh +$ rladmin verify balance +The table presents all of the unbalanced endpoints/nodes in the cluster +BALANCE: +NODE:ID DB:ID NAME ENDPOINT:ID PROXY_POLICY LOCAL SHARDS TOTAL SHARDS +``` + +Verify a specific node: + +```sh +$ rladmin verify balance node 1 +The table presents all of the unbalanced endpoints/nodes in the cluster +BALANCE: +NODE:ID DB:ID NAME ENDPOINT:ID PROXY_POLICY LOCAL SHARDS TOTAL SHARDS +``` + +## `verify rack_aware` + +Verifies that the cluster complies with the rack awareness policy and reports any discovered rack collisions, if [rack-zone awareness]({{< relref "/operate/rs/7.4/clusters/configure/rack-zone-awareness" >}}) is enabled. + +```sh +rladmin verify rack_aware +``` + +### Parameters + +None + +### Returns + +Returns whether the cluster is rack aware. If rack awareness is enabled, it returns any rack collisions. + +### Example + +```sh +$ rladmin verify rack_aware + +Cluster policy is not configured for rack awareness. +``` +--- +Title: rladmin placement +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configures the shard placement policy for a database. +headerRange: '[1-2]' +linkTitle: placement +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/placement/' +--- + +Configures the shard placement policy for a specified database. + +``` sh +rladmin placement + db { db: | } + { dense | sparse } +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| db | db:\
name | Configures shard placement for the specified database | +| dense | | Places new shards on the same node as long as it has resources | +| sparse | | Places new shards on the maximum number of available nodes within the cluster | + +### Returns + +Returns the new shard placement policy if the policy was changed successfully. Otherwise, it returns an error. + +Use [`rladmin status databases`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-databases" >}}) to verify that the failover completed. + +### Example + +``` sh +$ rladmin status databases +DATABASES: +DB:ID NAME TYPE STATUS SHARDS PLACEMENT REPLICATION PERSISTENCE ENDPOINT +db:5 tr01 redis active 1 dense enabled aof redis-12000.cluster.local:12000 +$ rladmin placement db db:5 sparse +Shards placement policy is now sparse +$ rladmin status databases +DATABASES: +DB:ID NAME TYPE STATUS SHARDS PLACEMENT REPLICATION PERSISTENCE ENDPOINT +db:5 tr01 redis active 1 sparse enabled aof redis-12000.cluster.local:12000 +``` +--- +Title: rladmin info +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows the current configuration of a cluster, database, node, or proxy. +headerRange: '[1-2]' +linkTitle: info +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/info/' +--- + +Shows the current configuration of specified databases, proxies, clusters, or nodes. + +## `info cluster` + +Lists the current configuration for the cluster. + +```sh +rladmin info cluster +``` + +### Parameters + +None + +### Returns + +Returns the current configuration for the cluster. + +### Example + +``` sh +$ rladmin info cluster +Cluster configuration: + repl_diskless: enabled + shards_overbooking: disabled + default_non_sharded_proxy_policy: single + default_sharded_proxy_policy: single + default_shards_placement: dense + default_fork_evict_ram: enabled + default_provisioned_redis_version: 6.0 + redis_migrate_node_threshold: 0KB (0 bytes) + redis_migrate_node_threshold_percent: 4 (%) + redis_provision_node_threshold: 0KB (0 bytes) + redis_provision_node_threshold_percent: 12 (%) + max_simultaneous_backups: 4 + slave_ha: enabled + slave_ha_grace_period: 600 + slave_ha_cooldown_period: 3600 + slave_ha_bdb_cooldown_period: 7200 + parallel_shards_upgrade: 0 + show_internals: disabled + expose_hostnames_for_all_suffixes: disabled + login_lockout_threshold: 5 + login_lockout_duration: 1800 + login_lockout_counter_reset_after: 900 + default_concurrent_restore_actions: 10 + endpoint_rebind_propagation_grace_time: 15 + data_internode_encryption: disabled + redis_upgrade_policy: major + db_conns_auditing: disabled + watchdog profile: local-network + http support: enabled + upgrade mode: disabled + cm_session_timeout_minutes: 15 + cm_port: 8443 + cnm_http_port: 8080 + cnm_https_port: 9443 + bigstore_driver: speedb +``` + +## `info db` + +Shows the current configuration for databases. + +```sh +rladmin info db [ {db: | } ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| db:id | ID of the specified database (optional) | +| name | Name of the specified database (optional) | + +### Returns + +Returns the current configuration for all databases. + +If `db:` or `` is specified, returns the current configuration for the specified database. 
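+
+Because the output is plain text, you can filter it with standard shell tools when you only need one setting. The following sketch is illustrative (the database ID and the setting name are examples):
+
+```sh
+# Show only the proxy policy line for database db:1
+rladmin info db db:1 | grep proxy_policy
+```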
+ +### Example + +``` sh +$ rladmin info db db:1 +db:1 [database1]: + client_buffer_limits: 1GB (hard limit)/512MB (soft limit) in 30 seconds + slave_buffer: auto + pubsub_buffer_limits: 32MB (hard limit)/8MB (soft limit) in 60 seconds + proxy_client_buffer_limits: 0KB (hard limit)/0KB (soft limit) in 0 seconds + proxy_slave_buffer_limits: 1GB (hard limit)/512MB (soft limit) in 60 seconds + proxy_pubsub_buffer_limits: 32MB (hard limit)/8MB (soft limit) in 60 seconds + repl_backlog: 1.02MB (1073741 bytes) + repl_timeout: 360 seconds + repl_diskless: default + master_persistence: disabled + maxclients: 10000 + conns: 5 + conns_type: per-thread + sched_policy: cmp + max_aof_file_size: 300GB + max_aof_load_time: 3600 seconds + dedicated_replicaof_threads: 5 + max_client_pipeline: 200 + max_shard_pipeline: 2000 + max_connections: 0 + oss_cluster: disabled + oss_cluster_api_preferred_ip_type: internal + gradual_src_mode: disabled + gradual_src_max_sources: 1 + gradual_sync_mode: auto + gradual_sync_max_shards_per_source: 1 + slave_ha: disabled (database) + mkms: enabled + oss_sharding: disabled + mtls_allow_weak_hashing: disabled + mtls_allow_outdated_certs: disabled + data_internode_encryption: disabled + proxy_policy: single + db_conns_auditing: disabled + syncer_mode: centralized +``` + +## `info node` + +Lists the current configuration for all nodes. + +```sh +rladmin info node [ ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| id | ID of the specified node | + +### Returns + +Returns the current configuration for all nodes. + +If `` is specified, returns the current configuration for the specified node. + +### Example + +``` sh +$ rladmin info node 3 +Command Output: node:3 + address: 198.51.100.17 + external addresses: N/A + recovery path: N/A + quorum only: disabled + max redis servers: 100 + max listeners: 100 +``` + +## `info proxy` + +Lists the current configuration for a proxy. + +``` sh +rladmin info proxy { | all } +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| id | ID of the specified proxy | +| all | Show the current configuration for all proxies (optional) | + +### Returns + +If no parameter is specified or the `all` option is specified, returns the current configuration for all proxies. + +If ``is specified, returns the current configuration for the specified proxy. + +### Example + +``` sh +$ rladmin info proxy +proxy:1 + mode: dynamic + scale_threshold: 80 (%) + scale_duration: 30 (seconds) + max_threads: 8 + threads: 3 +``` +--- +Title: rladmin help +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows available commands or specific command usage. +headerRange: '[1-2]' +linkTitle: help +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/help/' +--- + +Lists all options and parameters for `rladmin` commands. + +``` sh +rladmin help [command] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| command | Display help for this `rladmin` command (optional) | + +### Returns + +Returns a list of available `rladmin` commands. + +If a `command` is specified, returns a list of all the options and parameters for that `rladmin` command. + +### Example + +```sh +$ rladmin help +usage: rladmin [options] [command] [command args] + +Options: + -y Assume Yes for all required user confirmations. 
+ +Commands: + bind Bind an endpoint + cluster Cluster management commands + exit Exit admin shell + failover Fail-over master to slave + help Show available commands, or use help for a specific command + info Show information about tunable parameters + migrate Migrate elements between nodes + node Node management commands + placement Configure shards placement policy + recover Recover databases + restart Restart database shards + status Show status information + suffix Suffix management + tune Tune system parameters + upgrade Upgrade entity version + verify Cluster verification reports + +Use "rladmin help [command]" to get more information on a specific command. +``` +--- +Title: rladmin suffix +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manages the DNS suffixes in the cluster. +headerRange: '[1-2]' +linkTitle: suffix +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/suffix/' +--- + +Manages the DNS suffixes in the cluster. + +## `suffix add` + +Adds a DNS suffix to the cluster. + +``` sh +rladmin suffix add name + [default] + [internal] + [mdns] + [use_aaaa_ns] + [slaves ..] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------------|-----------------------------------------------------------------------------------------------| +| name | string | DNS suffix to add to the cluster | +| default | | Sets the given suffix as the default. If a default already exists, this overwrites it. | +| internal | | Forces the given suffix to use private IPs | +| mdns | | Activates multicast DNS support for the given suffix | +| slaves | list of IPv4 addresses | The given suffix will notify the frontend DNS servers when a change in the frontend DNS has occurred | +| use_aaaa_ns | | Activates IPv6 address support | + +### Returns + +Returns `Added suffixes successfully` if the suffix was added. Otherwise, it returns an error. + +### Example + +``` sh +$ rladmin suffix add name new.rediscluster.local +Added suffixes successfully +``` + +## `suffix delete` + +Deletes an existing DNS suffix from the cluster. + +``` sh +rladmin suffix delete name +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------------|-----------------------------------------------------------------------------------------------| +| name | string | DNS suffix to delete from the cluster | + +### Returns + +Returns `Suffix deleted successfully` if the suffix was deleted. Otherwise, it returns an error. + +### Example + +``` sh +$ rladmin suffix delete name new.rediscluster.local +Suffix deleted successfully +``` + +## `suffix list` + +Lists the DNS suffixes in the cluster. + +```sh +rladmin suffix list +``` + +### Parameters + +None + +### Returns + +Returns a list of the DNS suffixes. + +### Example + +``` sh +$ rladmin suffix list +List of all suffixes: +cluster.local +new.rediscluster.local +``` +--- +Title: rladmin failover +alwaysopen: false +categories: +- docs +- operate +- rs +description: Fail over primary shards of a database to their replicas. +headerRange: '[1-2]' +linkTitle: failover +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/failover/' +--- + +Fails over one or more primary (also known as master) shards of a database and promotes their respective replicas to primary shards. 
+ +``` sh +rladmin failover + [db { db: | }] + shard + [immediate] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| db | db:\
name | Fail over shards for the specified database | +| shard | one or more primary shard IDs | Primary shard or shards to fail over | +| immediate | | Perform failover without verifying the replica shards are in full sync with the master shards | + +### Returns + +Returns `Finished successfully` if the failover completed. Otherwise, it returns an error. + +Use [`rladmin status shards`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-shards" >}}) to verify that the failover completed. + +### Example + +``` sh +$ rladmin status shards +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:1 slave 0-16383 3.02MB OK +db:5 tr01 redis:13 node:2 master 0-16383 3.09MB OK +$ rladmin failover shard 13 +Executing shard fail-over: OOO. +Finished successfully +$ rladmin status shards +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:1 master 0-16383 3.12MB OK +db:5 tr01 redis:13 node:2 slave 0-16383 2.99MB OK +``` +--- +Title: rladmin upgrade +alwaysopen: false +categories: +- docs +- operate +- rs +description: Upgrades the version of a module or Redis Enterprise Software for a database. +headerRange: '[1-2]' +linkTitle: upgrade +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/upgrade/' +--- + +Upgrades the version of a module or Redis Enterprise Software for a database. + +## `upgrade db` + +Schedules a restart of the primary and replica processes of a database and then upgrades the database to the latest version of Redis Enterprise Software. + +For more information, see [Upgrade an existing Redis Software Deployment]({{< relref "/operate/rs/7.4/installing-upgrading/upgrading" >}}). + +```sh +rladmin upgrade db { db: | } + [ preserve_roles ] + [ keep_redis_version ] + [ discard_data ] + [ force_discard ] + [ parallel_shards_upgrade ] + [ keep_crdt_protocol_version ] + [ redis_version ] + [ force ] + [ { latest_with_modules | and module module_name version module_args } ] +``` + +As of v6.2.4, the default behavior for `upgrade db` has changed. It is now controlled by a new parameter that sets the default upgrade policy used to create new databases and to upgrade ones already in the cluster. To learn more, see [`tune cluster default_redis_version`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}). + +### Parameters + +| Parameters | Type/Value | Description | +|----------------------------|--------------------------|------------------------------------------------------------------------------------------------------------------------| +| db | db:\
name | Database to upgrade | +| and module | [upgrade module](#upgrade-module) command | Clause that allows the upgrade of a database and a specified Redis module in a single step with only one restart (can be specified multiple times) | +| discard_data | | Indicates that data will not be saved after the upgrade | +| force | | Forces upgrade and skips warnings and confirmations | +| force_discard | | Forces `discard_data` if replication or persistence is enabled | +| keep_crdt_protocol_version | | Keeps the current CRDT protocol version | +| keep_redis_version | | Upgrades to a new patch release, not to the latest major.minor version | +| latest_with_modules | | Upgrades the Redis Enterprise Software version and all modules in the database | +| parallel_shards_upgrade | integer
'all' | Maximum number of shards to upgrade all at once | +| preserve_roles | | Performs an additional failover to guarantee the shards' roles are preserved | +| redis_version | Redis version | Upgrades the database to the specified version instead of the latest version | + +### Returns + +Returns `Done` if the upgrade completed. Otherwise, it returns an error. + +### Example + +```sh +$ rladmin upgrade db db:5 +Monitoring e39c8e87-75f9-4891-8c86-78cf151b720b +active - SMUpgradeBDB init +active - SMUpgradeBDB check_slaves +.active - SMUpgradeBDB prepare +active - SMUpgradeBDB stop_forwarding +oactive - SMUpgradeBDB start_wd +active - SMUpgradeBDB wait_for_version +.completed - SMUpgradeBDB +Done +``` + +## `upgrade module` + +Upgrades Redis modules in use by a specific database. + +For more information, see [Upgrade modules]({{< relref "/operate/oss_and_stack/stack-with-enterprise/install/upgrade-module" >}}). + +```sh +rladmin upgrade module + db_name { db: | } + module_name + version + module_args +``` + +### Parameters + +| Parameters | Type/Value | Description | +|----------------------------|--------------------------|------------------------------------------------------------------------------------------------------------------------| +| db_name | db:\
name | Upgrade a module for the specified database | +| module_name | 'ReJSON'
'graph'
'search'
'bf'
'rg'
'timeseries' | Redis module to upgrade | +| version | module version number | Upgrades the module to the specified version | +| module_args | 'keep_args'
string | Module configuration options | + +For more information about module configuration options, see [Module configuration options]({{< relref "/operate/oss_and_stack/stack-with-enterprise/install/add-module-to-database#module-configuration-options" >}}). + +### Returns + +Returns `Done` if the upgrade completed. Otherwise, it returns an error. + +### Example + +```sh +$ rladmin upgrade module db_name db:8 module_name graph version 20812 module_args "" +Monitoring 21ac7659-e44c-4cc9-b243-a07922b2a6cc +active - SMUpgradeBDB init +active - SMUpgradeBDB wait_for_version +Ocompleted - SMUpgradeBDB +Done +``` +--- +Title: rladmin node enslave +alwaysopen: false +categories: +- docs +- operate +- rs +description: Changes a node's resources to replicas. +headerRange: '[1-2]' +linkTitle: enslave +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/node/enslave/' +--- + +Changes the resources of a node to replicas. + +## `node enslave` + +Changes all of the node's endpoints and shards to replicas. + +``` sh +rladmin node enslave + [demote_node] + [retry_timeout_seconds ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Changes all of the node's endpoints and shards to replicas | +| demote_node | | If the node is a primary node, changes the node to replica | +| retry_timeout_seconds | integer | Retries on failure until the specified number of seconds has passed. | + +### Returns + +Returns `OK` if the roles were successfully changed. Otherwise, it returns an error. + +Use [`rladmin status shards`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-shards" >}}) to verify that the roles were changed. 
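+
+If the cluster is momentarily busy, you can let the command retry for a bounded period instead of failing immediately. This is an illustrative sketch (the node ID and timeout are example values):
+
+```sh
+# Demote node 2 and all of its resources to replicas,
+# retrying on failure for up to 60 seconds
+rladmin node 2 enslave demote_node retry_timeout_seconds 60
+```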
+ +### Example + +```sh +$ rladmin status shards node 2 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:6 tr02 redis:14 node:2 master 0-4095 3.2MB OK +db:6 tr02 redis:16 node:2 master 4096-8191 3.12MB OK +db:6 tr02 redis:18 node:2 master 8192-12287 3.16MB OK +db:6 tr02 redis:20 node:2 master 12288-16383 3.12MB OK +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 slave 192.0.2.12 198.51.100.1 3d99db1fdf4b 1/100 6 14.43GB/19.54GB 10.87GB/16.02GB 6.2.12-37 OK +node:2 master 192.0.2.13 198.51.100.2 fc7a3d332458 4/100 6 14.43GB/19.54GB 10.88GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.14 b87cc06c830f 5/120 6 14.43GB/19.54GB 10.83GB/16.02GB 6.2.12-37 OK +$ rladmin node 2 enslave demote_node +Performing enslave_node action on node:2: 100% +OK +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.12 198.51.100.1 3d99db1fdf4b 1/100 6 14.72GB/19.54GB 10.91GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.13 198.51.100.2 fc7a3d332458 4/100 6 14.72GB/19.54GB 11.17GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.14 b87cc06c830f 5/120 6 14.72GB/19.54GB 10.92GB/16.02GB 6.2.12-37 OK +$ rladmin status shards node 2 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:6 tr02 redis:14 node:2 slave 0-4095 2.99MB OK +db:6 tr02 redis:16 node:2 slave 4096-8191 3.01MB OK +db:6 tr02 redis:18 node:2 slave 8192-12287 2.93MB OK +db:6 tr02 redis:20 node:2 slave 12288-16383 3.06MB OK +``` + +## `node enslave endpoints_only` + +Changes the role for all endpoints on a node to replica. + +``` sh +rladmin node enslave endpoints_only + [retry_timeout_seconds ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Changes all of the node's endpoints to replicas | +| retry_timeout_seconds | integer | Retries on failure until the specified number of seconds has passed. | + +### Returns + +Returns `OK` if the roles were successfully changed. Otherwise, it returns an error. + +Use [`rladmin status endpoints`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-endpoints" >}}) to verify that the roles were changed. + +### Example + +```sh +$ rladmin status endpoints +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:5 tr01 endpoint:5:1 node:1 single No +db:6 tr02 endpoint:6:1 node:3 all-master-shards No +$ rladmin node 1 enslave endpoints_only +Performing enslave_node action on node:1: 100% +OK +$ rladmin status endpoints +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:5 tr01 endpoint:5:1 node:3 single No +db:6 tr02 endpoint:6:1 node:3 all-master-shards No +``` + +## `node enslave shards_only` + +Changes the role for all shards of a node to replica. + +``` sh +rladmin node enslave shards_only + [retry_timeout_seconds ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Changes all of the node's shards to replicas | +| retry_timeout_seconds | integer | Retries on failure until the specified number of seconds has passed. | + +### Returns + +Returns `OK` if the roles were successfully changed. Otherwise, it returns an error. 
+ +Use [`rladmin status shards`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-shards" >}}) to verify that the roles were changed. + +### Example + +```sh +$ rladmin status shards node 3 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:3 master 0-16383 3.04MB OK +db:6 tr02 redis:15 node:3 master 0-4095 4.13MB OK +db:6 tr02 redis:17 node:3 master 4096-8191 4.13MB OK +db:6 tr02 redis:19 node:3 master 8192-12287 4.13MB OK +db:6 tr02 redis:21 node:3 master 12288-16383 4.13MB OK +$ rladmin node 3 enslave shards_only +Performing enslave_node action on node:3: 100% +OK +$ rladmin status shards node 3 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:3 slave 0-16383 2.98MB OK +db:6 tr02 redis:15 node:3 slave 0-4095 4.23MB OK +db:6 tr02 redis:17 node:3 slave 4096-8191 4.11MB OK +db:6 tr02 redis:19 node:3 slave 8192-12287 4.19MB OK +db:6 tr02 redis:21 node:3 slave 12288-16383 4.27MB OK +``` +--- +Title: rladmin node maintenance_mode +alwaysopen: false +categories: +- docs +- operate +- rs +description: Turns quorum-only mode on or off for a node. +headerRange: '[1-2]' +linkTitle: maintenance_mode +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/node/maintenance-mode/' +--- + +Configures [quorum-only mode]({{< relref "/operate/rs/7.4/clusters/maintenance-mode#activate-maintenance-mode" >}}) on a node. + +## `node maintenance_mode on` + +Migrates shards out of the node and turns the node into a quorum node to prevent shards from returning to it. + +```sh +rladmin node maintenance_mode on + [ keep_slave_shards ] + [ evict_ha_replica { enabled | disabled } ] + [ evict_active_active_replica { enabled | disabled } ] + [ evict_dbs ] + [ demote_node ] + [ overwrite_snapshot ] + [ max_concurrent_actions ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Turns the specified node into a quorum node | +| demote_node | | If the node is a primary node, changes the node to replica | +| evict_ha_replica | `enabled`
`disabled` | Migrates the HA replica shards in the node | +| evict_active_active_replica | `enabled`
`disabled` | Migrates the Active-Active replica shards in the node | +| evict_dbs | list of database names or IDs | Specify databases whose shards should be evicted from the node when entering maintenance mode.

Examples:
`$ rladmin node 1 maintenance_mode on evict_dbs db:1 db:2`
`$ rladmin node 1 maintenance_mode on evict_dbs db_name1 db_name2` | +| keep_slave_shards | | Keeps replica shards in the node and demotes primary shards to replicas.

Deprecated as of Redis Enterprise Software 7.4.2. Use `evict_ha_replica disabled evict_active_active_replica disabled` instead. | +| max_concurrent_actions | integer | Maximum number of concurrent actions during node maintenance | +| overwrite_snapshot | | Overwrites the latest existing node snapshot taken when enabling maintenance mode | + +### Returns + +Returns `OK` if the node was converted successfully. If the cluster does not have enough resources to migrate the shards, the process returns a warning. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify the node became a quorum node. + +### Example + +```sh +$ rladmin node 2 maintenance_mode on overwrite_snapshot +Found snapshot from 2024-01-06T11:36:47Z, overwriting the snapshot +Performing maintenance_on action on node:2: 0% +created snapshot NodeSnapshot + +node:2 will not accept any more shards +Performing maintenance_on action on node:2: 100% +OK +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.12 198.51.100.1 3d99db1fdf4b 5/100 6 14.21GB/19.54GB 10.62GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.13 198.51.100.2 fc7a3d332458 0/0 6 14.21GB/19.54GB 0KB/0KB 6.2.12-37 OK +node:4 slave 192.0.2.14 6d754fe12cb9 5/100 6 14.21GB/19.54GB 10.62GB/16.02GB 6.2.12-37 OK +``` + +## `node maintenance_mode off` + +Turns maintenance mode off and returns the node to its previous state. + +```sh +rladmin node maintenance_mode off + [ { snapshot_name | skip_shards_restore } ] + [ max_concurrent_actions ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Restores the node back to the previous state | +| max_concurrent_actions | integer | Maximum number of concurrent actions during node maintenance | +| skip_shards_restore | | Does not restore shards back to the node | +| snapshot_name | string | Restores the node back to a state stored in the specified snapshot | + +### Returns + +Returns `OK` if the node was restored successfully. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify the node was restored. 
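+
+If you plan to rebalance the cluster yourself afterwards, you can take the node out of maintenance mode without restoring shards to it. This is an illustrative sketch (the node ID is an example):
+
+```sh
+# Exit maintenance mode but leave shards where they currently are
+rladmin node 2 maintenance_mode off skip_shards_restore
+```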
+ +### Example + +```sh +$ rladmin node 2 maintenance_mode off +Performing maintenance_off action on node:2: 0% +Found snapshot: NodeSnapshot +Performing maintenance_off action on node:2: 0% +migrate redis:12 to node:2: executing +Performing maintenance_off action on node:2: 0% +migrate redis:12 to node:2: finished +Performing maintenance_off action on node:2: 0% +migrate redis:17 to node:2: executing + +migrate redis:15 to node:2: executing +Performing maintenance_off action on node:2: 0% +migrate redis:17 to node:2: finished + +migrate redis:15 to node:2: finished +Performing maintenance_off action on node:2: 0% +failover redis:16: executing + +failover redis:14: executing +Performing maintenance_off action on node:2: 0% +failover redis:16: finished + +failover redis:14: finished +Performing maintenance_off action on node:2: 0% +failover redis:18: executing +Performing maintenance_off action on node:2: 0% +failover redis:18: finished + +migrate redis:21 to node:2: executing + +migrate redis:19 to node:2: executing +Performing maintenance_off action on node:2: 0% +migrate redis:21 to node:2: finished + +migrate redis:19 to node:2: finished + +failover redis:20: executing +Performing maintenance_off action on node:2: 0% +failover redis:20: finished +Performing maintenance_off action on node:2: 0% +rebind endpoint:6:1: executing +Performing maintenance_off action on node:2: 0% +rebind endpoint:6:1: finished +Performing maintenance_off action on node:2: 100% +OK +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.12 198.51.100.1 3d99db1fdf4b 5/100 6 14.2GB/19.54GB 10.61GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.13 198.51.100.2 fc7a3d332458 5/100 6 14.2GB/19.54GB 10.61GB/16.02GB 6.2.12-37 OK +node:4 slave 192.0.2.14 6d754fe12cb9 0/100 6 14.2GB/19.54GB 10.69GB/16.02GB 6.2.12-37 OK +``` +--- +Title: rladmin node external_addr +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configures a node's external IP addresses. +headerRange: '[1-2]' +linkTitle: external_addr +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/node/external-addr/' +--- + +Configures a node's external IP addresses. + +## `node external_addr add` + +Adds an external IP address that accepts inbound user connections for the node. + +```sh +rladmin node external_addr + add +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| node | integer | Adds an external IP address for the specified node | +| IP address | IP address | External IP address of the node | + +### Returns + +Returns `Updated successfully` if the IP address was added. Otherwise, it returns an error. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify the external IP address was added. + +### Example + +``` sh +$ rladmin node 1 external_addr add 198.51.100.1 +Updated successfully. 
+$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.2 198.51.100.1 3d99db1fdf4b 5/100 6 14.75GB/19.54GB 11.15GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.3 fc7a3d332458 0/100 6 14.75GB/19.54GB 11.24GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.4 b87cc06c830f 5/120 6 14.75GB/19.54GB 11.15GB/16.02GB 6.2.12-37 OK +``` + +## `node external_addr set` + +Sets one or more external IP addresses that accepts inbound user connections for the node. + +```sh +rladmin node external_addr + set ... +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| node | integer | Sets external IP addresses for the specified node | +| IP address | list of IP addresses | Sets specified IP addresses as external addresses | + +### Returns + +Returns `Updated successfully` if the IP addresses were set. Otherwise, it returns an error. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify the external IP address was set. + +### Example + +``` sh +$ rladmin node 2 external_addr set 198.51.100.2 198.51.100.3 +Updated successfully. +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.2 198.51.100.1 3d99db1fdf4b 5/100 6 14.75GB/19.54GB 11.15GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.3 198.51.100.2,198.51.100.3 fc7a3d332458 0/100 6 14.75GB/19.54GB 11.23GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.4 b87cc06c830f 5/120 6 14.75GB/19.54GB 11.15GB/16.02GB 6.2.12-37 OK +``` +## `node external_addr remove` + +Removes the specified external IP address from the node. + +```sh +rladmin node external_addr + remove +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| node | integer | Removes an external IP address for the specified node | +| IP address | IP address | Removes the specified IP address of the node | + +### Returns + +Returns `Updated successfully` if the IP address was removed. Otherwise, it returns an error. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify the external IP address was removed. + +### Example + +``` sh +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.2 198.51.100.1 3d99db1fdf4b 5/100 6 14.75GB/19.54GB 11.15GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.3 198.51.100.2,198.51.100.3 fc7a3d332458 0/100 6 14.75GB/19.54GB 11.23GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.4 b87cc06c830f 5/120 6 14.75GB/19.54GB 11.15GB/16.02GB 6.2.12-37 OK +$ rladmin node 2 external_addr remove 198.51.100.3 +Updated successfully. 
+$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.2 198.51.100.1 3d99db1fdf4b 5/100 6 14.74GB/19.54GB 11.14GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.3 198.51.100.2 fc7a3d332458 0/100 6 14.74GB/19.54GB 11.22GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.4 b87cc06c830f 5/120 6 14.74GB/19.54GB 11.14GB/16.02GB 6.2.12-37 OK +``` +--- +Title: rladmin node snapshot +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manages snapshots of the configuration of a node's shards and endpoints. +headerRange: '[1-2]' +linkTitle: snapshot +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/node/snapshot/' +--- + +Manages snapshots of the configuration of a node's shards and endpoints. + +You can create node snapshots and use them to restore the node's shards and endpoints to a configuration from a previous point in time. If you restore a node from a snapshot (for example, after an event such as failover or maintenance), the node's shards have the same placement and roles as when the snapshot was created. + +## `node snapshot create` + +Creates a snapshot of a node's current configuration, including the placement of shards and endpoints on the node and the shards' roles. + +```sh +rladmin node snapshot create +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Creates a snapshot of the specified node | +| name | string | Name of the created snapshot | + +### Returns + +Returns `Done` if the snapshot was created successfully. Otherwise, returns an error. + +### Example + +```sh +$ rladmin node 1 snapshot create snap1 +Creating node snapshot 'snap1' for node:1 +Done. +``` + +## `node snapshot delete` + +Deletes an existing snapshot of a node. + +```sh +rladmin node snapshot delete +``` + +{{}} +You cannot use this command to delete a snapshot created by maintenance mode. As of Redis Enterprise Software version 7.4.2, only the latest maintenance mode snapshot is kept. +{{}} + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Deletes a snapshot of the specified node | +| name | string | Deletes the specified snapshot | + +### Returns + +Returns `Done` if the snapshot was deleted successfully. Otherwise, returns an error. + +### Example + +```sh +$ rladmin node 1 snapshot delete snap1 +Deleting node snapshot 'snap1' for node:1 +Done. +``` + +## `node snapshot list` + +Displays a list of created snapshots for the specified node. + +``` sh +rladmin node snapshot list +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Displays snapshots of the specified node | + +### Returns + +Returns a list of snapshots of the specified node. + +### Example + +```sh +$ rladmin node 2 snapshot list +Name Node Time +snap2 2 2022-05-12T19:27:51Z +``` + +## `node snapshot restore` + +Restores a node's shards and endpoints as close to the stored snapshot as possible. 
+ +```sh +rladmin node snapshot restore +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Restore the specified node from a snapshot. | +| restore | string | Name of the snapshot used to restore the node. | + +### Returns + +Returns `Snapshot restore completed successfully` if the actions needed to restore the snapshot completed successfully. Otherwise, it returns an error. + +### Example + +```sh +$ rladmin node 2 snapshot restore snap2 +Reading node snapshot 'snap2' for node:2 +Planning restore +Planned actions: +* migrate redis:15 to node:2 +* failover redis:14 +* migrate redis:17 to node:2 +* failover redis:16 +* migrate redis:19 to node:2 +* failover redis:18 +* migrate redis:21 to node:2 +* failover redis:20 +Proceed?[Y]es/[N]o? Y +2022-05-12T19:43:31.486613 Scheduling 8 actions +[2022-05-12T19:43:31.521422 Actions Status: 8 waiting ] +* [migrate redis:21 to node:2] waiting => executing +* [migrate redis:19 to node:2] waiting => executing +* [migrate redis:17 to node:2] waiting => executing +* [migrate redis:15 to node:2] waiting => executing +[2022-05-12T19:43:32.586084 Actions Status: 4 executing | 4 waiting ] +* [migrate redis:21 to node:2] executing => finished +* [migrate redis:19 to node:2] executing => finished +* [migrate redis:17 to node:2] executing => finished +* [migrate redis:15 to node:2] executing => finished +* [failover redis:20] waiting => executing +* [failover redis:18] waiting => executing +* [failover redis:16] waiting => executing +* [failover redis:14] waiting => executing +[2022-05-12T19:43:33.719496 Actions Status: 4 finished | 4 executing ] +* [failover redis:20] executing => finished +* [failover redis:18] executing => finished +* [failover redis:16] executing => finished +* [failover redis:14] executing => finished +Snapshot restore completed successfully. +``` +--- +Title: rladmin node addr set +alwaysopen: false +categories: +- docs +- operate +- rs +description: Sets a node's internal IP address. +headerRange: '[1-2]' +linkTitle: addr +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/node/addr/' +--- + +Sets the internal IP address of a node. You can only set the internal IP address when the node is down. See [Change internal IP address]({{< relref "/operate/rs/7.4/networking/multi-ip-ipv6#change-internal-ip-address" >}}) for detailed instructions. + +```sh +rladmin node addr set +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| node | integer | Sets the internal IP address of the specified node | +| addr | IP address | Sets the node's internal IP address to the specified IP address | + +### Returns + +Returns `Updated successfully` if the IP address was set. Otherwise, it returns an error. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify the internal IP address was changed. 
+ +### Example + +```sh +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.2 3d99db1fdf4b 5/100 6 16.06GB/19.54GB 12.46GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.3 fc7a3d332458 0/100 6 -/19.54GB -/16.02GB 6.2.12-37 DOWN, last seen 33s ago +node:3 slave 192.0.2.4 b87cc06c830f 5/120 6 16.06GB/19.54GB 12.46GB/16.02GB 6.2.12-37 OK +$ rladmin node 2 addr set 192.0.2.5 +Updated successfully. +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.2 3d99db1fdf4b 5/100 6 14.78GB/19.54GB 11.18GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.5 fc7a3d332458 0/100 6 14.78GB/19.54GB 11.26GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.4 b87cc06c830f 5/120 6 14.78GB/19.54GB 11.18GB/16.02GB 6.2.12-37 OK +``` +--- +Title: rladmin node recovery_path set +alwaysopen: false +categories: +- docs +- operate +- rs +description: Sets a node's local recovery path. +headerRange: '[1-2]' +linkTitle: recovery_path +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/node/recovery-path/' +--- + +Sets the node's local recovery path, which specifies the directory where [persistence files]({{< relref "/operate/rs/7.4/databases/configure/database-persistence" >}}) are stored. You can use these persistence files to [recover a failed database]({{< relref "/operate/rs/7.4/databases/recover" >}}). + +```sh +rladmin node recovery_path set +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| node | integer | Sets the recovery path for the specified node | +| path | filepath | Path to the folder where persistence files are stored | + +### Returns + +Returns `Updated successfully` if the recovery path was set. Otherwise, it returns an error. + +### Example + +```sh +$ rladmin node 2 recovery_path set /var/opt/redislabs/persist/redis +Updated successfully. +``` +--- +Title: rladmin node remove +alwaysopen: false +categories: +- docs +- operate +- rs +description: Removes a node from the cluster. +headerRange: '[1-2]' +linkTitle: remove +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/node/remove/' +--- + +Removes the specified node from the cluster. + +```sh +rladmin node remove [ wait_for_persistence { enabled | disabled } ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------| +| node | integer | The node to remove from the cluster | +| wait_for_persistence | `enabled`
`disabled` | Ensures persistence files are available for recovery. The cluster policy `persistent_node_removal` determines the default value. | + +### Returns + +Returns `OK` if the node was removed successfully. Otherwise, it returns an error. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify that the node was removed. + +### Example + +```sh +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.12 198.51.100.1 3d99db1fdf4b 5/100 6 14.26GB/19.54GB 10.67GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.13 198.51.100.2 fc7a3d332458 4/100 6 14.26GB/19.54GB 10.71GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.14 b87cc06c830f 1/120 6 14.26GB/19.54GB 10.7GB/16.02GB 6.2.12-37 OK +$ rladmin node 3 remove +Performing remove action on node:3: 100% +OK +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.12 198.51.100.1 3d99db1fdf4b 5/100 6 14.34GB/19.54GB 10.74GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.13 198.51.100.2 fc7a3d332458 5/100 6 14.34GB/19.54GB 10.74GB/16.02GB 6.2.12-37 OK +``` +--- +Title: rladmin node +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage nodes. +headerRange: '[1-2]' +hideListLinks: true +linkTitle: node +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/node/' +--- + +`rladmin node` commands manage nodes in the cluster. + +{{}} +--- +Title: rladmin +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage Redis Enterprise clusters and databases. +hideListLinks: true +linkTitle: rladmin (manage cluster) +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/rladmin/' +--- + +`rladmin` is a command-line utility that lets you perform administrative tasks such as failover, migration, and endpoint binding on a Redis Enterprise Software cluster. You can also use `rladmin` to edit cluster and database configurations. + +Although you can use the Cluster Manager UI for some of these tasks, others are unique to the `rladmin` command-line tool. + +## `rladmin` commands + +{{}} + +## Use the `rladmin` shell + +To open the `rladmin` shell: + +1. Sign in to a Redis Enterprise Software node with an account that is a member of the **redislabs** group. + + The `rladmin` binary is located in `/opt/redislabs/bin`. If you don't have this directory in your `PATH`, you may want to add it. Otherwise, you can use `bash -l ` to sign in as a user with permissions for that directory. + +1. Run: `rladmin` + + {{}} +If the CLI does not recognize the `rladmin` command, +run this command to load the necessary configuration first: `bash -l` + {{}} + +In the `rladmin` shell, you can: + +- Run any `rladmin` command without prefacing it with `rladmin`. +- Enter `?` to view the full list of available commands. +- Enter [`help`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/help" >}}) followed by the name of a command for a detailed explanation of the command and its usage. +- Press the `Tab` key for command completion. +- Enter `exit` or press `Control+D` to exit the `rladmin` shell and return to the terminal prompt. 
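+
+Putting the points above together, a short shell session might look like the following sketch (the `rladmin>` prompt shown here is only illustrative; the commands are the documented `status`, `help`, and `exit` commands):
+
+```sh
+$ rladmin
+rladmin> status nodes    # any rladmin command works here without the rladmin prefix
+rladmin> help node       # detailed explanation of the node commands and their usage
+rladmin> exit
+```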
+--- +Title: Command-line utilities +alwaysopen: false +categories: +- docs +- operate +- rs +description: Reference for Redis Enterprise Software command-line utilities, including rladmin, redis-cli, crdb-cli, and rlcheck. +hideListLinks: true +linkTitle: Command-line utilities +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/' +--- + +Redis Enterprise Software includes a set of utilities to help you manage and test your cluster. To use a utility, run it from the command line. + +## Public utilities + +Administrators can use these CLI tools to manage and test a Redis Enterprise cluster. You can find the binaries in the `/opt/redislabs/bin/` directory. + +{{}} + +## Internal utilities + +The `/opt/redislabs/bin/` directory also contains utilities used internally by Redis Enterprise Software and for troubleshooting. + +{{}} +Do not use these tools for normal operations. +{{}} + +| Utility | Description | +|---------|-------------| +| bdb-cli | `redis-cli` connected to a database. | +| ccs-cli | Inspect Cluster Configuration Store. | +| cnm-ctl | Manages services for provisioning, migration, monitoring,
resharding, rebalancing, deprovisioning, and autoscaling. | +| consistency_checker | Checks the consistency of Redis instances. | +| crdbtop | Monitor Active-Active databases. | +| debug_mode | Enables debug mode. | +| debuginfo | Collects cluster information. | +| dmc-cli | Configure and monitor the DMC proxy. | +| pdns_control | Sends commands to a running PowerDNS nameserver. | +| redis_ctl | Stops or starts Redis instances. | +| rl_rdbloader | Load RDB backup files to a server. | +| rlutil | Maintenance utility. | +| shard-cli | `redis-cli` connected to a shard. | +| supervisorctl | Manages the lifecycles of Redis Enterprise services. | +--- +Title: crdb-cli crdb flush +alwaysopen: false +categories: +- docs +- operate +- rs +description: Clears all keys from an Active-Active database. +linkTitle: flush +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/flush/' +--- + +Clears all keys from an Active-Active database. + +```sh +crdb-cli crdb flush --crdb-guid + [ --no-wait ] +``` + +This command is irreversible. If the data in your database is important, back it up before you flush the database. + +### Parameters + +| Parameter | Value | Description | +|---------------------|--------|-------------------------------------| +| crdb-guid | string | The GUID of the database (required) | +| no-wait | | Does not wait for the task to complete | + +### Returns + +Returns the task ID of the task clearing the database. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the database to be cleared and return `finished`. + +### Example + +```sh +$ crdb-cli crdb flush --crdb-guid d84f6fe4-5bb7-49d2-a188-8900e09c6f66 +Task 53cdc59e-ecf5-4564-a8dd-448d71f9e568 created + ---> Status changed: queued -> started + ---> Status changed: started -> finished +``` +--- +Title: crdb-cli crdb remove-instance +alwaysopen: false +categories: +- docs +- operate +- rs +description: Removes a peer replica from an Active-Active database. +linkTitle: remove-instance +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/remove-instance/' +--- + +Removes a peer replica instance from the Active-Active database and deletes the instance and its data from the participating cluster. + +```sh +crdb-cli crdb remove-instance --crdb-guid + --instance-id + [ --force ] + [ --no-wait ] +``` + +If the cluster cannot communicate with the instance that you want to remove, you can: + +1. Use the `--force` option to remove the instance from the Active-Active database without purging the data from the instance. + +1. Run [`crdb-cli crdb purge-instance`]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/purge-instance" >}}) from the removed instance to delete the Active-Active database and its data. + +### Parameters + +| Parameter | Value | Description| +|------------------------------|--------|------------| +| crdb-guid | string | The GUID of the database (required) | +| instance-id | string | The ID of the local instance to remove (required) | +| force | | Removes the instance without purging data from the instance.
If --force is specified, you must run [`crdb-cli crdb purge-instance`]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/purge-instance" >}}) from the removed instance. | +| no-wait | | Does not wait for the task to complete | + +### Returns + +Returns the task ID of the task that is deleting the instance. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the instance to be removed and return `finished`. + +### Example + +```sh +$ crdb-cli crdb remove-instance --crdb-guid db6365b5-8aca-4055-95d8-7eb0105c0b35 --instance-id 2 --force +Task b1eba5ba-90de-49e9-8678-d66daa1afb51 created + ---> Status changed: queued -> started + ---> Status changed: started -> finished +``` +--- +Title: crdb-cli crdb get +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows the current configuration of an Active-Active database. +linkTitle: get +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/get/' +--- + +Shows the current configuration of an Active-Active database. + +```sh +crdb-cli crdb get --crdb-guid +``` + +### Parameters + +| Parameter | Value | Description | +|---------------------|--------|-------------------------------------| +| crdb-guid | string | The GUID of the database (required) | + +### Returns + +Returns the current configuration of the database. + +### Example + +```sh +$ crdb-cli crdb get --crdb-guid d84f6fe4-5bb7-49d2-a188-8900e09c6f66 +CRDB-GUID: d84f6fe4-5bb7-49d2-a188-8900e09c6f66 +Name: database1 +Encryption: False +Causal consistency: False +Protocol version: 1 +FeatureSet version: 5 +Modules: [] +Default-DB-Config: + memory_size: 1073741824 + port: 12000 + replication: True + shard_key_regex: [{'regex': '.*\\{(?.*)\\}.*'}, {'regex': '(?.*)'}] + sharding: True + shards_count: 1 + tls_mode: disabled + rack_aware: None + data_persistence: None + authentication_redis_pass: None + authentication_admin_pass: None + oss_sharding: None + oss_cluster: None + proxy_policy: None + shards_placement: None + oss_cluster_api_preferred_ip_type: None + bigstore: None + bigstore_ram_size: None + aof_policy: None + snapshot_policy: None + max_aof_load_time: None + max_aof_file_size: None +Instance: + Id: 1 + Cluster: + FQDN: cluster1.redis.local + URL: https://cluster1.redis.local:9443 + Replication-Endpoint: + Replication TLS SNI: + Compression: 3 + DB-Config: + authentication_admin_pass: + replication: None + rack_aware: None + memory_size: None + data_persistence: None + tls_mode: None + authentication_redis_pass: None + port: None + shards_count: None + shard_key_regex: None + oss_sharding: None + oss_cluster: None + proxy_policy: None + shards_placement: None + oss_cluster_api_preferred_ip_type: None + bigstore: None + bigstore_ram_size: None + aof_policy: None + snapshot_policy: None + max_aof_load_time: None + max_aof_file_size: None +Instance: + Id: 2 + Cluster: + FQDN: cluster2.redis.local + URL: https://cluster2.redis.local:9443 + Replication-Endpoint: + Replication TLS SNI: + Compression: 3 + DB-Config: + authentication_admin_pass: + replication: None + rack_aware: None + memory_size: None + data_persistence: None + tls_mode: None + authentication_redis_pass: None + port: None + shards_count: None + shard_key_regex: None + oss_sharding: None + oss_cluster: None + proxy_policy: None + shards_placement: None + oss_cluster_api_preferred_ip_type: None + bigstore: None + bigstore_ram_size: None + aof_policy: None + snapshot_policy: None + max_aof_load_time: None + max_aof_file_size: None +``` +--- 
+Title: crdb-cli crdb health-report +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows the health report of an Active-Active database. +linkTitle: health-report +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/health-report/' +--- + +Shows the health report of the API management layer of an Active-Active database. + +```sh +crdb-cli crdb health-report --crdb-guid +``` + +### Parameters + +| Parameter | Value | Description | +|---------------------|--------|-------------------------------------| +| crdb-guid | string | The GUID of the database (required) | + +### Returns + +Returns the health report of the API management layer of the database. + +### Example + +```sh +$ crdb-cli crdb health-report --crdb-guid d84f6fe4-5bb7-49d2-a188-8900e09c6f66 +[ + { + "active_config_version":1, + "cluster_name":"cluster2.redis.local", + "configurations":[ + { + "causal_consistency":false, + "encryption":false, + "featureset_version":5, + "instances":[ + { + "cluster":{ + "name":"cluster1.redis.local", + "url":"https:\/\/cluster1.redis.local:9443" + }, + "db_uid":"", + "id":1 + }, + { + "cluster":{ + "name":"cluster2.redis.local", + "url":"https:\/\/cluster2.redis.local:9443" + }, + "db_uid":"1", + "id":2 + } + ], + "name":"database1", + "protocol_version":1, + "status":"commit-completed", + "version":1 + } + ], + "connections":[ + { + "name":"cluster1.redis.local", + "status":"ok" + }, + { + "name":"cluster2.redis.local", + "status":"ok" + } + ], + "guid":"d84f6fe4-5bb7-49d2-a188-8900e09c6f66", + "name":"database1", + "connection_error":null + }, + { + "active_config_version":1, + "cluster_name":"cluster1.redis.local", + "configurations":[ + { + "causal_consistency":false, + "encryption":false, + "featureset_version":5, + "instances":[ + { + "cluster":{ + "name":"cluster1.redis.local", + "url":"https:\/\/cluster1.redis.local:9443" + }, + "db_uid":"4", + "id":1 + }, + { + "cluster":{ + "name":"cluster2.redis.local", + "url":"https:\/\/cluster2.redis.local:9443" + }, + "db_uid":"", + "id":2 + } + ], + "name":"database1", + "protocol_version":1, + "status":"commit-completed", + "version":1 + } + ], + "connections":[ + { + "name":"cluster1.redis.local", + "status":"ok" + }, + { + "name":"cluster2.redis.local", + "status":"ok" + } + ], + "guid":"d84f6fe4-5bb7-49d2-a188-8900e09c6f66", + "name":"database1", + "connection_error":null + } +] +``` +--- +Title: crdb-cli crdb add-instance +alwaysopen: false +categories: +- docs +- operate +- rs +description: Adds a peer replica to an Active-Active database. +linkTitle: add-instance +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/add-instance/' +--- + +Adds a peer replica to an existing Active-Active database in order to host the database on another cluster. This creates an additional active instance of the database on the specified cluster. + +```sh +crdb-cli crdb add-instance --crdb-guid + --instance fqdn=,username=,password=[,url=,replication_endpoint=] + [ --compression <0-6> ] + [ --no-wait ] +``` + +### Parameters + +| Parameter | Value | Description | +|-----------|---------|-------------| +| crdb-guid | string | The GUID of the database (required) | +| instance | strings | The connection information for the new participating cluster (required) | +| compression | 0-6 | The level of data compression: 0=Compression disabled

6=High compression and resource load (Default: 3) | +| no-wait | | Does not wait for the task to complete | + +### Returns + +Returns the task ID of the task that is adding the new instance. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the instance to be added and return `finished`. + +### Example + +```sh +$ crdb-cli crdb add-instance --crdb-guid db6365b5-8aca-4055-95d8-7eb0105c0b35 \ + --instance fqdn=cluster2.redis.local,username=admin@redis.local,password=admin-password +Task f809fae7-8e26-4c8f-9955-b74dbbd47949 created + ---> Status changed: queued -> started + ---> Status changed: started -> finished +``` +--- +Title: crdb-cli crdb list +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows a list of all Active-Active databases. +linkTitle: list +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/list/' +--- + +Shows a list of all Active-Active databases. + +```sh +crdb-cli crdb list +``` + +### Parameters + +None + +### Returns + +Returns a list of all Active-Active databases that the cluster participates in. Each database is represented with a unique GUID, the name of the database, an instance ID, and the FQDN of the cluster that hosts the instance. + +### Example + +```sh +$ crdb-cli crdb list +CRDB-GUID NAME REPL-ID CLUSTER-FQDN +d84f6fe4-5bb7-49d2-a188-8900e09c6f66 database1 1 cluster1.redis.local +d84f6fe4-5bb7-49d2-a188-8900e09c6f66 database1 2 cluster2.redis.local +``` +--- +Title: crdb-cli crdb create +alwaysopen: false +categories: +- docs +- operate +- rs +description: Creates an Active-Active database. +linkTitle: create +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/create/' +--- + +Creates an Active-Active database. + +```sh +crdb-cli crdb create --name + --memory-size + --instance fqdn=,username=,password=[,url=,replication_endpoint=] + --instance fqdn=,username=,password=[,url=,replication_endpoint=] + [--port ] + [--no-wait] + [--default-db-config ] + [--default-db-config-file ] + [--compression <0-6>] + [--causal-consistency { true | false } ] + [--password ] + [--replication { true | false } ] + [--encryption { true | false } ] + [--sharding { false | true } ] + [--shards-count ] + [--shard-key-regex ] + [--oss-cluster { true | false } ] + [--bigstore { true | false }] + [--bigstore-ram-size ] + [--with-module name=,version=,args=] +``` + +### Prerequisites + +Before you create an Active-Active database, you must have: + +- At least two participating clusters +- [Network connectivity]({{< relref "/operate/rs/7.4/networking/port-configurations" >}}) between the participating clusters + +### Parameters + + +| Parameter & options(s)           | Value | Description | +|---------------------------------------------------------------------------------------|-------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| name \ | string | Name of the Active-Active database (required) | +| memory-size \ | size in bytes, kilobytes (KB), or gigabytes (GB) | Maximum database memory (required) | +| instance
   fqdn=\,
   username=\,
   password=\ | strings | The connection information for the participating clusters (required for each participating cluster) | +| port \ | integer | TCP port for the Active-Active database on all participating clusters | +| default-db-config \ | string | Default database configuration options | +| default-db-config-file \ | filepath | Default database configuration options from a file | +| no-wait | | Prevents `crdb-cli` from running another command before this command finishes | +| compression | 0-6 | The level of data compression:

0 = No compression

6 = High compression and resource load (Default: 3) | +| causal-consistency | true
false (*default*) | [Causal consistency]({{< relref "/operate/rs/7.4/databases/active-active/causal-consistency.md" >}}) applies updates to all instances in the order they were received | +| password \ | string | Password for access to the database | +| replication | true
false (*default*) | Activates or deactivates [database replication]({{< relref "/operate/rs/7.4/databases/durability-ha/replication.md" >}}) where every master shard replicates to a replica shard | +| encryption | true
false (*default*) | Activates or deactivates encryption | +| sharding | true
false (*default*) | Activates or deactivates sharding (also known as [database clustering]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering" >}})). Cannot be updated after the database is created |
+| shards-count \ | integer | If sharding is enabled, this specifies the number of Redis shards for each database instance |
+| oss-cluster | true
false (*default*) | Activates [OSS cluster API]({{< relref "/operate/rs/7.4/clusters/optimize/oss-cluster-api" >}}) | +| shard-key-regex \ | string | If clustering is enabled, this defines a regex rule (also known as a [hashing policy]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering#custom-hashing-policy" >}})) that determines which keys are located in each shard (defaults to `{u'regex': u'.*\\{(?.*)\\}.*'}, {u'regex': u'(?.*)'} `) | +| bigstore | true

false (*default*) | If true, the database uses Auto Tiering to add flash memory to the database | +| bigstore-ram-size \ | size in bytes, kilobytes (KB), or gigabytes (GB) | Maximum RAM limit for databases with Auto Tiering enabled | +| with-module
  name=\,
  version=\,
  args=\ | strings | Creates a database with a specific module | +| eviction-policy | noeviction (*default*)
allkeys-lru
allkeys-lfu
allkeys-random
volatile-lru
volatile-lfu
volatile-random
volatile-ttl | Sets [eviction policy]({{< relref "/operate/rs/7.4/databases/memory-performance/eviction-policy" >}}) | +| proxy-policy | all-nodes
all-master-shards
single | Sets proxy policy | + + + +### Returns + +Returns the task ID of the task that is creating the database. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the database to be created and then return the CRDB GUID. + +### Examples + +```sh +$ crdb-cli crdb create --name database1 --memory-size 1GB --port 12000 \ + --instance fqdn=cluster1.redis.local,username=admin@redis.local,password=admin \ + --instance fqdn=cluster2.redis.local,username=admin@redis.local,password=admin +Task 633aaea3-97ee-4bcb-af39-a9cb25d7d4da created + ---> Status changed: queued -> started + ---> CRDB GUID Assigned: crdb:d84f6fe4-5bb7-49d2-a188-8900e09c6f66 + ---> Status changed: started -> finished +``` + +To create an Active-Active database with two shards in each instance and with encrypted traffic between the clusters: + +```sh +crdb-cli crdb create --name mycrdb --memory-size 100mb --port 12000 --instance fqdn=cluster1.redis.local,username=admin@redis.local,password=admin --instance fqdn=cluster2.redis.local,username=admin@redis.local,password=admin --shards-count 2 --encryption true +``` + +To create an Active-Active database with two shards and with RediSearch 2.0.6 module: + +```sh +crdb-cli crdb create --name mycrdb --memory-size 100mb --port 12000 --instance fqdn=cluster1.redis.local,username=admin@redis.local,password=admin --instance fqdn=cluster2.redis.local,username=admin@redis.local,password=admin --shards-count 2 --with-module name=search,version="2.0.6",args="PARTITIONS AUTO" +``` + +To create an Active-Active database with two shards and with encrypted traffic between the clusters: + +```sh +crdb-cli crdb create --name mycrdb --memory-size 100mb --port 12000 --instance fqdn=cluster1.redis.local,username=admin@redis.local,password=admin --instance fqdn=cluster2.redis.local,username=admin@redis.local,password=admin --encryption true --shards-count 2 +``` + +To create an Active-Active database with 1 shard in each instance and not wait for the response: + +```sh +crdb-cli crdb create --name mycrdb --memory-size 100mb --port 12000 --instance fqdn=cluster1.redis.local,username=admin@redis.local,password=admin --instance fqdn=cluster2.redis.local,username=admin@redis.local,password=admin --no-wait +``` +--- +Title: crdb-cli crdb delete +alwaysopen: false +categories: +- docs +- operate +- rs +description: Deletes an Active-Active database. +linkTitle: delete +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/delete/' +--- + +Deletes an Active-Active database. + +```sh +crdb-cli crdb delete --crdb-guid + [ --no-wait ] +``` + +This command is irreversible. If the data in your database is important, back it up before you delete the database. + +### Parameters + +| Parameter | Value | Description | +|---------------------|--------|-------------------------------------| +| crdb-guid | string | The GUID of the database (required) | +| no-wait | | Does not wait for the task to complete | + +### Returns + +Returns the task ID of the task that is deleting the database. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the database to be deleted and return `finished`. 
+ +### Example + +```sh +$ crdb-cli crdb delete --crdb-guid db6365b5-8aca-4055-95d8-7eb0105c0b35 +Task dfe6cacc-88ff-4667-812e-938fd05fe359 created + ---> Status changed: queued -> started + ---> Status changed: started -> finished +``` +--- +Title: crdb-cli crdb update +alwaysopen: false +categories: +- docs +- operate +- rs +description: Updates the configuration of an Active-Active database. +linkTitle: update +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/update/' +--- + +Updates the configuration of an Active-Active database. + +```sh +crdb-cli crdb update --crdb-guid + [--no-wait] + [--force] + [--default-db-config ] + [--default-db-config-file ] + [--compression <0-6>] + [--causal-consistency { true | false } ] + [--credentials id=,username=,password= ] + [--encryption { true | false } ] + [--oss-cluster { true | false } ] + [--featureset-version { true | false } ] + [--memory-size ] + [--bigstore-ram-size ] + [--update-module name=,featureset_version=] +``` + +If you want to change the configuration of the local instance only, use [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) instead. + +### Parameters + +| Parameter | Value | Description | +|---------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| crdb-guid \ | string | GUID of the Active-Active database (required) | +| bigstore-ram-size \ | size in bytes, kilobytes (KB), or gigabytes (GB) | Maximum RAM limit for the databases with Auto Tiering enabled, if activated | +| memory-size \ | size in bytes, kilobytes (KB), or gigabytes (GB) | Maximum database memory (required) | +| causal-consistency | true
false | [Causal consistency]({{< relref "/operate/rs/7.4/databases/active-active/causal-consistency.md" >}}) applies updates to all instances in the order they were received | +| compression | 0-6 | The level of data compression:

0 = No compression

6 = High compression and resource load (Default: 3) | +| credentials id=\,username=\,password=\ | strings | Updates the credentials for access to the instance | +| default-db-config \ | | Default database configuration from stdin | +| default-db-config-file \ | filepath | Default database configuration from file | +| encryption | true
false | Activates or deactivates encryption | +| force | | Force an update even if there are no changes | +| no-wait | | Do not wait for the command to finish | +| oss-cluster | true
false | Activates or deactivates OSS Cluster mode | +| eviction-policy | noeviction
allkeys-lru
allkeys-lfu
allkeys-random
volatile-lru
volatile-lfu
volatile-random
volatile-ttl | Updates [eviction policy]({{< relref "/operate/rs/7.4/databases/memory-performance/eviction-policy" >}}) | +| featureset-version | true
false | Updates to latest FeatureSet version | +| update-module name=\,featureset_version=\ | strings | Update a module to the specified version | + +### Returns + +Returns the task ID of the task that is updating the database. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the database to be updated and then return "finished." + +### Example + +```sh +$ crdb-cli crdb update --crdb-guid 968d586c-e12d-4b8f-8473-42eb88d0a3a2 --memory-size 2GBTask 7e98efc1-8233-4578-9e0c-cdc854b8af9e created + ---> Status changed: queued -> started + ---> Status changed: started -> finished +``` +--- +Title: crdb-cli crdb purge-instance +alwaysopen: false +categories: +- docs +- operate +- rs +description: Deletes data from a local instance and removes it from the Active-Active + database. +linkTitle: purge-instance +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/purge-instance/' +--- + +Deletes data from a local instance and removes the instance from the Active-Active database. + +```sh +crdb-cli crdb purge-instance --crdb-guid + --instance-id + [ --no-wait ] +``` + +Once this command finishes, the other replicas must remove this instance with [`crdb-cli crdb remove-instance --force`]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/remove-instance" >}}). + +### Parameters + +| Parameter | Value | Description | +|---------------------------|--------|--------------------------------------------------| +| crdb-guid | string | The GUID of the database (required) | +| instance-id | string | The ID of the local instance (required) | +| no-wait | | Does not wait for the task to complete | + +### Returns + +Returns the task ID of the task that is purging the local instance. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the instance to be purged and return `finished`. + +### Example + +```sh +$ crdb-cli crdb purge-instance --crdb-guid db6365b5-8aca-4055-95d8-7eb0105c0b35 --instance-id 2 +Task add0705c-87f1-4c28-ad6a-ab5d98e00c58 created + ---> Status changed: queued -> started + ---> Status changed: started -> finished +``` +--- +Title: crdb-cli crdb commands +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage Active-Active databases. +hideListLinks: true +linkTitle: crdb +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/crdb/' +--- + +Use `crdb-cli crdb` commands to manage Active-Active databases. + +## `crdb-cli crdb` commands + +{{}} +--- +Title: crdb-cli task status +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows the status of a specified Active-Active database task. +linkTitle: status +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/task/status/' +--- + +Shows the status of a specified Active-Active database task. + +```sh +crdb-cli task status --task-id +``` + +### Parameters + +| Parameter | Value | Description | +|---------------------|--------|-------------------------------------| +| task-id \ | string | An Active-Active database task ID (required) | +| verbose | N/A | Returns detailed information when specified | +| no-verbose | N/A | Returns limited information when specified | + +The `--verbose` and `--no-verbose` options are mutually incompatible; specify one or the other. + +The `404 Not Found` error indicates an invalid task ID. Use the [`task list`]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli/task/list" >}}) command to determine available task IDs. 
+ +### Returns + +Returns the status of an Active-Active database task. + +### Example + +```sh +$ crdb-cli task status --task-id e1c49470-ae0b-4df8-885b-9c755dd614d0 +Task-ID: e1c49470-ae0b-4df8-885b-9c755dd614d0 +CRDB-GUID: 1d7741cc-1110-4e2f-bc6c-935292783d24 +Operation: create_crdb +Status: finished +Worker-Name: crdb_worker:1:0 +Started: 2022-10-12T09:33:41Z +Ended: 2022-10-12T09:33:55Z +``` +--- +Title: crdb-cli task list +alwaysopen: false +categories: +- docs +- operate +- rs +description: Lists active and recent Active-Active database tasks. +linkTitle: list +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/task/list/' +--- + +Lists active and recent Active-Active database tasks. + +```sh +crdb-cli task list +``` + +### Parameters + +None + +### Returns + +A table listing current and recent Active-Active tasks. Each entry includes the following: + +| Column | Description | +|--------|-------------| +| Task ID | String containing the unique ID associated with the task
Example: `e1c49470-ae0b-4df8-885b-9c755dd614d0` | +| CRDB-GUID | String containing the unique ID associated with the Active-Active database affected by the task
Example: `1d7741cc-1110-4e2f-bc6c-935292783d24` | +| Operation | String describing the task action
Example: `create_crdb` | +| Status | String indicating the task status
Example: `finished` | +| Worker name | String identifying the process handling the task
Example: `crdb_worker:1:0` | +| Started | TimeStamp value indicating when the task started ([UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time))
Example: `2022-10-12T09:33:41Z` | +| Ended | TimeStamp value indicating when the task ended (UTC)
Example: ` 2022-10-12T09:33:55Z` | + +### Example + +```sh +$ crdb-cli task list +TASK-ID CRDB-GUID OPERATION STATUS WORKER-NAME STARTED ENDED + +``` +--- +Title: crdb-cli task cancel +alwaysopen: false +categories: +- docs +- operate +- rs +description: Attempts to cancel a specified Active-Active database task. +linkTitle: cancel +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/task/cancel/' +--- + +Cancels the Active-Active database task specified by the task ID. + +```sh +crdb-cli task cancel --task-id +``` + +### Parameters + +| Parameter | Value | Description | +|---------------------|--------|-------------------------------------| +| task-id \ | string | An Active-Active database task ID (required) | + +### Returns + +Attempts to cancel an Active-Active database task. + +Be aware that tasks may complete before they can be cancelled. + +### Example + +```sh +$ crdb-cli task cancel --task-id 2901c2a3-2828-4717-80c0-6f27f1dd2d7c +``` +--- +Title: crdb-cli task commands +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage Active-Active tasks. +hideListLinks: true +linkTitle: task +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/task/' +--- + +The `crdb-cli task` commands help investigate Active-Active database performance issues. They should not be used except as directed by Support. + +## `crdb-cli task` commands + +{{}} +--- +Title: crdb-cli +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage Active-Active databases. +hideListLinks: true +linkTitle: crdb-cli (manage Active-Active) +weight: $weight +url: '/operate/rs/7.4/references/cli-utilities/crdb-cli/' +--- + +An [Active-Active database]({{< relref "/operate/rs/7.4/databases/active-active/_index.md" >}}) (also known as CRDB or conflict-free replicated database) +replicates your data across Redis Enterprise Software clusters located in geographically distributed regions. +Active-Active databases allow read-write access in all locations, making them ideal for distributed applications that require fast response times and disaster recovery. + +The Active-Active database on an individual cluster is called an **instance**. +Each cluster that hosts an instance is called a **participating cluster**. + +An Active-Active database requires two or more participating clusters. +Each instance is responsible for updating the instances that reside on other participating clusters with the transactions it receives. +Write conflicts are resolved using [conflict-free replicated data types]({{< relref "/operate/rs/7.4/databases/active-active" >}}) (CRDTs). + +To programmatically maintain an Active-Active database and its instances, you can use the `crdb-cli` command-line tool. + +## `crdb-cli` commands + +{{}} + +## Use the crdb-cli + +To use the `crdb-cli` tool, use SSH to sign in to a Redis Enterprise host with a user that belongs to the group that Redis Enterprise Software was installed with (Default: **redislabs**). +If you sign in with a non-root user, you must add `/opt/redislabs/bin/` to your `PATH` environment variables. + +`crdb-cli` commands use the syntax: `crdb-cli ` to let you: + +- Create, list, update, flush, or delete an Active-Active database. +- Add or remove an instance of the Active-Active database on a specific cluster. + +Each command creates a task. + +By default, the command runs immediately and displays the result in the output. 
+ +If you use the `--no-wait` flag, the command runs in the background so that your application is not delayed by the response. + +Use the [`crdb-cli task` commands]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli/task/" >}}) to manage Active-Active database tasks. + +For each `crdb-cli` command, you can use `--help` for additional information about the command. +--- +Title: Benchmark an Auto Tiering enabled database +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linkTitle: Benchmark Auto Tiering +weight: $weight +url: '/operate/rs/7.4/references/memtier-benchmark/' +--- +Auto Tiering on Redis Enterprise Software lets you use cost-effective Flash memory as a RAM extension for your database. + +But what does the performance look like as compared to a memory-only database, one stored solely in RAM? + +These scenarios use the `memtier_benchmark` utility to evaluate the performance of a Redis Enterprise Software deployment, including the trial version. + +The `memtier_benchmark` utility is located in `/opt/redislabs/bin/` of Redis Enterprise Software deployments. To test performance for cloud provider deployments, see the [memtier-benchmark GitHub project](https://github.com/RedisLabs/memtier_benchmark). + +For additional, such as assistance with larger clusters, [contact support](https://redislabs.com/company/support/). + + +## Benchmark and performance test considerations + +These tests assume you're using a trial version of Redis Enterprise Software and want to test the performance of a Auto Tiering enabled database in the following scenarios: + +- Without replication: Four (4) master shards +- With replication: Two (2) primary and two replica shards + +With the trial version of Redis Enterprise Software you can create a cluster of up to four shards using a combination of database configurations, including: + +- Four databases, each with a single master shard +- Two highly available databases with replication enabled (each database has one master shard and one replica shard) +- One non-replicated clustered database with four master shards +- One highly available and clustered database with two master shards and two replica shards + +## Test environment and cluster setup + +For the test environment, you need to: + +1. Create a cluster with three nodes. +1. Prepare the flash memory. +1. Configure the load generation tool. + +### Creating a three-node cluster {#creating-a-threenode-rs-cluster} + +This performance test requires a three-node cluster. + +You can run all of these tests on Amazon AWS with these hosts: + +- 2 x i3.2xlarge (8 vCPU, 61 GiB RAM, up to 10GBit, 1.9TB NMVe SSD) + + These nodes serve RoF data + +- 1 x m4.large, which acts as a quorum node + +To learn how to install Redis Enterprise Software and set up a cluster, see: + +- [Redis Enterprise Software quickstart]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}) for a test installation +- [Install and upgrade]({{< relref "/operate/rs/7.4/installing-upgrading" >}}) for a production installation + +These tests use a quorum node to reduce AWS EC2 instance use while maintaining the three nodes required to support a quorum node in case of node failure. Quorum nodes can be on less powerful instances because they do not have shards or support traffic. + +As of this writing, i3.2xlarge instances are required because they support NVMe SSDs, which are required to support RoF. Auto Tiering requires Flash-enabled storage, such as NVMe SSDs. 
+ +For best results, compare performance of a Flash-enabled deployment to the performance in a RAM-only environment, such as a strictly on-premises deployment. + +## Prepare the flash memory + +After you install RS on the nodes, +the flash memory attached to the i3.2xlarge instances must be prepared and formatted with the `/opt/redislabs/sbin/prepare_flash.sh` script. + +## Set up the load generation tool + +The memtier_benchmark load generator tool generates the load on the RoF databases. +To use this tool, install RS on a dedicated instance that is not part of the RS cluster +but is in the same region/zone/subnet of your cluster. +We recommend that you use a relatively powerful instance to avoid bottlenecks at the load generation tool itself. + +For these tests, the load generation host uses a c4.8xlarge instance type. + +## Database configuration parameters + +### Create a Auto Tiering test database + +You can use the Redis Enterprise Cluster Manager UI to create a test database. +We recommend that you use a separate database for each test case with these requirements: + +| **Parameter** | **With replication** | **Without replication** | **Description** | +| ------ | ------ | ------ | ------ | +| Name | test-1 | test-2 | The name of the test database | +| Memory limit | 100 GB | 100 GB | The memory limit refers to RAM+Flash, aggregated across all the shards of the database, including master and replica shards. | +| RAM limit | 0.3 | 0.3 | RoF always keeps the Redis keys and Redis dictionary in RAM and additional RAM is required for storing hot values. For the purpose of these tests 30% RAM was calculated as an optimal value. | +| Replication | Enabled | Disabled | A database with no replication has only master shards. A database with replication has master and replica shards. | +| Data persistence | None | None | No data persistence is needed for these tests. | +| Database clustering | Enabled | Enabled | A clustered database consists of multiple shards. | +| Number of (master) shards | 2 | 4 | Shards are distributed as follows:
- With replication: One master shard and one replica shard on each node
- Without replication: Two master shards on each node | +| Other parameters | Default | Default | Keep the default values for the other configuration parameters. | + +## Data population + +### Populate the benchmark dataset + +The memtier_benchmark load generation tool populates the database. +To populate the database with N items of 500 Bytes each in size, on the load generation instance run: + +```sh +$ memtier_benchmark -s $DB_HOST -p $DB_PORT --hide-histogram +--key-maximum=$N -n allkeys -d 500 --key-pattern=P:P --ratio=1:0 +``` + +Set up a test database: + +| **Parameter** | **Description** | +| ------ | ------ | +| Database host
(-s) | The fully qualified name of the endpoint or the IP shown in the RS database configuration | +| Database port
(-p) | The endpoint port shown in your database configuration | +| Number of items
(--key-maximum) | With replication: 75 Million
Without replication: 150 Million | +| Item size
(-d) | 500 Bytes | + +## Centralize the keyspace + +### With replication {#centralize-with-repl} + +To create roughly 20.5 million items in RAM for your highly available clustered database with 75 million items, run: + +```sh +$ memtier_benchmark -s $DB_HOST -p $DB_PORT --hide-histogram +--key-minimum=27250000 --key-maximum=47750000 -n allkeys +--key-pattern=P:P --ratio=0:1 +``` + +To verify the database values, use **Values in RAM** metric, which is available from the **Metrics** tab of your database in the Cluster Manager UI. + +### Without replication {#centralize-wo-repl} + +To create 41 million items in RAM without replication enabled and 150 million items, run: + +```sh +$ memtier_benchmark -s $DB_HOST -p $DB_PORT --hide-histogram +--key-minimum=54500000 --key-maximum=95500000 -n allkeys +--key-pattern=P:P --ratio=0:1 +``` + +## Test runs + +### Generate load + +#### With replication {#generate-with-repl} + +We recommend that you do a dry run and double check the RAM Hit Ratio on the **Metrics** screen in the Cluster Manager UI before you write down the test results. + +To test RoF with an 85% RAM Hit Ratio, run: + +```sh +$ memtier_benchmark -s $DB_HOST -p $DB_PORT --pipeline=11 -c 20 -t 1 +-d 500 --key-maximum=75000000 --key-pattern=G:G --key-stddev=5125000 +--ratio=1:1 --distinct-client-seed --randomize --test-time=600 +--run-count=1 --out-file=test.out +``` + +#### Without replication {#generate-wo-repl} + +Here is the command for 150 million items: + +```sh +$ memtier_benchmark -s $DB_HOST -p $DB_PORT --pipeline=24 -c 20 -t 1 +-d 500 --key-maximum=150000000 --key-pattern=G:G --key-stddev=10250000 +--ratio=1:1 --distinct-client-seed --randomize --test-time=600 +--run-count=1 --out-file=test.out +``` + +Where: + +| **Parameter** | **Description** | +|------------|-----------------| +| Access pattern (--key-pattern) and standard deviation (--key-stddev) | Controls the RAM Hit ratio after the centralization process is complete | +| Number of threads (-t and -c)\ | Controls how many connections are opened to the database, whereby the number of connections is the number of threads multiplied by the number of connections per thread (-t) and number of clients per thread (-c) | +| Pipelining (--pipeline)\ | Pipelining allows you to send multiple requests without waiting for each individual response (-t) and number of clients per thread (-c) | +| Read\write ratio (--ratio)\ | A value of 1:1 means that you have the same number of write operations as read operations (-t) and number of clients per thread (-c) | + +## Test results + +### Monitor the test results + +You can either monitor the results in the **Metrics** tab of the Cluster Manager UI or with the `memtier_benchmark` output. However, be aware that: + +- The memtier_benchmark results include the network latency between the load generator instance and the cluster instances. + +- The metrics shown in the Cluster Manager UI do _not_ include network latency. + +### Expected results + +You should expect to see an average throughput of: + +- Around 160,000 ops/sec when testing without replication (i.e. Four master shards) +- Around 115,000 ops/sec when testing with enabled replication (i.e. 2 master and 2 replica shards) + +In both cases, the average latency should be below one millisecond. +--- +Title: Develop with Redis clients +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis client libraries allow you to connect to Redis instances from within + your application. 
This section provides an overview of several recommended Redis + clients for popular programming and scripting languages. +hideListLinks: true +linkTitle: Redis clients +weight: 80 +url: '/operate/rs/7.4/references/client_references/' +--- +To connect to Redis instances from within your application, use a Redis client library that matches your application's language. + +## Official clients + +| Language | Client name | +| :---------- | :------------- | +| .Net | [NRedisStack]({{< relref "/develop/clients/dotnet" >}}) | +| Go | [go-redis]({{< relref "/develop/clients/go" >}}) | +| Java | [Jedis]({{< relref "/develop/clients/jedis" >}}) (Synchronous) and [Lettuce]({{< relref "/develop/clients/lettuce" >}}) (Asynchronous) | +| Node.js | [node-redis]({{< relref "/develop/clients/nodejs" >}}) | +| Python | [redis-py]({{< relref "/develop/clients/redis-py" >}}) | + +Select a client name to see its quick start. + +## Other clients + +For a list of community-driven Redis clients, which are available for more programming languages, see +[Community-supported clients]({{< relref "/develop/clients#community-supported-clients" >}}). +--- +Title: Permissions +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the permissions used with Redis Enterprise Software REST API + calls. +linkTitle: Permissions +weight: 60 +url: '/operate/rs/7.4/references/rest-api/permissions/' +--- + +Some Redis Enterprise [REST API requests]({{< relref "/operate/rs/7.4/references/rest-api/requests" >}}) may require the user to have specific permissions. + +Administrators can assign a predefined role to a user with the [Cluster Manager UI]({{< relref "/operate/rs/7.4/security/access-control/create-users" >}}) or a [`PUT /v1/users/{uid}` API request]({{< relref "/operate/rs/7.4/references/rest-api/requests/users#put-user" >}}) to grant necessary permissions to them. + +## Roles + +Each user in the cluster has an assigned cluster management role, which defines the permissions granted to the user. + +Available management roles include: + +- **none**: No REST API permissions. +- **[db_viewer](#db-viewer-role)**: Can view database info. +- **[db_member](#db-member-role)**: Can create or modify databases and view their info. +- **[cluster_viewer](#cluster-viewer-role)**: Can view cluster and database info. +- **[cluster_member](#cluster-member-role)**: Can modify the cluster and databases and view their info. +- **[admin](#admin-role)**: Can view and modify all elements of the cluster. + +## Permissions list for each role + +| Role | Permissions | +|------|-------------| +| none | No permissions | +|
admin | [add_cluster_module](#add_cluster_module), [cancel_cluster_action](#cancel_cluster_action), [cancel_node_action](#cancel_node_action), [config_ldap](#config_ldap), [config_ocsp](#config_ocsp), [create_bdb](#create_bdb), [create_crdb](#create_crdb), [create_ldap_mapping](#create_ldap_mapping), [create_new_user](#create_new_user), [create_redis_acl](#create_redis_acl), [create_role](#create_role), [delete_bdb](#delete_bdb), [delete_cluster_module](#delete_cluster_module), [delete_crdb](#delete_crdb), [delete_ldap_mapping](#delete_ldap_mapping), [delete_redis_acl](#delete_redis_acl), [delete_role](#delete_role), [delete_user](#delete_user), [edit_bdb_module](#edit_bdb_module), [flush_crdb](#flush_crdb), [install_new_license](#install_new_license), [migrate_shard](#migrate_shard), [purge_instance](#purge_instance), [reset_bdb_current_backup_status](#reset_bdb_current_backup_status), [reset_bdb_current_export_status](#reset_bdb_current_export_status), [reset_bdb_current_import_status](#reset_bdb_current_import_status), [start_bdb_export](#start_bdb_export), [start_bdb_import](#start_bdb_import), [start_bdb_recovery](#start_bdb_recovery), [start_cluster_action](#start_cluster_action), [start_node_action](#start_node_action), [test_ocsp_status](#test_ocsp_status), [update_bdb](#update_bdb), [update_bdb_alerts](#update_bdb_alerts), [update_bdb_with_action](#update_bdb_with_action), [update_cluster](#update_cluster), [update_crdb](#update_crdb), [update_ldap_mapping](#update_ldap_mapping), [update_node](#update_node), [update_proxy](#update_proxy), [update_redis_acl](#update_redis_acl), [update_role](#update_role), [update_user](#update_user), [view_all_bdb_stats](#view_all_bdb_stats), [view_all_bdbs_alerts](#view_all_bdbs_alerts), [view_all_bdbs_info](#view_all_bdbs_info), [view_all_ldap_mappings_info](#view_all_ldap_mappings_info), [view_all_nodes_alerts](#view_all_nodes_alerts), [view_all_nodes_checks](#view_all_nodes_checks), [view_all_nodes_info](#view_all_nodes_info), [view_all_nodes_stats](#view_all_nodes_stats), [view_all_proxies_info](#view_all_proxies_info), [view_all_redis_acls_info](#view_all_redis_acls_info), [view_all_roles_info](#view_all_roles_info), [view_all_shard_stats](#view_all_shard_stats), [view_all_users_info](#view_all_users_info), [view_bdb_alerts](#view_bdb_alerts), [view_bdb_info](#view_bdb_info), [view_bdb_recovery_plan](#view_bdb_recovery_plan), [view_bdb_stats](#view_bdb_stats), [view_cluster_alerts](#view_cluster_alerts), [view_cluster_info](#view_cluster_info), [view_cluster_keys](#view_cluster_keys), [view_cluster_modules](#view_cluster_modules), [view_cluster_stats](#view_cluster_stats), [view_crdb](#view_crdb), [view_crdb_list](#view_crdb_list), [view_crdb_task](#view_crdb_task), [view_crdb_task_list](#view_crdb_task_list), [view_debugging_info](#view_debugging_info), [view_endpoint_stats](#view_endpoint_stats), [view_ldap_config](#view_ldap_config), [view_ldap_mapping_info](#view_ldap_mapping_info), [view_license](#view_license), [view_logged_events](#view_logged_events), [view_node_alerts](#view_node_alerts), [view_node_check](#view_node_check), [view_node_info](#view_node_info), [view_node_stats](#view_node_stats), [view_ocsp_config](#view_ocsp_config), [view_ocsp_status](#view_ocsp_status), [view_proxy_info](#view_proxy_info), [view_redis_acl_info](#view_redis_acl_info), [view_redis_pass](#view_redis_pass), [view_role_info](#view_role_info), [view_shard_stats](#view_shard_stats), [view_status_of_all_node_actions](#view_status_of_all_node_actions), 
[view_status_of_cluster_action](#view_status_of_cluster_action), [view_status_of_node_action](#view_status_of_node_action), [view_user_info](#view_user_info) | +| cluster_member | [create_bdb](#create_bdb), [create_crdb](#create_crdb), [delete_bdb](#delete_bdb), [delete_crdb](#delete_crdb), [edit_bdb_module](#edit_bdb_module), [flush_crdb](#flush_crdb), [migrate_shard](#migrate_shard), [purge_instance](#purge_instance), [reset_bdb_current_backup_status](#reset_bdb_current_backup_status), [reset_bdb_current_export_status](#reset_bdb_current_export_status), [reset_bdb_current_import_status](#reset_bdb_current_import_status), [start_bdb_export](#start_bdb_export), [start_bdb_import](#start_bdb_import), [start_bdb_recovery](#start_bdb_recovery), [update_bdb](#update_bdb), [update_bdb_alerts](#update_bdb_alerts), [update_bdb_with_action](#update_bdb_with_action), [update_crdb](#update_crdb), [view_all_bdb_stats](#view_all_bdb_stats), [view_all_bdbs_alerts](#view_all_bdbs_alerts), [view_all_bdbs_info](#view_all_bdbs_info), [view_all_nodes_alerts](#view_all_nodes_alerts), [view_all_nodes_checks](#view_all_nodes_checks), [view_all_nodes_info](#view_all_nodes_info), [view_all_nodes_stats](#view_all_nodes_stats), [view_all_proxies_info](#view_all_proxies_info), [view_all_redis_acls_info](#view_all_redis_acls_info), [view_all_roles_info](#view_all_roles_info), [view_all_shard_stats](#view_all_shard_stats), [view_bdb_alerts](#view_bdb_alerts), [view_bdb_info](#view_bdb_info), [view_bdb_recovery_plan](#view_bdb_recovery_plan), [view_bdb_stats](#view_bdb_stats), [view_cluster_alerts](#view_cluster_alerts), [view_cluster_info](#view_cluster_info), [view_cluster_keys](#view_cluster_keys), [view_cluster_modules](#view_cluster_modules), [view_cluster_stats](#view_cluster_stats), [view_crdb](#view_crdb), [view_crdb_list](#view_crdb_list), [view_crdb_task](#view_crdb_task), [view_crdb_task_list](#view_crdb_task_list), [view_debugging_info](#view_debugging_info), [view_endpoint_stats](#view_endpoint_stats), [view_license](#view_license), [view_logged_events](#view_logged_events), [view_node_alerts](#view_node_alerts), [view_node_check](#view_node_check), [view_node_info](#view_node_info), [view_node_stats](#view_node_stats), [view_proxy_info](#view_proxy_info), [view_redis_acl_info](#view_redis_acl_info), [view_redis_pass](#view_redis_pass), [view_role_info](#view_role_info), [view_shard_stats](#view_shard_stats), [view_status_of_all_node_actions](#view_status_of_all_node_actions), [view_status_of_cluster_action](#view_status_of_cluster_action), [view_status_of_node_action](#view_status_of_node_action) | +| cluster_viewer | [view_all_bdb_stats](#view_all_bdb_stats), [view_all_bdbs_alerts](#view_all_bdbs_alerts), [view_all_bdbs_info](#view_all_bdbs_info), [view_all_nodes_alerts](#view_all_nodes_alerts), [view_all_nodes_checks](#view_all_nodes_checks), [view_all_nodes_info](#view_all_nodes_info), [view_all_nodes_stats](#view_all_nodes_stats), [view_all_proxies_info](#view_all_proxies_info), [view_all_redis_acls_info](#view_all_redis_acls_info), [view_all_roles_info](#view_all_roles_info), [view_all_shard_stats](#view_all_shard_stats), [view_bdb_alerts](#view_bdb_alerts), [view_bdb_info](#view_bdb_info), [view_bdb_recovery_plan](#view_bdb_recovery_plan), [view_bdb_stats](#view_bdb_stats), [view_cluster_alerts](#view_cluster_alerts), [view_cluster_info](#view_cluster_info), [view_cluster_modules](#view_cluster_modules), [view_cluster_stats](#view_cluster_stats), [view_crdb](#view_crdb), 
[view_crdb_list](#view_crdb_list), [view_crdb_task](#view_crdb_task), [view_crdb_task_list](#view_crdb_task_list), [view_endpoint_stats](#view_endpoint_stats), [view_license](#view_license), [view_logged_events](#view_logged_events), [view_node_alerts](#view_node_alerts), [view_node_check](#view_node_check), [view_node_info](#view_node_info), [view_node_stats](#view_node_stats), [view_proxy_info](#view_proxy_info), [view_redis_acl_info](#view_redis_acl_info), [view_role_info](#view_role_info), [view_shard_stats](#view_shard_stats), [view_status_of_all_node_actions](#view_status_of_all_node_actions), [view_status_of_cluster_action](#view_status_of_cluster_action), [view_status_of_node_action](#view_status_of_node_action) | +| db_member | [create_bdb](#create_bdb), [create_crdb](#create_crdb), [delete_bdb](#delete_bdb), [delete_crdb](#delete_crdb), [edit_bdb_module](#edit_bdb_module), [flush_crdb](#flush_crdb), [migrate_shard](#migrate_shard), [purge_instance](#purge_instance), [reset_bdb_current_backup_status](#reset_bdb_current_backup_status), [reset_bdb_current_export_status](#reset_bdb_current_export_status), [reset_bdb_current_import_status](#reset_bdb_current_import_status), [start_bdb_export](#start_bdb_export), [start_bdb_import](#start_bdb_import), [start_bdb_recovery](#start_bdb_recovery), [update_bdb](#update_bdb), [update_bdb_alerts](#update_bdb_alerts), [update_bdb_with_action](#update_bdb_with_action), [update_crdb](#update_crdb), [view_all_bdb_stats](#view_all_bdb_stats), [view_all_bdbs_alerts](#view_all_bdbs_alerts), [view_all_bdbs_info](#view_all_bdbs_info), [view_all_nodes_alerts](#view_all_nodes_alerts), [view_all_nodes_checks](#view_all_nodes_checks), [view_all_nodes_info](#view_all_nodes_info), [view_all_nodes_stats](#view_all_nodes_stats), [view_all_proxies_info](#view_all_proxies_info), [view_all_redis_acls_info](#view_all_redis_acls_info), [view_all_roles_info](#view_all_roles_info), [view_all_shard_stats](#view_all_shard_stats), [view_bdb_alerts](#view_bdb_alerts), [view_bdb_info](#view_bdb_info), [view_bdb_recovery_plan](#view_bdb_recovery_plan), [view_bdb_stats](#view_bdb_stats), [view_cluster_alerts](#view_cluster_alerts), [view_cluster_info](#view_cluster_info), [view_cluster_modules](#view_cluster_modules), [view_cluster_stats](#view_cluster_stats), [view_crdb](#view_crdb), [view_crdb_list](#view_crdb_list), [view_crdb_task](#view_crdb_task), [view_crdb_task_list](#view_crdb_task_list), [view_debugging_info](#view_debugging_info), [view_endpoint_stats](#view_endpoint_stats), [view_license](#view_license), [view_logged_events](#view_logged_events), [view_node_alerts](#view_node_alerts), [view_node_check](#view_node_check), [view_node_info](#view_node_info), [view_node_stats](#view_node_stats), [view_proxy_info](#view_proxy_info), [view_redis_acl_info](#view_redis_acl_info), [view_redis_pass](#view_redis_pass), [view_role_info](#view_role_info), [view_shard_stats](#view_shard_stats), [view_status_of_all_node_actions](#view_status_of_all_node_actions), [view_status_of_cluster_action](#view_status_of_cluster_action), [view_status_of_node_action](#view_status_of_node_action) | +| db_viewer | [view_all_bdb_stats](#view_all_bdb_stats), [view_all_bdbs_alerts](#view_all_bdbs_alerts), [view_all_bdbs_info](#view_all_bdbs_info), [view_all_nodes_alerts](#view_all_nodes_alerts), [view_all_nodes_checks](#view_all_nodes_checks), [view_all_nodes_info](#view_all_nodes_info), [view_all_nodes_stats](#view_all_nodes_stats), [view_all_proxies_info](#view_all_proxies_info), 
[view_all_redis_acls_info](#view_all_redis_acls_info), [view_all_roles_info](#view_all_roles_info), [view_all_shard_stats](#view_all_shard_stats), [view_bdb_alerts](#view_bdb_alerts), [view_bdb_info](#view_bdb_info), [view_bdb_recovery_plan](#view_bdb_recovery_plan), [view_bdb_stats](#view_bdb_stats), [view_cluster_alerts](#view_cluster_alerts), [view_cluster_info](#view_cluster_info), [view_cluster_modules](#view_cluster_modules), [view_cluster_stats](#view_cluster_stats), [view_crdb](#view_crdb), [view_crdb_list](#view_crdb_list), [view_crdb_task](#view_crdb_task), [view_crdb_task_list](#view_crdb_task_list), [view_endpoint_stats](#view_endpoint_stats), [view_license](#view_license), [view_node_alerts](#view_node_alerts), [view_node_check](#view_node_check), [view_node_info](#view_node_info), [view_node_stats](#view_node_stats), [view_proxy_info](#view_proxy_info), [view_redis_acl_info](#view_redis_acl_info), [view_role_info](#view_role_info), [view_shard_stats](#view_shard_stats), [view_status_of_all_node_actions](#view_status_of_all_node_actions), [view_status_of_cluster_action](#view_status_of_cluster_action), [view_status_of_node_action](#view_status_of_node_action) | + +## Roles list per permission + +| Permission | Roles | +|------------|-------| +| add_cluster_module| admin | +| cancel_cluster_action | admin | +| cancel_node_action | admin | +| config_ldap | admin | +| config_ocsp | admin | +| create_bdb | admin
cluster_member
db_member | +| create_crdb | admin
cluster_member
db_member | +| create_ldap_mapping | admin | +| create_new_user | admin | +| create_redis_acl | admin | +| create_role | admin | +| delete_bdb | admin
cluster_member
db_member | +| delete_cluster_module | admin | +| delete_crdb | admin
cluster_member
db_member | +| delete_ldap_mapping | admin | +| delete_redis_acl | admin | +| delete_role | admin | +| delete_user | admin | +| edit_bdb_module | admin
cluster_member
db_member | +| flush_crdb | admin
cluster_member
db_member | +| install_new_license | admin | +| migrate_shard | admin
cluster_member
db_member | +| purge_instance | admin
cluster_member
db_member | +| reset_bdb_current_backup_status | admin
cluster_member
db_member | +| reset_bdb_current_export_status | admin
cluster_member
db_member | +| reset_bdb_current_import_status | admin
cluster_member
db_member | +| start_bdb_export | admin
cluster_member
db_member | +| start_bdb_import | admin
cluster_member
db_member | +| start_bdb_recovery | admin
cluster_member
db_member | +| start_cluster_action | admin | +| start_node_action | admin | +| test_ocsp_status | admin | +| update_bdb | admin
cluster_member
db_member | +| update_bdb_alerts | admin
cluster_member
db_member | +| update_bdb_with_action | admin
cluster_member
db_member | +| update_cluster | admin | +| update_crdb | admin
cluster_member
db_member | +| update_ldap_mapping | admin | +| update_node | admin | +| update_proxy | admin | +| update_redis_acl | admin | +| update_role | admin | +| update_user | admin | +| view_all_bdb_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_all_bdbs_alerts | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_all_bdbs_info | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_all_ldap_mappings_info | admin | +| view_all_nodes_alerts | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_all_nodes_checks | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_all_nodes_info | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_all_nodes_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_all_proxies_info | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_all_redis_acls_info | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_all_roles_info | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_all_shard_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_all_users_info | admin | +| view_bdb_alerts | admin
cluster_member
cluster_viewer
db_member
db_viewer |
+| view_bdb_info | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_bdb_recovery_plan | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_bdb_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_cluster_alerts | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_cluster_info | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_cluster_keys | admin
cluster_member | +| view_cluster_modules | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_cluster_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_crdb | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_crdb_list | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_crdb_task | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_crdb_task_list | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_debugging_info | admin
cluster_member
db_member
| +| view_endpoint_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_ldap_config | admin | +| view_ldap_mapping_info | admin | +| view_license | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_logged_events | admin
cluster_member
cluster_viewer
db_member | +| view_node_alerts | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_node_check | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_node_info | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_node_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_ocsp_config | admin | +| view_ocsp_status | admin | +| view_proxy_info | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_redis_acl_info | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_redis_pass | admin
cluster_member
db_member | +| view_role_info | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_shard_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_status_of_all_node_actions | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_status_of_cluster_action | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_status_of_node_action | admin
cluster_member
cluster_viewer
db_member
db_viewer | +| view_user_info | admin | +--- +Title: Redis Enterprise Software REST API quick start +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis Enterprise Software REST API quick start +linkTitle: Quick start +weight: 20 +url: '/operate/rs/7.4/references/rest-api/quick-start/' +--- + +Redis Enterprise Software includes a REST API that allows you to automate certain tasks. This article shows you how to send a request to the Redis Enterprise Software REST API. + +## Fundamentals + +No matter which method you use to send API requests, there are a few common concepts to remember. + +| Type | Description | +|------|-------------| +| [Authentication]({{< relref "/operate/rs/7.4/references/rest-api#authentication" >}}) | Use [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication) with your cluster username (email) and password | +| [Ports]({{< relref "/operate/rs/7.4/references/rest-api#ports" >}}) | All calls are made to port 9443 by default | +| [Versions]({{< relref "/operate/rs/7.4/references/rest-api#versions" >}}) | Specify the version in the request [URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier) | +| [Headers]({{< relref "/operate/rs/7.4/references/rest-api#headers" >}}) | `Accept` and `Content-Type` should be `application/json` | +| [Response types and error codes]({{< relref "/operate/rs/7.4/references/rest-api#response-types-and-error-codes" >}}) | A response of `200 OK` means success; otherwise, the request failed due to an error | + +For more information, see [Redis Enterprise Software REST API]({{< relref "/operate/rs/7.4/references/rest-api/" >}}). + +## cURL example requests + +[cURL](https://curl.se/) is a command-line tool that allows you to send HTTP requests from a terminal. + +You can use the following options to build a cURL request: + +| Option | Description | +|--------|-------------| +| -X | Method (GET, PUT, PATCH, POST, or DELETE) | +| -H | Request header, can be specified multiple times | +| -u | Username and password information | +| -d | JSON data for PUT or POST requests | +| -F | Form data for PUT or POST requests, such as for the [`POST /v1/modules`]({{< relref "/operate/rs/7.4/references/rest-api/requests/modules/#post-module" >}}) or [`POST /v2/modules`]({{< relref "/operate/rs/7.4/references/rest-api/requests/modules/#post-module-v2" >}}) endpoint | +| -k | Turn off SSL verification | +| -i | Show headers and status code as well as the response body | + +See the [cURL documentation](https://curl.se/docs/) for more information. + +### GET request + +Use the following cURL command to get a list of databases with the [GET `/v1/bdbs/`]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/#get-all-bdbs" >}}) endpoint. + +```sh +$ curl -X GET -H "accept: application/json" \ + -u "[username]:[password]" \ + https://[host][:port]/v1/bdbs -k -i + +HTTP/1.1 200 OK +server: envoy +date: Tue, 14 Jun 2022 19:24:30 GMT +content-type: application/json +content-length: 2833 +cluster-state-id: 42 +x-envoy-upstream-service-time: 25 + +[ + { + ... + "name": "tr01", + ... + "uid": 1, + "version": "6.0.16", + "wait_command": true + } +] +``` + +In the response body, the `uid` is the database ID. You can use the database ID to view or update the database using the API. 
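+
+If you are scripting rather than calling cURL, you can do the same lookup with the `requests` library used in the client examples later on this page. Here is a minimal sketch, assuming placeholder connection details and the `GET /v1/bdbs/{uid}` form of the endpoint; adjust certificate verification for your environment:
+
+```python
+import requests
+
+host = "[host]"                      # replace with your cluster host
+port = "[port]"                      # replace with your API port (9443 by default)
+auth = ("[username]", "[password]")  # cluster username (email) and password
+uid = 1                              # database ID taken from the GET /v1/bdbs response above
+
+# Fetch a single database by ID: GET /v1/bdbs/{uid}
+response = requests.get(
+    "https://{}:{}/v1/bdbs/{}".format(host, port, uid),
+    auth=auth,
+    headers={"accept": "application/json"},
+    verify=False,  # self-signed cluster certificate; enable verification in production
+)
+response.raise_for_status()
+print(response.json()["name"])
+```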
+ +For more information about the fields returned by [GET `/v1/bdbs/`]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/#get-all-bdbs" >}}), see the [`bdbs` object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/" >}}). + +### PUT request + +Once you have the database ID, you can use [PUT `/v1/bdbs/`]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/#put-bdbs" >}}) to update the configuration of the database. + +For example, you can pass the database `uid` 1 as a URL parameter and use the `-d` option to specify the new `name` when you send the request. This changes the database's `name` from `tr01` to `database1`: + +```sh +$ curl -X PUT -H "accept: application/json" \ + -H "content-type: application/json" \ + -u "cameron.bates@redis.com:test123" \ + https://[host]:[port]/v1/bdbs/1 \ + -d '{ "name": "database1" }' -k -i +HTTP/1.1 200 OK +server: envoy +date: Tue, 14 Jun 2022 20:00:25 GMT +content-type: application/json +content-length: 2933 +cluster-state-id: 43 +x-envoy-upstream-service-time: 159 + +{ + ... + "name" : "database1", + ... + "uid" : 1, + "version" : "6.0.16", + "wait_command" : true +} +``` + +For more information about the fields you can update with [PUT `/v1/bdbs/`]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/#put-bdbs" >}}), see the [`bdbs` object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/" >}}). + +## Client examples + +You can also use client libraries to make API requests in your preferred language. + +To follow these examples, you need: + +- A [Redis Enterprise Software]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}) node +- Python 3 and the [requests](https://pypi.org/project/requests/) Python library +- [node.js](https://nodejs.dev/) and [node-fetch](https://www.npmjs.com/package/node-fetch) + +### Python + +```python +import json +import requests + +# Required connection information - replace with your host, port, username, and password +host = "[host]" +port = "[port]" +username = "[username]" +password = "[password]" + +# Get the list of databases using GET /v1/bdbs +bdbs_uri = "https://{}:{}/v1/bdbs".format(host, port) + +print("GET {}".format(bdbs_uri)) +get_resp = requests.get(bdbs_uri, + auth = (username, password), + headers = { "accept" : "application/json" }, + verify = False) + +print("{} {}".format(get_resp.status_code, get_resp.reason)) +for header in get_resp.headers.keys(): + print("{}: {}".format(header, get_resp.headers[header])) + +print("\n" + json.dumps(get_resp.json(), indent=4)) + +# Rename all databases using PUT /v1/bdbs +for bdb in get_resp.json(): + uid = bdb["uid"] # Get the database ID from the JSON response + + put_uri = "{}/{}".format(bdbs_uri, uid) + new_name = "database{}".format(uid) + put_data = { "name" : new_name } + + print("PUT {} {}".format(put_uri, json.dumps(put_data))) + + put_resp = requests.put(put_uri, + data = json.dumps(put_data), + auth = (username, password), + headers = { "content-type" : "application/json" }, + verify = False) + + print("{} {}".format(put_resp.status_code, put_resp.reason)) + for header in put_resp.headers.keys(): + print("{}: {}".format(header, put_resp.headers[header])) + + print("\n" + json.dumps(put_resp.json(), indent=4)) +``` + +See the [Python requests library documentation](https://requests.readthedocs.io/en/latest/) for more information. 
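+
+When the cluster uses self-signed certificates, the unverified HTTPS calls above cause `requests` to emit `InsecureRequestWarning` messages (visible in the output below). If you want to silence them while testing, here is a minimal sketch, assuming `urllib3` is available (it is installed as a dependency of `requests`):
+
+```python
+import urllib3
+
+# Suppress the warning that requests/urllib3 emit for unverified HTTPS requests.
+# Only do this for testing; in production, configure certificate verification instead.
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+```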
+ +#### Output + +```sh +$ python rs_api.py +python rs_api.py +GET https://[host]:[port]/v1/bdbs +InsecureRequestWarning: Unverified HTTPS request is being made to host '[host]'. +Adding certificate verification is strongly advised. +See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings + warnings.warn( +200 OK +server: envoy +date: Wed, 15 Jun 2022 15:49:43 GMT +content-type: application/json +content-length: 2832 +cluster-state-id: 89 +x-envoy-upstream-service-time: 27 + +[ + { + ... + "name": "tr01", + ... + "uid": 1, + "version": "6.0.16", + "wait_command": true + } +] + +PUT https://[host]:[port]/v1/bdbs/1 {"name": "database1"} +InsecureRequestWarning: Unverified HTTPS request is being made to host '[host]'. +Adding certificate verification is strongly advised. +See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings + warnings.warn( +200 OK +server: envoy +date: Wed, 15 Jun 2022 15:49:43 GMT +content-type: application/json +content-length: 2933 +cluster-state-id: 90 +x-envoy-upstream-service-time: 128 + +{ + ... + "name" : "database1", + ... + "uid" : 1, + "version" : "6.0.16", + "wait_command" : true +} +``` + +### node.js + +```js +import fetch, { Headers } from 'node-fetch'; +import * as https from 'https'; + +const HOST = '[host]'; +const PORT = '[port]'; +const USERNAME = '[username]'; +const PASSWORD = '[password]'; + +// Get the list of databases using GET /v1/bdbs +const BDBS_URI = `https://${HOST}:${PORT}/v1/bdbs`; +const USER_CREDENTIALS = Buffer.from(`${USERNAME}:${PASSWORD}`).toString('base64'); +const AUTH_HEADER = `Basic ${USER_CREDENTIALS}`; + +console.log(`GET ${BDBS_URI}`); + +const HTTPS_AGENT = new https.Agent({ + rejectUnauthorized: false +}); + +const response = await fetch(BDBS_URI, { + method: 'GET', + headers: { + 'Accept': 'application/json', + 'Authorization': AUTH_HEADER + }, + agent: HTTPS_AGENT +}); + +const responseObject = await response.json(); +console.log(`${response.status}: ${response.statusText}`); +console.log(responseObject); + +// Rename all databases using PUT /v1/bdbs +for (const database of responseObject) { + const DATABASE_URI = `${BDBS_URI}/${database.uid}`; + const new_name = `database${database.uid}`; + + console.log(`PUT ${DATABASE_URI}`); + + const response = await fetch(DATABASE_URI, { + method: 'PUT', + headers: { + 'Authorization': AUTH_HEADER, + 'Content-Type': 'application/json' + }, + body: JSON.stringify({ + 'name': new_name + }), + agent: HTTPS_AGENT + }); + + console.log(`${response.status}: ${response.statusText}`); + console.log(await(response.json())); +} +``` + +See the [node-fetch documentation](https://www.npmjs.com/package/node-fetch) for more info. + +#### Output + +```sh +$ node rs_api.js +GET https://[host]:[port]/v1/bdbs +200: OK +[ + { + ... + "name": "tr01", + ... + "slave_ha" : false, + ... + "uid": 1, + "version": "6.0.16", + "wait_command": true + } +] +PUT https://[host]:[port]/v1/bdbs/1 +200: OK +{ + ... + "name" : "tr01", + ... + "slave_ha" : true, + ... + "uid" : 1, + "version" : "6.0.16", + "wait_command" : true +} +``` + +## More info + +- [Redis Enterprise Software REST API]({{< relref "/operate/rs/7.4/references/rest-api/" >}}) +- [Redis Enterprise Software REST API requests]({{< relref "/operate/rs/7.4/references/rest-api/requests/" >}}) +--- +Title: DB metrics +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the DB metrics used with Redis Enterprise Software REST API + calls. 
+linkTitle: DB metrics
+weight: $weight
+url: '/operate/rs/7.4/references/rest-api/objects/statistics/db-metrics/'
+---
+
+| Metric name | Type | Description |
+|-------------|------|-------------|
+| avg_latency | float | Average latency of operations on the DB (microseconds). Only returned when there is traffic. |
+| avg_other_latency | float | Average latency of other (non read/write) operations (microseconds). Only returned when there is traffic. |
+| avg_read_latency | float | Average latency of read operations (microseconds). Only returned when there is traffic. |
+| avg_write_latency | float | Average latency of write operations (microseconds). Only returned when there is traffic. |
+| big_del_flash | float | Rate of key deletes for keys on flash (BigRedis) (key access/sec). Only returned when BigRedis is enabled. |
+| big_del_ram | float | Rate of key deletes for keys in RAM (BigRedis) (key access/sec); this includes write misses (new keys created). Only returned when BigRedis is enabled. |
+| big_fetch_flash | float | Rate of key reads/updates for keys on flash (BigRedis) (key access/sec). Only returned when BigRedis is enabled. |
+| big_fetch_ram | float | Rate of key reads/updates for keys in RAM (BigRedis) (key access/sec). Only returned when BigRedis is enabled. |
+| big_io_ratio_flash | float | Rate of key operations on flash. Can be used to compute the ratio of I/O operations (key access/sec). Only returned when BigRedis is enabled. |
+| big_io_ratio_redis | float | Rate of Redis operations on keys. Can be used to compute the ratio of I/O operations (key access/sec). Only returned when BigRedis is enabled. |
+| big_write_flash | float | Rate of key writes for keys on flash (BigRedis) (key access/sec). Only returned when BigRedis is enabled. |
+| big_write_ram | float | Rate of key writes for keys in RAM (BigRedis) (key access/sec); this includes write misses (new keys created). Only returned when BigRedis is enabled. |
+| bigstore_io_dels | float | Rate of key deletions from flash (key access/sec). Only returned when BigRedis is enabled. |
+| bigstore_io_read_bytes | float | Throughput of I/O read operations against backend flash for all shards of the DB (BigRedis) (bytes/sec). Only returned when BigRedis is enabled. |
+| bigstore_io_reads | float | Rate of key reads from flash (key access/sec). Only returned when BigRedis is enabled. |
+| bigstore_io_write_bytes | float | Throughput of I/O write operations against backend flash for all shards of the DB (BigRedis) (bytes/sec). Only returned when BigRedis is enabled. |
+| bigstore_io_writes | float | Rate of key writes from flash (key access/sec). Only returned when BigRedis is enabled. |
+| bigstore_iops | float | Rate of I/O operations against backend flash for all shards of the DB (BigRedis) (ops/sec). Only returned when BigRedis is enabled. |
+| bigstore_kv_ops | float | Rate of value read/write/del operations against backend flash for all shards of the DB (BigRedis) (key access/sec). Only returned when BigRedis is enabled. |
+| bigstore_objs_flash | float | Value count on flash (BigRedis). Only returned when BigRedis is enabled. |
+| bigstore_objs_ram | float | Value count in RAM (BigRedis). Only returned when BigRedis is enabled. |
+| bigstore_throughput | float | Throughput of I/O operations against backend flash for all shards of the DB (BigRedis) (bytes/sec). Only returned when BigRedis is enabled.
| +| conns | float | Number of client connections to the DB’s endpoints | +| disk_frag_ratio | float | Flash fragmentation ratio (used/required). Only returned when BigRedis is enabled. | +| egress_bytes | float | Rate of outgoing network traffic to the DB’s endpoint (bytes/sec) | +| evicted_objects | float | Rate of key evictions from DB (evictions/sec) | +| expired_objects | float | Rate keys expired in DB (expirations/sec) | +| fork_cpu_system | float | % cores utilization in system mode for all Redis shard fork child processes of this database | +| fork_cpu_user | float | % cores utilization in user mode for all Redis shard fork child processes of this database | +| ingress_bytes | float | Rate of incoming network traffic to the DB’s endpoint (bytes/sec) | +| instantaneous_ops_per_sec | float | Request rate handled by all shards of the DB (ops/sec) | +| last_req_time | date, ISO_8601 format | Last request time received to the DB (ISO format 2015-07-05T22:16:18Z). Returns 1/1/1970 when unavailable. | +| last_res_time | date, ISO_8601 format | Last response time received from DB (ISO format 2015-07-05T22:16:18Z). Returns 1/1/1970 when unavailable. | +| main_thread_cpu_system | float | % cores utilization in system mode for all Redis shard main threads of this database | +| main_thread_cpu_user | float | % cores utilization in user mode for all Redis shard main threads of this database | +| mem_frag_ratio | float | RAM fragmentation ratio (RSS/allocated RAM) | +| mem_not_counted_for_evict | float | Portion of used_memory (in bytes) not counted for eviction and OOM errors | +| mem_size_lua | float | Redis Lua scripting heap size (bytes) | +| monitor_sessions_count | float | Number of client connected in monitor mode to the DB | +| no_of_expires | float | Number of volatile keys in the DB | +| no_of_keys | float | Number of keys in the DB | +| other_req | float | Rate of other (non read/write) requests on DB (ops/sec) | +| other_res | float | Rate of other (non read/write) responses on DB (ops/sec) | +| pubsub_channels | float | Count the pub/sub channels with subscribed clients | +| pubsub_patterns | float | Count the pub/sub patterns with subscribed clients | +| ram_overhead | float | Non values RAM overhead (BigRedis) (bytes). Only returned when BigRedis is enabled. | +| read_hits | float | Rate of read operations accessing an existing key (ops/sec) | +| read_misses | float | Rate of read operations accessing a nonexistent key (ops/sec) | +| read_req | float | Rate of read requests on DB (ops/sec) | +| read_res | float | Rate of read responses on DB (ops/sec) | +| shard_cpu_system | float | % cores utilization in system mode for all Redis shard processes of this database | +| shard_cpu_user | float | % cores utilization in user mode for the Redis shard process | +| total_connections_received | float | Rate of new client connections to the DB (connections/sec) | +| total_req | float | Rate of all requests on DB (ops/sec) | +| total_res | float | Rate of all responses on DB (ops/sec) | +| used_bigstore | float | Flash used by DB (BigRedis) (bytes). Only returned when BigRedis is enabled. | +| used_memory | float | Memory used by DB (in BigRedis this includes flash) (bytes) | +| used_ram | float | RAM used by DB (BigRedis) (bytes). Only returned when BigRedis is enabled. 
| +| write_hits | float | Rate of write operations accessing an existing key (ops/sec) | +| write_misses | float | Rate of write operations accessing a nonexistent key (ops/sec) | +| write_req | float | Rate of write requests on DB (ops/sec) | +| write_res | float | Rate of write responses on DB (ops/sec) | +--- +Title: Node metrics +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the node metrics used with Redis Enterprise Software REST API + calls. +linkTitle: node metrics +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/statistics/node-metrics/' +--- + +| Metric name | Type | Description | +|-------------|------|-------------| +| available_flash | float | Available flash on the node (bytes) | +| available_memory | float | Available RAM on the node (bytes) | +| avg_latency | float | Average latency of requests handled by endpoints on the node (micro-sec); returned only when there is traffic | +| bigstore_free | float | Free space of backend flash (used by flash DB's BigRedis) (bytes); returned only when BigRedis is enabled | +| bigstore_iops | float | Rate of I/O operations against backend flash for all shards which are part of a flash-based DB (BigRedis) on the node (ops/sec); returned only when BigRedis is enabled | +| bigstore_kv_ops | float | Rate of value read/write operations against backend flash for all shards which are part of a flash-based DB (BigRedis) on the node (ops/sec); returned only when BigRedis is enabled | +| bigstore_throughput | float | Throughput of I/O operations against backend flash for all shards which are part of a flash-based DB (BigRedis) on the node (bytes/sec); returned only when BigRedis is enabled | +| conns | float | Number of clients connected to endpoints on the node | +| cpu_idle | float | CPU idle time portion (0-1, multiply by 100 to get percent) | +| cpu_system | float | CPU time portion spent in kernel (0-1, multiply by 100 to get percent) | +| cpu_user | float | CPU time portion spent by users-pace processes (0-1, multiply by 100 to get percent) | +| cur_aof_rewrites | float | Number of current AOF rewrites by shards on this node | +| egress_bytes | float | Rate of outgoing network traffic to the node (bytes/sec) | +| ephemeral_storage_avail | float | Disk space available to Redis Enterprise processes on configured ephemeral disk (bytes) | +| ephemeral_storage_free | float | Free disk space on configured ephemeral disk (bytes) | +| free_memory | float | Free memory on the node (bytes) | +| ingress_bytes | float | Rate of incoming network traffic to the node (bytes/sec) | +| persistent_storage_avail | float | Disk space available to Redis Enterprise processes on configured persistent disk (bytes) | +| persistent_storage_free | float | Free disk space on configured persistent disk (bytes) | +| provisional_flash | float | Amount of flash available for new shards on this node, taking into account overbooking, max Redis servers, reserved flash, and provision and migration thresholds (bytes) | +| provisional_memory | float | Amount of RAM available for new shards on this node, taking into account overbooking, max Redis servers, reserved memory, and provision and migration thresholds (bytes) | +| total_req | float | Request rate handled by endpoints on the node (ops/sec) | +--- +Title: Cluster metrics +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the cluster metrics used with Redis Enterprise Software REST + API calls. 
+linkTitle: cluster metrics +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/statistics/cluster-metrics/' +--- + +| Metric name | Type | Description | +|-------------|------|-------------| +| available_flash | float | Sum of available flash in all nodes (bytes) | +| available_memory | float | Sum of available memory in all nodes (bytes) | +| avg_latency | float | Average latency of requests handled by all cluster endpoints (micro-sec); returned only when there is traffic | +| bigstore_free | float | Sum of free space of backend flash (used by flash DB's BigRedis) on all cluster nodes (bytes); only returned when BigRedis is enabled | +| bigstore_iops | float | Rate of I/O operations against backend flash for all shards which are part of a flash-based DB (BigRedis) in the cluster (ops/sec); returned only when BigRedis is enabled | +| bigstore_kv_ops | float | Rate of value read/write operations against back-end flash for all shards which are part of a flash based DB (BigRedis) in cluster (ops/sec); only returned when BigRedis is enabled | +| bigstore_throughput | float | Throughput I/O operations against backend flash for all shards which are part of a flash-based DB (BigRedis) in the cluster (bytes/sec); only returned when BigRedis is enabled | +| conns | float | Total number of clients connected to all cluster endpoints | +| cpu_idle | float | CPU idle time portion, the value is weighted between all nodes based on number of cores in each node (0-1, multiply by 100 to get percent) | +| cpu_system | float | CPU time portion spent in kernel on the cluster, the value is weighted between all nodes based on number of cores in each node (0-1, multiply by 100 to get percent) | +| cpu_user | float | CPU time portion spent by users-pace processes on the cluster. The value is weighted between all nodes based on number of cores in each node (0-1, multiply by 100 to get percent). | +| egress_bytes | float | Sum of rate of outgoing network traffic on all cluster nodes (bytes/sec) | +| ephemeral_storage_avail | float | Sum of disk space available to Redis Enterprise processes on configured ephemeral disk on all cluster nodes (bytes) | +| ephemeral_storage_free | float | Sum of free disk space on configured ephemeral disk on all cluster nodes (bytes) | +| free_memory | float | Sum of free memory in all cluster nodes (bytes) | +| ingress_bytes | float | Sum of rate of incoming network traffic on all cluster nodes (bytes/sec) | +| persistent_storage_avail | float | Sum of disk space available to Redis Enterprise processes on configured persistent disk on all cluster nodes (bytes) | +| persistent_storage_free | float | Sum of free disk space on configured persistent disk on all cluster nodes (bytes) | +| provisional_flash | float | Sum of provisional flash in all nodes (bytes) | +| provisional_memory | float | Sum of provisional memory in all nodes (bytes) | +| total_req | float | Request rate handled by all endpoints on the cluster (ops/sec) | +--- +Title: Statistics +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that contains metrics for clusters, databases, nodes, or shards +hideListLinks: true +linkTitle: statistics +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/statistics/' +--- + +## Statistics overview + +Clusters, databases, nodes, and shards collect various statistics at regular time intervals. 
View the statistics for these objects using `GET stats` requests to their respective endpoints: +- [Cluster stats]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/stats" >}}) +- [Database stats]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/stats" >}}) +- [Node stats]({{< relref "/operate/rs/7.4/references/rest-api/requests/nodes/stats" >}}) +- [Shard stats]({{< relref "/operate/rs/7.4/references/rest-api/requests/shards/stats" >}}) + +View endpoint stats using `GET` requests, see: +- [Endpoint stats]({{< relref "/operate/rs/7.4/references/rest-api/requests/endpoints-stats" >}}) + +### Response object + +Statistics returned from API requests always contain the following fields: +- `interval`: a string that represents the statistics time interval. Valid values include: + - 1sec + - 10sec + - 5min + - 15min + - 1hour + - 12hour + - 1week +- `stime`: a timestamp that represents the beginning of the interval, in the format "2015-05-27T12:00:00Z" +- `etime`: a timestamp that represents the end of the interval, in the format "2015-05-27T12:00:00Z" + +The statistics returned by the API also contain fields that represent the values of different metrics for an object during the specified time interval. + +More details about the metrics relevant to each object: +- [Cluster metrics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics/cluster-metrics" >}}) +- [DB metrics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics/db-metrics" >}}) +- [Node metrics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics/node-metrics" >}}) +- [Shard metrics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics/shard-metrics" >}}) + +{{}} +Certain statistics are not documented because they are for internal use only and should be ignored. Some statistics will only appear in API responses when they are relevant. +{{}} + +### Optional URL parameters + +There are several optional URL parameters you can pass to the various `GET stats` requests to filter the returned statistics. + +- `stime`: limit the start of the time range of the returned statistics +- `etime`: limit the end of the time range of the returned statistics +- `metrics`: only return the statistics for the specified metrics (comma-separated list) + +## Maximum number of samples per interval + +The system retains a maximum number of most recent samples for each interval. + +| Interval | Max samples | +|----------|-------------| +| 1sec | 10 | +| 10sec | 30 | +| 5min | 12 | +| 15min | 96 | +| 1hour | 168 | +| 12hour | 62 | +| 1week | 53 | + +The actual number of samples returned by a `GET stats` request depends on how many samples are available and any filters applied by the optional URL parameters. For example, newly created objects (clusters, nodes, databases, or shards) or a narrow time filter range will return fewer samples. + +{{}} +To reduce load generated by stats collection, relatively inactive databases or shards (less than 5 ops/sec) do not collect 1sec stats at one second intervals. Instead, they collect 1sec stats every 2-5 seconds but still retain the same maximum number of samples. +{{}} +--- +Title: Shard metrics +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the shard metrics used with Redis Enterprise Software REST + API calls. 
+linkTitle: shard metrics +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/statistics/shard-metrics/' +--- + +| Metric name | Type | Description | +|-------------|------|-------------| +| aof_rewrite_inprog | float | The number of simultaneous AOF rewrites that are in progress | +| avg_ttl | float | Estimated average time to live of a random key (msec) | +| big_del_flash | float | Rate of key deletes for keys on flash (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| big_del_ram | float | Rate of key deletes for keys in RAM (BigRedis) (key access/sec); this includes write misses (new keys created). Only returned when BigRedis is enabled. | +| big_fetch_flash | float | Rate of key reads/updates for keys on flash (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| big_fetch_ram | float | Rate of key reads/updates for keys in RAM (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| big_io_ratio_flash | float | Rate of key operations on flash. Can be used to compute the ratio of I/O operations (key access/sec). Only returned when BigRedis is enabled. | +| big_io_ratio_redis | float | Rate of Redis operations on keys. Can be used to compute the ratio of I/O operations) (key access/sec). Only returned when BigRedis is enabled. | +| big_write_flash | float | Rate of key writes for keys on flash (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| big_write_ram | float | Rate of key writes for keys in RAM (BigRedis) (key access/sec); this includes write misses (new keys created). Only returned when BigRedis is enabled. | +| bigstore_io_dels | float | Rate of key deletions from flash (key access/sec). Only returned when BigRedis is enabled. | +| bigstore_io_read_bytes | float | Throughput of I/O read operations against backend flash for all shards of the DB (BigRedis) (bytes/sec). Only returned when BigRedis is enabled. | +| bigstore_io_reads | float | Rate of key reads from flash (key access/sec). Only returned when BigRedis is enabled. | +| bigstore_io_write_bytes | float | Throughput of I/O write operations against backend flash for all shards of the DB (BigRedis) (bytes/sec). Only returned when BigRedis is enabled. | +| bigstore_io_writes | float | Rate of key writes from flash (key access/sec). Only returned when BigRedis is enabled. | +| bigstore_iops | float | Rate of I/O operations against backend flash for all shards of the DB (BigRedis) (ops/sec). Only returned when BigRedis is enabled. | +| bigstore_kv_ops | float | Rate of value read/write/del operations against backend flash for all shards of the DB (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| bigstore_objs_flash | float | Key count on flash (BigRedis). Only returned when BigRedis is enabled. | +| bigstore_objs_ram | float | Key count in RAM (BigRedis). Only returned when BigRedis is enabled. | +| bigstore_throughput | float | Throughput of I/O operations against backend flash for all shards of the DB (BigRedis) (bytes/sec). Only returned when BigRedis is enabled. | +| blocked_clients | float | Count the clients waiting on a blocking call | +| connected_clients | float | Number of client connections to the specific shard | +| disk_frag_ratio | float | Flash fragmentation ratio (used/required). Only returned when BigRedis is enabled. 
| +| evicted_objects | float | Rate of key evictions from DB (evictions/sec) | +| expired_objects | float | Rate keys expired in DB (expirations/sec) | +| fork_cpu_system | float | % cores utilization in system mode for the Redis shard fork child process | +| fork_cpu_user | float | % cores utilization in user mode for the Redis shard fork child process | +| last_save_time | float | Time of the last RDB save | +| main_thread_cpu_system | float | % cores utilization in system mode for the Redis shard main thread | +| main_thread_cpu_user | float | % cores utilization in user mode for the Redis shard main thread | +| mem_frag_ratio | float | RAM fragmentation ratio (RSS/allocated RAM) | +| mem_not_counted_for_evict | float | Portion of used_memory (in bytes) not counted for eviction and OOM errors | +| mem_size_lua | float | Redis Lua scripting heap size (bytes) | +| no_of_expires | float | Number of volatile keys on the shard | +| no_of_keys | float | Number of keys in DB | +| pubsub_channels | float | Count the pub/sub channels with subscribed clients | +| pubsub_patterns | float | Count the pub/sub patterns with subscribed clients | +| rdb_changes_since_last_save | float | Count changes since last RDB save | +| read_hits | float | Rate of read operations accessing an existing key (ops/sec) | +| read_misses | float | Rate of read operations accessing a nonexistent key (ops/sec) | +| shard_cpu_system | float | % cores utilization in system mode for the Redis shard process | +| shard_cpu_user | float | % cores utilization in user mode for the Redis shard process | +| total_req | float | Rate of operations on DB (ops/sec) | +| used_memory | float | Memory used by shard (in BigRedis this includes flash) (bytes) | +| used_memory_peak | float | The largest amount of memory used by this shard (bytes) | +| used_memory_rss | float | Resident set size of this shard (bytes) | +| write_hits | float | Rate of write operations accessing an existing key (ops/sec) | +| write_misses | float | Rate of write operations accessing a nonexistent key (ops/sec) | +--- +Title: Alert settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the alert_settings object used with Redis Enterprise Software + REST API calls. 
+linkTitle: alert_settings +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/cluster/alert_settings/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| cluster_certs_about_to_expire | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Cluster certificate will expire in x days | +| cluster_even_node_count | boolean (default: false) | True high availability requires an odd number of nodes in the cluster | +| cluster_flash_overcommit | boolean (default: false) | Flash memory committed to databases is larger than cluster total flash memory | +| cluster_inconsistent_redis_sw | boolean (default: false) | Some shards in the cluster are running different versions of Redis software | +| cluster_inconsistent_rl_sw | boolean (default: false) | Some nodes in the cluster are running different versions of Redis Enterprise software | +| cluster_internal_bdb | boolean (default: false) | Issues with internal cluster databases | +| cluster_multiple_nodes_down | boolean (default: false) | Multiple cluster nodes are down (this might cause data loss) | +| cluster_node_joined | boolean (default: false) | New node joined the cluster | +| cluster_node_remove_abort_completed | boolean (default: false) | Cancel node remove operation completed | +| cluster_node_remove_abort_failed | boolean (default: false) | Cancel node remove operation failed | +| cluster_node_remove_completed | boolean (default: false) | Node removed from the cluster | +| cluster_node_remove_failed | boolean (default: false) | Failed to remove a node from the cluster | +| cluster_ocsp_query_failed | boolean (default: false) | Failed to query the OCSP server | +| cluster_ocsp_status_revoked | boolean (default: false) | OCSP certificate status is REVOKED | +| cluster_ram_overcommit | boolean (default: false) | RAM committed to databases is larger than cluster total RAM | +| cluster_too_few_nodes_for_replication | boolean (default: false) | Replication requires at least 2 nodes in the cluster | +| node_aof_slow_disk_io | boolean (default: false) | AOF reaching disk I/O limits +| node_checks_error | boolean (default: false) | Some node checks have failed | +| node_cpu_utilization | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Node CPU utilization has reached the threshold value (% of the utilization limit) | +| node_ephemeral_storage | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Node ephemeral storage has reached the threshold value (% of the storage limit) | +| node_failed | boolean (default: false) | Node failed | +| node_free_flash | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Node flash storage has reached the threshold value (% of the storage limit) | +| node_insufficient_disk_aofrw | boolean (default: false) | Insufficient AOF disk space | +| node_internal_certs_about_to_expire | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object| Internal certificate on node will expire in x days | +| node_memory | [cluster_alert_settings_with_threshold]({{< relref 
"/operate/rs/7.4/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Node memory has reached the threshold value (% of the memory limit) | +| node_net_throughput | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Node network throughput has reached the threshold value (bytes/s) | +| node_persistent_storage | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Node persistent storage has reached the threshold value (% of the storage limit) | +--- +Title: Cluster alert settings with threshold object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the cluster_alert_settings_with_threshold object used with + Redis Enterprise Software REST API calls. +linkTitle: cluster_alert_settings_with_threshold +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| enabled | boolean (default: false) | Alert enabled or disabled | +| threshold | string | Threshold for alert going on/off | +--- +Title: Cluster object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a cluster +hideListLinks: true +linkTitle: cluster +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/cluster/' +--- + +An API object that represents the cluster. + +| Name | Type/Value | Description | +|------|------------|-------------| +| alert_settings | [alert_settings]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster/alert_settings" >}}) object | Cluster and node alert settings | +| bigstore_driver | 'speedb'
'rocksdb' | Storage engine for Auto Tiering | +| cluster_ssh_public_key | string | Cluster's autogenerated SSH public key | +| cm_port | integer, (range: 1024-65535) | UI HTTPS listening port | +| cm_session_timeout_minutes | integer (default: 15) | The timeout (in minutes) for the session to the CM | +| cnm_http_max_threads_per_worker | integer (default: 10) | Maximum number of threads per worker in the `cnm_http` service (deprecated) | +| cnm_http_port | integer, (range: 1024-65535) | API HTTP listening port | +| cnm_http_workers | integer (default: 1) | Number of workers in the `cnm_http` service | +| cnm_https_port | integer, (range: 1024-65535) | API HTTPS listening port | +| control_cipher_suites | string | Specifies the enabled ciphers for the control plane. The ciphers are specified in the format understood by the BoringSSL library. | +| control_cipher_suites_tls_1_3 | string | Specifies the enabled TLS 1.3 ciphers for the control plane. The ciphers are specified in the format understood by the BoringSSL library. (read-only) | +| crdb_coordinator_port | integer, (range: 1024-65535) (default: 9081) | CRDB coordinator port | +| crdt_rest_client_retries | integer | Maximum number of retries for the REST client used by the Active-Active management API | +| crdt_rest_client_timeout | integer | Timeout for REST client used by the Active-Active management API | +| created_time | string | Cluster creation date (read-only) | +| data_cipher_list | string | Specifies the enabled ciphers for the data plane. The ciphers are specified in the format understood by the OpenSSL library. | +| data_cipher_suites_tls_1_3 | string | Specifies the enabled TLS 1.3 ciphers for the data plane. | +| debuginfo_path | string | Path to a local directory used when generating support packages | +| default_non_sharded_proxy_policy | string (default: single) | Default proxy_policy for newly created non-sharded databases' endpoints (read-only) | +| default_sharded_proxy_policy | string (default: all-master-shards) | Default proxy_policy for newly created sharded databases' endpoints (read-only) | +| email_alerts | boolean (default: false) | Send node/cluster email alerts (requires valid SMTP and email_from settings) | +| email_from | string | Sender email for automated emails | +| encrypt_pkeys | boolean (default: false) | Enable or turn off encryption of private keys | +| envoy_admin_port | integer, (range: 1024-65535) | Envoy admin port. Changing this port during runtime might result in an empty response because envoy serves as the cluster gateway.| +| envoy_max_downstream_connections | integer, (range: 100-2048) | The max downstream connections envoy is allowed to open | +| envoy_mgmt_server_port | integer, (range: 1024-65535) | Envoy management server port| +| gossip_envoy_admin_port | integer, (range: 1024-65535) | Gossip envoy admin port| +| handle_redirects | boolean (default: false) | Handle API HTTPS requests and redirect to the master node internally | +| http_support | boolean (default: false) | Enable or turn off HTTP support | +| min_control_TLS_version | '1.2'
'1.3' | The minimum version of TLS protocol which is supported at the control path | +| min_data_TLS_version | '1.2'
'1.3' | The minimum version of TLS protocol which is supported at the data path | +| min_sentinel_TLS_version | '1.2'
'1.3' | The minimum version of TLS protocol which is supported at the data path | +| name | string | Cluster's fully qualified domain name (read-only) | +| password_complexity | boolean (default: false) | Enforce password complexity policy | +| password_expiration_duration | integer (default: 0) | The number of days a password is valid until the user is required to replace it | +| proxy_certificate | string | Cluster's proxy certificate | +| proxy_max_ccs_disconnection_time | integer | Cluster-wide proxy timeout policy between proxy and CCS | +| rack_aware | boolean | Cluster operates in a rack-aware mode (read-only) | +| reserved_ports | array of strings | List of reserved ports and/or port ranges to avoid using for database endpoints (for example `"reserved_ports": ["11000", "13000-13010"]`) | +| s3_url | string | Specifies the URL for S3 export and import | +| saslauthd_ldap_conf | string | saslauthd LDAP configuration | +| sentinel_cipher_suites | array | Specifies the list of enabled ciphers for the sentinel service. The supported ciphers are those implemented by the [cipher_suites.go]() package. | +| sentinel_cipher_suites_tls_1_3 | string | Specifies the list of enabled TLS 1.3 ciphers for the discovery (sentinel) service. The supported ciphers are those implemented by the [cipher_suites.go]() package.(read-only) | +| sentinel_tls_mode | 'allowed'
'disabled'
'required' | Determines whether the discovery service allows, blocks, or requires TLS connections (previously named `sentinel_ssl_policy`)
**allowed**: Allows both TLS and non-TLS connections
**disabled**: Allows only non-TLS connections
**required**: Allows only TLS connections | +| slave_ha | boolean (default: false) | Enable the replica high-availability mechanism (read-only) | +| slave_ha_bdb_cooldown_period | integer (default: 86400) | Time in seconds between runs of the replica high-availability mechanism on different nodes on the same database (read-only) | +| slave_ha_cooldown_period | integer (default: 3600) | Time in seconds between runs of the replica high-availability mechanism on different nodes (read-only) | +| slave_ha_grace_period | integer (default: 900) | Time in seconds between a node failure and when the replica high-availability mechanism starts relocating shards (read-only) | +| slowlog_in_sanitized_support | boolean | Whether to include slowlogs in the sanitized support package | +| smtp_host | string | SMTP server for automated emails | +| smtp_password | string | SMTP server password | +| smtp_port | integer | SMTP server port for automated emails | +| smtp_tls_mode | 'none'
'starttls'
'tls' | Specifies which TLS mode to use for SMTP access | +| smtp_use_tls | boolean (default: false) | Use TLS for SMTP access (deprecated as of Redis Enterprise v4.3.3, use smtp_tls_mode field instead) | +| smtp_username | string | SMTP server username (pattern does not allow special characters &,\<,>,") | +| syncer_certificate | string | Cluster's syncer certificate | +| upgrade_mode | boolean (default: false) | Is cluster currently in upgrade mode | +| use_external_ipv6 | boolean (default: true) | Should redislabs services listen on ipv6 | +| use_ipv6 | boolean (default: true) | Should redislabs services listen on ipv6 (deprecated as of Redis Enterprise v6.4.2, replaced with use_external_ipv6) | +| wait_command | boolean (default: true) | Supports Redis wait command (read-only) | +--- +Title: LDAP mapping object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a mapping between an LDAP group and roles +linkTitle: ldap_mapping +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/ldap_mapping/' +--- + +An API object that represents an [LDAP mapping]({{< relref "/operate/rs/7.4/security/access-control/ldap/map-ldap-groups-to-roles" >}}) between an LDAP group and [roles]({{< relref "/operate/rs/7.4/references/rest-api/objects/role" >}}). + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | LDAP mapping's unique ID | +| account_id | integer | SM account ID | +| action_uid | string | Action UID. If it exists, progress can be tracked by the `GET` `/actions/{uid}` API (read-only) | +| bdbs_email_alerts | complex object | UIDs of databases that associated email addresses will receive alerts for | +| cluster_email_alerts | boolean | Activate cluster email alerts for an associated email | +| dn | string | An LDAP group's distinguished name | +| email | string | Email address used for alerts (if set) | +| email_alerts | boolean (default: true) | Activate email alerts for an associated email | +| name | string | Role's name | +| role_uids | array of integers | List of role UIDs associated with the LDAP group | +--- +Title: Cluster identity object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the cluster_identity object used with Redis Enterprise Software + REST API calls. +linkTitle: cluster_identity +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bootstrap/cluster_identity/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| name | string | Fully qualified cluster name. Limited to 64 characters and must comply with the IETF's RFC 952 standard and section 2.1 of the RFC 1123 standard. | +| nodes | array of strings | Array of IP addresses of existing cluster nodes | +| wait_command | boolean (default: true) | Supports Redis wait command | +--- +Title: Identity object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the identity object used with Redis Enterprise Software REST + API calls. +linkTitle: identity +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bootstrap/identity/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Assumed node's UID to join cluster. Used to replace a dead node with a new one. 
| +| accept_servers | boolean (default: true) | If true, no shards will be created on the node | +| addr | string | Internal IP address of node | +| external_addr | complex object | External IP addresses of node. `GET` `/jsonschema` to retrieve the object's structure. | +| name | string | Node's name | +| override_rack_id | boolean | When replacing an existing node in a rack-aware cluster, allows the new node to be located in a different rack | +| rack_id | string | Rack ID, overrides cloud config | +| use_internal_ipv6 | boolean (default: false) | Node uses IPv6 for internal communication | +--- +Title: Credentials object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the credentials object used with Redis Enterprise Software + REST API calls. +linkTitle: credentials +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bootstrap/credentials/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| password | string | Admin password | +| username | string | Admin username (pattern does not allow special characters &,\<,>,") | +--- +Title: Limits object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the limits object used with Redis Enterprise Software REST + API calls. +linkTitle: limits +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bootstrap/limits/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| max_listeners | integer (default: 100) | Max allowed listeners on node | +| max_redis_servers | integer (default: 100) | Max allowed Redis servers on node | +--- +Title: Paths object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the paths object used with Redis Enterprise Software REST API + calls. +linkTitle: paths +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bootstrap/paths/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| bigstore_path | string | Bigredis storage path | +| ccs_persistent_path | string | Persistent storage path of CCS | +| ephemeral_path | string | Ephemeral storage path | +| persistent_path | string | Persistent storage path | +--- +Title: Policy object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the policy object used with Redis Enterprise Software REST + API calls. +linkTitle: policy +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bootstrap/policy/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| default_fork_evict_ram | boolean (default: false) | If true, the databases should evict data from RAM to ensure successful replication or persistence | +| default_non_sharded_proxy_policy | **'single'**
'all-master-shards'
'all-nodes' | Default proxy_policy for newly created non-sharded databases' endpoints | +| default_sharded_proxy_policy | 'single'
**'all-master-shards'**
'all-nodes' | Default proxy_policy for newly created sharded databases' endpoints | +| default_shards_placement | 'dense'
**'sparse'** | Default shards_placement for newly created databases | +| rack_aware | boolean | Cluster rack awareness | +| shards_overbooking | boolean (default: true) | If true, all databases' memory_size settings are ignored during shards placement | +--- +Title: Node identity object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the node_identity object used with Redis Enterprise Software + REST API calls. +linkTitle: node_identity +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bootstrap/node_identity/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| bigstore_driver | 'rocksdb' | Bigstore driver name or none (deprecated) | +| bigstore_enabled | boolean | Bigstore enabled or disabled | +| identity | [identity]({{< relref "/operate/rs/7.4/references/rest-api/objects/bootstrap/identity" >}}) object | Node identity | +| limits | [limits]({{< relref "/operate/rs/7.4/references/rest-api/objects/bootstrap/limits" >}}) object | Node limits | +| paths | [paths]({{< relref "/operate/rs/7.4/references/rest-api/objects/bootstrap/paths" >}}) object | Storage paths object | +--- +Title: Bootstrap object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for bootstrap configuration +hideListLinks: true +linkTitle: bootstrap +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bootstrap/' +--- + +A bootstrap configuration object. + +| Name | Type/Value | Description | +|------|------------|-------------| +| action | 'create_cluster'
'join_cluster'
'recover_cluster' | Action to perform | +| cluster | [cluster_identity]({{< relref "/operate/rs/7.4/references/rest-api/objects/bootstrap/cluster_identity" >}}) object | Cluster to join or create | +| cnm_https_port | integer | Port to join a cluster with non-default cnm_https port | +| crdb_coordinator_port | integer, (range: 1024-65535) (default: 9081) | CRDB coordinator port | +| credentials | [credentials]({{< relref "/operate/rs/7.4/references/rest-api/objects/bootstrap/credentials" >}}) object | Cluster admin credentials | +| dns_suffixes | {{}} +[{ + "name": string, + "cluster_default": boolean, + "use_aaaa_ns": boolean, + "use_internal_addr": boolean, + "slaves": array +}, ...] +{{}} | Explicit configuration of DNS suffixes
**name**: DNS suffix name
**cluster_default**: Should this suffix be the default cluster suffix
**use_aaaa_ns**: Should AAAA records be published for NS records
**use_internal_addr**: Should internal cluster IPs be published for databases
**slaves**: List of replica servers that should be published as NS and notified | +| envoy_admin_port | integer, (range: 1024-65535) | Envoy admin port. Changing this port during runtime might result in an empty response because envoy serves as the cluster gateway.| +| envoy_mgmt_server_port | integer, (range: 1024-65535) | Envoy management server port| +| gossip_envoy_admin_port | integer, (range: 1024-65535) | Gossip envoy admin port| +| license | string | License string. If not provided, a trial license is set by default. | +| max_retries | integer | Max number of retries in case of recoverable errors | +| node | [node_identity]({{< relref "/operate/rs/7.4/references/rest-api/objects/bootstrap/node_identity" >}}) object | Node description | +| policy | [policy]({{< relref "/operate/rs/7.4/references/rest-api/objects/bootstrap/policy" >}}) object | Policy object | +| recovery_filename | string | Name of backup file to recover from | +| required_version | string | This node can only join the cluster if all nodes in the cluster have a version greater than the required_version | +| retry_time | integer | Max waiting time between retries (in seconds) | + + +--- +Title: Check result object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that contains the results of a cluster check +linkTitle: check_result +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/check_result/' +--- + +Cluster check result + +| Name | Type/Value | Description | +|------|------------|-------------| +| cluster_test_result | boolean | Indication if any of the tests failed | +| nodes | {{}} +[{ + "node_uid": integer, + "result": boolean, + "error": string +}, ...] +{{}} | Nodes results | +--- +Title: CRDB task object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a CRDB task +linkTitle: crdb_task +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/crdb_task/' +--- + +An object that represents an Active-Active (CRDB) task. + +| Name | Type/Value | Description | +|------|------------|-------------| +| id | string | CRDB task ID (read only) | +| crdb_guid | string | Globally unique Active-Active database ID (GUID) (read-only) | +| errors | {{}} +[{ + "cluster_name": string, + "description": string, + "error_code": string +}, ...] {{}} | Details for errors that occurred on a cluster | +| status | 'queued'
'started'
'finished'
'failed' | CRDB task status (read only) | +--- +Title: MDNS server object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the mdns_server object used with Redis Enterprise Software + REST API calls. +linkTitle: mdns_server +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/services_configuration/mdns_server/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the multicast DNS server | +--- +Title: CRDB worker object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the crdb_worker object used with Redis Enterprise Software + REST API calls. +linkTitle: crdb_worker +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/services_configuration/crdb_worker/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the CRDB worker processes | +--- +Title: Stats archiver object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the stats_archiver object used with Redis Enterprise Software + REST API calls. +linkTitle: stats_archiver +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/services_configuration/stats_archiver/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the stats archiver service | +--- +Title: CRDB coordinator object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the crdb_coordinator object used with Redis Enterprise Software + REST API calls. +linkTitle: crdb_coordinator +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/services_configuration/crdb_coordinator/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the CRDB coordinator process | +--- +Title: CM server object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the cm_server object used with Redis Enterprise Software REST + API calls. +linkTitle: cm_server +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/services_configuration/cm_server/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the CM server | +--- +Title: PDNS server object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the pdns_server object used with Redis Enterprise Software + REST API calls. +linkTitle: pdns_server +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/services_configuration/pdns_server/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the PDNS server | +--- +Title: Alert manager object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the alert_mgr object used with Redis Enterprise Software REST API calls. +linkTitle: alert_mgr +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/services_configuration/alert_mgr/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the alert manager processes | +--- +Title: Services configuration object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for optional cluster services settings +hideListLinks: true +linkTitle: services_configuration +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/services_configuration/' +--- + +Optional cluster services settings + +| Name | Type/Value | Description | +|------|------------|-------------| +| alert_mgr | [alert_mgr]({{< relref "/operate/rs/7.4/references/rest-api/objects/services_configuration/alert_mgr" >}}) object | Whether to enable/disable the alert manager processes | +| cm_server | [cm_server]({{< relref "/operate/rs/7.4/references/rest-api/objects/services_configuration/cm_server" >}}) object | Whether to enable/disable the CM server | +| crdb_coordinator | [crdb_coordinator]({{< relref "/operate/rs/7.4/references/rest-api/objects/services_configuration/crdb_coordinator" >}}) object | Whether to enable/disable the CRDB coordinator process | +| crdb_worker | [crdb_worker]({{< relref "/operate/rs/7.4/references/rest-api/objects/services_configuration/crdb_worker" >}}) object | Whether to enable/disable the CRDB worker processes | +| mdns_server | [mdns_server]({{< relref "/operate/rs/7.4/references/rest-api/objects/services_configuration/mdns_server" >}}) object | Whether to enable/disable the multicast DNS server | +| pdns_server | [pdns_server]({{< relref "/operate/rs/7.4/references/rest-api/objects/services_configuration/pdns_server" >}}) object | Whether to enable/disable the PDNS server | +| stats_archiver | [stats_archiver]({{< relref "/operate/rs/7.4/references/rest-api/objects/services_configuration/stats_archiver" >}}) object | Whether to enable/disable the stats archiver service | +--- +Title: Alert object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that contains alert info +linkTitle: alert +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/alert/' +--- + +You can view, configure, and enable various alerts for the cluster. + +Alerts are bound to a cluster object (such as a [BDB]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}) or [node]({{< relref "/operate/rs/7.4/references/rest-api/objects/node" >}})), and the cluster's state determines whether the alerts turn on or off. + + Name | Type/Value | Description | Writable +|-------|------------|-------------|----------| +| change_time | string | Timestamp when alert state last changed | | +| change_value | object | Contains data relevant to the evaluation time when the alert went on/off (thresholds, sampled values, etc.) | | +| enabled | boolean | If true, alert is enabled | x | +| severity | 'DEBUG'
'INFO'
'WARNING'
'ERROR'
'CRITICAL' | The alert's severity | | +| state | boolean | If true, alert is currently triggered | | +| threshold | string | Represents an alert threshold when applicable | x | +--- +Title: JWT authorize object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for user authentication or a JW token refresh request +linkTitle: jwt_authorize +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/jwt_authorize/' +--- + +An API object for user authentication or a JW token refresh request. + +| Name | Type/Value | Description | +|------|------------|-------------| +| password | string | The user’s password (required) | +| ttl | integer (range: 1-86400) (default: 300) | Time to live - The amount of time in seconds the token will be valid | +| username | string | The user’s username (required) | +--- +Title: Node object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a node in the cluster +linkTitle: node +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/node/' +--- + +An API object that represents a node in the cluster. + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Cluster unique ID of node (read-only) | +| accept_servers | boolean (default: true) | The node only accepts new shards if `accept_servers` is `true` | +| addr | string | Internal IP address of node | +| architecture | string | Hardware architecture (read-only) | +| bigredis_storage_path | string | Flash storage path (read-only) | +| bigstore_driver | 'ibm-capi-ga1'
'ibm-capi-ga2'
'ibm-capi-ga4'
'speedb'
'rocksdb' | Bigstore driver name or none (deprecated as of Redis Enterprise v7.2, use the [cluster object]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster" >}})'s bigstore_driver instead) | +| bigstore_size | integer | Storage size of bigstore storage (read-only) | +| cores | integer | Total number of CPU cores (read-only) | +| ephemeral_storage_path | string | Ephemeral storage path (read-only) | +| ephemeral_storage_size | number | Ephemeral storage size (bytes) (read-only) | +| external_addr | complex object | External IP addresses of node. `GET` `/jsonschema` to retrieve the object's structure. | +| max_listeners | integer | Maximum number of listeners on the node | +| max_redis_servers | integer | Maximum number of shards on the node | +| os_family | 'rhel'
'ubuntu'
'amzn' | Operating system family (read-only) | +| os_name | string | Operating system name (read-only) | +| os_semantic_version | string | Full version number (read-only) | +| os_version | string | Installed OS version (human-readable) (read-only) | +| persistent_storage_path | string | Persistent storage path (read-only) | +| persistent_storage_size | number | Persistent storage size (bytes) (read- only) | +| public_addr | string | Public IP address of node (deprecated as of Redis Enterprise v4.3.3, use external_addr instead) | +| rack_id | string | Rack ID where node is installed | +| recovery_path | string | Recovery files path | +| shard_count | integer | Number of shards on the node (read-only) | +| shard_list | array of integers | Cluster unique IDs of all node shards | +| software_version | string | Installed Redis Enterprise cluster software version (read-only) | +| status | 'active'
'decommissioning'
'down'
'provisioning' | Node status (read-only) | +| supported_database_versions | {{}} +[{ + "db_type": string, + "version": string +}, ...] +{{}} | Versions of Redis Open Source databases supported by Redis Enterprise Software on the node (read-only)
**db_type**: Type of database
**version**: Version of database | +| system_time | string | System time (UTC) (read-only) | +| total_memory | integer | Total memory of node (bytes) (read-only) | +| uptime | integer | System uptime (seconds) (read-only) | +| use_internal_ipv6 | boolean (default: false) | Node uses IPv6 for internal communication. Value is taken from bootstrap identity (read-only) | +--- +Title: Cluster settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for cluster resource management settings +linkTitle: cluster_settings +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/cluster_settings/' +--- + +Cluster resources management policy + +| Name | Type/Value | Description | +|------|------------|-------------| +| acl_pubsub_default | `resetchannels`
`allchannels` | Default pub/sub ACL rule for all databases in the cluster:
•`resetchannels` blocks access to all channels (restrictive)
•`allchannels` allows access to all channels (permissive) | +| auto_recovery | boolean (default: false) | Defines whether to use automatic recovery after shard failure | +| automatic_node_offload | boolean (default: true) | Defines whether the cluster will automatically migrate shards from a node, in case the node is overbooked | +| bigstore_migrate_node_threshold | integer | Minimum free memory (excluding reserved memory) allowed on a node before automatic migration of shards from it to free more memory | +| bigstore_migrate_node_threshold_p | integer | Minimum free memory (excluding reserved memory) allowed on a node before automatic migration of shards from it to free more memory | +| bigstore_provision_node_threshold | integer | Minimum free memory (excluding reserved memory) allowed on a node before new shards can no longer be added to it | +| bigstore_provision_node_threshold_p | integer | Minimum free memory (excluding reserved memory) allowed on a node before new shards can no longer be added to it | +| data_internode_encryption | boolean | Enable/deactivate encryption of the data plane internode communication | +| db_conns_auditing | boolean | [Audit connections]({{< relref "/operate/rs/7.4/security/audit-events" >}}) for new databases by default if set to true. | +| default_concurrent_restore_actions | integer | Default number of restore actions allowed at the same time. Set to 0 to allow any number of simultaneous restore actions. | +| default_fork_evict_ram | boolean | If true, the bdbs should evict data from RAM to ensure successful replication or persistence | +| default_non_sharded_proxy_policy | `single`

`all-master-shards`

`all-nodes` | Default proxy_policy for newly created non-sharded databases' endpoints | +| default_provisioned_redis_version | string | Default Redis version | +| default_sharded_proxy_policy | `single`

`all-master-shards`

`all-nodes` | Default proxy_policy for newly created sharded databases' endpoints | +| default_shards_placement | `dense`
`sparse` | Default shards_placement for newly created databases |
+| endpoint_rebind_propagation_grace_time | integer | Time to wait between the addition and removal of a proxy |
+| failure_detection_sensitivity | `high`
`low` | Predefined thresholds and timeouts for failure detection (previously known as `watchdog_profile`)
• `high` (previously `local-network`) – high failure detection sensitivity, lower thresholds, faster failure detection and failover
• `low` (previously `cloud`) – low failure detection sensitivity, higher tolerance for latency variance (also called network jitter) | +| hide_user_data_from_log | boolean (default: false) | Set to `true` to enable the `hide-user-data-from-log` Redis configuration setting, which avoids logging user data | +| login_lockout_counter_reset_after | integer | Number of seconds that must elapse between failed sign in attempts before the lockout counter is reset to 0. | +| login_lockout_duration | integer | Duration (in secs) of account lockout. If set to 0, the account lockout will persist until released by an admin. | +| login_lockout_threshold | integer | Number of failed sign in attempts allowed before locking a user account | +| max_saved_events_per_type | integer | Maximum saved events per event type | +| max_simultaneous_backups | integer (default: 4) | Maximum number of backup processes allowed at the same time | +| parallel_shards_upgrade | integer | Maximum number of shards to upgrade in parallel | +| persistence_cleanup_scan_interval | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the Redis cleanup schedule | +| persistent_node_removal | boolean | When removing a node, wait for persistence files to be created for all migrated shards | +| rack_aware | boolean | Cluster operates in a rack-aware mode | +| redis_migrate_node_threshold | integer | Minimum free memory (excluding reserved memory) allowed on a node before automatic migration of shards from it to free more memory | +| redis_migrate_node_threshold_p | integer | Minimum free memory (excluding reserved memory) allowed on a node before automatic migration of shards from it to free more memory | +| redis_provision_node_threshold | integer | Minimum free memory (excluding reserved memory) allowed on a node before new shards can no longer be added to it | +| redis_provision_node_threshold_p | integer | Minimum free memory (excluding reserved memory) allowed on a node before new shards can no longer be added to it | +| redis_upgrade_policy | **`major`**
`latest` | Create/upgrade Redis Enterprise software on databases in the cluster by compatibility with major versions or latest versions of Redis Open Source | +| resp3_default | boolean (default: true) | Determines the default value of the `resp3` option upon upgrading a database to version 7.2 | +| shards_overbooking | boolean | If true, all databases' memory_size is ignored during shards placement | +| show_internals | boolean | Show internal databases (and their shards and endpoints) REST APIs | +| slave_ha | boolean | Enable the replica high-availability mechanism. Deprecated as of Redis Enterprise Software v7.2.4. | +| slave_ha_bdb_cooldown_period | integer | Time in seconds between runs of the replica high-availability mechanism on different nodes on the same database | +| slave_ha_cooldown_period | integer | Time in seconds between runs of the replica high-availability mechanism on different nodes on the same database | +| slave_ha_grace_period | integer | Time in seconds between a node failure and when the replica high-availability mechanism starts relocating shards | +--- +Title: Module object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a Redis module +linkTitle: module +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/module/' +--- + +Represents a [Redis module]({{< relref "/operate/oss_and_stack/stack-with-enterprise" >}}). + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | string | Cluster unique ID of module | +| architecture | string | Architecture used to compile the module | +| author | string | Module creator | +| capabilities | array of strings | List of capabilities supported by this module | +| capability_name | string | Short description of module functionality | +| command_line_args | string | Command line arguments passed to the module | +| config_command | string | Name of command to configure module arguments at runtime | +| dependencies | object dependencies | Module dependencies | +| description | string | Short description of the module +| display_name | string | Name of module for display purposes | +| email | string | Author's email address | +| homepage | string | Module's homepage | +| is_bundled | boolean | Whether module came bundled with a version of Redis Enterprise | +| license | string | Module is distributed under this license +| min_redis_pack_version | string | Minimum Redis Enterprise Software cluster version required by this module | +| min_redis_version | string | Minimum Redis database version required by this module | +| module_file | string | Module filename | +| module_name | `search`
`ReJSON`
`graph`
`timeseries`
`bf` | Module's name
| +| os | string | Operating system used to compile the module | +| os_list | array of strings | List of supported operating systems | +| semantic_version | string | Module's semantic version | +| sha256 | string | SHA256 of module binary | +| version | integer | Module's version | +--- +Title: Suffix object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a DNS suffix +linkTitle: suffix +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/suffix/' +--- + +An API object that represents a DNS suffix in the cluster. + +| Name | Type/Value | Description | +|------|------------|-------------| +| default | boolean | Suffix is the default suffix for the cluster (read-only) | +| internal | boolean | Does the suffix point to internal IP addresses (read-only) | +| mdns | boolean | Support for multicast DNS (read-only) | +| name | string | Unique suffix name that represents its zone (read-only) | +| slaves | array of strings | Frontend DNS servers to be updated by this suffix | +| use_aaaa_ns | boolean | Suffix uses AAAA NS entries (read-only) | +--- +Title: Database connection auditing configuration object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for database connection auditing settings +linkTitle: db_conns_auditing_config +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/db-conns-auditing-config/' +--- + +Database connection auditing configuration + +| Name | Type/Value | Description | +|------|------------|-------------| +| audit_address | string | TCP/IP address where one can listen for notifications. | +| audit_port | integer | Port where one can listen for notifications. | +| audit_protocol | `TCP`
`local` | Protocol used to process notifications. For production systems, `TCP` is the only valid value. | +| audit_reconnect_interval | integer | Interval (in seconds) between attempts to reconnect to the listener. Default is 1 second. | +| audit_reconnect_max_attempts | integer | Maximum number of attempts to reconnect. Default is 0 (infinite). | +--- +Title: LDAP object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that contains the cluster's LDAP configuration +linkTitle: ldap +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/ldap/' +--- + +An API object that represents the cluster's [LDAP]({{< relref "/operate/rs/7.4/security/access-control/ldap" >}}) configuration. + +| Name | Type/Value | Description | +|------|------------|-------------| +| bind_dn | string | DN used when binding with the LDAP server to run queries | +| bind_pass | string | Password used when binding with the LDAP server to run queries | +| ca_cert | string | PEM-encoded CA certificate(s) used to validate TLS connections to the LDAP server | +| cache_ttl | integer (default: 300) | Maximum TTL (in seconds) of cached entries | +| control_plane | boolean (default: false) | Use LDAP for user authentication/authorization in the control plane | +| data_plane | boolean (default: false) | Use LDAP for user authentication/authorization in the data plane | +| directory_timeout_s | integer (range: 5-60) (default: 5) | The connection timeout to the LDAP server when authenticating a user, in seconds | +| dn_group_attr | string | The name of an attribute of the LDAP user entity that contains a list of the groups that user belongs to. (Mutually exclusive with "dn_group_query") | +| dn_group_query | complex object | An LDAP search query for mapping from a user DN to the groups the user is a member of. The substring "%D" in the filter will be replaced with the user's DN. (Mutually exclusive with "dn_group_attr") | +| starttls | boolean (default: false) | Use StartTLS negotiation for the LDAP connection | +| uris | array of strings | URIs of LDAP servers that only contain the schema, host, and port | +| user_dn_query | complex object | An LDAP search query for mapping from a username to a user DN. The substring "%u" in the filter will be replaced with the username. (Mutually exclusive with "user_dn_template") | +| user_dn_template | string | A string template that maps between the username, provided to the cluster for authentication, and the LDAP DN. The substring "%u" will be replaced with the username. (Mutually exclusive with "user_dn_query") | +--- +Title: CRDB cluster info object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents Active-Active cluster info +linkTitle: cluster_info +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/crdb/cluster_info/' +--- + +Configuration details for a cluster that is part of an Active-Active database. + +| Name | Type/Value | Description | +|------|------------|-------------| +| credentials | {{}} +{ + "username": string, + "password": string +} {{}} | Cluster access credentials (required) | +| name | string | Cluster fully qualified name, used to uniquely identify the cluster. Typically this is the same as the hostname used in the URL, although in some configruations the URL may point to a different name/address. (required) | +| replication_endpoint | string | Address to use for peer replication. If not specified, it is assumed that standard cluster naming conventions apply. 
| +| replication_tls_sni | string | Cluster SNI for TLS connections | +| url | string | Cluster access URL (required) | +--- +Title: CRDB health report configuration object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents the database configuration to include in an + Active-Active database health report. +linkTitle: health_report_configuration +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/crdb/health_report/health_report_configuration/' +--- + +An object that represents the database configuration to include in an Active-Active database health report. + +| Name | Type/Value | Description | +|------|------------|-------------| +| causal_consistency | boolean | Enables causal consistency across Active-Active replicas | +| encryption | boolean | Intercluster encryption | +| featureset_version | integer | CRDB active FeatureSet version | +| instances | {{}}[{ + // Unique instance ID + "id": integer, + // Local database instance ID + "db_uid": string, + "cluster": { + // Cluster FQDN + "name": string + // Cluster access URL + "url": string + } +}, ...] {{}} | Local database instances | +| name | string | Name of database | +| protocol_version | integer | CRDB active protocol version | +| status | string | Current status of the configuration.
Possible values:
**posted:** Configuration was posted to all replicas
**ready:** All replicas have finished processing posted configuration (create a database)
**committed**: Posted configuration is now active on all replicas
**commit-completed:** All replicas have finished processing committed configuration (database is active)
**failed:** Configuration failed to post | +| version | integer | Database configuration version | +--- +Title: CRDB health report object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents an Active-Active database health report. +hideListLinks: true +linkTitle: health_report +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/crdb/health_report/' +--- + +An object that represents an Active-Active database health report. + +| Name | Type/Value | Description | +|------|------------|-------------| +| active_config_version | integer | Active configuration version | +| cluster_name | string | Name of local Active-Active cluster | +| configurations | array of [health_report_configuration]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb/health_report/health_report_configuration" >}}) objects | Stored database configurations | +| connection_error | string | Error string if remote cluster is not available | +| connections | {{}} +[{ + "name": string, + "replication_links": [ + { + "link_uid": "bdb_uid:replica_uid", + "status": "up | down" + } ], + "status": string +}, ...] {{}} | Connections to other clusters and their statuses. A replication link's `bdb_uid` is the unique ID of a local database instance ([bdb]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}})) in the current cluster. The `replica_uid` is the unique ID of the database's remote replica, located in the connected cluster. | +| name | string | Name of the Active-Active database | +--- +Title: CRDB database config object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents the database configuration +linkTitle: database_config +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/crdb/database_config/' +--- + +An object that represents the database configuration. + +| Name | Type/Value | Description | +|------|------------|-------------| +| aof_policy | string | Policy for Append-Only File data persistence | +| authentication_admin_pass | string | Administrative databases access token | +| authentication_redis_pass | string | Redis AUTH password (deprecated as of Redis Enterprise v7.2, replaced with multiple passwords feature in version 6.0.X) | +| bigstore | boolean | Database driver is Auto Tiering | +| bigstore_ram_size | integer | Memory size of RAM size | +| data_persistence | string | Database on-disk persistence | +| enforce_client_authentication | **'enabled'**
'disabled' | Require authentication of client certificates for SSL connections to the database. If enabled, a certificate should be provided in either `authentication_ssl_client_certs` or `authentication_ssl_crdt_certs` | +| max_aof_file_size | integer | Hint for maximum AOF file size | +| max_aof_load_time | integer | Hint for maximum AOF reload time | +| memory_size | integer | Database memory size limit, in bytes | +| oss_cluster | boolean | Enables OSS Cluster mode | +| oss_cluster_api_preferred_ip_type | string | Indicates preferred IP type in OSS cluster API: internal/external | +| oss_sharding | boolean | An alternative to shard_key_regex for using the common case of the OSS shard hashing policy | +| port | integer | TCP port for database access | +| proxy_policy | string | The policy used for proxy binding to the endpoint | +| rack_aware | boolean | Require the database to be always replicated across multiple racks | +| replication | boolean | Database replication | +| sharding | boolean (default: false) | Cluster mode (server-side sharding). When true, shard hashing rules must be provided by either `oss_sharding` or `shard_key_regex` | +| shard_key_regex | `[{ "regex": string }, ...]` | Custom keyname-based sharding rules (required if sharding is enabled)

To use the default rules, you should set the value to:
`[{"regex": ".*\\{(?.*)\\}.*"}, {"regex": "(?.*)"}]` | +| shards_count | integer | Number of database shards | +| shards_placement | string | Control the density of shards: should they reside on as few or as many nodes as possible | +| snapshot_policy | array of [snapshot_policy]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/snapshot_policy" >}}) objects | Policy for snapshot-based data persistence (required) | +| tls_mode | string | Encrypt communication | +--- +Title: CRDB instance info object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents Active-Active instance info +linkTitle: instance_info +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/crdb/instance_info/' +--- + +An object that represents Active-Active instance info. + +| Name | Type/Value | Description | +|------|------------|-------------| +| id | integer | Unique instance ID | +| cluster | [CRDB cluster_info]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb/cluster_info" >}}) object | | +| compression | integer | Compression level when syncing from this source | +| db_config | [CRDB database_config]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb/database_config" >}}) object | Database configuration | +| db_uid | string | ID of local database instance. This field is likely to be empty for instances other than the local one. | +--- +Title: CRDB modify request object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object to update an Active-Active database +linkTitle: modify_request +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/crdb/modify_request/' +--- + +An object to update an Active-Active database. + +| Name | Type/Value | Description | +|------|------------|-------------| +| add_instances | array of [CRDB instance_info]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb/instance_info" >}}) objects | List of new CRDB instances | +| crdb | [CRDB]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb" >}}) object | An object that represents an Active-Active database | +| force_update | boolean | (Warning: This flag can cause unintended and dangerous changes) Force the configuration update and increment the configuration version even if there is no change to the configuration parameters. If you use force, you can mistakenly cause the other instances to update to the configuration version even though it was not changed. | +| remove_instances | array of integers | List of unique instance IDs | +| remove_instances.force_remove | boolean | Force removal of instance from the Active-Active database. Before we remove an instance from an Active-Active database, all of the operations that the instance received from clients must be propagated to the other instances. This is the safe method to remove an instance from the Active-Active database. If the instance does not have connectivity to other instances, the propagation fails and removal fails. To remove an instance that does not have connectivity to other instances, you must use the force flag. The removed instance keeps its data and configuration for the instance. After you remove an instance by force, you must use the purge_instances API on the removed instance. 
| +--- +Title: CRDB object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents an Active-Active database +hideListLinks: true +linkTitle: crdb +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/crdb/' +--- + +An object that represents an Active-Active database. + +| Name | Type/Value | Description | +|------|------------|-------------| +| guid | string | The global unique ID of the Active-Active database | +| causal_consistency | boolean | Enables causal consistency across CRDT instances | +| default_db_config| [CRDB database_config]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb/database_config" >}}) object | Default database configuration | +| encryption | boolean | Encrypt communication | +| featureset_version | integer | Active-Active database active FeatureSet version +| instances | array of [CRDB instance_info]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb/instance_info" >}}) objects | | +| local_databases | {{}}[{ + "bdb_uid": string, + "id": integer +}, ...] {{}} | Mapping of instance IDs for local databases to local BDB IDs | +| name | string | Name of Active-Active database | +| protocol_version | integer | Active-Active database active protocol version | +--- +Title: OCSP object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents the cluster's OCSP configuration +linkTitle: ocsp +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/ocsp/' +--- + +An API object that represents the cluster's OCSP configuration. + +| Name | Type/Value | Description | +|------|------------|-------------| +| ocsp_functionality | boolean (default: false) | Enables or turns off OCSP for the cluster | +| query_frequency | integer (range: 60-86400) (default: 3600) | The time interval in seconds between OCSP queries to check the certificate’s status | +| recovery_frequency | integer (range: 60-86400) (default: 60) | The time interval in seconds between retries after the OCSP responder returns an invalid status for the certificate | +| recovery_max_tries | integer (range: 1-100) (default: 5) | The number of retries before the validation query fails and invalidates the certificate | +| responder_url | string | The OCSP server URL embedded in the proxy certificate (if available) (read-only) | +| response_timeout | integer (range: 1-60) (default: 1) | The time interval in seconds to wait for a response before timing out | +--- +Title: Certificate rotation job settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the cert_rotation_job_settings object used with Redis Enterprise + Software REST API calls. +linkTitle: cert_rotation_job_settings +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/job_scheduler/cert_rotation_job_settings/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| cron_expression | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the certificate rotation schedule | +| expiry_days_before_rotation | integer, (range: 1-90) (default: 60) | Number of days before a certificate expires before rotation | +--- +Title: Rotate CCS job settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the rotate_ccs_job_settings object used with Redis Enterprise + Software REST API calls. 
+linkTitle: rotate_ccs_job_settings +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/job_scheduler/rotate_ccs_job_settings/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| cron_expression | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the CCS rotation schedule | +| file_suffix | string (default: 5min) | String added to the end of the rotated RDB files | +| rotate_max_num | integer, (range: 1-100) (default: 24) | The maximum number of saved RDB files | +--- +Title: Redis cleanup job settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the redis_cleanup_job_settings object used with Redis Enterprise + Software REST API calls. +linkTitle: redis_cleanup_job_settings +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/job_scheduler/redis_cleanup_job_settings/' +--- + +Deprecated and replaced with `persistence_cleanup_scan_interval`. + +| Name | Type/Value | Description | +|------|------------|-------------| +| cron_expression | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the Redis cleanup schedule | +--- +Title: Log rotation job settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the log_rotation_job_settings object used with Redis Enterprise + Software REST API calls. +linkTitle: log_rotation_job_settings +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/job_scheduler/log_rotation_job_settings/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| cron_expression | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the log rotation schedule | +--- +Title: Backup job settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the backup_job_settings object used with Redis Enterprise Software + REST API calls. +linkTitle: backup_job_settings +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/job_scheduler/backup_job_settings/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| cron_expression | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the backup schedule | +--- +Title: Node checks job settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the node_checks_job_settings object used with Redis Enterprise + Software REST API calls. +linkTitle: node_checks_job_settings +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/job_scheduler/node_checks_job_settings/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| cron_expression | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the node checks schedule | +--- +Title: Job scheduler object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for job scheduler settings +hideListLinks: true +linkTitle: job_scheduler +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/job_scheduler/' +--- + +An API object that represents the job scheduler settings in the cluster. 
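+
+As a rough, illustrative sketch only (not part of this reference), the following snippet shows one way to read and update these settings through the REST API. It assumes the cluster's API is reachable at `https://cluster.example.com:9443` (the default REST API port), that the admin credentials shown are replaced with your own, and that your Redis Enterprise Software version exposes the `job_scheduler` endpoint.
+
+```python
+# Illustrative sketch only: read and update the cluster's job scheduler settings.
+# Assumptions: hostname, credentials, and endpoint availability as noted above.
+import requests
+
+BASE_URL = "https://cluster.example.com:9443"
+AUTH = ("admin@example.com", "admin-password")
+
+# Fetch the current job_scheduler object.
+current = requests.get(f"{BASE_URL}/v1/job_scheduler", auth=AUTH, verify=False)
+current.raise_for_status()
+print(current.json())
+
+# Update only the log rotation schedule; other job settings keep their values.
+payload = {"log_rotation_job_settings": {"cron_expression": "*/30 * * * *"}}
+updated = requests.put(
+    f"{BASE_URL}/v1/job_scheduler", json=payload, auth=AUTH, verify=False
+)
+updated.raise_for_status()
+```
+
+`verify=False` skips TLS verification to accommodate self-signed cluster certificates; in production, pass a CA bundle instead. The individual job settings are listed in the table below.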
+ +| Name | Type/Value | Description | +|------|------------|-------------| +| backup_job_settings | [backup_job_settings]({{< relref "/operate/rs/7.4/references/rest-api/objects/job_scheduler/backup_job_settings" >}}) object | Backup job settings | +| cert_rotation_job_settings | [cert_rotation_job_settings]({{< relref "/operate/rs/7.4/references/rest-api/objects/job_scheduler/cert_rotation_job_settings" >}}) object | Job settings for internal certificate rotation | +| log_rotation_job_settings | [log_rotation_job_settings]({{< relref "/operate/rs/7.4/references/rest-api/objects/job_scheduler/log_rotation_job_settings" >}}) object | Log rotation job settings | +| node_checks_job_settings | [node_checks_job_settings]({{< relref "/operate/rs/7.4/references/rest-api/objects/job_scheduler/node_checks_job_settings" >}}) object | Node checks job settings | +| redis_cleanup_job_settings | [redis_cleanup_job_settings]({{< relref "/operate/rs/7.4/references/rest-api/objects/job_scheduler/redis_cleanup_job_settings" >}}) object | Redis cleanup job settings (deprecated as of Redis Enterprise v6.4.2, replaced with persistence_cleanup_scan_interval) | +| rotate_ccs_job_settings | [rotate_ccs_job_settings]({{< relref "/operate/rs/7.4/references/rest-api/objects/job_scheduler/rotate_ccs_job_settings" >}}) object | Rotate CCS job settings | +--- +Title: User object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a Redis Enterprise user +linkTitle: user +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/user/' +--- + +An API object that represents a Redis Enterprise user. + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | User's unique ID | +| account_id | integer | SM account ID | +| action_uid | string | Action UID. If it exists, progress can be tracked by the `GET` `/actions/{uid}` API request (read-only) | +| auth_method | **'regular'** | User's authentication method (deprecated as of Redis Enterprise v7.2) | +| bdbs_email_alerts | complex object | UIDs of databases that user will receive alerts for | +| cluster_email_alerts | boolean | Activate cluster email alerts for a user | +| email | string | User's email (pattern matching only ASCII characters) | +| email_alerts | boolean (default: true) | Activate email alerts for a user | +| name | string | User's name (pattern does not allow non-ASCII and special characters &,\<,>,") | +| password | string | User's password. If `password_hash_method` is set to `1`, the password should be hashed using SHA-256. The format before hashing is `username:clustername:password`. | +| password_hash_method | '1' | Used when password is passed pre-hashed to specify the hashing method | +| password_issue_date | string | The date in which the password was set (read-only) | +| role | 'admin'
'cluster_member'
'cluster_viewer'
'db_member'
**'db_viewer'**
'none' | User's [role]({{< relref "/operate/rs/7.4/references/rest-api/permissions#roles" >}}) | +| role_uids | array of integers | UIDs of user's roles for role-based access control | +| status | 'active'
'locked' | User sign-in status (read-only)
**active**: able to sign in
**locked**: unable to sign in | +--- +Title: BDB alert settings with threshold object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the bdb_alert_settings_with_threshold object used with Redis + Enterprise Software REST API calls. +linkTitle: bdb_alert_settings_with_threshold +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| enabled | boolean (default: false) | Alert enabled or disabled | +| threshold | string | Threshold for alert going on/off | +--- +Title: Database alerts settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for database alerts configuration +hideListLinks: true +linkTitle: db_alerts_settings +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/' +--- + +An API object that represents the database alerts configuration. + +| Name | Type/Value | Description | +|------|------------|-------------| +| bdb_backup_delayed | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Periodic backup has been delayed for longer than specified threshold value (minutes) | +| bdb_crdt_src_high_syncer_lag | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | CRDB source sync lag is higher than specified threshold value (seconds) | +| bdb_crdt_src_syncer_connection_error | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | CRDB source sync had a connection error while trying to connect to replica source | +| bdb_crdt_src_syncer_general_error | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | CRDB sync encountered in general error | +| bdb_high_latency | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Latency is higher than specified threshold value (microsec) | +| bdb_high_syncer_lag | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Replica of sync lag is higher than specified threshold value (seconds) (deprecated as of Redis Enterprise v5.0.1) | +| bdb_high_throughput | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Throughput is higher than specified threshold value (requests / sec) | +| bdb_long_running_action | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | An alert for state machines that are running for too long | +| bdb_low_throughput | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Throughput is lower than specified threshold value (requests / sec) | +| bdb_ram_dataset_overhead | [bdb_alert_settings_with_threshold]({{< relref 
"/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Dataset RAM overhead of a shard has reached the threshold value (% of its RAM limit) | +| bdb_ram_values | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Percent of values kept in a shard's RAM is lower than (% of its key count) | +| bdb_replica_src_high_syncer_lag | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Replica of source sync lag is higher than specified threshold value (seconds) | +| bdb_replica_src_syncer_connection_error | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Replica of source sync has connection error while trying to connect replica source | +| bdb_replica_src_syncer_general_error | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Replica of sync encountered in general error | +| bdb_shard_num_ram_values | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Number of values kept in a shard's RAM is lower than (values) | +| bdb_size | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Dataset size has reached the threshold value \(% of the memory limit) | +| bdb_syncer_connection_error | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Replica of sync has connection error while trying to connect replica source (deprecated as of Redis Enterprise v5.0.1) | +| bdb_syncer_general_error | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/7.4/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Replica of sync encountered in general error (deprecated as of Redis Enterprise v5.0.1) | +--- +Title: BDB dataset import sources object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the bdb dataset_import_sources object used with Redis Enterprise + Software REST API calls. +linkTitle: dataset_import_sources +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bdb/dataset_import_sources/' +--- + +You can import data to a database from the following location types: + +- HTTP/S +- FTP +- SFTP +- Amazon S3 +- Google Cloud Storage +- Microsoft Azure Storage +- NAS/Local Storage + +The source file to import should be in the [RDB]({{< relref "/operate/rs/7.4/databases/configure/database-persistence.md" >}}) format. It can also be in a compressed (gz) RDB file. + +Supply an array of dataset import source objects to import data from multiple files. + +## Basic parameters + +For all import location objects, you need to specify the location type via the `type` field. 
+ +| Location type | "type" value | +|---------------|--------------| +| FTP/S | "url" | +| SFTP | "sftp" | +| Amazon S3 | "s3" | +| Google Cloud Storage | "gs" | +| Microsoft Azure Storage | "abs" | +| NAS/Local Storage | "mount_point" | + +## Location-specific parameters + +Any additional required parameters may differ based on the import location type. + +### FTP + +| Key name | Type | Description | +|----------|------|-------------| +| url | string | A URI that represents the FTP/S location with the following format: `ftp://user:password@host:port/path/`. The user and password can be omitted if not needed. | + +### SFTP + +| Key name | Type | Description | +|----------|------|-------------| +| key | string | SSH private key to secure the SFTP server connection. If you do not specify an SSH private key, the autogenerated private key of the cluster is used and you must add the SSH public key of the cluster to the SFTP server configuration. (optional) | +| sftp_url | string | SFTP URL in the format: `sftp://user:password@host[:port]/path/filename.rdb`. The default port number is 22 and the default path is '/'. | + +### AWS S3 + +| Key name | Type | Description | +|----------|------|-------------| +| access_key_id | string | The AWS Access Key ID with access to the bucket | +| bucket_name | string | S3 bucket name | +| region_name | string | Amazon S3 region name (optional) | +| secret_access_key | string | The AWS Secret Access that matches the Access Key ID | +| subdir | string | Path to the backup directory in the S3 bucket (optional) | + +### Google Cloud Storage + +| Key name | Type | Description | +|----------|------|-------------| +| bucket_name | string | Cloud Storage bucket name | +| client_email | string | Email address for the Cloud Storage client ID | +| client_id | string | Cloud Storage client ID with access to the Cloud Storage bucket | +| private_key | string | Private key for the Cloud Storage matching the private key ID | +| private_key_id | string | Cloud Storage private key ID with access to the Cloud Storage bucket | +| subdir | string | Path to the backup directory in the Cloud Storage bucket (optional) | + +### Azure Blob Storage + +| Key name | Type | Description | +|----------|------|-------------| +| account_key | string | Access key for the storage account | +| account_name | string | Storage account name with access to the container | +| container | string | Blob Storage container name | +| sas_token | string | Token to authenticate with shared access signature | +| subdir | string | Path to the backup directory in the Blob Storage container (optional) | + +{{}} +`account_key` and `sas_token` are mutually exclusive +{{}} + +### NAS/Local Storage + +| Key name | Type | Description | +|----------|------|-------------| +| path | string | Path to the locally mounted filename to import. You must create the mount point on all nodes, and the `redislabs:redislabs` user must have read permissions on the local mount point. +--- +Title: BDB status field +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the bdb status field used with Redis Enterprise Software REST + API calls. +linkTitle: status +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bdb/status/' +--- + +The BDB status field is a read-only field that represents the database status. 
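+
+The status appears in the database object returned by database requests such as `GET /v1/bdbs/{uid}`. A trimmed, illustrative response might look like this (most fields omitted, values are placeholders):
+
+```json
+{
+  "uid": 1,
+  "name": "db1",
+  "status": "active",
+  "// additional fields..."
+}
+```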
+ +Possible status values: + +| Status | Description | Possible next status | +|--------|-------------|----------------------| +| 'active' | Database is active and no special action is in progress | 'active-change-pending'
'import-pending'
'delete-pending' | +| 'active-change-pending' | Database is active and a configuration change is in progress | 'active' | +| 'creation-failed' | Initial database creation failed | | +| 'delete-pending' | Database deletion is in progress | | +| 'import-pending' | Dataset import is in progress | 'active' | +| 'pending' | Temporary status during database creation | 'active'

'creation-failed' | +| 'recovery' | Not currently relevant (intended for future use) | | + +{{< image filename="/images/rs/rest-api-bdb-status.png#no-click" alt="BDB status" >}} +--- +Title: BDB backup/export location object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the bdb backup_location/export_location object used with Redis + Enterprise Software REST API calls. +linkTitle: backup_location/export_location +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bdb/backup_location/' +--- + +You can back up or export a database's dataset to the following types of locations: + +- FTP/S +- SFTP +- Amazon S3 +- Google Cloud Storage +- Microsoft Azure Storage +- NAS/Local Storage + +## Basic parameters + +For all backup/export location objects, you need to specify the location type via the `type` field. + +| Location type | "type" value | +|---------------|--------------| +| FTP/S | "url" | +| SFTP | "sftp" | +| Amazon S3 | "s3" | +| Google Cloud Storage | "gs" | +| Microsoft Azure Storage | "abs" | +| NAS/Local Storage | "mount_point" | + +## Location-specific parameters + +Any additional required parameters may differ based on the backup/export location type. + +### FTP + +| Key name | Type | Description | +|----------|------|-------------| +| url | string | A URI that represents a FTP/S location with the following format: `ftp://user:password@host:port/path/`. The user and password can be omitted if not needed. | + +### SFTP + +| Key name | Type | Description | +|----------|------|-------------| +| key | string | SSH private key to secure the SFTP server connection. If you do not specify an SSH private key, the autogenerated private key of the cluster is used, and you must add the SSH public key of the cluster to the SFTP server configuration. (optional) | +| sftp_url | string | SFTP URL in the format: `sftp://user:password@host[:port][/path/]`. The default port number is 22 and the default path is '/'. 
| + +### AWS S3 + +| Key name | Type | Description | +|----------|------|-------------| +| access_key_id | string | The AWS Access Key ID with access to the bucket | +| bucket_name | string | S3 bucket name | +| region_name | string | Amazon S3 region name (optional) | +| secret_access_key | string | The AWS Secret Access Key that matches the Access Key ID | +| subdir | string | Path to the backup directory in the S3 bucket (optional) | + +### Google Cloud Storage + +| Key name | Type | Description | +|----------|------|-------------| +| bucket_name | string | Cloud Storage bucket name | +| client_email | string | Email address for the Cloud Storage client ID | +| client_id | string | Cloud Storage client ID with access to the Cloud Storage bucket | +| private_key | string | Cloud Storage private key that matches the private key ID | +| private_key_id | string | Cloud Storage private key ID with access to the Cloud Storage bucket | +| subdir | string | Path to the backup directory in the Cloud Storage bucket (optional) | + +### Azure Blob Storage + +| Key name | Type | Description | +|----------|------|-------------| +| account_key | string | Access key for the storage account | +| account_name | string | Storage account name with access to the container | +| container | string | Blob Storage container name | +| sas_token | string | Token to authenticate with shared access signature | +| subdir | string | Path to the backup directory in the Blob Storage container (optional) | + +{{}} +`account_key` and `sas_token` are mutually exclusive +{{}} + +### NAS/Local Storage + +| Key name | Type | Description | +|----------|------|-------------| +| path | string | Path to the local mount point. You must create the mount point on all nodes, and the `redislabs:redislabs` user must have read and write permissions on the local mount point. | +--- +Title: Syncer sources object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the syncer_sources object used with Redis Enterprise Software + REST API calls. +linkTitle: syncer_sources +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bdb/syncer_sources/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Unique ID of this source | +| client_cert | string | Client certificate to use if encryption is enabled | +| client_key | string | Client key to use if encryption is enabled | +| compression | integer, (range: 0-6) | Compression level for the replication link | +| encryption | boolean | Encryption enabled/disabled | +| lag | integer | Lag in milliseconds between source and destination (while synced) | +| last_error | string | Last error encountered when syncing from the source | +| last_update | string | Time when we last received an update from the source | +| rdb_size | integer | The source's RDB size to be transferred during the syncing phase | +| rdb_transferred | integer | Number of bytes transferred from the source's RDB during the syncing phase | +| replication_tls_sni | string | Replication TLS server name indication | +| server_cert | string | Server certificate to use if encryption is enabled | +| status | string | Sync status of this source | +| uri | string | Source Redis URI | +--- +Title: Snapshot policy object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the snapshot_policy object used with Redis Enterprise Software + REST API calls. 
+linkTitle: snapshot_policy +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bdb/snapshot_policy/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| secs | integer | Interval in seconds between snapshots | +| writes | integer | Number of write changes required to trigger a snapshot | +--- +Title: BDB replica sync field +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the bdb replica_sync field used with Redis Enterprise Software + REST API calls. +linkTitle: replica_sync +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bdb/replica_sync/' +--- + +The BDB `replica_sync` field relates to the [Replica Of]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/create.md" >}}) feature, which enables the creation of a Redis database (single- or multi-shard) that synchronizes data from another Redis database (single- or multi-shard). + +You can use the `replica_sync` field to enable, disable, or pause the [Replica Of]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/create.md" >}}) sync process. The BDB `crdt_sync` field has a similar purpose for the Redis CRDB. + +Possible BDB sync values: + +| Status | Description | Possible next status | +|--------|-------------|----------------------| +| 'disabled' | (default value) Disables the sync process and represents that no sync is currently configured or running. | 'enabled' | +| 'enabled' | Enables the sync process and represents that the process is currently active. | 'stopped'
'paused' | +| 'paused' | Pauses the sync process. The process is configured but is not currently executing any sync commands. | 'enabled'
'stopped' | +| 'stopped' | An unrecoverable error occurred during the sync process, which caused the system to stop the sync. | 'enabled' | + +{{< image filename="/images/rs/rest-api-bdb-sync.png#no-click" alt="BDB sync" >}} + +When the sync is in the 'stopped' or 'paused' state, then the `last_error` field in the relevant source entry in the `sync_sources` "status" field contains the detailed error message. +--- +Title: BDB replica sources status field +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the bdb replica_sources status field used with Redis Enterprise + Software REST API calls. +linkTitle: replica_sources status +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bdb/replica_sources_status/' +--- + +The `replica_sources` status field relates to the [Replica Of]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/create.md" >}}) feature, which enables the creation of a Redis database (single- or multi-shard) that synchronizes data from another Redis database (single- or multi-shard). + +The status field represents the Replica Of sync status for a specific sync source. + +Possible status values: + +| Status | Description | Possible next status | +|--------|-------------|----------------------| +| 'out-of-sync' | Sync process is disconnected from source and trying to reconnect | 'syncing' | +| 'syncing' | Sync process is in progress | 'in-sync'
'out-of-sync' | +| 'in-sync' | Sync process finished successfully, and new commands are syncing on a regular basis | 'syncing'
'out-of-sync' + +{{< image filename="/images/rs/rest-api-replica-sources-status.png#no-click" alt="Replica sources status" >}} +--- +Title: BDB object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a database +hideListLinks: true +linkTitle: bdb +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bdb/' +--- + +An API object that represents a managed database in the cluster. + +| Name | Type/Value & Description | +|------|-------------------------| +| uid | integer; Cluster unique ID of database. Can be set during creation but cannot be updated. | +| account_id | integer; SM account ID | +| action_uid | string; Currently running action's UID (read-only) | +| aof_policy | Policy for Append-Only File data persistence
Values:
**'appendfsync-every-sec'**
'appendfsync-always' | +| authentication_admin_pass | string; Password for administrative access to the BDB (used for SYNC from the BDB) | +| authentication_redis_pass | string; Redis AUTH password authentication.
Use for Redis databases only. Ignored for memcached databases. (deprecated as of Redis Enterprise v7.2, replaced with multiple passwords feature in version 6.0.X) | +| authentication_sasl_pass | string; Binary memcache SASL password | +| authentication_sasl_uname | string; Binary memcache SASL username (pattern does not allow special characters &,\<,>,") | +| authentication_ssl_client_certs | {{}}[{
"client_cert": string
}, ...]{{
}} List of authorized client certificates
**client_cert**: X.509 PEM (base64) encoded certificate | +| authentication_ssl_crdt_certs | {{}}[{
"client_cert": string
}, ...]{{
}} List of authorized CRDT certificates
**client_cert**: X.509 PEM (base64) encoded certificate | +| authorized_names | array of strings; Additional certified names (deprecated as of Redis Enterprise v6.4.2; use authorized_subjects instead) | +| authorized_subjects | {{}}[{
"CN": string,
"O": string,
"OU": [array of strings],
"L": string,
"ST": string,
"C": string
}, ...]{{
}} A list of valid subjects used for additional certificate validations during TLS client authentication. All subject attributes are case-sensitive.
**Required subject fields**:
"CN" for Common Name
**Optional subject fields:**
"O" for Organization
"OU" for Organizational Unit (array of strings)
"L" for Locality (city)
"ST" for State/Province
"C" for 2-letter country code | +| auto_upgrade | boolean (default: false); Upgrade the database automatically after a cluster upgrade | +| avoid_nodes | array of strings; Cluster node UIDs to avoid when placing the database's shards and binding its endpoints | +| background_op | {{}}[{
"status": string,
"name": string,
"error": object,
"progress": number
}, ...]{{
}} (read-only); **progress**: Percent of completed steps in current operation | +| backup | boolean (default: false); Policy for periodic database backup | +| backup_failure_reason | Reason of last failed backup process (read-only)
Values:
'no-permission'
'wrong-file-path'
'general-error' | +| backup_history | integer (default: 0); Backup history retention policy (number of days, 0 is forever) | +| backup_interval | integer; Interval in seconds in which automatic backup will be initiated | +| backup_interval_offset | integer; Offset (in seconds) from round backup interval when automatic backup will be initiated (should be less than backup_interval) | +| backup_location | [complex object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/backup_location" >}}); Target for automatic database backups.
Call `GET` `/jsonschema` to retrieve the object's structure. | +| backup_progress | number, (range: 0-100); Database scheduled periodic backup progress (percentage) (read-only) | +| backup_status | Status of scheduled periodic backup process (read-only)
Values:
'exporting'
'succeeded'
'failed' | +| bigstore | boolean (default: false); Database bigstore option | +| bigstore_ram_size | integer (default: 0); Memory size of bigstore RAM part. | +| bigstore_ram_weights | {{}}[{
"shard_uid": integer,
"weight": number
}, ...]{{
}} List of shard UIDs and their bigstore RAM weights;
**shard_uid**: Shard UID;
**weight**: Relative weight of RAM distribution | +| client_cert_subject_validation_type | Enables additional certificate validations that further limit connections to clients with valid certificates during TLS client authentication.
Values:
**disabled**: Authenticates clients with valid certificates. No additional validations are enforced.
**san_cn**: A client certificate is valid only if its Common Name (CN) matches an entry in the list of valid subjects. Ignores other Subject attributes.
**full_subject**: A client certificate is valid only if its Subject attributes match an entry in the list of valid subjects. | +| conns | integer (default 5); Number of internal proxy connections | +| conns_type | Connections limit type
Values:
**‘per-thread’**
‘per-shard’ | +| crdt | boolean (default: false); Use CRDT-based data types for multi-master replication | +| crdt_causal_consistency | boolean (default: false); Causal consistent CRDB. | +| crdt_config_version | integer; Replica-set configuration version, for internal use only. | +| crdt_featureset_version | integer; CRDB active FeatureSet version | +| crdt_ghost_replica_ids | string; Removed replicas IDs, for internal use only. | +| crdt_guid | string; GUID of CRDB this database belongs to, for internal use only. | +| crdt_modules | string; CRDB modules information. The string representation of a JSON list, containing hashmaps. | +| crdt_protocol_version | integer; CRDB active Protocol version | +| crdt_repl_backlog_size | string; Active-Active replication backlog size ('auto' or size in bytes) | +| crdt_replica_id | integer; Local replica ID, for internal use only. | +| crdt_replicas | string; Replica set configuration, for internal use only. | +| crdt_sources | array of [syncer_sources]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/syncer_sources" >}}) objects; Remote endpoints/peers of CRDB database to sync from. See the 'bdb -\> replica_sources' section | +| crdt_sync | Enable, disable, or pause syncing from specified crdt_sources. Applicable only for Active-Active databases. See [replica_sync]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/replica_sync" >}}) for more details.
Values:
'enabled'
**'disabled'**
'paused'
'stopped' | +| crdt_sync_dist | boolean; Enable/disable distributed syncer in master-master | +| crdt_syncer_auto_oom_unlatch | boolean (default: true); Syncer automatically attempts to recover synchronisation from peers after this database throws an Out-Of-Memory error. Otherwise, the syncer exits | +| crdt_xadd_id_uniqueness_mode | XADD strict ID uniqueness mode. CRDT only.
Values:
‘liberal’
**‘strict’**
‘semi-strict’ | +| created_time | string; The date and time the database was created (read-only) | +| data_internode_encryption | boolean; Should the data plane internode communication for this database be encrypted | +| data_persistence | Database on-disk persistence policy. For snapshot persistence, a [snapshot_policy]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/snapshot_policy" >}}) must be provided
Values:
**'disabled'**
'snapshot'
'aof' | +| dataset_import_sources | [complex object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/dataset_import_sources" >}}); Array of source file location description objects to import from when performing an import action. This is write-only and cannot be read after set.
Call `GET /v1/jsonschema` to retrieve the object's structure. | +| db_conns_auditing | boolean; Enables/deactivates [database connection auditing]({{< relref "/operate/rs/7.4/security/audit-events" >}}) | +| default_user | boolean (default: true); Allow/disallow a default user to connect | +| disabled_commands | string (default: ); Redis commands which are disabled in db | +| dns_address_master | string; Database private address endpoint FQDN (read-only) (deprecated as of Redis Enterprise v4.3.3) | +| email_alerts | boolean (default: false); Send email alerts for this DB | +| endpoint | string; Latest bound endpoint. Used when reconfiguring an endpoint via update | +| endpoint_ip | complex object; External IP addresses of node hosting the BDB's endpoint. `GET /v1/jsonschema` to retrieve the object's structure. (read-only) (deprecated as of Redis Enterprise v4.3.3) | +| endpoint_node | integer; Node UID hosting the BDB's endpoint (read-only) (deprecated as of Redis Enterprise v4.3.3) | +| endpoints | array; List of database access endpoints (read-only)
**uid**: Unique identification of this source
**dns_name**: Endpoint’s DNS name
**port**: Endpoint’s TCP port number
**addr**: Endpoint’s accessible addresses
**proxy_policy**: The policy used for proxy binding to the endpoint
**exclude_proxies**: List of proxies to exclude
**include_proxies**: List of proxies to include
**addr_type**: Indicates if the endpoint is based on internal or external IPs
**oss_cluster_api_preferred_ip_type**: Indicates preferred IP type in the OSS cluster API: internal/external
**oss_cluster_api_preferred_endpoint_type**: Indicates preferred endpoint type in the OSS cluster API: ip/hostname | +| enforce_client_authentication | Require authentication of client certificates for SSL connections to the database. If set to 'enabled', a certificate should be provided in either authentication_ssl_client_certs or authentication_ssl_crdt_certs
Values:
**'enabled'**
'disabled' | +| eviction_policy | Database eviction policy (Redis style).
Values:
'volatile-lru'
'volatile-ttl'
'volatile-random'
'allkeys-lru'
'allkeys-random'
'noeviction'
'volatile-lfu'
'allkeys-lfu'
**Redis DB default**: 'volatile-lru'
**memcached DB default**: 'allkeys-lru' | +| export_failure_reason | Reason of last failed export process (read-only)
Values:
'no-permission'
'wrong-file-path'
'general-error' | +| export_progress | number, (range: 0-100); Database manually triggered export progress (percentage) (read-only) | +| export_status | Status of manually triggered export process (read-only)
Values:
'exporting'
'succeeded'
'failed' | +| generate_text_monitor | boolean; Enable/disable generation of syncer monitoring information | +| gradual_src_max_sources | integer (default: 1); Sync a maximum N sources in parallel (gradual_src_mode should be enabled for this to take effect) | +| gradual_src_mode | Indicates if gradual sync (of sync sources) should be activated
Values:
'enabled'
'disabled' | +| gradual_sync_max_shards_per_source | integer (default: 1); Sync a maximum of N shards per source in parallel (gradual_sync_mode should be enabled for this to take effect) | +| gradual_sync_mode | Indicates if gradual sync (of source shards) should be activated ('auto' for automatic decision)
Values:
'enabled'
'disabled'
'auto' | +| hash_slots_policy | The policy used for hash slots handling
Values:
**'legacy'**: slots range is '1-4096'
**'16k'**: slots range is '0-16383' | +| implicit_shard_key | boolean (default: false); Controls the behavior of what happens in case a key does not match any of the regex rules.
**true**: if a key does not match any of the rules, the entire key will be used for the hashing function
**false**: if a key does not match any of the rules, an error will be returned. | +| import_failure_reason | Import failure reason (read-only)
Values:
'download-error'
'file-corrupted'
'general-error'
'file-larger-than-mem-limit:\<file size\>:\<memory limit\>'

'key-too-long'
'invalid-bulk-length'
'out-of-memory' | +| import_progress | number, (range: 0-100); Database import progress (percentage) (read-only) | +| import_status | Database import process status (read-only)
Values:
'idle'
'initializing'
'importing'
'succeeded'
'failed' | +| internal | boolean (default: false); Is this a database used by the cluster internally | +| last_backup_time | string; Time of last successful backup (read-only) | +| last_changed_time | string; Last administrative configuration change (read-only) | +| last_export_time | string; Time of last successful export (read-only) | +| max_aof_file_size | integer; Maximum size for shard's AOF file (bytes). Default 300GB, (on bigstore DB 150GB) | +| max_aof_load_time | integer (default: 3600); Maximum time shard's AOF reload should take (seconds). | +| max_client_pipeline | integer (default: 200); Maximum number of pipelined commands per connection. Maximum value is 2047. | +| max_connections | integer (default: 0); Maximum number of client connections allowed (0 unlimited) | +| max_pipelined | integer (default: 2000); Determines the maximum number of commands in the proxy’s pipeline per shard connection. | +| master_persistence | boolean (default: false); If true, persists the primary shard in addition to replica shards in a replicated and persistent database. | +| memory_size | integer (default: 0); Database memory limit (0 is unlimited), expressed in bytes. | +| metrics_export_all | boolean; Enable/disable exposing all shard metrics through the metrics exporter | +| mkms | boolean (default: true); Are MKMS (Multi Key Multi Slots) commands supported? | +| module_list | {{}}[{
"module_id": string,
"module_args": [
u'string',
u'null'],
"module_name": string,
"semantic_version": string
}, ...]{{
}} List of modules associated with the database

**module_id**: Module UID
**module_args**: Module command-line arguments (pattern does not allow special characters &,\<,>,")
**module_name**: Module's name
**semantic_version**: Module's semantic version

As of Redis Enterprise Software v7.4.2, **module_id** and **semantic_version** are optional. | +| mtls_allow_outdated_certs | boolean; An optional mTLS relaxation flag for certs verification | +| mtls_allow_weak_hashing | boolean; An optional mTLS relaxation flag for certs verification | +| name | string; Database name. Only letters, numbers, or hyphens are valid characters. The name must start and end with a letter or number. | +| oss_cluster | boolean (default: false); OSS Cluster mode option. Cannot be enabled with `'hash_slots_policy': 'legacy'` | +| oss_cluster_api_preferred_endpoint_type | Endpoint type in the OSS cluster API
Values:
**‘ip’**
‘hostname’ | +| oss_cluster_api_preferred_ip_type | Internal/external IP type in OSS cluster API. Default value for new endpoints
Values:
**'internal'**
'external' | +| oss_sharding | boolean (default: false); An alternative to `shard_key_regex` for using the common case of the OSS shard hashing policy | +| port | integer; TCP port on which the database is available. Generated automatically if omitted and returned as 0 | +| proxy_policy | The default policy used for proxy binding to endpoints
Values:
'single'
'all-master-shards'
'all-nodes' | +| rack_aware | boolean (default: false); Require the database to always replicate across multiple racks | +| recovery_wait_time | integer (default: -1); Defines how many seconds to wait for the persistence file to become available during auto recovery. After the wait time expires, auto recovery completes with potential data loss. The default `-1` means to wait forever. | +| redis_version | string; Version of the redis-server processes: e.g. 6.0, 5.0-big | +| repl_backlog_size | string; Redis replication backlog size ('auto' or size in bytes) | +| replica_sources | array of [syncer_sources]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/syncer_sources" >}}) objects; Remote endpoints of database to sync from. See the 'bdb -\> replica_sources' section | +| [replica_sync]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/replica_sync" >}}) | Enable, disable, or pause syncing from specified replica_sources
Values:
'enabled'
**'disabled'**
'paused'
'stopped' | +| replica_sync_dist | boolean; Enable/disable distributed syncer in replica-of | +| replication | boolean (default: false); In-memory database replication mode | +| resp3 | boolean (default: true); Enables or deactivates RESP3 support | +| roles_permissions | {{}}[{
"role_uid": integer,
"redis_acl_uid": integer
}, ...]{{
}} | +| sched_policy | Controls how server-side connections are used when forwarding traffic to shards.
Values:
**cmp**: Closest to max_pipelined policy. Pick the connection with the most pipelined commands that has not reached the max_pipelined limit.
**mru**: Try to use most recently used connections.
**spread**: Try to use all connections.
**mnp**: Minimal pipeline policy. Pick the connection with the least pipelined commands. | +| shard_block_crossslot_keys | boolean (default: false); In Lua scripts, prevent use of keys from different hash slots within the range owned by the current shard | +| shard_block_foreign_keys | boolean (default: true); In Lua scripts, `foreign_keys` prevent use of keys which could reside in a different shard (foreign keys) | +| shard_key_regex | Custom keyname-based sharding rules.
`[{"regex": string}, ...]`
To use the default rules you should set the value to:
`[{"regex": ".*\\{(?.*)\\}.*"}, {"regex": "(?.*)"}]` | +| shard_list | array of integers; Cluster unique IDs of all database shards. | +| sharding | boolean (default: false); Cluster mode (server-side sharding). When true, shard hashing rules must be provided by either `oss_sharding` or `shard_key_regex` | +| shards_count | integer, (range: 1-512) (default: 1); Number of database server-side shards | +| shards_placement | Control the density of shards
Values:
**'dense'**: Shards reside on as few nodes as possible
**'sparse'**: Shards reside on as many nodes as possible | +| skip_import_analyze | Enable/disable skipping the analysis stage when importing an RDB file
Values:
'enabled'
'disabled' | +| slave_buffer | Redis replica output buffer limits
Values:
'auto'
value in MB
hard:soft:time | +| slave_ha | boolean; Enable replica high availability mechanism for this database (default takes the cluster setting) | +| slave_ha_priority | integer; Priority of the BDB in replica high availability mechanism | +| snapshot_policy | array of [snapshot_policy]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/snapshot_policy" >}}) objects; Policy for snapshot-based data persistence. A dataset snapshot will be taken every N secs if there are at least M writes changes in the dataset | +| ssl | boolean (default: false); Require SSL authenticated and encrypted connections to the database (deprecated as of Redis Enterprise v5.0.1) | +| [status]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/status" >}}) | Database lifecycle status (read-only)
Values:
'pending'
'active'
'active-change-pending'
'delete-pending'
'import-pending'
'creation-failed'
'recovery' | +| support_syncer_reconf | boolean; Determines whether the syncer handles its own configuration changes. If false, the DMC restarts the syncer upon a configuration change. | +| sync | (deprecated as of Redis Enterprise v5.0.1, use [replica_sync]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/replica_sync" >}}) or crdt_sync instead) Enable, disable, or pause syncing from specified sync_sources
Values:
'enabled'
**'disabled'**
'paused'
'stopped' | +| sync_dedicated_threads | integer (range: 0-10) (default: 5); Number of dedicated Replica Of threads | +| sync_sources | {{}}[{
"uid": integer,
"uri": string,
"compression": integer,
"status": string,
"rdb_transferred": integer,
"rdb_size": integer,
"last_update": string,
"lag": integer,
"last_error": string
}, ...]{{
}} (deprecated as of Redis Enterprise v5.0.1, instead use replica_sources or crdt_sources) Remote endpoints of database to sync from. See the 'bdb -\> replica_sources' section
**uid**: Numeric unique identification of this source
**uri**: Source Redis URI
**compression**: Compression level for the replication link
**status**: Sync status of this source
**rdb_transferred**: Number of bytes transferred from the source's RDB during the syncing phase
**rdb_size**: The source's RDB size to be transferred during the syncing phase
**last_update**: Time last update was received from the source
**lag**: Lag in milliseconds between source and destination (while synced)

**last_error**: Last error encountered when syncing from the source | +| syncer_log_level | Minimum syncer log level to log. Only logs with this level or higher will be logged.
Values:
‘crit’
‘error’
‘warn’
**‘info’**
‘trace’
‘debug’ | +| syncer_mode | The syncer for replication between database instances is either on a single node (centralized) or on each node that has a proxy according to the proxy policy (distributed). (read-only)
Values:
'distributed'
'centralized' | +| tags | {{}}[{
"key": string,
"value": string
}, ...]{{
}} Optional list of tag objects attached to the database. Each tag requires a key-value pair.
**key**: Represents the tag's meaning and must be unique among tags (pattern does not allow special characters &,\<,>,")
**value**: The tag's value.| +| tls_mode | Require TLS-authenticated and encrypted connections to the database
Values:
'enabled'
**'disabled'**
'replica_ssl' | +| type | Type of database
Values:
**'redis'**
'memcached' | +| use_nodes | array of strings; Cluster node UIDs to use for database shards and bound endpoints | +| version | string; Database compatibility version: full Redis/memcached version number, such as 6.0.6. This value can only change during database creation and database upgrades.| +| wait_command | boolean (default: true); Supports Redis wait command (read-only) | +--- +Title: BDB group object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a group of databases with a shared memory pool +linkTitle: bdb_group +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/bdb_group/' +--- + +An API object that represents a group of databases that share a memory pool. + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Cluster unique ID of the database group | +| members | array of strings | A list of UIDs of member databases (read-only) | +| memory_size | integer | The common memory pool size limit for all databases in the group, expressed in bytes | +--- +Title: Action object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents cluster actions +linkTitle: action +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/action/' +--- + +The cluster allows you to invoke general maintenance actions such as rebalancing or taking a node offline by moving all of its entities to other nodes. + +Actions are implemented as tasks in the cluster. Every task has a unique `task_id` assigned by the cluster, a task name which describes the task, a status, and additional task-specific parameters. + +The REST API provides a simplified interface that allows callers to invoke actions and query their status without a specific `task_id`. + +The action lifecycle is based on the following status and status transitions: + +{{< image filename="/images/rs/rest-api-action-cycle.png#no-click" alt="Action lifecycle" >}} + +| Name | Type/Value | Description | +|------|------------|-------------| +| progress | float (range: 0-100) | Represents percent completed (As of v7.4.2, the return value type changed to 'float' to provide improved progress indication) | +| status | queued | Requested operation and added it to the queue to await processing | +| | starting | Picked up operation from the queue and started processing | +| | running | Currently executing operation | +| | cancelling | Operation cancellation is in progress | +| | cancelled | Operation cancelled | +| | completed | Operation completed | +| | failed | Operation failed | + +When a task fails, the `error_code` and `error_message` fields describe the error. + +Possible `error_code` values: + + Code | Description | +|-------------------------|------------------------------------------------| +| internal_error | An internal error that cannot be mapped to a more precise error code +| insufficient_resources | The cluster does not have sufficient resources to complete the required operation + +--- +Title: State machine object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a state machine. +linkTitle: state-machine +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/state-machine/' +--- + +A state machine object tracks the status of database actions. 
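+
+As an illustrative sketch (the action UID, state machine name, and state shown here are placeholders rather than values the API necessarily returns), a state machine for an action that is currently running against a database might look like this:
+
+```json
+{
+  "action_uid": "a1b2c3d4-e5f6-7a8b-9c0d-1e2f3a4b5c6d",
+  "object_name": "bdb:1",
+  "name": "example_state_machine",
+  "status": "active",
+  "state": "running"
+}
+```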
+ +A state machine contains the following attributes: + +| Name | Type/Value | Description | +|-------------|------------|-------------| +| action_uid | string | A globally unique identifier of the action | +| object_name | string | Name of the object being manipulated by the state machine | +| status | pending | Requested state machine has not started | +| | active | State machine is currently running | +| | completed | Operation complete | +| | failed | Operation or state machine failed | +| name | string | Name of the running (or failed) state machine | +| state | string | Current state within the state machine, when known | +| error | string | A descriptive error string for failed state machine, when known | +--- +Title: Role object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a role +linkTitle: role +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/role/' +--- + +An API object that represents a role. + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Role's unique ID | +| account_id | integer | SM account ID | +| action_uid | string | Action UID. If it exists, progress can be tracked by the GET /actions/{uid} API (read-only) | +| management | 'admin'
'db_member'
'db_viewer'
'cluster_member'
'cluster_viewer'
'none' | [Management role]({{< relref "/operate/rs/7.4/references/rest-api/permissions#roles" >}}) | +| name | string | Role's name | +--- +Title: Redis ACL object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a Redis access control list (ACL) +linkTitle: redis_acl +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/redis_acl/' +--- + +An API object that represents a Redis [access control list (ACL)]({{< relref "/operate/rs/7.4/security/access-control/create-db-roles" >}}) + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Object's unique ID | +| account_id | integer | SM account ID | +| acl | string | Redis ACL's string | +| action_uid | string | Action UID. If it exists, progress can be tracked by the `GET` `/actions/{uid}` API (read-only) | +| name | string | Redis ACL's name | +| min_version | string | Minimum database version that supports this ACL. Read only | +| max_version | string | Maximum database version that supports this ACL. Read only | + +--- +Title: Proxy object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a proxy in the cluster +linkTitle: proxy +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/proxy/' +--- + +An API object that represents a [proxy](https://en.wikipedia.org/wiki/Proxy_server) in the cluster. + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Unique ID of the proxy (read-only) | +| backlog | integer | TCP listen queue backlog | +| client_keepcnt | integer | Client TCP keepalive count | +| client_keepidle | integer | Client TCP keepalive idle | +| client_keepintvl | integer | Client TCP keepalive interval | +| conns | integer | Number of connections | +| duration_usage_threshold | integer, (range: 10-300) | Max number of threads | +| dynamic_threads_scaling | boolean | Automatically adjust the number of threads| +| ignore_bdb_cconn_limit | boolean | Ignore client connection limits | +| ignore_bdb_cconn_output_buff_limits | boolean | Ignore buffer limit | +| log_level | `crit`
`error`
`warn`
`info`
`trace`
`debug` | Minimum log level to log. Only logs with this level or greater will be logged. | +| max_listeners | integer | Max number of listeners | +| max_servers | integer | Max number of Redis servers | +| max_threads | integer, (range: 1-256) | Max number of threads | +| max_worker_client_conns | integer | Max client connections per thread | +| max_worker_server_conns | integer | Max server connections per thread | +| max_worker_txns | integer | Max in-flight transactions per thread | +| threads | integer, (range: 1-256) | Number of threads | +| threads_usage_threshold | integer, (range: 50-99) | Max number of threads | +--- +Title: Sync object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the sync object used with Redis Enterprise Software REST API + calls. +linkTitle: sync +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/shard/sync/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| progress | integer | Number of bytes remaining in current sync | +| status | 'in_progress'
'idle'
'link_down' | Indication of the shard's current sync status | +--- +Title: Loading object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the loading object used with Redis Enterprise Software REST + API calls. +linkTitle: loading +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/shard/loading/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| progress | number, (range: 0-100) | Percentage of bytes already loaded | +| status | 'in_progress'
'idle' | Status of the load of a dump file (read-only) | +--- +Title: Backup object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the backup object used with Redis Enterprise Software REST + API calls. +linkTitle: backup +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/shard/backup/' +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| progress | number, (range: 0-100) | Shard backup progress (percentage) | +| status | 'exporting'
'succeeded'
'failed' | Status of scheduled periodic backup process | +--- +Title: Shard object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a database shard +hideListLinks: true +linkTitle: shard +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/shard/' +--- + +An API object that represents a Redis shard in a database. + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | string | Cluster unique ID of shard | +| assigned_slots | string | Shards hash slot range | +| backup | [backup]({{< relref "/operate/rs/7.4/references/rest-api/objects/shard/backup" >}}) object | Current status of scheduled periodic backup process | +| bdb_uid | integer | The ID of the database this shard belongs to | +| bigstore_ram_weight | number | Shards RAM distribution weight | +| detailed_status | 'busy'
'down'
'importing'
'loading'
'ok'
'timeout'
'trimming'
'unknown' | A more detailed status of the shard | +| loading | [loading]({{< relref "/operate/rs/7.4/references/rest-api/objects/shard/loading" >}}) object | Current status of dump file loading | +| node_uid | string | The ID of the node this shard is located on | +| redis_info | redis_info object | A sub-dictionary of the [Redis INFO command]({{< relref "/commands/info" >}}) | +| report_timestamp | string | The time in which the shard's info was collected (read-only) | +| role | 'master'
'slave' | Role of this shard | +| status | 'active'
'inactive'
'trimming' | The current status of the shard | +| sync | [sync]({{< relref "/operate/rs/7.4/references/rest-api/objects/shard/sync.md" >}}) object | Shard's current sync status and progress | +--- +Title: OCSP status object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents the cluster's OCSP status +linkTitle: ocsp_status +weight: $weight +url: '/operate/rs/7.4/references/rest-api/objects/ocsp_status/' +--- + +An API object that represents the cluster's OCSP status. + +| Name | Type/Value | Description | +|------|------------|-------------| +| cert_status | string | Indicates the proxy certificate's status: GOOD/REVOKED/UNKNOWN (read-only) | +| responder_url | string | The OCSP responder URL this status came from (read-only) | +| next_update | string | The expected date and time of the next certificate status update (read-only) | +| produced_at | string | The date and time when the OCSP responder signed this response (read-only) | +| revocation_time | string | The date and time when the certificate was revoked or placed on hold (read-only) | +| this_update | string | The most recent time that the responder confirmed the current status (read-only) | +--- +Title: Redis Enterprise REST API objects +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the objects used with Redis Enterprise Software REST API calls. +hideListLinks: true +linkTitle: Objects +weight: 40 +url: '/operate/rs/7.4/references/rest-api/objects/' +--- + +Certain [REST API requests]({{< relref "/operate/rs/7.4/references/rest-api/requests" >}}) require you to include specific objects in the request body. Many requests also return objects in the response body. + +Both REST API requests and responses represent these objects as [JSON](https://www.json.org). + +{{< table-children columnNames="Object,Description" columnSources="LinkTitle,Description" enableLinks="LinkTitle" >}} +--- +Title: Suffixes requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: DNS suffixes requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: suffixes +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/suffixes/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-suffixes) | `/v1/suffixes` | Get all DNS suffixes | + +## Get all suffixes {#get-all-suffixes} + + GET /v1/suffixes + +Get all DNS suffixes in the cluster. + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/suffixes + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-all-response} + +The response body contains a JSON array with all suffixes, represented as [suffix objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/suffix" >}}). + +#### Example JSON body + +```json +[ + { + "name": "cluster.fqdn", + "// additional fields..." + }, + { + "name": "internal.cluster.fqdn", + "// additional fields..." 
+ } +] +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +--- +Title: Migrate shards requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: REST API requests to migrate database shards +headerRange: '[1-2]' +linkTitle: migrate +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/shards/actions/migrate/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-multi-shards) | `/v1/shards/actions/migrate` | Migrate multiple shards | +| [POST](#post-shard) | `/v1/shards/{uid}/actions/migrate` | Migrate a specific shard | + +## Migrate multiple shards {#post-multi-shards} + + POST /v1/shards/actions/migrate + +Migrates the list of given shard UIDs to the node specified by `target_node_uid`. The shards can be from multiple databases. This request is asynchronous. + +For more information about shard migration use cases and considerations, see [Migrate database shards]({{}}). + +#### Required permissions + +| Permission name | Roles | +|-----------------|-------| +| [migrate_shard]({{< relref "/operate/rs/7.4/references/rest-api/permissions#migrate_shard" >}}) | admin
cluster_member
db_member | + +### Request {#post-multi-request} + +#### Example HTTP request + + POST /v1/shards/actions/migrate + +#### Example JSON body + +```json +{ + "shard_uids": ["2","4","6"], + "target_node_uid": 9, + "override_rack_policy": false, + "preserve_roles": false, + "max_concurrent_bdb_migrations": 3 +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body {#post-multi-request-body} + +The request body is a JSON object that can contain the following fields: + +| Field | Type | Description | +|-------|------|-------------| +| shard_uids | array of strings | List of shard UIDs to migrate. | +| target_node_uid | integer | UID of the node to where the shards should migrate. | +| override_rack_policy | boolean | If true, overrides and ignores rack-aware policy violations. | +| dry_run | boolean | Determines whether the migration is actually done. If true, will just do a dry run. If the dry run succeeds, the request returns a `200 OK` status code. Otherwise, it returns a JSON object with an error code and description. | +| preserve_roles | boolean | If true, preserves the migrated shards' roles after migration. | +| max_concurrent_bdb_migrations | integer | The number of concurrent databases that can migrate shards. | + +### Response {#post-multi-response} + +Returns a JSON object with an `action_uid`. You can track the action's progress with a [`GET /v1/actions/`]({{}}) request. + +#### Example JSON body + +```json +{ + "action_uid": "e5e24ddf-a456-4a7e-ad53-4463cd44880e", + "description": "Migrate was triggered" +} +``` + +### Status codes {#post-multi-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error. | +| [400 Bad Request](https://www.rfc-editor.org/rfc/rfc9110.html#name-400-bad-request) | Conflicting parameters. | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | A list of shard UIDs is required and not given, a specified shard does not exist, or a node UID is required and not given. | +| [500 Internal Server Error](https://www.rfc-editor.org/rfc/rfc9110.html#name-500-internal-server-error) | Migration failed. | + + +## Migrate shard {#post-shard} + + POST /v1/shards/{int: uid}/actions/migrate + +Migrates the shard with the given `shard_uid` to the node specified by `target_node_uid`. If the shard is already on the target node, nothing happens. This request is asynchronous. + +For more information about shard migration use cases and considerations, see [Migrate database shards]({{}}). + +#### Required permissions + +| Permission name | Roles | +|-----------------|-------| +| [migrate_shard]({{< relref "/operate/rs/7.4/references/rest-api/permissions#migrate_shard" >}}) | admin
cluster_member
db_member | + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/shards/1/actions/migrate + +#### Example JSON body + +```json +{ + "target_node_uid": 9, + "override_rack_policy": false, + "preserve_roles": false +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the shard to migrate. | + + +#### Request body {#post-request-body} + +The request body is a JSON object that can contain the following fields: + +| Field | Type | Description | +|-------|------|-------------| +| target_node_uid | integer | UID of the node to where the shard should migrate. | +| override_rack_policy | boolean | If true, overrides and ignores rack-aware policy violations. | +| dry_run | boolean | Determines whether the migration is actually done. If true, will just do a dry run. If the dry run succeeds, the request returns a `200 OK` status code. Otherwise, it returns a JSON object with an error code and description. | +| preserve_roles | boolean | If true, preserves the migrated shards' roles after migration. | + +### Response {#post-response} + +Returns a JSON object with an `action_uid`. You can track the action's progress with a [`GET /v1/actions/`]({{}}) request. + +#### Example JSON body + +```json +{ + "action_uid": "e5e24ddf-a456-4a7e-ad53-4463cd44880e", + "description": "Migrate was triggered" +} +``` + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error. | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | Shard does not exist, or node UID is required and not given. | +| [409 Conflict](https://www.rfc-editor.org/rfc/rfc9110.html#name-409-conflict) | Database is currently busy. | +--- +Title: Shard actions requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: REST API requests to perform shard actions +headerRange: '[1-2]' +hideListLinks: true +linkTitle: actions +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/shards/actions/' +--- + +## Migrate + +| Method | Path | Description | +|--------|------|-------------| +| [POST]({{}}) | `/v1/shards/actions/migrate` | Migrate multiple shards | +| [POST]({{}}) | `/v1/shards/{uid}/actions/migrate` | Migrate a specific shard | +--- +Title: Latest shards stats requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Most recent shard statistics requests +headerRange: '[1-2]' +linkTitle: last +weight: $weight +aliases: /operate/rs/references/rest-api/requests/shards-stats/last/ +url: '/operate/rs/7.4/references/rest-api/requests/shards/stats/last/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-shards-stats-last) | `/v1/shards/stats/last` | Get most recent stats for all shards | +| [GET](#get-shard-stats-last) | `/v1/shards/stats/last/{uid}` | Get most recent stats for a specific shard | + +## Get latest stats for all shards {#get-all-shards-stats-last} + + GET /v1/shards/stats/last + +Get most recent statistics for all shards. 
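+
+If you want to try this endpoint from the command line, the following is a minimal `curl` sketch. It assumes the default REST API port 9443, basic authentication, and a self-signed certificate; the host name and credentials are placeholders, so adjust them for your cluster.
+
+```sh
+# Fetch the most recent 1-second statistics for all shards
+curl -k -u "admin@example.com:password" \
+  "https://cluster.example.com:9443/v1/shards/stats/last?interval=1sec"
+```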
+ +#### Required permissions + +| Permission name | +|-----------------| +| [view_all_shard_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_shard_stats" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/shards/stats/last?interval=1sec&stime=015-05-27T08:27:35Z + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week. Default: 1sec (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-all-response} + +Returns most recent [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for all shards. + +#### Example JSON body + +```json +{ + "1": { + "interval": "1sec", + "stime": "2015-05-28T08:27:35Z", + "etime": "2015-05-28T08:28:36Z", + "used_memory_peak": 5888264.0, + "used_memory_rss": 5888264.0, + "read_hits": 0.0, + "pubsub_patterns": 0.0, + "no_of_keys": 0.0, + "mem_size_lua": 35840.0, + "last_save_time": 1432541051.0, + "sync_partial_ok": 0.0, + "connected_clients": 9.0, + "avg_ttl": 0.0, + "write_misses": 0.0, + "used_memory": 5651440.0, + "sync_full": 0.0, + "expired_objects": 0.0, + "total_req": 0.0, + "blocked_clients": 0.0, + "pubsub_channels": 0.0, + "evicted_objects": 0.0, + "no_of_expires": 0.0, + "interval": "1sec", + "write_hits": 0.0, + "read_misses": 0.0, + "sync_partial_err": 0.0, + "rdb_changes_since_last_save": 0.0 + }, + "2": { + "interval": "1sec", + "stime": "2015-05-28T08:27:40Z", + "etime": "2015-05-28T08:28:45Z", + "// additional fields..." + } +} +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | No shards exist | + +## Get latest shard stats {#get-shard-stats-last} + + GET /v1/shards/stats/last/{int: uid} + +Get most recent statistics for a specific shard. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_shard_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_shard_stats" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/shards/stats/last/1?interval=1sec&stime=2015-05-28T08:27:35Z + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the shard requested. | + + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week. Default: 1sec. (optional) | +| stime | ISO_8601 | Start time from which we want the stats. 
Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-response} + +Returns the most recent [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for the specified shard. + +#### Example JSON body + +```json +{ + "1": { + "interval": "1sec", + "stime": "2015-05-28T08:27:35Z", + "etime": "2015-05-28T08:27:36Z", + "used_memory_peak": 5888264.0, + "used_memory_rss": 5888264.0, + "read_hits": 0.0, + "pubsub_patterns": 0.0, + "no_of_keys": 0.0, + "mem_size_lua": 35840.0, + "last_save_time": 1432541051.0, + "sync_partial_ok": 0.0, + "connected_clients": 9.0, + "avg_ttl": 0.0, + "write_misses": 0.0, + "used_memory": 5651440.0, + "sync_full": 0.0, + "expired_objects": 0.0, + "total_req": 0.0, + "blocked_clients": 0.0, + "pubsub_channels": 0.0, + "evicted_objects": 0.0, + "no_of_expires": 0.0, + "interval": "1sec", + "write_hits": 0.0, + "read_misses": 0.0, + "sync_partial_err": 0.0, + "rdb_changes_since_last_save": 0.0 + } +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | Shard does not exist | +| [406 Not Acceptable](https://www.rfc-editor.org/rfc/rfc9110.html#name-406-not-acceptable) | Shard isn't currently active | +--- +Title: Shards stats requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shard statistics requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: stats +weight: $weight +aliases: /operate/rs/references/rest-api/requests/shards-stats/ +url: '/operate/rs/7.4/references/rest-api/requests/shards/stats/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-shards-stats) | `/v1/shards/stats` | Get stats for all shards | +| [GET](#get-shard-stats) | `/v1/shards/stats/{uid}` | Get stats for a specific shard | + +## Get all shards stats {#get-all-shards-stats} + + GET /v1/shards/stats + +Get statistics for all shards. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_all_shard_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_shard_stats" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/shards/stats?interval=1hour&stime=2014-08-28T10:00:00Z + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| parent_uid | integer | Only return shard from the given BDB ID (optional) | +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. 
Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| metrics | list | Comma-separated list of [metric names]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics/shard-metrics" >}}) for which we want statistics (default is all) (optional) | + +### Response {#get-all-response} + +Returns a JSON array of [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for all shards. + +#### Example JSON body + +```json +[ + { + "status": "active", + "uid": "1", + "node_uid": "1", + "assigned_slots": "0-8191", + "intervals": [ + { + "interval": "1sec", + "stime": "2015-05-28T08:27:35Z", + "etime": "2015-05-28T08:27:40Z", + "used_memory_peak": 5888264.0, + "used_memory_rss": 5888264.0, + "read_hits": 0.0, + "pubsub_patterns": 0.0, + "no_of_keys": 0.0, + "mem_size_lua": 35840.0, + "last_save_time": 1432541051.0, + "sync_partial_ok": 0.0, + "connected_clients": 9.0, + "avg_ttl": 0.0, + "write_misses": 0.0, + "used_memory": 5651440.0, + "sync_full": 0.0, + "expired_objects": 0.0, + "total_req": 0.0, + "blocked_clients": 0.0, + "pubsub_channels": 0.0, + "evicted_objects": 0.0, + "no_of_expires": 0.0, + "interval": "1sec", + "write_hits": 0.0, + "read_misses": 0.0, + "sync_partial_err": 0.0, + "rdb_changes_since_last_save": 0.0 + }, + { + "interval": "1sec", + "stime": "2015-05-28T08:27:40Z", + "etime": "2015-05-28T08:27:45Z", + "// additional fields..." + } + ] + }, + { + "uid": "2", + "status": "active", + "node_uid": "1", + "assigned_slots": "8192-16383", + "intervals": [ + { + "interval": "1sec", + "stime": "2015-05-28T08:27:35Z", + "etime": "2015-05-28T08:27:40Z", + "// additional fields..." + }, + { + "interval": "1sec", + "stime": "2015-05-28T08:27:40Z", + "etime": "2015-05-28T08:27:45Z", + "// additional fields..." + } + ] + } +] +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | No shards exist | + +## Get shard stats {#get-shard-stats} + + GET /v1/shards/stats/{int: uid} + +Get statistics for a specific shard. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_shard_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_shard_stats" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/shards/stats/1?interval=1hour&stime=2014-08-28T10:00:00Z + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the shard requested. | + + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. 
Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-response} + +Returns [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for the specified shard. + +#### Example JSON body + +```json +{ + "uid": "1", + "status": "active", + "node_uid": "1", + "role": "master", + "intervals": [ + { + "interval": "1sec", + "stime": "2015-05-28T08:24:13Z", + "etime": "2015-05-28T08:24:18Z", + "avg_ttl": 0.0, + "blocked_clients": 0.0, + "connected_clients": 9.0, + "etime": "2015-05-28T08:24:18Z", + "evicted_objects": 0.0, + "expired_objects": 0.0, + "last_save_time": 1432541051.0, + "used_memory": 5651440.0, + "mem_size_lua": 35840.0, + "used_memory_peak": 5888264.0, + "used_memory_rss": 5888264.0, + "no_of_expires": 0.0, + "no_of_keys": 0.0, + "pubsub_channels": 0.0, + "pubsub_patterns": 0.0, + "rdb_changes_since_last_save": 0.0, + "read_hits": 0.0, + "read_misses": 0.0, + "stime": "2015-05-28T08:24:13Z", + "sync_full": 0.0, + "sync_partial_err": 0.0, + "sync_partial_ok": 0.0, + "total_req": 0.0, + "write_hits": 0.0, + "write_misses": 0.0 + }, + { + "interval": "1sec", + "stime": "2015-05-28T08:24:18Z", + "etime": "2015-05-28T08:24:23Z", + + "// additional fields..." + } + ] +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | Shard does not exist | +| [406 Not Acceptable](https://www.rfc-editor.org/rfc/rfc9110.html#name-406-not-acceptable) | Shard isn't currently active | +--- +Title: Shard requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: REST API requests for database shards +headerRange: '[1-2]' +hideListLinks: true +linkTitle: shards +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/shards/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-shards) | `/v1/shards` | Get all shards | +| [GET](#get-shard) | `/v1/shards/{uid}` | Get a specific shard | + +## Get all shards {#get-all-shards} + + GET /v1/shards + +Get information about all shards in the cluster. + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/shards?extra_info_keys=used_memory_rss&extra_info_keys=connected_clients + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| extra_info_keys | list of strings | A list of extra keys to be fetched (optional) | + +### Response {#get-all-response} + +Returns a JSON array of [shard objects]({{}}). + +#### Example JSON body + +```json +[ + { + "uid": "1", + "role": "master", + "assigned_slots": "0-16383", + "bdb_uid": 1, + "detailed_status": "ok", + "loading": { + "status": "idle" + }, + "node_uid": "1", + "redis_info": { + "connected_clients": 14, + "used_memory_rss": 12263424 + }, + "report_timestamp": "2024-06-28T18:44:01Z", + "status": "active" + }, + { + "uid": 2, + "role": "slave", + // additional fields... + } +] +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error. 
| + +## Get shard {#get-shard} + + GET /v1/shards/{int: uid} + +Gets information about a single shard. + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/shards/1?extra_info_keys=used_memory_rss&extra_info_keys=connected_clients + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the requested shard. | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| extra_info_keys | list of strings | A list of extra keys to be fetched (optional) | + +### Response {#get-response} + +Returns a [shard object]({{}}). + +#### Example JSON body + +```json +{ + "assigned_slots": "0-16383", + "bdb_uid": 1, + "detailed_status": "ok", + "loading": { + "status": "idle" + }, + "node_uid": "1", + "redis_info": { + "connected_clients": 14, + "used_memory_rss": 12263424 + }, + "role": "master", + "report_timestamp": "2024-06-28T18:44:01Z", + "status": "active", + "uid": "1" +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error. | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | Shard UID does not exist. | +--- +Title: Cluster debug info requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the Redis Enterprise Software REST API /cluster/debuginfo requests. +headerRange: '[1-2]' +linkTitle: debuginfo +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/cluster/debuginfo/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-cluster-debuginfo) | `/v1/cluster/debuginfo` | Get debug info from all nodes and databases | + +## Get cluster debug info {#get-cluster-debuginfo} + + GET /v1/cluster/debuginfo + +Downloads a tar file that contains debug info from all nodes and databases. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_debugging_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_debugging_info" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/cluster/debuginfo + +### Response {#get-response} + +Downloads the debug info in a tar file called `filename.tar.gz`. Extract the files from the tar file to access the debug info for all nodes. + +#### Response headers + +| Key | Value | Description | +|-----|-------|-------------| +| Content-Type | application/x-gzip | Media type of request/response body | +| Content-Length | 653350 | Length of the response body in octets | +| Content-Disposition | attachment; filename=debuginfo.tar.gz | Display response in browser or download as attachment | + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Failed to get debug info. 
| +--- +Title: Cluster services configuration requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Cluster services configuration requests +headerRange: '[1-2]' +linkTitle: services_configuration +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/cluster/services_configuration/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-cluster-services_config) | `/v1/cluster/services_configuration` | Get cluster services settings | +| [PUT](#put-cluster-services_config) | `/v1/cluster/services_configuration` | Update cluster services settings | + +## Get cluster services configuration {#get-cluster-services_config} + + GET /v1/cluster/services_configuration + +Get cluster services settings. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_cluster_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_cluster_info" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/cluster/services_configuration + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-response} + +Returns a [services configuration object]({{< relref "/operate/rs/7.4/references/rest-api/objects/services_configuration" >}}). + +#### Example JSON body + +```json +{ + "cm_server": { + "operating_mode": "disabled" + }, + "mdns_server": { + "operating_mode": "enabled" + }, + "// additional services..." +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +## Update cluster services configuration {#put-cluster-services_config} + + PUT /v1/cluster/services_configuration + +Update the cluster services settings. + +#### Required permissions + +| Permission name | +|-----------------| +| [update_cluster]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_cluster" >}}) | + +### Request {#put-request} + +#### Example HTTP request + + PUT /v1/cluster/services_configuration + +#### Example JSON body + +```json +{ + "cm_server": { + "operating_mode": "disabled" + }, + "// additional services..." +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body + +Include a [services configuration object]({{< relref "/operate/rs/7.4/references/rest-api/objects/services_configuration" >}}) with updated fields in the request body. + +### Response {#put-response} + +Returns the updated [services configuration object]({{< relref "/operate/rs/7.4/references/rest-api/objects/services_configuration" >}}). + +#### Example JSON body + +```json +{ + "cm_server": { + "operating_mode": "disabled" + }, + "mdns_server": { + "operating_mode": "enabled" + }, + "// additional services..." 
+} +``` + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +--- +Title: Rotate cluster certificates requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Rotate cluster certificates requests +headerRange: '[1-2]' +linkTitle: rotate +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/cluster/certificates/rotate/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-cluster-certificates-rotate) | `/v1/cluster/certificates/rotate` | Regenerate all internal cluster certificates | + +## Rotate cluster certificates {#post-cluster-certificates-rotate} + + POST /v1/cluster/certificates/rotate + +Regenerates all _internal_ cluster certificates. + +The certificate rotation will be performed on all nodes within the cluster. If +"name" is provided, only rotate the specified certificate on all nodes within the cluster. + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/cluster/certificates/rotate + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#post-response} + +Responds with a `200 OK` status code if the internal certificates successfully rotate across the entire cluster. + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Failed, not all nodes have been updated. | +| [403 Forbidden](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.4) | Unsupported internal certificate rotation. | +--- +Title: Cluster certificates requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Cluster certificates requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: certificates +weight: $weight +aliases: + - /operate/rs/references/rest-api/requests/cluster/update-cert +url: '/operate/rs/7.4/references/rest-api/requests/cluster/certificates/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-cluster-certificates) | `/v1/cluster/certificates` | Get cluster certificates | +| [PUT](#put-cluster-update_cert) | `/v1/cluster/update_cert` | Update a cluster certificate | +| [DELETE](#delete-cluster-certificate) | `/v1/cluster/certificates/{certificate_name}` | Delete cluster certificate | + +## Get cluster certificates {#get-cluster-certificates} + + GET /v1/cluster/certificates + +Get the cluster's certificates. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_cluster_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_cluster_info" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/cluster/certificates + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-response} + +Returns a JSON object that contains the cluster's certificates and keys. 
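+
+As an illustration, here is a minimal sketch that fetches the certificates and prints only their names. The host and credentials are placeholders, and it assumes the default REST API port 9443 and that `jq` is available.
+
+```sh
+# List the names of the certificates and keys returned by the endpoint
+curl -sk -u "admin@example.com:password" \
+  "https://cluster.example.com:9443/v1/cluster/certificates" | jq 'keys'
+```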
+ +#### Example JSON body + +```json +{ + "api_cert": "-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----", + "api_key": "-----BEGIN RSA PRIVATE KEY-----...-----END RSA PRIVATE KEY-----" + "// additional certificates..." +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + + +## Update cluster certificate {#put-cluster-update_cert} + +```sh +PUT /v1/cluster/update_cert +``` + +Replaces an existing certificate on all nodes within the cluster with a new certificate. The new certificate must pass validation before it can replace the old certificate. + +See the [certificates table]({{< relref "/operate/rs/7.4/security/certificates" >}}) for the list of cluster certificates and their descriptions. + +### Request {#put-request} + +#### Example HTTP request + +```sh +PUT /v1/cluster/update_cert +``` + +#### Example JSON body + +```json +{ + "name": "certificate1", + "key": "-----BEGIN RSA PRIVATE KEY-----\n[key_content]\n-----END RSA PRIVATE KEY-----", + "certificate": "-----BEGIN CERTIFICATE-----\n[cert_content]\n-----END CERTIFICATE-----", +} +``` + +Replace `[key_content]` with the content of the private key and `[cert_content]` with the content of the certificate. + +### Response {#put-response} + +Responds with the `200 OK` status code if the certificate replacement succeeds across the entire cluster. + +Otherwise, retry the certificate update in case the failure was due to a temporary issue in the cluster. + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Failed, invalid certificate. | +| [403 Forbidden](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.4) | Failed, unknown certificate. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Failed, invalid certificate. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Failed, expired certificate. | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Failed, not all nodes have been updated. | + + +## Delete cluster certificate {#delete-cluster-certificate} + + DELETE /v1/cluster/certificates/{string: certificate_name} + +Removes the specified cluster certificate from both CCS and disk +across all nodes. Only optional certificates can be deleted through +this endpoint. See the [certificates table]({{< relref "/operate/rs/7.4/security/certificates" >}}) for the list of cluster certificates and their descriptions. + +### Request {#delete-request} + +#### Example HTTP request + + DELETE /v1/cluster/certificates/ + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#delete-response} + +Returns a status code that indicates the certificate deletion success or failure. 
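+
+For example, here is a minimal command-line sketch of deleting an optional certificate. The certificate name, host, and credentials are placeholders, and it assumes the default REST API port 9443.
+
+```sh
+# Delete an optional cluster certificate by name (placeholder name)
+curl -k -X DELETE -u "admin@example.com:password" \
+  "https://cluster.example.com:9443/v1/cluster/certificates/<certificate_name>"
+```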
+ +### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Operation successful | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Failed, requested deletion of an unknown certificate | +| [403 Forbidden](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.4) | Failed, requested deletion of a required certificate | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Failed, error while deleting certificate from disk | +--- +Title: Check all cluster nodes requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Requests that run checks on all cluster nodes. +headerRange: '[1-2]' +linkTitle: check +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/cluster/check/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-cluster-check) | `/v1/cluster/check` | Runs checks on all cluster nodes | + +## Check all nodes {#get-cluster-check} + + GET /v1/cluster/check + +Runs the following checks on all cluster nodes: + +| Check name | Description | +|-----------|-------------| +| bootstrap_status | Verifies the local node's bootstrap process completed without errors. | +| services | Verifies all Redis Enterprise Software services are running. | +| port_range | Verifies the [`ip_local_port_range`](https://www.kernel.org/doc/html/latest/networking/ip-sysctl.html) doesn't conflict with the ports Redis Enterprise might assign to shards. | +| pidfiles | Verifies all active local shards have PID files. | +| capabilities | Verifies all binaries have the proper capability bits. | +| existing_sockets | Verifies sockets exist for all processes that require them. | +| host_settings | Verifies the following:
• Linux `overcommit_memory` setting is 1.
• `transparent_hugepage` is disabled.
• Socket maximum connections setting `somaxconn` is 1024. | +| tcp_connectivity | Verifies this node can connect to all other alive nodes. | + +#### Required permissions + +| Permission name | +|-----------------| +| [view_all_nodes_checks]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_nodes_checks" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/cluster/check + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +### Response {#get-response} + +Returns a JSON array with results from all nodes. + +When errors occur, the server returns a JSON object with `result: false` and an `error` field that provides additional information for each node that had an error. If an error occurs during a check, the `error` field only includes a message for the first check that fails on each node. + +Possible `error` messages: + +- "bootstrap request to cnm_http failed,resp_code: ...,resp_content: ..." +- "process ... is not running or not responding (...)" +- "could not communicate with 'supervisorctl': ..." +- "connectivity check failed retrieving ports for testing" + +#### Example JSON body + +```json +{ + "cluster_test_result": false, + "nodes": [ + { + "node_uid": "1", + "result": true + }, + { + "node_uid": "2", + "result": true + }, + { + "node_uid": "3", + "result": false, + "error": "process alert_mgr is not running or not responding ([Errno 111] Connection refused)" + } + ] +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error | +--- +Title: Cluster LDAP requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: LDAP configuration requests +headerRange: '[1-2]' +linkTitle: ldap +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/cluster/ldap/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-cluster-ldap) | `/v1/cluster/ldap` | Get LDAP configuration | +| [PUT](#put-cluster-ldap) | `/v1/cluster/ldap` | Set/update LDAP configuration | +| [DELETE](#delete-cluster-ldap) | `/v1/cluster/ldap` | Delete LDAP configuration | + +## Get LDAP configuration {#get-cluster-ldap} + + GET /v1/cluster/ldap + +Get the LDAP configuration. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_ldap_config]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_ldap_config" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/cluster/ldap + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-response} + +Returns an [LDAP object]({{< relref "/operate/rs/7.4/references/rest-api/objects/ldap" >}}). 
+
+#### Example JSON body
+
+```json
+{
+    "bind_dn": "rl_admin",
+    "bind_pass": "***",
+    "ca_cert": "",
+    "control_plane": false,
+    "data_plane": false,
+    "dn_group_attr": "MemberOf",
+    "dn_group_query": {},
+    "starttls": false,
+    "uris": ["ldap://ldap.example.org:636"],
+    "user_dn_query": {},
+    "user_dn_template": "cn=%u, ou=users,dc=example,dc=org"
+}
+```
+
+### Status codes {#get-status-codes}
+
+| Code | Description |
+|------|-------------|
+| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success |
+
+## Update LDAP configuration {#put-cluster-ldap}
+
+    PUT /v1/cluster/ldap
+
+Set or update the cluster LDAP configuration.
+
+#### Required permissions
+
+| Permission name |
+|-----------------|
+| [config_ldap]({{< relref "/operate/rs/7.4/references/rest-api/permissions#config_ldap" >}}) |
+
+### Request {#put-request}
+
+#### Example HTTP request
+
+    PUT /v1/cluster/ldap
+
+#### Example JSON body
+
+```json
+{
+    "uris": [
+        "ldap://ldap.redislabs.com:389"
+    ],
+    "bind_dn": "rl_admin",
+    "bind_pass": "secret",
+    "user_dn_template": "cn=%u,dc=example,dc=org",
+    "dn_group_attr": "MemberOf",
+    "directory_timeout_s": 5
+}
+```
+
+#### Request headers
+
+| Key | Value | Description |
+|-----|-------|-------------|
+| Host | cnm.cluster.fqdn | Domain name |
+| Accept | application/json | Accepted media type |
+
+
+#### Request body
+
+Include an [LDAP object]({{< relref "/operate/rs/7.4/references/rest-api/objects/ldap" >}}) with updated fields in the request body.
+
+### Response {#put-response}
+
+Returns a status code. If an error occurs, the response body may include an error code and message with more details.
+
+### Error codes {#put-error-codes}
+
+Possible `error_code` values:
+
+| Code | Description |
+|------|-------------|
+| illegal_fields_combination | An unacceptable combination of fields was specified for the configuration object (for example, two mutually exclusive fields), or a required field is missing. |
+
+### Status codes {#put-status-codes}
+
+| Code | Description |
+|------|-------------|
+| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, LDAP config has been set. |
+| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters. |
+
+## Delete LDAP configuration {#delete-cluster-ldap}
+
+    DELETE /v1/cluster/ldap
+
+Clear the LDAP configuration.
+
+#### Required permissions
+
+| Permission name |
+|-----------------|
+| [config_ldap]({{< relref "/operate/rs/7.4/references/rest-api/permissions#config_ldap" >}}) |
+
+### Request {#delete-request}
+
+#### Example HTTP request
+
+    DELETE /v1/cluster/ldap
+
+#### Request headers
+
+| Key | Value | Description |
+|-----|-------|-------------|
+| Host | cnm.cluster.fqdn | Domain name |
+| Accept | application/json | Accepted media type |
+
+### Response {#delete-response}
+
+Returns a status code.
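+
+A minimal command-line sketch of clearing the configuration and checking the returned status code (hypothetical host and credentials, default REST API port 9443):
+
+```sh
+# Clear the LDAP configuration and print only the HTTP status code
+curl -ks -o /dev/null -w '%{http_code}\n' -X DELETE \
+  -u "admin@example.com:password" \
+  "https://cluster.example.com:9443/v1/cluster/ldap"
+```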
+ +### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success | +--- +Title: Cluster module capabilities requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis module capabilities requests +headerRange: '[1-2]' +linkTitle: module-capabilities +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/cluster/module-capabilities/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-cluster-module-capabilities) | `/v1/cluster/module-capabilities` | List possible Redis module capabilities | + +## List Redis module capabilities {#get-cluster-module-capabilities} + + GET /v1/cluster/module-capabilities + +List possible Redis module capabilities. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_cluster_modules]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_cluster_modules" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/cluster/module-capabilities + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | \*/\* | Accepted media type | + +### Response {#get-response} + +Returns a JSON object that contains a list of capability names and descriptions. + +#### Example JSON body + +```json +{ + "all_capabilities": [ + {"name": "types", "desc": "module has its own types and not only operate on existing redis types"}, + {"name": "no_multi_key", "desc": "module has no methods that operate on multiple keys"} + "// additional capabilities..." + ] +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +--- +Title: Auditing database connections requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Auditing database connections requests +headerRange: '[1-2]' +linkTitle: auditing/db_conns +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/cluster/auditing-db-conns/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-cluster-audit-db-conns) | `/v1/cluster/auditing/db_conns` | Get database connection auditing settings | +| [PUT](#put-cluster-audit-db-conns) | `/v1/cluster/auditing/db_conns` | Update database connection auditing settings | +| [DELETE](#delete-cluster-audit-db-conns) | `/v1/cluster/auditing/db_conns` | Delete database connection auditing settings | + +## Get database auditing settings {#get-cluster-audit-db-conns} + + GET /v1/cluster/auditing/db_conns + +Gets the configuration settings for [auditing database connections]({{< relref "/operate/rs/7.4/security/audit-events" >}}). + +#### Required permissions + +| Permission name | +|-----------------| +| [view_cluster_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_cluster_info" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/cluster/auditing/db_conns + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-response} + +Returns a [database connection auditing configuration object]({{< relref "/operate/rs/7.4/references/rest-api/objects/db-conns-auditing-config" >}}). 
+ +#### Example JSON body + +```json +{ + "audit_address": "127.0.0.1", + "audit_port": 12345, + "audit_protocol": "TCP", + "audit_reconnect_interval": 1, + "audit_reconnect_max_attempts": 0 +} +``` + +### Error codes {#get-error-codes} + +When errors are reported, the server may return a JSON object with `error_code` and `message` fields that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| db_conns_auditing_unsupported_by_capability | Not all nodes support DB Connections Auditing capability | + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | Success | +| [406 Not Acceptable](https://www.rfc-editor.org/rfc/rfc9110.html#name-406-not-acceptable) | Feature not supported for all nodes | + +## Update database auditing {#put-cluster-audit-db-conns} + + PUT /v1/cluster/auditing/db_conns + +Updates the configuration settings for [auditing database connections]({{< relref "/operate/rs/7.4/security/audit-events" >}}). + +#### Required permissions + +| Permission name | +|-----------------| +| [update_cluster]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_cluster" >}}) | + +### Request {#put-request} + +#### Example HTTP request + + PUT /v1/cluster/auditing/db_conns + +#### Example JSON body + +```json +{ + "audit_protocol": "TCP", + "audit_address": "127.0.0.1", + "audit_port": 12345, + "audit_reconnect_interval": 1, + "audit_reconnect_max_attempts": 0 +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body + +Include a [database connection auditing configuration object]({{< relref "/operate/rs/7.4/references/rest-api/objects/db-conns-auditing-config" >}}) with updated fields in the request body. + +### Response {#put-response} + +Returns the updated [database connection auditing configuration object]({{< relref "/operate/rs/7.4/references/rest-api/objects/db-conns-auditing-config" >}}). + +#### Example JSON body + +```json +{ + "audit_address": "127.0.0.1", + "audit_port": 12345, + "audit_protocol": "TCP", + "audit_reconnect_interval": 1, + "audit_reconnect_max_attempts": 0 +} +``` + +### Error codes {#put-error-codes} + +When errors are reported, the server may return a JSON object with `error_code` and `message` fields that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| db_conns_auditing_unsupported_by_capability | Not all nodes support DB Connections Auditing capability | + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | Success | +| [406 Not Acceptable](https://www.rfc-editor.org/rfc/rfc9110.html#name-406-not-acceptable) | Feature not supported for all nodes | + +## Delete database auditing settings {#delete-cluster-audit-db-conns} + + DELETE /v1/cluster/auditing/db_conns + +Resets the configuration settings for [auditing database connections]({{< relref "/operate/rs/7.4/security/audit-events" >}}). 
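+
+The reset is rejected while auditing is still enabled at the cluster or database level (see the error codes below). Here is a minimal sketch of one possible sequence; the host and credentials are placeholders, and it assumes the default REST API port 9443.
+
+```sh
+# 1. Disable the cluster-wide audit policy first; otherwise the delete
+#    fails with cannot_delete_audit_config_when_policy_enabled.
+curl -k -X PUT -u "admin@example.com:password" \
+  -H "Content-Type: application/json" \
+  -d '{"db_conns_auditing": false}' \
+  "https://cluster.example.com:9443/v1/cluster/policy"
+
+# 2. Reset the database connection auditing settings.
+#    Databases that still have auditing enabled also block the delete.
+curl -k -X DELETE -u "admin@example.com:password" \
+  "https://cluster.example.com:9443/v1/cluster/auditing/db_conns"
+```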
+ +#### Required permissions + +| Permission name | +|-----------------| +| [update_cluster]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_cluster" >}}) | + +### Request {#delete-request} + +#### Example HTTP request + + DELETE /v1/cluster/auditing/db_conns + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#delete-response} + +Returns a status code that indicates whether the database connection auditing settings reset successfully or failed to reset. + +### Error codes {#delete-error-codes} + +When errors are reported, the server may return a JSON object with `error_code` and `message` fields that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| db_conns_audit_config_not_found | Unable to find the auditing configuration | +| cannot_delete_audit_config_when_policy_enabled | Auditing cluster policy is 'enabled' when trying to delete the auditing configuration | +| cannot_delete_audit_config_when_bdb_enabled | One of the databases has auditing configuration 'enabled' when trying to delete the auditing configuration | + +### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | Success | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | Configuration not found | +| [406 Not Acceptable](https://www.rfc-editor.org/rfc/rfc9110.html#name-406-not-acceptable) | Feature not supported for all nodes | +--- +Title: Cluster alerts requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Cluster alert requests +headerRange: '[1-2]' +linkTitle: alerts +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/cluster/alerts/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-alerts) | `/v1/cluster/alerts` | Get all cluster alerts | +| [GET](#get-alert) | `/v1/cluster/alerts/{alert}` | Get a specific cluster alert | + +## Get all cluster alerts {#get-all-alerts} + + GET /v1/cluster/alerts + +Get all alert states for the cluster object. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_cluster_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_cluster_alerts" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/cluster/alerts + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| ignore_settings | boolean | Retrieve updated alert state regardless of the cluster’s alert_settings. When not present, a disabled alert will always be retrieved as disabled with a false state. (optional) | + +### Response {#get-all-response} + +Returns a hash of [alert objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}) and their states. + +#### Example JSON body + +```json +{ + "cluster_too_few_nodes_for_replication": { + "change_time": "2014-12-22T11:48:00Z", + "change_value": { + "state": false + }, + "enabled": true, + "state": "off", + "severity": "WARNING", + }, + "..." 
+} +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +## Get cluster alert {#get-alert} + + GET /v1/cluster/alerts/{alert} + +Get a cluster alert state. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_cluster_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_cluster_alerts" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/cluster/alerts/cluster_too_few_nodes_for_replication + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| ignore_settings | boolean | Retrieve updated alert state regardless of the cluster’s alert_settings. When not present, a disabled alert will always be retrieved as disabled with a false state. (optional) | + +### Response {#get-response} + +Returns an [alert object]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}). + +#### Example JSON body + +```json +{ + "change_time": "2014-12-22T11:48:00Z", + "change_value": { + "state": false + }, + "enabled": true, + "state": "off", + "severity": "WARNING", +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Specified alert does not exist | +--- +Title: Cluster actions requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Cluster action requests +headerRange: '[1-2]' +linkTitle: actions +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/cluster/actions/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-cluster-actions) | `/v1/cluster/actions` | Get the status of all actions | +| [GET](#get-cluster-action) | `/v1/cluster/actions/{action}` | Get the status of a specific action | +| [POST](#post-cluster-action) | `/v1/cluster/actions/{action}` | Initiate a cluster-wide action | +| [DELETE](#delete-cluster-action) | `/v1/cluster/actions/{action}` | Cancel action or remove action status | + +## Get all cluster actions {#get-all-cluster-actions} + + GET /v1/cluster/actions + +Get the status of all currently executing, queued, or completed cluster actions. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_status_of_cluster_action]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_status_of_cluster_action" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/cluster/actions + +### Response {#get-all-response} + +Returns a JSON array of [action objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/action" >}}). + +#### Example JSON body + +```json +{ + "actions": [ + { + "name": "action_name", + "status": "queued", + "progress": 0.0 + } + ] +} +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error, response provides info about an ongoing action. 
| +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Action does not exist (i.e. not currently running and no available status of last run). | + +## Get cluster action {#get-cluster-action} + + GET /v1/cluster/actions/{action} + +Get the status of a currently executing, queued, or completed cluster action. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_status_of_cluster_action]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_status_of_cluster_action" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/cluster/actions/action_name + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| action | string | The action to check. | + +### Response {#get-response} + +Returns an [action object]({{< relref "/operate/rs/7.4/references/rest-api/objects/action" >}}). + +#### Example JSON body + +```json +{ + "name": "action_name", + "status": "queued", + "progress": 0.0 +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error, response provides info about an ongoing action. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Action does not exist (i.e. not currently running and no available status of last run). | + +## Initiate cluster-wide action {#post-cluster-action} + + POST /v1/cluster/actions/{action} + +Initiate a cluster-wide action. + +The API allows only a single instance of any action type to be +invoked at the same time, and violations of this requirement will +result in a `409 CONFLICT` response. + +The caller is expected to query and process the results of the +previously executed instance of the same action, which will be +removed as soon as the new one is submitted. + +#### Required permissions + +| Permission name | +|-----------------| +| [start_cluster_action]({{< relref "/operate/rs/7.4/references/rest-api/permissions#start_cluster_action" >}}) | + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/cluster/actions/action_name + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| action | string | The name of the action required. | + +Supported cluster actions: + +- `change_master`: Promotes a specified node to become the primary node of the cluster, which coordinates cluster-wide operations. Include the `node_uid` of the node you want to promote in the request body. + + ```sh + POST /v1/cluster/actions/change_master + { + "node_uid": "2" + } + ``` + +### Response {#post-response} + +The body content may provide additional action details. Currently, it is not used. + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error, action was initiated. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad action or content provided. | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | A conflicting action is already in progress. | + +## Cancel action {#delete-cluster-action} + + DELETE /v1/cluster/actions/{action} + +Cancel a queued or executing cluster action, or remove the status of +a previously executed and completed action. 
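+
+To tie these endpoints together, here is a minimal sketch that promotes a node, polls the action status, and finally removes the stored status of the completed run. The host, credentials, and node UID are placeholders, and it assumes the default REST API port 9443.
+
+```sh
+# Promote node 2 to become the cluster master. Only one instance of an
+# action type can run at a time; a second POST returns 409 Conflict.
+curl -k -X POST -u "admin@example.com:password" \
+  -H "Content-Type: application/json" \
+  -d '{"node_uid": "2"}' \
+  "https://cluster.example.com:9443/v1/cluster/actions/change_master"
+
+# Poll the action status until it completes.
+curl -k -u "admin@example.com:password" \
+  "https://cluster.example.com:9443/v1/cluster/actions/change_master"
+
+# Remove the stored status of the completed run.
+curl -k -X DELETE -u "admin@example.com:password" \
+  "https://cluster.example.com:9443/v1/cluster/actions/change_master"
+```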
+ +#### Required permissions + +| Permission name | +|-----------------| +| [cancel_cluster_action]({{< relref "/operate/rs/7.4/references/rest-api/permissions#cancel_cluster_action" >}}) | + +### Request {#delete-request} + +#### Example HTTP request + + DELETE /v1/cluster/actions/action_name + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| action | string | The name of the action to cancel, currently no actions are supported. | + +### Response {#delete-response} + +Returns a status code. + +### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Action will be cancelled when possible. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Action unknown or not currently running. | +--- +Title: Cluster policy requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Cluster policy requests +headerRange: '[1-2]' +linkTitle: policy +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/cluster/policy/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-cluster-policy) | `/v1/cluster/policy` | Get cluster policy settings | +| [PUT](#put-cluster-policy) | `/v1/cluster/policy` | Update cluster policy settings | + +## Get cluster policy {#get-cluster-policy} + + GET /v1/cluster/policy + +Gets the cluster's current policy settings. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_cluster_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_cluster_info" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/cluster/policy + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-response} + +Returns a [cluster settings object]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster_settings" >}}). + +#### Example JSON body + +```json +{ + "db_conns_auditing": false, + "default_non_sharded_proxy_policy": "single", + "default_provisioned_redis_version": "6.0", + "default_sharded_proxy_policy": "single", + "default_shards_placement": "dense", + "redis_upgrade_policy": "major", + "// additional fields..." +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | Success | + +## Update cluster policy {#put-cluster-policy} + + PUT /v1/cluster/policy + +Update cluster policy settings. + +#### Required permissions + +| Permission name | +|-----------------| +| [update_cluster]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_cluster" >}}) | + +### Request {#put-request} + +#### Example HTTP request + + PUT /v1/cluster/policy + +#### Example JSON body + +```json +{ + "default_shards_placement": "sparse", + "default_sharded_proxy_policy": "all-nodes" +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body + +Include a [cluster settings object]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster_settings" >}}) with updated fields in the request body. 
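+
+As a command-line illustration, the following minimal sketch applies the example body above. The host and credentials are placeholders, and it assumes the default REST API port 9443.
+
+```sh
+# Change the default shard placement and proxy policy for new databases
+curl -k -X PUT -u "admin@example.com:password" \
+  -H "Content-Type: application/json" \
+  -d '{"default_shards_placement": "sparse", "default_sharded_proxy_policy": "all-nodes"}' \
+  "https://cluster.example.com:9443/v1/cluster/policy"
+```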
+ +### Response {#put-response} + +Returns a status code that indicates the success or failure of the cluster settings update. + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | Success | +| [400 Bad Request](https://www.rfc-editor.org/rfc/rfc9110.html#name-400-bad-request) | Failed to set parameters | +--- +Title: Cluster last stats requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Most recent cluster statistics requests +headerRange: '[1-2]' +linkTitle: last +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/cluster/stats/last/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-cluster-stats-last) | `/v1/cluster/stats/last` | Get most recent cluster stats | + +## Get latest cluster stats {#get-cluster-stats-last} + + GET /v1/cluster/stats/last + +Get the most recent cluster statistics. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_cluster_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_cluster_stats" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/cluster/stats/last?interval=1sec&stime=2015-10-14T06:44:00Z + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week. Default: 1sec. (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-response} + +Returns the most recent [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for the cluster. 
+ +#### Example JSON body + +```json +{ + "conns": 0.0, + "cpu_idle": 0.8424999999988358, + "cpu_system": 0.01749999999992724, + "cpu_user": 0.08374999999978172, + "egress_bytes": 7403.0, + "ephemeral_storage_avail": 151638712320.0, + "ephemeral_storage_free": 162375925760.0, + "etime": "2015-10-14T06:44:01Z", + "free_memory": 5862400000.0, + "ingress_bytes": 7469.0, + "interval": "1sec", + "persistent_storage_avail": 151638712320.0, + "persistent_storage_free": 162375925760.0, + "stime": "2015-10-14T06:44:00Z", + "total_req": 0.0 +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Internal server error | +--- +Title: Cluster stats requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Cluster statistics requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: stats +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/cluster/stats/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-cluster-stats) | `/v1/cluster/stats` | Get cluster stats | + +## Get cluster stats {#get-cluster-stats} + +```sh +GET /v1/cluster/stats +``` + +Get cluster statistics. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_cluster_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_cluster_stats" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/cluster/stats/1?interval=1hour&stime=2014-08-28T10:00:00Z +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-response} + +Returns [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for the cluster. + +#### Example JSON body + +```json +{ + "intervals": [ + { + "interval": "1hour", + "stime": "2015-05-27T12:00:00Z", + "etime": "2015-05-28T12:59:59Z", + "conns": 0.0, + "cpu_idle": 0.8533959401503577, + "cpu_system": 0.01602159448549579, + "cpu_user": 0.08721123782294203, + "egress_bytes": 1111.2184745131947, + "ephemeral_storage_avail": 3406676307.1449075, + "ephemeral_storage_free": 4455091440.360014, + "free_memory": 2745470765.673594, + "ingress_bytes": 220.84083194769272, + "interval": "1week", + "persistent_storage_avail": 3406676307.1533995, + "persistent_storage_free": 4455091440.088265, + "total_req": 0.0 + }, + { + "interval": "1hour", + "stime": "2015-05-27T13:00:00Z", + "etime": "2015-05-28T13:59:59Z", + "// additional fields..." + } + ] +} +``` + +### Example requests + +#### cURL + +```sh +$ curl -k -u "[username]:[password]" -X GET + https://[host][:port]/v1/cluster/stats?interval=1hour +``` + +#### Python + +```python +import requests + +url = "https://[host][:port]/v1/cluster/stats?interval=1hour" +auth = ("[username]", "[password]") + +response = requests.request("GET", url, auth=auth) + +print(response.text) +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Internal server error | +--- +Title: Cluster requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Cluster settings requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: cluster +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/cluster/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-cluster) | `/v1/cluster` | Get cluster info | +| [PUT](#put-cluster) | `/v1/cluster` | Update cluster settings | + +## Get cluster info {#get-cluster} + + GET /v1/cluster + +Get cluster info. 
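A minimal Python sketch of this request, using the same placeholder conventions as the other examples in this reference:

```python
import requests

url = "https://[host][:port]/v1/cluster"
auth = ("[username]", "[password]")

response = requests.request("GET", url, auth=auth)

# The response body is a cluster object; for example, print the cluster name.
print(response.json()["name"])
```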
+ +#### Required permissions + +| Permission name | +|-----------------| +| [view_cluster_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_cluster_info" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/cluster + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-response} + +Returns a [cluster object]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster" >}}). + +#### Example JSON body + +```json +{ + "name": "my-rlec-cluster", + "alert_settings": { "..." }, + "created_time": "2015-04-29T09:09:25Z", + "email_alerts": false, + "email_from": "", + "rack_aware": false, + "smtp_host": "", + "smtp_password": "", + "smtp_port": 25, + "smtp_tls_mode": "none", + "smtp_username": "" +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +## Update cluster settings {#put-cluster} + + PUT /v1/cluster + +Update cluster settings. + +If called with the `dry_run` URL query string, the function will +validate the [cluster object]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster" >}}), but will not apply the requested +changes. + +#### Required permissions + +| Permission name | +|-----------------| +| [update_cluster]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_cluster" >}}) | + +### Request {#put-request} + +#### Example HTTP request + + PUT /v1/cluster + +#### Example JSON body + +```json +{ + "email_alerts": true, + "alert_settings": { + "node_failed": true, + "node_memory": { + "enabled": true, + "threshold": "80" + } + } +} +``` + +The above request will enable email alerts and alert reporting for node failures and node removals. + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| dry_run | string | Validate but don't apply the new cluster settings | + +#### Request body + +Include a [cluster object]({{< relref "/operate/rs/7.4/references/rest-api/objects/cluster" >}}) with updated fields in the request body. + +### Response {#put-response} + +#### Example JSON body + +```json +{ + "name": "mycluster.mydomain.com", + "email_alerts": true, + "alert_settings": { + "node_failed": true, + "node_memory": { + "enabled": true, + "threshold": "80" + } + }, + "// additional fields..." +} +``` + +### Error codes {#put-error-codes} + +When errors are reported, the server may return a JSON object with `error_code` and `message` field that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| bad_nginx_conf | • Designated port is already bound.

• nginx configuration is illegal. | +| bad_debuginfo_path | • Debuginfo path doesn't exist.

• Debuginfo path is inaccessible. | +| config_edit_conflict | Cluster config was edited by another source simultaneously. | + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad content provided. | + +--- +Title: Bootstrap validation requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Boostrap validation requests +headerRange: '[1-2]' +linkTitle: validate +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bootstrap/validate/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-bootstrap-validate) | `/v1/bootstrap/validate/{action}` | Perform bootstrap validation | + +## Bootstrap validation {#post-bootstrap-validate} + + POST /v1/bootstrap/validate/{action} + +Perform bootstrap validation. + +Unlike actual bootstrapping, this request blocks and immediately +returns with a response. + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/bootstrap/validate/join_cluster + +#### Request body + +The request must contain a [bootstrap configuration object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bootstrap" >}}), similar to the one used for actual bootstrapping. + +### Response {#post-response} + +If an error occurs, the call returns a `bootstrap_status` JSON object that contains the following fields: + +| Field | Description | +|-------|-------------| +| state | Current bootstrap state.

`idle`: No bootstrapping started.

`initiated`: Bootstrap request received.

`creating_cluster`: In the process of creating a new cluster.

`joining_cluster`: In the process of joining an existing cluster.

`error`: The last bootstrap action failed.

`completed`: The last bootstrap action completed successfully.| +| start_time | Bootstrap process start time | +| end_time | Bootstrap process end time | +| error_code | If state is `error`, this error code describes the type of error encountered. | +| error_details | An error-specific object that may contain additional information about the error. A common field in use is `message` which provides a more verbose error message. + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error, validation was successful. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Validation failed, bootstrap status is returned as body. | +--- +Title: Bootstrap requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Bootstrap requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: bootstrap +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bootstrap/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-bootstrap) | `/v1/boostrap` | Get the local node's bootstrap status | +| [POST](#post-bootstrap) | `/v1/bootstrap/{action}` | Initiate bootstrapping | + +## Get bootstrap status {#get-bootstrap} + +```sh +GET /v1/bootstrap +``` + +Get the local node's bootstrap status. + +This request is accepted as soon the cluster software is installed and before the node is part of an active cluster. + +Once the node is part of an active cluster, authentication is required. + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/bootstrap +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Accept | application/json | Accepted media type | + +### Response {#get-response} + +The JSON response object contains a `bootstrap_status` object and a `local_node_info` object. + +The `bootstrap_status` object contains the following information: + +| Field | Description | +|-------|-------------| +| state | Current bootstrap state.

`idle`: No bootstrapping started.

`initiated`: Bootstrap request received.

`creating_cluster`: In the process of creating a new cluster.

`recovering_cluster`: In the process of recovering a cluster.

`joining_cluster`: In the process of joining an existing cluster.

`error`: The last bootstrap action failed.

`completed`: The last bootstrap action completed successfully.| +| start_time | Bootstrap process start time | +| end_time | Bootstrap process end time | +| error_code | If state is `error`, this error code describes the type of error encountered. | +| error_details | An error-specific object that may contain additional information about the error. A common field in use is `message` which provides a more verbose error message. + +The `local_node_info` object is a subset of a [node object]({{< relref "/operate/rs/7.4/references/rest-api/objects/node" >}}) that provides information about the node configuration. + +#### Example JSON body + +```json +{ + "bootstrap_status": { + "start_time": "2014-08-29T11:19:49Z", + "end_time": "2014-08-29T11:19:49Z", + "state": "completed" + }, + "local_node_info": { + "uid": 3, + "software_version": "0.90.0-1", + "cores": 2, + "ephemeral_storage_path": "/var/opt/redislabs/tmp", + "ephemeral_storage_size": 1018889.8304, + "os_version": "Ubuntu 14.04 LTS", + "persistent_storage_path": "/var/opt/redislabs/persist/redis", + "persistent_storage_size": 1018889.8304, + "total_memory": 24137, + "uptime": 50278, + "available_addrs": [{ + "address": "172.16.50.122", + "format": "ipv4", + "if_name": "eth0", + "private": true + }, + { + "address": "10.0.3.1", + "format": "ipv4", + "if_name": "lxcbr0", + "private": true + }, + { + "address": "172.17.0.1", + "format": "ipv4", + "if_name": "docker0", + "private": true + }, + { + "address": "2001:db8:0:f101::1", + "format": "ipv6", + "if_name": "eth0", + "private": false + }] + } +} +``` + +### Error codes {#get-error-codes} + +| Code | Description | +|------|-------------| +| config_error | An error related to the bootstrap configuration provided (e.g. bad JSON). | +| connect_error | Failed to connect to cluster (e.g. FQDN DNS could not resolve, no/wrong node IP provided, etc. | +| access_denied | Invalid credentials supplied. | +| invalid_license | The license string provided is invalid. Additional info can be fetched from the `error_details` object, which includes the violation code in case the license is valid but its terms are violated. | +| repair_required | Cluster is in degraded mode and can only accept replacement nodes. When this happens, `error_details` contains two fields: `failed_nodes` and `replace_candidate`. The `failed_nodes` field is an array of objects, each describing a failed node with at least a `uid` field and an optional `rack_id`. `replace_candidate` is the UID of the node most suitable for replacement. | +| insufficient_node_memory | An attempt to replace a dead node fails because the replaced node does not have enough memory. When this happens, error_details contains a required_memory field which indicates the node memory requirement. | +| insufficient_node_flash | An attempt to replace a dead node fails because the replaced node does not have enough flash. When this happens, `error_details` contains a `required_flash` field which indicates the node flash requirement. | +| time_not_sync | An attempt to join a node with system time not synchronized with the rest of the cluster. | +| rack_id_required | An attempt to join a node with no `rack_id` in a rack-aware cluster. In addition, a `current_rack_ids` field will include an array of currently used rack ids. | +| socket_directory_mismatch | An attempt to join a node with a socket directory setting that differs from the cluster | +| node_config_mismatch | An attempt to join a node with a configuration setting (e.g. 
confdir, osuser, installdir) that differs from the cluster | +| path_error | A needed path does not exist or is not accessable. | +| internal_error | A different, unspecified internal error was encountered. | + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +## Start bootstrapping {#post-bootstrap} + +```sh +POST /v1/bootstrap/{action} +``` + +Initiate bootstrapping. + +The request must contain a bootstrap configuration JSON object, as +described in [Object attributes]({{< relref "/operate/rs/7.4/references/rest-api/objects/" >}}) or a minimal subset. + +Bootstrapping is permitted only when the current bootstrap state is +`idle` or `error` (in which case the process will restart with the new +configuration). + +This request is asynchronous - once the request has been accepted, +the caller is expected to poll bootstrap status while waiting for it to +complete. + +### Request {#post-request} + +#### Example HTTP request + +```sh +POST /v1/bootstrap/create_cluster +``` + +#### Example JSON body + +##### Join cluster +```json +{ + "action": "join_cluster", + "cluster": { + "nodes":[ "1.1.1.1", "2.2.2.2" ] + }, + "node": { + "paths": { + "persistent_path": "/path/to/persistent/storage", + "ephemeral_path": "/path/to/ephemeral/storage", + "bigstore_path": "/path/to/bigstore/storage" + }, + "bigstore_driver": "speedb", + "identity": { + "addr":"1.2.3.4", + "external_addr":["2001:0db8:85a3:0000:0000:8a2e:0370:7334", "3.4.5.6"] + } + }, + "credentials": { + "username": "my_username", + "password": "my_password" + } +} +``` + +##### Create cluster +```json +{ + "action": "create_cluster", + "cluster": { + "nodes": [], + "name": "my.cluster" + }, + "node": { + "paths": { + "persistent_path": "/path/to/persistent/storage", + "ephemeral_path": "/path/to/ephemeral/storage", + "bigstore_path": "/path/to/bigredis/storage" + }, + "identity": { + "addr":"1.2.3.4", + "external_addr":["2001:0db8:85a3:0000:0000:8a2e:0370:7334", "3.4.5.6"] + }, + "bigstore_driver": "rocksdb" + }, + "license": "----- LICENSE START -----\ndi+iK...KniI9\n----- LICENSE END -----\n", + "credentials": { + "username": "my_username", + "password": "my_password" + } +} +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body + +Include a [bootstrap object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bootstrap" >}}) in the request body. + +### Response {#post-response} + +#### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Request received and processing begins. 
| +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Bootstrap already in progress (check state) | +--- +Title: License requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: License requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: license +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/license/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-license) | `/v1/license` | Get license details | +| [PUT](#put-license) | `/v1/license` | Update the license | + +## Get license {#get-license} + + GET /v1/license + +Returns the license details, including license string, expiration, +and supported features. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_license]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_license" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/license + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-response} + +Returns a JSON object that contains the license details: + +| Name | Type/Value | Description | +|------|------------|-------------| +| license | string | License data | +| cluster_name | string | The cluster name (FQDN) | +| expired | boolean | If the cluster key is expired (`true` or `false`) | +| activation_date | string | The date of the cluster key activation | +| expiration_date | string | The date of the cluster key expiration | +| key | string | License key | +| features | array of strings | Features supported by the cluster | +| owner | string | License owner | +| shards_limit | integer | The total number of shards allowed by the cluster key | +| ram_shards_limit | integer | The number of RAM shards allowed by the cluster key (as of v7.2) | +| ram_shards_in_use | integer | The number of RAM shards in use | +| flash_shards_limit | integer | The number of flash shards (Auto Tiering) allowed by the cluster key (as of v7.2) | +| flash_shards_in_use | integer | The number of flash shards in use | + +#### Example JSON body + +```json +{ + "license": "----- LICENSE START -----\\ndi+iK...KniI9\\n----- LICENSE END -----\\n", + "expired": true, + "activation_date":"2018-12-31T00:00:00Z", + "expiration_date":"2019-12-31T00:00:00Z", + "ram_shards_in_use": 0, + "ram_shards_limit": 300, + "flash_shards_in_use": 0, + "flash_shards_limit": 100, + "shards_limit": 400, + "features": ["bigstore"], + "owner": "Redis", + "cluster_name": "mycluster.local", + "key": "----- LICENSE START -----\\ndi+iK...KniI9\\n----- LICENSE END -----\\n" +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | License is returned in the response body. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | No license is installed. | + +## Update license {#put-license} + + PUT /v1/license + +Validate and install a new license string. + +If you do not provide a valid license, the cluster behaves as if the license was deleted. See [Expired cluster license]({{< relref "/operate/rs/7.4/clusters/configure/license-keys#expired-cluster-license" >}}) for a list of available actions and restrictions. 
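Before installing a new key, you may want to inspect the currently installed license. The following Python sketch reads two of the fields returned by `GET /v1/license`, described above; the host, port, and credentials are placeholders:

```python
import requests

url = "https://[host][:port]/v1/license"
auth = ("[username]", "[password]")

license_info = requests.request("GET", url, auth=auth).json()

# expiration_date and shards_limit are fields of the GET /v1/license response.
print(license_info["expiration_date"], license_info["shards_limit"])
```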
+ +#### Required permissions + +| Permission name | +|-----------------| +| [install_new_license]({{< relref "/operate/rs/7.4/references/rest-api/permissions#install_new_license" >}}) | + +### Request {#put-request} + +The request must be a JSON object with a single key named "license". + +#### Example HTTP request + + PUT /v1/license + +#### Example JSON body + +```json +{ + "license": "----- LICENSE START -----\ndi+iK...KniI9\n----- LICENSE END -----\n" +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Accept | application/json | Accepted media type | + + +#### Request body + +Include a JSON object that contains the new `license` string in the request body. + +### Response {#put-response} + +Returns an error if the new license is not valid. + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | License installed successfully. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Invalid request, either bad JSON object or corrupted license was supplied. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | License violation. A response body provides more details on the specific cause. | +--- +Title: Node debug info requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the Redis Enterprise Software REST API /nodes/debuginfo requests. +headerRange: '[1-2]' +linkTitle: debuginfo +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/nodes/debuginfo/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-debuginfo-all-nodes) | `/v1/nodes/debuginfo` | Get debug info from all nodes | +| [GET](#get-debuginfo-node) | `/v1/nodes/{node_uid}/debuginfo` | Get debug info from a specific node | + +## Get debug info from all nodes {#get-debuginfo-all-nodes} + + GET /v1/nodes/debuginfo + +Downloads a tar file that contains debug info from all nodes. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_debugging_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_debugging_info" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/nodes/debuginfo + +### Response {#get-all-response} + +Downloads the debug info in a tar file called `filename.tar.gz`. Extract the files from the tar file to access the debug info. + +#### Response headers + +| Key | Value | Description | +|-----|-------|-------------| +| Content-Type | application/x-gzip | Media type of request/response body | +| Content-Length | 653350 | Length of the response body in octets | +| Content-Disposition | attachment; filename=debuginfo.tar.gz | Display response in browser or download as attachment | + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Failed to get debug info. | + +## Get node debug info {#get-debuginfo-node} + + GET /v1/nodes/{int: node_uid}/debuginfo + +Downloads a tar file that contains debug info from a specific node. 
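Because the response body is a gzipped tar archive rather than JSON, save it to a file instead of parsing it. A minimal Python sketch that works for either debug info endpoint; the host, port, node UID, and credentials are placeholders:

```python
import requests

url = "https://[host][:port]/v1/nodes/1/debuginfo"
auth = ("[username]", "[password]")

response = requests.request("GET", url, auth=auth)

# Write the gzipped tar archive to disk, then extract it to inspect the debug info.
with open("debuginfo.tar.gz", "wb") as f:
    f.write(response.content)
```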
+ +#### Required permissions + +| Permission name | +|-----------------| +| [view_debugging_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_debugging_info" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/nodes/1/debuginfo + +### Response {#get-response} + +Downloads the debug info in a tar file called `filename.tar.gz`. Extract the files from the tar file to access the debug info. + +#### Response headers + +| Key | Value | Description | +|-----|-------|-------------| +| Content-Type | application/x-gzip | Media type of request/response body | +| Content-Length | 653350 | Length of the response body in octets | +| Content-Disposition | attachment; filename=debuginfo.tar.gz | Display response in browser or download as attachment | + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Failed to get debug info. | +--- +Title: Node status requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Requests that return a node's hostname and role. +headerRange: '[1-2]' +linkTitle: status +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/nodes/status/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-nodes-status) | `/v1/nodes/status` | Get the status of all nodes | +| [GET](#get-node-status) | `/v1/nodes/{uid}/status` | Get a node's status | + +## Get all node statuses {#get-all-nodes-status} + + GET /v1/nodes/status + +Gets the status of all nodes. Includes each node's hostname and role in the cluster: + +- Primary nodes return `"role": "master"` + +- Replica nodes return `"role": "slave"` + +#### Required permissions + +| Permission name | +|-----------------| +| [view_node_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_node_info" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/nodes/status + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-all-response} + +For each node in the cluster, returns a JSON object that contains each node's hostname, role, and other status details. + +If a maintenance snapshot exists due to an in-progress or improperly stopped [node maintenance]({{}}) process, the response includes a `maintenance_snapshot` field. 
+ +#### Example JSON body + +```json +{ + "1": { + "cores": 8, + "free_provisional_ram": 0, + "free_ram": 3499368448, + "hostname": "3d99db1fdf4b", + "maintenance_snapshot": { + "created_time": "2024-09-06 20:47:23", + "name": "maintenance_mode_2024-09-06_20-47-23", + "node_uid": "1" + }, + "master_shards": [], + "node_overbooking_depth": 0, + "node_status": "active", + "role": "master", + "slave_shards": [], + "software_version": "7.4.6-22", + "software_version_sha": "6c37b1483b5fb6110c8055c1526aa58eec1d29d3519e92310859101419248831", + "total_memory": 6219673600, + "total_provisional_ram": 0 + }, + "2": { + "hostname": "fc7a3d332458", + "role": "slave", + // additional fields + }, + "3": { + "hostname": "b87cc06c830f", + "role": "slave", + // additional fields + } +} +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + + +## Get node status {#get-node-status} + + GET /v1/nodes/{int: uid}/status + +Gets the status of a node. Includes the node's hostname and role in the cluster: + +- Primary nodes return `"role": "master"` + +- Replica nodes return `"role": "slave"` + +#### Required permissions + +| Permission name | +|-----------------| +| [view_node_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_node_info" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/nodes/1/status + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The node's unique ID. | + + +### Response {#get-response} + +Returns a JSON object that contains the node's hostname, role, and other status details. + +If a maintenance snapshot exists due to an in-progress or improperly stopped [node maintenance]({{}}) process, the response includes a `maintenance_snapshot` field. + +#### Example JSON body + +```json +{ + "cores": 8, + "free_provisional_ram": 0, + "free_ram": 3504422912, + "hostname": "3d99db1fdf4b", + "maintenance_snapshot": { + "created_time": "2024-09-06 20:47:23", + "name": "maintenance_mode_2024-09-06_20-47-23", + "node_uid": "1" + }, + "master_shards": [], + "node_overbooking_depth": 0, + "node_status": "active", + "role": "master", + "slave_shards": [], + "software_version": "7.4.6-22", + "software_version_sha": "6c37b1483b5fb6110c8055c1526aa58eec1d29d3519e92310859101419248831", + "total_memory": 6219673600, + "total_provisional_ram": 0 +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Node UID does not exist | +--- +Title: Check node requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Requests that run checks on a cluster node. 
+headerRange: '[1-2]' +linkTitle: check +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/nodes/check/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-node-check) | `/v1/nodes/check/{uid}` | Runs checks on a cluster node | + +## Check node {#get-node-check} + + GET /v1/nodes/check/{int: uid} + +Runs the following checks on a cluster node: + +| Check name | Description | +|-----------|-------------| +| bootstrap_status | Verifies the local node's bootstrap process completed without errors. | +| services | Verifies all Redis Enterprise Software services are running. | +| port_range | Verifies the [`ip_local_port_range`](https://www.kernel.org/doc/html/latest/networking/ip-sysctl.html) doesn't conflict with the ports Redis Enterprise might assign to shards. | +| pidfiles | Verifies all active local shards have PID files. | +| capabilities | Verifies all binaries have the proper capability bits. | +| existing_sockets | Verifies sockets exist for all processes that require them. | +| host_settings | Verifies the following:
• Linux `overcommit_memory` setting is 1.
• `transparent_hugepage` is disabled.
• Socket maximum connections setting `somaxconn` is 1024. | +| tcp_connectivity | Verifies this node can connect to all other alive nodes. | + +#### Required permissions + +| Permission name | +|-----------------| +| [view_node_check]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_node_check" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/nodes/check/1 + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The node's unique ID. | + + +### Response {#get-response} + +Returns a JSON object with the node's check results. + +When errors occur, the server returns a JSON object with `result: false` and an `error` field that provides additional information. If an error occurs during a check, the `error` field only includes a message for the first check that fails. + +Possible `error` messages: + +- "bootstrap request to cnm_http failed,resp_code: ...,resp_content: ..." +- "process ... is not running or not responding (...)" +- "could not communicate with 'supervisorctl': ..." +- "connectivity check failed retrieving ports for testing" + +#### Example JSON body + +```json +{ + "node_uid": 1, + "result": true +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error | +--- +Title: Node alerts requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Node alert requests +headerRange: '[1-2]' +linkTitle: alerts +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/nodes/alerts/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-nodes-alerts) | `/v1/nodes/alerts` | Get all alert states for all nodes | +| [GET](#get-node-alerts) | `/v1/nodes/alerts/{uid}` | Get all alert states for a node | +| [GET](#get-node-alert) | `/v1/nodes/alerts/{uid}/{alert}` | Get node alert state | + +## Get all alert states {#get-all-nodes-alerts} + + GET /v1/nodes/alerts + +Get all alert states for all nodes. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_all_nodes_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_nodes_alerts" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/nodes/alerts + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| ignore_settings | boolean | Retrieve updated alert state regardless of the cluster's alert_settings. When not present, a disabled alert will always be retrieved as disabled with a false state. (optional) | + +### Response {#get-all-response} + +Returns a hash of node UIDs and the [alert states]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}) for each node. + +#### Example JSON body + +```json +{ + "1": { + "node_cpu_utilization": { + "change_time": "2014-12-22T10:42:00Z", + "change_value": { + "cpu_util": 2.500000000145519, + "global_threshold": "1", + "state": true + }, + "enabled": true, + "state": true, + "severity": "WARNING" + }, + "..." + }, + "..." 
+} +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +## Get node alert states {#get-node-alerts} + + GET /v1/nodes/alerts/{int: uid} + +Get all alert states for a node. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_node_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_node_alerts" >}}) | + +### Request {#get-request-all-alerts} + +#### Example HTTP request + + GET /v1/nodes/alerts/1 + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| ignore_settings | boolean | Retrieve updated alert state regardless of the cluster's alert_settings. When not present, a disabled alert will always be retrieved as disabled with a false state. (optional) | + +### Response {#get-response-all-alerts} + +Returns a hash of [alert objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}) and their states for a specific node. + +#### Example JSON body + +```json +{ + "node_cpu_utilization": { + "change_time": "2014-12-22T10:42:00Z", + "change_value": { + "cpu_util": 2.500000000145519, + "global_threshold": "1", + "state": true + }, + "enabled": true, + "state": true, + "severity": "WARNING", + }, + "..." +} +``` + +### Status codes {#get-status-codes-all-alerts} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Specified node does not exist | + +## Get node alert state {#get-node-alert} + + GET /v1/nodes/alerts/{int: uid}/{alert} + +Get a node alert state. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_node_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_node_alerts" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/nodes/alerts/1/node_cpu_utilization + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| ignore_settings | boolean | Retrieve updated alert state regardless of the cluster's alert_settings. When not present, a disabled alert will always be retrieved as disabled with a false state. (optional) | + +### Response {#get-response} + +Returns an [alert object]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}). 
+ +#### Example JSON body + +```json +{ + "change_time": "2014-12-22T10:42:00Z", + "change_value": { + "cpu_util": 2.500000000145519, + "global_threshold": "1", + "state": true + }, + "enabled": true, + "state": true, + "severity": "WARNING", +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Specified alert or node does not exist | +--- +Title: Node snapshot requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Node snapshot requests +headerRange: '[1-2]' +linkTitle: snapshots +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/nodes/snapshots/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-snapshots) | `/v1/nodes/{uid}/snapshots` | Get node snapshots | +| [DELETE](#delete-snapshot) | `/v1/nodes/{uid}/snapshots/{snapshot_name}` | Delete a node snapshot | + +## Get node snapshots {#get-snapshots} + +```sh +GET /v1/nodes/{int: uid}/snapshots +``` + +Get all cluster node snapshots of the specified node. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_node_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_node_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/nodes/1/snapshots +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the node requested. | + +### Response {#get-response} + +Returns an array of node snapshot JSON objects. + +#### Example JSON body + +```json +[ + { + "created_time": "2024-01-10 20:55:54", + "name": "nightly_snapshot_1", + "node_uid": "1" + }, + { + "created_time": "2024-01-11 20:55:54", + "name": "nightly_snapshot_2", + "node_uid": "1" + } +] +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Node UID does not exist | + +## Delete node snapshot {#delete-snapshot} + +```sh +DELETE /v1/nodes/{int: uid}/snapshots/{snapshot_name} +``` + +Delete a cluster node snapshot. Snapshots created by maintenance mode are not deleted. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [update_node]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_node" >}}) | admin | + +### Request {#delete-request} + +#### Example HTTP request + +```sh +DELETE /v1/nodes/1/snapshots/nightly_snapshot_19 +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the updated node. | +| snapshot_name | string | The unique name of the snapshot to delete. | + +### Response {#delete-response} + +Returns a JSON object that represents the deleted node snapshot. 
+ +#### Example JSON body + +```json +{ + "created_time": "2024-01-11 20:55:54", + "name": "nightly_snapshot_19", + "node_uid": "1" +} +``` + +#### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [403 Forbidden](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.4) | Node snapshot is a maintenance snapshot and cannot be deleted | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Node uid does not exist | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Snapshot name does not exist for this node uid | +--- +Title: Node actions requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Node action requests +headerRange: '[1-2]' +linkTitle: actions +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/nodes/actions/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-nodes-actions) | `/v1/nodes/actions` | Get status of all actions on all nodes| +| [GET](#get-node-actions) | `/v1/nodes/{node_uid}/actions` | Get status of all actions on a specific node | +| [GET](#get-node-action) | `/v1/nodes/{node_uid}/actions/{action}` | Get status of an action on a specific node | +| [POST](#post-node-action) | `/v1/nodes/{node_uid}/actions/{action}` | Initiate node action | +| [DELETE](#delete-node-action) | `/v1/nodes/{node_uid}/actions/{action}` | Cancel action or remove action status | + +## Get all actions {#get-all-nodes-actions} + +```sh +GET /v1/nodes/actions +``` + +Get the status of all currently executing, pending, or completed +actions on all nodes. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_status_of_all_node_actions]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_status_of_all_node_actions" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-all-request} + +#### Example HTTP request + +```sh +GET /v1/nodes/actions +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-all-response} + +Returns a list of [action objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/action" >}}). + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error, response provides details about an ongoing action. | + +## Get node actions statuses {#get-node-actions} + +```sh +GET /v1/nodes/{node_uid}/actions +``` + +Get the status of all actions on a specific node. + +#### Required permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_status_of_node_action]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_status_of_node_action" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request-all-actions} + +#### Example HTTP request + +```sh +GET /v1/nodes/1/actions +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| action | string | The action to check. | + +### Response {#get-response-all-actions} + +Returns a JSON object that includes a list of [action objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/action" >}}) for the specified node. + +If no actions are available, the response will include an empty array. + +#### Example JSON body + +```json +{ + "actions": [ + { + "name": "remove_node", + "node_uid": "1", + "status": "running", + "progress": 10 + } + ] +} +``` + +### Error codes {#get-error-codes-all-actions} + +| Code | Description | +|------|-------------| +| internal_error | An internal error that cannot be mapped to a more precise error code has been encountered. | +| insufficient_resources | The cluster does not have sufficient resources to complete the required operation. | + +### Status codes {#get-status-codes-all-actions} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error, response provides details about an ongoing action. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Action does not exist (i.e. not currently running and no available status of last run). | + +## Get node action status {#get-node-action} + +```sh +GET /v1/nodes/{node_uid}/actions/{action} +``` + +Get the status of a currently executing, queued, or completed action on a specific node. + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/nodes/1/actions/remove +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-response} + +Returns an [action object]({{< relref "/operate/rs/7.4/references/rest-api/objects/action" >}}) for the specified node. + +### Error codes {#get-error-codes} + +| Code | Description | +|------|-------------| +| internal_error | An internal error that cannot be mapped to a more precise error code has been encountered. | +| insufficient_resources | The cluster does not have sufficient resources to complete the required operation. | + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error, response provides details about an ongoing action. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Action does not exist (i.e. not currently running and no available status of last run). | + +## Initiate node action {#post-node-action} + +```sh +POST /v1/nodes/{node_uid}/actions/{action} +``` + +Initiate a node action. + +The API allows only a single instance of any action type to be +invoked at the same time, and violations of this requirement will +result in a `409 CONFLICT` response. + +The caller is expected to query and process the results of the +previously executed instance of the same action, which will be +removed as soon as the new one is submitted. 
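For example, the following Python sketch queries any leftover status of a previous `remove` action on node 1 before submitting a new one; the host, port, and credentials are placeholders:

```python
import requests
import json

url = "https://[host][:port]/v1/nodes/1/actions/remove"
auth = ("[username]", "[password]")

# Query and process the status left over from the previous run, if any.
previous = requests.request("GET", url, auth=auth)
if previous.status_code == 200:
    print("previous run:", previous.json())

# Submit a new instance of the action; 409 means one is already in progress.
headers = {"Content-Type": "application/json"}
response = requests.request("POST", url, auth=auth, headers=headers, data=json.dumps({}))

print(response.status_code)
```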
+ +#### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [start_node_action]({{< relref "/operate/rs/7.4/references/rest-api/permissions#start_node_action" >}}) | admin | + +### Request {#post-request} + +#### Example HTTP request + +```sh +POST /v1/nodes/1/actions/remove +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| action | string | The name of the action required. | + +Currently supported actions are: + +- `remove`: Removes the node from the cluster after migrating all bound resources to other nodes. As soon as a successful remove request is received, the cluster will no longer automatically migrate resources, such as shards and endpoints, to the node even if the remove task fails at some point. + + - By default, the remove node action completes after all resources migrate off the removed node. Node removal does not wait for migrated shards' persistence files to be created on the new nodes. + + To change node removal to wait for the creation of new persistence files for all migrated shards, set `wait_for_persistence` to `true` in the request body or [update the cluster policy]({{}}) `persistent_node_removal` to `true` to change the cluster's default behavior. + + ```sh + POST /v1/nodes//actions/remove + { + "wait_for_persistence": true + } + ``` + +- `maintenance_on`: Creates a snapshot of the node, migrates shards to other nodes, and prepares the node for maintenance. See [maintenance mode]({{< relref "/operate/rs/7.4/clusters/maintenance-mode" >}}) for more information. + + - As of Redis Enterprise Software version 7.4.2, a new node snapshot is created only if no maintenance mode snapshots already exist or if you set `"overwrite_snapshot": true` in the request body. + + ```sh + POST /v1/nodes/1/actions/maintenance_on + { + "overwrite_snapshot": true + } + ``` + + - If there aren't enough resources to migrate shards out of the maintained node, set `"evict_ha_replica": false` and `"evict_active_active_replica": false` in the request body to keep the replica shards in place but demote any master shards. Use these two parameters instead of `keep_slave_shards`, which is deprecated as of Redis Enterprise Software version 7.4.2. + + ```sh + POST /v1/nodes/1/actions/maintenance_on + { + "overwrite_snapshot": true, + "evict_ha_replica": false, + "evict_active_active_replica": false + } + ``` + + - To specify databases whose shards should be evicted from the node when entering maintenance mode, set `"evict_dbs": ["List of database ID strings"]` in the request body. + + ```sh + POST /v1/nodes/1/actions/maintenance_on + { + "overwrite_snapshot": true, + "evict_dbs": ["1", "3"] + } + ``` + +- `maintenance_off`: Restores node to its previous state before maintenance started. See [maintenance mode]({{< relref "/operate/rs/7.4/clusters/maintenance-mode" >}}) for more information. + + - By default, it uses the latest node snapshot. + + - Use `"snapshot_name":` `"..."` in the request body to restore the state from a specific snapshot. + + - To avoid restoring shards at the node, use `"skip_shards_restore":` `true`. + +- `enslave_node`: Turn node into a replica. + +### Response {#post-response} + +The body content may provide additional action details. 
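Actions such as `remove` or `maintenance_on` can take time to complete, so after initiating one you can poll its status with the GET endpoint described earlier on this page. A minimal Python sketch; the host, port, node UID, action name, and 5-second poll interval are placeholders:

```python
import time
import requests

url = "https://[host][:port]/v1/nodes/1/actions/remove"
auth = ("[username]", "[password]")

# Poll until the action is no longer queued or running,
# or until its status is no longer available.
while True:
    response = requests.request("GET", url, auth=auth)
    if response.status_code != 200:
        break
    action = response.json()
    print(action.get("status"), action.get("progress"))
    if action.get("status") not in ("queued", "running"):
        break
    time.sleep(5)
```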
+ +### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Action initiated successfully. | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Only a single instance of any action type can be invoked at the same time. | + +### Example requests + +#### cURL + +```sh +$ curl -k -X POST -u "[username]:[password]" -d "{}" + https://[host][:port]/v1/nodes/1/actions/remove +``` + +#### Python + +```python +import requests +import json + +url = "https://[host][port]/v1/nodes/1/actions/remove" + +payload = json.dumps({}) +headers = { + 'Content-Type': 'application/json', +} +auth = ("[username]", "[password]") + +response = requests.request("POST", url, auth=auth, headers=headers, data=payload) + +print(response.text) +``` + +## Cancel action {#delete-node-action} + +```sh +DELETE /v1/nodes/{node_uid}/actions/{action} +``` + +Cancel a queued or executing node action, or remove the status of a +previously executed and completed action. + +### Permissions + +| Permission name | +|-----------------| +| [cancel_node_action]({{< relref "/operate/rs/7.4/references/rest-api/permissions#cancel_node_action" >}}) | + +### Request {#delete-request} + +#### Example HTTP request + +```sh +DELETE /v1/nodes/1/actions/remove +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| action | string | The name of the action to cancel. | + +### Response {#delete-response} + +Returns a status code. + +#### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Action will be cancelled when possible. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Action unknown or not currently running. | +--- +Title: Latest node stats requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Most recent node statistics requests +headerRange: '[1-2]' +linkTitle: last +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/nodes/stats/last/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-nodes-stats-last) | `/v1/nodes/stats/last` | Get latest stats for all nodes | +| [GET](#get-node-stats-last) | `/v1/nodes/stats/last/{uid}` | Get latest stats for a single node | + +## Get latest stats for all nodes {#get-all-nodes-stats-last} + +```sh +GET /v1/nodes/stats/last +``` + +Get latest statistics for all nodes. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_all_nodes_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_nodes_stats" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + +```sh +GET /v1/nodes/stats/last?interval=1sec&stime=2015-10-14T06:29:43Z +``` + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week. Default: 1sec. 
(optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-all-response} + +Returns most recent [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for all nodes. + +#### Example JSON body + +```json +{ + "1": { + "conns": 0.0, + "cpu_idle": 0.922500000015134, + "cpu_system": 0.007499999999708962, + "cpu_user": 0.01749999999810825, + "cur_aof_rewrites": 0.0, + "egress_bytes": 7887.0, + "ephemeral_storage_avail": 75821363200.0, + "ephemeral_storage_free": 81189969920.0, + "etime": "2015-10-14T06:29:44Z", + "free_memory": 2956963840.0, + "ingress_bytes": 4950.0, + "interval": "1sec", + "persistent_storage_avail": 75821363200.0, + "persistent_storage_free": 81189969920.0, + "stime": "2015-10-14T06:29:43Z", + "total_req": 0.0 + }, + "2": { + "conns": 0.0, + "cpu_idle": 0.922500000015134, + "// additional fields..." + } +} +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | No nodes exist | + +## Get latest node stats {#get-node-stats-last} + +```sh +GET /v1/nodes/stats/last/{int: uid} +``` + +Get the latest statistics of a node. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_node_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_node_stats" >}}) | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/nodes/stats/last/1?interval=1sec&stime=2015-10-13T09:01:54Z +``` + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the node requested. | + + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week. Default: 1sec. (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)format (optional) | + +### Response {#get-response} + +Returns the most recent [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for the specified node. 
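+
+For reference, here is a minimal sketch, assuming the placeholder host and credentials used in this document's other examples, of requesting the latest stats for node 1 with the optional `interval` and `stime` query parameters and reading a few of the fields shown in the example body below:
+
+```python
+import requests
+
+BASE_URL = "https://[host]:[port]"   # replace the bracketed placeholders
+AUTH = ("[username]", "[password]")
+
+params = {
+    "interval": "1sec",
+    "stime": "2015-10-13T09:01:54Z",
+}
+
+response = requests.get(
+    f"{BASE_URL}/v1/nodes/stats/last/1",
+    params=params,
+    auth=AUTH,
+    verify=False,
+)
+response.raise_for_status()
+
+# The response is keyed by the node uid, as in the example JSON body below.
+stats = response.json()["1"]
+print(stats["free_memory"], stats["cpu_idle"])
+```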
+ +#### Example JSON body + +```json +{ + "1": { + "conns": 0.0, + "cpu_idle": 0.8049999999930151, + "cpu_system": 0.02750000000014552, + "cpu_user": 0.12000000000080036, + "cur_aof_rewrites": 0.0, + "egress_bytes": 2169.0, + "ephemeral_storage_avail": 75920293888.0, + "ephemeral_storage_free": 81288900608.0, + "etime": "2015-10-13T09:01:55Z", + "free_memory": 3285381120.0, + "ingress_bytes": 3020.0, + "interval": "1sec", + "persistent_storage_avail": 75920293888.0, + "persistent_storage_free": 81288900608.0, + "stime": "2015-10-13T09:01:54Z", + "total_req": 0.0 + } +} +``` + +### Error codes {#get-error-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Node does not exist | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Node isn't currently active | +| [503 Service Unavailable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.4) | Mode is in recovery state | +--- +Title: Node stats requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Node statistics requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: stats +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/nodes/stats/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-nodes-stats) | `/v1/nodes/stats` | Get stats for all nodes | +| [GET](#get-node-stats) | `/v1/nodes/stats/{uid}` | Get stats for a single node | + +## Get all nodes stats {#get-all-nodes-stats} + +```sh +GET /v1/nodes/stats +``` + +Get statistics for all nodes. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_all_nodes_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_nodes_stats" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + +```sh +GET /v1/nodes/stats?interval=1hour&stime=2014-08-28T10:00:00Z +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-all-response} + +Returns a JSON array of [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for all nodes. 
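+
+The sketch below shows one way, assuming the same placeholder host and credentials as the other examples, to walk the array of per-node interval samples returned by this endpoint and average `cpu_idle` for each node:
+
+```python
+import requests
+
+BASE_URL = "https://[host]:[port]"   # replace the bracketed placeholders
+AUTH = ("[username]", "[password]")
+
+response = requests.get(
+    f"{BASE_URL}/v1/nodes/stats",
+    params={"interval": "1hour", "stime": "2014-08-28T10:00:00Z"},
+    auth=AUTH,
+    verify=False,
+)
+response.raise_for_status()
+
+for node in response.json():
+    # Each node object contains an "intervals" array of samples.
+    samples = [i["cpu_idle"] for i in node["intervals"] if "cpu_idle" in i]
+    if samples:
+        avg = sum(samples) / len(samples)
+        print(f"node {node['uid']}: average cpu_idle {avg:.3f}")
+```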
+ +#### Example JSON body + +```json +[ + { + "uid": "1", + "intervals": [ + { + "interval": "1sec", + "stime": "2015-05-28T08:40:11Z", + "etime": "2015-05-28T08:40:12Z", + "conns": 0.0, + "cpu_idle": 0.5499999999883585, + "cpu_system": 0.03499999999985448, + "cpu_user": 0.38000000000101863, + "egress_bytes": 0.0, + "ephemeral_storage_avail": 2929315840.0, + "ephemeral_storage_free": 3977830400.0, + "free_memory": 893485056.0, + "ingress_bytes": 0.0, + "persistent_storage_avail": 2929315840.0, + "persistent_storage_free": 3977830400.0, + "total_req": 0.0 + }, + { + "interval": "1sec", + "stime": "2015-05-28T08:40:12Z", + "etime": "2015-05-28T08:40:13Z", + "cpu_idle": 1.2, + "// additional fields..." + } + ] + }, + { + "uid": "2", + "intervals": [ + { + "interval": "1sec", + "stime": "2015-05-28T08:40:11Z", + "etime": "2015-05-28T08:40:12Z", + "conns": 0.0, + "cpu_idle": 0.5499999999883585, + "cpu_system": 0.03499999999985448, + "cpu_user": 0.38000000000101863, + "egress_bytes": 0.0, + "ephemeral_storage_avail": 2929315840.0, + "ephemeral_storage_free": 3977830400.0, + "free_memory": 893485056.0, + "ingress_bytes": 0.0, + "persistent_storage_avail": 2929315840.0, + "persistent_storage_free": 3977830400.0, + "total_req": 0.0 + }, + { + "interval": "1sec", + "stime": "2015-05-28T08:40:12Z", + "etime": "2015-05-28T08:40:13Z", + "cpu_idle": 1.2, + "// additional fields..." + } + ] + } +] +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | No nodes exist | + +## Get node stats {#get-node-stats} + +```sh +GET /v1/nodes/stats/{int: uid} +``` + +Get statistics for a node. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_node_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_node_stats" >}}) | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/nodes/stats/1?interval=1hour&stime=2014-08-28T10:00:00Z +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the node requested. | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-response} + +Returns [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for the specified node. 
+ +#### Example JSON body + +```json +{ + "uid": "1", + "intervals": [ + { + "interval": "1sec", + "stime": "2015-05-28T08:40:11Z", + "etime": "2015-05-28T08:40:12Z", + "conns": 0.0, + "cpu_idle": 0.5499999999883585, + "cpu_system": 0.03499999999985448, + "cpu_user": 0.38000000000101863, + "egress_bytes": 0.0, + "ephemeral_storage_avail": 2929315840.0, + "ephemeral_storage_free": 3977830400.0, + "free_memory": 893485056.0, + "ingress_bytes": 0.0, + "persistent_storage_avail": 2929315840.0, + "persistent_storage_free": 3977830400.0, + "total_req": 0.0 + }, + { + "interval": "1sec", + "stime": "2015-05-28T08:40:12Z", + "etime": "2015-05-28T08:40:13Z", + "cpu_idle": 1.2, + "// additional fields..." + } + ] +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Node does not exist | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Node isn't currently active | +| [503 Service Unavailable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.4) | Node is in recovery state | +--- +Title: Nodes requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Node requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: nodes +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/nodes/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-nodes) | `/v1/nodes` | Get all cluster nodes | +| [GET](#get-node) | `/v1/nodes/{uid}` | Get a single cluster node | +| [PUT](#put-node) | `/v1/nodes/{uid}` | Update a node | + +## Get all nodes {#get-all-nodes} + +```sh +GET /v1/nodes +``` + +Get all cluster nodes. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_all_nodes_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_nodes_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-all-request} + +#### Example HTTP request + +```sh +GET /v1/nodes +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-all-response} + +Returns a JSON array of [node objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/node" >}}). + +#### Example JSON body + +```json +[ + { + "uid": 1, + "status": "active", + "uptime": 262735, + "total_memory": 6260334592, + "software_version": "0.90.0-1", + "ephemeral_storage_size": 20639797248, + "persistent_storage_path": "/var/opt/redislabs/persist", + "persistent_storage_size": 20639797248, + "os_version": "Ubuntu 14.04.2 LTS", + "ephemeral_storage_path": "/var/opt/redislabs/tmp", + "architecture": "x86_64", + "shard_count": 23, + "public_addr": "", + "cores": 4, + "rack_id": "", + "supported_database_versions": [ + { + "db_type": "memcached", + "version": "1.4.17" + }, + { + "db_type": "redis", + "version": "2.6.16" + }, + { + "db_type": "redis", + "version": "2.8.19" + } + ], + "shard_list": [1, 3, 4], + "addr": "10.0.3.61" + }, + { + "uid": 1, + "status": "active", + "// additional fields..." + } +] +``` + +#### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +## Get node {#get-node} + +```sh +GET /v1/nodes/{int: uid} +``` + +Get a single cluster node. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_node_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_node_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/nodes/1 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the node requested. | + +### Response {#get-response} + +Returns a [node object]({{< relref "/operate/rs/7.4/references/rest-api/objects/node" >}}). + +#### Example JSON body + +```json +{ + "uid": 1, + "name": "node:1", + "// additional fields..." +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Node UID does not exist | + +## Update node {#put-node} + +```sh +PUT /v1/nodes/{int: uid} +``` + +Update a [node object]({{< relref "/operate/rs/7.4/references/rest-api/objects/node" >}}). + +Currently, you can edit the following attributes: + +- `addr` + +- `external_addr` + +- `recovery_path` + +- `accept_servers` + +{{}} +You can only update the `addr` attribute for offline nodes. Otherwise, the request returns an error. +{{}} + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [update_node]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_node" >}}) | admin | + +### Request {#put-request} + +#### Example HTTP request + +```sh +PUT /v1/nodes/1 +``` + +#### Example JSON body + +```json +{ + "addr": "10.0.0.1", + "external_addr" : [ + "192.0.2.24" + ] +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | +| Content-Type | application/json | Media type of request/response body | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the updated node. | + + +#### Body + +| Field | Type | Description | +|-------|------|-------------| +| addr | string | Internal IP address of node | +| external_addr | JSON array | External IP addresses of the node | +| recovery_path | string | Path for recovery files | +| accept_servers | boolean | If true, no shards will be created on the node | + +### Response {#put-response} + +If the request is successful, the body will be empty. Otherwise, it may contain a JSON object with an error code and error message. + +#### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error, the request has been processed. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Update request cannot be processed. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad content provided. 
| + +#### Error codes {#put-error-codes} + +| Code | Description | +|------|-------------| +| node_not_found | Node does not exist | +| node_not_offline | Attempted to change node address while it is online | +| node_already_populated | The node contains shards or endpoints, cannot disable accept_servers | +| invalid_oss_cluster_port_mapping | Cannot enable "accept_servers" since there are databases with "oss_cluster_port_mapping" that do not have a port configuration for the current node | +| node_already_has_rack_id | Attempted to change node's rack_id when it already has one | +--- +Title: Redis access control list (ACL) requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis access control list (ACL) requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: redis_acls +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/redis_acls/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-redis_acls) | `/v1/redis_acls` | Get all Redis ACLs | +| [GET](#get-redis_acl) | `/v1/redis_acls/{uid}` | Get a single Redis ACL | +| [PUT](#put-redis_acl) | `/v1/redis_acls/{uid}` | Update a Redis ACL | +| [POST](#post-redis_acl) | `/v1/redis_acls` | Create a new Redis ACL | +| [DELETE](#delete-redis_acl) | `/v1/redis_acls/{uid}` | Delete a Redis ACL | + +## Get all Redis ACLs {#get-all-redis_acls} + +```sh +GET /v1/redis_acls +``` + +Get all Redis ACL objects. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_all_redis_acls_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_redis_acls_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-all-request} + +#### Example HTTP request + +```sh +GET /v1/redis_acls +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-all-response} + +Returns a JSON array of [Redis ACL objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/redis_acl" >}}). + +#### Example JSON body + +```json +[ + { + "uid": 1, + "name": "Full Access", + "acl": "+@all ~*" + }, + { + "uid": 2, + "name": "Read Only", + "acl": "+@read ~*" + }, + { + "uid": 3, + "name": "Not Dangerous", + "acl": "+@all -@dangerous ~*" + }, + { + "uid": 17, + "name": "Geo", + "acl": "~* +@geo" + } +] +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support redis_acl yet. | + +## Get Redis ACL {#get-redis_acl} + +```sh +GET /v1/redis_acls/{int: uid} +``` + +Get a single Redis ACL object. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_redis_acl_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_redis_acl_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/redis_acls/1 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The object's unique ID. | + +### Response {#get-response} + +Returns a [Redis ACL object]({{< relref "/operate/rs/7.4/references/rest-api/objects/redis_acl" >}}). + +#### Example JSON body + +```json +{ + "uid": 17, + "name": "Geo", + "acl": "~* +@geo" +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [403 Forbidden](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.4) | Operation is forbidden. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | redis_acl does not exist. | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support redis_acl yet. | + +## Update Redis ACL {#put-redis_acl} + +```sh +PUT /v1/redis_acls/{int: uid} +``` + +Update an existing Redis ACL object. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [update_redis_acl]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_redis_acl" >}}) | admin | + +### Request {#put-request} + +#### Example HTTP request + +```sh +PUT /v1/redis_acls/17 +``` + +#### Example JSON body + +```json +{ + "acl": "~* +@geo -@dangerous" +} +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Request body + +Include a [Redis ACL object]({{< relref "/operate/rs/7.4/references/rest-api/objects/redis_acl" >}}) with updated fields in the request body. + +### Response {#put-response} + +Returns the updated [Redis ACL object]({{< relref "/operate/rs/7.4/references/rest-api/objects/redis_acl" >}}). + +#### Example JSON body + +```json +{ + "uid": 17, + "name": "Geo", + "acl": "~* +@geo -@dangerous" +} +``` + +### Error codes {#put-error-codes} + +| Code | Description | +|------|-------------| +| unsupported_resource | The cluster is not yet able to handle this resource type. This could happen in a partially upgraded cluster, where some of the nodes are still on a previous version.| +| name_already_exists | An object of the same type and name exists| +| invalid_param | A parameter has an illegal value| + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, redis_acl was updated. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to change a non-existent redis_acl. | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Cannot change a read-only object | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support redis_acl yet. | + +## Create Redis ACL {#post-redis_acl} + +```sh +POST /v1/redis_acls +``` + +Create a new Redis ACL object. 
+ +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [create_redis_acl]({{< relref "/operate/rs/7.4/references/rest-api/permissions#create_redis_acl" >}}) | admin | + +### Request {#post-request} + +#### Example HTTP request + +```sh +POST /v1/redis_acls +``` + +#### Example JSON body + +```json +{ + "name": "Geo", + "acl": "~* +@geo" +} +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Request body + +Include a [Redis ACL object]({{< relref "/operate/rs/7.4/references/rest-api/objects/redis_acl" >}}) in the request body. + +### Response {#post-response} + +Returns the newly created [Redis ACL object]({{< relref "/operate/rs/7.4/references/rest-api/objects/redis_acl" >}}). + +#### Example JSON body + +```json +{ + "uid": 17, + "name": "Geo", + "acl": "~* +@geo" +} +``` + +### Error codes {#post-error-codes} + +Possible `error_code` values: + +| Code | Description | +|------|-------------| +| unsupported_resource | The cluster is not yet able to handle this resource type. This could happen in a partially upgraded cluster, where some of the nodes are still on a previous version. | +| name_already_exists | An object of the same type and name exists | +| missing_field | A needed field is missing | +| invalid_param | A parameter has an illegal value | + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, redis_acl is created. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters. | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support redis_acl yet. | + +### Examples + +#### cURL + +```sh +curl -k -u "[username]:[password]" -X POST \ + -H 'Content-Type: application/json' \ + -d '{ "name": "Geo", "acl": "~* +@geo" }' \ + https://[host][:port]/v1/redis_acls +``` + +#### Python + +```python +import requests +import json + +url = "https://[host][:port]/v1/redis_acls" + +headers = { + 'Content-Type': 'application/json' +} + +payload = json.dumps({ + "name": "Geo", + "acl": "~* +@geo" +}) +auth=("[username]", "[password]") + +response = requests.request("POST", url, + auth=auth, headers=headers, payload=payload, verify=False) + +print(response.text) +``` + +## Delete Redis ACL {#delete-redis_acl} + +```sh +DELETE /v1/redis_acls/{int: uid} +``` + +Delete a Redis ACL object. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [delete_redis_acl]({{< relref "/operate/rs/7.4/references/rest-api/permissions#delete_redis_acl" >}}) | admin | + +### Request {#delete-request} + +#### Example HTTP request + +```sh +DELETE /v1/redis_acls/1 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The redis_acl unique ID. | + +### Response {#delete-response} + +Returns a status code that indicates the Redis ACL deletion success or failure. 
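+
+For reference, the following is a minimal sketch, assuming the same placeholder host and credentials as the other examples in this document, of issuing this request from Python and checking the returned status code:
+
+```python
+import requests
+
+BASE_URL = "https://[host]:[port]"   # replace the bracketed placeholders
+AUTH = ("[username]", "[password]")
+
+acl_uid = 17  # unique ID of the redis_acl to delete
+
+response = requests.delete(
+    f"{BASE_URL}/v1/redis_acls/{acl_uid}",
+    auth=AUTH,
+    verify=False,
+)
+
+if response.status_code == 200:
+    print(f"redis_acl {acl_uid} deleted")
+else:
+    print(f"delete failed: {response.status_code} {response.text}")
+```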
+ +### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, the redis_acl is deleted. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The request is not acceptable. | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Cannot delete a read-only object | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support redis_acl yet. | +--- +Title: Roles requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Roles requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: roles +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/roles/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-roles) | `/v1/roles` | Get all roles | +| [GET](#get-role) | `/v1/roles/{uid}` | Get a single role | +| [PUT](#put-role) | `/v1/roles/{uid}` | Update an existing role | +| [POST](#post-role) | `/v1/roles` | Create a new role | +| [DELETE](#delete-role) | `/v1/roles/{uid}` | Delete a role | + +## Get all roles {#get-all-roles} + +```sh +GET /v1/roles +``` + +Get all roles' details. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_all_roles_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_roles_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-all-request} + +#### Example HTTP request + +```sh +GET /v1/roles +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-all-response} + +Returns a JSON array of [role objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/role" >}}). + +#### Example JSON body + +```json +[ + { + "uid": 1, + "name": "Admin", + "management": "admin" + }, + { + "uid": 2, + "name": "Cluster Member", + "management": "cluster_member" + }, + { + "uid": 3, + "name": "Cluster Viewer", + "management": "cluster_viewer" + }, + { + "uid": 4, + "name": "DB Member", + "management": "db_member" + }, + { + "uid": 5, + "name": "DB Viewer", + "management": "db_viewer" + }, + { + "uid": 6, + "name": "None", + "management": "none" + }, + { + "uid": 17, + "name": "DBA", + "management": "admin" + } +] +``` + +#### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support roles yet. | + +## Get role + +```sh +GET /v1/roles/{int: uid} +``` + +Get the details of a single role. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_role_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_role_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/roles/1 +``` + + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The role's unique ID. | + +### Response {#get-response} + +Returns a [role object]({{< relref "/operate/rs/7.4/references/rest-api/objects/role" >}}). + +#### Example JSON body + +```json +{ + "uid": 17, + "name": "DBA", + "management": "admin" +} +``` + +#### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [403 Forbidden](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.4) | Operation is forbidden. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Role does not exist. | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support roles yet. | + +## Update role {#put-role} + +```sh +PUT /v1/roles/{int: uid} +``` + +Update an existing role's details. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [update_role]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_role" >}}) | admin | + +### Request {#put-request} + +#### Example HTTP request + +```sh +PUT /v1/roles/17 +``` + +#### Example JSON body + +```json +{ + "management": "cluster_member" +} +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Body + +Include a [role object]({{< relref "/operate/rs/7.4/references/rest-api/objects/role" >}}) with updated fields in the request body. + +### Response {#put-response} + +Returns a [role object]({{< relref "/operate/rs/7.4/references/rest-api/objects/role" >}}) with the updated fields. + +#### Example JSON body + +```json +{ + "uid": 17, + "name": "DBA", + "management": "cluster_member" +} +``` + +### Error codes {#put-error-codes} + +Possible `error_code` values: + +| Code | Description | +|------|-------------| +| unsupported_resource | The cluster is not yet able to handle this resource type. This could happen in a partially upgraded cluster, where some of the nodes are still on a previous version.| +| name_already_exists | An object of the same type and name exists.| +| change_last_admin_role_not_allowed | At least one user with admin role should exist.| + +#### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, role is created. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to change a non-existant role. | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support roles yet. | + +## Create role {#post-role} + +```sh +POST /v1/roles +``` + +Create a new role. 
+ +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [create_role]({{< relref "/operate/rs/7.4/references/rest-api/permissions#create_role" >}}) | admin | + +### Request {#post-request} + +#### Example HTTP request + +```sh +POST /v1/roles +``` + +#### Example JSON body + +```json +{ + "name": "DBA", + "management": "admin" +} +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Body + +Include a [role object]({{< relref "/operate/rs/7.4/references/rest-api/objects/role" >}}) in the request body. + +### Response {#post-response} + +Returns the newly created [role object]({{< relref "/operate/rs/7.4/references/rest-api/objects/role" >}}). + +#### Example JSON body + +```json +{ + "uid": 17, + "name": "DBA", + "management": "admin" +} +``` + +### Error codes {#post-error-codes} + +Possible `error_code`values: + +| Code | Description | +|------|-------------| +| unsupported_resource | The cluster is not yet able to handle this resource type. This could happen in a partially upgraded cluster, where some of the nodes are still on a previous version. | +| name_already_exists | An object of the same type and name exists | +| missing_field | A needed field is missing | + +#### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, role is created. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters. | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support roles yet. | + +### Examples + +#### cURL + +```sh +curl -k -u "[username]:[password]" -X POST \ + -H 'Content-Type: application/json' \ + -d '{ "name": "DBA", "management": "admin" }' \ + https://[host][:port]/v1/roles +``` + +#### Python + +```python +import requests +import json + +url = "https://[host][:port]/v1/roles" + +headers = { + 'Content-Type': 'application/json' +} + +payload = json.dumps({ + "name": "DBA", + "management": "admin" +}) +auth=("[username]", "[password]") + +response = requests.request("POST", url, + auth=auth, headers=headers, payload=payload, verify=False) + +print(response.text) +``` + +## Delete role {#delete-role} + +```sh +DELETE /v1/roles/{int: uid} +``` + +Delete a role object. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [delete_role]({{< relref "/operate/rs/7.4/references/rest-api/permissions#delete_role" >}}) | admin | + +### Request {#delete-request} + +#### Example HTTP request + +```sh +DELETE /v1/roles/1 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The role unique ID. | + +### Response {#delete-response} + +Returns a status code to indicate role deletion success or failure. + +#### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, the role is deleted. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Role does not exist. 
| +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The request is not acceptable. | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support roles yet. | +--- +Title: Suffix requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: DNS suffix requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: suffix +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/suffix/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-suffix) | `/v1/suffix/{name}` | Get a single DNS suffix | + +## Get suffix {#get-suffix} + + GET /v1/suffix/{string: name} + +Get a single DNS suffix. + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/suffix/cluster.fqdn + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| name | string | The unique name of the suffix requested. | + +### Response {#get-response} + +Returns a [suffix object]({{< relref "/operate/rs/7.4/references/rest-api/objects/suffix" >}}). + +#### Example JSON body + +```json +{ + "name": "cluster.fqdn", + "// additional fields..." +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Suffix name does not exist | +--- +Title: Database debug info requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the Redis Enterprise Software REST API /bdbs/debuginfo requests. +headerRange: '[1-2]' +linkTitle: debuginfo +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/debuginfo/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-debuginfo-bdb) | `/v1/bdbs/debuginfo` | Get debug info from all databases | +| [GET](#get-debuginfo-bdb) | `/v1/bdbs/{bdb_uid}/debuginfo` | Get debug info from a specific database | + +## Get debug info from all databases {#get-all-debuginfo-bdb} + + GET /v1/bdbs/debuginfo + +Downloads a tar file that contains debug info from all databases. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_debugging_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_debugging_info" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/bdbs/debuginfo + +### Response {#get-all-response} + +Downloads the debug info in a tar file called `filename.tar.gz`. Extract the files from the tar file to access the debug info. + +#### Response headers + +| Key | Value | Description | +|-----|-------|-------------| +| Content-Type | application/x-gzip | Media type of request/response body | +| Content-Length | 653350 | Length of the response body in octets | +| Content-Disposition | attachment; filename=debuginfo.tar.gz | Display response in browser + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Failed to get debug info. 
| + + +## Get database debug info {#get-debuginfo-bdb} + + GET /v1/bdbs/{int: bdb_uid}/debuginfo + +Downloads a tar file that contains debug info from the database specified by `bdb_uid`. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_debugging_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_debugging_info" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/bdbs/1/debuginfo + +### Response {#get-response} + +Downloads the debug info in a tar file called `filename.tar.gz`. Extract the files from the tar file to access the debug info. + +#### Response headers + +| Key | Value | Description | +|-----|-------|-------------| +| Content-Type | application/x-gzip | Media type of request/response body | +| Content-Length | 653350 | Length of the response body in octets | +| Content-Disposition | attachment; filename=debuginfo.tar.gz | Display response in browser + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Failed to get debug info. | +--- +Title: Replica syncer state requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Replica syncer state requests +headerRange: '[1-2]' +linkTitle: replica +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/syncer_state/replica/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-syncer-state) | `/v1/bdbs/{uid}/syncer_state/replica` | Get a CRDB replica's syncer state | + +## Get replica syncer state {#get-syncer-state} + +```sh +GET /v1/bdbs/{int: uid}/syncer_state/replica +``` + +Get a CRDB replica's syncer state as JSON. + +### Permissions + +| Permission name | Roles | +|-----------------|---------| +| [view_bdb_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs/1/syncer_state/replica +``` + +#### Headers + +| Key | Value | +|-----|-------| +| Host | The domain name or IP of the cluster. | +| Accept | application/json | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database requested. | + +### Response {#get-response} + +Returns a JSON object that represents the syncer state. + +#### Example JSON body + +```json +{ + "DB": 22, + "RunID": 1584086516, + // additional fields... +} +``` + +#### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | OK | +| [404 Not Found](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Syncer state key does not exist | +| [500 Internal Server Error](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Internal error | +| [503 Service Unavailable](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.4) | Redis connection error, service unavailable | +--- +Title: CRDT syncer state requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: CRDT syncer state requests +headerRange: '[1-2]' +linkTitle: crdt +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/syncer_state/crdt/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-syncer-state) | `/v1/bdbs/{uid}/syncer_state/crdt` | Get a CRDB's syncer state | + +## Get CRDB syncer state {#get-syncer-state} + +```sh +GET /v1/bdbs/{int: uid}/syncer_state/crdt +``` + +Get a CRDB's syncer state as JSON. + +### Permissions + +| Permission name | Roles | +|-----------------|---------| +| [view_bdb_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs/1/syncer_state/crdt +``` + +#### Headers + +| Key | Value | +|-----|-------| +| Host | The domain name or IP of the cluster. | +| Accept | application/json | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database requested. | + +### Response {#get-response} + +Returns a JSON object that represents the syncer state. + +#### Example JSON body + +```json +{ + "DB": 22, + "RunID": 1584086516, + // additional fields... +} +``` + +#### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | OK | +| [404 Not Found](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Syncer state key does not exist | +| [500 Internal Server Error](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Internal error | +| [503 Service Unavailable](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.4) | Redis connection error, service unavailable | +--- +Title: Syncer state requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Syncer state requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: syncer_state +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/syncer_state/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-syncer-state) | `/v1/bdbs/{uid}/syncer_state` | Get a CRDB's syncer state | + +## Get syncer state {#get-syncer-state} + +```sh +GET /v1/bdbs/{int: uid}/syncer_state +``` + +Get a CRDB's syncer state as JSON. + +{{}} +This endpoint is deprecated as of Redis Enterprise Software version 7.2.4 and will be removed in a future release. Use [`/v1/bdbs//syncer_state/crdt`]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/syncer_state/crdt" >}}) instead. +{{}} + +### Permissions + +| Permission name | Roles | +|-----------------|---------| +| [view_bdb_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs/1/syncer_state +``` + +#### Headers + +| Key | Value | +|-----|-------| +| Host | The domain name or IP of the cluster. | +| Accept | application/json | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database requested. | + +### Response {#get-response} + +Returns a JSON object that represents the syncer state. + +#### Example JSON body + +```json +{ + "DB": 22, + "RunID": 1584086516, + // additional fields... +} +``` + +#### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | OK | +| [404 Not Found](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Syncer state key does not exist | +| [500 Internal Server Error](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Internal error | +| [503 Service Unavailable](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.4) | Redis connection error, service unavailable | +--- +Title: Database replica sources alerts requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Replica source alert requests +headerRange: '[1-2]' +linkTitle: replica_sources/alerts +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/replica_sources-alerts/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-bdbs-replica-sources-alerts) | `/v1/bdbs/replica_sources/alerts` | Get all replica sources alert states for all BDBs | +| [GET](#get-bdbs-replica-sources-alerts) | `/v1/bdbs/replica_sources/alerts/{uid}` | Get all replica sources alert states for a BDB | +| [GET](#get-bdbs-replica_source-all-alerts) | `/v1/bdbs/replica_sources/alerts/{uid}/{replica_src_id}` | Get all alert states for a replica source | +| [GET](#get-bdbs-replica-source-alert) | `/v1/bdbs/replica_sources/alerts/{uid}/{replica_src_id}/{alert}` | Get a replica source alert state | + +## Get all DBs replica sources alert states {#get-all-bdbs-replica-sources-alerts} + + GET /v1/bdbs/replica_sources/alerts + +Get all alert states for all replica sources of all BDBs. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_all_bdbs_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_bdbs_alerts" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/bdbs/replica_sources/alerts + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-all-response} + +Returns a hash of alert UIDs and the alerts states for each BDB. + +See [REST API alerts overview]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}) for a description of the alert state object. + +#### Example JSON body + +```json +{ + "1": { + "replica_src_syncer_connection_error": { + "enabled": true, + "state": true, + "threshold": "80", + "change_time": "2014-08-29T11:19:49Z", + "severity": "WARNING", + "change_value": { + "state": true, + "threshold": "80", + "memory_util": 81.2 + } + }, + "..." + }, + "..." 
+} +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +## Get DB replica source alert states {#get-bdbs-replica-sources-alerts} + + GET /v1/bdbs/replica_sources/alerts/{int: uid} + +Get all alert states for all replica sources of a specific bdb. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_bdb_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_alerts" >}}) | + +### Request {#get-request-all-replica-alerts} + +#### Example HTTP request + + GET /v1/bdbs/replica_sources/alerts/1 + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database | + +### Response {#get-response-all-replica-alerts} + +Returns a hash of [alert objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}) and their states. + +#### Example JSON body + +```json +{ + "replica_src_syncer_connection_error": { + "enabled": true, + "state": true, + "threshold": "80", + "severity": "WARNING", + "change_time": "2014-08-29T11:19:49Z", + "change_value": { + "state": true, + "threshold": "80", + "memory_util": 81.2 + } + }, + "..." +} +``` + +### Status codes {#get-status-codes-all-replica-alerts} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Specified bdb does not exist | + +## Get replica source alert states {#get-bdbs-replica_source-all-alerts} + + GET /v1/bdbs/replica_sources/alerts/{int: uid}/{int: replica_src_id} + +Get all alert states for a specific replica source of a bdb. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_bdb_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_alerts" >}}) | + +### Request {#get-request-replica-alerts} + +#### Example HTTP request + + GET /v1/bdbs/replica_sources/alerts/1/2 + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database | +| replica_src_id | integer | The ID of the replica source in this BDB | + +### Response {#get-response-replica-alerts} + +Returns a hash of [alert objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}) and their states. + +#### Example JSON body + +```json +{ + "replica_src_syncer_connection_error": { + "enabled": true, + "state": true, + "threshold": "80", + "severity": "WARNING", + "change_time": "2014-08-29T11:19:49Z", + "change_value": { + "state": true, + "threshold": "80", + "memory_util": 81.2 + } + }, + "..." 
+} +``` + +### Status codes {#get-status-codes-replica-alerts} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Specified bdb does not exist | + +## Get replica source alert state {#get-bdbs-replica-source-alert} + + GET /v1/bdbs/replica_sources/alerts/{int: uid}/{int: replica_src_id}/{alert} + +Get a replica source alert state of a specific bdb. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_bdb_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_alerts" >}}) | + +### Request {#get-request-alert} + +#### Example HTTP request + + GET /v1/bdbs/replica_sources/alerts/1/2/replica_src_syncer_connection_error + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database | +| replica_src_id | integer | The ID of the replica source in this BDB | +| alert | string | The alert name | + +### Response {#get-response-alert} + +Returns an [alert state object]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}). + +#### Example JSON body + +```json +{ + "enabled": true, + "state": true, + "threshold": "80", + "severity": "WARNING", + "change_time": "2014-08-29T11:19:49Z", + "change_value": { + "state": true, + "threshold": "80", + "memory_util": 81.2 + } +} +``` + +### Status codes {#get-status-codes-alert} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad request | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Specified alert or bdb does not exist | +--- +Title: Database shards requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: REST API requests for database shards +headerRange: '[1-2]' +linkTitle: shards +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/shards/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-bdb-shards) | `/v1/bdbs/{bdb_uid}/shards` | Get the status of a database's shards | + +## Get database shards {#get-bdb-shards} + + GET /v1/bdbs/{int: bdb_uid}/shards + +Gets the status for all shards that belong to the specified database. + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/bdbs/1/shards?extra_info_keys=used_memory_rss&extra_info_keys=connected_clients + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| bdb_uid | integer | The unique ID of the database. | + +### Response {#get-response} + +The response body contains a JSON array with all shards, represented as [shard objects]({{}}). 
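+
+As an illustration, the following is a minimal sketch, assuming the placeholder host and credentials used elsewhere in these examples, that requests the shards of database 1 with the `extra_info_keys` parameters shown in the example request above and prints each shard's node, role, and status:
+
+```python
+import requests
+
+BASE_URL = "https://[host]:[port]"   # replace the bracketed placeholders
+AUTH = ("[username]", "[password]")
+
+response = requests.get(
+    f"{BASE_URL}/v1/bdbs/1/shards",
+    # A list value produces repeated extra_info_keys=... query parameters.
+    params={"extra_info_keys": ["used_memory_rss", "connected_clients"]},
+    auth=AUTH,
+    verify=False,
+)
+response.raise_for_status()
+
+for shard in response.json():
+    print(
+        f"shard {shard['uid']} on node {shard['node_uid']}: "
+        f"{shard['role']}, status {shard['status']}"
+    )
+```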
+ +#### Example JSON body + +```json +[ + { + "uid": "1", + "role": "master", + "assigned_slots": "0-8191", + "bdb_uid": 1, + "detailed_status": "ok", + "loading": { + "status": "idle" + }, + "node_uid": "1", + "redis_info": { + "connected_clients": 14, + "used_memory_rss": 12460032 + }, + "report_timestamp": "2024-09-13T15:28:10Z", + "status": "active" + }, + { + "uid": 2, + "role": "slave", + // additional fields... + } +] +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error. | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | bdb uid does not exist. | +--- +Title: Database passwords requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Database password requests +headerRange: '[1-2]' +linkTitle: passwords +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/passwords/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [PUT](#put-bdbs-password) | `/v1/bdbs/{uid}/passwords` | Update database password | +| [POST](#post-bdbs-password) | `/v1/bdbs/{uid}/passwords` | Add database password | +| [DELETE](#delete-bdbs-password) | `/v1/bdbs/{uid}/passwords` | Delete database password | + +## Update database password {#put-bdbs-password} + + PUT /v1/bdbs/{int: uid}/passwords + +Set a single password for the bdb's default user (i.e., for `AUTH` `` authentications). + +#### Required permissions + +| Permission name | +|-----------------| +| [update_bdb]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_bdb" >}}) | + +### Request {#put-request} + +#### Example HTTP request + + PUT /v1/bdbs/1/passwords + +#### Example JSON body + +```json +{ + "password": "new password" +} +``` + +The above request resets the password of the bdb to ‘new password’. + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database to update the password. | + + +#### Request body + +| Field | Type | Description | +|-------|------|-------------| +| password | string | New password | + +### Response {#put-response} + +Returns a status code that indicates password update success or failure. + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The password was changed. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | A nonexistent database. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Invalid configuration parameters provided. | + +## Add database password {#post-bdbs-password} + + POST /v1/bdbs/{int: uid}/passwords + +Add a password to the bdb's default user (i.e., for `AUTH` `` authentications). + +#### Required permissions + +| Permission name | +|-----------------| +| [update_bdb]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_bdb" >}}) | + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/bdbs/1/passwords + +#### Example JSON body + +```json +{ + "password": "password to add" +} +``` + +The above request adds a password to the bdb. 
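+
+As an illustration, here is a minimal sketch, assuming the same placeholder host and credentials as the other Python examples in this document, of sending this request with the `requests` library:
+
+```python
+import requests
+
+BASE_URL = "https://[host]:[port]"   # replace the bracketed placeholders
+AUTH = ("[username]", "[password]")
+
+response = requests.post(
+    f"{BASE_URL}/v1/bdbs/1/passwords",
+    json={"password": "password to add"},
+    auth=AUTH,
+    verify=False,
+)
+
+print(response.status_code)  # 200 means the password was added
+```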
+ +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database to add password. | + +#### Request body + +| Field | Type | Description | +|-------|------|-------------| +| password | string | Password to add | + +### Response {#post-response} + +Returns a status code that indicates password creation success or failure. + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The password was added. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | A nonexistent database. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Invalid configuration parameters provided. | + +## Delete database password {#delete-bdbs-password} + + DELETE /v1/bdbs/{int: uid}/passwords + +Delete a password from the bdb's default user (i.e., for `AUTH` `` authentications). + +#### Required permissions + +| Permission name | +|-----------------| +| [update_bdb]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_bdb" >}}) | + +### Request {#delete-request} + +#### Example HTTP request + + DELETE /v1/bdbs/1/passwords + +#### Example JSON body + +```json +{ + "password": "password to delete" +} +``` + +The above request deletes a password from the bdb. + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database to delete password. | + +#### Request body + +| Field | Type | Description | +|-------|------|-------------| +| password | string | Password to delete | + +### Response {#delete-response} + +Returns a status code that indicates password deletion success or failure. + +### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The password was deleted. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | A nonexistent database. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Invalid configuration parameters provided. | +--- +Title: CRDB peer stats requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Active-Active peer instance statistics requests +headerRange: '[1-2]' +linkTitle: peer_stats +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/peer_stats/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-bdbs-peer_stats) | `/v1/bdbs/{bdb_uid}/peer_stats` | Get stats for all CRDB peer instances | +| [GET](#get-bdbs-peer_stats) | `/v1/bdbs/{bdb_uid}/peer_stats/{uid}` | Get stats for a specific CRDB peer instance | + +## Get all CRDB peer stats {#get-all-bdbs-peer_stats} + +```sh +GET /v1/bdbs/{bdb_uid}/peer_stats +``` + +Get statistics for all peer instances of a local CRDB instance. 
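
For example, these statistics could be fetched with a request like the following `curl` sketch, which assumes the default REST API port 9443 and basic authentication (the hostname, credentials, and database UID are placeholders):

```sh
# Sketch only: fetch 5-minute interval stats for all peers of CRDB instance 1.
curl -k -u "user@example.com:password" \
  "https://cluster.fqdn:9443/v1/bdbs/1/peer_stats?interval=5min"
```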
+ +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_bdb_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_stats" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-all-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs/1/peer_stats?interval=5min +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| bdb_uid | integer | The unique ID of the local CRDB instance. | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-all-response} + +Returns [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for all CRDB peer instances. + +#### Example JSON body + +```json +{ "peer_stats": [ + { + "intervals": [ + { + "egress_bytes": 0.0, + "egress_bytes_decompressed": 0.0, + "etime": "2017-10-22T19:30:00Z", + "ingress_bytes": 18528, + "ingress_bytes_decompressed": 185992, + "interval": "5min", + "local_ingress_lag_time": 0.244, + "pending_local_writes_max": 0.0, + "pending_local_writes_min": 0.0, + "stime": "2017-10-22T19:25:00Z" + }, + { + "egress_bytes": 0.0, + "egress_bytes_decompressed": 0.0, + "etime": "2017-10-22T19:35:00Z", + "ingress_bytes": 18, + "ingress_bytes_decompressed": 192, + "interval": "5min", + "local_ingress_lag_time": 0.0, + "pending_local_writes_max": 0.0, + "pending_local_writes_min": 0.0, + "stime": "2017-10-22T19:30:00Z" + } + ], + "uid": "3" + } + ] + } +``` + +#### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Database does not exist. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Database is not a CRDB. | + +## Get CRDB peer stats {#get-bdbs-peer_stats} + +```sh +GET /v1/bdbs/{bdb_uid}/peer_stats/{int: uid} +``` + +Get statistics for a specific CRDB peer instance. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_bdb_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_stats" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs/1/peer_stats/3?interval=5min +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| bdb_uid | integer | The unique ID of the local CRDB instance. | +| uid | integer | The peer instance uid, as specified in the CRDB instance list. | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-response} + +Returns [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for a specific CRDB peer instance. + +#### Example JSON body + +```json +{ + "intervals": [ + { + "egress_bytes": 0.0, + "egress_bytes_decompressed": 0.0, + "etime": "2017-10-22T19:30:00Z", + "ingress_bytes": 18528, + "ingress_bytes_decompressed": 185992, + "interval": "5min", + "local_ingress_lag_time": 0.244, + "pending_local_writes_max": 0.0, + "pending_local_writes_min": 0.0, + "stime": "2017-10-22T19:25:00Z" + }, + { + "egress_bytes": 0.0, + "egress_bytes_decompressed": 0.0, + "etime": "2017-10-22T19:35:00Z", + "ingress_bytes": 18, + "ingress_bytes_decompressed": 192, + "interval": "5min", + "local_ingress_lag_time": 0.0, + "pending_local_writes_max": 0.0, + "pending_local_writes_min": 0.0, + "stime": "2017-10-22T19:30:00Z" + } + ], + "uid": "3" +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Database or peer does not exist. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Database is not a CRDB. | +--- +Title: Database alerts requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Database alert requests +headerRange: '[1-2]' +linkTitle: alerts +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/alerts/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-bdbs-alerts) | `/v1/bdbs/alerts` | Get all alert states for all databases | +| [GET](#get-bdbs-alerts) | `/v1/bdbs/alerts/{uid}` | Get all alert states for a specific database | +| [GET](#get-bdbs-alert) | `/v1/bdbs/alerts/{uid}/{alert}` | Get a specific database alert state | +| [POST](#post-bdbs-alerts) | `/v1/bdbs/alerts/{uid}` | Update a database’s alerts configuration | + +## Get all database alerts {#get-all-bdbs-alerts} + + GET /v1/bdbs/alerts + +Get all alert states for all databases. 
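
A quick way to check every database's alerts is a request like this `curl` sketch, assuming the default REST API port 9443 and basic authentication (hostname and credentials are placeholders):

```sh
# Sketch only: list alert states for every database in the cluster.
curl -k -u "user@example.com:password" \
  "https://cluster.fqdn:9443/v1/bdbs/alerts"
```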
+ +#### Required permissions + +| Permission name | +|-----------------| +| [view_all_bdbs_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_bdbs_alerts" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/bdbs/alerts + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-all-response} + +Returns a hash of alert UIDs and the [alerts]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}) states for each database. + +#### Example JSON body + +```json +{ + "1": { + "bdb_size": { + "enabled": true, + "state": true, + "threshold": "80", + "change_time": "2014-08-29T11:19:49Z", + "severity": "WARNING", + "change_value": { + "state": true, + "threshold": "80", + "memory_util": 81.2 + } + }, + "..." + }, + "..." +} +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +## Get database alerts {#get-bdbs-alerts} + + GET /v1/bdbs/alerts/{int: uid} + +Get all alert states for a database. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_bdb_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_alerts" >}}) | + +### Request {#get-request-alerts} + +#### Example HTTP request + + GET /v1/bdbs/alerts/1 + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-response-alerts} + +Returns a hash of [alert objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}) and their states. + +#### Example JSON body + +```json +{ + "bdb_size": { + "enabled": true, + "state": true, + "threshold": "80", + "severity": "WARNING", + "change_time": "2014-08-29T11:19:49Z", + "change_value": { + "state": true, + "threshold": "80", + "memory_util": 81.2 + } + }, + "..." +} +``` + +### Status codes {#get-status-codes-alerts} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Specified bdb does not exist | + +## Get database alert {#get-bdbs-alert} + + GET /v1/bdbs/alerts/{int: uid}/{alert} + +Get a database alert state. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_bdb_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_alerts" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/bdbs/alerts/1/bdb_size + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database | +| alert | string | The alert name | + +### Response {#get-response} + +Returns an [alert object]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}). 
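
An alert state like the example below could be retrieved with a request along the lines of the following sketch, assuming the default REST API port 9443 and basic authentication (the hostname, credentials, database UID, and alert name are placeholders):

```sh
# Sketch only: fetch the bdb_size alert state for database 1.
curl -k -u "user@example.com:password" \
  "https://cluster.fqdn:9443/v1/bdbs/alerts/1/bdb_size"
```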
+ +#### Example JSON body + +```json +{ + "enabled": true, + "state": true, + "threshold": "80", + "severity": "WARNING", + "change_time": "2014-08-29T11:19:49Z", + "change_value": { + "state": true, + "threshold": "80", + "memory_util": 81.2 + } +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad request | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Specified alert or bdb does not exist | + +## Update database alert {#post-bdbs-alerts} + + POST /v1/bdbs/alerts/{int: uid} + +Updates a database's alerts configuration. + +#### Required permissions + +| Permission name | +|-----------------| +| [update_bdb_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_bdb_alerts" >}}) | + +### Request {#post-request} + +If passed with the dry_run URL query string, the function will validate the alert thresholds, but not commit them. + +#### Example HTTP request + + POST /v1/bdbs/alerts/1 + +#### Example JSON body + +```json +{ + "bdb_size":{ + "threshold":"80", + "enabled":true + }, + "bdb_high_syncer_lag":{ + "threshold":"", + "enabled":false + }, + "bdb_low_throughput":{ + "threshold":"1", + "enabled":true + }, + "bdb_high_latency":{ + "threshold":"3000", + "enabled":true + }, + "bdb_high_throughput":{ + "threshold":"1", + "enabled":true + }, + "bdb_backup_delayed":{ + "threshold":"1800", + "enabled":true + } +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | Database ID | +| dry_run | string | Validate the alert thresholds but do not apply them | + +#### Request body + +The request must contain a single JSON object with one or many database [alert objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}). + +### Response {#post-response} + +The response includes the updated database [alerts]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}). + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Specified database was not found. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Invalid configuration parameters provided. | +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, database alerts updated. 
|
---
Title: Import reset status database action requests
alwaysopen: false
categories:
- docs
- operate
- rs
description: Reset database import status requests
headerRange: '[1-2]'
linkTitle: import_reset_status
weight: $weight
url: '/operate/rs/7.4/references/rest-api/requests/bdbs/actions/import_reset_status/'
---

| Method | Path | Description |
|--------|------|-------------|
| [PUT](#put-bdbs-actions-import-reset-status) | `/v1/bdbs/{uid}/actions/import_reset_status` | Reset database import status |

## Reset database import status {#put-bdbs-actions-import-reset-status}

    PUT /v1/bdbs/{int: uid}/actions/import_reset_status

Resets the database's `import_status` to idle if an import is not in progress and clears the value of the `import_failure_reason` field.

### Permissions

| Permission name | Roles |
|-----------------|-------|
| [reset_bdb_current_import_status]({{< relref "/operate/rs/7.4/references/rest-api/permissions#reset_bdb_current_import_status" >}}) | admin
cluster_member
db_member | + +### Request {#put-request} + +#### Example HTTP request + +```sh +PUT /v1/bdbs/1/actions/import_reset_status +``` + + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database | + +### Response {#put-response} + +Returns a status code. + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The request is accepted and is being processed. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to perform an action on a nonexistent database. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Not all the modules loaded to the database support 'backup_restore' capability | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Database is currently busy with another action. In this context, this is a temporary condition and the request should be reattempted later. | +--- +Title: Recover database requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: REST API requests for database recovery +headerRange: '[1-2]' +linkTitle: recover +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/actions/recover/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-bdbs-actions-recover) | `/v1/bdbs/{uid}/actions/recover` | Get database recovery plan | +| [POST](#post-bdbs-actions-recover) | `/v1/bdbs/{uid}/actions/recover` | Recover database | + +## Get recovery plan {#get-bdbs-actions-recover} + +```sh +GET /v1/bdbs/{int: uid}/actions/recover +``` + +Fetches the recovery plan for a database. The recovery plan provides information about the recovery status, such as whether recovery is possible, and details on available files to use for recovery. + +#### Required permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_bdb_recovery_plan]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_recovery_plan" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs/1/actions/recover +``` + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database. | + +### Response {#get-response} + +Returns a JSON object that represents the database's recovery plan, including recovery files and status. + +#### Example response body + +```json +{ + "data_files": [ + { + "filename": "appendonly-1.aof", + "last_modified": 1721164863.8883622, + "node_uid": "1", + "shard_role": "master", + "shard_slots": "1-2048", + "shard_uid": "1", + "size": 88 + }, + { + "filename": "appendonly-2.aof", + "last_modified": 1721164863.8883622, + "node_uid": "2", + "shard_role": "slave", + "shard_slots": "2049-4096", + "shard_uid": "2", + "size": 88 + } + ], + "status": "ready" +} +``` + +#### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error. | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | Database UID does not exist. | + +## Recover database {#post-bdbs-actions-recover} + +```sh +POST /v1/bdbs/{int: uid}/actions/recover +``` + +Initiates [recovery for a database]({{}}) in a recoverable state where all the database's files are available after [cluster recovery]({{}}). + +#### Required permissions + +| Permission name | Roles | +|-----------------|-------| +| [start_bdb_recovery]({{< relref "/operate/rs/7.4/references/rest-api/permissions#start_bdb_recovery" >}}) | admin
cluster_member
db_member | + +### Request {#post-request} + +The request body can either be empty or include a recovery plan. + +If the request body is empty, the database will be recovered automatically: + +- Databases with no persistence are recovered with no data. + +- Persistent files such as AOF or RDB will be loaded from their expected storage locations where replica or primary shards were last active. + +- If persistent files are not found where expected but can be located on other cluster nodes, they will be used. + +#### Example HTTP request + +```sh +POST /v1/bdbs/1/actions/recover +``` + +#### Example request body + +```json +{ + "data_files": [ + { + "filename": "appendonly-1.aof", + "node_uid": "1", + "shard_slots": "1-2048" + }, + { + "filename": "appendonly-2.aof", + "node_uid": "2", + "shard_slots": "2049-4096" + } + ], + "ignore_errors": false, + "recover_without_data": false +} +``` + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database to recover. | + +### Response {#post-response} + +Returns a status code. Also returns a JSON object with an `action_uid` in the request body if successful. + +#### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | The request is accepted and is being processed. When the database is recovered, its status will become `active`. | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | Attempting to perform an action on a nonexistent database. | +| [409 Conflict](https://www.rfc-editor.org/rfc/rfc9110.html#name-409-conflict) | Database is currently busy with another action, recovery is already in progress, or is not in a recoverable state. | +--- +Title: Export resets status database action requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Reset database export status requests +headerRange: '[1-2]' +linkTitle: export_reset_status +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/actions/export_reset_status/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [PUT](#put-bdbs-actions-export-reset-status) | `/v1/bdbs/{uid}/actions/export_reset_status` | Reset database export status | + +## Reset database export status {#put-bdbs-actions-export-reset-status} + + PUT /v1/bdbs/{int: uid}/actions/export_reset_status + +Resets the database's `export_status` to idle if an export is not in progress and clears the value of the `export_failure_reason` field. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [reset_bdb_current_export_status]({{< relref "/operate/rs/7.4/references/rest-api/permissions#reset_bdb_current_export_status" >}}) | admin
cluster_member
db_member | + +### Request {#put-request} + +#### Example HTTP request + +```sh +PUT /v1/bdbs/1/actions/export_reset_status +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database | + +### Response {#put-response} + +Returns a status code. + +#### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The request is accepted and is being processed. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to perform an action on a nonexistent database. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Not all the modules loaded to the database support 'backup_restore' capability | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Database is currently busy with another action. In this context, this is a temporary condition and the request should be reattempted later. | +--- +Title: Backup reset status database action requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Reset database backup status requests +headerRange: '[1-2]' +linkTitle: backup_reset_status +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/actions/backup_reset_status/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [PUT](#put-bdbs-actions-backup-reset-status) | `/v1/bdbs/{uid}/actions/backup_reset_status` | Reset database backup status | + +## Reset database backup status {#put-bdbs-actions-backup-reset-status} + +```sh +PUT /v1/bdbs/{int: uid}/actions/backup_reset_status +``` + +Resets the database's `backup_status` to idle if a backup is not in progress and clears the value of the `backup_failure_reason` field. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [reset_bdb_current_backup_status]({{< relref "/operate/rs/7.4/references/rest-api/permissions#reset_bdb_current_backup_status" >}}) | admin
cluster_member
db_member | + +### Request {#put-request} + +#### Example HTTP request + +```sh +PUT /v1/bdbs/1/actions/backup_reset_status +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database | + +### Response {#put-response} + +Returns a status code. + +#### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The request is accepted and is being processed. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to perform an action on a nonexistent database. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Not all the modules loaded to the database support 'backup_restore' capability | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Database is currently busy with another action. In this context, this is a temporary condition and the request should be reattempted later. | +--- +Title: Optimize shards placement database action requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Optimize shard placement requests +headerRange: '[1-2]' +linkTitle: optimize_shards_placement +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/actions/optimize_shards_placement/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-bdbs-actions-optimize-shards-placement) | `/v1/bdbs/{uid}/actions/optimize_shards_placement` | Get optimized shards placement for a database | + + +## Get optimized shards placement {#get-bdbs-actions-optimize-shards-placement} + +```sh +GET /v1/bdbs/{int: uid}/actions/optimize_shards_placement +``` + +Get optimized shards placement for the given database. + +#### Required permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_bdb_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs/1/actions/optimize_shards_placement +``` + +#### Query parameters + +Include query parameters in a `GET` request to generate an optimized shard placement blueprint for a database, using settings that are different from the database's current configuration. + +| Field | Type | Description | +|-------|------|-------------| +| avoid_nodes | list of integers | Comma-separated list of cluster node IDs to avoid when placing the database’s shards and binding its endpoints (for example, `avoid_nodes=1,2`) | +| memory_size | integer (default: 0) | Database memory limit (0 is unlimited), expressed in bytes | +| shards_count | integer, (range: 1-512) (default: 1) | Number of database server-side shards | +| shards_placement | `dense`
`sparse` | Control the density of shards
`dense`: Shards reside on as few nodes as possible
`sparse`: Shards reside on as many nodes as possible | +| bigstore_ram_size | integer (default: 0) | Memory size of bigstore RAM part, expressed in bytes | +| replication | `enabled`
`disabled` | In-memory database replication mode | + +The following example request includes `shards_count` and `memory_size` as query parameters: + +```sh +GET /v1/bdbs/1/actions/optimize_shards_placement?shards_count=10&memory_size=10000 +``` + +### Response {#get-response} + +To rearrange the database shards, you can submit the blueprint returned in this response body as the `shards_blueprint` field in the [`PUT` `/v1/bdbs/{uid}`](#put-bdbs-rearrange-shards) request. + +#### Example JSON body + +```json +[ + { + "nodes": [ + { + "node_uid": "3", + "role": "master" + }, + { + "node_uid": "1", + "role": "slave" + } + ], + "slot_range": "5461-10922" + }, + { + "nodes": [ + { + "node_uid": "3", + "role": "master" + }, + { + "node_uid": "1", + "role": "slave" + } + ], + "slot_range": "10923-16383" + }, + { + "nodes": [ + { + "node_uid": "3", + "role": "master" + }, + { + "node_uid": "1", + "role": "slave" + } + ], + "slot_range": "0-5460" + } +] +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Content-Length | 352 | Length of the request body in octets | +| cluster-state-id | 30 | Cluster state ID | + +#### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Database UID does not exist | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Not enough resources in the cluster to host the database | + +## Rearrange database shards {#put-bdbs-rearrange-shards} + +Use the blueprint returned by the [`GET` `/v1/bdbs/{uid}/actions/optimize_shards_placement`]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/actions/optimize_shards_placement#get-bdbs-actions-optimize-shards-placement" >}}) request as the value of the `shards_blueprint` field to rearrange the database shards. + +To ensure that the optimized shard placement is relevant for the current cluster state, pass the `cluster-state-id`, taken from the response header of the `GET` request, in the [`PUT` `/v1/bdbs/{uid}`]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs#put-bdbs" >}}) request headers. + +The cluster will reject the update if its state was changed since the optimal shards placement was obtained. + +### Request + +#### Example HTTP request + +```sh +PUT /v1/bdbs/1 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | +| cluster-state-id | 30 | Cluster state ID | + +#### Example JSON body + +```json +{ + "shards_blueprint": [ + { + "nodes": [ + { + "node_uid": "2", + "role": "master" + } + ], + "slot_range": "0-8191" + }, + "..." + ] +} +``` + +{{}} +If you submit such an optimized blueprint, it may cause strain on the cluster and its resources. Use with caution. 
+{{}} +--- +Title: Import database action requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Import database requests +headerRange: '[1-2]' +linkTitle: import +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/actions/import/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-bdbs-actions-import) | `/v1/bdbs/{uid}/actions/import` | Initiate manual dataset import | + +## Initiate manual dataset import {#post-bdbs-actions-import} + +```sh +POST /v1/bdbs/{int: uid}/actions/import +``` + +Initiate a manual import process. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [start_bdb_import]({{< relref "/operate/rs/7.4/references/rest-api/permissions#start_bdb_import" >}}) | admin
cluster_member
db_member | + +### Request {#post-request} + +#### Example HTTP request + +```sh +POST /v1/bdbs/1/actions/import +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | +| Content-Length | 0 | Length of the request body in octets | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database | + +#### Body + +The request _may_ contain a subset of the [BDB JSON object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}), which includes the following import-related attributes: + +| Field | Type | Description | +|-------|------|-------------| +| dataset_import_sources | array of [dataset_import_sources]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/dataset_import_sources" >}}) objects | Details for the import sources. Call [`GET /v1/jsonschema`]({{< relref "/operate/rs/7.4/references/rest-api/requests/jsonschema#get-jsonschema" >}}) on the bdb object and review the `dataset_import_sources` field to retrieve the object's structure. | +| email_notification | boolean | Enable/disable an email notification on import failure/ completion. (optional) | + +{{}} +Other attributes are not allowed and will cause the request to fail. +{{}} + +##### Example JSON Body + +```json +{ + "dataset_import_sources": [ + { + "type": "url", + "url": "http://..." + }, + { + "type": "url", + "url": "redis://..." + } + ], + "email_notification": true +} +``` + +This request initiates an import process using `dataset_import_sources` values that were previously configured for the database. + +### Response {#post-response} + +Returns a status code. + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The request is accepted and is being processed. In order to monitor progress, the `import_status`, `import_progress`, and `import_failure_reason` attributes can be consulted. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to perform an action on a nonexistent database. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Not all the modules loaded to the database support 'backup_restore' capability. | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Database is currently busy with another action. In this context, this is a temporary condition and the request should be reattempted later. | +--- +Title: Export database action requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Export database requests +headerRange: '[1-2]' +linkTitle: export +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/actions/export/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-bdbs-actions-export) | `/v1/bdbs/{uid}/actions/export` | Initiate database export | + +## Initiate database export {#post-bdbs-actions-export} + +```sh +POST /v1/bdbs/{int: uid}/actions/export +``` + +Initiate a database export. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [start_bdb_export]({{< relref "/operate/rs/7.4/references/rest-api/permissions#start_bdb_export" >}}) | admin
cluster_member
db_member | + +### Request {#post-request} + +#### Example HTTP request + +```sh +POST /v1/bdbs/1/actions/export +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database | + + +#### Body + +The request body should contain a JSON object with the following export parameters: + +| Field | Type | Description | +|-------|------|-------------| +| export_location | [backup_location/export_location]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb/backup_location" >}}) object | Details for the export destination. Call [`GET /v1/jsonschema`]({{< relref "/operate/rs/7.4/references/rest-api/requests/jsonschema#get-jsonschema" >}}) on the bdb object and review the `backup_location` field to retrieve the object's structure. | +| email_notification | boolean | Enable/disable an email notification on export failure/ completion. (optional) | + +##### Example JSON body + +```json +{ + "export_location": { + "type": "url", + "url": "ftp://..." + }, + "email_notification": true +} +``` + +The above request initiates an export operation to the specified location. + +### Response {#post-response} + +Returns a status code. + +#### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The request is accepted and is being processed. In order to monitor progress, the BDB's `export_status`, `export_progress`, and `export_failure_reason` attributes can be consulted. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to perform an action on a nonexistent database. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Not all the modules loaded to the database support 'backup_restore' capability | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Database is currently busy with another action. In this context, this is a temporary condition and the request should be reattempted later. 
| +--- +Title: Database actions requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Database action requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: actions +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/actions/' +--- + +## Backup + +| Method | Path | Description | +|--------|------|-------------| +| [PUT]({{< relref "./backup_reset_status#put-bdbs-actions-backup-reset-status" >}}) | `/v1/bdbs/{uid}/actions/backup_reset_status` | Reset database backup status | + +## Export + +| Method | Path | Description | +|--------|------|-------------| +| [PUT]({{< relref "./export_reset_status#put-bdbs-actions-export-reset-status" >}}) | `/v1/bdbs/{uid}/actions/export_reset_status` | Reset database export status | +| [POST]({{< relref "./export#post-bdbs-actions-export" >}}) | `/v1/bdbs/{uid}/actions/export` | Initiate database export | + +## Import + +| Method | Path | Description | +|--------|------|-------------| +| [PUT]({{< relref "./import_reset_status#put-bdbs-actions-import-reset-status" >}}) | `/v1/bdbs/{uid}/actions/import_reset_status` | Reset database import status | +| [POST]({{< relref "./import#post-bdbs-actions-import" >}}) | `/v1/bdbs/{uid}/actions/import` | Initiate manual dataset import | + +## Optimize shards placement + +| Method | Path | Description | +|--------|------|-------------| +| [GET]({{< relref "./optimize_shards_placement#get-bdbs-actions-optimize-shards-placement" >}}) | `/v1/bdbs/{uid}/actions/optimize_shards_placement` | Get optimized shards placement for a database | + +## Recover + +| Method | Path | Description | +|--------|------|-------------| +| [GET]({{}}) | `/v1/bdbs/{uid}/actions/recover` | Get database recovery plan | +| [POST]({{}}) | `/v1/bdbs/{uid}/actions/recover` | Recover database | +--- +Title: Database upgrade requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Database upgrade requests +headerRange: '[1-2]' +linkTitle: upgrade +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/upgrade/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-bdbs-upgrade) | `/v1/bdbs/{uid}/upgrade` | Upgrade database | + +## Upgrade database {#post-bdbs-upgrade} + + POST /v1/bdbs/{int: uid}/upgrade + +Upgrade a database. 
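
For example, the JSON body shown later in this section could be submitted with a request like this sketch, assuming the default REST API port 9443 and basic authentication (the hostname, credentials, and database UID are placeholders):

```sh
# Sketch only: upgrade database 1, swapping shard roles and refusing to
# discard data if the database is not replicated or persistent.
curl -k -u "admin@example.com:admin-password" \
  -X POST "https://cluster.fqdn:9443/v1/bdbs/1/upgrade" \
  -H "Content-Type: application/json" \
  -d '{ "swap_roles": true, "may_discard_data": false }'
```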
+ +#### Required permissions + +| Permission name | +|-----------------| +| [update_bdb_with_action]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_bdb_with_action" >}}) | + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/bdbs/1/upgrade + +#### Example JSON body + +```json +{ + "swap_roles": true, + "may_discard_data": false +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body + +| Field | Type | Description | +|-------|------|-------------| +| force_restart | boolean | Restart shards even if no version change (default: false) | +| keep_redis_version | boolean | Keep current Redis version (default: false) | +| keep_crdt_protocol_version | boolean | Keep current crdt protocol version (default: false) | +| may_discard_data | boolean | Discard data in a non-replicated, non-persistent bdb (default: false) | +| force_discard | boolean | Discard data even if the bdb is replicated and/or persistent (default: false) | +| preserve_roles | boolean | Preserve shards' master/replica roles (requires an extra failover) (default: false) | +| parallel_shards_upgrade | integer | Max number of shards to upgrade in parallel (default: all) | +| modules | list of modules | List of dicts representing the modules that will be upgraded.

Each dict includes:

• `current_module`: uid of a module to upgrade

• `new_module`: uid of the module we want to upgrade to

• `new_module_args`: args list for the new module (no defaults for the three module-related parameters). +| redis_version | version number | Upgrades the database to the specified Redis version instead of the latest version | +| latest_with_modules | boolean | Upgrades the database to the latest Redis version and latest supported versions of modules available in the cluster | + +### Response {#post-response} + +Returns the upgraded [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}). + +#### Example JSON body + +```json +{ + "uid": 1, + "replication": true, + "data_persistence": "aof", + "// additional fields..." +} +``` + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, bdb upgrade initiated (`action_uid` can be used to track progress) | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Malformed or bad command | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | bdb not found | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | New module version capabilities don't comply with the database configuration | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Internal error | +--- +Title: Database modules config requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure Redis module requests +headerRange: '[1-2]' +linkTitle: config +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/modules/config/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-bdb-modules-config) | `/v1/bdbs/{uid}/modules/config` | Configure module | + +## Configure module {#post-bdb-modules-config} + + POST /v1/bdbs/{string: uid}/modules/config + +Use the module runtime configuration command (if defined) to configure new arguments for the module. + +#### Required permissions + +| Permission name | +|-----------------| +| [edit_bdb_module]({{< relref "/operate/rs/7.4/references/rest-api/permissions#edit_bdb_module" >}}) | + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/bdbs/1/modules/config + +#### Example JSON body + +```json +{ + "modules": [ + { + "module_name": "search", + "module_args": "MINPREFIX 3 MAXEXPANSIONS 1000" + } + ] +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Request body + +| Field | Type | Description | +|-------|------|-------------| +| modules | list of JSON objects | List of modules (module_name) and their new configuration settings (module_args) | +| module_name | `search`
`ReJSON`
`graph`
`timeseries`
`bf` | Module's name | +| module_args | string | Module command line arguments (pattern does not allow special characters &,<,>,”) | + +### Response {#post-response} + +Returns a status code. If an error occurs, the response body may include an error code and message with more details. + +### Error codes {#post-error-codes} + +When errors are reported, the server may return a JSON object with `error_code` and `message` field that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| db_not_exist | Database with given UID doesn't exist in cluster | +| missing_field | "module_name" or "module_args" are not defined in request | +| invalid_schema | JSON object received is not a dict object | +| param_error | "module_args" parameter was not parsed properly | +| module_not_exist | Module with given "module_name" does not exist for the database | + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, module updated on bdb. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | bdb not found. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Module does not support runtime configuration of arguments. | +--- +Title: Database upgrade modules requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Upgrade Redis module requests +headerRange: '[1-2]' +linkTitle: upgrade +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/modules/upgrade/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-bdb-modules-upgrade) | `/v1/bdbs/{uid}/modules/upgrade` | Upgrade module | + +## Upgrade module {#post-bdb-modules-upgrade} + + POST /v1/bdbs/{string: uid}/modules/upgrade + +Upgrades module version on a specific BDB. + +#### Required permissions + +| Permission name | +|-----------------| +| [edit_bdb_module]({{< relref "/operate/rs/7.4/references/rest-api/permissions#edit_bdb_module" >}}) | + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/bdbs/1/modules/upgrade + +#### Example JSON body + +```json +{ + "modules": [ + {"module_name": "ReJson", + "current_semantic_version": "2.2.1", + "new_module": "aa3648d79bd4082d414587c42ea0b234"} + ], + "// Optional fields to fine-tune restart and failover behavior:", + "preserve_roles": true, + "may_discard_data": false +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Request body + +| Field | Type | Description | +|-------|------|-------------| +| modules | list | List of dicts representing the modules that will be upgraded. Each dict must include:

• **current_module**: uid of a module to upgrade

• **new_module**: UID of the module we want to upgrade to

• **new_module_args**: args list for the new module | +| preserve_roles | boolean | Preserve shards’ master/replica roles (optional) | +| may_discard_data | boolean | Discard data in a non-replicated non-persistent bdb (optional) | + +### Response {#post-response} + +Returns the upgraded [module object]({{< relref "/operate/rs/7.4/references/rest-api/objects/module" >}}). + +#### Example JSON body + +```json +{ + "uid": 1, + "name": "name of database #1", + "module_id": "aa3648d79bd4082d414587c42ea0b234", + "module_name": "ReJson", + "semantic_version": "2.2.2" + "// additional fields..." +} +``` + +### Error codes {#post-error-codes} + +When errors are reported, the server may return a JSON object with `error_code` and `message` field that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| missing_module | Module is not present in cluster.| +| module_downgrade_unsupported | Module downgrade is not allowed.| +| redis_incompatible_version | Module min_redis_version is bigger than the current Redis version.| +| redis_pack_incompatible_version | Module min_redis_pack_version is bigger than the current Redis Enterprise version.| +| unsupported_module_capabilities | New version of module does support all the capabilities needed for the database configuration| + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, module updated on bdb. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | bdb or node not found. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The requested configuration is invalid. | +--- +Title: Database modules requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis module requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: modules +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/modules/' +--- + +## Configure module +| Method | Path | Description | +|--------|------|-------------| +| [POST]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/modules/config#post-bdb-modules-config" >}}) | `/v1/bdbs/{uid}/modules/config` | Configure module | + +## Upgrade module +| Method | Path | Description | +|--------|------|-------------| +| [POST]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/modules/upgrade#post-bdb-modules-upgrade" >}}) | `/v1/bdbs/{uid}/modules/upgrade` | Upgrade module | +--- +Title: Database syncer source stats requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Syncer source statistics requests +headerRange: '[1-2]' +linkTitle: sync_source_stats +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/sync_source_stats/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-bdbs-sync_source_stats) | `/v1/bdbs/{bdb_uid}/sync_source_stats` | Get stats for all syncer sources | +| [GET](#get-bdbs-sync_source_stats) | `/v1/bdbs/{bdb_uid}/sync_source_stats/{uid}` | Get stats for a specific syncer instance | + +## Get all syncer source stats {#get-all-bdbs-sync_source_stats} + +```sh +GET /v1/bdbs/{bdb_uid}/sync_source_stats +``` + +Get stats for all syncer sources of a local database. 
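
These statistics can be retrieved with a request along the lines of the following sketch, assuming the default REST API port 9443 and basic authentication (the hostname, credentials, and database UID are placeholders):

```sh
# Sketch only: fetch 5-minute interval stats for all Replica Of sources of database 1.
curl -k -u "user@example.com:password" \
  "https://cluster.fqdn:9443/v1/bdbs/1/sync_source_stats?interval=5min"
```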
+ +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_bdb_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_stats" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-all-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs/1/sync_source_stats?interval=5min +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| bdb_uid | integer | The unique ID of the local database. | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | Optional end time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-all-response} + +Returns [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for all syncer sources. + +#### Example JSON body + +```json +{ "sync_source_stats": [ + { + "intervals": [ + { + "etime": "2017-10-22T19:30:00Z", + "ingress_bytes": 18528, + "ingress_bytes_decompressed": 185992, + "interval": "5min", + "local_ingress_lag_time": 0.244, + "stime": "2017-10-22T19:25:00Z" + }, + { + "etime": "2017-10-22T19:35:00Z", + "ingress_bytes": 18, + "ingress_bytes_decompressed": 192, + "interval": "5min", + "local_ingress_lag_time": 0.0, + "stime": "2017-10-22T19:30:00Z" + } + ], + "uid": "1" + } + ] + } +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Database does not exist. | + +## Get syncer instance stats {#get-bdbs-sync_source_stats} + +```sh +GET /v1/bdbs/{bdb_uid}/sync_source_stats/{int: uid} +``` + +Get stats for a specific syncer (Replica Of) instance. + +#### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_bdb_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_stats" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs/1/sync_source_stats/1?interval=5min +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| bdb_uid | integer | The unique ID of the local database. | +| uid | integer | The sync_source uid. | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) | +| stime | ISO_8601 | Optional start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | Optional end time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-response} + +Returns [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for a specific syncer instance. + +#### Example JSON body + +```json +{ + "intervals": [ + { + "etime": "2017-10-22T19:30:00Z", + "ingress_bytes": 18528, + "ingress_bytes_decompressed": 185992, + "interval": "5min", + "local_ingress_lag_time": 0.244, + "stime": "2017-10-22T19:25:00Z" + }, + { + "etime": "2017-10-22T19:35:00Z", + "ingress_bytes": 18, + "ingress_bytes_decompressed": 192, + "interval": "5min", + "local_ingress_lag_time": 0.0, + "stime": "2017-10-22T19:30:00Z" + } + ], + "uid": "1" +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Database or sync_source do not exist. | +--- +Title: Latest database stats requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Most recent database statistics requests +headerRange: '[1-2]' +linkTitle: last +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/stats/last/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-bdbs-stats-last) | `/v1/bdbs/stats/last` | Get most recent stats for all databases | +| [GET](#get-bdbs-stats-last) | `/v1/bdbs/stats/last/{uid}` | Get most recent stats for a specific database | + +## Get latest stats for all databases {#get-all-bdbs-stats-last} + +```sh +GET /v1/bdbs/stats/last +``` + +Get the most recent statistics for all databases. + +#### Required permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_all_bdb_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_bdb_stats" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-all-request} + +#### Example HTTP request + +1. Without metrics filter (returns all metrics by default) + ``` + GET /v1/bdbs/stats/last + ``` + +2. With metrics filter + ``` + GET /v1/bdbs/stats/last?metrics=no_of_keys,used_memory + ``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| metrics | string | Comma-separated list of metric names for which we want statistics (default is all). (optional) | + +### Response {#get-all-response} + +Returns [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for all databases. + +#### Example JSON body + +1. Without metrics filter (returns all metrics by default) + ```json + { + "1": { + "stime": "2015-05-28T08:06:37Z", + "etime": "2015-05-28T08:06:44Z", + "conns": 0.0, + "egress_bytes": 0.0, + "etime": "2015-05-28T08:06:44Z", + "evicted_objects": 0.0, + "expired_objects": 0.0, + "ingress_bytes": 0.0, + "instantaneous_ops_per_sec": 0.0, + "last_req_time": "1970-01-01T00:00:00Z", + "last_res_time": "1970-01-01T00:00:00Z", + "used_memory": 5651336.0, + "mem_size_lua": 35840.0, + "monitor_sessions_count": 0.0, + "no_of_keys": 0.0, + "other_req": 0.0, + "other_res": 0.0, + "read_hits": 0.0, + "read_misses": 0.0, + "read_req": 0.0, + "read_res": 0.0, + "total_connections_received": 0.0, + "total_req": 0.0, + "total_res": 0.0, + "write_hits": 0.0, + "write_misses": 0.0, + "write_req": 0.0, + "write_res": 0.0 + }, + "2": { + "stime": "2015-05-28T08:06:37Z", + "etime": "2015-05-28T08:06:44Z", + + "// additional fields..." + }, + + "// Additional BDBs..." + } + ``` + +2. With metrics filter + ```json + { + "1": { + "etime": "2015-05-28T08:06:44Z", + "used_memory": 5651576.0, + "no_of_keys": 0.0, + "stime": "2015-05-28T08:06:37Z" + }, + "2": { + "etime": "2015-05-28T08:06:44ZZ", + "used_memory": 5651440.0, + "no_of_keys": 0.0, + "stime": "2015-05-28T08:06:37Z" + }, + + "// Additional BDBs.." + } + ``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | No bdbs exist | + +## Get latest database stats {#get-bdbs-stats-last} + +```sh +GET /v1/bdbs/stats/last/{int: uid} +``` + +Get the most recent statistics for a specific database. + +#### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_bdb_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_stats" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs/stats/last/1?metrics=no_of_keys,used_memory +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the requested BDB. | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| metrics | string | Comma-separated list of metric names for which we want statistics (default is all). (optional) | + +### Response {#get-response} + +Returns the most recent [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for a specific database. + +#### Example JSON body + +```json +{ + "1": { + "etime": "2015-06-23T12:05:08Z", + "used_memory": 5651576.0, + "no_of_keys": 0.0, + "stime": "2015-06-23T12:05:03Z" + } +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | bdb does not exist | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | bdb isn't currently active | +| [503 Service Unavailable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.4) | bdb is in recovery state | +--- +Title: Database stats requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Database statistics requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: stats +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/stats/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-bdbs-stats) | `/v1/bdbs/stats` | Get stats for all databases | +| [GET](#get-bdbs-stats) | `/v1/bdbs/stats/{uid}` | Get stats for a specific database | + +## Get all database stats {#get-all-bdbs-stats} + +```sh +GET /v1/bdbs/stats +``` + +Get statistics for all databases. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_all_bdb_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_bdb_stats" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-all-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs/stats?interval=1hour&stime=2014-08-28T10:00:00Z +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-all-response} + +Returns [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for all databases. + +#### Example JSON body + +```json +[ + { + "uid": "1", + "intervals": [ + { + "interval": "1hour", + "stime": "2015-05-27T12:00:00Z", + "etime": "2015-05-28T12:59:59Z", + "avg_latency": 0.0, + "conns": 0.0, + "egress_bytes": 0.0, + "etime": "2015-05-28T00:00:00Z", + "evicted_objects": 0.0, + "expired_objects": 0.0, + "ingress_bytes": 0.0, + "instantaneous_ops_per_sec": 0.00011973180076628352, + "last_req_time": "1970-01-01T00:00:00Z", + "last_res_time": "1970-01-01T00:00:00Z", + "used_memory": 5656299.362068966, + "mem_size_lua": 35840.0, + "monitor_sessions_count": 0.0, + "no_of_keys": 0.0, + "other_req": 0.0, + "other_res": 0.0, + "read_hits": 0.0, + "read_misses": 0.0, + "read_req": 0.0, + "read_res": 0.0, + "total_connections_received": 0.0, + "total_req": 0.0, + "total_res": 0.0, + "write_hits": 0.0, + "write_misses": 0.0, + "write_req": 0.0, + "write_res": 0.0 + }, + { + "interval": "1hour", + "interval": "1hour", + "stime": "2015-05-27T13:00:00Z", + "etime": "2015-05-28T13:59:59Z", + "avg_latency": 599.08, + "// additional fields..." + } + ] + }, + { + "uid": "2", + "intervals": [ + { + "interval": "1hour", + "stime": "2015-05-27T12:00:00Z", + "etime": "2015-05-28T12:59:59Z", + "avg_latency": 0.0, + "// additional fields..." + }, + { + "interval": "1hour", + "stime": "2015-05-27T13:00:00Z", + "etime": "2015-05-28T13:59:59Z", + + "// additional fields..." + } + ] + } +] +``` + +#### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | No bdbs exist | + +### Example requests + +#### cURL + +```sh +$ curl -k -u "[username]:[password]" -X GET + https://[host][:port]/v1/bdbs/stats?interval=1hour +``` + +#### Python + +```python +import requests + +url = "https://[host][:port]/v1/bdbs/stats?interval=1hour" +auth = ("[username]", "[password]") + +response = requests.request("GET", url, auth=auth) + +print(response.text) +``` + +## Get database stats {#get-bdbs-stats} + +```sh +GET /v1/bdbs/stats/{int: uid} +``` + +Get statistics for a specific database. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_bdb_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_stats" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs/stats/1?interval=1hour&stime=2014-08-28T10:00:00Z +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the BDB requested. | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-response} + +Returns [statistics]({{< relref "/operate/rs/7.4/references/rest-api/objects/statistics" >}}) for a specific database. + +#### Example JSON body + +```json +{ + "uid": "1", + "intervals": [ + { + "interval": "1hour", + "stime": "2015-05-27T12:00:00Z", + "etime": "2015-05-28T12:59:59Z", + "avg_latency": 0.0, + "conns": 0.0, + "egress_bytes": 0.0, + "evicted_objects": 0.0, + "pubsub_channels": 0, + "pubsub_patterns": 0, + "expired_objects": 0.0, + "ingress_bytes": 0.0, + "instantaneous_ops_per_sec": 0.00011973180076628352, + "last_req_time": "1970-01-01T00:00:00Z", + "last_res_time": "1970-01-01T00:00:00Z", + "used_memory": 5656299.362068966, + "mem_size_lua": 35840.0, + "monitor_sessions_count": 0.0, + "no_of_keys": 0.0, + "other_req": 0.0, + "other_res": 0.0, + "read_hits": 0.0, + "read_misses": 0.0, + "read_req": 0.0, + "read_res": 0.0, + "total_connections_received": 0.0, + "total_req": 0.0, + "total_res": 0.0, + "write_hits": 0.0, + "write_misses": 0.0, + "write_req": 0.0, + "write_res": 0.0 + }, + { + "interval": "1hour", + "stime": "2015-05-27T13:00:00Z", + "etime": "2015-05-28T13:59:59Z", + "// additional fields..." 
+ } + ] +} +``` + +#### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | bdb does not exist | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | bdb isn't currently active | +| [503 Service Unavailable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.4) | bdb is in recovery state | +--- +Title: Database requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Database requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: bdbs +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-bdbs) | `/v1/bdbs` | Get all databases | +| [GET](#get-bdbs) | `/v1/bdbs/{uid}` | Get a single database | +| [PUT](#put-bdbs) | `/v1/bdbs/{uid}` | Update database configuration | +| [PUT](#put-bdbs-action) | `/v1/bdbs/{uid}/{action}` | Update database configuration and perform additional action | +| [POST](#post-bdbs-v1) | `/v1/bdbs` | Create a new database | +| [POST](#post-bdbs-v2) | `/v2/bdbs` | Create a new database | +| [DELETE](#delete-bdbs) | `/v1/bdbs/{uid}` | Delete a database | + +## Get all databases {#get-all-bdbs} + +```sh +GET /v1/bdbs +``` + +Get all databases in the cluster. + +### Permissions + +| Permission name | Roles | +|-----------------|---------| +| [view_all_bdbs_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_bdbs_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-all-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs?fields=uid,name +``` + +#### Headers + +| Key | Value | +|-----|-------| +| Host | The domain name or IP of the cluster | +| Accept | application/json | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| fields | string | Comma-separated list of field names to return (by default all fields are returned). (optional) | + +### Response {#get-all-response} + +The response body contains a JSON array with all databases, represented as [BDB objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}). + +#### Body + +```json +[ + { + "uid": 1, + "name": "name of database #1", + "// additional fields..." + }, + { + "uid": 2, + "name": "name of database #2", + "// additional fields..." + } +] +``` + +#### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +### Example requests + +#### cURL + +```sh +$ curl -k -X GET -u "[username]:[password]" \ + -H "accept: application/json" \ + https://[host][:port]/v1/bdbs?fields=uid,name +``` + +#### Python + +```python +import requests +import json + +url = "https://[host][:port]/v1/bdbs?fields=uid,name" +auth = ("[username]", "[password]") + +headers = { + 'Content-Type': 'application/json' +} + +response = requests.request("GET", url, auth=auth, headers=headers) + +print(response.text) +``` + +## Get a database {#get-bdbs} + +```sh +GET /v1/bdbs/{int: uid} +``` + +Get a single database. + +#### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_bdb_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/bdbs/1 +``` + +#### Headers + +| Key | Value | +|-----|-------| +| Host | The domain name or IP of the cluster | +| Accept | application/json | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database requested. | + + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| fields | string | Comma-separated list of field names to return (by default all fields are returned). (optional) | + +### Response {#get-response} + +Returns a [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}). + +#### Example JSON body + +```json +{ + "uid": 1, + "name": "name of database #1", + "// additional fields..." +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Database UID does not exist | + +## Update database configuration {#put-bdbs} + +```sh +PUT /v1/bdbs/{int: uid} +``` +Update the configuration of an active database. + +If called with the `dry_run` URL query string, the function will validate the [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}) against the existing database, but will not invoke the state machine that will update it. + +This is the basic version of the update request. See [Update database and perform action](#put-bdbs-action) to send an update request with an additional action. + +To track this request's progress, poll the [`/actions/` endpoint]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/actions" >}}) with the action_uid returned in the response body. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [update_bdb]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_bdb" >}}) | admin
cluster_member
db_member | + +### Request {#put-request} + +#### Example HTTP request + +```sh +PUT /v1/bdbs/1 +``` + +#### Headers + +| Key | Value | +|-----|-------| +| Host | The domain name or IP of the cluster | +| Accept | application/json | +| Content-type | application/json | + +#### Query parameters + +| Field | Type | Description | +|---------|------|---------------| +| dry_run | | Validate the new [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}) but don't apply the update. | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database for which update is requested. | + +#### Body + +Include a [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}) with updated fields in the request body. + +##### Example JSON body + +```json +{ + "replication": true, + "data_persistence": "aof" +} +``` + +The above request attempts to modify a database configuration to enable in-memory data replication and append-only file data persistence. + +### Response {#put-response} + +Returns the updated [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}). + +#### Example JSON body + +```json +{ + "uid": 1, + "replication": true, + "data_persistence": "aof", + "// additional fields..." +} +``` + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The request is accepted and is being processed. The database state will be 'active-change-pending' until the request has been fully processed. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to change a nonexistent database. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The requested configuration is invalid. | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Attempting to change a database while it is busy with another configuration change. In this context, this is a temporary condition, and the request should be reattempted later. | + +#### Error codes {#put-error-codes} + +When errors are reported, the server may return a JSON object with `error_code` and `message` field that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| rack_awareness_violation | • Non rack-aware cluster.
• Not enough nodes in unique racks. | +| invalid_certificate | SSL client certificate is missing or malformed.| +| certificate_expired | SSL client certificate has expired. | +| duplicated_certs | An SSL client certificate appears more than once. | +| insufficient_resources | Shards count exceeds shards limit per bdb. | +| not_supported_action_on_crdt | `reset_admin_pass` action is not allowed on CRDT enabled BDB. | +| name_violation | CRDT database name cannot be changed. | +| bad_shards_blueprint | The sharding blueprint is broken or doesn’t fit the BDB. | +| replication_violation | CRDT database must use replication. | +| eviction_policy_violation | LFU eviction policy is not supported on bdb version<4 | +| replication_node_violation | Not enough nodes for replication. | +| replication_size_violation | Database limit too small for replication. | +| invalid_oss_cluster_configuration | BDB configuration does not meet the requirements for OSS cluster mode | +| missing_backup_interval | BDB backup is enabled but backup interval is missing. | +| crdt_sharding_violation | CRDB created without sharding cannot be changed to use sharding +| invalid_proxy_policy | Invalid proxy_policy value. | +| invalid_bdb_tags | Tag objects with the same key parameter were passed. | +| unsupported_module_capabilities | Not all modules configured for the database support the capabilities needed for the database configuration. | +| redis_acl_unsupported | Redis ACL is not supported for this database. | + +## Update database and perform action {#put-bdbs-action} + +```sh +PUT /v1/bdbs/{int: uid}/{action} +``` +Update the configuration of an active database and perform an additional action. + +If called with the `dry_run` URL query string, the function will validate the [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}) against the existing database, but will not invoke the state machine that will update it. + +#### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [update_bdb_with_action]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_bdb_with_action" >}}) | admin
cluster_member
db_member | + +### Request {#put-request-action} + +#### Example HTTP request + +```sh +PUT /v1/bdbs/1/reset_admin_pass +``` +The above request resets the admin password after updating the database. + +#### Headers + +| Key | Value | +|-----|-------| +| Host | The domain name or IP of the cluster | +| Accept | application/json | +| Content-type | application/json | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database to update. | +| action | string | Additional action to perform. Currently supported actions are: `flush`, `reset_admin_pass`. | + +#### Query parameters + +| Field | Type | Description | +|---------|------|---------------| +| dry_run | | Validate the new [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}) but don't apply the update. | + +#### Body + +Include a [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}) with updated fields in the request body. + +##### Example JSON body + +```json +{ + "replication": true, + "data_persistence": "aof" +} +``` + +The above request attempts to modify a database configuration to enable in-memory data replication and append-only file data persistence. + +{{}} +To change the shard hashing policy, you must flush all keys from the database. +{{}} + +### Response {#put-response-action} + +If the request succeeds, the response body returns the updated [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}). If an error occurs, the response body may include an error code and message with more details. + +#### Status codes {#put-status-codes-action} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The request is accepted and is being processed. The database state will be 'active-change-pending' until the request has been fully processed. | +| [403 Forbidden](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.4) | redislabs license expired. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to change a nonexistent database. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The requested configuration is invalid. | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Attempting to change a database while it is busy with another configuration change. In this context, this is a temporary condition, and the request should be reattempted later. | + +#### Error codes {#put-error-codes-action} + +When errors are reported, the server may return a JSON object with `error_code` and `message` field that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| rack_awareness_violation | • Non rack-aware cluster.
• Not enough nodes in unique racks. | +| invalid_certificate | SSL client certificate is missing or malformed.| +| certificate_expired | SSL client certificate has expired. | +| duplicated_certs | An SSL client certificate appears more than once. | +| insufficient_resources | Shards count exceeds shards limit per bdb. | +| not_supported_action_on_crdt | `reset_admin_pass` action is not allowed on CRDT enabled BDB. | +| name_violation | CRDT database name cannot be changed. | +| bad_shards_blueprint | The sharding blueprint is broken or doesn’t fit the BDB. | +| replication_violation | CRDT database must use replication. | +| eviction_policy_violation | LFU eviction policy is not supported on bdb version<4 | +| replication_node_violation | Not enough nodes for replication. | +| replication_size_violation | Database limit too small for replication. | +| invalid_oss_cluster_configuration | BDB configuration does not meet the requirements for OSS cluster mode | +| missing_backup_interval | BDB backup is enabled but backup interval is missing. | +| crdt_sharding_violation | CRDB created without sharding cannot be changed to use sharding +| invalid_proxy_policy | Invalid proxy_policy value. | +| invalid_bdb_tags | Tag objects with the same key parameter were passed. | +| unsupported_module_capabilities | Not all modules configured for the database support the capabilities needed for the database configuration. | +| redis_acl_unsupported | Redis ACL is not supported for this database. | + +## Create database v1 {#post-bdbs-v1} + +```sh +POST /v1/bdbs +``` +Create a new database in the cluster. + +The request must contain a single JSON [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}) with the configuration parameters for the new database. + +The following parameters are required to create the database: + +| Parameter | Type/Value | Description | +|----------|------------|-------------| +| name | string | Name of the new database | +| memory_size | integer | Size of the database, in bytes | + +If passed with the `dry_run` URL query string, the function will validate the [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}), but will not invoke the state machine that will create it. + +To track this request's progress, poll the [`/actions/` endpoint]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/actions" >}}) with the `action_uid` returned in the response body. + +The cluster will use default configuration for any missing database field. The cluster creates a database UID if it is missing. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [create_bdb]({{< relref "/operate/rs/7.4/references/rest-api/permissions#create_bdb" >}}) | admin
cluster_member
db_member | + +### Request {#post-request-v1} + +#### Example HTTP request + +```sh +POST /v1/bdbs +``` + +#### Headers + +| Key | Value | +|-----|-------| +| Host | The domain name or IP of the cluster | +| Accept | application/json | +| Content-type | application/json | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| dry_run | | Validate the new [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}) but don't create the database. | + +#### Body + +Include a [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}) in the request body. + +The following parameters are required to create the database: + +| Paramter | Type/Value | Description | +|----------|------------|-------------| +| name | string | Name of the new database | +| memory_size | integer | Size of the database, in bytes | + +The `uid` of the database is auto-assigned by the cluster because it was not explicitly listed in this request. If you specify the database ID (`uid`), then you must specify the database ID for every subsequent database and make sure that the database ID does not conflict with an existing database. If you do not specify the database ID, then the it is automatically assigned in sequential order. + +Defaults are used for all other configuration parameters. + +#### Example JSON body + +```json +{ + "name": "test-database", + "type": "redis", + "memory_size": 1073741824 +} +``` + +The above request is an attempt to create a Redis database with a user-specified name and a memory limit of 1GB. + +### Response {#post-response-v1} + +The response includes the newly created [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}). + +#### Example JSON body + +```json +{ + "uid": 1, + "name": "test-database", + "type": "redis", + "memory_size": 1073741824, + "// additional fields..." +} +``` + +#### Error codes {#post-error-codes-v1} + +When errors are reported, the server may return a JSON object with `error_code` and `message` field that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| uid_exists | The specified database UID is already in use. | +| missing_db_name | DB name is a required property. | +| missing_memory_size | Memory Size is a required property. | +| missing_module | Modules missing from the cluster. | +| port_unavailable | The specified database port is reserved or already in use. | +| invalid_sharding | Invalid sharding configuration was specified. | +| bad_shards_blueprint | The sharding blueprint is broken. | +| not_rack_aware | Cluster is not rack-aware and a rack-aware database was requested. | +| invalid_version | An invalid database version was requested. | +| busy | The request failed because another request is being processed at the same time on the same database. | +| invalid_data_persistence | Invalid data persistence configuration. | +| invalid_proxy_policy | Invalid proxy_policy value. | +| invalid_sasl_credentials | SASL credentials are missing or invalid. | +| invalid_replication | Not enough nodes to perform replication. | +| insufficient_resources | Not enough resources in cluster to host the database. | +| rack_awareness_violation | • Rack awareness violation.
• Not enough nodes in unique racks. | +| invalid_certificate | SSL client certificate is missing or malformed. | +| certificate_expired | SSL client certificate has expired. | +| duplicated_certs | An SSL client certificate appears more than once. | +| replication_violation | CRDT database must use replication. | +| eviction_policy_violation | LFU eviction policy is not supported on bdb version<4 | +| invalid_oss_cluster_configuration | BDB configuration does not meet the requirements for OSS cluster mode | +| memcached_cannot_use_modules | Cannot create a memcached database with modules. | +| missing_backup_interval | BDB backup is enabled but backup interval is missing. | +| wrong_cluster_state_id | The given CLUSTER-STATE-ID does not match the current one +| invalid_bdb_tags | Tag objects with the same key parameter were passed. | +| unsupported_module_capabilities | Not all modules configured for the database support the capabilities needed for the database configuration. | +| redis_acl_unsupported | Redis ACL is not supported for this database. | + +#### Status codes {#post-status-codes-v1} + +| Code | Description | +|------|-------------| +| [403 Forbidden](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.4) | redislabs license expired. | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Database with the same UID already exists. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Invalid configuration parameters provided. | +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, database is being created. | + +## Create database v2 {#post-bdbs-v2} + +```sh +POST /v2/bdbs +``` +Create a new database in the cluster. See [`POST /v1/bdbs`]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs#post-bdbs-v1" >}}) for more information. + +The database's configuration should be under the "bdb" field. + +This endpoint allows you to specify a recovery_plan to recover a database. If you include a recovery_plan within the request body, the database will be loaded from the persistence files according to the recovery plan. The recovery plan must match the number of shards requested for the database. + +The persistence files must exist in the locations specified by the recovery plan. The persistence files must belong to a database with the same shard settings as the one being created (slot range distribution and shard_key_regex); otherwise, the operation will fail or yield unpredictable results. + +If you create a database with a shards_blueprint and a recovery plan, the shard placement may not fully follow the shards_blueprint. + +### Request {#post-request-v2} + +#### Example HTTP request + +```sh +POST /v2/bdbs +``` + +#### Headers + +| Key | Value | +|-----|-------| +| Host | The domain name or IP of the cluster | +| Accept | application/json | +| Content-type | application/json | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| dry_run | | Validate the new [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}) but don't create the database. | + +#### Body + +Include a JSON object that contains a [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}) and an optional `recovery_plan` object in the request body. 
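+
+As an illustration, a minimal cURL sketch for sending this request might look like the following; the host, port, credentials, and the `create-db.json` payload file are placeholders rather than values defined by this API, and the payload file would contain a JSON body like the example shown below.
+
+```sh
+$ curl -k -X POST -u "[username]:[password]" \
+    -H "Content-Type: application/json" \
+    -d @create-db.json \
+    https://[host][:port]/v2/bdbs
+```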
+ +##### Example JSON body + +```json +{ + "bdb": { + "name": "test-database", + "type": "redis", + "memory_size": 1073741824, + "shards_count": 1 + }, + "recovery_plan": { + "data_files": [ + { + "shard_slots": "0-16383", + "node_uid": "1", + "filename": "redis-4.rdb" + } + ] + } +} +``` + +### Response {#post-response-v2} + +The response includes the newly created [BDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/bdb" >}}). + +#### Example JSON body + +```json +{ + "uid": 1, + "name": "test-database", + "type": "redis", + "memory_size": 1073741824, + "shards_count": 1, + "// additional fields..." +} +``` + +## Delete database {#delete-bdbs} + +```sh +DELETE /v1/bdbs/{int: uid} +``` +Delete an active database. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [delete_bdb]({{< relref "/operate/rs/7.4/references/rest-api/permissions#delete_bdb" >}}) | admin
cluster_member
db_member | + +### Request {#delete-request} + +#### Example HTTP request + +```sh +DELETE /v1/bdbs/1 +``` +#### Headers + +| Key | Value | +|-----|-------| +| Host | The domain name or IP of the cluster | +| Accept | application/json | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database to delete. | + +### Response {#delete-response} + +Returns a status code that indicates the database deletion success or failure. + +### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The request is accepted and is being processed. The database state will be 'delete-pending' until the request has been fully processed. | +| [403 Forbidden](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.4) | Attempting to delete an internal database. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to delete a nonexistent database. | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | Either the database is not in 'active' state and cannot be deleted, or it is busy with another configuration change. In the second case, this is a temporary condition, and the request should be re-attempted later. | +--- +Title: Database CRDT sources alerts requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Conflict-free replicated data type (CRDT) source alert requests +headerRange: '[1-2]' +linkTitle: crdt_sources/alerts +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/bdbs/crdt_sources-alerts/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-bdbs-crdt-sources-alerts) | `/v1/bdbs/crdt_sources/alerts` | Get all CRDT sources alert states for all CRDB databases | +| [GET](#get-bdbs-crdt-sources-alerts) | `/v1/bdbs/crdt_sources/alerts/{uid}` | Get all CRDT sources alert states for a database | +| [GET](#get-bdbs-crdt-source-all-alerts) | `/v1/bdbs/crdt_sources/alerts/{uid}/{crdt_src_id}` | Get all alert states for a CRDT source | +| [GET](#get-bdbs-crdt-source-alert) | `/v1/bdbs/crdt_sources/alerts/{uid}/{crdt_src_id}/{alert}` | Get a database alert state | + +## Get all CRDB CRDT source alert states {#get-all-bdbs-crdt-sources-alerts} + + GET /v1/bdbs/crdt_sources/alerts + +Get all alert states for all CRDT sources of all CRDBs. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_all_bdbs_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_bdbs_alerts" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/bdbs/crdt_sources/alerts + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-all-response} + +Returns a hash of alert UIDs and the [alerts states]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}) for each local BDB of CRDB. + +#### Example JSON body + +```json +{ + "1": { + "crdt_src_syncer_connection_error": { + "enabled": true, + "state": true, + "threshold": "80", + "change_time": "2014-08-29T11:19:49Z", + "severity": "WARNING", + "change_value": { + "state": true, + "threshold": "80", + "memory_util": 81.2 + } + }, + "..." + }, + "..." 
+} +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +## Get all BDB CRDT sources alert states {#get-bdbs-crdt-sources-alerts} + + GET /v1/bdbs/crdt_sources/alerts/{int: uid} + +Get all alert states for all crdt sources for a specific local bdb of a CRDB. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_bdb_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_alerts" >}}) | + +### Request {#get-request-all-crdt-alerts} + +#### Example HTTP request + + GET /v1/bdbs/crdt_sources/alerts/1 + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database | + +### Response {#get-response-all-crdt-alerts} + +Returns a hash of [alert objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}) and their states. + +#### Example JSON body + +```json +{ + "crdt_src_syncer_connection_error": { + "enabled": true, + "state": true, + "threshold": "80", + "severity": "WARNING", + "change_time": "2014-08-29T11:19:49Z", + "change_value": { + "state": true, + "threshold": "80", + "memory_util": 81.2 + } + }, + "..." +} +``` + +### Status codes {#get-status-codes-all-crdt-alerts} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Specified bdb does not exist | + +## Get all CRDT source alert states {#get-bdbs-crdt-source-all-alerts} + + GET /v1/bdbs/crdt_sources/alerts/{int: uid}/{int: crdt_src_id} + +Get all alert states for specific crdt source for a specific local BDB +of a CRDB. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_bdb_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_alerts" >}}) | + +### Request {#get-request-crdt-alerts} + +#### Example HTTP request + + GET /v1/bdbs/crdt_sources/alerts/1/2 + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database | +| crdt_src_id | integer | The ID of the crdt source in this BDB | + +### Response {#get-response-crdt-alerts} + +Returns a hash of [alert objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}) and their states. + +#### Example JSON body + +```json +{ + "crdt_src_syncer_connection_error": { + "enabled": true, + "state": true, + "threshold": "80", + "severity": "WARNING", + "change_time": "2014-08-29T11:19:49Z", + "change_value": { + "state": true, + "threshold": "80", + "memory_util": 81.2 + } + }, + "..." 
+} +``` + +### Status codes {#get-status-codes-crdt-alerts} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Specified bdb does not exist | + +## Get database alert state {#get-bdbs-crdt-source-alert} + + GET /v1/bdbs/crdt_sources/alerts/{int: uid}/{int: crdt_src_id}/{alert} + +Get a BDB alert state. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_bdb_alerts]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_bdb_alerts" >}}) | + +### Request {#get-request-alert} + +#### Example HTTP request + + GET /v1/bdbs/crdt_sources/alerts/1/2/crdt_src_syncer_connection_error + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the database | +| crdt_src_id | integer | The ID of the crdt source in this BDB | +| alert | string | The alert name | + +### Response {#get-response-alert} + +Returns an [alert object]({{< relref "/operate/rs/7.4/references/rest-api/objects/alert" >}}). + +#### Example JSON body + +```json +{ + "enabled": true, + "state": true, + "threshold": "80", + "severity": "WARNING", + "change_time": "2014-08-29T11:19:49Z", + "change_value": { + "state": true, + "threshold": "80", + "memory_util": 81.2 + } +} +``` + +### Status codes {#get-status-codes-alert} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad request | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Specified alert or bdb does not exist | +--- +Title: CRDB flush requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Flush Active-Active database requests +headerRange: '[1-2]' +linkTitle: flush +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/crdbs/flush/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [PUT](#put-crdbs-flush) | `/v1/crdbs/{crdb_guid}/flush` | Flush an Active-Active database | + +## Flush an Active-Active database {#put-crdbs-flush} + +```sh +PUT /v1/crdbs/{crdb_guid}/flush +``` + +Flush an Active-Active database. + +### Request {#put-request} + +#### Example HTTP request + +```sh +PUT /v1/crdbs/552bbccb-99f3-4142-bd17-93d245f0bc79/flush +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| X-Task-ID | string | Specified task ID | +| X-Result-TTL | integer | Time (in seconds) to keep task result | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| crdb_guid | string | Globally unique Active-Active database ID (GUID) | + +### Response {#put-response} + +Returns a [CRDB task object]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb_task" >}}). + +#### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Action was successful. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | The request is invalid or malformed. 
| +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | Unauthorized request. Invalid credentials | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Configuration or Active-Active database not found. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Configuration cannot be accepted, typically because it was already committed. | +--- +Title: CRDB health report requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Active-Active database health report requests +headerRange: '[1-2]' +linkTitle: health_report +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/crdbs/health_report/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-crdbs-health) | `/v1/crdbs/{crdb_guid}/health_report` | Get a health report for an Active-Active database | + +## Get health report {#get-crdbs-health} + + GET /v1/crdbs/{crdb_guid}/health_report + +Get a health report for an Active-Active database. + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/crdbs/{crdb_guid}/health_report + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| crdb_guid | string | Globally unique Active-Active database ID (GUID) | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| instance_id | integer | The request retrieves information from the specified Active-Active database instance. If this instance doesn’t exist, the request retrieves information from all instances. (optional) | + +### Response {#get-response} + +Returns a JSON array of [CRDB health report objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb/health_report" >}}). + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Action was successful. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Configuration or Active-Active database not found. | +--- +Title: CRDB purge requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Purge removed Active-Active database requests +headerRange: '[1-2]' +linkTitle: purge +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/crdbs/purge/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [PUT](#put-crdbs-purge) | `/v1/crdbs/{crdb_guid}/purge` | Purge data from an instance that was forcibly removed from the Active-Active database | + +## Purge data from removed instance {#put-crdbs-purge} + + PUT /v1/crdbs/{crdb_guid}/purge + +Purge the data from an instance that was removed from the +Active-Active database by force. + +When you force the removal of an instance from an Active-Active +database, the removed instance keeps the data and configuration +according to the last successful synchronization. + +To delete the data and configuration from the forcefully removed +instance you must use this API (Must be executed locally on the +removed instance). 
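+
+For example, a minimal cURL sketch of issuing this request locally on the removed instance might look like the following; the credentials, Active-Active database GUID, and instance ID are placeholders, and port 9443 is assumed to be the cluster's REST API port:
+
+```sh
+$ curl -k -X PUT -u "[username]:[password]" \
+    -H "Content-Type: application/json" \
+    -d '{ "instances": [2] }' \
+    https://localhost:9443/v1/crdbs/[crdb_guid]/purge
+```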
+ +### Request {#put-request} + +#### Example HTTP request + + PUT /v1/crdbs/1/purge + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| crdb_guid | string | Globally unique Active-Active database ID (GUID) | + +#### Request body + +| Field | Type | Description | +|-------|------|-------------| +| instances | array of integers | Array of unique instance IDs | + +### Response {#put-response} + +Returns a [CRDB task object]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb_task" >}}). + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Action was successful. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | The request is invalid or malformed. | +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | Unauthorized request. Invalid credentials | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Configuration, instance, or Active-Active database not found. | +--- +Title: CRDB updates requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Update Active-Active configuration requests +headerRange: '[1-2]' +linkTitle: updates +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/crdbs/updates/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-crdbs-updates) | `/v1/crdbs/{crdb_guid}/updates` | Modify Active-Active confgurarion | + +## Modify Active-Active configuration {#post-crdbs-updates} + + POST /v1/crdbs/{crdb_guid}/updates + +Modify Active-Active configuration. + +{{}} +This is a very powerful API request and can cause damage if used incorrectly. +{{}} + +In order to add or remove instances, you must use this API. For simple configuration updates, it is recommended to use PATCH on /crdbs/{crdb_guid} instead. + +Updating default_db_config affects both existing and new instances that may be added. + +When you update db_config, it changes the configuration of the database that you specify. This field overrides corresponding fields (if any) in default_db_config. + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/crdbs/1/updates + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| X-Task-ID | string | Specified task ID | +| X-Result-TTL | integer | Time (in seconds) to keep task result | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| crdb_guid | string | Globally unique Active-Active database ID (GUID) | + +#### Request body + +Include a [CRDB modify_request object]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb/modify_request" >}}) with updated fields in the request body. + +### Response {#post-response} + +Returns a [CRDB task object]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb_task" >}}). + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The request has been accepted. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | The posted Active-Active database contains invalid parameters. | +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | Unauthorized request. 
Invalid credentials | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Configuration, instance or Active-Active database not found. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The posted Active-Active database cannot be accepted. | +--- +Title: CRDBs requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Active-Active database requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: crdbs +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/crdbs/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-crdbs) | `/v1/crdbs` | Get all Active-Active databases | +| [GET](#get-crdb) | `/v1/crdbs/{crdb_guid}` | Get a specific Active-Active database | +| [PATCH](#patch-crdbs) | `/v1/crdbs/{crdb_guid}` | Update an Active-Active database | +| [POST](#post-crdb) | `/v1/crdbs` | Create a new Active-Active database | +| [DELETE](#delete-crdb) | `/v1/crdbs/{crdb_guid}` | Delete an Active-Active database | + +## Get all Active-Active databases {#get-all-crdbs} + +```sh +GET /v1/crdbs +``` + +Get a list of all Active-Active databases on the cluster. + +### Request {#get-all-request} + +#### Example HTTP request + +```sh +GET /v1/crdbs +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| X-Task-ID | string | Specified task ID | +| X-Result-TTL | integer | Time (in seconds) to keep task result | + +### Response {#get-all-response} + +Returns a JSON array of [CRDB objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb" >}}). + +##### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | A list of Active-Active database. | +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | Unauthorized request. Invalid credentials | + +## Get an Active-Active database {#get-crdb} + +```sh +GET /v1/crdbs/{crdb_guid} +``` + +Get a specific Active-Active database. + +### Request {#get-request} + +#### Example HTTP request + +```sh + GET /v1/crdbs/552bbccb-99f3-4142-bd17-93d245f0bc79 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| X-Task-ID | string | Specified task ID | +| X-Result-TTL | integer | Time (in seconds) to keep task result | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| crdb_guid | string | Globally unique Active-Active database ID (GUID) | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| instance_id | integer | Instance from which to get the Active-Active database information | + +### Response {#get-response} + +Returns a [CRDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb" >}}). + +#### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Active-Active database information is returned. | +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | Unauthorized request. Invalid credentials | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Database or configuration does not exist. 
| + +## Update an Active-Active database {#patch-crdbs} + +```sh +PATCH /v1/crdbs/{crdb_guid} +``` + +Update an Active-Active database's configuration. + +In order to add or remove instances, use [`POST crdbs/{crdb_guid}/updates`]({{< relref "/operate/rs/7.4/references/rest-api/requests/crdbs/updates#post-crdbs-updates" >}}) instead. + +### Request {#patch-request} + +#### Example HTTP request + +```sh + PATCH /v1/crdbs/552bbccb-99f3-4142-bd17-93d245f0bc79 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| X-Task-ID | string | Specified task ID | +| X-Result-TTL | integer | Time (in seconds) to keep task result | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| crdb_guid | string | Globally unique Active-Active database ID (GUID) | + +#### Request body + +Include a [CRDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb" >}}) with updated fields in the request body. + +### Response {#patch-response} + +Returns a [CRDB task object]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb_task" >}}). + +#### Status codes {#patch-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The request has been accepted. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | The posted Active-Active database contains invalid parameters. | +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | Unauthorized request. Invalid credentials | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Configuration or Active-Active database not found. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The posted Active-Active database cannot be accepted. | + +## Create an Active-Active database {#post-crdb} + +```sh +POST /v1/crdbs +``` + +Create a new Active-Active database. + +### Request {#post-request} + +#### Example HTTP request + +```sh + POST /v1/crdbs +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| X-Task-ID | string | Specified task ID | +| X-Result-TTL | integer | Time (in seconds) to keep task result | + +#### Request body + +Include a [CRDB object]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb" >}}), which defines the Active-Active database, in the request body. + +##### Example body + +```json +{ + "default_db_config": + { + "name": "sample-crdb", + "memory_size": 214748365 + }, + "instances": + [ + { + "cluster": + { + "url": "http://:9443", + "credentials": + { + "username": "", + "password": "" + }, + "name": "cluster-1" + }, + "compression": 6 + }, + { + "cluster": + { + "url": "http://:9443", + "credentials": + { + "username": "", + "password": "" + }, + "name": "cluster-2" + }, + "compression": 6 + } + ], + "name": "sample-crdb" +} +``` + +This JSON body creates an Active-Active database without TLS and with two participating clusters. + +### Response {#post-response} + +Returns a [CRDB task object]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb_task" >}}). + +#### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The request has been accepted. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | The request is invalid or malformed. 
| +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | Unauthorized request. Invalid credentials | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The posted Active-Active database cannot be accepted. | + +## Delete an Active-Active database {#delete-crdb} + +```sh +DELETE /v1/crdbs/{crdb_guid} +``` + +Delete an Active-Active database. + +### Request {#delete-request} + +#### Example HTTP request + +```sh + DELETE /v1/crdbs/552bbccb-99f3-4142-bd17-93d245f0bc79 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| X-Task-ID | string | Specified task ID | +| X-Result-TTL | integer | Time (in seconds) to keep task result | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| crdb_guid | string | Globally unique Active-Active database ID (GUID) | + +### Response {#delete-response} + +Returns a [CRDB task object]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb_task" >}}). + +#### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Action was successful. | +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | Unauthorized request. Invalid credentials | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Configuration or Active-Active database not found. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The Active-Active GUID is invalid or the Active-Active database was already deleted. | +--- +Title: Logs requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Cluster event logs requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: logs +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/logs/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-logs) | `/v1/logs` | Get cluster events log | + +## Get cluster events log {#get-logs} + + GET /v1/logs + +Get cluster events log. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_logged_events]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_logged_events" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/logs?order=desc + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| stime | ISO_8601 | Start time before which we don't want events. (optional) | +| etime | ISO_8601 | End time after which we don't want events. (optional) | +| order | string | desc/asc - get events in descending or ascending order. Defaults to asc. | +| limit | integer | Maximum number of events to return. (optional) | +| offset | integer | Skip offset events before returning first one (useful for pagination). (optional) | + +### Response {#get-response} + +Returns a JSON array of events. 
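+
+For illustration, here is a minimal sketch of querying this endpoint with Python's `requests` library and iterating over the returned events; the host, port, and credentials are placeholders, and TLS verification is disabled as in the other Python example in this document:
+
+```python
+import requests
+
+url = "https://[host][:port]/v1/logs"
+auth = ("[username]", "[password]")
+
+# Newest events first, 20 at a time (see the query parameters above).
+params = {"order": "desc", "limit": 20, "offset": 0}
+
+response = requests.get(url, auth=auth, params=params, verify=False)
+response.raise_for_status()
+
+for event in response.json():
+    print(event["time"], event["type"], event.get("severity"))
+```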
+ +#### Example JSON body + +```json +[ + { + "time": "2014-08-29T11:19:49Z", + "type": "bdb_name_updated", + "severity": "INFO", + "bdb_uid": "1", + "old_val": "test", + "new_val": "test123" + }, + { + "time": "2014-08-29T11:18:48Z", + "type": "cluster_bdb_created", + "severity": "INFO", + "bdb_uid": "1", + "bdb_name": "test" + }, + { + "time": "2014-08-29T11:17:49Z", + "type": "cluster_node_joined", + "severity": "INFO", + "node_uid": 2 + } +] +``` + +#### Event object + +| Field | Description | +|-------|-------------| +| time | Timestamp when event happened. | +| type | Event type. Additional fields may be available for certain event types. | +| additional fields | Additional fields may be present based on event type.| + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +--- +Title: JSON schema requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: API object JSON schema requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: jsonschema +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/jsonschema/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-jsonschema) | `/v1/jsonschema` | Get JSON schema of API objects | + +## Get object JSON schema {#get-jsonschema} + + GET /v1/jsonschema + +Get the JSON schema of various [Redis Enterprise REST API objects]({{< relref "/operate/rs/7.4/references/rest-api/objects" >}}). + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/jsonschema?object=bdb + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| object | string | Optional. The API object name: 'cluster', 'node', 'bdb' etc. | + +### Response {#get-response} + +Returns the JSON schema of the specified API object. + +#### Example JSON body + +```json +{ + "type": "object", + "description": "An API object that represents a managed database in the cluster.", + "properties": { + "...." + }, + "...." +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Invalid object. | +--- +Title: LDAP mappings requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: LDAP mappings requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: ldap_mappings +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/ldap_mappings/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-ldap_mappings) | `/v1/ldap_mappings` | Get all LDAP mappings | +| [GET](#get-ldap_mapping) | `/v1/ldap_mappings/{uid}` | Get a single LDAP mapping | +| [PUT](#put-ldap_mapping) | `/v1/ldap_mappings/{uid}` | Update an LDAP mapping | +| [POST](#post-ldap_mappings) | `/v1/ldap_mappings` | Create a new LDAP mapping | +| [DELETE](#delete-ldap_mappings) | `/v1/ldap_mappings/{uid}` | Delete an LDAP mapping | + +## Get all LDAP mappings {#get-all-ldap_mappings} + + GET /v1/ldap_mappings + +Get all LDAP mappings. 
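+
+As a minimal sketch, the list can be retrieved and inspected with Python's `requests` library; the host, port, and credentials are placeholders:
+
+```python
+import requests
+
+url = "https://[host][:port]/v1/ldap_mappings"
+auth = ("[username]", "[password]")
+
+response = requests.get(url, auth=auth, verify=False)
+response.raise_for_status()
+
+# Each element is an LDAP mapping object (see the example JSON body below).
+for mapping in response.json():
+    print(mapping["uid"], mapping["name"], mapping["dn"], mapping["role_uids"])
+```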
+ +#### Required permissions + +| Permission name | +|-----------------| +| [view_all_ldap_mappings_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_ldap_mappings_info" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/ldap_mappings + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-all-response} + +Returns a JSON array of [LDAP mapping objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/ldap_mapping" >}}). + +#### Example JSON body + +```json +[ + { + "uid": 17, + "name": "Admins", + "dn": "OU=ops.group,DC=redislabs,DC=com", + "email": "ops.group@redislabs.com", + "role_uids": ["1"], + "email_alerts": true, + "bdbs_email_alerts": ["1","2"], + "cluster_email_alerts": true + } +] +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +## Get LDAP mapping {#get-ldap_mapping} + + GET /v1/ldap_mappings/{int: uid} + +Get a specific LDAP mapping. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_ldap_mapping_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_ldap_mapping_info" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/ldap_mappings/1 + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The object's unique ID. | + +### Response {#get-response} + +Returns an [LDAP mapping object]({{< relref "/operate/rs/7.4/references/rest-api/objects/ldap_mapping" >}}). + +#### Example JSON body + +```json +{ + "uid": 17, + "name": "Admins", + "dn": "OU=ops.group,DC=redislabs,DC=com", + "email": "ops.group@redislabs.com", + "role_uids": ["1"], + "email_alerts": true, + "bdbs_email_alerts": ["1","2"], + "cluster_email_alerts": true +} +``` + +### Error codes {#get-error-codes} + +Possible `error_code` values: + +| Code | Description | +|------|-------------| +| unsupported_resource | The cluster is not yet able to handle this resource type. This could happen in a partially upgraded cluster, where some of the nodes are still on a previous version.| +| ldap_mapping_not_exist | An object does not exist| + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [403 Forbidden](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.4) | Operation is forbidden. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | ldap_mapping does not exist. | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support LDAP mappings yet. | + +## Update LDAP mapping {#put-ldap_mapping} + + PUT /v1/ldap_mappings/{int: uid} + +Update an existing ldap_mapping object. 
+ +#### Required permissions + +| Permission name | +|-----------------| +| [update_ldap_mapping]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_ldap_mapping" >}}) | + +### Request {#put-request} + +#### Example HTTP request + + PUT /v1/ldap_mappings/17 + +#### Example JSON body + +```json +{ + "dn": "OU=ops,DC=redislabs,DC=com", + "email": "ops@redislabs.com", + "email_alerts": true, + "bdbs_email_alerts": ["1","2"], + "cluster_email_alerts": true +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Request body + +Include an [LDAP mapping object]({{< relref "/operate/rs/7.4/references/rest-api/objects/ldap_mapping" >}}) with updated fields in the request body. + +### Response {#put-response} + +#### Example JSON body + +```json +{ + "uid": 17, + "name": "Admins", + "dn": "OU=ops,DC=redislabs,DC=com", + "email": "ops@redislabs.com", + "role_uids": ["1"], + "email_alerts": true, + "bdbs_email_alerts": ["1","2"], + "cluster_email_alerts": true +} +``` + +### Error codes {#put-error-codes} + +Possible `error_code` values: + +| Code | Description | +|------|-------------| +| unsupported_resource | The cluster is not yet able to handle this resource type. This could happen in a partially upgraded cluster, where some of the nodes are still on a previous version.| +| name_already_exists | An object of the same type and name exists| +| ldap_mapping_not_exist | An object does not exist| +| invalid_dn_param | A dn parameter has an illegal value| +| invalid_name_param | A name parameter has an illegal value| +| invalid_role_uids_param | A role_uids parameter has an illegal value| +| invalid_account_id_param | An account_id parameter has an illegal value| + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, LDAP mapping is created. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to change a non-existing LDAP mapping. | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support LDAP mapping yet. | + +## Create LDAP mapping {#post-ldap_mappings} + + POST /v1/ldap_mappings + +Create a new LDAP mapping. + +#### Required permissions + +| Permission name | +|-----------------| +| [create_ldap_mapping]({{< relref "/operate/rs/7.4/references/rest-api/permissions#create_ldap_mapping" >}}) | + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/ldap_mappings + +#### Example JSON body + +```json +{ + "name": "Admins", + "dn": "OU=ops.group,DC=redislabs,DC=com", + "email": "ops.group@redislabs.com", + "role_uids": ["1"] +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Request body + +Include an [LDAP mapping object]({{< relref "/operate/rs/7.4/references/rest-api/objects/ldap_mapping" >}}) in the request body. 
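+
+For illustration, here is a minimal sketch that sends the example body above with Python's `requests` library; the host, port, and credentials are placeholders:
+
+```python
+import requests
+
+url = "https://[host][:port]/v1/ldap_mappings"
+auth = ("[username]", "[password]")
+
+new_mapping = {
+    "name": "Admins",
+    "dn": "OU=ops.group,DC=redislabs,DC=com",
+    "email": "ops.group@redislabs.com",
+    "role_uids": ["1"]
+}
+
+response = requests.post(url, auth=auth, json=new_mapping, verify=False)
+response.raise_for_status()
+
+# The response echoes the created mapping, including its assigned uid.
+print(response.json()["uid"])
+```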
+ +### Response {#post-response} + +#### Example JSON body + +```json +{ + "uid": 17, + "name": "Admins", + "dn": "OU=ops.group,DC=redislabs,DC=com", + "email": "ops.group@redislabs.com", + "role_uids": ["1"] +} +``` + +### Error codes {#post-error-codes} + +Possible `error_code` values: + +| Code | Description | +|------|-------------| +| unsupported_resource | The cluster is not yet able to handle this resource type. This could happen in a partially upgraded cluster, where some of the nodes are still on a previous version.| +| name_already_exists | An object of the same type and name exists| +| missing_field | A needed field is missing| +| invalid_dn_param | A dn parameter has an illegal value| +| invalid_name_param | A name parameter has an illegal value| +| invalid_role_uids_param | A role_uids parameter has an illegal value| + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, an LDAP-mapping object is created. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters. | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support LDAP mappings yet. | + +## Delete LDAP mapping {#delete-ldap_mappings} + + DELETE /v1/ldap_mappings/{int: uid} + +Delete an LDAP mapping object. + +#### Required permissions + +| Permission name | +|-----------------| +| [delete_ldap_mapping]({{< relref "/operate/rs/7.4/references/rest-api/permissions#delete_ldap_mapping" >}}) | + +### Request {#delete-request} + +#### Example HTTP request + + DELETE /v1/ldap_mappings/1 + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The ldap_mapping unique ID. | + +### Response {#delete-response} + +Returns a status code. If an error occurs, the response body may include a more specific error code and message. + +### Error codes {#delete-error-codes} + +Possible `error_code` values: + +| Code | Description | +|------|-------------| +| unsupported_resource | The cluster is not yet able to handle this resource type. This could happen in a partially upgraded cluster, where some of the nodes are still on a previous version.| +| ldap_mapping_not_exist | An object does not exist| + +### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, the ldap_mapping is deleted. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The request is not acceptable. | +| [501 Not Implemented](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.2) | Cluster doesn't support LDAP mappings yet. | +--- +Title: All nodes database debug info requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the Redis Enterprise Software REST API debuginfo/all/bdb requests. +headerRange: '[1-2]' +linkTitle: bdb +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/debuginfo/all/bdb/' +--- + +{{}} +This REST API path is deprecated as of Redis Enterprise Software version 7.4.2. 
Use the new path [`/v1/bdbs/debuginfo`]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/debuginfo" >}}) instead. +{{}} + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-debuginfo-bdb) | `/v1/debuginfo/all/bdb/{bdb_uid}` | Get debug info for a database from all nodes | + +## Get database debug info for all nodes {#get-all-debuginfo-bdb} + + GET /v1/debuginfo/all/bdb/{int: bdb_uid} + +Downloads a tar file that contains debug info for the specified database (`bdb_uid`) from all nodes. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_debugging_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_debugging_info" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/debuginfo/all/bdb/1 + +### Response {#get-all-response} + +Downloads the debug info in a tar file called `filename.tar.gz`. Extract the files from the tar file to access the debug info. + +#### Response headers + +| Key | Value | Description | +|-----|-------|-------------| +| Content-Type | application/x-gzip | Media type of request/response body | +| Content-Length | 653350 | Length of the response body in octets | +| Content-Disposition | attachment; filename=debuginfo.tar.gz | Display response in browser + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Failed to get debug info. | +--- +Title: All nodes debug info requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the Redis Enterprise Software REST API debuginfo/all requests. +headerRange: '[1-2]' +hideListLinks: true +linkTitle: all +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/debuginfo/all/' +--- + +{{}} +This REST API path is deprecated as of Redis Enterprise Software version 7.4.2. Use the new path [`/v1/cluster/debuginfo`]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/debuginfo" >}}) instead. +{{}} + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-debuginfo) | `/v1/debuginfo/all` | Get debug info for all nodes | + +## Get debug info for all nodes {#get-all-debuginfo} + + GET /v1/debuginfo/all + +Downloads a tar file that contains debug info from all nodes. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_debugging_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_debugging_info" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/debuginfo/all + +### Response {#get-all-response} + +Downloads the debug info in a tar file called `filename.tar.gz`. Extract the files from the tar file to access the debug info for all nodes. + +#### Response headers + +| Key | Value | Description | +|-----|-------|-------------| +| Content-Type | application/x-gzip | Media type of request/response body | +| Content-Length | 653350 | Length of the response body in octets | +| Content-Disposition | attachment; filename=debuginfo.tar.gz | Display response in browser or download as attachment | + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. 
| +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Failed to get debug info. | +--- +Title: Current node database debug info requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the Redis Enterprise Software REST API debuginfo/node/bdb requests. +headerRange: '[1-2]' +linkTitle: bdb +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/debuginfo/node/bdb/' +--- + +{{}} +This REST API path is deprecated as of Redis Enterprise Software version 7.4.2. Use the new path [`/v1/bdbs/debuginfo`]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/debuginfo" >}}) instead. +{{}} + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-debuginfo-node-bdb) | `/v1/debuginfo/node/bdb/{bdb_uid}` | Get debug info for the current node regarding a specific database | + +## Get database debug info for current node {#get-debuginfo-node-bdb} + + GET /v1/debuginfo/node/bdb/{int: bdb_uid} + +Downloads a tar file that contains debug info for the specified database (`bdb_uid`) from the current node. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_debugging_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_debugging_info" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/debuginfo/node/bdb/1 + +### Response {#get-response} + +Downloads the debug info in a tar file called `filename.tar.gz`. Extract the files from the tar file to access the debug info. + +#### Response headers + +| Key | Value | Description | +|-----|-------|-------------| +| Content-Type | application/x-gzip | Media type of request/response body | +| Content-Length | 653350 | Length of the response body in octets | +| Content-Disposition | attachment; filename=debuginfo.tar.gz | Display response in browser + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Failed to get debug info. | +--- +Title: Current node debug info requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the Redis Enterprise Software REST API debuginfo/node requests. +headerRange: '[1-2]' +hideListLinks: true +linkTitle: node +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/debuginfo/node/' +--- + +{{}} +This REST API path is deprecated as of Redis Enterprise Software version 7.4.2. Use the new path [`/v1/nodes/debuginfo`]({{< relref "/operate/rs/7.4/references/rest-api/requests/nodes/debuginfo" >}}) instead. +{{}} + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-debuginfo-node) | `/v1/debuginfo/node` | Get debug info for the current node | + +## Get debug info for current node {#get-debuginfo-node} + + GET /v1/debuginfo/node + +Downloads a tar file that contains debug info for the current node. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_debugging_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_debugging_info" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/debuginfo/node + +### Response {#get-response} + +Downloads the debug info in a tar file called `filename.tar.gz`. Extract the files from the tar file to access the debug info. 
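+
+Because the response body is a gzipped tar archive rather than JSON, a client should stream the raw bytes to disk. Here is a minimal sketch with Python's `requests` library; the host, port, and credentials are placeholders, and the same approach applies to the newer `/v1/nodes/debuginfo` path mentioned in the note above:
+
+```python
+import requests
+
+url = "https://[host][:port]/v1/debuginfo/node"
+auth = ("[username]", "[password]")
+
+# Stream the download so the whole archive is never held in memory.
+with requests.get(url, auth=auth, verify=False, stream=True) as response:
+    response.raise_for_status()
+    with open("debuginfo.tar.gz", "wb") as f:
+        for chunk in response.iter_content(chunk_size=8192):
+            f.write(chunk)
+```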
+ +#### Response headers + +| Key | Value | Description | +|-----|-------|-------------| +| Content-Type | application/x-gzip | Media type of request/response body | +| Content-Length | 653350 | Length of the response body in octets | +| Content-Disposition | attachment; filename=debuginfo.tar.gz | Display response in browser or download as attachment | + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Failed to get debug info. | +--- +Title: Debug info requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Debug info requests +hideListLinks: true +linkTitle: debuginfo +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/debuginfo/' +--- + +{{}} +These REST API paths are deprecated as of Redis Enterprise Software version 7.4.2. Use the new paths [`/v1/cluster/debuginfo`]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/debuginfo" >}}), [`/v1/nodes/debuginfo`]({{< relref "/operate/rs/7.4/references/rest-api/requests/nodes/debuginfo" >}}), and [`/v1/bdbs/debuginfo`]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/debuginfo" >}}) instead. +{{}} + +Downloads a support package, which includes logs and information about the cluster, nodes, databases, and shards, as a tar file called `filename.tar.gz`. Extract the files from the tar file to access the debug info. + +## Get debug info for all nodes in the cluster + +| Method | Path | Description | +|--------|------|-------------| +| [GET]({{< relref "./all#get-all-debuginfo" >}}) | `/v1/debuginfo/all` | Gets debug info for all nodes | +| [GET]({{< relref "./all/bdb#get-all-debuginfo-bdb" >}}) | `/v1/debuginfo/all/bdb/{bdb_uid}` | Gets debug info for a database from all nodes | + +## Get debug info for the current node + +| Method | Path | Description | +|--------|------|-------------| +| [GET]({{< relref "./node#get-debuginfo-node" >}}) | `/v1/debuginfo/node` | Gets debug info for the current node | +| [GET]({{< relref "./node/bdb#get-debuginfo-node-bdb" >}}) | `/v1/debuginfo/node/bdb/{bdb_uid}` | Gets debug info for a database from the current node | +--- +Title: Refresh JWT requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Refresh JW token requests +headerRange: '[1-2]' +linkTitle: refresh_jwt +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/users/refresh_jwt/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-refresh_jwt) | `/v1/users/refresh_jwt` | Get a new authentication token | + +## Get a new authentication token {#post-refresh_jwt} + + POST /v1/users/refresh_jwt + +Generate a new JSON Web Token (JWT) for authentication. + +Takes a valid token and returns the new token generated by the request. + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/users/refresh_jwt + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Authorization | JWT eyJ5bGciOiKIUzI1NiIsInR5cCI6IkpXVCJ9.
eyJpYXViOjE0NjU0NzU0ODYsInVpZFI1IjEiLCJleHAiOjE0NjU0Nz30OTZ9.2xYXumd1rDoE0e
dFzcLElMOHsshaqQk2HUNgdsUKxMU | Valid JSON Web Token (JWT) | + +#### Request body + +| Field | Type | Description | +|-------|------|-------------| +| ttl | integer | Time to live - The amount of time in seconds the token will be valid (optional) | + +### Response {#post-response} + +Returns a JSON object that contains the generated access token. + +#### Example JSON body + +```json + { + "access_token": "eyJ5bGciOiKIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpYXViOjE0NjU0NzU0ODYsInVpZFI1IjEiLCJleHAiOjE0NjU0Nz30OTZ9.2xYXumd1rDoE0edFzcLElMOHsshaqQk2HUNgdsUKxMU" + } +``` + + + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | A new token is given. | +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | The token is invalid or password has expired. | +--- +Title: User password requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: User password requests +headerRange: '[1-2]' +linkTitle: password +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/users/password/' +--- + +| Method | Path | Description | +|----------------------------|----------------------|-----------------------------| +| [PUT](#update-password) | `/v1/users/password` | Change an existing password | +| [POST](#add-password) | `/v1/users/password` | Add a new password | +| [DELETE](#delete-password) | `/v1/users/password` | Delete a password | + +## Update password {#update-password} + + PUT /v1/users/password + +Reset the password list of an internal user to include a new password. + +### Request {#put-request} + +#### Example HTTP request + + PUT /v1/users/password + +#### Example JSON body + + ```json + { + "username": "johnsmith", + "old_password": "a password that exists in the current list", + "new_password": "the new (single) password" + } + ``` + +#### Request headers +| Key | Value | Description | +|--------|------------------|---------------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body + +The request must contain a single JSON object with the following fields: + +| Field | Type | Description | +|-------|------|-------------| +| username | string | Affected user (required) | +| old_password | string | A password that exists in the current list (required) | +| new_password | string | The new password (required) | + +### Response {#put-response} + +Returns a status code to indicate password update success or failure. + +### Error codes {#put-error-codes} + +When errors are reported, the server may return a JSON object with +`error_code` and `message` fields that provide additional information. +The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| password_not_complex | The given password is not complex enough (Only work when the password_complexity feature is enabled). | +| new_password_same_as_current | The given new password is identical to one of the already existing passwords. | + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, password changed | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing parameters. | +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | The user is unauthorized. 
| +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to reset password to a non-existing user. | + +## Add password {#add-password} + + POST /v1/users/password + +Add a new password to an internal user's passwords list. + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/users/password + +#### Example JSON body + + ```json + { + "username": "johnsmith", + "old_password": "an existing password", + "new_password": "a password to add" + } + ``` + +#### Request headers +| Key | Value | Description | +|--------|------------------|---------------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body + +The request must contain a single JSON object with the following fields: + +| Field | Type | Description | +|-------|------|-------------| +| username | string | Affected user (required) | +| old_password | string | A password that exists in the current list (required) | +| new_password | string | The new (single) password (required) | + +### Response {#post-response} + +Returns a status code to indicate password creation success or failure. If an error occurs, the response body may include a more specific error code and message. + +### Error codes {#post-error-codes} + +When errors are reported, the server may return a JSON object with +`error_code` and `message` fields that provide additional information. +The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| password_not_complex | The given password is not complex enough (Only work when the password_complexity feature is enabled). | +| new_password_same_as_current | The given new password is identical to one of the already existing passwords. | + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, new password was added to the list of valid passwords. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing parameters. | +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | The user is unauthorized. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to add a password to a non-existing user. | + +## Delete password {#delete-password} + DELETE /v1/users/password + +Delete a password from an internal user's passwords list. + +### Request {#delete-request} + +#### Example HTTP request + + DELETE /v1/users/password + +#### Example JSON body + + ```json + { + "username": "johnsmith", + "old_password": "an existing password" + } + ``` + +#### Request headers +| Key | Value | Description | +|--------|------------------|---------------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body + +The request must contain a single JSON with the following fields: + +| Field | Type | Description | +|-------|------|-------------| +| username | string | Affected user (required) | +| old_password | string | Existing password to be deleted (required) | + +### Response {#delete-response} + +### Error codes {#delete-error-codes} + +When errors are reported, the server may return a JSON object with +`error_code` and `message` fields that provide additional information. 
+The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| cannot_delete_last_password | Cannot delete the last password of a user | + +### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, new password was deleted from the list of valid passwords. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing parameters. | +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | The user is unauthorized. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to delete a password to a non-existing user. | +--- +Title: Users requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: User requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: users +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/users/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-users) | `/v1/users` | Get all users | +| [GET](#get-user) | `/v1/users/{uid}` | Get a single user | +| [PUT](#put-user) | `/v1/users/{uid}` | Update a user's configuration | +| [POST](#post-user) | `/v1/users` | Create a new user | +| [DELETE](#delete-user) | `/v1/users/{uid}` | Delete a user | + +## Get all users {#get-all-users} + +```sh +GET /v1/users +``` + +Get a list of all users. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_all_users_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_users_info" >}}) | admin | + +### Request {#get-all-request} + +#### Example HTTP request + +```sh +GET /v1/users +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-all-response} + +Returns a JSON array of [user objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/user" >}}). + +#### Example JSON body + +```json +[ + { + "uid": 1, + "password_issue_date": "2017-03-02T09:43:34Z", + "email": "user@example.com", + "name": "John Doe", + "email_alerts": true, + "bdbs_email_alerts": ["1","2"], + "role": "admin", + "auth_method": "regular", + "status": "active" + }, + { + "uid": 2, + "password_issue_date": "2017-03-02T09:43:34Z", + "email": "user2@example.com", + "name": "Jane Poe", + "email_alerts": true, + "role": "db_viewer", + "auth_method": "regular", + "status": "active" + } +] +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +## Get user {#get-user} + +```sh +GET /v1/users/{int: uid} +``` + +Get a single user's details. 
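+
+Here is a minimal sketch of fetching one user and handling the documented 404 case with Python's `requests` library; the host, port, credentials, and user ID are placeholders:
+
+```python
+import requests
+
+uid = 1  # the user's unique ID
+url = f"https://[host][:port]/v1/users/{uid}"
+auth = ("[username]", "[password]")
+
+response = requests.get(url, auth=auth, verify=False)
+if response.status_code == 404:
+    print(f"User {uid} does not exist")
+else:
+    response.raise_for_status()
+    user = response.json()
+    print(user["name"], user["email"], user["role"])
+```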
+ +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_user_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_user_info" >}}) | admin | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/users/1 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The user's unique ID | + +### Response {#get-response} + +Returns a [user object]({{< relref "/operate/rs/7.4/references/rest-api/objects/user" >}}) that contains the details for the specified user ID. + +#### Example JSON body + +```json +{ + "uid": 1, + "password_issue_date": "2017-03-07T15:11:08Z", + "role": "db_viewer", + "email_alerts": true, + "bdbs_email_alerts": ["1","2"], + "email": "user@example.com", + "name": "John Doe", + "auth_method": "regular", + "status": "active" +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [403 Forbidden](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.4) | Operation is forbidden. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | User does not exist. | + +## Update user {#put-user} + +```sh +PUT /v1/users/{int: uid} +``` + +Update an existing user's configuration. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [update_user]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_user" >}}) | admin | + +Any user can change their own name, password, or alert preferences. + +### Request {#put-request} + +#### Example HTTP request + +```sh +PUT /v1/users/1 +``` + +#### Example JSON body + +```json +{ + "email_alerts": false, + "role_uids": [ 2, 4 ] +} +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The user's unique ID | + + +#### Request body + +Include a [user object]({{< relref "/operate/rs/7.4/references/rest-api/objects/user" >}}) with updated fields in the request body. + +### Response {#put-response} + +Returns the updated [user object]({{< relref "/operate/rs/7.4/references/rest-api/objects/user" >}}). + +#### Example JSON body + +```json +{ + "uid": 1, + "password_issue_date": "2017-03-07T15:11:08Z", + "email": "user@example.com", + "name": "Jane Poe", + "email_alerts": false, + "role": "db_viewer", + "role_uids": [ 2, 4 ], + "auth_method": "regular" +} +``` + +{{}} +For [RBAC-enabled clusters]({{< relref "/operate/rs/7.4/security/access-control" >}}), the returned user details include `role_uids` instead of `role`. +{{}} + +### Error codes {#put-error-codes} + +When errors are reported, the server may return a JSON object with `error_code` and `message` field that provide additional information. 
The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| password_not_complex | The given password is not complex enough (Only works when the password_complexity feature is enabled).| +| new_password_same_as_current | The given new password is identical to the old password.| +| email_already_exists | The given email is already taken.| +| change_last_admin_role_not_allowed | At least one user with admin role should exist.| + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, the user is updated. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to change a non-existing user. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The requested configuration is invalid. | + +## Create user {#post-user} + +```sh +POST /v1/users +``` + +Create a new user. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [create_new_user]({{< relref "/operate/rs/7.4/references/rest-api/permissions#create_new_user" >}}) | admin | + +### Request {#post-request} + +#### Example HTTP request + +```sh +POST /v1/users +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Body + +Include a single [user object]({{< relref "/operate/rs/7.4/references/rest-api/objects/user" >}}) in the request body. The user object must have an email, password, and role. + +{{}} +For [RBAC-enabled clusters]({{< relref "/operate/rs/7.4/security/access-control" >}}), use `role_uids` instead of `role` in the request body. +{{}} + +`email_alerts` can be configured either as: + +- `true` - user will receive alerts for all databases configured in `bdbs_email_alerts`. The user will receive alerts for all databases by default if `bdbs_email_alerts` is not configured. `bdbs_email_alerts` can be a list of database UIDs or `[‘all’]` meaning all databases. + +- `false` - user will not receive alerts for any databases + +##### Example JSON body + +```json +{ + "email": "newuser@example.com", + "password": "my-password", + "name": "Pat Doe", + "email_alerts": true, + "bdbs_email_alerts": ["1","2"], + "role_uids": [ 3, 4 ], + "auth_method": "regular" +} +``` + +### Response {#post-response} + +Returns the newly created [user object]({{< relref "/operate/rs/7.4/references/rest-api/objects/user" >}}). + +#### Example JSON body + +```json +{ + "uid": 1, + "password_issue_date": "2017-03-07T15:11:08Z", + "email": "newuser@example.com", + "name": "Pat Doe", + "email_alerts": true, + "bdbs_email_alerts": ["1","2"], + "role": "db_viewer", + "role_uids": [ 3, 4 ], + "auth_method": "regular" +} +``` + +### Error codes {#post-error-codes} + +When errors are reported, the server may return a JSON object with `error_code` and `message` field that provide additional information. 
+ +The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| password_not_complex | The given password is not complex enough (Only works when the password_complexity feature is enabled).| +| email_already_exists | The given email is already taken.| +| name_already_exists | The given name is already taken.| + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, user is created. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters. | +| [409 Conflict](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) | User with the same email already exists. | + +### Examples + +#### cURL + +```sh +$ curl -k -X POST -u '[username]:[password]' \ + -H 'Content-Type: application/json' \ + -d '{ "email": "newuser@example.com", \ + "password": "my-password", \ + "name": "Pat Doe", \ + "email_alerts": true, \ + "bdbs_email_alerts": ["1","2"], \ + "role_uids": [ 3, 4 ], \ + "auth_method": "regular" }' \ + 'https://[host][:port]/v1/users' +``` + +#### Python + +```python +import requests +import json + +url = "https://[host][:port]/v1/users" +auth = ("[username]", "[password]") + +payload = json.dumps({ + "email": "newuser@example.com", + "password": "my-password", + "name": "Pat Doe", + "email_alerts": True, + "bdbs_email_alerts": [ + "1", + "2" + ], + "role_uids": [ + 3, + 4 + ], + "auth_method": "regular" +}) + +headers = { + 'Content-Type': 'application/json' +} + +response = requests.request("POST", url, auth=auth, headers=headers, data=payload, verify=False) + +print(response.text) +``` + +## Delete user {#delete-user} + +```sh +DELETE /v1/users/{int: uid} +``` + +Delete a user. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [delete_user]({{< relref "/operate/rs/7.4/references/rest-api/permissions#delete_user" >}}) | admin | + +### Request {#delete-request} + +#### Example HTTP request + +```sh +DELETE /v1/users/1 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The user's unique ID | + +### Response {#delete-response} + +Returns a status code to indicate the success or failure of the user deletion. + +### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, the user is deleted. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The request is not acceptable. | +--- +Title: Authorize user requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Users authorization requests +headerRange: '[1-2]' +linkTitle: authorize +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/users/authorize/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-authorize) | `/v1/users/authorize` | Authorize a user | + +## Authorize user {#post-authorize} + + POST /v1/users/authorize + +Generate a JSON Web Token (JWT) for a user to use as authorization to access the REST API. 
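+
+Here is a minimal sketch of obtaining a token and then using it in place of basic authentication on a later request; the host, port, and user credentials are placeholders, and the `Authorization: JWT <token>` header format is the one shown for the refresh JWT request earlier in this document:
+
+```python
+import requests
+
+base = "https://[host][:port]"
+
+# Exchange a username and password for a JSON Web Token.
+body = {"username": "user@redislabs.com", "password": "my_password"}
+response = requests.post(f"{base}/v1/users/authorize", json=body, verify=False)
+response.raise_for_status()
+token = response.json()["access_token"]
+
+# Use the token on subsequent requests instead of basic authentication.
+headers = {"Authorization": f"JWT {token}"}
+response = requests.get(f"{base}/v1/users", headers=headers, verify=False)
+print(response.status_code)
+```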
+ +### Request {#post-request} + +#### Example HTTP request + + POST /v1/users/authorize + +#### Example JSON body + + ```json + { + "username": "user@redislabs.com", + "password": "my_password" + } + ``` + +#### Request headers +| Key | Value | Description | +|--------|------------------|---------------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body + +Include a [JWT authorize object]({{< relref "/operate/rs/7.4/references/rest-api/objects/jwt_authorize" >}}) with a valid username and password in the request body. + +### Response {#post-response} + +Returns a JSON object that contains the generated access token. + +#### Example JSON body + + ```json + { + "access_token": "eyJ5bGciOiKIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpYXViOjE0NjU0NzU0ODYsInVpZFI1IjEiLCJleHAiOjE0NjU0Nz30OTZ9.2xYXumd1rDoE0edFzcLElMOHsshaqQk2HUNgdsUKxMU" + } + ``` + +### Error codes {#post-error-codes} + +When errors are reported, the server may return a JSON object with +`error_code` and `message` fields that provide additional information. +The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| password_expired | The password has expired and must be changed. | + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | The user is authorized. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | The request could not be understood by the server due to malformed syntax. | +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | The user is unauthorized. | +--- +Title: Endpoints stats requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Endpoint statistics requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: endpoints/stats +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/endpoints-stats/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-endpoints-stats) | `/v1/endpoints/stats` | Get stats for all endpoints | + +## Get all endpoints stats {#get-endpoints-stats} + + GET /v1/endpoints/stats + +Get statistics for all endpoint-proxy links. + +{{}} +This method will return both endpoints and listeners stats for backwards +compatability. +{{}} + +#### Required permissions + +| Permission name | +|-----------------| +| [view_endpoint_stats]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_endpoint_stats" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/endpoints/stats?interval=1hour&stime=2014-08-28T10:00:00Z + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. 
Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-response} + +The `uid` format in the response is: `"BDB_UID:ENDPOINT_UID:PROXY_UID"` + +For example: `{"uid": "1:2:3"}` means `BDB_UID=1`, `ENDPOINT_UID=2`, and `PROXY_UID=3` + +#### Example JSON body + +```json +[ + { + "uid" : "365:1:1", + "intervals" : [ + { + "interval" : "10sec", + "total_req" : 0, + "egress_bytes" : 0, + "cmd_get" : 0, + "monitor_sessions_count" : 0, + "auth_errors" : 0, + "acc_latency" : 0, + "stime" : "2017-01-12T14:59:50Z", + "write_res" : 0, + "total_connections_received" : 0, + "cmd_set" : 0, + "read_req" : 0, + "max_connections_exceeded" : 0, + "acc_write_latency" : 0, + "write_req" : 0, + "other_res" : 0, + "cmd_flush" : 0, + "acc_read_latency" : 0, + "other_req" : 0, + "conns" : 0, + "write_started_res" : 0, + "cmd_touch" : 0, + "read_res" : 0, + "auth_cmds" : 0, + "etime" : "2017-01-12T15:00:00Z", + "total_started_res" : 0, + "ingress_bytes" : 0, + "last_res_time" : 0, + "read_started_res" : 0, + "acc_other_latency" : 0, + "total_res" : 0, + "last_req_time" : 0, + "other_started_res" : 0 + } + ] + }, + { + "intervals" : [ + { + "acc_read_latency" : 0, + "other_req" : 0, + "etime" : "2017-01-12T15:00:00Z", + "auth_cmds" : 0, + "total_started_res" : 0, + "write_started_res" : 0, + "cmd_touch" : 0, + "conns" : 0, + "read_res" : 0, + "total_res" : 0, + "other_started_res" : 0, + "last_req_time" : 0, + "read_started_res" : 0, + "last_res_time" : 0, + "ingress_bytes" : 0, + "acc_other_latency" : 0, + "egress_bytes" : 0, + "interval" : "10sec", + "total_req" : 0, + "acc_latency" : 0, + "auth_errors" : 0, + "cmd_get" : 0, + "monitor_sessions_count" : 0, + "read_req" : 0, + "max_connections_exceeded" : 0, + "total_connections_received" : 0, + "cmd_set" : 0, + "acc_write_latency" : 0, + "write_req" : 0, + "stime" : "2017-01-12T14:59:50Z", + "write_res" : 0, + "cmd_flush" : 0, + "other_res" : 0 + } + ], + "uid" : "333:1:2" + } +] +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +--- +Title: Database actions requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Database actions requests +headerRange: '[1-2]' +linkTitle: bdb +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/actions/bdb/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-db-actions) | `/v1/actions/bdb/{bdb_uid}` | Get the status of a specific database's actions | + +## Get database actions {#get-db-actions} + +``` +GET /v1/actions/bdb/{bdb_uid} +``` + +Get the status of all currently executing, pending, or completed state-machine-related actions for a specific database. This API tracks short-lived API requests that return `action_uid`. 
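+
+As an illustration, here is a minimal sketch that polls this endpoint until every action reported for a database has finished; the host, port, and credentials are placeholders, and the sketch assumes that `"completed"` (the terminal status shown in the action examples in this document) marks a finished action:
+
+```python
+import requests
+import time
+
+bdb_uid = 1
+url = f"https://[host][:port]/v1/actions/bdb/{bdb_uid}"
+auth = ("[username]", "[password]")
+
+while True:
+    response = requests.get(url, auth=auth, verify=False)
+    response.raise_for_status()
+    actions = response.json()
+    for action in actions:
+        print(action["name"], action["status"], action["progress"])
+    # Stop once no action is still in progress.
+    if all(action["status"] == "completed" for action in actions):
+        break
+    time.sleep(5)
+```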
+ +#### Required permissions + +| Permission name | +|-----------------| +| [view_status_of_cluster_action]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_status_of_cluster_action" >}}) | + +### Request {#get-request} + +#### Example HTTP request + +``` +GET /v1/actions/bdb/1 +``` + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| bdb_uid | string | Unique database ID | + +### Response {#get-response} + +Returns an array of JSON objects with attributes from [actions]({{< relref "/operate/rs/7.4/references/rest-api/objects/action" >}}) and [state machines]({{< relref "/operate/rs/7.4/references/rest-api/objects/state-machine" >}}). + +Each action contains the following attributes: `name`, `action_uid`, `status`, and `progress`. + +#### Example JSON body + +```json +[ + { + "action_uid": "8afc7f70-f3ae-4244-a5e9-5133e78b2e97", + "heartbeat": 1703067908, + "name": "SMUpdateBDB", + "object_name": "bdb:1", + "pending_ops": {}, + "progress": 50.0, + "state": "proxy_policy", + "status": "active" + } +] +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error, response provides info about state-machine actions | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | bdb not found | +--- +Title: Actions requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Actions requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: actions +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/actions/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-actions) | `/v1/actions` | Get all actions | +| [GET](#get-action) | `/v1/actions/{uid}` | Get a single action | + +## Get all actions {#get-all-actions} + +``` +GET /v1/actions +``` + +Get the status of all actions (executing, queued, or completed) on all entities (clusters, nodes, and databases). This API tracks long-lived API requests that return either a `task_id` or an `action_uid`. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_status_of_cluster_action]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_status_of_cluster_action" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + +``` +GET /v1/actions +``` + +### Response {#get-all-response} + +Returns a JSON array of [action objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/action" >}}) and an array of [state-machine objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/state-machine" >}}). + +Regardless of an action’s source, each action in the response contains the following attributes: `name`, `action_uid`, `status`, and `progress`. 
+ +#### Example JSON body + +```json +{ + "actions": [ + { + "action_uid": "159ca2f8-7bf3-4cda-97e8-4eb560665c28", + "name": "retry_bdb", + "node_uid": "2", + "progress": "100", + "status": "completed", + "task_id": "159ca2f8-7bf3-4cda-97e8-4eb560665c28" + }, + { + "action_uid": "661697c5-c747-41bd-ab81-ffc8fd13c494", + "name": "retry_bdb", + "node_uid": "1", + "progress": "100", + "status": "completed", + "task_id": "661697c5-c747-41bd-ab81-ffc8fd13c494" + } + ], + "state-machines": [ + { + "action_uid": "a10586b1-60bc-428e-9bc6-392eb5f0d8ae", + "heartbeat": 1650378874, + "name": "SMCreateBDB", + "object_name": "bdb:1", + "progress": 100, + "status": "completed" + } + ] +} +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error, response provides info about an ongoing action | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Action does not exist (i.e. not currently running and no available status of last run).| + +## Get a specific action {#get-action} + +``` +GET /v1/actions/{uid} +``` + +Get the status of a currently executing, queued, or completed action. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_status_of_cluster_action]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_status_of_cluster_action" >}}) | + +### Request {#get-request} + +#### Example HTTP request + +``` +GET /v1/actions/{uid} +``` + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | string | The action_uid to check | + +### Response {#get-response} + +Returns an [action object]({{< relref "/operate/rs/7.4/references/rest-api/objects/action" >}}). + +Regardless of an action’s source, each action contains the following attributes: `name`, `action_uid`, `status`, and `progress`. + +#### Example JSON body + +```json +{ + "action_uid": "159ca2f8-7bf3-4cda-97e8-4eb560665c28", + "name": "retry_bdb", + "node_uid": "2", + "progress": "100", + "status": "completed", + "task_id": "159ca2f8-7bf3-4cda-97e8-4eb560665c28" +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error, response provides info about an ongoing action | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Action does not exist (i.e. not currently running and no available status of last run) | +--- +Title: CRDB tasks requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Active-Active database task status requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: crdb_tasks +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/crdb_tasks/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-crdb_task) | `/v1/crdb_tasks/{task_id}` | Get the status of an executed task | + +## Get task status {#get-crdb_task} + + GET /v1/crdb_tasks/{task_id} + +Get the status of an executed task. + +The status of a completed task is kept for 500 seconds by default. 
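+
+For illustration, here is a minimal sketch of checking a task returned by one of the CRDB requests above; the host, port, credentials, and task ID are placeholders, and the `X-Result-TTL` header asks the cluster to keep the result longer than the default:
+
+```python
+import requests
+
+task_id = "[task_id]"  # returned by a CRDB create, update, or delete request
+url = f"https://[host][:port]/v1/crdb_tasks/{task_id}"
+auth = ("[username]", "[password]")
+
+# Keep the task result available for 10 minutes (value is in seconds).
+headers = {"X-Result-TTL": "600"}
+
+response = requests.get(url, auth=auth, headers=headers, verify=False)
+if response.status_code == 404:
+    print("Task not found (the result may have already expired)")
+else:
+    response.raise_for_status()
+    print(response.json())  # a CRDB task object
+```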
+ +### Request {#get-request} + +#### Example HTTP request + + GET /v1/crdb_tasks/1 + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| X-Result-TTL | integer | Task time to live | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| task_id | string | Task ID | + +### Response {#get-response} + +Returns a [CRDB task object]({{< relref "/operate/rs/7.4/references/rest-api/objects/crdb_task" >}}). + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Task status. | +| [401 Unauthorized](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.2) | Unauthorized request. Invalid credentials | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Task not found. | +--- +Title: OCSP status requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: OCSP status requests +headerRange: '[1-2]' +linkTitle: status +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/ocsp/status/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-status) | `/v1/ocsp/status` | Get OCSP status | + +## Get OCSP status {#get-status} + + GET /v1/ocsp/status + +Gets the latest cached status of the proxy certificate’s OCSP response. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_ocsp_status]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_ocsp_status" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/ocsp/status + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-response} + +Returns an [OCSP status object]({{< relref "/operate/rs/7.4/references/rest-api/objects/ocsp_status" >}}). + +#### Example JSON body + +```json +{ + "responder_url": "http://responder.ocsp.url.com", + "cert_status": "REVOKED", + "produced_at": "Wed, 22 Dec 2021 12:50:11 GMT", + "this_update": "Wed, 22 Dec 2021 12:50:11 GMT", + "next_update": "Wed, 22 Dec 2021 14:50:00 GMT", + "revocation_time": "Wed, 22 Dec 2021 12:50:04 GMT" +} +``` + +### Error codes {#get-error-codes} + +When errors occur, the server returns a JSON object with `error_code` and `message` fields that provide additional information. 
The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| ocsp_unsupported_by_capability | Not all nodes support OCSP capability | +| invalid_ocsp_response | The server returned a response that is not OCSP-compatible | + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Feature not supported in all nodes | +--- +Title: OCSP test requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: OCSP test requests +headerRange: '[1-2]' +linkTitle: test +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/ocsp/test/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-test) | `/v1/ocsp/test` | Test OCSP | + +## Test OCSP {#post-test} + + POST /v1/ocsp/test + +Queries the OCSP server for the proxy certificate’s latest status and returns the response as JSON. It caches the response if the OCSP feature is enabled. + +#### Required permissions + +| Permission name | +|-----------------| +| [test_ocsp_status]({{< relref "/operate/rs/7.4/references/rest-api/permissions#test_ocsp_status" >}}) | + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/ocsp/test + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#post-response} + +Returns an [OCSP status object]({{< relref "/operate/rs/7.4/references/rest-api/objects/ocsp_status" >}}). + +#### Example JSON body + +```json +{ + "responder_url": "http://responder.ocsp.url.com", + "cert_status": "REVOKED", + "produced_at": "Wed, 22 Dec 2021 12:50:11 GMT", + "this_update": "Wed, 22 Dec 2021 12:50:11 GMT", + "next_update": "Wed, 22 Dec 2021 14:50:00 GMT", + "revocation_time": "Wed, 22 Dec 2021 12:50:04 GMT" +} +``` + +### Error codes {#post-error-codes} + +When errors occur, the server returns a JSON object with `error_code` and `message` fields that provide additional information. 
The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| no_responder_url | Tried to test OCSP status with no responder URL configured | +| ocsp_unsupported_by_capability | Not all nodes support OCSP capability | +| task_queued_for_too_long | OCSP polling task was in status “queued” for over 5 seconds | +| invalid_ocsp_response | The server returned a response that is not compatible with OCSP | + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success querying the OCSP server | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Feature is not supported in all nodes | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | `responder_url` is not configured or polling task failed | +--- +Title: OCSP requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: OCSP requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: ocsp +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/ocsp/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-ocsp) | `/v1/ocsp` | Get OCSP configuration | +| [PUT](#put-ocsp) | `/v1/ocsp` | Update OCSP configuration | + +## Get OCSP configuration {#get-ocsp} + + GET /v1/ocsp + +Gets the cluster's OCSP configuration. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_ocsp_config]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_ocsp_config" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/ocsp + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-response} + +Returns an [OCSP configuration object]({{< relref "/operate/rs/7.4/references/rest-api/objects/ocsp" >}}). + +#### Example JSON body + +```json +{ + "ocsp_functionality": true, + "responder_url": "http://responder.ocsp.url.com", + "query_frequency": 3800, + "response_timeout": 2, + "recovery_frequency": 80, + "recovery_max_tries": 20 +} +``` + +### Error codes {#get-error-codes} + +When errors occur, the server returns a JSON object with `error_code` and `message` fields that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| ocsp_unsupported_by_capability | Not all nodes support OCSP capability | + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Feature not supported in all nodes | + +## Update OCSP configuration {#put-ocsp} + + PUT /v1/ocsp + +Updates the cluster's OCSP configuration. 
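+
+For example, a cURL sketch that enables OCSP and adjusts the query frequency; the host and credentials are placeholders, and the body is a subset of the configuration object documented below.
+
+```sh
+curl -k -u "admin@example.com:password" \
+     -X PUT \
+     -H "Content-Type: application/json" \
+     -d '{ "ocsp_functionality": true, "query_frequency": 3600 }' \
+     https://cluster.example.com:9443/v1/ocsp
+```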
+ +#### Required permissions + +| Permission name | +|-----------------| +| [config_ocsp]({{< relref "/operate/rs/7.4/references/rest-api/permissions#config_ocsp" >}}) | + +### Request {#put-request} + +#### Example HTTP request + + PUT /v1/ocsp + +#### Example JSON body + +```json +{ + "ocsp_functionality": true, + "query_frequency": 3800, + "response_timeout": 2, + "recovery_frequency": 80, + "recovery_max_tries": 20 +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body + +Include an [OCSP configuration object]({{< relref "/operate/rs/7.4/references/rest-api/objects/ocsp" >}}) with updated fields in the request body. + +### Response {#put-response} + +Returns the updated [OCSP configuration object]({{< relref "/operate/rs/7.4/references/rest-api/objects/ocsp" >}}). + +### Error codes {#put-error-codes} + +When errors occur, the server returns a JSON object with `error_code` and `message` fields that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| invalid_schema | An illegal parameter or a parameter with an illegal value | +| no_responder_url | Tried to enable OCSP with no responder URL configured | +| ocsp_unsupported_by_capability | Not all nodes support OCSP capability | + +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, OCSP config has been set | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Feature not supported in all nodes | +--- +Title: Proxy requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Proxy requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: proxies +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/proxies/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-proxies) | `/v1/proxies` | Get all proxies | +| [GET](#get-proxy) | `/v1/proxies/{uid}` | Get a proxy | +| [PUT](#put-proxy) | `/v1/proxies/{uid}` | Update a proxy | +| [PUT](#put-all-proxies) | `/v1/proxies` | Update all proxies | + +## Get all proxies {#get-all-proxies} + +```sh +GET /v1/proxies +``` + +Get all the proxies in the cluster. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_all_proxies_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_all_proxies_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-all-request} + +#### Example HTTP request + +```sh +GET /v1/proxies +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-all-response} + +Returns a JSON array of [proxy objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/proxy" >}}). + +#### Example JSON body + +```json +[ + { + "uid": 1, + "client_keepintvl": 30, + "max_worker_server_conns": 16384, + "client_keepcnt": 6, + "max_threads": 64, + "ignore_bdb_cconn_output_buff_limits": false, + "dynamic_threads_scaling": false, + "max_worker_client_conns": 16384, + "max_servers": 4096, + "client_keepidle": 180, + "duration_usage_threshold": 30, + "max_worker_txns": 65536, + "threads": 3, + "max_listeners": 1024, + "conns": 500000, + "ignore_bdb_cconn_limit": false, + "threads_usage_threshold": 80, + "backlog": 1024 + }, + { + "uid": 2, + "threads": 3, + // additional fields... + } +] +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +## Get proxy {#get-proxy} + +```sh +GET /v1/proxies/{int: uid} +``` + +Get a single proxy's info. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_proxy_info]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_proxy_info" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/proxies/1 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The proxy's unique node ID | + +### Response {#get-response} + +Returns a [proxy object]({{< relref "/operate/rs/7.4/references/rest-api/objects/proxy" >}}). + +#### Example JSON body + +```json +{ + "uid": 1, + "client_keepintvl": 30, + "max_worker_server_conns": 16384, + "client_keepcnt": 6, + "max_threads": 64, + "ignore_bdb_cconn_output_buff_limits": false, + "dynamic_threads_scaling": false, + "max_worker_client_conns": 16384, + "max_servers": 4096, + "client_keepidle": 180, + "duration_usage_threshold": 30, + "max_worker_txns": 65536, + "threads": 3, + "max_listeners": 1024, + "conns": 500000, + "ignore_bdb_cconn_limit": false, + "threads_usage_threshold": 80, + "backlog": 1024 +} +``` + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Proxy UID does not exist | + +## Update proxy {#put-proxy} + +```sh +PUT /v1/proxies/{int: uid} +``` + +Updates a proxy object, notifies the proxy, and waits for acknowledgment (ACK) unless the node is dead. + +Automatically restarts the proxy service if `allow_restart` is `true` and any updated parameters require a restart for the changes to take effect. For example, a restart is required if you change `threads` to a lower number. + +However, if `allow_restart` is `false`, such changes only take effect after the next proxy restart. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [update_proxy]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_proxy" >}}) | admin | + +### Request {#put-request} + +#### Example HTTP request + +```sh +PUT /v1/proxies/1 +``` + +#### Example JSON body + +```json +{ + "allow_restart": true, + "proxy": { + "threads": 8 + } +} +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | +| Content-Type | application/json | Request body media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the updated proxy. Corresponds to the node ID. | + +#### Request body + +Include a JSON object in the request body. The JSON object can contain the boolean field `allow_restart` and a [proxy object]({{< relref "/operate/rs/7.4/references/rest-api/objects/proxy" >}}) with updated fields. + +### Response {#put-response} + +Returns a status code to indicate the success or failure of the proxy update. 
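+
+For example, a cURL sketch of this update; the host, credentials, and proxy UID are placeholders, and the body matches the example JSON shown above.
+
+```sh
+curl -k -u "admin@example.com:password" \
+     -X PUT \
+     -H "Content-Type: application/json" \
+     -d '{ "allow_restart": true, "proxy": { "threads": 8 } }' \
+     https://cluster.example.com:9443/v1/proxies/1
+```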
+ +### Status codes {#put-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error, the request has been processed | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad content provided | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Proxy does not exist | +| [500 Internal Server Error](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Error while waiting for confirmation from proxy | +| [504 Gateway Timeout](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.5) | Timeout while waiting for confirmation from proxy | + +## Update all proxies {#put-all-proxies} + +```sh +PUT /v1/proxies +``` + +Updates all the proxy objects, notifies the proxies, and waits for acknowledgment (ACK) unless the node is dead. + +Automatically restarts the relevant proxy services if `allow_restart` is `true` and any updated parameters require a restart for the changes to take effect. + +However, if `allow_restart` is `false`, such changes only take effect after the next proxy restart. + +### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [update_proxy]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_proxy" >}}) | admin | + +### Request {#put-all-request} + +#### Example HTTP request + +```sh +PUT /v1/proxies +``` + +#### Example JSON body + +```json +{ + "allow_restart": true, + "proxy": { + "threads": 8, + "max_threads": 12 + } +} +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | +| Content-Type | application/json | Request body media type | + +#### Request body + +Include a JSON object in the request body. The JSON object can contain the boolean field `allow_restart` and a [proxy object]({{< relref "/operate/rs/7.4/references/rest-api/objects/proxy" >}}) with updated fields. + +### Response {#put-all-response} + +Returns a status code to indicate the success or failure of the proxy updates. + +### Status codes {#put-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error, the request has been processed | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad content provided | +| [500 Internal Server Error](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Error while waiting for confirmation from proxy | +| [504 Gateway Timeout](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.5) | Timeout while waiting for confirmation from proxy | +--- +Title: Configure module requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure module requests +headerRange: '[1-2]' +linkTitle: config/bdb +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/modules/config/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-modules-config-bdb) | `/v1/modules/config/bdb/{uid}` | Configure module | + +## Configure module {#post-modules-config-bdb} + + POST /v1/modules/config/bdb/{string: uid} + +Use the module runtime configuration command (if defined) to configure new arguments for the module. 
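+
+For example, a cURL sketch that reconfigures the search module's runtime arguments on database 1; the host and credentials are placeholders, and the body matches the example JSON shown below.
+
+```sh
+curl -k -u "admin@example.com:password" \
+     -X POST \
+     -H "Content-Type: application/json" \
+     -d '{ "modules": [ { "module_name": "search", "module_args": "MINPREFIX 3 MAXEXPANSIONS 1000" } ] }' \
+     https://cluster.example.com:9443/v1/modules/config/bdb/1
+```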
+ +#### Required permissions + +| Permission name | +|-----------------| +| [edit_bdb_module]({{< relref "/operate/rs/7.4/references/rest-api/permissions#edit_bdb_module" >}}) | + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/modules/config/bdb/1 + +#### Example JSON body + +```json +{ + "modules": [ + { + "module_name": "search", + "module_args": "MINPREFIX 3 MAXEXPANSIONS 1000" + } + ] +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Request body + +| Field | Type | Description | +|-------|------|-------------| +| modules | list of JSON objects | List of modules (module_name) and their new configuration settings (module_args) | +| module_name | `search`
`ReJSON`
`graph`
`timeseries`
`bf` | Module's name | +| module_args | string | Module command line arguments (pattern does not allow special characters &,<,>,”) | + +### Response {#post-response} + +Returns a status code. If an error occurs, the response body may include an error code and message with more details. + +### Error codes {#post-error-codes} + +When errors are reported, the server may return a JSON object with `error_code` and `message` field that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| db_not_exist | Database with given UID doesn't exist in cluster | +| missing_field | "module_name" or "module_args" are not defined in request | +| invalid_schema | JSON object received is not a dict object | +| param_error | "module_args" parameter was not parsed properly | +| module_not_exist | Module with given "module_name" does not exist for the database | + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, module updated on bdb. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | bdb not found. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | Module does not support runtime configuration of arguments. | +--- +Title: Upgrade module requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Upgrade module requests +headerRange: '[1-2]' +linkTitle: upgrade/bdb +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/modules/upgrade/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-modules-upgrade-bdb) | `/v1/modules/upgrade/bdb/{uid}` | Upgrade module | + +## Upgrade module {#post-modules-upgrade-bdb} + + POST /v1/modules/upgrade/bdb/{string: uid} + +Upgrades the module version on a specific database. + +#### Required permissions + +| Permission name | +|-----------------| +| [edit_bdb_module]({{< relref "/operate/rs/7.4/references/rest-api/permissions#edit_bdb_module" >}}) | + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/modules/upgrade/bdb/1 + +#### Example JSON body + +```json +{ + "modules": [ + {"module_name": "ReJson", + "current_semantic_version": "2.2.1", + "new_module": "aa3648d79bd4082d414587c42ea0b234"} + ], + "// Optional fields to fine-tune restart and failover behavior:", + "preserve_roles": true, + "may_discard_data": false +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Request body + +| Field | Type | Description | +|-------|------|-------------| +| modules | list | List of dicts representing the modules that will be upgraded. Each dict must include:

• **current_module**: UID of a module to upgrade

• **new_module**: UID of the module to upgrade to

• **new_module_args**: args list for the new module | +| preserve_roles | boolean | Preserve shards’ master/replica roles (optional) | +| may_discard_data | boolean | Discard data in a non-replicated non-persistent database (optional) | + +### Response {#post-response} + +Returns the upgraded [module object]({{< relref "/operate/rs/7.4/references/rest-api/objects/module" >}}). + +#### Example JSON body + +```json +{ + "uid": 1, + "name": "name of database #1", + "module_id": "aa3648d79bd4082d414587c42ea0b234", + "module_name": "ReJson", + "semantic_version": "2.2.2" + "// additional fields..." +} +``` + +### Error codes {#post-error-codes} + +When errors are reported, the server may return a JSON object with `error_code` and `message` field that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| missing_module | Module is not present in cluster.| +| module_downgrade_unsupported | Module downgrade is not allowed.| +| redis_incompatible_version | Module min_redis_version is bigger than the current Redis version.| +| redis_pack_incompatible_version | Module min_redis_pack_version is bigger than the current Redis Enterprise version.| +| unsupported_module_capabilities | New version of module does support all the capabilities needed for the database configuration| + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, module updated on bdb. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | bdb or node not found. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Bad or missing configuration parameters. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The requested configuration is invalid. | +--- +Title: Modules requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis modules requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: modules +weight: $weight +url: '/operate/rs/7.4/references/rest-api/requests/modules/' +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#list-modules) | `/v1/modules` | List available modules | +| [GET](#get-module) | `/v1/modules/{uid}` | Get a specific module | +| [POST](#post-module) | `/v1/modules` | Upload a new module (deprecated) | +| [POST](#post-module-v2) | `/v2/modules` | Upload a new module | +| [DELETE](#delete-module) | `/v1/modules/{uid}` | Delete a module (deprecated) | +| [DELETE](#delete-module-v2) | `/v2/modules/{uid}` | Delete a module | + +## List modules {#list-modules} + +```sh +GET /v1/modules +``` + +List available modules, i.e. modules stored within the CCS. + +#### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_cluster_modules]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_cluster_modules" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#list-request} + +#### Example HTTP request + +```sh +GET /v1/modules +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | 127.0.0.1:9443 | Domain name | +| Accept | \*/\* | Accepted media type | + +### Response {#list-response} + +Returns a JSON array of [module objects]({{< relref "/operate/rs/7.4/references/rest-api/objects/module" >}}). + +#### Status codes {#list-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | + +## Get module {#get-module} + +```sh +GET /v1/modules/{string: uid} +``` + +Get specific available modules, i.e. modules stored within the CCS. + +#### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [view_cluster_modules]({{< relref "/operate/rs/7.4/references/rest-api/permissions#view_cluster_modules" >}}) | admin
cluster_member
cluster_viewer
db_member
db_viewer | + +### Request {#get-request} + +#### Example HTTP request + +```sh +GET /v1/modules/1 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | 127.0.0.1:9443 | Domain name | +| Accept | \*/\* | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The module's unique ID. | + +### Response {#get-response} + +Returns a [module object]({{< relref "/operate/rs/7.4/references/rest-api/objects/module" >}}). + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Module does not exist. | + +## Upload module v1 {#post-module} + +```sh +POST /v1/modules +``` + +{{}} +`POST /v1/modules` is deprecated as of Redis Enterprise Software version 7.2. Use [`POST /v2/modules`](#post-module-v2) instead. +{{}} + +Uploads a new module to the cluster. + +The request must contain a Redis module, bundled using [RedisModule +Packer](https://github.com/RedisLabs/RAMP). For modules in Redis Stack, download the module from the [download center](https://redis.io/downloads/). + +See [Install a module on a cluster]({{< relref "/operate/oss_and_stack/stack-with-enterprise/install/add-module-to-cluster#rest-api-method" >}}) for more information. + +#### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [update_cluster]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_cluster" >}}) | admin | + +### Request {#post-request} + +#### Example HTTP request + +```sh +POST /v1/modules +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | string | Domain name | +| Accept | \*/\* | Accepted media type | +| Content-Length | integer | Length of the request body in octets | +| Expect | 100-continue | Requires particular server behaviors | +| Content-Type | multipart/form-data | Media type of request/response body | + +### Response {#post-response} + +Returns a status code. If an error occurs, the response body may include an error code and message with more details. + +#### Error codes {#post-error-codes} + +The server may return a JSON object with `error_code` and `message` fields that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| no_module | Module wasn't provided or could not be found | +| invalid_module | Module either corrupted or packaged files are wrong | +| module_exists | Module already in system | +| min_redis_pack_version | Module isn't supported yet in this Redis pack | +| unsupported_module_capabilities | The module does not support required capabilities| +| os_not_supported | This module is not supported for this operating system | +| dependencies_not_supported | This endpoint does not support dependencies, see v2 | + +#### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Either missing module file or an invalid module file. 
| + +### Examples + +#### cURL + +```sh +$ curl -k -u "[username]:[password]" -X POST + -F "module=@/tmp/rejson.Linux-ubuntu18.04-x86_64.2.0.8.zip" + https://[host][:port]/v1/modules +``` + +#### Python + +```python +import requests + +url = "https://[host][:port]/v1/modules" + +files=[ + ('module', + ('rejson.Linux-ubuntu18.04-x86_64.2.0.8.zip', + open('/tmp/rejson.Linux-ubuntu18.04-x86_64.2.0.8.zip','rb'), + 'application/zip') + ) +] +auth=("[username]", "[password]") + +response = requests.request("POST", url, + auth=auth, files=files, verify=False) + +print(response.text) +``` + +## Upload module v2 {#post-module-v2} + +```sh +POST /v2/modules +``` + +Asynchronously uploads a new module to the cluster. + +The request must contain a Redis module bundled using [RedisModule Packer](https://github.com/RedisLabs/RAMP). + +For modules in Redis Stack, download the module from the [download center](https://redis.io/downloads/#software). See [Install a module on a cluster]({{< relref "/operate/oss_and_stack/stack-with-enterprise/install/add-module-to-cluster#rest-api-method" >}}) for more information. + +#### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [update_cluster]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_cluster" >}}) | admin | + +### Request {#post-request-v2} + +#### Example HTTP request + +```sh +POST /v2/modules +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | string| Domain name | +| Accept | \*/\* | Accepted media type | +| Content-Length | integer | Length of the request body in octets | +| Expect | 100-continue | Requires particular server behaviors | +| Content-Type | multipart/form-data; | Media type of request/response body | + +### Response {#post-response-v2} + +Returns a [module object]({{< relref "/operate/rs/7.4/references/rest-api/objects/module" >}}) with an additional `action_uid` field. + +You can use the `action_uid` to track the progress of the module upload. + +#### Example JSON body + +```json +{ + "action_uid":"dfc0152c-8449-4b1c-9184-480ea7cb526c", + "author":"RedisLabs", + "capabilities":[ + "types", + "crdb", + "failover_migrate", + "persistence_aof", + "persistence_rdb", + "clustering", + "backup_restore" + ], + "command_line_args":"Plugin gears_python CreateVenv 1", + "config_command":"RG.CONFIGSET", + "dependencies":{ + "gears_jvm":{ + "sha256":"b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9", + "url":"http://example.com/redisgears_plugins/jvm_plugin/gears-jvm.linux-centos7-x64.0.1.0.tgz" + }, + "gears_python":{ + "sha256":"22dca9cd75484cb15b8130db37f5284e22e3759002154361f72f6d2db46ee682", + "url":"http://example.com/redisgears-python.linux-centos7-x64.1.2.1.tgz" + } + }, + "description":"Dynamic execution framework for your Redis data", + "display_name":"RedisGears", + "email":"user@example.com", + "homepage":"http://redisgears.io", + "is_bundled":false, + "license":"Redis Source Available License Agreement", + "min_redis_pack_version":"6.0.0", + "min_redis_version":"6.0.0", + "module_name":"rg", + "semantic_version":"1.2.1", + "sha256":"2935ea53611803c8acf0015253c5ae1cd81391bbacb23e14598841e1edd8d28b", + "uid":"98f255d5d33704c8e4e97897fd92e32d", + "version":10201 +} +``` + +### Error codes {#post-error-codes-v2} + +The server may return a JSON object with `error_code` and `message` fields that provide additional information. 
+ +Possible `error_code` values include [`/v1/modules` error codes](#post-error-codes) and the following: + +| Code | Description | +|------|-------------| +| invalid_dependency_data | Provided dependencies have an unexpected format | + +### Status codes {#post-status-codes-v2} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, scheduled module upload. | +| [400 Bad Request](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1) | Module name or version does not exist. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Dependency not found. | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Failed to get dependency. | + +## Delete module v1 {#delete-module} + +```sh +DELETE /v1/modules/{string: uid} +``` + +{{}} +`DELETE /v1/modules` is deprecated as of Redis Enterprise Software version 7.2. Use [`DELETE /v2/modules`](#delete-module-v2) instead. +{{}} + +Delete a module. + +#### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [update_cluster]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_cluster" >}}) | admin | + +### Request {#delete-request} + +#### Example HTTP request + +```sh +DELETE /v1/modules/1 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The module's unique ID. | + +### Response {#delete-response} + +Returns a status code to indicate module deletion success or failure. + +#### Error codes {#delete-error-codes} + +| Code | Description | +|------|-------------| +| dependencies_not_supported | You can use the following API endpoint to delete this module with its dependencies: [`/v2/modules/`](#delete-module-v2) | + +#### Status codes {#delete-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, the module is deleted. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to delete a nonexistent module. | +| [406 Not Acceptable](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7) | The request is not acceptable. | + +## Delete module v2 {#delete-module-v2} + +```sh +DELETE /v2/modules/{string: uid} +``` + +Delete a module. + +#### Permissions + +| Permission name | Roles | +|-----------------|-------| +| [update_cluster]({{< relref "/operate/rs/7.4/references/rest-api/permissions#update_cluster" >}}) | admin | + +### Request {#delete-request-v2} + +#### Example HTTP request + +```sh +DELETE /v2/modules/1 +``` + +#### Headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The module's unique ID. | + +### Response {#delete-response-v2} + +Returns a JSON object with an `action_uid` that allows you to track the progress of module deletion. 
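+
+For example, a cURL sketch that deletes a module and then polls the returned action to track progress; the host, credentials, module UID, and action UID are placeholders.
+
+```sh
+# Delete the module; the response body includes an action_uid.
+curl -k -u "admin@example.com:password" \
+     -X DELETE \
+     https://cluster.example.com:9443/v2/modules/1
+
+# Poll the action_uid returned in the previous response.
+curl -k -u "admin@example.com:password" \
+     https://cluster.example.com:9443/v1/actions/dfc0152c-8449-4b1c-9184-480ea7cb526c
+```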
+ +#### Status codes {#delete-status-codes-v2} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success, scheduled module deletion. | +| [404 Not Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5) | Attempting to delete a nonexistent module. | +--- +Title: Redis Enterprise REST API requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the requests supported by the Redis Enterprise Software REST + API calls. +hideListLinks: true +linkTitle: Requests +weight: 30 +url: '/operate/rs/7.4/references/rest-api/requests/' +--- + +A REST API request requires the following components: +- [HTTP method](https://restfulapi.net/http-methods/) (`GET`, `PUT`, `PATCH`, `POST`, `DELETE`) +- Base URL +- Endpoint + +Some requests may also require: +- URL parameters +- [Query parameters](https://en.wikipedia.org/wiki/Query_string) +- [JSON](http://www.json.org) request body +- [Permissions]({{< relref "/operate/rs/7.4/references/rest-api/permissions" >}}) + +{{< table-children columnNames="Request,Description" columnSources="LinkTitle,Description" enableLinks="LinkTitle" >}} +--- +Title: Encrypt REST API requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linkTitle: Encrypt requests +weight: 30 +url: '/operate/rs/7.4/references/rest-api/encryption/' +--- + +## Require HTTPS for API endpoints + +By default, the Redis Enterprise Software API supports communication over HTTP and HTTPS. However, you can turn off support for HTTP to ensure that API requests are encrypted. + +Before you turn off HTTP support, be sure to migrate any scripts or proxy configurations that use HTTP to the encrypted API endpoint to prevent broken connections. + +To turn off HTTP support for API endpoints, run: + +```sh +rladmin cluster config http_support disabled +``` +--- +Title: REST API +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the REST API available to Redis Enterprise Software deployments. +hideListLinks: true +weight: $weight +url: '/operate/rs/7.4/references/rest-api/' +--- +Redis Enterprise Software provides a REST API to help you automate common tasks. + +Here, you'll find the details of the API and how to use it. + +For more info, see: + +- Supported [request endpoints]({{< relref "/operate/rs/7.4/references/rest-api/requests" >}}), organized by path +- Supported [objects]({{< relref "/operate/rs/7.4/references/rest-api/objects" >}}), both request and response +- Built-in roles and associated [permissions]({{< relref "/operate/rs/7.4/references/rest-api/permissions" >}}) +- [Redis Enterprise Software REST API quick start]({{< relref "/operate/rs/7.4/references/rest-api/quick-start" >}}) with examples + +## Authentication + +Authentication to the Redis Enterprise Software API occurs via [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication). Provide your username and password as the basic auth credentials. + +If the username and password is incorrect or missing, the request will fail with a [`401 Unauthorized`](https://www.rfc-editor.org/rfc/rfc9110.html#name-401-unauthorized) status code. + +Example request using [cURL](https://curl.se/): + +``` bash +curl -u "demo@redislabs.com:password" \ + https://localhost:9443/v1/bdbs +``` + +For more examples, see the [Redis Enterprise Software REST API quick start]({{< relref "/operate/rs/7.4/references/rest-api/quick-start" >}}). 
+ +### Permissions + +By default, the admin user is authorized for access to all endpoints. Use [role-based access controls]({{< relref "/operate/rs/7.4/security/access-control" >}}) and [role permissions]({{< relref "/operate/rs/7.4/references/rest-api/permissions" >}}) to manage access. + +If a user attempts to access an endpoint that is not allowed in their role, the request will fail with a [`403 Forbidden`](https://www.rfc-editor.org/rfc/rfc9110.html#name-403-forbidden) status code. For more details on which user roles can access certain endpoints, see [Permissions]({{< relref "/operate/rs/7.4/references/rest-api/permissions" >}}). + +### Certificates + +The Redis Enterprise Software REST API uses [Self-signed certificates]({{< relref "/operate/rs/7.4/security/certificates" >}}) to ensure the product is secure. When you use the default self-signed certificates, the HTTPS requests will fail with `SSL certificate problem: self signed certificate` unless you turn off SSL certificate verification. + +## Ports + +All calls must be made over SSL to port 9443. For the API to work, port 9443 must be exposed to incoming traffic or mapped to a different port. + +If you are using a [Redis Enterprise Software Docker image]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/docker-quickstart" >}}), run the following command to start the Docker image with port 9443 exposed: + +```sh +docker run -p 9443:9443 redislabs/redis +``` + +## Versions + +All API requests are versioned in order to minimize the impact of backwards-incompatible API changes and to coordinate between different versions operating in parallel. + +Specify the version in the request [URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier), as shown in the following table: + +| Request path | Description | +|--------------|-------------| +| POST `/v1/bdbs` | A version 1 request for the `/bdbs` endpoint. | +| POST `/v2/bdbs` | A version 2 request for the `/bdbs` endpoint. | + +When an endpoint supports multiple versions, each version is documented on the corresponding endpoint. For example, the [bdbs request]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/" >}}) page documents POST requests for [version 1]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/#post-bdbs-v1" >}}) and [version 2]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/#post-bdbs-v2" >}}). + +## Headers + +### Requests + +Redis Enterprise REST API requests support the following HTTP headers: + +| Header | Supported/Required Values | +|--------|---------------------------| +| Accept | `application/json` | +| Content-Length | Length (in bytes) of request message | +| Content-Type | `application/json` (required for PUT or POST requests) | + +If the client specifies an invalid header, the request will fail with a [`400 Bad Request`](https://www.rfc-editor.org/rfc/rfc9110.html#name-400-bad-request) status code. + +### Responses + +Redis Enterprise REST API responses support the following HTTP headers: + +| Header | Supported/Required Values | +|--------|---------------------------| +| Content-Type | `application/json` | +| Content-Length | Length (in bytes) of response message | + +## JSON requests and responses + +The Redis Enterprise Software REST API uses [JavaScript Object Notation (JSON)](http://www.json.org) for requests and responses. See the [RFC 4627 technical specifications](http://www.ietf.org/rfc/rfc4627.txt) for additional information about JSON. 
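+
+For example, a hypothetical PUT request that sends a JSON body with the headers described above; the endpoint, field, and credentials are placeholders only.
+
+```sh
+curl -k -u "admin@example.com:password" \
+     -X PUT \
+     -H "Content-Type: application/json" \
+     -d '{ "name": "new-database-name" }' \
+     https://cluster.example.com:9443/v1/bdbs/1
+```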
+ +Some responses may have an empty body but indicate the response with standard [HTTP codes](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html). + +Both requests and responses may include zero or more objects. + +If the request is for a single entity, the response returns a single JSON object or none. If the request is for a list of entities, the response returns a JSON array with zero or more elements. + +If you omit certain JSON object fields from a request, they may be assigned default values, which often indicate that these fields are not in use. + +## Response types and error codes + +[HTTP status codes](https://www.rfc-editor.org/rfc/rfc9110.html#name-status-codes) indicate the result of an API request. This can be `200 OK` if the server accepted the request, or it can be one of many error codes. + +The most common responses for a Redis Enterprise API request are: + +| Response | Condition/Required handling | +|----------|-----------------------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | Success | +| [400 Bad Request](https://www.rfc-editor.org/rfc/rfc9110.html#name-400-bad-request) | The request failed, generally due to a typo or other mistake. | +| [401 Unauthorized](https://www.rfc-editor.org/rfc/rfc9110.html#name-401-unauthorized) | The request failed because the authentication information was missing or incorrect. | +| [403 Forbidden](https://www.rfc-editor.org/rfc/rfc9110.html#name-403-forbidden) | The user cannot access the specified [URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier). | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | The [URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier) does not exist. | +| [503 Service Unavailable](https://www.rfc-editor.org/rfc/rfc9110.html#name-503-service-unavailable) | The node is not responding or is not a member of the cluster. | +| [505 HTTP Version Not Supported](https://www.rfc-editor.org/rfc/rfc9110.html#name-505-http-version-not-suppor) | An unsupported `x-api-version` was used. See [versions](#versions). | + +Some endpoints return different response codes. The request references for these endpoints document these special cases. +--- +Title: Reference +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +hideListLinks: false +weight: 80 +url: '/operate/rs/7.4/references/' +--- +--- +Title: Configure cluster DNS +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure DNS to communicate between nodes in your cluster. +linkTitle: Configure cluster DNS +weight: $weight +url: '/operate/rs/7.4/networking/cluster-dns/' +--- + +By default, Redis Enterprise Software deployments use DNS to communicate between nodes. You can also use the [Discovery Service]({{< relref "/operate/rs/7.4/databases/durability-ha/discovery-service.md" >}}), which uses IP addresses to connect and complies with the [Redis Sentinel API]({{< relref "/operate/oss_and_stack/management/sentinel" >}}) supported by Redis Open Source. + +Each node in a Redis Enterprise cluster includes a small DNS server to manage internal functions, such as high availability, automatic failover, automatic migration, and so on. +Nodes should only run the DNS server included with the software. Running additional DNS servers can lead to unexpected behavior. 
+ +## Cluster name and connection management + +Whether you're administering Redis Enterprise Software or accessing databases, there are two ways to connect: + +- URL-based connections - URL-based connections use DNS to resolve the fully qualified cluster domain name (FQDN). This means that DNS records might need to be updated when topology changes, such as adding (or removing) nodes from the cluster. + + Because apps and other client connections rely on the URL (rather than the address), they do not need to be modified when topology changes. + +- IP-based connections - IP-based connections do not require DNS setup, as they rely on the underlying TCP/IP addresses. As long as topology changes do not change the address of the cluster nodes, no configuration changes are needed, DNS or otherwise. + + However, changes to IP addresses (or changes to IP address access) impact all connections to the node, including apps and clients. IP address changes can therefore be unpredictable or time-consuming. + +## URL-based connections + +The fully qualified domain name (FQDN) is the unique cluster identifier that enables clients to connect to the different components of Redis Enterprise Software. +The FQDN is a crucial component of the high-availability mechanism because it's used internally to enable and implement automatic and transparent failover of nodes, databases, shards, and endpoints. + +{{< note >}} +Setting the cluster's FQDN is a one-time operation, one that cannot be changed after being set. +{{< /note >}} + +The FQDN must always comply with the IETF's [RFC 952](https://datatracker.ietf.org/doc/html/rfc952) standard +and section 2.1 of the [RFC 1123](https://datatracker.ietf.org/doc/html/rfc1123) standard. + +## Identify the cluster + +To identify the cluster, either use DNS to define a fully qualified domain name or use the IP addresses of each node. + +### Define domain using DNS + +Use DNS if you: + +- have your own domain +- want to integrate the cluster into that domain +- can access and update the DNS records for that domain + +1. Make sure that the cluster and at least one node (preferably all nodes) in the cluster + are correctly configured in the DNS with the appropriate NS entries. + + For example: + + - Your domain is: `mydomain.com` + - You would like to name the Redis Enterprise Software cluster `mycluster` + - You have three nodes in the cluster: + - node1 (IP address 1.1.1.1) + - node2 (2.2.2.2) + - node3 (3.3.3.3) + +1. In the FQDN field, enter the value `mycluster.mydomain.com` + and add the following records in the DNS table for `mydomain.com`: + + ``` sh + mycluster.mydomain.com NS node1.mycluster.mydomain.com + node2.mycluster.mydomain.com + node3.mycluster.mydomain.com + + node1.mycluster.mydomain.com A 1.1.1.1 + + node2.mycluster.mydomain.com A 2.2.2.2 + + node3.mycluster.mydomain.com A 3.3.3.3 + ``` + +### Zero-configuration using mDNS {#zeroconfiguration-using-mdns-development-option-only} + +Development and test environments can use [Multicast DNS](https://en.wikipedia.org/wiki/Multicast_DNS) (mDNS), a zero-configuration service designed for small networks. Production environments should _not_ use mDNS. + +mDNS is a standard protocol that provides DNS-like name resolution and service discovery capabilities +to machines on local networks with minimal to no configuration. + +Before adopting mDNS, verify that it's supported by each client you wish to use to connect to your Redis databases. 
Also make sure that your network infrastructure permits mDNS/multi-casting between clients and cluster nodes. + +Configuring the cluster to support mDNS requires you to assign the cluster a `.local` name. + +For example, if you want to name the Redis Enterprise Software cluster `rediscluster`, specify the FQDN name as `rediscluster.local`. + +When using the DNS or mDNS option, failover can be done transparently and the DNS is updated automatically to point to the IP address of the new primary node. + +## IP-based connections + +When you use the IP-based connection option, the FQDN does not need to have any special format +because clients use IP addresses instead of hostnames to access the databases so you are free to choose whatever name you want. +Using the IP-based connection option does not require any DNS configuration either. + +To administer the cluster you do need to know the IP address of at least one of the nodes in the cluster. +Once you have the IP address, you can simply connect to port number 8443 (for example: ). +However, as the topology of the cluster changes and node with the given IP address is removed, +you need to remember the IP address of another node participating in this cluster to connect to the Cluster Manager UI and manage the cluster. + +Applications connecting to Redis Software databases have the same constraints. +When using the IP-based connection method, you can use the [Discovery Service]({{< relref "/operate/rs/7.4/databases/durability-ha/discovery-service.md" >}}) +to discover the database endpoint for a given database name as long as you have an IP address for at least one of the nodes in the cluster. +The API used for discovery service is compliant with the Redis Sentinel API. + +To test your connection, try pinging the service. For help, see [Connect to your database]({{< relref "/operate/rs/7.4/databases/connect/test-client-connectivity" >}}). + +--- +Title: Network port configurations +alwaysopen: false +categories: +- docs +- operate +- rs +description: This document describes the various network port ranges and their uses. +linkTitle: Network ports +weight: $weight +url: '/operate/rs/7.4/networking/port-configurations/' +--- + +All Redis Enterprise Software deployments span multiple physical/virtual nodes. You'll need to keep several ports open between these nodes. This document describes the various port ranges and their uses. + +{{< note >}} +Whenever you create a new database, you must verify that the ports assigned to the new database's endpoints are open. The cluster will not perform this verification for you. 
+{{< /note >}} + +## Ports and port ranges used by Redis Enterprise Software + +Redis Enterprise Software's port usage falls into three general categories: + +- Internal: For traffic between or within cluster nodes +- External: For traffic from client applications or external monitoring resources +- Active-Active: For traffic to and from clusters hosting Active-Active databases + +| Protocol | Port | Configurable | Connection source | Description | +|----------|------|--------------|-------------------|-------------| +| TCP | 8001 | ❌ No | Internal, External | Traffic from application to Redis Enterprise Software [Discovery Service]({{< relref "/operate/rs/7.4/databases/durability-ha/discovery-service.md" >}}) | +| TCP | 8000, 8070, 8071, 9090, 9125 | ❌ No | Internal, External | Metrics exported and managed by the web proxy | +| TCP | 8443 | ✅ Yes | Internal, External | Secure (HTTPS) access to the management web UI | +| TCP | 9081 | ✅ Yes | Internal | CRDB coordinator for Active-Active management (internal) | +| TCP | 9443 (Recommended), 8080 | ✅ Yes | Internal, External, Active-Active | REST API traffic, including cluster management and node bootstrap | +| TCP | 10050 | ❌ No | Internal | Zabbix monitoring | +| TCP | 10000-10049, 10051-19999 | ✅ Yes | Internal, External, Active-Active | Database traffic | +| UDP | 53, 5353 | ❌ No | Internal, External | DNS/mDNS traffic | +| ICMP | * | ❌ No | Internal | Connectivity checking between nodes | +| TCP | 1968 | ❌ No | Internal | Proxy traffic | +| TCP | 3333-3345, 36379, 36380 | ❌ No | Internal | Internode communication | +| TCP | 20000-29999 | ❌ No | Internal | Database shard traffic | +| TCP | 8002, 8004, 8006 | ✅ Yes | Internal | Default system health monitoring (envoy admin, envoy management server, gossip envoy admin)| +| TCP | 8444, 9080 | ❌ No | Internal | Traffic between web proxy and cnm_http/cm | + +## Change port configuration + +### Reserve ports + +Redis Enterprise Software reserves some ports by default (`system_reserved_ports`). To reserve other ports or port ranges and prevent the cluster from assigning them to database endpoints, configure `reserved_ports` using one of the following methods: + +- [rladmin cluster config]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/config" >}}) + + ```sh + rladmin cluster config reserved_ports + ``` + + For example: + + ```sh + rladmin cluster config reserved_ports 11000 13000-13010 + ``` + +- [Update cluster settings]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster#put-cluster" >}}) REST API request + + ```sh + PUT /v1/cluster + { "reserved_ports": ["list of ports/port ranges"] } + ``` + + For example: + + ```sh + PUT /v1/cluster + { "reserved_ports": ["11000", "13000-13010"] } + ``` + +### Change the Cluster Manager UI port + +The Redis Enterprise Software Cluster Manager UI uses port 8443, by default. You can change this to a custom port as long as the new port is not in use by another process. 
+ +To change this port, run: + +```sh +rladmin cluster config cm_port  +``` + +After changing the Redis Enterprise Software web UI port, you must connect any new node added to the cluster to the UI with the custom port number: +`https://newnode.mycluster.example.com:`**``** + +### Change the envoy ports + +For system health monitoring, Redis uses the following ports by default: + +- Port 8002 for envoy admin + +- Port 8004 for envoy management server + +- Port 8006 for gossip envoy admin + +You can change each envoy port to a custom port using the [`rladmin cluster config`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/config" >}}) command as long as the new port is not in use by another process. When you change `envoy_admin_port`, expect a restart of envoy. + +To change the envoy admin port, run: + +```sh +$ rladmin cluster config envoy_admin_port  +Updating envoy_admin_port... restarting now +``` + +To change the envoy management server port, run: + +```sh +$ rladmin cluster config envoy_mgmt_server_port  +Cluster configured successfully +``` + +To change the gossip envoy admin port, run: + +```sh +$ rladmin cluster config gossip_envoy_admin_port  +Cluster configured successfully +``` + +### Change the REST API port + +For the REST API, Redis Enterprise Software uses port 9443 (secure) and port 8080 (not secure), by default. You can change this to a custom port as long as the new port is not in use by another process. + +To change these ports, run: + +```sh +rladmin cluster config cnm_http_port  +``` + +```sh +rladmin cluster config cnm_https_port  +``` + +### OS conflicts with port 53 + +{{}} + + +### Update `sysctl.conf` to avoid port collisions + +{{}} + + +## Configure HTTPS + +### Require HTTPS for API endpoints + +By default, the Redis Enterprise Software API supports communication over HTTP and HTTPS. However, you can turn off HTTP support to ensure that API requests are encrypted. + +Before you turn off HTTP support, make sure you migrate any scripts or proxy configurations that use HTTP to the encrypted API endpoint to prevent broken connections. + +To turn off HTTP support for API endpoints, run: + +```sh +rladmin cluster config http_support disabled +``` + +After you turn off HTTP support, traffic sent to the unencrypted API endpoint is blocked. + + +### HTTP to HTTPS redirection +Starting with version 6.0.12, you cannot use automatic HTTP to HTTPS redirection. +To poll metrics from the `metrics_exporter` or to access the Cluster Manager UI, use HTTPS in your request. HTTP requests won't be automatically redirected to HTTPS for those services. + +## Nodes on different VLANs + +Nodes in the same cluster must reside on the same VLAN. If you can't +host the nodes on the same VLAN, then you must open [all ports]({{< relref "/operate/rs/7.4/networking/port-configurations.md" >}}) between them. +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes how to enable public and private endpoints for databases on + a cluster. +linkTitle: Public and private endpoints +title: "Enable private and\_public database endpoints" +weight: $weight +url: '/operate/rs/7.4/networking/private-public-endpoints/' +--- +Each node in Redis Enterprise can be configured with [private and external IP addresses]({{< relref "/operate/rs/7.4/networking/multi-ip-ipv6" >}}). By default, Redis Enterprise Software databases expose a single endpoint, e.g. cluster.com (FQDN), using the external IP addresses, making it available to the public network (e.g. 
the internet). Additionally, the cluster can be configured to expose a private FQDN, which utilizes the private IP addresses for access from the private network only (e.g. VPC or an internal network). + +When you create a cluster via the UI, you can configure it to expose private and public endpoints. +This is common for environments such as cloud platforms and enterprises. + +When doing so, the cluster creates an additional FQDN, e.g. internal.cluster.com for private network (e.g. VPC or an internal network), while the cluster.com FQDN can be used by a public network (e.g. the internet). + +You can enable public and private endpoints during cluster creation only. +However, you can still add an additional FQDN in a different domain (cluster.io, for example) after cluster creation. + +To enable private and public endpoints: + +1. Verify the IP addresses are bound to the server or instance. + +1. During cluster setup, turn on **Enable public endpoints support** in the **Cluster** screen's **Configuration** section. + + {{The endpoint support setting appears in the **Configuration section** of the **Cluster** setup screen.}} + + If this setting is not enabled when the cluster is created, databases on the cluster support only a single endpoint. + +1. Select **Next** to proceed to **Node** configuration. + +1. In the **Network configuration** section: + + 1. Configure the machine's public IP address for external traffic. + + 1. Configure the private IP address for both internal and external traffic so it can be used for private database endpoints. + +After cluster creation, both sets of endpoints are available for databases in the cluster. + +To view and copy public and private endpoints for a database in the cluster, see the database's **Configuration > General** section. + +{{View public and private endpoints from the General section of the database's Configuration screen.}} +--- +Title: Manage IP addresses +alwaysopen: false +categories: +- docs +- operate +- rs +description: Information and requirements for using multiple IP addresses or IPv6 addresses with Redis Enterprise Software. +linkTitle: Manage IP addresses +weight: $weight +url: '/operate/rs/7.4/networking/multi-ip-ipv6/' +--- + +Redis Enterprise Software supports servers, instances, and VMs with +multiple IPv4 or IPv6 addresses. + +## Traffic overview + +Redis Enterprise Software traffic is divided into internal traffic and external traffic: + +- "Internal traffic" refers to internal cluster communications, such as communications between the nodes for cluster management. + +- "External traffic" refers to communications between clients and databases and connections to the Cluster Manager UI. + +When only one IP address exists on a machine that serves as a Redis Enterprise node, it is used for both internal and external traffic. + +## Multiple IP addresses + +During node configuration on a machine with multiple IP addresses, you must assign one address for internal traffic and one or more other addresses for external traffic. + +If the cluster uses IPv4 for internal traffic, all communication between cluster nodes uses IPv4 addresses. If the cluster uses IPv6 for internal traffic, all communication between cluster nodes uses IPv6 addresses. + +To update IP address configuration after cluster setup, see [Change internal IP address](#change-internal-ip-address) or [Configure external IP addresses](#configure-external-ip-addresses). 
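+
+To review which addresses a node currently uses for internal and external traffic, you can query the node object through the REST API. This is a hedged sketch; the cluster FQDN, credentials, and node ID are placeholders for illustration:
+
+```sh
+# Return the internal (addr) and external (external_addr) addresses of node 1
+curl -sk -u "admin@example.com:<password>" \
+  https://cluster.example.com:9443/v1/nodes/1 | jq '{addr, external_addr}'
+```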
+ +## Enable IPv6 for internal traffic + +IPv6 for internal communication is supported only for new clusters with Redis Enterprise Software version 7.4.2 or later. + +If the server has only IPv6 interfaces, IPv6 is automatically used for internal and external traffic. Otherwise, internal traffic uses IPv4 by default. + +To use IPv6 for internal traffic on a machine with both IPv4 and IPv6 interfaces, set `use_internal_ipv6` to `true` when you create a cluster using the [bootstrap REST API request]({{< relref "/operate/rs/7.4/references/rest-api/requests/bootstrap#post-bootstrap" >}}): + +```sh +POST /v1/bootstrap/create_cluster +{ + "action": "create_cluster", + "cluster": { + "name": "cluster.fqdn" + }, + "credentials": { + "username": "admin_username", + "password": "admin_password" + }, + "node": { + "identity": { + "addr": "2001:DB8::/32", + "external_addr": ["2001:0db8:85a3:0000:0000:8a2e:0370:7334"], + "use_internal_ipv6": true + }, + }, + ... +} +``` + +When other IPv6 nodes join a cluster that has `use_internal_ipv6` enabled, they automatically use IPv6 for internal traffic. Do not manually set `use_internal_ipv6` when joining a node to an existing IPv6 cluster, or a `NodeBootstrapError` can occur if the values do not match. + +If you try to add a node without an IPv6 interface to a cluster that has `use_internal_ipv6` enabled, a `NodeBootstrapError` occurs. + +The host file `/etc/hosts` on each node in the cluster must include the following entry: + +```sh +::1 localhost +``` + +## Change internal IP address + +Before you change an internal IP address, consider the following: + +- Verify the address is valid and bound to an active interface on the node. Failure to do so prevents the node from coming back online and rejoining the cluster. + +- Joining a node that only has IPv4 network interfaces to a master node with IPv6 enabled causes a `NodeBootstrapError`. + +- Joining a node that only has IPv6 network interfaces to a master node that does not have IPv6 enabled causes a `NodeBootstrapError`. + +- You cannot change the internal address from IPv4 to IPv6 or IPv6 to IPv4 in a running cluster. You can only change the internal address within the same protocol as the cluster. + +If you need to update the internal IP address in the OS, one option is to remove that node from the cluster, change the IP address, and then add the node back into the cluster. + +Alternatively, you can use the following steps to update a node's internal IP address without removing it from the cluster: + +1. Turn the node into a replica using [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/node/enslave" >}}): + + ```sh + rladmin node enslave demote_node + ``` + +1. Deactivate the `rlec_supervisor` service on the node: + + ```sh + systemctl disable rlec_supervisor + ``` + +1. Restart the node. + +1. Follow the operating system vendor's instructions to change the node's IP address. + +1. From a different cluster node, use [`rladmin node addr set`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/node/addr" >}}) to update the first node's IP address: + + ```sh + rladmin node addr set + ``` + +1. Enable the `rlec_supervisor` service on the node: + + ```sh + systemctl enable rlec_supervisor + ``` + +1. Restart `rlec_supervisor` or restart the node. + + + ```sh + systemctl start rlec_supervisor + ``` + +1. Verify the node rejoined the cluster: + + ```sh + rladmin status nodes + ``` + +Repeat this procedure for other cluster nodes to change their internal IP addresses. 
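+
+As an end-to-end illustration, the procedure above might look like the following sketch for a hypothetical node 2 whose internal address changes to 10.0.0.12. The node ID and address are examples only; adjust them for your environment:
+
+```sh
+# On node 2: demote it to a replica and stop cluster services from starting
+rladmin node 2 enslave demote_node
+systemctl disable rlec_supervisor
+
+# ... restart the server and change its IP address using your OS tooling ...
+
+# From another cluster node: register the new internal address for node 2
+rladmin node 2 addr set 10.0.0.12
+
+# Back on node 2: re-enable and start cluster services, then verify membership
+systemctl enable rlec_supervisor
+systemctl start rlec_supervisor
+rladmin status nodes
+```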
+ +## Configure external IP addresses + +You can configure external addresses that are not bound to an active interface, but are otherwise mapped or configured to route traffic to the node (such as AWS Elastic IPs or a load balancer VIP). + +You can use [rladmin node external_addr]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/node/external-addr" >}}) to change a node's external IP addresses. + +Add an external IP address: + +```sh +rladmin node external_addr add +``` + +Set one or more external IP addresses: + +```sh +rladmin node external_addr set +``` + + +Remove an external IP address: + +```sh +rladmin node external_addr remove +``` + +{{< note >}} +While [joining a new node to a +cluster]({{< relref "/operate/rs/7.4/clusters/add-node.md" >}}) +during the node bootstrap process, +when prompted to provide an IP of an existing node in the cluster, +if you use the node's IP, provide the node's internal IP address. +{{< /note >}} + +## Known limitations + +- Using IPv6 for internal traffic is supported only for new clusters running Redis Enterprise Software version 7.4.2 or later. + +- Changing an existing cluster's internal traffic from IPv4 to IPv6 is not supported. + +- All nodes must use the same protocol for internal traffic. + +- If a Redis Enterprise node's host machine has both IPv4 and IPv6 addresses, internal communication within the node initially uses IPv4 until the bootstrap process finishes. +--- +Title: AWS Route53 DNS management +alwaysopen: false +categories: +- docs +- operate +- rs +description: How to configure AWS Route 53 DNS +linkTitle: AWS Route 53 DNS +weight: $weight +url: '/operate/rs/7.4/networking/configuring-aws-route53-dns-redis-enterprise/' +--- + +Redis Enterprise Software uses DNS to achieve high availability and fail-over regardless of where it is installed. + + +## What is AWS Route 53? + +Route 53 is a scalable DNS service by Amazon Web Service (AWS). It routes user traffic to AWS resources and external sites, offering DNS health checks, traffic management, and failover capabilities. It's integral for high-availability architectures and also provides domain registration services. + +## Create a hosted zone + +Creating a hosted zone in Amazon Route 53 is a foundational step in managing your domain's DNS settings. + +A hosted zone functions as a container for the DNS records of a specific domain. To create one, you first need to: + +1. Log into the AWS Management Console +2. Navigate to the Route 53 dashboard +3. Select "Create Hosted Zone" +4. Enter your domain name, and choose public hosted zone + +The hosted zone provides you with a set of Name Server (NS) records, which you will need to update at your domain registrar. This process effectively delegates the DNS management of your domain to Route 53, allowing you to create, update, and manage DNS records for your domain within the AWS ecosystem. + +{{< image filename="/images/rs/00-CreateHostedZone-en.png" >}} + +Once created, it will appear in the list of **Hosted zones** + +{{< image filename="/images/rs/03-HostedZoneSelection-en.png" >}} + +## Create glue records + +A **glue record** is a type of DNS record that helps prevent circular dependencies by providing the IP addresses of your nameservers. To create glue records in Route 53, you first need to set up a hosted zone for your domain. You will create a separate A record for each node in your Redis Enterprise cluster. 
The **Record name** will be a subdomain definition of the NS record you will define and the **value** should be set to the IP address of the node in your cluster. + +{{< image filename="/images/rs/05-NS1Configuration-en.png" >}} + +Once complete, it should look something like this + +{{< image filename="/images/rs/06-NSList-en.png" >}} + + +## Create nameserver record + +When you create a new hosted zone in Route 53 for your domain, a set of NS records is automatically generated. These records list the nameservers assigned by Route 53 to your domain. + +You will need to create a new NS record which will point to the glue records created in the previous step. + +{{}} +It is important to make sure that the **Record Name** of the NS record equals the FQDN (Fully Qualified Domain Name) of your Redis Enterprise cluster. If not, DNS resolution will not function correctly. +{{}} + +{{< image filename="/images/rs/07-NSRecord-en.png" >}} + + +## Validate + +Once all steps are completed, the configuration should look similar to this + +{{< image filename="/images/rs/08-FinalConfig-en.png" >}} + +You can test and validate your settings by using the ```dig``` command. + +```sh +dig ns test.demo-rlec.redislabs.com + +; <<>> DiG 9.9.5-9+deb8u9-Debian <<>> ns test.demo-rlec.redislabs.com +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25061 +;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1 + +;; OPT PSEUDOSECTION: +; EDNS: version: 0, flags:; udp: 4096 +;; QUESTION SECTION: +;test.demo-rlec.redislabs.com. IN NS + +;; ANSWER SECTION: +test.demo-rlec.redislabs.com. 3409 IN NS node2.test.demo-rlec.redislabs.com. +test.demo-rlec.redislabs.com. 3409 IN NS node1.test.demo-rlec.redislabs.com. +test.demo-rlec.redislabs.com. 3409 IN NS node3.test.demo-rlec.redislabs.com. + +;; Query time: 31 msec +;; SERVER: 192.168.1.254#53(192.168.1.254) +;; WHEN: Tue Feb 14 16:49:13 CET 2017 +;; MSG SIZE rcvd: 120 +``` + +You can see that the name are given a prefix of `ns-`. This answer does not come +from *Route53* but from the cluster nameservers themselves. + +--- +Title: Client prerequisites for mDNS +alwaysopen: false +categories: +- docs +- operate +- rs +description: Requirements for using the mDNS protocol in development and testing environments. +linkTitle: mDNS client prerequisites +weight: $weight +url: '/operate/rs/7.4/networking/mdns/' +--- +{{< note >}} +mDNS is only supported for development and testing environments. +{{< /note >}} + +If you choose to use the mDNS protocol when [you set the cluster name]({{< relref "/operate/rs/7.4/networking/cluster-dns" >}}), +make sure that the configurations and prerequisites for resolving database endpoints are met on the client machines. +If you have [Replica Of]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/" >}}) databases on the cluster, +the configurations and prerequisites are also required for the Redis Enterprise Software nodes. + +To prepare a client or node for mDNS: + +1. Make sure that the clients and cluster nodes are on the same physical network + or have the network infrastructure configured to allow multicasting between them. +1. 
Install these prerequisite packages: + + - For Ubuntu: + + ```sh + apt-get install libnss-mdns + ``` + + - For RHEL/CentOS 6.x: + + ```sh + $ rpm -ivh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm + $ yum install nss-mdns + $ service avahi-daemon start + ``` + + - For RHEL/CentOS 7: + + ```sh + $ rpm -ivh https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-12.noarch.rpm + $ yum install nss-mdns + $ service avahi-daemon start + ``` + +1. If you are using [mDNS with IPv6 addresses]({{< relref "/operate/rs/7.4/networking/multi-ip-ipv6" >}}), + update the hosts line in `/etc/nsswitch.conf` to: + + ```yaml + hosts: files mdns4_minimal + \[NOTFOUND=return\] mdns + ``` +--- +Title: Set up a Redis Enterprise cluster behind a load balancer +alwaysopen: false +categories: +- docs +- operate +- rs +description: Set up a Redis Enterprise cluster using a load balancer instead of DNS to direct traffic to cluster nodes. +linkTitle: Cluster load balancer setup +weight: $weight +url: '/operate/rs/7.4/networking/cluster-lba-setup/' +--- +To set up a Redis Enterprise cluster in an environment that doesn't allow DNS, you can use a load balancer (LB) to direct traffic to the cluster nodes. + +## DNS role for databases + +Normally, Redis Enterprise uses DNS to provide dynamic database endpoints. +A DNS name such as `redis-12345.clustername.domain` gives clients access to the database resource: + +- If multiple proxies are in use, the DNS name resolves to multiple IP addresses so clients can load balance. +- On failover or topology changes, the DNS name is automatically updated to reflect the live IP addresses. + +When DNS cannot be used, clients can still connect to the endpoints with the IP addresses, +but the benefits of load balancing and automatic updates to IP addresses won't be available. + +## Network architecture with load balancer + +You can compensate for the lack of DNS resolution with load balancers that can expose services and provide service discovery. +A load balancer is configured in front of the Redis Enterprise cluster, exposing several logical services: + +- Control plane services, such as the Cluster Manager UI +- Data plane services, such as database endpoints for client application connections + +Depending on which Redis Enterprise services you want to access outside the cluster, you may need to configure the load balancers separately. +One or more virtual IPs (VIPs) are defined on the load balancer to expose Redis Enterprise services. +The architecture is shown in the following diagram with a 3-node Redis Enterprise cluster with one database (DB1) configured on port 12000: + +{{< image filename="/images/rs/cluster-behind-load-balancer-top-down.png" alt="cluster-behind-load-balancer-top-down" >}} + +## Set up a cluster with load balancers + +### Prerequisites + +- [Install]({{< relref "/operate/rs/7.4/installing-upgrading" >}}) the latest version of Redis Enterprise Software on your clusters +- Configure the cluster with the cluster name (FQDN) even though DNS is not in use. + Remember that the same cluster name is used to issue the license keys. + We recommend that you use a “.local” suffix in the FQDN. + +### Configure load balancers + +- Make sure that the load balancer is performing TCP health checks on the cluster nodes. 
+- Expose the services that you require through a virtual IP, for example: + - Cluster Manager UI on port 8443 + - Rest API on port 9443 for secure HTTPS connections and port 8080 for HTTP + - Database ports 10000-19999 + +Other ports are shown in the list of [Redis Enterprise network ports]({{< relref "/operate/rs/7.4/networking/port-configurations" >}}). + +{{< note >}} +Sticky, secured connections are needed only for the Redis Enterprise Cluster Manager UI on port 8443. + +- Certain load balancers provide specific logic to close idle connections. Either turn off this feature or make sure the applications connecting to Redis use reconnection logic. +- Make sure the load balancer is fast enough to resolve connections between two clusters or applications that are connected to Redis databases through a load balancer. +- Choose the standard load balancer that is commonly used in your environment so that you have easy access to in-house expertise for troubleshooting issues. +{{< /note >}} + +### Configure cluster + +For clusters behind load balancers, we recommend using the `all-nodes` [proxy policy]({{}}) and enabling `handle_redirects`. + +To allow inbound connections to be terminated on the relevant node inside the cluster, run the following `rladmin` commands on the cluster: + +```sh +# Enable all-nodes proxy policy by default +rladmin tune cluster default_sharded_proxy_policy all-nodes default_non_sharded_proxy_policy all-nodes + +# Redirect where necessary when behind a load balancer +rladmin cluster config handle_redirects enabled +``` + +Optionally configure sparse shard placement to allow closer termination of client connections to where the Redis shard is located: + +```sh +# Enable sparse placement by default +rladmin tune cluster default_shards_placement sparse +``` + +### Configure database + +After you update the cluster settings and configure the load balancers, you can go to the Redis Enterprise Cluster Manager UI at `https://load-balancer-virtual-ip:8443/` and [create a new database]({{< relref "/operate/rs/7.4/databases/create.md" >}}). + +To create an Active-Active database, use the `crdb-cli` utility. See the [`crdb-cli` reference]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli" >}}) for more information about creating Active-Active databases from the command line. + +### Update load balancer configuration when cluster configuration changes + +When your Redis Enterprise cluster is behind a load balancer, you must update the load balancer when the cluster topology and IP addresses change. +Some common cases that require you to update the load balancer are: + +- Adding new nodes to the Redis Enterprise cluster +- Removing nodes from the Redis Enterprise cluster +- Maintenance for Redis Enterprise cluster nodes +- IP address changes for Redis Enterprise cluster nodes + +After these changes, make sure that the Redis connections in your applications can connect to the Redis database, +especially if they are directly connected on IP addresses that have changed. + +## Intercluster communication considerations + +Redis Enterprise supports several topologies that allow intercluster replication, such as [Replica Of]({{< relref "/operate/rs/7.4/databases/import-export/replica-of/" >}}) and [Active-Active]({{< relref "/operate/rs/7.4/databases/active-active/" >}}) deployment options. +When your Redis Enterprise software clusters are behind load balancers, you must allow some network services to be open and defined in the load balancers to allow the replication to work. 
+ +### Replica Of + +For Replica Of communication to work, you must expose database ports locally in each cluster and allow these ports through any firewalls between the clusters. + +### Active-Active + +For Active-Active communication to work, you must expose several ports, including every database port and several control plane ports as defined in [Network port configurations]({{< relref "/operate/rs/7.4/networking/port-configurations" >}}). Pay attention to services that include "Active-Active" in the connection source column, and allow these ports through any firewalls between the clusters. +--- +Title: Manage networks +alwaysopen: false +categories: +- docs +- operate +- rs +description: Networking features and considerations for designing your Redis Enterprise Software deployment. +hideListLinks: false +linktitle: Networking +weight: 39 +url: '/operate/rs/7.4/networking/' +--- +--- +Title: Maintenance mode for cluster nodes +alwaysopen: false +categories: +- docs +- operate +- rs +description: Prepare a cluster node for maintenance. +linkTitle: Maintenance mode +weight: 60 +url: '/operate/rs/7.4/clusters/maintenance-mode/' +--- + +Use maintenance mode to prevent data loss during hardware patching or operating system maintenance on Redis Enterprise servers. When maintenance mode is on, all shards move off of the node under maintenance and migrate to another available node. + +## Activate maintenance mode + +When you activate maintenance mode, Redis Enterprise does the following: + +1. Checks whether a shut down of the node will cause quorum loss. If so, maintenance mode will not turn on. + + Maintenance mode does not protect against quorum loss. If you activate maintenance mode for the majority of nodes in a cluster and restart them simultaneously, quorum is lost, which can lead to data loss. + +1. If no maintenance mode snapshots already exist or if you use `overwrite_snapshot` when you activate maintenance mode, Redis Enterprise creates a new node snapshot that records the node's shard and endpoint configuration. + +1. Marks the node as a quorum node to prevent shards and endpoints from migrating to it. + + At this point, [`rladmin status`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status" >}}) displays the node's shards field in yellow, which indicates that shards cannot migrate to the node. + + {{< image filename="/images/rs/maintenance_mode.png" >}} + +1. Migrates shards and binds endpoints to other nodes, when space is available. + +Maintenance mode does not demote a master node by default. The cluster elects a new master node when the original master node restarts. + +Add the `demote_node` option to the `rladmin` command to [demote a master node](#demote-a-master-node) when you activate maintenance mode. + +To activate maintenance mode for a node, run the following command: + +```sh +rladmin node maintenance_mode on overwrite_snapshot +``` + +You can start server maintenance if: + +- All shards and endpoints have moved to other nodes + +- Enough nodes are still online to maintain quorum + +### Prevent replica shard migration + +If you do not have enough resources available to move all of the shards to other nodes, you can turn maintenance mode on without migrating the replica shards. + +Before you prevent replica shard migration during maintenance mode, consider the following effects: + +- Replica shards remain on the node during maintenance. 
+ +- If the maintenance node fails, the master shards do not have replica shards to maintain data redundancy and high availability. + +- Replica shards that remain on the node can still be promoted during failover to preserve availability. + +To activate maintenance mode without replica shard migration, run: + +```sh +rladmin node maintenance_mode on evict_ha_replica disabled evict_active_active_replica disabled +``` + +### Demote a master node + +If maintenance might affect connectivity to the master node, you can demote the master node when you activate maintenance mode. This lets the cluster elect a new master node. + +To demote a master node when activating maintenance mode, run: + +```sh +rladmin node maintenance_mode on demote_node +``` + +### Verify maintenance mode activation + +To verify maintenance mode for a node, use `rladmin status` and review the node's shards field. If that value is displayed in yellow (shown earlier), then the node is in maintenance mode. + +Avoid activating maintenance mode when it is already active. Maintenance mode activations stack. If you activate maintenance mode for a node that is already in maintenance mode, you will have to deactivate maintenance mode twice in order to restore full functionality. + +## Deactivate maintenance mode + +When you deactivate maintenance mode, Redis Enterprise: + +1. Loads a [specified node snapshot](#specify-a-snapshot) or defaults to the latest maintenance mode snapshot. + +1. Unmarks the node as a quorum node to allow shards and endpoints to migrate to the node. + +1. Restores the shards and endpoints that were in the node at the time of the snapshot. + +1. Deletes the snapshot. + +To deactivate maintenance mode after server maintenance, run: + +```sh +rladmin node maintenance_mode off +``` + +By default, a snapshot is required to deactivate maintenance mode. If the snapshot cannot be restored, deactivation is cancelled and the node remains in maintenance mode. In such events, it may be necessary to [reset node status](#reset_node_status). + +### Specify a snapshot + +When you turn off maintenance mode, you can restore the node configuration from a maintenance mode snapshot or any snapshots previously created by [`rladmin node snapshot create`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/node/snapshot#node-snapshot-create" >}}). If you do not specify a snapshot, Redis Enterprise uses the latest maintenance mode snapshot by default. + +To get a list of available snapshots, run: + +```sh +rladmin node snapshot list +``` + +To specify a snapshot when you turn maintenance mode off, run: + +```sh +rladmin node maintenance_mode off snapshot_name +``` + +{{}} +If an error occurs when you turn on maintenance mode, the snapshot is not deleted. +When you rerun the command, use the snapshot from the initial attempt since it contains the original state of the node. +{{}} + +### Skip shard restoration + +You can prevent the migrated shards and endpoints from returning to the original node after you turn off maintenance mode. + +To turn maintenance mode off and skip shard restoration, run: + +```sh +rladmin node maintenance_mode off skip_shards_restore +``` + +### Reset node status + +In extreme cases, you may need to reset a node's status. Run the following commands to do so: + +``` +$ rladmin tune node max_listeners 100 +$ rladmin tune node quorum_only disabled +``` + +Use these commands with caution. For best results, contact Support before running these commands. 
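+
+Putting the activation and deactivation commands together, a routine maintenance cycle for a hypothetical node 3 might look like the following sketch:
+
+```sh
+# Move shards and endpoints off node 3 and snapshot its configuration
+rladmin node 3 maintenance_mode on overwrite_snapshot
+
+# ... patch the operating system or perform hardware maintenance ...
+
+# Restore the node's shards and endpoints from the latest maintenance mode snapshot
+rladmin node 3 maintenance_mode off
+```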
+ +## Cluster status example + +This example shows how the output of [`rladmin status`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status" >}}) changes when you turn on maintenance mode for a node. + +The cluster status before turning on maintenance mode: + +```sh +redislabs@rp1_node1:/opt$ rladmin status +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS +*node:1 master 172.17.0.2 rp1_node1 2/100 +node:2 slave 172.17.0.4 rp3_node1 2/100 +node:3 slave 172.17.0.3 rp2_node1 0/100 +``` + +The cluster status after turning on maintenance mode: + +```sh +redislabs@rp1_node1:/opt$ rladmin node 2 maintenance_mode on +Performing maintenance_on action on node:2: 0% +created snapshot NodeSnapshot + +node:2 will not accept any more shards +Performing maintenance_on action on node:2: 100% +OK +redislabs@rp1_node1:/opt$ rladmin status +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS +*node:1 master 172.17.0.2 rp1_node1 2/100 +node:2 slave 172.17.0.4 rp3_node1 0/0 +node:3 slave 172.17.0.3 rp2_node1 2/100 +``` + +After turning on maintenance mode for node 2, Redis Enterprise saves a snapshot of its configuration and then moves its shards and endpoints to node 3. + +Now node 2 has `0/0` shards because shards cannot migrate to it while it is in maintenance mode. + +## Maintenance mode REST API + +You can also turn maintenance mode on or off using [REST API requests]({{< relref "/operate/rs/7.4/references/rest-api" >}}) to [POST `/nodes/{node_uid}/actions/{action}`]({{< relref "/operate/rs/7.4/references/rest-api/requests/nodes/actions#post-node-action" >}}). + +### Activate maintenance mode (REST API) + +Use `POST /nodes/{node_uid}/actions/maintenance_on` to activate maintenance mode: + +```sh +POST https://:9443/v1/nodes//actions/maintenance_on +{ + "overwrite_snapshot": true, + "evict_ha_replica": true, + "evict_active_active_replica": true +} +``` + +You can set `evict_ha_replica` and `evict_active_active_replica` to `false` to [prevent replica shard migration](#prevent-replica-shard-migration). + +The `maintenance_on` request returns a JSON response body: + +```json +{ + "status":"queued", + "task_id":"" +} +``` + +### Deactivate maintenance mode (REST API) + +Use `POST /nodes/{node_uid}/actions/maintenance_off` deactivate maintenance mode: + +```sh +POST https://:9443/v1/nodes//actions/maintenance_off +{ "skip_shards_restore": false } +``` + +The `skip_shards_restore` boolean flag allows the `maintenance_off` action to [skip shard restoration](#skip-shard-restoration) when set to `true`. + +The `maintenance_off` request returns a JSON response body: + +```json +{ + "status":"queued", + "task_id":"" +} +``` + +### Track action status + +You can send a request to [GET `/nodes/{node_uid}/actions/{action}`]({{< relref "/operate/rs/7.4/references/rest-api/requests/nodes/actions#get-node-action" >}}) to track the [status]({{< relref "/operate/rs/7.4/references/rest-api/objects/action" >}}) of the `maintenance_on` and `maintenance_off` actions. + +This request returns the status of the `maintenance_on` action: + +```sh +GET https://:9443/v1/nodes//actions/maintenance_on +``` + +The response body: + +```json +{ + "status":"completed", + "task_id":"38c7405b-26a7-4379-b84c-cab4b3db706d" +} +``` +--- +Title: Recover a failed cluster +alwaysopen: false +categories: +- docs +- operate +- rs +description: How to use the cluster configuration file and database data to recover + a failed cluster. 
+linktitle: Recover a cluster +weight: 70 +url: '/operate/rs/7.4/clusters/cluster-recovery/' +--- +When a Redis Enterprise Software cluster fails, +you must use the cluster configuration file and database data to recover the cluster. + +{{< note >}} +For cluster recovery in a Kubernetes deployment, see [Recover a Redis Enterprise cluster on Kubernetes]({{< relref "/operate/kubernetes/re-clusters/cluster-recovery" >}}). +{{< /note >}} + +Cluster failure can be caused by: + +- A hardware or software failure that causes the cluster to be unresponsive to client requests or administrative actions. +- More than half of the cluster nodes lose connection with the cluster, resulting in quorum loss. + +To recover a cluster and re-create it as it was before the failure, +you must restore the cluster configuration `ccs-redis.rdb` to the cluster nodes. +To recover databases in the new cluster, you must restore the databases from persistence files such as backup files, append-only files (AOF), or RDB snapshots. +These files are stored in the [persistent storage location]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/persistent-ephemeral-storage" >}}). + +The cluster recovery process includes: + +1. Install Redis Enterprise Software on the nodes of the new cluster. +1. Mount the persistent storage with the recovery files from the original cluster to the nodes of the new cluster. +1. Recover the cluster configuration on the first node in the new cluster. +1. Join the remaining nodes to the new cluster. +1. [Recover the databases]({{< relref "/operate/rs/7.4/databases/recover.md" >}}). + +## Prerequisites + +- We recommend that you recover the cluster to clean nodes. + If you use the original nodes, + make sure there are no Redis processes running on any nodes in the new cluster. +- We recommend that you use clean persistent storage drives for the new cluster. + If you use the original storage drives, + make sure you back up the files on the original storage drives to a safe location. +- Identify the cluster configuration file that you want to use as the configuration for the recovered cluster. + The cluster configuration file is `/ccs/ccs-redis.rdb` on the persistent storage for each node. + +## Recover the cluster + +1. (Optional) If you want to recover the cluster to the original cluster nodes, uninstall Redis Enterprise Software from the nodes. + +1. [Install Redis Enterprise Software]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-on-linux" >}}) on the new cluster nodes. + + The new servers must have the same basic hardware and software configuration as the original servers, including: + + - The same number of nodes + - At least the same amount of memory + - The same Redis Enterprise Software version + - The same installation user and paths + + {{< note >}} +The cluster recovery can fail if these requirements are not met. + {{< /note >}} + +1. Mount the persistent storage drives with the recovery files to the new nodes. + These drives must contain the cluster configuration backup files and database persistence files. + + {{< note >}} +Make sure that the user redislabs has permissions to access the storage location +of the configuration and persistence files on each of the nodes. + {{< /note >}} + + If you use local persistent storage, place all of the recovery files on each of the cluster nodes. + +1. 
To recover the original cluster configuration, run [`rladmin cluster recover`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/recover" >}}) on the first node in the new cluster: + + ```sh + rladmin cluster recover filename [ | ] node_uid rack_id + ``` + + For example: + + ```sh + rladmin cluster recover filename /tmp/persist/ccs/ccs-redis.rdb node_uid 1 rack_id 5 + ``` + + When the recovery command succeeds, + this node is configured as the node from the old cluster that has ID 1. + +1. To join the remaining servers to the new cluster, run [`rladmin cluster join`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/join" >}}) from each new node: + + ```sh + rladmin cluster join nodes username password replace_node + ``` + + For example: + + ```sh + rladmin cluster join nodes 10.142.0.4 username admin@example.com password mysecret replace_node 2 + ``` + +1. Run [`rladmin status`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status" >}}) to verify the recovered nodes are now active and the databases are pending recovery: + + ```sh + rladmin status + ``` + + {{< note >}} +Make sure that you update your [DNS records]({{< relref "/operate/rs/7.4/networking/cluster-dns" >}}) +with the IP addresses of the new nodes. + {{< /note >}} + +After the cluster is recovered, you must [recover the databases]({{< relref "/operate/rs/7.4/databases/recover.md" >}}). +--- +Title: "Redis OSS Cluster API" +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Use the Redis OSS Cluster API to improve performance and keep applications current with cluster topology changes. +linktitle: "Redis OSS Cluster API" +weight: $weight +url: '/operate/rs/7.4/clusters/optimize/oss-cluster-api/' +--- +{{< embed-md "oss-cluster-api-intro.md" >}} + +You can use the Redis OSS Cluster API along with other Redis Enterprise Software high availability +to get high performance with low latency +and let applications stay current with cluster topology changes, including add node, remove node, and node failover. + +For more about working with the OSS Cluster API in Redis Enterprise Software, see [Enable OSS Cluster API]({{< relref "/operate/rs/7.4/databases/configure/oss-cluster-api" >}}). + +To learn how to enable OSS Cluster API in Redis Cloud, see [Clustering Redis databases]({{< relref "/operate/rc/databases/configuration/clustering#cluster-api" >}}). +--- +Title: Turn off services to free system memory +alwaysopen: false +categories: +- docs +- operate +- rs +description: Turn off services to free memory and improve performance. +linktitle: Free system memory +weight: $weight +url: '/operate/rs/7.4/clusters/optimize/turn-off-services/' +--- +The Redis Enterprise Software cluster nodes host a range of services that support the cluster processes. +In most deployments, either all of these services are required, +or there are enough memory resources on the nodes for the database requirements. + +In a deployment with limited memory resources, certain services can be disabled from API endpoint to free system memory or using the `rladmin` command. +Before you turn off a service, make sure that your deployment does not depend on that service. +After you turn off a service, you can re-enable in the same way. 
+ +The services that you can turn off are: + +- RS Admin Console - `cm_server` +- Logs in CSV format - `stats_archiver` +- [LDAP authentication]({{< relref "/operate/rs/7.4/security/access-control/ldap" >}}) - `saslauthd` +- [Discovery service]({{< relref "/operate/rs/7.4/networking/cluster-dns.md" >}})- `mdns_server`, `pdns_server` +- [Active-Active databases]({{< relref "/operate/rs/7.4/databases/active-active" >}}) - `crdb_coordinator`, `crdb_worker` +- Alert Manager - `alert_mgr` (For best results, disable only if you have an alternate alert system.) + +To turn off a service with the `rladmin cluster config` command, use the `services` parameter and the name of the service, followed by `disabled`. +```text + rladmin cluster config + [ services ] +``` + +To turn off a service with the API, use the [`PUT /v1/services_configuration`]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/services_configuration#put-cluster-services_config" >}}) endpoint +with the name of the service and the operating mode (enabled/disabled) in JSON format. + +For example: +- To turn off the Redis Enterprise Cluster Manager UI, use this PUT request: + + ```sh + PUT https://[host][:9443]/v1/cluster/services_configuration + '{ + "cm_server":{ + "operating_mode":"disabled" + } + }' + ``` + +- To turn off the CRDB services and enable the `stats_archiver` for cluster component statistics, use this PUT request: + + ```sh + PUT https://[host][:9443]/v1/cluster/services_configuration + '{ + "crdb_coordinator":{ + "operating_mode":"disabled" + }, + "crdb_worker":{ + "operating_mode":"disabled" + }, + "stats_archiver":{ + "operating_mode":"enabled" + } + }' + ``` +--- +Title: Benchmarking Redis Enterprise +alwaysopen: false +categories: +- docs +- operate +- rs +description: Use the `memtier_benchmark` tool to perform a performance benchmark of + Redis Enterprise Software. +linkTitle: Benchmark +weight: $weight +url: '/operate/rs/7.4/clusters/optimize/memtier-benchmark/' +--- + +Use the `memtier_benchmark` tool to perform a performance benchmark of Redis Enterprise Software. + +Prerequisites: + +- Redis Enterprise Software installed +- A cluster configured +- A database created + +For help with the prerequisites, see the [Redis Enterprise Software quickstart]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}). + +It is recommended to run memtier_benchmark on a separate node that is +not part of the cluster being tested. If you run it on a node of the +cluster, be mindful that it affects the performance of both the +cluster and memtier_benchmark. + +```sh +/opt/redislabs/bin/memtier_benchmark -s $DB_HOST -p $DB_PORT -a $DB_PASSWORD -t 4 -R --ratio=1:1 +``` + +This command instructs memtier_benchmark to connect to your Redis +Enterprise database and generates a load doing the following: + +- A 50/50 Set to Get ratio +- Each object has random data in the value + +## Populate a database with testing data + +If you need to populate a database with some test data for a proof of +concept, or failover testing, etc. here is an example for you. 
+ +```sh +/opt/redislabs/bin/memtier_benchmark -s $DB_HOST -p $DB_PORT -a $DB_PASSWORD -R -n allkeys -d 500 --key-pattern=P:P --ratio=1:0 +``` + +This command instructs memtier_benchmark to connect to your Redis +Enterprise database and generates a load doing the following: + +- Write objects only, no reads +- A 500 byte object +- Each object has random data in the value +- Each key has a random pattern, then a colon, followed by a + random pattern. + +Run this command until it fills up your database to where you want it +for testing. The easiest way to check is on the database metrics page. + +{{< image filename="/images/rs/memtier_metrics_page.png" >}} + +Another use for memtier_benchmark is to populate a database with data +for failure testing. +--- +Title: Disk sizing for heavy write scenarios +alwaysopen: false +categories: +- docs +- operate +- rs +description: Sizing considerations for persistent disk space for heavy throughput + databases. +linktitle: Disk sizing +weight: $weight +url: '/operate/rs/7.4/clusters/optimize/disk-sizing-heavy-write-scenarios/' +--- +In extreme write scenarios, when AOF is enabled, the AOF rewrite process +may require considerably more disk space for database persistence. + +To estimate the required persistent disk space in such cases, use the +formula described below. + +**The required persistent disk space for AOF rewrite purposes in extreme +write scenarios, assuming identical shard sizes:** + +**X (1 + 3Y +Y²)** +where: +**X** = each shard size +**Y** = number of shards + +Following are examples of database configurations and the persistence +disk space they would require in this scenario: + +| | Example 1 | Example 2 | Example 3 | Example 4 | +|---|------------|-----------------|------------|-----------------| +| Database size (GB) | 10 | 10 | 40 | 40 | +| Number of shards | 4 | 16 | 5 | 15 | +| Shard size (GB) | 2.5 | 0.625 | 8 | 2.67 | +| Required disk space (GB) | 73 | 191 | 328 | 723 | + +For disk size requirements in standard usage scenarios, refer to the +[Hardware +requirements]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/hardware-requirements.md" >}}) +section. +--- +Title: Cluster environment optimization +alwaysopen: false +categories: +- docs +- operate +- rs +description: Optimize your cluster environments for better performance. +linktitle: Environment optimization +weight: $weight +url: '/operate/rs/7.4/clusters/optimize/optimization/' +--- +Redis Enterprise Software uses various algorithms to optimize +performance. As part of this process, Redis Enterprise Software examines usage +and load to adjust its runtime configuration. Depending +on your specific usage and load, Redis Enterprise Software might take some +time to adjust for optimal performance. + +To ensure optimal performance, you must run your workload several times +and for a long duration until performance stabilizes. 
+ +## Failure detection sensitivity policies + +You can optimize your cluster's thresholds and timeouts for different environments using the `failure_detection_sensitivity` cluster policy: + +- `high` (previously known as `local-network watchdog_profile`) – high failure detection sensitivity, lower thresholds, and faster failure detection and failover + +- `low` (previously known as `cloud watchdog_profile`) – low failure detection sensitivity and higher tolerance for latency variance (also called network jitter) + +Depending on which policy you choose, Redis Enterprise Software uses different +thresholds to make operation-related decisions. + +The default policy is `high` failure detection sensitivity for `local-network` environments. If you are +running Redis Enterprise in a cloud environment, you should change the +configuration. + +## Change failure detection sensitivity + +To change the cluster's `failure_detection_sensitivity`, run one of the following [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/tune#tune-cluster" >}}) commands. + +- For Redis Enterprise Software version 6.4.2-69 and later, run: + + ```sh + rladmin tune cluster failure_detection_sensitivity [ low | high ] + ``` + +- For Redis Enterprise Software version 6.4.2-61 and earlier, run: + + ```sh + rladmin tune cluster watchdog_profile [ cloud | local-network ] + ``` + +If Redis Enterprise Software still +does not meet your performance expectations after following these instructions, [contact support](https://redis.com/company/support/) for help optimizing your environment. +--- +Title: Use the WAIT command to improve data safety and durability +alwaysopen: false +categories: +- docs +- operate +- rs +description: Use the wait command to take full advantage of Redis Enterprise Software's + replication-based durability. +linkTitle: WAIT command +weight: $weight +url: '/operate/rs/7.4/clusters/optimize/wait/' +--- +Redis Enterprise Software comes with the ability to replicate data +to another replica for high availability and persist in-memory data on +disk permanently for durability. With the [`WAIT`]({{}}) command, you can +control the consistency and durability guarantees for the replicated and +persisted database. + +## Non-blocking Redis write operation + +Any updates that are issued to the database are typically performed with the following flow: + +1. Application issues a write. +1. Proxy communicates with the correct primary shard in the system that contains the given key. +1. The acknowledgment is sent to proxy after the write operation completes. +1. The proxy sends the acknowledgment back to the application. + +Independently, the write is communicated from the primary shard to the replica, and +replication acknowledges the write back to the primary shard. These are steps 5 and 6. + +Independently, the write to a replica is also persisted to disk and +acknowledged within the replica. These are steps 7 and 8. + +{{< image filename="/images/rs/weak-consistency.png" >}} + +## Blocking write operation on replication + +With the [`WAIT`]({{}}) or [`WAITAOF`]({{}}) commands, applications can ask to wait for +acknowledgments only after replication or persistence is confirmed on +the replica. The flow of a write operation with `WAIT` or `WAITAOF` is: + +1. The application issues a write. +1. The proxy communicates with the correct primary shard in the system that contains the given key. +1. Replication communicates the update to the replica shard. +1. 
If using `WAITAOF` and the AOF every write setting, the replica persists the update to disk before sending the acknowledgment. +1. The acknowledgment is sent back from the replica all the way to the proxy with steps 5 to 8. + +The application only gets the acknowledgment from the write after durability is achieved with replication to the replica for `WAIT` or `WAITAOF` and to the persistent storage for `WAITAOF` only. + +{{< image filename="/images/rs/strong-consistency.png" >}} + +The `WAIT` command always returns the number of replicas that acknowledged the write commands sent by the current client before the `WAIT` command, both in the case where the specified number of replicas are reached, or when the timeout is reached. In Redis Enterprise Software, the number of replicas is always 1 for databases with high availability enabled. + +See the [`WAITAOF`]({{}}) command for details for enhanced data safety and durability capabilities introduced with Redis 7.2. +--- +Title: Optimize clusters +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configuration changes and information you can use to optimize your performance + and memory usage. +hideListLinks: false +linktitle: Optimize +weight: 50 +url: '/operate/rs/7.4/clusters/optimize/' +--- +--- +Title: Rack-zone awareness in Redis Enterprise Software +alwaysopen: false +categories: +- docs +- operate +- rs +- kubernetes +description: Rack-zone awareness ensures high availability in the event of a rack + or zone failure. +linkTitle: Rack-zone awareness +weight: 70 +url: '/operate/rs/7.4/clusters/configure/rack-zone-awareness/' +--- +Rack-zone awareness helps ensure high availability in the event of a rack or zone failure. + +When you enable rack-zone awareness in a Redis Enterprise Software cluster, you assign +a [rack-zone ID](#rack-zone-id-rules) to each node. This ID is used to map the node to a +physical rack or logical zone. The cluster can then ensure that master shards, corresponding replica shards, and associated endpoints are placed on [nodes in different racks or zones](#node-layout-guidelines). + +In the event of a rack or zone failure, the replicas and endpoints in the remaining racks and zones are promoted. This ensures high availability when a rack or zone fails. + +There is no limitation on the number of racks and zones per cluster. Each +node can belong to a different rack or multiple nodes can belong to the +same rack. + +Rack-zone awareness affects various cluster, node, and database actions, such as node rebalancing, node removal, node replacement, shard and endpoint migration, and database failover. + +## Rack-zone ID rules + +The rack-zone ID must comply with the following rules: + +- Maximum length of 63 characters. +- Characters consist of letters, digits, and hyphens ('-'). Underscores ('_') are also accepted as of Redis Enterprise Software [6.4.2-61]({{< relref "/operate/rs/release-notes/rs-6-4-2-releases/rs-6-4-2-61" >}}). +- ID starts with a letter and ends with a letter or a digit. + +{{< note >}} +Rack-zone IDs are **case-insensitive** (uppercase and lowercase letter are treated as the same). +{{< /note >}} + +## Node layout guidelines + +Avoid placing the majority of nodes in one availability zone. + +If a Redis Enterprise Software cluster consists of three nodes (the recommended minimum), follow these guidelines: + +- For high availability, the three nodes must be distributed across three *distinct* racks or zones. 
+ +- When using availability zones, all three zones should exist within the same *region* to avoid potential latency issues. + +## Set up rack-zone awareness + +To enable rack-zone awareness, you need to configure it for the +cluster, nodes, and [databases](#enable-database-rack-zone-awareness). + +### New cluster + +You can set up rack-zone awareness for the cluster and its nodes during [cluster creation]({{< relref "/operate/rs/7.4/clusters/new-cluster-setup" >}}): + +1. In the **Cluster** screen's **Configuration** section, enable **Rack zone awareness**. + +1. Select **Next** to continue to the **Node** configuration screen. + +1. Enter a **Rack-zone ID** for the current node. + +1. Finish [cluster setup]({{< relref "/operate/rs/7.4/clusters/new-cluster-setup" >}}). + +1. For every [node you add to the cluster]({{< relref "/operate/rs/7.4/clusters/add-node" >}}), assign a different **Rack-zone ID**. + +### Existing cluster + +If you did not configure rack-zone awareness during cluster creation, you can configure rack-zone awareness for existing clusters using the [REST API]({{< relref "/operate/rs/7.4/references/rest-api" >}}): + +1. For each node in the cluster, assign a different rack-zone ID using the REST API to [update the node]({{< relref "/operate/rs/7.4/references/rest-api/requests/nodes#put-node" >}}): + + ```sh + PUT /v1/nodes/ + { "rack_id": "rack-zone-ID" } + ``` + +1. [Update the cluster policy]({{< relref "/operate/rs/7.4/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) to enable rack-zone awareness: + + ```sh + PUT /v1/cluster/policy + { "rack_aware": true } + ``` + +## Enable database rack-zone awareness + +Before you can enable rack-zone awareness for a database, you must configure rack-zone awareness for the cluster and its nodes. For more information, see [set up rack-zone awareness](#set-up-rack-zone-awareness). + + + +To enable rack-zone awareness for a database, use a [REST API request]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs#put-bdbs" >}}): + +```sh +PUT /v1/bdbs/ +{ "rack_aware": true } +``` + +### Rearrange database shards + +After you enable rack-zone awareness for an existing database, you should generate an optimized shard placement blueprint using the [REST API]({{< relref "/operate/rs/7.4/references/rest-api" >}}) and use it to rearrange the shards in different racks or zones. + +1. [Generate an optimized shard placement blueprint]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/actions/optimize_shards_placement#get-bdbs-actions-optimize-shards-placement" >}}): + + 1. Send the following `GET` request: + + ```sh + GET /v1/bdbs//actions/optimize_shards_placement + ``` + + 1. Copy the `cluster-state-id` from the response headers. + + 1. Copy the JSON response body, which represents the new shard placement blueprint. + +1. [Rearrange the database shards]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs/actions/optimize_shards_placement#put-bdbs-rearrange-shards" >}}) according to the new shard placement blueprint: + + 1. In the request headers, include the `cluster-state-id` from the `optimize_shards_placement` response. + + 1. Add the following JSON in the request body and replace `` with the new blueprint: + + ```sh + { + "shards_blueprint": + } + ``` + + 1. 
Send the following `PUT` request to rearrange the shards: + + ```sh + PUT /v1/bdbs/ + ``` + +## Shard placement without rack-zone awareness + +Even if a database has rack-zone awareness turned off, the cluster still ensures that master and replica shards are placed on distinct nodes. +--- +Title: Cluster settings +alwaysopen: false +categories: +- docs +- operate +- rs +description: You can view and set various cluster settings such as cluster name, email + service, time zone, and license. +linktitle: Cluster settings +toc: 'true' +weight: 10 +url: '/operate/rs/7.4/clusters/configure/cluster-settings/' +--- +You can view and set various cluster settings, such as cluster name, email service, time zone, and license, on the **Cluster > Configuration** page. + +## General configuration tab + +### Upload cluster license key + +After purchasing a cluster license and if your account has the "Admin" role, +you can upload the cluster license key, either during initial +cluster creation or at any time afterward. The license key defines various +cluster settings, such as the maximum number of shards you can have in +the cluster. For more detailed information see [Cluster license +keys]({{< relref "/operate/rs/7.4/clusters/configure/license-keys.md" >}}). + +### View max number of allowed shards + +The maximum number of allowed shards, which is determined by the cluster license +key, appears in the **Max number of shards** field in the **License** section. + +### View cluster name + +The cluster name appears in the **Cluster name** field in the **License** section. This gives a +common name that your team or Redis support can refer to. It is +especially helpful if you have multiple clusters. + +### Set time zone + +You can change the **Time zone** field to ensure the date, time fields, and log entries in the Cluster Manager UI are shown in your preferred time zone. This setting doesn't affect other system logs or services. + +## Alert settings tab + +The **Alert Settings** tab lets you configure alerts that are relevant to the entire cluster, such as alerts for cluster utilization, nodes, node utilization, security, and database utilization. + +You can also configure email server settings and [send alerts by email]({{< relref "/operate/rs/7.4/clusters/monitoring#send-alerts-by-email" >}}) to relevant users. + +### Configure email server settings + +To enable email alerts: + +1. Enter your email +server details in the **Email server settings** section. + +1. Select a connection security method: + + - TLS/SSL + + - STARTTLS + + - None + +1. Send a test email to verify your email server settings. +--- +Title: Configure clusters +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configuration options for your Redis Enterprise cluster. 
+hideListLinks: false +linktitle: Configure +weight: 50 +url: '/operate/rs/7.4/clusters/configure/' +--- +You can manage your Redis Enterprise Software clusters with several different tools: + +- Cluster Manager UI (the web-based user interface) +- Command-line tools ([rladmin]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}), [redis-cli]({{< relref "/develop/tools/cli" >}}), [crdb-cli]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli" >}})) +- [REST API]({{< relref "/operate/rs/7.4/references/rest-api/_index.md" >}}) + + + +--- +Title: Cluster license keys +alwaysopen: false +categories: +- docs +- operate +- rs +description: The cluster key (or license) enables features and capacity within Redis + Enterprise Software +linktitle: License keys +weight: 20 +url: '/operate/rs/7.4/clusters/configure/license-keys/' +--- +The cluster license key enables Redis Enterprise Software features and determines shard usage and limits. +You can add or update a cluster key at any time. + +## Trial mode + +Trial mode allows all features to be enabled during the trial period. + +Trial mode is limited to 30 days and 4 shards, including master and replica shards. A new Redis Enterprise Software installation starts its 30-day trial period from the day you set up the cluster on the first node. + +Trial mode requires a trial license. If you do not provide a license when you create a cluster using the Cluster Manager UI or a [bootstrapping REST API request]({{< relref "/operate/rs/7.4/references/rest-api/requests/bootstrap#post-bootstrap" >}}), a trial cluster license is generated by default. + +## View cluster license key + +To view the cluster license key, use: + +- Cluster Manager UI + + 1. Go to **Cluster > Configuration > General > License** to see the cluster license details. + + 1. Select **Change** to view the cluster license key. + +- REST API - [`GET /v1/license`]({{< relref "/operate/rs/7.4/references/rest-api/requests/license#get-license" >}}) + + For a list of returned fields, see the [response section]({{< relref "/operate/rs/7.4/references/rest-api/requests/license#get-response" >}}). + +{{}} +As of version 7.2, Redis Enterprise enforces shard limits by shard types, RAM or flash, instead of the total number of shards. The flash shards limit only appears in the UI if Auto Tiering is enabled. +{{}} + +## Update cluster license + +{{< note >}} +After you add a cluster key, you cannot remove the key to return the cluster to trial mode. +{{< /note >}} + +You can update the cluster license key: + +- During cluster setup using the Cluster Manager UI or CLI + +- After cluster setup using the Cluster Manager UI: + + 1. Go to **Cluster > Configuration > General > License**. + + 1. Select **Change**. + + 1. Upload or enter your cluster license key. + + 1. Select **Save**. + +You can update an existing cluster key at any time. +Redis Enterprise checks its validity for the following: +- Cluster name +- Activation and expiration dates +- Shard usage and limits +- Features + +If saving a new cluster key fails, the operation returns an error with the failure's cause. +In this case, the existing key stays in effect. + +## Expired cluster license + +When the license is expired: + +- You cannot do these actions: + + - Change database settings, including security and configuration options. + + - Add a database. + +- You can do these actions: + + - Sign in to the Cluster Manager UI and view settings and metrics at all resolutions for the cluster, nodes, and databases. 
+ + - Change cluster settings, including the license key, security for administrators, and cluster alerts. + + - Fail over when a node fails and explicitly migrate shards between nodes. + + - Upgrade a node to a new version of Redis Enterprise Software. + +## Monitor cluster license + +As of version 7.2, Redis Enterprise exposes the license quotas and the shards consumption metrics in the Cluster Manager UI or via the [Prometheus integration]({{< relref "/integrate/prometheus-with-redis-enterprise/" >}}). + +The `cluster_shards_limit` metric displays the total shard limit by shard type. + +Examples: +- `cluster_shards_limit{cluster="mycluster.local",shard_type="ram"} 100.0` +- `cluster_shards_limit{cluster="mycluster.local",shard_type="flash"} 50.0` + +The `bdb_shards_used` metric displays the used shard count by database and shard type. + +Examples: +- `bdb_shards_used{bdb="2",cluster="mycluster.local",shard_type="ram"} 86.0` +- `bdb_shards_used{bdb="3",cluster="mycluster.local",shard_type="flash"} 23.0` + +--- +Title: Synchronize cluster node clocks +alwaysopen: false +categories: +- docs +- operate +- rs +description: Sync node clocks to avoid problems with internal custer communication. +linktitle: Sync node clocks +weight: 30 +url: '/operate/rs/7.4/clusters/configure/sync-clocks/' +--- +To avoid problems with internal cluster communications that can impact your data integrity, +make sure that the clocks on all of the cluster nodes are synchronized using Chrony and/or NTP. + +When you install Redis Enterprise Software, +the install script checks if Chrony or NTP is running. +If they are not, the installation process asks for permission to configure a scheduled Cron job. +This should make sure that the node's clock is always synchronized. +If you did not confirm configuring this job during the installation process, +you must use the Network Time Protocol (NTP) regularly to make sure that all server clocks are synchronized. + +To synchronize the server clock, run the command that is appropriate for your operating system. + +## Set up NTP synchronization + +To set up NTP synchronization, see the following sections for instructions for specific operating systems. + +### Ubuntu 18.04 and Ubuntu 20.04 + +1. Install Chrony, a replacement for NTP: + ```sh + sudo apt install chrony + ``` + +1. Edit the Chrony configuration file: + ```sh + sudo nano /etc/chrony/chrony.conf + ``` + +1. Add `server pool.ntp.org` to the file, replace `pool.ntp.org` with your own NTP server, then save. + +1. Restart the Chrony service: + ```sh + sudo systemctl restart chrony + ``` + +1. Check the Chrony service status: + ```sh + sudo systemctl status chrony + ``` + +For more details, refer to the official [Ubuntu 20.04 documentation](https://ubuntu.com/server/docs/network-ntp). + +### RHEL 7 + +1. Install Chrony, a replacement for NTP: + ```sh + sudo yum install chrony + ``` + +1. Edit the Chrony configuration file: + ```sh + sudo nano /etc/chrony.conf + ``` + +1. Add `server pool.ntp.org` to the file, replace `pool.ntp.org` with your own NTP server, then save. + +1. Enable and start the Chrony service: + ```sh + sudo systemctl enable chronyd && sudo systemctl start chronyd + ``` + +1. Check the Chrony service status: + ```sh + sudo systemctl status chronyd + ``` + +For more details, refer to the official [RHEL 7 documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-configuring_ntp_using_the_chrony_suite#sect-Using_chrony). 
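+
+After you finish the steps for your distribution (RHEL 7 above, or one of the distributions below), you can optionally confirm that the node's clock is tracking an NTP source before you join the node to the cluster. The following is a minimal check, assuming the standard `chronyc` utility that ships with Chrony:
+
+```sh
+# Show the current synchronization status and the selected reference source
+chronyc tracking
+
+# List the configured NTP sources and their reachability
+chronyc sources -v
+```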
+ +### RHEL 8 and RHEL 9 + +1. Install Chrony, a replacement for NTP: + ```sh + sudo dnf install chrony + ``` + +1. Edit the Chrony configuration file: + ```sh + sudo nano /etc/chrony.conf + ``` + +1. Add `server pool.ntp.org` to the file, replace `pool.ntp.org` with your own NTP server, then save. + +1. Enable and start the Chrony service: + ```sh + sudo systemctl enable chronyd && sudo systemctl start chronyd + ``` + +1. Check the Chrony service status: + ```sh + sudo systemctl status chronyd + ``` + +For more details, refer to the official [RHEL 8 and 9 documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/configuring-time-synchronization_configuring-basic-system-settings). + +### Amazon Linux 2 + +1. Install Chrony, a replacement for NTP: + ```sh + sudo yum install chrony + ``` + +1. Edit the Chrony configuration file: + ```sh + sudo nano /etc/chrony.conf + ``` + +1. Add `server pool.ntp.org` to the file, replace `pool.ntp.org` with your own NTP server, then save. + +1. Enable and start the Chrony service: + ```sh + sudo systemctl enable chronyd && sudo systemctl start chronyd + ``` + +1. Check the Chrony service status: + ```sh + sudo systemctl status chronyd + ``` + +For more details, refer to the official [Amazon Linux 2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html). + +If you are using Active-Active databases, you must use [Network Time Service (ntpd)]({{< relref "/operate/rs/7.4/databases/active-active/_index.md#network-time-service-ntp-or-chrony" >}}) +to synchronize OS clocks consistently across clusters to handle conflict resolution according to the OS time. +--- +Title: Replace a cluster node +alwaysopen: false +categories: +- docs +- operate +- rs +description: Replace a node in your cluster that is down. +linkTitle: Replace node +weight: 90 +url: '/operate/rs/7.4/clusters/replace-node/' +--- +A failed node will appear as `Down` ({{< image filename="/images/rs/icons/node-down-icon.png#no-click" alt="Node down icon" class="inline" >}}) in the **Nodes** list. + +To replace a failed node: + +1. Prepare a new node identical to the old one. + +1. Install and + configure Redis Enterprise Software on the node. See [Install and setup]({{< relref "/operate/rs/7.4/installing-upgrading" >}}) for more information. + + {{< note >}} +If you are using [Auto Tiering]({{< relref "/operate/rs/7.4/databases/auto-tiering/" >}}), make sure the required flash storage is set up on this new node. + {{< /note >}} + +1. [Add the node]({{< relref "/operate/rs/7.4/clusters/add-node" >}}) to the cluster. Make sure the new node has as much available memory as the faulty + node. + + If the new node does not have enough memory, you will be prompted to add a node with enough memory. + +1. A message will appear informing you that the cluster has a faulty node + and that the new node will replace the faulty node. + + {{< note >}} +- If there is a faulty node in the cluster to which you are adding a node, Redis Enterprise Software will use the new node to replace the faulty one. +- Any existing [DNS records]({{< relref "/operate/rs/7.4/networking/cluster-dns" >}}) must be updated +each time a node is added or replaced. + {{< /note >}} +--- +Title: Remove a cluster node +alwaysopen: false +categories: +- docs +- operate +- rs +description: Remove a node from your Redis Enterprise cluster. 
+linkTitle: Remove node +weight: 80 +url: '/operate/rs/7.4/clusters/remove-node/' +--- +You might want to remove a node from a Redis Enterprise cluster for one of the following reasons: + +- To [permanently remove a node](#permanently-remove-a-node) if you no longer need the extra capacity. +- To [replace a faulty node](#replace-a-faulty-node) with a healthy node. +- To [replace a healthy node](#replace-a-healthy-node) with a different node. + +You can configure [email alerts from the cluster]({{< relref "/operate/rs/7.4/clusters/monitoring#cluster-alerts" >}}) to notify you of cluster changes, including when a node is removed. + +{{}} +Read through these explanations thoroughly before taking +any action. +{{}} + +## Permanently remove a node + +Permanently removing a node means you are decreasing cluster capacity. +Before trying to remove a node, make sure that the cluster has enough +capacity for all resources without that node, otherwise you cannot remove the node. + +If there is not enough capacity in the cluster to facilitate removing +the node, you can either delete databases or add another node instead of +the one you would like to remove. + +During the removal process, the cluster migrates all resources from the +node being removed to other nodes in the cluster. In order to ensure +database connectivity, and database high availability (when replication +is enabled), the cluster first creates replacement shards or endpoints +on one of the other nodes in the cluster, initiates failover as needed, +and only then removes the node. + +If a cluster has only two nodes (which is not recommended for production +deployments) and some databases have replication enabled, you cannot remove a node. + +## Replace a faulty node + +If the cluster has a faulty node that you would like to replace, you +only need to add a new node to the cluster. The cluster recognizes the +existence of a faulty node and automatically replaces the faulty node +with the new node. + +For guidelines, refer to [Replacing a faulty +node]({{< relref "/operate/rs/7.4/clusters/replace-node.md" >}}). + +## Replace a healthy node + +If you would like to replace a healthy node with a different node, you +must first add the new node to the cluster, migrate all the resources +from the node you would like to remove, and only then remove the node. + +For further guidance, refer to [adding a new node to a +cluster]({{< relref "/operate/rs/7.4/clusters/add-node.md" >}}). + +You can migrate resources by using the `rladmin` command-line interface +(CLI). For guidelines, refer to [`rladmin` command-line interface +(CLI)]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}). + +{{< note >}} +The [DNS records]({{< relref "/operate/rs/7.4/networking/cluster-dns" >}}) must be updated each time a node is added or replaced. +{{< /note >}} + +## Remove a node + +To remove a node using the Cluster Manager UI: + +1. If you are using the new Cluster Manager UI, switch to the legacy admin console. + + {{Select switch to legacy admin console from the dropdown.}} + +1. On the **nodes** page, select the node you want to remove. + +1. Click **Remove** at the top of the **node** page. + +1. Confirm you want to **Remove** the node when prompted. + +1. Redis Enterprise Software examines the node and the cluster and takes the actions required + to remove the node. + +1. At any point, you can click the **Abort** button to stop the + process. When aborted, the current internal action is completed, and + then the process stops. + +1. 
Once the process finishes, the node is no longer shown in + the UI. + +To remove a node using the REST API, use [`POST /v1/nodes//actions/remove`]({{< relref "/operate/rs/7.4/references/rest-api/requests/nodes/actions#post-node-action" >}}). + +By default, the remove node action completes after all resources migrate off the removed node. Node removal does not wait for migrated shards' persistence files to be created on the new nodes. + +To change node removal to wait for the creation of new persistence files for all migrated shards, set `wait_for_persistence` to `true` in the request body or [update the cluster policy]({{}}) `persistent_node_removal` to `true` to change the cluster's default behavior. + +For example: + +```sh +POST https://:9443/v1/nodes//actions/remove +{ + "wait_for_persistence": true +} +``` + +{{< note >}} +If you need to add a removed node back to the cluster, +you must [uninstall]({{< relref "/operate/rs/7.4/installing-upgrading/uninstalling.md" >}}) +and [reinstall]({{< relref "/operate/rs/7.4/installing-upgrading" >}}) the software on that node. +{{< /note >}} +--- +Title: Add a cluster node +alwaysopen: false +categories: +- docs +- operate +- rs +description: Add a node to your existing Redis Enterprise cluster. +linktitle: Add a node +weight: 20 +url: '/operate/rs/7.4/clusters/add-node/' +--- +When you install Redis Enterprise Software on the first node of a cluster, you create the new cluster. +After you install the first node, you can add more nodes to the cluster. + +{{< note >}} +Before you add a node to the cluster: + +- The clocks on all nodes must always be [synchronized]({{< relref "/operate/rs/7.4/clusters/configure/sync-clocks.md" >}}). + + If the clock in the node you are trying to join to the cluster is not synchronized with the nodes already in the cluster, + the action fails and an error message is shown indicating that you must synchronize the clocks first. + +- You must [update the DNS records]({{< relref "/operate/rs/7.4/networking/cluster-dns" >}}) + each time a node is added or replaced. + +- We recommend that you add nodes one after the other rather than in parallel + to avoid errors that occur because the connection to the other nodes in the cluster cannot be verified. +{{< /note >}} + +To add a node to an existing cluster: + +1. [Install the Redis Enterprise Software installation package]({{< relref "/operate/rs/7.4/installing-upgrading" >}}) on a clean installation + of a [supported operating system]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/supported-platforms.md" >}}). + +1. To connect to the Cluster Manager UI of the new Redis Enterprise Software installation, go to: + + For example, if you installed Redis Enterprise Software on a machine with IP address 10.0.1.34, go to `https://10.0.1.34:8443`. + + {{< tip >}} +The management UI uses TLS encryption with a default certificate. +You can also [replace the TLS certificate]({{< relref "/operate/rs/7.4/security/certificates/updating-certificates" >}}) +with a custom certificate. + {{< /tip >}} + +1. Select **Join cluster**. + +1. For **Cluster identification**, enter the internal IP address or DNS name of a node that is a cluster member. + + If the node only has one IP address, enter that IP address. + +1. For **Cluster sign in**, enter the credentials of the cluster administrator. + + The cluster administrator is the user account that you create when you configure the first node in the cluster. + +1. Click **Next**. + +1. 
Configure storage and network settings: + + 1. Enter a path for [*Ephemeral storage*]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/persistent-ephemeral-storage" >}}), or leave the default path. + + 1. Enter a path for [*Persistent storage*]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/persistent-ephemeral-storage" >}}), + or leave the default path. + + 1. To enable [*Auto Tiering*]({{< relref "/operate/rs/7.4/databases/auto-tiering/" >}}), + select **Enable flash storage** and enter the path to the flash storage. + + 1. If the cluster is configured to support [rack-zone awareness]({{< relref "/operate/rs/7.4/clusters/configure/rack-zone-awareness.md" >}}), set the **Rack-zone ID** for the new node. + + 1. If your machine has multiple IP addresses, assign a single IPv4 type address for **Node-to-node communication (internal traffic)** and multiple IPv4/IPv6 type addresses for **External traffic**. + +1. Select **Join cluster**. + +The node is added to the cluster. +You can see it in the list of nodes in the cluster. + +If you see an error when you add the node, try adding the node again. + +{{< tip >}} +We recommend that you run the [rlcheck utility]({{< relref "/operate/rs/7.4/references/cli-utilities/rlcheck" >}}) to verify that the node is functioning properly. +{{< /tip >}} + +--- +Title: Set up a new cluster +alwaysopen: false +categories: +- docs +- operate +- rs +description: How to set up a new cluster using the management UI. +linktitle: Set up cluster +weight: 10 +url: '/operate/rs/7.4/clusters/new-cluster-setup/' +--- +A Redis Enterprise Software cluster typically consists of several nodes. +For production deployments, we recommend an uneven number of nodes, with a minimum of three. + +{{< note >}} +In a cluster that consists of only one node, some features and capabilities are not enabled, +such as database replication that provides high availability. +{{< /note >}} + +To set up a new cluster, you must first [install the Redis Enterprise Software package]({{< relref "/operate/rs/7.4/installing-upgrading" >}}) +and then set up the cluster as described below. +After the cluster is created you can [add multiple nodes to the cluster]({{< relref "/operate/rs/7.4/clusters/add-node.md" >}}). + +To create a cluster: + +1. In a browser, go to `https://:8443`. + For example, if you installed Redis Enterprise Software on a machine with IP address 10.0.1.34, go to . + + {{< note >}} +- The management UI uses a [self-signed certificate for TLS encryption]({{< relref "/operate/rs/7.4/security/certificates/updating-certificates" >}}). +- If the machine has both an internal IP address and an external IP address, use the external IP address to access the setup UI. + {{< /note >}} + +1. Select **Create new cluster**. + +1. Enter an email and password for the administrator account, then select **Next** to proceed to cluster setup. + +1. Enter your cluster license key if you have one. Otherwise, the cluster uses the trial license by default. + +1. In the **Configuration** section: + + 1. For **FQDN (Fully Qualified Domain Name)**, enter a unique name for the cluster. + + See the [instructions for DNS setup]({{< relref "/operate/rs/7.4/networking/cluster-dns" >}}) + to make sure your cluster is reachable by name. + + 1. Choose whether to [**Enable private & public endpoints support**]({{< relref "/operate/rs/7.4/networking/private-public-endpoints.md" >}}). + + 1. 
Choose whether to [**Enable rack-zone awareness**]({{< relref "/operate/rs/7.4/clusters/configure/rack-zone-awareness.md" >}}). + +1. Click **Next**. + +1. Configure storage and network settings: + + 1. Enter a path for [*Ephemeral storage*]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/persistent-ephemeral-storage" >}}), or leave the default path. + + 1. Enter a path for [*Persistent storage*]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/persistent-ephemeral-storage" >}}), + or leave the default path. + + 1. To enable [*Auto Tiering*]({{< relref "/operate/rs/7.4/databases/auto-tiering/" >}}), + select **Enable flash storage** and enter the path to the flash storage. + + 1. If the cluster is configured to support [rack-zone awareness]({{< relref "/operate/rs/7.4/clusters/configure/rack-zone-awareness.md" >}}), set the **Rack-zone ID** for the new node. + + 1. If your machine has multiple IP addresses, assign a single IPv4 type address for **Node-to-node communication (internal traffic)** and multiple IPv4/IPv6 type addresses for **External traffic**. + +1. Select **Create cluster**. + +1. Click **OK** to confirm that you are aware of the replacement of the HTTPS TLS certificate on the node, + and proceed through the browser warning. + +After a short wait, your cluster is created and you can sign in to the Cluster Manager UI. + +You can now access any of the management capabilities, including: + +- [Creating a new database]({{< relref "/operate/rs/7.4/databases/create.md" >}}) +- [Joining a new node to a cluster]({{< relref "/operate/rs/7.4/clusters/add-node.md" >}}) +--- +Title: Monitoring with metrics and alerts +alwaysopen: false +categories: +- docs +- operate +- rs +- kubernetes +description: Use the metrics that measure the performance of your Redis Enterprise Software clusters, nodes, databases, and shards to track the performance of your databases. +hideListLinks: true +linkTitle: Monitoring +weight: 96 +url: '/operate/rs/7.4/clusters/monitoring/' +--- +You can use the metrics that measure the performance of your Redis Enterprise Software clusters, nodes, databases, and shards +to monitor the performance of your databases. +In the Redis Enterprise Cluster Manager UI, you can see real-time metrics and configure alerts that send notifications based on alert parameters. You can also access metrics and configure alerts through the REST API. + +To integrate Redis Enterprise metrics into your monitoring environment, see the integration guides for [Prometheus and Grafana]({{< relref "/integrate/prometheus-with-redis-enterprise/" >}}) or [Uptrace]({{< relref "/integrate/uptrace-with-redis-enterprise/" >}}). + +Make sure you read the [definition of each metric]({{< relref "/operate/rs/7.4/references/metrics/" >}}) +so that you understand exactly what it represents. + +## Real-time metrics + +You can see the metrics of the cluster in: + +- **Cluster > Metrics** +- **Node > Metrics** for each node +- **Database > Metrics** for each database, including the shards for that database + +The scale selector at the top of the page allows you to set the X-axis (time) scale of the graph. + +To choose which metrics to display in the two large graphs at the top of the page: + +1. Hover over the graph you want to show in a large graph. +1. Click on the right or left arrow to choose which side to show the graph. + +We recommend that you show two similar metrics in the top graphs so you can compare them side-by-side. 
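+
+Outside the Cluster Manager UI, the same metrics are exposed for scraping through the [Prometheus integration]({{< relref "/integrate/prometheus-with-redis-enterprise/" >}}). As a quick sanity check that a node is serving metrics, you can query the exporter directly. This is a minimal sketch that assumes the default metrics exporter port 8070 and the `/metrics` path; see the integration guide for the exact scrape configuration for your version:
+
+```sh
+# Query the metrics exporter on a cluster node (-k skips verification of the self-signed certificate)
+curl -sk https://<node-ip>:8070/metrics | grep -E 'cluster_shards_limit|bdb_shards_used'
+```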
+ +## Cluster alerts + +In **Cluster > Alert Settings**, you can enable alerts for node or cluster events, such as high memory usage or throughput. + +Configured alerts are shown: + +- As a notification on the status icon ( {{< image filename="/images/rs/icons/icon_warning.png#no-click" alt="Warning" width="18px" class="inline" >}} ) for the node and cluster +- In the **log** +- In email notifications, if you configure [email alerts](#send-alerts-by-email) + +{{< note >}} +If you enable alerts for "Node joined" or "Node removed" actions, +you must also enable "Receive email alerts" so that the notifications are sent. +{{< /note >}} + +To enable alerts for a cluster: + +1. In **Cluster > Alert Settings**, click **Edit**. +1. Select the alerts that you want to show for the cluster and click **Save**. + +## Database alerts + +For each database, you can enable alerts for database events, such as high memory usage or throughput. + +Configured alerts are shown: + +- As a notification on the status icon ( {{< image filename="/images/rs/icons/icon_warning.png#no-click" alt="Warning" width="18px" class="inline" >}} ) for the database +- In the **log** +- In emails, if you configure [email alerts](#send-alerts-by-email) + +To enable alerts for a database: + +1. In **Configuration** for the database, click **Edit**. +1. Select the **Alerts** section to open it. +1. Select the alerts that you want to show for the database and click **Save**. + +## Send alerts by email + +To send cluster and database alerts by email: + +1. In **Cluster > Alert Settings**, click **Edit**. +1. Select **Set an email** to configure the [email server settings]({{< relref "/operate/rs/7.4/clusters/configure/cluster-settings#configuring-email-server-settings" >}}). +1. In **Configuration** for the database, click **Edit**. +1. Select the **Alerts** section to open it. +1. Select **Receive email alerts** and click **Save**. +1. In **Access Control**, select the [database and cluster alerts]({{< relref "/operate/rs/7.4/security/access-control/manage-users" >}}) that you want each user to receive. +--- +Title: Alerts and events +alwaysopen: false +categories: +- docs +- operate +- rs +description: Logged alerts and events +linkTitle: Alerts and events +weight: 50 +aliases: + - /operate/rs/clusters/logging/rsyslog-logging/cluster-events/ + - /operate/rs/clusters/logging/rsyslog-logging/bdb-events/ + - /operate/rs/clusters/logging/rsyslog-logging/node-events/ + - /operate/rs/clusters/logging/rsyslog-logging/user-events/ +url: '/operate/rs/7.4/clusters/logging/alerts-events/' +--- + +The following alerts and events can appear in `syslog` and the Cluster Manager UI logs. + +| Alert/Event | UI message | Severity | Notes | +|-----------------------------------|----------------------------------------------------------------|-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------| +| aof_slow_disk_io | Redis performance is degraded as a result of disk I/O limits | True: error, False: info | node alert | +| authentication_err | | error | bdb event; Replica of - error authenticating with the source database | +| backup_delayed | Periodic backup has been delayed for longer than `` minutes | True: warning, False: info | bdb alert; Has threshold parameter in the data: section of the log entry. 
| +| backup_failed | | error | bdb event | +| backup_started | | info | bdb event | +| backup_succeeded | | info | bdb event | +| bdb_created | | info | bdb event | +| bdb_deleted | | info | bdb event | +| bdb_updated | | info | bdb event; Indicates that a bdb configuration has been updated | +| checks_error | | error | node event; Indicates that one or more node checks have failed | +| cluster_updated | | info | cluster event; Indicates that cluster settings have been updated | +| compression_unsup_err | | error | bdb event; Replica of - Compression not supported by sync destination | +| crossslot_err | | error | bdb event; Replica of - sharded destination does not support operation executed on source | +| cpu_utilization | CPU utilization has reached ``% | True: warning, False: info | node alert; Has global_threshold parameter in the key/value section of the log entry. | +| even_node_count | True high availability requires an odd number of nodes | True: warning, False: info | cluster alert | +| ephemeral_storage | Ephemeral storage has reached ``% of its capacity | True: warning, False: info | node alert; Has global_threshold parameter in the key/value section of the log entry. | +| export_failed | | error | bdb event | +| export_started | | info | bdb event | +| export_succeeded | | info | bdb event | +| failed | Node failed | critical | node alert | +| free_flash | Flash storage has reached ``% of its capacity | True: warning, False: info | node alert; Has global_threshold parameter in the key/value section of the log entry. | +| high_latency | Latency is higher than `` milliseconds | True: warning, False: info | bdb alert; Has threshold parameter in the key/value section of the log entry. | +| high_syncer_lag | Replica of - sync lag is higher than `` seconds | True: warning, False: info | bdb alert; Has threshold parameter in the key/value section of the log entry. | +| high_throughput | Throughput is higher than `` RPS (requests per second) | True: warning, False: info | bdb alert; Has threshold parameter in the key/value section of the log entry. | +| import_failed | | error | bdb event | +| import_started | | info | bdb event | +| import_succeeded | | info | bdb event | +| inconsistent_redis_sw | Not all databases are running the same open source version | True: warning, False: info | cluster alert | +| inconsistent_rl_sw | Not all nodes in the cluster are running the same Redis Labs Enterprise Cluster version | True: warning, False: info | cluster alert | +| insufficient_disk_aofrw | Node has insufficient disk space for AOF rewrite | True: error, False: info | node alert | +| internal_bdb | Issues with internal cluster databases | True: warning, False: info | cluster alert | +| license_added | | info | cluster event | +| license_deleted | | info | cluster event | +| license_updated | | info | cluster event | +| low_throughput | Throughput is lower than `` RPS (requests per second) | True: warning, False: info | bdb alert; Has threshold parameter in the key/value section of the log entry. | +| memory | Node memory has reached ``% of its capacity | True: warning, False: info | node alert; Has global_threshold parameter in the key/value section of the log entry. | +| multiple_nodes_down | Multiple cluster nodes are down - this might cause data loss | True: warning, False: info | cluster alert | +| net_throughput | Network throughput has reached ``MB/s | True: warning, False: info | node alert; Has global_threshold parameter in the key/value section of the log entry. 
| +| node_abort_remove_request | | info | node event | +| node_joined | Node joined | info | cluster event | +| node_operation_failed | Node operation failed | error | cluster event | +| node_remove_abort_completed | Node removed | info | cluster event; The remove node is a process that can fail and can also be aborted. If aborted, the abort can succeed or fail. | +| node_remove_abort_failed | Node removed | error | cluster event; The remove node is a process that can fail and can also be aborted. If aborted, the abort can succeed or fail. | +| node_remove_completed | Node removed | info | cluster event; The remove node is a process that can fail and can also be aborted. If aborted, the abort can succeed or fail. | +| node_remove_failed | Node removed | error | cluster event; The remove node is a process that can fail and can also be aborted. If aborted, the abort can succeed or fail. | +| node_remove_request | | info | node event | +| ocsp_query_failed | Failed querying OCSP server | True: error, False: info | cluster alert | +| ocsp_status_revoked | OCSP status revoked | True: error, False: info | cluster alert | +| oom_err | | error | bdb event; Replica of - Replication source/target out of memory | +| persistent_storage | Persistent storage has reached ``% of its capacity | True: warning, False: info | node alert; Has global_threshold parameter in the key/value section of the log entry. | +| ram_dataset_overhead | RAM Dataset overhead in a shard has reached ``% of its RAM limit | True: warning, False: info | bdb alert; Has threshold parameter in the key/value section of the log entry. | +| ram_overcommit | Cluster capacity is less than total memory allocated to its databases | True: error, False: info | cluster alert | +| ram_values | Percent of values in a shard's RAM is lower than ``% of its key count | True: warning, False: info | bdb alert; Has threshold parameter in the key/value section of the log entry. | +| shard_num_ram_values | Number of values in a shard's RAM is lower than `` values | True: warning, False: info | bdb alert; Has threshold parameter in the key/value section of the log entry. | +| size | Dataset size has reached ``% of the memory limit | True: warning, False: info | bdb alert; Has threshold parameter in the key/value section of the log entry. | +| syncer_connection_error | | error | bdb alert | +| syncer_general_error | | error | bdb alert | +| too_few_nodes_for_replication | Database replication requires at least two nodes in cluster | True: warning, False: info | cluster alert | +| user_created | | info | user event | +| user_deleted | | info | user event | +| user_updated | | info | user event; Indicates that a user configuration has been updated | +--- +Title: Manage logs +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linktitle: Manage logs +weight: 50 +url: '/operate/rs/7.4/clusters/logging/log-security/' +--- +Redis Enterprise comes with [a set of logs]({{< relref "/operate/rs/7.4/clusters/logging" >}}) on the server and available through the user interface to assist users in investigating actions taken on the server and to troubleshoot issues. + +## Send logs to a remote logging server + +Redis Enterprise sends logs to syslog by default. You can send these logs to a remote logging server by configuring syslog. 
+ +To do this, modify the syslog or rsyslog configuration on your operating system to send logs in the `$logdir` directory (`/var/opt/redislabs/log` in default installations) to a remote monitoring server of your choice. See [rsyslog logging]({{< relref "/operate/rs/7.4/clusters/logging/rsyslog-logging/" >}}) for additional details. + +## Log rotation + +Redis Enterprise Software's job scheduler runs `logrotate` every five minutes to examine logs stored on the operating system and rotate them based on the log rotation configuration. You can find the log rotation configuration file at `$pkgconfdir/logrotate.conf` as of Redis Enterprise Software version 7.2 (`pkgconfdir` is `/opt/redislabs/config` by default, but can be changed in a custom installation). + +By default, log rotation occurs when a log exceeds 200 MB. We recommend sending log files to a remote logging server so you can maintain them more effectively. + +The following log rotation policy is enabled by default in Redis Enterprise Software, but you can modify it as needed. + +```sh +/var/opt/redislabs/log/*.log { + su ${osuser} ${osgroup} + size 200M + missingok + copytruncate + # 2000 is logrotate's way of saying 'infinite' + rotate 2000 + maxage 7 + compress + notifempty + nodateext + nosharedscripts + prerotate + # copy cluster_wd log to another file that will have longer retention + if [ "\$1" = "/var/opt/redislabs/log/cluster_wd.log" ]; then + cp -p /var/opt/redislabs/log/cluster_wd.log /var/opt/redislabs/log/cluster_wd.log.long_retention + fi + endscript +} +/var/opt/redislabs/log/cluster_wd.log.long_retention { + su ${osuser} ${osgroup} + daily + missingok + copytruncate + rotate 30 + compress + notifempty + nodateext +} +``` + +- `/var/opt/redislabs/log/*.log` - `logrotate` checks the files under the `$logdir` directory (`/var/opt/redislabs/log/`) and rotates any files that end with the extension `.log`. + +- `/var/opt/redislabs/log/cluster_wd.log.long_retention` - The contents of `cluster_wd.log` is copied to `cluster_wd.log.long_retention` before rotation, and this copy is kept for longer than normal (30 days). + +- `size 200M` - Rotate log files that exceed 200 MB. + +- `missingok` - If there are missing log files, do nothing. + +- `copytruncate` - Truncate the original log file to zero sizes after creating a copy. + +- `rotate 2000` - Keep up to 2000 (effectively infinite) log files. + +- `compress` - gzip log files. + +- `maxage 7` - Keep the rotated log files for 7 days. + +- `notifempty` - Don't rotate the log file if it is empty. + +{{}} +For large scale deployments, you might need to rotate logs at faster intervals than daily. You can also use a cronjob or external vendor solutions. +{{}} +--- +Title: View Redis slow log +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linkTitle: Slow log +weight: $weight +url: '/operate/rs/7.4/clusters/logging/redis-slow-log/' +--- +On the **Databases** \> **Slowlog** page, you can view Slow Log details +for Redis Enterprise Software databases. + +[Redis Slow Log](http://redis.io/commands/slowlog) is one of the best +tools for debugging and tracing your Redis database, especially if you +experience high latency and high CPU usage with Redis operations. +Because Redis is based on a single threaded architecture, Redis Slow Log +can be much more useful than slow log mechanisms of multi-threaded +database systems such as MySQL Slow Query Log. 
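+
+Because this is the standard Redis `SLOWLOG` mechanism under the hood, you can also query it directly with `redis-cli` against a database endpoint instead of using the **Slowlog** page. A minimal sketch, assuming a database that listens on port 12000:
+
+```sh
+# Show the ten most recent slow log entries for this database
+redis-cli -h <database-endpoint> -p 12000 SLOWLOG GET 10
+
+# Check how many entries are currently stored, then clear them
+redis-cli -h <database-endpoint> -p 12000 SLOWLOG LEN
+redis-cli -h <database-endpoint> -p 12000 SLOWLOG RESET
+```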
+ +Unlike tools that introduce lock overhead (which complicates the debugging +process), Redis Slow Log is highly effective at showing the actual processing time of each command. + +Redis Enterprise Software includes enhancements to the standard Redis +Slow Log capabilities that allow you to analyze the execution time +complexity of each command. This enhancement can help you better analyze +Redis operations, allowing you to compare the differences between +execution times of the same command, observe spikes in CPU usage, and +more. + +This is especially useful with complex commands such as +[ZUNIONSTORE](http://redis.io/commands/zunionstore), +[ZINTERSTORE](http://redis.io/commands/zinterstore) and +[ZRANGEBYSCORE](http://redis.io/commands/zrangebyscore). + +The enhanced Redis Enterprise Software Slow Log adds the **Complexity info** field to the +output data. + +View the complexity info data by its respective command in the table +below: + +| Command | Value of interest | Complexity | +|------------|-----------------|-----------------| +| LINSERT | N - list len | O(N) | +| LREM | N - list len | O(N) | +| LTRIM | N - number of removed elements | O(N) | +| PUBLISH | N - number of channel subscribers
M - number of subscribed patterns | O(N+M) | +| PSUBSCRIBE | N - number of patterns client is subscribed to
argc - number of arguments passed to the command | O(argc\*N) | +| PUNSUBSCRIBE | N - number of patterns client is subscribed to
M - total number of subscribed patterns
argc - number of arguments passed to the command | O(argc\*(N+M)) | +| SDIFF | N - total number of elements in all sets | O(N) | +| SDIFFSTORE | N - total number of elements in all sets | O(N) | +| SINTER | N - number of elements in smallest set
argc - number of arguments passed to the command | O(argc\*N) | +| SINTERSTORE | N - number of elements in smallest set
argc - number of arguments passed to the command | O(argc\*N) | +| SMEMBERS | N - number of elements in a set | O(N) | +| SORT | N - number of elements in the list/set/zset
M - number of elements in the result | O(N+M\*log(M)); O(N) when no sorting is requested | +| SUNION | N - number of elements in all sets | O(N) | +| SUNIONSTORE | N - number of elements in all sets | O(N) | +| UNSUBSCRIBE | N - total number of clients subscribed to all channels | O(N) | +| ZADD | N - number of elements in the zset | O(log(N)) | +| ZCOUNT | N - number of elements in the zset
M - number of elements between min and max | O(log(N)+M) | +| ZINCRBY | N - number of elements in the zset | O(log(N)) | +| ZINTERSTORE | N – number of elements in the smallest zset
K – number of zsets
M – number of elements in the results set | O(N\*K)+O(M\*log(M)) | +| ZRANGE | N – number of elements in the zset
M – number of results | O(log(N)+M) | +| ZRANGEBYSCORE | N – number of elements in the zset
M – number of results | O(log(N)+M) | +| ZRANK | N – number of elements in the zset | O(log(N)) | +| ZREM | N – number of elements in the zset
argc – number of arguments passed to the command | O(argc\*log(N)) | +| ZREMRANGEBYRANK | N – number of elements in the zset
M – number of elements removed | O(log(N)+M) | +| ZREMRANGEBYSCORE | N – number of elements in the zset
M – number of elements removed | O(log(N)+M) | +| ZREVRANGE | N – number of elements in the zset
M – number of results | O(log(N)+M) | +| ZREVRANK | N – number of elements in the zset | O(log(N)) | +| ZUNIONSTORE | N – sum of element counts of all zsets
M – element count of result | O(N)+O(M\*log(M)) | +--- +Title: rsyslog logging +alwaysopen: false +categories: +- docs +- operate +- rs +description: This document explains the structure of Redis Enterprise Software log + entries in `rsyslog` and how to use these log entries to identify events. +hideListLinks: true +linktitle: rsyslog +weight: $weight +url: '/operate/rs/7.4/clusters/logging/rsyslog-logging/' +--- + +## Log concepts + +Redis Enterprise Software logs information from a variety of components in response to actions and events that occur within the cluster. + +In some cases, a single action, such as removing a node from the cluster, may actually consist of several events. These actions may generate multiple log entries. + +All log entries displayed in the Cluster Manager UI are also written to `syslog`. You can configure `rsyslog` to monitor `syslog`. Enabled alerts are logged to `syslog` and appear with other log entries. + +You can also [manage your logs]({{< relref "/operate/rs/7.4/clusters/logging/log-security" >}}) with a remote logging server and log rotation. + +### Types of log entries + +Log entries are categorized into events and alerts. Both types of entries appear in the logs, but alert log entries also include a boolean `"state"` parameter that indicates whether the alert is enabled or disabled. + +Log entries include information about the specific event that occurred. See the log entry tables for [alerts and events]({{< relref "/operate/rs/7.4/clusters/logging/alerts-events" >}}) for more details. + +### Severity + +You can also configure `rsyslog` to add other information, such as the event severity. + +Since `rsyslog` entries do not include severity by default, you can follow these steps to enable it: + +1. Add the following line to `/etc/rsyslog.conf`: + ``` + $template TraditionalFormatWithPRI,"%pri-text%: %timegenerated% %HOSTNAME% %syslogtag%%msg:::drop-last-lf%\n" + ``` + +2. Modify `$ActionFileDefaultTemplate` to use your new template `$ActionFileDefaultTemplateTraditionalFormatWithPRI` + +3. Save these changes and restart `rsyslog` to apply them + +You can see the log entries for alerts and events in the `/var/log/messages` file. + +**Command components:** + +- `%pri­text%` ­adds the severity +- `%timegenerated%` ­adds the timestamp +- `%HOSTNAME%` ­adds the machine name +- `%syslogtag%` adds ­the Redis Enterprise Software message. See the [log entry structure](#log-entry-structure) section for more details. +- `%msg:::drop­last­lf%n` ­removes duplicated log entries + +## Log entry structure + +The log entries have the following basic structure: + + event_log[]:{} + +- **event_log**:­ Plain static text is always shown at the beginning of the entry. +- **process id­**: The ID of the logging process +- **list of key-value pairs in any order**:­ A list of key-value pairs that describe the specific event. They can appear in any order. Some key­-value pairs are always shown, and some appear depending on the specific event. + - **Key-­value pairs that always appear:** + - `"type"`: A unique code­ name for the logged event. For the list of codenames, see the [logged alerts and events]({{< relref "/operate/rs/7.4/clusters/logging/alerts-events" >}}) tables. + - `"object"`: Defines the object type and ID (if relevant) of the object this event relates to, such as cluster, node with ID, BDB with ID, etc. Has the format of `[:]`. + - `"time"`: Unix epoch time but can be ignored in this context. 
+ - **Key-­value pairs that might appear depending on the specific entry:** + - `"state"`: A boolean where `true` means the alert is enabled, and `false` means the alert is disabled. This is only relevant for alert log entries. + - `"global_threshold"`: The value of a threshold for alerts related to cluster or node objects. + - `"threshold"`: The value of a threshold for alerts related to a BDB object + +## Log entry samples + +This section provides examples of log entries that include the [`rsyslog` configuration](#severity) to add the severity, timestamp, and machine name. + +### Ephemeral storage passed threshold + +#### "Alert on" log entry sample + +``` +daemon.warning: Jun 14 14:49:20 node1 event_log[3464]: +{ + "storage_util": 90.061643120001, + "global_threshold": "70", + "object": "node:1", + "state": true, + "time": 1434282560, + "type": "ephemeral_storage" +} +``` + +In this example, the storage utilization on node 1 reached the value of ~90%, which triggered the alert for "Ephemeral storage has reached 70% of its capacity." + +**Log entry components:** + +- `daemon.warning` -­ Severity of entry is `warning` +- `Jun 14 14:49:20` -­ The timestamp of the event +- `node1`:­ Machine name +- `event_log` -­ Static text that always appears +- `[3464]­` - Process ID +- `"storage_util":90.061643120001` - Current ephemeral storage utilization +- `"global_threshold":"70"` - The user-configured threshold above which the alert is raised +- `"object":"node:1"`­ - The object related to this alert +- `"state":true­` - Current state of the alert +- `"time":1434282560­` - Can be ignored +- `"type":"ephemeral_storage"` - The code name of this specific event. See [logged alerts and events]({{< relref "/operate/rs/7.4/clusters/logging/alerts-events" >}}) for more details. + +#### "Alert off" log entry sample + +``` +daemon.info: Jun 14 14:51:35 node1 event_log[3464]: +{ + "storage_util":60.051723520008, + "global_threshold": "70", + "object": "node:1", + "state":false, + "time": 1434283480, + "type": "ephemeral_storage" +} +``` + +This log entry is an example of when the alert for the node with ID 1 "Ephemeral storage has reached 70% of its capacity" has been turned off as result of storage utilization reaching the value of ~60%. + +**Log entry components**: + +- `daemon.info` -­ Severity of entry is `info` +- `Jun 14 14:51:35` -­ The timestamp of the event +- `node1` -­ Machine name +- `event_log` -­ Static text that always appears +- `[3464]` -­ Process ID +- `"storage_util":60.051723520008­` - Current ephemeral storage utilization +- `"global_threshold":"70"` - The user configured threshold above which the alert is raised (70% in this case) +- `"object":"node:1"` -­ The object related to this alert +- `"state":false­` - Current state of the alert +- `"time":1434283480­` - Can be ignored +- `"type":"ephemeral_storage"` -­ The code name identifier of this specific event. See [logged alerts and events]({{< relref "/operate/rs/7.4/clusters/logging/alerts-events" >}}) for more details. + +### Odd number of nodes with a minimum of three nodes alert + +#### "Alert on" log entry sample + +``` +daemon.warning: Jun 14 15:25:00 node1 event_log[8310]: +{ + "object":"cluster", + "state": true, + "time": 1434284700, + "node_count": 1, + "type":"even_node_count" +} +``` + +This log entry is an example of when the alert for "True high availability requires an odd number of nodes with a minimum of three nodes" has been turned on as result of the cluster having only one node. 
+ +**Log entry components:** + +- `daemon.warning­` - Severity of entry is warning +- `Jun 14 15:25:00` - The timestamp of the event +- `node1­` - Machine name +- `event_log` -­ Static text that always appears +- `[8310]­` - Process ID +- `"object":"cluster"­` - The object related to this alert +- `"state":true` -­ Current state of the alert +- `"time":1434284700­` - Can be ignored +- `"node_count":1­` - The number of nodes in the cluster +- `"type":"even_node_count"­` - The code name identifier of this specific event. See [logged alerts and events]({{< relref "/operate/rs/7.4/clusters/logging/alerts-events" >}}) for more details. + +#### "Alert off" log entry sample + +``` +daemon.warning: Jun 14 15:30:40 node1 event_log[8310]: +{ + "object":"cluster", + "state": false, + "time": 1434285200, + "node_count": 3, + "type":"even_node_count" +} +``` + +This log entry is an example of when the alert for "True high availability requires an odd number of nodes with a minimum of three nodes" has been turned off as result of the cluster having 3 nodes. + +**Log entry components:** + +- `daemon.warning` - Severity of entry is warning +- `Jun 14 15:30:40` -­ The timestamp of the event +- `node1­` - Machine name +- `event_log­` - Static text that always appears +- `[8310]` -­ Process ID +- `"object":"cluster"` -­ The object related to this alert +- `"state":false­` - Current state of the alert +- `"time":1434285200­` - Can be ignored +- `"node_count":3­` - The number of nodes in the cluster +- `"type":"even_node_count"` -­ The code name of this specific event. See [logged alerts and events]({{< relref "/operate/rs/7.4/clusters/logging/alerts-events" >}}) for more details. + +### Node has insufficient disk space for AOF rewrite + +#### "Alert on" log entry sample + +``` +daemon.err: Jun 15 13:51:23 node1 event_log[34252]: +{ + "used": 23457188, + "missing": 604602126, + "object": "node:1", + "free": 9867264, + "needed":637926578, + "state": true, + "time": 1434365483, + "disk": 705667072, + "type":"insufficient_disk_aofrw" +} +``` + +This log entry is an example of when the alert for "Node has insufficient disk space for AOF rewrite" has been turned on as result of not having enough persistent storage disk space for AOF rewrite purposes. It is missing 604602126 bytes. + +**Log entry components:** + +- `daemon.err`­ - Severity of entry is error +- `Jun 15 13:51:23` - The timestamp of the event +- `node1­` - Machine name +- `event_log` -­ Static text that always appears +- `[34252]` -­ Process ID +- `"used":23457188­` - The amount of disk space in bytes currently used for AOF files +- `"missing":604602126­` - The amount of disk space in bytes that is currently missing for AOF rewrite purposes +- `"object":"node:1″` -­ The object related to this alert +- `"free":9867264­` - The amount of disk space in bytes that is currently + free +- `"needed":637926578­` - The amount of total disk space in bytes that is needed for AOF rewrite purposes +- `"state":true­` - Current state of the alert +- `"time":1434365483` -­ Can be ignored +- `"disk":705667072­` - The total size in bytes of the persistent storage +- `"type":"insufficient_disk_aofrw"­` - The code name of this specific event. See [logged alerts and events]({{< relref "/operate/rs/7.4/clusters/logging/alerts-events" >}}) for more details. 
+ +#### "Alert off" log entry sample + +``` +daemon.info: Jun 15 13:51:11 node1 event_log[34252]: +{ + "used": 0, "missing":-21614592, + "object": "node:1", + "free": 21614592, + "needed": 0, + "state":false, + "time": 1434365471, + "disk": 705667072, + "type":"insufficient_disk_aofrw" +} +``` + +**Log entry components:** + +- `daemon.info­` - Severity of entry is info +- `Jun 15 13:51:11` - The timestamp of the event +- `node1­` - Machine name +- `event_log` -­ Static text that always appears +- `[34252]­` - Process ID +- `"used":0­` - The amount of disk space in bytes currently used for AOF files +- `"missing":‐21614592­` - The amount of disk space in bytes that is currently missing for AOF rewrite purposes. In this case, it is not missing because the number is negative. +- `"object":"node:1″` -­ The object related to this alert +- `"free":21614592` -­ The amount of disk space in bytes that is currently free +- `"needed":0­` - The amount of total disk space in bytes that is needed for AOF rewrite purposes. In this case, no space is needed. +- `"state":false­` - Current state of the alert +- `"time":1434365471­` - Can be ignored +- `"disk":705667072­` - The total size in bytes of the persistent storage +- `"type":"insufficient_disk_aofrw"`­ - The code name of this specific event. See [logged alerts and events]({{< relref "/operate/rs/7.4/clusters/logging/alerts-events" >}}) for more details. +--- +Title: Logging events +alwaysopen: false +categories: +- docs +- operate +- rs +description: Management actions performed with Redis Enterprise are logged to make + sure system management tasks are appropriately performed or monitored by administrators + and for compliance with regulatory standards. +hideListLinks: true +linkTitle: Logging +weight: 95 +url: '/operate/rs/7.4/clusters/logging/' +--- +Management actions performed with Redis Enterprise are logged to make sure system management tasks are appropriately performed or monitored by administrators and for compliance with regulatory standards. + +Log entries contain the +following information: + +1. Who performed the action? +1. What exactly was the performed action? +1. When was the action performed? +1. Did the action succeed or not? + +To get the list of logged events, you can use the REST API or +the **Logs** screen in the UI. The **Logs** screen displays the system and user +events regarding alerts, notifications, and configuration. + +{{Logs screen in the new Cluster Manager UI.}} + +You can use the **Logs** screen to review what actions a user has performed, such as editing a database's configuration. + +- [Redis slow + log]({{< relref "/operate/rs/7.4/clusters/logging/redis-slow-log.md" >}}) +- [rsyslog logging]({{< relref "/operate/rs/7.4/clusters/logging/rsyslog-logging/" >}}) + +## View logs in the UI + +Redis Enterprise provides log files for auditing cluster management actions and troubleshooting. You can view these logs in the UI and on the host operating system. + +To view event logs in the new Cluster Manager UI, go to **Cluster > Logs**. + +## View logs on the server + +Server logs can be found by default in the directory `/var/opt/redislabs/log/`. + +These log files are used by the Redis support team to troubleshoot issues. The logs you will most frequently interact with is 'event_log.log'. This log file is where logs of configuration actions within Redis are stored and is useful to determine events that occur within Redis Enterprise. 
+ +## Configure log timestamps + +Redis Enterprise allows you to configure log timestamps. To configure log timestamps in the new Cluster Manager UI: + +1. Go to **Cluster > Configuration > General**. + +1. Change the **Time zone** for the logs based on your location. +--- +Title: Manage clusters +alwaysopen: false +categories: +- docs +- operate +- rs +description: Administrative tasks and information related to the Redis Enterprise + cluster. +hideListLinks: false +linktitle: Clusters +weight: 36 +url: '/operate/rs/7.4/clusters/' +--- + +You can manage your Redis Enterprise Software clusters with several different tools: + +- Cluster Manager UI (the web-based user interface) +- Command-line tools ([rladmin]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}), [redis-cli]({{< relref "/develop/tools/cli" >}}), [crdb-cli]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli" >}})) +- [REST API]({{< relref "/operate/rs/7.4/references/rest-api/_index.md" >}}) +--- +Title: Previous releases +alwaysopen: false +categories: +- docs +- operate +- rs +description: Release notes for Redis Enterprise Software 5.6.0 (April 2020) and earlier + versions. +hideListLinks: true +linkTitle: Previous releases +weight: 100 +url: '/operate/rs/7.4/release-notes/legacy-release-notes/' +--- + +{{}} +--- +Title: Release notes +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +hideListLinks: true +weight: 90 +url: '/operate/rs/7.4/release-notes/' +--- + +Here's what changed recently in Redis Enterprise Software: + +{{< table-children columnNames="Version (Release date) ,Major changes,OSS Redis compatibility" columnSources="LinkTitle,Description,compatibleOSSVersion" enableLinks="LinkTitle" >}} +--- +Title: What's new in Redis Enterprise Software 6.x? +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +draft: true +weight: 20 +url: '/operate/rs/7.4/new-features-redis-enterprise/' +--- +Below are detailed a few of the major features of this release of Redis Enterprise Software +along with bug fixes and patches. + +## Geo-Distributed Active-Active Conflict-free Replicated Databases (CRDB) {#geodistributed-activeactive-conflictfree-replicated-databases-crdb} + +Developing globally distributed applications can be challenging, as +developers have to think about race conditions and complex combinations +of events under geo-failovers and cross-region write conflicts. Active-Active databases +simplify developing such applications by directly using built-in smarts +for handling conflicting writes based on the data type in use. Instead +of depending on just simplistic "last-writer-wins" type conflict +resolution, geo-distributed Active-Active databases combines techniques defined in CRDT +(conflict-free replicated data types) research with Redis types to +provide smart and automatic conflict resolution based on the data type's +intent. + +For more information, go here. For information, go to [Developing with +Active-Active databases]({{< relref "/operate/rs/7.4/developing/crdbs" >}}). + +## Redis modules + +Redis Modules enable you to extend the functionality of Redis Enterprise +Software. One can add new data types, capabilities, etc. to tailor the +cluster to a specific use case or need. Once installed, modules benefit +from the high performance, scalability, and high availability that Redis +Enterprise is known for. 
+ +### Certified modules + +Redis developed and certified these modules for use with Redis Enterprise Software: + +- [RedisBloom]({{< relref "/operate/modules/redisbloom" >}}) + - Enables Redis to have a scalable bloom filter as a data type. Bloom + filters are probabilistic data structures that quickly determine if something is contained within a set. +- RedisGraph + - RedisGraph is the first queryable Property Graph database to use sparse + matrices to represent the adjacency matrix in graphs and linear algebra to query the graph. + RedisGraph uses [Cypher](https://www.opencypher.org/) as its query language. +- [RedisJSON]({{< relref "/operate/modules/redisjson" >}}) + - Now you have the convenience JSON as a built-in data type and easily + able to address nested data via a path. +- [RediSearch]({{< relref "/operate/modules/redisearch" >}}) + - This module turns Redis into a distributed in-memory + full-text indexing and search beast. + +### Custom modules + +In addition, Redis Enterprise Software provides the ability to load and +use custom [Redis modules](https://redislabs.com/community/redis-modules-hub/) or +of your own creation. + +## Support for Docker + +Deploying and running your Redis Enterprise Software cluster on Docker +containers is supported in development systems and +available to pull from Docker hub. With the official image, you can +easily and quickly test several containers to build the scalable +and highly available cluster Redis Enterprise Software is famous for. + +For more information go to [quick start with Redis Enterprise Software +on Docker.]({{< relref "/operate/rs/7.4/installing-upgrading/get-started-docker.md" >}}) + +## LDAP integration + +As part of our continued emphasis on security, administrative user +accounts in Redis Enterprise Software can now use either built-in +authentication or authenticate externally via LDAP with saslauthd. The +accounts can be used for administering resources on the cluster via +command line, Rest API, or admin console. + +For more information see [LDAP +Integration]({{< relref "/operate/rs/7.4/security/passwords-users-roles.md#setting-up-ldap" >}}). +--- +Title: Manage installation questions +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes Redis Enterprise Software installation questions and how to + answer them automatically. +linkTitle: Manage install questions +weight: 25 +url: '/operate/rs/7.4/installing-upgrading/install/manage-installation-questions/' +--- + +Several questions are displayed during the Redis Enterprise Software installation process. + +Here, you'll find a list of these questions and learn how to automatically answer these questions to perform a silent install. + +## Installation questions + +Several questions appear during installation: + +- **Linux swap file** - `Swap is enabled. Do you want to proceed? [Y/N]?` + + We recommend that you [disable Linux swap]({{< relref "/operate/rs/7.4/installing-upgrading/configuring/linux-swap.md" >}}) in the operating system configuration + to give Redis Enterprise Software control of the memory allocation. + +- **Automatic OS tuning** - `Do you want to automatically tune the system for best performance [Y/N]?` + + To allow the installation process to optimize the OS for Redis Enterprise Software, answer `Y`. + The installation process prompts you for additional information. + + The `/opt/redislabs/sbin/systune.sh` file contains details about the tuning process. 
+ +- **Network time** - `Do you want to set up NTP time synchronization now [Y/N]?` + + Redis Enterprise Software requires that all cluster nodes have synchronized time. + You can either let the installation process configure NTP + or you can [configure NTP manually]({{< relref "/operate/rs/7.4/clusters/configure/sync-clocks.md" >}}). + +- **Firewall ports** - `Would you like to open RedisLabs cluster ports on the default firewall zone [Y/N]?` + + Redis Enterprise Software requires that all nodes have [specific network ports]({{< relref "/operate/rs/7.4/networking/port-configurations.md" >}}) open. + To open the ports, you can: + + - Answer `Y` to let the installation process open these ports. + - Answer `N` and configure the firewall manually for [RHEL/CentOS firewall]({{< relref "/operate/rs/7.4/installing-upgrading/configuring/centos-rhel-firewall" >}}). + - Answer `N` and configure the firewall on the node manually for your OS. + +- **Installation verification (rlcheck)** - `Would you like to run rlcheck to verify proper configuration? [Y/N]?` + + Run the `rlcheck` installation verification to make sure that the installation completed successfully. + If you want to run this verification at a later time, you can run: + + ```sh + /opt/redislabs/bin/rlcheck + ``` + +- **User already exists** - `The user 'redislabs' already exists, which may lead to problems if it wasn't configured correctly. Would you like to proceed with the installation? (Y/N)?` + +- **Group already exists** - `The group 'redislabs' already exists, which may lead to problems if it wasn't configured correctly. Would you like to proceed with the installation? (Y/N)?` + +## Answer install questions automatically + +To perform a silent (or automated) install, answer the questions when you start the [install]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-on-linux" >}}). + +### Answer yes to all questions + +To automatically answer `yes` to all questions (which accepts the default values), run the [installation script]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-script" >}}) with the `-y` parameter: + +```bash +./install.sh -y +``` + +### Configure file to answer + +Use an answer file to manage your response: + +1. Create a text file to serve as an answer file. + + The answer file can contain any of the parameters for the installation questions and indicate the answer for each question with `yes` or `no`. + + For example: + + ```sh + ignore_swap=no + systune=yes + ntp=no + firewall=no + rlcheck=yes + ignore_existing_osuser_osgroup=no + ``` + + If you use `systune=yes`, the installation answers `yes` to all of the system tuning questions. + +1. Run the [installation script]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-script" >}}) with the `-c` command-line option and add the path to the answer file. + + For example: + + ```sh + ./install.sh -c /home/user/answers + ``` + +--- +Title: Install Redis Enterprise Software on Linux +alwaysopen: false +categories: +- docs +- operate +- rs +description: Install Redis Enterprise Software on Linux. +linkTitle: Install on Linux +weight: 10 +url: '/operate/rs/7.4/installing-upgrading/install/install-on-linux/' +--- + +After you [download a Redis Enterprise Software installation package]({{< relref "/operate/rs/7.4/installing-upgrading/install/prepare-install/download-install-package" >}}), install it on one of the nodes in the cluster. 
+ +For installation on machines without an internet connection, see [Offline installation]({{< relref "/operate/rs/7.4/installing-upgrading/install/offline-installation" >}}). + +## Install on Linux + +To install Redis Enterprise Software, use the command line: + +1. Copy the installation package to the node. + +1. On the node, change to the directory where the installation package is located and extract the installation files: + + ```sh + tar vxf + ``` + +1. _(Optional)_ Use the {{< download "GPG key file" "../GPG-KEY-redislabs-packages.gpg" >}} to confirm the authenticity of Ubuntu/Debian or RHEL RPM packages: + + - For Ubuntu: + 1. Import the key: + ```sh + gpg --import + ``` + 2. Verify the package signature: + ```sh + dpkg-sig --verify + ``` + + - For RHEL: + 1. Import the key: + ```sh + rpm --import + ``` + 2. Verify the package signature: + ```sh + rpm --checksig + ``` + +1. To start the installation process, run the installation script. See [installation script options]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-script" >}}) for a list of command-line options you can add to the following command: + + ```sh + sudo ./install.sh + ``` + + {{< note >}} +- The Redis Enterprise Software files are installed in the default [file locations]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/file-locations.md" >}}). +- By default, Redis Enterprise Software runs on the OS as the `redislabs` user and `redislabs` group. If needed, you can [specify a different user and group]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-user-and-group.md" >}}) during the installation. +- You must either be the root user or use `sudo` to run the installation script. + {{< /note >}} + +1. Answer the [installation questions]({{< relref "/operate/rs/7.4/installing-upgrading/install/manage-installation-questions.md" >}}) when shown to complete the installation process. + + {{< note >}} +To skip the installation questions, use one of the following methods: + +- Run `./install.sh -y` to answer yes to all of the questions. +- Create an [answer file]({{< relref "/operate/rs/7.4/installing-upgrading/install/manage-installation-questions#configure-file-to-answer" >}}) to answer installation questions automatically. + {{< /note >}} + +1. When installation completes successfully, the output displays the Cluster Manager UI's IP address: + + ```sh + Summary: + ------- + ALL TESTS PASSED. + 2017-04-24 10:54:15 [!] Please logout and login again to make + sure all environment changes are applied. + 2017-04-24 10:54:15 [!] Point your browser at the following + URL to continue: + 2017-04-24 10:54:15 [!] https://:8443 + ``` + +1. Repeat this process for each node in the cluster. + + +## Auto Tiering installation + +If you want to use Auto Tiering for your databases, review the prerequisites, storage requirements, and [other considerations]({{< relref "/operate/rs/7.4/databases/auto-tiering/" >}}) for Auto Tiering databases and prepare and format the flash memory. + +After you [install on Linux](#install-on-linux), use the `prepare_flash` script to prepare and format flash memory: + +```sh +sudo /opt/redislabs/sbin/prepare_flash.sh +``` + +This script finds unformatted disks and mounts them as RAID partitions in `/var/opt/redislabs/flash`. 
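+
+If you want to confirm that the flash mount point was created before you continue, a generic filesystem check is enough. The path below is the default location created by the script, as noted above:
+
+```sh
+# Show the device and size backing the Auto Tiering flash mount.
+df -h /var/opt/redislabs/flash
+```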
+ +To verify the disk configuration, run: + +```sh +sudo lsblk +``` + +## More info and options + +To learn more about customization and find answers to related questions, see: + +- [CentOS/RHEL firewall configuration]({{< relref "/operate/rs/7.4/installing-upgrading/configuring/centos-rhel-firewall.md" >}}) +- [Change socket file location]({{< relref "/operate/rs/7.4/installing-upgrading/configuring/change-location-socket-files.md" >}}) +- [Cluster DNS configuration]({{< relref "/operate/rs/7.4/networking/cluster-dns.md" >}}) +- [Cluster load balancer setup]({{< relref "/operate/rs/7.4/networking/cluster-lba-setup.md" >}}) +- [mDNS client prerequisites]({{< relref "/operate/rs/7.4/networking/mdns.md" >}}) +- [File locations]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/file-locations.md" >}}) +- [Supported platforms]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/supported-platforms.md" >}}) + +## Limitations + +Several Redis Enterprise Software installation reference files are installed to the directory `/etc/opt/redislabs/` even if you use [custom installation directories]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-install-directories" >}}). + +As a workaround to install Redis Enterprise Software without using any root directories, do the following before installing Redis Enterprise Software: + +1. Create all custom, non-root directories you want to use with Redis Enterprise Software. + +1. Mount `/etc/opt/redislabs` to one of the custom, non-root directories. + +## Next steps + +1. [Create]({{< relref "/operate/rs/7.4/clusters/new-cluster-setup.md" >}}) + or [join]({{< relref "/operate/rs/7.4/clusters/add-node.md" >}}) an existing Redis Enterprise Software cluster. + +1. [Create a database]({{< relref "/operate/rs/7.4/databases/create" >}}). + + For geo-distributed Active-Active replication, create an [Active-Active]({{< relref "/operate/rs/7.4/databases/active-active/create.md" >}}) database. + +1. [Add users]({{< relref "/operate/rs/7.4/security/access-control/create-users" >}}) to the cluster with specific permissions. To begin, start with [Access control]({{< relref "/operate/rs/7.4/security/access-control" >}}). +--- +Title: Supported platforms +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis Enterprise Software is supported on several operating systems, + cloud environments, and virtual environments. +linkTitle: Supported platforms +weight: 30 +tocEmbedHeaders: true +url: '/operate/rs/7.4/installing-upgrading/install/plan-deployment/supported-platforms/' +--- +{{< embed-md "supported-platforms-embed.md">}} +--- +Title: Configure AWS EC2 instances for Redis Enterprise Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: Considerations for installing and running Redis Enterprise Software on + Amazon Elastic Cloud Compute (EC2) instances. +linkTitle: AWS EC2 configuration +weight: 80 +url: '/operate/rs/7.4/installing-upgrading/install/plan-deployment/configuring-aws-instances/' +--- +There are some special considerations for installing +and running Redis Enterprise Software on Amazon Elastic Cloud Compute (EC2) instances. + +These include: + +- [Storage considerations](#storage) +- [Instance types](#instance-types) +- [Security group configuration](#security) + +## Storage considerations {#storage} + +AWS EC2 instances are ephemeral, but your persistent database storage should +not be. 
If you require a persistent storage location for your database, +the storage must be located outside of the instance. When you +set up an instance, make sure it has a properly sized EBS-backed volume +connected. When you set up Redis Enterprise Software on the instance, make sure that [the +persistence storage]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/persistent-ephemeral-storage" >}}) is configured to use this volume. + +{{< note >}} +After [installing the Redis Enterprise Software package]({{< relref "/operate/rs/7.4/installing-upgrading" >}}) on the instance +and **before** running through [the setup process]({{< relref "/operate/rs/7.4/clusters/new-cluster-setup.md" >}}), +you must give the group `redislabs` permission to the EBS volume by +running the following command from the OS command-line interface (CLI): +```sh +chown redislabs:redislabs /< ebs folder name> +``` +{{< /note >}} + +Another feature that may be of importance to you is the use of +Provisioned IOPS for EBS-backed volumes. Provisioned IOPS guarantee a +certain level of disk performance. There are two features in Redis Enterprise Software where +this feature could be critical to use: + +1. When using [Auto Tiering]({{< relref "/operate/rs/7.4/databases/auto-tiering/" >}}) +1. When using AOF on every write and there is a high write load. In + this case, the provisioned IOPS should be on the nodes used as + replicas in the cluster. + +## Instance types {#instance-types} + +Choose an instance type that has (at minimum) enough free memory and +disk space to meet the Redis Enterprise Software [hardware +requirements]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/hardware-requirements.md" >}}). + +In addition, some instance types are optimized for EBS-backed volumes +and some are not. If you are using persistent storage, you should use an +instance type that is, especially if disk drain rate matters to your database +implementation. + +## Security group configuration {#security} + +When configuring the security group: + +- Define a custom TCP rule for port 8443 to allow web browser access + to the Redis Enterprise Software Cluster Manager UI from the IP address range you use to + access the Cluster Manager UI. +- If you are using the DNS resolving option with Redis Enterprise Software, define a DNS UDP + rule for port 53 to allow access to the databases' endpoints by + using the [DNS resolving mechanism]({{< relref "/operate/rs/7.4/networking/cluster-dns" >}}). +- To create a cluster that has multiple nodes all running as instances on AWS, + you need to define a security group that has an All TCP rule for all ports, 0 - 65535, + and add it to all instances that are part of the cluster. + This ensures that all nodes are able to communicate with each other. + To limit the number of open ports, you can open only the [ports used by Redis Enterprise Software]({{< relref "/operate/rs/7.4/networking/port-configurations.md" >}}). + +After successfully launching the instances: + +1. Install Redis Enterprise Software from the [Linux package or AWS AMI]({{< relref "/operate/rs/7.4/installing-upgrading" >}}). +2. [Set up the cluster]({{< relref "/operate/rs/7.4/clusters/new-cluster-setup.md" >}}). +--- +Title: File locations +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis Enterprise Software file installation locations. 
+linkTitle: File locations +weight: 60 +url: '/operate/rs/7.4/installing-upgrading/install/plan-deployment/file-locations/' +--- +{{}} +To ensure that Redis Enterprise Software functions properly, be careful with the files in the application directories. If you modify or delete the application files, Redis Enterprise Software might not work as expected. +{{}} + +## Application directories + +The directories that Redis Enterprise Software installs into are: + +| **Path** | **Description** | +|------------|-----------------| +| /opt/redislabs | Main installation directory for all Redis Enterprise Software binaries | +| /opt/redislabs/bin | Binaries for all the utilities for command-line access and management, such as [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}) or [`redis-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/redis-cli" >}}) | +| /opt/redislabs/config | System configuration files | +| /opt/redislabs/lib | System library files | +| /opt/redislabs/sbin | System binaries for tweaking provisioning | + +## Configuration and data directories + +The default directories that Redis Enterprise Software uses for data and metadata are: + +| **Path** | **Description** | +|------------|-----------------| +| /var/opt/redislabs | Default storage location for the cluster data, system logs, backups, and ephemeral, persisted data | +| /var/opt/redislabs/log | System logs for Redis Enterprise Software | +| /var/opt/redislabs/run | Socket files for Redis Enterprise Software | +| /etc/opt/redislabs | Default location for cluster manager configuration and certificates | +| /tmp | Temporary files | + +You can change these file locations for: + +- [Ephemeral and persistence storage]({{< relref "/operate/rs/7.4/clusters/new-cluster-setup.md" >}}) during cluster setup +- [Socket files]({{< relref "/operate/rs/7.4/installing-upgrading/configuring/change-location-socket-files.md" >}}) after cluster setup +--- +Title: Persistent and ephemeral node storage +alwaysopen: false +categories: +- docs +- operate +- rs +- kubernetes +description: Configure paths for persistent storage and ephemeral storage. +linktitle: Persistent node storage +toc: 'true' +weight: 50 +url: '/operate/rs/7.4/installing-upgrading/install/plan-deployment/persistent-ephemeral-storage/' +--- +For each node in the cluster, you can configure paths for both persistent +storage and ephemeral storage. To do so, the volume must have full permissions for user and group `redislabs` or users:group `redislabs:redislabs`. See the [Customize system user and group]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-user-and-group" >}}) page for instructions. + +{{< note >}} +The persistent storage and ephemeral storage discussed in this document are not related +to Redis persistence or AWS ephemeral drives. +{{< /note >}} + +## Persistent storage + +Persistent storage is mandatory. The cluster uses persistent storage to store +information that needs to persist if a shard or a node fails, +such as server logs, configurations, and files. + +To set the frequency of syncs, you can configure [persistence]({{< relref "/operate/rs/7.4/databases/configure/database-persistence" >}}) +options for a database. + +The persistent volume must be a storage area network (SAN) +using an EXT4 or XFS file system and be connected as an external storage volume. + +When using append-only file (AOF) persistence, use flash-based storage +for the persistent volume. 
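+
+As a quick sanity check before cluster setup, you can confirm that the volume backing the persistent storage path uses an EXT4 or XFS file system with standard Linux tools. The path below is the default storage location; adjust it if you plan to configure a different persistent path:
+
+```sh
+# Show the mount target, source device, and file system type for the
+# directory that will hold persistent data.
+findmnt -T /var/opt/redislabs -o TARGET,SOURCE,FSTYPE
+```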
+ +## Ephemeral storage + +Ephemeral storage is optional. If configured, temporary information that does not need to be persisted is stored by the cluster in the ephemeral storage. +This improves performance and helps reduce the load on the persistent storage. + +Ephemeral storage must be a locally attached volume on each node. + +## Disk size requirements + +For disk size requirements, see: + +- [Hardware + requirements]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/hardware-requirements" >}}) + for general guidelines regarding the ideal disk size for each type of + storage. +- [Disk size requirements for extreme write + scenarios]({{< relref "/operate/rs/7.4/clusters/optimize/disk-sizing-heavy-write-scenarios" >}}) + for special considerations when dealing with a high rate of write + commands. +--- +Title: Plan Redis Enterprise Software deployment +alwaysopen: false +categories: +- docs +- operate +- rs +description: Plan a deployment of Redis Enterprise Software. +hideListLinks: true +linkTitle: Plan deployment +weight: 4 +url: '/operate/rs/7.4/installing-upgrading/install/plan-deployment/' +--- + +Before installing Redis Enterprise Software, you need to: + +- Set up your hardware. See [Hardware requirements]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/hardware-requirements.md" >}}) and [Persistent and ephemeral node storage +]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/persistent-ephemeral-storage" >}}) for more information. + +- Choose your [deployment platform]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/supported-platforms.md" >}}). + + Redis Enterprise Software supports a variety of platforms, including: + + - Multiple Linux distributions (Ubuntu, Red Hat Enterprise Linux (RHEL), IBM CentOS, Oracle Linux) + - [Amazon AWS AMI]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/configuring-aws-instances" >}}) + - [Docker container]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/docker-quickstart" >}}) (for development and testing only) + - [Kubernetes]({{< relref "/operate/kubernetes" >}}) + + For more details, see [Supported platforms]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/supported-platforms.md" >}}). + +- Open appropriate [network ports]({{< relref "/operate/rs/7.4/networking/port-configurations.md" >}}) in the firewall to allow connections to the nodes. + +- Configure [cluster DNS]({{< relref "/operate/rs/7.4/networking/cluster-dns.md" >}}) so that cluster nodes can reach each other by DNS names. +- By default, the installation process requires an internet connection to install dependencies and synchronize the operating system clock. To learn more, see [Offline installation]({{< relref "/operate/rs/7.4/installing-upgrading/install/offline-installation" >}}). + +## Next steps + +After you finish planning your deployment, you can: + +- [Download an installation package]({{< relref "/operate/rs/7.4/installing-upgrading/install/prepare-install/download-install-package" >}}). + +- [Prepare to install]({{< relref "/operate/rs/7.4/installing-upgrading/install/prepare-install" >}}) Redis Enterprise Software. + +- [View installation questions]({{< relref "/operate/rs/7.4/installing-upgrading/install/manage-installation-questions" >}}) and prepare answers before installation. 
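+
+As a final planning step, you can sanity-check each prospective node with generic OS commands, for example to confirm that the system clock is synchronized and that ports you plan to use are not already taken. The ports below are only examples drawn from the points above (8443 for the Cluster Manager UI and 53 for the DNS resolving option); see the port configuration reference for the full list:
+
+```sh
+# Confirm that the OS clock is synchronized (systemd-based distributions).
+timedatectl status | grep -i synchronized
+
+# Check whether anything is already listening on the example ports.
+sudo ss -ltnup | grep -E ':(8443|53)\s' || echo "example ports appear to be free"
+```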
+--- +Title: Hardware requirements +alwaysopen: false +categories: +- docs +- operate +- rs +- kubernetes +description: Redis Enterprise Software hardware requirements for development and production + environments. +linkTitle: Hardware requirements +weight: 20 +url: '/operate/rs/7.4/installing-upgrading/install/plan-deployment/hardware-requirements/' +--- +{{< embed-md "hardware-requirements-embed.md" >}} + +## Sizing considerations + +### General database sizing {#general-sizing} + +Factors to consider when sizing your database. + +- **Dataset size** – Your limit should be greater than your dataset size to leave room for overhead. +- **Database throughput** – High throughput needs more shards, leading to a higher memory limit. +- [**Modules**]({{< relref "/operate/oss_and_stack/stack-with-enterprise" >}}) – Using modules with your database consumes more memory. +- [**Database clustering**]({{< relref "/operate/rs/7.4/databases/durability-ha/clustering" >}}) – Allows you to spread your data into shards across multiple nodes. +- [**Database replication**]({{< relref "/operate/rs/7.4/databases/durability-ha/replication" >}}) – Enabling replication doubles memory consumption. + +### Active-Active database sizing {#active-active-sizing} + +Additional factors for sizing Active-Active databases: + +- [**Active-Active replication**]({{< relref "/operate/rs/7.4/databases/active-active" >}}) – Requires double the memory of regular replication, which can be up to two times (2x) the original data size per instance. +- [**Database replication backlog**]({{< relref "/operate/rs/7.4/databases/durability-ha/replication#database-replication-backlog" >}}) – For synchronization between shards. By default, this is set to 1% of the database size. +- [**Active-Active replication backlog**]({{< relref "/operate/rs/7.4/databases/active-active/manage#replication-backlog" >}}) – For synchronization between clusters. By default, this is set to 1% of the database size. + +{{}} +Active-Active databases have a lower threshold for activating the eviction policy, because it requires propagation to all participating clusters. The eviction policy starts to evict keys when one of the Active-Active instances reaches 80% of its memory limit. +{{}} + +### Sizing databases with Auto Tiering enabled {#redis-on-flash-sizing} + +Additional factors for sizing databases with Auto Tiering enabled: + +- [**Database persistence**]({{< relref "/operate/rs/7.4/databases/configure/database-persistence#redis-on-flash-data-persistence" >}}) – Auto Tiering uses dual database persistence where both the primary and replica shards persist to disk. This may add some processor and network overhead, especially in cloud configurations with network-attached storage. + +--- +Title: Customize installation directories +alwaysopen: false +categories: +- docs +- operate +- rs +description: Customize Redis Enterprise Software installation directories. +linkTitle: Customize install locations +weight: 30 +url: '/operate/rs/7.4/installing-upgrading/install/customize-install-directories/' +--- + +When you install Redis Enterprise Software on Red Hat Enterprise Linux, you can customize the installation directories. + +The files are installed in the `redislabs` directory located in the path that you specify. + +{{< note >}} +- When you install with custom directories, the installation does not run as an RPM file. +- If a `redislabs` directory already exists in the path that you specify, the installation fails. 
+- All nodes in a cluster must be installed with the same file locations. +- Custom installation directories are not supported for databases using Auto Tiering. +{{< /note >}} + +You can specify these file locations: + +| Files | Installer flag | Example parameter | Example file location | +| ------------------- | -------------- | ----------------- | --------------------- | +| Binaries files | --install-dir | /opt | /opt/redislabs | +| Configuration files | --config-dir | /etc/opt | /etc/opt/redislabs | +| Data and log files | --var-dir | /var/opt | /var/opt/redislabs | + +These files are not in the custom directories: + +- OS files + - /etc/cron.d/redislabs + - /etc/firewalld/services + - /etc/firewalld/services/redislabs-clients.xml + - /etc/firewalld/services/redislabs.xml + - /etc/ld.so.conf.d/redislabs_ldconfig.conf.tmpl + - /etc/logrotate.d/redislabs + - /etc/profile.d/redislabs_env.sh + - /usr/lib/systemd/system/rlec_supervisor.service.tmpl + - /usr/share/selinux/mls/redislabs.pp + - /usr/share/selinux/targeted/redislabs.pp + +- Installation reference files + - /etc/opt/redislabs/redislabs_custom_install_version + - /etc/opt/redislabs/redislabs_env_config.sh + +To specify directories during [installation]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-on-linux" >}}), include installer flags as [command-line options]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-script" >}}) when you run the `install.sh` script. For example: + +```sh +sudo ./install.sh --install-dir --config-dir --var-dir +``` + +## Limitations + +Several Redis Enterprise Software installation reference files are installed to the directory `/etc/opt/redislabs/` even if you use custom installation directories. + +As a workaround to install Redis Enterprise Software without using any root directories, do the following before installing Redis Enterprise Software: + +1. Create all custom, non-root directories you want to use with Redis Enterprise Software. + +1. Mount `/etc/opt/redislabs` to one of the custom, non-root directories. +--- +Title: Offline installation +alwaysopen: false +categories: +- docs +- operate +- rs +description: If you install Redis Enterprise Software on a machine with no internet + connection, you need to perform two tasks manually. +linkTitle: Offline installation +weight: 60 +url: '/operate/rs/7.4/installing-upgrading/install/offline-installation/' +--- +By default, the installation process requires an internet connection to +enable installing dependency packages and for [synchronizing the +operating system clock]({{< relref "/operate/rs/7.4/clusters/configure/sync-clocks.md" >}}) against an NTP server. + +If you install Redis Enterprise Software on a machine without an +internet connection, you need to perform two tasks manually. + +## Install required dependency packages + +When you install Redis Enterprise Software on a machine that is not connected to the internet, the installation process fails and displays an error message informing you it failed to automatically install dependencies. Review the installation steps in the console to see which missing dependencies the process attempted to install. Install all these dependency packages and then run the installation again. + +## Set up NTP time synchronization + +At the end of the installation, the process asks if you want to set up NTP time synchronization. 
If you choose `Yes` while you are not connected to the internet, the action fails and displays the appropriate error message, but the installation completes successfully. Despite the successful completion of the installation, you still have to configure all nodes for [NTP time synchronization]({{< relref "/operate/rs/7.4/clusters/configure/sync-clocks.md" >}}). +--- +Title: Customize system user and group +alwaysopen: false +categories: +- docs +- operate +- rs +description: Specify the user and group who own all Redis Enterprise Software processes. +linkTitle: Customize user and group +weight: 40 +url: '/operate/rs/7.4/installing-upgrading/install/customize-user-and-group/' +--- + +By default, Redis Enterprise Software is installed with the user:group `redislabs:redislabs`. See [Access control]({{< relref "/operate/rs/7.4/security/access-control" >}}) for user and group security information. + +During installation, you can specify the user and group that own all Redis Enterprise Software processes. + +If you specify the user only, then installation is run with the primary group that the user belongs to. + +{{< note >}} +- Custom installation user is supported on Red Hat Enterprise Linux. +- When you install with custom directories, the installation does not run as an RPM file. +- You must create the user and group before attempting to install Redis Software. +- You can specify an LDAP user as the installation user. +{{< /note >}} + +To customize the user or group during [installation]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-on-linux" >}}), include the `--os-user` or `--os-group` [command-line options]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-script" >}}) when you run the `install.sh` script. For example: + +```sh +sudo ./install.sh --os-user --os-group +``` + +--- +Title: Download a Redis Enterprise Software installation package +alwaysopen: false +categories: +- docs +- operate +- rs +description: Download a Redis Enterprise Software installation package. +linkTitle: Download installation package +weight: 20 +url: '/operate/rs/7.4/installing-upgrading/install/prepare-install/download-install-package/' +--- + +To download the installation package for any of the supported platforms: + +1. Go to the [Redis download page](https://cloud.redis.io/#/rlec-downloads). +1. Sign in with your Redis credentials or create a new account. +1. In the **Downloads** section for Redis Enterprise Software, select the installation package for your platform then select **Go**. + +{{< note >}} +Before you install the Linux package or AWS AMI on an AWS EC2 instance, +review the [configuration requirements for AWS EC2 instances]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/configuring-aws-instances" >}}). +{{< /note >}} +--- +Title: Ensure port availability +alwaysopen: false +categories: +- docs +- operate +- rs +description: Make sure required ports are available. +linkTitle: Ensure port availability +weight: 40 +url: '/operate/rs/7.4/installing-upgrading/install/prepare-install/port-availability/' +--- + +Before [installing Redis Enterprise Software]({{< relref "/operate/rs/7.4/installing-upgrading/install" >}}), make sure all required ports are available. + +{{}} + +## Update `sysctl.conf` to avoid port collisions + +{{}} + +## OS conflicts with port 53 + +{{}} +--- +Title: Prepare to install Redis Enterprise Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: Prepare to install Redis Enterprise Software. 
+hideListLinks: true +linkTitle: Prepare to install +weight: 6 +url: '/operate/rs/7.4/installing-upgrading/install/prepare-install/' +--- + +Before you install Redis Enterprise Software: + +- [Download an installation package]({{< relref "/operate/rs/7.4/installing-upgrading/install/prepare-install/download-install-package" >}}). + +- [View installation questions]({{< relref "/operate/rs/7.4/installing-upgrading/install/manage-installation-questions" >}}) and optionally prepare answers before installation. + +- Review the [security considerations]({{< relref "/operate/rs/7.4/security/" >}}) for your deployment. + +- Check that you have root-level access to each node, either directly or with `sudo`. + +- Check that all [required ports are available]({{< relref "/operate/rs/7.4/installing-upgrading/install/prepare-install/port-availability" >}}). + +- [Turn off Linux swap]({{< relref "/operate/rs/7.4/installing-upgrading/configuring/linux-swap.md" >}}) on all cluster nodes. + +- If you require the `redislabs` UID (user ID) and GID (group ID) numbers to be the same on all the nodes, create the `redislabs` user and group with the required numbers on each node. + +- If you want to use Auto Tiering for your databases, see [Auto Tiering installation]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-on-linux#auto-tiering-installation" >}}). + +## Next steps + +- View [installation script options]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-script" >}}) before starting the installation. + +- [Install Redis Enterprise Software]({{< relref "/operate/rs/7.4/installing-upgrading/install" >}}). +--- +Title: Install Redis Enterprise Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: Install Redis Enterprise Software on Linux. +hideListLinks: true +linkTitle: Install +weight: 35 +url: '/operate/rs/7.4/installing-upgrading/install/' +--- + +After you [plan your deployment]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment" >}}), [download a Redis Enterprise Software installation package]({{< relref "/operate/rs/7.4/installing-upgrading/install/prepare-install/download-install-package" >}}), and finish [installation preparation]({{< relref "/operate/rs/7.4/installing-upgrading/install/prepare-install" >}}): + +1. [Install the Redis Enterprise Software package]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-on-linux" >}}) on one of the nodes in the cluster. + +1. Repeat this process for each node in the cluster. + +For installation on machines without an internet connection, see [Offline installation]({{< relref "/operate/rs/7.4/installing-upgrading/install/offline-installation" >}}). + +## Permissions and access + +- Redis Enterprise Software installation creates the `redislabs:redislabs` user and group. + + Assigning other users to the `redislabs` group is optional. Users belonging to the `redislabs` group have permission to read and execute (e.g. use the `rladmin` status command) but are not allowed to write (or delete) files or directories. + +- Redis Enterprise Software is certified to run with permissions set to `750`, an industry standard. + + {{}} +Do not reduce permissions to `700`. This configuration has not been tested and is not supported. + {{}} + +## More info and options + +If you've already installed Redis Enterprise Software, you can also: + +- [Upgrade an existing deployment]({{< relref "/operate/rs/7.4/installing-upgrading/upgrading" >}}). 
+
+- [Uninstall an existing deployment]({{< relref "/operate/rs/7.4/installing-upgrading/uninstalling.md" >}}).
+
+To learn more about customization and find answers to related questions, see:
+
+- [CentOS/RHEL Firewall configuration]({{< relref "/operate/rs/7.4/installing-upgrading/configuring/centos-rhel-firewall.md" >}})
+- [Change socket file location]({{< relref "/operate/rs/7.4/installing-upgrading/configuring/change-location-socket-files.md" >}})
+- [Cluster DNS configuration]({{< relref "/operate/rs/7.4/networking/cluster-dns.md" >}})
+- [Cluster load balancer setup]({{< relref "/operate/rs/7.4/networking/cluster-lba-setup.md" >}})
+- [File locations]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/file-locations.md" >}})
+- [Supported platforms]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/supported-platforms.md" >}})
+- [Manage installation questions]({{< relref "/operate/rs/7.4/installing-upgrading/install/manage-installation-questions.md" >}})
+- [mDNS client prerequisites]({{< relref "/operate/rs/7.4/networking/mdns.md" >}})
+- [User and group ownership]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-user-and-group.md" >}})
+
+## Next steps
+
+After your cluster is set up with nodes, you can:
+
+- [Add users]({{< relref "/operate/rs/7.4/security/access-control/create-users" >}}) to the cluster with specific permissions. To begin, start with [Access control]({{< relref "/operate/rs/7.4/security/access-control" >}}).
+- [Create databases]({{< relref "/operate/rs/7.4/databases/create" >}}) to use with your applications.
+
+---
+Title: Installation script command-line options
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: Command-line options for the install.sh script.
+linkTitle: Installation script options
+weight: 20
+url: '/operate/rs/7.4/installing-upgrading/install/install-script/'
+---
+
+Run `./install.sh --help` to view command-line options supported by the installation script.
+
+The following options are supported:
+
+| Option | Description |
+|--------|-------------|
+| `-y` | Automatically answers `yes` to all install prompts, accepting all default values. See [Manage install questions]({{< relref "/operate/rs/7.4/installing-upgrading/install/manage-installation-questions" >}}). |
+| `-c ` | Specify answer file used to respond to install prompts. See [Manage install questions]({{< relref "/operate/rs/7.4/installing-upgrading/install/manage-installation-questions" >}}). |
+| `-s ` | Specify directory for redislabs unix sockets _(new installs only)_. |
+| `--install-dir ` | Specifies installation directory _(new installs only)_. See [Customize install locations]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-install-directories" >}}). |
+| `--config-dir ` | Configuration file directory *(new installs only)*. See [Customize install locations]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-install-directories" >}}). |
+| `--var-dir ` | Var directory used for installation *(new installs only)*. See [Customize install locations]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-install-directories" >}}). |
+| `--os-user ` | Operating system user account associated with install; default: `redislabs` *(new installs only)*. See [Customize user and group]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-user-and-group" >}}). |
+| `--os-group ` | Operating system group associated with install; default: `redislabs` *(new installs only)*. See [Customize user and group]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-user-and-group" >}}). |
+---
+Title: Redis Enterprise Software product lifecycle
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: The product lifecycle of Redis Enterprise Software.
+linkTitle: Product lifecycle
+weight: 100
+tocEmbedHeaders: true
+url: '/operate/rs/7.4/installing-upgrading/product-lifecycle/'
+---
+The Redis Enterprise Software product lifecycle fully reflects the [subscription agreement](https://redis.com/software-subscription-agreement).
+However, if there is any discrepancy between the two policies, the subscription agreement prevails.
+
+Redis Enterprise modules follow the [modules lifecycle]({{< relref "/operate/oss_and_stack/stack-with-enterprise/modules-lifecycle" >}}).
+
+## Release numbers
+
+Redis uses a four-place numbering scheme to designate released versions of its products.
+The format is “Major1.Major2.Minor-Build”.
+
+- The major sections of the version number represent fundamental changes and additions in
+  capabilities to Redis Enterprise Software. The Major1 and Major2 parts of the
+  version number are incremented based on the size and scale of the changes in each
+  release.
+- The Minor section of the version number represents quality improvements, fixes to
+  existing capabilities, and new capabilities, which are typically minor, feature-flagged, or optional.
+- The Build number is incremented with each build whenever any change is made to the binaries.
+
+Redis Enterprise Software typically gets two major releases every year, although product shipping cycles may vary.
+Maintenance releases, typically available on the last minor release of the current Major1.Major2 release, are made available on a monthly cadence, although cycles may vary.
+
+## End-of-life schedule {#endoflife-schedule}
+
+For Redis Enterprise Software versions 6.2 and later, the end-of-life (EOL) for each major release occurs 24 months after the formal release of the subsequent major version. Monthly maintenance is provided on the last minor release of each Major1.Major2 release.
+This update to the EOL policy allows a lead time of at least 24 months to upgrade to the new release after it is available.
+
+| Version - Release date | End of Life (EOL) |
+| ----------------------------------------- | ------------------ |
+| 7.4 – February 2024 | - |
+| 7.2 – August 2023 | February 28, 2026 |
+| 6.4 – February 2023 | August 31, 2025 |
+| 6.2 – August 2021 | February 28, 2025 |
+| 6.0 – May 2020 | May 31, 2022 |
+| 5.6 – April 2020 | October 31, 2021 |
+| 5.4 – December 2018 | December 31, 2020 |
+| 5.2 – June 2018 | December 31, 2019 |
+
+{{}}
+
+For detailed upgrade instructions, see [Upgrade a Redis Enterprise Software cluster]({{}}).
+---
+LinkTitle: Uninstall
+Title: Uninstall Redis Enterprise Software
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: null
+weight: 70
+url: '/operate/rs/7.4/installing-upgrading/uninstalling/'
+---
+
+Use the script `rl_uninstall.sh` to uninstall Redis Enterprise Software and remove its files from a node. The script also deletes all Redis data and configuration from the node.
+
+The uninstall script does not remove the node from the cluster, but the node's status changes to down. For node removal instructions, see [Remove a cluster node]({{}}).
+ +## Uninstall Redis Enterprise Software + +To uninstall Redis Enterprise Software from a cluster node: + +1. Navigate to the script's location, which is in `/opt/redislabs/bin/` by default. + +1. Run the uninstall script as the root user: + + ```sh + sudo ./rl_uninstall.sh + ``` + +When you run the uninstall script on a node, it only uninstalls Redis Enterprise Software from that node. To uninstall Redis Enterprise Software for the entire cluster, run the uninstall script on each cluster node. +--- +Title: Upgrade a cluster's operating system +alwaysopen: false +categories: +- docs +- operate +- rs +description: Upgrade a Redis Enterprise Software cluster's operating system to a later + major version. +linkTitle: Upgrade operating system +toc: 'true' +weight: 70 +url: '/operate/rs/7.4/installing-upgrading/upgrading/upgrade-os/' +--- + +To upgrade the operating system (OS) on a Redis Enterprise Software cluster to a later major version, perform a rolling upgrade. Because you upgrade one node at a time, you can upgrade your cluster's OS without downtime. + +## Prerequisites + +Before you upgrade a cluster's operating system: + +1. [Upgrade all nodes in the cluster]({{< relref "/operate/rs/7.4/installing-upgrading/upgrading/upgrade-cluster" >}}) to a Redis Enterprise Software version that supports the OS's current version and upgrade version. + + To learn which versions of Redis Enterprise Software support specific OS versions, see [Supported platforms]({{< relref "/operate/rs/7.4/references/supported-platforms#supported-platforms" >}}). + +1. If the cluster contains databases that use modules: + + 1. Update all nodes in the cluster to [Redis Enterprise Software version 7.2.4-52]({{< relref "/operate/rs/release-notes/rs-7-2-4-releases" >}}) or later before you upgrade the OS. + + 1. Check the status of modules using [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}): + + ```sh + rladmin status modules + ``` + + The output lists the module versions installed on the cluster and the module versions used by existing databases: + + ```sh + CLUSTER MODULES: + MODULE VERSION + RedisBloom 2.6.3 + RediSearch 2 2.8.4 + RedisGears 2.0.12 + RedisGraph 2.10.12 + RedisJSON 2.6.6 + RedisTimeSeries 1.10.6 + + DATABASE MODULES: + DB:ID NAME MODULE VERSION ARGS STATUS + db:1 db1 RediSearch 2 2.6.9 PARTITIONS AUTO OK, OLD MODULE VERSION + db:1 db1 RedisJSON 2.4.7 OK, OLD MODULE VERSION + ``` + + 1. If any databases use custom modules, manually uploaded modules, or modules marked with `OLD MODULE VERSION`, upload module packages for the OS upgrade version to a cluster node. See [Install a module on a cluster]({{< relref "/operate/oss_and_stack/stack-with-enterprise/install/add-module-to-cluster" >}}) for instructions. + + {{}} +The uploaded module packages have the following requirements: + +- The module is compiled for the OS upgrade version. + +- The module version matches the version currently used by databases. + {{}} + +1. If the cluster uses custom directories, make sure the OS upgrade version also supports custom directories, and specify the same custom directories during installation for all nodes. See [Customize installation directories]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-install-directories" >}}) for details. 
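+
+Before you start the rolling upgrade, it can also help to record each node's current state so you can compare it after the upgrade. The following is a minimal sketch using commands already referenced in this guide; the output varies by environment:
+
+```sh
+# Record the operating system release currently installed on the node.
+cat /etc/os-release
+
+# Record the Redis Enterprise version and status of each node in the cluster.
+rladmin status nodes
+```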
+ +## Perform OS rolling upgrade + +To upgrade the cluster's operating system, use one of the following rolling upgrade methods: + +- [Extra node method](#extra-node-upgrade) - recommended if you have additional resources available + +- [Replace node method](#replace-node-upgrade) - recommended if you cannot temporarily allocate additional resources + +### Extra node upgrade method {#extra-node-upgrade} + +1. Create a node with the OS upgrade version. + +1. [Install the cluster's current Redis Enterprise Software version]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-on-linux" >}}) on the new node using the installation package for the OS upgrade version. + +1. [Add the new node]({{< relref "/operate/rs/7.4/clusters/add-node" >}}) to the cluster. + +1. [Remove one node]({{< relref "/operate/rs/7.4/clusters/remove-node#remove-a-node" >}}) running the earlier OS version from the cluster. + +1. Repeat the previous steps until all nodes with the earlier OS version are removed. + +### Replace node upgrade method {#replace-node-upgrade} + +1. [Remove a node]({{< relref "/operate/rs/7.4/clusters/remove-node#remove-a-node" >}}) with the earlier OS version from the cluster. + +1. Uninstall Redis Enterprise Software from the removed node: + + ```sh + sudo ./rl_uninstall.sh + ``` + +1. Either upgrade the existing node to the OS upgrade version, or create a new node with the OS upgrade version. + +1. [Install the cluster's current Redis Enterprise Software version]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-on-linux" >}}) on the upgraded node using the installation package for the OS upgrade version. + +1. [Add the new node]({{< relref "/operate/rs/7.4/clusters/add-node" >}}) to the cluster. + + If you want to reuse the removed node's ID when you add the node to the cluster, run [`rladmin cluster join`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/join" >}}) with the `replace_node` flag: + + ```sh + rladmin cluster join nodes username password replace_node + ``` + +1. Verify node health: + + 1. Run `rlcheck` on all nodes: + + ```sh + rlcheck + ``` + + The output lists the result of each verification test: + + ```sh + ##### Welcome to Redis Enterprise Cluster settings verification utility #### + Running test: verify_bootstrap_status + PASS + ... + Running test: verify_encrypted_gossip + PASS + Summary: + ------- + ALL TESTS PASSED. + ``` + + For healthy nodes, the expected output is `ALL TESTS PASSED`. + + 1. Run [`rladmin status`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status" >}}) on the new node: + + ```sh + rladmin status extra all + ``` + + The expected output is the `OK` status for the cluster, nodes, endpoints, and shards: + + ```sh + CLUSTER: + OK. Cluster master: 2 () + Cluster health: OK, [0, 0.0, 0.0] + failures/minute - avg1 0.00, avg15 0.00, avg60 0.00. + ... + ``` + +1. Repeat the previous steps until all nodes with the earlier OS version are replaced. +--- +Title: Upgrade a Redis Enterprise Software cluster +alwaysopen: false +categories: +- docs +- operate +- rs +description: Upgrade a Redis Enterprise Software cluster. +linkTitle: Upgrade cluster +toc: 'true' +weight: 30 +tocEmbedHeaders: true +url: '/operate/rs/7.4/installing-upgrading/upgrading/upgrade-cluster/' +--- + +{{}} + +See the [Redis Enterprise Software product lifecycle]({{}}) for more information about release numbers and the end-of-life schedule. 
+ +## Upgrade prerequisites + +Before upgrading a cluster: + +- Verify access to [rlcheck]({{< relref "/operate/rs/7.4/references/cli-utilities/rlcheck/" >}}) and [rladmin]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/#use-the-rladmin-shell" >}}) commands + +- Verify that you meet the upgrade path requirements for your desired cluster version and review the relevant [release notes]({{< relref "/operate/rs/release-notes" >}}) for any preparation instructions. + +- Avoid changing the database configuration or performing other cluster management operations during the upgrade process, as this might cause unexpected results. + +- Upgrade the cluster's primary (master) node first. To identify the primary node, use one of the following methods: + + - **Nodes** screen in the new Cluster Manager UI (only available for Redis Enterprise versions 7.2 and later) + + - [`rladmin status nodes`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-nodes" >}}) command + + - [`GET /nodes/status`]({{< relref "/operate/rs/7.4/references/rest-api/requests/nodes/status#get-all-nodes-status" >}}) REST API request + +## Upgrade cluster + +Starting with the primary (master) node, follow these steps for every node in the cluster. To ensure cluster availability, upgrade each node separately. + +1. Verify node operation with the following commands: + + ``` shell + $ rlcheck + $ rladmin status extra all + ``` + +2. Download the Redis Enterprise Software installation package to the machine running the node from the Download Center on [https://cloud.redis.io](https://cloud.redis.io). + +3. Extract the installation package: + + ```sh + tar vxf + ``` + + {{}} +You cannot change the installation path or the user during the upgrade. + {{}} + +1. Run the install command. See [installation script options]({{< relref "/operate/rs/7.4/installing-upgrading/install/install-script" >}}) for a list of command-line options you can add to the following command: + + ``` shell + sudo ./install.sh + ``` + + The installation script automatically recognizes the upgrade and responds accordingly. + + The upgrade replaces all node processes, which might briefly interrupt any active connections. + +2. Verify the node was upgraded to the new version and is still operational: + + ``` shell + $ rlcheck + $ rladmin status extra all + ``` + +3. Visit the Cluster Manager UI. + + If the Cluster Manager UI was open in a web browser during the upgrade, refresh the browser to reload the console. + +After all nodes are upgraded, the cluster is fully upgraded. Certain features introduced in the new version of Redis Enterprise Software only become available after upgrading the entire cluster. + +After upgrading from version 6.0.x to 6.2.x, restart `cnm_exec` on each cluster node to enable more advanced state machine handling capabilities: + +```sh +supervisorctl restart cnm_exec +``` +--- +Title: Upgrade an Active-Active database +alwaysopen: false +categories: +- docs +- operate +- rs +description: Upgrade an Active-Active database. +linkTitle: Active-Active databases +weight: 70 +url: '/operate/rs/7.4/installing-upgrading/upgrading/upgrade-active-active/' +--- + +When you upgrade an [Active-Active (CRDB) database]({{< relref "/operate/rs/7.4/databases/active-active" >}}), you can also upgrade the CRDB protocol version and feature version. + +## CRDB protocol version guidelines + +Redis Enterprise Software versions 5.4.2 and later use CRDB protocol version 1 to help support Active-Active features. 
+ +CRDB protocol version 1 is backward compatible, which means Redis Enterprise v5.4.2 CRDB instances can understand write operations from instances using the earlier CRDB protocol version 0. + +After you upgrade one instance's CRDB protocol to version 1: + +- Any instances that use CRDB protocol version 1 can receive updates from both version 1 and version 0 instances. + +- However, instances that still use CRDB protocol version 0 cannot receive write updates from version 1 instances. + +- After you upgrade an instance from CRDB protocol version 0 to version 1, it automatically receives any missing write operations. + +Follow these upgrade guidelines: + +- Upgrade all instances of a specific CRDB within a reasonable time frame to avoid temporary inconsistencies between the instances. + +- Make sure that you upgrade all instances of a specific CRDB before you do global operations on the CRDB, such as removing instances and adding new instances. + +- As of v6.0.20, protocol version 0 is deprecated and support will be removed in a future version. + +- To avoid upgrade failures, update all Active-Active databases to protocol version 1 _before_ upgrading Redis Enterprise Software to v6.0.20 or later. + +## Feature version guidelines + +Starting with version 5.6.0, a new feature version (also called a _feature set version_) helps support new Active-Active features. + +When you update the feature version for an Active-Active database, the feature version is updated for all database instances. + +Follow these upgrade guidelines: + +- As of v6.0.20, feature version 0 is deprecated and support will be removed in a future version. + +- To avoid upgrade failures, update all Active-Active databases to protocol version 1 _before_ upgrading Redis Enterprise Software to v6.0.20 or later. + +## Upgrade Active-Active database instance + +To upgrade an Active-Active database (CRDB) instance: + +1. [Upgrade Redis Enterprise Software]({{< relref "/operate/rs/7.4/installing-upgrading/upgrading/upgrade-cluster" >}}) on each node in the clusters where the Active-Active instances are located. + +1. To see the status of your Active-Active instances, run: + + ```sh + rladmin status + ``` + + The statuses of the Active-Active instances on the node can indicate: + + - `OLD REDIS VERSION` + - `OLD CRDB PROTOCOL VERSION` + - `OLD CRBD FEATURESET VERSION` + + {{< image filename="/images/rs/crdb-upgrade-node.png" >}} + +1. To upgrade each Active-Active instance, including the Redis version and CRDB protocol version, run: + + - To upgrade a database without modules: + + ```sh + rladmin upgrade db + ``` + + - If the database has modules enabled and new module versions are available in the cluster, run `rladmin upgrade db` with additional parameters to upgrade the module versions when you upgrade the database. See [Upgrade modules]({{< relref "/operate/oss_and_stack/stack-with-enterprise/install/upgrade-module" >}}) for more details. + + If the protocol version is old, read the warning message carefully and confirm. + + {{< image filename="/images/rs/crdb-upgrade-protocol.png" >}} + + The Active-Active instance uses the new Redis version and CRDB protocol version. + + Use the `keep_crdt_protocol_version` option to upgrade the database feature version +without upgrading the CRDB protocol version. + + If you use this option, make sure that you upgrade the CRDB protocol soon after with the [`rladmin upgrade db`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/upgrade#upgrade-db" >}}) command. 
+ + You must upgrade the CRDB protocol before you update the CRDB feature set version. + +1. If the feature set version is old, you must upgrade all of the Active-Active instances. Then, to update the feature set for each active-active database, run: + + ```sh + crdb-cli crdb update --crdb-guid --featureset-version yes + ``` + + You can retrieve the `` with the following command: + + ```sh + crdb-cli crdb list + ``` + + Look for the fully qualified domain name (CLUSTER-FDQN) of your cluster and use the associated GUID: + + ```sh + CRDB-GUID NAME REPL-ID CLUSTER-FQDN + 700140c5-478e-49d7-ad3c-64d517ddc486 aatest 1 aatest1.example.com + 700140c5-478e-49d7-ad3c-64d517ddc486 aatest 2 aatest2.example.com + ``` + +1. Update module information in the CRDB configuration using the following command syntax: + + ```sh + crdb-cli crdb update --crdb-guid --default-db-config \ + '{ "module_list": + [ + { + "module_name": "", + "semantic_version": "" + }, + { + "module_name": "", + "semantic_version": "" + } + ]}' + ``` + + For example: + + ```sh + crdb-cli crdb update --crdb-guid 82a80988-f5fe-4fa5-bca0-aef2a0fd60db --default-db-config \ + '{ "module_list": + [ + { + "module_name": "search", + "semantic_version": "2.4.6" + }, + { + "module_name": "ReJSON", + "semantic_version": "2.4.5" + } + ]}' + ``` +--- +Title: Upgrade a Redis Enterprise Software database +alwaysopen: false +categories: +- docs +- operate +- rs +description: Upgrade a Redis Enterprise Software database. +linkTitle: Upgrade database +weight: 50 +url: '/operate/rs/7.4/installing-upgrading/upgrading/upgrade-database/' +--- + +## Default Redis database versions {#default-db-versions} + +When you upgrade an existing database, it uses the latest bundled Redis version unless you specify a different version with the `redis_version` option in the [REST API]({{< relref "/operate/rs/7.4/references/rest-api/requests/bdbs" >}}) or [`rladmin upgrade db`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/upgrade#upgrade-db" >}}). + +Redis Enterprise Software v6.x includes two Redis database versions: 6.0 and 6.2. +As of version 7.2, Redis Enterprise Software includes three Redis database versions. + +To view available Redis database versions: + +- In the Cluster Manager UI, see **Redis database versions** on the **Cluster > Configuration** screen. + +- Send a [`GET /nodes` REST API request]({{< relref "/operate/rs/7.4/references/rest-api/requests/nodes" >}}) and see `supported_database_versions` in the response. + +The default Redis database version differs between Redis Enterprise releases as follows: + +| Redis
Enterprise | Bundled Redis
DB versions | Default DB version
(upgraded/new databases) | +|-------|----------|-----| +| 7.4.2 | 6.0, 6.2, 7.2 | 7.2 | +| 7.2.4 | 6.0, 6.2, 7.2 | 7.2 | +| 6.4.2 | 6.0, 6.2 | 6.2 | +| 6.2.x | 6.0, 6.2 | 6.0 | + + +The upgrade policy is only relevant for Redis Enterprise Software versions 6.2.4 through 6.2.18. For more information about upgrade policies, see the [6.2 version of this document](https://docs.redis.com/6.2/rs/installing-upgrading/upgrading/#redis-upgrade-policy). + +## Upgrade prerequisites + +Before upgrading a database: + +- Review the relevant [release notes]({{< relref "/operate/rs/release-notes" >}}) for any preparation instructions. + +- Verify that the database version meets the minimums specified earlier. + + To determine the database version: + + - Use the Cluster Manager UI to open the **Configuration** tab for the database and select {{< image filename="/images/rs/icons/info-icon.png#no-click" alt="The About database button" width="18px" class="inline" >}} **About**. + + - _(Optional)_ Use the [`rladmin status extra all`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status" >}}) command to display configuration details: + + ```sh + rladmin status extra all + ``` + + When the database compatibility version is outdated,
`OLD REDIS VERSION` appears in the command output. + +- Verify the cluster is fully upgraded and operational. + + Use the Cluster Manager UI to display the **Configuration** tab for the cluster. The tab displays the cluster version information and the Redis database compatibility version. + +- Check client compatibility with the database version. + + If you run Redis Stack commands with Go-Redis versions 9 and later or Lettuce versions 6 and later, set the client’s protocol version to RESP2 before upgrading your database to Redis version 7.2 to prevent potential application issues due to RESP3 breaking changes. See [Client prerequisites for Redis 7.2 upgrade]({{< relref "/operate/rs/7.4/references/compatibility/resp#client-prerequisites-for-redis-72-upgrade" >}}) for more details and examples. + +- To avoid data loss during the upgrade, [back up your data]({{< relref "/operate/rs/7.4/databases/import-export/schedule-backups" >}}). + + You can [export the data]({{< relref "/operate/rs/7.4/databases/import-export/export-data" >}}) to an external location, [enable replication]({{< relref "/operate/rs/7.4/databases/durability-ha/replication" >}}), or [enable persistence]({{< relref "/operate/rs/7.4/databases/configure/database-persistence" >}}). + + When choosing how to back up data, keep the following in mind: + + - To reduce downtime when replication is enabled, a failover is performed before restarting the primary (master) database. + + - When persistence is enabled without replication, the database is unavailable during restart because the data is restored from the persistence file. AOF persistence restoration is slower than snapshot restoration. + +## Upgrade database + +To upgrade a database: + +1. _(Optional)_ Back up the database to minimize the risk of data loss. + +1. Use [`rladmin`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/upgrade" >}}) to upgrade the database. During the upgrade process, the database will restart without losing any data. + + - To upgrade a database without modules: + + ``` shell + rladmin upgrade db + ``` + + Example of a successful upgrade: + + ``` shell + rladmin> upgrade db demo + Monitoring d194c4a3-631c-4726-b799-331b399fc85c + active - SMUpgradeBDB init + active - SMUpgradeBDB wait_for_version + active - SMUpgradeBDB configure_shards + completed - SMUpgradeBDB + Done + ``` + + - If the database has modules enabled and new module versions are available in the cluster, run `rladmin upgrade db` with additional parameters to upgrade the module versions when you upgrade the database. See [Upgrade modules]({{< relref "/operate/oss_and_stack/stack-with-enterprise/install/upgrade-module" >}}) for more details. + + - To upgrade the database to a version other than the default version, use the `redis_version` parameter: + + ```sh + rladmin upgrade db redis_version + ``` + +1. Check the Redis database compatibility version for the database to confirm the upgrade. + + To do so: + + - Use the Cluster Manager UI to open the **Configuration** tab for the database and select {{< image filename="/images/rs/icons/info-icon.png#no-click" alt="The About database button" width="18px" class="inline" >}} **About**. 
+ + - Use [`rladmin status databases extra all`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/status#status-databases" >}}) to display a list of the databases in your cluster and their current Redis database compatibility version: + + ```sh + rladmin status databases extra all + ``` + + Verify that the Redis version is set to the expected value. +--- +Title: Upgrade an existing Redis Enterprise Software deployment +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +hideListLinks: true +linkTitle: Upgrade +weight: 60 +url: '/operate/rs/7.4/installing-upgrading/upgrading/' +--- +To upgrade Redis Enterprise Software: + +1. Verify appropriate [network ports]({{< relref "/operate/rs/7.4/networking/port-configurations.md" >}}) are either open or used by Redis Enterprise Software. + +1. [Upgrade the software on all nodes of the cluster.]({{< relref "/operate/rs/7.4/installing-upgrading/upgrading/upgrade-cluster" >}}) + +2. _(Optional)_ [Upgrade each database]({{< relref "/operate/rs/7.4/installing-upgrading/upgrading/upgrade-database" >}}) in the cluster or [upgrade an Active-Active database]({{< relref "/operate/rs/7.4/installing-upgrading/upgrading/upgrade-active-active" >}}) to enable new features and important fixes. +--- +LinkTitle: Socket file location +Title: Change socket file locations +alwaysopen: false +categories: +- docs +- operate +- rs +description: Change socket file locations. +weight: $weight +url: '/operate/rs/7.4/installing-upgrading/configuring/change-location-socket-files/' +--- + +## Default socket file locations + +There are two default locations for the socket files in Redis Enterprise Software: + +- `/tmp` - In clean installations of Redis Enterprise Software version earlier than 5.2.2 +- `/var/opt/redislabs/run` - In clean installations of Redis Enterprise Software version 5.2.2 and later + + {{}} +The default location was changed in case you run any maintenance procedures that delete the `/tmp` directory. + {{}} + +When you upgrade Redis Enterprise Software from an earlier version to 5.2.2 or later, the socket files +are not moved to the new location by default. You need to either specify a custom location +for the socket files during [installation]({{< relref "/operate/rs/7.4/installing-upgrading" >}}) or use the [following procedure](#change-socket-file-locations) after installation. + +## Change socket file locations + +To change the location of the socket files: + +1. On each node in the cluster, run: + + ```sh + sudo rlutil create_socket_path socket_path=/var/opt/redislabs/run + ``` + +1. Identify the node with the `master` role by running the following command on any node in the cluster: + + ```sh + rladmin status nodes + ``` + +1. On the master node, change the socket file location: + + ```sh + sudo rlutil set_socket_path socket_path=/var/opt/redislabs/run + ``` + +1. To update the socket file location for all other nodes, restart Redis Enterprise Software on each node in the cluster, one at a time: + + ```sh + sudo service rlec_supervisor restart + ``` + +1. Restart each database in the cluster to update the socket file location: + + ```sh + rladmin restart db + ``` + + {{< warning >}} +Restarting databases can cause interruptions in data traffic. + {{< /warning >}} +--- +Title: Configure swap for Linux +alwaysopen: false +categories: +- docs +- operate +- rs +description: Turn off Linux swap space. 
+linkTitle: Linux swap configuration +weight: $weight +url: '/operate/rs/7.4/installing-upgrading/configuring/linux-swap/' +--- +Linux operating systems use swap space, which is enabled by default, to help manage memory (pages) by +copying pages from RAM to disk. Due to the way Redis Enterprise Software +utilizes and manages memory, it is best to prevent OS swapping. For more details, see [memory limits]({{< relref "/operate/rs/7.4/databases/memory-performance/memory-limit.md" >}}). The +recommendation is to turn off Linux swap completely in the OS. + +When you install or build the OS on the machine intended to host your Redis Enterprise Software cluster, avoid configuring swap partitions if possible. + +## Turn off swap + +To turn off swap in the OS of an existing server, VM, or instance, you +must have `sudo` access or be a root user to run the following commands: + +1. Turn off swap: + + ```sh + sudo swapoff -a + ``` + +1. Comment out the swap partitions configured in the OS so swap remains off even after a reboot: + + ```sh + sudo sed -i.bak '/ swap / s/^(.*)$/#1/g' /etc/fstab + ``` +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure firewall rules for Redis Enterprise Software on CentOS or Red + Hat Enterprise Linux (RHEL). +linkTitle: CentOS/RHEL firewall +title: Configure CentOS/RHEL firewall +weight: $weight +url: '/operate/rs/7.4/installing-upgrading/configuring/centos-rhel-firewall/' +--- +CentOS and Red Hat Enterprise Linux (RHEL) distributions use [**firewalld**](https://firewalld.org/) by default to manage the firewall and configure [iptables](https://en.wikipedia.org/wiki/Iptables). +The default configuration assigns the network interfaces to the **public** zone and blocks all ports except port 22, which is used for [SSH](https://en.wikipedia.org/wiki/Secure_Shell). + +When you install Redis Enterprise Software on CentOS or RHEL, it automatically creates two firewalld system services: + +- A service named **redislabs**, which includes all ports and protocols needed for communication between cluster nodes. +- A service named **redislabs-clients**, which includes the ports and protocols needed for external communication (outside of the cluster). + +These services are defined but not allowed through the firewall by default. +During Redis Enterprise Software installation, the [installer prompts]({{< relref "/operate/rs/7.4/installing-upgrading/install/manage-installation-questions" >}}) you to confirm auto-configuration of a default (public) zone +to allow the **redislabs** service. + +Although automatic firewall configuration simplifies installation, your deployment might not be secure if you did not use other methods to secure the host machine's network, such as external firewall rules or security groups. +You can use firewalld configuration tools such as **firewall-cmd** (command line) or **firewall-config** (UI) +to create more specific firewall policies that allow these two services through the firewall, as necessary. + +{{}} +If databases are created with non-standard [Redis Enterprise Software ports]({{< relref "/operate/rs/7.4/networking/port-configurations" >}}), +you need to explicitly configure firewalld to make sure those ports are not blocked. 
+{{}} +--- +Title: Additional configuration +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +hideListLinks: false +weight: 80 +url: '/operate/rs/7.4/installing-upgrading/configuring/' +--- +This section describes additional configuration options for Redis Enterprise Software installation. + + +--- +Title: Create a support package +alwaysopen: false +categories: +- docs +- operate +- rs +description: Create a support package that gathers essential information to help debug + issues. +linkTitle: Create support package +toc: 'true' +weight: $weight +url: '/operate/rs/7.4/installing-upgrading/creating-support-package/' +--- +If you encounter any issues that you are not able to resolve yourself +and need to [contact Redis support](https://redis.io/support/) for assistance, you can [create a support package](#create-support-package) that gathers all essential information to help debug +your issues. + +{{< note >}} +The process of creating the support package can take several minutes and generates load on the system. +{{< /note >}} + +## Support package files + +The support package is a zip file that contains all cluster configuration and logs. + +When downloaded from the Cluster Manager UI, the support package's name is `debuginfo.tar.gz`. + +### Database support package files + +Cluster and database support packages collect database details in `database_` directories, where `` is the database ID, and Redis shard details in `` directories. + +The following table describes the included files: + +| File | Description | +|------|-------------| +| ccs-redis.json | Primary node's local cluster configuration store (CCS). | +| /database_/ | Directory that includes files for a specific database. is the database ID. | +| database__ccs_info.txt | Database information from the cluster configuration store (CCS). Includes settings for databases, endpoints, shards, replicas, and CRDB. | +| database_.clientlist | List of clients connected to the database when the support package was created. | +| database_.info | Redis information and statistics for the database. See [`INFO`]({{}}) for details about the collected fields. | +| database_.rladmin | Database information. See [`rladmin info db`]({{}}) for an example of collected fields. Also includes creation time, last changed time, Redis version, memory limit, persistence type, eviction policy, hashing policy, and whether SSL, backups, and email alerts are enabled. | +| database_.slowlog | Contains slowlog output, which includes commands that took longer than 10 milliseconds. Only included if `slowlog_in_sanitized_support` is `true` in cluster settings. | +| /node_/redis_.txt | For each shard of the specified database only. Includes shard configuration and [information]({{}}), slowlog information, and latency information. | + +### Node support package files + +Cluster and node support packages collect node details in `node_` directories, where `` is the node ID. + +The following table describes the included files: + +| File | Description | +|------|-------------| +| ccs-redis.json | The node's local cluster configuration store (CCS). | +| /conf/ | Directory that contains configuration files. | +| /logs/ | Directory that includes logs. | +| node_.ccs | Includes cluster configuration, node configuration, and DMC proxy configuration. | +| node__envoy_config.json | Envoy configuration. | +| node_.rladmin | Information about the cluster's nodes, databases, endpoints, and shards. See [`rladmin status`]({{}}) for example output. 
| +| node__sys_info.txt | Node's system information including:
• Socket files list
• Log files list
• Processes running on the node
• Disk usage
• Persistent files list
• Memory usage
• Network interfaces
• Installed packages
• Active iptables
• OS and platform
• Network connection
• Status of Redis processes | +| redis_.txt | For each shard of the specified database only. Includes shard configuration and [information]({{}}), slowlog information, and latency information. | + +Each node's `/conf/` directory contains the following files: + +- bootstrap_status.json +- ccs-paths.conf +- config.json +- envoy.yaml +- gossip_envoy.yaml +- heartbeatd-config.json +- last_bootstrap.json +- local_addr.conf +- node.id +- node_local_config.json +- redislabs_env_config.sh +- socket.conf +- supervisord_alert_mgr.conf +- supervisord_cm_server.conf +- supervisord_crdb_coordinator.conf +- supervisord_crdb_worker.conf +- supervisord_mdns_server.conf +- supervisord_pdns_server.conf + +Each node's `/conf/` directory also contains the following key and cert modulus files: + +- api_cert.modulus +- api_key.modulus +- ccs_internode_encryption_cert.modulus +- ccs_internode_encryption_key.modulus +- cm_cert.modulus +- cm_key.modulus +- data_internode_encryption_cert.modulus +- data_internode_encryption_key.modulus +- gossip_ca_signed_cert.modulus +- gossip_ca_signed_key.modulus +- mesh_ca_signed_cert.modulus +- mesh_ca_signed_key.modulus +- metrics_exporter_cert.modulus +- metrics_exporter_key.modulus +- proxy_cert.modulus +- proxy_key.modulus +- syncer_cert.modulus +- syncer_key.modulus + +## Create support package + +### Cluster Manager UI method + +To create a support package from the Cluster Manager UI: + +1. In the navigation menu, select **Support**. + + {{Select Support and create a support package.}} + +1. Select **Proceed**. + +1. In the **Create support package** dialog, select **Run process**. + +1. The package is created and downloaded by your browser. + +### Command-line method + +If package creation fails with `internal error` or if you cannot access the UI, create a support package for the cluster from the command line on any node in the cluster using the [`rladmin cluster debug_info`]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin/cluster/debug_info" >}}) command: + +```sh +/opt/redislabs/bin/rladmin cluster debug_info +``` + +- If `rladmin cluster debug_info` fails for lack of space in the `/tmp` directory, you can: + + 1. Change the storage location where the support package is saved: + + ```sh + rladmin cluster config debuginfo_path + ``` + + The `redislabs` user must have write access to the storage location on all cluster nodes. + + 1. On any node in the cluster, run: + + ```sh + rladmin cluster debug_info + ``` + +- If `rladmin cluster debug_info` fails for another reason, you can create a support package for the cluster from the command line on each node in the cluster with the command: + + ```sh + /opt/redislabs/bin/debuginfo + ``` + +Upload the tar file to [Redis support](https://redis.com/company/support/). The path to the archive is shown in the command output. + +### REST API method + +You can also use `debuginfo` [REST API]({{< relref "/operate/rs/7.4/references/rest-api" >}}) requests to create and download support packages. 
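+
+For example, you can download a cluster-wide support package with `curl`. This is only a sketch: replace the cluster address and administrator credentials with your own values. The REST API listens on port 9443 by default, and the individual endpoints are listed below.
+
+```sh
+curl -k -u "admin@example.com:password" \
+  -o debuginfo.tar.gz \
+  "https://cluster.example.com:9443/v1/cluster/debuginfo"
+```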
+ +To download debug info from all nodes and databases: + +```sh +GET /v1/cluster/debuginfo +``` + +To download debug info from all nodes: + +```sh +GET /v1/nodes/debuginfo +``` + +To download debug info from a specific node, replace `` in the following request with the node ID: + +```sh +GET /v1/nodes//debuginfo +``` + +To download debug info from all databases: + +```sh +GET /v1/bdbs/debuginfo +``` + +To download debug info from a specific database, replace `` in the following request with the database ID: + +```sh +GET /v1/bdbs//debuginfo +``` +--- +Title: Install, set up, and upgrade Redis Enterprise Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: Learn how to install, set up, and upgrade Redis Enterprise Software. +hideListLinks: true +linkTitle: Install and upgrade +toc: 'true' +weight: 35 +url: '/operate/rs/7.4/installing-upgrading/' +--- + +You can run self-managed Redis Enterprise Software in an on-premises data center or on your preferred cloud platform. + +If you prefer a fully managed Redis database-as-a-service, available on major public cloud services, consider setting up a [Redis Cloud]({{}}) subscription. You can [try Redis Cloud](https://redis.io/try-free/) for free. + +## Quickstarts + +If you want to try out Redis Enterprise Software, see the following quickstarts: + +- [Redis Enterprise Software quickstart]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}) + +- [Docker quickstart for Redis Enterprise Software]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/docker-quickstart" >}}) + +## Install Redis Enterprise Software + +To install Redis Enterprise Software on a [supported platform]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/supported-platforms" >}}), you need to: + +1. [Plan your deployment]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment" >}}). + +1. [Prepare to install]({{< relref "/operate/rs/7.4/installing-upgrading/install/prepare-install" >}}). + +1. [Perform the install]({{< relref "/operate/rs/7.4/installing-upgrading/install" >}}). + +Depending on your needs, you may also want to [customize the installation](#more-info-and-options). 
+ +## Upgrade existing deployment + +If you already installed Redis Enterprise Software, you can: + +- [Upgrade a cluster]({{< relref "/operate/rs/7.4/installing-upgrading/upgrading/upgrade-cluster" >}}) + +- [Upgrade a database]({{< relref "/operate/rs/7.4/installing-upgrading/upgrading/upgrade-database" >}}) + +- [Upgrade an Active-Active database]({{< relref "/operate/rs/7.4/installing-upgrading/upgrading/upgrade-active-active" >}}) + +## Uninstall Redis Enterprise Software + +- [Uninstall existing deployment]({{< relref "/operate/rs/7.4/installing-upgrading/uninstalling" >}}) + +## More info and options + +More information is available to help with customization and related questions: + +- [CentOS/RHEL firewall configuration]({{< relref "/operate/rs/7.4/installing-upgrading/configuring/centos-rhel-firewall.md" >}}) +- [Change socket file location]({{< relref "/operate/rs/7.4/installing-upgrading/configuring/change-location-socket-files.md" >}}) +- [Cluster DNS configuration]({{< relref "/operate/rs/7.4/networking/cluster-dns.md" >}}) +- [Cluster load balancer setup]({{< relref "/operate/rs/7.4/networking/cluster-lba-setup.md" >}}) +- [File locations]({{< relref "/operate/rs/7.4/installing-upgrading/install/plan-deployment/file-locations.md" >}}) +- [Linux swap space configuration]({{< relref "/operate/rs/7.4/installing-upgrading/configuring/linux-swap.md" >}}) +- [mDNS client prerequisites]({{< relref "/operate/rs/7.4/networking/mdns.md" >}}) +- [User and group ownership]({{< relref "/operate/rs/7.4/installing-upgrading/install/customize-user-and-group.md" >}}) + +## Next steps + +After you install Redis Enterprise Software and set up your cluster, you can: + +- [Add users]({{< relref "/operate/rs/7.4/security/access-control/create-users" >}}) to the cluster with specific permissions. To begin, start with [Access control]({{< relref "/operate/rs/7.4/security/access-control" >}}). + +- [Create databases]({{< relref "/operate/rs/7.4/databases/create" >}}) to use with your applications. + +--- +Title: Docker quickstart for Redis Enterprise Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: Set up a development or test deployment of Redis Enterprise Software + using Docker. +linkTitle: Docker quickstart +weight: 2 +url: '/operate/rs/7.4/installing-upgrading/quickstarts/docker-quickstart/' +--- +{{< warning >}} +Docker containers are currently only supported for development and test environments, not for production. Use [Redis Enterprise on Kubernetes]({{< relref "/operate/kubernetes" >}}) for a supported containerized deployment. +{{< /warning >}} + +For testing purposes, you can run Redis Enterprise Software on Docker containers on +Linux, Windows, or MacOS. +The [Redis Enterprise Software container](https://hub.docker.com/r/redislabs/redis/) +acts as a node in a cluster. + +To get started with a single Redis Enterprise Software container: + +1. [Install Docker](#install-docker) for your operating system + +2. [Run the Redis Enterprise Software Docker container](#run-the-container) + +3. [Set up a cluster](#set-up-a-cluster) + +4. [Create a new database](#create-a-database) + +5. 
[Connect to your database](#connect-to-your-database) + +## Install Docker + +Follow the Docker installation instructions for your operating system: + +- [Linux](https://docs.docker.com/install/#supported-platforms) +- [MacOS](https://docs.docker.com/docker-for-mac/install/) +- [Windows](https://store.docker.com/editions/community/docker-ce-desktop-windows) + +## Run the container + +To download and start the Redis Enterprise Software Docker container, run the following +[`docker run`](https://docs.docker.com/engine/reference/commandline/run/) command in the terminal or command line for your operating system. + +{{< note >}} +On Windows, make sure Docker is configured to run Linux-based containers. +{{< /note >}} + +```sh +docker run -d --cap-add sys_resource --name RE -p 8443:8443 -p 9443:9443 -p 12000:12000 redislabs/redis +``` + +The example command runs the Docker container with Redis Enterprise Software on `localhost` and opens the following ports: + +- Port 8443 for HTTPS connections + +- Port 9443 for [REST API]({{< relref "/operate/rs/7.4/references/rest-api" >}}) connections + +- Port 12000 configured Redis database port allowing client connections + +You can publish other [ports]({{< relref "/operate/rs/7.4/networking/port-configurations.md" >}}) +with `-p :` or use the `--network host` option to open all ports to the host network. + +## Set up a cluster + +{{}} + +## Create a database + +{{}} + +{{< note >}} +{{< embed-md "docker-memory-limitation.md" >}} +{{< /note >}} + +## Connect to your database + +After you create the Redis database, you can connect to it to begin storing data. + +### Use redis-cli inside Docker {#connect-inside-docker} + +Every installation of Redis Enterprise Software includes the command-line tool [`redis-cli`]({{< relref "/operate/rs/7.4/references/cli-utilities/redis-cli" >}}) to interact with your Redis database. You can use `redis-cli` to connect to your database from within the same Docker network. + +Use [`docker exec`](https://docs.docker.com/engine/reference/commandline/exec/) to start an interactive `redis-cli` session in the running Redis Enterprise Software container: + +```sh +$ docker exec -it redis-cli -h redis-12000.cluster.local -p 12000 +127.0.0.1:12000> SET key1 123 +OK +127.0.0.1:12000> GET key1 +"123" +``` + +### Connect from the host environment {#connect-outside-docker} + +The database you created uses port `12000`, which is also mapped from the Docker container back to the host environment. This lets you use any method you have available locally to [connect to a Redis database]({{< relref "/operate/rs/7.4/databases/connect/" >}}). Use `localhost` as the `host` and `12000` as the port. + +## Test different topologies + +{{< warning >}} +Docker containers are currently only supported for development and test environments, not for production. Use [Redis Enterprise on Kubernetes]({{< relref "/operate/kubernetes" >}}) for a supported containerized deployment. 
+{{< /warning >}} + +When deploying Redis Enterprise Software using Docker for testing, several common topologies are available, according to your requirements: + +- [Single-node cluster](#single-node) – For local development or functional testing + +- [Multi-node cluster on a single host](#multi-node-one-host) – For a small-scale deployment that is similar to production + +- [Multi-node cluster with multiple hosts](#multi-node-multi-host) – For more predictable performance or high availability compared to single-host deployments + +### Single node {#single-node} + +The simplest topology is to run a single-node Redis Enterprise Software cluster with a single container on a single host machine. You can use this topology for local development or functional testing. + +Single-node clusters have limited functionality. For example, Redis Enterprise Software can't use replication or protect against failures if the cluster has only one node. + +{{< image filename="/images/rs/RS-Docker-container.png" >}} + +### Multiple nodes on one host {#multi-node-one-host} + +You can create a multi-node Redis Enterprise Software cluster by deploying multiple containers to a single host machine. The resulting cluster is scale minimized but similar to production deployments. + +However, this will also have several limitations. For example, you cannot map the same port on multiple containers on the same host. + +{{< image filename="/images/rs/RS-Docker-cluster-single-host.png" >}} + +### Multiple nodes and hosts {#multi-node-multi-host} + +You can create a multi-node Redis Enterprise Software cluster with multiple containers by deploying each container to a different host machine. + +This topology minimizes interference between containers, allowing for the testing of more Redis Enterprise Software features. + +{{< image filename="/images/rs/RS-Docker-cluster-multi-host.png" >}} +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Set up a test deployment of Redis Enterprise Software for Linux. +linkTitle: Quickstart +title: Redis Enterprise Software quickstart +weight: 1 +url: '/operate/rs/7.4/installing-upgrading/quickstarts/redis-enterprise-software-quickstart/' +--- +This guide helps you install Redis Enterprise Software on a Linux host to test its capabilities. + +When finished, you'll have a simple cluster with a single node: + +1. [Ensure port availability](#ensure-port-availability) + +1. [Install Redis Enterprise Software](#install-redis-enterprise-software) + +1. [Set up a Redis Enterprise Software cluster](#set-up-a-cluster) + +1. [Create a new Redis database](#create-a-database) + +1. [Connect to your Redis database](#connect-to-your-database) + +{{< note >}} +**This quickstart is designed for local testing only.** +For production environments, see the [install and setup]({{< relref "/operate/rs/7.4/installing-upgrading#install-redis-enterprise-software" >}}) guide for deployment options and instructions. +{{< /note >}} + +## Ensure port availability + +{{}} + +### Update `sysctl.conf` to avoid port collisions + +{{}} + +### OS conflicts with port 53 + +{{}} + + +### Configuration for AWS and GCP + +For detailed configuration instructions, see your cloud provider's documentation. + +1. Create a VPC that you can use with regional subnets. + +1. Within this VPC, create firewall rules that allow external and internal access for Redis Enterprise Software. 
+ + +| Ingress/Egress | Source | Protocol | Ports | Other protocols | +|------------------|----------------------------------------------------|-----------|------------------------------------------|------------------| +| Ingress | 0.0.0.0/0 | TCP | 21, 22, 53, 8001, 8443, 9443, 8070, 10000-19999 | ICMP | +| Ingress | 0.0.0.0/0 | UDP | 53, 5353 | | +| Ingress | 10.0.0.0/8 (if subnets use 10. ranges) | all | all | | + + +## Install Redis Enterprise Software + +To install Redis Enterprise Software: + +1. Download the installation files from the [Redis Enterprise Download Center](https://redis.io/downloads/#software) +and copy the download package to a machine with a Linux-based OS. + + {{< note >}} +You are required to create a free account to access the download center. + {{< /note >}} + +1. Extract the installation files: + + ```sh + tar vxf + ``` + +1. Run the `install.sh` script in the current directory: + + ```sh + sudo ./install.sh -y + ``` + +## Set up a cluster + +To set up your machine as a Redis Enterprise Software cluster: + +{{< embed-md "cluster-setup.md" >}} + +## Create a database + +{{}} + +## Connect to your database + +After you create the Redis database, you can connect to it and store data. +See [Test client connection]({{< relref "/operate/rs/7.4/databases/connect/test-client-connectivity" >}}) for connection options and examples. + +## Supported web browsers + +To use the Redis Enterprise Software Cluster Manager UI, you need a modern browser with JavaScript enabled. + +The following browsers have been tested with the current version of the Cluster Manager UI: + +- Microsoft Windows, version 10 or later. + - [Google Chrome](https://www.google.com/chrome/), version 48 and later + - [Microsoft Edge](https://www.microsoft.com/edge), version 20 and later + - [Mozilla Firefox](https://www.mozilla.org/firefox/), version 44 and and later + - [Opera](https://www.opera.com/), version 35 and later + +- Apple macOS: + - [Google Chrome](https://www.google.com/chrome/), version 48 and later + - [Mozilla Firefox](https://www.mozilla.org/firefox/), version 44 and and later + - [Opera](https://www.opera.com/), version 35 and later + +- Linux: + - [Google Chrome](https://www.google.com/chrome/), version 49 and later + - [Mozilla Firefox](https://www.mozilla.org/firefox/), version 44 and and later + - [Opera](https://www.opera.com/), version 35 and later +--- +Title: Redis Enterprise Software quickstarts +alwaysopen: false +categories: +- docs +- operate +- rs +description: Follow these quickstarts to try out Redis Enterprise Software. +hideListLinks: true +linkTitle: Quickstarts +weight: 10 +url: '/operate/rs/7.4/installing-upgrading/quickstarts/' +--- + +Try out Redis Enterprise Software using one of the following quickstarts: + +- [Redis Enterprise Software quickstart]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}) + +- [Docker quickstart for Redis Enterprise Software]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/docker-quickstart" >}}) + +Additional quickstart guides are available to help you: + +- Set up a [Auto Tiering cluster]({{< relref "/operate/rs/7.4/databases/auto-tiering/quickstart.md" >}}) to optimize memory resources. + +- Set up an [Active-Active cluster]({{< relref "/operate/rs/7.4/databases/active-active/get-started.md" >}}) to enable high availability. + +- [Benchmark]({{< relref "/operate/rs/7.4/clusters/optimize/memtier-benchmark.md" >}}) Redis Enterprise Software performance. 
+--- +Title: Archive +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes where to view the archive of Redis Enterprise Software documentation. +linkTitle: Archive +weight: 99 +url: '/operate/rs/7.4/rs-archive/' +--- + +Previous versions of Redis Enterprise Software documentation are available on the archived web site: + +- [Redis Enterprise Software v7.4 documentation archive](https://docs.redis.com/7.4/rs/)   + +- [Redis Enterprise Software v7.2 documentation archive](https://docs.redis.com/7.2/rs/) + +- [Redis Enterprise Software v6.4 documentation archive](https://docs.redis.com/6.4/rs/) + +- [Redis Enterprise Software v6.2 documentation archive](https://docs.redis.com/6.2/rs/) + +- [Redis Enterprise Software v6.0 documentation archive](https://docs.redis.com/6.0/rs/) +--- +Title: Redis Enterprise Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: The self-managed, enterprise-grade version of Redis. +hideListLinks: true +weight: 10 +url: '/operate/rs/7.4/' +linkTitle: 7.4 +bannerText: This documentation applies to Redis Software versions 7.4.x. +bannerChildren: true +--- + +[Redis Enterprise](https://redis.io/enterprise/) is a self-managed, enterprise-grade version of Redis. + +With Redis Enterprise, you get many enterprise-grade capabilities, including: +- Linear scalability +- High availability, backups, and recovery +- Predictable performance +- 24/7 support + +You can run self-managed Redis Enterprise Software in an on-premises data center or on your preferred cloud platform. + +If you prefer a fully managed Redis database-as-a-service, available on major public cloud services, consider setting up a [Redis Cloud]({{}}) subscription. You can [try Redis Cloud](https://redis.io/try-free/) for free. + +## Get started +Build a small-scale cluster with the Redis Enterprise Software container image. +- [Linux quickstart]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}) +- [Docker quickstart]({{< relref "/operate/rs/7.4/installing-upgrading/quickstarts/docker-quickstart" >}}) +- [Get started with Active-Active]({{< relref "/operate/rs/7.4/databases/active-active/get-started" >}}) + +## Install & setup +[Install & set up]({{< relref "/operate/rs/7.4/installing-upgrading" >}}) a Redis Enterprise Software cluster. +- [Networking]({{< relref "/operate/rs/7.4/networking" >}}) +- [Set up]({{< relref "/operate/rs/7.4/clusters/new-cluster-setup" >}}) & [configure]({{< relref "/operate/rs/7.4/clusters/configure" >}}) a [cluster]({{< relref "/operate/rs/7.4/clusters" >}}) +- [Release notes]({{< relref "/operate/rs/release-notes" >}}) + +## Databases +Create and manage a [Redis database]({{< relref "/operate/rs/7.4/databases" >}}) on a cluster. +- [Create a Redis Enterprise Software database]({{< relref "/operate/rs/7.4/databases/create" >}}) +- [Configure database]({{< relref "/operate/rs/7.4/databases/configure" >}}) +- [Create Active-Active database]({{< relref "/operate/rs/7.4/databases/active-active/create" >}}) +- [Edit Active-Active database]({{< relref "/operate/rs/7.4/databases/active-active/manage.md" >}}) + +## Security +[Manage secure connections]({{< relref "/operate/rs/7.4/security" >}}) to the cluster and databases. 
+- [Access control]({{< relref "/operate/rs/7.4/security/access-control" >}}) +- [Users]({{< relref "/operate/rs/7.4/security/access-control/manage-users" >}}) & [roles]({{< relref "/operate/rs/7.4/security/access-control" >}}) +- [Certificates]({{< relref "/operate/rs/7.4/security/certificates" >}}) +- [TLS]({{< relref "/operate/rs/7.4/security/encryption/tls" >}}) & [Encryption]({{< relref "/operate/rs/7.4/security/encryption" >}}) + +## Reference +Use command-line utilities and the REST API to manage the cluster and databases. +- [rladmin]({{< relref "/operate/rs/7.4/references/cli-utilities/rladmin" >}}), [crdb-cli]({{< relref "/operate/rs/7.4/references/cli-utilities/crdb-cli" >}}), & [other utilities]({{< relref "/operate/rs/7.4/references/cli-utilities" >}}) +- [REST API reference]({{< relref "/operate/rs/7.4/references/rest-api" >}}) & [examples]({{< relref "/operate/rs/7.4/references/rest-api/quick-start" >}}) +- [Redis commands]({{< relref "/commands" >}}) + +## Archive + +You can use the version selector in the navigation menu to view documentation for Redis Enterprise Software versions 7.4 and later. + +To view documentation earlier than version 7.4, see the archived website: + +- [Redis Enterprise Software v7.2 documentation archive](https://docs.redis.com/7.2/rs/) + +- [Redis Enterprise Software v6.4 documentation archive](https://docs.redis.com/6.4/rs/) + +- [Redis Enterprise Software v6.2 documentation archive](https://docs.redis.com/6.2/rs/) + +- [Redis Enterprise Software v6.0 documentation archive](https://docs.redis.com/6.0/rs/) + +## Related info +- [Redis Cloud]({{< relref "/operate/rc" >}}) +- [Redis Open Source]({{< relref "/operate/oss_and_stack" >}}) +- [Redis Stack]({{< relref "/operate/oss_and_stack/stack-with-enterprise" >}}) +- [Glossary]({{< relref "/glossary" >}}) + +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes internode encryption which improves the security of data in + transit. +linkTitle: Internode encryption +title: Internode encryption +weight: 15 +--- +As of v6.2.4, Redis Enterprise Software supports _internode encryption_, which encrypts internal communication between nodes. This improves the security of data as it travels within a cluster. + +Internode encryption is enabled for the _control plane_, which manages the cluster and its databases. + +Internode encryption is supported for the _data plane_, which encrypts communication used to replicate shards between nodes and proxy communication with shards located on different nodes. + +The following diagram shows how this works. + +{{A diagram showing the interaction between data internode encryption, control plane encryption, and various elements of a cluster.}} + +Data internode encryption is disabled by default for individual databases in order to optimize for performance. Encryption adds latency and overhead; the impact is measurable and varies according to the database, its field types, and the details of the underlying use case. + +You can enable data internode encryption for a database by changing the database configuration settings. This lets you choose when to favor performance and when to encrypt data. + +## Prerequisites + +Internode encryption requires certain prerequisites. + +You need to: + +- Upgrade all nodes in the cluster to v6.2.4 or later. + +- Open port 3342 for the TLS channel used for encrypted communication. 
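+
+If the host firewall is managed with firewalld (for example, on CentOS/RHEL), the following sketch shows one way to open the port; adjust the zone and policy to match your environment.
+
+```sh
+sudo firewall-cmd --permanent --add-port=3342/tcp
+sudo firewall-cmd --reload
+```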
+ + +## Enable data internode encryption + +To enable internode encryption for a database (also called _data internode encryption_), you need to enable the appropriate setting for each database you wish to encrypt. To do so, you can: + +- Use the Cluster Manager UI to enable the **Internode Encryption** setting from the database **Security** screen. + +- Use the `rladmin` command-line utility to set the [data_internode_encryption]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-db" >}}) setting for the database: + + ``` shell + rladmin tune db data_internode_encryption enabled + ``` + +- Use the Redis Enterprise Software REST API to set the `data_internode_encryption` setting for the database. + + ``` rest + put /v1/bdbs/${database_id} + { “data_internode_encryption” : true } + ``` + +When you change the data internode encryption setting for a database, all active remote client connections are disconnected. This restarts the internal (DMC) proxy and disconnects all client connections. + +## Change cluster policy + +To enable internode encryption for new databases by default, use one of the following methods: + +- Cluster Manager UI + + 1. On the **Databases** screen, select {{< image filename="/images/rs/buttons/button-toggle-actions-vertical.png#no-click" alt="Toggle actions button" width="22px" class="inline" >}} to open a list of additional actions. + + 1. Select **Database defaults**. + + 1. Go to **Internode Encryption** and click **Change**. + + 1. Select **Enabled** to enable internode encryption for new databases by default. + + 1. Click **Change**. + + 1. Select **Save**. + +- [rladmin tune cluster]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster data_internode_encryption enabled + ``` + +- [Update cluster policy]({{< relref "/operate/rs/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "data_internode_encryption": true } + ``` + +## Encryption ciphers and settings + +To encrypt internode communications, Redis Enterprise Software uses TLS 1.2 and the following cipher suites: + +- ECDHE-RSA-AES256-GCM-SHA384 +- ECDHE-RSA-AES128-GCM-SHA256 + +As of Redis Enterprise Software v7.4, internode encryption also supports TLS 1.3 with the following cipher suites: + +- TLS_AES_128_GCM_SHA256 +- TLS_AES_256_GCM_SHA384 + +The TLS layer determines which TLS version to use. + +No configurable settings are exposed; internode encryption is used internally within a cluster and not exposed to any outside service. + +## Certificate authority and rotation + +Starting with v6.2.4, internode communication is managed, in part, by two certificates: one for the control plane and one for the data plane. These certificates are signed by a private certificate authority (CA). The CA is not exposed outside of the cluster, so it cannot be accessed by external processes or services. In addition, each cluster generates a unique CA that is not used anywhere else. + +The private CA is generated when a cluster is created or upgraded to 6.2.4. + +When nodes join the cluster, the cluster CA is used to generate certificates for the new node, one for each plane. Certificates signed by the private CA are not shared between clusters and they're not exposed outside the cluster. + +All certificates signed by the internal CA expire after ninety (90) days and automatically rotate every thirty (30) days. 
Alerts also monitor certificate expiration and trigger when certificate expiration falls below 45 days. If you receive such an alert, contact support. + +You can use the Redis Enterprise Software REST API to rotate certificates manually: + +``` rest +POST /v1/cluster/certificates/rotate +``` +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linkTitle: Configure TLS protocol +title: Configure TLS protocol +weight: 50 +--- + +You can change TLS protocols to improve the security of your Redis Enterprise cluster and databases. The default settings are in line with industry best practices, but you can customize them to match the security policy of your organization. + +## Configure TLS protocol + +The communications for which you can modify TLS protocols are: + +- Control plane - The TLS configuration for cluster administration. +- Data plane - The TLS configuration for the communication between applications and databases. +- Discovery service (Sentinel) - The TLS configuration for the [discovery service]({{< relref "/operate/rs/databases/durability-ha/discovery-service.md" >}}). + +You can configure TLS protocols with the [Cluster Manager UI](#edit-tls-ui), [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin/cluster/config" >}}), or the [REST API]({{< relref "/operate/rs/references/rest-api/requests/cluster#put-cluster" >}}). + +{{}} +- After you set the minimum TLS version, Redis Enterprise Software does not accept communications with TLS versions older than the specified version. + +- If you set TLS 1.3 as the minimum TLS version, clients must support TLS 1.3 to connect to Redis Enterprise. +{{}} + +TLS support depends on the operating system. You cannot enable support for protocols or versions that aren't supported by the operating system running Redis Enterprise Software. In addition, updates to the operating system or to Redis Enterprise Software can impact protocol and version support. + +If you have trouble enabling specific versions of TLS, verify that they're supported by your operating system and that they're configured correctly. + +{{}} +TLSv1.2 is generally recommended as the minimum TLS version for encrypted communications. Check with your security team to confirm which TLS protocols meet your organization's policies. +{{}} + +### Edit TLS settings in the UI {#edit-tls-ui} + +To configure minimum TLS versions using the Cluster Manager UI: + +1. Go to **Cluster > Security**, then select the **TLS** tab. + +1. Click **Edit**. + +1. Select the minimum TLS version for cluster connections, database connections, and the discovery service: + + {{Cluster > Security > TLS settings in edit mode in the Cluster Manager UI.}} + +1. Select the TLS mode for the discovery service: + + - **Allowed** - Allows both TLS and non-TLS connections + - **Required** - Allows only TLS connections + - **Disabled** - Allows only non-TLS connections + +1. Click **Save**. 
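+
+You can apply the same minimum versions with the REST API. The following sketch assumes the cluster object accepts the same parameter names used by the `rladmin` commands shown in the next sections; confirm the exact field names in the cluster object reference.
+
+```sh
+PUT /v1/cluster
+{ "min_control_TLS_version": "1.2", "min_data_TLS_version": "1.2", "min_sentinel_TLS_version": "1.2" }
+```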
+ +### Control plane TLS + +To set the minimum TLS protocol for the control plane using `rladmin`: + +- Default minimum TLS protocol: TLSv1.2 +- Syntax: `rladmin cluster config min_control_TLS_version ` +- TLS versions available: + - For TLSv1.2 - 1.2 + - For TLSv1.3 - 1.3 + +For example: + +```sh +rladmin cluster config min_control_TLS_version 1.2 +``` + +### Data plane TLS + +To set the minimum TLS protocol for the data path using `rladmin`: + +- Default minimum TLS protocol: TLSv1.2 +- Syntax: `rladmin cluster config min_data_TLS_version ` +- TLS versions available: + - For TLSv1.2 - 1.2 + - For TLSv1.3 - 1.3 + +For example: + +```sh +rladmin cluster config min_data_TLS_version 1.2 +``` + + +### Discovery service TLS + +To enable TLS for the discovery service using `rladmin`: + +- Default: Allows both TLS and non-TLS connections +- Syntax: `rladmin cluster config sentinel_tls_mode ` +- `ssl_policy` values available: + - `allowed` - Allows both TLS and non-TLS connections + - `required` - Allows only TLS connections + - `disabled` - Allows only non-TLS connections + +To set the minimum TLS protocol for the discovery service using `rladmin`: + +- Default minimum TLS protocol: TLSv1.2 +- Syntax: `rladmin cluster config min_sentinel_TLS_version ` +- TLS versions available: + - For TLSv1.2 - 1.2 + - For TLSv1.3 - 1.3 + +To enforce a minimum TLS version for the discovery service, run the following commands: + +1. Allow only TLS connections: + + ```sh + rladmin cluster config sentinel_tls_mode required + ``` + +1. Set the minimal TLS version: + + ```sh + rladmin cluster config min_sentinel_TLS_version 1.2 + ``` + +1. Restart the discovery service on all cluster nodes to apply your changes: + + ```sh + supervisorctl restart sentinel_service + ``` +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows how to configure cipher suites. +linkTitle: Configure cipher suites +title: Configure cipher suites +weight: 60 +--- + +Ciphers are algorithms that help secure connections between clients and servers. You can change the ciphers to improve the security of your Redis Enterprise cluster and databases. The default settings are in line with industry best practices, but you can customize them to match the security policy of your organization. 
+ +## TLS 1.2 cipher suites + +| Name | Configurable | Description | +|------------|--------------|-------------| +| control_cipher_suites | ✅ Yes | Cipher list for TLS 1.2 communications for cluster administration (control plane) | +| data_cipher_list | ✅ Yes | Cipher list for TLS 1.2 communications between applications and databases (data plane) | +| sentinel_cipher_suites | ✅ Yes | Cipher list for [discovery service]({{< relref "/operate/rs/databases/durability-ha/discovery-service" >}}) (Sentinel) TLS 1.2 communications | + +## TLS 1.3 cipher suites + +| Name | Configurable | Description | +|------------|--------------|-------------| +| control_cipher_suites_tls_1_3 | ❌ No | Cipher list for TLS 1.3 communications for cluster administration (control plane) | +| data_cipher_suites_tls_1_3 | ✅ Yes | Cipher list for TLS 1.3 communications between applications and databases (data plane) | +| sentinel_cipher_suites_tls_1_3 | ❌ No | Cipher list for [discovery service]({{< relref "/operate/rs/databases/durability-ha/discovery-service" >}}) (Sentinel) TLS 1.3 communications | + +## Configure cipher suites + +You can configure ciphers with the [Cluster Manager UI](#edit-ciphers-ui), [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin/cluster/config" >}}), or the [REST API]({{< relref "/operate/rs/references/rest-api/requests/cluster#put-cluster" >}}). + +{{}} +Configuring cipher suites overwrites existing ciphers rather than appending new ciphers to the list. +{{}} + +When you modify your cipher suites, make sure: + +- The configured TLS version matches the required cipher suites. +- The certificates in use are properly signed to support the required cipher suites. + +{{}} +- Redis Enterprise Software doesn't support static [Diffie–Hellman (`DH`) key exchange](https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange) ciphers. + +- Support for Ephemeral Diffie–Hellman (ECDHE) key exchange ciphers depends on the operating system version and security policy. +{{}} + +### Edit cipher suites in the UI {#edit-ciphers-ui} + +To configure cipher suites using the Cluster Manager UI: + +1. Go to **Cluster > Security**, then select the **TLS** tab. + +1. In the **Cipher suites lists** section, click **Configure**: + + {{Cipher suites lists as shown in the Cluster Manager UI.}} + +1. Edit the TLS cipher suites in the text boxes: + + {{Edit cipher suites drawer in the Cluster Manager UI.}} + +1. Click **Save**. + +### Control plane cipher suites {#control-plane-ciphers-tls-1-2} + +As of Redis Enterprise Software version 6.0.12, control plane cipher suites can use the BoringSSL library format for TLS connections to the Cluster Manager UI. See the BoringSSL documentation for a full list of available [BoringSSL configurations](https://github.com/google/boringssl/blob/master/ssl/test/runner/cipher_suites.go#L99-L131). + +#### Configure TLS 1.2 control plane cipher suites + +To configure TLS 1.2 cipher suites for cluster communication, use the following [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) command syntax: + +```sh +rladmin cluster config control_cipher_suites +``` + +See the example below to configure cipher suites for the control plane: + +```sh +rladmin cluster config control_cipher_suites ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305 +``` +{{}} +- The deprecated 3DES and RC4 cipher suites are no longer supported. 
+{{}} + + +### Data plane cipher suites {#data-plane-ciphers-tls-1-2} + +Data plane cipher suites use the OpenSSL library format in Redis Enterprise Software version 6.0.20 or later. For a list of available OpenSSL configurations, see [Ciphers](https://www.openssl.org/docs/man1.1.1/man1/ciphers.html) (OpenSSL). + +#### Configure TLS 1.2 data plane cipher suites + +To configure TLS 1.2 cipher suites for communications between applications and databases, use the following [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) command syntax: + +```sh +rladmin cluster config data_cipher_list +``` + +See the example below to configure cipher suites for the data plane: + +```sh +rladmin cluster config data_cipher_list AES128-SHA:AES256-SHA +``` +{{}} +- The deprecated 3DES and RC4 cipher suites are no longer supported. +{{}} + +#### Configure TLS 1.3 data plane cipher suites + +To configure TLS 1.3 cipher suites for communications between applications and databases, use the following [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) command syntax: + +```sh +rladmin cluster config data_cipher_suites_tls_1_3 +``` + +The following example configures TLS 1.3 cipher suites for the data plane: + +```sh +rladmin cluster config data_cipher_suites_tls_1_3 TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256 +``` + +### Discovery service cipher suites {#discovery-service-ciphers-tls-1-2} + +Sentinel service cipher suites use the golang.org OpenSSL format for [discovery service]({{< relref "/operate/rs/databases/durability-ha/discovery-service" >}}) TLS connections in Redis Enterprise Software version 6.0.20 or later. See their documentation for a list of [available configurations](https://golang.org/src/crypto/tls/cipher_suites.go). + +#### Configure TLS 1.2 discovery service cipher suites + +To configure TLS 1.2 cipher suites for the discovery service cipher suites, use the following [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) command syntax: + +```sh +rladmin cluster config sentinel_cipher_suites +``` + +See the example below to configure cipher suites for the sentinel service: + +```sh +rladmin cluster config sentinel_cipher_suites TLS_RSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 +``` +--- +Title: Enable TLS +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows how to enable TLS. +linkTitle: Enable TLS +weight: 40 +--- + +You can use TLS authentication for one or more of the following types of communication: + +- Communication from clients (applications) to your database +- Communication from your database to other clusters for replication using [Replica Of]({{< relref "/operate/rs/databases/import-export/replica-of/" >}}) +- Communication to and from your database to other clusters for synchronization using [Active-Active]({{< relref "/operate/rs/databases/active-active/_index.md" >}}) + +{{}} +When you enable or turn off TLS, the change applies to new connections but does not affect existing connections. Clients must close existing connections and reconnect to apply the change. +{{}} + +## Enable TLS for client connections {#client} + +To enable TLS for client connections: + +1. From your database's **Security** tab, select **Edit**. + +1. In the **TLS - Transport Layer Security for secure connections** section, make sure the checkbox is selected. + +1. In the **Apply TLS for** section, select **Clients and databases + Between databases**. + +1. Select **Save**. 
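+
+After you save, clients must reconnect using TLS. One way to verify connectivity is with `redis-cli`, sketched below; the hostname, port, and certificate path are placeholders, and the CA file must be able to validate the certificate the database presents (the proxy certificate).
+
+```sh
+redis-cli -h redis-12000.cluster.example.com -p 12000 \
+  --tls --cacert ./proxy_cert.pem PING
+```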
+ +To enable mutual TLS for client connections: + +1. Select **Mutual TLS (Client authentication)**. + + {{Mutual TLS authentication configuration.}} + +1. For each client certificate, select **+ Add certificate**, paste or upload the client certificate, then select **Done**. + + If your database uses Replica Of, you also need to add the syncer certificates for the participating clusters. See [Enable TLS for Replica Of cluster connections](#enable-tls-for-replica-of-cluster-connections) for instructions. + +1. You can configure **Additional certificate validations** to further limit connections to clients with valid certificates. + + Additional certificate validations occur only when loading a [certificate chain](https://en.wikipedia.org/wiki/Chain_of_trust#Computer_security) that includes the [root certificate](https://en.wikipedia.org/wiki/Root_certificate) and intermediate [CA](https://en.wikipedia.org/wiki/Certificate_authority) certificate but does not include a leaf (end-entity) certificate. If you include a leaf certificate, mutual client authentication skips any additional certificate validations. + + 1. Select a certificate validation option. + + | Validation option | Description | + |-------------------|-------------| + | _No validation_ | Authenticates clients with valid certificates. No additional validations are enforced. | + | _By Subject Alternative Name_ | A client certificate is valid only if its Common Name (CN) matches an entry in the list of valid subjects. Ignores other [`Subject`](https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.6) attributes. | + | _By full Subject Name_ | A client certificate is valid only if its [`Subject`](https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.6) attributes match an entry in the list of valid subjects. | + + 1. If you selected **No validation**, you can skip this step. Otherwise, select **+ Add validation** to create a new entry and then enter valid [`Subject`](https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.6) attributes for your client certificates. All `Subject` attributes are case-sensitive. + + | Subject attribute
(case-sensitive) | Description | + |-------------------|-------------| + | _Common Name (CN)_ | Name of the client authenticated by the certificate (_required_) | + | _Organization (O)_ | The client's organization or company name | + | _Organizational Unit (OU)_ | Name of the unit or department within the organization | + | _Locality (L)_ | The organization's city | + | _State / Province (ST)_ | The organization's state or province | + | _Country (C)_ | 2-letter code that represents the organization's country | + + You can only enter a single value for each field, except for the _Organizational Unit (OU)_ field. If your client certificate has a `Subject` with multiple _Organizational Unit (OU)_ values, press the `Enter` or `Return` key after entering each value to add multiple Organizational Units. + + {{An example that shows adding a certificate validation with multiple organizational units.}} + + **Breaking change:** If you use the [REST API]({{< relref "/operate/rs/references/rest-api" >}}) instead of the Cluster Manager UI to configure additional certificate validations, note that `authorized_names` is deprecated as of Redis Enterprise v6.4.2. Use `authorized_subjects` instead. See the [BDB object reference]({{< relref "/operate/rs/references/rest-api/objects/bdb" >}}) for more details. + +1. Select **Save**. + +By default, Redis Enterprise Software validates client certificate expiration dates. You can use `rladmin` to turn off this behavior. + +```sh +rladmin tune db < db:id | name > mtls_allow_outdated_certs enabled +``` + +## Enable TLS for Active-Active cluster connections + +You cannot enable or turn off TLS after the Active-Active database is created, but you can change the TLS configuration. + +To enable TLS for Active-Active cluster connections: + +1. During [database creation]({{}}), expand the **TLS** configuration section. + +1. Select **On** to enable TLS. + + {{TLS is enabled on the Cluster Manager UI screen.}} + +1. Click **Create**. + +If you also want to require TLS for client connections, you must edit the Active-Active database configuration after creation. See [Enable TLS for client connections](#client) for instructions. + +## Enable TLS for Replica Of cluster connections + +{{}} +--- +Title: Transport Layer Security (TLS) +alwaysopen: false +categories: +- docs +- operate +- rs +description: An overview of Transport Layer Security (TLS). +hideListLinks: true +linkTitle: TLS +weight: 10 +--- +[Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), a successor to SSL, ensures the privacy of data sent between applications and Redis databases. TLS also secures connections between Redis Enterprise Software nodes. + +You can [use TLS authentication]({{< relref "/operate/rs/security/encryption/tls/enable-tls" >}}) for the following types of communication: + +- Communication from clients (applications) to your database +- Communication from your database to other clusters for replication using [Replica Of]({{< relref "/operate/rs/databases/import-export/replica-of" >}}) +- Communication to and from your database to other clusters for synchronization using [Active-Active]({{< relref "/operate/rs/databases/active-active/" >}}) + +## Protocols and ciphers + +TLS protocols and ciphers define the overall suite of algorithms that clients are able to connect to the servers with. 
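+
+One way to see which protocol and cipher a database endpoint actually negotiates is to probe it with `openssl s_client`. This is a diagnostic sketch, assuming OpenSSL 1.1.0 or later; the endpoint and port are placeholders for your own values:
+
+```sh
+openssl s_client -connect redis-12000.cluster.example.com:12000 -brief </dev/null
+```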
+ +You can change the [TLS protocols]({{< relref "/operate/rs/security/encryption/tls/tls-protocols" >}}) and [ciphers]({{< relref "/operate/rs/security/encryption/tls/ciphers" >}}) to improve the security of your Redis Enterprise cluster and databases. The default settings are in line with industry best practices, but you can customize them to match the security policy of your organization. +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Enable PEM encryption to encrypt all private keys on disk. +linkTitle: Encrypt private keys +title: Encrypt private keys +toc: 'true' +weight: 50 +--- + +Enable PEM encryption to automatically encrypt all private keys on disk. Public keys (`.cert` files) are not encrypted. + +When certificates are rotated, the encrypted private keys are also rotated. + +## Enable PEM encryption + +To enable PEM encryption and encrypt private keys on the disk, use [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) or the [REST API]({{< relref "/operate/rs/references/rest-api" >}}). + + +- [`rladmin cluster config`]({{< relref "/operate/rs/references/cli-utilities/rladmin/cluster/config" >}}): + + ```sh + rladmin cluster config encrypt_pkeys enabled + ``` + +- [Update cluster settings]({{< relref "/operate/rs/references/rest-api/requests/cluster#put-cluster" >}}) REST API request: + + ```sh + PUT /v1/cluster + { "encrypt_pkeys": true } + ``` + +## Deactivate PEM encryption + +To deactivate PEM encryption and decrypt private keys on the disk, use [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) or the [REST API]({{< relref "/operate/rs/references/rest-api" >}}). + +- [`rladmin cluster config`]({{< relref "/operate/rs/references/cli-utilities/rladmin/cluster/config" >}}): + + ```sh + rladmin cluster config encrypt_pkeys disabled + ``` + +- [Update cluster settings]({{< relref "/operate/rs/references/rest-api/requests/cluster#put-cluster" >}}) REST API request: + + ```sh + PUT /v1/cluster + { "encrypt_pkeys": false } + ``` +--- +Title: Encryption in Redis Enterprise Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: Encryption in Redis Enterprise Software. +hideListLinks: true +linkTitle: Encryption +toc: 'true' +weight: 60 +--- + +Redis Enterprise Software uses encryption to secure communications between clusters, nodes, databases, and clients and to protect [data in transit](https://en.wikipedia.org/wiki/Data_in_transit), [at rest](https://en.wikipedia.org/wiki/Data_at_rest), and [in use](https://en.wikipedia.org/wiki/Data_in_use). + +## Encrypt data in transit + +### TLS + +Redis Enterprise Software uses [Transport Layer Security (TLS)]({{}}) to encrypt communications for the following: + +- Cluster Manager UI + +- Command-line utilities + +- REST API + +- Internode communication + +You can also [enable TLS authentication]({{< relref "/operate/rs/security/encryption/tls/enable-tls" >}}) for the following: + +- Communication from clients or applications to your database + +- Communication from your database to other clusters for replication using [Replica Of]({{< relref "/operate/rs/databases/import-export/replica-of/" >}}) + +- Communication to and from your database to other clusters for [Active-Active]({{< relref "/operate/rs/databases/active-active/_index.md" >}}) synchronization + +### Internode encryption + +[Internode encryption]({{}}) uses TLS to encrypt data in transit between cluster nodes. 
+ +By default, internode encryption is enabled for the control plane, which manages the cluster and databases. If you also want to encrypt replication and proxy communications between database shards on different nodes, [enable data internode encryption]({{< relref "/operate/rs/security/encryption/internode-encryption#enable-data-internode-encryption" >}}). + +### Require HTTPS for REST API endpoints + +By default, the Redis Enterprise Software API supports communication over HTTP and HTTPS. However, you can [turn off HTTP support]({{< relref "/operate/rs/references/rest-api/encryption" >}}) to ensure that API requests are encrypted. + +## Encrypt data at rest + +### File system encryption + +To encrypt data stored on disk, use file system-based encryption capabilities available on Linux operating systems before you install Redis Enterprise Software. + +### Private key encryption + +Enable PEM encryption to [encrypt all private keys]({{< relref "/operate/rs/security/encryption/pem-encryption" >}}) on disk. + +## Encrypt data in use + +### Client-side encryption + +Use client-side encryption to encrypt the data an application stores in a Redis database. The application decrypts the data when it retrieves it from the database. + +You can add client-side encryption logic to your application or use built-in client functions. + +Client-side encryption has the following limitations: + +- Operations that must operate on the data, such as increments, comparisons, and searches will not function properly. + +- Increases management overhead. + +- Reduces performance. +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes how to audit connection events. +linkTitle: Audit events +title: Audit connection events +weight: 15 +--- + +Starting with version 6.2.18, Redis Enterprise Software lets you audit database connection and authentication events. This helps you track and troubleshoot connection activity. + +The following events are tracked: + +- Database connection attempts +- Authentication requests, including requests for new and existing connections +- Database disconnects + +When tracked events are triggered, notifications are sent via TCP to an address and port defined when auditing is enabled. Notifications appear in near real time and are intended to be consumed by an external listener, such as a TCP listener, third-party service, or related utility. + +Example external listeners include: + +- [`ncat`](https://nmap.org/ncat/): useful for debugging but not suitable for production environments. + +- Imperva Sonar: a third-party service available for purchase separately from Redis Enterprise Software. See [Redis Onboarding Steps](https://docs.imperva.com/bundle/onboarding-databases-to-sonar-reference-guide/page/Redis-Onboarding-Steps_48368215.html) for more information. + +For development and testing environments, notifications can be saved to a local file; however, this is neither supported nor intended for production environments. + +For performance reasons, auditing is not enabled by default. In addition, auditing occurs in the background (asynchronously) and is non-blocking by design. That is, the action that triggered the notification continues without regard to the status of the notification or the listening tool. + +## Enable audit notifications + +### Cluster audits + +To enable auditing for your cluster, use: + +- `rladmin` + + ``` + rladmin cluster config auditing db_conns \ + audit_protocol \ + audit_address
\ + audit_port \ + audit_reconnect_interval \ + audit_reconnect_max_attempts + ``` + + where: + + - _audit\_protocol_ indicates the protocol used to process notifications. For production systems, _TCP_ is the only value. + + - _audit\_address_ defines the TCP/IP address where one can listen for notifications + + - _audit\_port_ defines the port where one can listen for notifications + + - _audit\_reconnect\_interval_ defines the interval (in seconds) between attempts to reconnect to the listener. Default is 1 second. + + - _audit\_reconnect\_max\_attempts_ defines the maximum number of attempts to reconnect. Default is 0. (infinite) + + Development systems can set _audit\_protocol_ to `local` for testing and training purposes; however, this setting is _not_ supported for production use. + + When `audit_protocol` is set to `local`, `
` should be set to a [stream socket](https://man7.org/linux/man-pages/man7/unix.7.html) defined on the machine running Redis Enterprise and _``_ should not be specified: + + ``` + rladmin cluster config auditing db_conns \ + audit_protocol local audit_address + ``` + + The output file (and path) must be accessible by the user and group running Redis Enterprise Software. + +- the [REST API]({{< relref "/operate/rs/references/rest-api/requests/cluster/auditing-db-conns#put-cluster-audit-db-conns" >}}) + + ``` + PUT /v1/cluster/auditing/db_conns + { + "audit_address": "
", + "audit_port": , + "audit_protocol": "TCP", + "audit_reconnect_interval": , + "audit_reconnect_max_attempts": + } + ``` + + where `
` is a string containing the TCP/IP address, `` is a numeric value representing the port, `` is a numeric value representing the interval in seconds, and `` is a numeric value representing the maximum number of attempts to execute. + +### Database audits + +Once auditing is enabled for your cluster, you can audit individual databases. To do so, use: + +- `rladmin` + + ``` + rladmin tune db db: db_conns_auditing enabled + ``` + + where the value of the _db:_ parameter is either the cluster ID of the database or the database name. + + To deactivate auditing, set `db_conns_auditing` to `disabled`. + + Use `rladmin info` to retrieve additional details: + + ``` + rladmin info db + rladmin info cluster + ``` + +- the [REST API]({{< relref "/operate/rs/references/rest-api/requests/bdbs#put-bdbs" >}}) + + ``` + PUT /v1/bdbs/1 + { "db_conns_auditing": true } + ``` + + To deactivate auditing, set `db_conns_auditing` to `false`. + +You must enable auditing for your cluster before auditing a database; otherwise, an error appears: + +> _Error setting description: Unable to enable DB Connections Auditing before feature configurations are set. +> Error setting error_code: db_conns_auditing_config_missing_ + +To resolve this error, enable the protocol for your cluster _before_ attempting to audit a database. + +### Policy defaults for new databases + +To audit connections for new databases by default, use: + +- `rladmin` + + ``` + rladmin tune cluster db_conns_auditing enabled + ``` + + To deactivate this policy, set `db_conns_auditing` to `disabled`. + +- the [REST API]({{< relref "/operate/rs/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) + + ``` + PUT /v1/cluster/policy + { "db_conns_auditing": true } + ``` + + To deactivate this policy, set `db_conns_auditing` to `false`. + +## Notification examples + +Audit event notifications are reported as JSON objects. + +### New connection + +This example reports a new connection for a database: + +``` json +{ + "ts":1655821384, + "new_conn": + { + "id":2285001002 , + "srcip":"127.0.0.1", + "srcp":"39338", + "trgip":"127.0.0.1", + "trgp":"12635", + "hname":"", + "bdb_name":"DB1", + "bdb_uid":"5" + } +} +``` + +### Authentication request + +Here is a sample authentication request for a database: + +``` json +{ + "ts":1655821384, + "action":"auth", + "id":2285001002 , + "srcip":"127.0.0.1", + "srcp":"39338", + "trgip":"127.0.0.1", + "trgp":"12635", + "hname":"", + "bdb_name":"DB1", + "bdb_uid":"5", + "status":2, + "username":"user_one", + "identity":"user:1", + "acl-rules":"~* +@all" +} +``` + +The `status` field reports the following: + +- Values of 2, 7, or 8 indicate success. + +- Values of 3 or 5 indicate that the client authentication is in progress and should conclude later. + +- Other values indicate failures. + +### Database disconnect + +Here's what's reported when a database connection is closed: + +``` json +{ + "ts":1655821384, + "close_conn": + { + "id":2285001002, + "srcip":"127.0.0.1", + "srcp":"39338", + "trgip":"127.0.0.1", + "trgp":"12635", + "hname":"", + "bdb_name":"DB1", + "bdb_uid":"5" + } +} +``` + +## Notification field reference + +The field value that appears immediately after the timestamp describes the action that triggered the notification. 
The following values may appear: + +- `new_conn` indicates a new external connection +- `new_int_conn` indicates a new internal connection +- `close_conn` occurs when a connection is closed +- `"action":"auth"` indicates an authentication request and can refer to new authentication requests or authorization checks on existing connections + +In addition, the following fields may also appear in audit event notifications: + +| Field name | Description | +|:---------:|-------------| +| `acl-rules` | ACL rules associated with the connection, which includes a rule for the `default` user. | +| `bdb_name` | Destination database name - The name of the database being accessed. | +| `bdb_uid` | Destination database ID - The cluster ID of the database being accessed. | +| `hname` | Client hostname - The hostname of the client. Currently empty; reserved for future use. | +| `id` | Connection ID - Unique connection ID assigned by the proxy. | +| `identity` | Identity - A unique ID the proxy assigned to the user for the current connection. | +| `srcip` | Source IP address - Source TCP/IP address of the client accessing the Redis database. | +| `srcp` | Source port - Port associated with the source IP address accessing the Redis database. Combine the port with the address to uniquely identify the socket. | +| `status` | Status result code - An integer representing the result of an authentication request. | +| `trgip` | Target IP address - The IP address of the destination being accessed by the action. | +| `trgp` | Target port - The port of the destination being accessed by the action. Combine the port with the destination IP address to uniquely identify the database being accessed. | +| `ts` | Timestamp - The date and time of the event, in [Coordinated Universal Time](https://en.wikipedia.org/wiki/Coordinated_Universal_Time) (UTC). Granularity is within one second. | +| `username` | Authentication username - Username associated with the connection; can include `default` for databases that allow default access. (Passwords are _not_ recorded). | + +## Status result codes + +The `status` field reports the results of an authentication request as an integer. Here's what different values mean: + +| Error value | Error code | Description | +|:-------------:|------------|-------------| +| `0` | AUTHENTICATION_FAILED | Invalid username and/or password. | +| `1` | AUTHENTICATION_FAILED_TOO_LONG | Username or password are too long. | +| `2` | AUTHENTICATION_NOT_REQUIRED | Client tried to authenticate, but authentication isn't necessary. | +| `3` | AUTHENTICATION_DIRECTORY_PENDING | Attempting to receive authentication info from the directory in async mode. | +| `4` | AUTHENTICATION_DIRECTORY_ERROR | Authentication attempt failed because there was a directory connection error. | +| `5` | AUTHENTICATION_SYNCER_IN_PROGRESS | Syncer SASL handshake. Return SASL response and wait for the next request. | +| `6` | AUTHENTICATION_SYNCER_FAILED | Syncer SASL handshake. Returned SASL response and closed the connection. | +| `7` | AUTHENTICATION_SYNCER_OK | Syncer authenticated. Returned SASL response. | +| `8` | AUTHENTICATION_OK | Client successfully authenticated. | + +--- +Title: Certificate-based authentication +alwaysopen: false +categories: +- docs +- operate +- rs +description: Certificate-based authentication allows secure, passwordless access to the REST API and databases. 
+linkTitle: Certificate-based authentication +weight: 70 +--- + +You can set up certificate-based authentication for specific users to enable secure, passwordless access to the Redis Enterprise Software [REST API]({{}}) and databases. + +## Set up certificate-based authentication + +To set up certificate-based authentication: + +1. [Add the `mtls_trusted_ca` certificate.](#add-cert) + +1. [Configure cluster settings.](#config-cluster) + +1. If you want to enable certificate-based authentication for databases, you must [enable mutual TLS for the relevant databases](#enable-mtls-dbs). Otherwise, you can skip this step. + +1. [Create certificate auth_method users.](#create-cert-users) + +### Add mtls_trusted_ca certificate {#add-cert} + +Add a trusted CA certificate `mtls_trusted_ca` to the cluster using an [update cluster certificate]({{}}) request: + +```sh +PUT /v1/cluster/update_cert +{ + "name": "mtls_trusted_ca", + "certificate": "" +} +``` + +### Configure cluster settings {#config-cluster} + +[Update cluster settings]({{}}) with mutual TLS configuration. + +For certificate validation by Subject Alternative Name (SAN), use: + +```sh +PUT /v1/cluster +{ + "mtls_certificate_authentication": true, + "mtls_client_cert_subject_validation_type": "san_cn", + "mtls_authorized_subjects": [{ + "CN": "" + }] +} +``` + +For certificate validation by full Subject Name, use: + +```sh +PUT /v1/cluster +{ + "mtls_certificate_authentication": true, + "mtls_client_cert_subject_validation_type": "full_subject", + "mtls_authorized_subjects": [{ + "CN": "", + "OU": [], + "O": "", + "C": "<2-letter country code>", + "L": "", + "ST": "" + }] +} +``` + +Replace the placeholder values `<>` with your client certificate's subject values. + +### Enable mutual TLS for databases {#enable-mtls-dbs} + +Before you can connect to a database using certificate-based authentication, you must enable mutual TLS (mTLS). See [Enable TLS]({{}}) for detailed instructions. + +### Create certificate auth_method users {#create-cert-users} + +When you [create new users]({{}}), include `"auth_method": "certificate"` and `certificate_subject_line` in the request body : + +```sh +POST /v1/users +{ + "auth_method": "certificate", + "certificate_subject_line": "CN=, OU=, O=, L=, ST=, C=" +} +``` + +Replace the placeholder values `<>` with your client certificate's subject values. + +## Authenticate REST API requests + +To use the REST API with certificate-based authentication, you must provide a client certificate, signed by the trusted CA `mtls_trusted_ca`, and a private key. + +The following example uses [cURL](https://curl.se/) to send a [REST API request]({{}}): + +```sh +curl --request --url https://:9443// --cert client.pem --key client.key +``` + +## Authenticate database connections + +To connect to a database with certificate-based authentication, you must provide a client certificate, signed by the trusted CA `mtls_trusted_ca`, and a private key. + +The following example shows how to connect to a Redis database with [`redis-cli`]({{}}): + +```sh +redis-cli -h -p --tls --cacert .pem --cert redis_user.crt --key redis_user_private.key +``` + +## Limitations + +- Certificate-based authentication is not implemented for the Cluster Manager UI.--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Monitor certificates on a Redis Enterprise cluster. +linkTitle: Monitor certificates +title: Monitor certificates +weight: 10 +--- + +You can monitor certificates used by Redis Enterprise Software. 
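+
+In addition to the Prometheus metrics described below, you can check a certificate's expiration date directly with `openssl` from a cluster node. This is an illustrative sketch; the path shown is the proxy certificate location that also appears in the metric labels below:
+
+```sh
+openssl x509 -noout -enddate -in /etc/opt/redislabs/proxy_cert.pem
+```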
+ +### Monitor certificates with Prometheus + +Redis Enterprise Software exposes the expiration time (in seconds) of each certificate on each node. To learn how to monitor Redis Enterprise Software metrics using Prometheus, see the [Prometheus integration quick start]({{< relref "/integrate/prometheus-with-redis-enterprise/" >}}). + +Here are some examples of the `node_cert_expiration_seconds` metric: + +```sh +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="cm",node="1",path="/etc/opt/redislabs/cm_cert.pem"} 31104000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="api",node="1",path="/etc/opt/redislabs/api_cert.pem"} 31104000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="proxy",node="1",path="/etc/opt/redislabs/proxy_cert.pem"} 31104000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="metrics_exporter",node="1",path="/etc/opt/redislabs/metrics_exporter_cert.pem"} 31104000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="syncer",node="1",path="/etc/opt/redislabs/syncer_cert.pem"} 31104000.0 +``` + +The following certificates relate to [internode communication TLS encryption]({{< relref "/operate/rs/security/encryption/internode-encryption" >}}) and are automatically rotated by Redis Enterprise Software: + +```sh +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="ccs_internode_encryption",node="1",path="/etc/opt/redislabs/ccs_internode_encryption_cert.pem"} 2592000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="data_internode_encryption",node="1",path="/etc/opt/redislabs/data_internode_encryption_cert.pem"} 2592000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="mesh_ca_signed",node="1",path="/etc/opt/redislabs/mesh_ca_signed_cert.pem"} 2592000.0 +node_cert_expiration_seconds{cluster="mycluster.local",logical_name="gossip_ca_signed",node="1",path="/etc/opt/redislabs/gossip_ca_signed_cert.pem"} 2592000.0 +``` +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Create self-signed certificates to install on a Redis Enterprise cluster. +linkTitle: Create certificates +title: Create certificates +weight: 10 +--- + +When you first install Redis Enterprise Software, self-signed certificates are created to enable encryption for Redis Enterprise endpoints. These certificates expire after a year (365 days) and must be renewed. + +You can renew these certificates by replacing them with new self-signed certificates or by replacing them with certificates signed by a [certificate authority](https://en.wikipedia.org/wiki/Certificate_authority) (CA). + +## Renew self-signed certificates + +As of [v6.2.18-70]({{< relref "/operate/rs/release-notes/rs-6-2-18-releases/rs-6-2-18-70" >}}), Redis Enterprise Software includes a script to generate self-signed certificates. + +By default, the `generate_self_signed_certs.sh` script is located in `/opt/redislabs/utils/`. + +Here, you learn how to use this script to generate new certificates and how to install them. + +### Step 1: Generate new certificates + +Sign in to the machine hosting the cluster's master node and then run the following command: + +``` bash +% sudo -u redislabs /opt/redislabs/utils/generate_self_signed_certs.sh \ + -f "" -d -t +``` + +where: + +- _\_ is the fully qualified domain name (FQDN) of the cluster. (This is the name given to the cluster when first created.) +- _\_ is an optional FQDN for the cluster. 
Multiple domain names are allowed, separated by whitespace. Quotation marks (`""`) should enclose the full set of names. +- _\_ is an integer specifying the number of days the certificate should be valid. We recommend against setting this longer than a year (365 days). + + _\_ is optional and defaults to `365`. + +- _\_ is a string identifying the name of the certificate to generate. + + The following values are supported: + + | Value | Description | + |-------|-------------| + | `api` | The REST API | + | `cm` | The Cluster Manager UI | + | `metrics` | The metrics exporter | + | `proxy` | The database endpoint | + | `syncer` | The synchronization process | + | `all` | Generates all certificates in a single operation | + + _Type_ is optional and defaults to `all`. + +When you run the script, it either reports success (`"Self signed cert generated successfully"`) or an error message. Use the error message to troubleshoot any issues. + +The following example generates all self signed certificates for `mycluster.example.com`; these certificates expire one year after the command is run: + +``` bash +$ sudo -u redislabs /opt/redislabs/utils/generate_self_signed_certs.sh \ + -f "mycluster.example.com"` +``` + +Suppose you want to create a Cluster Manager UI certificate to support two clusters for a period of two years. The following example shows how: + +``` bash +$ sudo -u redislabs /opt/redislabs/utils/generate_self_signed_certs.sh \ + -f "mycluster.example.com anothercluster.example.com" -d 730 -t cm +``` + +Here, a certificate file and certificate key are generated to support the following domains: + +``` text +mycluster.example.com +*.mycluster.example.com +anothercluster.example.com +*.anothercluster.example.com +``` + +### Step 2: Locate the new certificate files + +When successful, the script generates two .PEM files for each generated certificate: a certificate file and a certificate key, each named after the type of certificate generated (see earlier table for individual certificate names.) + +These files can be found in the `/tmp` directory. + +``` bash +$ ls -la /tmp/*.pem +``` + +### Step 3: Set permissions + +We recommend setting the permissions of your new certificate files to limit read and write access to the file owner and to set group and other user permissions to read access. + +``` bash +$ sudo chmod 644 /tmp/*.pem +``` + +### Step 4: Replace existing certificates {#replace-self-signed} + +You can use `rladmin` to replace the existing certificates with new certificates: + +``` console +$ rladmin cluster certificate set certificate_file \ + .pem key_file .pem +``` + +The following values are supported for the _\_ parameter: + +| Value | Description | +|-------|-------------| +| `api` | The REST API | +| `cm` | The Cluster Manager UI | +| `metrics_exporter` | The metrics exporter | +| `proxy` | The database endpoint | +| `syncer` | The synchronization process | + +You can also use the REST API. To learn more, see [Update certificates]({{< relref "/operate/rs/security/certificates/updating-certificates#how-to-update-certificates" >}}). + +## Create CA-signed certificates + +You can use certificates signed by a [certificate authority](https://en.wikipedia.org/wiki/Certificate_authority) (CA). + +For best results, use the following guidelines to create the certificates. + +### TLS certificate guidelines + +When you create certificates signed by a certificate authority, you need to create server certificates and client certificates. 
The following provide guidelines that apply to both certificates and guidance for each certificate type. + +#### Guidelines for server and client certificates + +1. Include the full [certificate chain](https://en.wikipedia.org/wiki/X.509#Certificate_chains_and_cross-certification) when creating certificate .PEM files for either server or client certificates. + +1. List (_chain_) certificates in the .PEM file in the following order: + + ``` text + -----BEGIN CERTIFICATE----- + Domain (leaf) certificate + -----END CERTIFICATE----- + -----BEGIN CERTIFICATE----- + Intermediate CA certificate + -----END CERTIFICATE---- + -----BEGIN CERTIFICATE----- + Trusted Root CA certificate + -----END CERTIFICATE----- + ``` + +#### Server certificate guidelines + +Server certificates support clusters. + +In addition to the general guidelines described earlier, the following guidelines apply to server certificates: + +1. Use the cluster's fully qualified domain name (FQDN) as the certificate Common Name (CN). + +1. Set the following values according to the values specified by your security team or certificate authority: + + - Country Name (C) + - State or Province Name (ST) + - Locality Name (L) + - Organization Name (O) + - Organization Unit (OU) + +1. The [Subject Alternative Name](https://en.wikipedia.org/wiki/Subject_Alternative_Name) (SAN) should include the following values based on the FQDN: + + ``` text + dns= + dns=*. + dns=internal. + dns=*.internal. + ``` + +1. The Extended Key Usage attribute should be set to `TLS Web Client Authentication` and `TLS Web Server Authentication`. + +1. We strongly recommend using a strong hash algorithm, such as SHA-256 or SHA-512. + + Individual operating systems might limit access to specific algorithms. For example, Ubuntu 20.04 [limits access](https://manpages.ubuntu.com/manpages/focal/man7/crypto-policies.7.html) to SHA-1. In such cases, Redis Enterprise Software is limited to the features supported by the underlying operating system. + + +#### Client certificate guidelines + +Client certificates support database connections. + +In addition to the general guidelines described earlier, the following guidelines apply to client certificates: + +1. The Extended Key Usage attribute should be set to `TLS Web Client Authentication`. + +1. We strongly recommend using a strong hash algorithm, such as SHA-256 or SHA-512. + + Individual operating systems might limit access to specific algorithms. For example, Ubuntu 20.04 [limits access](https://manpages.ubuntu.com/manpages/focal/man7/crypto-policies.7.html) to SHA-1. In such cases, Redis Enterprise Software is limited to the features supported by the underlying operating system. + +### Create certificates + +The actual process of creating CA-signed certificates varies according to the CA. In addition, your security team may have custom instructions that you need to follow. + +Here, we demonstrate the general process using OpenSSL. If your CA provides alternate tools, you should use those according to their instructions. + +However you choose to create the certificates, be sure to incorporate the guidelines described earlier. + +1. Create a private key. + + ``` bash + $ openssl genrsa -out .pem 2048 + ``` + +1. Create a certificate signing request. + + ``` bash + $ openssl req -new -key .pem -out \ + .csr -config .cnf + ``` + _Important: _ The .CNF file is a configuration file. Check with your security team or certificate authority for help creating a valid configuration file for your environment. + +3. 
Sign the private key using your certificate authority. + + ```sh + $ openssl x509 -req -in .csr -signkey .pem -out .pem + ``` + + The signing process varies for each organization and CA vendor. Consult your security team and certificate authority for specific instructions describing how to sign a certificate. + +4. Upload the certificate to your cluster. + + You can use [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin/cluster/certificate" >}}) to replace the existing certificates with new certificates: + + ``` console + $ rladmin cluster certificate set certificate_file \ + .pem key_file .pem + ``` + + For a list of values supported by the `` parameter, see the [earlier table](#replace-self-signed). + + You can also use the REST API. To learn more, see [Update certificates]({{< relref "/operate/rs/security/certificates/updating-certificates#how-to-update-certificates" >}}). + +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Use OCSP stapling to verify certificates maintained by a third-party + CA and authenticate connection attempts between clients and servers. +linkTitle: Enable OCSP stapling +title: Enable OCSP stapling +weight: 50 +--- + +OCSP ([Online Certificate Status Protocol](https://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol)) lets a client or server verify the status (`GOOD`, `REVOKED`, or `UNKNOWN`) of a certificate maintained by a third-party [certificate authority (CA)](https://en.wikipedia.org/wiki/Certificate_authority). + +To check whether a certificate is still valid or has been revoked, a client or server can send a request to the CA's OCSP server (also called an OCSP responder). The OCSP responder checks the certificate's status in the CA's [certificate revocation list](https://en.wikipedia.org/wiki/Certificate_revocation_list) and sends the status back as a signed and timestamped response. + +## OCSP stapling overview + + With OCSP enabled, the Redis Enterprise server regularly polls the CA's OCSP responder for the certificate's status. After it receives the response, the server caches this status until its next polling attempt. + + When a client tries to connect to the Redis Enterprise server, they perform a [TLS handshake](https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake) to authenticate the server and create a secure, encrypted connection. During the TLS handshake, [OCSP stapling](https://en.wikipedia.org/wiki/OCSP_stapling) lets the Redis Enterprise server send (or "staple") the cached certificate status to the client. + +If the stapled OCSP response confirms the certificate is still valid, the TLS handshake succeeds and the client connects to the server. + +The TLS handshake fails and the client blocks the connection to the server if the stapled OCSP response indicates either: + +- The certificate has been revoked. + +- The certificate's status is unknown. This can happen if the OCSP responder fails to send a response. + +## Set up OCSP stapling + +You can configure and enable OCSP stapling for your Redis Enterprise cluster with the [Cluster Manager UI](#cluster-manager-ui-method), the [REST API](#rest-api-method), or [`rladmin`](#rladmin-method). + +While OCSP is enabled, the server always staples the cached OCSP status when a client tries to connect. It is the client's responsibility to use the stapled OCSP status. 
Some Redis clients, such as [Jedis](https://github.com/redis/jedis) and [redis-py](https://github.com/redis/redis-py), already support OCSP stapling, but others might require additional configuration. + +### Cluster Manager UI method + +To set up OCSP stapling with the Redis Enterprise Cluster Manager UI: + +1. Go to **Cluster > Security > OCSP**. + +1. In the **Responder URI** section, select **Replace Certificate** to update the proxy certificate. + +1. Provide the key and certificate signed by your third-party CA, then select **Save**. + +1. Configure query settings if you don't want to use their default values: + + | Name | Default value | Description | + |------|---------------|-------------| + | **Query frequency** | 1 hour | The time interval between OCSP queries to the responder URI. | + | **Response timeout** | 1 second | The time interval in seconds to wait for a response before timing out. | + | **Recovery frequency** | 1 minute | The time interval between retries after a failed query. | + | **Recovery maximum tries** | 5 | The number of retries before the validation query fails and invalidates the certificate. | + +1. Select **Enable** to turn on OCSP stapling. + +### REST API method + +To set up OCSP stapling with the [REST API]({{< relref "/operate/rs/references/rest-api" >}}): + +1. Use the REST API to [replace the proxy certificate]({{< relref "/operate/rs/security/certificates/updating-certificates#use-the-rest-api" >}}) with a certificate signed by your third-party CA. + +1. To configure and enable OCSP, send a [`PUT` request to the `/v1/ocsp`]({{< relref "/operate/rs/references/rest-api/requests/ocsp#put-ocsp" >}}) endpoint and include an [OCSP JSON object]({{< relref "/operate/rs/references/rest-api/objects/ocsp" >}}) in the request body: + + ```json + { + "ocsp_functionality": true, + "query_frequency": 3600, + "response_timeout": 1, + "recovery_frequency": 60, + "recovery_max_tries": 5 + } + ``` + +### `rladmin` method + +To set up OCSP stapling with the [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) command-line utility: + +1. Use [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin/cluster/certificate" >}}) to [replace the proxy certificate]({{< relref "/operate/rs/security/certificates/updating-certificates#use-the-cli" >}}) with a certificate signed by your third-party CA. + +1. Update the cluster's OCSP settings with the [`rladmin cluster ocsp config`]({{< relref "/operate/rs/references/cli-utilities/rladmin/cluster/ocsp#ocsp-config" >}}) command if you don't want to use their default values. + + For example: + + ```sh + rladmin cluster ocsp config recovery_frequency set 30 + ``` + +1. Enable OCSP: + + ```sh + rladmin cluster ocsp config ocsp_functionality set enabled + ``` +--- +Title: Certificates +alwaysopen: false +categories: +- docs +- operate +- rs +description: An overview of certificates in Redis Enterprise Software. +hideListLinks: true +linkTitle: Certificates +weight: 60 +--- + +Redis Enterprise Software uses self-signed certificates by default to ensure that the product is secure. These certificates are autogenerated on the first node of each Redis Enterprise Software installation and are copied to all other nodes added to the cluster. + +You can replace a self-signed certificate with one signed by a certificate authority of your choice. 
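+
+For example, you can replace the proxy certificate with your own CA-signed certificate and key using `rladmin`. This is a minimal sketch; the file names are placeholders, and the Update certificates page describes the full procedure:
+
+```sh
+rladmin cluster certificate set proxy certificate_file proxy_cert.pem key_file proxy_key.pem
+```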
+ +## Supported certificates + +Here's the list of supported certificates that create secure, encrypted connections to your Redis Enterprise Software cluster: + +| Certificate name | Autogenerated | Description | +|------------------|:---------------:|-------------| +| `api` | | Encrypts [REST API]({{< relref "/operate/rs/references/rest-api/" >}}) requests and responses. | +| `cm` | | Secures connections to the Redis Enterprise Cluster Manager UI. | +| `ldap_client` | :x: | Secures connections between LDAP clients and LDAP servers. | +| `metrics_exporter` | | Sends Redis Enterprise metrics to external [monitoring tools]({{< relref "/operate/rs/monitoring/" >}}) over a secure connection. | +| `mtls_trusted_ca` | :x: | Required to enable certificate-based authentication for secure, passwordless access to the REST API. | +| `proxy` | | Creates secure, encrypted connections between clients and databases. | +| `syncer` | | For [Active-Active]({{< relref "/operate/rs/databases/active-active/" >}}) or [Replica Of]({{< relref "/operate/rs/databases/import-export/replica-of/" >}}) databases, encrypts data during the synchronization of participating clusters. | + +Certificates that are not autogenerated are optional unless you want to use certain features. For example, you must provide your own `ldap_client` certificate to enable [LDAP authentication]({{}}) or an `mtls_trusted_ca` certificate to enable certificate-based authentication. + +## Accept self-signed certificates to access the Cluster Manager UI + +When you use the default self-signed certificates and you connect to the Cluster Manager UI over a web browser, you'll see an untrusted connection notification. Depending on your browser, you can allow the connection for each session or add an exception to trust the certificate for all future sessions.--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Update certificates in a Redis Enterprise cluster. +linkTitle: Update certificates +title: Update certificates +weight: 20 +--- + +{{}} +When you update the certificates, the new certificate replaces the same certificates on all nodes in the cluster. +{{}} + +## How to update certificates + +You can use the [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) command-line interface (CLI) or the [REST API]({{< relref "/operate/rs/references/rest-api" >}}) to update certificates. The Cluster Manager UI lets you update proxy and syncer certificates on the **Cluster > Security > Certificates** screen. + +The new certificates are used the next time the clients connect to the database. + +When you upgrade Redis Enterprise Software, the upgrade process copies the certificates that are on the first upgraded node to all of the nodes in the cluster. + +{{}} +Don't manually overwrite the files located in `/etc/opt/redislabs`. Instead, upload new certificates to a temporary location on one of the cluster nodes, such as the `/tmp` directory. +{{}} + +### Use the Cluster Manager UI + +To replace proxy or syncer certificates using the Cluster Manager UI: + +1. Go to **Cluster > Security > Certificates**. + +1. Expand the section for the certificate you want to update: + - For the proxy certificate, expand **Server authentication**. + - For the syncer certificate, expand **Replica Of and Active-Active authentication**. + + {{Expanded proxy certificate for server authentication.}} + +1. Click **Replace Certificate** to open the dialog. + + {{Replace proxy certificate dialog.}} + +1. Upload the key file. + +1. 
Upload the new certificate. + +1. Click **Save**. + +### Use the CLI + +To replace certificates with the `rladmin` CLI, run the [`cluster certificate set`]({{< relref "/operate/rs/references/cli-utilities/rladmin/cluster/certificate" >}}) command: + +```sh + rladmin cluster certificate set certificate_file .pem key_file .pem +``` + +Replace the following variables with your own values: + +- `` - The name of the certificate you want to replace. See the [certificates table]({{< relref "/operate/rs/security/certificates" >}}) for the list of valid certificate names. +- `` - The name of your certificate file +- `` - The name of your key file + +For example, to replace the Cluster Manager UI (`cm`) certificate with the private key `key.pem` and the certificate file `cluster.pem`: + +```sh +rladmin cluster certificate set cm certificate_file cluster.pem key_file key.pem +``` + +### Use the REST API + +To replace a certificate using the REST API, use [`PUT /v1/cluster/update_cert`]({{< relref "/operate/rs/references/rest-api/requests/cluster/certificates#put-cluster-update_cert" >}}): + +```sh +PUT https://[host][:port]/v1/cluster/update_cert + '{ "name": "", "key": "", "certificate": "" }' +``` + +Replace the following variables with your own values: + +- `` - The name of the certificate to replace. See the [certificates table]({{< relref "/operate/rs/security/certificates" >}}) for the list of valid certificate names. +- `` - The contents of the \*\_key.pem file + + {{< tip >}} + + The key file contains `\n` end of line characters (EOL) that you cannot paste into the API call. + You can use `sed -z 's/\n/\\\n/g'` to escape the EOL characters. + {{< /tip >}} + +- `` - The contents of the \*\_cert.pem file + +## Replica Of database certificates + +This section describes how to update certificates for Replica Of databases. + +### Update proxy certificates {#update-ap-proxy-certs} + +To update the proxy certificate on clusters running Replica Of databases: + +1. Use the Cluster Manager UI, `rladmin`, or the REST API to update the proxy certificate on the source database cluster. + +1. From the Cluster Manager UI, update the destination database (_replica_) configuration with the [new certificate]({{< relref "/operate/rs/databases/import-export/replica-of/create#encrypt-replica-database-traffic" >}}). + +{{}} +- Perform step 2 as quickly as possible after performing step 1. Connections using the previous certificate are rejected after applying the new certificate. Until both steps are performed, recovery of the database sync cannot be established. +{{}} + +## Active-Active database certificates + +### Update proxy certificates {#update-aa-proxy-certs} + +To update proxy certificate on clusters running Active-Active databases: + +1. Use the Cluster Manager UI, `rladmin`, or the REST API to update proxy certificates on a single cluster, multiple clusters, or all participating clusters. + +1. Use the [`crdb-cli`]({{< relref "/operate/rs/references/cli-utilities/crdb-cli" >}}) utility to update Active-Active database configuration from the command line. Run the following command once for each Active-Active database residing on the modified clusters: + + ```sh + crdb-cli crdb update --crdb-guid --force + ``` + +{{}} +- Perform step 2 as quickly as possible after performing step 1. Connections using the previous certificate are rejected after applying the new certificate. Until both steps are performed, recovery of the database sync cannot be established.
+- Do not run any other `crdb-cli crdb update` operations between the two steps. +{{
}} + +### Update syncer certificates {#update-aa-syncer-certs} + +To update your syncer certificate on clusters running Active-Active databases, follow these steps: + +1. Update your syncer certificate on one or more of the participating clusters using the Cluster Manager UI, `rladmin`, or the REST API. You can update a single cluster, multiple clusters, or all participating clusters. + +1. Update the Active-Active database configuration from the command line with the [`crdb-cli`]({{< relref "/operate/rs/references/cli-utilities/crdb-cli" >}}) utility. Run this command once for each Active-Active database that resides on the modified clusters: + + ```sh + crdb-cli crdb update --crdb-guid --force + ``` + +{{}} +- Run step 2 as quickly as possible after step 1. Between the two steps, new syncer connections that use the ‘old’ certificate will get rejected by the cluster that has been updated with the new certificate (in step 1).
+- Do not run any other `crdb-cli crdb update` operations between the two steps.
+{{
}}
+---
+Title: Recommended security practices
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: null
+linkTitle: Recommended security practices
+hideListLinks: true
+weight: 5
+---
+
+## Deployment security
+
+When deploying Redis Enterprise Software to production, we recommend the following practices:
+
+- **Deploy Redis Enterprise inside a trusted network**: Redis Enterprise is database software and should be deployed on a trusted network not accessible to the public internet. Deploying Redis Enterprise in a trusted network reduces the likelihood that someone can obtain unauthorized access to your data or the ability to manage your database configuration.
+
+- **Implement anti-virus exclusions**: To ensure that anti-virus solutions that scan files or intercept processes to protect memory do not interfere with Redis Enterprise software, implement anti-virus exclusions consistently across all nodes in your Redis Enterprise cluster. This helps ensure that anti-virus software does not impact the availability of your Redis Enterprise cluster.
+
+    If you are replacing your existing antivirus solution or installing/supporting Redis Enterprise, make sure that the following paths are excluded:
+
+    {{< note >}}
+For antivirus solutions that intercept processes, binary files may have to be excluded directly depending on the requirements of your anti-virus vendor.
+    {{< /note >}}
+
+    | **Path** | **Description** |
+    |------------|-----------------|
+    | /opt/redislabs | Main installation directory for all Redis Enterprise Software binaries |
+    | /opt/redislabs/bin | Binaries for all the utilities for command-line access and management, such as `rladmin` or `redis-cli` |
+    | /opt/redislabs/config | System configuration files |
+    | /opt/redislabs/lib | System library files |
+    | /opt/redislabs/sbin | System binaries for tweaking provisioning |
+
+- **Send logs to a remote logging server**: Redis Enterprise is configured to send logs to syslog by default. To send these logs to a remote logging server, you must [configure syslog]({{}}) based on the requirements of the remote logging server vendor. Remote logging helps ensure that the logs are not deleted, and it lets you rotate the logs so your server disk does not fill up.
+
+- **Deploy clusters with an odd number of 3 or more nodes**: Redis is an available and partition-tolerant database. We recommend that Redis Enterprise be deployed in a cluster of an odd number of 3 or more nodes so that you can fail over successfully in the event of a failure.
+
+- **Reboot nodes in a sequence rather than all at once**: It is best practice to maintain regular reboot schedules. If you reboot too many servers at once, it is possible to cause a quorum failure that results in loss of availability of the database. We recommend that rebooting be done in a phased manner so that quorum is not lost. For example, to maintain quorum in a 3 node cluster, at least 2 nodes must be up at all times. Only one server should be rebooted at any given time to maintain quorum.
+
+- **Implement client-side encryption**: Client-side encryption, or the practice of encrypting data within an application before storing it in a database, such as Redis, is the most widely adopted method to achieve encryption in memory. Redis is an in-memory database and stores data in-memory. If you require encryption in memory, better known as encryption in use, then client-side encryption may be the right solution for you. Be aware that database functions that need to operate on the data, such as simple search functions, comparisons, and incremental operations, don't work with client-side encryption.
+
+## Cluster security
+
+- **Control the level of access to your system**: Redis Enterprise lets you decide which users can access the cluster, which users can access databases, and which users can access both. We recommend preventing database users from accessing the cluster. See [Access control]({{}}) for more information.
+
+- **Enable LDAP authentication**: If your organization uses the Lightweight Directory Access Protocol (LDAP), we recommend enabling Redis Enterprise Software support for role-based LDAP authentication.
+
+- **Require HTTPS for API endpoints**: Redis Enterprise comes with a REST API to help automate tasks. This API is available on both an encrypted and an unencrypted endpoint for backward compatibility. You can [disable the unencrypted endpoint]({{}}) with no loss in functionality.
+
+## Database security
+
+Redis Enterprise offers several database security controls to help protect your data against unauthorized access and to improve the operational security of your database. The following section details configurable security controls available for implementation.
+
+- **Use strong Redis passwords**: A frequent recommendation in the security industry is to use strong passwords to authenticate users. This helps to prevent brute force password guessing attacks against your database. It's important to check that your password aligns with your organization's security policy.
+
+- **Deactivate default user access**: Redis Enterprise comes with a "default" user for backwards compatibility with applications designed with versions of Redis prior to Redis Enterprise 6. The default user is turned on by default. This allows you to access the database without specifying a username and only using a shared secret. For applications designed to use access control lists, we recommend that you [deactivate default user access]({{}}).
+
+- **Configure Transport Layer Security (TLS)**: Similar to the control plane, you can also [configure TLS protocols]({{}}) to help support your security and compliance needs.
+
+- **Enable client certificate authentication**: To prevent unauthorized access to your data, Redis Enterprise databases support the [TLS protocol]({{}}), which includes authentication and encryption. Client certificate authentication can be used to ensure only authorized hosts can access the database.
+
+- **Install trusted certificates**: Redis implements self-signed certificates for the database proxy and replication service, but many organizations prefer to [use their own certificates]({{}}).
+
+- **Configure and verify database backups**: Implementing a disaster recovery strategy is an important part of data security. Redis Enterprise supports [database backups to many destinations]({{}}).
+---
+Title: Rotate passwords
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: Rotate user passwords.
+linkTitle: Rotate passwords
+toc: 'true'
+weight: 70
+---
+
+Redis Enterprise Software lets you implement password rotation policies using the [REST API]({{< relref "/operate/rs/references/rest-api" >}}).
+
+You can add a new password for a database user without immediately invalidating the old one (which might cause authentication errors in production).
+
+{{< note >}}
+Password rotation does not work for the default user. 
[Add additional users]({{< relref "/operate/rs/security/access-control/create-users" >}}) to enable password rotation. +{{< /note >}} + +## Password rotation policies + +For user access to the Redis Enterprise Software Cluster Manager UI, +you can set a [password expiration policy]({{< relref "/operate/rs/security/access-control/manage-passwords/password-expiration" >}}) to prompt the user to change their password. + +However, for database connections that rely on password authentication, +you need to allow for authentication with the existing password while you roll out the new password to your systems. + +With the Redis Enterprise Software REST API, you can add additional passwords to a user account for authentication to the database or the Cluster Manager UI and API. + +After the old password is replaced in the database connections, you can delete the old password to finish the password rotation process. + +{{< warning >}} +Multiple passwords are only supported using the REST API. +If you reset the password for a user in the Cluster Manager UI, +the new password replaces all other passwords for that user. +{{< /warning >}} + +The new password cannot already exist as a password for the user and must meet the [password complexity]({{< relref "/operate/rs/security/access-control/manage-passwords/password-complexity-rules" >}}) requirements, if enabled. + +## Rotate password + +To rotate the password of a user account: + +1. Add an additional password to a user account with [`POST /v1/users/password`]({{< relref "/operate/rs/references/rest-api/requests/users/password#add-password" >}}): + + ```sh + POST https://[host][:port]/v1/users/password + '{"username":"", "old_password":"", "new_password":""}' + ``` + + After you send this request, you can authenticate with both the old and the new password. + +1. Update the password in all database connections that connect with the user account. +1. Delete the original password with [`DELETE /v1/users/password`]({{< relref "/operate/rs/references/rest-api/requests/users/password#update-password" >}}): + + ```sh + DELETE https://[host][:port]/v1/users/password + '{"username":"", "old_password":""}' + ``` + + If there is only one valid password for a user account, you cannot delete that password. + +## Replace all passwords + +You can also replace all existing passwords for a user account with a single password that does not match any existing passwords. +This can be helpful if you suspect that your passwords are compromised and you want to quickly resecure the account. + +To replace all existing passwords for a user account with a single new password, use [`PUT /v1/users/password`]({{< relref "/operate/rs/references/rest-api/requests/users/password#delete-password" >}}): + +```sh +PUT https://[host][:port]/v1/users/password + '{"username":"", "old_password":"", "new_password":""}' +``` + +All of the existing passwords are deleted and only the new password is valid. + +{{}} +If you send the above request without specifying it is a `PUT` request, the new password is added to the list of existing passwords. +{{}} +--- +Title: Configure password expiration +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configure password expiration to enforce expiration of a user's password + after a specified number of days. +linkTitle: Password expiration +toc: 'true' +weight: 50 +--- + +## Enable password expiration + +To enforce an expiration of a user's password after a specified number of days: + +- Use the Cluster Manager UI: + + 1. 
Go to **Cluster > Security > Preferences**, then select **Edit**. + + 1. In the **Password** section, turn on **Expiration**. + + 1. Enter the number of days before passwords expire. + + 1. Select **Save**. + +- Use the `cluster` endpoint of the REST API + + ``` REST + PUT https://[host][:port]/v1/cluster + {"password_expiration_duration":} + ``` + +## Deactivate password expiration + +To deactivate password expiration: + +- Use the Cluster Manager UI: + + 1. Go to **Cluster > Security > Preferences**, then select **Edit**. + + 1. In the **Password** section, turn off **Expiration**. + + 1. Select **Save**. + +- Use the `cluster` REST API endpoint to set `password_expiration_duration` to `0` (zero). +--- +Title: Change the password hashing algorithm +alwaysopen: false +categories: +- docs +- operate +- rs +description: Change the password hashing algorithm for user passwords in a Redis Enterprise Software cluster. +linkTitle: Password hashing algorithm +toc: 'true' +weight: 95 +--- + +Redis Enterprise Software securely stores all user passwords using a cryptographic hash function. The default password hashing algorithm is `SHA-256`, but `PBKDF2` is also supported as of Redis Enterprise Software version 7.8.6-13. + +You can change the password hashing algorithm using [`rladmin`]({{}}) or the [REST API]({{}}). When you change the password hashing algorithm, the cluster rehashes the administrator password and passwords for all users, including default users. + +## Command-line method + +To change the password hashing algorithm from the command line, run [`rladmin cluster change_password_hashing_algorithm`]({{}}): + +```sh +rladmin cluster change_password_hashing_algorithm PBKDF2 +``` + +## REST API method + +You can [change the password hashing algorithm]({{}}) using a REST API request: + +```sh +PATCH /v1/cluster/change_password_hashing_algorithm +{ "algorithm": "PBKDF2" } +``` +--- +Title: Update admin credentials for Active-Active databases +alwaysopen: false +categories: +- docs +- operate +- rs +description: Update admin credentials for Active-Active databases. +linkTitle: Update Active-Active admin credentials +weight: 90 +--- + +Active-Active databases use administrator credentials to manage operations. + +To update the administrator user password on a cluster with Active-Active databases: + +1. From the user management page, update the administrator user password on the clusters you want to update. + +1. For each participating cluster _and_ each Active-Active database, update the admin user credentials to match the changes in step 1. + +{{}} +Do not perform any management operations on the databases until these steps are complete. +{{}} +--- +Title: Configure password complexity rules +alwaysopen: false +categories: +- docs +- operate +- rs +description: Enable password complexity rules and configure minimum password length. +linkTitle: Password complexity rules +toc: 'true' +weight: 30 +--- + +Redis Enterprise Software provides optional password complexity rules that meet common requirements. When enabled, these rules require the password to have: + +- At least 8 characters +- At least one uppercase character +- At least one lowercase character +- At least one number +- At least one special character + +These requirements reflect v6.2.12 and later. Earlier versions did not support numbers or special characters as the first or the last character of a password. This restriction was removed in v6.2.12. 
+ +In addition, the password: + +- Cannot contain the user's email address or the reverse of the email address. +- Cannot have more than three repeating characters. + +Password complexity rules apply when a new user account is created and when the password is changed. Password complexity rules are not applied to accounts authenticated by an external identity provider. + +## Enable password complexity rules + +To enable password complexity rules, use one of the following methods: + +- Cluster Manager UI: + + 1. Go to **Cluster > Security > Preferences**, then select **Edit**. + + 1. In the **Password** section, enable **Complexity rules**. + + 1. Select **Save**. + +- [Update cluster]({{}}) REST API request: + + ```sh + PUT https://[host][:port]/v1/cluster + { "password_complexity": true } + ``` + +## Change minimum password length + +When password complexity rules are enabled, passwords must have at least 8 characters by default. + +If you change the minimum password length, the new minimum is enforced for new users and when existing users change their passwords. + +To change the minimum password length, use one of the following methods: + +- Cluster Manager UI: + + 1. Go to **Cluster > Security > Preferences**. + + 1. Click **Edit**. + + 1. In the **Password** section, enable **Complexity rules**. + + 1. Set the number of characters for **Minimum password length**. + + {{The minimum password length setting appears in the password section of the cluster security preferences screen when complexity rules are enabled.}} + + 1. Click **Save**. + +- [Update cluster]({{}}) REST API request: + + ```sh + PUT https://[host][:port]/v1/cluster + { "password_min_length": } + ``` + +## Deactivate password complexity rules + +To deactivate password complexity rules, use one of the following methods: + +- Cluster Manager UI: + + 1. Go to **Cluster > Security > Preferences**, then select **Edit**. + + 1. In the **Password** section, turn off **Complexity rules**. + + 1. Select **Save**. + +- [Update cluster]({{}}) REST API request: + + ```sh + PUT https://[host][:port]/v1/cluster + { "password_complexity": false } + ``` +--- +Title: Set password policies +alwaysopen: false +categories: +- docs +- operate +- rs +description: Set password policies. +hideListLinks: true +linkTitle: Set password policies +toc: 'true' +weight: 30 +--- + +Redis Enterprise Software provides several ways to manage the passwords of local accounts, including: + +- [Password complexity rules]({{< relref "/operate/rs/security/access-control/manage-passwords/password-complexity-rules" >}}) + +- [Password expiration]({{< relref "/operate/rs/security/access-control/manage-passwords/password-expiration" >}}) + +- [Password rotation]({{< relref "/operate/rs/security/access-control/manage-passwords/rotate-passwords" >}}) + +You can also manage a user's ability to [sign in]({{< relref "/operate/rs/security/access-control/manage-users/login-lockout#user-login-lockout" >}}) and control [session timeout]({{< relref "/operate/rs/security/access-control/manage-users/login-lockout#session-timeout" >}}). + +To enforce more advanced password policies, we recommend using [LDAP integration]({{< relref "/operate/rs/security/access-control/ldap" >}}) with an external identity provider, such as Active Directory. + +{{}} +Redis Enterprise Software securely stores all user passwords using a cryptographic hash function. 
The default password hashing algorithm is `SHA-256`, but you can [change the password hashing algorithm]({{}}) to `PBKDF2` as of Redis Enterprise Software version 7.8.6-13. +{{}} +--- +Title: Manage user login +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage user login lockout and session timeout. +linkTitle: Manage user login and session +toc: 'true' +weight: 40 +--- + +Redis Enterprise Software secures user access in a few different ways, including automatically: + +- Locking user accounts after a series of authentication failures (invalid passwords) + +- Signing sessions out after a period of inactivity + +Here, you learn how to configure the relevant settings. + +## User login lockout + +By default, after 5 failed login attempts within 15 minutes, the user account is locked for 30 minutes. You can change the user login lockout settings in the Cluster Manager UI or with [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}). + +### View login lockout settings + +You can view the cluster's user login lockout settings from **Cluster > Security > Preferences > Lockout threshold** in the Cluster Manager UI or with [`rladmin info cluster`]({{< relref "/operate/rs/references/cli-utilities/rladmin/info#info-cluster" >}}): + +```sh +$ rladmin info cluster | grep login_lockout + login_lockout_counter_reset_after: 900 + login_lockout_duration: 1800 + login_lockout_threshold: 5 +``` + +### Configure user login lockout + +To change the user login lockout settings using the Cluster Manager UI: + +1. Go to **Cluster > Security > Preferences**, then select **Edit**. + +1. In the **Lockout threshold** section, make sure the checkbox is selected. + + {{The Lockout threshold configuration section}} + +1. Configure the following **Lockout threshold** settings: + + 1. **Log-in attempts until user is revoked** - The number of failed login attempts allowed before the user account is locked. + + 1. **Time between failed login attempts** in seconds, minutes, or hours - The amount of time during which failed login attempts are counted. + + 1. For **Unlock method**, select one of the following: + + - **Locked duration** to set how long the user account is locked after excessive failed login attempts. + + - **Only Admin can unlock the user by resetting the password**. + +1. Select **Save**. + +### Change allowed login attempts + +To change the number of failed login attempts allowed before the user account is locked, use one of the following methods: + +- [Cluster Manager UI](#configure-user-login-lockout) + +- [`rladmin tune cluster`]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster login_lockout_threshold + ``` + + For example, to set the lockout threshold to 10 failed login attempts, run: + + ```sh + rladmin tune cluster login_lockout_threshold 10 + ``` + + If you set the lockout threshold to 0, it turns off account lockout, and the cluster settings show `login_lockout_threshold: disabled`. 
+ + ```sh + rladmin tune cluster login_lockout_threshold 0 + ``` + +### Change time before login attempts reset + +To change the amount of time during which failed login attempts are counted, use one of the following methods: + +- [Cluster Manager UI](#configure-user-login-lockout) + +- [`rladmin tune cluster`]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster login_lockout_counter_reset_after + ``` + + For example, to set the lockout reset to 1 hour, run: + + ```sh + rladmin tune cluster login_lockout_counter_reset_after 3600 + ``` + +### Change login lockout duration + +To change the amount of time that the user account is locked after excessive failed login attempts, use one of the following methods: + +- [Cluster Manager UI](#configure-user-login-lockout) + +- [`rladmin tune cluster`]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster login_lockout_duration + ``` + + For example, to set the lockout duration to 1 hour, run: + + ```sh + rladmin tune cluster login_lockout_duration 3600 + ``` + + If you set the lockout duration to 0, then the account can be unlocked only when an administrator changes the account's password. + + ```sh + rladmin tune cluster login_lockout_duration 0 + ``` + + The cluster settings now show `login_lockout_duration: admin-release`. + +### Unlock locked user accounts + +To unlock a user account in the Cluster Manager UI: + +1. Go to **Access Control > Users**. Locked users have a "User is locked out" label: + + {{The Access Control > Users configuration screen in the Cluster Manager UI}} + +1. Point to the user you want to unlock, then click **Reset to unlock**: + + {{Reset to unlock button appears when you point to a locked user in the list}} + +1. In the **Reset user password** dialog, enter a new password for the user: + + {{Reset user password dialog}} + +1. Select **Save** to reset the user's password and unlock their account. + +To unlock a user account or reset a user password with `rladmin`, run: + +```sh +rladmin cluster reset_password +``` + +To unlock a user account or reset a user password with the REST API, use [`PUT /v1/users`]({{< relref "/operate/rs/references/rest-api/requests/users#put-user" >}}): + +```sh +PUT /v1/users +{"password": ""} +``` + +### Turn off login lockout + +To turn off user login lockout and allow unlimited login attempts, use one of the following methods: + +- Cluster Manager UI: + + 1. Go to **Cluster > Security > Preferences**, then select **Edit**. + + 1. Clear the **Lockout threshold** checkbox. + + 1. Select **Save**. + +- [`rladmin tune cluster`]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster login_lockout_threshold 0 + ``` + +The cluster settings show `login_lockout_threshold: disabled`. + +## Configure session timeout + +The Redis Enterprise Cluster Manager UI supports session timeouts. By default, users are automatically logged out after 15 minutes of inactivity. + +To customize the session timeout, use one of the following methods: + +- Cluster Manager UI: + + 1. Go to **Cluster > Security > Preferences**, then select **Edit**. + + 1. For **Session timeout**, select minutes or hours from the list and enter the timeout value. + + 1. Select **Save**. 
+ +- [`rladmin cluster config`]({{< relref "/operate/rs/references/cli-utilities/rladmin/cluster/config" >}}): + + ```sh + rladmin cluster config cm_session_timeout_minutes + ``` + + The `` is the number of minutes after which sessions will time out. +--- +Title: Manage user security +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage user account security settings. +hideListLinks: false +linkTitle: Manage user security +weight: 20 +--- + +Redis Enterprise supports the following user account security settings: + +- Password complexity +- Password expiration +- User lockouts +- Account inactivity timeout + +## Manage users and user security + +--- +Title: Manage default user +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage a database's default user. +linkTitle: Manage default user +toc: 'true' +weight: 60 +--- + +When you [create a database]({{< relref "/operate/rs/databases/create" >}}), default user database access is enabled by default (**Unauthenticated access** is selected). This gives the default user full access to the database and enables compatibility with versions of Redis before Redis 6. + +Select **Password-only authentication**, then enter and confirm a default database password to require authentication for connections to the database. + +{{Select Password-only authentication to require a password to access the database.}} + +## Authenticate as default user + +When you configure a password for your database, all connections to the database must authenticate using the [AUTH]({{< relref "/commands/auth" >}}) command. See Redis security's [authentication]({{}}) section for more information. + +```sh +AUTH +``` + +## Change default database password + +To change the default user's password: + +1. From the database's **Security** tab, select **Edit**. + +1. In the **Access Control** section, select **Password-only authentication** as the **Access method**. + +1. Enter and re-enter the new password. + +1. Select **Save**. + +## Deactivate default user + +If you set up [role-based access control]({{< relref "/operate/rs/security/access-control" >}}) with [access control lists]({{< relref "/operate/rs/security/access-control/create-db-roles" >}}) (ACLs) for your database and don't require backwards compatibility with versions earlier than Redis 6, you can [deactivate the default user]({{< relref "/operate/rs/security/access-control/manage-users/default-user" >}}). + +{{}} +Before you deactivate default user access, make sure the role associated with the database is [assigned to a user]({{< relref "/operate/rs/security/access-control/create-users" >}}). Otherwise, the database will be inaccessible. +{{}} + +To deactivate the default user: + +1. From the database's **Security** tab, select **Edit**. + +1. In the **Access Control** section, select **Using ACL only** as the **Access method**. + + {{Select Using ACL only to deactivate default user access to the database.}} + +1. Choose at least one role and Redis ACL to access the database. + +1. Select **Save**. +--- +Title: Update database ACLs +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes how to use the Cluster Manager UI to update database access + control lists (ACLs) to authorize access to roles authorizing LDAP user access. +weight: 45 +--- + +To grant LDAP users access to a database, assign the mapped access role to the access control list (ACL) for the database. + +1. 
In the Cluster Manager UI, go to **Databases**, then select the database from the list. + +1. From the **Security** tab, select the **Edit** button. + +1. In the **Access Control List** section, select **+ Add ACL**. + + {{Updating a database access control list (ACL)}} + +1. Select the appropriate roles and then save your changes. + +If you assign multiple roles to an ACL and a user is authorized by more than one of these roles, their access is determined by the first “matching” rule in the list. + +If the first rule gives them read access and the third rule authorizes write access, the user will only be able to read data. + +As a result, we recommend ordering roles so that higher access roles appear before roles with more limited access. + + +## More info + +- Enable and configure [role-based LDAP]({{< relref "/operate/rs/security/access-control/ldap/enable-role-based-ldap.md" >}}) +- Map LDAP groups to [access control roles]({{< relref "/operate/rs/security/access-control/ldap/map-ldap-groups-to-roles.md" >}}) +- Learn more about Redis Enterprise Software [security and practices]({{< relref "/operate/rs/security/" >}}) +--- +Title: Enable role-based LDAP +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes how to enable role-based LDAP authentication and authorization + using the Cluster Manager UI. +weight: 25 +--- + +Redis Enterprise Software uses a role-based mechanism to enable LDAP authentication and authorization. + +When a user attempts to access Redis Enterprise resources using LDAP credentials, the credentials are passed to the LDAP server in a bind request. If the request succeeds, the user’s groups are searched for a group that authorizes access to the original resource. + +Role-based LDAP lets you authorize cluster management users (previously known as _external users_) and database users. As with any access control role, you can define the level of access authorized by the role. + +## Set up LDAP connection + +To configure and enable LDAP from the Cluster Manager UI: + +1. Go to **Access Control > LDAP > Configuration**. + +1. Select **+ Create**. + +1. In **Set LDAP**, configure [LDAP server settings](#ldap-server-settings), [bind credentials](#bind-credentials), [authentication query](#authentication-query), and [authorization query](#authorization-query). + + {{The LDAP configuration screen in the Cluster Manager UI}} + +1. Select **Save & Enable**. + +### LDAP server settings + +The **LDAP server** settings define the communication settings used for LDAP authentication and authorization. These include: + +| _Setting_ | _Description_ | +|:----------|:--------------| +| **Protocol type** | Underlying communication protocol; must be _LDAP_, _LDAPS_, or _STARTTLS_ | +| **Host** | URL of the LDAP server | +| **Port** | LDAP server port number | +| **Trusted CA certificate** | _(LDAPS or STARTTLS protocols only)_ Certificate for the trusted certificate authority (CA) | + +When defining multiple LDAP hosts, the organization tree structure must be identical for all hosts. 
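Before you save the configuration, you can optionally sanity-check the connection details from a shell with a standard LDAP client such as `ldapsearch`. This tool is not part of Redis Enterprise, and the host, bind DN, password, search base, and filter below are placeholders that mirror the example values used in the following sections; substitute your own:

```sh
# Verify that the LDAP server is reachable over LDAPS and that the
# bind DN and password are accepted before entering them in the UI.
ldapsearch -x -H ldaps://ldap1.example.com:636 \
  -D "cn=admin,dc=example,dc=org" -w 'admin1' \
  -b "ou=dev,dc=example,dc=com" "(cn=someuser)"
```

If this query returns the expected user entry, the same server, credentials, and search settings should work when entered in the configuration screen.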
+ +### Bind credentials + +These settings define the credentials for the bind query: + +| _Setting_ | _Description_ | +|:----------|:--------------| +| **Distinguished Name** | Example: `cd=admin,dc=example,dc=org` | +| **Password** | Example: `admin1` | +| **Client certificate authentication** |_(LDAPS or STARTTLS protocols only)_ Place checkmark to enable | +| **Client public key** | _(LDAPS or STARTTLS protocols only)_ The client public key for authentication | +| **Client private key** | _(LDAPS or STARTTLS protocols only)_ The client private key for authentication | + +### Authentication query + +These settings define the authentication query: + +| _Setting_ | _Description_ | +|:----------|:--------------| +| **Search user by** | Either _Template_ or _Query_ | +| **Template** | _(template search)_ Example: `cn=%u,ou=dev,dc=example,dc=com` | +| **Base** | _(query search)_ Example: `ou=dev,dc=example,dc=com` | +| **Filter** | _(query search)_ Example: `(cn=%u)` | +| **Scope** | _(query search)_ Must be _baseObject_, _singleLevel_, or _wholeSubtree_ | + +In this example, `%u` is replaced by the username attempting to access the Redis Enterprise resource. + +### Authorization query + +These settings define the group authorization query: + +| _Setting_ | _Description_ | +|:----------|:--------------| +| **Search groups by** | Either _Attribute_ or _Query_ | +| **Attribute** | _(attribute search)_ Example: `memberOf` (case-sensitive) | +| **Base** | _(query search)_ Example: `ou=groups,dc=example,dc=com` | +| **Filter** | _(query search)_ Example: `(members=%D)` | +| **Scope** | _(query search)_ Must be _baseObject_, _singleLevel_, or _wholeSubtree_ | + +In this example, `%D` is replaced by the Distinguished Name of the user attempting to access the Redis Enterprise resource. + +### Authentication timeout + +The **Authentication timeout** setting determines the connection timeout to the LDAP server during user authentication. + +By default, the timeout is 5 seconds, which is recommended for most cases. + +However, if you enable multi-factor authentication (MFA) for your LDAP server, you might need to increase the timeout to provide enough time for MFA verification. You can set it to any integer in the range of 5-60 seconds. + +## More info + +- Map LDAP groups to [access control roles]({{< relref "/operate/rs/security/access-control/ldap/map-ldap-groups-to-roles" >}}) +- Update database ACLs to [authorize LDAP access]({{< relref "/operate/rs/security/access-control/ldap/update-database-acls" >}}) +- Learn more about Redis Software [security and practices]({{< relref "/operate/rs/security/" >}}) +--- +Title: Map LDAP groups to roles +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes how to map LDAP authorization groups to Redis Enterprise roles + using the Cluster Manager UI. +weight: 35 +--- + +Redis Enterprise Software uses a role-based mechanism to enable LDAP authentication and authorization. + +Once LDAP is enabled, you need to map LDAP groups to Redis Enterprise access control roles. + +## Map LDAP groups to roles + +To map LDAP groups to access control roles in the Cluster Manager UI: + +1. Select **Access Control > LDAP > Mapping**. + + {{}} +You can map LDAP roles when LDAP configuration is not enabled, but they won't have any effect until you [configure and enable LDAP]({{< relref "/operate/rs/security/access-control/ldap/enable-role-based-ldap" >}}). + {{}} + + {{Enable LDAP mappings Panel}} + +1. 
Select the **+ Add LDAP Mapping** button to create a new mapping and then enter the following details: + + | _Setting_ | _Description_ | +|:----------|:--------------| +| **Name** | A descriptive, unique name for the mapping | +| **Distinguished Name** | The distinguished name of the LDAP group to be mapped.
Example: `cn=admins,ou=groups,dc=example,dc=com` | +| **Role** | The Redis Software access control role defined for this group | +| **Email** | _(Optional)_ An address to receive alerts| +| **Alerts** | Selections identifying the desired alerts. | + + {{Enable LDAP mappings Panel}} + +1. When finished, select the **Save** button. + +Create a mapping for each LDAP group used to authenticate and/or authorize access to Redis Enterprise Software resources. + +The scope of the authorization depends on the access control role: + +- If the role authorizes admin management, LDAP users are authorized as cluster management administrators. + +- If the role authorizes database access, LDAP users are authorized to use the database to the limits specified in the role. + +- To authorize LDAP users to specific databases, update the database access control lists (ACLs) to include the mapped LDAP role. + +## More info + +- Enable and configure [role-based LDAP]({{< relref "/operate/rs/security/access-control/ldap/enable-role-based-ldap" >}}) +- Update database ACLs to [authorize LDAP access]({{< relref "/operate/rs/security/access-control/ldap/update-database-acls" >}}) +- Learn more about Redis Enterprise Software [security and practices]({{< relref "/operate/rs/security/" >}}) +--- +Title: Migrate to role-based LDAP +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes how to migrate existing cluster-based LDAP deployments to role-based + LDAP. +weight: 55 +--- + +Redis Enterprise Software supports LDAP through a [role-based mechanism]({{< relref "/operate/rs/security/access-control/ldap/" >}}), first introduced [in v6.0.20]({{< relref "/operate/rs/release-notes/rs-6-0-20-april-2021" >}}). + +Earlier versions of Redis Enterprise Software supported a cluster-based mechanism; however, that mechanism was removed in v6.2.12. + +If you're using the cluster-based mechanism to enable LDAP authentication, you need to migrate to the role-based mechanism before upgrading to Redis Enterprise Software v6.2.12 or later. + +## Migration checklist + +This checklist covers the basic process: + +1. Identify accounts per app on the customer end. + +1. Create or identify an LDAP user account on the server that is responsible for LDAP authentication and authorization. + +1. Create or identify an LDAP group that contains the app team members. + +1. Verify or configure the Redis Enterprise ACLs. + +1. Configure each database ACL. + +1. Remove the earlier "external" (LDAP) users from Redis Enterprise. + +1. _(Recommended)_ Update cluster configuration to replace the cluster-based configuration file. + + You can use `rladmin` to update the cluster configuration: + + ``` bash + $ touch /tmp/saslauthd_empty.conf + $ rladmin cluster config saslauthd_ldap_conf \ + /tmp/saslauthd_empty.conf + ``` + + Here, a blank file replaces the earlier configuration. + +1. Use **Access Control > LDAP > Configuration** to enable role-based LDAP. + +1. Map your LDAP groups to access control roles. + +1. Test application connectivity using the LDAP credentials of an app team member. + +1. _(Recommended)_ Turn off default access for the database to avoid anonymous client connections. + + Because deployments and requirements vary, you’ll likely need to adjust these guidelines. + +## Test LDAP access + +To test your LDAP integration, you can: + +- Connect with `redis-cli` and use the [`AUTH` command]({{< relref "/commands/auth" >}}) to test LDAP username/password credentials. 
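  For example, assuming a database endpoint of `redis-12345.example.com:12345` and an LDAP account `appuser1` (both placeholders), a quick check might look like this:

  ```sh
  redis-cli -h redis-12345.example.com -p 12345
  AUTH appuser1 ldap_user_password
  ```

  An `OK` reply means the LDAP bind succeeded and the user belongs to a group whose mapped role is listed in the database's ACL.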
+ +- Sign in to the Cluster Manager UI using LDAP credentials authorized for admin access. + +- Use [Redis Insight]({{< relref "/develop/tools/insight" >}}) to access a database using authorized LDAP credentials. + +- Use the [REST API]({{< relref "/operate/rs/references/rest-api" >}}) to connect using authorized LDAP credentials. + +## More info + +- Enable and configure [role-based LDAP]({{< relref "/operate/rs/security/access-control/ldap/enable-role-based-ldap" >}}) +- Map LDAP groups to [access control roles]({{< relref "/operate/rs/security/access-control/ldap/map-ldap-groups-to-roles" >}}) +- Update database ACLs to [authorize LDAP access]({{< relref "/operate/rs/security/access-control/ldap/update-database-acls" >}}) +- Learn more about Redis Enterprise Software [security and practices]({{< relref "/operate/rs/security/" >}}) +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Describes how Redis Enterprise Software integrates LDAP authentication + and authorization. Also describes how to enable LDAP for your deployment of Redis + Enterprise Software. +hideListLinks: true +linkTitle: LDAP authentication +title: LDAP authentication +weight: 50 +--- + +Redis Enterprise Software supports [Lightweight Directory Access Protocol](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol) (LDAP) authentication and authorization through its [role-based access controls]({{< relref "/operate/rs/security/access-control" >}}) (RBAC). You can use LDAP to authorize access to the Cluster Manager UI and to control database access. + +You can configure LDAP roles using the Redis Enterprise Cluster Manager UI or [REST API]({{< relref "/operate/rs/references/rest-api/requests/ldap_mappings/" >}}). + +## How it works + +Here's how role-based LDAP integration works: + +{{LDAP overview}} + +1. A user signs in with their LDAP credentials. + + Based on the LDAP configuration details, the username is mapped to an LDAP Distinguished Name. + +1. A simple [LDAP bind request](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol#Bind_(authenticate)) is attempted using the Distinguished Name and the password. The sign-in fails if the bind fails. + +1. Obtain the user’s LDAP group memberships. + + Using configured LDAP details, obtain a list of the user’s group memberships. + +1. Compare the user’s LDAP group memberships to those mapped to local roles. + +1. Determine if one of the user's groups is authorized to access the target resource. If so, the user is granted the level of access authorized to the role. + +To access the Cluster Manager UI, the user needs to belong to an LDAP group mapped to an administrative role. + +For database access, the user needs to belong to an LDAP group mapped to a role listed in the database’s access control list (ACL). The rights granted to the group determine the user's level of access. + +## Prerequisites + +Before you enable LDAP in Redis Enterprise, you need: + +1. The following LDAP details: + + - Server URI, including host, port, and protocol details. + - Certificate details for secure protocols. + - Bind credentials, including Distinguished Name, password, and (optionally) client public and private keys for certificate authentication. + - Authentication query details, whether template or query. + - Authorization query details, whether attribute or query. + - The Distinguished Names of LDAP groups you’ll use to authorize access to Redis Enterprise resources. + +1. 
The LDAP groups that correspond to the levels of access you wish to authorize. Each LDAP group will be mapped to a Redis Enterprise access control role. + +1. A Redis Enterprise access control role for each LDAP group. Before you enable LDAP, you need to set up [role-based access controls]({{< relref "/operate/rs/security/access-control" >}}) (RBAC). + +## Enable LDAP + +To enable LDAP: + +1. From **Access Control > LDAP** in the Cluster Manager UI, select the **Configuration** tab and [enable LDAP access]({{< relref "/operate/rs/security/access-control/ldap/enable-role-based-ldap" >}}). + + {{Enable LDAP Panel}} + +2. Map LDAP groups to [access control roles]({{< relref "/operate/rs/security/access-control/ldap/map-ldap-groups-to-roles" >}}). + +3. Update database access control lists (ACLs) to [authorize role access]({{< relref "/operate/rs/security/access-control/ldap/update-database-acls" >}}). + +If you already have appropriate roles, you can update them to include LDAP groups. + +## More info + +- Enable and configure [role-based LDAP]({{< relref "/operate/rs/security/access-control/ldap/enable-role-based-ldap" >}}) +- Map LDAP groups to [access control roles]({{< relref "/operate/rs/security/access-control/ldap/map-ldap-groups-to-roles" >}}) +- Update database ACLs to [authorize LDAP access]({{< relref "/operate/rs/security/access-control/ldap/update-database-acls" >}}) +- Learn more about Redis Enterprise Software [security and practices]({{< relref "/operate/rs/security/" >}}) + +--- +Title: Create users +alwaysopen: false +categories: +- docs +- operate +- rs +description: Create users and assign access control roles. +linkTitle: Create users +weight: 10 +--- + +## Prerequisites + +Before you create other users: + +1. Review the [access control overview]({{}}) to learn how to use role-based access control (RBAC) to manage users' cluster access and database access. + +1. Create roles you can assign to users. See [Create roles with cluster access only]({{}}), [Create roles with database access only]({{}}), or [Create roles with combined access]({{}}) for instructions. + +## Add users + +To add a user to the cluster: + +1. From the **Access Control > Users** tab in the Cluster Manager UI, select **+ Add user**. + + {{Add role with name}} + +1. Enter the name, email, and password of the new user. + + {{Add role with name}} + +1. Assign a **Role** to the user to grant permissions for cluster management and data access. + + {{Add role to user.}} + +1. Select the **Alerts** the user should receive by email: + + - **Receive alerts for databases** - The alerts that are enabled for the selected databases will be sent to the user. Choose **All databases** or **Customize** to select the individual databases to send alerts for. + + - **Receive cluster alerts** - The alerts that are enabled for the cluster in **Cluster > Alerts Settings** are sent to the user. + +1. Select **Save**. + +## Assign roles to users + +Assign a role, associated with specific databases and access control lists (ACLs), to a user to grant database access: + +1. From the **Access Control > Users** tab in the Cluster Manager UI, you can: + + - Point to an existing user and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit the user. + + - Select **+ Add user** to [create a new user]({{< relref "/operate/rs/security/access-control/create-users" >}}). + +1. Select a role to assign to the user. + + {{Add role to user.}} + +1. Select **Save**. 
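You can also add users with the REST API instead of the Cluster Manager UI. The following request is an illustrative sketch of the [`POST /v1/users`]({{< relref "/operate/rs/references/rest-api/requests/users" >}}) endpoint; the field values are placeholders, and you should check the REST API reference for the exact fields supported by your version, including how roles are referenced and how email alerts are configured:

```sh
POST https://[host][:port]/v1/users
'{"email": "user@example.com", "password": "a_strong_password", "name": "Example User", "role": "db_member"}'
```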
+ +## Next steps + +Depending on the type of the user's assigned role (cluster management role or data access role), the user can now: + +- [Connect to a database]({{< relref "/operate/rs/databases/connect" >}}) associated with the role and run limited Redis commands, depending on the role's Redis ACLs. + +- Sign in to the Redis Enterprise Software Cluster Manager UI. + +- Make a [REST API]({{< relref "/operate/rs/references/rest-api" >}}) request. +--- +Title: Create roles with cluster access only +alwaysopen: false +categories: +- docs +- operate +- rs +description: Create roles with cluster access only. +linkTitle: Create roles with cluster access only +weight: 14 +--- + +Roles with cluster access allow access to the Cluster Management UI and REST API. + +## Default management roles + +Redis Enterprise Software includes five predefined roles that determine a user's level of access to the Cluster Manager UI and [REST API]({{}}). + +1. **DB Viewer** - Read database settings +1. **DB Member** - Administer databases +1. **Cluster Viewer** - Read cluster settings +1. **Cluster Member** - Administer the cluster +1. **User Manager** - Administer users +1. **Admin** - Full cluster access +1. **None** - For data access only - cannot access the Cluster Manager UI or use the REST API + +For more details about the privileges granted by each of these roles, see [Cluster Manager UI permissions](#cluster-manager-ui-permissions) or [REST API permissions]({{}}). + +## Cluster Manager UI permissions + +Here's a summary of the Cluster Manager UI actions permitted by each default management role: + +| Action | DB Viewer | DB Member | Cluster Viewer | Cluster Member | Admin | User Manager | +|--------|:---------:|:---------:|:--------------:|:-----------:|:------:|:------:| +| Create, edit, delete users and LDAP mappings | ❌ No | ❌ No | ❌ No | ❌ No | ✅ Yes | ✅ Yes | +| Create support package | ❌ No | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes | ❌ No | +| Edit database configuration | ❌ No | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes | ❌ No | +| Reset slow log | ❌ No | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes | ❌ No | +| View cluster configuration | ❌ No | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | +| View cluster logs | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes
| ✅ Yes
| +| View cluster metrics | ❌ No | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | +| View database configuration | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | +| View database metrics | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | +| View node configuration | ❌ No | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | +| View node metrics | ❌ No | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | +| View Redis database password | ❌ No | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes | +| View slow log | ❌ No | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes | ❌ No | +| View and edit cluster settings | ❌ No | ❌ No | ❌ No | ❌ No | ✅ Yes | ❌ No | + +## Create roles for cluster access {#create-cluster-role} + +You can use the [Cluster Manager UI](#create-roles-ui) or the [REST API](#define-roles-rest-api) to create a role that grants cluster access but does not grant access to any databases. + +### Cluster Manager UI method {#create-roles-ui} + +To create a role that grants cluster access: + +1. From **Access Control** > **Roles**, you can: + + - Point to a role and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit an existing role. + + - Select **+ Add role** to create a new role. + + {{Add role with name}} + +1. Enter a descriptive name for the role. + +1. Choose a **Cluster management role** to determine cluster management permissions. + + {{Select a cluster management role to set the level of cluster management permissions for the new role.}} + +1. To prevent database access when using this role, do not add any ACLs. + +1. Select **Save**. + +You can [assign the new role to users]({{}}) to grant cluster access. + +### REST API method {#define-roles-rest-api} + +To [create a role]({{}}) that grants cluster access: + +```sh +POST /v1/roles +{ + "name": "", + "management": "db_viewer | db_member | cluster_viewer | cluster_member | user_manager | admin" +} +``` +--- +Title: Create roles with combined access +alwaysopen: false +categories: +- docs +- operate +- rs +description: Create roles with both cluster and database access. +linkTitle: Create roles with combined access +weight: 16 +--- + +To create a role that grants database access privileges and allows access to the Cluster Management UI and REST API: + +1. [Define Redis ACLs](#define-redis-acls) that determine database access privileges. + +1. [Create a role with ACLs](#create-role) added and choose a **Cluster management role** other than **None**. + +## Define Redis ACLs + +You can use the [Cluster Manager UI](#define-acls-ui) or the [REST API](#define-acls-rest-api) to define Redis ACL rules that you can assign to roles. + +### Cluster Manager UI method {#define-acls-ui} + +To define a Redis ACL rule using the Cluster Manager UI: + +1. From **Access Control > Redis ACLs**, you can either: + + - Point to a Redis ACL and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit an existing Redis ACL. + + - Select **+ Add Redis ACL** to create a new Redis ACL. + +1. Enter a descriptive name for the Redis ACL. This will be used to associate the ACL rule with the role. + +1. Define the ACL rule. For more information about Redis ACL rules and syntax, see the [Redis ACL overview]({{}}). + + {{}} +The **ACL builder** does not support selectors and key permissions. Use **Free text command** to manually define them instead. + {{}} + +1. Select **Save**. 
+ +{{}} +For multi-key commands on multi-slot keys, the return value is `failure`, but the command runs on the keys that are allowed. +{{}} + +### REST API method {#define-acls-rest-api} + +To define a Redis ACL rule using the REST API, use a [create Redis ACL]({{}}) request. For more information about Redis ACL rules and syntax, see the [Redis ACL overview]({{}}). + +Example request: + +```sh +POST /v1/redis_acls +{ + "name": "Test_ACL_1", + "acl": "+@read +FT.INFO +FT.SEARCH" +} +``` + +Example response body: + +```json +{ + "acl": "+@read +FT.INFO +FT.SEARCH", + "name": "Test_ACL_1", + "uid": 11 +} +``` + +To associate the Redis ACL with a role and database, use the `uid` from the response as the `redis_acl_uid` when you add `roles_permissions` to the database. See [Associate a database with roles and Redis ACLs](#associate-roles-acls-rest-api) for an example request. + +## Create roles with ACLs and cluster access {#create-role} + +You can create a role that grants database access privileges and allows access to the Cluster Management UI and REST API. + +### Cluster Manager UI method {#create-roles-ui} + +To define a role for combined access using the Cluster Manager UI: + +1. From **Access Control** > **Roles**, you can: + + - Point to a role and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit an existing role. + + - Select **+ Add role** to create a new role. + + {{Add role with name}} + +1. Enter a descriptive name for the role. This will be used to reference the role when configuring users. + +1. Choose a **Cluster management role** other than **None**. For details about permissions granted by each role, see [Cluster Manager UI permissions]({{}}) and [REST API permissions]({{}}). + + {{Add role with name}} + +1. Select **+ Add ACL**. + + {{Add role database acl}} + +1. Choose a Redis ACL and databases to associate with the role. + + {{Add databases to access}} + +1. Select the check mark {{< image filename="/images/rs/buttons/checkmark-button.png#no-click" alt="The Check button" width="25px" class="inline" >}} to confirm. + +1. Select **Save**. + + {{Add databases to access}} + +You can [assign the new role to users]({{}}) to grant database access and access to the Cluster Manager UI and REST API. + +### REST API method {#define-roles-rest-api} + +To define a role for combined access using the REST API: + +1. [Create a role.](#create-role-rest-api) + +1. [Associate a database with roles and Redis ACLs.](#associate-roles-acls-rest-api) + +#### Create a role {#create-role-rest-api} + +To [create a role]({{}}) using the REST API: + +```sh +POST /v1/roles +{ + "name": "", + "management": "db_viewer | db_member | cluster_viewer | cluster_member | admin" +} +``` + +Example response body: + +```json +{ + "management": "admin", + "name": "", + "uid": 7 +} +``` + +To associate the role with a Redis ACL and database, use the `uid` from the response as the `role_uid` when you add `roles_permissions` to the database. See [Associate a database with roles and Redis ACLs](#associate-roles-acls-rest-api) for an example request. 
+ + +#### Associate a database with roles and Redis ACLs {#associate-roles-acls-rest-api} + +[Update a database's configuration]({{}}) to add `roles_permissions` with the role and Redis ACL: + +```sh +POST /v1/bdbs/ +{ + "roles_permissions": + [ + { + "role_uid": , + "redis_acl_uid": + } + ] +} +``` +--- +Title: Overview of Redis ACLs in Redis Enterprise Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: An overview of Redis ACLs, syntax, and ACL command support in Redis Enterprise Software. +linkTitle: Redis ACL overview +weight: 17 +--- + +Redis access control lists (Redis ACLs) allow you to define named permissions for specific Redis commands, keys, and pub/sub channels. You can use defined Redis ACLs for multiple databases and roles. + +## Predefined Redis ACLs + +Redis Enterprise Software provides one predefined Redis ACL named **Full Access**. This ACL allows all commands on all keys and cannot be edited. + +## Redis ACL syntax + +Redis ACLs are defined by a [Redis syntax]({{< relref "/operate/oss_and_stack/management/security/acl" >}}) where you specify the commands or command categories that are allowed for specific keys. + +### Commands and categories + +Redis ACL rules can allow or block specific [Redis commands]({{< relref "/commands" >}}) or [command categories]({{< relref "/operate/oss_and_stack/management/security/acl" >}}#command-categories). + +- `+` includes commands + +- `-` excludes commands + +- `+@` includes command categories + +- `-@` excludes command categories + +The following example allows all `read` commands and the `SET` command: + +```sh ++@read +SET +``` + +Module commands have several ACL limitations: + +- [Redis modules]({{< relref "/operate/oss_and_stack/stack-with-enterprise" >}}) do not have command categories. + +- Other [command category]({{< relref "/operate/oss_and_stack/management/security/acl" >}}#command-categories) ACLs, such as `+@read` and `+@write`, do not include Redis module commands. `+@all` is the only exception because it allows all Redis commands. + +- You have to include individual module commands in a Redis ACL rule to allow them. + + For example, the following Redis ACL rule allows read-only commands and the RediSearch commands `FT.INFO` and `FT.SEARCH`: + + ```sh + +@read +FT.INFO +FT.SEARCH + ``` + +### Key patterns + +To define access to specific keys or key patterns, use the following prefixes: + +- `~` or `%RW~` allows read and write access to keys. + +- `%R~` allows read access to keys. + +- `%W~` allows write access to keys. + +`%RW~`, `%R~`, and `%W~` are only supported for databases with Redis version 7.2 or later. + +The following example allows read and write access to all keys that start with "app1" and read-only access to all keys that start with "app2": + +```sh +~app1* %R~app2* +``` + +### Pub/sub channels + +The `&` prefix allows access to [pub/sub channels]({{< relref "/develop/interact/pubsub" >}}) (only supported for databases with Redis version 6.2 or later). + +To limit access to specific channels, include `resetchannels` before the allowed channels: + +```sh +resetchannels &channel1 &channel2 +``` + +### Selectors + +[Selectors]({{< relref "/operate/oss_and_stack/management/security/acl" >}}#selectors) let you define multiple sets of rules in a single Redis ACL (only supported for databases with Redis version 7.2 or later). A command is allowed if it matches the base rule or any selector in the Redis ACL. + +- `()` creates a new selector. 
+ +- `clearselectors` deletes all existing selectors for a user. This action does not delete the base ACL rule. + +In the following example, the base rule allows `GET key1` and the selector allows `SET key2`: + +```sh ++GET ~key1 (+SET ~key2) +``` + +## Default pub/sub permissions + +Redis database version 6.2 introduced pub/sub ACL rules that determine which [pub/sub channels]({{< relref "/develop/interact/pubsub" >}}) a user can access. + +The configuration option `acl-pubsub-default`, added in Redis Enterprise Software version 6.4.2, determines the cluster-wide default level of access for all pub/sub channels. Redis Enterprise Software uses the following pub/sub permissions by default: + +- For versions 6.4.2 and 7.2, `acl-pubsub-default` is permissive (`allchannels` or `&*`) by default to accommodate earlier Redis versions. + +- In future versions, `acl-pubsub-default` will change to restrictive (`resetchannels`). Restrictive permissions block all pub/sub channels by default, unless explicitly permitted by an ACL rule. + +If you use ACLs and pub/sub channels, you should review your databases and ACL settings and plan to transition your cluster to restrictive pub/sub permissions in preparation for future Redis Enterprise Software releases. + +### Prepare for restrictive pub/sub permissions + +To secure pub/sub channels and prepare your cluster for future Redis Enterprise Software releases that default to restrictive pub/sub permissions: + +1. Upgrade Redis databases: + + - For Redis Enterprise Software version 6.4.2, upgrade all databases in the cluster to Redis DB version 6.2. + + - For Redis Enterprise Software version 7.2, upgrade all databases in the cluster to Redis DB version 7.2 or 6.2. + +1. Create or update ACLs with permissions for specific channels using the `resetchannels &channel` format. + +1. Associate the ACLs with relevant databases. + +1. Set default pub/sub permissions (`acl-pubsub-default`) to restrictive. See [Change default pub/sub permissions](#change-default-pubsub-permissions) for details. + +1. If any issues occur, you can temporarily change the default pub/sub setting back to permissive. Resolve any problematic ACLs before making pub/sub permissions restrictive again. + +{{}} +When you change the cluster's default pub/sub permissions to restrictive, `&*` is added to the **Full Access** ACL. Before you make this change, consider the following: + +- Because pub/sub ACL syntax was added in Redis 6.2, you can't associate the **Full Access** ACL with database versions 6.0 or lower after this change. + +- The **Full Access** ACL is not reverted if you change `acl-pubsub-default` to permissive again. + +- Every database with the default user enabled uses the **Full Access** ACL. +{{}} + +### Change default pub/sub permissions + +As of Redis Enterprise version 6.4.2, you can configure `acl_pubsub_default`, which determines the default pub/sub permissions for all databases in the cluster. You can set `acl_pubsub_default` to the following values: + +- `resetchannels` is restrictive and blocks access to all channels by default. + +- `allchannels` is permissive and allows access to all channels by default. + +To make default pub/sub permissions restrictive: + +1. [Upgrade all databases]({{< relref "/operate/rs/installing-upgrading/upgrading/upgrade-database" >}}) in the cluster to Redis version 6.2 or later. + +1. 
Set the default to restrictive (`resetchannels`) using one of the following methods: + + - New Cluster Manager UI (only available for Redis Enterprise versions 7.2 and later): + + 1. Navigate to **Access Control > Settings > Pub/Sub ACLs** and select **Edit**. + + 1. For **Default permissions for Pub/Sub ACLs**, select **Restrictive**, then **Save**. + + - [`rladmin tune cluster`]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}): + + ```sh + rladmin tune cluster acl_pubsub_default resetchannels + ``` + + - [Update cluster policy]({{< relref "/operate/rs/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "acl_pubsub_default": "resetchannels" } + ``` + +## ACL command support + +Redis Enterprise Software does not support certain Redis ACL commands. Instead, you can manage access controls from the Cluster Manager UI. + +{{}} + +Redis ACLs also have the following differences in Redis Enterprise Software: + +- The `MULTI`, `EXEC`, `DISCARD` commands are always allowed, but ACLs are enforced on `MULTI` subcommands. + +- Nested selectors are not supported. + + For example, the following selectors are not valid in Redis Enterprise: `+GET ~key1 (+SET (+SET ~key2) ~key3)` + +- Key and pub/sub patterns do not allow the following characters: `'(', ')'` + +- The following password configuration syntax is not supported: `'>', '<', '#!', 'resetpass'` + + To configure passwords in Redis Enterprise Software, use one of the following methods: + + - [`rladmin cluster reset_password`]({{< relref "/operate/rs/references/cli-utilities/rladmin/cluster/reset_password" >}}): + + ```sh + rladmin cluster reset_password + ``` + + - REST API [`PUT /v1/users`]({{< relref "/operate/rs/references/rest-api/requests/users#put-user" >}}) request and provide `password` + +--- +Title: Create roles with database access only +alwaysopen: false +categories: +- docs +- operate +- rs +description: Create roles with database access only. +linkTitle: Create roles with database access only +weight: 15 +--- + +Roles with database access grant the ability to access and interact with a database's data. Database access privileges are determined by defining [Redis ACLs]({{}}) and adding them to roles. + +To create a role that grants database access without granting access to the Redis Enterprise Cluster Manager UI and REST API: + +1. [Define Redis ACLs](#define-redis-acls) that determine database access privileges. + +1. [Create a role with ACLs](#create-roles-with-acls) added and leave the **Cluster management role** as **None**. + +## Define Redis ACLs + +You can use the [Cluster Manager UI](#define-acls-ui) or the [REST API](#define-acls-rest-api) to define Redis ACL rules that you can assign to roles. + +### Cluster Manager UI method {#define-acls-ui} + +To define a Redis ACL rule using the Cluster Manager UI: + +1. From **Access Control > Redis ACLs**, you can either: + + - Point to a Redis ACL and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit an existing Redis ACL. + + - Select **+ Add Redis ACL** to create a new Redis ACL. + +1. Enter a descriptive name for the Redis ACL. This will be used to associate the ACL rule with the role. + +1. Define the ACL rule. For more information about Redis ACL rules and syntax, see the [Redis ACL overview]({{}}). + + {{}} +The **ACL builder** does not support selectors and key permissions. 
Use **Free text command** to manually define them instead. + {{}} + +1. Select **Save**. + +{{}} +For multi-key commands on multi-slot keys, the return value is `failure`, but the command runs on the keys that are allowed. +{{}} + +### REST API method {#define-acls-rest-api} + +To define a Redis ACL rule using the REST API, use a [create Redis ACL]({{}}) request. For more information about Redis ACL rules and syntax, see the [Redis ACL overview]({{}}). + +Example request: + +```sh +POST /v1/redis_acls +{ + "name": "Test_ACL_1", + "acl": "+@read +FT.INFO +FT.SEARCH" +} +``` + +Example response body: + +```json +{ + "acl": "+@read +FT.INFO +FT.SEARCH", + "name": "Test_ACL_1", + "uid": 11 +} +``` + +To associate the Redis ACL with a role and database, use the `uid` from the response as the `redis_acl_uid` when you add `roles_permissions` to the database. See [Associate a database with roles and Redis ACLs](#associate-roles-acls-rest-api) for an example request. + +## Create roles with ACLs + +To create a role that grants database access to users but blocks access to the Redis Enterprise Cluster Manager UI and REST API, set the **Cluster management role** to **None**. + +### Cluster Manager UI method {#create-roles-ui} + +To define a role for database access using the Cluster Manager UI: + +1. From **Access Control** > **Roles**, you can: + + - Point to a role and select {{< image filename="/images/rs/buttons/edit-button.png#no-click" alt="The Edit button" width="25px" class="inline" >}} to edit an existing role. + + - Select **+ Add role** to create a new role. + + {{Add role with name}} + +1. Enter a descriptive name for the role. This will be used to reference the role when configuring users. + +1. Leave **Cluster management role** as the default **None**. + + {{Add role with name}} + +1. Select **+ Add ACL**. + + {{Add role database acl}} + +1. Choose a Redis ACL and databases to associate with the role. + + {{Add databases to access}} + +1. Select the check mark {{< image filename="/images/rs/buttons/checkmark-button.png#no-click" alt="The Check button" width="25px" class="inline" >}} to confirm. + +1. Select **Save**. + + {{Add databases to access}} + +You can [assign the new role to users]({{}}) to grant database access. + +### REST API method {#define-roles-rest-api} + +To define a role for database access using the REST API: + +1. [Create a role.](#create-role-rest-api) + +1. [Associate a database with roles and Redis ACLs.](#associate-roles-acls-rest-api) + +#### Create a role {#create-role-rest-api} + +To [create a role]({{}}) using the REST API: + +```sh +POST /v1/roles +{ + "name": "", + "management": "none" +} +``` + +Example response body: + +```json +{ + "management": "none", + "name": "", + "uid": 7 +} +``` + +To associate the role with a Redis ACL and database, use the `uid` from the response as the `role_uid` when you add `roles_permissions` to the database. See [Associate a database with roles and Redis ACLs](#associate-roles-acls-rest-api) for an example request. + + +#### Associate a database with roles and Redis ACLs {#associate-roles-acls-rest-api} + +[Update a database's configuration]({{}}) to add `roles_permissions` with the role and Redis ACL: + +```sh +POST /v1/bdbs/ +{ + "roles_permissions": + [ + { + "role_uid": , + "redis_acl_uid": + } + ] +} +``` +--- +Title: Access control +alwaysopen: false +categories: +- docs +- operate +- rs +description: An overview of access control in Redis Enterprise Software. 
+hideListLinks: false +linkTitle: Access control +weight: 10 +--- + +Redis Enterprise Software lets you use role-based access control (RBAC) to manage users' access privileges. RBAC requires you to do the following: + +1. Create roles and define each role's access privileges. + +1. Create users and assign roles to them. The assigned role determines the user's access privileges. + +## Cluster access versus database access + +Redis Enterprise allows two separate paths of access: + +- **Cluster access** allows performing management-related actions, such as creating databases and viewing statistics. + +- **Database access** allows performing data-related actions, like reading and writing data in a database. + +You can grant cluster access, database access, or both to each role. These roles let you differentiate between users who can access databases and users who can access cluster management, according to your organization's security needs. + +The following diagram shows three different options for roles and users: + +{{Role-based access control diagram.}} + +- Role A was created with permission to access the cluster and perform management-related actions. Because user A was assigned role A, they can access the cluster but cannot access databases. + +- Role B was created with permission to access one or more databases and perform data-related actions. Because user B was assigned role B, they cannot access the cluster but can access databases. + +- Role C was created with cluster access and database access permissions. Because user C was assigned role C, they can access the cluster and databases. + +## Default database access + +When you create a database, [default user access]({{< relref "/operate/rs/security/access-control/manage-users/default-user" >}}) is enabled automatically. + +If you set up role-based access controls for your database and don't require compatibility with versions earlier than Redis 6, you can [deactivate the default user]({{< relref "/operate/rs/security/access-control/manage-users/default-user" >}}). + +{{}} +Before you [deactivate default user access]({{< relref "/operate/rs/security/access-control/manage-users/default-user#deactivate-default-user" >}}), make sure the role associated with the database is [assigned to a user]({{< relref "/operate/rs/security/access-control/create-users#assign-roles-to-users" >}}). Otherwise, the database will be inaccessible. 
+{{}} + +## More info +--- +Title: Security +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +hideListLinks: true +weight: 60 +--- + +Redis Enterprise Software provides various features to secure your Redis Enterprise Software deployment: + +| Login and passwords | Users and roles | Encryption and TLS | Certificates and audit | +|---------------------|-----------------|--------------------|-----------------------| +| [Password attempts and session timeout]({{}}) | [Cluster and database access explained]({{}}) | [Enable TLS]({{}}) | [Create certificates]({{}}) | +| [Password complexity]({{}}) | [Create users]({{}}) | [Configure TLS protocols]({{}}) | [Monitor certificates]({{}}) | +| [Password expiration]({{}}) | [Create roles]({{}}) | [Configure cipher suites]({{}}) | [Update certificates]({{}}) | +| [Default database access]({{}}) | [Redis ACLs]({{}}) | [Encrypt private keys on disk]({{}}) | [Enable OCSP stapling]({{}}) | +| [Rotate user passwords]({{}}) | [Integrate with LDAP]({{}}) | [Internode encryption]({{}}) | [Audit database connections]({{}}) | + +## Recommended security practices + +See [Recommended security practices]({{}}) to learn how to protect Redis Enterprise Software. + +## Redis Trust Center + +Visit our [Trust Center](https://trust.redis.io/) to learn more about Redis security policies. If you find a suspected security bug, you can [submit a report](https://hackerone.com/redis-vdp?type=team). +--- +Title: Compatibility with Redis Open Source configuration settings +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Redis Open Source configuration settings supported by Redis Enterprise. +linkTitle: Configuration settings +weight: 50 +--- + +Redis Enterprise Software and [Redis Cloud]({{< relref "/operate/rc" >}}) only support a subset of [Redis Open Source configuration settings]({{}}). Using [`CONFIG GET`]({{< relref "/commands/config-get" >}}) or [`CONFIG SET`]({{< relref "/commands/config-set" >}}) with unsupported configuration settings returns an error. + +| Setting | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| activerehashing | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| busy-reply-threshold | ✅ Standard
✅ Active-Active | ❌ Standard
❌ Active-Active | Value must be between 0 and 60000 milliseconds. | +| hash-max-listpack-entries | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| hash-max-listpack-value | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| hash-max-ziplist-entries | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| hash-max-ziplist-value | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| hll-sparse-max-bytes | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| list-compress-depth | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| list-max-listpack-size | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| list-max-ziplist-size | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| lua-time-limit | ✅ Standard
✅ Active-Active | ❌ Standard
❌ Active-Active | Value must be between 0 and 60000 milliseconds. | +| notify-keyspace-events | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| set-max-intset-entries | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| slowlog-log-slower-than | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Value must be larger than 1000 microseconds. | +| slowlog-max-len | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Value must be between 128 and 1024. | +| stream-node-max-bytes | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| stream-node-max-entries | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| tracking-table-max-keys | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | For Redis Software, use an [update database configuration]({{}}) REST API request or [`rladmin tune db`]({{}}) to set `tracking_table_max_keys` instead. | +| zset-max-listpack-entries | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| zset-max-listpack-value | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| zset-max-ziplist-entries | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| zset-max-ziplist-value | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +--- +Title: Connection management commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Connection management commands compatibility. +linkTitle: Connection management +weight: 10 +--- + +The following tables show which Redis Open Source [connection management commands]({{< relref "/commands" >}}?group=connection) are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [AUTH]({{< relref "/commands/auth" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT CACHING]({{< relref "/commands/client-caching" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLIENT GETNAME]({{< relref "/commands/client-getname" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT GETREDIR]({{< relref "/commands/client-getredir" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLIENT ID]({{< relref "/commands/client-id" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Because Redis Enterprise clustering allows [multiple active proxies]({{< relref "/operate/rs/databases/configure/proxy-policy" >}}), `CLIENT ID` cannot guarantee incremental IDs between clients that connect to different nodes under multi proxy policies. | +| [CLIENT INFO]({{< relref "/commands/client-info" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT KILL]({{< relref "/commands/client-kill" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT LIST]({{< relref "/commands/client-list" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT NO-EVICT]({{< relref "/commands/client-no-evict" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLIENT NO-TOUCH]({{< relref "/commands/client-no-touch" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT PAUSE]({{< relref "/commands/client-pause" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLIENT REPLY]({{< relref "/commands/client-reply" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLIENT SETINFO]({{< relref "/commands/client-setinfo" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT SETNAME]({{< relref "/commands/client-setname" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT TRACKING]({{< relref "/commands/client-tracking" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT TRACKINGINFO]({{< relref "/commands/client-trackinginfo" >}}) |✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT UNBLOCK]({{< relref "/commands/client-unblock" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CLIENT UNPAUSE]({{< relref "/commands/client-unpause" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ECHO]({{< relref "/commands/echo" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HELLO]({{< relref "/commands/hello" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PING]({{< relref "/commands/ping" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [QUIT]({{< relref "/commands/quit" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v7.2.0. | +| [RESET]({{< relref "/commands/reset" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [SELECT]({{< relref "/commands/select" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | Redis Enterprise does not support shared databases due to potential negative performance impacts and blocks any related commands. The `SELECT` command is supported solely for compatibility with Redis Open Source but does not perform any operations in Redis Enterprise. | +--- +Title: Cluster management commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Cluster management commands compatible with Redis Enterprise. +linkTitle: Cluster management +weight: 10 +--- + +[Clustering in Redis Enterprise Software]({{< relref "/operate/rs/databases/durability-ha/clustering" >}}) and [Redis Cloud]({{< relref "/operate/rc/databases/configuration/clustering" >}}) differs from the [Redis Open Source cluster]({{}}) and works with all standard Redis clients. + +Redis Enterprise blocks most [cluster commands]({{< relref "/commands" >}}?group=cluster). If you try to use a blocked cluster command, it returns an error. + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [ASKING]({{< relref "/commands/asking" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER ADDSLOTS]({{< relref "/commands/cluster-addslots" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER ADDSLOTSRANGE]({{< relref "/commands/cluster-addslotsrange" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER BUMPEPOCH]({{< relref "/commands/cluster-bumpepoch" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER COUNT-FAILURE-REPORTS]({{< relref "/commands/cluster-count-failure-reports" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER COUNTKEYSINSLOT]({{< relref "/commands/cluster-countkeysinslot" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER DELSLOTS]({{< relref "/commands/cluster-delslots" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER DELSLOTSRANGE]({{< relref "/commands/cluster-delslotsrange" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER FAILOVER]({{< relref "/commands/cluster-failover" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER FLUSHSLOTS]({{< relref "/commands/cluster-flushslots" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER FORGET]({{< relref "/commands/cluster-forget" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER GETKEYSINSLOT]({{< relref "/commands/cluster-getkeysinslot" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER HELP]({{< relref "/commands/cluster-help" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Only supported with the [OSS cluster API]({{< relref "/operate/rs/databases/configure/oss-cluster-api" >}}). | +| [CLUSTER INFO]({{< relref "/commands/cluster-info" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Only supported with the [OSS cluster API]({{< relref "/operate/rs/databases/configure/oss-cluster-api" >}}). | +| [CLUSTER KEYSLOT]({{< relref "/commands/cluster-keyslot" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Only supported with the [OSS cluster API]({{< relref "/operate/rs/databases/configure/oss-cluster-api" >}}). | +| [CLUSTER LINKS]({{< relref "/commands/cluster-links" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER MEET]({{< relref "/commands/cluster-meet" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER MYID]({{< relref "/commands/cluster-myid" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER MYSHARDID]({{< relref "/commands/cluster-myshardid" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER NODES]({{< relref "/commands/cluster-nodes" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Only supported with the [OSS cluster API]({{< relref "/operate/rs/databases/configure/oss-cluster-api" >}}). | +| [CLUSTER REPLICAS]({{< relref "/commands/cluster-replicas" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER REPLICATE]({{< relref "/commands/cluster-replicate" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER RESET]({{< relref "/commands/cluster-reset" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER SAVECONFIG]({{< relref "/commands/cluster-saveconfig" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER SET-CONFIG-EPOCH]({{< relref "/commands/cluster-set-config-epoch" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER SETSLOT]({{< relref "/commands/cluster-setslot" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER SHARDS]({{< relref "/commands/cluster-shards" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CLUSTER SLAVES]({{< relref "/commands/cluster-slaves" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | Deprecated as of Redis v5.0.0. | +| [CLUSTER SLOTS]({{< relref "/commands/cluster-slots" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Only supported with the [OSS cluster API]({{< relref "/operate/rs/databases/configure/oss-cluster-api" >}}). Deprecated as of Redis v7.0.0. | +| [READONLY]({{< relref "/commands/readonly" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [READWRITE]({{< relref "/commands/readwrite" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +--- +Title: Data type commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Data type commands compatibility (bitmaps, geospatial indices, hashes, + HyperLogLogs, lists, sets, sorted sets, streams, strings). +linkTitle: Data types +toc: 'true' +weight: 10 +--- + +The following tables show which Redis Open Source data type commands are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + +## Bitmap commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [BITCOUNT]({{< relref "/commands/bitcount" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BITFIELD]({{< relref "/commands/bitfield" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BITFIELD_RO]({{< relref "/commands/bitfield_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BITOP]({{< relref "/commands/bitop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BITPOS]({{< relref "/commands/bitpos" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GETBIT]({{< relref "/commands/getbit" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SETBIT]({{< relref "/commands/setbit" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## Geospatial indices commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [GEOADD]({{< relref "/commands/geoadd" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GEODIST]({{< relref "/commands/geodist" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GEOHASH]({{< relref "/commands/geohash" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GEOPOS]({{< relref "/commands/geopos" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GEORADIUS]({{< relref "/commands/georadius" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [GEORADIUS_RO]({{< relref "/commands/georadius_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [GEORADIUSBYMEMBER]({{< relref "/commands/georadiusbymember" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [GEORADIUSBYMEMBER_RO]({{< relref "/commands/georadiusbymember_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [GEOSEARCH]({{< relref "/commands/geosearch" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GEOSEARCHSTORE]({{< relref "/commands/geosearchstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## Hash commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [HDEL]({{< relref "/commands/hdel" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HEXISTS]({{< relref "/commands/hexists" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HEXPIRE]({{< relref "/commands/hexpire" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HEXPIREAT]({{< relref "/commands/hexpireat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HEXPIRETIME]({{< relref "/commands/hexpiretime" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HGET]({{< relref "/commands/hget" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HGETALL]({{< relref "/commands/hgetall" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HINCRBY]({{< relref "/commands/hincrby" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HINCRBYFLOAT]({{< relref "/commands/hincrbyfloat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HKEYS]({{< relref "/commands/hkeys" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HLEN]({{< relref "/commands/hlen" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HMGET]({{< relref "/commands/hmget" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HMSET]({{< relref "/commands/hmset" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v4.0.0. | +| [HPERSIST]({{< relref "/commands/hpersist" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HPEXPIRE]({{< relref "/commands/hpexpire" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HPEXPIREAT]({{< relref "/commands/hpexpireat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HPEXPIRETIME]({{< relref "/commands/hpexpiretime" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HPTTL]({{< relref "/commands/hpttl" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HRANDFIELD]({{< relref "/commands/hrandfield" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HSCAN]({{< relref "/commands/hscan" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HSET]({{< relref "/commands/hset" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HSETNX]({{< relref "/commands/hsetnx" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HSTRLEN]({{< relref "/commands/hstrlen" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HTTL]({{< relref "/commands/httl" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [HVALS]({{< relref "/commands/hvals" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## HyperLogLog commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [PFADD]({{< relref "/commands/pfadd" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PFCOUNT]({{< relref "/commands/pfcount" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PFDEBUG]({{< relref "/commands/pfdebug" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [PFMERGE]({{< relref "/commands/pfmerge" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PFSELFTEST]({{< relref "/commands/pfselftest" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | + + +## List commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [BLMOVE]({{< relref "/commands/blmove" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BLMPOP]({{< relref "/commands/blmpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BLPOP]({{< relref "/commands/blpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BRPOP]({{< relref "/commands/brpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BRPOPLPUSH]({{< relref "/commands/brpoplpush" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [LINDEX]({{< relref "/commands/lindex" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LINSERT]({{< relref "/commands/linsert" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LLEN]({{< relref "/commands/llen" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LMOVE]({{< relref "/commands/lmove" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LMPOP]({{< relref "/commands/lmpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LPOP]({{< relref "/commands/lpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LPOS]({{< relref "/commands/lpos" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LPUSH]({{< relref "/commands/lpush" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LPUSHX]({{< relref "/commands/lpushx" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LRANGE]({{< relref "/commands/lrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LREM]({{< relref "/commands/lrem" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LSET]({{< relref "/commands/lset" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LTRIM]({{< relref "/commands/ltrim" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [RPOP]({{< relref "/commands/rpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [RPOPLPUSH]({{< relref "/commands/rpoplpush" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [RPUSH]({{< relref "/commands/rpush" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [RPUSHX]({{< relref "/commands/rpushx" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## Set commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [SADD]({{< relref "/commands/sadd" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SCARD]({{< relref "/commands/scard" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SDIFF]({{< relref "/commands/sdiff" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SDIFFSTORE]({{< relref "/commands/sdiffstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SINTER]({{< relref "/commands/sinter" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SINTERCARD]({{< relref "/commands/sintercard" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SINTERSTORE]({{< relref "/commands/sinterstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SISMEMBER]({{< relref "/commands/sismember" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SMEMBERS]({{< relref "/commands/smembers" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SMISMEMBER]({{< relref "/commands/sismember" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SMOVE]({{< relref "/commands/smove" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SPOP]({{< relref "/commands/spop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SRANDMEMBER]({{< relref "/commands/srandmember" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SREM]({{< relref "/commands/srem" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SSCAN]({{< relref "/commands/sscan" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SUNION]({{< relref "/commands/sunion" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SUNIONSTORE]({{< relref "/commands/sunionstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## Sorted set commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [BZMPOP]({{< relref "/commands/bzmpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BZPOPMAX]({{< relref "/commands/bzpopmax" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [BZPOPMIN]({{< relref "/commands/bzpopmin" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZADD]({{< relref "/commands/zadd" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZCARD]({{< relref "/commands/zcard" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZCOUNT]({{< relref "/commands/zcount" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZDIFF]({{< relref "/commands/zdiff" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZDIFFSTORE]({{< relref "/commands/zdiffstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZINCRBY]({{< relref "/commands/zincrby" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZINTER]({{< relref "/commands/zinter" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZINTERCARD]({{< relref "/commands/zintercard" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZINTERSTORE]({{< relref "/commands/zinterstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZLEXCOUNT]({{< relref "/commands/zlexcount" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZMPOP]({{< relref "/commands/zmpop" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZMSCORE]({{< relref "/commands/zmscore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZPOPMAX]({{< relref "/commands/zpopmax" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZPOPMIN]({{< relref "/commands/zpopmin" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZRANDMEMBER]({{< relref "/commands/zrandmember" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZRANGE]({{< relref "/commands/zrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZRANGEBYLEX]({{< relref "/commands/zrangebylex" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [ZRANGEBYSCORE]({{< relref "/commands/zrangebyscore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [ZRANGESTORE]({{< relref "/commands/zrangestore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZRANK]({{< relref "/commands/zrank" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZREM]({{< relref "/commands/zrem" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZREMRANGEBYLEX]({{< relref "/commands/zremrangebylex" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZREMRANGEBYRANK]({{< relref "/commands/zremrangebyrank" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZREMRANGEBYSCORE]({{< relref "/commands/zremrangebyscore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZREVRANGE]({{< relref "/commands/zrevrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [ZREVRANGEBYLEX]({{< relref "/commands/zrevrangebylex" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [ZREVRANGEBYSCORE]({{< relref "/commands/zrevrangebyscore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [ZREVRANK]({{< relref "/commands/zrevrank" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZSCAN]({{< relref "/commands/zscan" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZSCORE]({{< relref "/commands/zscore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZUNION]({{< relref "/commands/zunion" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [ZUNIONSTORE]({{< relref "/commands/zunionstore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## Stream commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [XACK]({{< relref "/commands/xack" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XADD]({{< relref "/commands/xadd" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XAUTOCLAIM]({{< relref "/commands/xautoclaim" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XCLAIM]({{< relref "/commands/xclaim" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XDEL]({{< relref "/commands/xdel" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XGROUP]({{< relref "/commands/xgroup" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XINFO]({{< relref "/commands/xinfo" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XLEN]({{< relref "/commands/xlen" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XPENDING]({{< relref "/commands/xpending" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XRANGE]({{< relref "/commands/xrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XREAD]({{< relref "/commands/xread" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XREADGROUP]({{< relref "/commands/xreadgroup" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XREVRANGE]({{< relref "/commands/xrevrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XSETID]({{< relref "/commands/xsetid" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [XTRIM]({{< relref "/commands/xtrim" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## String commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [APPEND]({{< relref "/commands/append" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [DECR]({{< relref "/commands/decr" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [DECRBY]({{< relref "/commands/decrby" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GET]({{< relref "/commands/get" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GETDEL]({{< relref "/commands/getdel" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GETEX]({{< relref "/commands/getex" >}}) | ✅ Standard
✅ Active-Active\* | ✅ Standard
✅ Active-Active\* | \*Not supported for HyperLogLog. | +| [GETRANGE]({{< relref "/commands/getrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [GETSET]({{< relref "/commands/getset" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Deprecated as of Redis v6.2.0. | +| [INCR]({{< relref "/commands/incr" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [INCRBY]({{< relref "/commands/incrby" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [INCRBYFLOAT]({{< relref "/commands/incrbyfloat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LCS]({{< relref "/commands/lcs" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [MGET]({{< relref "/commands/mget" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [MSET]({{< relref "/commands/mset" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [MSETNX]({{< relref "/commands/msetnx" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PSETEX]({{< relref "/commands/psetex" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SET]({{< relref "/commands/set" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SETEX]({{< relref "/commands/setex" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SETNX]({{< relref "/commands/setnx" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SETRANGE]({{< relref "/commands/setrange" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| STRALGO | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | Deprecated as of Redis v7.0.0. | +| [STRLEN]({{< relref "/commands/strlen" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SUBSTR]({{< relref "/commands/substr" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | Deprecated as of Redis v2.0.0. | +--- +Title: Server management commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Server management commands compatibility. +linkTitle: Server management +toc: 'true' +weight: 10 +--- + +The following tables show which Redis Open Source [server management commands]({{< relref "/commands" >}}?group=server) are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + +## Access control commands + +Several access control list (ACL) commands are not available in Redis Enterprise. Instead, you can manage access controls from the [Redis Enterprise Software Cluster Manager UI]({{< relref "/operate/rs/security/access-control" >}}) and the [Redis Cloud console]({{< relref "/operate/rc/security/access-control/data-access-control/role-based-access-control.md" >}}). + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [ACL CAT]({{< relref "/commands/acl-cat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [ACL DELUSER]({{< relref "/commands/acl-deluser" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ACL DRYRUN]({{< relref "/commands/acl-dryrun" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Might reply with "unknown user" for LDAP users even if `AUTH` succeeds. | +| [ACL GENPASS]({{< relref "/commands/acl-genpass" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ACL GETUSER]({{< relref "/commands/acl-getuser" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [ACL HELP]({{< relref "/commands/acl-help" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [ACL LIST]({{< relref "/commands/acl-list" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [ACL LOAD]({{< relref "/commands/acl-load" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ACL LOG]({{< relref "/commands/acl-log" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ACL SAVE]({{< relref "/commands/acl-save" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ACL SETUSER]({{< relref "/commands/acl-setuser" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ACL USERS]({{< relref "/commands/acl-users" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [ACL WHOAMI]({{< relref "/commands/acl-whoami" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | + + +## Configuration commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [CONFIG GET]({{< relref "/commands/config-get" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | [Only supports a subset of configuration settings.]({{< relref "/operate/rs/references/compatibility/config-settings" >}}) | +| [CONFIG RESETSTAT]({{< relref "/commands/config-resetstat" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [CONFIG REWRITE]({{< relref "/commands/config-rewrite" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [CONFIG SET]({{< relref "/commands/config-set" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | [Only supports a subset of configuration settings.]({{< relref "/operate/rs/references/compatibility/config-settings" >}}) | + + +## General server commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [COMMAND]({{< relref "/commands/command" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND COUNT]({{< relref "/commands/command-count" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND DOCS]({{< relref "/commands/command-docs" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND GETKEYS]({{< relref "/commands/command-getkeys" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND GETKEYSANDFLAGS]({{< relref "/commands/command-getkeysandflags" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND HELP]({{< relref "/commands/command-help" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND INFO]({{< relref "/commands/command-info" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [COMMAND LIST]({{< relref "/commands/command-list" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [DEBUG]({{< relref "/commands/debug" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [FLUSHALL]({{< relref "/commands/flushall" >}}) | ✅ Standard
❌ Active-Active\* | ✅ Standard
❌ Active-Active | \*Can use the [Active-Active flush API request]({{< relref "/operate/rs/references/rest-api/requests/crdbs/flush" >}}). | +| [FLUSHDB]({{< relref "/commands/flushdb" >}}) | ✅ Standard
❌ Active-Active\* | ✅ Standard
❌ Active-Active | \*Can use the [Active-Active flush API request]({{< relref "/operate/rs/references/rest-api/requests/crdbs/flush" >}}). | +| [LOLWUT]({{< relref "/commands/lolwut" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SHUTDOWN]({{< relref "/commands/shutdown" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [SWAPDB]({{< relref "/commands/swapdb" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [TIME]({{< relref "/commands/time" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + + +## Module commands + +For Redis Enterprise Software, you can [manage Redis modules]({{< relref "/operate/oss_and_stack/stack-with-enterprise/install/" >}}) from the Cluster Manager UI or with [REST API requests]({{< relref "/operate/rs/references/rest-api/requests/modules" >}}). + +Redis Cloud manages modules for you and lets you [enable modules]({{< relref "/operate/rc/databases/create-database#modules" >}}) when you create a database. + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [MODULE HELP]({{< relref "/commands/module-help" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MODULE LIST]({{< relref "/commands/module-list" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [MODULE LOAD]({{< relref "/commands/module-load" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MODULE LOADEX]({{< relref "/commands/module-loadex" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MODULE UNLOAD]({{< relref "/commands/module-unload" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | + + +## Monitoring commands + +Although Redis Enterprise does not support certain monitoring commands, you can use the Cluster Manager UI to view Redis Enterprise Software [metrics]({{< relref "/operate/rs/monitoring" >}}) and [logs]({{< relref "/operate/rs/clusters/logging" >}}) or the Redis Cloud console to view Redis Cloud [metrics]({{< relref "/operate/rc/databases/monitor-performance" >}}) and [logs]({{< relref "/operate/rc/logs-reports/system-logs" >}}). + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [DBSIZE]({{< relref "/commands/dbsize" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [INFO]({{< relref "/commands/info" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | In Redis Enterprise, `INFO` returns a different set of fields than Redis Open Source.
Not supported for [scripts]({{}}). | +| [LATENCY DOCTOR]({{< relref "/commands/latency-doctor" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [LATENCY GRAPH]({{< relref "/commands/latency-graph" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [LATENCY HELP]({{< relref "/commands/latency-help" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [LATENCY HISTOGRAM]({{< relref "/commands/latency-histogram" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [LATENCY HISTORY]({{< relref "/commands/latency-history" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [LATENCY LATEST]({{< relref "/commands/latency-latest" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [LATENCY RESET]({{< relref "/commands/latency-reset" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MEMORY DOCTOR]({{< relref "/commands/memory-doctor" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MEMORY HELP]({{< relref "/commands/memory-help" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}) in Redis versions earlier than 7. | +| [MEMORY MALLOC-STATS]({{< relref "/commands/memory-malloc-stats" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MEMORY PURGE]({{< relref "/commands/memory-purge" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MEMORY STATS]({{< relref "/commands/memory-stats" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MEMORY USAGE]({{< relref "/commands/memory-usage" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}) in Redis versions earlier than 7. | +| [MONITOR]({{< relref "/commands/monitor" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SLOWLOG GET]({{< relref "/commands/slowlog-get" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [SLOWLOG LEN]({{< relref "/commands/slowlog-len" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | +| [SLOWLOG RESET]({{< relref "/commands/slowlog-reset" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | Not supported for [scripts]({{}}). | + + +## Persistence commands + +Data persistence and backup commands are not available in Redis Enterprise. Instead, you can [manage data persistence]({{< relref "/operate/rs/databases/configure/database-persistence" >}}) and [backups]({{< relref "/operate/rs/databases/import-export/schedule-backups" >}}) from the Redis Enterprise Software Cluster Manager UI and the [Redis Cloud console]({{< relref "/operate/rc/databases/view-edit-database#durability-section" >}}). + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [BGREWRITEAOF]({{< relref "/commands/bgrewriteaof" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [BGSAVE]({{< relref "/commands/bgsave" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [LASTSAVE]({{< relref "/commands/lastsave" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [SAVE]({{< relref "/commands/save" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | + + +## Replication commands + +Redis Enterprise automatically manages [replication]({{< relref "/operate/rs/databases/durability-ha/replication" >}}). + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [FAILOVER]({{< relref "/commands/failover" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MIGRATE]({{< relref "/commands/migrate" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [PSYNC]({{< relref "/commands/psync" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [REPLCONF]({{< relref "/commands/replconf" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [REPLICAOF]({{< relref "/commands/replicaof" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [RESTORE-ASKING]({{< relref "/commands/restore-asking" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [ROLE]({{< relref "/commands/role" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [SLAVEOF]({{< relref "/commands/slaveof" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | Deprecated as of Redis v5.0.0. | +| [SYNC]({{< relref "/commands/sync" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +--- +Title: Pub/sub commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Pub/sub commands compatibility. +linkTitle: Pub/sub +weight: 10 +--- + +The following table shows which Redis Open Source [pub/sub commands]({{< relref "/commands" >}}?group=pubsub) are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [PSUBSCRIBE]({{< relref "/commands/psubscribe" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUBLISH]({{< relref "/commands/publish" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUBSUB CHANNELS]({{< relref "/commands/pubsub-channels" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUBSUB NUMPAT]({{< relref "/commands/pubsub-numpat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUBSUB NUMSUB]({{< relref "/commands/pubsub-numsub" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUBSUB SHARDCHANNELS]({{< relref "/commands/pubsub-shardchannels" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUBSUB SHARDNUMSUB]({{< relref "/commands/pubsub-shardnumsub" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PUNSUBSCRIBE]({{< relref "/commands/punsubscribe" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SPUBLISH]({{< relref "/commands/spublish" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SSUBSCRIBE]({{< relref "/commands/ssubscribe" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SUBSCRIBE]({{< relref "/commands/subscribe" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SUNSUBSCRIBE]({{< relref "/commands/sunsubscribe" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [UNSUBSCRIBE]({{< relref "/commands/unsubscribe" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +--- +Title: Transaction commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Transaction commands compatibility. +linkTitle: Transactions +weight: 10 +--- + +The following table shows which Redis Open Source [transaction commands]({{< relref "/commands" >}}?group=transactions) are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [DISCARD]({{< relref "/commands/discard" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EXEC]({{< relref "/commands/exec" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [MULTI]({{< relref "/commands/multi" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [UNWATCH]({{< relref "/commands/unwatch" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [WATCH]({{< relref "/commands/watch" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +--- +Title: Compatibility with Redis Open Source commands +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Redis Open Source commands compatible with Redis Enterprise. +hideListLinks: true +linkTitle: Commands +weight: 30 +--- + +Learn which Redis Open Source commands are compatible with Redis Enterprise Software and [Redis Cloud]({{< relref "/operate/rc" >}}). + +Select a command group for more details about compatibility with standard and Active-Active Redis Enterprise. + +{{}} +--- +Title: Scripting commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Scripting and function commands compatibility. +linkTitle: Scripting +weight: 10 +--- + +The following table shows which Redis Open Source [scripting and function commands]({{< relref "/commands" >}}?group=scripting) are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + +## Function commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [FCALL]({{< relref "/commands/fcall" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FCALL_RO]({{< relref "/commands/fcall_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION DELETE]({{< relref "/commands/function-delete" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION DUMP]({{< relref "/commands/function-dump" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION FLUSH]({{< relref "/commands/function-flush" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION HELP]({{< relref "/commands/function-help" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION KILL]({{< relref "/commands/function-kill" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION LIST]({{< relref "/commands/function-list" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION LOAD]({{< relref "/commands/function-load" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION RESTORE]({{< relref "/commands/function-restore" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [FUNCTION STATS]({{< relref "/commands/function-stats" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + +## Scripting commands + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [EVAL]({{< relref "/commands/eval" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EVAL_RO]({{< relref "/commands/eval_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EVALSHA]({{< relref "/commands/evalsha" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EVALSHA_RO]({{< relref "/commands/evalsha_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SCRIPT DEBUG]({{< relref "/commands/script-debug" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [SCRIPT EXISTS]({{< relref "/commands/script-exists" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SCRIPT FLUSH]({{< relref "/commands/script-flush" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SCRIPT KILL]({{< relref "/commands/script-kill" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SCRIPT LOAD]({{< relref "/commands/script-load" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +--- +Title: Key commands compatibility +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Generic key commands compatible with Redis Enterprise. +linkTitle: Keys (generic) +weight: 10 +--- + +The following table shows which Redis Open Source [key (generic) commands]({{< relref "/commands" >}}?group=generic) are compatible with standard and Active-Active databases in Redis Enterprise Software and Redis Cloud. + +| Command | Redis
Enterprise | Redis
Cloud | Notes | +|:--------|:----------------------|:-----------------|:------| +| [COPY]({{< relref "/commands/copy" >}}) | ✅ Standard
✅ Active-Active\* | ✅ Standard
✅ Active-Active\* | For Active-Active or clustered databases, the source and destination keys must be in the same hash slot.

\*Not supported for stream consumer group info. | +| [DEL]({{< relref "/commands/del" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [DUMP]({{< relref "/commands/dump" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EXISTS]({{< relref "/commands/exists" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EXPIRE]({{< relref "/commands/expire" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EXPIREAT]({{< relref "/commands/expireat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [EXPIRETIME]({{< relref "/commands/expiretime" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [KEYS]({{< relref "/commands/keys" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [MIGRATE]({{< relref "/commands/migrate" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | | +| [MOVE]({{< relref "/commands/move" >}}) | ❌ Standard
❌ Active-Active | ❌ Standard
❌ Active-Active | Redis Enterprise does not support shared databases due to potential negative performance impacts and blocks any related commands. | +| [OBJECT ENCODING]({{< relref "/commands/object-encoding" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [OBJECT FREQ]({{< relref "/commands/object-freq" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [OBJECT IDLETIME]({{< relref "/commands/object-idletime" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [OBJECT REFCOUNT]({{< relref "/commands/object-refcount" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PERSIST]({{< relref "/commands/persist" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PEXPIRE]({{< relref "/commands/pexpire" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PEXPIREAT]({{< relref "/commands/pexpireat" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PEXPIRETIME]({{< relref "/commands/pexpiretime" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [PTTL]({{< relref "/commands/pttl" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [RANDOMKEY]({{< relref "/commands/randomkey" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [RENAME]({{< relref "/commands/rename" >}}) | ✅ Standard
✅ Active-Active\* | ✅ Standard
✅ Active-Active\* | For Active-Active or clustered databases, the original key and new key must be in the same hash slot.

\*Not supported for stream consumer group info. | +| [RENAMENX]({{< relref "/commands/renamenx" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | For Active-Active or clustered databases, the original key and new key must be in the same hash slot. | +| [RESTORE]({{< relref "/commands/restore" >}}) | ✅ Standard
❌ Active-Active\* | ✅ Standard
❌ Active-Active\* | \*Only supported for module keys. | +| [SCAN]({{< relref "/commands/scan" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SORT]({{< relref "/commands/sort" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [SORT_RO]({{< relref "/commands/sort_ro" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [TOUCH]({{< relref "/commands/touch" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [TTL]({{< relref "/commands/ttl" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [TYPE]({{< relref "/commands/type" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [UNLINK]({{< relref "/commands/unlink" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | +| [WAIT]({{< relref "/commands/wait" >}}) | ✅ Standard
❌ Active-Active\* | ❌ Standard**
❌ Active-Active | \*For Active-Active databases, `WAIT` commands are supported for primary and replica shard replication. You can contact support to enable `WAIT` for local replicas only. `WAIT` is not supported for cross-instance replication.

\*\*`WAIT` commands are supported on Redis Cloud Flexible subscriptions. | +| [WAITAOF]({{< relref "/commands/waitaof" >}}) | ✅ Standard
✅ Active-Active | ✅ Standard
✅ Active-Active | | + +--- +Title: Client-side caching compatibility with Redis Software and Redis Cloud +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis Software and Redis Cloud compatibility with client-side caching. +linkTitle: Client-side caching +toc: 'true' +weight: 80 +--- + +Redis Software and Redis Cloud support [client-side caching]({{}}) for databases with Redis versions 7.4 or later. + +## Required database versions + +Client-side caching in Redis Software and Redis Cloud requires Redis database versions 7.4 or later. + +The following table shows the differences in client-side caching support by product: + +| Redis product | Client-side caching support | +|-------------------------|-----------------------------| +| Redis Open Source | Redis v6.0 and later | +| Redis Cloud | Redis database v7.4 and later | +| Redis Software | Redis database v7.4 and later | + +## Supported RESP versions + +Client-side caching in Redis Software and Redis Cloud requires [RESP3]({{< relref "/develop/reference/protocol-spec#resp-versions" >}}). + +The following table shows the differences in client-side caching support for RESP by product: + +| Redis product with client-side caching | RESP2 | RESP3 | +|-------------------------|-------|-------| +| Redis Open Source | | | +| Redis Cloud | | | +| Redis Software | | | + +## Two connections mode with REDIRECT not supported + +Unlike Redis Open Source, Redis Software and Redis Cloud do not support [two connections mode]({{}}) or the `REDIRECT` option for [`CLIENT TRACKING`]({{}}). + +## Change tracking_table_max_keys for a database + +When client-side caching is enabled, Redis uses an invalidation table to track which keys are cached by each connected client. + +The configuration setting `tracking-table-max-keys` determines the maximum number of keys stored in the invalidation table and is set to `1000000` keys by default. Redis Software does not support using `CONFIG SET` to change this value, but you can use the REST API or rladmin instead. + +To change `tracking_table_max_keys` for a database in a Redis Software cluster: + +- [`rladmin tune db`]({{}}): + + ```sh + rladmin tune db db: tracking_table_max_keys 2000000 + ``` + + You can use the database name in place of `db:` in the preceding command. + +- [Update database configuration]({{}}) REST API request: + + ```sh + PUT /v1/bdbs/ + { "tracking_table_max_keys": 2000000 } + ``` + +## Change default tracking_table_max_keys + +The cluster-wide option `default_tracking_table_max_keys_policy` determines the default value of `tracking_table_max_keys` for new databases in a Redis Software cluster. `default_tracking_table_max_keys_policy` is set to `1000000` keys by default. + +To change `default_tracking_table_max_keys_policy`, use one of the following methods: + +- [`rladmin tune cluster`]({{}}) + + ```sh + rladmin tune cluster default_tracking_table_max_keys_policy 2000000 + ``` + +- [Update cluster policy]({{}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "default_tracking_table_max_keys_policy": 2000000 } + ``` +--- +Title: RESP compatibility with Redis Enterprise +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis Enterprise supports RESP2 and RESP3. +linkTitle: RESP +toc: 'true' +weight: 80 +--- + +RESP (Redis Serialization Protocol) is the protocol that clients use to communicate with Redis databases. See the [RESP protocol specification]({{< relref "/develop/reference/protocol-spec" >}}) for more information. 
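To check which protocol version a connection uses, you can run the [`HELLO`]({{< relref "/commands/hello" >}}) command. The following sketch uses `redis-cli` with placeholder connection details and assumes RESP3 is enabled for the database:

```sh
# Replace <host> and <port> with your database endpoint details.
redis-cli -h <host> -p <port> HELLO
# The "proto" field in the reply reports the RESP version currently in use.

# Start a connection in RESP3 mode (requires RESP3 support on the database).
redis-cli -3 -h <host> -p <port> PING
```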
+ +## Supported RESP versions + +- RESP2 is supported by all Redis Enterprise versions. + +- RESP3 is supported by Redis Enterprise 7.2 and later. + +{{}} +Redis Enterprise versions that support RESP3 continue to support RESP2. +{{}} + + +## Enable RESP3 for a database {#enable-resp3} + +To use RESP3 with a Redis Enterprise Software database: + +1. Upgrade Redis servers to version 7.2 or later. + + For Active-Active and Replica Of databases: + + 1. Upgrade all participating clusters to Redis Enterprise version 7.2.x or later. + + 1. Upgrade all databases to version 7.x or later. + +1. Enable RESP3 support for your database (`enabled` by default): + + - [`rladmin tune db`]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-db" >}}): + + ```sh + rladmin tune db db: resp3 enabled + ``` + + You can use the database name in place of `db:` in the preceding command. + + - [Update database configuration]({{< relref "/operate/rs/references/rest-api/requests/bdbs#put-bdbs" >}}) REST API request: + + ```sh + PUT /v1/bdbs/ + { "resp3": true } + ``` + + ## Deactivate RESP3 for a database {#deactivate-resp3} + + To deactivate RESP3 support for a database: + +- [`rladmin tune db`]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-db" >}}): + + ```sh + rladmin tune db db: resp3 disabled + ``` + + You can use the database name in place of `db:` in the preceding command. + +- [Update database configuration]({{< relref "/operate/rs/references/rest-api/requests/bdbs#put-bdbs" >}}) REST API request: + + ```sh + PUT /v1/bdbs/ + { "resp3": false } + ``` + + When RESP3 is deactivated, connected clients that use RESP3 are disconnected from the database. + +{{}} +You cannot use sharded pub/sub if you deactivate RESP3 support. When RESP3 is enabled, you can use sharded pub/sub with either RESP2 or RESP3. +{{}} + +## Change default RESP3 option + +The cluster-wide option `resp3_default` determines the default value of the `resp3` option, which enables or deactivates RESP3 for a database, upon upgrading a database to version 7.2. `resp3_default` is set to `enabled` by default. + +To change `resp3_default` to `disabled`, use one of the following methods: + +- Cluster Manager UI: + + 1. On the **Databases** screen, select {{< image filename="/images/rs/buttons/button-toggle-actions-vertical.png#no-click" alt="Toggle actions button" width="22px" class="inline" >}} to open a list of additional actions. + + 1. Select **Upgrade configuration**. + + 1. For **RESP3 support**, select **Disable**. + + 1. Click **Save**. + +- [`rladmin tune cluster`]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}) + + ```sh + rladmin tune cluster resp3_default disabled + ``` + +- [Update cluster policy]({{< relref "/operate/rs/references/rest-api/requests/cluster/policy#put-cluster-policy" >}}) REST API request: + + ```sh + PUT /v1/cluster/policy + { "resp3_default": false } + ``` + +## Client prerequisites for Redis 7.2 upgrade + +The Redis clients [Go-Redis](https://redis.uptrace.dev/) version 9 and [Lettuce](https://redis.github.io/lettuce/) versions 6 and later use RESP3 by default. If you use either client to run Redis Stack commands, you should set the client's protocol version to RESP2 before upgrading your database to Redis version 7.2 to prevent potential application issues due to RESP3 breaking changes. 
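Before you change client settings, you can confirm the database's current Redis compatibility version from `redis-cli`. This is a minimal check with placeholder connection details:

```sh
# Replace <host> and <port> with your database endpoint details.
redis-cli -h <host> -p <port> INFO server | grep redis_version
# The redis_version field reports the database's Redis compatibility version.
```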
+ +### Go-Redis + +For applications using Go-Redis v9.0.5 or later, set the protocol version to RESP2: + +```go +client := redis.NewClient(&redis.Options{ + Addr: "", + Protocol: 2, // Pin the protocol version +}) +``` + +### Lettuce + +To set the protocol version to RESP2 with Lettuce v6 or later: + +```java +import io.lettuce.core.*; +import io.lettuce.core.api.*; +import io.lettuce.core.protocol.ProtocolVersion; + +// ... +RedisClient client = RedisClient.create(""); +client.setOptions(ClientOptions.builder() + .protocolVersion(ProtocolVersion.RESP2) // Pin the protocol version + .build()); +// ... +``` + +If you are using [LettuceMod](https://github.com/redis-developer/lettucemod/), you need to upgrade to [v3.6.0](https://github.com/redis-developer/lettucemod/releases/tag/v3.6.0). +--- +Title: Redis Enterprise compatibility with Redis Open Source +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Redis Enterprise compatibility with Redis Open Source. +hideListLinks: true +linkTitle: Redis Open Source compatibility +weight: $weight +tocEmbedHeaders: true +--- +Both Redis Enterprise Software and [Redis Cloud]({{< relref "/operate/rc" >}}) are compatible with Redis Open Source. + +{{< embed-md "rc-rs-oss-compatibility.md" >}} + +## RESP compatibility + +Redis Enterprise Software and Redis Cloud support RESP2 and RESP3. See [RESP compatibility with Redis Enterprise]({{< relref "/operate/rs/references/compatibility/resp" >}}) for more information. + +## Client-side caching compatibility + +Redis Software and Redis Cloud support [client-side caching]({{}}) for databases with Redis versions 7.4 or later. See [Client-side caching compatibility with Redis Software and Redis Cloud]({{}}) for more information about compatibility and configuration options. + +## Compatibility with open source Redis Cluster API + +Redis Enterprise supports [Redis OSS Cluster API]({{< relref "/operate/rs/clusters/optimize/oss-cluster-api" >}}) if it is enabled for a database. For more information, see [Enable OSS Cluster API]({{< relref "/operate/rs/databases/configure/oss-cluster-api" >}}). +--- +Title: Resource usage metrics +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linkTitle: Resource usage +weight: $weight +--- + +## Connections + +Number of connections to the database. + +**Components measured**: Cluster, Node, and Database + +## CPU usage + +Percent of the node CPU used. + +**Components measured**: Cluster and Node + +### Main thread CPU usage + +Percent of the CPU used by the main thread. + +**Components measured**: Database and Shard + +### Fork CPU usage + +CPU usage of Redis child forks. + +**Components measured**: Database and Shard + +### Total CPU usage + +Percent usage of the CPU for all nodes. + +**Components measured**: Database + +## Free disk space + +Remaining unused disk space. + +**Components measured**: Cluster and Node + +## Memory +### Used memory + +Total memory used by the database, including RAM, [Flash]({{< relref "/operate/rs/databases/auto-tiering" >}}) (if enabled), and [replication]({{< relref "/operate/rs/databases/durability-ha/replication" >}}) (if enabled). + +Used memory does not include: + +1. Fragmentation overhead - The ratio of memory seen by the operating system to memory allocated by Redis +2. Replication buffers at the primary nodes - Set to 10% of used memory and is between 64 MB and 2048 MB +3. Memory used by Lua scripts - Does not exceed 1 MB +4. 
Copy on Write (COW) operation that can be triggered by: + - A full replication process + - A database snapshot process + - AOF rewrite process + +Used memory is not measured during [shard migration]({{< relref "/operate/rs/databases/configure/replica-ha" >}}). + +**Components measured**: Database and Shard + +### Free RAM + +Available RAM for System use. + +**Components measured**: Cluster and Node + +### Memory limit + +Memory size limit of the database, enforced on the [used memory](#used-memory). + +**Components measured**: Database + +### Memory usage + +Percent of memory used by Redis out of the [memory limit](#memory-limit). + +**Components measured**: Database +## Traffic + +### Incoming traffic + +Total incoming traffic to the database in bytes/sec. + +All incoming traffic is not measured during [shard migration]({{< relref "/operate/rs/databases/configure/replica-ha" >}}). + +**Components measured**: Cluster, Node and Database + +#### Incoming traffic compressed + +Total incoming compressed traffic (in bytes/sec) per [Active-Active]({{< relref "/operate/rs/databases/active-active" >}}) replica database. + +#### Incoming traffic uncompressed + +Total incoming uncompressed traffic (in bytes/sec) per [Active-Active]({{< relref "/operate/rs/databases/active-active" >}}) replica database. + +### Outgoing traffic + +Total outgoing traffic from the database in bytes per second. + +Outgoing traffic is not measured during [shard migration]({{< relref "/operate/rs/databases/configure/replica-ha" >}}). + +**Components measured**: Cluster, Node and Database + + + + + + + + --- +Title: Database operations metrics +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: null +linkTitle: Database operations +weight: $weight +--- + +## Evicted objects/sec + +Number of objects evicted from the database per second. + +Objects are evicted from the database according to the [eviction policy]({{< relref "/operate/rs/databases/memory-performance/eviction-policy" >}}). + +Object information is not measured during [shard migration]({{< relref "/operate/rs/databases/configure/replica-ha" >}}). + +**Components measured**: Database and Shard + +## Expired objects/sec + +Number of expired objects per second. + +Object information is not measured during [shard migration]({{< relref "/operate/rs/databases/configure/replica-ha" >}}). + +**Components measured**: Database and Shard + +## Hit ratio + +Ratio of the number of operations on existing keys out of the total number of operations. + +**Components measured**: Database and Shard + +### Read misses/sec + +The number of [read operations](#readssec) per second on keys that do not exist. + +Read misses are not measured during [shard migration]({{< relref "/operate/rs/databases/configure/replica-ha" >}}). + +**Components measured**: Database + +### Write misses/sec + +Number of [write operations](#writessec) per second on keys that do not exist. + +Write misses are not measured during [shard migration]({{< relref "/operate/rs/databases/configure/replica-ha" >}}). + +**Components measured**: Database and Shard + +## Latency + +The total amount of time between sending a Redis operation and receiving a response from the database. + +The graph shows average, minimum, maximum, and last latency values for all latency metrics. + +**Components measured**: Database + +### Reads latency + +[Latency](#latency) of [read operations](#readssec). 
+ +**Components measured**: Database + +### Writes latency + +[Latency](#latency) per [write operation](#writessec). + +**Components measured**: Database + +### Other commands latency + +[Latency](#latency) of [other operations](#other-commandssec). + +**Components measured**: Database + +## Ops/sec + +Number of total operations per second, which includes [read operations](#readssec), [write operations](#writessec), and [other operations](#other-commandssec). + +**Components measured**: Cluster, Node, Database, and Shard + +### Reads/sec + +Number of total read operations per second. + +To find out which commands are read operations, run the following command with [`redis-cli`]({{< relref "/operate/rs/references/cli-utilities/redis-cli" >}}): + +```sh +ACL CAT read +``` + +**Components measured**: Database + +### Writes/sec + +Number of total write operations per second. + +To find out which commands are write operations, run the following command with [`redis-cli`]({{< relref "/operate/rs/references/cli-utilities/redis-cli" >}}): + +```sh +ACL CAT write +``` + +**Components measured**: Database + +#### Pending writes min + +Minimum number of write operations queued per [Active-Active]({{< relref "/operate/rs/databases/active-active" >}}) replica database. + +#### Pending writes max + +Maximum number of write operations queued per [Active-Active]({{< relref "/operate/rs/databases/active-active" >}}) replica database. + +### Other commands/sec + +Number of operations per second that are not [read operations](#readssec) or [write operations](#writessec). + +Examples of other operations include [PING]({{< relref "/commands/ping" >}}), [AUTH]({{< relref "/commands/auth" >}}), and [INFO]({{< relref "/commands/info" >}}). + +**Components measured**: Database + +## Total keys + +Total number of keys in the dataset. + +Does not include replicated keys, even if [replication]({{< relref "/operate/rs/databases/durability-ha/replication" >}}) is enabled. + +Total keys is not measured during [shard migration]({{< relref "/operate/rs/databases/configure/replica-ha" >}}). + +**Components measured**: Database + + + + + + + + +--- +Title: Prometheus metrics v2 preview +alwaysopen: false +categories: +- docs +- integrate +- rs +description: V2 metrics available to Prometheus as of Redis Enterprise Software version 7.8.2. +group: observability +linkTitle: Prometheus metrics v2 +summary: V2 metrics available to Prometheus as of Redis Enterprise Software version 7.8.2. +type: integration +weight: 50 +tocEmbedHeaders: true +--- + +{{}} +While the metrics stream engine is in preview, this document provides only a partial list of v2 metrics. More metrics will be added. +{{}} + +You can [integrate Redis Enterprise Software with Prometheus and Grafana]({{}}) to create dashboards for important metrics. + +The v2 metrics in the following tables are available as of Redis Enterprise Software version 7.8.0. For help transitioning from v1 metrics to v2 PromQL, see [Prometheus v1 metrics and equivalent v2 PromQL]({{}}). + +The v2 scraping endpoint also exposes metrics for `node_exporter` version 1.8.1. For more information, see the [Prometheus node_exporter GitHub repository](https://github.com/prometheus/node_exporter). + +{{}} +--- +Title: Prometheus metrics v1 +alwaysopen: false +categories: +- docs +- integrate +- rs +description: V1 metrics available to Prometheus. 
+group: observability +linkTitle: Prometheus metrics v1 +summary: You can use Prometheus and Grafana to collect and visualize your Redis Enterprise Software metrics. +type: integration +weight: 48 +tocEmbedHeaders: true +--- + +You can [integrate Redis Enterprise Software with Prometheus and Grafana]({{}}) to create dashboards for important metrics. + +As of Redis Enterprise Software version 7.8.2, v1 metrics are deprecated but still available. For help transitioning from v1 metrics to v2 PromQL, see [Prometheus v1 metrics and equivalent v2 PromQL]({{}}). + +{{}} +--- +Title: Real-time metrics +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Documents the metrics that are tracked with Redis Enterprise Software. +hideListLinks: true +linkTitle: Metrics +weight: $weight +--- + +## Cluster manager metrics + +In the Redis Enterprise Cluster Manager UI, you can see real-time performance metrics for clusters, nodes, databases, and shards, and configure alerts that send notifications based on alert parameters. Select the **Metrics** tab to view the metrics for each component. For more information, see [Monitoring with metrics and alerts]({{< relref "/operate/rs/monitoring" >}}). + +See the following topics for metrics definitions: +- [Database operations]({{< relref "/operate/rs/references/metrics/database-operations" >}}) for database metrics +- [Resource usage]({{< relref "/operate/rs/references/metrics/resource-usage" >}}) for resource and database usage metrics +- [Auto Tiering]({{< relref "/operate/rs/references/metrics/auto-tiering" >}}) for additional metrics for [Auto Tiering ]({{< relref "/operate/rs/databases/auto-tiering" >}}) databases + +## Prometheus metrics + +To collect and display metrics data from your databases and other cluster components, +you can connect your [Prometheus](https://prometheus.io/) and [Grafana](https://grafana.com/) server to your Redis Enterprise Software cluster. We recommend you use Prometheus and Grafana to view metrics history and trends. + +See [Prometheus integration]({{< relref "/operate/rs/monitoring/prometheus_and_grafana" >}}) to learn how to connect Prometheus and Grafana to your Redis Enterprise database. + +Redis Enterprise version 7.8.2 introduces a preview of the new metrics stream engine that exposes the v2 Prometheus scraping endpoint at `https://:8070/v2`. +This new engine exports all time-series metrics to external monitoring tools such as Grafana, DataDog, NewRelic, and Dynatrace using Prometheus. + +The new engine enables real-time monitoring, including full monitoring during maintenance operations, providing full visibility into performance during events such as shards' failovers and scaling operations. + +For a list of available metrics, see the following references: + +- [Prometheus metrics v1]({{}}) + +- [Prometheus metrics v2 preview]({{}}) + +If you are already using the existing scraping endpoint for integration, follow [this guide]({{}}) to transition and try the new engine. It is possible to scrape both existing and new endpoints simultaneously, allowing advanced dashboard preparation and a smooth transition. + +## Limitations + +### Shard limit + +Metrics information is not shown for clusters with more than 128 shards. For large clusters, we recommend you use [Prometheus and Grafana]({{< relref "/operate/rs/monitoring/prometheus_and_grafana" >}}) to view metrics. 
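If you plan to use Prometheus, you can first confirm that the cluster's scraping endpoints respond. The following sketch uses `curl` with a placeholder cluster FQDN; the v1 path shown is an assumption based on the default exporter layout, so check the Prometheus integration guide for your deployment:

```sh
# Replace <cluster-fqdn> with your cluster's fully qualified domain name.
# -k skips certificate verification, which may be needed with self-signed cluster certificates.
curl -k https://<cluster-fqdn>:8070/metrics   # v1 scraping endpoint (deprecated but still available)
curl -k https://<cluster-fqdn>:8070/v2        # v2 metrics stream engine endpoint (preview)
```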
+ +### Metrics not shown during shard migration + +The following metrics are not measured during [shard migration]({{< relref "/operate/rs/databases/configure/replica-ha" >}}) when using the [internal monitoring systems]({{}}). If you view these metrics while resharding, the graph will be blank. + +- [Evicted objects/sec]({{< relref "/operate/rs/references/metrics/database-operations#evicted-objectssec" >}}) +- [Expired objects/sec]({{< relref "/operate/rs/references/metrics/database-operations#expired-objectssec" >}}) +- [Read misses/sec]({{< relref "/operate/rs/references/metrics/database-operations#read-missessec" >}}) +- [Write misses/sec]({{< relref "/operate/rs/references/metrics/database-operations#write-missessec" >}}) +- [Total keys]({{< relref "/operate/rs/references/metrics/database-operations#total-keys" >}}) +- [Incoming traffic]({{< relref "/operate/rs/references/metrics/resource-usage#incoming-traffic" >}}) +- [Outgoing traffic]({{< relref "/operate/rs/references/metrics/resource-usage#outgoing-traffic" >}}) +- [Used memory]({{< relref "/operate/rs/references/metrics/resource-usage#used-memory" >}}) + +This limitation does not apply to the new [metrics stream engine]({{}}). +--- +Title: Auto Tiering Metrics +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linkTitle: Auto Tiering +weight: $weight +--- + +These metrics are additional metrics for [Auto Tiering ]({{< relref "/operate/rs/databases/auto-tiering" >}}) databases. + +#### % Values in RAM + +Percent of keys whose values are stored in RAM. + +A low percentage alert means most of the RAM is used for holding keys and not much RAM is available for values. This can be due to a high number of small keys or a few large keys. Inserting more keys might cause the database to run out of memory. + +If the percent of values in RAM is low for a subset of the database's shards, it might also indicate an unbalanced database. + +**Components measured**: Database and Shard + +#### Values in flash + +Number of keys with values stored in flash, not including [replication]({{< relref "/operate/rs/databases/durability-ha/replication" >}}). + +**Components measured**: Database and Shard + +#### Values in RAM + +Number of keys with values stored in RAM, not including [replication]({{< relref "/operate/rs/databases/durability-ha/replication" >}}). + +**Components measured**: Database and Shard + +#### Flash key-value operations + +Number of operations on flash key values (read + write + del) per second. + +**Components measured**: Node + +#### Flash bytes/sec + +Number of total bytes read and written per second on flash memory. + +**Components measured**: Cluster, Node, Database, and Shard + +#### Flash I/O operations/sec + +Number of input/output operations per second on the flash storage device. + +**Components measured**: Cluster and Node + +#### RAM:Flash access ratio + +Ratio between logical Redis key value operations and actual flash key value operations. + +**Components measured**: Database and Shard + +#### RAM hit ratio + +Ratio of requests processed directly from RAM to total number of requests processed. + +**Components measured**: Database and Shard + +#### Used flash + +Total amount of memory used to store values in flash. + +**Components measured**: Database and Shard + +#### Free flash + +Amount of free space on flash storage. + +**Components measured**: Cluster and Node + +#### Flash fragmentation + +Ratio between the used logical flash memory and the physical flash memory that is used. 
+ +**Components measured**: Database and Shard + +#### Used RAM + +Total size of data stored in RAM, including keys, values, overheads, and [replication]({{< relref "/operate/rs/databases/durability-ha/replication" >}}) (if enabled). + +**Components measured**: Database and Shard + +#### RAM dataset overhead + +Percentage of the [RAM limit](#ram-limit) that is used for anything other than values, such as key names, dictionaries, and other overheads. + +**Components measured**: Database and Shard + +#### RAM limit + +Maximum amount of RAM that can be used in bytes. + +**Components measured**: Database + +#### RAM usage + +Percentage of the [RAM limit](#ram-limit) used. + +**Components measured**: Database + +#### Storage engine usage + +Total count of shards used per given database, filtered by storage engine (Speedb / RocksDB). + +**Components measured**: Database, Shards + + + +#### Calculated metrics + +These RoF statistics can be calculated from other metrics. + +- RoF average key size with overhead + + ([ram_dataset_overhead](#ram-dataset-overhead) * [used_ram](#used-ram)) + / ([total_keys]({{< relref "/operate/rs/references/metrics/database-operations#total-keys" >}}) * 2) + +- RoF average value size in RAM + + ((1 - [ram_dataset_overhead](#ram-dataset-overhead)) * [used_ram](#used-ram)) / ([values_in_ram](#values-in-ram) * 2) + +- RoF average value size in flash + + [used_flash](#used-flash) / [values_in_flash](#values-in-flash) +--- +Title: Transition from Prometheus v1 to Prometheus v2 +alwaysopen: false +categories: +- docs +- integrate +- rs +description: Transition from v1 metrics to v2 PromQL equivalents. +group: observability +linkTitle: Transition from Prometheus v1 to v2 +summary: Transition from v1 metrics to v2 PromQL equivalents. +type: integration +weight: 49 +tocEmbedHeaders: true +--- + +You can [integrate Redis Enterprise Software with Prometheus and Grafana]({{}}) to create dashboards for important metrics. + +As of Redis Enterprise Software version 7.8.2, [PromQL (Prometheus Query Language)](https://prometheus.io/docs/prometheus/latest/querying/basics/) metrics are available. V1 metrics are deprecated but still available. You can use the following tables to transition from v1 metrics to equivalent v2 PromQL. For a list of all available v2 metrics, see [Prometheus metrics v2]({{}}). + +{{}} +--- +Title: Supported platforms +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis Enterprise Software is supported on several operating systems, + cloud environments, and virtual environments. +linkTitle: Supported platforms +weight: 30 +tocEmbedHeaders: true +--- +{{}} +--- +Title: Supported upgrade paths for Redis Software +alwaysopen: false +categories: +- docs +- operate +- rs +description: Supported paths to upgrade a Redis Software cluster. +linkTitle: Upgrade paths +weight: 30 +tocEmbedHeaders: true +--- + +{{}} + +For detailed upgrade instructions, see [Upgrade a Redis Enterprise Software cluster]({{}}). + +See the [Redis Enterprise Software product lifecycle]({{}}) for more information about release numbers and the end-of-life schedule. + +{{}} +Redis Enterprise for Kubernetes has its own support lifecycle, which accounts for the Kubernetes distribution lifecycle. For details, see [Supported Kubernetes distributions]({{}}). 
+{{}} +--- +Title: Connecting to Redis +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +draft: true +weight: null +--- +To establish a connection to a Redis database, you'll need the following information: + +- The hostname or IP address of the Redis server +- The port number that the Redis server is listening at +- The database password (when configured with an authentication password which is **strongly recommended**) +- The SSL certificates (when configured with SSL authentication and encryption - see [this article](/kb/read-more-ssl) for more information) + +The combination of `hostname:port` is commonly referred to as the "endpoint." This information is readily obtainable from your Redis Enterprise Cluster and Redis Cloud admin consoles. Unless otherwise specified, our Redis databases are accessible via a single managed endpoint to ensure high availability. + +You can connect to a Redis database using a wide variety of tools and libraries depending on your needs. Here's a short list: + +- Use one of the many [clients for Redis](redis.io/clients) - see below for client-specific information and examples +- Code your own Redis client based on the [Redis Serialization Protocol (RESP)](http://redis.io/topics/protocol) +- Make friends with Redis' own command line tool - `redis-cli` - to quickly connect and manage any Redis database (**tip:** you can also use `telnet` instead) +- Use tools that provide a [GUI for Redis](/blog/so-youre-looking-for-the-redis-gui) + +## Basic connection troubleshooting + +Connecting to a remote server can be challenging. Here’s a quick checklist for common pitfalls: + +- Verify that the connection information was copy-pasted correctly <- more than 90% of connectivity issues are due to a single missing character. +- If you're using Redis in the cloud or not inside of a LAN, consider adjusting your client's timeout settings +- Try disabling any security measures that your database may have been set up with (e.g. Source IP/Subnet lists, Security Groups, SSL, etc...). +- Try using a command line tool to connect to the database from your server - it is possible that your host and/port are blocked by the network. +- If you've managed to open a connection, try sending the `INFO` command and act on its reply or error message. +- Redis Enterprise Software Redis databases only support connecting to the default database (0) and block some administrative commands. To learn more, see: + - Redis Enterprise Cluster: [REC compatibility](/redis-enterprise-documentation/rlec-compatibility) + - Redis Cloud FAQ: [Are you fully compatible with Redis Open Source](/faqs#are-you-fully-compatible-with-open-source-redis) + +If you encounter any difficulties or have questions please feel free to [contact our help desk](mailto:support@redislabs.com). +--- +Title: Clustering Redis +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +draft: true +weight: null +--- +Joining multiple Redis servers into a Redis cluster is a challenging task, especially because Redis supports complex data structures and commands required by modern web applications, in high-throughput and low latency (sub-millisecond) conditions. 
Some of those challenges are: + +- Performing union and intersection operations over List/Set/Sorted Set + data types across multiple shards and nodes +- Maintaining consistency across multi-shard/multi-node architecture, + while running (a) a SORT command over a List of Hash keys; or (b) a + Redis transaction that includes multiple keys; or (c) a Lua script + with multiple keys +- Creating a simple abstraction layer that hides the complex cluster + architecture from the user’s application, without code modifications + and while supporting infinite scalability +- Maintaining a reliable and consistent infrastructure in a cluster + configuration + +There are several solutions to clustering Redis, most notable of which is the [Redis Open Source cluster](http://redis.io/topics/cluster-spec). + +Redis Enterprise Software and Redis Cloud were built from the ground up to provide a Redis cluster of any size while supporting all Redis commands. Your dataset is distributed across multiple shards in multiple nodes of the Redis cluster and is constantly monitored to ensure optimal performance. When needed, more shards and nodes can be added to your dataset so it can scale continuously and limitlessly. + +Redis Enterprise clusters provide a single endpoint to connect to, and do not require any code changes or special configuration from the application’s perspective. For more information on setting up and using Redis Enterprise clusters, see [Database clustering]({{< relref "/operate/rs/databases/durability-ha/clustering/" >}}). +--- +alwaysopen: false +categories: +- docs +- operate +- rs +description: Explains terms used in Redis Enterprise Software and its docs. +linkTitle: Terminology +title: Terminology in Redis Enterprise Software +weight: $weight +--- +Here are explanations of some of the terms used in Redis Enterprise Software. + +## Node + +A _node_ is a physical machine, virtual machine, container or cloud +instance on which the RS installation package was installed and the +setup process was run in order to make the machine part of the cluster. + +Each node is a container for running multiple Redis +instances, referred to as "shards". + +The recommended configuration for a production cluster is an uneven +number of nodes, with a minimum of three. Note that in some +configurations, certain functionalities might be blocked. For example, +if a cluster has only one node you cannot enable database replication, +which helps to achieve high availability. + +A node is made up of several components, as detailed below, and works +together with the other cluster nodes. + +## Redis instance (shard) + +As indicated above, each node serves as a container for hosting multiple +database instances, referred to as "shards". + +Redis Enterprise Software supports various database configurations: + +- **Standard Redis database** - A single Redis shard with no + replication or clustering. +- **Highly available Redis database** - Every database master shard + has a replica shard, so that if the master shard fails the + cluster can automatically fail over to the replica with minimal impact. Master and replica shards are always placed on separate + nodes to ensure high availability. +- **Clustered Redis database** - The data stored in the database is + split across several shards. The number of shards can be defined by + the user. Various performance optimization algorithms define where + shards are placed within the cluster. During the lifetime of the + cluster, these algorithms might migrate a shard between nodes. 
+- **Clustered and highly available Redis database** - Each master shard + in the clustered database has a replica shard, enabling failover if + the master shard fails. + +## Proxy + +Each node includes one zero-latency, multi-threaded proxy +(written in low-level C) that masks the underlying system complexity. The +proxy oversees forwarding Redis operations to the database shards on +behalf of a Redis client. + +The proxy simplifies the cluster operation, from the application or +Redis client point of view, by enabling the use of a standard Redis +client. The zero-latency proxy is built over a cut-through architecture +and employs various optimization methods. For example, to help ensure +high-throughput and low-latency performance, the proxy might use +instruction pipelining even if not instructed to do so by the client. + +## Database endpoint + +Each database is served by a database endpoint that is part of and +managed by the proxies. The endpoint oversees forwarding Redis +operations to specific database shards. + +If the master shard fails and the replica shard is promoted to master, the +master endpoint is updated to point to the new master shard. + +If the master endpoint fails, the replica endpoint is promoted to be the +new master endpoint and is updated to point to the master shard. + +Similarly, if both the master shard and the master endpoint fail, then +both the replica shard and the replica endpoint are promoted to be the new +master shard and master endpoint. + +Shards and their endpoints do not +have to reside within the same node in the cluster. + +In the case of a clustered database with multiple database shards, only +one master endpoint acts as the master endpoint for all master shards, +forwarding Redis operations to all shards as needed. + +## Cluster manager + +The cluster manager oversees all node management-related tasks, and the +cluster manager in the master node looks after all the cluster related +tasks. + +The cluster manager is designed in a way that is totally decoupled from +the Redis operation. This enables RS to react in a much faster and +accurate manner to failure events, so that, for example, a node failure +event triggers mass failover operations of all the master endpoints +and master shards that are hosted on the failed node. + +In addition, this architecture guarantees that each Redis shard is only +dealing with processing Redis commands in a shared-nothing architecture, +thus maintaining the inherent high-throughput and low-latency of each +Redis process. Lastly, this architecture guarantees that any change in +the cluster manager itself does not affect the Redis operation. + +Some of the primary functionalities of the cluster manager include: + +- Deciding where shards are created +- Deciding when shards are migrated and to where +- Monitoring database size +- Monitoring databases and endpoints across all nodes +- Running the database resharding process +- Running the database provisioning and de-provisioning processes +- Gathering operational statistics +- Enforcing license and subscription limitations + +--- +Title: redis-cli +alwaysopen: false +categories: +- docs +- operate +- rs +- rc +description: Run Redis commands. +hideListLinks: true +linkTitle: redis-cli (run Redis commands) +toc: 'true' +weight: $weight +--- + +The `redis-cli` command-line utility lets you interact with a Redis database. 
With `redis-cli`, you can run [Redis commands]({{< relref "/commands" >}}) directly from the command-line terminal or with [interactive mode](#interactive-mode). + +If you want to run Redis commands without `redis-cli`, you can [connect to a database with Redis Insight]({{< relref "/develop/tools/insight" >}}) and use the built-in [CLI]({{< relref "/develop/tools/insight" >}}) prompt instead. + +## Install `redis-cli` + +When you install Redis Enterprise Software or Redis Open Source, it also installs the `redis-cli` command-line utility. + +To learn how to install Redis and `redis-cli`, see the following installation guides: + +- [Redis Open Source]({{< relref "/operate/oss_and_stack/install/install-stack/" >}}) + +- [Redis Enterprise Software]({{< relref "/operate/rs/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}) + +- [Redis Enterprise Software with Docker]({{< relref "/operate/rs/installing-upgrading/quickstarts/docker-quickstart" >}}) + +## Connect to a database + +To run Redis commands with `redis-cli`, you need to connect to your Redis database. + +You can find endpoint and port details in the **Databases** list or the database’s **Configuration** screen. + +### Connect remotely + +If you have `redis-cli` installed on your local machine, you can use it to connect to a remote Redis database. You will need to provide the database's connection details, such as the hostname or IP address, port, and password. + +```sh +$ redis-cli -h -p -a +``` + +You can also provide the password with the `REDISCLI_AUTH` environment variable instead of the `-a` option: + +```sh +$ export REDISCLI_AUTH= +$ redis-cli -h -p +``` + +### Connect over TLS + +To connect to a Redis Enterprise Software or Redis Cloud database over TLS: + +1. Download or copy the Redis Enterprise server (or proxy) certificates. + + - For Redis Cloud, see [Download certificates]({{< relref "/operate/rc/security/database-security/tls-ssl#download-certificates" >}}) for detailed instructions on how to download the server certificates (`redis_ca.pem`) from the [Redis Cloud console](https://cloud.redis.io/). + + - For Redis Enterprise Software, copy the proxy certificate from the Cluster Manager UI (**Cluster > Security > Certificates > Server authentication**) or from a cluster node (`/etc/opt/redislabs/proxy_cert.pem`). + +1. Copy the certificate to each client machine. + +1. If your database doesn't require client authentication, provide the Redis Enterprise server certificate (`redis_ca.pem` for Cloud or `proxy_cert.pem` for Software) when you connect: + + ```sh + redis-cli -h -p --tls --cacert .pem + ``` + +1. 
If your database requires client authentication, provide your client's private and public keys along with the Redis Enterprise server certificate (`redis_ca.pem` for Cloud or `proxy_cert.pem` for Software) when you connect: + + ```sh + redis-cli -h -p --tls --cacert .pem \ + --cert redis_user.crt --key redis_user_private.key + ``` + +### Connect with Docker + +If your Redis database runs in a Docker container, you can use `docker exec` to run `redis-cli` commands: + +```sh +$ docker exec -it redis-cli -p +``` + +## Basic use + +You can run `redis-cli` commands directly from the command-line terminal: + +```sh +$ redis-cli -h -p +``` + +For example, you can use `redis-cli` to test your database connection and store a new Redis string in the database: + +```sh +$ redis-cli -h -p 12000 PING +PONG +$ redis-cli -h -p 12000 SET mykey "Hello world" +OK +$ redis-cli -h -p 12000 GET mykey +"Hello world" +``` + +For more information, see [Command line usage]({{< relref "/develop/tools/cli" >}}#command-line-usage). + +## Interactive mode + +In `redis-cli` [interactive mode]({{< relref "/develop/tools/cli" >}}#interactive-mode), you can: + +- Run any `redis-cli` command without prefacing it with `redis-cli`. +- Enter `?` for more information about how to use the `HELP` command and [set `redis-cli` preferences]({{< relref "/develop/tools/cli" >}}#preferences). +- Enter [`HELP`]({{< relref "/develop/tools/cli" >}}#showing-help-about-redis-commands) followed by the name of a command for more information about the command and its options. +- Press the `Tab` key for command completion. +- Enter `exit` or `quit` or press `Control+D` to exit interactive mode and return to the terminal prompt. + +This example shows how to start interactive mode and run Redis commands: + +```sh +$ redis-cli -p 12000 +127.0.0.1:12000> PING +PONG +127.0.0.1:12000> SET mykey "Hello world" +OK +127.0.0.1:12000> GET mykey +"Hello world" +``` + +## Examples + +### Check slowlog + +Run [`slowlog get`]({{< relref "/commands/slowlog-get" >}}) for a list of recent slow commands: + +```sh +redis-cli -h -p slowlog get +``` + +### Scan for big keys + +Scan the database for big keys: + +```sh +redis-cli -h -p --bigkeys +``` + +See [Scanning for big keys]({{< relref "/develop/tools/cli" >}}#scanning-for-big-keys) for more information. + +## More info + +- [Redis CLI documentation]({{< relref "/develop/tools/cli" >}}) +- [Redis commands reference]({{< relref "/commands/" >}}) +--- +Title: rlcheck +alwaysopen: false +categories: +- docs +- operate +- rs +description: Verify nodes. +hideListLinks: true +linkTitle: rlcheck (verify nodes) +weight: $weight +--- +The `rlcheck` utility runs various [tests](#tests) to check the health of a Redis Enterprise Software node and reports any discovered issues. +You can use this utility to confirm a successful installation or to verify that the node is functioning properly. + +To resolve issues reported by `rlcheck`, [contact Redis support](https://redis.com/company/support/). + +## Run rlcheck + +You can run `rlcheck` from the node host's command line. +The output of `rlcheck` shows information specific to the host you run it on. + +To run `rlcheck` tests: + +1. Sign in to the Redis Enterprise Software host with an account that is a member of the **redislabs** operating system group. + +1. 
Run: + + ```sh + rlcheck + ``` + +## Options + +You can run `rlcheck` with the following options: + +| Option | Description | +|--------|-------------| +| `--suppress-tests TEXT` | Skip the specified, comma-delimited list of tests. See [Tests](#tests) for the list of tests and descriptions. | +| `--retry-delay INTEGER` | Delay between retries, in seconds. | +| `--retry INTEGER` | Number of retries after a failure. | +| `--file-path TEXT` | Custom path to `rlcheck.log`. | +| `--continue-on-error` | Continue to run all tests even if a test fails, then show all errors when complete. | +| `--help` | Return the list of `rlcheck` options. | + +## Tests + +`rlcheck` runs the following tests by default: + +| Test name | Description | +|-----------|-------------| +| verify_owner_and_group | Verifies the owner and group for Redis Enterprise Software files are correct. | +| verify_bootstrap_status | Verifies the local node's bootstrap process completed without errors. | +| verify_services | Verifies all Redis Enterprise Software services are running. | +| verify_port_range | Verifies the [`ip_local_port_range`](https://www.kernel.org/doc/html/latest/networking/ip-sysctl.html) doesn't conflict with the ports Redis Enterprise might assign to shards. | +| verify_pidfiles | Verifies all active local shards have PID files. | +| verify_capabilities | Verifies all binaries have the proper capability bits. | +| verify_existing_sockets | Verifies sockets exist for all processes that require them. | +| verify_host_settings | Verifies the following:
• Linux `overcommit_memory` setting is 1.
• `transparent_hugepage` is disabled.
• Socket maximum connections setting `somaxconn` is 1024. | +| verify_tcp_connectivity | Verifies this node can connect to all other alive nodes. | +| verify_encrypted_gossip | Verifies gossip communication is encrypted. | +--- +Title: rladmin tune +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configures parameters for databases, proxies, nodes, and clusters. +headerRange: '[1-2]' +linkTitle: tune +toc: 'true' +weight: $weight +--- + +Configures parameters for databases, proxies, nodes, and clusters. + +## `tune cluster` + +Configures cluster parameters. + +``` sh +rladmin tune cluster + [ repl_diskless { enabled | disabled } ] + [ redis_provision_node_threshold ] + [ redis_migrate_node_threshold ] + [ redis_provision_node_threshold_percent ] + [ redis_migrate_node_threshold_percent ] + [ max_simultaneous_backups ] + [ failure_detection_sensitivity { high | low } ] + [ watchdog_profile { cloud | local-network } ] + [ slave_ha { enabled | disabled } ] + [ slave_ha_grace_period ] + [ slave_ha_cooldown_period ] + [ slave_ha_bdb_cooldown_period ] + [ max_saved_events_per_type ] + [ parallel_shards_upgrade ] + [ default_concurrent_restore_actions ] + [ show_internals { enabled | disabled } ] + [ expose_hostnames_for_all_suffixes { enabled | disabled } ] + [ redis_upgrade_policy { latest | major } ] + [ default_redis_version ] + [ default_non_sharded_proxy_policy { single | all-master-shards | all-nodes } ] + [ default_sharded_proxy_policy { single | all-master-shards | all-nodes } ] + [ default_shards_placement { dense | sparse } ] + [ data_internode_encryption { enabled | disabled } ] + [ db_conns_auditing { enabled | disabled } ] + [ acl_pubsub_default { resetchannels | allchannels } ] + [ resp3_default { enabled | disabled } ] + [ automatic_node_offload { enabled | disabled } ] + [ default_tracking_table_max_keys_policy ] + [ default_oss_sharding { enabled | disabled } ] + ] +``` + +### Parameters + +| Parameters | Type/Value | Description | +|----------------------------------------|-----------------------------------|------------------------------------------------------------------------------------------------------------------------------| +| acl_pubsub_default | `resetchannels`
`allchannels` | Default pub/sub ACL rule for all databases in the cluster:
• `resetchannels` blocks access to all channels (restrictive)
• `allchannels` allows access to all channels (permissive) | +| automatic_node_offload | `enabled`
`disabled` | Determines whether automatic node offload migration takes place | +| data_internode_encryption | `enabled`
`disabled` | Activates or deactivates [internode encryption]({{< relref "/operate/rs/security/encryption/internode-encryption" >}}) for new databases | +| db_conns_auditing | `enabled`
`disabled` | Activates or deactivates [connection auditing]({{< relref "/operate/rs/security/audit-events" >}}) by default for new databases of a cluster | +| default_concurrent_restore_actions | integer
`all` | Default number of concurrent actions when restoring a node from a snapshot (positive integer or "all") | +| default_non_sharded_proxy_policy | `single`

`all-master-shards`

`all-nodes` | Default [proxy policy]({{< relref "/operate/rs/databases/configure/proxy-policy" >}}) for newly created non-sharded databases' endpoints | +| default_oss_sharding | `enabled`
`disabled` | Default hashing policy to use for new databases. Set to `disabled` by default. This field is for future use only and should not be changed. | +| default_redis_version | version number | The default Redis database compatibility version used to create new databases.

The value parameter should be a version number in the form of "x.y" where _x_ represents the major version number and _y_ represents the minor version number. The final value corresponds to the desired version of Redis.

You cannot set _default_redis_version_ to a value higher than that supported by the current _redis_upgrade_policy_ value. | +| default_sharded_proxy_policy | `single`

`all-master-shards`

`all-nodes` | Default [proxy policy]({{< relref "/operate/rs/databases/configure/proxy-policy" >}}) for newly created sharded databases' endpoints | +| default_shards_placement | `dense`
`sparse` | New databases place shards according to the default [shard placement policy]({{< relref "/operate/rs/databases/memory-performance/shard-placement-policy" >}}) | +| default_tracking_table_max_keys_policy | integer (default: 1000000) | Defines the default value of the client-side caching invalidation table size for new databases. 0 makes the cache unlimited. | +| expose_hostnames_for_all_suffixes | `enabled`
`disabled` | Exposes hostnames for all DNS suffixes | +| failure_detection_sensitivity | `high`
`low` | Predefined thresholds and timeouts for failure detection (previously known as `watchdog_profile`)
• `high` (previously `local-network`) – high failure detection sensitivity, lower thresholds, faster failure detection and failover
• `low` (previously `cloud`) – low failure detection sensitivity, higher tolerance for latency variance (also called network jitter) | +| login_lockout_counter_reset_after | time in seconds | Time after failed login attempt before the counter resets to 0 | +| login_lockout_duration | time in seconds | Time a locked account remains locked ( "0" means only an admin can unlock the account) | +| login_lockout_threshold | integer | Number of failed sign-in attempts to trigger locking a user account ("0" means never lock the account) | +| max_saved_events_per_type | integer | Maximum number of events each type saved in CCS per object type | +| max_simultaneous_backups | integer (default: 4) | Number of database backups allowed to run at the same time. Combines with `max_redis_forks` (set by [`tune node`](#tune-node)) to determine the number of shard backups allowed to run simultaneously. | +| parallel_shards_upgrade | integer
`all` | Number of shards upgraded in parallel during DB upgrade (positive integer or "all") | +| redis_migrate_node_threshold | size in MB | Memory (in MBs by default or can be specified) needed to migrate a database between nodes | +| redis_migrate_node_threshold_percent | percentage | Memory (in percentage) needed to migrate a database between nodes | +| redis_provision_node_threshold | size in MB | Memory (in MBs by default or can be specified) needed to provision a new database | +| redis_provision_node_threshold_percent | percentage | Memory (in percentage) needed to provision a new database | +| redis_upgrade_policy | `latest`
`major` | When you upgrade or create a new Redis database, this policy determines which version of Redis database compatibility is used.

Supported values are:
  • `latest`, which applies the most recent Redis compatibility update (_effective default prior to v6.2.4_)

  • `major`, which applies the most recent major release compatibility update (_default as of v6.2.4_).
| +| repl_diskless | `enabled`
`disabled` | Activates or deactivates diskless replication (can be overridden per database) | +| resp3_default | `enabled`
`disabled` | Determines the default value of the `resp3` option upon upgrading a database to version 7.2 (defaults to `enabled`) | +| show_internals | `enabled`
`disabled` | Controls the visibility of internal databases that are only used for the cluster's management | +| slave_ha | `enabled`
`disabled` | Activates or deactivates [replica high availability]({{< relref "/operate/rs/databases/configure/replica-ha" >}}) in the cluster
(enabled by default; use [`rladmin tune db`](#tune-db) to change `slave_ha` for a specific database)

Deprecated as of Redis Enterprise Software v7.2.4. | +| slave_ha_bdb_cooldown_period | time in seconds (default: 7200) | Time (in seconds) a database must wait after its shards are relocated by [replica high availability]({{< relref "/operate/rs/databases/configure/replica-ha" >}}) before it can go through another shard migration if another node fails (default is 2 hours) | +| slave_ha_cooldown_period | time in seconds (default: 3600) | Time (in seconds) [replica high availability]({{< relref "/operate/rs/databases/configure/replica-ha" >}}) must wait after relocating shards due to node failure before performing another shard migration for any database in the cluster (default is 1 hour) | +| slave_ha_grace_period | time in seconds (default: 600) | Time (in seconds) between when a node fails and when [replica high availability]({{< relref "/operate/rs/databases/configure/replica-ha" >}}) starts relocating shards to another node | +| watchdog_profile | `cloud`
`local-network` | Watchdog profiles with preconfigured thresholds and timeouts (deprecated as of Redis Enterprise Software v6.4.2-69; use `failure_detection_sensitivity` instead)
• `cloud` is suitable for common cloud environments and has a higher tolerance for latency variance (also called network jitter).
• `local-network` is suitable for dedicated LANs and has better failure detection and failover times. | + +### Returns + +Returns `Finished successfully` if the cluster configuration was changed. Otherwise, it returns an error. + +Use [`rladmin info cluster`]({{< relref "/operate/rs/references/cli-utilities/rladmin/info#info-cluster" >}}) to verify the cluster configuration was changed. + +### Example + +``` sh +$ rladmin tune cluster slave_ha enabled +Finished successfully +$ rladmin info cluster | grep slave_ha + slave_ha: enabled +``` + +## `tune db` + +Configures database parameters. + +``` sh +rladmin tune db { db: | } + [ slave_buffer ] + [ client_buffer ] + [ repl_backlog ] + [ crdt_repl_backlog ] + [ repl_timeout ] + [ repl_diskless { enabled | disabled | default } ] + [ master_persistence { enabled | disabled } ] + [ maxclients ] + [ schedpolicy { cmp | mru | spread | mnp } ] + [ max_shard_pipeline ] + [ conns ] + [ conns_type ] + [ max_client_pipeline ] + [ max_connections ] + [ max_aof_file_size ] + [ max_aof_load_time ] + [ oss_cluster { enabled | disabled } ] + [ oss_cluster_api_preferred_ip_type ] + [ slave_ha { enabled | disabled } ] + [ slave_ha_priority ] + [ skip_import_analyze { enabled | disabled } ] + [ mkms { enabled | disabled } ] + [ continue_on_error ] + [ gradual_src_mode { enabled | disabled } ] + [ gradual_sync_mode { enabled | disabled | auto } ] + [ gradual_sync_max_shards_per_source ] + [ module_name ] [ module_config_params ] + [ crdt_xadd_id_uniqueness_mode { liberal | semi-strict | strict } ] + [ metrics_export_all { enabled | disabled } ] + [ syncer_mode { distributed | centralized }] + [ syncer_monitoring { enabled | disabled } ] + [ mtls_allow_weak_hashing { enabled | disabled } ] + [ mtls_allow_outdated_cert { enabled | disabled } ] + [ data_internode_encryption { enabled | disabled } ] + [ db_conns_auditing { enabled | disabled } ] + [ resp3 { enabled | disabled } ] + [ tracking_table_max_keys ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|--------------------------------------|----------------------------------|---------------------------------------------------------------------------------------------------------------------------------------| +| db:id | integer | ID of the specified database | +| name | string | Name of the specified database | +| client_buffer | value in MB hard:soft:time | Redis client output buffer limits | +| conns | integer | Size of internal connection pool, specified per-thread or per-shard depending on conns_type | +| conns_type | `per-thread`
`per-shard` | Specifies connection pool size as either per-thread or per-shard | +| continue_on_error | | Flag that skips tuning shards that can't be reached | +| crdt_repl_backlog | value in MB
`auto` | Size of the Active-Active replication buffer | +| crdt_xadd_id_uniqueness_mode | `liberal`
`semi-strict`
`strict` | XADD's behavior in an Active-Active database, defined as liberal, semi-strict, or strict (see descriptions below) | +| data_internode_encryption | `enabled`
`disabled` | Activates or deactivates [internode encryption]({{< relref "/operate/rs/security/encryption/internode-encryption" >}}) for the database | +| db_conns_auditing | `enabled`
`disabled` | Activates or deactivates database [connection auditing]({{< relref "/operate/rs/security/audit-events" >}}) for a database | +| gradual_src_mode | `enabled`
`disabled` | Activates or deactivates gradual sync of sources | +| gradual_sync_max_shards_per_source | integer | Number of shards per sync source that can be replicated in parallel (positive integer) | +| gradual_sync_mode | `enabled`
`disabled`
`auto` | Activates, deactivates, or automatically determines gradual sync of source shards | +| master_persistence | `enabled`
`disabled` | If enabled, persists the primary shard in addition to replica shards in a replicated and persistent database. | +| max_aof_file_size | size in MB | Maximum size (in MB if no unit is specified) of the [AoF]({{< relref "/glossary/_index.md#letter-a" >}}) file (minimum value is 10 GB) | +| max_aof_load_time | time in seconds | Time limit in seconds to load a shard from an append-only file (AOF). If exceeded, an AOF rewrite is initiated to decrease future load time.
Minimum: 2700 seconds (45 minutes)
Default: 3600 seconds (1 hour) | +| max_client_pipeline | integer | Maximum commands in the proxy's pipeline per client connection (max value is 2047, default value is 200) | +| max_connections | integer | Maximum client connections to the database's endpoint (default value is 0, which is unlimited) | +| max_shard_pipeline | integer | Maximum commands in the proxy's pipeline per shard connection (default value is 200) | +| maxclients | integer | Controls the maximum client connections between the proxy and shards (default value is 10000) | +| metrics_export_all | `enabled`
`disabled` | Activates the exporter to expose all shard metrics | +| mkms | `enabled`
`disabled` | Activates multi-key multi-slot commands | +| module_config_params | string | Configures module arguments at runtime. Enclose `module_config_params` within quotation marks. | +| module_name | `search`
`ReJSON`
`graph`
`timeseries`
`bf`
`rg` | The module to configure with `module_config_params` | +| mtls_allow_outdated_cert | `enabled`
`disabled` | Activates outdated certificates in mTLS connections | +| mtls_allow_weak_hashing | `enabled`
`disabled` | Activates weak hashing (less than 2048 bits) in mTLS connections | +| oss_cluster | `enabled`
`disabled` | Activates OSS cluster API | +| oss_cluster_api_preferred_ip_type | `internal`
`external` | IP type for the endpoint and database in the OSS cluster API (default is internal) | +| repl_backlog | size in MB
`auto` | Size of the replication buffer | +| repl_diskless | `enabled`
`disabled`
`default` | Activates or deactivates diskless replication (defaults to the cluster setting) | +| repl_timeout | time in seconds | Replication timeout (in seconds) | +| resp3 | `enabled`
`disabled` | Enables or deactivates RESP3 support (defaults to `enabled`) | +| schedpolicy | `cmp`
`mru`
`spread`
`mnp` | Controls how server-side connections are used when forwarding traffic to shards | +| skip_import_analyze | `enabled`
`disabled` | Skips the analyzing step when importing a database | +| slave_buffer | `auto`
value in MB
hard:soft:time | Redis replica output buffer limits
• `auto`: dynamically adjusts the buffer limit based on the shard’s current used memory
• value in MB: sets the buffer limit in MB
• hard:soft:time: sets the hard limit (maximum buffer size in MB), soft limit in MB, and the time in seconds that the soft limit can be exceeded | +| slave_ha | `enabled`
`disabled` | Activates or deactivates replica high availability (defaults to the cluster setting) | +| slave_ha_priority | integer | Priority of the database in the replica high-availability mechanism | +| syncer_mode | `distributed`
`centralized` | Configures syncer to run in distributed or centralized mode. For distributed syncer, the DMC policy must be all-nodes or all-master-nodes | +| syncer_monitoring | `enabled`
`disabled` | Activates syncer monitoring | +| tracking_table_max_keys | integer | The client-side caching invalidation table size. 0 makes the cache unlimited. | + +| XADD behavior mode | Description | +| - | - | +| liberal | XADD succeeds with any valid ID (not recommended, allows duplicate IDs) | +| semi-strict | Allows a full ID. Partial IDs are completed with the unique database instance ID (not recommended, allows duplicate IDs). | +| strict | XADD fails if a full ID is given. Partial IDs are completed using the unique database instance ID. | + +### Returns + +Returns `Finished successfully` if the database configuration was changed. Otherwise, it returns an error. + +Use [`rladmin info db`]({{< relref "/operate/rs/references/cli-utilities/rladmin/info#info-db" >}}) to verify the database configuration was changed. + +### Example + +``` sh +$ rladmin tune db db:4 repl_timeout 300 +Tuning database: o +Finished successfully +$ rladmin info db db:4 | grep repl_timeout + repl_timeout: 300 seconds +``` + +## `tune node` + +Configures node parameters. + +``` sh +tune node { | all } + [ max_listeners ] + [ max_redis_forks ] + [ max_redis_servers ] + [ max_slave_full_syncs ] + [ quorum_only { enabled | disabled } ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|----------------------|------------|----------------------------------------------------------------------------------------------------------------------------------| +| id | integer | ID of the specified node | +| all | | Configures settings for all nodes | +| max_listeners | integer | Maximum number of endpoints that may be bound to the node | +| max_redis_forks | integer | Maximum number of background processes forked from shards that may exist on the node at any given time | +| max_redis_servers | integer | Maximum number of shards allowed to reside on the node | +| max_slave_full_syncs | integer | Maximum number of simultaneous replica full-syncs that may be running at any given time (0: Unlimited, -1: Use cluster settings) | +| quorum_only | `enabled`
`disabled` | If activated, configures the node as a [quorum-only node]({{< relref "/glossary/_index.md#letter-p" >}}) | + +### Returns + +Returns `Finished successfully` if the node configuration was changed. Otherwise, it returns an error. + +Use [`rladmin info node`]({{< relref "/operate/rs/references/cli-utilities/rladmin/info#info-node" >}}) to verify the node configuration was changed. + +### Example + +``` sh +$ rladmin tune node 3 max_redis_servers 120 +Finished successfully +$ rladmin info node 3 | grep "max redis servers" + max redis servers: 120 +``` + +## `tune proxy` + +Configures proxy parameters. + +``` sh +rladmin tune proxy { | all } + [ mode { static | dynamic } ] + [ threads ] + [ max_threads ] + [ scale_threshold ] + [ scale_duration ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------|----------------------------|-------------------------------------------------------------------------------------| +| id | integer | ID of the specified proxy | +| all | | Configures settings for all proxies | +| max_threads | integer, (range: 1-255) | Maximum number of threads allowed | +| mode | `static`
`dynamic` | Determines if the proxy automatically adjusts the number of threads based on load size | +| scale_duration | time in seconds, (range: 10-300) | Time of scale_threshold CPU utilization before the automatic proxy automatically scales | +| scale_threshold | percentage, (range: 50-99) | CPU utilization threshold that triggers spawning new threads | +| threads | integer, (range: 1-255) | Initial number of threads created at startup | + +### Returns + +Returns `OK` if the proxy configuration was changed. Otherwise, it returns an error. + +Use [`rladmin info proxy`]({{< relref "/operate/rs/references/cli-utilities/rladmin/info#info-proxy" >}}) to verify the proxy configuration was changed. + +### Example + +``` sh +$ rladmin tune proxy 2 scale_threshold 75 +Configuring proxies: + - proxy:2: ok +$ rladmin info proxy 2 | grep scale_threshold + scale_threshold: 75 (%) +``` +--- +Title: rladmin cluster change_password_hashing_algorithm +alwaysopen: false +categories: +- docs +- operate +- rs +description: Changes the password hashing algorithm. +headerRange: '[1-2]' +linkTitle: change_password_hashing_algorithm +tags: +- configured +toc: 'true' +weight: $weight +--- + +Changes the password hashing algorithm for the entire cluster. When you change the hashing algorithm, it rehashes the administrator password and passwords for all users, including default users. + +```sh +rladmin cluster change_password_hashing_algorithm +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| algorithm | SHA-256
PBKDF2 | Change to the specified hashing algorithm. The default hashing algorithm is `SHA-256`. | + +### Returns + +Reports whether the algorithm change succeeded or an error occurred. + +### Example + +```sh +$ rladmin cluster change_password_hashing_algorithm PBKDF2 +Please confirm changing the password hashing algorithm +Please confirm [Y/N]: y +Algorithm changed +``` +--- +Title: rladmin cluster master +alwaysopen: false +categories: +- docs +- operate +- rs +description: Identifies or changes the cluster's master node. +headerRange: '[1-2]' +linkTitle: master +tags: +- configured +toc: 'true' +weight: $weight +--- + +Identifies the cluster's master node. Use `set` to change the cluster's master to a different node. + +```sh +cluster master [ set ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| node_id | integer | Unique node ID | + +### Returns + +Returns the ID of the cluster's master node. Otherwise, it returns an error message. + +### Example + +Identify the cluster's master node: + +```sh +$ rladmin cluster master +Node 1 is the cluster master node +``` + +Change the cluster master to node 3: + +```sh +$ rladmin cluster master set 3 +Node 3 set to be the cluster master node +``` +--- +Title: rladmin cluster recover +alwaysopen: false +categories: +- docs +- operate +- rs +description: Recovers a cluster from a backup file. +headerRange: '[1-2]' +linkTitle: recover +tags: +- non-configured +toc: 'true' +weight: $weight +--- + +Recovers a cluster from a backup file. The default location of the configuration backup file is `/var/opt/redislabs/persist/ccs/ccs-redis.rdb`. + +```sh +rladmin cluster recover + filename + [ ephemeral_path ] + [ persistent_path ] + [ ccs_persistent_path ] + [ rack_id ] + [ override_rack_id ] + [ node_uid ] + [ flash_enabled ] + [ flash_path ] + [ addr ] + [ external_addr ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| addr | IP address | Sets a node's internal IP address. If not provided, the node sets the address automatically. (optional) | +| ccs_persistent_path | filepath | Path to the location of CCS snapshots (default is the same as persistent_path) (optional) | +| external_addr | IP address | Sets a node's external IP address. If not provided, the node sets the address automatically. (optional) | +| ephemeral_path | filepath (default: /var/opt/redislabs) | Path to an ephemeral storage location (optional) | +| filename | filepath | Backup file to use for recovery | +| flash_enabled | | Enables flash storage (optional) | +| flash_path | filepath (default: /var/opt/redislabs/flash) | Path to the flash storage location in case the node does not support CAPI (required if flash_enabled) | +| node_uid | integer (default: 1) | Specifies which node will recover first and become master (optional) | +| override_rack_id | | Changes to a new rack, specified by `rack_id` (optional) | +| persistent_path | filepath | Path to the persistent storage location (optional) | +| rack_id | string | Switches to the specified rack (optional) | + +### Returns + +Returns `ok` if the cluster recovered successfully. Otherwise, it returns an error message. + +### Example + +```sh +$ rladmin cluster recover filename /tmp/persist/ccs/ccs-redis.rdb node_uid 1 rack_id 5 +Initiating cluster recovery... 
ok +``` +--- +Title: rladmin cluster stats_archiver +alwaysopen: false +categories: +- docs +- operate +- rs +description: Enables/deactivates the stats archiver. +headerRange: '[1-2]' +linkTitle: stats_archiver +tags: +- configured +toc: 'true' +weight: $weight +--- + +Enables or deactivates the stats archiver, which logs statistics in CSV (comma-separated values) format. + +```sh +rladmin cluster stats_archiver { enabled | disabled } +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| enabled | Turn on the stats archiver | +| disabled | Turn off the stats archiver | + +### Returns + +Returns the updated status of the stats archiver. + +### Example + +```sh +$ rladmin cluster stats_archiver enabled +Status: enabled +```--- +Title: rladmin cluster reset_password +alwaysopen: false +categories: +- docs +- operate +- rs +description: Changes the password for a given email. +headerRange: '[1-2]' +linkTitle: reset_password +tags: +- configured +toc: 'true' +weight: $weight +--- + +Changes the password for the user associated with the specified email address. + +Enter a new password when prompted. Then enter the same password when prompted a second time to confirm the password change. + +```sh +rladmin cluster reset_password +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| user email | email address | The email address of the user that needs a password reset | + +### Returns + +Reports whether the password change succeeded or an error occurred. + +### Example + +```sh +$ rladmin cluster reset_password user@example.com +New password: +New password (again): +Password changed. +```--- +Title: rladmin cluster config +alwaysopen: false +categories: +- docs +- operate +- rs +description: Updates the cluster's configuration. +headerRange: '[1-2]' +linkTitle: config +tags: +- configured +toc: 'true' +weight: $weight +--- + +Updates the cluster configuration. 
+ +```sh + rladmin cluster config + [ auditing db_conns audit_protocol { TCP | local } + audit_address audit_port ] + [bigstore_driver {speedb | rocksdb} ] + [ control_cipher_suites ] + [ cm_port ] + [ cm_session_timeout_minutes ] + [ cnm_http_port ] + [ cnm_https_port ] + [ crdb_coordinator_port ] + [ data_cipher_list ] + [ data_cipher_suites_tls_1_3 ] + [ debuginfo_path ] + [ encrypt_pkeys { enabled | disabled } ] + [ envoy_admin_port ] + [ envoy_mgmt_server_port ] + [ gossip_envoy_admin_port ] + [ handle_redirects { enabled | disabled } ] + [ handle_metrics_redirects { enabled | disabled } ] + [ http_support { enabled | disabled } ] + [ ipv6 { enabled | disabled } ] + [ min_control_TLS_version { 1.2 | 1.3 } ] + [ min_data_TLS_version { 1.2 | 1.3 } ] + [ min_sentinel_TLS_version { 1.2 | 1.3 } ] + [ reserved_ports ] + [ s3_url ] + [ s3_ca_cert ] + [ saslauthd_ldap_conf ] + [ sentinel_tls_mode { allowed | required | disabled } ] + [ sentinel_cipher_suites ] + [ services { cm_server | crdb_coordinator | crdb_worker | + mdns_server | pdns_server | saslauthd | + stats_archiver } { enabled | disabled } ] + [ upgrade_mode { enabled | disabled } ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| audit_address | string | TCP/IP address where a listener can capture [audit event notifications]({{< relref "/operate/rs/security/audit-events" >}}) | +| audit_port | string | Port where a listener can capture [audit event notifications]({{< relref "/operate/rs/security/audit-events" >}}) | +| audit_protocol | `tcp`
`local` | Protocol used for [audit event notifications]({{< relref "/operate/rs/security/audit-events" >}})
For production systems, only `tcp` is supported. | +| control_cipher_suites | list of ciphers | Cipher suites used for TLS connections to the Cluster Manager UI (specified in the format understood by the BoringSSL library)
(previously named `cipher_suites`) | +| cm_port | integer | UI server listening port | +| cm_session_timeout_minutes | integer | Timeout in minutes for the CM session | +| cnm_http_port | integer | HTTP REST API server listening port | +| cnm_https_port | integer | HTTPS REST API server listening port | +| crdb_coordinator_port | integer, (range: 1024-65535) (default: 9081) | CRDB coordinator port | +| data_cipher_list | list of ciphers | Cipher suites used by the data plane (specified in the format understood by the OpenSSL library) | +| data_cipher_suites_tls_1_3 | list of ciphers | Specifies the enabled TLS 1.3 ciphers for the data plane | +| debuginfo_path | filepath | Local directory to place generated support package files | +| encrypt_pkeys | `enabled`
`disabled` | Enable or turn off encryption of private keys | +| envoy_admin_port | integer, (range: 1024-65535) | Envoy admin port. Changing this port during runtime might result in an empty response because envoy serves as the cluster gateway.| +| envoy_mgmt_server_port | integer, (range: 1024-65535) | Envoy management server port| +| gossip_envoy_admin_port | integer, (range: 1024-65535) | Gossip envoy admin port| +| handle_redirects | `enabled`
`disabled` | Enable or turn off handling DNS redirects when DNS is not configured and running behind a load balancer | +| handle_metrics_redirects | `enabled`
`disabled` | Enable or turn off handling cluster redirects internally for Metrics API | +| http_support | `enabled`
`disabled` | Enable or turn off using HTTP for REST API connections | +| ipv6 | `enabled`
`disabled` | Enable or turn off IPv6 connections to the Cluster Manager UI | +| min_control_TLS_version | `1.2`
`1.3` | The minimum TLS protocol version that is supported for the control path | +| min_data_TLS_version | `1.2`
`1.3` | The minimum TLS protocol version that is supported for the data path | +| min_sentinel_TLS_version | `1.2`
`1.3` | The minimum TLS protocol version that is supported for the discovery service | +| reserved_ports | list of ports/port ranges | List of reserved ports and/or port ranges to avoid using for database endpoints (for example `reserved_ports 11000 13000-13010`) | +| s3_url | string | The URL of S3 export and import | +| s3_ca_cert | string | The CA certificate filepath for S3 export and import | +| saslauthd_ldap_conf | filepath | Updates LDAP authentication configuration for the cluster | +| sentinel_cipher_suites | list of ciphers | Cipher suites used by the discovery service (supported ciphers are implemented by the [cipher_suites.go]() package) | +| sentinel_tls_mode | `allowed`
`required`
`disabled` | Define the SSL policy for the discovery service
(previously named `sentinel_ssl_policy`) | +| services | `cm_server`
`crdb_coordinator`
`crdb_worker`
`mdns_server`
`pdns_server`
`saslauthd`
`stats_archiver`

`enabled`
`disabled` | Enable or turn off selected cluster services | +| upgrade_mode | `enabled`
`disabled` | Enable or turn off upgrade mode on the cluster | + +### Returns + +Reports whether the cluster was configured successfully. Displays an error message if the configuration attempt fails. + +### Example + +```sh +$ rladmin cluster config cm_session_timeout_minutes 20 +Cluster configured successfully +``` +--- +Title: rladmin cluster ocsp +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manages OCSP. +headerRange: '[1-2]' +linkTitle: ocsp +tags: +- configured +toc: 'true' +weight: $weight +--- + +Manages OCSP configuration and verifies the status of a server certificate maintained by a third-party [certificate authority (CA)](https://en.wikipedia.org/wiki/Certificate_authority). + +## `ocsp certificate_compatible` + +Checks if the proxy certificate contains an OCSP URI. + +```sh +rladmin cluster ocsp certificate_compatible +``` + +### Parameters + +None + +### Returns + +Returns the OCSP URI if it exists. Otherwise, it returns an error. + +### Example + +```sh +$ rladmin cluster ocsp certificate_compatible +Success. OCSP URI is http://responder.ocsp.url.com +``` + +## `ocsp config` + +Displays or updates OCSP configuration. Run the command without the `set` option to display the current configuration of a parameter. + +```sh +rladmin cluster ocsp config + [set ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|---------------|-------------| +| ocsp_functionality | enabled

disabled | Enables or turns off OCSP for the cluster | +| query_frequency | integer (range: 60-86400) (default: 3600) | The time interval in seconds between OCSP queries to check the certificate's status | +| recovery_frequency | integer (range: 60-86400) (default: 60) | The time interval in seconds between retries after a failed query | +| recovery_max_tries | integer (range: 1-100) (default: 5) | The number of retries before the validation query fails and invalidates the certificate | +| responder_url | string | The OCSP server URL embedded in the proxy certificate (you cannot manually set this parameter) | +| response_timeout | integer (range: 1-60) (default: 1) | The time interval in seconds to wait for a response before timing out | + +### Returns + +If you run the `ocsp config` command without the `set` option, it displays the specified parameter's current configuration. + +### Example + +```sh +$ rladmin cluster ocsp config recovery_frequency +Recovery frequency of the OCSP server is 60 seconds +$ rladmin cluster ocsp config recovery_frequency set 30 +$ rladmin cluster ocsp config recovery_frequency +Recovery frequency of the OCSP server is 30 seconds +``` + +## `ocsp status` + +Returns the latest cached status of the certificate's OCSP response. + +```sh +rladmin cluster ocsp status +``` +### Parameters + +None + +### Returns + +Returns the latest cached status of the certificate's OCSP response. + +### Example + +```sh +$ rladmin cluster ocsp status +OCSP certificate status is: REVOKED +produced_at: Wed, 22 Dec 2021 12:50:11 GMT +responder_url: http://responder.ocsp.url.com +revocation_time: Wed, 22 Dec 2021 12:50:04 GMT +this_update: Wed, 22 Dec 2021 12:50:11 GMT +``` + +## `ocsp test_certificate` + +Queries the OCSP server for the certificate's latest status, then caches and displays the response. + +```sh +rladmin cluster ocsp test_certificate +``` + +### Parameters + +None + +### Returns + +Returns the latest status of the certificate's OCSP response. + +### Example + +```sh +$ rladmin cluster ocsp test_certificate +Initiating a query to OCSP server +...OCSP certificate status is: REVOKED +produced_at: Wed, 22 Dec 2021 12:50:11 GMT +responder_url: http://responder.ocsp.url.com +revocation_time: Wed, 22 Dec 2021 12:50:04 GMT +this_update: Wed, 22 Dec 2021 12:50:11 GMT +``` +--- +Title: rladmin cluster certificate +alwaysopen: false +categories: +- docs +- operate +- rs +description: Sets the cluster certificate. +headerRange: '[1-2]' +linkTitle: certificate +tags: +- configured +toc: 'true' +weight: $weight +--- + +Sets a cluster certificate to a specified PEM file. + +```sh +rladmin cluster certificate + set + certificate_file + [ key_file ] +``` + +To set a certificate for a specific service, use the corresponding certificate name. See the [certificates table]({{< relref "/operate/rs/security/certificates" >}}) for the list of cluster certificates and their descriptions. + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| certificate name | 'cm'
'api'
'proxy'
'syncer'
'metrics_exporter' | Name of the certificate to update | +| certificate_file | filepath | Path to the certificate file | +| key_file | filepath | Path to the key file (optional) | + +### Returns + +Reports that the certificate was set to the specified file. Returns an error message if the certificate fails to update. + +### Example + +```sh +$ rladmin cluster certificate set proxy \ + certificate_file /tmp/proxy.pem +Set proxy certificate to contents of file /tmp/proxy.pem +``` +--- +Title: rladmin cluster create +alwaysopen: false +categories: +- docs +- operate +- rs +description: Creates a new cluster. +headerRange: '[1-2]' +linkTitle: create +tags: +- non-configured +toc: 'true' +weight: $weight +--- + +Creates a new cluster. The node where you run `rladmin cluster create` becomes the first node of the new cluster. + +```sh +cluster create + name + username + password + [ node_uid ] + [ rack_aware ] + [ rack_id ] + [ license_file ] + [ ephemeral_path ] + [ persistent_path ] + [ ccs_persistent_path ] + [ register_dns_suffix ] + [ flash_enabled ] + [ flash_path ] + [ addr ] + [ external_addr [ ... ] ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| addr | IP address | The node's internal IP address (optional) | +| ccs_persistent_path | filepath (default: /var/opt/redislabs/persist) | Path to the location of CCS snapshots (optional) | +| ephemeral_path | filepath (default: /var/opt/redislabs) | Path to the ephemeral storage location (optional) | +| external_addr | list of IP addresses | A space-delimited list of the node's external IP addresses (optional) | +| flash_enabled | | Enables flash storage (optional) | +| flash_path | filepath (default: /var/opt/redislabs/flash) | Path to the flash storage location (optional) | +| license_file | filepath | Path to the RLEC license file (optional) | +| name | string | Cluster name | +| node_uid | integer | Unique node ID (optional) | +| password | string | Admin user's password | +| persistent_path | filepath (default: /var/opt/redislabs/persist) | Path to the persistent storage location (optional) | +| rack_aware | | Activates or deactivates rack awareness (optional) | +| rack_id | string | The rack's unique identifier (optional) | +| register_dns_suffix | | Enables database mapping to both internal and external IP addresses (optional) | +| username | email address | Admin user's email address | + +### Returns + +Returns `ok` if the new cluster was created successfully. Otherwise, it returns an error message. + +### Example + +```sh +$ rladmin cluster create name cluster.local \ + username admin@example.com \ + password admin-password +Creating a new cluster... ok +``` +--- +Title: rladmin cluster running_actions +alwaysopen: false +categories: +- docs +- operate +- rs +description: Lists all active tasks. +headerRange: '[1-2]' +linkTitle: running_actions +tags: +- configured +toc: 'true' +weight: $weight +--- + +Lists all active tasks running on the cluster. + +```sh +rladmin cluster running_actions +``` + +### Parameters + +None + +### Returns + +Returns details about any active tasks running on the cluster. + +### Example + +```sh +$ rladmin cluster running_actions +Got 1 tasks: +1) Task: maintenance_on (ce391d81-8d51-4ce2-8f63-729c7ac2589e) Node: 1 Status: running +```--- +Title: rladmin cluster debug_info +alwaysopen: false +categories: +- docs +- operate +- rs +description: Creates a support package. 
+headerRange: '[1-2]' +linkTitle: debug_info +tags: +- configured +toc: 'true' +weight: $weight +--- + +Downloads a support package to the specified path. If you do not specify a path, it downloads the package to the default path specified in the cluster configuration file. + +```sh +rladmin cluster debug_info + [ node ] + [ path ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| node | integer | Downloads a support package for the specified node | +| path | filepath | Specifies the location where the support package should download | + +### Returns + +Reports the progress of the support package download. + +### Example + +```sh +$ rladmin cluster debug_info node 1 +Preparing the debug info files package +Downloading... +[==================================================] +Downloading complete. File /tmp/debuginfo.20220511-215637.node-1.tar.gz is saved. +``` +--- +Title: rladmin cluster join +alwaysopen: false +categories: +- docs +- operate +- rs +description: Adds a node to an existing cluster. +headerRange: '[1-2]' +linkTitle: join +tags: +- non-configured +toc: 'true' +weight: $weight +--- + +Adds a node to an existing cluster. + +```sh +rladmin cluster join + nodes + username + password + [ ephemeral_path ] + [ persistent_path ] + [ ccs_persistent_path ] + [ rack_id ] + [ override_rack_id ] + [ replace_node ] + [ flash_enabled ] + [ flash_path ] + [ addr ] + [ external_addr [ ... ] ] + [ override_repair ] + [ accept_servers { enabled | disabled } ] + [ cnm_http_port ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| accept_servers | 'enabled'
'disabled' | Allows allocation of resources on the new node when enabled (optional) | +| addr | IP address | Sets a node's internal IP address. If not provided, the node sets the address automatically. (optional) | +| ccs_persistent_path | filepath (default: /var/opt/redislabs/persist) | Path to the CCS snapshot location (the default is the same as persistent_path) (optional) | +| cnm_http_port | integer | Joins a cluster that has a non-default cnm_http_port (optional) | +| ephemeral_path | filepath | Path to the ephemeral storage location (optional) | +| external_addr | list of IP addresses | Sets a node's external IP addresses (space-delimited list). If not provided, the node sets the address automatically. (optional) | +| flash_enabled | | Enables flash capabilities for a database (optional) | +| flash_path | filepath (default: /var/opt/redislabs/flash) | Path to the flash storage location in case the node does not support CAPI (required if flash_enabled) | +| nodes | IP address | Internal IP address of an existing node in the cluster | +| override_rack_id | | Changes to a new rack, specified by `rack_id` (optional) | +| override_repair | | Enables joining a cluster with a dead node (optional) | +| password | string | Admin user's password | +| persistent_path | filepath (default: /var/opt/redislabs/persist) | Path to the persistent storage location (optional) | +| rack_id | string | Moves the node to the specified rack (optional) | +| replace_node | integer | Replaces the specified node with the new node (optional) | +| username | email address | Admin user's email address | + +### Returns + +Returns `ok` if the node joined the cluster successfully. Otherwise, it returns an error message. + +### Example + +```sh +$ rladmin cluster join nodes 192.0.2.2 \ + username admin@example.com \ + password admin-password +Joining cluster... ok +``` +--- +Title: rladmin cluster +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage cluster. +headerRange: '[1-2]' +hideListLinks: true +linkTitle: cluster +toc: 'true' +weight: $weight +--- + +Manages cluster configuration and administration. Most `rladmin cluster` commands are only for clusters that are already configured, while a few others are only for new clusters that have not been configured. + +## Commands for configured clusters + +{{}} + +## Commands for non-configured clusters + +{{}} +--- +Title: rladmin bind +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manages the proxy policy for a specified database endpoint. +headerRange: '[1-2]' +linkTitle: bind +toc: 'true' +weight: $weight +--- + +Manages the proxy policy for a specific database endpoint. + +## `bind endpoint exclude` + +Defines a list of nodes to exclude from the proxy policy for a specific database endpoint. When you exclude a node, the endpoint cannot bind to the node's proxy. + +Each time you run an exclude command, it overwrites the previous list of excluded nodes. + +```sh +rladmin bind + [ db { db: | } ] + endpoint exclude + +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| db | db:\
name | Only allows endpoints for the specified database | +| endpoint | endpoint ID | Changes proxy settings for the specified endpoint | +| proxy | list of proxy IDs | Proxies to exclude | + +### Returns + +Returns `Finished successfully` if the list of excluded proxies was successfully changed. Otherwise, it returns an error. + +Use [`rladmin status endpoints`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-endpoints" >}}) to verify that the policy changed. + +### Example + +``` sh +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:2 all-nodes No +db:6 tr02 endpoint:6:1 node:1 all-nodes No +db:6 tr02 endpoint:6:1 node:3 all-nodes No +$ rladmin bind endpoint 6:1 exclude 2 +Executing bind endpoint: OOO. +Finished successfully +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:1 all-nodes -2 No +db:6 tr02 endpoint:6:1 node:3 all-nodes -2 No +``` + +## `bind endpoint include` + +Defines a list of nodes to include in the proxy policy for the specific database endpoint. + +Each time you run an include command, it overwrites the previous list of included nodes. + +```sh +rladmin bind + [ db { db: | } ] + endpoint include + +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| db | db:\
name | Only allows endpoints for the specified database | +| endpoint | endpoint ID | Changes proxy settings for the specified endpoint | +| proxy | list of proxy IDs | Proxies to include | + +### Returns + +Returns `Finished successfully` if the list of included proxies was successfully changed. Otherwise, it returns an error. + +Use [`rladmin status endpoints`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-endpoints" >}}) to verify that the policy changed. + +### Example + +``` sh +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:3 all-master-shards No +$ rladmin bind endpoint 6:1 include 3 +Executing bind endpoint: OOO. +Finished successfully +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:1 all-master-shards +3 No +db:6 tr02 endpoint:6:1 node:3 all-master-shards +3 No +``` + +## `bind endpoint policy` + +Changes the overall proxy policy for a specific database endpoint. + +```sh +rladmin bind + [ db { db: | } ] + endpoint + policy { single | all-master-shards | all-nodes } +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| db | db:\
name | Only allows endpoints for the specified database | +| endpoint | endpoint ID | Changes proxy settings for the specified endpoint | +| policy | 'all-master-shards'
'all-nodes'
'single' | Changes the [proxy policy](#proxy-policies) to the specified policy | + +| Proxy policy | Description | +| - | - | +| all-master-shards | Multiple proxies, one on each master node (best for high traffic and multiple master shards) | +| all-nodes | Multiple proxies, one on each node of the cluster (increases traffic in the cluster, only used in special cases) | +| single | All traffic flows through a single proxy bound to the database endpoint (preferable in most cases) | + +### Returns + +Returns `Finished successfully` if the proxy policy was successfully changed. Otherwise, it returns an error. + +Use [`rladmin status endpoints`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-endpoints" >}}) to verify that the policy changed. + +### Example + +``` sh +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:1 all-nodes -2 No +db:6 tr02 endpoint:6:1 node:3 all-nodes -2 No +$ rladmin bind endpoint 6:1 policy all-master-shards +Executing bind endpoint: OOO. +Finished successfully +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:3 all-master-shards No +``` +--- +Title: rladmin restart +alwaysopen: false +categories: +- docs +- operate +- rs +description: Restarts Redis Enterprise Software processes for a specific database. +headerRange: '[1-2]' +linkTitle: restart +toc: 'true' +weight: $weight +--- + +Schedules a restart of the Redis Enterprise Software processes on primary and replica instances of a specific database. + +``` sh +rladmin restart db { db: | } + [preserve_roles] + [discard_data] + [force_discard] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|----------------|--------------------------------|-----------------------------------------------------------------------| +| db | db:\
name | Restarts Redis Enterprise Software processes for the specified database | +| discard_data | | Allows discarding data if there is no persistence or replication | +| force_discard | | Forcibly discards data even if there is persistence or replication | +| preserve_roles | | Performs an additional failover to maintain shard roles | + +### Returns + +Returns `Done` if the restart completed successfully. Otherwise, it returns an error. + +### Example + +``` sh +$ rladmin restart db db:5 preserve_roles +Monitoring 1db07491-35da-4bb6-9bc1-56949f4c312a +active - SMUpgradeBDB init +active - SMUpgradeBDB stop_forwarding +active - SMUpgradeBDB stop_active_expire +active - SMUpgradeBDB check_slave +oactive - SMUpgradeBDB stop_active_expire +active - SMUpgradeBDB second_failover +completed - SMUpgradeBDB +Done +``` +--- +Title: rladmin migrate +alwaysopen: false +categories: +- docs +- operate +- rs +description: Moves Redis Enterprise Software shards or endpoints to a new node in + the same cluster. +headerRange: '[1-2]' +linkTitle: migrate +toc: 'true' +weight: $weight +--- + +Moves Redis Enterprise shards or endpoints to a new node in the same cluster. + +For more information about shard migration use cases and considerations, see [Migrate database shards]({{}}). + +## `migrate all_master_shards` + +Moves all primary shards of a specified database or node to a new node in the same cluster. + +```sh +rladmin migrate { db { db: | } | node } + all_master_shards + target_node + [ override_policy ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-------------------------------|------------------------|---------------------------------------------------------------------------------| +| db | db:\
name | Limits migration to a specific database | +| node | integer | Limits migration to a specific origin node | +| target_node | integer | Migration target node | +| override_policy | | Overrides the rack aware policy and allows primary and replica shards on the same node | + +### Returns + +Returns `Done` if the migration completed successfully. Otherwise, returns an error. + +Use [`rladmin status shards`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-shards" >}}) to verify the migration completed. + +### Example + +```sh +$ rladmin status shards db db:6 sort ROLE +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:6 tr02 redis:14 node:3 master 0-4095 3.01MB OK +db:6 tr02 redis:16 node:3 master 4096-8191 3.2MB OK +db:6 tr02 redis:18 node:3 master 8192-12287 3.2MB OK +db:6 tr02 redis:20 node:3 master 12288-16383 3.01MB OK +$ rladmin migrate db db:6 all_master_shards target_node 1 +Monitoring 8b0f28e2-4342-427a-a8e3-a68cba653ffe +queued - migrate_shards +running - migrate_shards +Executing migrate_redis with shards_uids ['18', '14', '20', '16'] +Ocompleted - migrate_shards +Done +$ rladmin status shards node 1 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:6 tr02 redis:14 node:1 master 0-4095 3.22MB OK +db:6 tr02 redis:16 node:1 master 4096-8191 3.22MB OK +db:6 tr02 redis:18 node:1 master 8192-12287 3.22MB OK +db:6 tr02 redis:20 node:1 master 12288-16383 2.99MB OK +``` +## `migrate all_shards` + +Moves all shards on a specified node to a new node in the same cluster. + +``` sh +rladmin migrate node + [ max_concurrent_bdb_migrations ] + all_shards + target_node + [ override_policy ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-------------------------------|------------------------|---------------------------------------------------------------------------------| +| node | integer | Limits migration to a specific origin node | +| max_concurrent_bdb_migrations | integer | Sets the maximum number of concurrent endpoint migrations | +| override_policy | | Overrides the rack aware policy and allows primary and replica shards on the same node | + +### Returns + +Returns `Done` if the migration completed successfully. Otherwise, returns an error. + +Use [`rladmin status shards`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-shards" >}}) to verify the migration completed. 
+ +### Example + +```sh +$ rladmin status shards node 1 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:1 master 0-16383 3.04MB OK +db:6 tr02 redis:15 node:1 slave 0-4095 2.93MB OK +db:6 tr02 redis:17 node:1 slave 4096-8191 2.93MB OK +db:6 tr02 redis:19 node:1 slave 8192-12287 3.08MB OK +db:6 tr02 redis:21 node:1 slave 12288-16383 3.08MB OK +$ rladmin migrate node 1 all_shards target_node 2 +Monitoring 71a4f371-9264-4398-a454-ce3ff4858c09 +queued - migrate_shards +.running - migrate_shards +Executing migrate_redis with shards_uids ['21', '15', '17', '19'] +OExecuting migrate_redis with shards_uids ['12'] +Ocompleted - migrate_shards +Done +$ rladmin status shards node 2 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:2 master 0-16383 3.14MB OK +db:6 tr02 redis:15 node:2 slave 0-4095 2.96MB OK +db:6 tr02 redis:17 node:2 slave 4096-8191 2.96MB OK +db:6 tr02 redis:19 node:2 slave 8192-12287 2.96MB OK +db:6 tr02 redis:21 node:2 slave 12288-16383 2.96MB OK +``` + +## `migrate all_slave_shards` + +Moves all replica shards of a specified database or node to a new node in the same cluster. + +```sh +rladmin migrate { db { db: | } | node } + all_slave_shards + target_node + [ override_policy ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-------------------------------|------------------------|---------------------------------------------------------------------------------| +| db | db:\
name | Limits migration to a specific database | +| node | integer | Limits migration to a specific origin node | +| target_node | integer | Migration target node | +| override_policy | | Overrides the rack aware policy and allows primary and replica shards on the same node | + +### Returns + +Returns `Done` if the migration completed successfully. Otherwise, returns an error. + +Use [`rladmin status shards`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-shards" >}}) to verify the migration completed. + +### Example + +```sh +$ rladmin status shards db db:6 node 2 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:6 tr02 redis:15 node:2 slave 0-4095 3.06MB OK +db:6 tr02 redis:17 node:2 slave 4096-8191 3.06MB OK +db:6 tr02 redis:19 node:2 slave 8192-12287 3.06MB OK +db:6 tr02 redis:21 node:2 slave 12288-16383 3.06MB OK +$ rladmin migrate db db:6 all_slave_shards target_node 3 +Monitoring 5d36a98c-3dc8-435f-8ed9-35809ba017a4 +queued - migrate_shards +.running - migrate_shards +Executing migrate_redis with shards_uids ['15', '17', '21', '19'] +Ocompleted - migrate_shards +Done +$ rladmin status shards db db:6 node 3 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:6 tr02 redis:15 node:3 slave 0-4095 3.04MB OK +db:6 tr02 redis:17 node:3 slave 4096-8191 3.04MB OK +db:6 tr02 redis:19 node:3 slave 8192-12287 3.04MB OK +db:6 tr02 redis:21 node:3 slave 12288-16383 3.04MB OK +``` + +## `migrate endpoint_to_shards` + +Moves database endpoints to the node where the majority of primary shards are located. + +```sh +rladmin migrate [ db { db: | } ] + endpoint_to_shards + [ restrict_target_node ] + [ commit ] + [ max_concurrent_bdb_migrations ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-------------------------------|------------------------|---------------------------------------------------------------------------------| +| db | db:\
name | Limits migration to a specific database | +| restrict_target_node | integer | Moves the endpoint only if the target node matches the specified node | +| commit | | Performs endpoint movement | +| max_concurrent_bdb_migrations | integer | Sets the maximum number of concurrent endpoint migrations | + + +### Returns + +Returns a list of steps to perform the migration. If the `commit` flag is set, the steps will run and return `Finished successfully` if they were completed. Otherwise, returns an error. + +Use [`rladmin status endpoints`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-endpoints" >}}) to verify that the endpoints were moved. + +### Example + +```sh +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:3 all-master-shards No +$ rladmin migrate db db:6 endpoint_to_shards +* Going to bind endpoint:6:1 to node 1 +Dry-run completed, add 'commit' argument to execute +$ rladmin migrate db db:6 endpoint_to_shards commit +* Going to bind endpoint:6:1 to node 1 +Executing bind endpoint:6:1: OOO. +Finished successfully +$ rladmin status endpoints db db:6 +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:6 tr02 endpoint:6:1 node:1 all-master-shards No +``` + +## `migrate shard` + +Moves one or more shards to a new node in the same cluster. + +```sh +rladmin migrate shard + [ preserve_roles ] + target_node + [ override_policy ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-------------------------------|------------------------|---------------------------------------------------------------------------------| +| shard | list of shard IDs | Shards to migrate | +| preserve_roles | | Performs an additional failover to guarantee the primary shards' roles are preserved | +| target_node | integer | Migration target node | +| override_policy | | Overrides the rack aware policy and allows primary and replica shards on the same node | + +### Returns + +Returns `Done` if the migration completed successfully. Otherwise, returns an error. + +Use [`rladmin status shards`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-shards" >}}) to verify the migration completed. + +### Example + +```sh +$ rladmin status shards db db:5 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:2 master 0-16383 3.01MB OK +db:5 tr01 redis:13 node:3 slave 0-16383 3.1MB OK +$ rladmin migrate shard 13 target_node 1 +Monitoring d2637eea-9504-4e94-a70c-76df087efcb2 +queued - migrate_shards +.running - migrate_shards +Executing migrate_redis with shards_uids ['13'] +Ocompleted - migrate_shards +Done +$ rladmin status shards db db:5 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:2 master 0-16383 3.01MB OK +db:5 tr01 redis:13 node:1 slave 0-16383 3.04MB OK +``` +--- +Title: rladmin recover +alwaysopen: false +categories: +- docs +- operate +- rs +description: Recovers databases in recovery mode. +headerRange: '[1-2]' +linkTitle: recover +toc: 'true' +weight: $weight +--- + +Recovers databases in recovery mode after events such as cluster failure, and restores the databases' configurations and data from stored persistence files. See [Recover a failed database]({{< relref "/operate/rs/databases/recover" >}}) for detailed instructions. 
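
A typical workflow, sketched here with an illustrative database ID, is to list the databases that are in recovery mode and then recover them using the subcommands described later on this page:

```sh
# List databases currently in recovery mode
rladmin recover list

# Recover a single database by ID (db:1 is an example)
rladmin recover db db:1

# Or recover every database that is currently in recovery mode
rladmin recover all
```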
+ +Database persistence files are stored in `/var/opt/redislabs/persist/redis/` by default, but you can specify a different directory to use for database recovery with [`rladmin node recovery_path set `]({{< relref "/operate/rs/references/cli-utilities/rladmin/node/recovery-path" >}}). + +## `recover all` + +Recovers all databases in recovery mode. + +```sh +rladmin recover all + [ only_configuration ] +``` + +### Parameters + +| Parameters | Type/Value | Description | +|--------------------|------------|---------------------------------------------| +| only_configuration | | Recover database configuration without data | + +### Returns + +Returns `Completed successfully` if the database was recovered. Otherwise, returns an error. + +### Example + +``` +$ rladmin recover all + 0% [ 0 recovered | 0 failed ] | | Elapsed Time: 0:00:00[first-db (db:1) recovery] Initiated.[second-db (db:2) recovery] Initiated. + 50% [ 0 recovered | 0 failed ] |### | Elapsed Time: 0:00:04[first-db (db:1) recovery] Completed successfully + 75% [ 1 recovered | 0 failed ] |###### | Elapsed Time: 0:00:06[second-db (db:2) recovery] Completed successfully +100% [ 2 recovered | 0 failed ] |#########| Elapsed Time: 0:00:08 +``` + +## `recover db` + +Recovers a specific database in recovery mode. + +```sh +rladmin recover db { db: | } + [ only_configuration ] +``` + +### Parameters + +| Parameters | Type/Value | Description | +|--------------------|----------------------|---------------------------------------------| +| db | db:\
name | Database to recover | +| only_configuration | | Recover database configuration without data | + +### Returns + +Returns `Completed successfully` if the database was recovered. Otherwise, returns an error. + +### Example + +``` +$ rladmin recover db db:1 + 0% [ 0 recovered | 0 failed ] | | Elapsed Time: 0:00:00[demo-db (db:1) recovery] Initiated. + 50% [ 0 recovered | 0 failed ] |### | Elapsed Time: 0:00:00[demo-db (db:1) recovery] Completed successfully +100% [ 1 recovered | 0 failed ] |######| Elapsed Time: 0:00:02 +``` + +## `recover list` + +Shows a list of all databases that are currently in recovery mode. + +```sh +rladmin recover list +``` + +### Parameters + +None + +### Returns + +Displays a list of all recoverable databases. If no databases are in recovery mode, returns `No recoverable databases found`. + +### Example + +```sh +$ rladmin recover list +DATABASES IN RECOVERY STATE: +DB:ID NAME TYPE SHARDS REPLICATION PERSISTENCE STATUS +db:5 tr01 redis 1 enabled aof missing-files +db:6 tr02 redis 4 enabled snapshot ready +``` + +## `recover s3_import` + +Imports current database snapshot files from an AWS S3 bucket to a directory on the node. + +```sh +rladmin recover s3_import + s3_bucket + [ s3_prefix ] + s3_access_key_id + s3_secret_access_key + import_path +``` + +### Parameters + +| Parameters | Type/Value | Description | +|----------------------|------------|------------------------------------------------------------------| +| s3_bucket | string | S3 bucket name | +| s3_prefix | string | S3 object prefix | +| s3_access_key_id | string | S3 access key ID | +| s3_secret_access_key | string | S3 secret access key | +| import_path | filepath | Local import path where all database snapshots will be imported | + +### Returns + +Returns `Completed successfully` if the database files were imported. Otherwise, returns an error. + +### Example + +```sh +rladmin recover s3_import s3_bucket s3_prefix / s3_access_key_id s3_secret_access_key import_path /tmp +``` +--- +Title: rladmin status +alwaysopen: false +categories: +- docs +- operate +- rs +description: Displays the current cluster status and topology information. +headerRange: '[1-2]' +linkTitle: status +toc: 'true' +weight: $weight +--- + +Displays the current cluster status and topology information. + +## `status` + +Displays the current status of all nodes, databases, database endpoints, and shards on the cluster. + +``` sh +rladmin status + [ extra ] + [ issues_only] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| extra \ | Extra options that show more information | +| issues_only | Filters out all items that have an `OK` status | + +| Extra parameter | Description | +|-------------------|-------------| +| extra all | Shows all `extra` information | +| extra backups | Shows periodic backup status | +| extra frag | Shows fragmented memory available after the restart | +| extra nodestats | Shows shards per node | +| extra rack_id | Shows `rack_id` if customer is not `rack_aware` | +| extra redis_version | Shows Redis version of all databases in the cluster | +| extra state_machine | Shows execution of state machine information | +| extra watchdog | Shows watchdog status | + +### Returns + +Returns tables of the status of all nodes, databases, and database endpoints on the cluster. + +If `issues_only` is specified, it only shows instances that do not have an `OK` status. + +In the `CLUSTER NODES` section, `*node` indicates which node you are connected to. 
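
For example, a quick health check can combine the options described above (a minimal sketch; the actual output depends on your cluster state):

```sh
# Show only the nodes, databases, endpoints, and shards whose status is not OK
rladmin status issues_only

# Show the full report, including backup, fragmentation, and watchdog details
rladmin status extra all
```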
+ +For descriptions of the fields returned by `rladmin status extra all`, see the output tables for [nodes](#returns-nodes), [databases](#returns-dbs), [endpoints](#returns-endpoints), and [shards](#returns-shards). + +### Example + +``` sh +$ rladmin status extra all +CLUSTER: +OK. Cluster master: 1 (198.51.100.2) +Cluster health: OK, [1, 0.13333333333333333, 0.03333333333333333] +failures/minute - avg1 1.00, avg15 0.13, avg60 0.03. + +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME MASTERS SLAVES OVERBOOKING_DEPTH SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION SHA RACK-ID STATUS +node:1 master 198.51.100.2 3d99db1fdf4b 4 0 10.91GB 4/100 6 14.91GB/19.54GB 10.91GB/16.02GB 6.2.12-37 5c2106 - OK +node:2 slave 198.51.100.3 fc7a3d332458 0 0 11.4GB 0/100 6 14.91GB/19.54GB 11.4GB/16.02GB 6.2.12-37 5c2106 - OK +*node:3 slave 198.51.100.4 b87cc06c830f 0 0 11.4GB 0/100 6 14.91GB/19.54GB 11.4GB/16.02GB 6.2.12-37 5c2106 - OK + +DATABASES: +DB:ID NAME TYPE STATUS SHARDS PLACEMENT REPLICATION PERSISTENCE ENDPOINT EXEC_STATE EXEC_STATE_MACHINE BACKUP_PROGRESS MISSING_BACKUP_TIME REDIS_VERSION +db:3 database3 redis active 4 dense disabled disabled redis-11103.cluster.local:11103 N/A N/A N/A N/A 6.0.16 + +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL WATCHDOG_STATUS +db:3 database3 endpoint:3:1 node:1 single No OK + +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY BACKUP_PROGRESS RAM_FRAG WATCHDOG_STATUS STATUS +db:3 database3 redis:4 node:1 master 0-4095 2.08MB N/A 4.73MB OK OK +db:3 database3 redis:5 node:1 master 4096-8191 2.08MB N/A 4.62MB OK OK +db:3 database3 redis:6 node:1 master 8192-12287 2.08MB N/A 4.59MB OK OK +db:3 database3 redis:7 node:1 master 12288-16383 2.08MB N/A 4.66MB OK OK +``` + +## `status databases` + +Displays the current status of all databases on the cluster. + +``` sh +rladmin status databases + [ extra ] + [ sort ] + [ issues_only ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| extra \ | Extra options that show more information | +| sort \ | Sort results by specified column titles | +| issues_only | Filters out all items that have an `OK` status | + + +| Extra parameter | Description | +|-------------------|-------------| +| extra all | Shows all `extra` information | +| extra backups | Shows periodic backup status | +| extra frag | Shows fragmented memory available after the restart | +| extra nodestats | Shows shards per node | +| extra rack_id | Shows `rack_id` if customer is not `rack_aware` | +| extra redis_version | Shows Redis version of all databases in the cluster | +| extra state_machine | Shows execution of state machine information | +| extra watchdog | Shows watchdog status | + +### Returns {#returns-dbs} + +Returns a table of the status of all databases on the cluster. + +If `sort ` is specified, the result is sorted by the specified table columns. + +If `issues_only` is specified, it only shows databases that do not have an `OK` status. 
+ +The following table describes the fields returned by `rladmin status databases extra all`: + +| Field | Description | +|-------|-------------| +| DB:ID | Database ID | +| NAME | Database name | +| TYPE | Database type: Redis or Memcached | +| STATUS | Database status | +| SHARDS | The number of primary shards in the database | +| PLACEMENT | How the shards are spread across nodes in the cluster, densely or sparsely | +| REPLICATION | Is replication enabled for the database | +| PERSISTENCE | Is persistence enabled for the database | +| ENDPOINT | Database endpoint | +| EXEC_STATE | The current state of the state machine | +| EXEC_STATE_MACHINE | The name of the running state machine | +| BACKUP_PROGRESS | The database’s backup progress | +| MISSING_BACKUP_TIME | How long ago a backup was done | +| REDIS_VERSION | The database’s Redis version | + +### Example + +``` sh +$ rladmin status databases sort REPLICATION PERSISTENCE +DB:ID NAME TYPE STATUS SHARDS PLACEMENT REPLICATION PERSISTENCE ENDPOINT +db:1 database1 redis active 1 dense disabled disabled redis-10269.testdbd11169.localhost:10269 +db:2 database2 redis active 1 dense disabled snapshot redis-13897.testdbd11169.localhost:13897 +db:3 database3 redis active 1 dense enabled snapshot redis-19416.testdbd13186.localhost:19416 +``` + +## `status endpoints` + +Displays the current status of all endpoints on the cluster. + +``` sh +rladmin status endpoints + [ node ] + [ db { db: | } ] + [ extra ] + [ sort ] + [ issues_only ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| node \ | Only show endpoints for the specified node ID | +| db db:\ | Only show endpoints for the specified database ID | +| db \ | Only show endpoints for the specified database name | +| extra \ | Extra options that show more information | +| sort \ | Sort results by specified column titles | +| issues_only | Filters out all items that have an `OK` status | + + +| Extra parameter | Description | +|-------------------|-------------| +| extra all | Shows all `extra` information | +| extra backups | Shows periodic backup status | +| extra frag | Shows fragmented memory available after the restart | +| extra nodestats | Shows shards per node | +| extra rack_id | Shows `rack_id` if customer is not `rack_aware` | +| extra redis_version | Shows Redis version of all endpoints in the cluster | +| extra state_machine | Shows execution of state machine information | +| extra watchdog | Shows watchdog status | + +### Returns {#returns-endpoints} + +Returns a table of the status of all endpoints on the cluster. + +If `sort ` is specified, the result is sorted by the specified table columns. + +If `issues_only` is specified, it only shows endpoints that do not have an `OK` status. 
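+
+For example, you can narrow the report to a single database or node using the filters documented above (illustrative commands only; output is omitted and depends on your cluster):
+
+```sh
+rladmin status endpoints db db:3
+rladmin status endpoints node 1 issues_only
+```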
+ +The following table describes the fields returned by `rladmin status endpoints extra all`: + +| Field | Description | +|-------|-------------| +| DB:ID | Database ID | +| NAME | Database name | +| ID | Endpoint ID | +| NODE | The node that hosts the endpoint | +| ROLE | The proxy policy of the database: single, all-master-shards, or all-nodes | +| SSL | Is SSL enabled | +| WATCHDOG_STATUS | The shards related to the endpoint are monitored and healthy | + +### Example + +``` sh +$ rladmin status endpoints +DB:ID NAME ID NODE ROLE SSL +db:1 database1 endpoint:1:1 node:1 single No +db:2 database2 endpoint:2:1 node:2 single No +db:3 database3 endpoint:3:1 node:3 single No +``` + +## `status modules` + +Displays the current status of modules installed on the cluster and modules used by databases. This information is not included in the combined status report returned by [`rladmin status`](#status). + +``` sh +rladmin status modules + [ db { db: | } ... { db: | } ] + [ extra { all | compatible_redis_version | min_redis_version | module_id } ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| db db:\ | Provide a list of database IDs to show only modules used by the specified databases
(for example: `rladmin status modules db db:1 db:2`) | +| db \ | Provide a list of database names to show only modules used by the specified databases
(for example: `rladmin status modules db name1 name2`) | +| extra all | Shows all extra information | +| extra compatible_redis_version | Shows the compatible Redis database version for the module | +| extra module_id | Shows module IDs | +| extra min_redis_version | Shows the minimum compatible Redis database version for each module | + +### Returns + +Returns the status of modules installed on the cluster and modules used by databases. + +### Example + +```sh +$ rladmin status modules extra all +CLUSTER MODULES: +MODULE VERSION MIN_REDIS_VERSION ID +RedisBloom 2.4.5 6.0 1b895a180592cbcae5bd3bff6af24be2 +RedisBloom 2.6.8 7.1 95264e7c9ac9540268c115c86a94659b +RediSearch 2 2.6.12 6.0 2c000539f65272f7a2712ed3662c2b6b +RediSearch 2 2.8.9 7.1 dd9a75710db528afa691767e9310ac6f +RedisGears 2.0.15 7.1 18c83d024b8ee22e7caf030862026ca6 +RedisGraph 2.10.12 6.0 5a1f2fdedb8f6ca18f81371ea8d28f68 +RedisJSON 2.4.7 6.0 28308b101a0203c21fa460e7eeb9344a +RedisJSON 2.6.8 7.1 b631b6a863edde1b53b2f7a27a49c004 +RedisTimeSeries 1.8.11 6.0 8fe09b00f56afe5dba160d234a6606af +RedisTimeSeries 1.10.9 7.1 98a492a017ea6669a162fd3503bf31f3 + +DATABASE MODULES: +DB:ID NAME MODULE VERSION ARGS STATUS +db:1 search-json-db RediSearch 2 2.8.9 PARTITIONS AUTO OK +db:1 search-json-db RedisJSON 2.6.8 OK +db:2 timeseries-db RedisTimeSeries 1.10.9 OK +``` + +## `status nodes` + +Displays the current status of all nodes on the cluster. + +``` sh +rladmin status nodes + [ extra ] + [ sort ] + [ issues_only ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| extra \ | Extra options that show more information | +| sort \ | Sort results by specified column titles | +| issues_only | Filters out all items that have an `OK` status | + + +| Extra parameter | Description | +|-------------------|-------------| +| extra all | Shows all `extra` information | +| extra backups | Shows periodic backup status | +| extra frag | Shows fragmented memory available after the restart | +| extra nodestats | Shows shards per node | +| extra rack_id | Shows `rack_id` if customer is not `rack_aware` | +| extra redis_version | Shows Redis version of all nodes in the cluster | +| extra state_machine | Shows execution of state machine information | +| extra watchdog | Shows watchdog status | + +### Returns {#returns-nodes} + +Returns a table of the status of all nodes on the cluster. + +If `sort ` is specified, the result is sorted by the specified table columns. + +If `issues_only` is specified, it only shows nodes that do not have an `OK` status. + +`*node` indicates which node you are connected to. + +The following table describes the fields returned by `rladmin status nodes extra all`: + +| Field | Description | +|-------|-------------| +| NODE:ID | Node ID | +| ROLE | Is the node a primary (`master`) or secondary (`slave`) node | +| ADDRESS | The node’s internal IP address | +| EXTERNAL ADDRESS | The node’s external IP address | +| HOSTNAME | Node name | +| MASTERS | The number of primary shards on the node | +| SLAVES | The number of replica shards on the node | +| OVERBOOKING_DEPTH | Memory available to create new shards, accounting for the memory reserved for existing shards to grow, even if `shards_overbooking` is enabled. A negative value indicates how much memory is overbooked rather than just showing that no memory is available for new shards. | +| SHARDS | The number of shards on the node | +| CORES | The number of cores on the node | +| FREE_RAM | free_memory/total_memory
**free_memory**: the amount of free memory reported by the OS.
**total_memory**: the total physical memory available on the node. | +| PROVISIONAL_RAM | Memory available to create new shards, displayed as available_provisional_memory/total_provisional_memory.
**available_provisional_memory**: memory currently available for the creation of new shards.
**total_provisional_memory**: memory that would be available to create new shards if the used memory on the node were 0.
If the available provisional memory is 0, the node cannot create new shards because the node has reached its shard limit, is in maintenance mode, or is a quorum-only node. | +| FLASH | The amount of flash memory available on the node, similar to `FREE_RAM` | +| AVAILABLE_FLASH | Flash memory available to create new shards, similar to `PROVISIONAL_RAM` | +| VERSION | The cluster version installed on the node | +| SHA | The node’s SHA hash | +| RACK-ID | The node’s rack ID | +| STATUS | The node’s status | + +### Example + +``` sh +$ rladmin status nodes sort PROVISIONAL_RAM HOSTNAME +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +node:1 master 198.51.100.2 3d99db1fdf4b 4/100 6 14.74GB/19.54GB 10.73GB/16.02GB 6.2.12-37 OK +*node:3 slave 198.51.100.4 b87cc06c830f 0/100 6 14.74GB/19.54GB 11.22GB/16.02GB 6.2.12-37 OK +node:2 slave 198.51.100.3 fc7a3d332458 0/100 6 14.74GB/19.54GB 11.22GB/16.02GB 6.2.12-37 OK +``` + +## `status shards` + +Displays the current status of all shards on the cluster. + +``` sh +rladmin status shards + [ node ] + [ db {db: | } ] + [ extra ] + [ sort ] + [ issues_only ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| node \ | Only show shards for the specified node ID | +| db db:\ | Only show shards for the specified database ID | +| db \ | Only show shards for the specified database name | +| extra \ | Extra options that show more information | +| sort \ | Sort results by specified column titles | +| issues_only | Filters out all items that have an `OK` status | + + +| Extra parameter | Description | +|-------------------|-------------| +| extra all | Shows all `extra` information | +| extra backups | Shows periodic backup status | +| extra frag | Shows fragmented memory available after the restart | +| extra shardstats | Shows shards per node | +| extra rack_id | Shows `rack_id` if customer is not `rack_aware` | +| extra redis_version | Shows Redis version of all shards in the cluster | +| extra state_machine | Shows execution of state machine information | +| extra watchdog | Shows watchdog status | + +### Returns {#returns-shards} + +Returns a table of the status of all shards on the cluster. + +If `sort ` is specified, the result is sorted by the specified table columns. + +If `issues_only` is specified, it only shows shards that do not have an `OK` status. + +The following table describes the fields returned by `rladmin status shards extra all`: + +| Field | Description | +|-------|-------------| +| DB:ID | Database ID | +| NAME | Database name | +| ID | Shard ID | +| NODE | The node on which the shard resides | +| ROLE | The shard’s role: primary (`master`) or replica (`slave`) | +| SLOTS | Redis keys slot range of the shard | +| USED_MEMORY | Memory used by the shard | +| BACKUP_PROGRESS | The shard’s backup progress | +| RAM_FRAG | The shard’s RAM fragmentation caused by deleted data or expired keys. A large value can indicate inefficient memory allocation. 
| +| FLASH_FRAG | For Auto Tiering databases, the shard’s flash fragmentation | +| WATCHDOG_STATUS | The shard is being monitored by the node watchdog and the shard is healthy | +| STATUS | The shard’s status | + +### Example + +``` sh +$ rladmin status shards sort USED_MEMORY ID +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:3 database3 redis:6 node:1 master 8192-12287 2.04MB OK +db:3 database3 redis:4 node:1 master 0-4095 2.08MB OK +db:3 database3 redis:5 node:1 master 4096-8191 2.08MB OK +db:3 database3 redis:7 node:1 master 12288-16383 2.08MB OK +``` +--- +Title: rladmin verify +alwaysopen: false +categories: +- docs +- operate +- rs +description: Prints verification reports for the cluster. +headerRange: '[1-2]' +linkTitle: verify +toc: 'true' +weight: $weight +--- + +Prints verification reports for the cluster. + +## `verify balance` + +Prints a balance report that displays all of the unbalanced endpoints or nodes in the cluster. + +```sh +rladmin verify balance [ node ] +``` + +The [proxy policy]({{< relref "/operate/rs/databases/configure/proxy-policy#proxy-policies" >}}) determines which nodes or endpoints to report as unbalanced. + +A node is unbalanced if: +- `all-nodes` proxy policy and the node has no endpoint + +An endpoint is unbalanced in the following cases: +- `single` proxy policy and one of the following is true: + - Shard placement is [`sparse`]({{< relref "/operate/rs/databases/memory-performance/shard-placement-policy.md#sparse-shard-placement-policy" >}}) and none of the master shards are on the node + - Shard placement is [`dense`]({{< relref "/operate/rs/databases/memory-performance/shard-placement-policy.md#dense-shard-placement-policy" >}}) and some master shards are on a different node from the endpoint +- `all-master-shards` proxy policy and one of the following is true: + - None of the master shards are on the node + - Some master shards are on a different node from the endpoint + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------|-------------| +| node | integer | Specify a node ID to return a balance table for that node only (optional) | + +### Returns + +Returns a table of unbalanced endpoints and nodes in the cluster. + +### Examples + +Verify all nodes: + +```sh +$ rladmin verify balance +The table presents all of the unbalanced endpoints/nodes in the cluster +BALANCE: +NODE:ID DB:ID NAME ENDPOINT:ID PROXY_POLICY LOCAL SHARDS TOTAL SHARDS +``` + +Verify a specific node: + +```sh +$ rladmin verify balance node 1 +The table presents all of the unbalanced endpoints/nodes in the cluster +BALANCE: +NODE:ID DB:ID NAME ENDPOINT:ID PROXY_POLICY LOCAL SHARDS TOTAL SHARDS +``` + +## `verify rack_aware` + +Verifies that the cluster complies with the rack awareness policy and reports any discovered rack collisions, if [rack-zone awareness]({{< relref "/operate/rs/clusters/configure/rack-zone-awareness" >}}) is enabled. + +```sh +rladmin verify rack_aware +``` + +### Parameters + +None + +### Returns + +Returns whether the cluster is rack aware. If rack awareness is enabled, it returns any rack collisions. + +### Example + +```sh +$ rladmin verify rack_aware + +Cluster policy is not configured for rack awareness. +``` +--- +Title: rladmin placement +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configures the shard placement policy for a database. +headerRange: '[1-2]' +linkTitle: placement +toc: 'true' +weight: $weight +--- + +Configures the shard placement policy for a specified database. 
+ +``` sh +rladmin placement + db { db: | } + { dense | sparse } +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| db | db:\
name | Configures shard placement for the specified database | +| dense | | Places new shards on the same node as long as it has resources | +| sparse | | Places new shards on the maximum number of available nodes within the cluster | + +### Returns + +Returns the new shard placement policy if the policy was changed successfully. Otherwise, it returns an error. + +Use [`rladmin status databases`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-databases" >}}) to verify that the failover completed. + +### Example + +``` sh +$ rladmin status databases +DATABASES: +DB:ID NAME TYPE STATUS SHARDS PLACEMENT REPLICATION PERSISTENCE ENDPOINT +db:5 tr01 redis active 1 dense enabled aof redis-12000.cluster.local:12000 +$ rladmin placement db db:5 sparse +Shards placement policy is now sparse +$ rladmin status databases +DATABASES: +DB:ID NAME TYPE STATUS SHARDS PLACEMENT REPLICATION PERSISTENCE ENDPOINT +db:5 tr01 redis active 1 sparse enabled aof redis-12000.cluster.local:12000 +``` +--- +Title: rladmin info +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows the current configuration of a cluster, database, node, or proxy. +headerRange: '[1-2]' +linkTitle: info +toc: 'true' +weight: $weight +--- + +Shows the current configuration of specified databases, proxies, clusters, or nodes. + +## `info cluster` + +Lists the current configuration for the cluster. + +```sh +rladmin info cluster +``` + +### Parameters + +None + +### Returns + +Returns the current configuration for the cluster. + +### Example + +``` sh +$ rladmin info cluster +Cluster configuration: + repl_diskless: enabled + shards_overbooking: disabled + default_non_sharded_proxy_policy: single + default_sharded_proxy_policy: single + default_shards_placement: dense + default_fork_evict_ram: enabled + default_provisioned_redis_version: 6.0 + redis_migrate_node_threshold: 0KB (0 bytes) + redis_migrate_node_threshold_percent: 4 (%) + redis_provision_node_threshold: 0KB (0 bytes) + redis_provision_node_threshold_percent: 12 (%) + max_simultaneous_backups: 4 + slave_ha: enabled + slave_ha_grace_period: 600 + slave_ha_cooldown_period: 3600 + slave_ha_bdb_cooldown_period: 7200 + parallel_shards_upgrade: 0 + show_internals: disabled + expose_hostnames_for_all_suffixes: disabled + login_lockout_threshold: 5 + login_lockout_duration: 1800 + login_lockout_counter_reset_after: 900 + default_concurrent_restore_actions: 10 + endpoint_rebind_propagation_grace_time: 15 + data_internode_encryption: disabled + redis_upgrade_policy: major + db_conns_auditing: disabled + watchdog profile: local-network + http support: enabled + upgrade mode: disabled + cm_session_timeout_minutes: 15 + cm_port: 8443 + cnm_http_port: 8080 + cnm_https_port: 9443 + bigstore_driver: speedb +``` + +## `info db` + +Shows the current configuration for databases. + +```sh +rladmin info db [ {db: | } ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| db:id | ID of the specified database (optional) | +| name | Name of the specified database (optional) | + +### Returns + +Returns the current configuration for all databases. + +If `db:` or `` is specified, returns the current configuration for the specified database. 
+ +### Example + +``` sh +$ rladmin info db db:1 +db:1 [database1]: + client_buffer_limits: 1GB (hard limit)/512MB (soft limit) in 30 seconds + slave_buffer: auto + pubsub_buffer_limits: 32MB (hard limit)/8MB (soft limit) in 60 seconds + proxy_client_buffer_limits: 0KB (hard limit)/0KB (soft limit) in 0 seconds + proxy_slave_buffer_limits: 1GB (hard limit)/512MB (soft limit) in 60 seconds + proxy_pubsub_buffer_limits: 32MB (hard limit)/8MB (soft limit) in 60 seconds + repl_backlog: 1.02MB (1073741 bytes) + repl_timeout: 360 seconds + repl_diskless: default + master_persistence: disabled + maxclients: 10000 + conns: 5 + conns_type: per-thread + sched_policy: cmp + max_aof_file_size: 300GB + max_aof_load_time: 3600 seconds + dedicated_replicaof_threads: 5 + max_client_pipeline: 200 + max_shard_pipeline: 2000 + max_connections: 0 + oss_cluster: disabled + oss_cluster_api_preferred_ip_type: internal + gradual_src_mode: disabled + gradual_src_max_sources: 1 + gradual_sync_mode: auto + gradual_sync_max_shards_per_source: 1 + slave_ha: disabled (database) + mkms: enabled + oss_sharding: disabled + mtls_allow_weak_hashing: disabled + mtls_allow_outdated_certs: disabled + data_internode_encryption: disabled + proxy_policy: single + db_conns_auditing: disabled + syncer_mode: centralized +``` + +## `info node` + +Lists the current configuration for all nodes. + +```sh +rladmin info node [ ] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| id | ID of the specified node | + +### Returns + +Returns the current configuration for all nodes. + +If `` is specified, returns the current configuration for the specified node. + +### Example + +``` sh +$ rladmin info node 3 +Command Output: node:3 + address: 198.51.100.17 + external addresses: N/A + recovery path: N/A + quorum only: disabled + max redis servers: 100 + max listeners: 100 +``` + +## `info proxy` + +Lists the current configuration for a proxy. + +``` sh +rladmin info proxy { | all } +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| id | ID of the specified proxy | +| all | Show the current configuration for all proxies (optional) | + +### Returns + +If no parameter is specified or the `all` option is specified, returns the current configuration for all proxies. + +If ``is specified, returns the current configuration for the specified proxy. + +### Example + +``` sh +$ rladmin info proxy +proxy:1 + mode: dynamic + scale_threshold: 80 (%) + scale_duration: 30 (seconds) + max_threads: 8 + threads: 3 +``` +--- +Title: rladmin help +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows available commands or specific command usage. +headerRange: '[1-2]' +linkTitle: help +toc: 'true' +weight: $weight +--- + +Lists all options and parameters for `rladmin` commands. + +``` sh +rladmin help [command] +``` + +### Parameters + +| Parameter | Description | +|-----------|-------------| +| command | Display help for this `rladmin` command (optional) | + +### Returns + +Returns a list of available `rladmin` commands. + +If a `command` is specified, returns a list of all the options and parameters for that `rladmin` command. + +### Example + +```sh +$ rladmin help +usage: rladmin [options] [command] [command args] + +Options: + -y Assume Yes for all required user confirmations. 
+ +Commands: + bind Bind an endpoint + cluster Cluster management commands + exit Exit admin shell + failover Fail-over master to slave + help Show available commands, or use help for a specific command + info Show information about tunable parameters + migrate Migrate elements between nodes + node Node management commands + placement Configure shards placement policy + recover Recover databases + restart Restart database shards + status Show status information + suffix Suffix management + tune Tune system parameters + upgrade Upgrade entity version + verify Cluster verification reports + +Use "rladmin help [command]" to get more information on a specific command. +``` +--- +Title: rladmin suffix +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manages the DNS suffixes in the cluster. +headerRange: '[1-2]' +linkTitle: suffix +toc: 'true' +weight: $weight +--- + +Manages the DNS suffixes in the cluster. + +## `suffix add` + +Adds a DNS suffix to the cluster. + +``` sh +rladmin suffix add name + [default] + [internal] + [mdns] + [use_aaaa_ns] + [slaves ..] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------------|-----------------------------------------------------------------------------------------------| +| name | string | DNS suffix to add to the cluster | +| default | | Sets the given suffix as the default. If a default already exists, this overwrites it. | +| internal | | Forces the given suffix to use private IPs | +| mdns | | Activates multicast DNS support for the given suffix | +| slaves | list of IPv4 addresses | The given suffix will notify the frontend DNS servers when a change in the frontend DNS has occurred | +| use_aaaa_ns | | Activates IPv6 address support | + +### Returns + +Returns `Added suffixes successfully` if the suffix was added. Otherwise, it returns an error. + +### Example + +``` sh +$ rladmin suffix add name new.rediscluster.local +Added suffixes successfully +``` + +## `suffix delete` + +Deletes an existing DNS suffix from the cluster. + +``` sh +rladmin suffix delete name +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|------------------|-----------------------------------------------------------------------------------------------| +| name | string | DNS suffix to delete from the cluster | + +### Returns + +Returns `Suffix deleted successfully` if the suffix was deleted. Otherwise, it returns an error. + +### Example + +``` sh +$ rladmin suffix delete name new.rediscluster.local +Suffix deleted successfully +``` + +## `suffix list` + +Lists the DNS suffixes in the cluster. + +```sh +rladmin suffix list +``` + +### Parameters + +None + +### Returns + +Returns a list of the DNS suffixes. + +### Example + +``` sh +$ rladmin suffix list +List of all suffixes: +cluster.local +new.rediscluster.local +``` +--- +Title: rladmin failover +alwaysopen: false +categories: +- docs +- operate +- rs +description: Fail over primary shards of a database to their replicas. +headerRange: '[1-2]' +linkTitle: failover +toc: 'true' +weight: $weight +--- + +Fails over one or more primary (also known as master) shards of a database and promotes their respective replicas to primary shards. + +``` sh +rladmin failover + [db { db: | }] + shard + [immediate] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| db | db:\
name | Fail over shards for the specified database | +| shard | one or more primary shard IDs | Primary shard or shards to fail over | +| immediate | | Perform failover without verifying the replica shards are in full sync with the master shards | + +### Returns + +Returns `Finished successfully` if the failover completed. Otherwise, it returns an error. + +Use [`rladmin status shards`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-shards" >}}) to verify that the failover completed. + +### Example + +``` sh +$ rladmin status shards +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:1 slave 0-16383 3.02MB OK +db:5 tr01 redis:13 node:2 master 0-16383 3.09MB OK +$ rladmin failover shard 13 +Executing shard fail-over: OOO. +Finished successfully +$ rladmin status shards +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:1 master 0-16383 3.12MB OK +db:5 tr01 redis:13 node:2 slave 0-16383 2.99MB OK +``` +--- +Title: rladmin upgrade +alwaysopen: false +categories: +- docs +- operate +- rs +description: Upgrades the version of a module or Redis Enterprise Software for a database. +headerRange: '[1-2]' +linkTitle: upgrade +toc: 'true' +weight: $weight +--- + +Upgrades the version of a module or Redis Enterprise Software for a database. + +## `upgrade db` + +Schedules a restart of the primary and replica processes of a database and then upgrades the database to the latest version of Redis Enterprise Software. + +For more information, see [Upgrade an existing Redis Software Deployment]({{< relref "/operate/rs/installing-upgrading/upgrading" >}}). + +```sh +rladmin upgrade db { db: | } + [ preserve_roles ] + [ keep_redis_version ] + [ discard_data ] + [ force_discard ] + [ parallel_shards_upgrade ] + [ keep_crdt_protocol_version ] + [ redis_version ] + [ force ] + [ { latest_with_modules | and module module_name version module_args } ] +``` + +As of v6.2.4, the default behavior for `upgrade db` has changed. It is now controlled by a new parameter that sets the default upgrade policy used to create new databases and to upgrade ones already in the cluster. To learn more, see [`tune cluster default_redis_version`]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-cluster" >}}). + +As of Redis Enterprise Software version 7.8.2, `upgrade db` will always upgrade modules. + +### Parameters + +| Parameters | Type/Value | Description | +|----------------------------|--------------------------|------------------------------------------------------------------------------------------------------------------------| +| db | db:\
name | Database to upgrade | +| and module | [upgrade module](#upgrade-module) command | Clause that allows the upgrade of a database and a specified Redis module in a single step with only one restart (can be specified multiple times). Deprecated as of Redis Enterprise Software v7.8.2. | +| discard_data | | Indicates that data will not be saved after the upgrade | +| force | | Forces upgrade and skips warnings and confirmations | +| force_discard | | Forces `discard_data` if replication or persistence is enabled | +| keep_crdt_protocol_version | | Keeps the current CRDT protocol version | +| keep_redis_version | | Upgrades to a new patch release, not to the latest major.minor version. Deprecated as of Redis Enterprise Software v7.8.2. To upgrade modules without upgrading the Redis database version, set `redis_version` to the current Redis database version instead. | +| latest_with_modules | | Upgrades the Redis Enterprise Software version and all modules in the database. As of Redis Enterprise Software version 7.8.2, `upgrade db` will always upgrade modules. | +| parallel_shards_upgrade | integer
'all' | Maximum number of shards to upgrade all at once | +| preserve_roles | | Performs an additional failover to guarantee the shards' roles are preserved | +| redis_version | Redis version | Upgrades the database to the specified version instead of the latest version | + +### Returns + +Returns `Done` if the upgrade completed. Otherwise, it returns an error. + +### Example + +```sh +$ rladmin upgrade db db:5 +Monitoring e39c8e87-75f9-4891-8c86-78cf151b720b +active - SMUpgradeBDB init +active - SMUpgradeBDB check_slaves +.active - SMUpgradeBDB prepare +active - SMUpgradeBDB stop_forwarding +oactive - SMUpgradeBDB start_wd +active - SMUpgradeBDB wait_for_version +.completed - SMUpgradeBDB +Done +``` + +## `upgrade module` + +Upgrades Redis modules in use by a specific database. Deprecated as of Redis Enterprise Software v7.8.2. Use [`upgrade db`](#upgrade-db) instead. + +For more information, see [Upgrade modules]({{< relref "/operate/oss_and_stack/stack-with-enterprise/install/upgrade-module" >}}). + +```sh +rladmin upgrade module + db_name { db: | } + module_name + version + module_args +``` + +### Parameters + +| Parameters | Type/Value | Description | +|----------------------------|--------------------------|------------------------------------------------------------------------------------------------------------------------| +| db_name | db:\
name | Upgrade a module for the specified database | +| module_name | 'ReJSON'
'graph'
'search'
'bf'
'rg'
'timeseries' | Redis module to upgrade | +| version | module version number | Upgrades the module to the specified version | +| module_args | 'keep_args'
string | Module configuration options | + +For more information about module configuration options, see [Module configuration options]({{< relref "/operate/oss_and_stack/stack-with-enterprise/install/add-module-to-database#module-configuration-options" >}}). + +### Returns + +Returns `Done` if the upgrade completed. Otherwise, it returns an error. + +### Example + +```sh +$ rladmin upgrade module db_name db:8 module_name graph version 20812 module_args "" +Monitoring 21ac7659-e44c-4cc9-b243-a07922b2a6cc +active - SMUpgradeBDB init +active - SMUpgradeBDB wait_for_version +Ocompleted - SMUpgradeBDB +Done +``` +--- +Title: rladmin node enslave +alwaysopen: false +categories: +- docs +- operate +- rs +description: Changes a node's resources to replicas. +headerRange: '[1-2]' +linkTitle: enslave +toc: 'true' +weight: $weight +--- + +Changes the resources of a node to replicas. + +## `node enslave` + +Changes all of the node's endpoints and shards to replicas. + +``` sh +rladmin node enslave + [demote_node] + [retry_timeout_seconds ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Changes all of the node's endpoints and shards to replicas | +| demote_node | | If the node is a primary node, changes the node to replica | +| retry_timeout_seconds | integer | Retries on failure until the specified number of seconds has passed. | + +### Returns + +Returns `OK` if the roles were successfully changed. Otherwise, it returns an error. + +Use [`rladmin status shards`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-shards" >}}) to verify that the roles were changed. + +### Example + +```sh +$ rladmin status shards node 2 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:6 tr02 redis:14 node:2 master 0-4095 3.2MB OK +db:6 tr02 redis:16 node:2 master 4096-8191 3.12MB OK +db:6 tr02 redis:18 node:2 master 8192-12287 3.16MB OK +db:6 tr02 redis:20 node:2 master 12288-16383 3.12MB OK +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 slave 192.0.2.12 198.51.100.1 3d99db1fdf4b 1/100 6 14.43GB/19.54GB 10.87GB/16.02GB 6.2.12-37 OK +node:2 master 192.0.2.13 198.51.100.2 fc7a3d332458 4/100 6 14.43GB/19.54GB 10.88GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.14 b87cc06c830f 5/120 6 14.43GB/19.54GB 10.83GB/16.02GB 6.2.12-37 OK +$ rladmin node 2 enslave demote_node +Performing enslave_node action on node:2: 100% +OK +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.12 198.51.100.1 3d99db1fdf4b 1/100 6 14.72GB/19.54GB 10.91GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.13 198.51.100.2 fc7a3d332458 4/100 6 14.72GB/19.54GB 11.17GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.14 b87cc06c830f 5/120 6 14.72GB/19.54GB 10.92GB/16.02GB 6.2.12-37 OK +$ rladmin status shards node 2 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:6 tr02 redis:14 node:2 slave 0-4095 2.99MB OK +db:6 tr02 redis:16 node:2 slave 4096-8191 3.01MB OK +db:6 tr02 redis:18 node:2 slave 8192-12287 2.93MB OK +db:6 tr02 redis:20 node:2 slave 12288-16383 3.06MB OK +``` + +## `node enslave endpoints_only` + +Changes the role for all endpoints on a node to replica. 
+ +``` sh +rladmin node enslave endpoints_only + [retry_timeout_seconds ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Changes all of the node's endpoints to replicas | +| retry_timeout_seconds | integer | Retries on failure until the specified number of seconds has passed. | + +### Returns + +Returns `OK` if the roles were successfully changed. Otherwise, it returns an error. + +Use [`rladmin status endpoints`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-endpoints" >}}) to verify that the roles were changed. + +### Example + +```sh +$ rladmin status endpoints +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:5 tr01 endpoint:5:1 node:1 single No +db:6 tr02 endpoint:6:1 node:3 all-master-shards No +$ rladmin node 1 enslave endpoints_only +Performing enslave_node action on node:1: 100% +OK +$ rladmin status endpoints +ENDPOINTS: +DB:ID NAME ID NODE ROLE SSL +db:5 tr01 endpoint:5:1 node:3 single No +db:6 tr02 endpoint:6:1 node:3 all-master-shards No +``` + +## `node enslave shards_only` + +Changes the role for all shards of a node to replica. + +``` sh +rladmin node enslave shards_only + [retry_timeout_seconds ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Changes all of the node's shards to replicas | +| retry_timeout_seconds | integer | Retries on failure until the specified number of seconds has passed. | + +### Returns + +Returns `OK` if the roles were successfully changed. Otherwise, it returns an error. + +Use [`rladmin status shards`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-shards" >}}) to verify that the roles were changed. + +### Example + +```sh +$ rladmin status shards node 3 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:3 master 0-16383 3.04MB OK +db:6 tr02 redis:15 node:3 master 0-4095 4.13MB OK +db:6 tr02 redis:17 node:3 master 4096-8191 4.13MB OK +db:6 tr02 redis:19 node:3 master 8192-12287 4.13MB OK +db:6 tr02 redis:21 node:3 master 12288-16383 4.13MB OK +$ rladmin node 3 enslave shards_only +Performing enslave_node action on node:3: 100% +OK +$ rladmin status shards node 3 +SHARDS: +DB:ID NAME ID NODE ROLE SLOTS USED_MEMORY STATUS +db:5 tr01 redis:12 node:3 slave 0-16383 2.98MB OK +db:6 tr02 redis:15 node:3 slave 0-4095 4.23MB OK +db:6 tr02 redis:17 node:3 slave 4096-8191 4.11MB OK +db:6 tr02 redis:19 node:3 slave 8192-12287 4.19MB OK +db:6 tr02 redis:21 node:3 slave 12288-16383 4.27MB OK +``` +--- +Title: rladmin node maintenance_mode +alwaysopen: false +categories: +- docs +- operate +- rs +description: Turns quorum-only mode on or off for a node. +headerRange: '[1-2]' +linkTitle: maintenance_mode +toc: 'true' +weight: $weight +--- + +Configures [quorum-only mode]({{< relref "/operate/rs/clusters/maintenance-mode#activate-maintenance-mode" >}}) on a node. + +## `node maintenance_mode on` + +Migrates shards out of the node and turns the node into a quorum node to prevent shards from returning to it. 
+ +```sh +rladmin node maintenance_mode on + [ keep_slave_shards ] + [ evict_ha_replica { enabled | disabled } ] + [ evict_active_active_replica { enabled | disabled } ] + [ evict_dbs ] + [ demote_node ] + [ overwrite_snapshot ] + [ max_concurrent_actions ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Turns the specified node into a quorum node | +| demote_node | | If the node is a primary node, changes the node to replica | +| evict_ha_replica | `enabled`
`disabled` | Migrates the HA replica shards in the node | +| evict_active_active_replica | `enabled`
`disabled` | Migrates the Active-Active replica shards in the node | +| evict_dbs | list of database names or IDs | Specify databases whose shards should be evicted from the node when entering maintenance mode.

Examples:
`$ rladmin node 1 maintenance_mode on evict_dbs db:1 db:2`
`$ rladmin node 1 maintenance_mode on evict_dbs db_name1 db_name2` | +| keep_slave_shards | | Keeps replica shards in the node and demotes primary shards to replicas.

Deprecated as of Redis Enterprise Software 7.4.2. Use `evict_ha_replica disabled evict_active_active_replica disabled` instead. | +| max_concurrent_actions | integer | Maximum number of concurrent actions during node maintenance | +| overwrite_snapshot | | Overwrites the latest existing node snapshot taken when enabling maintenance mode | + +### Returns + +Returns `OK` if the node was converted successfully. If the cluster does not have enough resources to migrate the shards, the process returns a warning. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify the node became a quorum node. + +### Example + +```sh +$ rladmin node 2 maintenance_mode on overwrite_snapshot +Found snapshot from 2024-01-06T11:36:47Z, overwriting the snapshot +Performing maintenance_on action on node:2: 0% +created snapshot NodeSnapshot + +node:2 will not accept any more shards +Performing maintenance_on action on node:2: 100% +OK +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.12 198.51.100.1 3d99db1fdf4b 5/100 6 14.21GB/19.54GB 10.62GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.13 198.51.100.2 fc7a3d332458 0/0 6 14.21GB/19.54GB 0KB/0KB 6.2.12-37 OK +node:4 slave 192.0.2.14 6d754fe12cb9 5/100 6 14.21GB/19.54GB 10.62GB/16.02GB 6.2.12-37 OK +``` + +## `node maintenance_mode off` + +Turns maintenance mode off and returns the node to its previous state. + +```sh +rladmin node maintenance_mode off + [ { snapshot_name | skip_shards_restore } ] + [ max_concurrent_actions ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Restores the node back to the previous state | +| max_concurrent_actions | integer | Maximum number of concurrent actions during node maintenance | +| skip_shards_restore | | Does not restore shards back to the node | +| snapshot_name | string | Restores the node back to a state stored in the specified snapshot | + +### Returns + +Returns `OK` if the node was restored successfully. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify the node was restored. 
+ +### Example + +```sh +$ rladmin node 2 maintenance_mode off +Performing maintenance_off action on node:2: 0% +Found snapshot: NodeSnapshot +Performing maintenance_off action on node:2: 0% +migrate redis:12 to node:2: executing +Performing maintenance_off action on node:2: 0% +migrate redis:12 to node:2: finished +Performing maintenance_off action on node:2: 0% +migrate redis:17 to node:2: executing + +migrate redis:15 to node:2: executing +Performing maintenance_off action on node:2: 0% +migrate redis:17 to node:2: finished + +migrate redis:15 to node:2: finished +Performing maintenance_off action on node:2: 0% +failover redis:16: executing + +failover redis:14: executing +Performing maintenance_off action on node:2: 0% +failover redis:16: finished + +failover redis:14: finished +Performing maintenance_off action on node:2: 0% +failover redis:18: executing +Performing maintenance_off action on node:2: 0% +failover redis:18: finished + +migrate redis:21 to node:2: executing + +migrate redis:19 to node:2: executing +Performing maintenance_off action on node:2: 0% +migrate redis:21 to node:2: finished + +migrate redis:19 to node:2: finished + +failover redis:20: executing +Performing maintenance_off action on node:2: 0% +failover redis:20: finished +Performing maintenance_off action on node:2: 0% +rebind endpoint:6:1: executing +Performing maintenance_off action on node:2: 0% +rebind endpoint:6:1: finished +Performing maintenance_off action on node:2: 100% +OK +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.12 198.51.100.1 3d99db1fdf4b 5/100 6 14.2GB/19.54GB 10.61GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.13 198.51.100.2 fc7a3d332458 5/100 6 14.2GB/19.54GB 10.61GB/16.02GB 6.2.12-37 OK +node:4 slave 192.0.2.14 6d754fe12cb9 0/100 6 14.2GB/19.54GB 10.69GB/16.02GB 6.2.12-37 OK +``` +--- +Title: rladmin node external_addr +alwaysopen: false +categories: +- docs +- operate +- rs +description: Configures a node's external IP addresses. +headerRange: '[1-2]' +linkTitle: external_addr +toc: 'true' +weight: $weight +--- + +Configures a node's external IP addresses. + +## `node external_addr add` + +Adds an external IP address that accepts inbound user connections for the node. + +```sh +rladmin node external_addr + add +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| node | integer | Adds an external IP address for the specified node | +| IP address | IP address | External IP address of the node | + +### Returns + +Returns `Updated successfully` if the IP address was added. Otherwise, it returns an error. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify the external IP address was added. + +### Example + +``` sh +$ rladmin node 1 external_addr add 198.51.100.1 +Updated successfully. 
+$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.2 198.51.100.1 3d99db1fdf4b 5/100 6 14.75GB/19.54GB 11.15GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.3 fc7a3d332458 0/100 6 14.75GB/19.54GB 11.24GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.4 b87cc06c830f 5/120 6 14.75GB/19.54GB 11.15GB/16.02GB 6.2.12-37 OK +``` + +## `node external_addr set` + +Sets one or more external IP addresses that accepts inbound user connections for the node. + +```sh +rladmin node external_addr + set ... +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| node | integer | Sets external IP addresses for the specified node | +| IP address | list of IP addresses | Sets specified IP addresses as external addresses | + +### Returns + +Returns `Updated successfully` if the IP addresses were set. Otherwise, it returns an error. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify the external IP address was set. + +### Example + +``` sh +$ rladmin node 2 external_addr set 198.51.100.2 198.51.100.3 +Updated successfully. +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.2 198.51.100.1 3d99db1fdf4b 5/100 6 14.75GB/19.54GB 11.15GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.3 198.51.100.2,198.51.100.3 fc7a3d332458 0/100 6 14.75GB/19.54GB 11.23GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.4 b87cc06c830f 5/120 6 14.75GB/19.54GB 11.15GB/16.02GB 6.2.12-37 OK +``` +## `node external_addr remove` + +Removes the specified external IP address from the node. + +```sh +rladmin node external_addr + remove +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| node | integer | Removes an external IP address for the specified node | +| IP address | IP address | Removes the specified IP address of the node | + +### Returns + +Returns `Updated successfully` if the IP address was removed. Otherwise, it returns an error. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify the external IP address was removed. + +### Example + +``` sh +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.2 198.51.100.1 3d99db1fdf4b 5/100 6 14.75GB/19.54GB 11.15GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.3 198.51.100.2,198.51.100.3 fc7a3d332458 0/100 6 14.75GB/19.54GB 11.23GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.4 b87cc06c830f 5/120 6 14.75GB/19.54GB 11.15GB/16.02GB 6.2.12-37 OK +$ rladmin node 2 external_addr remove 198.51.100.3 +Updated successfully. 
+$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.2 198.51.100.1 3d99db1fdf4b 5/100 6 14.74GB/19.54GB 11.14GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.3 198.51.100.2 fc7a3d332458 0/100 6 14.74GB/19.54GB 11.22GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.4 b87cc06c830f 5/120 6 14.74GB/19.54GB 11.14GB/16.02GB 6.2.12-37 OK +``` +--- +Title: rladmin node snapshot +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manages snapshots of the configuration of a node's shards and endpoints. +headerRange: '[1-2]' +linkTitle: snapshot +toc: 'true' +weight: $weight +--- + +Manages snapshots of the configuration of a node's shards and endpoints. + +You can create node snapshots and use them to restore the node's shards and endpoints to a configuration from a previous point in time. If you restore a node from a snapshot (for example, after an event such as failover or maintenance), the node's shards have the same placement and roles as when the snapshot was created. + +## `node snapshot create` + +Creates a snapshot of a node's current configuration, including the placement of shards and endpoints on the node and the shards' roles. + +```sh +rladmin node snapshot create +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Creates a snapshot of the specified node | +| name | string | Name of the created snapshot | + +### Returns + +Returns `Done` if the snapshot was created successfully. Otherwise, returns an error. + +### Example + +```sh +$ rladmin node 1 snapshot create snap1 +Creating node snapshot 'snap1' for node:1 +Done. +``` + +## `node snapshot delete` + +Deletes an existing snapshot of a node. + +```sh +rladmin node snapshot delete +``` + +{{}} +You cannot use this command to delete a snapshot created by maintenance mode. As of Redis Enterprise Software version 7.4.2, only the latest maintenance mode snapshot is kept. +{{}} + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Deletes a snapshot of the specified node | +| name | string | Deletes the specified snapshot | + +### Returns + +Returns `Done` if the snapshot was deleted successfully. Otherwise, returns an error. + +### Example + +```sh +$ rladmin node 1 snapshot delete snap1 +Deleting node snapshot 'snap1' for node:1 +Done. +``` + +## `node snapshot list` + +Displays a list of created snapshots for the specified node. + +``` sh +rladmin node snapshot list +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Displays snapshots of the specified node | + +### Returns + +Returns a list of snapshots of the specified node. + +### Example + +```sh +$ rladmin node 2 snapshot list +Name Node Time +snap2 2 2022-05-12T19:27:51Z +``` + +## `node snapshot restore` + +Restores a node's shards and endpoints as close to the stored snapshot as possible. 
+ +```sh +rladmin node snapshot restore +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------------------------------------| +| node | integer | Restore the specified node from a snapshot. | +| restore | string | Name of the snapshot used to restore the node. | + +### Returns + +Returns `Snapshot restore completed successfully` if the actions needed to restore the snapshot completed successfully. Otherwise, it returns an error. + +### Example + +```sh +$ rladmin node 2 snapshot restore snap2 +Reading node snapshot 'snap2' for node:2 +Planning restore +Planned actions: +* migrate redis:15 to node:2 +* failover redis:14 +* migrate redis:17 to node:2 +* failover redis:16 +* migrate redis:19 to node:2 +* failover redis:18 +* migrate redis:21 to node:2 +* failover redis:20 +Proceed?[Y]es/[N]o? Y +2022-05-12T19:43:31.486613 Scheduling 8 actions +[2022-05-12T19:43:31.521422 Actions Status: 8 waiting ] +* [migrate redis:21 to node:2] waiting => executing +* [migrate redis:19 to node:2] waiting => executing +* [migrate redis:17 to node:2] waiting => executing +* [migrate redis:15 to node:2] waiting => executing +[2022-05-12T19:43:32.586084 Actions Status: 4 executing | 4 waiting ] +* [migrate redis:21 to node:2] executing => finished +* [migrate redis:19 to node:2] executing => finished +* [migrate redis:17 to node:2] executing => finished +* [migrate redis:15 to node:2] executing => finished +* [failover redis:20] waiting => executing +* [failover redis:18] waiting => executing +* [failover redis:16] waiting => executing +* [failover redis:14] waiting => executing +[2022-05-12T19:43:33.719496 Actions Status: 4 finished | 4 executing ] +* [failover redis:20] executing => finished +* [failover redis:18] executing => finished +* [failover redis:16] executing => finished +* [failover redis:14] executing => finished +Snapshot restore completed successfully. +``` +--- +Title: rladmin node addr set +alwaysopen: false +categories: +- docs +- operate +- rs +description: Sets a node's internal IP address. +headerRange: '[1-2]' +linkTitle: addr +toc: 'true' +weight: $weight +--- + +Sets the internal IP address of a node. You can only set the internal IP address when the node is down. See [Change internal IP address]({{< relref "/operate/rs/networking/multi-ip-ipv6#change-internal-ip-address" >}}) for detailed instructions. + +```sh +rladmin node addr set +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| node | integer | Sets the internal IP address of the specified node | +| addr | IP address | Sets the node's internal IP address to the specified IP address | + +### Returns + +Returns `Updated successfully` if the IP address was set. Otherwise, it returns an error. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify the internal IP address was changed. 
+ +### Example + +```sh +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.2 3d99db1fdf4b 5/100 6 16.06GB/19.54GB 12.46GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.3 fc7a3d332458 0/100 6 -/19.54GB -/16.02GB 6.2.12-37 DOWN, last seen 33s ago +node:3 slave 192.0.2.4 b87cc06c830f 5/120 6 16.06GB/19.54GB 12.46GB/16.02GB 6.2.12-37 OK +$ rladmin node 2 addr set 192.0.2.5 +Updated successfully. +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.2 3d99db1fdf4b 5/100 6 14.78GB/19.54GB 11.18GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.5 fc7a3d332458 0/100 6 14.78GB/19.54GB 11.26GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.4 b87cc06c830f 5/120 6 14.78GB/19.54GB 11.18GB/16.02GB 6.2.12-37 OK +``` +--- +Title: rladmin node recovery_path set +alwaysopen: false +categories: +- docs +- operate +- rs +description: Sets a node's local recovery path. +headerRange: '[1-2]' +linkTitle: recovery_path +toc: 'true' +weight: $weight +--- + +Sets the node's local recovery path, which specifies the directory where [persistence files]({{< relref "/operate/rs/databases/configure/database-persistence" >}}) are stored. You can use these persistence files to [recover a failed database]({{< relref "/operate/rs/databases/recover" >}}). + +```sh +rladmin node recovery_path set +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------|--------------------------------|-----------------------------------------------------------------------------------------------| +| node | integer | Sets the recovery path for the specified node | +| path | filepath | Path to the folder where persistence files are stored | + +### Returns + +Returns `Updated successfully` if the recovery path was set. Otherwise, it returns an error. + +### Example + +```sh +$ rladmin node 2 recovery_path set /var/opt/redislabs/persist/redis +Updated successfully. +``` +--- +Title: rladmin node remove +alwaysopen: false +categories: +- docs +- operate +- rs +description: Removes a node from the cluster. +headerRange: '[1-2]' +linkTitle: remove +toc: 'true' +weight: $weight +--- + +Removes the specified node from the cluster. + +```sh +rladmin node remove [ wait_for_persistence { enabled | disabled } ] +``` + +### Parameters + +| Parameter | Type/Value | Description | +|-----------------------|--------------------------------|-------------------------------------------------------------| +| node | integer | The node to remove from the cluster | +| wait_for_persistence | `enabled`
`disabled` | Ensures persistence files are available for recovery. The cluster policy `persistent_node_removal` determines the default value. | + +### Returns + +Returns `OK` if the node was removed successfully. Otherwise, it returns an error. + +Use [`rladmin status nodes`]({{< relref "/operate/rs/references/cli-utilities/rladmin/status#status-nodes" >}}) to verify that the node was removed. + +### Example + +```sh +$ rladmin status nodes +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.12 198.51.100.1 3d99db1fdf4b 5/100 6 14.26GB/19.54GB 10.67GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.13 198.51.100.2 fc7a3d332458 4/100 6 14.26GB/19.54GB 10.71GB/16.02GB 6.2.12-37 OK +node:3 slave 192.0.2.14 b87cc06c830f 1/120 6 14.26GB/19.54GB 10.7GB/16.02GB 6.2.12-37 OK +$ rladmin node 3 remove +Performing remove action on node:3: 100% +OK +CLUSTER NODES: +NODE:ID ROLE ADDRESS EXTERNAL_ADDRESS HOSTNAME SHARDS CORES FREE_RAM PROVISIONAL_RAM VERSION STATUS +*node:1 master 192.0.2.12 198.51.100.1 3d99db1fdf4b 5/100 6 14.34GB/19.54GB 10.74GB/16.02GB 6.2.12-37 OK +node:2 slave 192.0.2.13 198.51.100.2 fc7a3d332458 5/100 6 14.34GB/19.54GB 10.74GB/16.02GB 6.2.12-37 OK +``` +--- +Title: rladmin node +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage nodes. +headerRange: '[1-2]' +hideListLinks: true +linkTitle: node +toc: 'true' +weight: $weight +--- + +`rladmin node` commands manage nodes in the cluster. + +{{}} +--- +Title: rladmin +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage Redis Enterprise clusters and databases. +hideListLinks: true +linkTitle: rladmin (manage cluster) +weight: $weight +--- + +`rladmin` is a command-line utility that lets you perform administrative tasks such as failover, migration, and endpoint binding on a Redis Enterprise Software cluster. You can also use `rladmin` to edit cluster and database configurations. + +Although you can use the Cluster Manager UI for some of these tasks, others are unique to the `rladmin` command-line tool. + +## `rladmin` commands + +{{}} + +## Use the `rladmin` shell + +To open the `rladmin` shell: + +1. Sign in to a Redis Enterprise Software node with an account that is a member of the **redislabs** group. + + The `rladmin` binary is located in `/opt/redislabs/bin`. If you don't have this directory in your `PATH`, you may want to add it. Otherwise, you can use `bash -l ` to sign in as a user with permissions for that directory. + +1. Run: `rladmin` + + {{}} +If the CLI does not recognize the `rladmin` command, +run this command to load the necessary configuration first: `bash -l` + {{}} + +In the `rladmin` shell, you can: + +- Run any `rladmin` command without prefacing it with `rladmin`. +- Enter `?` to view the full list of available commands. +- Enter [`help`]({{< relref "/operate/rs/references/cli-utilities/rladmin/help" >}}) followed by the name of a command for a detailed explanation of the command and its usage. +- Press the `Tab` key for command completion. +- Enter `exit` or press `Control+D` to exit the `rladmin` shell and return to the terminal prompt. +--- +Title: Command-line utilities +alwaysopen: false +categories: +- docs +- operate +- rs +description: Reference for Redis Enterprise Software command-line utilities, including rladmin, redis-cli, crdb-cli, and rlcheck. 
+hideListLinks: true +linkTitle: Command-line utilities +weight: $weight +--- + +Redis Enterprise Software includes a set of utilities to help you manage and test your cluster. To use a utility, run it from the command line. + +## Public utilities + +Administrators can use these CLI tools to manage and test a Redis Enterprise cluster. You can find the binaries in the `/opt/redislabs/bin/` directory. + +{{}} + +## Internal utilities + +The `/opt/redislabs/bin/` directory also contains utilities used internally by Redis Enterprise Software and for troubleshooting. + +{{}} +Do not use these tools for normal operations. +{{}} + +| Utility | Description | +|---------|-------------| +| bdb-cli | `redis-cli` connected to a database. | +| ccs-cli | Inspect Cluster Configuration Store. | +| cnm-ctl | Manages services for provisioning, migration, monitoring,
resharding, rebalancing, deprovisioning, and autoscaling. | +| consistency_checker | Checks the consistency of Redis instances. | +| crdbtop | Monitor Active-Active databases. | +| debug_mode | Enables debug mode. | +| debuginfo | Collects cluster information. | +| dmc-cli | Configure and monitor the DMC proxy. | +| pdns_control | Sends commands to a running PowerDNS nameserver. | +| redis_ctl | Stops or starts Redis instances. | +| rl_rdbloader | Load RDB backup files to a server. | +| rlutil | Maintenance utility. | +| shard-cli | `redis-cli` connected to a shard. | +| supervisorctl | Manages the lifecycles of Redis Enterprise services. | +--- +Title: crdb-cli crdb flush +alwaysopen: false +categories: +- docs +- operate +- rs +description: Clears all keys from an Active-Active database. +linkTitle: flush +weight: $weight +--- + +Clears all keys from an Active-Active database. + +```sh +crdb-cli crdb flush --crdb-guid + [ --no-wait ] +``` + +This command is irreversible. If the data in your database is important, back it up before you flush the database. + +### Parameters + +| Parameter | Value | Description | +|---------------------|--------|-------------------------------------| +| crdb-guid | string | The GUID of the database (required) | +| no-wait | | Does not wait for the task to complete | + +### Returns + +Returns the task ID of the task clearing the database. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the database to be cleared and return `finished`. + +### Example + +```sh +$ crdb-cli crdb flush --crdb-guid d84f6fe4-5bb7-49d2-a188-8900e09c6f66 +Task 53cdc59e-ecf5-4564-a8dd-448d71f9e568 created + ---> Status changed: queued -> started + ---> Status changed: started -> finished +``` +--- +Title: crdb-cli crdb remove-instance +alwaysopen: false +categories: +- docs +- operate +- rs +description: Removes a peer replica from an Active-Active database. +linkTitle: remove-instance +weight: $weight +--- + +Removes a peer replica instance from the Active-Active database and deletes the instance and its data from the participating cluster. + +```sh +crdb-cli crdb remove-instance --crdb-guid + --instance-id + [ --force ] + [ --no-wait ] +``` + +If the cluster cannot communicate with the instance that you want to remove, you can: + +1. Use the `--force` option to remove the instance from the Active-Active database without purging the data from the instance. + +1. Run [`crdb-cli crdb purge-instance`]({{< relref "/operate/rs/references/cli-utilities/crdb-cli/crdb/purge-instance" >}}) from the removed instance to delete the Active-Active database and its data. + +### Parameters + +| Parameter | Value | Description| +|------------------------------|--------|------------| +| crdb-guid | string | The GUID of the database (required) | +| instance-id | string | The ID of the local instance to remove (required) | +| force | | Removes the instance without purging data from the instance.
If --force is specified, you must run [`crdb-cli crdb purge-instance`]({{< relref "/operate/rs/references/cli-utilities/crdb-cli/crdb/purge-instance" >}}) from the removed instance. | +| no-wait | | Does not wait for the task to complete | + +### Returns + +Returns the task ID of the task that is deleting the instance. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the instance to be removed and return `finished`. + +### Example + +```sh +$ crdb-cli crdb remove-instance --crdb-guid db6365b5-8aca-4055-95d8-7eb0105c0b35 --instance-id 2 --force +Task b1eba5ba-90de-49e9-8678-d66daa1afb51 created + ---> Status changed: queued -> started + ---> Status changed: started -> finished +``` +--- +Title: crdb-cli crdb get +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows the current configuration of an Active-Active database. +linkTitle: get +weight: $weight +--- + +Shows the current configuration of an Active-Active database. + +```sh +crdb-cli crdb get --crdb-guid +``` + +### Parameters + +| Parameter | Value | Description | +|---------------------|--------|-------------------------------------| +| crdb-guid | string | The GUID of the database (required) | + +### Returns + +Returns the current configuration of the database. + +### Example + +```sh +$ crdb-cli crdb get --crdb-guid d84f6fe4-5bb7-49d2-a188-8900e09c6f66 +CRDB-GUID: d84f6fe4-5bb7-49d2-a188-8900e09c6f66 +Name: database1 +Encryption: False +Causal consistency: False +Protocol version: 1 +FeatureSet version: 5 +Modules: [] +Default-DB-Config: + memory_size: 1073741824 + port: 12000 + replication: True + shard_key_regex: [{'regex': '.*\\{(?.*)\\}.*'}, {'regex': '(?.*)'}] + sharding: True + shards_count: 1 + tls_mode: disabled + rack_aware: None + data_persistence: None + authentication_redis_pass: None + authentication_admin_pass: None + oss_sharding: None + oss_cluster: None + proxy_policy: None + shards_placement: None + oss_cluster_api_preferred_ip_type: None + bigstore: None + bigstore_ram_size: None + aof_policy: None + snapshot_policy: None + max_aof_load_time: None + max_aof_file_size: None +Instance: + Id: 1 + Cluster: + FQDN: cluster1.redis.local + URL: https://cluster1.redis.local:9443 + Replication-Endpoint: + Replication TLS SNI: + Compression: 3 + DB-Config: + authentication_admin_pass: + replication: None + rack_aware: None + memory_size: None + data_persistence: None + tls_mode: None + authentication_redis_pass: None + port: None + shards_count: None + shard_key_regex: None + oss_sharding: None + oss_cluster: None + proxy_policy: None + shards_placement: None + oss_cluster_api_preferred_ip_type: None + bigstore: None + bigstore_ram_size: None + aof_policy: None + snapshot_policy: None + max_aof_load_time: None + max_aof_file_size: None +Instance: + Id: 2 + Cluster: + FQDN: cluster2.redis.local + URL: https://cluster2.redis.local:9443 + Replication-Endpoint: + Replication TLS SNI: + Compression: 3 + DB-Config: + authentication_admin_pass: + replication: None + rack_aware: None + memory_size: None + data_persistence: None + tls_mode: None + authentication_redis_pass: None + port: None + shards_count: None + shard_key_regex: None + oss_sharding: None + oss_cluster: None + proxy_policy: None + shards_placement: None + oss_cluster_api_preferred_ip_type: None + bigstore: None + bigstore_ram_size: None + aof_policy: None + snapshot_policy: None + max_aof_load_time: None + max_aof_file_size: None +``` +--- +Title: crdb-cli crdb health-report +alwaysopen: false +categories: +- 
docs +- operate +- rs +description: Shows the health report of an Active-Active database. +linkTitle: health-report +weight: $weight +--- + +Shows the health report of the API management layer of an Active-Active database. + +```sh +crdb-cli crdb health-report --crdb-guid +``` + +### Parameters + +| Parameter | Value | Description | +|---------------------|--------|-------------------------------------| +| crdb-guid | string | The GUID of the database (required) | + +### Returns + +Returns the health report of the API management layer of the database. + +### Example + +```sh +$ crdb-cli crdb health-report --crdb-guid d84f6fe4-5bb7-49d2-a188-8900e09c6f66 +[ + { + "active_config_version":1, + "cluster_name":"cluster2.redis.local", + "configurations":[ + { + "causal_consistency":false, + "encryption":false, + "featureset_version":5, + "instances":[ + { + "cluster":{ + "name":"cluster1.redis.local", + "url":"https:\/\/cluster1.redis.local:9443" + }, + "db_uid":"", + "id":1 + }, + { + "cluster":{ + "name":"cluster2.redis.local", + "url":"https:\/\/cluster2.redis.local:9443" + }, + "db_uid":"1", + "id":2 + } + ], + "name":"database1", + "protocol_version":1, + "status":"commit-completed", + "version":1 + } + ], + "connections":[ + { + "name":"cluster1.redis.local", + "status":"ok" + }, + { + "name":"cluster2.redis.local", + "status":"ok" + } + ], + "guid":"d84f6fe4-5bb7-49d2-a188-8900e09c6f66", + "name":"database1", + "connection_error":null + }, + { + "active_config_version":1, + "cluster_name":"cluster1.redis.local", + "configurations":[ + { + "causal_consistency":false, + "encryption":false, + "featureset_version":5, + "instances":[ + { + "cluster":{ + "name":"cluster1.redis.local", + "url":"https:\/\/cluster1.redis.local:9443" + }, + "db_uid":"4", + "id":1 + }, + { + "cluster":{ + "name":"cluster2.redis.local", + "url":"https:\/\/cluster2.redis.local:9443" + }, + "db_uid":"", + "id":2 + } + ], + "name":"database1", + "protocol_version":1, + "status":"commit-completed", + "version":1 + } + ], + "connections":[ + { + "name":"cluster1.redis.local", + "status":"ok" + }, + { + "name":"cluster2.redis.local", + "status":"ok" + } + ], + "guid":"d84f6fe4-5bb7-49d2-a188-8900e09c6f66", + "name":"database1", + "connection_error":null + } +] +``` +--- +Title: crdb-cli crdb add-instance +alwaysopen: false +categories: +- docs +- operate +- rs +description: Adds a peer replica to an Active-Active database. +linkTitle: add-instance +weight: $weight +--- + +Adds a peer replica to an existing Active-Active database in order to host the database on another cluster. This creates an additional active instance of the database on the specified cluster. + +```sh +crdb-cli crdb add-instance --crdb-guid + --instance fqdn=,username=,password=[,url=,replication_endpoint=] + [ --compression <0-6> ] + [ --no-wait ] +``` + +### Parameters + +| Parameter | Value | Description | +|-----------|---------|-------------| +| crdb-guid | string | The GUID of the database (required) | +| instance | strings | The connection information for the new participating cluster (required) | +| compression | 0-6 | The level of data compression: 0=Compression disabled

6=High compression and resource load (Default: 3) | +| no-wait | | Does not wait for the task to complete | + +### Returns + +Returns the task ID of the task that is adding the new instance. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the instance to be added and return `finished`. + +### Example + +```sh +$ crdb-cli crdb add-instance --crdb-guid db6365b5-8aca-4055-95d8-7eb0105c0b35 \ + --instance fqdn=cluster2.redis.local,username=admin@redis.local,password=admin-password +Task f809fae7-8e26-4c8f-9955-b74dbbd47949 created + ---> Status changed: queued -> started + ---> Status changed: started -> finished +``` +--- +Title: crdb-cli crdb list +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows a list of all Active-Active databases. +linkTitle: list +weight: $weight +--- + +Shows a list of all Active-Active databases. + +```sh +crdb-cli crdb list +``` + +### Parameters + +None + +### Returns + +Returns a list of all Active-Active databases that the cluster participates in. Each database is represented with a unique GUID, the name of the database, an instance ID, and the FQDN of the cluster that hosts the instance. + +### Example + +```sh +$ crdb-cli crdb list +CRDB-GUID NAME REPL-ID CLUSTER-FQDN +d84f6fe4-5bb7-49d2-a188-8900e09c6f66 database1 1 cluster1.redis.local +d84f6fe4-5bb7-49d2-a188-8900e09c6f66 database1 2 cluster2.redis.local +``` +--- +Title: crdb-cli crdb create +alwaysopen: false +categories: +- docs +- operate +- rs +description: Creates an Active-Active database. +linkTitle: create +weight: $weight +--- + +Creates an Active-Active database. + +```sh +crdb-cli crdb create --name + --memory-size + --instance fqdn=,username=,password=[,url=,replication_endpoint=] + --instance fqdn=,username=,password=[,url=,replication_endpoint=] + [--port ] + [--no-wait] + [--default-db-config ] + [--default-db-config-file ] + [--compression <0-6>] + [--causal-consistency { true | false } ] + [--password ] + [--replication { true | false } ] + [--encryption { true | false } ] + [--sharding { false | true } ] + [--shards-count ] + [--shard-key-regex ] + [--oss-cluster { true | false } ] + [--bigstore { true | false }] + [--bigstore-ram-size ] + [--with-module name=,version=,args=] +``` + +### Prerequisites + +Before you create an Active-Active database, you must have: + +- At least two participating clusters +- [Network connectivity]({{< relref "/operate/rs/networking/port-configurations" >}}) between the participating clusters + +### Parameters + + +| Parameter & options(s)           | Value | Description | +|---------------------------------------------------------------------------------------|-------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| name \ | string | Name of the Active-Active database (required) | +| memory-size \ | size in bytes, kilobytes (KB), or gigabytes (GB) | Maximum database memory (required) | +| instance
   fqdn=\,
   username=\,
   password=\ | strings | The connection information for the participating clusters (required for each participating cluster) | +| port \ | integer | TCP port for the Active-Active database on all participating clusters | +| default-db-config \ | string | Default database configuration options | +| default-db-config-file \ | filepath | Default database configuration options from a file | +| no-wait | | Prevents `crdb-cli` from running another command before this command finishes | +| compression | 0-6 | The level of data compression:

0 = No compression

6 = High compression and resource load (Default: 3) | +| causal-consistency | true
false (*default*) | [Causal consistency]({{< relref "/operate/rs/databases/active-active/causal-consistency.md" >}}) applies updates to all instances in the order they were received | +| password \ | string | Password for access to the database | +| replication | true
false (*default*) | Activates or deactivates [database replication]({{< relref "/operate/rs/databases/durability-ha/replication.md" >}}) where every master shard replicates to a replica shard | +| encryption | true
false (*default*) | Activates or deactivates encryption | +| sharding | true
false (*default*) | Activates or deactivates sharding (also known as [database clustering]({{< relref "/operate/rs/databases/durability-ha/replication.md" >}})). Cannot be updated after the database is created | +| shards-count \ | integer | If sharding is enabled, this specifies the number of Redis shards for each database instance | +| oss-cluster | true
false (*default*) | Activates [OSS cluster API]({{< relref "/operate/rs/clusters/optimize/oss-cluster-api" >}}) | +| shard-key-regex \ | string | If clustering is enabled, this defines a regex rule (also known as a [hashing policy]({{< relref "/operate/rs/databases/durability-ha/clustering#custom-hashing-policy" >}})) that determines which keys are located in each shard (defaults to `{u'regex': u'.*\\{(?.*)\\}.*'}, {u'regex': u'(?.*)'} `) | +| bigstore | true

false (*default*) | If true, the database uses Auto Tiering to add flash memory to the database | +| bigstore-ram-size \ | size in bytes, kilobytes (KB), or gigabytes (GB) | Maximum RAM limit for databases with Auto Tiering enabled | +| with-module
  name=\,
  version=\,
  args=\ | strings | Creates a database with a specific module | +| eviction-policy | noeviction (*default*)
allkeys-lru
allkeys-lfu
allkeys-random
volatile-lru
volatile-lfu
volatile-random
volatile-ttl | Sets [eviction policy]({{< relref "/operate/rs/databases/memory-performance/eviction-policy" >}}) | +| proxy-policy | all-nodes
all-master-shards
single | Sets proxy policy | + + + +### Returns + +Returns the task ID of the task that is creating the database. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the database to be created and then return the CRDB GUID. + +### Examples + +```sh +$ crdb-cli crdb create --name database1 --memory-size 1GB --port 12000 \ + --instance fqdn=cluster1.redis.local,username=admin@redis.local,password=admin \ + --instance fqdn=cluster2.redis.local,username=admin@redis.local,password=admin +Task 633aaea3-97ee-4bcb-af39-a9cb25d7d4da created + ---> Status changed: queued -> started + ---> CRDB GUID Assigned: crdb:d84f6fe4-5bb7-49d2-a188-8900e09c6f66 + ---> Status changed: started -> finished +``` + +To create an Active-Active database with two shards in each instance and with encrypted traffic between the clusters: + +```sh +crdb-cli crdb create --name mycrdb --memory-size 100mb --port 12000 --instance fqdn=cluster1.redis.local,username=admin@redis.local,password=admin --instance fqdn=cluster2.redis.local,username=admin@redis.local,password=admin --shards-count 2 --encryption true +``` + +To create an Active-Active database with two shards and with RediSearch 2.0.6 module: + +```sh +crdb-cli crdb create --name mycrdb --memory-size 100mb --port 12000 --instance fqdn=cluster1.redis.local,username=admin@redis.local,password=admin --instance fqdn=cluster2.redis.local,username=admin@redis.local,password=admin --shards-count 2 --with-module name=search,version="2.0.6",args="PARTITIONS AUTO" +``` + +To create an Active-Active database with two shards and with encrypted traffic between the clusters: + +```sh +crdb-cli crdb create --name mycrdb --memory-size 100mb --port 12000 --instance fqdn=cluster1.redis.local,username=admin@redis.local,password=admin --instance fqdn=cluster2.redis.local,username=admin@redis.local,password=admin --encryption true --shards-count 2 +``` + +To create an Active-Active database with 1 shard in each instance and not wait for the response: + +```sh +crdb-cli crdb create --name mycrdb --memory-size 100mb --port 12000 --instance fqdn=cluster1.redis.local,username=admin@redis.local,password=admin --instance fqdn=cluster2.redis.local,username=admin@redis.local,password=admin --no-wait +``` +--- +Title: crdb-cli crdb delete +alwaysopen: false +categories: +- docs +- operate +- rs +description: Deletes an Active-Active database. +linkTitle: delete +weight: $weight +--- + +Deletes an Active-Active database. + +```sh +crdb-cli crdb delete --crdb-guid + [ --no-wait ] +``` + +This command is irreversible. If the data in your database is important, back it up before you delete the database. + +### Parameters + +| Parameter | Value | Description | +|---------------------|--------|-------------------------------------| +| crdb-guid | string | The GUID of the database (required) | +| no-wait | | Does not wait for the task to complete | + +### Returns + +Returns the task ID of the task that is deleting the database. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the database to be deleted and return `finished`. + +### Example + +```sh +$ crdb-cli crdb delete --crdb-guid db6365b5-8aca-4055-95d8-7eb0105c0b35 +Task dfe6cacc-88ff-4667-812e-938fd05fe359 created + ---> Status changed: queued -> started + ---> Status changed: started -> finished +``` +--- +Title: crdb-cli crdb update +alwaysopen: false +categories: +- docs +- operate +- rs +description: Updates the configuration of an Active-Active database. 
+linkTitle: update +weight: $weight +--- + +Updates the configuration of an Active-Active database. + +```sh +crdb-cli crdb update --crdb-guid + [--no-wait] + [--force] + [--default-db-config ] + [--default-db-config-file ] + [--compression <0-6>] + [--causal-consistency { true | false } ] + [--credentials id=,username=,password= ] + [--encryption { true | false } ] + [--oss-cluster { true | false } ] + [--featureset-version { true | false } ] + [--memory-size ] + [--bigstore-ram-size ] + [--update-module name=,featureset_version=] +``` + +If you want to change the configuration of the local instance only, use [`rladmin`]({{< relref "/operate/rs/references/cli-utilities/rladmin" >}}) instead. + +### Parameters + +| Parameter | Value | Description | +|---------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| crdb-guid \ | string | GUID of the Active-Active database (required) | +| bigstore-ram-size \ | size in bytes, kilobytes (KB), or gigabytes (GB) | Maximum RAM limit for the databases with Auto Tiering enabled, if activated | +| memory-size \ | size in bytes, kilobytes (KB), or gigabytes (GB) | Maximum database memory (required) | +| causal-consistency | true
false | [Causal consistency]({{< relref "/operate/rs/databases/active-active/causal-consistency.md" >}}) applies updates to all instances in the order they were received | +| compression | 0-6 | The level of data compression:

0 = No compression

6 = High compression and resource load (Default: 3) | +| credentials id=\,username=\,password=\ | strings | Updates the credentials for access to the instance | +| default-db-config \ | | Default database configuration from stdin. For a list of database settings, see the [CRDB database config object]({{}}) reference. | +| default-db-config-file \ | filepath | Default database configuration from file | +| encryption | true
false | Activates or deactivates encryption | +| force | | Force an update even if there are no changes | +| no-wait | | Do not wait for the command to finish | +| oss-cluster | true
false | Activates or deactivates OSS Cluster mode | +| eviction-policy | noeviction
allkeys-lru
allkeys-lfu
allkeys-random
volatile-lru
volatile-lfu
volatile-random
volatile-ttl | Updates [eviction policy]({{< relref "/operate/rs/databases/memory-performance/eviction-policy" >}}) | +| featureset-version | true
false | Updates to latest FeatureSet version | +| update-module name=\,featureset_version=\ | strings | Update a module to the specified version | + +### Returns + +Returns the task ID of the task that is updating the database. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the database to be updated and then return "finished." + +### Examples + +The following example changes the maximum database memory: + +```sh +$ crdb-cli crdb update --crdb-guid --memory-size 2GB +Task created + ---> Status changed: queued -> started + ---> Status changed: started -> finished +``` + +The following example shows how to change a default database configuration setting: + +```sh +$ crdb-cli crdb update --crdb-guid --default-db-config '{"shards_count": }' +Task created + ---> Status changed: queued -> started + ---> Status changed: started -> finished +``` +--- +Title: crdb-cli crdb purge-instance +alwaysopen: false +categories: +- docs +- operate +- rs +description: Deletes data from a local instance and removes it from the Active-Active + database. +linkTitle: purge-instance +weight: $weight +--- + +Deletes data from a local instance and removes the instance from the Active-Active database. + +```sh +crdb-cli crdb purge-instance --crdb-guid + --instance-id + [ --no-wait ] +``` + +Once this command finishes, the other replicas must remove this instance with [`crdb-cli crdb remove-instance --force`]({{< relref "/operate/rs/references/cli-utilities/crdb-cli/crdb/remove-instance" >}}). + +### Parameters + +| Parameter | Value | Description | +|---------------------------|--------|--------------------------------------------------| +| crdb-guid | string | The GUID of the database (required) | +| instance-id | string | The ID of the local instance (required) | +| no-wait | | Does not wait for the task to complete | + +### Returns + +Returns the task ID of the task that is purging the local instance. + +If `--no-wait` is specified, the command exits. Otherwise, it will wait for the instance to be purged and return `finished`. + +### Example + +```sh +$ crdb-cli crdb purge-instance --crdb-guid db6365b5-8aca-4055-95d8-7eb0105c0b35 --instance-id 2 +Task add0705c-87f1-4c28-ad6a-ab5d98e00c58 created + ---> Status changed: queued -> started + ---> Status changed: started -> finished +``` +--- +Title: crdb-cli crdb commands +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage Active-Active databases. +hideListLinks: true +linkTitle: crdb +weight: $weight +--- + +Use `crdb-cli crdb` commands to manage Active-Active databases. + +## `crdb-cli crdb` commands + +{{}} +--- +Title: crdb-cli task status +alwaysopen: false +categories: +- docs +- operate +- rs +description: Shows the status of a specified Active-Active database task. +linkTitle: status +weight: $weight +--- + +Shows the status of a specified Active-Active database task. + +```sh +crdb-cli task status --task-id +``` + +### Parameters + +| Parameter | Value | Description | +|---------------------|--------|-------------------------------------| +| task-id \ | string | An Active-Active database task ID (required) | +| verbose | N/A | Returns detailed information when specified | +| no-verbose | N/A | Returns limited information when specified | + +The `--verbose` and `--no-verbose` options are mutually incompatible; specify one or the other. + +The `404 Not Found` error indicates an invalid task ID. 
Use the [`task list`]({{< relref "/operate/rs/references/cli-utilities/crdb-cli/task/list" >}}) command to determine available task IDs. + +### Returns + +Returns the status of an Active-Active database task. + +### Example + +```sh +$ crdb-cli task status --task-id e1c49470-ae0b-4df8-885b-9c755dd614d0 +Task-ID: e1c49470-ae0b-4df8-885b-9c755dd614d0 +CRDB-GUID: 1d7741cc-1110-4e2f-bc6c-935292783d24 +Operation: create_crdb +Status: finished +Worker-Name: crdb_worker:1:0 +Started: 2022-10-12T09:33:41Z +Ended: 2022-10-12T09:33:55Z +``` +--- +Title: crdb-cli task list +alwaysopen: false +categories: +- docs +- operate +- rs +description: Lists active and recent Active-Active database tasks. +linkTitle: list +weight: $weight +--- + +Lists active and recent Active-Active database tasks. + +```sh +crdb-cli task list +``` + +### Parameters + +None + +### Returns + +A table listing current and recent Active-Active tasks. Each entry includes the following: + +| Column | Description | +|--------|-------------| +| Task ID | String containing the unique ID associated with the task
Example: `e1c49470-ae0b-4df8-885b-9c755dd614d0` | +| CRDB-GUID | String containing the unique ID associated with the Active-Active database affected by the task
Example: `1d7741cc-1110-4e2f-bc6c-935292783d24` | +| Operation | String describing the task action
Example: `create_crdb` | +| Status | String indicating the task status
Example: `finished` | +| Worker name | String identifying the process handling the task
Example: `crdb_worker:1:0` | +| Started | TimeStamp value indicating when the task started ([UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time))
Example: `2022-10-12T09:33:41Z` | +| Ended | TimeStamp value indicating when the task ended (UTC)
Example: ` 2022-10-12T09:33:55Z` | + +### Example + +```sh +$ crdb-cli task list +TASK-ID CRDB-GUID OPERATION STATUS WORKER-NAME STARTED ENDED + +``` +--- +Title: crdb-cli task cancel +alwaysopen: false +categories: +- docs +- operate +- rs +description: Attempts to cancel a specified Active-Active database task. +linkTitle: cancel +weight: $weight +--- + +Cancels the Active-Active database task specified by the task ID. + +```sh +crdb-cli task cancel --task-id +``` + +### Parameters + +| Parameter | Value | Description | +|---------------------|--------|-------------------------------------| +| task-id \ | string | An Active-Active database task ID (required) | + +### Returns + +Attempts to cancel an Active-Active database task. + +Be aware that tasks may complete before they can be cancelled. + +### Example + +```sh +$ crdb-cli task cancel --task-id 2901c2a3-2828-4717-80c0-6f27f1dd2d7c +``` +--- +Title: crdb-cli task commands +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage Active-Active tasks. +hideListLinks: true +linkTitle: task +weight: $weight +--- + +The `crdb-cli task` commands help investigate Active-Active database performance issues. They should not be used except as directed by Support. + +## `crdb-cli task` commands + +{{}} +--- +Title: crdb-cli +alwaysopen: false +categories: +- docs +- operate +- rs +description: Manage Active-Active databases. +hideListLinks: true +linkTitle: crdb-cli (manage Active-Active) +weight: $weight +--- + +An [Active-Active database]({{< relref "/operate/rs/databases/active-active/_index.md" >}}) (also known as CRDB or conflict-free replicated database) +replicates your data across Redis Enterprise Software clusters located in geographically distributed regions. +Active-Active databases allow read-write access in all locations, making them ideal for distributed applications that require fast response times and disaster recovery. + +The Active-Active database on an individual cluster is called an **instance**. +Each cluster that hosts an instance is called a **participating cluster**. + +An Active-Active database requires two or more participating clusters. +Each instance is responsible for updating the instances that reside on other participating clusters with the transactions it receives. +Write conflicts are resolved using [conflict-free replicated data types]({{< relref "/operate/rs/databases/active-active" >}}) (CRDTs). + +To programmatically maintain an Active-Active database and its instances, you can use the `crdb-cli` command-line tool. + +## `crdb-cli` commands + +{{}} + +## Use the crdb-cli + +To use the `crdb-cli` tool, use SSH to sign in to a Redis Enterprise host with a user that belongs to the group that Redis Enterprise Software was installed with (Default: **redislabs**). +If you sign in with a non-root user, you must add `/opt/redislabs/bin/` to your `PATH` environment variables. + +`crdb-cli` commands use the syntax: `crdb-cli ` to let you: + +- Create, list, update, flush, or delete an Active-Active database. +- Add or remove an instance of the Active-Active database on a specific cluster. + +Each command creates a task. + +By default, the command runs immediately and displays the result in the output. + +If you use the `--no-wait` flag, the command runs in the background so that your application is not delayed by the response. + +Use the [`crdb-cli task` commands]({{< relref "/operate/rs/references/cli-utilities/crdb-cli/task/" >}}) to manage Active-Active database tasks. 
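For example, the following sketch shows a typical `--no-wait` workflow: start an operation in the background, then check on it with `crdb-cli task status`. The GUID and task ID are illustrative values reused from the examples in this section, and the exact output can differ.

```sh
# Start the operation in the background; crdb-cli prints the task ID and exits immediately.
$ crdb-cli crdb flush --crdb-guid d84f6fe4-5bb7-49d2-a188-8900e09c6f66 --no-wait
Task 53cdc59e-ecf5-4564-a8dd-448d71f9e568 created

# Check the task's progress later, using the task ID returned by the previous command.
$ crdb-cli task status --task-id 53cdc59e-ecf5-4564-a8dd-448d71f9e568
```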
+ +For each `crdb-cli` command, you can use `--help` for additional information about the command. +--- +Title: Benchmark an Auto Tiering enabled database +alwaysopen: false +categories: +- docs +- operate +- rs +description: null +linkTitle: Benchmark Auto Tiering +weight: $weight +--- +Auto Tiering on Redis Enterprise Software lets you use cost-effective Flash memory as a RAM extension for your database. + +But what does the performance look like as compared to a memory-only database, one stored solely in RAM? + +These scenarios use the `memtier_benchmark` utility to evaluate the performance of a Redis Enterprise Software deployment, including the trial version. + +The `memtier_benchmark` utility is located in `/opt/redislabs/bin/` of Redis Enterprise Software deployments. To test performance for cloud provider deployments, see the [memtier-benchmark GitHub project](https://github.com/RedisLabs/memtier_benchmark). + +For additional help, such as assistance with larger clusters, [contact support](https://redislabs.com/company/support/). + + +## Benchmark and performance test considerations + +These tests assume you're using a trial version of Redis Enterprise Software and want to test the performance of an Auto Tiering enabled database in the following scenarios: + +- Without replication: Four (4) master shards +- With replication: Two (2) primary and two replica shards + +With the trial version of Redis Enterprise Software, you can create a cluster of up to four shards using a combination of database configurations, including: + +- Four databases, each with a single master shard +- Two highly available databases with replication enabled (each database has one master shard and one replica shard) +- One non-replicated clustered database with four master shards +- One highly available and clustered database with two master shards and two replica shards + +## Test environment and cluster setup + +For the test environment, you need to: + +1. Create a cluster with three nodes. +1. Prepare the flash memory. +1. Configure the load generation tool. + +### Creating a three-node cluster {#creating-a-threenode-rs-cluster} + +This performance test requires a three-node cluster. + +You can run all of these tests on Amazon AWS with these hosts: + +- 2 x i3.2xlarge (8 vCPU, 61 GiB RAM, up to 10GBit, 1.9TB NVMe SSD) + + These nodes serve RoF data. + +- 1 x m4.large, which acts as a quorum node + +To learn how to install Redis Enterprise Software and set up a cluster, see: + +- [Redis Enterprise Software quickstart]({{< relref "/operate/rs/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}) for a test installation +- [Install and upgrade]({{< relref "/operate/rs/installing-upgrading" >}}) for a production installation + +These tests use a quorum node to reduce AWS EC2 instance use while maintaining the three nodes required to support a quorum in case of node failure. Quorum nodes can be on less powerful instances because they do not have shards or support traffic. + +As of this writing, i3.2xlarge instances are required because they support NVMe SSDs, which are required to support RoF. Auto Tiering requires Flash-enabled storage, such as NVMe SSDs. + +For best results, compare performance of a Flash-enabled deployment to the performance in a RAM-only environment, such as a strictly on-premises deployment.
+ +## Prepare the flash memory + +After you install RS on the nodes, +the flash memory attached to the i3.2xlarge instances must be prepared and formatted with the `/opt/redislabs/sbin/prepare_flash.sh` script. + +## Set up the load generation tool + +The memtier_benchmark load generator tool generates the load on the RoF databases. +To use this tool, install RS on a dedicated instance that is not part of the RS cluster +but is in the same region/zone/subnet of your cluster. +We recommend that you use a relatively powerful instance to avoid bottlenecks at the load generation tool itself. + +For these tests, the load generation host uses a c4.8xlarge instance type. + +## Database configuration parameters + +### Create a Auto Tiering test database + +You can use the Redis Enterprise Cluster Manager UI to create a test database. +We recommend that you use a separate database for each test case with these requirements: + +| **Parameter** | **With replication** | **Without replication** | **Description** | +| ------ | ------ | ------ | ------ | +| Name | test-1 | test-2 | The name of the test database | +| Memory limit | 100 GB | 100 GB | The memory limit refers to RAM+Flash, aggregated across all the shards of the database, including master and replica shards. | +| RAM limit | 0.3 | 0.3 | RoF always keeps the Redis keys and Redis dictionary in RAM and additional RAM is required for storing hot values. For the purpose of these tests 30% RAM was calculated as an optimal value. | +| Replication | Enabled | Disabled | A database with no replication has only master shards. A database with replication has master and replica shards. | +| Data persistence | None | None | No data persistence is needed for these tests. | +| Database clustering | Enabled | Enabled | A clustered database consists of multiple shards. | +| Number of (master) shards | 2 | 4 | Shards are distributed as follows:
- With replication: One master shard and one replica shard on each node
- Without replication: Two master shards on each node | +| Other parameters | Default | Default | Keep the default values for the other configuration parameters. | + +## Data population + +### Populate the benchmark dataset + +The memtier_benchmark load generation tool populates the database. +To populate the database with N items of 500 Bytes each in size, on the load generation instance run: + +```sh +$ memtier_benchmark -s $DB_HOST -p $DB_PORT --hide-histogram +--key-maximum=$N -n allkeys -d 500 --key-pattern=P:P --ratio=1:0 +``` + +Set up a test database: + +| **Parameter** | **Description** | +| ------ | ------ | +| Database host
(-s) | The fully qualified name of the endpoint or the IP shown in the RS database configuration | +| Database port
(-p) | The endpoint port shown in your database configuration | +| Number of items
(--key-maximum) | With replication: 75 Million
Without replication: 150 Million | +| Item size
(-d) | 500 Bytes | + +## Centralize the keyspace + +### With replication {#centralize-with-repl} + +To create roughly 20.5 million items in RAM for your highly available clustered database with 75 million items, run: + +```sh +$ memtier_benchmark -s $DB_HOST -p $DB_PORT --hide-histogram +--key-minimum=27250000 --key-maximum=47750000 -n allkeys +--key-pattern=P:P --ratio=0:1 +``` + +To verify the database values, use **Values in RAM** metric, which is available from the **Metrics** tab of your database in the Cluster Manager UI. + +### Without replication {#centralize-wo-repl} + +To create 41 million items in RAM without replication enabled and 150 million items, run: + +```sh +$ memtier_benchmark -s $DB_HOST -p $DB_PORT --hide-histogram +--key-minimum=54500000 --key-maximum=95500000 -n allkeys +--key-pattern=P:P --ratio=0:1 +``` + +## Test runs + +### Generate load + +#### With replication {#generate-with-repl} + +We recommend that you do a dry run and double check the RAM Hit Ratio on the **Metrics** screen in the Cluster Manager UI before you write down the test results. + +To test RoF with an 85% RAM Hit Ratio, run: + +```sh +$ memtier_benchmark -s $DB_HOST -p $DB_PORT --pipeline=11 -c 20 -t 1 +-d 500 --key-maximum=75000000 --key-pattern=G:G --key-stddev=5125000 +--ratio=1:1 --distinct-client-seed --randomize --test-time=600 +--run-count=1 --out-file=test.out +``` + +#### Without replication {#generate-wo-repl} + +Here is the command for 150 million items: + +```sh +$ memtier_benchmark -s $DB_HOST -p $DB_PORT --pipeline=24 -c 20 -t 1 +-d 500 --key-maximum=150000000 --key-pattern=G:G --key-stddev=10250000 +--ratio=1:1 --distinct-client-seed --randomize --test-time=600 +--run-count=1 --out-file=test.out +``` + +Where: + +| **Parameter** | **Description** | +|------------|-----------------| +| Access pattern (--key-pattern) and standard deviation (--key-stddev) | Controls the RAM Hit ratio after the centralization process is complete | +| Number of threads (-t and -c)\ | Controls how many connections are opened to the database, whereby the number of connections is the number of threads multiplied by the number of connections per thread (-t) and number of clients per thread (-c) | +| Pipelining (--pipeline)\ | Pipelining allows you to send multiple requests without waiting for each individual response (-t) and number of clients per thread (-c) | +| Read\write ratio (--ratio)\ | A value of 1:1 means that you have the same number of write operations as read operations (-t) and number of clients per thread (-c) | + +## Test results + +### Monitor the test results + +You can either monitor the results in the **Metrics** tab of the Cluster Manager UI or with the `memtier_benchmark` output. However, be aware that: + +- The memtier_benchmark results include the network latency between the load generator instance and the cluster instances. + +- The metrics shown in the Cluster Manager UI do _not_ include network latency. + +### Expected results + +You should expect to see an average throughput of: + +- Around 160,000 ops/sec when testing without replication (i.e. Four master shards) +- Around 115,000 ops/sec when testing with enabled replication (i.e. 2 master and 2 replica shards) + +In both cases, the average latency should be below one millisecond. +--- +Title: Develop with Redis clients +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis client libraries allow you to connect to Redis instances from within + your application. 
This section provides an overview of several recommended Redis + clients for popular programming and scripting languages. +hideListLinks: true +linkTitle: Redis clients +weight: 80 +--- +To connect to Redis instances from within your application, use a Redis client library that matches your application's language. + +## Official clients + +| Language | Client name | +| :---------- | :------------- | +| .Net | [NRedisStack]({{< relref "/develop/clients/dotnet" >}}) | +| Go | [go-redis]({{< relref "/develop/clients/go" >}}) | +| Java | [Jedis]({{< relref "/develop/clients/jedis" >}}) (Synchronous) and [Lettuce]({{< relref "/develop/clients/lettuce" >}}) (Asynchronous) | +| Node.js | [node-redis]({{< relref "/develop/clients/nodejs" >}}) | +| Python | [redis-py]({{< relref "/develop/clients/redis-py" >}}) | + +Select a client name to see its quick start. + +## Other clients + +For a list of community-driven Redis clients, which are available for more programming languages, see +[Community-supported clients]({{< relref "/develop/clients#community-supported-clients" >}}). +--- +Title: Permissions +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the permissions used with Redis Enterprise Software REST API + calls. +linkTitle: Permissions +weight: 60 +--- + +Some Redis Enterprise [REST API requests]({{< relref "/operate/rs/references/rest-api/requests" >}}) may require the user to have specific permissions. + +Administrators can assign a predefined role to a user with the [Cluster Manager UI]({{< relref "/operate/rs/security/access-control/create-users" >}}) or a [`PUT /v1/users/{uid}` API request]({{< relref "/operate/rs/references/rest-api/requests/users#put-user" >}}) to grant necessary permissions to them. + +## Roles + +Each user in the cluster has an assigned cluster management role, which defines the permissions granted to the user. + +Available management roles include: + +- **none**: No REST API permissions. +- **[db_viewer](#db-viewer-role)**: Can view database info. +- **[db_member](#db-member-role)**: Can create or modify databases and view their info. +- **[cluster_viewer](#cluster-viewer-role)**: Can view cluster and database info. +- **[cluster_member](#cluster-member-role)**: Can modify the cluster and databases and view their info. +- **[user_manager](#user-manager-role)**: Can modify users and view their info. +- **[admin](#admin-role)**: Can view and modify all elements of the cluster. + +## Permissions list for each role + +| Role | Permissions | +|------|-------------| +| none | No permissions | +|
admin | [add_cluster_module](#add_cluster_module), [cancel_cluster_action](#cancel_cluster_action), [cancel_node_action](#cancel_node_action), [config_ldap](#config_ldap), [config_ocsp](#config_ocsp), [create_bdb](#create_bdb), [create_crdb](#create_crdb), [create_ldap_mapping](#create_ldap_mapping), [create_new_user](#create_new_user), [create_redis_acl](#create_redis_acl), [create_role](#create_role), [delete_bdb](#delete_bdb), [delete_cluster_module](#delete_cluster_module), [delete_crdb](#delete_crdb), [delete_ldap_mapping](#delete_ldap_mapping), [delete_redis_acl](#delete_redis_acl), [delete_role](#delete_role), [delete_user](#delete_user), [edit_bdb_module](#edit_bdb_module), [failover_shard](#failover_shard), [flush_crdb](#flush_crdb), [install_new_license](#install_new_license), [migrate_shard](#migrate_shard), [purge_instance](#purge_instance), [reset_bdb_current_backup_status](#reset_bdb_current_backup_status), [reset_bdb_current_export_status](#reset_bdb_current_export_status), [reset_bdb_current_import_status](#reset_bdb_current_import_status), [start_bdb_export](#start_bdb_export), [start_bdb_import](#start_bdb_import), [start_bdb_recovery](#start_bdb_recovery), [start_cluster_action](#start_cluster_action), [start_node_action](#start_node_action), [test_ocsp_status](#test_ocsp_status), [update_bdb](#update_bdb), [update_bdb_alerts](#update_bdb_alerts), [update_bdb_with_action](#update_bdb_with_action), [update_cluster](#update_cluster), [update_crdb](#update_crdb), [update_ldap_mapping](#update_ldap_mapping), [update_node](#update_node), [update_proxy](#update_proxy), [update_redis_acl](#update_redis_acl), [update_role](#update_role), [update_user](#update_user), [view_all_bdb_stats](#view_all_bdb_stats), [view_all_bdbs_alerts](#view_all_bdbs_alerts), [view_all_bdbs_info](#view_all_bdbs_info), [view_all_ldap_mappings_info](#view_all_ldap_mappings_info), [view_all_nodes_alerts](#view_all_nodes_alerts), [view_all_nodes_checks](#view_all_nodes_checks), [view_all_nodes_info](#view_all_nodes_info), [view_all_nodes_stats](#view_all_nodes_stats), [view_all_proxies_info](#view_all_proxies_info), [view_all_redis_acls_info](#view_all_redis_acls_info), [view_all_roles_info](#view_all_roles_info), [view_all_shard_stats](#view_all_shard_stats), [view_all_users_info](#view_all_users_info), [view_bdb_alerts](#view_bdb_alerts), [view_bdb_info](#view_bdb_info), [view_bdb_recovery_plan](#view_bdb_recovery_plan), [view_bdb_stats](#view_bdb_stats), [view_cluster_alerts](#view_cluster_alerts), [view_cluster_info](#view_cluster_info), [view_cluster_keys](#view_cluster_keys), [view_cluster_modules](#view_cluster_modules), [view_cluster_stats](#view_cluster_stats), [view_crdb](#view_crdb), [view_crdb_list](#view_crdb_list), [view_crdb_task](#view_crdb_task), [view_crdb_task_list](#view_crdb_task_list), [view_debugging_info](#view_debugging_info), [view_endpoint_stats](#view_endpoint_stats), [view_ldap_config](#view_ldap_config), [view_ldap_mapping_info](#view_ldap_mapping_info), [view_license](#view_license), [view_logged_events](#view_logged_events), [view_node_alerts](#view_node_alerts), [view_node_check](#view_node_check), [view_node_info](#view_node_info), [view_node_stats](#view_node_stats), [view_ocsp_config](#view_ocsp_config), [view_ocsp_status](#view_ocsp_status), [view_proxy_info](#view_proxy_info), [view_redis_acl_info](#view_redis_acl_info), [view_redis_pass](#view_redis_pass), [view_role_info](#view_role_info), [view_shard_stats](#view_shard_stats), 
[view_status_of_all_node_actions](#view_status_of_all_node_actions), [view_status_of_cluster_action](#view_status_of_cluster_action), [view_status_of_node_action](#view_status_of_node_action), [view_user_info](#view_user_info) | +| cluster_member | [create_bdb](#create_bdb), [create_crdb](#create_crdb), [delete_bdb](#delete_bdb), [delete_crdb](#delete_crdb), [edit_bdb_module](#edit_bdb_module), [failover_shard](#failover_shard), [flush_crdb](#flush_crdb), [migrate_shard](#migrate_shard), [purge_instance](#purge_instance), [reset_bdb_current_backup_status](#reset_bdb_current_backup_status), [reset_bdb_current_export_status](#reset_bdb_current_export_status), [reset_bdb_current_import_status](#reset_bdb_current_import_status), [start_bdb_export](#start_bdb_export), [start_bdb_import](#start_bdb_import), [start_bdb_recovery](#start_bdb_recovery), [update_bdb](#update_bdb), [update_bdb_alerts](#update_bdb_alerts), [update_bdb_with_action](#update_bdb_with_action), [update_crdb](#update_crdb), [view_all_bdb_stats](#view_all_bdb_stats), [view_all_bdbs_alerts](#view_all_bdbs_alerts), [view_all_bdbs_info](#view_all_bdbs_info), [view_all_nodes_alerts](#view_all_nodes_alerts), [view_all_nodes_checks](#view_all_nodes_checks), [view_all_nodes_info](#view_all_nodes_info), [view_all_nodes_stats](#view_all_nodes_stats), [view_all_proxies_info](#view_all_proxies_info), [view_all_redis_acls_info](#view_all_redis_acls_info), [view_all_roles_info](#view_all_roles_info), [view_all_shard_stats](#view_all_shard_stats), [view_bdb_alerts](#view_bdb_alerts), [view_bdb_info](#view_bdb_info), [view_bdb_recovery_plan](#view_bdb_recovery_plan), [view_bdb_stats](#view_bdb_stats), [view_cluster_alerts](#view_cluster_alerts), [view_cluster_info](#view_cluster_info), [view_cluster_keys](#view_cluster_keys), [view_cluster_modules](#view_cluster_modules), [view_cluster_stats](#view_cluster_stats), [view_crdb](#view_crdb), [view_crdb_list](#view_crdb_list), [view_crdb_task](#view_crdb_task), [view_crdb_task_list](#view_crdb_task_list), [view_debugging_info](#view_debugging_info), [view_endpoint_stats](#view_endpoint_stats), [view_license](#view_license), [view_logged_events](#view_logged_events), [view_node_alerts](#view_node_alerts), [view_node_check](#view_node_check), [view_node_info](#view_node_info), [view_node_stats](#view_node_stats), [view_proxy_info](#view_proxy_info), [view_redis_acl_info](#view_redis_acl_info), [view_redis_pass](#view_redis_pass), [view_role_info](#view_role_info), [view_shard_stats](#view_shard_stats), [view_status_of_all_node_actions](#view_status_of_all_node_actions), [view_status_of_cluster_action](#view_status_of_cluster_action), [view_status_of_node_action](#view_status_of_node_action) | +| cluster_viewer | [view_all_bdb_stats](#view_all_bdb_stats), [view_all_bdbs_alerts](#view_all_bdbs_alerts), [view_all_bdbs_info](#view_all_bdbs_info), [view_all_nodes_alerts](#view_all_nodes_alerts), [view_all_nodes_checks](#view_all_nodes_checks), [view_all_nodes_info](#view_all_nodes_info), [view_all_nodes_stats](#view_all_nodes_stats), [view_all_proxies_info](#view_all_proxies_info), [view_all_redis_acls_info](#view_all_redis_acls_info), [view_all_roles_info](#view_all_roles_info), [view_all_shard_stats](#view_all_shard_stats), [view_bdb_alerts](#view_bdb_alerts), [view_bdb_info](#view_bdb_info), [view_bdb_recovery_plan](#view_bdb_recovery_plan), [view_bdb_stats](#view_bdb_stats), [view_cluster_alerts](#view_cluster_alerts), [view_cluster_info](#view_cluster_info), 
[view_cluster_modules](#view_cluster_modules), [view_cluster_stats](#view_cluster_stats), [view_crdb](#view_crdb), [view_crdb_list](#view_crdb_list), [view_crdb_task](#view_crdb_task), [view_crdb_task_list](#view_crdb_task_list), [view_endpoint_stats](#view_endpoint_stats), [view_license](#view_license), [view_logged_events](#view_logged_events), [view_node_alerts](#view_node_alerts), [view_node_check](#view_node_check), [view_node_info](#view_node_info), [view_node_stats](#view_node_stats), [view_proxy_info](#view_proxy_info), [view_redis_acl_info](#view_redis_acl_info), [view_role_info](#view_role_info), [view_shard_stats](#view_shard_stats), [view_status_of_all_node_actions](#view_status_of_all_node_actions), [view_status_of_cluster_action](#view_status_of_cluster_action), [view_status_of_node_action](#view_status_of_node_action) | +| db_member | [create_bdb](#create_bdb), [create_crdb](#create_crdb), [delete_bdb](#delete_bdb), [delete_crdb](#delete_crdb), [edit_bdb_module](#edit_bdb_module), [failover_shard](#failover_shard), [flush_crdb](#flush_crdb), [migrate_shard](#migrate_shard), [purge_instance](#purge_instance), [reset_bdb_current_backup_status](#reset_bdb_current_backup_status), [reset_bdb_current_export_status](#reset_bdb_current_export_status), [reset_bdb_current_import_status](#reset_bdb_current_import_status), [start_bdb_export](#start_bdb_export), [start_bdb_import](#start_bdb_import), [start_bdb_recovery](#start_bdb_recovery), [update_bdb](#update_bdb), [update_bdb_alerts](#update_bdb_alerts), [update_bdb_with_action](#update_bdb_with_action), [update_crdb](#update_crdb), [view_all_bdb_stats](#view_all_bdb_stats), [view_all_bdbs_alerts](#view_all_bdbs_alerts), [view_all_bdbs_info](#view_all_bdbs_info), [view_all_nodes_alerts](#view_all_nodes_alerts), [view_all_nodes_checks](#view_all_nodes_checks), [view_all_nodes_info](#view_all_nodes_info), [view_all_nodes_stats](#view_all_nodes_stats), [view_all_proxies_info](#view_all_proxies_info), [view_all_redis_acls_info](#view_all_redis_acls_info), [view_all_roles_info](#view_all_roles_info), [view_all_shard_stats](#view_all_shard_stats), [view_bdb_alerts](#view_bdb_alerts), [view_bdb_info](#view_bdb_info), [view_bdb_recovery_plan](#view_bdb_recovery_plan), [view_bdb_stats](#view_bdb_stats), [view_cluster_alerts](#view_cluster_alerts), [view_cluster_info](#view_cluster_info), [view_cluster_modules](#view_cluster_modules), [view_cluster_stats](#view_cluster_stats), [view_crdb](#view_crdb), [view_crdb_list](#view_crdb_list), [view_crdb_task](#view_crdb_task), [view_crdb_task_list](#view_crdb_task_list), [view_debugging_info](#view_debugging_info), [view_endpoint_stats](#view_endpoint_stats), [view_license](#view_license), [view_logged_events](#view_logged_events), [view_node_alerts](#view_node_alerts), [view_node_check](#view_node_check), [view_node_info](#view_node_info), [view_node_stats](#view_node_stats), [view_proxy_info](#view_proxy_info), [view_redis_acl_info](#view_redis_acl_info), [view_redis_pass](#view_redis_pass), [view_role_info](#view_role_info), [view_shard_stats](#view_shard_stats), [view_status_of_all_node_actions](#view_status_of_all_node_actions), [view_status_of_cluster_action](#view_status_of_cluster_action), [view_status_of_node_action](#view_status_of_node_action) | +| db_viewer | [view_all_bdb_stats](#view_all_bdb_stats), [view_all_bdbs_alerts](#view_all_bdbs_alerts), [view_all_bdbs_info](#view_all_bdbs_info), [view_all_nodes_alerts](#view_all_nodes_alerts), [view_all_nodes_checks](#view_all_nodes_checks), 
[view_all_nodes_info](#view_all_nodes_info), [view_all_nodes_stats](#view_all_nodes_stats), [view_all_proxies_info](#view_all_proxies_info), [view_all_redis_acls_info](#view_all_redis_acls_info), [view_all_roles_info](#view_all_roles_info), [view_all_shard_stats](#view_all_shard_stats), [view_bdb_alerts](#view_bdb_alerts), [view_bdb_info](#view_bdb_info), [view_bdb_recovery_plan](#view_bdb_recovery_plan), [view_bdb_stats](#view_bdb_stats), [view_cluster_alerts](#view_cluster_alerts), [view_cluster_info](#view_cluster_info), [view_cluster_modules](#view_cluster_modules), [view_cluster_stats](#view_cluster_stats), [view_crdb](#view_crdb), [view_crdb_list](#view_crdb_list), [view_crdb_task](#view_crdb_task), [view_crdb_task_list](#view_crdb_task_list), [view_endpoint_stats](#view_endpoint_stats), [view_license](#view_license), [view_node_alerts](#view_node_alerts), [view_node_check](#view_node_check), [view_node_info](#view_node_info), [view_node_stats](#view_node_stats), [view_proxy_info](#view_proxy_info), [view_redis_acl_info](#view_redis_acl_info), [view_role_info](#view_role_info), [view_shard_stats](#view_shard_stats), [view_status_of_all_node_actions](#view_status_of_all_node_actions), [view_status_of_cluster_action](#view_status_of_cluster_action), [view_status_of_node_action](#view_status_of_node_action) | +| user_manager | [config_ldap](#config_ldap), [create_ldap_mapping](#create_ldap_mapping), [create_new_user](#create_new_user), [create_role](#create_role), [create_redis_acl](#create_redis_acl), [delete_ldap_mapping](#delete_ldap_mapping), [delete_redis_acl](#delete_redis_acl), [delete_role](#delete_role), [delete_user](#delete_user), [install_new_license](#install_new_license), [update_ldap_mapping](#update_ldap_mapping), [update_proxy](#update_proxy), [update_role](#update_role), [update_redis_acl](#update_redis_acl), [update_user](#update_user), [view_all_bdb_stats](#view_all_bdb_stats), [view_all_bdbs_alerts](#view_all_bdbs_alerts), [view_all_bdbs_info](#view_all_bdbs_info), [view_all_ldap_mappings_info](#view_all_ldap_mappings_info), [view_all_nodes_alerts](view_all_nodes_alerts), [view_all_nodes_checks](#view_all_nodes_checks), [view_all_nodes_info](#view_all_nodes_info), [view_all_nodes_stats](#view_all_nodes_stats), [view_all_proxies_info](#view_all_proxies_info), [view_all_redis_acls_info](#view_all_redis_acls_info), [view_all_roles_info](#view_all_roles_info), [view_all_shard_stats](#view_all_shard_stats), [view_all_users_info](#view_all_users_info), [view_bdb_alerts](#view_bdb_alerts), [view_bdb_info](#view_bdb_info), [view_bdb_stats](#view_bdb_stats), [view_cluster_alerts](#view_cluster_alerts), [view_cluster_info](#view_cluster_info), [view_cluster_keys](#view_cluster_keys), [view_cluster_modules](#view_cluster_modules), [view_cluster_stats](#view_cluster_stats), [view_crdb](#view_crdb), [view_crdb_list](#view_crdb_list), [view_crdb_task](#view_crdb_task), [view_crdb_task_list](#view_crdb_task_list), [view_endpoint_stats](#view_endpoint_stats), [view_ldap_config](#view_ldap_config), [view_ldap_mapping_info](#view_ldap_mapping_info), [view_license](#view_license), [view_logged_events](#view_logged_events), [view_node_alerts](#view_node_alerts), [view_node_check](#view_node_check), [view_node_info](#view_node_info), [view_node_stats](#view_node_stats), [view_proxy_info](#view_proxy_info), [view_redis_acl_info](#view_redis_acl_info), [view_redis_pass](#view_redis_pass), [view_role_info](#view_role_info), [view_shard_stats](#view_shard_stats), 
[view_status_of_all_node_actions](#view_status_of_all_node_actions), [view_status_of_cluster_action](#view_status_of_cluster_action), [view_status_of_node_action](#view_status_of_node_action), [view_user_info](#view_user_info) + | + +## Roles list per permission + +| Permission | Roles | +|------------|-------| +| add_cluster_module| admin | +| cancel_cluster_action | admin | +| cancel_node_action | admin | +| config_ldap | admin
user_manager | +| config_ocsp | admin | +| create_bdb | admin
cluster_member
db_member | +| create_crdb | admin
cluster_member
db_member | +| create_ldap_mapping | admin
user_manager | +| create_new_user | admin
user_manager | +| create_redis_acl | admin
user_manager | +| create_role | admin
user_manager | +| delete_bdb | admin
cluster_member
db_member | +| delete_cluster_module | admin | +| delete_crdb | admin
cluster_member
db_member | +| delete_ldap_mapping | admin
user_manager | +| delete_redis_acl | admin
user_manager | +| delete_role | admin
user_manager | +| delete_user | admin
user_manager | +| edit_bdb_module | admin
cluster_member
db_member | +| failover_shard | admin
cluster_member
db_member | +| flush_crdb | admin
cluster_member
db_member | +| install_new_license | admin
user_manager | +| migrate_shard | admin
cluster_member
db_member | +| purge_instance | admin
cluster_member
db_member | +| reset_bdb_current_backup_status | admin
cluster_member
db_member | +| reset_bdb_current_export_status | admin
cluster_member
db_member | +| reset_bdb_current_import_status | admin
cluster_member
db_member | +| start_bdb_export | admin
cluster_member
db_member | +| start_bdb_import | admin
cluster_member
db_member | +| start_bdb_recovery | admin
cluster_member
db_member | +| start_cluster_action | admin | +| start_node_action | admin | +| test_ocsp_status | admin | +| update_bdb | admin
cluster_member
db_member | +| update_bdb_alerts | admin
cluster_member
db_member | +| update_bdb_with_action | admin
cluster_member
db_member | +| update_cluster | admin | +| update_crdb | admin
cluster_member
db_member | +| update_ldap_mapping | admin
user_manager | +| update_node | admin | +| update_proxy | admin
user_manager | +| update_redis_acl | admin
user_manager | +| update_role | admin
user_manager | +| update_user | admin
user_manager | +| view_all_bdb_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_all_bdbs_alerts | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_all_bdbs_info | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_all_ldap_mappings_info | admin
user_manager | +| view_all_nodes_alerts | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_all_nodes_checks | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_all_nodes_info | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_all_nodes_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_all_proxies_info | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_all_redis_acls_info | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_all_roles_info | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_all_shard_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_all_users_info | admin
user_manager | +| view_bdb_alerts | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_bdb_info | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_bdb_recovery_plan | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_bdb_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_cluster_alerts | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_cluster_info | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_cluster_keys | admin
cluster_member
user_manager | +| view_cluster_modules | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_cluster_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_crdb | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_crdb_list | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_crdb_task | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_crdb_task_list | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_debugging_info | admin
cluster_member
db_member
user_manager | +| view_endpoint_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_ldap_config | admin
user_manager | +| view_ldap_mapping_info | admin
user_manager | +| view_license | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_logged_events | admin
cluster_member
cluster_viewer
db_member
user_manager | +| view_node_alerts | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_node_check | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_node_info | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_node_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_ocsp_config | admin | +| view_ocsp_status | admin | +| view_proxy_info | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_redis_acl_info | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_redis_pass | admin
cluster_member
db_member
user_manager | +| view_role_info | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_shard_stats | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_status_of_all_node_actions | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_status_of_cluster_action | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_status_of_node_action | admin
cluster_member
cluster_viewer
db_member
db_viewer
user_manager | +| view_user_info | admin
user_manager | +--- +Title: Redis Enterprise Software REST API quick start +alwaysopen: false +categories: +- docs +- operate +- rs +description: Redis Enterprise Software REST API quick start +linkTitle: Quick start +weight: 20 +--- + +Redis Enterprise Software includes a REST API that allows you to automate certain tasks. This article shows you how to send a request to the Redis Enterprise Software REST API. + +## Fundamentals + +No matter which method you use to send API requests, there are a few common concepts to remember. + +| Type | Description | +|------|-------------| +| [Authentication]({{< relref "/operate/rs/references/rest-api#authentication" >}}) | Use [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication) with your cluster username (email) and password | +| [Ports]({{< relref "/operate/rs/references/rest-api#ports" >}}) | All calls are made to port 9443 by default | +| [Versions]({{< relref "/operate/rs/references/rest-api#versions" >}}) | Specify the version in the request [URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier) | +| [Headers]({{< relref "/operate/rs/references/rest-api#headers" >}}) | `Accept` and `Content-Type` should be `application/json` | +| [Response types and error codes]({{< relref "/operate/rs/references/rest-api#response-types-and-error-codes" >}}) | A response of `200 OK` means success; otherwise, the request failed due to an error | + +For more information, see [Redis Enterprise Software REST API]({{< relref "/operate/rs/references/rest-api/" >}}). + +## cURL example requests + +[cURL](https://curl.se/) is a command-line tool that allows you to send HTTP requests from a terminal. + +You can use the following options to build a cURL request: + +| Option | Description | +|--------|-------------| +| -X | Method (GET, PUT, PATCH, POST, or DELETE) | +| -H | Request header, can be specified multiple times | +| -u | Username and password information | +| -d | JSON data for PUT or POST requests | +| -F | Form data for PUT or POST requests, such as for the [`POST /v1/modules`]({{< relref "/operate/rs/references/rest-api/requests/modules/#post-module" >}}) or [`POST /v2/modules`]({{< relref "/operate/rs/references/rest-api/requests/modules/#post-module-v2" >}}) endpoint | +| -k | Turn off SSL verification | +| -i | Show headers and status code as well as the response body | + +See the [cURL documentation](https://curl.se/docs/) for more information. + +### GET request + +Use the following cURL command to get a list of databases with the [GET `/v1/bdbs/`]({{< relref "/operate/rs/references/rest-api/requests/bdbs/#get-all-bdbs" >}}) endpoint. + +```sh +$ curl -X GET -H "accept: application/json" \ + -u "[username]:[password]" \ + https://[host][:port]/v1/bdbs -k -i + +HTTP/1.1 200 OK +server: envoy +date: Tue, 14 Jun 2022 19:24:30 GMT +content-type: application/json +content-length: 2833 +cluster-state-id: 42 +x-envoy-upstream-service-time: 25 + +[ + { + ... + "name": "tr01", + ... + "uid": 1, + "version": "6.0.16", + "wait_command": true + } +] +``` + +In the response body, the `uid` is the database ID. You can use the database ID to view or update the database using the API. + +For more information about the fields returned by [GET `/v1/bdbs/`]({{< relref "/operate/rs/references/rest-api/requests/bdbs/#get-all-bdbs" >}}), see the [`bdbs` object]({{< relref "/operate/rs/references/rest-api/objects/bdb/" >}}). 
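+
+If you only need one database, you can also request it by its ID. The following is a minimal sketch that assumes the database with `uid` 1 from the previous response exists on your cluster; as in the other examples, replace `[host]`, `[port]`, `[username]`, and `[password]` with your own values:
+
+```sh
+$ curl -X GET -H "accept: application/json" \
+    -u "[username]:[password]" \
+    https://[host]:[port]/v1/bdbs/1 -k -i
+```
+
+The response body should contain the same fields as a single entry from the list returned above.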
+ +### PUT request + +Once you have the database ID, you can use [PUT `/v1/bdbs/`]({{< relref "/operate/rs/references/rest-api/requests/bdbs/#put-bdbs" >}}) to update the configuration of the database. + +For example, you can pass the database `uid` 1 as a URL parameter and use the `-d` option to specify the new `name` when you send the request. This changes the database's `name` from `tr01` to `database1`: + +```sh +$ curl -X PUT -H "accept: application/json" \ + -H "content-type: application/json" \ + -u "cameron.bates@redis.com:test123" \ + https://[host]:[port]/v1/bdbs/1 \ + -d '{ "name": "database1" }' -k -i +HTTP/1.1 200 OK +server: envoy +date: Tue, 14 Jun 2022 20:00:25 GMT +content-type: application/json +content-length: 2933 +cluster-state-id: 43 +x-envoy-upstream-service-time: 159 + +{ + ... + "name" : "database1", + ... + "uid" : 1, + "version" : "6.0.16", + "wait_command" : true +} +``` + +For more information about the fields you can update with [PUT `/v1/bdbs/`]({{< relref "/operate/rs/references/rest-api/requests/bdbs/#put-bdbs" >}}), see the [`bdbs` object]({{< relref "/operate/rs/references/rest-api/objects/bdb/" >}}). + +## Client examples + +You can also use client libraries to make API requests in your preferred language. + +To follow these examples, you need: + +- A [Redis Enterprise Software]({{< relref "/operate/rs/installing-upgrading/quickstarts/redis-enterprise-software-quickstart" >}}) node +- Python 3 and the [requests](https://pypi.org/project/requests/) Python library +- [node.js](https://nodejs.dev/) and [node-fetch](https://www.npmjs.com/package/node-fetch) + +### Python + +```python +import json +import requests + +# Required connection information - replace with your host, port, username, and password +host = "[host]" +port = "[port]" +username = "[username]" +password = "[password]" + +# Get the list of databases using GET /v1/bdbs +bdbs_uri = "https://{}:{}/v1/bdbs".format(host, port) + +print("GET {}".format(bdbs_uri)) +get_resp = requests.get(bdbs_uri, + auth = (username, password), + headers = { "accept" : "application/json" }, + verify = False) + +print("{} {}".format(get_resp.status_code, get_resp.reason)) +for header in get_resp.headers.keys(): + print("{}: {}".format(header, get_resp.headers[header])) + +print("\n" + json.dumps(get_resp.json(), indent=4)) + +# Rename all databases using PUT /v1/bdbs +for bdb in get_resp.json(): + uid = bdb["uid"] # Get the database ID from the JSON response + + put_uri = "{}/{}".format(bdbs_uri, uid) + new_name = "database{}".format(uid) + put_data = { "name" : new_name } + + print("PUT {} {}".format(put_uri, json.dumps(put_data))) + + put_resp = requests.put(put_uri, + data = json.dumps(put_data), + auth = (username, password), + headers = { "content-type" : "application/json" }, + verify = False) + + print("{} {}".format(put_resp.status_code, put_resp.reason)) + for header in put_resp.headers.keys(): + print("{}: {}".format(header, put_resp.headers[header])) + + print("\n" + json.dumps(put_resp.json(), indent=4)) +``` + +See the [Python requests library documentation](https://requests.readthedocs.io/en/latest/) for more information. + +#### Output + +```sh +$ python rs_api.py +python rs_api.py +GET https://[host]:[port]/v1/bdbs +InsecureRequestWarning: Unverified HTTPS request is being made to host '[host]'. +Adding certificate verification is strongly advised. 
+See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings + warnings.warn( +200 OK +server: envoy +date: Wed, 15 Jun 2022 15:49:43 GMT +content-type: application/json +content-length: 2832 +cluster-state-id: 89 +x-envoy-upstream-service-time: 27 + +[ + { + ... + "name": "tr01", + ... + "uid": 1, + "version": "6.0.16", + "wait_command": true + } +] + +PUT https://[host]:[port]/v1/bdbs/1 {"name": "database1"} +InsecureRequestWarning: Unverified HTTPS request is being made to host '[host]'. +Adding certificate verification is strongly advised. +See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings + warnings.warn( +200 OK +server: envoy +date: Wed, 15 Jun 2022 15:49:43 GMT +content-type: application/json +content-length: 2933 +cluster-state-id: 90 +x-envoy-upstream-service-time: 128 + +{ + ... + "name" : "database1", + ... + "uid" : 1, + "version" : "6.0.16", + "wait_command" : true +} +``` + +### node.js + +```js +import fetch, { Headers } from 'node-fetch'; +import * as https from 'https'; + +const HOST = '[host]'; +const PORT = '[port]'; +const USERNAME = '[username]'; +const PASSWORD = '[password]'; + +// Get the list of databases using GET /v1/bdbs +const BDBS_URI = `https://${HOST}:${PORT}/v1/bdbs`; +const USER_CREDENTIALS = Buffer.from(`${USERNAME}:${PASSWORD}`).toString('base64'); +const AUTH_HEADER = `Basic ${USER_CREDENTIALS}`; + +console.log(`GET ${BDBS_URI}`); + +const HTTPS_AGENT = new https.Agent({ + rejectUnauthorized: false +}); + +const response = await fetch(BDBS_URI, { + method: 'GET', + headers: { + 'Accept': 'application/json', + 'Authorization': AUTH_HEADER + }, + agent: HTTPS_AGENT +}); + +const responseObject = await response.json(); +console.log(`${response.status}: ${response.statusText}`); +console.log(responseObject); + +// Rename all databases using PUT /v1/bdbs +for (const database of responseObject) { + const DATABASE_URI = `${BDBS_URI}/${database.uid}`; + const new_name = `database${database.uid}`; + + console.log(`PUT ${DATABASE_URI}`); + + const response = await fetch(DATABASE_URI, { + method: 'PUT', + headers: { + 'Authorization': AUTH_HEADER, + 'Content-Type': 'application/json' + }, + body: JSON.stringify({ + 'name': new_name + }), + agent: HTTPS_AGENT + }); + + console.log(`${response.status}: ${response.statusText}`); + console.log(await(response.json())); +} +``` + +See the [node-fetch documentation](https://www.npmjs.com/package/node-fetch) for more info. + +#### Output + +```sh +$ node rs_api.js +GET https://[host]:[port]/v1/bdbs +200: OK +[ + { + ... + "name": "tr01", + ... + "slave_ha" : false, + ... + "uid": 1, + "version": "6.0.16", + "wait_command": true + } +] +PUT https://[host]:[port]/v1/bdbs/1 +200: OK +{ + ... + "name" : "tr01", + ... + "slave_ha" : true, + ... + "uid" : 1, + "version" : "6.0.16", + "wait_command" : true +} +``` + +## More info + +- [Redis Enterprise Software REST API]({{< relref "/operate/rs/references/rest-api/" >}}) +- [Redis Enterprise Software REST API requests]({{< relref "/operate/rs/references/rest-api/requests/" >}}) +--- +Title: DB metrics +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the DB metrics used with Redis Enterprise Software REST API + calls. +linkTitle: DB metrics +weight: $weight +--- + +| Metric name | Type | Description | +|-------------|------|-------------| +| avg_latency | float | Average latency of operations on the DB (microseconds). Only returned when there is traffic. 
| +| avg_other_latency | float | Average latency of other (non read/write) operations (microseconds). Only returned when there is traffic. | +| avg_read_latency | float | Average latency of read operations (microseconds). Only returned when there is traffic. | +| avg_write_latency | float | Average latency of write operations (microseconds). Only returned when there is traffic. | +| big_del_flash | float | Rate of key deletes for keys on flash (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| big_del_ram | float | Rate of key deletes for keys in RAM (BigRedis) (key access/sec); this includes write misses (new keys created). Only returned when BigRedis is enabled. | +| big_fetch_flash | float | Rate of key reads/updates for keys on flash (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| big_fetch_ram | float | Rate of key reads/updates for keys in RAM (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| big_io_ratio_flash | float | Rate of key operations on flash. Can be used to compute the ratio of I/O operations (key access/sec). Only returned when BigRedis is enabled. | +| big_io_ratio_redis | float | Rate of Redis operations on keys. Can be used to compute the ratio of I/O operations (key access/sec). Only returned when BigRedis is enabled. | +| big_write_flash | float | Rate of key writes for keys on flash (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| big_write_ram | float | Rate of key writes for keys in RAM (BigRedis) (key access/sec); this includes write misses (new keys created). Only returned when BigRedis is enabled. | +| bigstore_io_dels | float | Rate of key deletions from flash (key access/sec). Only returned when BigRedis is enabled. | +| bigstore_io_read_bytes | float | Throughput of I/O read operations against backend flash for all shards of the DB (BigRedis) (bytes/sec). Only returned when BigRedis is enabled. | +| bigstore_io_reads | float | Rate of key reads from flash (key access/sec). Only returned when BigRedis is enabled. | +| bigstore_io_write_bytes | float | Throughput of I/O write operations against backend flash for all shards of the DB (BigRedis) (bytes/sec). Only returned when BigRedis is enabled. | +| bigstore_io_writes | float | Rate of key writes from flash (key access/sec). Only returned when BigRedis is enabled. | +| bigstore_iops | float | Rate of I/O operations against backend flash for all shards of the DB (BigRedis) (ops/sec). Only returned when BigRedis is enabled. | +| bigstore_kv_ops | float | Rate of value read/write/del operations against backend flash for all shards of the DB (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| bigstore_objs_flash | float | Value count on flash (BigRedis). Only returned when BigRedis is enabled. | +| bigstore_objs_ram | float | Value count in RAM (BigRedis). Only returned when BigRedis is enabled. | +| bigstore_throughput | float | Throughput of I/O operations against backend flash for all shards of the DB (BigRedis) (bytes/sec). Only returned when BigRedis is enabled. | +| conns | float | Number of client connections to the DB’s endpoints | +| disk_frag_ratio | float | Flash fragmentation ratio (used/required). Only returned when BigRedis is enabled.
| +| egress_bytes | float | Rate of outgoing network traffic to the DB’s endpoint (bytes/sec) | +| evicted_objects | float | Rate of key evictions from DB (evictions/sec) | +| expired_objects | float | Rate keys expired in DB (expirations/sec) | +| fork_cpu_system | float | % cores utilization in system mode for all Redis shard fork child processes of this database | +| fork_cpu_user | float | % cores utilization in user mode for all Redis shard fork child processes of this database | +| ingress_bytes | float | Rate of incoming network traffic to the DB’s endpoint (bytes/sec) | +| instantaneous_ops_per_sec | float | Request rate handled by all shards of the DB (ops/sec) | +| last_req_time | date, ISO_8601 format | Last request time received to the DB (ISO format 2015-07-05T22:16:18Z). Returns 1/1/1970 when unavailable. | +| last_res_time | date, ISO_8601 format | Last response time received from DB (ISO format 2015-07-05T22:16:18Z). Returns 1/1/1970 when unavailable. | +| main_thread_cpu_system | float | % cores utilization in system mode for all Redis shard main threads of this database | +| main_thread_cpu_user | float | % cores utilization in user mode for all Redis shard main threads of this database | +| mem_frag_ratio | float | RAM fragmentation ratio (RSS/allocated RAM) | +| mem_not_counted_for_evict | float | Portion of used_memory (in bytes) not counted for eviction and OOM errors | +| mem_size_lua | float | Redis Lua scripting heap size (bytes) | +| monitor_sessions_count | float | Number of client connected in monitor mode to the DB | +| no_of_expires | float | Number of volatile keys in the DB | +| no_of_keys | float | Number of keys in the DB | +| other_req | float | Rate of other (non read/write) requests on DB (ops/sec) | +| other_res | float | Rate of other (non read/write) responses on DB (ops/sec) | +| pubsub_channels | float | Count the pub/sub channels with subscribed clients | +| pubsub_patterns | float | Count the pub/sub patterns with subscribed clients | +| ram_overhead | float | Non values RAM overhead (BigRedis) (bytes). Only returned when BigRedis is enabled. | +| read_hits | float | Rate of read operations accessing an existing key (ops/sec) | +| read_misses | float | Rate of read operations accessing a nonexistent key (ops/sec) | +| read_req | float | Rate of read requests on DB (ops/sec) | +| read_res | float | Rate of read responses on DB (ops/sec) | +| shard_cpu_system | float | % cores utilization in system mode for all Redis shard processes of this database | +| shard_cpu_user | float | % cores utilization in user mode for the Redis shard process | +| total_connections_received | float | Rate of new client connections to the DB (connections/sec) | +| total_req | float | Rate of all requests on DB (ops/sec) | +| total_res | float | Rate of all responses on DB (ops/sec) | +| used_bigstore | float | Flash used by DB (BigRedis) (bytes). Only returned when BigRedis is enabled. | +| used_memory | float | Memory used by DB (in BigRedis this includes flash) (bytes) | +| used_ram | float | RAM used by DB (BigRedis) (bytes). Only returned when BigRedis is enabled. 
| +| write_hits | float | Rate of write operations accessing an existing key (ops/sec) | +| write_misses | float | Rate of write operations accessing a nonexistent key (ops/sec) | +| write_req | float | Rate of write requests on DB (ops/sec) | +| write_res | float | Rate of write responses on DB (ops/sec) |--- +Title: Node metrics +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the node metrics used with Redis Enterprise Software REST API + calls. +linkTitle: node metrics +weight: $weight +--- + +| Metric name | Type | Description | +|-------------|------|-------------| +| available_flash | float | Available flash on the node (bytes) | +| available_memory | float | Available RAM on the node (bytes) | +| avg_latency | float | Average latency of requests handled by endpoints on the node (micro-sec); returned only when there is traffic | +| bigstore_free | float | Free space of backend flash (used by flash DB's BigRedis) (bytes); returned only when BigRedis is enabled | +| bigstore_iops | float | Rate of I/O operations against backend flash for all shards which are part of a flash-based DB (BigRedis) on the node (ops/sec); returned only when BigRedis is enabled | +| bigstore_kv_ops | float | Rate of value read/write operations against backend flash for all shards which are part of a flash-based DB (BigRedis) on the node (ops/sec); returned only when BigRedis is enabled | +| bigstore_throughput | float | Throughput of I/O operations against backend flash for all shards which are part of a flash-based DB (BigRedis) on the node (bytes/sec); returned only when BigRedis is enabled | +| conns | float | Number of clients connected to endpoints on the node | +| cpu_idle | float | CPU idle time portion (0-1, multiply by 100 to get percent) | +| cpu_system | float | CPU time portion spent in kernel (0-1, multiply by 100 to get percent) | +| cpu_user | float | CPU time portion spent by users-pace processes (0-1, multiply by 100 to get percent) | +| cur_aof_rewrites | float | Number of current AOF rewrites by shards on this node | +| egress_bytes | float | Rate of outgoing network traffic to the node (bytes/sec) | +| ephemeral_storage_avail | float | Disk space available to Redis Enterprise processes on configured ephemeral disk (bytes) | +| ephemeral_storage_free | float | Free disk space on configured ephemeral disk (bytes) | +| free_memory | float | Free memory on the node (bytes) | +| ingress_bytes | float | Rate of incoming network traffic to the node (bytes/sec) | +| persistent_storage_avail | float | Disk space available to Redis Enterprise processes on configured persistent disk (bytes) | +| persistent_storage_free | float | Free disk space on configured persistent disk (bytes) | +| provisional_flash | float | Amount of flash available for new shards on this node, taking into account overbooking, max Redis servers, reserved flash, and provision and migration thresholds (bytes) | +| provisional_memory | float | Amount of RAM available for new shards on this node, taking into account overbooking, max Redis servers, reserved memory, and provision and migration thresholds (bytes) | +| total_req | float | Request rate handled by endpoints on the node (ops/sec) | +--- +Title: Cluster metrics +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the cluster metrics used with Redis Enterprise Software REST + API calls. 
+linkTitle: cluster metrics +weight: $weight +--- + +| Metric name | Type | Description | +|-------------|------|-------------| +| available_flash | float | Sum of available flash in all nodes (bytes) | +| available_memory | float | Sum of available memory in all nodes (bytes) | +| avg_latency | float | Average latency of requests handled by all cluster endpoints (micro-sec); returned only when there is traffic | +| bigstore_free | float | Sum of free space of backend flash (used by flash DB's BigRedis) on all cluster nodes (bytes); only returned when BigRedis is enabled | +| bigstore_iops | float | Rate of I/O operations against backend flash for all shards which are part of a flash-based DB (BigRedis) in the cluster (ops/sec); returned only when BigRedis is enabled | +| bigstore_kv_ops | float | Rate of value read/write operations against back-end flash for all shards which are part of a flash based DB (BigRedis) in cluster (ops/sec); only returned when BigRedis is enabled | +| bigstore_throughput | float | Throughput I/O operations against backend flash for all shards which are part of a flash-based DB (BigRedis) in the cluster (bytes/sec); only returned when BigRedis is enabled | +| conns | float | Total number of clients connected to all cluster endpoints | +| cpu_idle | float | CPU idle time portion, the value is weighted between all nodes based on number of cores in each node (0-1, multiply by 100 to get percent) | +| cpu_system | float | CPU time portion spent in kernel on the cluster, the value is weighted between all nodes based on number of cores in each node (0-1, multiply by 100 to get percent) | +| cpu_user | float | CPU time portion spent by users-pace processes on the cluster. The value is weighted between all nodes based on number of cores in each node (0-1, multiply by 100 to get percent). | +| egress_bytes | float | Sum of rate of outgoing network traffic on all cluster nodes (bytes/sec) | +| ephemeral_storage_avail | float | Sum of disk space available to Redis Enterprise processes on configured ephemeral disk on all cluster nodes (bytes) | +| ephemeral_storage_free | float | Sum of free disk space on configured ephemeral disk on all cluster nodes (bytes) | +| free_memory | float | Sum of free memory in all cluster nodes (bytes) | +| ingress_bytes | float | Sum of rate of incoming network traffic on all cluster nodes (bytes/sec) | +| persistent_storage_avail | float | Sum of disk space available to Redis Enterprise processes on configured persistent disk on all cluster nodes (bytes) | +| persistent_storage_free | float | Sum of free disk space on configured persistent disk on all cluster nodes (bytes) | +| provisional_flash | float | Sum of provisional flash in all nodes (bytes) | +| provisional_memory | float | Sum of provisional memory in all nodes (bytes) | +| total_req | float | Request rate handled by all endpoints on the cluster (ops/sec) | +--- +Title: Statistics +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that contains metrics for clusters, databases, nodes, or shards +hideListLinks: true +linkTitle: statistics +weight: $weight +--- + +## Statistics overview + +Clusters, databases, nodes, and shards collect various statistics at regular time intervals. 
View the statistics for these objects using `GET stats` requests to their respective endpoints: +- [Cluster stats]({{< relref "/operate/rs/references/rest-api/requests/cluster/stats" >}}) +- [Database stats]({{< relref "/operate/rs/references/rest-api/requests/bdbs/stats" >}}) +- [Node stats]({{< relref "/operate/rs/references/rest-api/requests/nodes/stats" >}}) +- [Shard stats]({{< relref "/operate/rs/references/rest-api/requests/shards/stats" >}}) + +View endpoint stats using `GET` requests, see: +- [Endpoint stats]({{< relref "/operate/rs/references/rest-api/requests/endpoints-stats" >}}) + +### Response object + +Statistics returned from API requests always contain the following fields: +- `interval`: a string that represents the statistics time interval. Valid values include: + - 1sec + - 10sec + - 5min + - 15min + - 1hour + - 12hour + - 1week +- `stime`: a timestamp that represents the beginning of the interval, in the format "2015-05-27T12:00:00Z" +- `etime`: a timestamp that represents the end of the interval, in the format "2015-05-27T12:00:00Z" + +The statistics returned by the API also contain fields that represent the values of different metrics for an object during the specified time interval. + +More details about the metrics relevant to each object: +- [Cluster metrics]({{< relref "/operate/rs/references/rest-api/objects/statistics/cluster-metrics" >}}) +- [DB metrics]({{< relref "/operate/rs/references/rest-api/objects/statistics/db-metrics" >}}) +- [Node metrics]({{< relref "/operate/rs/references/rest-api/objects/statistics/node-metrics" >}}) +- [Shard metrics]({{< relref "/operate/rs/references/rest-api/objects/statistics/shard-metrics" >}}) + +{{}} +Certain statistics are not documented because they are for internal use only and should be ignored. Some statistics will only appear in API responses when they are relevant. +{{}} + +### Optional URL parameters + +There are several optional URL parameters you can pass to the various `GET stats` requests to filter the returned statistics. + +- `stime`: limit the start of the time range of the returned statistics +- `etime`: limit the end of the time range of the returned statistics +- `metrics`: only return the statistics for the specified metrics (comma-separated list) + +## Maximum number of samples per interval + +The system retains a maximum number of most recent samples for each interval. + +| Interval | Max samples | +|----------|-------------| +| 1sec | 10 | +| 10sec | 30 | +| 5min | 12 | +| 15min | 96 | +| 1hour | 168 | +| 12hour | 62 | +| 1week | 53 | + +The actual number of samples returned by a `GET stats` request depends on how many samples are available and any filters applied by the optional URL parameters. For example, newly created objects (clusters, nodes, databases, or shards) or a narrow time filter range will return fewer samples. + +{{}} +To reduce load generated by stats collection, relatively inactive databases or shards (less than 5 ops/sec) do not collect 1sec stats at one second intervals. Instead, they collect 1sec stats every 2-5 seconds but still retain the same maximum number of samples. +{{}} +--- +Title: Shard metrics +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the shard metrics used with Redis Enterprise Software REST + API calls. 
+linkTitle: shard metrics +weight: $weight +--- + +| Metric name | Type | Description | +|-------------|------|-------------| +| aof_rewrite_inprog | float | The number of simultaneous AOF rewrites that are in progress | +| avg_ttl | float | Estimated average time to live of a random key (msec) | +| big_del_flash | float | Rate of key deletes for keys on flash (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| big_del_ram | float | Rate of key deletes for keys in RAM (BigRedis) (key access/sec); this includes write misses (new keys created). Only returned when BigRedis is enabled. | +| big_fetch_flash | float | Rate of key reads/updates for keys on flash (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| big_fetch_ram | float | Rate of key reads/updates for keys in RAM (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| big_io_ratio_flash | float | Rate of key operations on flash. Can be used to compute the ratio of I/O operations (key access/sec). Only returned when BigRedis is enabled. | +| big_io_ratio_redis | float | Rate of Redis operations on keys. Can be used to compute the ratio of I/O operations) (key access/sec). Only returned when BigRedis is enabled. | +| big_write_flash | float | Rate of key writes for keys on flash (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| big_write_ram | float | Rate of key writes for keys in RAM (BigRedis) (key access/sec); this includes write misses (new keys created). Only returned when BigRedis is enabled. | +| bigstore_io_dels | float | Rate of key deletions from flash (key access/sec). Only returned when BigRedis is enabled. | +| bigstore_io_read_bytes | float | Throughput of I/O read operations against backend flash for all shards of the DB (BigRedis) (bytes/sec). Only returned when BigRedis is enabled. | +| bigstore_io_reads | float | Rate of key reads from flash (key access/sec). Only returned when BigRedis is enabled. | +| bigstore_io_write_bytes | float | Throughput of I/O write operations against backend flash for all shards of the DB (BigRedis) (bytes/sec). Only returned when BigRedis is enabled. | +| bigstore_io_writes | float | Rate of key writes from flash (key access/sec). Only returned when BigRedis is enabled. | +| bigstore_iops | float | Rate of I/O operations against backend flash for all shards of the DB (BigRedis) (ops/sec). Only returned when BigRedis is enabled. | +| bigstore_kv_ops | float | Rate of value read/write/del operations against backend flash for all shards of the DB (BigRedis) (key access/sec). Only returned when BigRedis is enabled. | +| bigstore_objs_flash | float | Key count on flash (BigRedis). Only returned when BigRedis is enabled. | +| bigstore_objs_ram | float | Key count in RAM (BigRedis). Only returned when BigRedis is enabled. | +| bigstore_throughput | float | Throughput of I/O operations against backend flash for all shards of the DB (BigRedis) (bytes/sec). Only returned when BigRedis is enabled. | +| blocked_clients | float | Count the clients waiting on a blocking call | +| connected_clients | float | Number of client connections to the specific shard | +| disk_frag_ratio | float | Flash fragmentation ratio (used/required). Only returned when BigRedis is enabled. 
| +| evicted_objects | float | Rate of key evictions from DB (evictions/sec) | +| expired_objects | float | Rate keys expired in DB (expirations/sec) | +| fork_cpu_system | float | % cores utilization in system mode for the Redis shard fork child process | +| fork_cpu_user | float | % cores utilization in user mode for the Redis shard fork child process | +| last_save_time | float | Time of the last RDB save | +| main_thread_cpu_system | float | % cores utilization in system mode for the Redis shard main thread | +| main_thread_cpu_user | float | % cores utilization in user mode for the Redis shard main thread | +| mem_frag_ratio | float | RAM fragmentation ratio (RSS/allocated RAM) | +| mem_not_counted_for_evict | float | Portion of used_memory (in bytes) not counted for eviction and OOM errors | +| mem_size_lua | float | Redis Lua scripting heap size (bytes) | +| no_of_expires | float | Number of volatile keys on the shard | +| no_of_keys | float | Number of keys in DB | +| pubsub_channels | float | Count the pub/sub channels with subscribed clients | +| pubsub_patterns | float | Count the pub/sub patterns with subscribed clients | +| rdb_changes_since_last_save | float | Count changes since last RDB save | +| read_hits | float | Rate of read operations accessing an existing key (ops/sec) | +| read_misses | float | Rate of read operations accessing a nonexistent key (ops/sec) | +| shard_cpu_system | float | % cores utilization in system mode for the Redis shard process | +| shard_cpu_user | float | % cores utilization in user mode for the Redis shard process | +| total_req | float | Rate of operations on DB (ops/sec) | +| used_memory | float | Memory used by shard (in BigRedis this includes flash) (bytes) | +| used_memory_peak | float | The largest amount of memory used by this shard (bytes) | +| used_memory_rss | float | Resident set size of this shard (bytes) | +| write_hits | float | Rate of write operations accessing an existing key (ops/sec) | +| write_misses | float | Rate of write operations accessing a nonexistent key (ops/sec) | +--- +Title: Alert settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the alert_settings object used with Redis Enterprise Software + REST API calls. +linkTitle: alert_settings +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| cluster_certs_about_to_expire | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Cluster certificate will expire in x days | +| cluster_even_node_count | boolean (default: false) | True high availability requires an odd number of nodes in the cluster | +| cluster_flash_overcommit | boolean (default: false) | Flash memory committed to databases is larger than cluster total flash memory | +| cluster_inconsistent_redis_sw | boolean (default: false) | Some shards in the cluster are running different versions of Redis software | +| cluster_inconsistent_rl_sw | boolean (default: false) | Some nodes in the cluster are running different versions of Redis Enterprise software | +| cluster_internal_bdb | boolean (default: false) | Issues with internal cluster databases | +| cluster_license_about_to_expire | [cluster_alert_settings_with_threshold]({{}}) object | Cluster license will expire in x days. This alert is enabled by default. Its default threshold is 7 days before license expiration. 
| +| cluster_multiple_nodes_down | boolean (default: false) | Multiple cluster nodes are down (this might cause data loss) | +| cluster_node_joined | boolean (default: false) | New node joined the cluster | +| cluster_node_remove_abort_completed | boolean (default: false) | Cancel node remove operation completed | +| cluster_node_remove_abort_failed | boolean (default: false) | Cancel node remove operation failed | +| cluster_node_remove_completed | boolean (default: false) | Node removed from the cluster | +| cluster_node_remove_failed | boolean (default: false) | Failed to remove a node from the cluster | +| cluster_ocsp_query_failed | boolean (default: false) | Failed to query the OCSP server | +| cluster_ocsp_status_revoked | boolean (default: false) | OCSP certificate status is REVOKED | +| cluster_ram_overcommit | boolean (default: false) | RAM committed to databases is larger than cluster total RAM | +| cluster_too_few_nodes_for_replication | boolean (default: false) | Replication requires at least 2 nodes in the cluster | +| node_aof_slow_disk_io | boolean (default: false) | AOF reaching disk I/O limits +| node_checks_error | boolean (default: false) | Some node checks have failed | +| node_cpu_utilization | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Node CPU utilization has reached the threshold value (% of the utilization limit) | +| node_ephemeral_storage | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Node ephemeral storage has reached the threshold value (% of the storage limit) | +| node_failed | boolean (default: false) | Node failed | +| node_free_flash | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Node flash storage has reached the threshold value (% of the storage limit) | +| node_insufficient_disk_aofrw | boolean (default: false) | Insufficient AOF disk space | +| node_internal_certs_about_to_expire | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object| Internal certificate on node will expire in x days | +| node_memory | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Node memory has reached the threshold value (% of the memory limit) | +| node_net_throughput | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Node network throughput has reached the threshold value (bytes/s) | +| node_persistent_storage | [cluster_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/cluster/cluster_alert_settings_with_threshold" >}}) object | Node persistent storage has reached the threshold value (% of the storage limit) | +--- +Title: Cluster alert settings with threshold object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the cluster_alert_settings_with_threshold object used with + Redis Enterprise Software REST API calls. 
+linkTitle: cluster_alert_settings_with_threshold +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| enabled | boolean (default: false) | Alert enabled or disabled | +| threshold | string | Threshold for alert going on/off | +--- +Title: Cluster object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a cluster +hideListLinks: true +linkTitle: cluster +weight: $weight +--- + +An API object that represents the cluster. + +| Name | Type/Value | Description | +|------|------------|-------------| +| alert_settings | [alert_settings]({{< relref "/operate/rs/references/rest-api/objects/cluster/alert_settings" >}}) object | Cluster and node alert settings | +| bigstore_driver | 'speedb'
'rocksdb' | Storage engine for Auto Tiering | +| cluster_ssh_public_key | string | Cluster's autogenerated SSH public key | +| cm_port | integer, (range: 1024-65535) | UI HTTPS listening port | +| cm_session_timeout_minutes | integer (default: 15) | The timeout (in minutes) for the session to the CM | +| cnm_http_max_threads_per_worker | integer (default: 10) | Maximum number of threads per worker in the `cnm_http` service (deprecated) | +| cnm_http_port | integer, (range: 1024-65535) | API HTTP listening port | +| cnm_http_workers | integer (default: 1) | Number of workers in the `cnm_http` service | +| cnm_https_port | integer, (range: 1024-65535) | API HTTPS listening port | +| control_cipher_suites | string | Specifies the enabled ciphers for the control plane. The ciphers are specified in the format understood by the BoringSSL library. | +| control_cipher_suites_tls_1_3 | string | Specifies the enabled TLS 1.3 ciphers for the control plane. The ciphers are specified in the format understood by the BoringSSL library. (read-only) | +| crdb_coordinator_port | integer, (range: 1024-65535) (default: 9081) | CRDB coordinator port | +| crdt_rest_client_retries | integer | Maximum number of retries for the REST client used by the Active-Active management API | +| crdt_rest_client_timeout | integer | Timeout for REST client used by the Active-Active management API | +| created_time | string | Cluster creation date (read-only) | +| data_cipher_list | string | Specifies the enabled ciphers for the data plane. The ciphers are specified in the format understood by the OpenSSL library. | +| data_cipher_suites_tls_1_3 | string | Specifies the enabled TLS 1.3 ciphers for the data plane. | +| debuginfo_path | string | Path to a local directory used when generating support packages | +| default_non_sharded_proxy_policy | string (default: single) | Default proxy_policy for newly created non-sharded databases' endpoints (read-only) | +| default_sharded_proxy_policy | string (default: all-master-shards) | Default proxy_policy for newly created sharded databases' endpoints (read-only) | +| email_alerts | boolean (default: false) | Send node/cluster email alerts (requires valid SMTP and email_from settings) | +| email_from | string | Sender email for automated emails | +| encrypt_pkeys | boolean (default: false) | Enable or turn off encryption of private keys | +| envoy_admin_port | integer, (range: 1024-65535) | Envoy admin port. Changing this port during runtime might result in an empty response because envoy serves as the cluster gateway.| +| envoy_max_downstream_connections | integer, (range: 100-2048) | The max downstream connections envoy is allowed to open | +| envoy_mgmt_server_port | integer, (range: 1024-65535) | Envoy management server port| +| gossip_envoy_admin_port | integer, (range: 1024-65535) | Gossip envoy admin port| +| handle_redirects | boolean (default: false) | Handle API HTTPS requests and redirect to the master node internally | +| http_support | boolean (default: false) | Enable or turn off HTTP support | +| min_control_TLS_version | '1.2'
'1.3' | The minimum version of TLS protocol which is supported at the control path | +| min_data_TLS_version | '1.2'
'1.3' | The minimum version of TLS protocol which is supported at the data path | +| min_sentinel_TLS_version | '1.2'
'1.3' | The minimum version of TLS protocol which is supported at the data path | +| mtls_authorized_subjects | array | {{}}[{
"CN": string,
"O": string,
"OU": [array of strings],
"L": string,
"ST": string,
"C": string
}, ...]{{
}} A list of valid subjects used for additional certificate validations during TLS client authentication. All subject attributes are case-sensitive.
**Required subject fields**:
"CN" for Common Name
**Optional subject fields:**
"O" for Organization
"OU" for Organizational Unit (array of strings)
"L" for Locality (city)
"ST" for State/Province
"C" for 2-letter country code | +| mtls_certificate_authentication | boolean | Require authentication of client certificates for mTLS connections to the cluster. The API_CA certificate should be configured as a prerequisite. | +| mtls_client_cert_subject_validation_type | `disabled`
`san_cn`
`full_subject` | Enables additional certificate validations that further limit connections to clients with valid certificates during TLS client authentication.
Values:
**disabled**: Authenticates clients with valid certificates. No additional validations are enforced.
**san_cn**: A client certificate is valid only if its Common Name (CN) matches an entry in the list of valid subjects. Ignores other Subject attributes.
**full_subject**: A client certificate is valid only if its Subject attributes match an entry in the list of valid subjects. | +| name | string | Cluster's fully qualified domain name (read-only) | +| password_complexity | boolean (default: false) | Enforce password complexity policy | +| password_expiration_duration | integer (default: 0) | The number of days a password is valid until the user is required to replace it | +| password_min_length | integer, (range: 8-256) (default: 8) | The minimum length required for a password. | +| proxy_certificate | string | Cluster's proxy certificate | +| proxy_max_ccs_disconnection_time | integer | Cluster-wide proxy timeout policy between proxy and CCS | +| rack_aware | boolean | Cluster operates in a rack-aware mode (read-only) | +| reserved_ports | array of strings | List of reserved ports and/or port ranges to avoid using for database endpoints (for example `"reserved_ports": ["11000", "13000-13010"]`) | +| s3_ca_cert | string | Filepath to the PEM-encoded CA certificate to use for validating TLS connections to the S3 server | +| s3_url | string | Specifies the URL for S3 export and import | +| saslauthd_ldap_conf | string | saslauthd LDAP configuration | +| sentinel_cipher_suites | array | Specifies the list of enabled ciphers for the sentinel service. The supported ciphers are those implemented by the [cipher_suites.go]() package. | +| sentinel_cipher_suites_tls_1_3 | string | Specifies the list of enabled TLS 1.3 ciphers for the discovery (sentinel) service. The supported ciphers are those implemented by the [cipher_suites.go]() package.(read-only) | +| sentinel_tls_mode | 'allowed'
'disabled'
'required' | Determines whether the discovery service allows, blocks, or requires TLS connections (previously named `sentinel_ssl_policy`)
**allowed**: Allows both TLS and non-TLS connections
**disabled**: Allows only non-TLS connections
**required**: Allows only TLS connections | +| slave_ha | boolean (default: false) | Enable the replica high-availability mechanism (read-only) | +| slave_ha_bdb_cooldown_period | integer (default: 86400) | Time in seconds between runs of the replica high-availability mechanism on different nodes on the same database (read-only) | +| slave_ha_cooldown_period | integer (default: 3600) | Time in seconds between runs of the replica high-availability mechanism on different nodes (read-only) | +| slave_ha_grace_period | integer (default: 900) | Time in seconds between a node failure and when the replica high-availability mechanism starts relocating shards (read-only) | +| slowlog_in_sanitized_support | boolean | Whether to include slowlogs in the sanitized support package | +| smtp_host | string | SMTP server for automated emails | +| smtp_password | string | SMTP server password | +| smtp_port | integer | SMTP server port for automated emails | +| smtp_tls_mode | 'none'
'starttls'
'tls' | Specifies which TLS mode to use for SMTP access | +| smtp_use_tls | boolean (default: false) | Use TLS for SMTP access (deprecated as of Redis Enterprise v4.3.3, use smtp_tls_mode field instead) | +| smtp_username | string | SMTP server username (pattern does not allow special characters &,\<,>,") | +| syncer_certificate | string | Cluster's syncer certificate | +| upgrade_mode | boolean (default: false) | Is cluster currently in upgrade mode | +| use_external_ipv6 | boolean (default: true) | Should redislabs services listen on ipv6 | +| use_ipv6 | boolean (default: true) | Should redislabs services listen on ipv6 (deprecated as of Redis Enterprise v6.4.2, replaced with use_external_ipv6) | +| wait_command | boolean (default: true) | Supports Redis wait command (read-only) | +--- +Title: LDAP mapping object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a mapping between an LDAP group and roles +linkTitle: ldap_mapping +weight: $weight +--- + +An API object that represents an [LDAP mapping]({{< relref "/operate/rs/security/access-control/ldap/map-ldap-groups-to-roles" >}}) between an LDAP group and [roles]({{< relref "/operate/rs/references/rest-api/objects/role" >}}). + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | LDAP mapping's unique ID | +| account_id | integer | SM account ID | +| action_uid | string | Action UID. If it exists, progress can be tracked by the `GET` `/actions/{uid}` API (read-only) | +| bdbs_email_alerts | complex object | UIDs of databases that associated email addresses will receive alerts for | +| cluster_email_alerts | boolean | Activate cluster email alerts for an associated email | +| dn | string | An LDAP group's distinguished name | +| email | string | Email address used for alerts (if set) | +| email_alerts | boolean (default: true) | Activate email alerts for an associated email | +| name | string | Role's name | +| role_uids | array of integers | List of role UIDs associated with the LDAP group | +--- +Title: Cluster identity object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the cluster_identity object used with Redis Enterprise Software + REST API calls. +linkTitle: cluster_identity +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| name | string | Fully qualified cluster name. Limited to 64 characters and must comply with the IETF's RFC 952 standard and section 2.1 of the RFC 1123 standard. | +| nodes | array of strings | Array of IP addresses of existing cluster nodes | +| wait_command | boolean (default: true) | Supports Redis wait command | +--- +Title: Identity object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the identity object used with Redis Enterprise Software REST + API calls. +linkTitle: identity +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Assumed node's UID to join cluster. Used to replace a dead node with a new one. | +| accept_servers | boolean (default: true) | If true, no shards will be created on the node | +| addr | string | Internal IP address of node | +| external_addr | complex object | External IP addresses of node. `GET` `/jsonschema` to retrieve the object's structure. 
| +| name | string | Node's name | +| override_rack_id | boolean | When replacing an existing node in a rack-aware cluster, allows the new node to be located in a different rack | +| rack_id | string | Rack ID, overrides cloud config | +| use_internal_ipv6 | boolean (default: false) | Node uses IPv6 for internal communication | +--- +Title: Credentials object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the credentials object used with Redis Enterprise Software + REST API calls. +linkTitle: credentials +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| password | string | Admin password | +| username | string | Admin username (pattern does not allow special characters &,\<,>,") | +--- +Title: Limits object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the limits object used with Redis Enterprise Software REST + API calls. +linkTitle: limits +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| max_listeners | integer (default: 100) | Max allowed listeners on node | +| max_redis_servers | integer (default: 100) | Max allowed Redis servers on node | +--- +Title: Paths object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the paths object used with Redis Enterprise Software REST API + calls. +linkTitle: paths +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| bigstore_path | string | Bigredis storage path | +| ccs_persistent_path | string | Persistent storage path of CCS | +| ephemeral_path | string | Ephemeral storage path | +| persistent_path | string | Persistent storage path | +--- +Title: Policy object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the policy object used with Redis Enterprise Software REST + API calls. +linkTitle: policy +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| default_fork_evict_ram | boolean (default: false) | If true, the databases should evict data from RAM to ensure successful replication or persistence | +| default_non_sharded_proxy_policy | **'single'**
'all-master-shards'
'all-nodes' | Default proxy_policy for newly created non-sharded databases' endpoints | +| default_sharded_proxy_policy | 'single'
**'all-master-shards'**
'all-nodes' | Default proxy_policy for newly created sharded databases' endpoints | +| default_shards_placement | 'dense'
**'sparse'** | Default shards_placement for newly created databases | +| rack_aware | boolean | Cluster rack awareness | +| shards_overbooking | boolean (default: true) | If true, all databases' memory_size settings are ignored during shards placement | +--- +Title: Node identity object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the node_identity object used with Redis Enterprise Software + REST API calls. +linkTitle: node_identity +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| bigstore_driver | 'rocksdb' | Bigstore driver name or none (deprecated) | +| bigstore_enabled | boolean | Bigstore enabled or disabled | +| identity | [identity]({{< relref "/operate/rs/references/rest-api/objects/bootstrap/identity" >}}) object | Node identity | +| limits | [limits]({{< relref "/operate/rs/references/rest-api/objects/bootstrap/limits" >}}) object | Node limits | +| paths | [paths]({{< relref "/operate/rs/references/rest-api/objects/bootstrap/paths" >}}) object | Storage paths object | +--- +Title: Bootstrap object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for bootstrap configuration +hideListLinks: true +linkTitle: bootstrap +weight: $weight +--- + +A bootstrap configuration object. + +| Name | Type/Value | Description | +|------|------------|-------------| +| action | 'create_cluster'
'join_cluster'
'recover_cluster' | Action to perform | +| cluster | [cluster_identity]({{< relref "/operate/rs/references/rest-api/objects/bootstrap/cluster_identity" >}}) object | Cluster to join or create | +| cnm_https_port | integer | Port to join a cluster with non-default cnm_https port | +| crdb_coordinator_port | integer, (range: 1024-65535) (default: 9081) | CRDB coordinator port | +| credentials | [credentials]({{< relref "/operate/rs/references/rest-api/objects/bootstrap/credentials" >}}) object | Cluster admin credentials | +| dns_suffixes | {{}} +[{ + "name": string, + "cluster_default": boolean, + "use_aaaa_ns": boolean, + "use_internal_addr": boolean, + "slaves": array +}, ...] +{{}} | Explicit configuration of DNS suffixes
**name**: DNS suffix name
**cluster_default**: Should this suffix be the default cluster suffix
**use_aaaa_ns**: Should AAAA records be published for NS records
**use_internal_addr**: Should internal cluster IPs be published for databases
**slaves**: List of replica servers that should be published as NS and notified | +| envoy_admin_port | integer, (range: 1024-65535) | Envoy admin port. Changing this port during runtime might result in an empty response because envoy serves as the cluster gateway.| +| envoy_mgmt_server_port | integer, (range: 1024-65535) | Envoy management server port| +| gossip_envoy_admin_port | integer, (range: 1024-65535) | Gossip envoy admin port| +| license | string | License string. If not provided, a trial license is set by default. | +| max_retries | integer | Max number of retries in case of recoverable errors | +| node | [node_identity]({{< relref "/operate/rs/references/rest-api/objects/bootstrap/node_identity" >}}) object | Node description | +| policy | [policy]({{< relref "/operate/rs/references/rest-api/objects/bootstrap/policy" >}}) object | Policy object | +| recovery_filename | string | Name of backup file to recover from | +| required_version | string | This node can only join the cluster if all nodes in the cluster have a version greater than the required_version (deprecated as of Redis Enterprise Software v7.8.6) | +| retry_time | integer | Max waiting time between retries (in seconds) | + + +--- +Title: Check result object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that contains the results of a cluster check +linkTitle: check_result +weight: $weight +--- + +Cluster check result + +| Name | Type/Value | Description | +|------|------------|-------------| +| cluster_test_result | boolean | Indication if any of the tests failed | +| nodes | {{}} +[{ + "node_uid": integer, + "result": boolean, + "error": string +}, ...] +{{}} | Nodes results | +--- +Title: CRDB task object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a CRDB task +linkTitle: crdb_task +weight: $weight +--- + +An object that represents an Active-Active (CRDB) task. + +| Name | Type/Value | Description | +|------|------------|-------------| +| id | string | CRDB task ID (read only) | +| crdb_guid | string | Globally unique Active-Active database ID (GUID) (read-only) | +| errors | {{}} +[{ + "cluster_name": string, + "description": string, + "error_code": string +}, ...] {{}} | Details for errors that occurred on a cluster | +| status | 'queued'
'started'
'finished'
'failed' | CRDB task status (read only) | +--- +Title: Entra ID agent manager object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the entraid_agent_mgr object used with Redis Enterprise Software REST API calls. +linkTitle: entraid_agent_mgr +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the Entra ID agent manager processes | +--- +Title: MDNS server object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the mdns_server object used with Redis Enterprise Software + REST API calls. +linkTitle: mdns_server +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the multicast DNS server | +--- +Title: CRDB worker object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the crdb_worker object used with Redis Enterprise Software + REST API calls. +linkTitle: crdb_worker +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the CRDB worker processes | +--- +Title: LDAP agent manager object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the ldap_agent_mgr object used with Redis Enterprise Software REST API calls. +linkTitle: ldap_agent_mgr +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the LDAP agent manager processes | +--- +Title: Stats archiver object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the stats_archiver object used with Redis Enterprise Software + REST API calls. +linkTitle: stats_archiver +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the stats archiver service | +--- +Title: CRDB coordinator object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the crdb_coordinator object used with Redis Enterprise Software + REST API calls. +linkTitle: crdb_coordinator +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the CRDB coordinator process | +--- +Title: CM server object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the cm_server object used with Redis Enterprise Software REST + API calls. +linkTitle: cm_server +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the CM server | +--- +Title: PDNS server object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the pdns_server object used with Redis Enterprise Software + REST API calls. +linkTitle: pdns_server +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the PDNS server | +--- +Title: Alert manager object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the alert_mgr object used with Redis Enterprise Software REST API calls. +linkTitle: alert_mgr +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| operating_mode | 'disabled'
'enabled' | Enable/disable the alert manager processes | +--- +Title: Services configuration object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for optional cluster services settings +hideListLinks: true +linkTitle: services_configuration +weight: $weight +--- + +Optional cluster services settings + +| Name | Type/Value | Description | +|------|------------|-------------| +| alert_mgr | [alert_mgr]({{< relref "/operate/rs/references/rest-api/objects/services_configuration/alert_mgr" >}}) object | Whether to enable/disable the alert manager processes | +| cm_server | [cm_server]({{< relref "/operate/rs/references/rest-api/objects/services_configuration/cm_server" >}}) object | Whether to enable/disable the CM server | +| crdb_coordinator | [crdb_coordinator]({{< relref "/operate/rs/references/rest-api/objects/services_configuration/crdb_coordinator" >}}) object | Whether to enable/disable the CRDB coordinator process | +| crdb_worker | [crdb_worker]({{< relref "/operate/rs/references/rest-api/objects/services_configuration/crdb_worker" >}}) object | Whether to enable/disable the CRDB worker processes | +| entraid_agent_mgr | [entraid_agent_mgr]({{< relref "/operate/rs/references/rest-api/objects/services_configuration/entraid_agent_mgr" >}}) object | Whether to enable/disable the Entra ID agent manager process | +| ldap_agent_mgr | [ldap_agent_mgr]({{< relref "/operate/rs/references/rest-api/objects/services_configuration/ldap_agent_mgr" >}}) object | Whether to enable/disable the LDAP agent manager processes | +| mdns_server | [mdns_server]({{< relref "/operate/rs/references/rest-api/objects/services_configuration/mdns_server" >}}) object | Whether to enable/disable the multicast DNS server | +| pdns_server | [pdns_server]({{< relref "/operate/rs/references/rest-api/objects/services_configuration/pdns_server" >}}) object | Whether to enable/disable the PDNS server | +| stats_archiver | [stats_archiver]({{< relref "/operate/rs/references/rest-api/objects/services_configuration/stats_archiver" >}}) object | Whether to enable/disable the stats archiver service | +--- +Title: Alert object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that contains alert info +linkTitle: alert +weight: $weight +--- + +You can view, configure, and enable various alerts for the cluster. + +Alerts are bound to a cluster object (such as a [BDB]({{< relref "/operate/rs/references/rest-api/objects/bdb" >}}) or [node]({{< relref "/operate/rs/references/rest-api/objects/node" >}})), and the cluster's state determines whether the alerts turn on or off. + + Name | Type/Value | Description | Writable +|-------|------------|-------------|----------| +| change_time | string | Timestamp when alert state last changed | | +| change_value | object | Contains data relevant to the evaluation time when the alert went on/off (thresholds, sampled values, etc.) | | +| enabled | boolean | If true, alert is enabled | x | +| severity | 'DEBUG'
'INFO'
'WARNING'
'ERROR'
'CRITICAL' | The alert's severity | | +| state | boolean | If true, alert is currently triggered | | +| threshold | string | Represents an alert threshold when applicable | x | +--- +Title: Cluster Manager settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: A REST API object that represents Cluster Manager UI settings +linkTitle: cm_settings +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| timezone | string | Configurable [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) for the Cluster Manager UI. The default time zone is UTC. | +--- +Title: JWT authorize object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for user authentication or a JW token refresh request +linkTitle: jwt_authorize +weight: $weight +--- + +An API object for user authentication or a JW token refresh request. + +| Name | Type/Value | Description | +|------|------------|-------------| +| password | string | The user’s password (required) | +| ttl | integer (range: 1-86400) (default: 300) | Time to live - The amount of time in seconds the token will be valid | +| username | string | The user’s username (required) | +--- +Title: Node object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a node in the cluster +linkTitle: node +weight: $weight +--- + +An API object that represents a node in the cluster. + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Cluster unique ID of node (read-only) | +| accept_servers | boolean (default: true) | The node only accepts new shards if `accept_servers` is `true` | +| addr | string | Internal IP address of node | +| architecture | string | Hardware architecture (read-only) | +| bigredis_storage_path | string | Flash storage path (read-only) | +| bigstore_driver | 'ibm-capi-ga1'
'ibm-capi-ga2'
'ibm-capi-ga4'
'speedb'
'rocksdb' | Bigstore driver name or none (deprecated as of Redis Enterprise v7.2, use the [cluster object]({{< relref "/operate/rs/references/rest-api/objects/cluster" >}})'s bigstore_driver instead) | +| bigstore_enabled | boolean | If true, bigstore is enabled (read-only) | +| bigstore_size | integer | Storage size of bigstore storage (read-only) | +| cores | integer | Total number of CPU cores (read-only) | +| ephemeral_storage_path | string | Ephemeral storage path (read-only) | +| ephemeral_storage_size | number | Ephemeral storage size (bytes) (read-only) | +| external_addr | complex object | External IP addresses of node. `GET` `/jsonschema` to retrieve the object's structure. | +| max_listeners | integer | Maximum number of listeners on the node | +| max_redis_servers | integer | Maximum number of shards on the node | +| os_family | 'rhel'
'ubuntu'
'amzn' | Operating system family (read-only) | +| os_name | string | Operating system name (read-only) | +| os_semantic_version | string | Full version number (read-only) | +| os_version | string | Installed OS version (human-readable) (read-only) | +| persistent_storage_path | string | Persistent storage path (read-only) | +| persistent_storage_size | number | Persistent storage size (bytes) (read-only) | +| public_addr | string | Public IP address of node (deprecated as of Redis Enterprise v4.3.3, use external_addr instead) | +| rack_id | string | Rack ID where node is installed | +| recovery_path | string | Recovery files path | +| shard_count | integer | Number of shards on the node (read-only) | +| shard_list | array of integers | Cluster unique IDs of all node shards | +| software_version | string | Installed Redis Enterprise cluster software version (read-only) | +| status | 'active'
'decommissioning'
'down'
'provisioning' | Node status (read-only) | +| supported_database_versions | {{}} +[{ + "db_type": string, + "version": string +}, ...] +{{}} | Versions of Redis Open Source databases supported by Redis Enterprise Software on the node (read-only)
**db_type**: Type of database
**version**: Version of database | +| system_time | string | System time (UTC) (read-only) | +| total_memory | integer | Total memory of node (bytes) (read-only) | +| uptime | integer | System uptime (seconds) (read-only) | +| use_internal_ipv6 | boolean (default: false) | Node uses IPv6 for internal communication. Value is taken from bootstrap identity (read-only) | +--- +Title: Cluster settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for cluster resource management settings +linkTitle: cluster_settings +weight: $weight +--- + +Cluster resources management policy + +| Name | Type/Value | Description | +|------|------------|-------------| +| acl_pubsub_default | `resetchannels`
`allchannels` | Default pub/sub ACL rule for all databases in the cluster:
•`resetchannels` blocks access to all channels (restrictive)
•`allchannels` allows access to all channels (permissive) | +| auto_recovery | boolean (default: false) | Defines whether to use automatic recovery after shard failure | +| automatic_node_offload | boolean (default: true) | Defines whether the cluster will automatically migrate shards from a node, in case the node is overbooked | +| bigstore_migrate_node_threshold | integer | Minimum free memory (excluding reserved memory) allowed on a node before automatic migration of shards from it to free more memory | +| bigstore_migrate_node_threshold_p | integer | Minimum free memory (excluding reserved memory) allowed on a node before automatic migration of shards from it to free more memory | +| bigstore_provision_node_threshold | integer | Minimum free memory (excluding reserved memory) allowed on a node before new shards can no longer be added to it | +| bigstore_provision_node_threshold_p | integer | Minimum free memory (excluding reserved memory) allowed on a node before new shards can no longer be added to it | +| data_internode_encryption | boolean | Enable/deactivate encryption of the data plane internode communication | +| db_conns_auditing | boolean | [Audit connections]({{< relref "/operate/rs/security/audit-events" >}}) for new databases by default if set to true. | +| default_concurrent_restore_actions | integer | Default number of restore actions allowed at the same time. Set to 0 to allow any number of simultaneous restore actions. | +| default_fork_evict_ram | boolean | If true, the bdbs should evict data from RAM to ensure successful replication or persistence | +| default_non_sharded_proxy_policy | `single`

`all-master-shards`

`all-nodes` | Default proxy_policy for newly created non-sharded databases' endpoints | +| default_oss_sharding | boolean (default: false) | Default hashing policy to use for new databases. This field is for future use only and should not be changed. | +| default_provisioned_redis_version | string | Default Redis version | +| default_sharded_proxy_policy | `single`

`all-master-shards`

`all-nodes` | Default proxy_policy for newly created sharded databases' endpoints | +| default_shards_placement | `dense`
`sparse` | Default shards_placement for newly created databases | +| default_tracking_table_max_keys_policy | integer (default: 1000000) | Defines the default value of the client-side caching invalidation table size for new databases. 0 makes the cache unlimited. | +| endpoint_rebind_propagation_grace_time | integer | Time to wait between the addition and removal of a proxy | +| failure_detection_sensitivity | `high`
`low` | Predefined thresholds and timeouts for failure detection (previously known as `watchdog_profile`)
• `high` (previously `local-network`) – high failure detection sensitivity, lower thresholds, faster failure detection and failover
• `low` (previously `cloud`) – low failure detection sensitivity, higher tolerance for latency variance (also called network jitter) | +| hide_user_data_from_log | boolean (default: false) | Set to `true` to enable the `hide-user-data-from-log` Redis configuration setting, which avoids logging user data | +| login_lockout_counter_reset_after | integer | Number of seconds that must elapse between failed sign in attempts before the lockout counter is reset to 0. | +| login_lockout_duration | integer | Duration (in secs) of account lockout. If set to 0, the account lockout will persist until released by an admin. | +| login_lockout_threshold | integer | Number of failed sign in attempts allowed before locking a user account | +| max_saved_events_per_type | integer | Maximum saved events per event type | +| max_simultaneous_backups | integer (default: 4) | Maximum number of backup processes allowed at the same time | +| parallel_shards_upgrade | integer | Maximum number of shards to upgrade in parallel | +| persistence_cleanup_scan_interval | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the Redis cleanup schedule | +| persistent_node_removal | boolean | When removing a node, wait for persistence files to be created for all migrated shards | +| rack_aware | boolean | Cluster operates in a rack-aware mode | +| redis_migrate_node_threshold | integer | Minimum free memory (excluding reserved memory) allowed on a node before automatic migration of shards from it to free more memory | +| redis_migrate_node_threshold_p | integer | Minimum free memory (excluding reserved memory) allowed on a node before automatic migration of shards from it to free more memory | +| redis_provision_node_threshold | integer | Minimum free memory (excluding reserved memory) allowed on a node before new shards can no longer be added to it | +| redis_provision_node_threshold_p | integer | Minimum free memory (excluding reserved memory) allowed on a node before new shards can no longer be added to it | +| redis_upgrade_policy | **`major`**
`latest` | Create/upgrade Redis Enterprise software on databases in the cluster by compatibility with major versions or latest versions of Redis Open Source | +| resp3_default | boolean (default: true) | Determines the default value of the `resp3` option upon upgrading a database to version 7.2 | +| shards_overbooking | boolean | If true, all databases' memory_size is ignored during shards placement | +| show_internals | boolean | Show internal databases (and their shards and endpoints) REST APIs | +| slave_ha | boolean | Enable the replica high-availability mechanism. Deprecated as of Redis Enterprise Software v7.2.4. | +| slave_ha_bdb_cooldown_period | integer | Time in seconds between runs of the replica high-availability mechanism on different nodes on the same database | +| slave_ha_cooldown_period | integer | Time in seconds between runs of the replica high-availability mechanism on different nodes on the same database | +| slave_ha_grace_period | integer | Time in seconds between a node failure and when the replica high-availability mechanism starts relocating shards | +--- +Title: Module object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a Redis module +linkTitle: module +weight: $weight +--- + +Represents a [Redis module]({{< relref "/operate/oss_and_stack/stack-with-enterprise" >}}). + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | string | Cluster unique ID of module | +| architecture | string | Architecture used to compile the module | +| author | string | Module creator | +| capabilities | array of strings | List of capabilities supported by this module | +| capability_name | string | Short description of module functionality | +| command_line_args | string | Command line arguments passed to the module | +| compatible_redis_version | string | Redis version required by this module | +| config_command | string | Name of command to configure module arguments at runtime | +| dependencies | object dependencies | Module dependencies | +| description | string | Short description of the module +| display_name | string | Name of module for display purposes | +| email | string | Author's email address | +| homepage | string | Module's homepage | +| is_bundled | boolean | Whether module came bundled with a version of Redis Enterprise | +| license | string | Module is distributed under this license +| min_redis_pack_version | string | Minimum Redis Enterprise Software cluster version required by this module | +| min_redis_version | string | Minimum Redis database version required by this module. Only relevant for Redis databases earlier than v7.4. | +| module_file | string | Module filename | +| module_name | `search`
`ReJSON`
`graph`
`timeseries`
`bf` | Module's name
| +| os | string | Operating system used to compile the module | +| os_list | array of strings | List of supported operating systems | +| semantic_version | string | Module's semantic version | +| sha256 | string | SHA256 of module binary | +| version | integer | Module's version | +--- +Title: Suffix object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a DNS suffix +linkTitle: suffix +weight: $weight +--- + +An API object that represents a DNS suffix in the cluster. + +| Name | Type/Value | Description | +|------|------------|-------------| +| default | boolean | Suffix is the default suffix for the cluster (read-only) | +| internal | boolean | Does the suffix point to internal IP addresses (read-only) | +| mdns | boolean | Support for multicast DNS (read-only) | +| name | string | Unique suffix name that represents its zone (read-only) | +| slaves | array of strings | Frontend DNS servers to be updated by this suffix | +| use_aaaa_ns | boolean | Suffix uses AAAA NS entries (read-only) | +--- +Title: Database connection auditing configuration object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for database connection auditing settings +linkTitle: db_conns_auditing_config +weight: $weight +--- + +Database connection auditing configuration + +| Name | Type/Value | Description | +|------|------------|-------------| +| audit_address | string | TCP/IP address where one can listen for notifications. | +| audit_port | integer | Port where one can listen for notifications. | +| audit_protocol | `TCP`
`local` | Protocol used to process notifications. For production systems, `TCP` is the only valid value. | +| audit_reconnect_interval | integer | Interval (in seconds) between attempts to reconnect to the listener. Default is 1 second. | +| audit_reconnect_max_attempts | integer | Maximum number of attempts to reconnect. Default is 0 (infinite). | +--- +Title: LDAP object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that contains the cluster's LDAP configuration +linkTitle: ldap +weight: $weight +--- + +An API object that represents the cluster's [LDAP]({{< relref "/operate/rs/security/access-control/ldap" >}}) configuration. + +| Name | Type/Value | Description | +|------|------------|-------------| +| bind_dn | string | DN used when binding with the LDAP server to run queries | +| bind_pass | string | Password used when binding with the LDAP server to run queries | +| ca_cert | string | PEM-encoded CA certificate(s) used to validate TLS connections to the LDAP server | +| cache_ttl | integer (default: 300) | Maximum TTL (in seconds) of cached entries | +| control_plane | boolean (default: false) | Use LDAP for user authentication/authorization in the control plane | +| data_plane | boolean (default: false) | Use LDAP for user authentication/authorization in the data plane | +| directory_timeout_s | integer (range: 5-60) (default: 5) | The connection timeout to the LDAP server when authenticating a user, in seconds | +| dn_group_attr | string | The name of an attribute of the LDAP user entity that contains a list of the groups that user belongs to. (Mutually exclusive with "dn_group_query") | +| dn_group_query | complex object | An LDAP search query for mapping from a user DN to the groups the user is a member of. The substring "%D" in the filter will be replaced with the user's DN. (Mutually exclusive with "dn_group_attr") | +| starttls | boolean (default: false) | Use StartTLS negotiation for the LDAP connection | +| uris | array of strings | URIs of LDAP servers that only contain the schema, host, and port | +| user_dn_query | complex object | An LDAP search query for mapping from a username to a user DN. The substring "%u" in the filter will be replaced with the username. (Mutually exclusive with "user_dn_template") | +| user_dn_template | string | A string template that maps between the username, provided to the cluster for authentication, and the LDAP DN. The substring "%u" will be replaced with the username. (Mutually exclusive with "user_dn_query") | +--- +Title: CRDB cluster info object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents Active-Active cluster info +linkTitle: cluster_info +weight: $weight +--- + +Configuration details for a cluster that is part of an Active-Active database. + +| Name | Type/Value | Description | +|------|------------|-------------| +| credentials | {{}} +{ + "username": string, + "password": string +} {{}} | Cluster access credentials (required) | +| name | string | Cluster fully qualified name, used to uniquely identify the cluster. Typically this is the same as the hostname used in the URL, although in some configurations the URL may point to a different name/address. (required) | +| replication_endpoint | string | Address to use for peer replication. If not specified, it is assumed that standard cluster naming conventions apply. 
| +| replication_tls_sni | string | Cluster SNI for TLS connections | +| url | string | Cluster access URL (required) | +--- +Title: CRDB health report configuration object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents the database configuration to include in an + Active-Active database health report. +linkTitle: health_report_configuration +weight: $weight +--- + +An object that represents the database configuration to include in an Active-Active database health report. + +| Name | Type/Value | Description | +|------|------------|-------------| +| causal_consistency | boolean | Enables causal consistency across Active-Active replicas | +| encryption | boolean | Intercluster encryption | +| featureset_version | integer | CRDB active FeatureSet version | +| instances | {{}}[{ + // Unique instance ID + "id": integer, + // Local database instance ID + "db_uid": string, + "cluster": { + // Cluster FQDN + "name": string + // Cluster access URL + "url": string + } +}, ...] {{}} | Local database instances | +| name | string | Name of database | +| protocol_version | integer | CRDB active protocol version | +| status | string | Current status of the configuration.
Possible values:
**posted:** Configuration was posted to all replicas
**ready:** All replicas have finished processing posted configuration (create a database)
**committed:** Posted configuration is now active on all replicas
**commit-completed:** All replicas have finished processing committed configuration (database is active)
**failed:** Configuration failed to post | +| version | integer | Database configuration version | +--- +Title: CRDB health report object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents an Active-Active database health report. +hideListLinks: true +linkTitle: health_report +weight: $weight +--- + +An object that represents an Active-Active database health report. + +| Name | Type/Value | Description | +|------|------------|-------------| +| active_config_version | integer | Active configuration version | +| cluster_name | string | Name of local Active-Active cluster | +| configurations | array of [health_report_configuration]({{< relref "/operate/rs/references/rest-api/objects/crdb/health_report/health_report_configuration" >}}) objects | Stored database configurations | +| connection_error | string | Error string if remote cluster is not available | +| connections | {{}} +[{ + "name": string, + "replication_links": [ + { + "link_uid": "bdb_uid:replica_uid", + "status": "up | down" + } ], + "status": string +}, ...] {{}} | Connections to other clusters and their statuses. A replication link's `bdb_uid` is the unique ID of a local database instance ([bdb]({{< relref "/operate/rs/references/rest-api/objects/bdb" >}})) in the current cluster. The `replica_uid` is the unique ID of the database's remote replica, located in the connected cluster. | +| name | string | Name of the Active-Active database | +--- +Title: CRDB database config object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents the database configuration +linkTitle: database_config +weight: $weight +--- + +An object that represents the database configuration. + +| Name | Type/Value | Description | +|------|------------|-------------| +| aof_policy | **'appendfsync-every-sec'**
'appendfsync-always' | Policy for Append-Only File data persistence | +| authentication_admin_pass | string | Administrative databases access token | +| authentication_redis_pass | string | Redis AUTH password (deprecated as of Redis Enterprise v7.2, replaced with the multiple passwords feature in version 6.0.X) | +| bigstore | boolean (default: false) | Database driver is Auto Tiering | +| bigstore_ram_size | integer (default: 0) | Memory size of the RAM portion for Auto Tiering databases | +| data_persistence | 'disabled'
'snapshot'
**'aof'** | Database on-disk persistence policy. For snapshot persistence, a [snapshot_policy]({{< relref "/operate/rs/references/rest-api/objects/bdb/snapshot_policy" >}}) must be provided | +| enforce_client_authentication | **'enabled'**
'disabled' | Require authentication of client certificates for SSL connections to the database. If enabled, a certificate should be provided in either `authentication_ssl_client_certs` or `authentication_ssl_crdt_certs` | +| max_aof_file_size | integer | Maximum AOF file size in bytes | +| max_aof_load_time | integer (default: 3600) | Maximum AOF reload time in seconds | +| memory_size | integer (default: 0) | Database memory size limit in bytes. 0 is unlimited. | +| oss_cluster | boolean (default: false) | Enables OSS Cluster mode | +| oss_cluster_api_preferred_ip_type | 'internal'
'external' | Indicates preferred IP type in OSS cluster API | +| oss_sharding | boolean (default: false) | An alternative to `shard_key_regex` for using the common case of the OSS shard hashing policy | +| port | integer | TCP port for database access | +| proxy_policy | 'single'
'all-master-shards'
'all-nodes' | The policy used for proxy binding to the endpoint | +| rack_aware | boolean (default: false) | Require the database to be always replicated across multiple racks | +| replication | boolean (default: true) | Database replication | +| sharding | boolean (default: false) | Cluster mode (server-side sharding). When true, shard hashing rules must be provided by either `oss_sharding` or `shard_key_regex` | +| shard_key_regex | `[{ "regex": string }, ...]` | Custom keyname-based sharding rules (required if sharding is enabled)

To use the default rules you should set the value to:
`[{"regex": ".*\\{(?.*)\\}.*"}, {"regex": "(?.*)"}]` | +| shards_count | integer (range: 1-512) (default: 1) | Number of database shards | +| shards_placement | 'dense'
'sparse' | Control the density of shards
Values:
**'dense'**: Shards reside on as few nodes as possible
**'sparse'**: Shards reside on as many nodes as possible | +| snapshot_policy | array of [snapshot_policy]({{< relref "/operate/rs/references/rest-api/objects/bdb/snapshot_policy" >}}) objects | Policy for snapshot-based data persistence. A dataset snapshot will be taken every N secs if there are at least M writes changes in the dataset. | +| tls_mode | 'enabled'
**'disabled'**
'replica_ssl' | Encrypt communication | +--- +Title: CRDB instance info object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents Active-Active instance info +linkTitle: instance_info +weight: $weight +--- + +An object that represents Active-Active instance info. + +| Name | Type/Value | Description | +|------|------------|-------------| +| id | integer | Unique instance ID | +| cluster | [CRDB cluster_info]({{< relref "/operate/rs/references/rest-api/objects/crdb/cluster_info" >}}) object | | +| compression | integer | Compression level when syncing from this source | +| db_config | [CRDB database_config]({{< relref "/operate/rs/references/rest-api/objects/crdb/database_config" >}}) object | Database configuration | +| db_uid | string | ID of local database instance. This field is likely to be empty for instances other than the local one. | +--- +Title: CRDB modify request object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object to update an Active-Active database +linkTitle: modify_request +weight: $weight +--- + +An object to update an Active-Active database. + +| Name | Type/Value | Description | +|------|------------|-------------| +| add_instances | array of [CRDB instance_info]({{< relref "/operate/rs/references/rest-api/objects/crdb/instance_info" >}}) objects | List of new CRDB instances | +| crdb | [CRDB]({{< relref "/operate/rs/references/rest-api/objects/crdb" >}}) object | An object that represents an Active-Active database | +| force_update | boolean | (Warning: This flag can cause unintended and dangerous changes) Force the configuration update and increment the configuration version even if there is no change to the configuration parameters. If you use force, you can mistakenly cause the other instances to update to the configuration version even though it was not changed. | +| remove_instances | array of integers | List of unique instance IDs | +| remove_instances.force_remove | boolean | Force removal of instance from the Active-Active database. Before we remove an instance from an Active-Active database, all of the operations that the instance received from clients must be propagated to the other instances. This is the safe method to remove an instance from the Active-Active database. If the instance does not have connectivity to other instances, the propagation fails and removal fails. To remove an instance that does not have connectivity to other instances, you must use the force flag. The removed instance keeps its data and configuration for the instance. After you remove an instance by force, you must use the purge_instances API on the removed instance. | +--- +Title: CRDB object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents an Active-Active database +hideListLinks: true +linkTitle: crdb +weight: $weight +--- + +An object that represents an Active-Active database. 
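+
+For illustration only, the following is a sketch of what a `crdb` object might look like, using placeholder values and abbreviated `default_db_config` and `instances` entries. See the field table below and the linked `instance_info` and `database_config` objects for the full structure.
+
+```json
+{
+  "guid": "1a2b3c4d-0000-0000-0000-000000000000",
+  "name": "example-crdb",
+  "encryption": false,
+  "causal_consistency": false,
+  "featureset_version": 5,
+  "protocol_version": 1,
+  "default_db_config": { "memory_size": 1073741824, "port": 12000, "replication": true },
+  "instances": [
+    { "id": 1, "cluster": { "name": "cluster1.example.com", "url": "https://cluster1.example.com:9443" } },
+    { "id": 2, "cluster": { "name": "cluster2.example.com", "url": "https://cluster2.example.com:9443" } }
+  ],
+  "local_databases": [ { "bdb_uid": "5", "id": 1 } ]
+}
+```
+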
+ +| Name | Type/Value | Description | +|------|------------|-------------| +| guid | string | The global unique ID of the Active-Active database | +| causal_consistency | boolean | Enables causal consistency across CRDT instances | +| default_db_config| [CRDB database_config]({{< relref "/operate/rs/references/rest-api/objects/crdb/database_config" >}}) object | Default database configuration | +| encryption | boolean | Encrypt communication | +| featureset_version | integer | Active-Active database active FeatureSet version +| instances | array of [CRDB instance_info]({{< relref "/operate/rs/references/rest-api/objects/crdb/instance_info" >}}) objects | | +| local_databases | {{}}[{ + "bdb_uid": string, + "id": integer +}, ...] {{}} | Mapping of instance IDs for local databases to local BDB IDs | +| name | string | Name of Active-Active database | +| protocol_version | integer | Active-Active database active protocol version | +--- +Title: OCSP object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents the cluster's OCSP configuration +linkTitle: ocsp +weight: $weight +--- + +An API object that represents the cluster's OCSP configuration. + +| Name | Type/Value | Description | +|------|------------|-------------| +| ocsp_functionality | boolean (default: false) | Enables or turns off OCSP for the cluster | +| query_frequency | integer (range: 60-86400) (default: 3600) | The time interval in seconds between OCSP queries to check the certificate’s status | +| recovery_frequency | integer (range: 60-86400) (default: 60) | The time interval in seconds between retries after the OCSP responder returns an invalid status for the certificate | +| recovery_max_tries | integer (range: 1-100) (default: 5) | The number of retries before the validation query fails and invalidates the certificate | +| responder_url | string | The OCSP server URL embedded in the proxy certificate (if available) (read-only) | +| response_timeout | integer (range: 1-60) (default: 1) | The time interval in seconds to wait for a response before timing out | +--- +Title: Certificate rotation job settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the cert_rotation_job_settings object used with Redis Enterprise + Software REST API calls. +linkTitle: cert_rotation_job_settings +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| cron_expression | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the certificate rotation schedule | +| enabled | boolean (default: true) | Indicates whether this job is enabled | +| expiry_days_before_rotation | integer, (range: 1-90) (default: 60) | Number of days before a certificate expires before rotation | +--- +Title: Rotate CCS job settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the rotate_ccs_job_settings object used with Redis Enterprise + Software REST API calls. 
+linkTitle: rotate_ccs_job_settings +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| cron_expression | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the CCS rotation schedule | +| enabled | boolean (default: true) | Indicates whether this job is enabled | +| file_suffix | string (default: 5min) | String added to the end of the rotated RDB files | +| rotate_max_num | integer, (range: 1-100) (default: 24) | The maximum number of saved RDB files | +--- +Title: BDB usage report job settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the bdb_usage_report_job_settings object used with Redis Enterprise Software REST API calls. +linkTitle: bdb_usage_report_job_settings +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| cron_expression | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the database usage report schedule | +| enabled | boolean (default: true) | Indicates whether this job is enabled | +| file_retention_days | integer, 1-1000 (default: 365) | Number of days after a file is closed before it is deleted | +--- +Title: Redis cleanup job settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the redis_cleanup_job_settings object used with Redis Enterprise + Software REST API calls. +linkTitle: redis_cleanup_job_settings +weight: $weight +--- + +Deprecated and replaced with `persistence_cleanup_scan_interval`. + +| Name | Type/Value | Description | +|------|------------|-------------| +| cron_expression | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the Redis cleanup schedule | +--- +Title: Log rotation job settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the log_rotation_job_settings object used with Redis Enterprise + Software REST API calls. +linkTitle: log_rotation_job_settings +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| cron_expression | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the log rotation schedule | +| enabled | boolean (default: true) | Indicates whether this job is enabled | +--- +Title: Backup job settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the backup_job_settings object used with Redis Enterprise Software + REST API calls. +linkTitle: backup_job_settings +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| cron_expression | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the backup schedule | +| enabled | boolean (default: true) | Indicates whether this job is enabled | +--- +Title: Node checks job settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the node_checks_job_settings object used with Redis Enterprise + Software REST API calls. 
+linkTitle: node_checks_job_settings +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| cron_expression | string | [CRON expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) that defines the node checks schedule | +| enabled | boolean (default: true) | Indicates whether this job is enabled | +--- +Title: Job scheduler object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for job scheduler settings +hideListLinks: true +linkTitle: job_scheduler +weight: $weight +--- + +An API object that represents the job scheduler settings in the cluster. + +| Name | Type/Value | Description | +|------|------------|-------------| +| backup_job_settings | [backup_job_settings]({{< relref "/operate/rs/references/rest-api/objects/job_scheduler/backup_job_settings" >}}) object | Backup job settings | +| bdb_usage_report_job_settings | [bdb_usage_report_job_settings]({{< relref "/operate/rs/references/rest-api/objects/job_scheduler/bdb_usage_report_job_settings" >}}) object | Job settings for database usage reports | +| cert_rotation_job_settings | [cert_rotation_job_settings]({{< relref "/operate/rs/references/rest-api/objects/job_scheduler/cert_rotation_job_settings" >}}) object | Job settings for internal certificate rotation | +| log_rotation_job_settings | [log_rotation_job_settings]({{< relref "/operate/rs/references/rest-api/objects/job_scheduler/log_rotation_job_settings" >}}) object | Log rotation job settings | +| node_checks_job_settings | [node_checks_job_settings]({{< relref "/operate/rs/references/rest-api/objects/job_scheduler/node_checks_job_settings" >}}) object | Node checks job settings | +| redis_cleanup_job_settings | [redis_cleanup_job_settings]({{< relref "/operate/rs/references/rest-api/objects/job_scheduler/redis_cleanup_job_settings" >}}) object | Redis cleanup job settings (deprecated as of Redis Enterprise v6.4.2, replaced with persistence_cleanup_scan_interval) | +| rotate_ccs_job_settings | [rotate_ccs_job_settings]({{< relref "/operate/rs/references/rest-api/objects/job_scheduler/rotate_ccs_job_settings" >}}) object | Rotate CCS job settings | +--- +Title: User object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An API object that represents a Redis Enterprise user +linkTitle: user +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | User's unique ID | +| account_id | integer | SM account ID | +| action_uid | string | Action UID. If it exists, progress can be tracked by the `GET` `/actions/{uid}` API request (read-only) | +| auth_method | **'regular'**
'certificate'
'entraid' | User's authentication method | +| bdbs_email_alerts | complex object | UIDs of databases that user will receive alerts for | +| certificate_subject_line | string | The certificate’s subject line as defined by RFC2253. Used for certificate-based authentication users only. | +| cluster_email_alerts | boolean | Activate cluster email alerts for a user | +| email | string | User's email (pattern matching only ASCII characters) | +| email_alerts | boolean (default: true) | Activate email alerts for a user | +| name | string | User's name (pattern does not allow non-ASCII and special characters &,\<,>,") | +| password | string | User's password. If `password_hash_method` is set to `1`, the password should be hashed using SHA-256. The format before hashing is `username:clustername:password`. | +| password_hash_method | '1' | Used when password is passed pre-hashed to specify the hashing method | +| password_issue_date | string | The date in which the password was set (read-only) | +| role | 'admin'
'cluster_member'
'cluster_viewer'
'db_member'
**'db_viewer'**
'user_manager'
'none' | User's [role]({{< relref "/operate/rs/references/rest-api/permissions#roles" >}}) | +| role_uids | array of integers | UIDs of user's roles for role-based access control | +| status | 'active'
'locked'
'password_expired' | User sign-in status (read-only)
**active**: able to sign in
**locked**: unable to sign in
**password_expired**: unable to sign in because the password expired | +--- +Title: BDB alert settings with threshold object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the bdb_alert_settings_with_threshold object used with Redis + Enterprise Software REST API calls. +linkTitle: bdb_alert_settings_with_threshold +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| enabled | boolean (default: false) | Alert enabled or disabled | +| threshold | string | Threshold for alert going on/off | +--- +Title: Database alerts settings object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object for database alerts configuration +hideListLinks: true +linkTitle: db_alerts_settings +weight: $weight +--- + +An API object that represents the database alerts configuration. + +| Name | Type/Value | Description | +|------|------------|-------------| +| bdb_backup_delayed | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Periodic backup has been delayed for longer than specified threshold value (minutes) | +| bdb_crdt_src_high_syncer_lag | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | CRDB source sync lag is higher than specified threshold value (seconds) | +| bdb_crdt_src_syncer_connection_error | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | CRDB source sync had a connection error while trying to connect to replica source | +| bdb_crdt_src_syncer_general_error | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | CRDB sync encountered in general error | +| bdb_high_latency | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Latency is higher than specified threshold value (microsec) | +| bdb_high_syncer_lag | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Replica of sync lag is higher than specified threshold value (seconds) (deprecated as of Redis Enterprise v5.0.1) | +| bdb_high_throughput | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Throughput is higher than specified threshold value (requests / sec) | +| bdb_long_running_action | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | An alert for state machines that are running for too long | +| bdb_low_throughput | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Throughput is lower than specified threshold value (requests / sec) | +| bdb_ram_dataset_overhead | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Dataset RAM overhead of a shard has reached the threshold value (% of its RAM 
limit) | +| bdb_ram_values | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Percent of values kept in a shard's RAM is lower than (% of its key count) | +| bdb_replica_src_high_syncer_lag | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Replica of source sync lag is higher than specified threshold value (seconds) | +| bdb_replica_src_syncer_connection_error | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Replica of source sync has connection error while trying to connect replica source | +| bdb_replica_src_syncer_general_error | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Replica of sync encountered in general error | +| bdb_shard_num_ram_values | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Number of values kept in a shard's RAM is lower than (values) | +| bdb_size | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Dataset size has reached the threshold value \(% of the memory limit) | +| bdb_syncer_connection_error | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Replica of sync has connection error while trying to connect replica source (deprecated as of Redis Enterprise v5.0.1) | +| bdb_syncer_general_error | [bdb_alert_settings_with_threshold]({{< relref "/operate/rs/references/rest-api/objects/db_alerts_settings/bdb_alert_settings_with_threshold" >}}) object | Replica of sync encountered in general error (deprecated as of Redis Enterprise v5.0.1) | +--- +Title: BDB dataset import sources object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the bdb dataset_import_sources object used with Redis Enterprise + Software REST API calls. +linkTitle: dataset_import_sources +weight: $weight +--- + +You can import data to a database from the following location types: + +- HTTP/S +- FTP +- SFTP +- Amazon S3 +- Google Cloud Storage +- Microsoft Azure Storage +- NAS/Local Storage + +The source file to import should be in the [RDB]({{< relref "/operate/rs/databases/configure/database-persistence.md" >}}) format. It can also be in a compressed (gz) RDB file. + +Supply an array of dataset import source objects to import data from multiple files. + +## Basic parameters + +For all import location objects, you need to specify the location type via the `type` field. + +| Location type | "type" value | +|---------------|--------------| +| FTP/S | "url" | +| SFTP | "sftp" | +| Amazon S3 | "s3" | +| Google Cloud Storage | "gs" | +| Microsoft Azure Storage | "abs" | +| NAS/Local Storage | "mount_point" | + +## Location-specific parameters + +Any additional required parameters may differ based on the import location type. 
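+
+For example, a hypothetical `dataset_import_sources` array that imports two RDB files from an Amazon S3 bucket might look like the following sketch. All values are placeholders; the required keys for each location type are listed in the sections below.
+
+```json
+[
+  {
+    "type": "s3",
+    "access_key_id": "<access-key-id>",
+    "secret_access_key": "<secret-access-key>",
+    "bucket_name": "example-bucket",
+    "subdir": "backups",
+    "filename": "dump-part-1.rdb"
+  },
+  {
+    "type": "s3",
+    "access_key_id": "<access-key-id>",
+    "secret_access_key": "<secret-access-key>",
+    "bucket_name": "example-bucket",
+    "subdir": "backups",
+    "filename": "dump-part-2.rdb"
+  }
+]
+```
+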
+ +### FTP + +| Key name | Type | Description | +|----------|------|-------------| +| url | string | A URI that represents the FTP/S location with the following format: `ftp://user:password@host:port/path/`. The user and password can be omitted if not needed. | + +### SFTP + +| Key name | Type | Description | +|----------|------|-------------| +| key | string | SSH private key to secure the SFTP server connection. If you do not specify an SSH private key, the autogenerated private key of the cluster is used and you must add the SSH public key of the cluster to the SFTP server configuration. (optional) | +| sftp_url | string | SFTP URL in the format: `sftp://user:password@host[:port]/path/filename.rdb`. The default port number is 22 and the default path is '/'. | + +### AWS S3 + +| Key name | Type | Description | +|----------|------|-------------| +| access_key_id | string | The AWS Access Key ID with access to the bucket | +| bucket_name | string | S3 bucket name | +| filename | string | RDB filename, including the file extension. | +| region_name | string | Amazon S3 region name (optional) | +| secret_access_key | string | The AWS Secret Access that matches the Access Key ID | +| subdir | string | Path to the backup directory in the S3 bucket (optional) | + +### Google Cloud Storage + +| Key name | Type | Description | +|----------|------|-------------| +| bucket_name | string | Cloud Storage bucket name | +| client_email | string | Email address for the Cloud Storage client ID | +| client_id | string | Cloud Storage client ID with access to the Cloud Storage bucket | +| filename | string | RDB filename, including the file extension. | +| private_key | string | Private key for the Cloud Storage matching the private key ID | +| private_key_id | string | Cloud Storage private key ID with access to the Cloud Storage bucket | +| subdir | string | Path to the backup directory in the Cloud Storage bucket (optional) | + +### Azure Blob Storage + +| Key name | Type | Description | +|----------|------|-------------| +| account_key | string | Access key for the storage account | +| account_name | string | Storage account name with access to the container | +| container | string | Blob Storage container name | +| filename | string | RDB filename, including the file extension. | +| sas_token | string | Token to authenticate with shared access signature | +| subdir | string | Path to the backup directory in the Blob Storage container (optional) | + +{{}} +`account_key` and `sas_token` are mutually exclusive +{{}} + +### NAS/Local Storage + +| Key name | Type | Description | +|----------|------|-------------| +| path | string | Path to the locally mounted filename to import. You must create the mount point on all nodes, and the `redislabs:redislabs` user must have read permissions on the local mount point. +--- +Title: BDB status field +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the bdb status field used with Redis Enterprise Software REST + API calls. +linkTitle: status +weight: $weight +--- + +The BDB status field is a read-only field that represents the database status. + +Possible status values: + +| Status | Description | Possible next status | +|--------|-------------|----------------------| +| 'active' | Database is active and no special action is in progress | 'active-change-pending'
'import-pending'
'delete-pending' | +| 'active-change-pending' | |'active' | +| 'creation-failed' | Initial database creation failed | | +| 'delete-pending' | Database deletion is in progress | | +| 'import-pending' | Dataset import is in progress | 'active' | +| 'pending' | Temporary status during database creation | 'active'
'creation-failed' | +| 'recovery' | Not currently relevant (intended for future use) | | + +{{< image filename="/images/rs/rest-api-bdb-status.png#no-click" alt="BDB status" >}} +--- +Title: BDB backup/export location object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the bdb backup_location/export_location object used with Redis + Enterprise Software REST API calls. +linkTitle: backup_location/export_location +weight: $weight +--- + +You can back up or export a database's dataset to the following types of locations: + +- FTP/S +- SFTP +- Amazon S3 +- Google Cloud Storage +- Microsoft Azure Storage +- NAS/Local Storage + +## Basic parameters + +For all backup/export location objects, you need to specify the location type via the `type` field. + +| Location type | "type" value | +|---------------|--------------| +| FTP/S | "url" | +| SFTP | "sftp" | +| Amazon S3 | "s3" | +| Google Cloud Storage | "gs" | +| Microsoft Azure Storage | "abs" | +| NAS/Local Storage | "mount_point" | + +## Location-specific parameters + +Any additional required parameters may differ based on the backup/export location type. + +### FTP + +| Key name | Type | Description | +|----------|------|-------------| +| url | string | A URI that represents a FTP/S location with the following format: `ftp://user:password@host:port/path/`. The user and password can be omitted if not needed. | + +### SFTP + +| Key name | Type | Description | +|----------|------|-------------| +| key | string | SSH private key to secure the SFTP server connection. If you do not specify an SSH private key, the autogenerated private key of the cluster is used, and you must add the SSH public key of the cluster to the SFTP server configuration. (optional) | +| sftp_url | string | SFTP URL in the format: `sftp://user:password@host[:port][/path/]`. The default port number is 22 and the default path is '/'. 
| + +### AWS S3 + +| Key name | Type | Description | +|----------|------|-------------| +| access_key_id | string | The AWS Access Key ID with access to the bucket | +| bucket_name | string | S3 bucket name | +| region_name | string | Amazon S3 region name (optional) | +| secret_access_key | string | The AWS Secret Access Key that matches the Access Key ID | +| subdir | string | Path to the backup directory in the S3 bucket (optional) | + +### Google Cloud Storage + +| Key name | Type | Description | +|----------|------|-------------| +| bucket_name | string | Cloud Storage bucket name | +| client_email | string | Email address for the Cloud Storage client ID | +| client_id | string | Cloud Storage client ID with access to the Cloud Storage bucket | +| private_key | string | Cloud Storage private key that matches the private key ID | +| private_key_id | string | Cloud Storage private key ID with access to the Cloud Storage bucket | +| subdir | string | Path to the backup directory in the Cloud Storage bucket (optional) | + +### Azure Blob Storage + +| Key name | Type | Description | +|----------|------|-------------| +| account_key | string | Access key for the storage account | +| account_name | string | Storage account name with access to the container | +| container | string | Blob Storage container name | +| sas_token | string | Token to authenticate with shared access signature | +| subdir | string | Path to the backup directory in the Blob Storage container (optional) | + +{{}} +`account_key` and `sas_token` are mutually exclusive +{{}} + +### NAS/Local Storage + +| Key name | Type | Description | +|----------|------|-------------| +| path | string | Path to the local mount point. You must create the mount point on all nodes, and the `redislabs:redislabs` user must have read and write permissions on the local mount point. | +--- +Title: Syncer sources object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the syncer_sources object used with Redis Enterprise Software + REST API calls. +linkTitle: syncer_sources +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Unique ID of this source | +| client_cert | string | Client certificate to use if encryption is enabled | +| client_key | string | Client key to use if encryption is enabled | +| compression | integer, (range: 0-6) | Compression level for the replication link | +| encryption | boolean | Encryption enabled/disabled | +| lag | integer | Lag in milliseconds between source and destination (while synced) | +| last_error | string | Last error encountered when syncing from the source | +| last_update | string | Time when we last received an update from the source | +| rdb_size | integer | The source's RDB size to be transferred during the syncing phase | +| rdb_transferred | integer | Number of bytes transferred from the source's RDB during the syncing phase | +| replication_tls_sni | string | Replication TLS server name indication | +| server_cert | string | Server certificate to use if encryption is enabled | +| status | string | Sync status of this source | +| uri | string | Source Redis URI | +--- +Title: Snapshot policy object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the snapshot_policy object used with Redis Enterprise Software + REST API calls. 
+linkTitle: snapshot_policy +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| secs | integer | Interval in seconds between snapshots | +| writes | integer | Number of write changes required to trigger a snapshot | +--- +Title: BDB replica sync field +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the bdb replica_sync field used with Redis Enterprise Software + REST API calls. +linkTitle: replica_sync +weight: $weight +--- + +The BDB `replica_sync` field relates to the [Replica Of]({{< relref "/operate/rs/databases/import-export/replica-of/create.md" >}}) feature, which enables the creation of a Redis database (single- or multi-shard) that synchronizes data from another Redis database (single- or multi-shard). + +You can use the `replica_sync` field to enable, disable, or pause the [Replica Of]({{< relref "/operate/rs/databases/import-export/replica-of/create.md" >}}) sync process. The BDB `crdt_sync` field has a similar purpose for the Redis CRDB. + +Possible BDB sync values: + +| Status | Description | Possible next status | +|--------|-------------|----------------------| +| 'disabled' | (default value) Disables the sync process and represents that no sync is currently configured or running. | 'enabled' | +| 'enabled' | Enables the sync process and represents that the process is currently active. | 'stopped'
'paused' | +| 'paused' | Pauses the sync process. The process is configured but is not currently executing any sync commands. | 'enabled'
'stopped' | +| 'stopped' | An unrecoverable error occurred during the sync process, which caused the system to stop the sync. | 'enabled' | + +{{< image filename="/images/rs/rest-api-bdb-sync.png#no-click" alt="BDB sync" >}} + +When the sync is in the 'stopped' or 'paused' state, then the `last_error` field in the relevant source entry in the `sync_sources` "status" field contains the detailed error message. +--- +Title: BDB replica sources status field +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the bdb replica_sources status field used with Redis Enterprise + Software REST API calls. +linkTitle: replica_sources status +weight: $weight +--- + +The `replica_sources` status field relates to the [Replica Of]({{< relref "/operate/rs/databases/import-export/replica-of/create.md" >}}) feature, which enables the creation of a Redis database (single- or multi-shard) that synchronizes data from another Redis database (single- or multi-shard). + +The status field represents the Replica Of sync status for a specific sync source. + +Possible status values: + +| Status | Description | Possible next status | +|--------|-------------|----------------------| +| 'out-of-sync' | Sync process is disconnected from source and trying to reconnect | 'syncing' | +| 'syncing' | Sync process is in progress | 'in-sync'
'out-of-sync' | +| 'in-sync' | Sync process finished successfully, and new commands are syncing on a regular basis | 'syncing'
'out-of-sync' + +{{< image filename="/images/rs/rest-api-replica-sources-status.png#no-click" alt="Replica sources status" >}} +--- +Title: BDB object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a database +hideListLinks: true +linkTitle: bdb +weight: $weight +--- + +An API object that represents a managed database in the cluster. + +| Name | Type/Value & Description | +|------|-------------------------| +| uid | integer; Cluster unique ID of database. Can be set during creation but cannot be updated. | +| account_id | integer; SM account ID | +| action_uid | string; Currently running action's UID (read-only) | +| aof_policy | Policy for Append-Only File data persistence
Values:
**'appendfsync-every-sec'**
'appendfsync-always' | +| authentication_admin_pass | string; Password for administrative access to the BDB (used for SYNC from the BDB) | +| authentication_redis_pass | string; Redis AUTH password authentication.
Use for Redis databases only. Ignored for memcached databases. (deprecated as of Redis Enterprise v7.2, replaced with multiple passwords feature in version 6.0.X) | +| authentication_sasl_pass | string; Binary memcache SASL password | +| authentication_sasl_uname | string; Binary memcache SASL username (pattern does not allow special characters &,\<,>,") | +| authentication_ssl_client_certs | {{}}[{
"client_cert": string
}, ...]{{
}} List of authorized client certificates
**client_cert**: X.509 PEM (base64) encoded certificate | +| authentication_ssl_crdt_certs | {{}}[{
"client_cert": string
}, ...]{{
}} List of authorized CRDT certificates
**client_cert**: X.509 PEM (base64) encoded certificate | +| authorized_names | array of strings; Additional certified names (deprecated as of Redis Enterprise v6.4.2; use authorized_subjects instead) | +| authorized_subjects | {{}}[{
"CN": string,
"O": string,
"OU": [array of strings],
"L": string,
"ST": string,
"C": string
}, ...]{{
}} A list of valid subjects used for additional certificate validations during TLS client authentication. All subject attributes are case-sensitive.
**Required subject fields**:
"CN" for Common Name
**Optional subject fields:**
"O" for Organization
"OU" for Organizational Unit (array of strings)
"L" for Locality (city)
"ST" for State/Province
"C" for 2-letter country code | +| auto_upgrade | boolean (default: false); Upgrade the database automatically after a cluster upgrade | +| avoid_nodes | array of strings; Cluster node UIDs to avoid when placing the database's shards and binding its endpoints | +| background_op | Deprecated as of Redis Enterprise Software v7.8.2. Use [`GET /v1/actions/bdb/`]({{}}) instead.
{{}}[{
"status": string,
"name": string,
"error": object,
"progress": number
}, ...]{{
}} (read-only); **progress**: Percent of completed steps in current operation | +| backup | boolean (default: false); Policy for periodic database backup | +| backup_failure_reason | Reason of last failed backup process (read-only)
Values:
'no-permission'
'wrong-file-path'
'general-error' | +| backup_history | integer (default: 0); Backup history retention policy (number of days, 0 is forever) | +| backup_interval | integer; Interval in seconds in which automatic backup will be initiated | +| backup_interval_offset | integer; Offset (in seconds) from round backup interval when automatic backup will be initiated (should be less than backup_interval) | +| backup_location | [complex object]({{< relref "/operate/rs/references/rest-api/objects/bdb/backup_location" >}}); Target for automatic database backups.
Call `GET` `/jsonschema` to retrieve the object's structure. | +| backup_progress | number, (range: 0-100); Database scheduled periodic backup progress (percentage) (read-only) | +| backup_status | Status of scheduled periodic backup process (read-only)
Values:
'exporting'
'succeeded'
'failed' | +| bigstore | boolean (default: false); Database bigstore option | +| bigstore_ram_size | integer (default: 0); Memory size of bigstore RAM part. | +| bigstore_ram_weights | {{}}[{
"shard_uid": integer,
"weight": number
}, ...]{{
}} List of shard UIDs and their bigstore RAM weights;
**shard_uid**: Shard UID;
**weight**: Relative weight of RAM distribution | +| client_cert_subject_validation_type | Enables additional certificate validations that further limit connections to clients with valid certificates during TLS client authentication.
Values:
**disabled**: Authenticates clients with valid certificates. No additional validations are enforced.
**san_cn**: A client certificate is valid only if its Common Name (CN) matches an entry in the list of valid subjects. Ignores other Subject attributes.
**full_subject**: A client certificate is valid only if its Subject attributes match an entry in the list of valid subjects. | +| conns | integer (default 5); Number of internal proxy connections | +| conns_type | Connections limit type
Values:
**‘per-thread’**
‘per-shard’ | +| crdt | boolean (default: false); Use CRDT-based data types for multi-master replication | +| crdt_causal_consistency | boolean (default: false); Causal consistent CRDB. | +| crdt_config_version | integer; Replica-set configuration version, for internal use only. | +| crdt_featureset_version | integer; CRDB active FeatureSet version | +| crdt_ghost_replica_ids | string; Removed replicas IDs, for internal use only. | +| crdt_guid | string; GUID of CRDB this database belongs to, for internal use only. | +| crdt_modules | string; CRDB modules information. The string representation of a JSON list, containing hashmaps. | +| crdt_protocol_version | integer; CRDB active Protocol version | +| crdt_repl_backlog_size | string; Active-Active replication backlog size ('auto' or size in bytes) | +| crdt_replica_id | integer; Local replica ID, for internal use only. | +| crdt_replicas | string; Replica set configuration, for internal use only. | +| crdt_sources | array of [syncer_sources]({{< relref "/operate/rs/references/rest-api/objects/bdb/syncer_sources" >}}) objects; Remote endpoints/peers of CRDB database to sync from. See the 'bdb -\> replica_sources' section | +| crdt_sync | Enable, disable, or pause syncing from specified crdt_sources. Applicable only for Active-Active databases. See [replica_sync]({{< relref "/operate/rs/references/rest-api/objects/bdb/replica_sync" >}}) for more details.
Values:
'enabled'
**'disabled'**
'paused'
'stopped' | +| crdt_sync_connection_alarm_timeout_seconds | integer (default: 0); If the syncer takes longer than the specified number of seconds to connect to an Active-Active database, raise a connection alarm | +| crdt_sync_dist | boolean; Enable/disable distributed syncer in master-master | +| crdt_syncer_auto_oom_unlatch | boolean (default: true); Syncer automatically attempts to recover synchronisation from peers after this database throws an Out-Of-Memory error. Otherwise, the syncer exits | +| crdt_xadd_id_uniqueness_mode | XADD strict ID uniqueness mode. CRDT only.
Values:
‘liberal’
**‘strict’**
‘semi-strict’ | +| created_time | string; The date and time the database was created (read-only) | +| data_internode_encryption | boolean; Should the data plane internode communication for this database be encrypted | +| data_persistence | Database on-disk persistence policy. For snapshot persistence, a [snapshot_policy]({{< relref "/operate/rs/references/rest-api/objects/bdb/snapshot_policy" >}}) must be provided
Values:
**'disabled'**
'snapshot'
'aof' | +| dataset_import_sources | [complex object]({{< relref "/operate/rs/references/rest-api/objects/bdb/dataset_import_sources" >}}); Array of source file location description objects to import from when performing an import action. This is write-only and cannot be read after set.
Call `GET /v1/jsonschema` to retrieve the object's structure. | +| db_conns_auditing | boolean; Enables/deactivates [database connection auditing]({{< relref "/operate/rs/security/audit-events" >}}) | +| default_user | boolean (default: true); Allow/disallow a default user to connect | +| disabled_commands | string (default: ); Redis commands which are disabled in db | +| dns_address_master | string; Database private address endpoint FQDN (read-only) (deprecated as of Redis Enterprise v4.3.3) | +| email_alerts | boolean (default: false); Send email alerts for this DB | +| endpoint | string; Latest bound endpoint. Used when reconfiguring an endpoint via update | +| endpoint_ip | complex object; External IP addresses of node hosting the BDB's endpoint. `GET` `/jsonschema` to retrieve the object's structure. (read-only) (deprecated as of Redis Enterprise v4.3.3) | +| endpoint_node | integer; Node UID hosting the BDB's endpoint (read-only) (deprecated as of Redis Enterprise v4.3.3) | +| endpoints | array; List of database access endpoints (read-only)
**uid**: Unique identification of this source
**dns_name**: Endpoint’s DNS name
**port**: Endpoint’s TCP port number
**addr**: Endpoint’s accessible addresses
**proxy_policy**: The policy used for proxy binding to the endpoint
**exclude_proxies**: List of proxies to exclude
**include_proxies**: List of proxies to include
**addr_type**: Indicates if the endpoint is based on internal or external IPs
**oss_cluster_api_preferred_ip_type**: Indicates preferred IP type in the OSS cluster API: internal/external
**oss_cluster_api_preferred_endpoint_type**: Indicates preferred endpoint type in the OSS cluster API: ip/hostname | +| enforce_client_authentication | Require authentication of client certificates for SSL connections to the database. If set to 'enabled', a certificate should be provided in either authentication_ssl_client_certs or authentication_ssl_crdt_certs
Values:
**'enabled'**
'disabled' | +| eviction_policy | Database eviction policy (Redis style).
Values:
'volatile-lru'
'volatile-ttl'
'volatile-random'
'allkeys-lru'
'allkeys-random'
'noeviction'
'volatile-lfu'
'allkeys-lfu'
**Redis DB default**: 'volatile-lru'
**memcached DB default**: 'allkeys-lru' | +| export_failure_reason | Reason of last failed export process (read-only)
Values:
'no-permission'
'wrong-file-path'
'general-error' | +| export_progress | number, (range: 0-100); Database manually triggered export progress (percentage) (read-only) | +| export_status | Status of manually triggered export process (read-only)
Values:
'exporting'
'succeeded'
'failed' | +| generate_text_monitor | boolean; Enable/disable generation of syncer monitoring information | +| gradual_src_max_sources | integer (default: 1); Sync a maximum of N sources in parallel (gradual_src_mode should be enabled for this to take effect) | +| gradual_src_mode | Indicates if gradual sync (of sync sources) should be activated
Values:
'enabled'
'disabled' | +| gradual_sync_max_shards_per_source | integer (default: 1); Sync a maximum of N shards per source in parallel (gradual_sync_mode should be enabled for this to take effect) | +| gradual_sync_mode | Indicates if gradual sync (of source shards) should be activated ('auto' for automatic decision)
Values:
'enabled'
'disabled'
'auto' | +| hash_slots_policy | The policy used for hash slots handling
Values:
**'legacy'**: slots range is '1-4096'
**'16k'**: slots range is '0-16383' | +| implicit_shard_key | boolean (default: false); Controls the behavior of what happens in case a key does not match any of the regex rules.
**true**: if a key does not match any of the rules, the entire key will be used for the hashing function
**false**: if a key does not match any of the rules, an error will be returned. | +| import_failure_reason | Import failure reason (read-only)
Values:
'download-error'
'file-corrupted'
'general-error'
'file-larger-than-mem-limit:\:\'
'key-too-long'
'invalid-bulk-length'
'out-of-memory' | +| import_progress | number, (range: 0-100); Database import progress (percentage) (read-only) | +| import_status | Database import process status (read-only)
Values:
'idle'
'initializing'
'importing'
'succeeded'
'failed' | +| internal | boolean (default: false); Is this a database used by the cluster internally | +| last_backup_time | string; Time of last successful backup (read-only) | +| last_changed_time | string; Last administrative configuration change (read-only) | +| last_export_time | string; Time of last successful export (read-only) | +| max_aof_file_size | integer; Maximum size for shard's AOF file (bytes). Default 300GB, (on bigstore DB 150GB) | +| max_aof_load_time | integer (default: 3600); Maximum time shard's AOF reload should take (seconds). | +| max_client_pipeline | integer (default: 200); Maximum number of pipelined commands per connection. Maximum value is 2047. | +| max_connections | integer (default: 0); Maximum number of client connections allowed (0 unlimited) | +| max_pipelined | integer (default: 2000); Determines the maximum number of commands in the proxy’s pipeline per shard connection. | +| master_persistence | boolean (default: false); If true, persists the primary shard in addition to replica shards in a replicated and persistent database. | +| memory_size | integer (default: 0); Database memory limit (0 is unlimited), expressed in bytes. | +| metrics_export_all | boolean; Enable/disable exposing all shard metrics through the metrics exporter | +| mkms | boolean (default: true); Are MKMS (Multi Key Multi Slots) commands supported? | +| module_list | {{}}[{
"module_id": string,
"module_args": [
u'string',
u'null'],
"module_name": string,
"semantic_version": string
}, ...]{{
}} List of modules associated with the database

**module_id**: Module UID (deprecated; use `module_name` instead)
**module_args**: Module command-line arguments (pattern does not allow special characters &,\<,>,")
**module_name**: Module's name
**semantic_version**: Module's semantic version (deprecated; use `module_args` instead)

**module_id** and **semantic_version** are optional as of Redis Enterprise Software v7.4.2 and deprecated as of v7.8.2. | +| mtls_allow_outdated_certs | boolean; An optional mTLS relaxation flag for certs verification | +| mtls_allow_weak_hashing | boolean; An optional mTLS relaxation flag for certs verification | +| name | string; Database name. Only letters, numbers, or hyphens are valid characters. The name must start and end with a letter or number. | +| oss_cluster | boolean (default: false); OSS Cluster mode option. Cannot be enabled with `'hash_slots_policy': 'legacy'` | +| oss_cluster_api_preferred_endpoint_type | Endpoint type in the OSS cluster API
Values:
**‘ip’**
‘hostname’ | +| oss_cluster_api_preferred_ip_type | Internal/external IP type in OSS cluster API. Default value for new endpoints
Values:
**'internal'**
'external' | +| oss_sharding | boolean (default: false); An alternative to `shard_key_regex` for using the common case of the OSS shard hashing policy | +| port | integer; TCP port on which the database is available. Generated automatically if omitted and returned as 0 | +| proxy_policy | The default policy used for proxy binding to endpoints
Values:
'single'
'all-master-shards'
'all-nodes' | +| rack_aware | boolean (default: false); Require the database to always replicate across multiple racks | +| recovery_wait_time | integer (default: -1); Defines how many seconds to wait for the persistence file to become available during auto recovery. After the wait time expires, auto recovery completes with potential data loss. The default `-1` means to wait forever. | +| redis_version | string; Version of the redis-server processes: e.g. 6.0, 5.0-big | +| repl_backlog_size | string; Redis replication backlog size ('auto' or size in bytes) | +| replica_sources | array of [syncer_sources]({{< relref "/operate/rs/references/rest-api/objects/bdb/syncer_sources" >}}) objects; Remote endpoints of database to sync from. See the 'bdb -\> replica_sources' section | +| [replica_sync]({{< relref "/operate/rs/references/rest-api/objects/bdb/replica_sync" >}}) | Enable, disable, or pause syncing from specified replica_sources
Values:
'enabled'
**'disabled'**
'paused'
'stopped' | +| replica_sync_connection_alarm_timeout_seconds | integer (default: 0); If the syncer takes longer than the specified number of seconds to connect to a replica, raise a connection alarm | +| replica_sync_dist | boolean; Enable/disable distributed syncer in replica-of | +| replication | boolean (default: false); In-memory database replication mode | +| resp3 | boolean (default: true); Enables or deactivates RESP3 support | +| roles_permissions | {{}}[{
"role_uid": integer,
"redis_acl_uid": integer
}, ...]{{
}} | +| sched_policy | Controls how server-side connections are used when forwarding traffic to shards.
Values:
**cmp**: Closest to max_pipelined policy. Pick the connection with the most pipelined commands that has not reached the max_pipelined limit.
**mru**: Try to use most recently used connections.
**spread**: Try to use all connections.
**mnp**: Minimal pipeline policy. Pick the connection with the least pipelined commands. | +| shard_block_crossslot_keys | boolean (default: false); In Lua scripts, prevent use of keys from different hash slots within the range owned by the current shard | +| shard_block_foreign_keys | boolean (default: true); In Lua scripts, `foreign_keys` prevent use of keys which could reside in a different shard (foreign keys) | +| shard_key_regex | Custom keyname-based sharding rules.
`[{"regex": string}, ...]`
To use the default rules you should set the value to:
`[{"regex": ".*\\{(?<tag>.*)\\}.*"}, {"regex": "(?<tag>.*)"}]` | +| shard_list | array of integers; Cluster unique IDs of all database shards. | +| sharding | boolean (default: false); Cluster mode (server-side sharding). When true, shard hashing rules must be provided by either `oss_sharding` or `shard_key_regex` | +| shards_count | integer, (range: 1-512) (default: 1); Number of database server-side shards | +| shards_placement | Control the density of shards
Values:
**'dense'**: Shards reside on as few nodes as possible
**'sparse'**: Shards reside on as many nodes as possible | +| skip_import_analyze | Enable/disable skipping the analysis stage when importing an RDB file
Values:
'enabled'
'disabled' | +| slave_buffer | Redis replica output buffer limits
Values:
'auto'
value in MB
hard:soft:time | +| slave_ha | boolean; Enable replica high availability mechanism for this database (default takes the cluster setting) | +| slave_ha_priority | integer; Priority of the BDB in replica high availability mechanism | +| snapshot_policy | array of [snapshot_policy]({{< relref "/operate/rs/references/rest-api/objects/bdb/snapshot_policy" >}}) objects; Policy for snapshot-based data persistence. A dataset snapshot will be taken every N secs if there are at least M writes changes in the dataset | +| ssl | boolean (default: false); Require SSL authenticated and encrypted connections to the database (deprecated as of Redis Enterprise v5.0.1) | +| [status]({{< relref "/operate/rs/references/rest-api/objects/bdb/status" >}}) | Database lifecycle status (read-only)
Values:
'pending'
'active'
'active-change-pending'
'delete-pending'
'import-pending'
'creation-failed'
'recovery' | +| support_syncer_reconf | boolean; Determines whether the syncer handles its own configuration changes. If false, the DMC restarts the syncer upon a configuration change. | +| sync | (deprecated as of Redis Enterprise v5.0.1, use [replica_sync]({{< relref "/operate/rs/references/rest-api/objects/bdb/replica_sync" >}}) or crdt_sync instead) Enable, disable, or pause syncing from specified sync_sources
Values:
'enabled'
**'disabled'**
'paused'
'stopped' | +| sync_dedicated_threads | integer (range: 0-10) (default: 5); Number of dedicated Replica Of threads | +| sync_sources | {{}}[{
"uid": integer,
"uri": string,
"compression": integer,
"status": string,
"rdb_transferred": integer,
"rdb_size": integer,
"last_update": string,
"lag": integer,
"last_error": string
}, ...]{{
}} (deprecated as of Redis Enterprise v5.0.1, instead use replica_sources or crdt_sources) Remote endpoints of database to sync from. See the 'bdb -\> replica_sources' section
**uid**: Numeric unique identification of this source
**uri**: Source Redis URI
**compression**: Compression level for the replication link
**status**: Sync status of this source
**rdb_transferred**: Number of bytes transferred from the source's RDB during the syncing phase
**rdb_size**: The source's RDB size to be transferred during the syncing phase
**last_update**: Time last update was received from the source
**lag**: Lag in milliseconds between source and destination (while synced)
**last_error**: Last error encountered when syncing from the source | +| syncer_log_level | Minimum syncer log level to log. Only logs with this level or higher will be logged.
Values:
‘crit’
‘error’
‘warn’
**‘info’**
‘trace’
‘debug’ | +| syncer_mode | The syncer for replication between database instances is either on a single node (centralized) or on each node that has a proxy according to the proxy policy (distributed). (read-only)
Values:
'distributed'
'centralized' | +| tags | {{}}[{
"key": string,
"value": string
}, ...]{{
}} Optional list of tag objects attached to the database. Each tag requires a key-value pair.
**key**: Represents the tag's meaning and must be unique among tags (pattern does not allow special characters &,\<,>,")
**value**: The tag's value.| +| tls_mode | Require TLS-authenticated and encrypted connections to the database
Values:
'enabled'
**'disabled'**
'replica_ssl' | +| tracking_table_max_keys | integer; The client-side caching invalidation table size. 0 makes the cache unlimited. | +| type | Type of database
Values:
**'redis'**
'memcached' | +| use_nodes | array of strings; Cluster node UIDs to use for database shards and bound endpoints | +| version | string; Database compatibility version: full Redis/memcached version number, such as 6.0.6. This value can only change during database creation and database upgrades.| +| wait_command | boolean (default: true); Supports Redis wait command (read-only) | +--- +Title: BDB group object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a group of databases with a shared memory pool +linkTitle: bdb_group +weight: $weight +--- + +An API object that represents a group of databases that share a memory pool. + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Cluster unique ID of the database group | +| members | array of strings | A list of UIDs of member databases (read-only) | +| memory_size | integer | The common memory pool size limit for all databases in the group, expressed in bytes | +--- +Title: Action object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents cluster actions +linkTitle: action +weight: $weight +--- + +The cluster allows you to invoke general maintenance actions such as rebalancing or taking a node offline by moving all of its entities to other nodes. + +Actions are implemented as tasks in the cluster. Every task has a unique `task_id` assigned by the cluster, a task name which describes the task, a status, and additional task-specific parameters. + +The REST API provides a simplified interface that allows callers to invoke actions and query their status without a specific `task_id`. + +The action lifecycle is based on the following status and status transitions: + +{{< image filename="/images/rs/rest-api-action-cycle.png#no-click" alt="Action lifecycle" >}} + +| Name | Type/Value | Description | +|------|------------|-------------| +| progress | float (range: 0-100) | Represents percent completed (As of v7.4.2, the return value type changed to 'float' to provide improved progress indication) | +| status | queued | Requested operation and added it to the queue to await processing | +| | starting | Picked up operation from the queue and started processing | +| | running | Currently executing operation | +| | cancelling | Operation cancellation is in progress | +| | cancelled | Operation cancelled | +| | completed | Operation completed | +| | failed | Operation failed | + +When a task fails, the `error_code` and `error_message` fields describe the error. + +Possible `error_code` values: + + Code | Description | +|-------------------------|------------------------------------------------| +| internal_error | An internal error that cannot be mapped to a more precise error code +| insufficient_resources | The cluster does not have sufficient resources to complete the required operation + +--- +Title: State machine object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a state machine. +linkTitle: state-machine +weight: $weight +--- + +A state machine object tracks the status of database actions. 
+ +A state machine contains the following attributes: + +| Name | Type/Value | Description | +|-------------|------------|-------------| +| action_uid | string | A globally unique identifier of the action | +| object_name | string | Name of the object being manipulated by the state machine | +| status | pending | Requested state machine has not started | +| | active | State machine is currently running | +| | completed | Operation complete | +| | failed | Operation or state machine failed | +| name | string | Name of the running (or failed) state machine | +| state | string | Current state within the state machine, when known | +| error | string | A descriptive error string for failed state machine, when known | +--- +Title: Role object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a role +linkTitle: role +weight: $weight +--- + +An API object that represents a role. + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Role's unique ID | +| account_id | integer | SM account ID | +| action_uid | string | Action UID. If it exists, progress can be tracked by the GET /actions/{uid} API (read-only) | +| management | 'admin'
'db_member'
'db_viewer'
'cluster_member'
'cluster_viewer'
'user_manager'
'none' | [Management role]({{< relref "/operate/rs/references/rest-api/permissions#roles" >}}) | +| name | string | Role's name | +--- +Title: Redis ACL object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a Redis access control list (ACL) +linkTitle: redis_acl +weight: $weight +--- + +An API object that represents a Redis [access control list (ACL)]({{< relref "/operate/rs/security/access-control/create-db-roles" >}}) + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Object's unique ID | +| account_id | integer | SM account ID | +| acl | string | Redis ACL's string | +| action_uid | string | Action UID. If it exists, progress can be tracked by the `GET` `/actions/{uid}` API (read-only) | +| name | string | Redis ACL's name | +| min_version | string | Minimum database version that supports this ACL. Read only | +| max_version | string | Maximum database version that supports this ACL. Read only | + +--- +Title: Proxy object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a proxy in the cluster +linkTitle: proxy +weight: $weight +--- + +An API object that represents a [proxy](https://en.wikipedia.org/wiki/Proxy_server) in the cluster. + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | integer | Unique ID of the proxy (read-only) | +| backlog | integer | TCP listen queue backlog | +| client_keepcnt | integer | Client TCP keepalive count | +| client_keepidle | integer | Client TCP keepalive idle | +| client_keepintvl | integer | Client TCP keepalive interval | +| conns | integer | Number of connections | +| duration_usage_threshold | integer, (range: 10-300) | Max number of threads | +| dynamic_threads_scaling | boolean | Automatically adjust the number of threads| +| ignore_bdb_cconn_limit | boolean | Ignore client connection limits | +| ignore_bdb_cconn_output_buff_limits | boolean | Ignore buffer limit | +| log_level | `crit`
`error`
`warn`
`info`
`trace`
`debug` | Minimum log level to log. Only logs with this level or greater will be logged. | +| max_listeners | integer | Max number of listeners | +| max_servers | integer | Max number of Redis servers | +| max_threads | integer, (range: 1-256) | Max number of threads | +| max_worker_client_conns | integer | Max client connections per thread | +| max_worker_server_conns | integer | Max server connections per thread | +| max_worker_txns | integer | Max in-flight transactions per thread | +| threads | integer, (range: 1-256) | Number of threads | +| threads_usage_threshold | integer, (range: 50-99) | Max number of threads | +--- +Title: Sync object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the sync object used with Redis Enterprise Software REST API + calls. +linkTitle: sync +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| progress | integer | Number of bytes remaining in current sync | +| status | 'in_progress'
'idle'
'link_down' | Indication of the shard's current sync status | +--- +Title: Loading object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the loading object used with Redis Enterprise Software REST + API calls. +linkTitle: loading +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| progress | number, (range: 0-100) | Percentage of bytes already loaded | +| status | 'in_progress'
'idle' | Status of the load of a dump file (read-only) | +--- +Title: Backup object +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the backup object used with Redis Enterprise Software REST + API calls. +linkTitle: backup +weight: $weight +--- + +| Name | Type/Value | Description | +|------|------------|-------------| +| progress | number, (range: 0-100) | Shard backup progress (percentage) | +| status | 'exporting'
'succeeded'
'failed' | Status of scheduled periodic backup process | +--- +Title: Shard object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents a database shard +hideListLinks: true +linkTitle: shard +weight: $weight +--- + +An API object that represents a Redis shard in a database. + +| Name | Type/Value | Description | +|------|------------|-------------| +| uid | string | Cluster unique ID of shard | +| assigned_slots | string | Shards hash slot range | +| backup | [backup]({{< relref "/operate/rs/references/rest-api/objects/shard/backup" >}}) object | Current status of scheduled periodic backup process | +| bdb_uid | integer | The ID of the database this shard belongs to | +| bigstore_ram_weight | number | Shards RAM distribution weight | +| detailed_status | 'busy'
'down'
'importing'
'loading'
'ok'
'timeout'
'trimming'
'unknown' | A more detailed status of the shard | +| loading | [loading]({{< relref "/operate/rs/references/rest-api/objects/shard/loading" >}}) object | Current status of dump file loading | +| node_uid | string | The ID of the node this shard is located on | +| redis_info | redis_info object | A sub-dictionary of the [Redis INFO command]({{< relref "/commands/info" >}}) | +| report_timestamp | string | The time in which the shard's info was collected (read-only) | +| role | 'master'
'slave' | Role of this shard | +| status | 'active'
'inactive'
'trimming' | The current status of the shard | +| sync | [sync]({{< relref "/operate/rs/references/rest-api/objects/shard/sync.md" >}}) object | Shard's current sync status and progress | +--- +Title: OCSP status object +alwaysopen: false +categories: +- docs +- operate +- rs +description: An object that represents the cluster's OCSP status +linkTitle: ocsp_status +weight: $weight +--- + +An API object that represents the cluster's OCSP status. + +| Name | Type/Value | Description | +|------|------------|-------------| +| cert_status | string | Indicates the proxy certificate's status: GOOD/REVOKED/UNKNOWN (read-only) | +| responder_url | string | The OCSP responder URL this status came from (read-only) | +| next_update | string | The expected date and time of the next certificate status update (read-only) | +| produced_at | string | The date and time when the OCSP responder signed this response (read-only) | +| revocation_time | string | The date and time when the certificate was revoked or placed on hold (read-only) | +| this_update | string | The most recent time that the responder confirmed the current status (read-only) | +--- +Title: Redis Enterprise REST API objects +alwaysopen: false +categories: +- docs +- operate +- rs +description: Documents the objects used with Redis Enterprise Software REST API calls. +hideListLinks: true +linkTitle: Objects +weight: 40 +--- + +Certain [REST API requests]({{< relref "/operate/rs/references/rest-api/requests" >}}) require you to include specific objects in the request body. Many requests also return objects in the response body. + +Both REST API requests and responses represent these objects as [JSON](https://www.json.org). + +{{< table-children columnNames="Object,Description" columnSources="LinkTitle,Description" enableLinks="LinkTitle" >}} +--- +Title: Suffixes requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: DNS suffixes requests +headerRange: '[1-2]' +hideListLinks: true +linkTitle: suffixes +weight: $weight +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-suffixes) | `/v1/suffixes` | Get all DNS suffixes | + +## Get all suffixes {#get-all-suffixes} + + GET /v1/suffixes + +Get all DNS suffixes in the cluster. + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/suffixes + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +### Response {#get-all-response} + +The response body contains a JSON array with all suffixes, represented as [suffix objects]({{< relref "/operate/rs/references/rest-api/objects/suffix" >}}). + +#### Example JSON body + +```json +[ + { + "name": "cluster.fqdn", + "// additional fields..." + }, + { + "name": "internal.cluster.fqdn", + "// additional fields..." 
+ } +] +``` + +### Status codes {#get-all-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | No error | +--- +Title: Migrate shards requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: REST API requests to migrate database shards +headerRange: '[1-2]' +linkTitle: migrate +weight: $weight +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-multi-shards) | `/v1/shards/actions/migrate` | Migrate multiple shards | +| [POST](#post-shard) | `/v1/shards/{uid}/actions/migrate` | Migrate a specific shard | + +## Migrate multiple shards {#post-multi-shards} + + POST /v1/shards/actions/migrate + +Migrates the list of given shard UIDs to the node specified by `target_node_uid`. The shards can be from multiple databases. This request is asynchronous. + +For more information about shard migration use cases and considerations, see [Migrate database shards]({{}}). + +#### Required permissions + +| Permission name | Roles | +|-----------------|-------| +| [migrate_shard]({{< relref "/operate/rs/references/rest-api/permissions#migrate_shard" >}}) | admin
cluster_member
db_member | + +### Request {#post-multi-request} + +#### Example HTTP request + + POST /v1/shards/actions/migrate + +#### Example JSON body + +```json +{ + "shard_uids": ["2","4","6"], + "target_node_uid": 9, + "override_rack_policy": false, + "preserve_roles": false, + "max_concurrent_bdb_migrations": 3 +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body {#post-multi-request-body} + +The request body is a JSON object that can contain the following fields: + +| Field | Type | Description | +|-------|------|-------------| +| shard_uids | array of strings | List of shard UIDs to migrate. | +| target_node_uid | integer | UID of the node to where the shards should migrate. | +| override_rack_policy | boolean | If true, overrides and ignores rack-aware policy violations. | +| dry_run | boolean | Determines whether the migration is actually done. If true, will just do a dry run. If the dry run succeeds, the request returns a `200 OK` status code. Otherwise, it returns a JSON object with an error code and description. | +| preserve_roles | boolean | If true, preserves the migrated shards' roles after migration. | +| max_concurrent_bdb_migrations | integer | The number of concurrent databases that can migrate shards. | + +### Response {#post-multi-response} + +Returns a JSON object with an `action_uid`. You can track the action's progress with a [`GET /v1/actions/`]({{}}) request. + +#### Example JSON body + +```json +{ + "action_uid": "e5e24ddf-a456-4a7e-ad53-4463cd44880e", + "description": "Migrate was triggered" +} +``` + +### Status codes {#post-multi-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error. | +| [400 Bad Request](https://www.rfc-editor.org/rfc/rfc9110.html#name-400-bad-request) | Conflicting parameters. | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | A list of shard UIDs is required and not given, a specified shard does not exist, or a node UID is required and not given. | +| [500 Internal Server Error](https://www.rfc-editor.org/rfc/rfc9110.html#name-500-internal-server-error) | Migration failed. | + + +## Migrate shard {#post-shard} + + POST /v1/shards/{int: uid}/actions/migrate + +Migrates the shard with the given `shard_uid` to the node specified by `target_node_uid`. If the shard is already on the target node, nothing happens. This request is asynchronous. + +For more information about shard migration use cases and considerations, see [Migrate database shards]({{}}). + +#### Required permissions + +| Permission name | Roles | +|-----------------|-------| +| [migrate_shard]({{< relref "/operate/rs/references/rest-api/permissions#migrate_shard" >}}) | admin
cluster_member
db_member | + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/shards/1/actions/migrate + +#### Example JSON body + +```json +{ + "target_node_uid": 9, + "override_rack_policy": false, + "preserve_roles": false +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the shard to migrate. | + + +#### Request body {#post-request-body} + +The request body is a JSON object that can contain the following fields: + +| Field | Type | Description | +|-------|------|-------------| +| target_node_uid | integer | UID of the node to where the shard should migrate. | +| override_rack_policy | boolean | If true, overrides and ignores rack-aware policy violations. | +| dry_run | boolean | Determines whether the migration is actually done. If true, will just do a dry run. If the dry run succeeds, the request returns a `200 OK` status code. Otherwise, it returns a JSON object with an error code and description. | +| preserve_roles | boolean | If true, preserves the migrated shards' roles after migration. | + +### Response {#post-response} + +Returns a JSON object with an `action_uid`. You can track the action's progress with a [`GET /v1/actions/`]({{}}) request. + +#### Example JSON body + +```json +{ + "action_uid": "e5e24ddf-a456-4a7e-ad53-4463cd44880e", + "description": "Migrate was triggered" +} +``` + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error. | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | Shard does not exist, or node UID is required and not given. | +| [409 Conflict](https://www.rfc-editor.org/rfc/rfc9110.html#name-409-conflict) | Database is currently busy. | +--- +Title: Shard failover requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: REST API requests to fail over database shards +headerRange: '[1-2]' +linkTitle: failover +weight: $weight +--- + +| Method | Path | Description | +|--------|------|-------------| +| [POST](#post-multi-shards) | `/v1/shards/actions/failover` | Fail over multiple shards | +| [POST](#post-shard) | `/v1/shards/{uid}/actions/failover` | Fail over a specific shard | + +## Fail over multiple shards {#post-multi-shards} + + POST /v1/shards/actions/failover + +Performs failover on the primary shards specified by `shard_uids` in the request body, and promotes their replicas to primary shards. This request is asynchronous. + +The cluster automatically manages failover to ensure high availability. Use this failover REST API request only for testing and planned maintenance. + +#### Required permissions + +| Permission name | Roles | +|-----------------|-------| +| [failover_shard]({{< relref "/operate/rs/references/rest-api/permissions#failover_shard" >}}) | admin
cluster_member
db_member | + +### Request {#post-multi-request} + +#### Example HTTP request + + POST /v1/shards/actions/failover + +#### Example JSON body + +```json +{ + "shard_uids": ["2","4","6"] +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body {#post-multi-request-body} + +The request body is a JSON object that can contain the following fields: + +| Field | Type | Description | +|-------|------|-------------| +| shard_uids | array of strings | List of primary shard UIDs to fail over. The shards must belong to the same database. | +| dead_uids | array of strings | Primary shards to avoid stopping. Optional. | +| dead_nodes | array of strings | Nodes that should not be drained or used for promoted replica shards. Optional. | +| dry_run | boolean | Determines whether the failover is actually done. If true, will just do a dry run. If the dry run succeeds, the request returns a `200 OK` status code. Otherwise, it returns a JSON object with an error code and description. Optional. | +| force_rebind | boolean | Rebind after promotion. Optional. | +| redis_version_upgrade | string | New version of the promoted primary shards. Optional. | + +### Response {#post-multi-response} + +Returns a JSON object with an `action_uid`. You can track the action's progress with a [`GET /v1/actions/`]({{}}) request. + +#### Example JSON body + +```json +{ + "action_uid": "e5e24ddf-a456-4a7e-ad53-4463cd44880e", + "description": "Failover was triggered" +} +``` + +### Status codes {#post-multi-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error. | +| [400 Bad Request](https://www.rfc-editor.org/rfc/rfc9110.html#name-400-bad-request) | Shard is a replica or the specified failover shards are not in the same database. | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | A list of shard UIDs is required and not given, or a specified shard does not exist. | +| [409 Conflict](https://www.rfc-editor.org/rfc/rfc9110.html#name-409-conflict) | Database is currently busy. | + +### Error codes {#put-multi-error-codes} + +When errors are reported, the server may return a JSON object with `error_code` and `message` field that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| db_busy | Database is currently busy. | +| failover_shards_different_bdb | All failover shards should be in the same database. | +| shard_is_slave | Shard is a replica. | +| shard_not_exist | Shard does not exist. | +| shard_uids_required | List of shard UIDs is required and not given. | + +## Fail over shard {#post-shard} + + POST /v1/shards/{int: uid}/actions/failover + +Performs failover on the primary shard with the specified `shard_uid`, and promotes its replica shard to a primary shard. This request is asynchronous. + +The cluster automatically manages failover to ensure high availability. Use this failover REST API request only for testing and planned maintenance. + +#### Required permissions + +| Permission name | Roles | +|-----------------|-------| +| [failover_shard]({{< relref "/operate/rs/references/rest-api/permissions#failover_shard" >}}) | admin
cluster_member
db_member | + +### Request {#post-request} + +#### Example HTTP request + + POST /v1/shards/1/actions/failover + +#### Example JSON body + +```json +{ + "force_rebind": true +} +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### URL parameters + +| Field | Type | Description | +|-------|------|-------------| +| uid | integer | The unique ID of the shard to fail over. | + + +#### Request body {#post-request-body} + +The request body is a JSON object that can contain the following fields: + +| Field | Type | Description | +|-------|------|-------------| +| dead_uid | string | Primary shard to avoid stopping. Optional. | +| dead_nodes | array of strings | Nodes that should not be drained or used for promoted replica shards. Optional. | +| dry_run | boolean | Determines whether the failover is actually done. If true, will just do a dry run. If the dry run succeeds, the request returns a `200 OK` status code. Otherwise, it returns a JSON object with an error code and description. Optional. | +| force_rebind | boolean | Rebind after promotion. Optional. | +| redis_version_upgrade | string | New version of the promoted primary shards. Optional. | + +### Response {#post-response} + +Returns a JSON object with an `action_uid`. You can track the action's progress with a [`GET /v1/actions/`]({{}}) request. + +#### Example JSON body + +```json +{ + "action_uid": "e5e24ddf-a456-4a7e-ad53-4463cd44880e", + "description": "Failover was triggered" +} +``` + +### Status codes {#post-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error. | +| [400 Bad Request](https://www.rfc-editor.org/rfc/rfc9110.html#name-400-bad-request) | Shard is a replica. | +| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | Specified shard does not exist. | +| [409 Conflict](https://www.rfc-editor.org/rfc/rfc9110.html#name-409-conflict) | Database is currently busy. | + +### Error codes {#put-error-codes} + +When errors are reported, the server may return a JSON object with `error_code` and `message` field that provide additional information. The following are possible `error_code` values: + +| Code | Description | +|------|-------------| +| db_busy | Database is currently busy. | +| shard_is_slave | Shard is a replica. | +| shard_not_exist | Shard does not exist. 
| +--- +Title: Shard actions requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: REST API requests to perform shard actions +headerRange: '[1-2]' +hideListLinks: true +linkTitle: actions +weight: $weight +--- + +## Migrate + +| Method | Path | Description | +|--------|------|-------------| +| [POST]({{}}) | `/v1/shards/actions/migrate` | Migrate multiple shards | +| [POST]({{}}) | `/v1/shards/{uid}/actions/migrate` | Migrate a specific shard | +--- +Title: Latest shards stats requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: Most recent shard statistics requests +headerRange: '[1-2]' +linkTitle: last +weight: $weight +--- + +| Method | Path | Description | +|--------|------|-------------| +| [GET](#get-all-shards-stats-last) | `/v1/shards/stats/last` | Get most recent stats for all shards | +| [GET](#get-shard-stats-last) | `/v1/shards/stats/last/{uid}` | Get most recent stats for a specific shard | + +## Get latest stats for all shards {#get-all-shards-stats-last} + + GET /v1/shards/stats/last + +Get most recent statistics for all shards. + +#### Required permissions + +| Permission name | +|-----------------| +| [view_all_shard_stats]({{< relref "/operate/rs/references/rest-api/permissions#view_all_shard_stats" >}}) | + +### Request {#get-all-request} + +#### Example HTTP request + + GET /v1/shards/stats/last?interval=1sec&stime=015-05-27T08:27:35Z + + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + + +#### Query parameters + +| Field | Type | Description | +|-------|------|-------------| +| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week. Default: 1sec (optional) | +| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | +| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) | + +### Response {#get-all-response} + +Returns most recent [statistics]({{< relref "/operate/rs/references/rest-api/objects/statistics" >}}) for all shards. + +#### Example JSON body + +```json +{ + "1": { + "interval": "1sec", + "stime": "2015-05-28T08:27:35Z", + "etime": "2015-05-28T08:28:36Z", + "used_memory_peak": 5888264.0, + "used_memory_rss": 5888264.0, + "read_hits": 0.0, + "pubsub_patterns": 0.0, + "no_of_keys": 0.0, + "mem_size_lua": 35840.0, + "last_save_time": 1432541051.0, + "sync_partial_ok": 0.0, + "connected_clients": 9.0, + "avg_ttl": 0.0, + "write_misses": 0.0, + "used_memory": 5651440.0, + "sync_full": 0.0, + "expired_objects": 0.0, + "total_req": 0.0, + "blocked_clients": 0.0, + "pubsub_channels": 0.0, + "evicted_objects": 0.0, + "no_of_expires": 0.0, + "interval": "1sec", + "write_hits": 0.0, + "read_misses": 0.0, + "sync_partial_err": 0.0, + "rdb_changes_since_last_save": 0.0 + }, + "2": { + "interval": "1sec", + "stime": "2015-05-28T08:27:40Z", + "etime": "2015-05-28T08:28:45Z", + "// additional fields..." 
+  }
+}
+```
+
+### Status codes {#get-all-status-codes}
+
+| Code | Description |
+|------|-------------|
+| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error |
+| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | No shards exist |
+
+## Get latest shard stats {#get-shard-stats-last}
+
+    GET /v1/shards/stats/last/{int: uid}
+
+Get most recent statistics for a specific shard.
+
+#### Required permissions
+
+| Permission name |
+|-----------------|
+| [view_shard_stats]({{< relref "/operate/rs/references/rest-api/permissions#view_shard_stats" >}}) |
+
+### Request {#get-request}
+
+#### Example HTTP request
+
+    GET /v1/shards/stats/last/1?interval=1sec&stime=2015-05-28T08:27:35Z
+
+
+#### Request headers
+
+| Key | Value | Description |
+|-----|-------|-------------|
+| Host | cnm.cluster.fqdn | Domain name |
+| Accept | application/json | Accepted media type |
+
+
+#### URL parameters
+
+| Field | Type | Description |
+|-------|------|-------------|
+| uid | integer | The unique ID of the shard requested. |
+
+
+#### Query parameters
+
+| Field | Type | Description |
+|-------|------|-------------|
+| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week. Default: 1sec. (optional) |
+| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) |
+| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) |
+
+### Response {#get-response}
+
+Returns the most recent [statistics]({{< relref "/operate/rs/references/rest-api/objects/statistics" >}}) for the specified shard.
+
+#### Example JSON body
+
+```json
+{
+  "1": {
+    "interval": "1sec",
+    "stime": "2015-05-28T08:27:35Z",
+    "etime": "2015-05-28T08:27:36Z",
+    "used_memory_peak": 5888264.0,
+    "used_memory_rss": 5888264.0,
+    "read_hits": 0.0,
+    "pubsub_patterns": 0.0,
+    "no_of_keys": 0.0,
+    "mem_size_lua": 35840.0,
+    "last_save_time": 1432541051.0,
+    "sync_partial_ok": 0.0,
+    "connected_clients": 9.0,
+    "avg_ttl": 0.0,
+    "write_misses": 0.0,
+    "used_memory": 5651440.0,
+    "sync_full": 0.0,
+    "expired_objects": 0.0,
+    "total_req": 0.0,
+    "blocked_clients": 0.0,
+    "pubsub_channels": 0.0,
+    "evicted_objects": 0.0,
+    "no_of_expires": 0.0,
+    "write_hits": 0.0,
+    "read_misses": 0.0,
+    "sync_partial_err": 0.0,
+    "rdb_changes_since_last_save": 0.0
+  }
+}
+```
+
+### Status codes {#get-status-codes}
+
+| Code | Description |
+|------|-------------|
+| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error |
+| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | Shard does not exist |
+| [406 Not Acceptable](https://www.rfc-editor.org/rfc/rfc9110.html#name-406-not-acceptable) | Shard isn't currently active |
+---
+Title: Shards stats requests
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: Shard statistics requests
+headerRange: '[1-2]'
+hideListLinks: true
+linkTitle: stats
+weight: $weight
+---
+
+| Method | Path | Description |
+|--------|------|-------------|
+| [GET](#get-all-shards-stats) | `/v1/shards/stats` | Get stats for all shards |
+| [GET](#get-shard-stats) | `/v1/shards/stats/{uid}` | Get stats for a specific shard |
+
+## Get all shards stats {#get-all-shards-stats}
+
+    GET /v1/shards/stats
+
+Get statistics for all shards.
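+
+For illustration only, here is a minimal Python sketch (using the `requests` package) of one way to call this endpoint with the query parameters described in the request section below. The base URL, credentials, BDB ID, and TLS handling are assumptions for the example, not values defined by this reference.
+
+```python
+import requests
+
+BASE_URL = "https://cluster.example.com:9443"   # assumption: your cluster's API endpoint
+AUTH = ("admin@example.com", "password")        # assumption: basic-auth credentials
+
+# Fetch one hour of statistics for all shards, limited to the shards of one database.
+resp = requests.get(
+    f"{BASE_URL}/v1/shards/stats",
+    auth=AUTH,
+    params={
+        "parent_uid": 1,                        # hypothetical BDB ID
+        "interval": "1hour",
+        "stime": "2014-08-28T10:00:00Z",
+    },
+    verify=False,                               # assumption: self-signed cluster certificate
+)
+resp.raise_for_status()
+
+for shard in resp.json():
+    print(shard["uid"], shard["status"], len(shard["intervals"]), "interval samples")
+```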
+
+#### Required permissions
+
+| Permission name |
+|-----------------|
+| [view_all_shard_stats]({{< relref "/operate/rs/references/rest-api/permissions#view_all_shard_stats" >}}) |
+
+### Request {#get-all-request}
+
+#### Example HTTP request
+
+    GET /v1/shards/stats?interval=1hour&stime=2014-08-28T10:00:00Z
+
+
+#### Request headers
+
+| Key | Value | Description |
+|-----|-------|-------------|
+| Host | cnm.cluster.fqdn | Domain name |
+| Accept | application/json | Accepted media type |
+
+
+#### Query parameters
+
+| Field | Type | Description |
+|-------|------|-------------|
+| parent_uid | integer | Only return shards from the given BDB ID (optional) |
+| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) |
+| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) |
+| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) |
+| metrics | list | Comma-separated list of [metric names]({{< relref "/operate/rs/references/rest-api/objects/statistics/shard-metrics" >}}) for which we want statistics (default is all) (optional) |
+
+### Response {#get-all-response}
+
+Returns a JSON array of [statistics]({{< relref "/operate/rs/references/rest-api/objects/statistics" >}}) for all shards.
+
+#### Example JSON body
+
+```json
+[
+  {
+    "status": "active",
+    "uid": "1",
+    "node_uid": "1",
+    "assigned_slots": "0-8191",
+    "intervals": [
+      {
+        "interval": "1sec",
+        "stime": "2015-05-28T08:27:35Z",
+        "etime": "2015-05-28T08:27:40Z",
+        "used_memory_peak": 5888264.0,
+        "used_memory_rss": 5888264.0,
+        "read_hits": 0.0,
+        "pubsub_patterns": 0.0,
+        "no_of_keys": 0.0,
+        "mem_size_lua": 35840.0,
+        "last_save_time": 1432541051.0,
+        "sync_partial_ok": 0.0,
+        "connected_clients": 9.0,
+        "avg_ttl": 0.0,
+        "write_misses": 0.0,
+        "used_memory": 5651440.0,
+        "sync_full": 0.0,
+        "expired_objects": 0.0,
+        "total_req": 0.0,
+        "blocked_clients": 0.0,
+        "pubsub_channels": 0.0,
+        "evicted_objects": 0.0,
+        "no_of_expires": 0.0,
+        "write_hits": 0.0,
+        "read_misses": 0.0,
+        "sync_partial_err": 0.0,
+        "rdb_changes_since_last_save": 0.0
+      },
+      {
+        "interval": "1sec",
+        "stime": "2015-05-28T08:27:40Z",
+        "etime": "2015-05-28T08:27:45Z",
+        "// additional fields..."
+      }
+    ]
+  },
+  {
+    "uid": "2",
+    "status": "active",
+    "node_uid": "1",
+    "assigned_slots": "8192-16383",
+    "intervals": [
+      {
+        "interval": "1sec",
+        "stime": "2015-05-28T08:27:35Z",
+        "etime": "2015-05-28T08:27:40Z",
+        "// additional fields..."
+      },
+      {
+        "interval": "1sec",
+        "stime": "2015-05-28T08:27:40Z",
+        "etime": "2015-05-28T08:27:45Z",
+        "// additional fields..."
+      }
+    ]
+  }
+]
+```
+
+### Status codes {#get-all-status-codes}
+
+| Code | Description |
+|------|-------------|
+| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error |
+| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | No shards exist |
+
+## Get shard stats {#get-shard-stats}
+
+    GET /v1/shards/stats/{int: uid}
+
+Get statistics for a specific shard.
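+
+As with the previous request, a short Python sketch may help illustrate the call; the base URL, credentials, shard UID, and TLS handling below are placeholders, not values defined by this reference.
+
+```python
+import requests
+
+BASE_URL = "https://cluster.example.com:9443"   # assumption: your cluster's API endpoint
+AUTH = ("admin@example.com", "password")        # assumption: basic-auth credentials
+
+# Fetch statistics for shard 1 and print memory usage per interval sample.
+resp = requests.get(
+    f"{BASE_URL}/v1/shards/stats/1",
+    auth=AUTH,
+    params={"interval": "1hour", "stime": "2014-08-28T10:00:00Z"},
+    verify=False,                               # assumption: self-signed cluster certificate
+)
+resp.raise_for_status()
+
+for sample in resp.json()["intervals"]:
+    print(sample["stime"], "-", sample["etime"], "used_memory:", sample.get("used_memory"))
+```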
+
+#### Required permissions
+
+| Permission name |
+|-----------------|
+| [view_shard_stats]({{< relref "/operate/rs/references/rest-api/permissions#view_shard_stats" >}}) |
+
+### Request {#get-request}
+
+#### Example HTTP request
+
+    GET /v1/shards/stats/1?interval=1hour&stime=2014-08-28T10:00:00Z
+
+
+#### Request headers
+
+| Key | Value | Description |
+|-----|-------|-------------|
+| Host | cnm.cluster.fqdn | Domain name |
+| Accept | application/json | Accepted media type |
+
+
+#### URL parameters
+
+| Field | Type | Description |
+|-------|------|-------------|
+| uid | integer | The unique ID of the shard requested. |
+
+
+#### Query parameters
+
+| Field | Type | Description |
+|-------|------|-------------|
+| interval | string | Time interval for which we want stats: 1sec/10sec/5min/15min/1hour/12hour/1week (optional) |
+| stime | ISO_8601 | Start time from which we want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) |
+| etime | ISO_8601 | End time after which we don't want the stats. Should comply with the [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601) format (optional) |
+
+### Response {#get-response}
+
+Returns [statistics]({{< relref "/operate/rs/references/rest-api/objects/statistics" >}}) for the specified shard.
+
+#### Example JSON body
+
+```json
+{
+  "uid": "1",
+  "status": "active",
+  "node_uid": "1",
+  "role": "master",
+  "intervals": [
+    {
+      "interval": "1sec",
+      "stime": "2015-05-28T08:24:13Z",
+      "etime": "2015-05-28T08:24:18Z",
+      "avg_ttl": 0.0,
+      "blocked_clients": 0.0,
+      "connected_clients": 9.0,
+      "evicted_objects": 0.0,
+      "expired_objects": 0.0,
+      "last_save_time": 1432541051.0,
+      "used_memory": 5651440.0,
+      "mem_size_lua": 35840.0,
+      "used_memory_peak": 5888264.0,
+      "used_memory_rss": 5888264.0,
+      "no_of_expires": 0.0,
+      "no_of_keys": 0.0,
+      "pubsub_channels": 0.0,
+      "pubsub_patterns": 0.0,
+      "rdb_changes_since_last_save": 0.0,
+      "read_hits": 0.0,
+      "read_misses": 0.0,
+      "sync_full": 0.0,
+      "sync_partial_err": 0.0,
+      "sync_partial_ok": 0.0,
+      "total_req": 0.0,
+      "write_hits": 0.0,
+      "write_misses": 0.0
+    },
+    {
+      "interval": "1sec",
+      "stime": "2015-05-28T08:24:18Z",
+      "etime": "2015-05-28T08:24:23Z",
+      "// additional fields..."
+    }
+  ]
+}
+```
+
+### Status codes {#get-status-codes}
+
+| Code | Description |
+|------|-------------|
+| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error |
+| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | Shard does not exist |
+| [406 Not Acceptable](https://www.rfc-editor.org/rfc/rfc9110.html#name-406-not-acceptable) | Shard isn't currently active |
+---
+Title: Shard requests
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: REST API requests for database shards
+headerRange: '[1-2]'
+hideListLinks: true
+linkTitle: shards
+weight: $weight
+---
+
+| Method | Path | Description |
+|--------|------|-------------|
+| [GET](#get-all-shards) | `/v1/shards` | Get all shards |
+| [GET](#get-shard) | `/v1/shards/{uid}` | Get a specific shard |
+
+## Get all shards {#get-all-shards}
+
+    GET /v1/shards
+
+Get information about all shards in the cluster.
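+
+For illustration only, the following Python sketch lists all shards and requests two extra `redis_info` keys. Passing a list for `extra_info_keys` makes the `requests` package repeat the query parameter, which matches the example URL shown below; the base URL, credentials, and TLS handling are assumptions.
+
+```python
+import requests
+
+BASE_URL = "https://cluster.example.com:9443"   # assumption: your cluster's API endpoint
+AUTH = ("admin@example.com", "password")        # assumption: basic-auth credentials
+
+resp = requests.get(
+    f"{BASE_URL}/v1/shards",
+    auth=AUTH,
+    params={"extra_info_keys": ["used_memory_rss", "connected_clients"]},
+    verify=False,                               # assumption: self-signed cluster certificate
+)
+resp.raise_for_status()
+
+for shard in resp.json():
+    info = shard.get("redis_info", {})
+    print(f"shard {shard['uid']} ({shard['role']}) on node {shard['node_uid']}: "
+          f"{info.get('connected_clients')} connected clients")
+```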
+
+### Request {#get-all-request}
+
+#### Example HTTP request
+
+    GET /v1/shards?extra_info_keys=used_memory_rss&extra_info_keys=connected_clients
+
+#### Request headers
+
+| Key | Value | Description |
+|-----|-------|-------------|
+| Host | cnm.cluster.fqdn | Domain name |
+| Accept | application/json | Accepted media type |
+
+#### Query parameters
+
+| Field | Type | Description |
+|-------|------|-------------|
+| extra_info_keys | list of strings | A list of extra keys to be fetched (optional) |
+
+### Response {#get-all-response}
+
+Returns a JSON array of [shard objects]({{}}).
+
+#### Example JSON body
+
+```json
+[
+  {
+    "uid": "1",
+    "role": "master",
+    "assigned_slots": "0-16383",
+    "bdb_uid": 1,
+    "detailed_status": "ok",
+    "loading": {
+      "status": "idle"
+    },
+    "node_uid": "1",
+    "redis_info": {
+      "connected_clients": 14,
+      "used_memory_rss": 12263424
+    },
+    "report_timestamp": "2024-06-28T18:44:01Z",
+    "status": "active"
+  },
+  {
+    "uid": "2",
+    "role": "slave",
+    "// additional fields..."
+  }
+]
+```
+
+### Status codes {#get-all-status-codes}
+
+| Code | Description |
+|------|-------------|
+| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error. |
+
+## Get shard {#get-shard}
+
+    GET /v1/shards/{int: uid}
+
+Gets information about a single shard.
+
+### Request {#get-request}
+
+#### Example HTTP request
+
+    GET /v1/shards/1?extra_info_keys=used_memory_rss&extra_info_keys=connected_clients
+
+#### Request headers
+
+| Key | Value | Description |
+|-----|-------|-------------|
+| Host | cnm.cluster.fqdn | Domain name |
+| Accept | application/json | Accepted media type |
+
+#### URL parameters
+
+| Field | Type | Description |
+|-------|------|-------------|
+| uid | integer | The unique ID of the requested shard. |
+
+#### Query parameters
+
+| Field | Type | Description |
+|-------|------|-------------|
+| extra_info_keys | list of strings | A list of extra keys to be fetched (optional) |
+
+### Response {#get-response}
+
+Returns a [shard object]({{}}).
+
+#### Example JSON body
+
+```json
+{
+  "assigned_slots": "0-16383",
+  "bdb_uid": 1,
+  "detailed_status": "ok",
+  "loading": {
+    "status": "idle"
+  },
+  "node_uid": "1",
+  "redis_info": {
+    "connected_clients": 14,
+    "used_memory_rss": 12263424
+  },
+  "role": "master",
+  "report_timestamp": "2024-06-28T18:44:01Z",
+  "status": "active",
+  "uid": "1"
+}
+```
+
+### Status codes {#get-status-codes}
+
+| Code | Description |
+|------|-------------|
+| [200 OK](https://www.rfc-editor.org/rfc/rfc9110.html#name-200-ok) | No error. |
+| [404 Not Found](https://www.rfc-editor.org/rfc/rfc9110.html#name-404-not-found) | Shard UID does not exist. |
+---
+Title: Cluster debug info requests
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: Documents the Redis Enterprise Software REST API /cluster/debuginfo requests.
+headerRange: '[1-2]'
+linkTitle: debuginfo
+weight: $weight
+---
+
+| Method | Path | Description |
+|--------|------|-------------|
+| [GET](#get-cluster-debuginfo) | `/v1/cluster/debuginfo` | Get debug info from all nodes and databases |
+
+## Get cluster debug info {#get-cluster-debuginfo}
+
+    GET /v1/cluster/debuginfo
+
+Downloads a tar file that contains debug info from all nodes and databases.
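+
+Because the response is a gzipped tar file rather than JSON, a client should stream it to disk. A minimal Python sketch might look like the following; the base URL, credentials, and TLS handling are placeholders, not values defined by this reference.
+
+```python
+import requests
+
+BASE_URL = "https://cluster.example.com:9443"   # assumption: your cluster's API endpoint
+AUTH = ("admin@example.com", "password")        # assumption: basic-auth credentials
+
+# Stream the debug-info package to a local file without loading it all into memory.
+with requests.get(f"{BASE_URL}/v1/cluster/debuginfo", auth=AUTH,
+                  verify=False, stream=True) as resp:  # verify=False: self-signed cert assumption
+    resp.raise_for_status()
+    with open("debuginfo.tar.gz", "wb") as f:
+        for chunk in resp.iter_content(chunk_size=1 << 20):
+            f.write(chunk)
+```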
+ +#### Required permissions + +| Permission name | +|-----------------| +| [view_debugging_info]({{< relref "/operate/rs/references/rest-api/permissions#view_debugging_info" >}}) | + +### Request {#get-request} + +#### Example HTTP request + + GET /v1/cluster/debuginfo + +### Response {#get-response} + +Downloads the debug info in a tar file called `filename.tar.gz`. Extract the files from the tar file to access the debug info for all nodes. + +#### Response headers + +| Key | Value | Description | +|-----|-------|-------------| +| Content-Type | application/x-gzip | Media type of request/response body | +| Content-Length | 653350 | Length of the response body in octets | +| Content-Disposition | attachment; filename=debuginfo.tar.gz | Display response in browser or download as attachment | + +### Status codes {#get-status-codes} + +| Code | Description | +|------|-------------| +| [200 OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) | Success. | +| [500 Internal Server Error](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1) | Failed to get debug info. | +--- +Title: Change password hashing algorithm requests +alwaysopen: false +categories: +- docs +- operate +- rs +description: REST API requests to change the hashing algorithm for user passwords. +headerRange: '[1-2]' +linkTitle: change_password_hashing_algorithm +weight: $weight +--- + +| Method | Path | Description | +|--------|------|-------------| +| [PATCH](#patch-change-password-hashing-algorithm) | `/v1/cluster/change_password_hashing_algorithm` | Change the hashing policy for user passwords | + +## Change password hashing algorithm {#patch-change-password-hashing-algorithm} + + PATCH /v1/cluster/change_password_hashing_algorithm + +Changes the password hashing algorithm for the entire cluster. When you change the hashing algorithm, it rehashes the administrator password and passwords for all users, including default users. + +The hashing algorithm options are `SHA-256` or `PBKDF2`. The default hashing algorithm is `SHA-256`. + +#### Required permissions + +| Permission name | +|-----------------| +| [update_cluster]({{< relref "/operate/rs/references/rest-api/permissions#update_cluster" >}}) | + +### Request {#patch-request} + +#### Example HTTP request + + PATCH /v1/cluster/change_password_hashing_algorithm + +#### Example JSON body + +```json +{ "algorithm": "PBKDF2" } +``` + +#### Request headers + +| Key | Value | Description | +|-----|-------|-------------| +| Host | cnm.cluster.fqdn | Domain name | +| Accept | application/json | Accepted media type | + +#### Request body + +Include a JSON object `{ "algorithm": "