2 changes: 1 addition & 1 deletion config.toml
@@ -55,7 +55,7 @@ rdi_redis_gears_version = "1.2.6"
rdi_debezium_server_version = "2.3.0.Final"
rdi_db_types = "cassandra|mysql|oracle|postgresql|sqlserver"
rdi_cli_latest = "latest"
-rdi_current_version = "1.15.0"
+rdi_current_version = "1.16.0"

[params.clientsConfig]
"Python"={quickstartSlug="redis-py"}
@@ -0,0 +1,60 @@
---
Title: Redis Data Integration release notes 1.16.0 (December 2025)
alwaysopen: false
categories:
- docs
- operate
- rs
description: |
SQL-based transformations using Flink Table API for more flexible data shaping.
New RDI API v2 dry-run, info, and metrics endpoints for safer rollouts and better observability.
Improved deployment, installation, and local testing options, including KIND support and disk space preflight checks.
Resilience and stability improvements across Kubernetes, networking, and logging.
linkTitle: 1.16.0 (December 2025)
toc: 'true'
weight: 974
---

RDI's mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to:

- Meet the required speed and scale of read queries and provide an excellent and predictable user experience.
- Save resources and time when building pipelines and coding data transformations.
- Reduce the total cost of ownership by saving money on expensive database read replicas.

RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism.
It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required.
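
For example, a declarative transformation job can reshape a relational row into a Redis hash. The sketch below is illustrative only (the table and field names are hypothetical, and the exact schema of your job files may differ by version):

```yaml
# Illustrative RDI job: map rows from a hypothetical "employees" table
# to Redis hashes, adding a computed full_name field.
source:
  table: employees
transform:
  - uses: add_field
    with:
      fields:
        - field: full_name
          expression: concat([first_name, ' ', last_name])
          language: jmespath
output:
  - uses: redis.write
    with:
      data_type: hash
```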

## What's New in 1.16.0

- **External secrets mount status tracking**: The operator tracks and surfaces the status of external secrets mounts, helping you diagnose issues with secrets providers more quickly.
- **Improved metrics exposure for VM installs**: RDI annotates ingresses during VM-based installations so that metrics are automatically exposed for monitoring systems.
- **Debezium upgrade**: Upgraded Debezium to version 3.3.1, bringing connector fixes and performance improvements from the upstream project.
- **Installer preflight disk space checks**: The installer now checks for sufficient disk space and for a `noexec` mount option on the `/tmp` directory before starting the installation, reducing failures caused by insufficient capacity or a restricted temporary directory.
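
A preflight of this kind can be reproduced by hand. The following is a minimal sketch in the same spirit as the installer's checks, not the installer's actual logic (the 1 GiB threshold is an assumption):

```python
import os
import shutil

# Check free space in /tmp (threshold here is an illustrative 1 GiB).
free_bytes = shutil.disk_usage("/tmp").free
if free_bytes < 1 << 30:
    print("warning: less than 1 GiB free in /tmp")

# A noexec mount on /tmp would prevent an installer from running
# extracted binaries; on Linux, inspect the mount options.
if os.path.exists("/proc/mounts"):
    with open("/proc/mounts") as mounts:
        for line in mounts:
            fields = line.split()
            if fields[1] == "/tmp" and "noexec" in fields[3].split(","):
                print("warning: /tmp is mounted noexec")
```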

### RDI API changes

- **Dry-run support in API v2**: RDI API v2 now supports dry-run mode, allowing you to validate pipeline configuration and connectivity without applying changes. New dry-run endpoints make it safer to test changes before rollout and return clear validation errors instead of generic failures when the request body is missing or invalid.
- **Expanded API v2 observability and metadata**: RDI API v2 now exposes additional information and metrics:
- New `/info` endpoint that includes the running RDI version. This endpoint is not behind authentication and can be used to check RDI availability.
- New `/metric-collections` endpoints that describe available metric collections.
- Pipeline `/api/v2/pipelines/{name}` and status `/api/v2/pipelines/{name}/status` responses now include component-level information to simplify troubleshooting.
- Metric collection selection is now based on component type instead of name, avoiding misconfigured metrics.
- **API correctness and compatibility improvements**: Ensured backwards compatibility between API v1 and v2 and fixed issues where API v2 source/target dry-run endpoints could return HTTP 500 for certain invalid requests.
- **Environment variable handling for monitoring statistics**: Fixed an issue where `/api/v1/monitoring/statistics` could return HTTP 500 when the target database port was specified as an environment variable.
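
A monitoring script might poll the new endpoints. The sketch below only assembles the endpoint paths named above; the base URL and pipeline name are hypothetical, and the dry-run routes are omitted because their exact paths are not listed here:

```python
# Build URLs for the API v2 endpoints described in these release notes.
# The host and pipeline name below are placeholders, not real defaults.
BASE = "https://rdi.example.com"

def info_url(base: str) -> str:
    # /info is unauthenticated and reports the running RDI version,
    # so it doubles as a simple availability check.
    return f"{base}/info"

def pipeline_status_url(base: str, name: str) -> str:
    # Status responses now include component-level information.
    return f"{base}/api/v2/pipelines/{name}/status"

def metric_collections_url(base: str) -> str:
    # Describes the metric collections available for scraping.
    return f"{base}/metric-collections"
```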

### Spanner integration with Flink

- **Enhanced Spanner disaster recovery with Redis Active-Active**: When using Cloud Spanner as a CDC source with a Redis Active-Active database, the Spanner Flink collector now handles long-running workloads and failover scenarios more reliably, improving the overall disaster recovery solution.
- **Spanner client resource management**: Spanner clients are now reused per database instead of being created repeatedly, eliminating a memory leak and reducing connection churn on Spanner.
- **Spanner collector memory tuning**: Increased the default process and heap size for the Spanner Flink collector and aligned memory fractions with the main Flink processor so it can handle larger change streams without instability.
- **Resilient Redis writes from the Spanner collector**: The Spanner Flink collector now uses configurable connection and socket timeouts plus retries with exponential backoff and jitter when writing to Redis, with metrics for retry attempts and failures to help you monitor reliability.
- **Configurable keys for Spanner tables without unique identifiers**: Added configuration to define keys for Spanner tables that do not have primary or unique keys, aligning behavior with Debezium and ensuring stable key generation for Redis.
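
The retry behavior described above follows the standard exponential-backoff-with-jitter pattern. A minimal sketch of that pattern (the parameter names and defaults are illustrative, not RDI configuration):

```python
import random

def backoff_delays(attempts: int = 5, base: float = 0.5, cap: float = 30.0):
    """Yield one delay per retry: exponential backoff with full jitter."""
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))  # 0.5s, 1s, 2s, ... capped
        yield random.uniform(0.0, ceiling)         # jitter spreads retries out
```

Randomizing each delay up to the exponential ceiling keeps many writers from retrying in lockstep after a shared Redis outage.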

### Bug Fixes and Stability Improvements

- **Flink collector configuration validation**: RDI now validates `value_capture_type` on startup, catching configuration errors earlier in the deployment lifecycle.
- **Improved support package generation**: Fixed several issues in the support package dump process so that diagnostic bundles are generated reliably.

## Limitations

RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits.