Commit ca1f4ad

DOC-6026 implemented feedback
1 parent 701b115 commit ca1f4ad

2 files changed: +9 additions, -6 deletions

content/embeds/rdi-when-to-use-dec-tree.md

Lines changed: 2 additions & 1 deletion
@@ -114,7 +114,8 @@ questions:
   text: |
     Is your total data size smaller than 100GB?
   whyAsk: |
-    RDI has practical limits on the total data size it can manage. Very large datasets may exceed these limits.
+    RDI has practical limits on the total data size it can manage, based
+    on the throughput requirements for full sync.
   answers:
     no:
       value: "No"

content/embeds/rdi-when-to-use.md

Lines changed: 7 additions & 5 deletions
@@ -2,19 +2,20 @@

 RDI is a good fit when:

-- You want to use Redis as the target database for caching data.
+- You want your app/micro-services to read from Redis to scale reads at speed.
 - You want to transfer data to Redis from a *single* source database.
 - You must use a slow database as the system of record for the app.
 - The app must always *write* its data to the slow database.
 - Your app can tolerate *eventual* consistency of data in the Redis cache.
 - You want a self-managed solution or AWS based solution.
 - The source data changes frequently in small increments.
 - There are no more than 10K changes per second in the source database.
-- The total data size is not larger than 100GB.
 - RDI throughput during
-  [full sync]({{< relref "/integrate/redis-data-integration/data-pipelines#pipeline-lifecycle" >}}) would not exceed 30K records per second and during
+  [full sync]({{< relref "/integrate/redis-data-integration/data-pipelines#pipeline-lifecycle" >}}) would not exceed 30K records per second (for an average 1KB record size) and during
   [CDC]({{< relref "/integrate/redis-data-integration/data-pipelines#pipeline-lifecycle" >}})
-  would not exceed 10K records per second.
+  would not exceed 10K records per second (for an average 1KB record size).
+- The total data size is not larger than 100GB (since this would typically exceed the throughput
+  limits just mentioned for full sync).
 - You don’t need to perform join operations on the data from several tables
   into a [nested Redis JSON object]({{< relref "/integrate/redis-data-integration/data-pipelines/data-denormalization#joining-one-to-many-relationships" >}}).
 - RDI supports the [data transformations]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples" >}}) you need for your app.
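The 100GB guideline above follows directly from the stated throughput ceiling. A rough back-of-the-envelope sketch (the 1KB average record size and 30K records-per-second figures come from the list; the helper function name is ours, not part of RDI):

```python
# Illustrative arithmetic only, not an RDI API: estimate how long a full sync
# would take at the documented throughput ceiling.

def full_sync_minutes(total_bytes: float,
                      avg_record_bytes: float = 1_000,
                      records_per_sec: float = 30_000) -> float:
    """Estimated full-sync duration in minutes at a given sustained throughput."""
    records = total_bytes / avg_record_bytes
    return records / records_per_sec / 60

# 100 GB of ~1 KB records at 30K records/sec is roughly an hour of full sync.
print(round(full_sync_minutes(100e9), 1))  # -> 55.6
```

Datasets much beyond 100GB would stretch the initial full sync to multiple hours at this rate, which is why the size limit tracks the throughput limit.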
@@ -31,7 +32,8 @@ RDI is not a good fit when:
   than *eventual* consistency.
 - You need *transactional* consistency between the source and target databases.
 - The data is ingested from two replicas of Active-Active at the same time.
-- The app must *write* data to the Redis cache, which then updates the source database.
+- The app must *write* data to the Redis cache, which then updates the source database
+  (write-behind/write-through patterns).
 - Your data set will only ever be small.
 - Your data is updated by some batch or ETL process with long and large transactions - RDI will fail
   processing these changes.
