-1. In **Connectivity Method**, select **Public**, fill in your Kafka brokers endpoints. You can use commas `,` to separate multiple endpoints.
+1. In **Connectivity Method**, select **Public**, and fill in your Kafka broker endpoints. You can use commas `,` to separate multiple endpoints.
2. Select an **Authentication** option according to your Kafka authentication configuration.
- If your Kafka does not require authentication, keep the default option **Disable**.
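For reference, a comma-separated broker endpoint list for step 1 might look like the following. The host names and port are placeholders for illustration; replace them with your own Kafka broker addresses.

```
broker1.example.com:9092,broker2.example.com:9092,broker3.example.com:9092
```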
@@ -109,7 +109,7 @@ The steps vary depending on the connectivity method you select.
1. Customize **Table Filter** to filter the tables that you want to replicate. For the rule syntax, refer to [table filter rules](https://docs.pingcap.com/tidb/stable/table-filter/#syntax).
- **Replication Scope**: you can choose to only replicate tables with valid keys or replicate all selected tables.
- - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click `apply`, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under the `Filter results`.
+ - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click **Apply**, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under **Filter results**.
- **Case Sensitive**: you can set whether the matching of database and table names in filter rules is case-sensitive. By default, matching is case-insensitive.
- **Filter results with valid keys**: this column displays the tables that have valid keys, including primary keys or unique indexes.
- **Filter results without valid keys**: this column shows tables that lack primary keys or unique keys. These tables present a challenge during replication because the absence of a unique identifier can result in inconsistent data when the downstream handles duplicate events. To ensure data consistency, it is recommended to add unique keys or primary keys to these tables before initiating the replication. Alternatively, you can add filter rules to exclude these tables. For example, you can exclude the table `test.tbl1` by using the rule `"!test.tbl1"`.
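For example, the following illustrative rules replicate every table in the `test` database except `test.tbl1` (the database and table names are placeholders; substitute your own):

```
test.*
!test.tbl1
```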
@@ -117,7 +117,7 @@ The steps vary depending on the connectivity method you select.
2. Customize **Event Filter** to filter the events that you want to replicate.
- **Tables matching**: you can set which tables the event filter will be applied to in this column. The rule syntax is the same as that used for the preceding **Table Filter** area.
- - **Event Filter**: you can choose the events you want to ingnore.
+ - **Event Filter**: you can choose the events you want to ignore.
3. Customize **Column Selector** to select columns from events and send only the data changes related to those columns to the downstream.
@@ -130,7 +130,7 @@ The steps vary depending on the connectivity method you select.
- Avro is a compact, fast, and binary data format with rich data structures, which is widely used in various flow systems. For more information, see [Avro data format](https://docs.pingcap.com/tidb/stable/ticdc-avro-protocol).
- Canal-JSON is a plain JSON text format, which is easy to parse. For more information, see [Canal-JSON data format](https://docs.pingcap.com/tidb/stable/ticdc-canal-json).
- - Open Protocol is a row-level data change notification protocol that provides data sources for monitoring, caching, full-text indexing, analysis engines, and primary-secondary replication between different databases. For more information, see [Open Protocol data format](https://docs.pingcap.com/tidb/stable/ticdc-open-protocol).
+ - Open Protocol is a row-level data change notification protocol that provides data sources for monitoring, caching, full-text indexing, analysis engines, and primary-secondary replication between different databases. For more information, see [Open Protocol data format](https://docs.pingcap.com/tidb/stable/ticdc-open-protocol).
- Debezium is a tool for capturing database changes. It converts each captured database change into a message called an "event" and sends these events to Kafka. For more information, see [Debezium data format](https://docs.pingcap.com/tidb/stable/ticdc-debezium).
5. Enable the **TiDB Extension** option if you want to add TiDB-extension fields to the Kafka message body.
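As a rough illustration of the data format options above and the **TiDB Extension** option, the following abridged Canal-JSON message shows the general shape of a row change event with the extension field enabled. The database, table, column values, and timestamps are placeholders, and the exact fields depend on your table schema and changefeed settings.

```json
{
  "id": 0,
  "database": "test",
  "table": "t1",
  "pkNames": ["id"],
  "isDdl": false,
  "type": "INSERT",
  "es": 1700000000000,
  "ts": 1700000000100,
  "data": [{ "id": "1", "name": "a" }],
  "old": null,
  "_tidb": { "commitTs": 429918007904436226 }
}
```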
@@ -180,7 +180,7 @@ The steps vary depending on the connectivity method you select.
- **Distribute changelogs by column value to Kafka partition**
- If you want the changefeed to send Kafka messages of a table to different partitions, choose this distribution method. The specified column values of a row changelog will determine which partition the changelog is sent to. This distribution method ensures orderliness in each partition and guarantees that the changelog with the same column values is send to the same partition.
+ If you want the changefeed to send Kafka messages of a table to different partitions, choose this distribution method. The specified column values of a row changelog determine which partition the changelog is sent to. This distribution method ensures ordering within each partition and guarantees that changelogs with the same column values are sent to the same partition.
9. In the **Topic Configuration** area, configure the following numbers. The changefeed will automatically create the Kafka topics according to the numbers.
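The **Distribute changelogs by column value to Kafka partition** option described above can be pictured as hashing the selected column values and taking the result modulo the number of partitions. The following is only a conceptual sketch under that assumption, not TiCDC's actual implementation or hash function:

```python
import zlib

def pick_partition(column_values: tuple, partition_count: int) -> int:
    # Serialize the selected column values and hash them deterministically,
    # so rows whose selected columns are equal always map to the same partition.
    key = "|".join(str(v) for v in column_values).encode("utf-8")
    return zlib.crc32(key) % partition_count

# Equal column values always yield the same partition index.
assert pick_partition(("us-west", 42), 6) == pick_partition(("us-west", 42), 6)
```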
diff --git a/tidb-cloud/essential-changefeed-sink-to-mysql.md b/tidb-cloud/essential-changefeed-sink-to-mysql.md
index 9d1e32791686f..286bef181be75 100644
--- a/tidb-cloud/essential-changefeed-sink-to-mysql.md
+++ b/tidb-cloud/essential-changefeed-sink-to-mysql.md
@@ -34,7 +34,7 @@ If your MySQL service can be accessed over the public network, you can choose to
-Private link connection leverage **Private Link** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
+Private link connections leverage **Private Link** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
You can connect your TiDB Cloud cluster to your MySQL service securely through a private link connection. If the private link connection is not available for your MySQL service, follow [Connect to Amazon RDS via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-aws-rds.md) or [Connect to Alibaba Cloud ApsaraDB RDS for MySQL via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-alicloud-rds.md) to create one.
@@ -48,7 +48,7 @@ The **Sink to MySQL** connector can only sink incremental data from your TiDB Cl
To load the existing data:
-1. Extend the [tidb_gc_life_time](https://docs.pingcap.com/tidb/stable/system-variables#tidb_gc_life_time-new-in-v50) to be longer than the total time of the following two operations, so that historical data during the time is not garbage collected by TiDB.
+1. Extend the [tidb_gc_life_time](https://docs.pingcap.com/tidb/stable/system-variables#tidb_gc_life_time-new-in-v50) to be longer than the total time of the following two operations, so that historical data during this period is not garbage collected by TiDB.
- The time to export and import the existing data
- The time to create **Sink to MySQL**
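For example, if you expect the export, the import, and the changefeed creation to take less than 72 hours in total, you can run a statement like the following in your TiDB cluster. The `72h` value is only an illustration; choose a duration that covers your own workflow, and restore the original value after the sink is created.

```sql
SET GLOBAL tidb_gc_life_time = '72h';
```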
@@ -82,7 +82,7 @@ After completing the prerequisites, you can sink your data to MySQL.
- If you choose **Public**, fill in your MySQL endpoint.
- If you choose **Private Link**, select the private link connection that you created in the [Network](#network) section, and then fill in the MySQL port for your MySQL service.
-4. In **Authentication**, fill in the MySQL user name, password and TLS Encryption of your MySQL service. TiDB Cloud does not support self-signed certificates for MySQL TLS connections currently.
+4. In **Authentication**, fill in the MySQL user name and password, and configure TLS encryption for your MySQL service. Currently, TiDB Cloud does not support self-signed certificates for MySQL TLS connections.
5. Click **Next** to test whether TiDB can connect to MySQL successfully:
@@ -92,7 +92,7 @@ After completing the prerequisites, you can sink your data to MySQL.
6. Customize **Table Filter** to filter the tables that you want to replicate. For the rule syntax, refer to [table filter rules](https://docs.pingcap.com/tidb/stable/table-filter/#syntax).
- **Replication Scope**: you can choose to only replicate tables with valid keys or replicate all selected tables.
- - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click `apply`, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under the `Filter results`.
+ - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click **Apply**, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under **Filter results**.
- **Case Sensitive**: you can set whether the matching of database and table names in filter rules is case-sensitive. By default, matching is case-insensitive.
- **Filter results with valid keys**: this column displays the tables that have valid keys, including primary keys or unique indexes.
- **Filter results without valid keys**: this column shows tables that lack primary keys or unique keys. These tables present a challenge during replication because the absence of a unique identifier can result in inconsistent data when the downstream handles duplicate events. To ensure data consistency, it is recommended to add unique keys or primary keys to these tables before initiating the replication. Alternatively, you can add filter rules to exclude these tables. For example, you can exclude the table `test.tbl1` by using the rule `"!test.tbl1"`.
@@ -100,20 +100,20 @@ After completing the prerequisites, you can sink your data to MySQL.
7. Customize **Event Filter** to filter the events that you want to replicate.
- **Tables matching**: you can set which tables the event filter will be applied to in this column. The rule syntax is the same as that used for the preceding **Table Filter** area.
- - **Event Filter**: you can choose the events you want to ingnore.
+ - **Event Filter**: you can choose the events you want to ignore.
8. In **Start Replication Position**, configure the starting position for your MySQL sink.
- - If you have [loaded the existing data](#load-existing-data-optional) using Export, select **From Time** and fill in the snapshot time that you get from Export. Pay attention the time zone.
+ - If you have [loaded the existing data](#load-existing-data-optional) using Export, select **From Time** and fill in the snapshot time that you get from Export. Pay attention to the time zone.
- If you do not have any data in the upstream TiDB cluster, select **Start replication from now on**.
9. Click **Next** to configure your changefeed specification.
- In the **Changefeed Name** area, specify a name for the changefeed.
-10. If you confirm that all configurations are correct, click **Submit**. If you want to modify some configurations, click **Previous** to go back to the previous configuration page.
+10. If you confirm that all configurations are correct, click **Submit**. If you want to modify some configurations, click **Previous** to go back to the previous configuration page.
-11. The sink starts soon, and you can see the status of the sink changes from **Creating** to **Running**.
+11. The sink starts soon, and you can see the sink status change from **Creating** to **Running**.
Click the changefeed name, and you can see more details about the changefeed, such as the checkpoint, replication latency, and other metrics.
From c7f2950145a4fa4eb753525195d7fc2ef01c200d Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Fri, 26 Dec 2025 16:00:10 +0800
Subject: [PATCH 7/8] fix changefeed
---
tidb-cloud/essential-changefeed-overview.md | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 9ed3b05fc242c..136df8c168697 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -30,7 +30,7 @@ On the **Changefeed** page, you can create a changefeed, view a list of existing
To create a changefeed, refer to the tutorials:
-- [Sink to Apache Kafka](/tidb-cloud/essential-changefeed-sink-to-apache-kafka.md)
+- [Sink to Apache Kafka](/tidb-cloud/essential-changefeed-sink-to-kafka.md)
- [Sink to MySQL](/tidb-cloud/essential-changefeed-sink-to-mysql.md)
## View a changefeed
@@ -80,7 +80,6 @@ ticloud serverless changefeed resume -c --changefeed-id
-
## Edit a changefeed
> **Note:**
From 3623e4a17fcdaf4f899909374275dd80f757a43a Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Fri, 26 Dec 2025 16:46:29 +0800
Subject: [PATCH 8/8] fix link
---
tidb-cloud/essential-changefeed-sink-to-kafka.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index f578de8ce79e9..5a0b2efeec95e 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -59,7 +59,7 @@ To allow TiDB Cloud changefeeds to stream data to Apache Kafka and create Kafka
- The `Create` and `Write` permissions are added for the topic resource type in Kafka.
- The `DescribeConfigs` permission is added for the cluster resource type in Kafka.
-For example, if your Kafka cluster is in Confluent Cloud, refer to [Resources](https://docs.confluent.io/platform/current/kafka/authorization.html#resources) and [Adding ACLs](https://docs.confluent.io/platform/current/kafka/authorization.html#adding-acls) in the Confluent documentation for more information.
+For example, if your Kafka cluster is in Confluent Cloud, refer to [Resources](https://docs.confluent.io/platform/current/kafka/authorization.html#resources) and [Adding ACLs](https://docs.confluent.io/platform/current/security/authorization/acls/manage-acls.html#add-acls) in the Confluent documentation for more information.
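For a self-managed Kafka cluster, you can grant these permissions with the `kafka-acls.sh` tool that ships with Apache Kafka. The broker address, principal name, and topic pattern below are placeholders for illustration; adjust them to your environment and authentication setup.

```shell
# Allow the changefeed's Kafka user to create and write topics.
kafka-acls.sh --bootstrap-server <broker-endpoint:9092> --add \
  --allow-principal User:<changefeed-user> \
  --operation Create --operation Write \
  --topic '*'

# Allow the same user to describe configurations on the cluster resource.
kafka-acls.sh --bootstrap-server <broker-endpoint:9092> --add \
  --allow-principal User:<changefeed-user> \
  --operation DescribeConfigs \
  --cluster
```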
## Step 1. Open the Changefeed page for Apache Kafka