cloud: Premium supports data migration #22821

alastori wants to merge 19 commits into pingcap:release-8.5 from
Conversation
Add a new Public Preview guide for using the Data Migration feature on TiDB Cloud Premium, plus the corresponding entry in the Premium TOC. Mirrors the structure of premium-export.md.
- Update wizard structure to 4 steps (add Precheck as Step 3)
- Tighten Job Name constraints language to match wizard helper text
- Note that Private Link is in development and not yet generally available

Verified against the Premium DM proto enums and the dev wizard text; the prod release tag does not yet include the Private Link backend support, so the doc deliberately documents Public-only connectivity.
The 60-second safe-mode behavior is implemented in the legacy DM stack (used by Dedicated and Essential) and does not apply to the Premium DM service. Verified via dataflow-service-ng/app/models/premium_dm/, which contains no safe-mode references.
Verified the complete wizard flow against the dev environment with a real MySQL source connection. Several corrections:

- Step 2 has two controls under Migration Type: "Migration process" (Full + Incremental / Incremental only) and "Existing data migration mode" (Logical default / Physical). Document both.
- Object selection is an All / Customize toggle, with Customize revealing a transfer-list pattern between source and selected.
- Step 3 is named "Pre-check" (hyphenated) in the UI; "Check Again" re-runs it; warnings can be ignored via a confirmation dialog.
- Mode label is "Incremental only", not "Incremental Data Only".
- Step 4 review shows three sections: Job Configuration, Source Connection Profile, Target Connection Profile.
- The PROCESS privilege is also recommended; the pre-check warns when it is missing.
Safe mode is implemented in the tiflow DM kernel (used by Premium DM via the agent layer), not in the cloud control plane. The earlier removal was based on a search of the dataflow-service repo only, which is incomplete. Restoring the 60-second safe-mode note so the Premium doc matches the underlying replication engine behavior.
Customers reading the new Premium DM guide cross-reference the canonical Cloud DM doc for binary-log setup, privileges, and limitations. Without Premium variants in the canonical doc, those links would either render Dedicated-default content or leave tier placeholders blank.

Changes:

- TOC-tidb-cloud-premium.md: add the canonical and incremental-only Cloud DM docs as siblings of premium-data-migration.md so Premium customers can navigate to them.
- tidb-cloud/migrate-from-mysql-using-data-migration.md: add Premium tier to all inline tier-name placeholders, plus three new Premium variant blocks: Public Preview note, supported sources matrix, and the Physical / Logical mode discussion (including PITR / changefeed and concurrent-job caveats for physical mode).
- tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md: add Premium tier to all inline tier-name placeholders.
- tidb-cloud/premium/premium-data-migration.md: add the two physical-mode caveats (PITR / changefeed; concurrent-job limit) inline so they are visible in the Premium-tier overview without requiring readers to click through.

The Dedicated and Essential renderings of all three docs are unchanged.
cc @Oreoxmt
The canonical Cloud DM doc anchors are:

- "grant-required-privileges-to-the-migration-user-in-the-source-mysql-database" (note "source-mysql", not just "source")
- "grant-required-privileges-for-migration" (parent ### section; the target-side #### heading uses CustomContent variants and the rendered anchor is not stable, so link to the parent instead)

Detected by the internal-links-anchors CI job on PR pingcap#22821.
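For reference, these anchors follow GitHub-style heading slugging. A minimal sketch of how the rendered anchor is derived from a heading (the exact slug algorithm used by the docs CI is an assumption; `slugify` is a hypothetical helper):

```shell
# Sketch: derive a rendered anchor from a Markdown heading, assuming
# GitHub-style slugging (lowercase; drop punctuation; spaces become hyphens).
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/[^a-z0-9 -]//g' -e 's/ /-/g'
}

slugify "Grant required privileges to the migration user in the source MySQL database"
```

Running this reproduces the first anchor above, including the "source-mysql" segment that the CI job flagged.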
Code Review
This pull request introduces documentation for migrating data to TiDB Cloud Premium using the Data Migration feature, including a new guide and updates to existing migration docs to incorporate Premium-specific details like logical and physical migration modes. The review feedback focuses on style guide adherence, specifically recommending the removal of passive voice, ensuring consistent terminology, using backticks for SQL keywords, and correcting minor grammatical and tense issues.
<CustomContent plan="premium">

- For {{{ .premium }}}, both logical mode (default) and physical mode are supported. Logical mode exports rows as SQL statements and replays them on the target instance, consuming Request Capacity Units (RCUs) on the target during the load. Physical mode uses `IMPORT INTO` on the target instance and is recommended for large datasets where load throughput and cost are priorities.
- When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job will be stuck. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.
Avoid using passive voice. State the subject clearly.
Suggested change:
Before: - When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job will be stuck. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.
After: - When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job stops. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.
References
- Avoid passive voice overuse.
> **Note:**
>
> The Data Migration feature for {{{ .premium }}} is currently in Public Preview. During Public Preview, the source database must be reachable over a public network endpoint, and the source connection cannot be reused across migration jobs. For details, see [Limitations](#limitations).
Avoid using passive voice. State the subject clearly.
Suggested change:
Before: > The Data Migration feature for {{{ .premium }}} is currently in Public Preview. During Public Preview, the source database must be reachable over a public network endpoint, and the source connection cannot be reused across migration jobs. For details, see [Limitations](#limitations).
After: > The Data Migration feature for {{{ .premium }}} is currently in Public Preview. During Public Preview, the source database must be reachable over a public network endpoint, and you cannot reuse the source connection across migration jobs. For details, see [Limitations](#limitations).
References
- Avoid passive voice overuse.
When you use physical mode, the following limitations apply:

- After the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job will be stuck. If you need to enable PITR or have any changefeed, use logical mode instead.
Avoid using passive voice. State the subject clearly.
Suggested change:
Before: - After the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job will be stuck. If you need to enable PITR or have any changefeed, use logical mode instead.
After: - After the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job stops. If you need to enable PITR or have any changefeed, use logical mode instead.
References
- Avoid passive voice overuse.
### General limitations

- The system databases `mysql`, `information_schema`, `performance_schema`, and `sys` are filtered out and not migrated, even if you select all databases.
- During existing data migration, if the target database already contains the table to be migrated and there are duplicate keys, the rows with duplicate keys are replaced.
Avoid using passive voice. State the subject clearly.
Suggested change:
Before: - During existing data migration, if the target database already contains the table to be migrated and there are duplicate keys, the rows with duplicate keys are replaced.
After: - During existing data migration, if the target database already contains the table to be migrated and there are duplicate keys, TiDB Cloud replaces the rows with duplicate keys.
References
- Avoid passive voice overuse.
- The system databases `mysql`, `information_schema`, `performance_schema`, and `sys` are filtered out and not migrated, even if you select all databases.
- During existing data migration, if the target database already contains the table to be migrated and there are duplicate keys, the rows with duplicate keys are replaced.
- During incremental data migration, if a migration job recovers from an abrupt error, it might enter safe mode for 60 seconds. During safe mode, `INSERT` statements are migrated as `REPLACE`, and `UPDATE` statements as `DELETE` and `REPLACE`. For source tables without primary keys or non-null unique indexes, this can result in duplicated rows in the target instance.
Use backticks for SQL keywords and avoid passive voice.
Suggested change:
Before: - During incremental data migration, if a migration job recovers from an abrupt error, it might enter safe mode for 60 seconds. During safe mode, `INSERT` statements are migrated as `REPLACE`, and `UPDATE` statements as `DELETE` and `REPLACE`. For source tables without primary keys or non-null unique indexes, this can result in duplicated rows in the target instance.
After: - During incremental data migration, if a migration job recovers from an abrupt error, it might enter safe mode for 60 seconds. During safe mode, TiDB Cloud migrates `INSERT` statements as `REPLACE`, and `UPDATE` statements as `DELETE` and `REPLACE`. For source tables without primary keys or non-null unique indexes, this can result in duplicated rows in the target instance.
4. On the **Configure source and target connection** step, enter the following information:

    - **Job Name**: a name for the migration job. The default value is `migration_job_{timestamp}`. The name must start with a letter, can contain letters, numbers, underscores (`_`), and hyphens (`-`), and must be less than 60 characters.
Use 'fewer' for countable items and prefer present tense.
Suggested change:
Before: - **Job Name**: a name for the migration job. The default value is `migration_job_{timestamp}`. The name must start with a letter, can contain letters, numbers, underscores (`_`), and hyphens (`-`), and must be less than 60 characters.
After: - **Job Name**: a name for the migration job. The default value is `migration_job_{timestamp}`. The name must start with a letter, contains letters, numbers, underscores (`_`), and hyphens (`-`), and must be fewer than 60 characters.
References
- Prefer present tense unless describing historical behavior.
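The job-name constraints quoted above are mechanical enough to check in code. A minimal sketch, assuming the rules exactly as the wizard helper text states them (`is_valid_job_name` is a hypothetical helper, not part of any TiDB Cloud tooling):

```shell
# Sketch: validate a migration job name against the wizard's stated rules:
# starts with a letter; letters, digits, underscores, hyphens; < 60 characters.
is_valid_job_name() {
  name="$1"
  [ "${#name}" -lt 60 ] || return 1
  printf '%s' "$name" | grep -Eq '^[A-Za-z][A-Za-z0-9_-]*$'
}

is_valid_job_name "migration_job_1714500000" && echo "valid"
is_valid_job_name "1bad-name" || echo "invalid: must start with a letter"
```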
In the **Select Objects to Migrate** section, choose:

- **All** (default): migrate every database and table on the source. The system databases (`mysql`, `information_schema`, `performance_schema`, `sys`) are excluded automatically.
Avoid using passive voice. State the subject clearly.
Suggested change:
Before: - **All** (default): migrate every database and table on the source. The system databases (`mysql`, `information_schema`, `performance_schema`, `sys`) are excluded automatically.
After: - **All** (default): migrate every database and table on the source. TiDB Cloud automatically excludes the system databases (`mysql`, `information_schema`, `performance_schema`, `sys`).
References
- Avoid passive voice overuse.
### Step 3: Pre-check

The console runs the pre-check against the source database, network connectivity, and the target {{{ .premium }}} instance. The progress bar shows **Running {percentage}%** while checks execute, and **Finished 100%** when complete. The summary line reports total items, completed, passed, with warning, and failed.
Grammar correction: 'with warnings' instead of 'with warning'.
Suggested change:
Before: The console runs the pre-check against the source database, network connectivity, and the target {{{ .premium }}} instance. The progress bar shows **Running {percentage}%** while checks execute, and **Finished 100%** when complete. The summary line reports total items, completed, passed, with warning, and failed.
After: The console runs the pre-check against the source database, network connectivity, and the target {{{ .premium }}} instance. The progress bar shows **Running {percentage}%** while checks execute, and **Finished 100%** when complete. The summary line reports the total number of items, including those that are completed, passed, with warnings, or failed.
References
- Correct English grammar, spelling, and punctuation mistakes, if any.
The review page shows three sections summarizing the migration job:

- **Job Configuration**: job name and migration type.
- **Source Connection Profile**: data source, host, port, connectivity method, username, SSL/TLS status, selected objects, and import mode.
Terminology consistency: use 'existing data migration mode' as defined earlier in the document.
Suggested change:
Before: - **Source Connection Profile**: data source, host, port, connectivity method, username, SSL/TLS status, selected objects, and import mode.
After: - **Source Connection Profile**: data source, host, port, connectivity method, username, SSL/TLS status, selected objects, and existing data migration mode.
References
- Use consistent terminology.
Apply 7 of 9 Gemini suggestions on PR pingcap#22821, all marked low priority and aligned with the pingcap/docs styleguide:

- Active voice: replace "the source connection cannot be reused" with "you cannot reuse the source connection".
- Active voice: replace "rows ... are replaced" with "TiDB Cloud replaces the rows" in the existing-data limitation.
- Active voice + subject clarity: replace "INSERT statements are migrated as ..." with "TiDB Cloud migrates INSERT statements as ...".
- Active voice: replace "the migration job will be stuck" with "the migration job stops" (Premium DM doc + canonical Cloud DM doc).
- Active voice + subject clarity: replace "system databases ... are excluded automatically" with "TiDB Cloud automatically excludes the system databases".
- Grammar: "with warning" -> "with warnings"; rephrase the pre-check summary line for clarity.
- Terminology consistency: in the Step 4 review section, replace "import mode" with "the existing data migration mode (shown as Import Mode on the review page)" to bridge the wizard's two labels for the same concept.

Skipped: the suggestion to use "fewer than 60 characters" / "contains letters" instead of "less than 60 characters" / "can contain letters" is intentionally rejected; the current wording mirrors the wizard's helper text verbatim.
End-to-end wizard verification on the dev cluster created a real migration job (id dmtskc3frek3p5fhy7ixu6wpj7cy2r4) and inspected the post-creation experience:

- The Job Detail page does not expose action buttons (just Summary and Progress panels).
- The list-page actions menu (the "..." button at the end of each row) shows different items based on job status. While the job is in the Creating state, only View and Delete are visible. Pause and Resume become available once the job reaches a running or paused state.

The doc previously implied Pause/Resume/Delete were always available from the detail page or the list. Replaced with status-aware phrasing and noted the Creating-state subset explicitly. The dev cluster job remained in Creating for 9+ minutes without transitioning, matching the March AS-IS report KI-5 (a dev infrastructure issue, not a feature gap), so Pause/Resume behavior was confirmed via the API surface (PausePremiumMigration / ResumePremiumMigration RPCs in the proto) rather than the UI.
/cc @Oreoxmt
/assign |
@alastori I suggest removing tidb-cloud/premium/premium-data-migration.md, or reducing it to a short overview page only. Detailed supported source databases, prerequisites, and migration steps are already covered in tidb-cloud/migrate-from-mysql-using-data-migration.md and tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md.
Premium DM now supports AWS PrivateLink for source connectivity, sharing the Private Endpoint UI with Premium Changefeed (dataflow-service pingcap#3347, dbaas-ui pingcap#4755). The canonical Cloud DM doc had no Premium variant for the connection methods table or Step 2 wizard guidance, and the supported-sources block still asserted public-only access.

Changes (all in migrate-from-mysql-using-data-migration.md):

- Replace the stale "connect via the public endpoint" sentence with a pointer to the connection-methods section.
- Add a Premium variant of the connection methods table (Public for all providers; Private Link for AWS only).
- Add a Premium variant of the "Private link or private endpoint" section covering the AWS-side NLB and Endpoint Service setup plus the TiDB Cloud-side Private Endpoint creation under Networking > Private Endpoint for External Services.
- Add Premium variants for the Step 2 connectivity-method selection and the conditional follow-up (Hostname or IP address for Public, Private Endpoint dropdown with an inline create-here link for Private Link).
- Add a Premium variant for the post-validate step (Public IP allowlist or AWS endpoint connection acceptance).

Field labels verified end-to-end against the wizard on dev (cluster keep-720h-cslb, us-east-1, 2026-04-30).
Apply the structural restructure proposed in #2: move Premium Data Migration content into the canonical Cloud DM docs as <CustomContent plan="premium"> blocks alongside Dedicated and Essential, and remove the standalone tidb-cloud/premium/premium-data-migration.md overview. The TOC now lists the canonical Cloud DM docs in the shared "Migrate Data into TiDB Cloud" section instead of the Premium-specific section.

This commit cherry-picks Aolin's restructure (Oreoxmt/tidb-docs review PR) on top of the AWS PrivateLink + wizard-verified Premium variants already on this branch, with three follow-up adjustments:

- Drop the placeholder Premium quota block ("xxx TODO migration jobs"). The Premium quota is being confirmed with engineering separately; until confirmed, no quota statement is rendered for Premium.
- Drop three Premium variants that asserted public-only connectivity ("only public connectivity", "select Public", "fill in Hostname or IP address"): superseded by the Public + Private Link variants already on this branch.
- Update three Premium variants in migrate-incremental-data-from-mysql-using-data-migration.md to cover both Public and Private Link (AWS only), mirroring the pattern in the existing-data canonical doc.
Two corrections found during a systematic audit of the canonical
Cloud DM doc against the Premium proto and the dev wizard:
1. The "click the Restart button" recovery instruction in
"Limitations of incremental data migration" is now wrapped to
render only for {{{ .dedicated }}} and {{{ .essential }}}, so
Premium readers no longer see it. The Premium DM proto exposes
only Pause/Resume/Delete and has no Restart RPC, so the
instruction is not actionable on Premium.
2. The "Limitations of Alibaba Cloud RDS" and "Limitations of
Alibaba Cloud PolarDB-X" subsections are duplicated as a
Premium variant. These are source-side constraints (hidden
primary key in binlog, schema keywords) that apply regardless
of the TiDB Cloud target tier; before this commit, only
{{{ .essential }}} readers saw them.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
The previous commit duplicated these limitation subsections as a Premium variant alongside the existing Essential variant, which triggered markdownlint MD024 (no-duplicate-heading) and caused pull-verify to fail on PR pingcap#22821. The constraints are source-side (hidden primary key in binlog, schema keywords) and apply regardless of TiDB Cloud target tier. Drop both CustomContent wrappers so the section renders for all plans (Dedicated, Essential, Premium). This also matches Aolin's "minimize differences" principle for the canonical Cloud DM doc.
Two style fixes that were flagged in the gemini-code-assist review on PR pingcap#22821 but did not propagate from the deleted standalone doc into the canonical doc:

1. Safe-mode bullet: rewrite from passive to active voice in all three tier blocks (Dedicated, Essential, Premium). Replace "INSERT statements are migrated as REPLACE..." with "TiDB Cloud migrates INSERT statements as REPLACE...". Tighten the trailing duplicate-row clause for clarity.
2. Physical-mode warning blockquote (later in the doc): replace "the migration job will be stuck" with "the migration job stops", matching the active-voice fix already applied to the equivalent Premium-only block earlier in the doc.

Both fixes apply to all three tiers (no plan-specific behavior change).
@Oreoxmt Structural changes applied as agreed offline: Premium folded into the canonical Cloud DM docs, standalone overview removed, quota TODO dropped pending engineering confirmation. Ready for re-review when you have a chance. Full context in alastori#2.
- Hide the empty Maximum number of migration jobs section for Premium.
- Add the full Set up AWS PrivateLink and Private Endpoint for the MySQL source database procedure under Premium.
- Use private endpoint as a generic noun, capitalized only when it appears verbatim as a UI label.
- Verified Networking section labels against the live Premium console: AWS Private Endpoint for External Services (not Changefeed).
…gefeed) Verified against the live Premium console: the section card and dialog are both labeled 'AWS Private Endpoint for External Services' / 'Create Private Endpoint for External Services'. The same card services both DM and Changefeed, so 'Changefeed' would mislead readers. This follow-up was needed because the previous squash merge did not include the late label-fix commit from the source branch.
Several customers following the existing AWS PrivateLink procedure for the MySQL source database get stuck because the doc is missing the AllowedPrincipals authorization step. Without authorizing TiDB Cloud's AWS principal (arn:aws:iam::886436925895:root) on the endpoint service, the "Create Private Endpoint for External Services" dialog in TiDB Cloud hangs indefinitely with no error message.

This commit also fills in several silent decisions in the AWS NLB and endpoint service wizards that the doc previously left to the customer to guess:

- NLB scheme: Internal (the AWS wizard defaults to Internet-facing).
- VPC selection: must switch from the wizard's Default VPC pre-selection to the RDS VPC.
- Availability Zones: at least 2 are required for the endpoint service.
- Listener port: must change from the wizard default of 80 to 3306.
- Target group target type: IP addresses (the default is Instances, which cannot register an RDS endpoint).
- RDS private IP discovery: console-only path via EC2 Network Interfaces with Description = RDSNetworkInterface.
- Endpoint service Supported IP address types: must check IPv4.
- Service name format: includes the region segment (com.amazonaws.vpce.<region>.vpce-svc-<id>).
- Production note pointing to the AWS Database Blog reference implementation for handling RDS IP rotation on failover.

References:

- AWS docs, Manage permissions (allow principals on endpoint services): https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions
- AWS docs, Create a Network Load Balancer: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html
- AWS Database Blog, Access Amazon RDS across VPCs using AWS PrivateLink and NLB: https://aws.amazon.com/blogs/database/access-amazon-rds-across-vpcs-using-aws-privatelink-and-network-load-balancer/
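As a rough CLI companion to the console steps above (the region and service ID are illustrative placeholders; the `aws ec2 modify-vpc-endpoint-service-permissions` call is a real AWS CLI command but is commented out so the sketch runs without AWS credentials):

```shell
# Sketch of the missing AllowedPrincipals step and the service-name format.
REGION="us-east-1"                       # region of the RDS source (example)
SVC_ID="vpce-svc-0123456789abcdef0"      # endpoint service ID (hypothetical)

# Authorize TiDB Cloud's AWS principal on the endpoint service so the
# "Create Private Endpoint for External Services" dialog can proceed:
#   aws ec2 modify-vpc-endpoint-service-permissions \
#     --region "$REGION" --service-id "$SVC_ID" \
#     --add-allowed-principals "arn:aws:iam::886436925895:root"

# The service name TiDB Cloud asks for includes the region segment:
echo "com.amazonaws.vpce.${REGION}.${SVC_ID}"
```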
The migration-job-creation flow points customers to https://us-west-2.console.aws.amazon.com/vpc/home to accept the endpoint connection request from TiDB Cloud. The customer's endpoint service lives in whatever AWS region they chose for their RDS or Aurora source, which is rarely us-west-2. Following the hardcoded URL takes the customer to the wrong region, and they see no pending request.

Replace the four occurrences (two in each migration doc) with the region-neutral https://console.aws.amazon.com/vpc/home and add explicit guidance to switch to the region where the endpoint service was created.

References:

- AWS console URL convention (region-aware service URLs): https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/getting-started.html
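A small sketch of the URL convention being fixed (the region value is an illustrative example, not from the docs):

```shell
# A region-pinned console URL only works when the region matches where the
# endpoint service lives; the region-neutral form lets the console resolve it.
region="eu-central-1"   # wherever the customer created the endpoint service
echo "Region-neutral: https://console.aws.amazon.com/vpc/home"
echo "Region-pinned:  https://${region}.console.aws.amazon.com/vpc/home?region=${region}"
```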
For detailed instructions, see [Manage permissions](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions) in AWS documentation.

4. Optional: Test connectivity from a bastion or client inside the same VPC or VNet before starting the migration:
Suggested change:
Before: 4. Optional: Test connectivity from a bastion or client inside the same VPC or VNet before starting the migration:
After: 4. Optional: Test connectivity from a bastion or client inside the same VPC before starting the migration:
Reason: “VNet” is an Azure term, but this step is for AWS only.
For detailed instructions, see [Manage permissions](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions) in AWS documentation.

4. Optional: Test connectivity from a bastion or client inside the same VPC or VNet before starting the migration:
Suggested change:
Before: 4. Optional: Test connectivity from a bastion or client inside the same VPC or VNet before starting the migration:
After: 4. Optional: Test connectivity from a bastion or client inside the same VPC before starting the migration:
There was a problem hiding this comment.
Reason: “VNet” is an Azure term, but this step is for AWS only.
<CustomContent plan="premium">

- For {{{ .premium }}}, both logical mode (default) and physical mode are supported. Logical mode exports data from MySQL source databases as SQL statements and then executes them on the target {{{ .premium }}} instance, which consumes Request Capacity Units (RCUs) during the load. Physical mode uses `IMPORT INTO` on the target {{{ .premium }}} instance and is recommended for large datasets when load throughput and cost are priorities.
Suggested change:
Before: - For {{{ .premium }}}, both logical mode (default) and physical mode are supported. Logical mode exports data from MySQL source databases as SQL statements and then executes them on the target {{{ .premium }}} instance, which consumes Request Capacity Units (RCUs) during the load. Physical mode uses `IMPORT INTO` on the target {{{ .premium }}} instance and is recommended for large datasets when load throughput and cost are priorities.
After: - For {{{ .premium }}}, both logical mode (default) and physical mode are supported. Logical mode exports data from MySQL source databases as SQL statements and then executes them on the target {{{ .premium }}} instance, which consumes Request Capacity Units (RCUs) during the load. Physical mode uses `IMPORT INTO` on the target {{{ .premium }}} instance and is recommended for large datasets when you prioritize load throughput and cost efficiency.
<CustomContent plan="premium">

For {{{ .premium }}}, the Data Migration feature supports any MySQL-compatible source database, and **MySQL** is the only data source type available in the migration job wizard. For supported connection methods, see [Ensure network connectivity](#ensure-network-connectivity).
Suggested change:
Before: For {{{ .premium }}}, the Data Migration feature supports any MySQL-compatible source database, and **MySQL** is the only data source type available in the migration job wizard. For supported connection methods, see [Ensure network connectivity](#ensure-network-connectivity).
After: For {{{ .premium }}}, the Data Migration feature supports the following MySQL-compatible source databases, and **MySQL** is the only data source type available in the migration job wizard. For supported connection methods, see [Ensure network connectivity](#ensure-network-connectivity).
If your MySQL service is in an AWS VPC, take the following steps:
Before: 1. [Set up a VPC peering connection](/tidb-cloud/set-up-vpc-peering-connections.md) between the VPC of the MySQL service and your <CustomContent plan="dedicated">{{{ .dedicated }}} cluster</CustomContent><CustomContent plan="essential">{{{ .essential }}} instance</CustomContent>.
After: 1. [Set up a VPC peering connection](/tidb-cloud/set-up-vpc-peering-connections.md) between the VPC of the MySQL service and your <CustomContent plan="dedicated">{{{ .dedicated }}} cluster</CustomContent><CustomContent plan="essential">{{{ .essential }}} instance</CustomContent><CustomContent plan="premium">{{{ .premium }}} instance</CustomContent>.
Suggested change:
Before: 1. [Set up a VPC peering connection](/tidb-cloud/set-up-vpc-peering-connections.md) between the VPC of the MySQL service and your <CustomContent plan="dedicated">{{{ .dedicated }}} cluster</CustomContent><CustomContent plan="essential">{{{ .essential }}} instance</CustomContent><CustomContent plan="premium">{{{ .premium }}} instance</CustomContent>.
After: 1. [Set up a VPC peering connection](/tidb-cloud/set-up-vpc-peering-connections.md) between the VPC of the MySQL service and your <CustomContent plan="dedicated">{{{ .dedicated }}} cluster</CustomContent>.
As in line 552, this section is for Dedicated cluster only.
Before: #### Grant required privileges in the target <CustomContent plan="dedicated">{{{ .dedicated }}} cluster</CustomContent><CustomContent plan="essential">{{{ .essential }}} instance</CustomContent>
After: #### Grant required privileges in the target <CustomContent plan="dedicated">{{{ .dedicated }}} cluster</CustomContent><CustomContent plan="essential">{{{ .essential }}} instance</CustomContent><CustomContent plan="premium">{{{ .premium }}} instance</CustomContent>
Suggested change:
Before: #### Grant required privileges in the target <CustomContent plan="dedicated">{{{ .dedicated }}} cluster</CustomContent><CustomContent plan="essential">{{{ .essential }}} instance</CustomContent><CustomContent plan="premium">{{{ .premium }}} instance</CustomContent>
After: #### Grant required privileges in the target TiDB Cloud resource
- **Username**: enter the username of the target <CustomContent plan="dedicated">{{{ .dedicated }}} cluster</CustomContent><CustomContent plan="essential">{{{ .essential }}} instance</CustomContent><CustomContent plan="premium">{{{ .premium }}} instance</CustomContent>.
- **Password**: enter the password of the TiDB Cloud username.
> - When you use physical mode, you cannot create a second migration job or import task for the <CustomContent plan="dedicated">{{{ .dedicated }}} cluster</CustomContent><CustomContent plan="essential">{{{ .essential }}} instance</CustomContent><CustomContent plan="premium">{{{ .premium }}} instance</CustomContent> before the existing data migration is completed.
> - When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the <CustomContent plan="dedicated">{{{ .dedicated }}} cluster</CustomContent><CustomContent plan="essential">{{{ .essential }}} instance</CustomContent><CustomContent plan="premium">{{{ .premium }}} instance</CustomContent>. Otherwise, the migration job stops. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.
As in line 812, the two notes above belong to <CustomContent plan="dedicated">, so we might need to remove the Essential and Premium content from them and consider whether to add separate content for Premium.
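A sketch of one way to scope the two notes to Dedicated only (hypothetical; assumes the same `<CustomContent>` convention, and any Premium-specific wording would still need to be decided separately):

```markdown
<CustomContent plan="dedicated">

> - When you use physical mode, you cannot create a second migration job or import task for the {{{ .dedicated }}} cluster before the existing data migration is completed.
> - When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .dedicated }}} cluster. Otherwise, the migration job stops. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.

</CustomContent>
```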
1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**My TiDB**](https://tidbcloud.com/tidbs) page.

Diff under review:

Old: 2. Click the name of your target <CustomContent plan="dedicated">{{{ .dedicated }}} cluster</CustomContent><CustomContent plan="essential">{{{ .essential }}} instance</CustomContent> to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.

New: 2. Click the name of your target <CustomContent plan="dedicated">{{{ .dedicated }}} cluster</CustomContent><CustomContent plan="essential">{{{ .essential }}} instance</CustomContent><CustomContent plan="premium">{{{ .premium }}} instance</CustomContent> to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
Same issue here: all content starting from line 899 belongs to <CustomContent plan="dedicated">, so we might need to remove Essential and Premium content from the line above and consider whether to add additional sections for Premium.
What is changed, added or deleted?
This PR documents the Data Migration feature for TiDB Cloud Premium (Public Preview), folded into the canonical Cloud DM docs alongside the existing Dedicated and Essential variants.
Files changed:
- `tidb-cloud/migrate-from-mysql-using-data-migration.md`: adds Premium variants for the Public Preview note, supported sources, logical/physical mode (with PITR/changefeed and concurrent-job caveats for physical mode), the connection methods table, AWS PrivateLink setup, Step 2 wizard guidance (Connectivity Method dropdown, Hostname or IP / Private Endpoint conditional follow-up), post-validate guidance, scaling, and physical-mode performance specifications.
- `tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md`: adds Premium variants mirroring the same Public + Private Link pattern.
- `TOC-tidb-cloud-premium.md`: moves the Cloud DM docs into the shared "Migrate Data into TiDB Cloud" section so Premium customers reach the canonical guides.
- `tidb-cloud/premium/premium-data-migration.md`: deleted (folded into the canonical docs above).

The Alibaba Cloud RDS / PolarDB-X source-side limitation subsections were unwrapped from the Essential variant so they render for all tiers (the constraints are source-side and apply regardless of the TiDB Cloud target tier).
Premium DM features documented
- Backend: `tidbcloud/dataflow-service` PR #3347 (feat(premium-dm): Support PrivateLink).
- UI: `tidbcloud/dbaas-ui` PR #4755.

Verification
End-to-end wizard verification on the dev cluster `keep-720h-cslb` (us-east-1, 2026-04-30): Step 1 field labels (`Connectivity Method`, `Public`/`Private Link`, `Hostname or IP address`, `Private Endpoint`, "Create a Private Endpoint here"), Step 2/3/4 layout, action menus, and the Public Preview badge. Code references (proto and UI) are linked in the closed companion PR alastori#2.

Which TiDB version(s) do your changes apply to?
What is the related PR or file when changing an API or RFC?
N/A — documentation only.
Do your changes match any of the following descriptions?
`<CustomContent plan="premium">` blocks in existing docs