
Conversation

@hongyunyan
Collaborator

What problem does this PR solve?

Issue Number: close #xxx

What is changed and how it works?

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Questions

Will it cause performance regression or break compatibility?
Do you need to update user documentation, design documentation or monitoring documentation?

Release note

Please refer to [Release Notes Language Style Guide](https://pingcap.github.io/tidb-dev-guide/contribute-to-tidb/release-notes-style-guide.html) to write a quality release note.

If you don't think this PR needs a release note, fill it with `None`.

@ti-chi-bot ti-chi-bot bot added the do-not-merge/needs-linked-issue, release-note (denotes a PR that will be considered when it comes time to generate release notes), and do-not-merge/work-in-progress (indicates that a PR should not merge because it is a work in progress) labels on Jan 16, 2026
@ti-chi-bot

ti-chi-bot bot commented Jan 16, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has not yet been approved by any reviewers.
Once this PR has been reviewed and has the lgtm label, please assign 3aceshowhand for approval. For more information, see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot bot added the size/XXL (denotes a PR that changes 1000+ lines, ignoring generated files) label on Jan 16, 2026
@gemini-code-assist

Summary of Changes

Hello @hongyunyan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the testing capabilities for TiCDC by introducing a sophisticated random DDL and DML test runner. This new utility, integrated into several new weekly tests, aims to thoroughly validate TiCDC's resilience and correctness under diverse and challenging database workloads, including dynamic schema changes, data modifications, and system disruptions like capture failovers. The focus is on simulating realistic operational scenarios to uncover potential issues in data replication and consistency.

Highlights

  • New Random DDL/DML Test Runner: Introduced a comprehensive Go-based utility, random_ddl_test_runner, designed to generate and execute a wide variety of random DDL (schema changes) and DML (data manipulation) operations against a database cluster. This runner simulates complex real-world workloads to stress-test data replication.
  • Enhanced Integration Tests: Added four new weekly integration tests leveraging the new random DDL/DML runner: one for multi-capture scenarios with scheduler enabled, one for multi-capture with random failover, one for single-capture, and one specifically for slow downstream DDL application with MySQL sinks.
  • Dynamic Workload Management: The test runner includes an auto-tuning mechanism that dynamically adjusts the number of active DML and DDL workers based on the changefeed's health and success rate, ensuring adaptive and efficient testing under varying conditions.
  • Robust Verification Mechanisms: Incorporated advanced verification features such as checkpoint advancement monitoring, log scanning for panics/fatal errors, and MySQL syncpoint diff checks. The syncpoint diffs are intelligently skipped during DDL windows to prevent false positives.
  • Schema Evolution Motif: Implemented a specific schema evolution motif that simulates adding a column with per-database defaults, unifying the default, and then evolving the primary key to include the new column, testing complex schema change scenarios.
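The auto-tuning mechanism summarized above can be sketched in Go. This is an illustrative model only, assuming a simple scale-up/scale-down policy: the function name autoTune, the thresholds, and the step sizes are placeholders, not the runner's actual implementation.

```go
package main

import "fmt"

// Illustrative thresholds; the runner's real values may differ.
const (
	successRateFloor = 0.10 // below this, assume the changefeed is unhealthy
	dmlStep          = 8    // how many DML workers to add or remove at once
)

// autoTune scales worker counts down when the checkpoint stalls or the
// success rate collapses, and scales them up under a healthy changefeed.
func autoTune(activeDML, activeDDL int, stalled bool, successRate float64) (int, int) {
	if stalled || successRate < successRateFloor {
		if activeDML > dmlStep {
			activeDML -= dmlStep
		} else {
			activeDML = 1 // never drop below one DML worker
		}
		if activeDDL > 1 {
			activeDDL--
		}
		return activeDML, activeDDL
	}
	// Healthy: ramp the workload back up.
	return activeDML + dmlStep, activeDDL + 1
}

func main() {
	fmt.Println(autoTune(32, 4, false, 0.95)) // healthy: 40 5
	fmt.Println(autoTune(32, 4, true, 0.95))  // stalled: 24 3
}
```

The key property this models is asymmetry: backing off in large steps when the changefeed is struggling, while never driving either worker count to zero.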



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive random DDL/DML test runner, which is a great addition for improving testing coverage. The framework is well-structured with separate components for configuration, modeling, workload generation, and verification. I've made a few suggestions to improve code clarity, maintainability, and consistency, primarily in the shell scripts and some of the Go helper functions. Specifically, I've pointed out opportunities to refactor duplicated code in shell scripts, replace magic numbers with constants, and simplify some complex implementations in the Go code. Overall, this is a solid piece of work.

Comment on lines +154 to +160
if command -v rg >/dev/null 2>&1; then
if rg -n -i "panic|fatal|data race" "$WORK_DIR"/runner.log "$WORK_DIR"/ddl_trace.log "$WORK_DIR"/stdout*.log "$WORK_DIR"/cdc*.log "$WORK_DIR"/cdc_*_consumer*.log "$WORK_DIR"/cdc_*_consumer_stdout*.log 2>/dev/null | head -n 20 | rg -n . >/dev/null 2>&1; then
echo "log scan: panic/fatal/race detected"
rg -n -i "panic|fatal|data race" "$WORK_DIR"/runner.log "$WORK_DIR"/ddl_trace.log "$WORK_DIR"/stdout*.log "$WORK_DIR"/cdc*.log "$WORK_DIR"/cdc_*_consumer*.log "$WORK_DIR"/cdc_*_consumer_stdout*.log 2>/dev/null | head -n 50 || true
exit 1
fi
fi


medium

The log scanning logic here is a bit complex and can be simplified. By capturing the output of rg and checking its exit status, you can avoid running rg twice: rg exits with status 0 only when it finds a match, so the if condition fires exactly when there are hits. This also makes the code more readable.

This comment also applies to weekly_rand_multi_failover/run.sh and weekly_rand_single/run.sh which have identical logic.

Suggested change
if command -v rg >/dev/null 2>&1; then
log_files_to_scan=(
"$WORK_DIR"/runner.log "$WORK_DIR"/ddl_trace.log "$WORK_DIR"/stdout*.log
"$WORK_DIR"/cdc*.log "$WORK_DIR"/cdc_*_consumer*.log
"$WORK_DIR"/cdc_*_consumer_stdout*.log
)
if matches=$(rg -n -i "panic|fatal|data race" "${log_files_to_scan[@]}" 2>/dev/null); then
echo "log scan: panic/fatal/race detected"
echo "$matches" | head -n 50
exit 1
fi
fi

Comment on lines +161 to +167
if command -v rg >/dev/null 2>&1; then
if rg -n -i "panic|fatal|data race" "$WORK_DIR"/runner.log "$WORK_DIR"/ddl_trace.log "$WORK_DIR"/stdout*.log "$WORK_DIR"/cdc*.log "$WORK_DIR"/cdc_*_consumer*.log "$WORK_DIR"/cdc_*_consumer_stdout*.log 2>/dev/null | head -n 20 | rg -n . >/dev/null 2>&1; then
echo "log scan: panic/fatal/race detected"
rg -n -i "panic|fatal|data race" "$WORK_DIR"/runner.log "$WORK_DIR"/ddl_trace.log "$WORK_DIR"/stdout*.log "$WORK_DIR"/cdc*.log "$WORK_DIR"/cdc_*_consumer*.log "$WORK_DIR"/cdc_*_consumer_stdout*.log 2>/dev/null | head -n 50 || true
exit 1
fi
fi


medium

The log scanning logic here is a bit complex and can be simplified. By capturing the output of rg and checking if it's non-empty, you can avoid running rg twice. This also makes the code more readable.

Suggested change
if command -v rg >/dev/null 2>&1; then
log_files_to_scan=(
"$WORK_DIR"/runner.log "$WORK_DIR"/ddl_trace.log "$WORK_DIR"/stdout*.log
"$WORK_DIR"/cdc*.log "$WORK_DIR"/cdc_*_consumer*.log
"$WORK_DIR"/cdc_*_consumer_stdout*.log
)
if matches=$(rg -n -i "panic|fatal|data race" "${log_files_to_scan[@]}" 2>/dev/null); then
echo "log scan: panic/fatal/race detected"
echo "$matches" | head -n 50
exit 1
fi
fi

Comment on lines +142 to +148
if command -v rg >/dev/null 2>&1; then
if rg -n -i "panic|fatal|data race" "$WORK_DIR"/runner.log "$WORK_DIR"/ddl_trace.log "$WORK_DIR"/stdout*.log "$WORK_DIR"/cdc*.log "$WORK_DIR"/cdc_*_consumer*.log "$WORK_DIR"/cdc_*_consumer_stdout*.log 2>/dev/null | head -n 20 | rg -n . >/dev/null 2>&1; then
echo "log scan: panic/fatal/race detected"
rg -n -i "panic|fatal|data race" "$WORK_DIR"/runner.log "$WORK_DIR"/ddl_trace.log "$WORK_DIR"/stdout*.log "$WORK_DIR"/cdc*.log "$WORK_DIR"/cdc_*_consumer*.log "$WORK_DIR"/cdc_*_consumer_stdout*.log 2>/dev/null | head -n 50 || true
exit 1
fi
fi


medium

The log scanning logic here is a bit complex and can be simplified. By capturing the output of rg and checking if it's non-empty, you can avoid running rg twice. This also makes the code more readable.

Suggested change
if command -v rg >/dev/null 2>&1; then
log_files_to_scan=(
"$WORK_DIR"/runner.log "$WORK_DIR"/ddl_trace.log "$WORK_DIR"/stdout*.log
"$WORK_DIR"/cdc*.log "$WORK_DIR"/cdc_*_consumer*.log
"$WORK_DIR"/cdc_*_consumer_stdout*.log
)
if matches=$(rg -n -i "panic|fatal|data race" "${log_files_to_scan[@]}" 2>/dev/null); then
echo "log scan: panic/fatal/race detected"
echo "$matches" | head -n 50
exit 1
fi
fi


run_sql "SET GLOBAL tidb_enable_external_ts_read = off;" ${DOWN_TIDB_HOST} ${DOWN_TIDB_PORT}

echo "[$(date)] <<<<<< run test case $TEST_NAME success! >>>>>>"


medium

This script is missing the final log scan for panics, which is present in other run.sh scripts in this PR. This is inconsistent. While the Go runner has its own log scanning, the shell script scan covers logs that the Go runner might not, and it runs at a different point in the test execution. It's good practice to have it for consistency and robustness. Consider adding it before the final echo.

nextDML := activeDML
nextDDL := activeDDL

if sinceAdvance >= soft || successRate < 0.10 {


medium

The success rate threshold 0.10 is a magic number. Consider defining it as a named constant at the top of the file to improve readability, for example autoTuneSuccessRateThreshold.

return autoTuneResult{nextDML: nextDML, nextDDL: nextDDL}
}
if nextDML > 1 {
nextDML -= 8


medium

The DML worker step 8 is a magic number, also used on line 43. Consider defining it as a named constant, for example autoTuneDMLStep, to improve readability and maintainability.
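A minimal sketch of that refactor, using the reviewer's proposed constant name. Note that autoTuneDMLStep is a suggested identifier and the lower bound of one worker is an assumption for illustration, not code taken from the runner:

```go
package main

import "fmt"

// autoTuneDMLStep replaces the magic number 8; the name follows the
// reviewer's suggestion and does not exist in the runner today.
const autoTuneDMLStep = 8

// shrinkDML reduces the DML worker count by one step, clamping at one.
func shrinkDML(nextDML int) int {
	if nextDML > 1 {
		nextDML -= autoTuneDMLStep
	}
	if nextDML < 1 {
		nextDML = 1 // hypothetical floor keeping at least one worker
	}
	return nextDML
}

func main() {
	fmt.Println(shrinkDML(32), shrinkDML(5), shrinkDML(1)) // 24 1 1
}
```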

Comment on lines +189 to +191
func mathMaxInt32() int64 {
return int64(^uint32(0) >> 1)
}


medium

This function mathMaxInt32 re-implements math.MaxInt32. It would be clearer to use the constant from the standard library. You could remove this function and use math.MaxInt32 directly at the call site on line 165, after adding import "math".

Comment on lines +57 to +113
carry := make([]byte, 0, maxPatternLen)
scratch := make([]byte, 0, 256*1024)
tmp := make([]byte, 0, 256*1024)
lineMatched := false

for {
part, isPrefix, err := reader.ReadLine()
if err != nil {
if err == io.EOF {
break
}
_ = f.Close()
return err
}

if !lineMatched {
tmp = append(tmp[:0], part...)
for i := range tmp {
c := tmp[i]
if c >= 'A' && c <= 'Z' {
tmp[i] = c + ('a' - 'A')
}
}

scratch = append(scratch[:0], carry...)
scratch = append(scratch, tmp...)

for i, p := range patternBytes {
if bytes.Contains(scratch, p) {
hits = append(hits, hit{file: filepath.Base(path), line: lineNo + 1, pat: lowerPatterns[i]})
lineMatched = true
break
}
}
}

if !isPrefix {
lineNo++
carry = carry[:0]
lineMatched = false
continue
}
// Keep a small suffix from the previous fragment to detect patterns spanning boundaries.
if len(scratch) == 0 {
carry = carry[:0]
continue
}
keep := maxPatternLen - 1
if keep <= 0 {
carry = carry[:0]
continue
}
if keep > len(scratch) {
keep = len(scratch)
}
carry = append(carry[:0], scratch[len(scratch)-keep:]...)
}


medium

The implementation for scanning logs is quite complex. It manually handles line splitting with ReadLine and byte-wise lowercasing. This can be simplified significantly by using bufio.Scanner (with an enlarged buffer, since Scanner's default token limit is 64 KiB and log lines can exceed that) and bytes.ToLower for case-insensitive matching. This would make the code more readable and maintainable.

Here's an example of a simpler approach:

scanner := bufio.NewScanner(f)
scanner.Buffer(make([]byte, 0, 64*1024), 4*1024*1024) // allow long log lines
lineNo := 0
for scanner.Scan() {
    lineNo++
    lowerLine := bytes.ToLower(scanner.Bytes())
    for i, p := range patternBytes {
        if bytes.Contains(lowerLine, p) {
            hits = append(hits, hit{file: filepath.Base(path), line: lineNo, pat: lowerPatterns[i]})
            break // found a match on this line; skip remaining patterns
        }
    }
}
if err := scanner.Err(); err != nil {
    _ = f.Close()
    return err
}

@ti-chi-bot

ti-chi-bot bot commented Jan 16, 2026

[FORMAT CHECKER NOTIFICATION]

Notice: To remove the do-not-merge/needs-linked-issue label, please provide the linked issue number on one line in the PR body, for example: Issue Number: close #123 or Issue Number: ref #456.

📖 For more info, you can check the "Contribute Code" section in the development guide.

@ti-chi-bot

ti-chi-bot bot commented Jan 16, 2026

@hongyunyan: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name:     pull-check
Commit:        4f27424
Details:       link
Required:      true
Rerun command: /test pull-check

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
