
[REFACTOR][TEST] Migrate all codegen test to tvmscript#18719

Merged
tqchen merged 1 commit into apache:main from tqchen:tvm-script-update
Feb 7, 2026

Conversation

@tqchen
Member

@tqchen tqchen commented Feb 6, 2026

This PR migrates all the codegen tests to use tvmscript explicitly instead of indirectly via s_tir.Schedule. This makes the test surface more unit-like, reduces dependencies, and improves maintainability.

@gemini-code-assist
Contributor

Summary of Changes

Hello @tqchen, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request undertakes a significant refactoring effort across numerous codegen test files targeting various backends (GPU, AArch64, ARM, C-host, LLVM, Metal, OpenCL, ROCm, Vulkan, x86). The core purpose is to modernize the test suite by migrating all test definitions from the legacy tvm.te and s_tir.Schedule APIs to the more declarative and integrated tvmscript syntax. This strategic shift is intended to streamline test logic, reduce boilerplate, and ultimately improve the readability and long-term maintainability of the codegen tests.

Highlights

  • Migration to TVMScript: All codegen tests have been refactored to explicitly use tvmscript (I.ir_module and T.prim_func) for defining primitive functions and their schedules. This replaces the older tvm.te (Tensor Expression) API and imperative s_tir.Schedule manipulations.
  • Improved Test Maintainability: The change aims to make tests more unit-like, reduce indirect dependencies, and enhance maintainability by embedding scheduling directives directly within the tvmscript definitions, making them more declarative and self-contained.
  • Introduction of Helper Modules: New helper functions (e.g., _reduce_sum_module, _binary_op_module) have been introduced in several test files to encapsulate common tvmscript module patterns, promoting code reuse and clarity.


Changelog
  • tests/python/codegen/test_gpu_codegen_allreduce.py
    • Replaced direct @T.prim_func definitions with I.ir_module wrapped in helper functions (_reduce_sum_module, _reduce_max_module).
    • Scheduling logic (e.g., thread binding) is now directly expressed within the T.prim_func body using T.thread_binding.
  • tests/python/codegen/test_target_codegen_aarch64.py
    • Removed tvm.te imports.
    • Introduced several _op_module helper functions to generate I.ir_module definitions for binary, ternary, unary, boolean, and gather operations using tvmscript.
  • tests/python/codegen/test_target_codegen_arm.py
    • Removed tvm.te imports.
    • Updated test_popcount and test_vmlal_s16 to define their compute and scheduling logic directly within tvmscript I.ir_module.
  • tests/python/codegen/test_target_codegen_bool.py
    • Removed tvm.te imports and tvm.testing.fixture decorators.
    • Replaced with GPUModule and CPUModule defined using tvmscript I.ir_module and T.prim_func, embedding scheduling directly.
  • tests/python/codegen/test_target_codegen_c_host.py
    • Removed tvm.te imports.
    • Updated test_add, test_reinterpret, test_ceil, test_floor, and test_round to use tvmscript I.ir_module and T.prim_func.
  • tests/python/codegen/test_target_codegen_cross_llvm.py
    • Removed tvm.te imports.
    • Introduced AddModule using tvmscript I.ir_module and T.prim_func with embedded scheduling, and updated test_llvm_add_pipeline to use it.
  • tests/python/codegen/test_target_codegen_cuda.py
    • Removed tvm.te imports.
    • Updated numerous CUDA codegen tests to define their compute and scheduling logic directly within tvmscript I.ir_module and T.prim_func.
  • tests/python/codegen/test_target_codegen_cuda_fp4.py
    • Removed tvm.te imports.
    • Updated various FP4/FP8 related CUDA codegen tests to use tvmscript I.ir_module and T.prim_func for module definitions.
  • tests/python/codegen/test_target_codegen_cuda_fp8.py
    • Updated FP8 related CUDA codegen tests to use tvmscript I.ir_module and T.prim_func for module definitions.
  • tests/python/codegen/test_target_codegen_device.py
    • Removed tvm.te imports.
    • Updated test_large_uint_imm and test_add_pipeline to use tvmscript I.ir_module and T.prim_func.
  • tests/python/codegen/test_target_codegen_gpu_common.py
    • Removed tvm.te imports.
    • Updated test_int_intrin to use tvmscript I.ir_module and T.prim_func.
  • tests/python/codegen/test_target_codegen_hexagon.py
    • Removed tvm.te imports.
    • Updated test_basic, test_llvm_target_features, and test_llvm_options to use tvmscript I.ir_module and T.prim_func.
  • tests/python/codegen/test_target_codegen_llvm.py
    • Removed tvm.te and tvm.tir imports.
    • Converted almost all LLVM codegen tests to use tvmscript I.ir_module and T.prim_func for defining test modules, including complex scheduling and intrinsic calls.
  • tests/python/codegen/test_target_codegen_metal.py
    • Removed tvm.te imports.
    • Updated test_metal_inf_nan and test_metal_erf to use tvmscript I.ir_module and T.prim_func.
  • tests/python/codegen/test_target_codegen_opencl.py
    • Removed tvm.te imports.
    • Updated various OpenCL codegen tests to use tvmscript I.ir_module and T.prim_func.
  • tests/python/codegen/test_target_codegen_rocm.py
    • Removed tvm.te imports.
    • Updated test_rocm_inf_nan, test_rocm_copy, and test_rocm_vectorize_add to use tvmscript I.ir_module and T.prim_func.
  • tests/python/codegen/test_target_codegen_vulkan.py
    • Removed tvm.te, tvm.tir, tvm.topi.math.cast, tvm.tir.TensorIntrin, tvm.tir.IntImm, tvm.tir.Cast imports.
    • Updated various Vulkan codegen tests to use tvmscript I.ir_module and T.prim_func.
  • tests/python/codegen/test_target_codegen_x86.py
    • Removed tvm.te imports.
    • Updated test_fp16_to_fp32 to use tvmscript I.ir_module and T.prim_func.
Activity
  • The pull request was initiated by tqchen.
  • The primary activity involves a large-scale refactoring of test definitions across numerous codegen test files.
  • This refactoring systematically replaces the older tvm.te and s_tir.Schedule APIs with tvmscript's I.ir_module and T.prim_func constructs.
  • The changes reflect a significant modernization effort to align the test infrastructure with current TVM best practices for defining and scheduling Tensor IR.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request is a large and valuable refactoring that migrates numerous codegen tests to use TVMScript directly, moving away from the older te (Tensor Expression) API and manual scheduling with s_tir.Schedule. The changes are applied consistently across many test files, significantly improving the readability, conciseness, and maintainability of the test suite. The new tests are more explicit and easier to understand. I've reviewed the changes across all affected files and found them to be well-executed and correct. This is an excellent contribution to modernizing the TVM codebase.

@tqchen tqchen force-pushed the tvm-script-update branch from fbca056 to 78b38a4 Compare February 6, 2026 22:41
This PR migrates all the codegen tests to use tvmscript
explicitly instead of indirectly via s_tir.Schedule.
This makes the test surface more unit-like, reduces
dependencies, and improves maintainability.
@tqchen tqchen force-pushed the tvm-script-update branch from 78b38a4 to 6fd80a2 Compare February 6, 2026 23:20
@tqchen tqchen merged commit 198df47 into apache:main Feb 7, 2026
13 checks passed

3 participants