ASV-format microbenchmark suite#487

Open
Micky774 wants to merge 15 commits into dev from zain/asv-demo

Conversation

Contributor

@Micky774 Micky774 commented Mar 16, 2026

Description

This PR is a port of #478.

This PR uses a central driver to parse and run individual benchmark-defining scripts. The driver provides a function that can be imported and used by the individual scripts to make them self-sufficient and runnable. The benchmarks themselves, and the driver, have no hard ASV dependency. Instead, they simply produce results in an ASV-compatible format for later consumption.

ASV is only used for result tracking, visualization, and publishing. A helper bash script is provided to wrap the ASV commands for convenience, as well as a wrapper around the main driver script.
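The driver/format split described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual API: `run_benchmark`, `write_asv_results`, and the output schema here are all hypothetical, and the real ASV result schema carries more metadata (machine info, commit hash, parameter grids).

```python
import json
import time


def run_benchmark(name, fn, repeat=5):
    """Time fn() a few times and keep the best wall-clock result in seconds."""
    timings = []
    for _ in range(repeat):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return {"name": name, "result": min(timings)}


def write_asv_results(results, path):
    """Dump results as plain JSON for later consumption by ASV tooling.

    Note there is no `asv` import anywhere: the driver only needs to emit a
    compatible file format, which is the "no hard ASV dependency" idea.
    """
    with open(path, "w") as f:
        json.dump({r["name"]: r["result"] for r in results}, f, indent=2)
```

An individual benchmark script can then import `run_benchmark` and remain self-sufficient and runnable on its own, while ASV is invoked separately, and only to track and publish the accumulated result files.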

Follow-up Work

In future PRs we will:

  • extend benchmarking to new ops
  • re-evaluate bench configs and scope
  • update attention benchmarking to reach parity with the JAX FA benchmarking tool (mainly so we have persistent regression tracking)

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Adds benchmarks
  • Adds README.md for documentation
  • Adds driver script
  • Adds helper bash script to wrap driver and ASV

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

@Micky774 Micky774 marked this pull request as ready for review March 17, 2026 13:58
@Micky774
Contributor Author

Note the CI failure is unrelated

@Micky774
Contributor Author

I've added a helper script like @alextmagro had suggested, as well as corresponding documentation to the README.md.

EOF
)"

- name: Restore previous ASV results
Collaborator


I think the benchmarks should go in a separate workflow from CI, i.e. both these microbenchmarks and the ones that already run with CI.

Contributor Author


Will doing so require a separate TE build and setup? I added it here so that we'd piggy-back off the already-running CI.

@@ -0,0 +1,16 @@
{
Collaborator


Does it need to be in the root of TE?

Contributor Author


No, I've updated it


# Derive a stable machine name from the runner label
case "${RUNNER_NAME}" in
linux-te-mi325*) MACHINE_NAME="mi325" ;;
Collaborator


Why do we need it if results are uploaded with just matrix.runner name?

Contributor Author


So, my understanding is that the matrix.runner name is not 1-1 with the underlying system; i.e., different systems with different machine names can be part of a pool under the same runner name. ASV stores results by machine name by default. Here we manually specify a generic machine name indexed by GPU arch so that, e.g., every mi325 runner stores its results in a compatible way.

Ideally, we have dedicated machines for benchmarking (since this would likely be every commit or nightly even), but that's a constraint we'll need to discuss.
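The runner-to-machine bucketing being discussed can be sketched in Python for clarity. This is a hypothetical helper, not code from this PR; only the mi325 label appears in the diff, and the other prefix is illustrative.

```python
def machine_name_for(runner_name):
    """Map a pooled CI runner label to a stable ASV machine name.

    Runner labels (e.g. 'linux-te-mi325-03') are not 1-1 with physical
    hosts, so results are bucketed by GPU arch instead. The prefix table
    below is illustrative, not taken from the actual workflow config.
    """
    prefixes = {
        "linux-te-mi325": "mi325",
        "linux-te-mi300": "mi300",  # assumed label, for illustration only
    }
    for prefix, machine in prefixes.items():
        if runner_name.startswith(prefix):
            return machine
    return "unknown"
```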

set -ex
pip install asv
cd /workspace
asv machine --yes --machine "$MACHINE_NAME"
Collaborator


Will it re-register machine if it exists already?

Contributor Author


Yes, but it's registered inside the container, so the registration is transient.

@Micky774 Micky774 changed the title ASV demo ASV-format microbenchmark suite Mar 24, 2026