Conversation
Note: the CI failure is unrelated.
I've added a helper script, as @alextmagro suggested, as well as corresponding documentation to the
.github/workflows/rocm-ci.yml (outdated)
- name: Restore previous ASV results
I think the benchmarks should go in a separate workflow from CI, i.e. both these microbenchmarks and the ones that already run with CI.
Will doing so require a separate TE build and setup? I added it here so that we'd piggy-back off the already-running CI.
@@ -0,0 +1,16 @@
{
Does it need to be in root of TE?
No, I've updated it.
.github/workflows/rocm-ci.yml (outdated)
# Derive a stable machine name from the runner label
case "${RUNNER_NAME}" in
  linux-te-mi325*) MACHINE_NAME="mi325" ;;
Why do we need this if results are uploaded under just the matrix.runner name?
My understanding is that the matrix.runner name is not 1:1 with the underlying system, i.e. different machines with different hostnames can be part of a pool sharing the same runner name. ASV stores results by machine name by default. Here we manually specify a generic machine name keyed by GPU arch, so that every mi325 runner (for example) stores its results in a compatible way.
Ideally we'd have dedicated machines for benchmarking (since this would likely run on every commit, or nightly at least), but that's a constraint we'll need to discuss.
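The label-to-machine-name mapping described above can be sketched as a small shell helper. Only the mi325 pattern appears in this PR; the other label patterns and the fallback are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: map a pooled runner label to a stable ASV machine name keyed by
# GPU arch, so any runner in the same pool stores results under one entry.
# Only the mi325 pattern comes from the PR; the others are hypothetical.
machine_name_for_runner() {
  case "$1" in
    linux-te-mi325*) echo "mi325" ;;
    linux-te-mi300*) echo "mi300" ;;   # hypothetical second pool
    *)               echo "unknown" ;; # fall back rather than fail
  esac
}

machine_name_for_runner "linux-te-mi325-04"   # prints "mi325"
```

Keeping the mapping in one function makes it easy to extend when a new runner pool is added, without touching the rest of the workflow step.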
.github/workflows/rocm-ci.yml (outdated)
set -ex
pip install asv
cd /workspace
asv machine --yes --machine "$MACHINE_NAME"
Will it re-register the machine if it already exists?
Yes, but it's registered in the container, so it's transient.
Description
This PR is a port of #478.
This PR uses a central driver to parse and run the individual benchmark-defining scripts. The driver exposes a function that the individual scripts can import, making each of them self-sufficient and runnable on its own. Neither the benchmarks nor the driver has a hard ASV dependency; they simply produce results in an ASV-compatible format for later consumption.
ASV is only used for result tracking, visualization, and publishing. A helper bash script wraps the ASV commands for convenience (and also offers a wrapper around the main driver script).
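A minimal sketch of what such a wrapper could look like. The subcommand names and the driver path are assumptions for illustration, not the PR's actual script; `asv machine` and `asv publish` are the real ASV subcommands already used in the workflow:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper sketch: one entry point for the benchmark driver
# and the handful of ASV commands used for tracking and publishing.
bench() {
  case "${1:-help}" in
    run)     shift; python benchmarks/driver.py "$@" ;;   # hypothetical driver path
    machine) asv machine --yes --machine "${MACHINE_NAME:-local}" ;;
    publish) asv publish ;;   # render the static HTML report from stored results
    *)       echo "usage: bench {run|machine|publish}" ;;
  esac
}

bench help   # prints the usage line
```

A dispatcher like this keeps the CI step down to a single `bench` invocation, while developers can still call the driver or ASV directly when debugging locally.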
Follow-up Work
In future PRs we will:
Type of change
Changes
Please list the changes introduced in this PR:
Checklist: