Summary
We should add a benchmark test project that runs various common scenarios for Rush, leveraging V8 code coverage to collect information on:
| Item | Unit | Source |
| --- | --- | --- |
| Executed Code | KiB | V8 Code Coverage |
| Unused Code | KiB | V8 Code Coverage |
| Loaded Code | KiB | V8 Code Coverage |
| Loaded Files | # | V8 Code Coverage |
| Duration | ms | time / cpuprofile |
Once this data is being generated, we should add a baseline CI pipeline that calculates a new reference value for every commit to main, and then have the PR CI job compare performance against that baseline and flag regressions/improvements. This will help us catch performance regressions and give us a standard for evaluating the impact of optimizations.
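The baseline comparison could be as simple as the following sketch, where the metric names and the 5% tolerance are illustrative assumptions rather than settled choices:

```javascript
'use strict';

// Compare current metrics to the baseline recorded for the last commit
// to main. A metric is flagged when it grows by more than `tolerance`
// (e.g. 0.05 = 5%). Metric names and threshold are hypothetical.
function findRegressions(baseline, current, tolerance) {
  const regressions = [];
  for (const [metric, base] of Object.entries(baseline)) {
    const value = current[metric];
    if (value === undefined || base === 0) continue;
    const delta = (value - base) / base;
    if (delta > tolerance) {
      regressions.push({ metric, base, value, delta });
    }
  }
  return regressions;
}

// Example: duration regressed by 20%, executed code unchanged, so only
// durationMs is flagged.
const flagged = findRegressions(
  { durationMs: 1000, executedKiB: 512 },
  { durationMs: 1200, executedKiB: 512 },
  0.05
);
```

In CI, a non-empty result would fail the PR job (or at least post a warning comment).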
This is the same concept as #5690, but targeting @microsoft/rush.
Candidates for benchmarking:
- `rush --help`
- `rush check`
- `rush change --no-fetch --verify` (`--no-fetch` is important to avoid externalities, and should really be the default)
- `rush install` (after having already installed, so we're checking the "nothing to do" state)