[WIP] Do not rely on vitest for memory tests #1791
Open

peaBerberian wants to merge 1 commit into `dev`
✅ Automated performance checks have passed. Performance tests first run output: no significant change in performance for tests.
This is a low-priority proposal.

Even if it is not merged, I plan to use it at least on my side, so that I can run memory tests on my computer.
Until now, we relied on `vitest` + `webdriverio` to run our "memory tests", which check for memory leaks. Those two are huge and complex libraries, and we don't need that complexity for those tests (or even for integration tests): `vitest` can readily patch native browser APIs, mock files, hoist logic blocks before tests - none of that is needed here.

More importantly: I always struggled to run them on my computer.
After some headaches, I had a container-based setup that worked (without a container it ended up picking the wrong network interface, not finding the browser, or timing out...), but it broke again very recently (the browser does not run; I am not sure what changed, and investigating why it broke each time is too time-consuming).
Yet what we want to do in our memory tests is extremely simple compared to what those libraries allow: we want to run a test page in a specific browser (with specific flags), then log the results, and finally exit with success or failure depending on those results.
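In that model, the pass/fail decision can boil down to a small heuristic over heap measurements logged by the test page. As a hypothetical sketch (none of these names come from the actual RxPlayer code), assuming the page records its heap size after each iteration of the scenario:

```javascript
// Hypothetical pass/fail heuristic for a memory test: after each run of the
// scenario, the page logs its heap usage (e.g. via
// performance.memory.usedJSHeapSize after forcing a GC with a browser flag);
// the runner then fails the test if memory kept growing across iterations.
function hasMemoryLeak(heapSamples, toleranceRatio = 1.2) {
  if (heapSamples.length < 2) {
    throw new Error("Need at least two heap samples");
  }
  const first = heapSamples[0];
  const last = heapSamples[heapSamples.length - 1];
  // Fail when the final heap size exceeds the initial one by more than the
  // tolerated ratio, which suggests objects survived across iterations.
  return last > first * toleranceRatio;
}

// The runner would then exit accordingly, for example:
// process.exitCode = hasMemoryLeak(samples) ? 1 : 0;
```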
On that point, we already have our own testing setup for our "performance tests", because the paradigm of those tests is very different from what the testing libraries we know of offer: those tests compare the timings of some RxPlayer operations between two builds, by running them hundreds of times on each, then check whether there is a meaningful difference through a null-hypothesis test plus some heuristics.
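To illustrate the kind of check involved (this is a simplified sketch, not the actual heuristics of the performance tests), a two-sample z-test over the timing samples of both builds could look like:

```javascript
// Simplified sketch of a null-hypothesis check between two builds' timing
// samples: flag a significant difference when the gap between the means is
// large relative to the standard error (real heuristics are more involved).
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}
function variance(xs) {
  const m = mean(xs);
  return xs.reduce((a, b) => a + (b - m) ** 2, 0) / (xs.length - 1);
}
function isSignificantDiff(samplesA, samplesB, zThreshold = 1.96) {
  const se = Math.sqrt(
    variance(samplesA) / samplesA.length + variance(samplesB) / samplesB.length
  );
  if (se === 0) {
    // No spread at all: any difference in means is "significant".
    return mean(samplesA) !== mean(samplesB);
  }
  const z = Math.abs(mean(samplesA) - mean(samplesB)) / se;
  return z > zThreshold;
}
```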
So `vitest` doesn't work for me, is very complex for reasons we don't need, and we already have a simpler setup that works for our needs. Adapting that setup for our memory tests was very straightforward.

For now I copy-pasted the performance tests' `run.js` file (the test server + browser runner) and adapted it to memory tests, just because I did not spend any time on factorization yet.

I also wrote here a small
`vitest`/`jest`-compatible minimal testing library (providing `describe`/`it`/`beforeEach` etc., and re-exporting `vitest`'s assertions).

I use `esbuild` to bundle the test file and then run a browser on the output, exactly like for our performance tests.
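For illustration, such a `describe`/`it`/`beforeEach` registry can fit in a few dozen lines. The sketch below shows the general approach, not the actual code from this PR:

```javascript
// Minimal vitest/jest-like test registry: suites are collected while the test
// file runs, then executed afterwards. A sketch, not the PR's implementation.
const rootSuite = { name: "", tests: [], suites: [], beforeEach: [] };
let current = rootSuite;

function describe(name, fn) {
  const suite = { name, tests: [], suites: [], beforeEach: [] };
  current.suites.push(suite);
  const prev = current;
  current = suite;
  fn(); // Registers the suite's tests and nested suites.
  current = prev;
}

function it(name, fn) {
  current.tests.push({ name, fn });
}

function beforeEach(fn) {
  current.beforeEach.push(fn);
}

// Run every registered test, inheriting beforeEach hooks from parent suites,
// and return pass/fail counts.
async function runSuite(suite, inheritedHooks = []) {
  const hooks = inheritedHooks.concat(suite.beforeEach);
  let passed = 0;
  let failed = 0;
  for (const test of suite.tests) {
    try {
      for (const hook of hooks) {
        await hook();
      }
      await test.fn();
      passed++;
    } catch (err) {
      failed++;
      console.error(`FAIL ${suite.name} > ${test.name}:`, err);
    }
  }
  for (const child of suite.suites) {
    const res = await runSuite(child, hooks);
    passed += res.passed;
    failed += res.failed;
  }
  return { passed, failed };
}
```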
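The bundling step itself needs very little configuration. A hypothetical `esbuild` setup (the paths here are illustrative, not the repository's actual ones) could look like:

```javascript
// Hypothetical esbuild options for bundling a memory-test entry point into a
// single script that the launched test page loads. Paths are illustrative.
const buildOptions = {
  entryPoints: ["tests/memory/index.js"],
  bundle: true,
  outfile: "dist/memory-tests.js",
  format: "iife",
  target: "es2017",
};

// In the actual runner, one would then call something like:
//   await require("esbuild").build(buildOptions);
// and point the launched browser at a page that includes the output file.
```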