
[container_benchmark] Experiment add artifacts on hyperV/WSL#876

Closed
Honny1 wants to merge 2 commits into openshift-psap:main from Honny1:hyper-v-experiment

Conversation

Honny1 (Collaborator) commented Nov 21, 2025

No description provided.

@openshift-ci openshift-ci Bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 21, 2025

openshift-ci Bot commented Nov 21, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign sjmonson for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


openshift-ci Bot commented Nov 21, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all


coderabbitai Bot commented Nov 21, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.
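
For reference, that setting lives in the .coderabbit.yaml file at the repository root. A minimal sketch (the reviews.review_status key is named in the comment above; the surrounding structure is the standard CodeRabbit layout):

```yaml
# .coderabbit.yaml -- minimal sketch
reviews:
  review_status: false   # suppress the "Review skipped" status message on draft PRs
```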


Tip

📝 Customizable high-level summaries are now available in beta!

You can now customize how CodeRabbit generates the high-level summary in your pull requests — including its content, structure, tone, and formatting.

  • Provide your own instructions using the high_level_summary_instructions setting.
  • Format the summary however you like (bullet lists, tables, multi-section layouts, contributor stats, etc.).
  • Use high_level_summary_in_walkthrough to move the summary from the description to the walkthrough section.

Example instruction:

"Divide the high-level summary into five sections:

  1. 📝 Description — Summarize the main change in 50–60 words, explaining what was done.
  2. 📓 References — List relevant issues, discussions, documentation, or related PRs.
  3. 📦 Dependencies & Requirements — Mention any new/updated dependencies, environment variable changes, or configuration updates.
  4. 📊 Contributor Summary — Include a Markdown table showing contributions:
    | Contributor | Lines Added | Lines Removed | Files Changed |
  5. ✔️ Additional Notes — Add any extra reviewer context.
    Keep each section concise (under 200 words) and use bullet or numbered lists for clarity."

Note: This feature is currently in beta for Pro-tier users, and pricing will be announced later.
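
The two summary settings named in the tip would presumably sit in .coderabbit.yaml as well. A hedged sketch (nesting under reviews: is an assumption, mirrored from review_status; the key names come from the tip above):

```yaml
# .coderabbit.yaml -- sketch; nesting under "reviews" is assumed
reviews:
  high_level_summary_in_walkthrough: true   # move the summary into the walkthrough section
  high_level_summary_instructions: |
    Divide the high-level summary into five sections:
    1. Description -- summarize the main change in 50-60 words.
```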


Comment @coderabbitai help to get the list of available commands and usage tips.

Honny1 (Collaborator, Author) commented Nov 21, 2025

/test cont_bench-jump-ci
/cluster windows


topsail-bot Bot commented Nov 21, 2025

🔴 Test of 'container_bench test test_ci' failed after 00 hours 08 minutes 00 seconds. 🔴

• Link to the test results.

• No reports index generated...

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

Failure indicator:

/tmp/topsail_202511211763731995/000__matbenchmarking/FAILURE | MatrixBenchmark benchmark failed.
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/
Traceback (most recent call last):
  File "/opt/topsail/src/projects/container_bench/testing/test_container_bench.py", line 170, in matbench_run
    raise RuntimeError(msg)
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/

/tmp/topsail_202511211763731995/000__matbenchmarking/container_bench_podman/000__test/000__podman_artifact_add_benchmark_run_dir/002__container_bench__artifact_add_benchmark_run_metrics/FAILURE | [002__container_bench__artifact_add_benchmark_run_metrics] ./run_toolbox.py container_bench artifact_add_benchmark --exec_props={'binary_path': 'C:/Users/jrodak-topsail/podman-v5.6.2/usr/bin/podman', 'rootfull': False, 'additional_args': '', 'exec_time_path': 'C:/Users/jrodak-topsail/utils/exec_time.py'} --> 2
/tmp/topsail_202511211763731995/000__matbenchmarking/container_bench_podman/000__test/000__podman_artifact_add_benchmark_run_dir/FAILURE | CalledProcessError: Command 'set -o errexit;set -o pipefail;set -o nounset;set -o errtrace;ARTIFACT_DIR="/tmp/topsail_202511211763731995/000__matbenchmarking/container_bench_podman/000__test/000__podman_artifact_add_benchmark_run_dir" ARTIFACT_TOOLBOX_NAME_SUFFIX="_run_metrics" ./run_toolbox.py container_bench artifact_add_benchmark --exec_props="{'binary_path': 'C:/Users/jrodak-topsail/podman-v5.6.2/usr/bin/podman', 'rootfull': False, 'additional_args': '', 'exec_time_path': 'C:/Users/jrodak-topsail/utils/exec_time.py'}"' returned non-zero exit status 2.
Traceback (most recent call last):

[...]
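
The traceback shows the harness wrapping a failed shell command (exit status 2 from the artifact_add_benchmark step) in a RuntimeError. A minimal, self-contained sketch of that pattern; this is not the actual topsail code, and the function name and message here are illustrative:

```python
import subprocess

def run_step(cmd: str) -> None:
    """Run one benchmark step via the shell, mirroring how the log above
    surfaces a failure: subprocess raises CalledProcessError on a non-zero
    exit status, which the harness re-raises as RuntimeError."""
    try:
        subprocess.run(cmd, shell=True, check=True)
    except subprocess.CalledProcessError as exc:
        msg = f"_run_test_matbenchmarking: step failed with exit status {exc.returncode}"
        raise RuntimeError(msg) from exc

# A command exiting 2, like the failing invocation in the log above:
try:
    run_step("exit 2")
except RuntimeError as exc:
    print(exc)  # _run_test_matbenchmarking: step failed with exit status 2
```

The from exc chaining preserves the original CalledProcessError in the traceback, which is why the log shows both the RuntimeError and the underlying non-zero exit status.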

Signed-off-by: Jan Rodák <hony.com@seznam.cz>
@Honny1 Honny1 force-pushed the hyper-v-experiment branch from db29a9b to 22068f2 Compare November 21, 2025 13:49
Honny1 (Collaborator, Author) commented Nov 21, 2025

/test cont_bench-jump-ci
/cluster windows


topsail-bot Bot commented Nov 21, 2025

🟢 Test of 'container_bench test test_ci' succeeded after 01 hours 33 minutes 26 seconds. 🟢

• Link to the test results.

• Link to the reports index.

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Honny1 changed the title from "[container_benchmark] Experiment add artifacts on hyperV" to "[container_benchmark] Experiment add artifacts on hyperV/WSL" on Nov 24, 2025
Honny1 (Collaborator, Author) commented Nov 24, 2025

/test cont_bench-jump-ci
/cluster windows


topsail-bot Bot commented Nov 24, 2025

🟢 Test of 'container_bench test test_ci' succeeded after 00 hours 25 minutes 06 seconds. 🟢

• Link to the test results.

• Link to the reports index.

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

@Honny1 Honny1 closed this Mar 12, 2026