
Conversation

@jianzhangbjz
Member

Address the timeout issue: `level=error msg="30m0s timeout reached"`. See the build log: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/logs/periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-gcp-4.20-nightly-x86-olmv1-benchmark-test/2004115199598006272/artifacts/olmv1-benchmark-test/olmv1-performance/build-log.txt

+ /tmp/kube-burner-ocp olm --log-level=debug --qps=20 --burst=20 --gc=true --uuid 278368b6-bb08-4a85-a51c-41a3554709f2 --churn-duration=5m --timeout=30m --metrics-profile=/tmp/olm-metrics.yml,/tmp/extended-metrics.yml --gc-metrics=false --profile-type=both --iterations=50 --es-server=xxx --es-index=ripsaw-kube-burner
time="2025-12-25 10:13:51" level=info msg="❤️ Checking for Cluster Health" file="cluster-health.go:46"
...
...
time="2025-12-25 10:44:23" level=info msg="Waiting for garbage collection to finish" file="job.go:275"
time="2025-12-25 10:44:23" level=error msg="30m0s timeout reached" file="helpers.go:98"
time="2025-12-25 10:44:23" level=info msg="👋 kube-burner run completed with rc 2 for UUID 278368b6-bb08-4a85-a51c-41a3554709f2" file="helpers.go:100"
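One way to address this (a sketch only; the exact value chosen in this PR may differ) is to raise the `--timeout` passed to `kube-burner-ocp`, keeping the other flags from the failing invocation above unchanged. The `60m` value here is an assumption for illustration:

```
/tmp/kube-burner-ocp olm --log-level=debug --qps=20 --burst=20 --gc=true \
  --churn-duration=5m \
  --timeout=60m \
  --metrics-profile=/tmp/olm-metrics.yml,/tmp/extended-metrics.yml \
  --gc-metrics=false --profile-type=both --iterations=50 \
  --es-server=xxx --es-index=ripsaw-kube-burner
```

Note the run above hit the timeout during garbage collection, so the budget has to cover both the iterations and GC.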

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 26, 2025
@openshift-ci openshift-ci bot requested review from bandrade and kuiwang02 December 26, 2025 03:05
@Xia-Zhao-rh
Contributor

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Dec 26, 2025
@openshift-ci
Contributor

openshift-ci bot commented Dec 26, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jianzhangbjz, Xia-Zhao-rh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jianzhangbjz
Member Author

/pj-rehearse periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-gcp-4.20-nightly-x86-olmv1-benchmark-test

@openshift-ci-robot
Contributor

@jianzhangbjz: now processing your pj-rehearse request. Please allow up to 10 minutes for jobs to trigger or cancel.

@jianzhangbjz
Member Author

This looks like a kube-burner-ocp issue. See the build log: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/pr-logs/pull/openshift_release/72959/rehearse-72959-periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-gcp-4.20-nightly-x86-olmv1-benchmark-test/2004389471251861504/artifacts/olmv1-benchmark-test/olmv1-performance/build-log.txt

time="2025-12-26 04:46:14" level=info msg="Re-creating 1 deleted namespaces" file="create.go:449"
time="2025-12-26 04:46:14" level=debug msg="Created namespace: olmv1-ce" file="namespaces.go:53"
time="2025-12-26 04:46:14" level=info msg="0/1 iterations completed" file="create.go:134"
time="2025-12-26 04:46:14" level=debug msg="Creating object replicas from iteration 0" file="create.go:140"
E1226 04:46:14.692899     165 panic.go:262] "Observed a panic" panic="runtime error: invalid memory address or nil pointer dereference" panicGoValue="\"invalid memory address or nil pointer dereference\"" stacktrace=<
	goroutine 5436 [running]:
	k8s.io/apimachinery/pkg/util/runtime.logPanic({0x2f664c8, 0x48d1ee0}, {0x26d5080, 0x487b690})
		/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/runtime/runtime.go:107 +0xbc
	k8s.io/apimachinery/pkg/util/runtime.handleCrash({0x2f664c8, 0x48d1ee0}, {0x26d5080, 0x487b690}, {0x48d1ee0, 0x0, 0x437465?})
		/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/runtime/runtime.go:82 +0x5e
	k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0014021c0?})
		/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/runtime/runtime.go:59 +0x108
	panic({0x26d5080?, 0x487b690?})
		/opt/hostedtoolcache/go/1.23.12/x64/src/runtime/panic.go:791 +0x132
	k8s.io/apimachinery/pkg/apis/meta/v1/unstructured.(*Unstructured).GetKind(...)
		/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/apis/meta/v1/unstructured/unstructured.go:231
	github.com/kube-burner/kube-burner/v2/pkg/burner.(*JobExecutor).createRequest.func1()
		/home/runner/go/pkg/mod/github.com/kube-burner/kube-burner/v2@v2.1.0/pkg/burner/create.go:352 +0x635
	k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc000f945e8?)
		/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:145 +0x3e
	k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff({0x3b9aca00, 0x4008000000000000, 0x0, 0x9, 0x0}, 0xc001e8ae70)
		/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:461 +0x5a
	github.com/kube-burner/kube-burner/v2/pkg/util.RetryWithExponentialBackOff(0xc000f94670, 0x3b9aca00, 0x4008000000000000, 0x0, 0x2?)
		/home/runner/go/pkg/mod/github.com/kube-burner/kube-burner/v2@v2.1.0/pkg/util/utils.go:41 +0xd6
	github.com/kube-burner/kube-burner/v2/pkg/burner.(*JobExecutor).createRequest(0xc0011e0e00?, {0x2f66650?, 0xc0001fee70?}, {{0xc000c122d0, 0x18}, {0xc000762e6c, 0x2}, {0xc000762e70, 0xf}}, {0x0, ...}, ...)
		/home/runner/go/pkg/mod/github.com/kube-burner/kube-burner/v2@v2.1.0/pkg/burner/create.go:310 +0x1d4
	github.com/kube-burner/kube-burner/v2/pkg/burner.(*JobExecutor).replicaHandler.func1.1({0xc000383ba8?, 0x1?})
		/home/runner/go/pkg/mod/github.com/kube-burner/kube-burner/v2@v2.1.0/pkg/burner/create.go:251 +0x129
	created by github.com/kube-burner/kube-burner/v2/pkg/burner.(*JobExecutor).replicaHandler.func1 in goroutine 5435
		/home/runner/go/pkg/mod/github.com/kube-burner/kube-burner/v2@v2.1.0/pkg/burner/create.go:246 +0x616
 >
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x22d4975]

goroutine 5436 [running]:
k8s.io/apimachinery/pkg/util/runtime.handleCrash({0x2f664c8, 0x48d1ee0}, {0x26d5080, 0x487b690}, {0x48d1ee0, 0x0, 0x437465?})
	/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/runtime/runtime.go:89 +0xee
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0014021c0?})
	/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/runtime/runtime.go:59 +0x108
panic({0x26d5080?, 0x487b690?})
	/opt/hostedtoolcache/go/1.23.12/x64/src/runtime/panic.go:791 +0x132
k8s.io/apimachinery/pkg/apis/meta/v1/unstructured.(*Unstructured).GetKind(...)
	/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/apis/meta/v1/unstructured/unstructured.go:231
github.com/kube-burner/kube-burner/v2/pkg/burner.(*JobExecutor).createRequest.func1()
	/home/runner/go/pkg/mod/github.com/kube-burner/kube-burner/v2@v2.1.0/pkg/burner/create.go:352 +0x635
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc000f945e8?)
	/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:145 +0x3e
k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff({0x3b9aca00, 0x4008000000000000, 0x0, 0x9, 0x0}, 0xc000f0de70)
	/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:461 +0x5a
github.com/kube-burner/kube-burner/v2/pkg/util.RetryWithExponentialBackOff(0xc000f94670, 0x3b9aca00, 0x4008000000000000, 0x0, 0x2?)
	/home/runner/go/pkg/mod/github.com/kube-burner/kube-burner/v2@v2.1.0/pkg/util/utils.go:41 +0xd6
github.com/kube-burner/kube-burner/v2/pkg/burner.(*JobExecutor).createRequest(0xc0011e0e00?, {0x2f66650?, 0xc0001fee70?}, {{0xc000c122d0, 0x18}, {0xc000762e6c, 0x2}, {0xc000762e70, 0xf}}, {0x0, ...}, ...)
	/home/runner/go/pkg/mod/github.com/kube-burner/kube-burner/v2@v2.1.0/pkg/burner/create.go:310 +0x1d4
github.com/kube-burner/kube-burner/v2/pkg/burner.(*JobExecutor).replicaHandler.func1.1({0xc000383ba8?, 0x1?})
	/home/runner/go/pkg/mod/github.com/kube-burner/kube-burner/v2@v2.1.0/pkg/burner/create.go:251 +0x129
created by github.com/kube-burner/kube-burner/v2/pkg/burner.(*JobExecutor).replicaHandler.func1 in goroutine 5435
	/home/runner/go/pkg/mod/github.com/kube-burner/kube-burner/v2@v2.1.0/pkg/burner/create.go:246 +0x616
+ exit_code=2
++ date -u +%Y-%m-%dT%H:%M:%SZ
+ JOB_END=2025-12-26T04:46:14Z

@jianzhangbjz
Member Author

/pj-rehearse periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-gcp-4.20-nightly-x86-olmv1-benchmark-test

@openshift-ci-robot
Contributor

@jianzhangbjz: now processing your pj-rehearse request. Please allow up to 10 minutes for jobs to trigger or cancel.

@jianzhangbjz
Member Author

/pj-rehearse periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-gcp-4.21-nightly-x86-olmv1-benchmark-test

@openshift-ci-robot
Contributor

@jianzhangbjz: now processing your pj-rehearse request. Please allow up to 10 minutes for jobs to trigger or cancel.

@jianzhangbjz
Member Author

Reported a bug: kube-burner/kube-burner#1066

@jianzhangbjz
Member Author

jianzhangbjz commented Dec 26, 2025

The root cause is kube-burner/kube-burner#1067. As a workaround for this PR, I use `--churn-mode objects` instead of `namespaces` for jobs that create cluster-scoped resources.
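Concretely, the benchmark invocation would change along these lines (a sketch based on the command from the earlier log; every flag other than `--churn-mode` is carried over unchanged and assumed to stay as-is):

```
/tmp/kube-burner-ocp olm --log-level=debug --qps=20 --burst=20 --gc=true \
  --churn-duration=5m --churn-mode=objects \
  --timeout=30m \
  --metrics-profile=/tmp/olm-metrics.yml,/tmp/extended-metrics.yml \
  --gc-metrics=false --profile-type=both --iterations=50 \
  --es-server=xxx --es-index=ripsaw-kube-burner
```

With object-level churn, cluster-scoped resources are deleted and re-created directly instead of via namespace deletion, which sidesteps the namespace-churn path that triggers the panic.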

@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Dec 26, 2025
@openshift-ci
Contributor

openshift-ci bot commented Dec 26, 2025

New changes are detected. LGTM label has been removed.

@jianzhangbjz
Member Author

/pj-rehearse periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-gcp-4.20-nightly-x86-olmv1-benchmark-test

@openshift-ci-robot
Contributor

@jianzhangbjz: now processing your pj-rehearse request. Please allow up to 10 minutes for jobs to trigger or cancel.

@jianzhangbjz
Member Author

/pj-rehearse periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-gcp-4.20-nightly-x86-olmv1-benchmark-test

@openshift-ci-robot
Contributor

@jianzhangbjz: now processing your pj-rehearse request. Please allow up to 10 minutes for jobs to trigger or cancel.

@openshift-ci-robot
Contributor

[REHEARSALNOTIFIER]
@jianzhangbjz: the pj-rehearse plugin accommodates running rehearsal tests for the changes in this PR. Expand 'Interacting with pj-rehearse' for usage details. The following rehearsable tests have been affected by this change:

| Test name | Repo | Type | Reason |
| --- | --- | --- | --- |
| periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-gcp-4.22-nightly-x86-olmv1-benchmark-test | N/A | periodic | Registry content changed |
| periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-gcp-4.21-nightly-x86-olmv1-benchmark-test | N/A | periodic | Registry content changed |
| periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-gcp-4.20-nightly-x86-olmv1-benchmark-test | N/A | periodic | Registry content changed |
| periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-gcp-4.19-nightly-x86-olmv1-benchmark-test | N/A | periodic | Registry content changed |
| periodic-ci-openshift-openshift-tests-private-release-4.21-amd64-nightly-olmv1-benchmark-test | N/A | periodic | Registry content changed |
| periodic-ci-openshift-openshift-tests-private-release-4.20-amd64-nightly-olmv1-benchmark-test | N/A | periodic | Registry content changed |
| periodic-ci-openshift-openshift-tests-private-release-4.19-amd64-nightly-olmv1-benchmark-test | N/A | periodic | Registry content changed |
Interacting with pj-rehearse

Comment: /pj-rehearse to run up to 5 rehearsals
Comment: /pj-rehearse skip to opt-out of rehearsals
Comment: /pj-rehearse {test-name}, with each test separated by a space, to run one or more specific rehearsals
Comment: /pj-rehearse more to run up to 10 rehearsals
Comment: /pj-rehearse max to run up to 25 rehearsals
Comment: /pj-rehearse auto-ack to run up to 5 rehearsals, and add the rehearsals-ack label on success
Comment: /pj-rehearse list to get an up-to-date list of affected jobs
Comment: /pj-rehearse abort to abort all active rehearsals
Comment: /pj-rehearse network-access-allowed to allow rehearsals of tests that have the restrict_network_access field set to false. This must be executed by an openshift org member who is not the PR author

Once you are satisfied with the results of the rehearsals, comment: /pj-rehearse ack to unblock merge. When the rehearsals-ack label is present on your PR, merge will no longer be blocked by rehearsals.
If you would like the rehearsals-ack label removed, comment: /pj-rehearse reject to re-block merging.

@openshift-ci
Contributor

openshift-ci bot commented Dec 29, 2025

@jianzhangbjz: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/rehearse/periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-gcp-4.20-nightly-x86-olmv1-benchmark-test | b6b0ee4 | link | unknown | /pj-rehearse periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-gcp-4.20-nightly-x86-olmv1-benchmark-test |

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@jianzhangbjz
Member Author

The workaround doesn't work; it is still blocked by kube-burner/kube-burner#1067.

@jianzhangbjz
Member Author

The workaround issue is fixed in kube-burner/kube-burner-ocp#368.
