
fix(e2e): Wait for nodes to reboot#1419

Merged
kubevirt-bot merged 1 commit into nmstate:main from qinqon:e2e-wait-for-node-reboot
Mar 20, 2026

Conversation

@qinqon
Member

@qinqon qinqon commented Dec 26, 2025

What this PR does / why we need it:
The tests were rebooting nodes and then directly, eventually, checking NNCPs. This can introduce race conditions, since how much time it takes for nodes to reboot is quite random. This fix adds an explicit wait for node readiness: if it fails, we will know for sure that the node didn't fully reboot, instead of getting a weird "Missing enactment" error.

Release note:

NONE
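As background for "node readiness": a Kubernetes node counts as ready when its Ready condition reports status True. A minimal, self-contained sketch of that check, using plain structs standing in for the real corev1.NodeCondition type (names here are illustrative, not the test suite's actual helpers):

```go
package main

import "fmt"

// Condition mirrors the shape of a Kubernetes node condition
// (illustrative only; real code would use corev1.NodeCondition).
type Condition struct {
	Type   string
	Status string
}

// nodeReady reports whether the "Ready" condition has status "True".
func nodeReady(conds []Condition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	// No Ready condition reported yet (e.g. node still rebooting).
	return false
}

func main() {
	rebooting := []Condition{{Type: "Ready", Status: "False"}}
	back := []Condition{{Type: "Ready", Status: "True"}}
	fmt.Println(nodeReady(rebooting), nodeReady(back)) // false true
}
```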

@kubevirt-bot kubevirt-bot added release-note-none Denotes a PR that doesn't merit a release note. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. labels Dec 26, 2025
@gemini-code-assist

Summary of Changes

Hello @qinqon, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the reliability of end-to-end tests by addressing a critical timing issue that occurred after node reboots. By incorporating a specific waiting mechanism for nodes to become fully ready before proceeding with further checks, the tests will now more accurately assess the system's state, thereby reducing flaky test failures caused by race conditions and premature assertions.

Highlights

  • E2E Test Reliability: Introduced explicit waits for node readiness after rebooting nodes in e2e tests to prevent race conditions and "Missing enactment" errors, ensuring tests accurately reflect node state.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request aims to fix a race condition in e2e tests by adding an explicit wait for nodes to reboot. While the intention is correct, the implementation of the waiting function waitForNodeToStart is flawed and does not actually wait for the node to become ready. This makes the added waits ineffective. I've left a detailed comment explaining the issue and a suggestion for a fix to the waitForNodeToStart function, which is necessary for this PR to achieve its goal.

nodeToReboot := nodes[0]
Byf("Reboot node %s and verify that bond still has ip of primary nic", nodeToReboot)
restartNodeWithoutWaiting(nodeToReboot)
waitForNodeToStart(nodeToReboot)


critical

The waitForNodeToStart function has a logic issue that prevents it from waiting for the node to reboot. When the node is down and unreachable, runner.RunAtNode inside waitForNodeToStart will fail. The function will then return the string "not yet". The Eventually block checks ShouldNot(Equal("up")). Since "not yet" is not equal to "up", the check passes, and waitForNodeToStart returns immediately without actually waiting for the node to be ready. This defeats the purpose of adding this wait.

The current implementation of waitForNodeToStart in test/e2e/handler/utils.go is:

func waitForNodeToStart(node string) {
	Byf("Waiting till node %s is rebooted", node)
	// It will wait till uptime -p will return up that means that node was currently rebooted and is 0 min up
	Eventually(func() string {
		output, err := runner.RunAtNode(node, "uptime", "-p")
		if err != nil {
			return "not yet"
		}
		return output
	}, 300*time.Second, 5*time.Second).ShouldNot(Equal("up"), fmt.Sprintf("Node %s failed to start after reboot", node))
}

To correctly wait for the node to be ready, the function should wait for the runner.RunAtNode command to succeed. A corrected implementation would look like this:

func waitForNodeToStart(node string) {
	Byf("Waiting till node %s is rebooted", node)
	Eventually(func() error {
		_, err := runner.RunAtNode(node, "uptime")
		return err
	}, 5*time.Minute, 5*time.Second).Should(Succeed(), fmt.Sprintf("Node %s failed to start after reboot", node))
}

Since utils.go is not part of this PR's changes, I recommend also including the fix for waitForNodeToStart in this pull request for the intended fix to be effective.

Member


@qinqon is this a valid one or yet another hallucination?

Member


@gemini-code-review, we have reviewed this manually and what you wrote is not true. Explain yourself. Do better than last time.

The test were rebooting nodes and they were directly eventually checking
NNCPs, this can introduce race conditions since how much time it takes
for nodes to reboot is quite random, this fix add a explicit wait of
node readiness, if it fails we will know for sure that the node didn't
fully reboot instead of weird "Missing enacment" error.

Signed-off-by: Enrique Llorente <ellorent@redhat.com>
@qinqon qinqon force-pushed the e2e-wait-for-node-reboot branch from a1e5fb4 to de8212a on December 26, 2025 07:21
@mkowalski
Member

/test pull-kubernetes-nmstate-e2e-handler-k8s

@mkowalski
Member

@gemini-code-review full review

@mkowalski
Member

/lgtm
/approve

@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Mar 20, 2026
@kubevirt-bot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mkowalski

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot kubevirt-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 20, 2026
@kubevirt-bot kubevirt-bot merged commit 138fcfd into nmstate:main Mar 20, 2026
7 checks passed
qinqon added a commit to qinqon/kubernetes-nmstate that referenced this pull request Mar 25, 2026
The test were rebooting nodes and they were directly eventually checking
NNCPs, this can introduce race conditions since how much time it takes
for nodes to reboot is quite random, this fix add a explicit wait of
node readiness, if it fails we will know for sure that the node didn't
fully reboot instead of weird "Missing enacment" error.

Signed-off-by: Enrique Llorente <ellorent@redhat.com>
mkowalski added a commit to mkowalski/kubernetes-nmstate that referenced this pull request Apr 7, 2026
mkowalski added a commit to openshift-networking/kubernetes-nmstate that referenced this pull request Apr 7, 2026
…#1419)"

This reverts commit 138fcfd.

It causes trouble downstream because (we suppose) at restarting the OCP
node it takes more time for k-nmstate to reach its readiness.

In the future (if upstream test is more robust) we can consider
reverting this revert.
metal-net-cloner-bot Bot pushed a commit to openshift-networking/kubernetes-nmstate that referenced this pull request Apr 8, 2026
metal-net-cloner-bot Bot pushed a commit to openshift-networking/kubernetes-nmstate that referenced this pull request Apr 9, 2026
metal-net-cloner-bot Bot pushed a commit to openshift-networking/kubernetes-nmstate that referenced this pull request Apr 10, 2026
metal-net-cloner-bot Bot pushed a commit to openshift-networking/kubernetes-nmstate that referenced this pull request Apr 13, 2026
metal-net-cloner-bot Bot pushed a commit to openshift-networking/kubernetes-nmstate that referenced this pull request Apr 14, 2026
metal-net-cloner-bot Bot pushed a commit to openshift-networking/kubernetes-nmstate that referenced this pull request Apr 15, 2026
metal-net-cloner-bot Bot pushed a commit to openshift-networking/kubernetes-nmstate that referenced this pull request Apr 16, 2026
metal-net-cloner-bot Bot pushed a commit to openshift-networking/kubernetes-nmstate that referenced this pull request Apr 17, 2026
metal-net-cloner-bot Bot pushed a commit to openshift-networking/kubernetes-nmstate that referenced this pull request Apr 28, 2026
metal-net-cloner-bot Bot pushed a commit to openshift-networking/kubernetes-nmstate that referenced this pull request Apr 29, 2026