
OCPBUGS-65896: controller/workload: Rework Degraded/Progressing conditions#2128

Open
tchap wants to merge 2 commits into openshift:master from tchap:workload-condition-overwrites

Conversation

@tchap
Contributor

@tchap tchap commented Feb 18, 2026

So that scaling does not automatically set Progressing/Degraded, condition
handling is aligned so that the Deployment generation is no longer
consulted.

Degraded is now set only when the Deployment is not Available or it times
out progressing. This is less strict than before, but checking available
replicas would mean going Degraded automatically on scaling.

Progressing is set to True when the Deployment is not in
NewReplicaSetAvailable and it has not timed out progressing.

I also improved the overall test case names in a separate commit.

This affects authentication-operator and openshift-apiserver-operator.
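For reference, the decision logic described above can be sketched with simplified stand-in types. The real code operates on appsv1.DeploymentStatus from the Kubernetes API; the helper and condition names follow the diff, while the struct and function shapes here are purely illustrative:

```go
package main

import "fmt"

// condition mirrors the fields of appsv1.DeploymentCondition that the
// controller consults; the real code uses the Kubernetes API types.
type condition struct {
	Type   string
	Status string
	Reason string
}

// progressingCondition returns the Progressing condition, if present.
func progressingCondition(conds []condition) (condition, bool) {
	for _, c := range conds {
		if c.Type == "Progressing" {
			return c, true
		}
	}
	return condition{}, false
}

// hasProgressed reports whether the deployment controller has declared the
// rollout complete (NewReplicaSetAvailable).
func hasProgressed(conds []condition) bool {
	c, ok := progressingCondition(conds)
	return ok && c.Status == "True" && c.Reason == "NewReplicaSetAvailable"
}

// timedOutProgressing reports whether the deployment controller has given up
// on the rollout (ProgressDeadlineExceeded).
func timedOutProgressing(conds []condition) bool {
	c, ok := progressingCondition(conds)
	return ok && c.Status == "False" && c.Reason == "ProgressDeadlineExceeded"
}

// evaluate derives the operator's Progressing/Degraded values from the
// deployment status, without consulting the Deployment generation.
func evaluate(availableReplicas int32, conds []condition) (progressing, degraded bool) {
	timedOut := timedOutProgressing(conds)
	progressing = !hasProgressed(conds) && !timedOut
	degraded = availableReplicas == 0 || timedOut
	return progressing, degraded
}

func main() {
	// A rollout still in flight: Progressing=true, Degraded=false.
	p, d := evaluate(2, []condition{{Type: "Progressing", Status: "True", Reason: "ReplicaSetUpdated"}})
	fmt.Println(p, d)
}
```

Note that scaling alone never flips either condition in this model: it only changes replica counts, which matter here solely through the Available == 0 check.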

Summary by CodeRabbit

  • Bug Fixes

    • More accurate rollout/progress tracking with improved timeout detection and clearer Progressing reasons.
    • Improved Degraded reporting: concise "no . pods available on any node" message when pods are absent and explicit timeout messaging when rollouts exceed their deadline.
    • Stricter gating for recording new versions—only recorded when workload is at the latest revision and fully updated.
  • Tests

    • Expanded and reworked status scenarios and assertions, including version-recording validations across rollout states.

@openshift-ci-robot openshift-ci-robot added jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. labels Feb 18, 2026
@openshift-ci-robot

@tchap: This pull request references Jira Issue OCPBUGS-65896, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (ksiddiqu@redhat.com), skipping review request.

The bug has been updated to refer to the pull request using the external bug tracker.

Details

In response to this:

So that scaling does not automatically set Progressing/Degraded, condition
handling is aligned so that the Deployment generation is no longer
consulted.

Degraded is set only when the Deployment is not Available or it times
out progressing.

Progressing is set to True when the Deployment is not in
NewReplicaSetAvailable and it has not timed out progressing.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@tchap
Contributor Author

tchap commented Feb 18, 2026

/cc @atiratree

@openshift-ci openshift-ci bot requested a review from atiratree February 18, 2026 10:53
```go
// Update is done when all pods have been updated to the latest revision
// and the deployment controller has reported NewReplicaSetAvailable
workloadIsBeingUpdated := !workloadAtHighestGeneration || !hasDeploymentProgressed(workload.Status)
workloadIsBeingUpdatedTooLong := v1helpers.IsUpdatingTooLong(previousStatus, *deploymentProgressingCondition.Type)
```
Contributor Author


We don't use IsUpdatingTooLong any more; the check is now based on ProgressDeadlineExceeded reported by the deployment controller.

Member


let's also set .spec.progressDeadlineSeconds to 15m on the consumer workloads to honor the superseded progressingConditionTimeout value
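Concretely, that suggestion amounts to setting one field on the consumer Deployments. DeploymentSpec.ProgressDeadlineSeconds is a *int32 with a default of 600 seconds; the stand-in struct below is illustrative, not the real API type:

```go
package main

import "fmt"

// deploymentSpec stands in for appsv1.DeploymentSpec; only the field under
// discussion is mirrored here.
type deploymentSpec struct {
	ProgressDeadlineSeconds *int32
}

func main() {
	// 15 minutes, honoring the superseded progressingConditionTimeout value;
	// the Deployment default would otherwise be 600 seconds (10 minutes).
	deadline := int32(15 * 60)
	spec := deploymentSpec{ProgressDeadlineSeconds: &deadline}
	fmt.Println(*spec.ProgressDeadlineSeconds) // 900
}
```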

Contributor Author


I don't think we can do that. Sync from the delegate returns the synchronized deployment already. So we would have to send another update request, I guess.

Contributor Author


We can align this in auth when updating library-go. I will actually open a PR there once this is accepted to run CI anyway.

Member


Yup, I meant that we should update it in the consumer repos like auth.

Member


Might be better to open a proof PR before the library-go changes are merged.

Comment thread pkg/operator/apiserver/controller/workload/workload.go Outdated
Comment thread pkg/operator/apiserver/controller/workload/workload_test.go Outdated
Comment thread pkg/operator/apiserver/controller/workload/workload_test.go
Member

@atiratree atiratree left a comment


Just one nit, otherwise LGTM, thanks!

@atiratree
Member

/lgtm
/hold
for the proof PR openshift/cluster-authentication-operator#843

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Feb 19, 2026
@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Feb 19, 2026
To enable scaling not to automatically set Progressing/Degraded,
conditions handling is aligned so that the Deployment generation is not
consulted any more.

Degraded only happens when the Deployment is not Available or it times
out progressing.

Progressing is set to True when the Deployment is not in
NewReplicaSetAvailable and it hasn't timed out progressing.
@tchap tchap force-pushed the workload-condition-overwrites branch from 0b782a2 to 08c8c15 Compare April 14, 2026 08:21
@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Apr 14, 2026
@coderabbitai

coderabbitai bot commented Apr 14, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: 618b6d62-3948-4a37-b778-584bc7aae762

📥 Commits

Reviewing files that changed from the base of the PR and between 882ecf8 and 3ae8d51.

📒 Files selected for processing (1)
  • pkg/operator/apiserver/controller/workload/workload_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • pkg/operator/apiserver/controller/workload/workload_test.go

Walkthrough

Reworked workload controller condition evaluation: replaced generation-based Progressing logic with timeout-based checks via a new helper, simplified Degraded logic to availability/deadline checks, removed an external dependency, and tightened version-recording gating. Tests updated to reflect new condition outcomes and version-recording behavior.

Changes

Cohort / File(s) Summary
Condition Logic Refactoring
pkg/operator/apiserver/controller/workload/workload.go
Removed strings and github.com/openshift/library-go/pkg/apps/deployment dependency. Added hasDeploymentTimedOutProgressing(status appsv1.DeploymentStatus) (string, bool). Replaced generation-based Progressing handling with timeout-based logic; Progressing now set to PodsUpdating / ProgressDeadlineExceeded / AsExpected. Simplified Degraded logic to availability or deadline checks and updated version-recording gating to require matching generation and desired/updated replica counts.
Test Suite Updates
pkg/operator/apiserver/controller/workload/workload_test.go
Reworked TestUpdateOperatorStatus scenarios and expectations: updated Availability/Degraded/Progressing Status, Reason, and Message values (including wording/punctuation changes). Added version-recording assertions using a fakeVersionRecorder and per-scenario validateVersionRecorder. Added cases covering progress-deadline exceeded, recovery, terminating pods during rollout, and maxSurge behaviors.
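Based on the signature in the walkthrough above, the new helper can be sketched like this; the stand-in status type is illustrative (the real function takes appsv1.DeploymentStatus), and returning the controller's message lets it be surfaced in the operator conditions:

```go
package main

import "fmt"

// deploymentCondition and deploymentStatus mirror only the fields the helper
// consults on the Kubernetes API types.
type deploymentCondition struct {
	Type, Status, Reason, Message string
}

type deploymentStatus struct {
	Conditions []deploymentCondition
}

// hasDeploymentTimedOutProgressing reports whether the deployment controller
// has marked the rollout as stuck (Progressing=False with reason
// ProgressDeadlineExceeded), returning the controller's message.
func hasDeploymentTimedOutProgressing(status deploymentStatus) (string, bool) {
	for _, c := range status.Conditions {
		if c.Type == "Progressing" && c.Status == "False" && c.Reason == "ProgressDeadlineExceeded" {
			return c.Message, true
		}
	}
	return "", false
}

func main() {
	msg, ok := hasDeploymentTimedOutProgressing(deploymentStatus{Conditions: []deploymentCondition{{
		Type: "Progressing", Status: "False", Reason: "ProgressDeadlineExceeded",
		Message: "ReplicaSet has timed out progressing.",
	}}})
	fmt.Println(ok, msg)
}
```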

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Controller
  participant Deployment as DeploymentStatus
  participant VersionRecorder
  Controller->>Deployment: Read deployment status (AvailableReplicas, UpdatedReplicas, Progressing, ObservedGeneration)
  alt Progressing not NewReplicaSetAvailable and not timed out
    Controller->>Controller: set Progressing = True (PodsUpdating)
  else ProgressDeadlineExceeded
    Controller->>Controller: set Progressing = False (ProgressDeadlineExceeded)
    Controller->>Controller: set Degraded = True (ProgressDeadlineExceeded)
  else AsExpected
    Controller->>Controller: set Progressing = False (AsExpected)
    Controller->>Controller: set Degraded = False (AsExpected)
  end
  alt AvailableReplicas == 0
    Controller->>Controller: set Degraded = True (Unavailable)
  end
  alt operator config at highest generation AND workload.Generation == workload.Status.ObservedGeneration AND AvailableReplicas == desiredReplicas AND UpdatedReplicas == desiredReplicas
    Controller->>VersionRecorder: SetVersion(operandName, version)
  else
    note right of Controller: skip version recording
  end
```
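The version-recording gate at the bottom of the sequence diagram condenses to a single predicate. Parameter names below are illustrative, not the actual identifiers in workload.go:

```go
package main

import "fmt"

// shouldRecordVersion sketches the tightened version-recording gate: a new
// version is recorded only when the operator config is at the highest
// revision, the deployment controller has observed the latest generation,
// and all replicas are both available and updated.
func shouldRecordVersion(atHighestRevision bool, generation, observedGeneration int64, desired, available, updated int32) bool {
	return atHighestRevision &&
		generation == observedGeneration &&
		available == desired &&
		updated == desired
}

func main() {
	fmt.Println(shouldRecordVersion(true, 3, 3, 3, 3, 3)) // fully rolled out: record
	fmt.Println(shouldRecordVersion(true, 3, 3, 3, 3, 2)) // replicas still updating: skip
}
```

This is what prevents a version from being recorded during an incomplete rollout, the regression the review comments below ask the tests to cover.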

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks | ✅ 9 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (9 passed)
  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title clearly summarizes the main change: reworking Degraded/Progressing condition logic in the workload controller, which aligns with the primary focus of the changeset.
  • Stable And Deterministic Test Names ✅ Passed: All Ginkgo test names in workload_test.go are stable and deterministic, with no dynamic identifiers, timestamps, or format placeholders.
  • Test Structure And Quality ✅ Passed: The test file demonstrates good Go testing practices, with well-structured table-driven patterns, descriptive test names, clear setup through fake implementations, and meaningful assertions using custom validation functions.
  • Microshift Test Compatibility ✅ Passed: The PR modifies workload_test.go with standard Go unit tests (TestUpdateOperatorStatus), not Ginkgo e2e tests using BDD patterns like It(), Describe(), Context(), or When().
  • Single Node Openshift (SNO) Test Compatibility ✅ Passed: The PR modifies unit tests in library-go's controller logic, not Ginkgo e2e tests; the SNO compatibility check does not apply to standard Go unit tests.
  • Topology-Aware Scheduling Compatibility ✅ Passed: Code changes modify condition evaluation logic only; no scheduling constraints, affinity rules, topology spread constraints, node selectors, or pod disruption budgets are introduced.
  • OTE Binary Stdout Contract ✅ Passed: Modified files are standard Go unit tests without OTE binary patterns, suite-level hooks, or stdout writes at process level.
  • IPv6 And Disconnected Network Test Compatibility ✅ Passed: The custom check applies only to new Ginkgo e2e tests with IPv4/external connectivity assumptions; this PR modifies standard Go unit tests in a library utility file.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci-robot

@tchap: This pull request references Jira Issue OCPBUGS-65896, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (ksiddiqu@redhat.com), skipping review request.

Details

In response to this:

So that scaling does not automatically set Progressing/Degraded, condition
handling is aligned so that the Deployment generation is no longer
consulted.

Degraded is now set only when the Deployment is not Available or it times
out progressing. This is less strict than before, but checking available
replicas would mean going Degraded automatically on scaling.

Progressing is set to True when the Deployment is not in
NewReplicaSetAvailable and it has not timed out progressing.

I also improved the overall test case names in a separate commit.

This affects authentication-operator and openshift-apiserver-operator.

Summary by CodeRabbit

  • Bug Fixes
  • Improved deployment progress tracking and timeout detection to provide more accurate status reporting
  • Enhanced degradation state identification to better reflect unavailable pod conditions
  • Refined version recording requirements to ensure proper state validation before marking updates as complete

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
pkg/operator/apiserver/controller/workload/workload_test.go (1)

54-67: Please add explicit coverage for the new version-recording gate.

pkg/operator/apiserver/controller/workload/workload.go Lines 358-361 changed when SetVersion fires, but this table still never exercises that path: no scenario sets operatorConfigAtHighestRevision = true, and the test controller leaves versionRecorder unset. A small fake recorder plus one positive and one negative case would keep this behavior from regressing.

Also applies to: 846-878

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/operator/apiserver/controller/workload/workload_test.go` around lines 54
- 67, Add explicit scenarios to TestUpdateOperatorStatus that exercise the
version-recording gate by (1) injecting a fake versionRecorder into the test
controller and (2) adding two table entries: one with
operatorConfigAtHighestRevision = true and one with it = false; verify that when
operatorConfigAtHighestRevision is true the fake recorder's SetVersion method is
invoked (and not invoked for the false case). Locate the test table in
TestUpdateOperatorStatus and the controller instance used to run scenarios, set
controller.versionRecorder to a simple fake recorder that records calls, and add
assertions in each scenario's validateOperatorStatus to check whether SetVersion
was called as expected to cover the SetVersion path.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: 15f9a88c-d378-463c-ad2b-87fa88d3a8d7

📥 Commits

Reviewing files that changed from the base of the PR and between d2db42c and 08c8c15.

📒 Files selected for processing (2)
  • pkg/operator/apiserver/controller/workload/workload.go
  • pkg/operator/apiserver/controller/workload/workload_test.go


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
pkg/operator/apiserver/controller/workload/workload_test.go (1)

847-889: Add negative SetVersion() cases for the other readiness gates.

SetVersion() is also gated on Generation == ObservedGeneration, AvailableReplicas == desiredReplicas, and UpdatedReplicas == desiredReplicas in pkg/operator/apiserver/controller/workload/workload.go:355-364. Right now only the operatorConfigAtHighestRevision gate is exercised, so a regression that records the version during an incomplete rollout would still pass this suite.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/operator/apiserver/controller/workload/workload_test.go` around lines 847
- 889, Add negative test cases in the workload_test table that exercise the
other readiness gates which should prevent SetVersion(): create entries similar
to the existing cases but where (a) Generation != ObservedGeneration, (b)
Status.AvailableReplicas < Spec.Replicas, and (c) Status.UpdatedReplicas <
Spec.Replicas; each should set operatorConfigAtHighestRevision=true (so only the
other gate blocks) use the same fakeVersionRecorder and validateVersionRecorder
to assert r.setVersionCalls remains empty, and keep validateOperatorStatus as a
no-op; locate tests in workload_test.go and mirror the structure/fields used by
the existing "version ..." cases so the assertions against setVersionCalls catch
regressions in workload.go’s SetVersion gating logic.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: f83fb810-ff6d-40fd-9f77-c2365c142fe9

📥 Commits

Reviewing files that changed from the base of the PR and between 08c8c15 and 882ecf8.

📒 Files selected for processing (1)
  • pkg/operator/apiserver/controller/workload/workload_test.go

@tchap tchap force-pushed the workload-condition-overwrites branch from 882ecf8 to 3ae8d51 Compare April 14, 2026 09:08
@openshift-ci
Contributor

openshift-ci bot commented Apr 14, 2026

@tchap: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@atiratree
Member

/hold cancel
/lgtm

@openshift-ci openshift-ci bot added lgtm Indicates that a PR is ready to be merged. and removed do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. labels Apr 20, 2026
@openshift-ci
Contributor

openshift-ci bot commented Apr 20, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: atiratree, tchap
Once this PR has been reviewed and has the lgtm label, please assign p0lyn0mial for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

```diff
@@ -4,7 +4,6 @@ import (
 	"context"
```
Contributor


Am I right that these are the three (at least) main behavioral changes introduced in this PR?

1. Scaling the workload (up/down) won't report Progressing=True

2. A stuck rollout eventually stops reporting Progressing=True

3. A workload missing some replicas (2/3) no longer triggers Degraded=True

Contributor Author


Yeah, right. We only degrade on Available == 0.

Also, the Progressing timeout check now relies on the Deployment's native ProgressDeadlineExceeded condition instead of a helper, and once the progress deadline is hit, we go Degraded.

Contributor


I would like to get @benluddy opinion on that.


Labels

jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. lgtm Indicates that a PR is ready to be merged.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants