---
title: disable-force-detach-option
authors:
- "@dobsonj"
reviewers:
- "@ingvagabund"
- "@gnufied"
approvers:
- "@jsafrane"
api-approvers:
- "@JoelSpeed"
creation-date: 2026-01-30
last-updated: 2026-01-30
tracking-link:
- "https://issues.redhat.com/browse/STOR-2789"
see-also:
- "https://issues.redhat.com/browse/OCPBUGS-61077"
- "https://issues.redhat.com/browse/RFE-8138"
- "https://github.com/openshift/api/pull/2668"
replaces:
superseded-by:
---

# Option to disable force detach of volumes

## Summary

There have been issues with certain drivers where a volume is force detached while it is still mounted, leading to data corruption. For drivers that directly expose LUNs, force detach bypasses the unstage flow (where `multipath -f` is invoked) and goes straight to unpublish, which unmaps the LUN from the per-node igroup. This enhancement introduces an option to disable force detach of volumes in OCP to avoid this problem.

## Motivation

Force detaching a volume can corrupt the volume's data on some clusters. Because it is not always safe, the cluster admin needs an option to disable it.

### User Stories

As a cluster admin, I need the option to disable force detach on timeout to avoid data loss.

### Goals

* Allow cluster admin to control the disable-force-detach-on-timeout config option for kube-controller-manager (KCM).

### Non-Goals

* Not exposing other parameters.
* Not providing finer-grained control than cluster scope.
* Not allowing anyone other than an admin to control this parameter.

## Proposal

Create a new config object for kube-controller-manager with a force detach option. `cluster-kube-controller-manager-operator` will read the option from the API and control the `disable-force-detach-on-timeout` config option for KCM.

### Workflow Description

The cluster admin sets `.spec.forceDetachOnTimeout="Disabled"` in the `cluster` ControllerManager object. `cluster-kube-controller-manager-operator` is notified of this change via an informer, and it updates the KCM target config to include the `disable-force-detach-on-timeout` option. KCM restarts and reads the new config option.
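
For illustration, the rendered KCM configuration could end up looking like the fragment below. This is a minimal sketch only: the upstream flag name `disable-force-detach-on-timeout` comes from kube-controller-manager, but the exact place where the operator surfaces it (shown here under `extendedArguments` in `KubeControllerManagerConfig`) is an assumption.

```yaml
# Hypothetical sketch of the KCM target config after the operator observes
# forceDetachOnTimeout: Disabled. The extendedArguments location is an
# assumption; only the flag name is taken from upstream kube-controller-manager.
apiVersion: kubecontrolplane.config.openshift.io/v1
kind: KubeControllerManagerConfig
extendedArguments:
  disable-force-detach-on-timeout:
    - "true"
```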

### API Extensions

There is already a [KubeControllerManager](https://github.com/openshift/api/blob/master/operator/v1/types_kubecontrollermanager.go) operator type to configure cluster-kube-controller-manager-operator, and this enhancement introduces a `ControllerManager` config type to configure kube-controller-manager.
This approach is consistent with other workload components like
[KubeAPIServer](https://github.com/openshift/api/blob/master/operator/v1/types_kubeapiserver.go) / [APIServer](https://github.com/openshift/api/blob/master/config/v1/types_apiserver.go) and
[KubeScheduler](https://github.com/openshift/api/blob/master/operator/v1/types_scheduler.go) / [Scheduler](https://github.com/openshift/api/blob/master/config/v1/types_scheduling.go).

Example:
```yaml
apiVersion: config.openshift.io/v1alpha1
kind: ControllerManager
metadata:
  name: cluster
spec:
  forceDetachOnTimeout: Disabled
```

API PR: <https://github.com/openshift/api/pull/2668>

### Topology Considerations

#### Hypershift / Hosted Control Planes

Are there any unique considerations for making this change work with
Hypershift?

See https://github.com/openshift/enhancements/blob/e044f84e9b2bafa600e6c24e35d226463c2308a5/enhancements/multi-arch/heterogeneous-architecture-clusters.md?plain=1#L282

How does it affect any of the components running in the
management cluster? How does it affect any components running split
between the management cluster and guest cluster?

#### Standalone Clusters

Is the change relevant for standalone clusters?

#### Single-node Deployments or MicroShift

How does this proposal affect the resource consumption of a
single-node OpenShift deployment (SNO), CPU and memory?

How does this proposal affect MicroShift? For example, if the proposal
adds configuration options through API resources, should any of those
behaviors also be exposed to MicroShift admins through the
configuration file for MicroShift?

#### OpenShift Kubernetes Engine

How does this proposal affect OpenShift Kubernetes Engine (OKE)? Does it depend
on features that are excluded from the OKE product offering? See [the
comparison of OKE and OCP in the product documentation](https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/overview/oke-about#about_oke_similarities_and_differences).

### Implementation Details/Notes/Constraints

What are some important details that didn't come across above in the
**Proposal**? Go in to as much detail as necessary here. This might be
a good place to talk about core concepts and how they relate. While it is useful
to go into the details of the code changes required, it is not necessary to show
how the code will be rewritten in the enhancement.

### Risks and Mitigations

What are the risks of this proposal and how do we mitigate them? Think broadly. For
example, consider both security and how this will impact the larger OKD
ecosystem.

How will security be reviewed and by whom?

How will UX be reviewed and by whom?

Consider including folks that also work outside your immediate sub-project.

### Drawbacks

The idea is to find the best form of an argument why this enhancement should
_not_ be implemented.

What trade-offs (technical/efficiency cost, user experience, flexibility,
supportability, etc) must be made in order to implement this? What are the reasons
we might not want to undertake this proposal, and how do we overcome them?

Does this proposal implement a behavior that's new/unique/novel? Is it poorly
aligned with existing user expectations? Will it be a significant maintenance
burden? Is it likely to be superseded by something else in the near future?

## Alternatives (Not Implemented)

Similar to the `Drawbacks` section the `Alternatives` section is used
to highlight and record other possible approaches to delivering the
value proposed by an enhancement, including especially information
about why the alternative was not selected.

Add a new field to the Storage CR to control force detach behavior. `cluster-kube-controller-manager-operator` would read the field and control the `disable-force-detach-on-timeout` config option for KCM.
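
A rough sketch of what that could look like on the existing Storage CR; the field name is hypothetical and simply mirrors the proposed ControllerManager field:

```yaml
apiVersion: operator.openshift.io/v1
kind: Storage
metadata:
  name: cluster
spec:
  # Hypothetical field, not implemented; shown only to illustrate this alternative.
  forceDetachOnTimeout: Disabled
```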

## Open Questions [optional]

This is where to call out areas of the design that require closure before deciding
to implement the design. For instance,
> 1. This requires exposing previously private resources which contain sensitive
information. Can we do this?

## Test Plan

**Note:** *Section not required until targeted at a release.*

Consider the following in developing a test plan for this enhancement:
- Will there be e2e and integration tests, in addition to unit tests?
- How will it be tested in isolation vs with other components?
- What additional testing is necessary to support managed OpenShift service-based offerings?

No need to outline all of the test cases, just the general strategy. Anything
that would count as tricky in the implementation and anything particularly
challenging to test should be called out.

All code is expected to have adequate tests (eventually with coverage
expectations).

## Graduation Criteria

**Note:** *Section not required until targeted at a release.*

Define graduation milestones.

These may be defined in terms of API maturity, or as something else. Initial proposal
should keep this high-level with a focus on what signals will be looked at to
determine graduation.

Consider the following in developing the graduation criteria for this
enhancement:

- Maturity levels
- [`alpha`, `beta`, `stable` in upstream Kubernetes][maturity-levels]
- `Dev Preview`, `Tech Preview`, `GA` in OpenShift
- [Deprecation policy][deprecation-policy]

Clearly define what graduation means by either linking to the [API doc definition](https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-versioning),
or by redefining what graduation means.

In general, we try to use the same stages (alpha, beta, GA), regardless how the functionality is accessed.

[maturity-levels]: https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions
[deprecation-policy]: https://kubernetes.io/docs/reference/using-api/deprecation-policy/

**If this is a user facing change requiring new or updated documentation in [openshift-docs](https://github.com/openshift/openshift-docs/),
please be sure to include in the graduation criteria.**

**Examples**: These are generalized examples to consider, in addition
to the aforementioned [maturity levels][maturity-levels].

### Dev Preview -> Tech Preview

- Ability to utilize the enhancement end to end
- End user documentation, relative API stability
- Sufficient test coverage
- Gather feedback from users rather than just developers
- Enumerate service level indicators (SLIs), expose SLIs as metrics
- Write symptoms-based alerts for the component(s)

### Tech Preview -> GA

- More testing (upgrade, downgrade, scale)
- Sufficient time for feedback
- Available by default
- Backhaul SLI telemetry
- Document SLOs for the component
- Conduct load testing
- User facing documentation created in [openshift-docs](https://github.com/openshift/openshift-docs/)

**For non-optional features moving to GA, the graduation criteria must include
end to end tests.**

### Removing a deprecated feature

- Announce deprecation and support policy of the existing feature
- Deprecate the feature

## Upgrade / Downgrade Strategy

If applicable, how will the component be upgraded and downgraded? Make sure this
is in the test plan.

Consider the following in developing an upgrade/downgrade strategy for this
enhancement:
- What changes (in invocations, configurations, API use, etc.) is an existing
cluster required to make on upgrade in order to keep previous behavior?
- What changes (in invocations, configurations, API use, etc.) is an existing
cluster required to make on upgrade in order to make use of the enhancement?

Upgrade expectations:
- Each component should remain available for user requests and
workloads during upgrades. Ensure the components leverage best practices in handling [voluntary
disruption](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/). Any exception to
this should be identified and discussed here.
- Micro version upgrades - users should be able to skip forward versions within a
minor release stream without being required to pass through intermediate
versions - i.e. `x.y.N->x.y.N+2` should work without requiring `x.y.N->x.y.N+1`
as an intermediate step.
- Minor version upgrades - you only need to support `x.N->x.N+1` upgrade
steps. So, for example, it is acceptable to require a user running 4.3 to
upgrade to 4.5 with a `4.3->4.4` step followed by a `4.4->4.5` step.
- While an upgrade is in progress, new component versions should
continue to operate correctly in concert with older component
versions (aka "version skew"). For example, if a node is down, and
an operator is rolling out a daemonset, the old and new daemonset
pods must continue to work correctly even while the cluster remains
in this partially upgraded state for some time.

Downgrade expectations:
- If an `N->N+1` upgrade fails mid-way through, or if the `N+1` cluster is
misbehaving, it should be possible for the user to rollback to `N`. It is
acceptable to require some documented manual steps in order to fully restore
the downgraded cluster to its previous state. Examples of acceptable steps
include:
- Deleting any CVO-managed resources added by the new version. The
CVO does not currently delete resources that no longer exist in
the target version.

## Version Skew Strategy

How will the component handle version skew with other components?
What are the guarantees? Make sure this is in the test plan.

Consider the following in developing a version skew strategy for this
enhancement:
- During an upgrade, we will always have skew among components, how will this impact your work?
- Does this enhancement involve coordinating behavior in the control plane and
in the kubelet? How does an n-2 kubelet without this feature available behave
when this feature is used?
- Will any other components on the node change? For example, changes to CSI, CRI
or CNI may require updating that component before the kubelet.

## Operational Aspects of API Extensions

Describe the impact of API extensions (mentioned in the proposal section, i.e. CRDs,
admission and conversion webhooks, aggregated API servers, finalizers) here in detail,
especially how they impact the OCP system architecture and operational aspects.

- For conversion/admission webhooks and aggregated apiservers: what are the SLIs (Service Level
Indicators) an administrator or support can use to determine the health of the API extensions

Examples (metrics, alerts, operator conditions)
- authentication-operator condition `APIServerDegraded=False`
- authentication-operator condition `APIServerAvailable=True`
- openshift-authentication/oauth-apiserver deployment and pods health

- What impact do these API extensions have on existing SLIs (e.g. scalability, API throughput,
API availability)

Examples:
- Adds 1s to every pod update in the system, slowing down pod scheduling by 5s on average.
- Fails creation of ConfigMap in the system when the webhook is not available.
- Adds a dependency on the SDN service network for all resources, risking API availability in case
of SDN issues.
- Expected use-cases require less than 1000 instances of the CRD, not impacting
general API throughput.

- How is the impact on existing SLIs to be measured and when (e.g. every release by QE, or
automatically in CI) and by whom (e.g. perf team; name the responsible person and let them review
this enhancement)

- Describe the possible failure modes of the API extensions.
- Describe how a failure or behaviour of the extension will impact the overall cluster health
(e.g. which kube-controller-manager functionality will stop working), especially regarding
stability, availability, performance and security.
- Describe which OCP teams are likely to be called upon in case of escalation with one of the failure modes
and add them as reviewers to this enhancement.

## Support Procedures

Describe how to
- detect the failure modes in a support situation, describe possible symptoms (events, metrics,
alerts, which log output in which component)

Examples:
- If the webhook is not running, kube-apiserver logs will show errors like "failed to call admission webhook xyz".
- Operator X will degrade with message "Failed to launch webhook server" and reason "WebhookServerFailed".
- The metric `webhook_admission_duration_seconds("openpolicyagent-admission", "mutating", "put", "false")`
will show >1s latency and alert `WebhookAdmissionLatencyHigh` will fire.

- disable the API extension (e.g. remove MutatingWebhookConfiguration `xyz`, remove APIService `foo`)

- What consequences does it have on the cluster health?

Examples:
- Garbage collection in kube-controller-manager will stop working.
- Quota will be wrongly computed.
- Disabling/removing the CRD is not possible without removing the CR instances. Customer will lose data.
Disabling the conversion webhook will break garbage collection.

- What consequences does it have on existing, running workloads?

Examples:
- New namespaces won't get the finalizer "xyz" and hence might leak resource X
when deleted.
- SDN pod-to-pod routing will stop updating, potentially breaking pod-to-pod
communication after some minutes.

- What consequences does it have for newly created workloads?

Examples:
- New pods in namespace with Istio support will not get sidecars injected, breaking
their networking.

- Does functionality fail gracefully and will work resume when re-enabled without risking
consistency?

Examples:
- The mutating admission webhook "xyz" has FailPolicy=Ignore and hence
will not block the creation or updates on objects when it fails. When the
webhook comes back online, there is a controller reconciling all objects, applying
labels that were not applied during admission webhook downtime.
- Namespaces deletion will not delete all objects in etcd, leading to zombie
objects when another namespace with the same name is created.

## Infrastructure Needed [optional]

Use this section if you need things from the project. Examples include a new
subproject, repos requested, github details, and/or testing infrastructure.