This repository demonstrates how to orchestrate automatic scale up/down of services in OpenShift based on the number of DevSpaces workspaces created and running. The example showcases an inverse scaling strategy where:
- 0 running workspaces → Service is scaled to 2 replicas
- 1 running workspace → Service is scaled to 1 replica
- 2+ running workspaces → Service is scaled to 0 replicas
This approach improves cluster efficiency by adjusting resource allocation to developer workspace usage: the service runs at full capacity when no workspaces are active, and is scaled down as developers spin up workspaces, freeing that capacity for them.
The solution is built on the following components:
- Red Hat DevSpaces Operator: Provides cloud-native development environments (workspaces) for teams. DevSpaces is based on Eclipse Che and enables developers to work in containerized, consistent development environments.
- Custom Metrics Autoscaler Operator (KEDA): An OpenShift operator that extends the Kubernetes Horizontal Pod Autoscaler (HPA) by enabling autoscaling on custom metrics from external systems such as Prometheus. It is based on KEDA (Kubernetes Event-Driven Autoscaling).
- Prometheus Metrics: The system monitors the number of active DevSpaces workspaces using Prometheus metrics exposed by the DevWorkspace Operator.
- KEDA ScaledObject: A `ScaledObject` resource queries Prometheus with a PromQL query to count active workspaces: `count(kube_pod_labels{label_controller_devfile_io_creator!=""})`
- Inverse Scaling Logic: The ScaledObject implements an inverse scaling formula that calculates the desired replica count from the workspace count.
- Automatic Scaling: KEDA automatically adjusts the replica count of the target deployment based on the configured metric values and thresholds.
For demonstration and testing purposes, all resources are deployed in the openshift-devspaces namespace:
- DevSpaces Operator and instance
- Sample application deployment
- ScaledObject and related KEDA resources
- Service accounts and RBAC configurations
- OpenShift Local (CRC): This example has been tested with the latest version of OpenShift Local (formerly CodeReady Containers). Ensure CRC is installed and running, and that cluster monitoring is enabled (`enable-cluster-monitoring` must be set in the CRC configuration).
Tested CRC configuration:
- consent-telemetry : no
- cpus : 12
- disk-size : 93
- enable-cluster-monitoring : true
- memory : 30302
- Cluster Admin Access: Most of the setup commands require cluster administrator privileges.
- OpenShift CLI (oc): The `oc` command-line tool must be installed and configured to access your cluster.
Apply the DevSpaces Operator configuration files:

```shell
oc apply -f devspaces/devspace.yaml
```

Note: Wait for the DevSpaces Operator to be fully installed before proceeding. You can check the installation status with:

```shell
oc get pods -n openshift-devspaces
```

Install a DevSpaces instance:

```shell
oc apply -f devspaces/devspaces-instance.yaml
```

Verify that Workspaces is available and that there is a route to access it:
```shell
❯ oc get route -n openshift-devspaces
NAME        HOST/PORT                    PATH   SERVICES      PORT   TERMINATION     WILDCARD
devspaces   devspaces.apps-crc.testing   /      che-gateway   8080   edge/Redirect   None
```

Install the Custom Metrics Autoscaler Operator (KEDA):

```shell
oc apply -f autoscaler/custom-metrics-autoscaler-subscription.yaml
```

Wait for the operator to be installed and running:
```shell
oc get pods -n openshift-devspaces
```

Deploy the sample application that will be autoscaled:

```shell
oc apply -f autoscaler/deployment.yaml
```

This creates a deployment named `my-scaled-service` in the `openshift-devspaces` namespace. The deployment starts with 0 replicas and will be automatically scaled by KEDA.
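For orientation, a deployment like the one in `autoscaler/deployment.yaml` could look roughly like the sketch below. Only the name `my-scaled-service`, the namespace, and the initial 0 replicas come from this README; the labels, container image, and port are illustrative assumptions, not the repository's exact file.

```yaml
# Illustrative sketch, not the repository's actual manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-scaled-service
  namespace: openshift-devspaces
spec:
  replicas: 0            # KEDA owns the replica count; start at zero
  selector:
    matchLabels:
      app: my-scaled-service   # assumed label
  template:
    metadata:
      labels:
        app: my-scaled-service
    spec:
      containers:
        - name: app
          image: registry.access.redhat.com/ubi9/httpd-24:latest  # placeholder image
          ports:
            - containerPort: 8080
```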
Apply the ScaledObject configuration that defines the autoscaling behavior:

```shell
oc apply -f autoscaler/scaledobject.yaml
```

The ScaledObject is configured to:
- Monitor the number of active DevSpaces workspaces via Prometheus
- Scale the deployment between 0 and 2 replicas
- Use an inverse scaling formula: `2 - (number of running workspaces)`
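A `ScaledObject` implementing this inverse logic could look roughly like the sketch below. The names `mi-app-escalador-inverso` and `my-scaled-service`, the 0–2 replica range, and the workspace-counting query come from this README; the Prometheus server address, authentication setup, threshold, and the exact shape of the inverse query (clamping at 0 and defaulting the count to 0 when no workspace pods exist) are assumptions, not the repository's exact file.

```yaml
# Illustrative sketch, not the repository's actual manifest.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: mi-app-escalador-inverso
  namespace: openshift-devspaces
spec:
  scaleTargetRef:
    name: my-scaled-service     # deployment to scale
  minReplicaCount: 0
  maxReplicaCount: 2
  triggers:
    - type: prometheus
      metadata:
        # Assumed in-cluster monitoring endpoint; adjust for your cluster.
        serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091
        threshold: "1"
        authModes: bearer
        # Inverse formula: 2 minus the number of running workspaces, floored at 0.
        query: >
          clamp_min(2 - (count(kube_pod_labels{label_controller_devfile_io_creator!=""})
          or on() vector(0)), 0)
      authenticationRef:
        name: keda-trigger-auth-prometheus   # assumed TriggerAuthentication
```

With `threshold: "1"`, KEDA asks the HPA to keep one metric unit per replica, so a query value of 2 yields 2 replicas, 1 yields 1, and 0 yields 0, matching the table at the top of this README.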
Check the Horizontal Pod Autoscaler (HPA) created by KEDA:

```shell
oc get hpa -n openshift-devspaces
```

You should see an HPA named `keda-hpa-mi-app-escalador-inverso` managing the deployment.
WARNING: Metric collection and evaluation take time, so expect to wait a few minutes after each event before the expected output appears. In a cloud environment this is usually faster than in OpenShift Local.
- Initial State (0 workspaces): The deployment should scale to 2 replicas:

  ```shell
  ❯ oc get hpa -n openshift-devspaces
  NAME                              REFERENCE                      TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
  keda-hpa-my-inversed-scaled-app   Deployment/my-scaled-service   1/1 (avg)   1         2         2          34s
  ```

  Expected output: TARGETS should show `1/1 (avg)`, REPLICAS should be 2.
- After Creating 1 Workspace: The deployment should scale down to 1 replica:

  ```shell
  ❯ oc get hpa -n openshift-devspaces
  NAME                              REFERENCE                      TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
  keda-hpa-my-inversed-scaled-app   Deployment/my-scaled-service   500m/1 (avg)   1         2         2          5m11s
  ```

  Expected output: TARGETS should show `500m/1 (avg)`, and REPLICAS should settle at 1 (the snapshot above was captured before the scale-down completed).
- After Creating 2 Workspaces: The deployment should scale down to 0 replicas:

  ```shell
  ❯ oc get hpa -n openshift-devspaces
  NAME                              REFERENCE                      TARGETS             MINPODS   MAXPODS   REPLICAS   AGE
  keda-hpa-my-inversed-scaled-app   Deployment/my-scaled-service   <unknown>/1 (avg)   1         2         0          15m
  ```

  Expected output: REPLICAS should be 0; with the deployment scaled to zero, TARGETS may show `<unknown>/1 (avg)` rather than `0/1 (avg)`.
You can query Prometheus directly to see the number of active workspaces:

```
count(kube_pod_labels{label_controller_devfile_io_creator!=""}) or on() vector(0)
```
Alternatively, view the metrics in the OpenShift Console under Observe → Metrics, or the Dev Spaces dashboards under Observe → Dashboards → Dev Workspace Operator.
```shell
oc get pods -n openshift-devspaces
oc logs -n openshift-keda deployment/custom-metrics-autoscaler-operator
oc get scaledobject -n openshift-devspaces
oc describe scaledobject mi-app-escalador-inverso -n openshift-devspaces
```

Ensure the KEDA service account can access Prometheus:

```shell
oc get clusterrolebinding keda-prometheus-reader-final
oc describe hpa keda-hpa-mi-app-escalador-inverso -n openshift-devspaces
```
