A Node.js application that automatically scales DigitalOcean Kubernetes (DOKS) node pools to a desired node count. This application periodically checks for matching node pools and scales them up when needed.
- Use the Deploy to DO button for a ready-to-go deployment
- Run `doctl kubernetes cluster list` to get your cluster UUIDs
- For a complete list of all DigitalOcean slugs, see: slugs.do-api.dev
- Use Discord, Slack or Webhook.site for easy Webhooks
- Automatic Scaling: Scales node pools up to a desired node count
- Configurable Parameters: Specify node size slug, cluster IDs, and target count
- Smart Scaling: Only scales pools if your desired count isn't reached (recognized by tag)
- Multi-Cluster Failover: With multiple clusters, tries each in order until the desired count is met
- Docker Support: Run as a containerized application
- Webhook Notifications: Get notified via webhook when node pools are scaled
- DigitalOcean API Token with write access
- An existing DOKS cluster with a node pool tagged with the recognition tag (default: `doks-grabber`) and the correct node size
- Node.js 22+ (for direct Node.js usage)
- Docker (for container usage)
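Because the tool only scales pools that already exist, the matching pool has to be created and tagged ahead of time. A minimal sketch using `doctl` (the cluster UUID, pool name, and initial count are illustrative; adjust to your setup):

```shell
# Create a pool of the target size, tagged so doks-grabber can find it.
doctl kubernetes cluster node-pool create <cluster-uuid> \
  --name gpu-pool \
  --size gpu-h100x1-80gb \
  --count 1 \
  --tag doks-grabber
```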
```yaml
spec:
  name: do-doks-grabber
  region: lon
  workers:
    - dockerfile_path: /Dockerfile
      envs:
        - key: DO_API_TOKEN
          scope: RUN_TIME
          type: SECRET
          value:
        - key: CLUSTER_IDS
          scope: RUN_TIME
          value: "<cluster-uuid>" # doctl kubernetes cluster list
        - key: SLUG
          scope: RUN_TIME
          value: gpu-h100x1-80gb
        - key: DESIRED_COUNT
          scope: RUN_TIME
          value: "1"
        - key: TAG
          scope: RUN_TIME
          value: doks-grabber
        - key: WEBHOOK_URL
          scope: RUN_TIME
          value: https://webhook.site/<your-webhook-id>
      git:
        branch: main
        deploy_on_push: true
        repo_clone_url: https://github.com/DO-Solutions/doks-grabber.git
      instance_count: 1
      instance_size_slug: apps-s-1vcpu-0.5gb
      name: doks-grabber
      source_dir: /
```
1. Build the Docker image:

   ```shell
   docker build -t doks-grabber .
   ```

2. Run the container with your configuration:

   ```shell
   docker run -d \
     -e DO_API_TOKEN="your-do-api-token" \
     -e CLUSTER_IDS="cluster-uuid-1,cluster-uuid-2" \
     -e SLUG="gpu-h100x1-80gb" \
     -e DESIRED_COUNT="1" \
     -e TAG="doks-grabber" \
     -e WEBHOOK_URL="https://your-webhook-url" \
     --name doks-grabber \
     doks-grabber
   ```
1. Clone the repository:

   ```shell
   git clone https://github.com/do-solutions/doks-grabber.git
   cd doks-grabber
   ```

2. Install dependencies:

   ```shell
   npm install
   ```

3. Run the application with your configuration:

   ```shell
   DO_API_TOKEN="your-do-api-token" WEBHOOK_URL="https://your-webhook-url" npm start -- \
     --cluster_ids="cluster-uuid-1,cluster-uuid-2" \
     --slug="gpu-h100x1-80gb" \
     --desired_count=1

   # Multiple clusters (tries first cluster, falls back to second if scaling fails):
   npm start -- --cluster_ids="cluster-uuid-1,cluster-uuid-2" --slug="gpu-h100x1-80gb" --desired_count=3
   ```

You can also specify the webhook URL as a command line parameter:

```shell
DO_API_TOKEN="your-do-api-token" npm start -- \
  --cluster_ids="cluster-uuid-1" \
  --slug="gpu-h100x1-80gb" \
  --desired_count=1 \
  --webhook_url="https://your-webhook-url"
```
| Parameter | Description | Example |
|---|---|---|
| `cluster_ids` | Comma-separated DOKS cluster UUIDs | `uuid-1` or `uuid-1,uuid-2` |
| `slug` | The node size slug to match | `gpu-h100x1-80gb` |
| `desired_count` | Target node count globally across all listed clusters | `1` |
| `tag` | (Optional) Recognition tag for matching pools (default: `doks-grabber`) | `my-tag` |
| `webhook_url` | (Optional) URL to send notifications when node pools are scaled | `https://hooks.slack.com/services/XXX/YYY/ZZZ` |
Note: Node pools must be pre-created in your DOKS cluster with the correct node size and tagged with the recognition tag (default: `doks-grabber`). This tool does not create new node pools; it only scales existing ones that match the slug and tag. With multiple clusters, `desired_count` is the global total: the app checks all listed clusters for existing matching nodes and only scales up until the count is met (e.g. `CLUSTER_IDS=uuid-1,uuid-2` and `desired_count=3` scales in `uuid-1` first, falling back to `uuid-2` if needed).
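The global scaling decision above can be sketched as follows. This is a simplified illustration, not the tool's actual implementation: `planScaling` is a hypothetical helper, and failure-driven fallback between clusters is not modeled here.

```javascript
// Compute which pools to scale so the global node count reaches the target.
// poolsByCluster: [{ clusterId, pools: [{ id, name, count }] }], in priority order.
function planScaling(poolsByCluster, desiredCount) {
  // Count existing nodes across all matching pools in all listed clusters.
  const existing = poolsByCluster
    .flatMap((c) => c.pools)
    .reduce((sum, p) => sum + p.count, 0);

  let deficit = Math.max(0, desiredCount - existing);
  const actions = [];

  // Try clusters in order; scale the first matching pool to cover the deficit.
  for (const { clusterId, pools } of poolsByCluster) {
    for (const pool of pools) {
      if (deficit === 0) break;
      actions.push({ clusterId, poolId: pool.id, newCount: pool.count + deficit });
      deficit = 0;
    }
  }
  return actions; // empty when the desired count is already met
}
```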
Auto-scale override: If a matching pool has `auto_scale` enabled, the tool disables it and sets the count directly, logging a warning.
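As a rough sketch of what that override involves: DigitalOcean's node-pool update endpoint accepts the pool's `name`, `count`, `tags`, and `auto_scale` fields, so disabling auto-scale and fixing the count can be done in a single update. The helper below is illustrative only and may differ from the tool's actual code.

```javascript
// Build the body for DigitalOcean's node-pool update call
// (PUT /v2/kubernetes/clusters/{clusterId}/node_pools/{poolId}).
function buildScaleRequest(pool, newCount) {
  const body = { name: pool.name, count: newCount, tags: pool.tags };
  if (pool.auto_scale) {
    // The autoscaler would fight a fixed count, so turn it off first.
    console.warn(`Pool "${pool.name}" has auto_scale enabled; disabling it.`);
    body.auto_scale = false;
  }
  return body;
}
```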
When a node pool is successfully scaled, the application will send a webhook notification to the specified URL. The notification is a JSON payload with the following structure:
```json
{
  "event": "node_pool_scaled",
  "clusterId": "cluster-uuid-1",
  "pool": {
    "id": "pool-uuid",
    "name": "gpu-pool",
    "size": "gpu-h100x1-80gb",
    "count": 3,
    "tags": ["doks-grabber"]
  },
  "previousCount": 1,
  "newCount": 3,
  "timestamp": "2023-11-06T12:34:56.789Z",
  "configuration": {
    "slug": "gpu-h100x1-80gb",
    "tag": "doks-grabber"
  }
}
```
When a run scales pools, a summary notification with the following structure is also sent:

```json
{
  "event": "nodes_scaled_summary",
  "scaledCount": 2,
  "totalNodes": 3,
  "clusterIds": ["cluster-uuid-1", "cluster-uuid-2"],
  "timestamp": "2023-11-06T12:34:56.789Z",
  "configuration": {
    "slug": "gpu-h100x1-80gb",
    "tag": "doks-grabber"
  }
}
```
```
DigitalOcean DOKS Grabber started
Configuration: slug=gpu-h100x1-80gb, cluster_ids=cluster-uuid-1, desired_count=1 (global across clusters), tag=doks-grabber
Webhook notifications enabled: https://webhook.site/3345c481-c248-4956-9361-335a0d1abcc8
Checking for node pools with slug="gpu-h100x1-80gb" and tag="doks-grabber" across 1 cluster(s)...
Found 0 existing node(s) across matching pools. Desired count: 1
Need 1 more node(s). Trying clusters in order...
Scaling pool "gpu-pool" in cluster cluster-uuid-1 from 0 to 1 nodes...
Successfully scaled pool "gpu-pool" in cluster cluster-uuid-1 to 1 nodes.
Sending webhook notification to https://webhook.site/3345c481-c248-4956-9361-335a0d1abcc8
Webhook notification sent successfully
Successfully scaled 1 node pool(s).
Checking for node pools with slug="gpu-h100x1-80gb" and tag="doks-grabber" across 1 cluster(s)...
Found 1 existing node(s) across matching pools. Desired count: 1
No scaling needed. Current count: 1, desired: 1
```

You can use these webhooks to integrate with services like Slack, Discord, or your own applications.
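For example, a webhook consumer can turn the `node_pool_scaled` payload into a one-line chat message before forwarding it. This is a minimal sketch using only the fields shown in the payload above; the function name is hypothetical.

```javascript
// Format a "node_pool_scaled" webhook payload as a short human-readable line.
function formatScaleMessage(payload) {
  if (payload.event !== "node_pool_scaled") return null; // ignore other events
  const { pool, clusterId, previousCount, newCount } = payload;
  return (
    `Scaled pool "${pool.name}" (${pool.size}) in cluster ${clusterId}: ` +
    `${previousCount} -> ${newCount} nodes`
  );
}
```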