diff --git a/cloud-accounts/cost-optimization.mdx b/cloud-accounts/cost-optimization.mdx
deleted file mode 100644
index db1f4ce..0000000
--- a/cloud-accounts/cost-optimization.mdx
+++ /dev/null
@@ -1,43 +0,0 @@
----
-title: "Cost Optimization for Node Groups"
-description: "Learn how to enable cost optimization for your node groups to improve resource efficiency"
----
-
-Porter supports intelligent cost optimization for node groups, allowing you to automatically optimize resource allocation and reduce costs while maintaining performance.
-
-## How Node Groups Work
-
-By default, node groups in Porter use a fixed instance type approach. When you create a node group, you select a specific machine type (e.g., t3.xlarge with 4 CPU, 16GB RAM) and Porter scales by adding or removing instances of that exact type. While this works well for predictable workloads, it often leads to resource fragmentation.
-
-Let's look at an example. Say you're running a web application with 8 replicas, each needing:
-- 2.5 CPU cores
-- 4GB RAM
-- Total requirements: 20 CPU cores and 32GB RAM
-
-With fixed instance types, you might choose t3.xlarge instances (4 CPU, 16GB RAM each). You would need 5 instances to get enough CPU (20 cores needed), which provides 80GB of RAM - far more than the 32GB you need. At \$0.1664 per hour for each instance (\$0.832 total per hour), you're paying for 48GB of unused RAM - a 60% waste in memory.
-
-With cost optimization enabled, Porter can intelligently bin pack your workloads using whichever mix of instance types fits them best. For this workload, memory needs are low relative to CPU, so Porter could use 10 t3.medium instances (2 CPU, 4GB RAM each) at \$0.0416/hour, covering the 20 CPU cores needed (with 40GB RAM) for \$0.416/hour - a 50% reduction in cost with the same performance. The machines are reshuffled on a cadence to ensure the best fit as the set of workloads changes.
-
-## Enabling and Configuring Cost Optimization
-
-To enable cost optimization for a node group:
-1. Navigate to your cluster's infrastructure settings
-2. Under node groups, find the node group you want to optimize
-3. Click "Enable Cost Optimization" in the top right corner
-4. Set your maximum CPU cores limit to prevent unexpected scaling. This helps prevent unexpected cost increases by setting a cap.
-
-
-
-*Cost optimization configuration for a node group*
-
-## Best Practices
-
-**Health Checks Required**: For production applications, ensure proper health checks are configured before scheduling them on cost-optimized node groups. This ensures your applications can be safely rescheduled on new nodes without causing any disruption as nodes are reshuffled.
-
-## Limitations
-
-The following node group configurations should continue using fixed instance types until we support cost optimization for them:
-- GPU instances (e.g., instances with NVIDIA GPUs)
-- Spot instances
-- Instances in public subnets
-- Instances with specialized hardware requirements
\ No newline at end of file
diff --git a/cloud-accounts/node-groups.mdx b/cloud-accounts/node-groups.mdx
new file mode 100644
index 0000000..dd1d76e
--- /dev/null
+++ b/cloud-accounts/node-groups.mdx
@@ -0,0 +1,98 @@
+---
+title: "Node Groups"
+description: "Configure node groups and optimize compute costs for your Porter cluster"
+---
+
+Porter provides flexible options for managing compute resources in your cluster.
+You can add custom node groups for specialized workloads or enable cost
+optimization to reduce infrastructure spend.
+
+ ## Creating a Custom Node Group
+
+ 1. From your Porter dashboard, click on the **Infrastructure** tab in the left sidebar.
+ 2. Click on **Cluster** to view your cluster configuration and node groups.
+ 3. Click **Add an additional node group** to open the node group configuration panel.
+
+
+ ## Cost Optimization for Node Groups
+
+ When cost optimization is enabled, Porter bin packs your workloads across a mix of instance types and reshuffles nodes on a cadence to maintain the best fit as your workloads change, rather than scaling a single fixed instance type. Set a maximum CPU core limit for the node group to cap how far it can scale and avoid unexpected cost increases.
+
+ 
+
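+
+ To make the savings concrete, here is a rough sketch of the bin-packing arithmetic, assuming a hypothetical workload and illustrative AWS on-demand prices; Porter chooses the actual instance mix automatically, so treat this as an estimate rather than exact behavior.
+
+ ```python
+ # Illustrative only: the workload below is hypothetical, and the instance specs/prices
+ # are example AWS on-demand figures (t3.xlarge: 4 vCPU / 16GB at ~$0.1664/hr,
+ # t3.medium: 2 vCPU / 4GB at ~$0.0416/hr).
+ import math
+
+ replicas, cpu_per_replica, ram_per_replica = 8, 2.5, 4
+ need_cpu = replicas * cpu_per_replica   # 20 vCPU
+ need_ram = replicas * ram_per_replica   # 32 GB
+
+ # Fixed node group: every node is a t3.xlarge, so the node count is driven by
+ # whichever resource runs out first (here, CPU) and the extra RAM is wasted.
+ fixed_nodes = max(math.ceil(need_cpu / 4), math.ceil(need_ram / 16))   # 5 nodes
+ fixed_cost = fixed_nodes * 0.1664                                      # $0.832/hr
+
+ # Cost-optimized node group: Porter may mix smaller instances, tracking the CPU
+ # requirement without paying for unused RAM (e.g. ten t3.medium nodes = 20 vCPU).
+ optimized_nodes = math.ceil(need_cpu / 2)                              # 10 nodes
+ optimized_cost = optimized_nodes * 0.0416                              # $0.416/hr
+
+ print(f"fixed: {fixed_nodes} nodes, ${fixed_cost:.3f}/hr")
+ print(f"optimized: {optimized_nodes} nodes, ${optimized_cost:.3f}/hr")
+ print(f"savings: {1 - optimized_cost / fixed_cost:.0%}")
+ ```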
+
+ ### Limitations
+
+ The following node group configurations should continue using fixed instance types until we support cost optimization for them:
+
+ - GPU instances (e.g., instances with NVIDIA GPUs)
+ - Spot instances
+ - Instances in public subnets
+ - Instances with specialized hardware requirements
+
+
+ ## Fixed Node Groups
+
+ Fixed node groups use a single, specific instance type: applications assigned to the group are scheduled only on nodes of that exact type. This gives you more control, but can over-provision CPU or memory if the instance type doesn't match your workloads' resource profile.
+
+ 
+
+ Configure your node group with the following settings:
+
+ | Setting | Description |
+ |---------|-------------|
+ | **Instance type** | The machine type for nodes in this group |
+ | **Minimum nodes** | The minimum number of nodes to maintain (set to 0 for scale-to-zero) |
+ | **Maximum nodes** | The upper limit for autoscaling |
+
+
+ Choose instance types based on your workload requirements. For GPU workloads, create a dedicated node group and select instances with GPU support (e.g., `g4dn.xlarge` on AWS, `Standard_NC4as_T4_v3` on Azure, `g2-standard-4` on GCP).
+
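+
+ When picking these values, it can help to estimate how many nodes of the chosen instance type your services actually request and check that the number fits under **Maximum nodes**. A minimal back-of-the-envelope sketch, assuming example instance specs and per-replica requests (not a Porter API):
+
+ ```python
+ # Hypothetical sizing check for a fixed node group: how many nodes of a single
+ # instance type does a workload need, and does that fit under "Maximum nodes"?
+ import math
+
+ def nodes_needed(total_cpu, total_ram_gb, node_cpu, node_ram_gb):
+     """Nodes required so that both CPU and RAM requests fit (ignores system overhead)."""
+     return max(math.ceil(total_cpu / node_cpu), math.ceil(total_ram_gb / node_ram_gb))
+
+ # Example: 8 replicas requesting 2.5 vCPU / 4GB each, on t3.xlarge (4 vCPU, 16GB).
+ needed = nodes_needed(8 * 2.5, 8 * 4, node_cpu=4, node_ram_gb=16)   # -> 5 nodes
+
+ max_nodes = 6   # the "Maximum nodes" setting for this node group
+ if needed > max_nodes:
+     print("raise Maximum nodes or choose a larger instance type")
+ print(f"{needed} nodes needed, autoscaling cap is {max_nodes}")
+ ```
+
+ In this example the node count is driven entirely by CPU, which is exactly the kind of mismatch that cost optimization (described above) is designed to reduce.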
+
+
+
+
+
+ **Health Checks Required**: For production applications, configure proper health checks before scheduling them on cost-optimized node groups, so your applications can be rescheduled onto new nodes without disruption as nodes are reshuffled.
+
+
+
+
+ Click **Save** to create the node group. Porter will provision the new nodes in your cluster. This may take a few minutes.
+
+
+
+ ## Assigning Workloads
+
+ Once your custom node group is created, you can assign applications to run on it:
+
+ 1. Navigate to your application in the Porter dashboard
+ 2. Go to the **Services** tab
+ 3. Click the service you want to assign
+ 4. Under **General**, find the **Node group** selector
+ 5. Select your custom node group from the dropdown
+ 6. Save and redeploy your application
+
+ ## Deleting a Node Group
+
+ To remove a custom node group:
+
+ 1. First, migrate any workloads running on the node group to another node group
+ 2. Navigate to **Infrastructure** → **Cluster**
+ 3. Find the node group you want to delete
+ 4. Click the delete icon and confirm
+
+
+ Ensure no workloads are scheduled on the node group before deleting it; any workloads still running on it will be disrupted when the node group is removed.
+
diff --git a/mint.json b/mint.json
index 17d43cb..ae749c2 100644
--- a/mint.json
+++ b/mint.json
@@ -48,85 +48,66 @@
"cloud-accounts/provisioning-on-gcp",
"cloud-accounts/changing-instance-types",
"cloud-accounts/cluster-upgrades",
- "cloud-accounts/cost-optimization"
+ "cloud-accounts/node-groups"
]
},
{
- "group": "Applications",
+ "group": "Deploy",
"pages": [
+ "deploy/overview",
+ "deploy/types-of-services",
+ "deploy/v1-and-v2",
{
- "group": "Deploy",
+ "group": "v2",
"pages": [
- "deploy/overview",
- "deploy/types-of-services",
- "deploy/v1-and-v2",
- {
- "group": "v2",
- "pages": [
- "deploy/v2/deploy-from-github-repo",
- "deploy/v2/deploy-from-docker-registry",
- "deploy/v2/configuring-application-services"
- ]
- },
- {
- "group": "v1",
- "pages": [
- "deploy/v1/deploy-from-github-repo",
- "deploy/v1/deploy-from-docker-registry"
- ]
- },
- "deploy/builds",
- "deploy/multiple-deploys-from-same-build",
- "deploy/pre-deploy-jobs",
- "deploy/rollbacks",
- "deploy/using-other-ci-tools",
- {
- "group": "Configuration as Code",
- "pages": [
- "deploy/configuration-as-code/overview",
- "deploy/configuration-as-code/reference",
- "deploy/configuration-as-code/addons-porter-yaml",
- {
- "group": "Service Configuration",
- "pages": [
- "deploy/configuration-as-code/services/web-service",
- "deploy/configuration-as-code/services/worker-service",
- "deploy/configuration-as-code/services/job-service",
- "deploy/configuration-as-code/services/predeploy"
- ]
- }
- ]
- }
+ "deploy/v2/deploy-from-github-repo",
+ "deploy/v2/deploy-from-docker-registry",
+ "deploy/v2/configuring-application-services"
]
},
{
- "group": "Configure",
+ "group": "v1",
"pages": [
- "configure/basic-configuration",
- "configure/environment-groups",
- "configure/autoscaling",
- "configure/custom-domains",
- "configure/health-checks",
- "configure/zero-downtime-deployments",
- "configure/advanced-networking"
+ "deploy/v1/deploy-from-github-repo",
+ "deploy/v1/deploy-from-docker-registry"
]
},
+ "deploy/builds",
+ "deploy/multiple-deploys-from-same-build",
+ "deploy/pre-deploy-jobs",
+ "deploy/rollbacks",
+ "deploy/using-other-ci-tools",
{
- "group": "Observability",
+ "group": "Configuration as Code",
"pages": [
- "observability/monitoring",
- "observability/logging",
- "observability/alerts",
- "observability/app-metadata",
- "observability/custom-metrics-and-autoscaling"
+ "deploy/configuration-as-code/overview",
+ "deploy/configuration-as-code/reference",
+ "deploy/configuration-as-code/addons-porter-yaml",
+ {
+ "group": "Service Configuration",
+ "pages": [
+ "deploy/configuration-as-code/services/web-service",
+ "deploy/configuration-as-code/services/worker-service",
+ "deploy/configuration-as-code/services/job-service",
+ "deploy/configuration-as-code/services/predeploy"
+ ]
+ }
]
- },
- {
- "group": "Debug",
- "pages": ["debug/common-errors"]
}
]
},
+ {
+ "group": "Configure",
+ "pages": [
+ "configure/basic-configuration",
+ "configure/environment-groups",
+ "configure/autoscaling",
+ "configure/custom-domains",
+ "configure/health-checks",
+ "configure/zero-downtime-deployments",
+ "configure/advanced-networking"
+ ]
+ },
{
"group": "Command Line Interface (CLI)",
"pages": [
@@ -171,6 +152,20 @@
}
]
},
+ {
+ "group": "Observability",
+ "pages": [
+ "observability/monitoring",
+ "observability/logging",
+ "observability/alerts",
+ "observability/app-metadata",
+ "observability/custom-metrics-and-autoscaling"
+ ]
+ },
+ {
+ "group": "Debug",
+ "pages": ["debug/common-errors"]
+ },
{
"group": "Preview Environments",
"pages": [