<!-- mdformat global-off -->
# Pretrain wan2-1-14b-bf16-gbs64-gpus32 workloads on A4 GKE node pools with NVIDIA DFM & Megatron-Bridge

This recipe outlines the steps for running a wan2-1-14b-bf16-gbs64-gpus32 pretraining workload on A4 GKE node pools by using NeMo DFM (Diffusion Foundation Models) and Megatron-Bridge within the NeMo Framework.

## Orchestration and deployment tools

For this recipe, the following setup is used:

- Orchestration - [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine)
- Pretraining job configuration and deployment - A Helm chart is used to configure and deploy the Kubernetes JobSet resource, which manages the execution of the DFM pretraining workload.

## Test environment

This recipe has been optimized for and tested with the following configuration:

- GKE cluster: Follow the Cluster Toolkit [instructions](https://github.com/GoogleCloudPlatform/cluster-toolkit/tree/main/examples/gke-a4) to create your A4 GKE cluster.
- Node configuration: 4 nodes (8 GPUs per node, 32 GPUs total).
- GPU architecture: NVIDIA Blackwell B200.
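
Once the cluster is up, you can verify the node pool with gcloud, using the same placeholders as in the configuration section below:

```bash
# Optional check: the A4 node pool should be listed with an a4-highgpu-8g machine type.
gcloud container node-pools list \
  --cluster=<CLUSTER_NAME> \
  --region=<CLUSTER_REGION>
```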

## Training dataset

This recipe uses a mock pretraining dataset. To support long-duration stress testing, the launcher patches the `WanMockDataModule` with `sed`, extending the mock data length from 1,024 to effectively infinite (10^12 tokens).
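
As a rough illustration of the technique, an in-container patch of this kind could look like the following. The file path, variable name, and matched text are assumptions for illustration only; see `launcher.sh` in this recipe for the actual `sed` expression.

```bash
# Hypothetical sketch only -- the real patch lives in launcher.sh.
# MOCK_DATAMODULE is an assumed path, and the matched string is illustrative.
MOCK_DATAMODULE=/opt/dfm/data/wan_mock_datamodule.py
sed -i 's/length: int = 1024/length: int = 10**12/' "$MOCK_DATAMODULE"
```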

## Docker container images

This recipe uses the following Docker images:

- `nvcr.io/nvidia/nemo:25.11.00`
- `us-docker.pkg.dev/gce-ai-infra/gpudirect-gib/nccl-plugin-gib-arm64:v1.1.1`

## Run the recipe

From your client workstation, complete the following steps:

### Configure environment settings

Set the environment variables to match your environment:

```bash
export PROJECT_ID=<PROJECT_ID>
export CLUSTER_REGION=<CLUSTER_REGION>
export CLUSTER_NAME=<CLUSTER_NAME>
export GCS_BUCKET=<GCS_BUCKET> # Note: the bucket name should not be prefixed with gs://
export KUEUE_NAME=<KUEUE_NAME>
```

Replace the following values:

- `<PROJECT_ID>`: your Google Cloud project ID.
- `<CLUSTER_REGION>`: the region where your cluster is located.
- `<CLUSTER_NAME>`: the name of your GKE cluster.
- `<GCS_BUCKET>`: the name of your Cloud Storage bucket. Don't include the `gs://` prefix.
- `<KUEUE_NAME>`: the name of the Kueue local queue. The default queue created by the Cluster Toolkit is `a4`.
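
For example, a filled-in configuration might look like this; the values are illustrative only:

```bash
# Illustrative values -- substitute your own project, cluster, and bucket.
export PROJECT_ID=my-gcp-project
export CLUSTER_REGION=us-central1
export CLUSTER_NAME=my-a4-cluster
export GCS_BUCKET=my-training-logs # no gs:// prefix
export KUEUE_NAME=a4
```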

Set the default project:

```bash
gcloud config set project $PROJECT_ID
```

### Get cluster credentials

```bash
gcloud container clusters get-credentials $CLUSTER_NAME --region $CLUSTER_REGION
```
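
To confirm that `kubectl` is now pointed at the right cluster, you can run a quick sanity check:

```bash
# The A4 GPU nodes should be listed and Ready.
kubectl get nodes
```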

### Get the recipe

Clone the `gpu-recipes` repository and set a reference to the recipe folder.

```bash
git clone https://github.com/ai-hypercomputer/gpu-recipes.git
cd gpu-recipes
export REPO_ROOT=$(git rev-parse --show-toplevel)
export RECIPE_ROOT=$REPO_ROOT/training/a4/wan2-1-14b/nemo-pretraining-gke/4node-BF16-GBS64/recipe
cd $RECIPE_ROOT
```
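
The recipe folder should contain the Helm values file and the launcher script that the commands below reference; a quick check:

```bash
# Expect values.yaml and launcher.sh among the contents.
ls $RECIPE_ROOT
```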

### Configure and submit a pretraining job

#### Using 4 nodes (32 GPUs) with BF16 precision

To execute the job with the default settings, run the following command from your client:

```bash
cd $RECIPE_ROOT
export WORKLOAD_NAME=$USER-a4-wan2-1-14b-4node
helm install $WORKLOAD_NAME . -f values.yaml \
--set-file workload_launcher=launcher.sh \
--set workload.image=nvcr.io/nvidia/nemo:25.11.00 \
--set volumes.gcsMounts[0].bucketName=${GCS_BUCKET} \
--set volumes.gcsMounts[0].mountPath=/job-logs \
--set workload.envs[0].value=/job-logs/$WORKLOAD_NAME \
--set queue=${KUEUE_NAME}
```
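
After the release is installed, you can confirm that the chart's resources were created. This assumes the JobSet CRD is installed on the cluster, which the Cluster Toolkit setup normally handles:

```bash
# The release should be listed, along with the JobSet it deployed.
helm list | grep $WORKLOAD_NAME
kubectl get jobsets | grep $WORKLOAD_NAME
```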

**Examples**

- To set the number of training steps to 100, run the following command from
  your client:

  ```bash
  cd $RECIPE_ROOT
  export WORKLOAD_NAME=$USER-a4-wan2-1-14b-4node
  helm install $WORKLOAD_NAME . -f values.yaml \
  --set-file workload_launcher=launcher.sh \
  --set workload.image=nvcr.io/nvidia/nemo:25.11.00 \
  --set volumes.gcsMounts[0].bucketName=${GCS_BUCKET} \
  --set volumes.gcsMounts[0].mountPath=/job-logs \
  --set workload.envs[0].value=/job-logs/$WORKLOAD_NAME \
  --set queue=${KUEUE_NAME} \
  --set workload.arguments[0]="train.train_iters=100"
  ```
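
To inspect the manifests that the chart would create without submitting anything to the cluster, you can render them locally with `helm template`; this is standard Helm behavior, not specific to this recipe:

```bash
# Render the chart to stdout instead of installing it.
helm template $WORKLOAD_NAME . -f values.yaml \
--set-file workload_launcher=launcher.sh \
--set workload.image=nvcr.io/nvidia/nemo:25.11.00 \
--set volumes.gcsMounts[0].bucketName=${GCS_BUCKET} \
--set volumes.gcsMounts[0].mountPath=/job-logs \
--set workload.envs[0].value=/job-logs/$WORKLOAD_NAME \
--set queue=${KUEUE_NAME}
```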

### Monitor the job

To check the status of the pods in your job, run the following command:

```bash
kubectl get pods | grep JOB_NAME_PREFIX
```

Replace the following:

- `JOB_NAME_PREFIX`: your job name prefix. For example, `$USER-a4-wan2-1-14b-4node`.

To get the logs for one of the pods, run the following command:

```bash
kubectl logs POD_NAME
```

Replace `POD_NAME` with the name of one of the pods listed by the previous command.

Information about the training job's progress, including crucial details such as
loss, step count, and step time, is generated by the rank 0 process.
This process runs on the pod whose name begins with
`JOB_NAME_PREFIX-workload-0-0`.
For example: `$USER-a4-wan2-1-14b-4node-workload-0-0-s9zrv`.
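
One way to follow the rank 0 logs without copying the generated pod-name suffix by hand; the grep pattern assumes the default workload name used above:

```bash
# -o name yields pod/<name>, which kubectl logs accepts directly.
kubectl logs -f $(kubectl get pods -o name | grep $USER-a4-wan2-1-14b-4node-workload-0-0)
```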

### Uninstall the Helm release

You can delete the job and other resources created by the Helm chart. To
uninstall the release, run the following command from your client:

```bash
helm uninstall $USER-a4-wan2-1-14b-4node
```
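
To confirm that the workload's pods have been cleaned up after the uninstall:

```bash
# The grep should return nothing once cleanup has finished.
kubectl get pods | grep $USER-a4-wan2-1-14b-4node
```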