Commit a588265

Merge pull request #2981 from madeline-underwood/robots
Robots_reviewed
2 parents 633c553 + f42f595 commit a588265

5 files changed

Lines changed: 62 additions & 51 deletions

File tree

content/learning-paths/laptops-and-desktops/dgx_spark_isaac_robotics/1_introduction_isaac.md

Lines changed: 17 additions & 8 deletions
@@ -8,7 +8,7 @@ layout: learningpathall
 
 ## Overview
 
-In this Learning Path, you will build, configure, and run robotic simulation and [reinforcement learning (RL)](https://en.wikipedia.org/wiki/Reinforcement_learning) workflows using NVIDIA Isaac Sim and Isaac Lab on an Arm-based DGX Spark system. The NVIDIA DGX Spark is a personal AI supercomputer powered by the GB10 [Grace Blackwell](https://learn.arm.com/learning-paths/laptops-and-desktops/dgx_spark_llamacpp/1_gb10_introduction/) Superchip. The system couples an Arm CPU cluster with a Blackwell GPU and a unified memory architecture to accelerate simulation orchestration, sensor preprocessing, physics, rendering, and RL training.
+In this Learning Path, you'll build, configure, and run robotic simulation and [reinforcement learning (RL)](https://en.wikipedia.org/wiki/Reinforcement_learning) workflows using NVIDIA Isaac Sim and Isaac Lab on an Arm-based DGX Spark system. The NVIDIA DGX Spark is a personal AI supercomputer powered by the GB10 [Grace Blackwell](https://learn.arm.com/learning-paths/laptops-and-desktops/dgx_spark_llamacpp/1_gb10_introduction/) Superchip. The system couples an Arm CPU cluster with a Blackwell GPU and a unified memory architecture to accelerate simulation orchestration, sensor preprocessing, physics, rendering, and RL training.
 
 NVIDIA's Isaac Sim and Isaac Lab tools together provide an end-to-end robotics development workflow:
 1. Simulate robots in physically realistic environments.
@@ -106,13 +106,22 @@ You can also filter environments by keyword. For example, to list locomotion env
 
 For the complete list of environments, see the [Isaac Lab Available Environments](https://isaac-sim.github.io/IsaacLab/main/source/overview/environments.html) documentation.
 
-## What you will accomplish in this Learning Path
+## What you'll build
 
-In this Learning Path you will:
+In this Learning Path, you'll:
 
-1. **Set up Isaac Sim and Isaac Lab** on your DGX Spark by building both tools from source
-2. **Run a basic robot simulation** in Isaac Sim and interact with it through Python
-3. **Train a reinforcement learning policy** for the Unitree H1 humanoid robot on rough terrain using RSL-RL
-4. **Explore additional RL environments** to understand how the workflow generalizes to other robots and tasks.
+1. Set up Isaac Sim and Isaac Lab on your DGX Spark by building both tools from source
+2. Run a basic robot simulation in Isaac Sim and interact with it through Python
+3. Train a reinforcement learning policy for the Unitree H1 humanoid robot on rough terrain using RSL-RL
+4. Explore additional RL environments to understand how the workflow generalizes to other robots and tasks
 
-By the end of the Learning Path, you will have a working Isaac Sim and Isaac Lab development environment on DGX Spark and practical experience running a complete robotics reinforcement learning pipeline.
+By the end, you'll have a working Isaac Sim and Isaac Lab development environment on DGX Spark and practical experience running a complete robotics reinforcement learning pipeline.
+
+## What you've learned and what's next
+
+In this section:
+- You learned what Isaac Sim and Isaac Lab are and how they work together for robotics development
+- You discovered why DGX Spark's unified memory architecture is ideal for simulation and RL training
+- You explored the available environment categories for different robotics tasks
+
+In the next section, you'll set up your development environment and install Isaac Sim and Isaac Lab on your DGX Spark system.
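The keyword filtering mentioned in the hunk above can be sketched in plain Python over a few sample environment names. The names below are illustrative only; on a real system the list comes from Isaac Lab's environment-listing script.

```python
# Sketch: filter sample Isaac Lab environment names by a keyword.
# The names below are illustrative examples, not the full registry.
envs = [
    "Isaac-Velocity-Rough-H1-v0",
    "Isaac-Reach-Franka-v0",
    "Isaac-Velocity-Flat-Anymal-C-v0",
    "Isaac-Cartpole-v0",
]

# Keep only locomotion-style tasks whose name contains "Velocity"
locomotion = [name for name in envs if "Velocity" in name]
print(locomotion)
```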

content/learning-paths/laptops-and-desktops/dgx_spark_isaac_robotics/2_isaac_installation.md

Lines changed: 13 additions & 16 deletions
@@ -8,17 +8,17 @@ layout: learningpathall
 
 ## Set up your development environment
 
-Before running robotic simulations and reinforcement learning workloads, you need to prepare your DGX Spark development environment and install the dependencies required for Isaac Sim and Isaac Lab.
+Before you run robotic simulations and reinforcement learning workloads, you need to prepare your DGX Spark development environment and install the dependencies required for Isaac Sim and Isaac Lab.
 
-In this section you will:
+In this section you'll:
 * Verify the DGX Spark system configuration
 * Install required build dependencies
 * Build and configure Isaac Sim
 * Set up Isaac Lab on top of the Isaac Sim environment
 
 The full setup typically takes 15–20 minutes on a DGX Spark system and requires approximately 50 GB of available disk space.
 
-## Step 1: Verify your system
+## Step 1: Verify your DGX Spark system
 
 Begin by confirming that the DGX Spark system has the expected hardware and software configuration.
 
@@ -37,7 +37,7 @@ Architecture: aarch64
 CPU(s): 20
 On-line CPU(s) list: 0-19
 ```
-The Architecture field should report aarch64, indicating that the system is running on Arm.
+The Architecture field should report `aarch64`, indicating that the system is running on Arm.
 
 Check that the Blackwell GPU is detected by the NVIDIA driver:
 
@@ -70,9 +70,7 @@ The expected output includes:
 Cuda compilation tools, release 13.0, V13.0.88
 ```
 
-{{% notice Note %}}
-Isaac Sim requires GCC/G++ 11, Git LFS, and CUDA 13.0 or later. If any of these checks fail, resolve the issue before proceeding.
-{{% /notice %}}
+{{% notice Note %}}Isaac Sim requires GCC/G++ 11, Git LFS, and CUDA 13.0 or later. If any of these checks fail, resolve the issue before you proceed.{{% /notice %}}
 
 ## Step 2: Install GCC 11 and Git LFS
 
@@ -82,7 +80,8 @@ Update the package index and install the GCC 11 toolchain:
 ```bash
 sudo apt update && sudo apt install -y gcc-11 g++-11
 ```
-Register GCC 11 as the default compiler using update-alternatives. This allows multiple compiler versions to coexist while prioritizing GCC 11 for builds:
+
+Register GCC 11 as the default compiler using `update-alternatives`. This allows multiple compiler versions to coexist while prioritizing GCC 11 for builds. The priority value of 200 ensures GCC 11 takes precedence over other installed versions:
 ```bash
 sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 200
 sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-11 200
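The priority mechanism described in the hunk above can be modeled in a few lines of Python. The gcc-12 entry is a hypothetical second alternative, included only to show how auto mode picks the highest registered priority:

```python
# Toy model of update-alternatives "auto" mode: among the registered
# alternatives, the one with the highest priority becomes the default.
# The gcc-12 entry is an assumed example, not from the tutorial.
alternatives = {
    "/usr/bin/gcc-11": 200,
    "/usr/bin/gcc-12": 100,
}

default = max(alternatives, key=alternatives.get)
print(default)  # → /usr/bin/gcc-11
```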
@@ -120,9 +119,7 @@ git lfs install
 git lfs pull
 ```
 
-{{% notice Note %}}
-The Git LFS download retrieves several gigabytes of simulation assets. Ensure you have a stable internet connection and sufficient disk space before running this step.
-{{% /notice %}}
+{{% notice Note %}}The Git LFS download retrieves several gigabytes of simulation assets. Ensure you have a stable internet connection and sufficient disk space before you run this step.{{% /notice %}}
 
 Once the repository and assets are downloaded, build Isaac Sim using the provided build script:
 
@@ -166,7 +163,7 @@ After this step, the variables will be available automatically whenever you open
 
 {{% /notice %}}
 
-## Step 5: Validate the Isaac Sim build
+## Step 5: Validate your Isaac Sim build
 
 Launch Isaac Sim to verify the build was successful. On some aarch64 systems, Isaac Sim may require preloading the GNU OpenMP runtime (libgomp) to avoid library compatibility issues. Setting the LD_PRELOAD environment variable ensures the correct library is loaded before Isaac Sim starts.
 
@@ -244,11 +241,11 @@ Isaac-Reach-Franka-v0
 
 If the environment list displays without errors, both Isaac Sim and Isaac Lab are correctly installed and ready for use.
 
-You are now ready to run and train RL tasks using Isaac Lab environments.
+You're now ready to run and train RL tasks using Isaac Lab environments.
 
-## What you have accomplished
+## What you've learned and what's next
 
-In this section you have:
+In this section you've:
 
 - Verified your DGX Spark system has the required Grace CPU, Blackwell GPU, and CUDA 13 environment
 - Installed GCC 11 and Git LFS as build prerequisites
@@ -257,4 +254,4 @@ In this section you have:
 - Cloned and installed Isaac Lab with all RL library dependencies
 - Validated both installations by launching Isaac Sim and listing available environments
 
-Your development environment is now fully configured for robot simulation and RL workflows. In the next section, you will run your first robot simulation and begin interacting with Isaac Sim through Python scripts.
+Your development environment is now fully configured for robot simulation and RL workflows. In the next section, you'll run your first robot simulation and begin interacting with Isaac Sim through Python scripts.
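Step 5 in this file preloads libgomp through `LD_PRELOAD`. A small sketch of the append-if-missing logic, assuming the aarch64 Ubuntu library path used in the tutorial:

```python
import os

# Sketch: append libgomp to LD_PRELOAD only if it is not already listed.
# The library path is the aarch64 Ubuntu location assumed in the tutorial.
LIBGOMP = "/lib/aarch64-linux-gnu/libgomp.so.1"

current = os.environ.get("LD_PRELOAD", "")
parts = [p for p in current.split(":") if p]  # drop empty entries
if LIBGOMP not in parts:
    parts.append(LIBGOMP)

os.environ["LD_PRELOAD"] = ":".join(parts)
print(os.environ["LD_PRELOAD"])
```

Running this twice leaves `LD_PRELOAD` unchanged, which avoids the duplicate entries that a blind `export LD_PRELOAD="$LD_PRELOAD:..."` can accumulate.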

content/learning-paths/laptops-and-desktops/dgx_spark_isaac_robotics/3_isaac_small_project.md

Lines changed: 13 additions & 12 deletions
@@ -8,7 +8,7 @@ layout: learningpathall
 
 ## Deploy a basic robot simulation
 
-With Isaac Sim and Isaac Lab installed, you can now run your first robot simulation. In this section you will launch a pre-built simulation scene, interact with it programmatically, and explore the key concepts behind Isaac Sim's simulation loop.
+With Isaac Sim and Isaac Lab installed, you can now run your first robot simulation. In this section you'll launch a pre-built simulation scene, interact with it programmatically, and explore the key concepts behind Isaac Sim's simulation loop.
 
 The example environment used here is Cartpole, a classic control benchmark in which a cart must balance an upright pole by applying horizontal forces. Although simple, this environment demonstrates the core mechanics of simulation environments used in robotics and reinforcement learning.
 
@@ -30,9 +30,8 @@ Press `Ctrl+C` to exit the simulation.
 
 ## Step 2: Spawn and simulate a robot
 
-Next, run a tutorial that loads an articulated robot into the simulation and advances the physics engine.
-This example demonstrates how Isaac Sim handles multi-body dynamics, including loading robot assets, configuring actuators, and stepping the physics simulation.
-Run the following command:
+Next, run a tutorial that loads an articulated robot into the simulation and advances the physics engine. This example demonstrates how Isaac Sim handles multi-body dynamics, including loading robot assets, configuring actuators, and stepping the physics simulation.
+
 ```bash
 ./isaaclab.sh -p scripts/tutorials/01_assets/run_articulation.py
 ```
@@ -43,7 +42,7 @@ This script loads a robot model, advances the physics simulation, and prints joi
 - Configuring joint actuators and control modes
 - Stepping the physics simulation and reading back joint positions and velocities
 
-![img1 alt-text#center](run_articulation.gif "Figure 1: run_articulation.py")
+![img1 alt-text#center](run_articulation.gif "run_articulation.py")
 
 ## Step 3: Run the Cartpole environment
 
@@ -56,17 +55,18 @@ Run the following command:
 ./isaaclab.sh -p scripts/tutorials/03_envs/create_cartpole_base_env.py --num_envs 32
 ```
 
-This command launches 32 parallel Cartpole environments on the Blackwell GPU. Each environment runs its own independent simulation with random joint efforts applied to the cart. You will see the pole joint angle printed to the terminal for each step.
+This command launches 32 parallel Cartpole environments on the Blackwell GPU. Each environment runs its own independent simulation with random joint efforts applied to the cart. You'll see the pole joint angle printed to the terminal for each step.
 
-![img2 alt-text#center](32_cartpole.gif "Figure 2: 32 parallel Cartpole")
+![img2 alt-text#center](32_cartpole.gif "32 parallel Cartpole")
 
 {{% notice Note %}}
 This tutorial script uses a hardcoded `CartpoleEnvCfg` configuration. It does not accept a `--task` argument. The `--num_envs` flag controls how many parallel environments are spawned on the GPU.
 {{% /notice %}}
 
 ## Step 4: Run the Cartpole RL environment
 
-The previous tutorial created a base simulation environment that advances physics and applies actions but does not include reinforcement learning components such as rewards or episode termination.
+The previous tutorial created a base simulation environment that advances physics and applies actions but doesn't include reinforcement learning components such as rewards or episode termination.
+
 To run the full reinforcement learning version of the environment, execute the following command:
 
 ```bash
@@ -206,7 +206,7 @@ Each call to `env.step(action)` performs these operations on the GPU:
 
 All computations happen in parallel across all environments using PyTorch tensors on the GPU. This is what makes Isaac Lab efficient: thousands of environments run in parallel without Python loop overhead.
 
-## Step 6: Run with headless mode
+## Step 6: Run in headless mode
 
 For reinforcement learning workflows, it is common to run Isaac Sim without rendering. Disabling the viewer allows more GPU resources to be used for physics simulation and neural network computation.
 
@@ -221,9 +221,9 @@ In headless mode, all GPU resources are dedicated to physics simulation and tens
 When running headless on DGX Spark, the Blackwell GPU handles both the physics simulation and neural network computation. The unified memory architecture means there is no performance penalty for sharing GPU memory between these workloads.
 {{% /notice %}}
 
-## What you have accomplished
+## What you've learned and what's next
 
-In this section you have:
+In this section you've:
 
 - Launched your first Isaac Sim scene on DGX Spark and verified the rendering and physics engines work correctly
 - Spawned articulated robots and observed multi-body physics simulation
@@ -232,4 +232,5 @@ In this section you have:
 - Tested headless mode for maximum training performance
 
 You now understand the core components of an Isaac Lab simulation environment, including scene creation, robot articulation, observation and action structures, and simulation loop execution.
-In the next section, you will use these concepts to train a reinforcement learning policy for a humanoid robot.
+
+In the next section, you'll use these concepts to train a reinforcement learning policy for a humanoid robot.
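The `env.step(action)` pattern this file describes follows the standard vectorized RL loop: one action and one observation per parallel environment, all stepped together. A minimal stub of that shape, where plain lists stand in for GPU tensors and the names and dimensions are illustrative rather than the real Isaac Lab API:

```python
import random

# Stub of a vectorized environment loop: NUM_ENVS parallel environments
# stepped together each iteration (shapes are assumed for the sketch).
NUM_ENVS = 32   # mirrors --num_envs 32 from the tutorial
OBS_DIM = 4     # assumed observation size

def reset():
    # One observation vector per environment
    return [[0.0] * OBS_DIM for _ in range(NUM_ENVS)]

def step(actions):
    # A real environment would advance physics here; the stub returns zeros.
    obs = [[0.0] * OBS_DIM for _ in range(NUM_ENVS)]
    rewards = [0.0] * NUM_ENVS
    dones = [False] * NUM_ENVS
    return obs, rewards, dones

obs = reset()
for _ in range(3):  # a few simulation steps with random per-env actions
    actions = [random.uniform(-1.0, 1.0) for _ in range(NUM_ENVS)]
    obs, rewards, dones = step(actions)

print(len(obs), len(rewards))  # → 32 32
```

In Isaac Lab the same loop runs over batched PyTorch tensors on the GPU, which is what removes the per-environment Python overhead.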

content/learning-paths/laptops-and-desktops/dgx_spark_isaac_robotics/4_isaac_rfl.md

Lines changed: 17 additions & 13 deletions
@@ -8,7 +8,7 @@ layout: learningpathall
 
 ## Train a reinforcement learning policy using Isaac Lab and RSL-RL
 
-In this section you will train a reinforcement learning (RL) policy for the [Unitree] (https://www.unitree.com/) H1 humanoid robot to walk over rough terrain. The training workflow uses Isaac Lab’s integration with the RSL-RL library, which implements the Proximal Policy Optimization (PPO) algorithm. This integration connects Isaac Sim’s physics simulation with an efficient RL training pipeline. By the end of this section you will understand the key stages of the RL training pipeline, including:
+In this section you'll train a reinforcement learning (RL) policy for the [Unitree](https://www.unitree.com/) H1 humanoid robot to walk over rough terrain. The training workflow uses Isaac Lab’s integration with the RSL-RL library, which implements the Proximal Policy Optimization (PPO) algorithm. This integration connects Isaac Sim’s physics simulation with an efficient RL training pipeline. By the end of this section you'll understand the key stages of the RL training pipeline, including:
 * Task configuration and environment selection
 * PPO training parameters and rollout collection
 * Monitoring training progress
@@ -26,7 +26,8 @@ RSL-RL (Robotic Systems Lab Reinforcement Learning) is a lightweight RL library
 Isaac Lab includes ready-to-use training scripts for RSL-RL under `scripts/reinforcement_learning/rsl_rl/`.
 
 ## Step 1: Understand the training task
-In this section you will train the **Isaac-Velocity-Rough-H1-v0** environment. This is a locomotion task where the [Unitree H1](https://www.unitree.com/h1/) humanoid robot must track a velocity command while navigating rough terrain.
+
+In this section you'll train the **Isaac-Velocity-Rough-H1-v0** environment. This is a locomotion task where the [Unitree H1](https://www.unitree.com/h1/) humanoid robot must track a velocity command while navigating rough terrain.
 
 The task details are:
 
@@ -58,6 +59,7 @@ export LD_PRELOAD="$LD_PRELOAD:/lib/aarch64-linux-gnu/libgomp.so.1"
 ```
 
 Once training begins, the terminal displays iteration progress, reward statistics, and performance metrics.
+
 Example output:
 ```output
 Learning iteration 15/3000
@@ -118,8 +120,10 @@ This error occurs because the NVRTC runtime compiler inside PyTorch does not yet
 Support for Blackwell GPUs is expected to improve in upcoming PyTorch and Isaac Sim releases.
 {{% /notice %}}
 
-### Adjusting training parameters
-You can also override default parameters from the command line:
+### Adjust training parameters
+
+You can also override default parameters from the command line.
+
 For example:
 ```bash
 ./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py \
@@ -186,8 +190,8 @@ PPO (Proximal Policy Optimization) is the RL algorithm used by RSL-RL. Understan
 | `save_interval` | `50` | Save a model checkpoint every N iterations. Useful for resuming training or evaluating intermediate policies |
 
 ### How the hyperparameters interact
-During training, each iteration collects experience from all parallel environments.
-The total batch size per iteration is:
+
+During training, each iteration collects experience from all parallel environments. The total batch size per iteration is:
 ```
 batch_size = num_envs × num_steps_per_env
 ```
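The batch-size formula in the hunk above can be checked directly. The values below are assumed examples (512 environments, matching the visualization runs later in this file, and 24 steps per environment), not settings taken from the hyperparameter table:

```python
# Worked example of the batch-size formula: one iteration gathers
# num_steps_per_env transitions from every parallel environment.
# Both values here are assumed examples for illustration.
num_envs = 512
num_steps_per_env = 24

batch_size = num_envs * num_steps_per_env
print(batch_size)  # → 12288
```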
@@ -273,25 +277,25 @@ This progression—from falling to stable walking—demonstrates how PPO gradual
 
 The following visualizations compare two training stages using `num_envs=512`, showcasing the benefit of large-scale parallel training on DGX Spark.
 
-*** Iteration 50 (Early Stage, num_envs=512) ***
+**Iteration 50 (Early Stage, num_envs=512)**
 
 At iteration 50, the policy is still in its exploration phase. Most robots exhibit noisy joint actions, lack coordination, and frequently fall. There is no observable response to the velocity command, and no stable gait has emerged.
 
-![img3 alt-text#center](isaaclab_h1_512_0050.gif "Figure 3: Early Stage")
+![img3 alt-text#center](isaaclab_h1_512_0050.gif "Early Stage")
 
-*** Iteration 1350 (Late Stage, num_envs=512) ***
+**Iteration 1350 (Late Stage, num_envs=512)**
 
 By iteration 1350, the policy has matured. Most robots demonstrate coordinated walking behavior, balance maintenance, and accurate velocity tracking, even on rough terrain. The improvement in foot placement and heading stability is clearly visible.
 
-![img4 alt-text#center](isaaclab_h1_512_1350.gif "Figure 4: Late Stage")
+![img4 alt-text#center](isaaclab_h1_512_1350.gif "Late Stage")
 
-## What you have accomplished
+## What you've learned
 
-In this module, you have:
+In this section, you've:
 
 - Trained a reinforcement learning policy for the Unitree H1 humanoid robot using RSL-RL and the PPO algorithm
 - Understood key hyperparameters in the training pipeline, including policy architecture, rollout strategy, and PPO optimization settings
 - Monitored training progress using reward curves, episode statistics, and performance metrics
 - Evaluated the trained policy through interactive visualization and behavior analysis
 
-You have now completed the end-to-end workflow of training and validating a reinforcement learning policy for humanoid locomotion on DGX Spark.
+You've now completed the end-to-end workflow of training and validating a reinforcement learning policy for humanoid locomotion on DGX Spark.

content/learning-paths/laptops-and-desktops/dgx_spark_isaac_robotics/_index.md

Lines changed: 2 additions & 2 deletions
@@ -4,10 +4,10 @@ title: Build Robot Simulation and Reinforcement Learning Workflows with Isaac Si
 draft: true
 cascade:
   draft: true
-
+
 minutes_to_complete: 90
 
-who_is_this_for: This learning path is intended for robotics developers, simulation engineers, and AI researchers who want to run high-fidelity robotic simulations and reinforcement learning (RL) pipelines using NVIDIA Isaac Sim and Isaac Lab on Arm-based NVIDIA DGX Spark system powered by the Grace–Blackwell (GB10) architecture.
+who_is_this_for: This is an advanced topic for robotics developers, simulation engineers, and AI researchers who want to run high-fidelity robotic simulations and reinforcement learning (RL) pipelines using NVIDIA Isaac Sim and Isaac Lab on Arm-based NVIDIA DGX Spark system powered by the Grace–Blackwell (GB10) architecture.
 
 learning_objectives:
 - Describe the roles of Isaac Sim and Isaac Lab within a robotics simulation and RL pipeline
