Changed file: `content/learning-paths/laptops-and-desktops/dgx_spark_isaac_robotics/1_introduction_isaac.md` (17 additions, 8 deletions)
@@ -8,7 +8,7 @@ layout: learningpathall
 ## Overview

-In this Learning Path, you will build, configure, and run robotic simulation and [reinforcement learning (RL)](https://en.wikipedia.org/wiki/Reinforcement_learning) workflows using NVIDIA Isaac Sim and Isaac Lab on an Arm-based DGX Spark system. The NVIDIA DGX Spark is a personal AI supercomputer powered by the GB10 [Grace Blackwell](https://learn.arm.com/learning-paths/laptops-and-desktops/dgx_spark_llamacpp/1_gb10_introduction/) Superchip. The system couples an Arm CPU cluster with a Blackwell GPU and a unified memory architecture to accelerate simulation orchestration, sensor preprocessing, physics, rendering, and RL training.
+In this Learning Path, you'll build, configure, and run robotic simulation and [reinforcement learning (RL)](https://en.wikipedia.org/wiki/Reinforcement_learning) workflows using NVIDIA Isaac Sim and Isaac Lab on an Arm-based DGX Spark system. The NVIDIA DGX Spark is a personal AI supercomputer powered by the GB10 [Grace Blackwell](https://learn.arm.com/learning-paths/laptops-and-desktops/dgx_spark_llamacpp/1_gb10_introduction/) Superchip. The system couples an Arm CPU cluster with a Blackwell GPU and a unified memory architecture to accelerate simulation orchestration, sensor preprocessing, physics, rendering, and RL training.

 NVIDIA's Isaac Sim and Isaac Lab tools together provide an end-to-end robotics development workflow:
 1. Simulate robots in physically realistic environments.
@@ -106,13 +106,22 @@ You can also filter environments by keyword. For example, to list locomotion env
 For the complete list of environments, see the [Isaac Lab Available Environments](https://isaac-sim.github.io/IsaacLab/main/source/overview/environments.html) documentation.

-## What you will accomplish in this Learning Path
+## What you'll build

-In this Learning Path you will:
+In this Learning Path, you'll:

-1. **Set up Isaac Sim and Isaac Lab** on your DGX Spark by building both tools from source
-2. **Run a basic robot simulation** in Isaac Sim and interact with it through Python
-3. **Train a reinforcement learning policy** for the Unitree H1 humanoid robot on rough terrain using RSL-RL
-4. **Explore additional RL environments** to understand how the workflow generalizes to other robots and tasks.
+1. Set up Isaac Sim and Isaac Lab on your DGX Spark by building both tools from source
+2. Run a basic robot simulation in Isaac Sim and interact with it through Python
+3. Train a reinforcement learning policy for the Unitree H1 humanoid robot on rough terrain using RSL-RL
+4. Explore additional RL environments to understand how the workflow generalizes to other robots and tasks

-By the end of the Learning Path, you will have a working Isaac Sim and Isaac Lab development environment on DGX Spark and practical experience running a complete robotics reinforcement learning pipeline.
+By the end, you'll have a working Isaac Sim and Isaac Lab development environment on DGX Spark and practical experience running a complete robotics reinforcement learning pipeline.
+
+## What you've learned and what's next
+
+In this section:
+
+- You learned what Isaac Sim and Isaac Lab are and how they work together for robotics development
+- You discovered why DGX Spark's unified memory architecture is ideal for simulation and RL training
+- You explored the available environment categories for different robotics tasks
+
+In the next section, you'll set up your development environment and install Isaac Sim and Isaac Lab on your DGX Spark system.
Changed file: `content/learning-paths/laptops-and-desktops/dgx_spark_isaac_robotics/2_isaac_installation.md` (13 additions, 16 deletions)
@@ -8,17 +8,17 @@ layout: learningpathall
 ## Set up your development environment

-Before running robotic simulations and reinforcement learning workloads, you need to prepare your DGX Spark development environment and install the dependencies required for Isaac Sim and Isaac Lab.
+Before you run robotic simulations and reinforcement learning workloads, you need to prepare your DGX Spark development environment and install the dependencies required for Isaac Sim and Isaac Lab.

-In this section you will:
+In this section you'll:

 * Verify the DGX Spark system configuration
 * Install required build dependencies
 * Build and configure Isaac Sim
 * Set up Isaac Lab on top of the Isaac Sim environment

 The full setup typically takes 15–20 minutes on a DGX Spark system and requires approximately 50 GB of available disk space.
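As a quick pre-flight check, you can confirm the free disk space from Python before starting the build (or simply run `df -h`). This is an illustrative sketch, not part of the Learning Path's scripts; the 50 GB threshold comes from the requirement above:

```python
import shutil

# Query total/used/free bytes for the root filesystem.
usage = shutil.disk_usage("/")
free_gb = usage.free / 1e9

# The Isaac Sim and Isaac Lab builds need roughly 50 GB of free space.
print(f"Free space: {free_gb:.1f} GB")
if free_gb < 50:
    print("Warning: less than 50 GB free; the build may fail.")
```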
-## Step 1: Verify your system
+## Step 1: Verify your DGX Spark system

 Begin by confirming that the DGX Spark system has the expected hardware and software configuration.
@@ -37,7 +37,7 @@ Architecture: aarch64
 CPU(s): 20
 On-line CPU(s) list: 0-19
 ```
-The Architecture field should report aarch64, indicating that the system is running on Arm.
+The Architecture field should report `aarch64`, indicating that the system is running on Arm.
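As a cross-check, the same architecture string is available from Python's standard library; this small sketch mirrors what the `lscpu` Architecture field reports:

```python
import platform

# platform.machine() returns the hardware architecture string,
# matching the Architecture field shown by lscpu.
arch = platform.machine()
print(arch)

# On a DGX Spark system this should be "aarch64"; warn otherwise.
if arch != "aarch64":
    print("Warning: this system does not report an Arm (aarch64) architecture.")
```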

 Check that the Blackwell GPU is detected by the NVIDIA driver:
@@ -70,9 +70,7 @@ The expected output includes:
 Cuda compilation tools, release 13.0, V13.0.88
 ```

-{{% notice Note %}}
-Isaac Sim requires GCC/G++ 11, Git LFS, and CUDA 13.0 or later. If any of these checks fail, resolve the issue before proceeding.
-{{% /notice %}}
+{{% notice Note %}}Isaac Sim requires GCC/G++ 11, Git LFS, and CUDA 13.0 or later. If any of these checks fail, resolve the issue before you proceed.{{% /notice %}}

 ## Step 2: Install GCC 11 and Git LFS
@@ -82,7 +80,8 @@ Update the package index and install the GCC 11 toolchain:

-Register GCC 11 as the default compiler using update-alternatives. This allows multiple compiler versions to coexist while prioritizing GCC 11 for builds:
+Register GCC 11 as the default compiler using `update-alternatives`. This allows multiple compiler versions to coexist while prioritizing GCC 11 for builds. The priority value of 200 ensures GCC 11 takes precedence over other installed versions:
-The Git LFS download retrieves several gigabytes of simulation assets. Ensure you have a stable internet connection and sufficient disk space before running this step.
-{{% /notice %}}
+{{% notice Note %}}The Git LFS download retrieves several gigabytes of simulation assets. Ensure you have a stable internet connection and sufficient disk space before you run this step.{{% /notice %}}

 Once the repository and assets are downloaded, build Isaac Sim using the provided build script:
@@ -166,7 +163,7 @@ After this step, the variables will be available automatically whenever you open
 {{% /notice %}}

-## Step 5: Validate the Isaac Sim build
+## Step 5: Validate your Isaac Sim build

 Launch Isaac Sim to verify the build was successful. On some aarch64 systems, Isaac Sim may require preloading the GNU OpenMP runtime (libgomp) to avoid library compatibility issues. Setting the LD_PRELOAD environment variable ensures the correct library is loaded before Isaac Sim starts.
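If you launch Isaac Sim from a Python wrapper script rather than the shell, the same workaround can be applied by setting `LD_PRELOAD` only in the child process environment. This is a sketch under stated assumptions: the libgomp path shown is the usual Ubuntu aarch64 location, and the launcher path is a placeholder you should adjust to your build directory:

```python
import os
import subprocess

# Copy the current environment and preload libgomp before Isaac Sim starts.
# The library path below is the typical Ubuntu aarch64 location (assumption).
env = dict(os.environ)
env["LD_PRELOAD"] = "/usr/lib/aarch64-linux-gnu/libgomp.so.1"

# Placeholder path to the Isaac Sim launcher built earlier; adjust to your system.
launcher = os.path.expanduser("~/isaacsim/_build/linux-aarch64/release/isaac-sim.sh")

# Passing env= affects only the child process, leaving your shell untouched.
if os.path.exists(launcher):
    subprocess.run([launcher], env=env, check=True)
else:
    print(f"Launcher not found at {launcher}; update the path for your build.")
```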
@@ -244,11 +241,11 @@ Isaac-Reach-Franka-v0
 If the environment list displays without errors, both Isaac Sim and Isaac Lab are correctly installed and ready for use.

-You are now ready to run and train RL tasks using Isaac Lab environments.
+You're now ready to run and train RL tasks using Isaac Lab environments.

-## What you have accomplished
+## What you've learned and what's next

-In this section you have:
+In this section you've:

 - Verified your DGX Spark system has the required Grace CPU, Blackwell GPU, and CUDA 13 environment
 - Installed GCC 11 and Git LFS as build prerequisites
@@ -257,4 +254,4 @@ In this section you have:
 - Cloned and installed Isaac Lab with all RL library dependencies
 - Validated both installations by launching Isaac Sim and listing available environments

-Your development environment is now fully configured for robot simulation and RL workflows. In the next section, you will run your first robot simulation and begin interacting with Isaac Sim through Python scripts.
+Your development environment is now fully configured for robot simulation and RL workflows. In the next section, you'll run your first robot simulation and begin interacting with Isaac Sim through Python scripts.
Changed file: `content/learning-paths/laptops-and-desktops/dgx_spark_isaac_robotics/3_isaac_small_project.md` (13 additions, 12 deletions)
@@ -8,7 +8,7 @@ layout: learningpathall
 ## Deploy a basic robot simulation

-With Isaac Sim and Isaac Lab installed, you can now run your first robot simulation. In this section you will launch a pre-built simulation scene, interact with it programmatically, and explore the key concepts behind Isaac Sim's simulation loop.
+With Isaac Sim and Isaac Lab installed, you can now run your first robot simulation. In this section you'll launch a pre-built simulation scene, interact with it programmatically, and explore the key concepts behind Isaac Sim's simulation loop.

 The example environment used here is Cartpole, a classic control benchmark in which a cart must balance an upright pole by applying horizontal forces. Although simple, this environment demonstrates the core mechanics of simulation environments used in robotics and reinforcement learning.
@@ -30,9 +30,8 @@ Press `Ctrl+C` to exit the simulation.

 ## Step 2: Spawn and simulate a robot

-Next, run a tutorial that loads an articulated robot into the simulation and advances the physics engine.
-This example demonstrates how Isaac Sim handles multi-body dynamics, including loading robot assets, configuring actuators, and stepping the physics simulation.
-Run the following command:
+Next, run a tutorial that loads an articulated robot into the simulation and advances the physics engine. This example demonstrates how Isaac Sim handles multi-body dynamics, including loading robot assets, configuring actuators, and stepping the physics simulation.

-This command launches 32 parallel Cartpole environments on the Blackwell GPU. Each environment runs its own independent simulation with random joint efforts applied to the cart. You will see the pole joint angle printed to the terminal for each step.
+This command launches 32 parallel Cartpole environments on the Blackwell GPU. Each environment runs its own independent simulation with random joint efforts applied to the cart. You'll see the pole joint angle printed to the terminal for each step.
 This tutorial script uses a hardcoded `CartpoleEnvCfg` configuration. It does not accept a `--task` argument. The `--num_envs` flag controls how many parallel environments are spawned on the GPU.
 {{% /notice %}}

 ## Step 4: Run the Cartpole RL environment

-The previous tutorial created a base simulation environment that advances physics and applies actions but does not include reinforcement learning components such as rewards or episode termination.
+The previous tutorial created a base simulation environment that advances physics and applies actions but doesn't include reinforcement learning components such as rewards or episode termination.

 To run the full reinforcement learning version of the environment, execute the following command:

 ```bash
@@ -206,7 +206,7 @@ Each call to `env.step(action)` performs these operations on the GPU:

 All computations happen in parallel across all environments using PyTorch tensors on the GPU. This is what makes Isaac Lab efficient: thousands of environments run in parallel without Python loop overhead.
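To see why this vectorized design matters, here is a minimal sketch of the idea using NumPy in place of Isaac Lab's GPU tensors: one batched array operation updates every environment at once, with no per-environment Python loop. The toy dynamics below are an illustration only, not Isaac Lab's physics:

```python
import numpy as np

num_envs = 32          # parallel environments, as in the Cartpole tutorial
rng = np.random.default_rng(0)

# Per-environment state: pole angle and angular velocity (toy model).
angle = np.zeros(num_envs)
vel = np.zeros(num_envs)
dt = 0.01

for step in range(100):
    # One random "joint effort" per environment, drawn as a single batch.
    action = rng.uniform(-1.0, 1.0, size=num_envs)

    # A single vectorized update advances all 32 environments at once;
    # Isaac Lab does the same with PyTorch tensors on the GPU.
    vel += dt * (9.81 * np.sin(angle) + action)
    angle += dt * vel

print(angle.shape)  # one angle per environment
```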
-## Step 6: Run with headless mode
+## Step 6: Run in headless mode

 For reinforcement learning workflows, it is common to run Isaac Sim without rendering. Disabling the viewer allows more GPU resources to be used for physics simulation and neural network computation.
@@ -221,9 +221,9 @@ In headless mode, all GPU resources are dedicated to physics simulation and tens
 When running headless on DGX Spark, the Blackwell GPU handles both the physics simulation and neural network computation. The unified memory architecture means there is no performance penalty for sharing GPU memory between these workloads.
 {{% /notice %}}

-## What you have accomplished
+## What you've learned and what's next

-In this section you have:
+In this section you've:

 - Launched your first Isaac Sim scene on DGX Spark and verified the rendering and physics engines work correctly
 - Spawned articulated robots and observed multi-body physics simulation
@@ -232,4 +232,5 @@ In this section you have:
 - Tested headless mode for maximum training performance

 You now understand the core components of an Isaac Lab simulation environment, including scene creation, robot articulation, observation and action structures, and simulation loop execution.

-In the next section, you will use these concepts to train a reinforcement learning policy for a humanoid robot.
+In the next section, you'll use these concepts to train a reinforcement learning policy for a humanoid robot.
Changed file: `content/learning-paths/laptops-and-desktops/dgx_spark_isaac_robotics/4_isaac_rfl.md` (17 additions, 13 deletions)
@@ -8,7 +8,7 @@ layout: learningpathall
 ## Train a reinforcement learning policy using Isaac Lab and RSL-RL

-In this section you will train a reinforcement learning (RL) policy for the [Unitree](https://www.unitree.com/) H1 humanoid robot to walk over rough terrain. The training workflow uses Isaac Lab’s integration with the RSL-RL library, which implements the Proximal Policy Optimization (PPO) algorithm. This integration connects Isaac Sim’s physics simulation with an efficient RL training pipeline. By the end of this section you will understand the key stages of the RL training pipeline, including:
+In this section you'll train a reinforcement learning (RL) policy for the [Unitree](https://www.unitree.com/) H1 humanoid robot to walk over rough terrain. The training workflow uses Isaac Lab’s integration with the RSL-RL library, which implements the Proximal Policy Optimization (PPO) algorithm. This integration connects Isaac Sim’s physics simulation with an efficient RL training pipeline. By the end of this section you'll understand the key stages of the RL training pipeline, including:

 * Task configuration and environment selection
 * PPO training parameters and rollout collection
 * Monitoring training progress

@@ -26,7 +26,8 @@ RSL-RL (Robotic Systems Lab Reinforcement Learning) is a lightweight RL library
 Isaac Lab includes ready-to-use training scripts for RSL-RL under `scripts/reinforcement_learning/rsl_rl/`.

 ## Step 1: Understand the training task

-In this section you will train the **Isaac-Velocity-Rough-H1-v0** environment. This is a locomotion task where the [Unitree H1](https://www.unitree.com/h1/) humanoid robot must track a velocity command while navigating rough terrain.
+In this section you'll train the **Isaac-Velocity-Rough-H1-v0** environment. This is a locomotion task where the [Unitree H1](https://www.unitree.com/h1/) humanoid robot must track a velocity command while navigating rough terrain.
@@ -186,8 +190,8 @@ PPO (Proximal Policy Optimization) is the RL algorithm used by RSL-RL. Understan
 |`save_interval`|`50`| Save a model checkpoint every N iterations. Useful for resuming training or evaluating intermediate policies |

 ### How the hyperparameters interact

-During training, each iteration collects experience from all parallel environments.
-The total batch size per iteration is:
+During training, each iteration collects experience from all parallel environments. The total batch size per iteration is:

 ```
 batch_size = num_envs × num_steps_per_env
 ```
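As a concrete worked example of the formula above (using illustrative values; substitute the `num_envs` and `num_steps_per_env` from your own task configuration):

```python
# Illustrative values, not necessarily your task's configuration.
num_envs = 4096
num_steps_per_env = 24

# Experience collected per training iteration across all parallel envs.
batch_size = num_envs * num_steps_per_env
print(batch_size)  # 98304 samples per iteration
```

Doubling `num_envs` doubles the experience gathered per iteration at roughly the same wall-clock cost, which is why large-scale parallel simulation speeds up PPO training.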
@@ -273,25 +277,25 @@ This progression—from falling to stable walking—demonstrates how PPO gradual
 The following visualizations compare two training stages using `num_envs=512`, showcasing the benefit of large-scale parallel training on DGX Spark.

-***Iteration 50 (Early Stage, num_envs=512)***
+**Iteration 50 (Early Stage, num_envs=512)**

 At iteration 50, the policy is still in its exploration phase. Most robots exhibit noisy joint actions, lack coordination, and frequently fall. There is no observable response to the velocity command, and no stable gait has emerged.

-

 By iteration 1350, the policy has matured. Most robots demonstrate coordinated walking behavior, balance maintenance, and accurate velocity tracking, even on rough terrain. The improvement in foot placement and heading stability is clearly visible.

-
Changed file: `content/learning-paths/laptops-and-desktops/dgx_spark_isaac_robotics/_index.md` (2 additions, 2 deletions)
@@ -4,10 +4,10 @@ title: Build Robot Simulation and Reinforcement Learning Workflows with Isaac Si
 draft: true
 cascade:
   draft: true

 minutes_to_complete: 90

-who_is_this_for: This learning path is intended for robotics developers, simulation engineers, and AI researchers who want to run high-fidelity robotic simulations and reinforcement learning (RL) pipelines using NVIDIA Isaac Sim and Isaac Lab on Arm-based NVIDIA DGX Spark system powered by the Grace–Blackwell (GB10) architecture.
+who_is_this_for: This is an advanced topic for robotics developers, simulation engineers, and AI researchers who want to run high-fidelity robotic simulations and reinforcement learning (RL) pipelines using NVIDIA Isaac Sim and Isaac Lab on an Arm-based NVIDIA DGX Spark system powered by the Grace–Blackwell (GB10) architecture.

 learning_objectives:
 - Describe the roles of Isaac Sim and Isaac Lab within a robotics simulation and RL pipeline