
Isaac Lab – SO‑ARM100 / SO‑ARM101 Project

This repository implements tasks for the SO‑ARM100 and SO‑ARM101 robots using Isaac Lab. It serves as the foundation for several tutorials in the LycheeAI Hub series Project: SO‑ARM101 × Isaac Sim × Isaac Lab.

📰 News featuring this repository:

  • 10 June 2025: 🎥 LycheeAI Channel Premiere: SO-ARM101 tutorial series announcement! 🔗 Watch on YouTube
  • 23 April 2025: 🤖 NVIDIA Omniverse Livestream: Training a Robot from Scratch in Simulation (URDF → OpenUSD). 🔗 Watch on YouTube
  • 19 April 2025: 🎥 LycheeAI Tutorial: How to Create External Projects in Isaac Lab. 🔗 Watch on YouTube

🎬 Watch the Lift Task in action

(Video: rl-video-step-0 — a trained policy rollout on the Lift task)

🛠️ Installation

  1. Install Isaac Lab by following the official installation guide (using conda).
# only need once
conda create -n env_isaaclab python=3.11
conda activate env_isaaclab  # then: cd <isaaclab root directory>
./isaaclab.sh --conda
./isaaclab.sh --install
source _isaac_sim/setup_conda_env.sh  # make sure the isaacsim symlink exists

# activate the conda env every time you open a new shell
conda activate env_isaaclab
  2. Clone this repository outside the IsaacLab directory.

  3. Install the package:

    python -m pip install -e source/SO_100

🚀 Quickstart

To list all available environments:

python scripts/list_envs.py
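Under the hood, list_envs.py reports the task IDs that the SO_100 package registers with Gymnasium when it is imported. A minimal, runnable sketch of that lookup (the registry below is a stand-in with example IDs; the real one is populated inside the Isaac Lab environment):

```python
def list_tasks(registry, keyword="SO-ARM100"):
    """Return all registered environment IDs containing `keyword`, sorted."""
    return sorted(env_id for env_id in registry if keyword in env_id)

# Stand-in registry; with Isaac Lab installed you would pass gym.registry instead.
example_registry = {
    "SO-ARM100-Reach-v0": None,
    "SO-ARM100-Lift-Cube-v0": None,
    "Isaac-Cartpole-v0": None,
}
print(list_tasks(example_registry))  # -> ['SO-ARM100-Lift-Cube-v0', 'SO-ARM100-Reach-v0']
```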

🐞 Debugging Tasks

Two scripts can help verify your setup:

Zero Agent

Sends zero commands to all robots, confirming that the environment loads correctly:

python scripts/zero_agent.py --task SO-ARM100-Reach-Play-v0
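The zero-agent pattern is simple: reset the environment, then step it with all-zero actions; if this loop completes without errors, the scene and articulation load correctly. A runnable sketch, assuming the Gymnasium-style reset/step API that Isaac Lab environments expose (DummyEnv is a stand-in so the sketch runs anywhere):

```python
class DummyEnv:
    """Stand-in for an Isaac Lab task with a 6-dimensional action space."""
    action_dim = 6

    def reset(self):
        return [0.0] * self.action_dim, {}

    def step(self, action):
        # Echo the action back as the observation; never terminates.
        return list(action), 0.0, False, False, {}

def run_zero_agent(env, num_steps=10):
    """Drive the environment with all-zero actions; if this runs without
    raising, the environment loads and steps correctly."""
    obs, info = env.reset()
    for _ in range(num_steps):
        action = [0.0] * env.action_dim
        obs, reward, terminated, truncated, info = env.step(action)
        if terminated or truncated:
            obs, info = env.reset()
    return obs

print(run_zero_agent(DummyEnv()))  # -> [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```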

Random Agent

Sends random commands to all robots, confirming proper actuation:

python scripts/random_agent.py --task SO-ARM100-Reach-Play-v0

🏋️‍♂️ Training and Playback

You can train a policy for SO‑ARM100 / SO‑ARM101 tasks (for example, the Reach task, a basic RL-based inverse-kinematics problem) with either the rsl_rl or the skrl library:

python scripts/rsl_rl/train.py --task SO-ARM100-Reach-v0 --headless
# or
python scripts/rsl_rl/train.py --task SO-ARM100-Lift-Cube-v0 --num_envs 1000 --max_iterations 12000 --headless

python scripts/skrl/train.py --task SO-ARM100-Reach-v0 --headless
# or
python scripts/skrl/train.py --task SO-ARM100-Lift-Cube-v0 --num_envs 1000 --max_iterations 12000 --headless

After training, validate the learned policy:

python scripts/rsl_rl/play.py --task SO-ARM100-Reach-Play-v0
# or
python scripts/rsl_rl/play.py --task SO-ARM100-Lift-Cube-Play-v0

python scripts/skrl/play.py --task SO-ARM100-Reach-Play-v0
# or
python scripts/skrl/play.py --task SO-ARM100-Lift-Cube-Play-v0

This ensures that your policy performs as expected in Isaac Lab before attempting real‑world transfer.

Two experiments worth trying: reduce the number of training iterations in the policy configuration, and add a lift-height reward term to see how it changes the learned behavior. The snippet below adds such a reward:

# source/SO_100/SO_100/tasks/lift/mdp/rewards.py
# Imports assume the `isaaclab` package namespace (Isaac Lab >= 2.0).
import torch

from isaaclab.assets import RigidObject
from isaaclab.envs import ManagerBasedRLEnv
from isaaclab.managers import SceneEntityCfg

def object_lift_height(
    env: ManagerBasedRLEnv,
    scale: float = 1.0,
    object_cfg: SceneEntityCfg = SceneEntityCfg("object"),
) -> torch.Tensor:
    """Reward the agent for lifting the object: the higher, the better."""
    obj: RigidObject = env.scene[object_cfg.name]
    # z-coordinate (height) of the object's root in the world frame
    object_height = obj.data.root_pos_w[:, 2]
    # Scale the height and clamp at zero so the reward is never negative
    return torch.maximum(object_height * scale, torch.zeros_like(object_height))

# source/SO_100/SO_100/tasks/lift/lift_env_cfg.py
from isaaclab.managers import RewardTermCfg as RewTerm

from . import mdp as local_mdp

# Add to the rewards configuration of the lift task:
lift_height = RewTerm(func=local_mdp.object_lift_height, params={"scale": 10.0}, weight=10.0)
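Per environment, this reward reduces to max(z * scale, 0): it grows linearly with the object's height and is clamped at zero so it never goes negative. A quick torch-free check of that arithmetic:

```python
# Scalar stand-in for object_lift_height, applied to one environment.
def lift_height_reward(z, scale=1.0):
    """Return max(z * scale, 0): linear in height, clamped at zero."""
    return max(z * scale, 0.0)

print(lift_height_reward(0.25, scale=10.0))   # object 25 cm up -> 2.5
print(lift_height_reward(-0.05, scale=10.0))  # below the origin -> 0.0
```

Note that the reward manager multiplies the term by its weight as well, so with scale=10.0 and weight=10.0 the effective contribution is 100 times the object's height.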


🔄 Sim2Real Transfer

Work in progress.

🤝 Contributing

We welcome contributions of all kinds!
Please read our Contributing Guide to learn how to set up your environment, follow our coding style, and submit pull requests.

📄 License

This project is licensed under the BSD 3-Clause License. See the LICENSE file for details.

🙏 Acknowledgements

This project builds upon the excellent work of several open-source projects and communities:

  • Isaac Lab - The foundational robotics simulation framework that powers this project
  • NVIDIA Isaac Sim - The underlying physics simulation platform
  • RSL-RL - Reinforcement learning library used for training policies
  • SKRL - Alternative RL library integration
  • SO-ARM100/SO-ARM101 Robot - The hardware platform that inspired this simulation environment

Special thanks to:

  • The Isaac Lab development team at NVIDIA for providing the simulation framework
  • Hugging Face and The Robot Studio for the SO‑ARM robot series
  • The LycheeAI Hub community for tutorials and support

📚 Citation

If you use this work, please cite it as:

@software{Louis_Isaac_Lab_2025,
  author = {Le Lay, Louis and Bay, Muammer},
  doi = {10.5281/zenodo.16794229},
  license = {BSD-3-Clause},
  month = apr,
  title = {{Isaac Lab – SO‑ARM100 / SO‑ARM101 Project}},
  url = {https://github.com/MuammerBay/isaac_so_arm101},
  version = {1.1.0},
  year = {2025}
}
