LuckyLab

A unified robot learning framework powered by LuckyEngine

License: MIT · Python 3.10+ · Ruff

LuckyLab handles RL training, IL training, and policy inference for robots simulated in LuckyEngine. It communicates with the engine over gRPC (port 50051) via luckyrobots.

| Robot | Task | Learning |
| --- | --- | --- |
| Unitree Go2 | Velocity tracking | RL (PPO, SAC) |
| SO-100 | Pick-and-place | IL (ACT via LeRobot) |

Setup

1. Install LuckyLab

git clone https://github.com/luckyrobots/luckylab.git
cd luckylab

LuckyLab uses uv for dependency management. Install it if you don't have it:

curl -LsSf https://astral.sh/uv/install.sh | sh

Then install the dependency group for your use case:

# RL only
uv sync --group rl

# IL only (LeRobot)
uv sync --group il

# Everything (RL + IL + Rerun + dev tools)
uv sync --all-groups

2. Start LuckyEngine

LuckyLab does not launch the engine; you need to start it yourself first.

  1. Open LuckyEngine
  2. Load a scene (e.g. the Go2 velocity scene or SO-100 pick-and-place scene)
  3. Enable the gRPC panel; this starts the gRPC server on port 50051
  4. LuckyLab will connect to localhost:50051 by default

If the engine is not running or gRPC is not enabled, LuckyLab will fail to connect.
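To fail fast with a clear message, you can check that something is listening on the engine's port before launching a run. This is a minimal standard-library sketch; `engine_reachable` is a hypothetical helper, not part of LuckyLab, and a plain TCP connect only confirms the port is open, not that the listener speaks gRPC:

```python
import socket

def engine_reachable(host: str = "localhost", port: int = 50051,
                     timeout: float = 2.0) -> bool:
    """Return True if something is listening on the engine's gRPC port."""
    try:
        # A raw TCP connect is enough to detect a closed port or missing engine.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Calling `engine_reachable()` before training gives a quicker, clearer failure than waiting for a gRPC connection timeout.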


Training

RL — Go2 velocity tracking

uv run python -m luckylab.scripts.train go2_velocity_flat \
    --agent.algorithm sac --agent.backend skrl --device cuda

Checkpoints are saved to runs/go2_velocity_sac/checkpoints/ every 5,000 steps, named by step count:

runs/go2_velocity_sac/checkpoints/
├── agent_5000.pt
├── agent_10000.pt
├── agent_15000.pt
└── ...

IL — SO-100 pick-and-place

uv run python -m luckylab.scripts.train so100_pickandplace \
    --il.policy act \
    --il.dataset-repo-id luckyrobots/so100_pickandplace_sim \
    --device cuda

Datasets are loaded from the HuggingFace Hub or from a local directory at ~/.luckyrobots/data/.
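One way to express that precedence (local directory first, Hub fallback) is a small resolver. This is a sketch of the lookup order described above, not LuckyLab's actual loading code:

```python
from pathlib import Path

# Assumed local dataset root, per the convention above.
LOCAL_DATA_ROOT = Path.home() / ".luckyrobots" / "data"

def resolve_dataset(repo_id: str) -> str:
    """Prefer a local copy under ~/.luckyrobots/data/, else fall back to the Hub id."""
    local = LOCAL_DATA_ROOT / repo_id
    return str(local) if local.exists() else repo_id
```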


Inference

RL

# Run a trained SAC policy
uv run python -m luckylab.scripts.play go2_velocity_flat \
    --algorithm sac --backend skrl \
    --checkpoint runs/go2_velocity_sac/checkpoints/agent_25000.pt

# With keyboard velocity command control
uv run python -m luckylab.scripts.play go2_velocity_flat \
    --algorithm sac --backend skrl \
    --checkpoint runs/go2_velocity_sac/checkpoints/agent_25000.pt \
    --keyboard

Keyboard controls: W/S forward/back, A/D strafe, Q/E turn, Space zero, Esc quit.

IL

uv run python -m luckylab.scripts.play so100_pickandplace \
    --policy act --checkpoint runs/so100_pickandplace_act/final

Available Tasks

# List all registered tasks
uv run python -m luckylab.scripts.list_envs
| Task ID | Robot | Type | Algorithms |
| --- | --- | --- | --- |
| go2_velocity_flat | Unitree Go2 | RL | PPO, SAC |
| so100_pickandplace | SO-100 | IL | ACT |

Any algorithm supported by skrl or Stable Baselines3 can be used for RL, and any policy supported by LeRobot can be used for IL — you just need to define the configs for them.
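As an illustration of what "defining the configs" might involve — the shape below is a hypothetical sketch, not LuckyLab's actual registration API — an agent config could be a small dataclass selecting algorithm and backend:

```python
from dataclasses import dataclass, field

# Hypothetical config shape; field names here are assumptions for illustration.
@dataclass
class AgentConfig:
    algorithm: str = "ppo"   # any algorithm skrl / Stable Baselines3 supports
    backend: str = "skrl"    # "skrl" or "sb3"
    hyperparams: dict = field(default_factory=dict)

# Example: a TD3 variant for the Go2 velocity task.
go2_td3 = AgentConfig(algorithm="td3", backend="skrl",
                      hyperparams={"learning_rate": 1e-3})
```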


Visualization

Rerun — live inspection of observations, actions, rewards, and camera feeds:

# Browse a dataset
uv run python -m luckylab.scripts.visualize_dataset \
    --repo-id luckyrobots/so100_pickandplace_sim --episode-index 0 --web

# Attach to an evaluation run
uv run python -m luckylab.scripts.play go2_velocity_flat \
    --algorithm sac --backend skrl \
    --checkpoint runs/go2_velocity_sac/checkpoints/agent_25000.pt --rerun

Weights & Biases — enabled by default for RL training. Disable with --agent.wandb false.


Development

uv sync --all-groups
uv run pre-commit install

uv run pytest tests -v
uv run ruff check src tests
uv run ruff format src tests

License

MIT License — see LICENSE for details.
