[RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions
A lean, ROS-free sim-to-real framework for training and deploying Vision-Language-Action (VLA) models and RL agents, with native MuJoCo Gymnasium wrappers that execute synchronously for Franka, UR5e, xArm, and SO101 arms.
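A minimal sketch of what a synchronous MuJoCo Gymnasium wrapper of this kind might look like; the class name, model path, observation layout, and normalized-action assumption are illustrative, not the project's actual API:

```python
# Sketch only: assumes a MuJoCo XML asset at model_path and actuators that
# accept normalized control targets. Not the framework's real interface.
import gymnasium as gym
import mujoco
import numpy as np


class SyncArmEnv(gym.Env):
    """One env.step() runs a fixed number of physics substeps, synchronously."""

    def __init__(self, model_path="franka.xml", n_substeps=10):  # hypothetical asset
        self.model = mujoco.MjModel.from_xml_path(model_path)
        self.data = mujoco.MjData(self.model)
        self.n_substeps = n_substeps
        self.action_space = gym.spaces.Box(-1.0, 1.0, (self.model.nu,), np.float32)
        self.observation_space = gym.spaces.Box(
            -np.inf, np.inf, (self.model.nq + self.model.nv,), np.float64
        )

    def _obs(self):
        # Joint positions and velocities stacked into one flat vector.
        return np.concatenate([self.data.qpos, self.data.qvel])

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        mujoco.mj_resetData(self.model, self.data)
        return self._obs(), {}

    def step(self, action):
        self.data.ctrl[:] = action  # assumes normalized actuator targets
        for _ in range(self.n_substeps):  # blocking, deterministic physics rollout
            mujoco.mj_step(self.model, self.data)
        reward, terminated, truncated = 0.0, False, False  # task-specific in practice
        return self._obs(), reward, terminated, truncated, {}
```

Synchronous execution here means the policy and the simulator advance in lockstep, which keeps rollouts deterministic and avoids ROS-style asynchronous message passing.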
Uses VLM-based visual question answering to perceive scenes and control robots in MuJoCo simulation.
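A minimal sketch of a perceive-then-act loop in that style; the query_vlm() helper, the prompt, and the answer-to-action mapping are hypothetical stand-ins for whatever VLM client and policy the repository actually uses:

```python
# Sketch only: query_vlm() is a hypothetical VQA call; swap in a real
# vision-language-model client. The environment must render RGB frames
# (e.g. a Gymnasium env created with render_mode="rgb_array").
import numpy as np


def query_vlm(image: np.ndarray, question: str) -> str:
    """Hypothetical: send an RGB frame plus a question to a VLM, get text back."""
    raise NotImplementedError


def vqa_control_step(env, target="red cube"):  # target is an illustrative prompt slot
    frame = env.render()  # RGB array of the current MuJoCo scene
    answer = query_vlm(frame, f"Is the gripper above the {target}? Answer yes or no.")
    if "yes" in answer.lower():
        action = np.zeros(env.action_space.shape)  # hold position when aligned
    else:
        action = env.action_space.sample()  # placeholder policy; explore otherwise
    return env.step(action)
```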
Official WidowX deployment code for EO-1.