| layout | project_page |
|---|---|
| permalink | / |
| title | MOPS: Multi-Object Photoreal Simulation Dataset for Computer Vision in Robot Manipulation |
| authors | Maximilian X. Li, Paul Mattes, Nils Blank, Rudolf Lioutikov |
| affiliations | Intuitive Robots Lab, Karlsruhe Institute of Technology, Germany |
| paper | ./static/Li2026_MOPS.pdf |
| code | https://github.com/LiXiling/mops-data |
- **mops-data** — Image generation in ManiSkill3 *(Available)*
- **mops-il** — Full robot trajectories in RoboCasa v0.1 *(Coming Soon)*
- **Normalized asset management** across multiple 3D libraries with automatic part-level annotation and semantic scene understanding.
- **Comprehensive annotations** including RGB, depth, surface normals, segmentation masks, affordance maps, and 6D pose information.
- **Built on ManiSkill3 and SAPIEN** for physics-accurate simulation with photorealistic rendering and programmable scene generation.
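As a rough illustration of what one annotated frame carries, the channels above can be sketched as a record type. This is a minimal sketch only; the field names are illustrative assumptions, not the actual mops-data schema.

```python
from dataclasses import dataclass

@dataclass
class MopsFrame:
    """One rendered frame with the annotation channels listed above.

    Field names are hypothetical; consult the mops-data repository
    for the real on-disk layout.
    """
    rgb: list         # H x W x 3 color image
    depth: list       # H x W metric depth map
    normals: list     # H x W x 3 surface normals
    seg_mask: list    # H x W instance / part segmentation ids
    affordance: list  # H x W x A per-affordance heatmaps
    pose_6d: list     # per-object 6D poses (rotation + translation)
```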
| Dataset | Level | Aff. Labels | Obj. Cat. | Objects |
|---|---|---|---|---|
| RGB-D Part | Part | 7 | 17 | 105 |
| 3D-AffNet | Part | 16 | 23 | 22,949 |
| MOPS-Partnet | Part | 24 | 46 | 2,345 |
| MOPS-Robocasa | Object | 44 | 101 | 1,008 |
| MOPS (Total) | Mixed | 56 | 137 | 3,353 |
While 3D-AffNet has more instances, MOPS provides significantly higher taxonomic coverage across object categories and affordance types.
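The coverage claim can be checked directly from the table figures (transcribed below; only the two datasets being compared are included):

```python
# Figures transcribed from the comparison table above.
datasets = {
    "3D-AffNet":    {"aff_labels": 16, "obj_cats": 23,  "objects": 22_949},
    "MOPS (Total)": {"aff_labels": 56, "obj_cats": 137, "objects": 3_353},
}

mops, affnet = datasets["MOPS (Total)"], datasets["3D-AffNet"]
aff_ratio = mops["aff_labels"] / affnet["aff_labels"]  # 3.5x more affordance types
cat_ratio = mops["obj_cats"] / affnet["obj_cats"]      # ~6x more object categories
print(f"{aff_ratio:.1f}x affordance types, {cat_ratio:.1f}x object categories")
# → 3.5x affordance types, 6.0x object categories
```

So although 3D-AffNet has roughly seven times as many object instances, MOPS spans several times as many affordance types and object categories.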
Imitation learning on 24 RoboCasa tasks, evaluated over 10 environment seeds each
| Policy Inputs | Success Rate | Gain (pp) |
|---|---|---|
| RGB only | 13.33% | — |
| RGB + MOPS Affordances | 21.25% | +7.92 |
MOPS affordance annotations provide a consistent boost to imitation learning performance across 24 RoboCasa manipulation tasks.
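The reported gain is the absolute difference in success rate; a quick check from the table values also gives the relative improvement over the RGB-only baseline:

```python
# Success rates from the table above (percent).
rgb_only = 13.33
rgb_plus_affordances = 21.25

gain_pp = rgb_plus_affordances - rgb_only  # absolute gain in percentage points
relative = gain_pp / rgb_only              # relative improvement over baseline
print(f"+{gain_pp:.2f} pp ({relative:.0%} relative)")
# → +7.92 pp (59% relative)
```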
Prerequisites: Python 3.10 · CUDA-compatible GPU · 16 GB+ RAM
```shell
# Create and activate a fresh environment
conda create -n mops python=3.10
conda activate mops

# Install ManiSkill3, then MOPS from source
pip install mani_skill
git clone https://github.com/LiXiling/mops-data
cd mops-data
pip install -e .
```
@article{li2026mops,
  title  = {MOPS: Multi-Object Photoreal Simulation Dataset
            for Computer Vision in Robot Manipulation},
  author = {Maximilian Xiling Li and Paul Mattes and
            Nils Blank and Rudolf Lioutikov},
  year   = {2026}
}

This work is supported by the Intuitive Robots Lab at Karlsruhe Institute of Technology, Germany.



