Generalizable Neural Surface Reconstruction with Background Priors for Material-Agnostic Object Grasp Detection
Qingyu Fan1,2,3, Yinghao Cai1,2†, Chao Li3, Wenzhe He3,
Xudong Zheng3, Tao Lu1, Bin Liang3, Shuo Wang1,2
1 Institute of Automation, Chinese Academy of Sciences.
2 School of Artificial Intelligence, University of Chinese Academy of Sciences.
3 Qiyuan Lab.
†Corresponding Author
- [05/09/2025]: The code of NeuGrasp is released.
- [01/31/2025]: NeuGrasp is accepted to ICRA 2025.
[ICRA'25] This is the official repository of NeuGrasp: Generalizable Neural Surface Reconstruction with Background Priors for Material-Agnostic Object Grasp Detection.
In this paper, we introduce NeuGrasp, a neural surface reconstruction method that leverages background priors for material-agnostic grasp detection. NeuGrasp integrates transformers and global prior volumes to aggregate multi-view features with spatial encoding, enabling robust surface reconstruction in narrow and sparse viewing conditions. By focusing on foreground objects through residual feature enhancement and refining spatial perception with an occupancy-prior volume, NeuGrasp excels in handling objects with transparent and specular surfaces.
Please kindly star ⭐️ this project if it helps you 😁.
- ROS — Required for grasp and scene visualization.
- Blender 2.93.3 — Used for simulation and synthetic data generation. You can download it from the official Blender website.
To install the required Python packages, run:
```bash
pip install -r requirements.txt
```
⚠️ **Blender Python Dependencies**: Please ensure Blender is properly installed and accessible from the command line (i.e., the `blender` command works). Note also that Blender uses its own bundled Python environment for simulation scripts, so the required packages must be installed into Blender's Python as well. Run the following commands to set it up:

```bash
/path/to/blender-2.93.3-linux-x64/2.93/python/bin/python3.9 -m ensurepip --upgrade
/path/to/blender-2.93.3-linux-x64/2.93/python/bin/python3.9 -m pip install -r requirements.txt
```

🔁 Replace `/path/to/` with your actual Blender installation path.
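If you automate this setup, the bundled interpreter can be located programmatically instead of hard-coding the version. A minimal sketch (the helper name is ours; the `<version>/python/bin/python3.x` layout assumption matches the Linux Blender 2.93.x releases):

```python
from pathlib import Path

def find_bundled_python(blender_dir):
    """Return the path of Blender's bundled Python interpreter.

    Hypothetical helper: assumes the <version>/python/bin/python3.x
    directory layout of the Linux Blender 2.93.x releases.
    """
    hits = sorted(Path(blender_dir).glob("*/python/bin/python3*"))
    if not hits:
        raise FileNotFoundError(f"No bundled Python found under {blender_dir}")
    return hits[0]
```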
Before running the code, please ensure that all file and directory paths in the scripts (e.g., Blender path, dataset path, save directories) are correctly set according to your local environment.
For example, update:

```bash
BLENDER_PATH = "/path/to/blender-2.93.3-linux-x64/blender"
PYTHON_PATH = "/path/to/python"
```

🔁 Replace `/path/to/` with your actual working directory.
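A quick pre-flight check can catch misconfigured paths before a long run fails midway. A sketch (the function and the placeholder paths below are our assumptions, not part of the repo's scripts):

```python
import os

# Placeholder values: replace with your actual local paths.
CONFIG_PATHS = {
    "BLENDER_PATH": "/path/to/blender-2.93.3-linux-x64/blender",
    "PYTHON_PATH": "/path/to/python",
}

def missing_paths(paths):
    """Return the names of configured paths that do not exist on disk."""
    return [name for name, p in paths.items() if not os.path.exists(p)]
```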
- Download and extract the required assets into `data/`.
- Download the ImageNet 2012 test set and organize it as `src/assets/imagenet/images/test`. It is used for randomized texture assignment during simulation.
- Download the scene descriptor files from GIGA into `data/NeuGraspData/data/`.
- Download and extract the renderer module to `data_generator/render/`.
You can also download example generated data for reference.
Once all resources are prepared, run the following:
```bash
cd data_generator/render
bash run_packed_rand.sh
```

💡 For pile scenes and background data, follow the same procedure.
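Under the hood, scripted rendering of this kind typically launches Blender headless. If you need to wrap the render step yourself, the invocation can be built like this (`-b` runs Blender in background mode and `-P` executes a Python script; both are standard Blender CLI flags, while the script name and arguments below are placeholders):

```python
def blender_cmd(blender_bin, script, extra_args=()):
    """Build a headless Blender invocation.

    Arguments placed after '--' are ignored by Blender itself and can be
    read by the script via sys.argv. Pass the result to subprocess.run.
    """
    return [blender_bin, "-b", "-P", script, "--", *extra_args]
```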
After rendering is complete, process the raw data using:
```bash
python data_generator/render/wash.py
python data_generator/render/depth2tsdf.py
```

These scripts clean up the raw data and convert the rendered depth images into `.npz` TSDF files for downstream use.
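The exact interface of `depth2tsdf.py` is not shown here, but the underlying depth-to-TSDF conversion can be sketched in NumPy. This is a single-view sketch under our own assumptions (grid size, workspace extent, and truncation distance are illustrative defaults, not the repo's actual settings):

```python
import numpy as np

def depth_to_tsdf(depth, intrinsic, extrinsic,
                  grid_size=40, voxel_size=0.3 / 40, trunc_voxels=4):
    """Compute a truncated signed distance grid from one depth image.

    Projects each voxel center into the depth image and measures the
    signed distance to the observed surface, truncated to [-1, 1] in
    units of trunc_voxels * voxel_size. Unobserved voxels stay NaN.
    """
    # Voxel centers in world coordinates (grid anchored at the origin).
    idx = np.arange(grid_size)
    xx, yy, zz = np.meshgrid(idx, idx, idx, indexing="ij")
    pts = (np.stack([xx, yy, zz], -1).reshape(-1, 3) + 0.5) * voxel_size
    # Transform to the camera frame (extrinsic: world-to-camera, 3x4 or 4x4).
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
    cam = (extrinsic @ pts_h.T).T[:, :3]
    # Project to pixel coordinates.
    uv = (intrinsic @ cam.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    z = cam[:, 2]
    h, w = depth.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z > 0)
    # Signed distance in truncation units: positive in front of the surface.
    sdf = np.full(len(pts), np.nan)
    d = depth[v[valid], u[valid]]
    sdf[valid] = (d - z[valid]) / (trunc_voxels * voxel_size)
    return np.clip(sdf, -1.0, 1.0).reshape(grid_size, grid_size, grid_size)
```

A production converter would additionally fuse multiple views (weighted averaging of per-view SDFs) and mask invalid depth readings.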
Once the training data and configuration files are ready, run the following command:
```bash
bash train.sh GPU_ID
```

For example:

```bash
bash train.sh 0
```

We provide pretrained weights for direct evaluation. Please download the checkpoint and place it in:

```
src/neugrasp/ckpt/
```
To run simulation-based grasping:
```bash
bash run_simgrasp.sh
```

- **RViz Visualization**: To visualize the scene and predicted grasps in RViz, set `RVIZ=1` in the configuration and run the following commands beforehand:

  ```bash
  roscore
  rviz
  ```

  Then, in the RViz interface, open the config file `src/gd/config/sim.rviz`.

- **PyBullet Execution**: To visualize grasp execution in PyBullet, set `GUI=1`.
Our method is robust without geometry ground truth and can be further improved by fine-tuning on real-world data.

- We provide our collected data and partial scripts for reference.
- Due to the complexity of hardware setups, we only offer guidelines for real-world data collection:
  - The robot captures RGB-D images at the start of each scene.
  - A human operator manually moves the arm to a feasible grasp position.
  - Another person inputs `yes`, which triggers the gripper to close and saves the grasp pose and width.

We recommend writing a collection script tailored to your own setup. Useful reference functions can be found in the VGN repository.
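As a starting point, the save step of such a script might look like the following. The function name, file layout, and the 7-element pose convention (translation xyz plus quaternion xyzw) are our assumptions, not those of the provided scripts:

```python
import numpy as np
from pathlib import Path

def record_grasp(save_dir, scene_id, rgb, depth, pose, width):
    """Save one human-demonstrated grasp for a scene to an .npz file.

    pose:  assumed 7-vector (translation xyz + quaternion xyzw).
    width: gripper opening in meters at the moment of closing.
    """
    path = Path(save_dir) / f"{scene_id}.npz"
    np.savez_compressed(path, rgb=rgb, depth=depth,
                        pose=np.asarray(pose, dtype=np.float32),
                        width=np.float32(width))
    return path
```

Capturing the RGB-D images and reading the end-effector pose are hardware-specific; the VGN repository's camera and transform utilities are a useful reference for those parts.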
If you find our work helpful, please consider citing:
```bibtex
@INPROCEEDINGS{fan2025neugrasp,
  author={Fan, Qingyu and Cai, Yinghao and Li, Chao and He, Wenzhe and Zheng, Xudong and Lu, Tao and Liang, Bin and Wang, Shuo},
  booktitle={2025 IEEE International Conference on Robotics and Automation (ICRA)},
  title={NeuGrasp: Generalizable Neural Surface Reconstruction with Background Priors for Material-Agnostic Object Grasp Detection},
  year={2025},
  pages={3197-3203},
  doi={10.1109/ICRA55743.2025.11127348}
}
```

We would like to thank the authors of VGN and GraspNeRF for open-sourcing their codebases, which greatly inspired this work.
For questions or feedback, feel free to open an issue or reach out:
- Qingyu Fan: fanqingyu23@mails.ucas.edu.cn
- Yinghao Cai: yinghao.cai@ia.ac.cn
