
Grasp the Graph (GtG 1.0 & GtG 1.1): A Super Light Graph-RL Framework for Robotic Grasping

Welcome to the repository for the "Grasp the Graph" (GtG) framework! This project implements the method described in the GtG paper. We use CoppeliaSim as the simulation environment, together with key dependencies PyTorch, PyTorch Geometric, and Open3D.

The model is trained on scenes containing a single object. Three RGB-D cameras, positioned at 120-degree intervals around the scene, capture a comprehensive point cloud representation of the object.
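To illustrate how views from cameras at 120-degree intervals could be merged into a single cloud, here is a minimal NumPy sketch. The `fuse_point_clouds` and `yaw_extrinsic` helpers are hypothetical (the repository itself uses CoppeliaSim and Open3D for capture and processing), and the camera poses are simplified to pure yaw rotations for illustration.

```python
import numpy as np

def fuse_point_clouds(clouds, extrinsics):
    """Transform per-camera point clouds into a common world frame and merge.

    clouds: list of (N_i, 3) arrays, each in its camera's frame.
    extrinsics: list of 4x4 camera-to-world homogeneous transforms.
    """
    world_points = []
    for pts, T in zip(clouds, extrinsics):
        homog = np.hstack([pts, np.ones((len(pts), 1))])  # (N, 4) homogeneous
        world_points.append((homog @ T.T)[:, :3])          # apply transform
    return np.vstack(world_points)

def yaw_extrinsic(angle_deg, radius=1.0):
    """Hypothetical helper: a camera on a circle around the origin,
    with its pose reduced to a yaw rotation plus translation."""
    a = np.deg2rad(angle_deg)
    T = np.eye(4)
    T[:3, :3] = np.array([[np.cos(a), -np.sin(a), 0.0],
                          [np.sin(a),  np.cos(a), 0.0],
                          [0.0,        0.0,       1.0]])
    T[:3, 3] = [radius * np.cos(a), radius * np.sin(a), 0.0]
    return T

# Three cameras at 120-degree intervals, as in the training scene.
extrinsics = [yaw_extrinsic(a) for a in (0, 120, 240)]
clouds = [np.random.rand(100, 3) for _ in extrinsics]
merged = fuse_point_clouds(clouds, extrinsics)
print(merged.shape)  # (300, 3)
```

With fewer cameras, the same function simply merges fewer views, which matches the single- and two-camera evaluations reported below.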

Our test results show strong generalization and robustness to previously unseen objects. Even with a single camera, the model's performance remains stable. Notably, although the model is trained only on single-object scenes, it can also operate in scenes with multiple objects, provided they do not occlude each other. (This capability is not covered in the paper.)
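One way a pipeline could extend a single-object model to non-occluding multi-object scenes is to split the fused cloud into per-object clusters and run the model on each. The greedy Euclidean clustering below is an assumption for illustration, not the paper's method; `euclidean_clusters` and its `radius` parameter are hypothetical names.

```python
import numpy as np

def euclidean_clusters(points, radius=0.05):
    """Greedy single-linkage clustering: a point joins a cluster if it lies
    within `radius` of any existing member. Suitable only when objects are
    well separated (no contact or occlusion)."""
    n = len(points)
    labels = -np.ones(n, dtype=int)  # -1 means unassigned
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        # Flood-fill over the radius-neighborhood graph.
        queue = [i]
        labels[i] = current
        while queue:
            j = queue.pop()
            dists = np.linalg.norm(points - points[j], axis=1)
            neighbors = np.where((dists < radius) & (labels == -1))[0]
            labels[neighbors] = current
            queue.extend(neighbors.tolist())
        current += 1
    return labels

# Two well-separated blobs -> two clusters, each usable as a model input.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.01, (50, 3)),
                   rng.normal(1.0, 0.01, (50, 3))])
labels = euclidean_clusters(cloud, radius=0.1)
print(len(set(labels.tolist())))  # 2
```

This brute-force version is O(n^2); a KD-tree neighbor query would be the idiomatic choice for larger clouds.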

For convenience, we provide the checkpoint file "GtG_best.pth". Further details can be found in the paper, available on IEEE Xplore and ResearchGate, and on our Google Site.

Results

Training Result

[Figure: training plot]

Test Results - Single Object

3 Cameras: [figure]

2 Cameras: [figure]

1 Camera: [figure]

Overall Performance on Test Objects: [figure]

Test Results - Multi-Object

Grasping in a multi-object scene: [figure]

Contact

For any inquiries or feedback, feel free to reach out at AliRashidiMoghadam@gmail.com.
