Debojit-D/RL-Based-Dual-Arm-Manipulation

RL_WS Environment

This repository contains the setup and requirements for the RL_WS environment used for Reinforcement Learning (RL) tasks.

Setup Instructions

1. Clone the Repository

git clone https://github.com/Debojit-D/RL-Based-Dual-Arm-Manipulation.git
cd RL_WS

2. Create the Environment

Using Conda:

conda env create -f environment.yml
conda activate RL_WS

3. Alternative: Using Pip

If Conda is not available, you can use a virtual environment and pip:

python -m venv RL_WS
source RL_WS/bin/activate
pip install -r requirements

System Requirements

  • Python: 3.9
  • GPU: NVIDIA GPU with CUDA 11.8+ (optional, for accelerated training)
  • Frameworks: PyTorch (supports GPU acceleration)

Verifying the Installation

After setup, test the environment with:

python -c "import mujoco, gym; print('MuJoCo and Gym are ready!')"

For GPU support:

python -c "import torch; print(torch.cuda.is_available())"

Notes

  • The environment.yml file pins the dependencies for a reproducible Conda setup.
  • Use the requirements file for pip-based installations.

Customizing Robosuite Installation

To modify Robosuite components, we track specific files from the Conda installation path using symbolic links. The changes are stored in:

RL_WS/robosuite_installation_path_changes/

Tracking Modified Files

  1. Robosuite Environment & Manipulation Tasks

    • Modify lift_box.py (custom RL environment).
    • Update __init__.py inside environments/manipulation/.

  2. Custom Robot Model (Addverb Heal)

    • Modify addverb_heal/ assets inside robosuite_models/assets/robots/.
    • Update addverb_heal_robot.py in robosuite_models/robots/manipulators/.

How to Apply Changes

  • Keep these files in sync with the Conda installation via symbolic links (ln -s).
  • If needed, copy them back to the Conda path before running experiments.
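The copy-back step can be scripted. The sketch below locates the active environment's site-packages directory and copies one tracked file into the installed robosuite tree; the relative paths and file choice are illustrative assumptions, not the repo's exact layout.

```python
# Hedged sketch: find the active environment's site-packages and copy one
# tracked file back into the installed robosuite tree. The paths below are
# illustrative assumptions about the layout, not verified against the repo.
import pathlib
import shutil
import sysconfig

site_packages = pathlib.Path(sysconfig.get_paths()["purelib"])

tracked = pathlib.Path(
    "robosuite_installation_path_changes/robosuite/environments/manipulation/lift_box.py"
)
target = site_packages / "robosuite" / "environments" / "manipulation" / "lift_box.py"

print(f"would copy {tracked} -> {target}")
# Uncomment once both paths are verified for your environment:
# shutil.copy2(tracked, target)
```

A dry-run print like this is a cheap sanity check before overwriting files inside the Conda installation.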

Reinforcement Learning (RL) Setup

This repository includes RL training scripts for dual-arm manipulation using PPO (Proximal Policy Optimization).

Editing Reward Functions

Modify lift_box.py inside:

RL_WS/robosuite_installation_path_changes/robosuite/environments/manipulation/lift_box.py

Key function to edit:

def reward(self, action=None):
    """
    Defines the reward function for RL training.
    """
    ...
    return reward
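As a point of reference, a shaped lifting reward typically combines a reaching term with a height term. The sketch below is a self-contained illustration of that idea; the function name, arguments, scales, and weights are all assumptions, and the repo's actual reward() in lift_box.py may differ.

```python
import math

def lift_reward(cube_height, table_height, gripper_to_cube_dist,
                lift_target=0.10):
    """Shaped reward sketch: a reach term plus a lift term, each in [0, 1].

    Every name and weight here is an illustrative assumption; the real
    reward() lives in lift_box.py.
    """
    # Reaching: saturating bonus for closing the gripper-to-cube distance.
    reach = 1.0 - math.tanh(10.0 * gripper_to_cube_dist)
    # Lifting: linear credit for height gained above the table, capped once
    # the cube has been raised by lift_target metres.
    lift = min(max(cube_height - table_height, 0.0) / lift_target, 1.0)
    return 0.5 * reach + 0.5 * lift
```

Bounding each term in [0, 1] keeps the reward scale stable, which makes PPO hyperparameters easier to tune.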

Training & Testing

  • Training: Edit train_ppo.py to adjust hyperparameters (learning rate, batch size, etc.).
  • Testing: Modify test_ppo.py to visualize and debug policies.
  • Vectorized Environments: make_env.py supports multiple instances for parallel training.
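A common pattern for a make_env.py module is a factory that returns a zero-argument constructor per worker, so each parallel instance is built in its own process with its own seed. The sketch below uses a stand-in environment class; the make_env signature is an assumption about the repo's actual API.

```python
# Sketch of the per-worker env-factory pattern used for vectorized training.
# _StubEnv stands in for the real robosuite environment; make_env's signature
# is an assumption about the repo's make_env.py.
class _StubEnv:
    def __init__(self, seed):
        self.seed = seed

def make_env(rank, base_seed=0):
    """Return a zero-argument constructor so each worker builds its own env."""
    def _init():
        return _StubEnv(seed=base_seed + rank)
    return _init

# A vectorized wrapper (e.g. SubprocVecEnv) would receive this list of
# constructors; here we just call them directly.
env_fns = [make_env(rank) for rank in range(4)]
envs = [fn() for fn in env_fns]
print([env.seed for env in envs])  # prints [0, 1, 2, 3]
```

Returning constructors rather than environments matters for subprocess-based vectorization: each worker process builds its own environment instead of pickling a live one.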
How to Run Training

python train_ppo.py

Testing Trained Policy

python test_ppo.py

Logging & Debugging

  • Use TensorBoard for real-time monitoring:

tensorboard --logdir=./ppo_tensorboard --port=6006

About

This repository contains my current work on dual-arm manipulation using an RL-based approach.
