Jason Brown, Eli Fox, Jacob Harrelson, Srushti Hippargi
University of Michigan
In this work, we build upon the paper Learning to Grasp the Ungraspable with Emergent Extrinsic Dexterity by Wenxuan Zhou and David Held. The original paper demonstrates that a robot with a simple gripper can still perform complex manipulation tasks by exploiting contact with its environment. That work studies the task of "Occluded Grasping", which aims to reach grasp configurations that initially intersect with the environment. While the original work only considered occlusions by the ground, our code extends it to occlusions by side walls as well as unoccluded configurations. Our system trains a separate policy for each occlusion type and selects between them at run-time, as sketched below.
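As a rough illustration of that selection step, here is a minimal Python sketch. The function and variable names are placeholders invented for this sketch and do not exist in this repository; the actual selection happens in rollout.py, which loads one checkpoint per occlusion type (see the --load_ground_dir and --load_side_dir arguments below).

def select_policy(occlusion_type, ground_policy, side_policy):
    """Return the policy trained for the detected occlusion type.

    occlusion_type: "ground", "side", or None (unoccluded grasp).
    """
    if occlusion_type == "side":
        return side_policy
    # Assumption for this sketch: the ground-occlusion policy is also used
    # when the target grasp is not occluded at all.
    return ground_policy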
This repository contains the code for the simulation environment of the Occluded Grasping task and RL training and rollouts. The code for the real robot can be found in a separate repository.
This repository is built on top of robosuite-benchmark. The simulation environment is based on robosuite, and the RL training code is based on rlkit. As an overview of this repository, ungraspable/robosuite_env defines the Occluded Grasping task, and ungraspable/rlkit_utils defines helper functions used with rlkit.
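As a quick orientation for the environment side, the snippet below sketches how the task could be instantiated through robosuite's standard interface. The environment name OccludedGraspingSimEnv is taken from the example result folders; the import path, registration behavior, and keyword arguments are assumptions, so check ungraspable/robosuite_env for the actual entry point.

# Sketch only: exact module paths and constructor arguments may differ in this repository.
import robosuite as suite
from robosuite.wrappers import GymWrapper

import ungraspable.robosuite_env  # assumed to register the custom task with robosuite

# "OccludedGraspingSimEnv" is the environment name that appears in results/examples.
env = GymWrapper(suite.make(
    "OccludedGraspingSimEnv",
    has_renderer=False,   # standard robosuite flags; the task likely adds its own options
    use_camera_obs=False,
))

obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())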
Please feel free to contact us if you have any questions about the code or anything else related to our project!
The original paper is from the Robotics Institute, Carnegie Mellon University, and was presented at the Conference on Robot Learning (CoRL) 2022 (Oral): Paper | Website | Real robot code
Installation must be done on Linux / WSL.
Install Miniconda in your home directory using this tutorial. You should see that ~/miniconda3 is a directory afterwards.
Download MuJoCo 2.0 and extract the archive:
sudo apt-get install unzip
unzip mujoco200_linux.zip
After extracting the zip, move the extracted folder to ~/.mujoco/mujoco200.
Download the MuJoCo license file and put it into ~/.mujoco/mjkey.txt.
Run the following command:
sudo apt install libglew-dev
Add the following to your .bashrc, then source it.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/.mujoco/mujoco200/bin
export PATH="$LD_LIBRARY_PATH:$PATH"
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so
Clone the current repository:
git clone --recursive https://github.com/HarrelsonJ/DeepRob_Ungraspable.git
cd DeepRob_Ungraspable
Edit the robosuite subrepo to change the versions of some packages:
- In robosuite/requirements.txt, change the version of mujoco-py to 2.0.2.5
- In robosuite/setup.py, change the version of mujoco-py to 2.0.2.5
Run the script finish_install.sh and go through the final install process. This will create a conda environment with the required packages. IMPORTANT: We require the exact versions of robosuite and rlkit included in this repository.
Activate the conda environment.
conda activate ungraspable
Use viskit to visualize training log files. Do not install it in the above conda environment because there are compatibility issues.
Do not train on this branch! Training should be done on the ground_occlusion and side_occlusion branches for their respective policies.
To train a policy with the default configuration:
python train.py --ExpID 0000
The results will be saved under "./results" by default. During training, you can visualize the logged runs using viskit.
To train the policy with a multi-grasp curriculum:
python train.py --adr_mode 0001_ADR_MultiGrasp --ExpID 0001 --goal_range use_threshold
"--adr_mode" specifies an Automatic Domain Randomization (ADR) configuration file under ungraspable/rlkit_utils/adr_config. Similarly, to train the policy with ADR over physical parameters:
python train.py --adr_mode 0002_ADR_physics --ExpID 0002
We include the results of the above training commands in results/examples, including the trained models and training logs. You may visualize the training curves of these examples using viskit:
python your_viskit_folder/viskit/frontend.py ungraspable/results/examples
To visualize a trained policy with the onscreen MuJoCo renderer:
python rollout.py --load_ground_dir results/examples/Exp0000_OccludedGraspingSimEnv_tmp-0 --load_side_dir results/examples/Exp0003_OccludedGraspingSimEnv_tmp-0 --camera sideview --grasp_and_lift
Feel free to try out other checkpoints in the results/examples folder.
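If you prefer to script your own evaluation instead of using rollout.py, a checkpoint can usually be loaded along the following lines. This is a sketch that assumes the standard rlkit snapshot layout (a params.pkl file containing an "evaluation/policy" entry); the exact file name and keys saved by train.py may differ.

# Sketch: load an rlkit-style snapshot and roll out its policy.
# The path, file name, and dictionary keys are assumptions based on rlkit defaults.
import torch

snapshot = torch.load(
    "results/examples/Exp0000_OccludedGraspingSimEnv_tmp-0/params.pkl")
policy = snapshot["evaluation/policy"]

obs = env.reset()  # env constructed as in the earlier robosuite sketch
done = False
while not done:
    action, _ = policy.get_action(obs)  # rlkit policies return (action, agent_info)
    obs, reward, done, info = env.step(action)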
If you find this repository useful, please cite the original paper:
@inproceedings{zhou2022ungraspable,
title={Learning to Grasp the Ungraspable with Emergent Extrinsic Dexterity},
author={Zhou, Wenxuan and Held, David},
booktitle={Conference on Robot Learning (CoRL)},
year={2022}
}
