Reinforcement learning project aimed at having a Kinova Gen3 manipulator catch a ball in the air
- Make sure you have the `OmniIsaacGymEnvs` repository cloned to your device. Follow their installation instructions if not.
- Create a symbolic link from the directory `OmniIsaacGymEnvs/omniisaacgymenvs/cfg/task/` to `kinova_ball_catching_RL/config/KinovaTask.yaml`.
- Create a symbolic link from the directory `OmniIsaacGymEnvs/omniisaacgymenvs/cfg/train/` to `kinova_ball_catching_RL/config/KinovaTaskPPO.yaml`.
- Navigate to `OmniIsaacGymEnvs/omniisaacgymenvs/utils/task_util.py`.
- Inside the `import_tasks()` function, add `from kinova_task import KinovaTask`.
- Inside the `task_map` dictionary, add an entry `"KinovaTask": KinovaTask`.
- Add the `isaac_scripts` folder in this repo to your `PYTHONPATH` environment variable manually. Example: `export PYTHONPATH=$PYTHONPATH:/path/to/isaac_scripts`.
- Navigate to the `OmniIsaacGymEnvs/omniisaacgymenvs` folder.
- Run `/path/to/your/isaac-sim/python.sh scripts/rl_train.py task=KinovaTask`.
- Additional arguments you can pass to that script include:
  - `headless=True`
  - `num_envs=<how many robots you want to spawn>`
  - `test=True` if you want to examine a policy
  - `checkpoint=/path/to/a/checkpoint` (note that you always have to supply this to examine a policy; just setting `test` to `True` will not load a trained policy)
  - `max_iterations=<how many epochs to run>`. The default is 100 and is pretty quick (about 400,000 timesteps with 256 robots); setting it to 1000 takes much longer but gives very good results.
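Putting the arguments together, two typical invocations might look like the following. These are illustrative command lines, not runnable as-is: the Isaac Sim path and checkpoint path are placeholders you must replace with your own:

```shell
# Placeholder: wherever your Isaac Sim installation lives.
ISAAC=/path/to/your/isaac-sim

# Headless training with 256 robots for 1000 epochs:
"$ISAAC/python.sh" scripts/rl_train.py task=KinovaTask headless=True \
    num_envs=256 max_iterations=1000

# Examining the trained policy afterwards (checkpoint= is required;
# test=True alone does not load one):
"$ISAAC/python.sh" scripts/rl_train.py task=KinovaTask test=True \
    checkpoint=/path/to/a/checkpoint
```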