drl_local_planner_ros_stable_baselines

What is this repository for?

Prerequisites

Installation (Docker only)

Installation is performed via docker-compose. All corresponding files can be found in the 'docker' folder.

To install, run the following commands in the terminal:

# In terminal 1:

cd <path_to_dir>/drl_local_planner_ros_stable_baselines
./docker/scripts/build.sh

# In terminal 2 (once the build in terminal 1 is done and hanging):

./docker/scripts/save_build.sh

# Stop Terminal 1
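
If everything worked, the built image should appear in the local Docker image list. A quick sanity check (the grep pattern is an assumption; check docker/scripts/build.sh for the actual image name):

# List local images and filter for the planner image (name assumed)
docker images | grep -i drl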

Example usage

  1. Train agent

Run the command in the terminal (a sketch of the tmuxp session format follows these steps):

cd <path_to_dir>/drl_local_planner_ros_stable_baselines
tmuxp load ./docker/scripts/tmuxp/train_dummy_example.yaml

  2. Execute a self-trained PPO agent

cd <path_to_dir>/drl_local_planner_ros_stable_baselines
tmuxp load ./docker/scripts/tmuxp/run_dummy_example.yaml
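
Each tmuxp YAML file describes a tmux session in which every pane runs one command (ROS master, simulation, the agent). A minimal sketch of the tmuxp session format, with hypothetical window and command names; the actual session files live in docker/scripts/tmuxp/:

# Hypothetical tmuxp session file, shown only to illustrate the format
session_name: drl_dummy_example
windows:
  - window_name: ros
    panes:
      - roscore                              # ROS master
  - window_name: agent
    panes:
      - ./entrypoint_ppo2.sh dummy_agent 1   # agent plus one simulation (arguments assumed)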

Training

  1. In start_scripts/training_params/ppo2_params, define the new agent's training parameters. You can find examples of defining params for training from scratch (the pretrained_model_names field is empty) and for training on pretrained models. A sketch of a possible entry format follows this list.

    Parameter              Description
    agent_name             The name of the agent to be trained.
    total_timesteps        Number of timesteps the agent will be trained for.
    policy                 see PPO2 Doc
    gamma                  see PPO2 Doc
    n_steps                see PPO2 Doc
    ent_coef               see PPO2 Doc
    learning_rate          see PPO2 Doc
    vf_coef                see PPO2 Doc
    max_grad_norm          see PPO2 Doc
    lam                    see PPO2 Doc
    nminibatches           see PPO2 Doc
    noptepochs             see PPO2 Doc
    cliprange              see PPO2 Doc
    robot_radius           The radius of the robot footprint.
    rew_func               The reward function to use; reward functions are defined in rl_agent/src/rl_agent/env_utils/reward_container.py.
    num_stacks             The state representation includes the current observation and (num_stacks - 1) previous observations.
    stack_offset           The number of timesteps between each stacked observation.
    disc_action_space      0 for a continuous action space; 1 for a discrete action space.
    normalize              0 if the input should not be normalized; 1 if it should be.
    stage                  Stage number of your training. Set it to 0 when training for the first time. If > 0, the agent given by "pretrained_model_name" is loaded and training continues.
    pretrained_model_name  If stage > 0, this agent is loaded and training is continued from it.
    task_mode              "ped" to train on pedestrians only; "static" to train on static objects only; "ped_static" to train on both static objects and pedestrians.
  2. In docker/train.yml, add the desired agent name and the number of simulations to the row:

 ./entrypoint_ppo2.sh agent_name number_of_simulations
  3. Run the command and wait; training takes a very long time:
cd <path_to_dir>/drl_local_planner_ros_stable_baselines
./docker/scripts/train.sh
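
The exact on-disk format of ppo2_params is not reproduced here; as a purely illustrative sketch (all values are assumptions based on common stable-baselines PPO2 defaults, so copy an existing entry from the file as your starting point), one agent entry covering the parameters in the table above could look like this:

{
  "agent_name": "ppo2_my_agent",
  "total_timesteps": 10000000,
  "policy": "CnnPolicy",
  "gamma": 0.99,
  "n_steps": 128,
  "ent_coef": 0.005,
  "learning_rate": 0.00025,
  "vf_coef": 0.5,
  "max_grad_norm": 0.5,
  "lam": 0.95,
  "nminibatches": 4,
  "noptepochs": 4,
  "cliprange": 0.2,
  "robot_radius": 0.56,
  "rew_func": "rew_func_1",
  "num_stacks": 3,
  "stack_offset": 5,
  "disc_action_space": 1,
  "normalize": 0,
  "stage": 0,
  "pretrained_model_name": "",
  "task_mode": "ped_static"
}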

Run trained models

If you want a good (real) visualization, change this parameter in

<path_to_dir>/drl_local_planner_ros_stable_baselines/rl_bringup/config/rl_common.yaml:

train_mode: 2

Once everything is running, you can send a 2D Nav Goal in RViz to create a global path for the robot to follow.
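
The 2D Nav Goal tool in RViz publishes a geometry_msgs/PoseStamped on the standard /move_base_simple/goal topic, so a goal can also be sent from code. A minimal rospy sketch; the frame id and goal coordinates are assumptions and must match your setup:

#!/usr/bin/env python
# Publish a navigation goal, mimicking RViz's "2D Nav Goal" tool.
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node("send_nav_goal")
pub = rospy.Publisher("/move_base_simple/goal", PoseStamped, queue_size=1)
rospy.sleep(1.0)  # give subscribers time to connect

goal = PoseStamped()
goal.header.stamp = rospy.Time.now()
goal.header.frame_id = "map"   # assumption: the planner's global frame
goal.pose.position.x = 2.0     # hypothetical goal position
goal.pose.position.y = 1.0
goal.pose.orientation.w = 1.0  # identity orientation

pub.publish(goal)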

  1. 1 raw disc:
cd <path_to_dir>/drl_local_planner_ros_stable_baselines
tmuxp load ./docker/scripts/tmuxp/run_1_raw_disc.yaml
  2. 3 raw disc:
cd <path_to_dir>/drl_local_planner_ros_stable_baselines
tmuxp load ./docker/scripts/tmuxp/run_3_raw_disc.yaml
