silvery107/rl-mpc-locomotion


RL MPC Locomotion

This repo provides a fast simulation and RL training framework for quadruped locomotion, in which a policy dynamically predicts the weight parameters of an MPC controller. The control framework is a hierarchical controller composed of a higher-level policy network and a lower-level model predictive controller.

The MPC controller is based on MIT Cheetah Software but rewritten in Python. It fully decouples the interface between sensor data and motor commands, so the controller can be easily ported to any mainstream simulator.

The RL training runs in parallel in NVIDIA Isaac Gym using the Unitree Robotics Aliengo model, and the trained policy can be transferred from simulation to a real Aliengo robot (sim-to-real is not included in this codebase).
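The hierarchy described above can be sketched as follows. All names here are hypothetical illustrations, not the repo's actual API: at each high-level step, a policy network maps the robot observation to a vector of MPC cost weights, which the lower-level controller then uses to compute motor commands.

```python
import numpy as np

def policy_network(observation):
    # Hypothetical stand-in for the trained weight policy: maps the
    # observation to a vector of 12 positive MPC cost weights.
    return np.clip(np.abs(observation[:12]) + 1.0, 1.0, 10.0)

def mpc_step(state, weights):
    # Stub MPC: in the real controller these weights scale the
    # state-tracking cost of the optimization; here we simply apply a
    # weighted proportional correction toward a zero reference.
    desired = np.zeros_like(state)
    return -weights * (state - desired)

obs = np.random.default_rng(0).normal(size=24)
weights = policy_network(obs)      # high-level: policy predicts MPC weights
command = mpc_step(obs[:12], weights)  # low-level: MPC consumes the weights
```

The point of the split is that the policy only has to choose cost weights at a low rate, while the MPC handles fast, physically consistent control.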

Frameworks

Dependencies

Installation

  1. Clone this repository
    git clone [email protected]:silvery107/rl-mpc-locomotion.git
  2. Initialize submodules
     git submodule update --init
    Or use the --recurse-submodules option in step 1 to clone the submodules at the same time.
  3. Create the conda environment:
    conda env create -f environment.yml
  4. Install rsl_rl at commit 2ad79cf under the <extern> folder:
    cd extern/rsl_rl
    pip install -e .
  5. Compile the Python bindings of the MPC solver:
    pip install -e .

Quick Start

  1. Play the MPC controller on Aliengo:

    python RL_MPC_Locomotion.py --robot=Aliengo

    The supported robot types are Go1, A1, and Aliengo.

    Note that you need to plug in an Xbox-style gamepad to control the robot, or pass --disable-gamepad. The controller mode defaults to Fsm (Finite State Machine); you can also try Min for a minimal MPC controller without the FSM.

    • Gamepad keymap

      Press LB to switch gait types between Trot, Walk and Bound.

      Press RB to switch FSM states between Locomotion and Recovery Stand.
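     The keymap behavior above can be sketched as a tiny state machine. This is an illustration only (class and method names are hypothetical, not from the repo): one button cycles the gait types, the other toggles between the two FSM states.

```python
from itertools import cycle

class LocomotionFSM:
    """Hypothetical sketch of the gamepad-driven FSM described above."""
    GAITS = ["Trot", "Walk", "Bound"]

    def __init__(self):
        self._gaits = cycle(self.GAITS)
        self.gait = next(self._gaits)   # start in Trot
        self.state = "Locomotion"

    def press_LB(self):
        # LB cycles through the supported gait types.
        self.gait = next(self._gaits)

    def press_RB(self):
        # RB toggles between Locomotion and Recovery Stand.
        self.state = ("RecoveryStand" if self.state == "Locomotion"
                      else "Locomotion")

fsm = LocomotionFSM()
fsm.press_LB()  # gait: Trot -> Walk
fsm.press_RB()  # state: Locomotion -> RecoveryStand
```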

  2. Train a new policy:

    cd RL_Environment
    python train.py task=Aliengo headless=False

    Press the v key to pause viewer updates, and press it again to resume. Set headless=True to train without rendering.

    TensorBoard support is available; run tensorboard --logdir runs.

  3. Load a pretrained checkpoint:

    python train.py task=Aliengo checkpoint=runs/Aliengo/nn/Aliengo.pth test=True num_envs=4

    Set test=False to continue training.

  4. Run the pretrained weight policy for the MPC controller on Aliengo: set bridge_MPC_to_RL to False in <MPC_Controller/Parameters.py>, then run

    python RL_MPC_Locomotion.py --robot=Aliengo --mode=Policy --checkpoint=path/to/ckpt

    If no checkpoint is given, it will load the latest run.
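     "Load the latest run" can be understood as picking the most recently modified checkpoint file under the runs directory. A minimal sketch of that idea (the helper name and lookup logic here are assumptions, not the repo's actual code):

```python
from pathlib import Path

def latest_checkpoint(runs_dir="runs"):
    # Hypothetical helper: return the most recently modified .pth file
    # under runs_dir, or None if no checkpoint exists yet.
    ckpts = sorted(Path(runs_dir).rglob("*.pth"),
                   key=lambda p: p.stat().st_mtime)
    return ckpts[-1] if ckpts else None
```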

Roadmap

User Notes

Gallery