Official code of L2Calib.

L2Calib (Learn-To-Calibrate) is a reinforcement learning-based extrinsic calibration method. It currently supports LiDAR-IMU extrinsic calibration and achieves performance comparable to, or better than, traditional calibration approaches.
Dependencies:

- Ceres 1.14
- Sophus 1.22
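If your system doesn't package these exact versions, a minimal sketch for building them from source is below. The release tags (`1.14.0`, `1.22.10`), the install step, and the assumption that Eigen, glog, and gflags are already installed are ours, not from this repo:

```bash
# Build the pinned dependencies from source (assumes Eigen/glog/gflags present).
# Ceres Solver 1.14
git clone --branch 1.14.0 https://github.com/ceres-solver/ceres-solver.git
cmake -S ceres-solver -B ceres-solver/build -DCMAKE_BUILD_TYPE=Release
cmake --build ceres-solver/build -j"$(nproc)"
sudo cmake --install ceres-solver/build

# Sophus 1.22
git clone --branch 1.22.10 https://github.com/strasdat/Sophus.git
cmake -S Sophus -B Sophus/build -DCMAKE_BUILD_TYPE=Release
cmake --build Sophus/build -j"$(nproc)"
sudo cmake --install Sophus/build
```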
- Clone the repository:

  ```bash
  cd
  git clone https://github.com/APRIL-ZJU/learn-to-calibrate.git
  cd learn-to-calibrate
  git submodule update --init --recursive  # pulls Traj-LO; skip if you already have a reference trajectory
  ```
- Create the conda environment:

  ```bash
  conda create -n L2Calib python=3.11
  conda activate L2Calib
  pip install numpy==1.26.4 torch==2.4.0 empy==3.3.4 pybind11==2.13.1
  ```
- Build the RL environment:

  ```bash
  cd Environment
  export CMAKE_PREFIX_PATH="$(python -m pybind11 --cmakedir)":$CMAKE_PREFIX_PATH
  zsh build.sh
  ```
- Before running, make sure both paths are on `PYTHONPATH`:

  ```bash
  export PYTHONPATH=${HOME}/learn-to-calibrate/Environment/build/app:$PYTHONPATH
  export PYTHONPATH=${HOME}/learn-to-calibrate/rl_solver:$PYTHONPATH
  ```
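A quick way to confirm the exports took effect (this imports nothing project-specific, it only inspects `sys.path`):

```bash
# Optional sanity check: both learn-to-calibrate paths should be listed.
python -c "import sys; print([p for p in sys.path if 'learn-to-calibrate' in p])"
```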
- Download our handheld rosbag `csc_01.bag`.
- Replace the `bag_dir` variable in `calib_csc.sh`.
- Run `zsh demo/calib_csc.sh`.
- Download NTU VIRAL or MCD VIRAL.
- Replace the `bag_dir` variable in `calib_ntu.sh` / `calib_mcd.sh`.
- Run `zsh demo/calib_ntu.sh` or `zsh demo/calib_mcd.sh`.
- Put your rosbag in an empty folder.
- Specify the `fastlio_config`, `bag_dir`, `lidar_type`, `imu_topic`, and `rough_trans` variables in `demo/calib.sh` (see the sketch after this list). `rough_trans` is a rough estimate of the translation between the two sensors, where 0.1 means 10 cm.
- Note that if you already have the reference IMU trajectory, e.g., the IMU ground-truth trajectory obtained from a Mocap/RTK system, you can set `use_imu` to `True`, ignore the steps below, and just run:

  ```bash
  python train.py --lio-config {FASTLIO CONFIG} --bag-dir {BAG DIR} --alg ppo --SO3-distribution Bingham --num-epochs 8000 --min -{ROUGH} --max 0.1
  ```

- Specify the Traj-LO configuration under `Environment/Traj-LO/data/`. Only the `topic` needs to be specified.
- Run `zsh demo/calib.sh`.
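For reference, here is a hypothetical set of values for the variables in `demo/calib.sh`. The variable names come from the list above; every value (config file, paths, topic, LiDAR type) is a placeholder to replace with your own setup:

```bash
# Hypothetical example values for demo/calib.sh; all values are placeholders.
fastlio_config=avia.yaml           # FAST-LIO config matching your LiDAR
bag_dir=${HOME}/data/my_calib_bag  # empty folder containing only your rosbag
lidar_type=livox                   # your LiDAR type
imu_topic=/livox/imu               # IMU topic recorded in the bag
rough_trans=0.1                    # rough sensor translation: 0.1 = 10 cm
```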
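And a filled-in sketch of the `use_imu` shortcut command above, with the `{...}` placeholders replaced by assumed example values (a FAST-LIO config, a bag folder, and a 10 cm rough translation):

```bash
# Example only: {FASTLIO CONFIG}, {BAG DIR}, and {ROUGH} filled with assumed values.
python train.py --lio-config avia.yaml --bag-dir ${HOME}/data/my_calib_bag \
  --alg ppo --SO3-distribution Bingham --num-epochs 8000 --min -0.1 --max 0.1
```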
Thanks for these awesome works:

- Traj-LO (LiDAR-only odometry)
- Faster-LIO (tightly-coupled LIO)
- BPP (Bingham policy parameterization for RL)

The parallel environment is adapted from Fast-Evo (a faster evo-ape/traj implemented in C++20).

Thanks to Chengrui Zhu for implementing the PPO RL algorithm.