Parameters

Parameters for the ORACLE family of methods

CHECKPOINT PATHS

  • CPN_TF_CHECKPOINT_PATH: list of TensorFlow weight files for the Collision Prediction Network (CPN)
  • CPN_TRT_CHECKPOINT_PATH: list of folders containing the corresponding TensorRT (TRT) files (run the optimize scripts to create them)
  • seVAE_CPN_TF_CHECKPOINT_PATH: list of TensorFlow weight files for the CPN part of seVAE-ORACLE
  • seVAE_CPN_TRT_CHECKPOINT_PATH: list of folders containing the corresponding TRT files (run the optimize scripts to create them)
  • IPN_TF_CHECKPOINT_PATH: TensorFlow weight file for the Information gain Prediction Network (IPN)
  • IPN_TRT_CHECKPOINT_PATH: folder containing the corresponding TRT files (run the optimize scripts to create them)
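
The CPN entries are lists with one element per ensemble member, while the IPN entries point to a single file/folder. A minimal, hypothetical sketch of how these could be written in a Python config (all paths and the ensemble size below are placeholders, not the repository's actual values):

```python
# Hypothetical layout of the checkpoint-path parameters (placeholder paths).
N_E = 3  # number of CPNs in the Deep Ensemble (see "Uncertainty-aware" below)

CPN_TF_CHECKPOINT_PATH = [f"models/cpn/member_{i}/weights.h5" for i in range(N_E)]
CPN_TRT_CHECKPOINT_PATH = [f"models/cpn_trt/member_{i}/" for i in range(N_E)]  # folders created by the optimize scripts

IPN_TF_CHECKPOINT_PATH = "models/ipn/weights.h5"  # a single weight file
IPN_TRT_CHECKPOINT_PATH = "models/ipn_trt/"       # a single folder of TRT files
```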

PLANNING PARAMS

  • PLANNING_TYPE: 0 (end2end ORACLE), 1 (seVAE-ORACLE), 2 (A-ORACLE), 3 (Voxblox-expert)

ROS topics on the real robot

  • ROBOT_DEPTH_TOPIC: the topic of the depth image, type sensor_msgs/Image, depth unit: mm
  • ROBOT_ODOM_TOPIC: the topic of the robot's odometry in the world frame (for calculating the unit goal vector and checking if the robot has reached the waypoints), type nav_msgs/Odometry
  • ROBOT_CMD_TOPIC: the command topic, type geometry_msgs/Twist, containing the 3D velocity command in the vehicle frame (yaw-rotated world frame) plus the reference yaw angle (the angular.z field is used to store the reference yaw angle, not a yaw rate); see the sketch after this list
  • ROBOT_MASK_TOPIC: the interestingness mask topic for the visually attentive navigation task, type sensor_msgs/Image. This mask is used only when PLANNING_TYPE >= 2. It assigns each pixel of the depth image a value from 0 (least interesting) to 255 (most interesting).
  • TRAJECTORY_TOPIC: visualization of the trajectory endpoints, type visualization_msgs/MarkerArray, estimated (roughly) from the motion primitives library and the robot's initial state.
  • ROBOT_LATENT_TOPIC: the latent_vector topic when using PLANNING_TYPE = 1, type std_msgs/Float32MultiArray
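
For illustration, a minimal rospy sketch of what a message on ROBOT_CMD_TOPIC carries, as described above (the topic name, values, and yaw unit are assumptions for this example):

```python
# Hypothetical example of publishing one command on ROBOT_CMD_TOPIC.
# linear.{x,y,z}: 3D velocity command in the vehicle frame (yaw-rotated world frame)
# angular.z: reference yaw angle (reused field; assumed radians here), NOT a yaw rate
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("oracle_cmd_example")
cmd_pub = rospy.Publisher("/robot/cmd_vel", Twist, queue_size=1)  # placeholder topic name

cmd = Twist()
cmd.linear.x = 1.5   # forward velocity [m/s]
cmd.linear.y = 0.0
cmd.linear.z = 0.2   # vertical velocity [m/s]
cmd.angular.z = 0.7  # reference yaw angle
cmd_pub.publish(cmd)
```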

Motion Primitives Library (MPL) params

  • PLANNING_HORIZONTAL_FOV: horizontal FOV of the MPL (in inference phase), unit: degrees
  • PLANNING_VERTICAL_FOV: vertical FOV of the MPL (in inference phase), unit: degrees
  • STEPS_TO_REPLAN: replan after receiving this many depth images
  • CMD_VELOCITY: forward velocity (in the current implementation, all the primitives in the MPL share the same forward velocity, though this is not a requirement of the methods). Note: CMD_VELOCITY should be less than VEL_MAX = MAX_RANGE / (ACTION_HORIZON * SKIP_STEP_GENERATE * DEPTH_TS); see the sketch after this list.
  • NUM_VEL_X: the number of discrete forward velocities in the MPL (only 1 is supported now)
  • NUM_VEL_Z: the number of discrete vertical velocities in the MPL
  • NUM_YAW: the number of discrete steering angles in the MPL
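
A small sketch of the VEL_MAX bound mentioned in the CMD_VELOCITY note, with illustrative values for the other parameters (they are defined in the sections below):

```python
# Illustrative check of CMD_VELOCITY < VEL_MAX = MAX_RANGE / (ACTION_HORIZON * SKIP_STEP_GENERATE * DEPTH_TS)
MAX_RANGE = 10.0          # max depth range [m]
ACTION_HORIZON = 10       # prediction horizon H
SKIP_STEP_GENERATE = 5    # depth images skipped per prediction step
DEPTH_TS = 1.0 / 15.0     # 1 / depth sensor FPS [s]

VEL_MAX = MAX_RANGE / (ACTION_HORIZON * SKIP_STEP_GENERATE * DEPTH_TS)  # = 3.0 m/s here
CMD_VELOCITY = 1.5
assert CMD_VELOCITY < VEL_MAX, "CMD_VELOCITY must stay below VEL_MAX"
```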

Inference type

  • COLLISION_USE_TENSORRT: use the TensorRT (True) or Tensorflow (False) model for CPN
  • INFOGAIN_USE_TENSORRT: use the TensorRT (True) or Tensorflow (False) model for IPN

Visualization

  • ENABLE_VISUALIZATION: publish messages for visualizing the networks' predictions in RViz
  • VISUALIZATION_MODE: 0 (visualize only the timestamps at the end of the prediction horizon), 1 (visualize all timestamps), 3 (visualize only the timestamps at the end of the prediction horizon, for all the networks in the ensemble). The visualized positions of the robot at future timestamps are obtained by integrating the first-order approximations of the velocity controllers and the yaw controller using the two parameters below:
  • ALPHA_V: coefficient of the first-order approximation of the velocity controllers (obtained by discretizing the transfer function $\frac{1}{T_{vel} \times s + 1}$ with $T_{sampling} =$ SKIP_STEP_GENERATE $\times$ DEPTH_TS $\times ~0.1$)
  • ALPHA_PSI: coefficient of the first-order approximation of the yaw controller (obtained by discretizing the transfer function $\frac{1}{T_{yaw} \times s + 1}$ with $T_{sampling} =$ SKIP_STEP_GENERATE $\times$ DEPTH_TS $\times ~0.1$)
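
As a sketch of where such coefficients come from, assuming a standard zero-order-hold style discretization of $\frac{1}{T \times s + 1}$ and illustrative time constants (the repository's actual values and derivation may differ):

```python
import numpy as np

# Discretizing 1 / (T*s + 1) with sampling time Ts gives (one common form):
#   x[k+1] = alpha * x[k] + (1 - alpha) * x_ref[k],  alpha = exp(-Ts / T)
# The time constants below are assumptions for illustration only.
SKIP_STEP_GENERATE = 5
DEPTH_TS = 1.0 / 15.0
T_SAMPLING = SKIP_STEP_GENERATE * DEPTH_TS

T_VEL = 0.7   # assumed velocity-controller time constant [s]
T_YAW = 0.4   # assumed yaw-controller time constant [s]
ALPHA_V = np.exp(-T_SAMPLING / T_VEL)
ALPHA_PSI = np.exp(-T_SAMPLING / T_YAW)

# Rolling the velocity model forward over the prediction horizon:
v, v_ref = 0.0, 1.5
for _ in range(10):
    v = ALPHA_V * v + (1.0 - ALPHA_V) * v_ref
```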

Note: The green markers correspond to the estimated trajectory endpoints of the safe action sequences while the blue marker denotes the estimated trajectory endpoint of the chosen action sequence.

(Figure: RViz markers showing the estimated trajectory endpoints)

Use image pre-processing step in the real robot?

  • USE_D455_HOLE_FILLING: turn off (True) or on (False) the image pre-processing step on the real robot that fills in the missing pixels in the real-world depth images

Collision cost

  • DEADEND_COL_SCORE_THRESHOLD_HIGH: when the collision cost of the safest action sequence is greater than this threshold, a dead-end is detected and we allow the robot to yaw in one spot to find a new free direction
  • DEADEND_COL_SCORE_THRESHOLD_LOW: the robot can exit the yaw in one spot mode when the collision cost of the safest action sequence is smaller than this threshold
  • TIME_WEIGHT_FACTOR: time-step weighting factor $\lambda$ ($\lambda = 0$: every future time step in the prediction horizon is weighted the same when calculating the collision cost for each primitive, $\lambda > 0$: nearer future time steps are weighted more)
  • COLLISION_THRESHOLD: the collision threshold $c_{th}$ (compared to the safest action sequence) to classify an action sequence as "safe". Lowering this value will lead to more conservative navigation behavior.
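
A hedged sketch of how these four parameters could interact, assuming an exponential form for the time-step weighting (the exact weighting and cost aggregation in the code may differ):

```python
import numpy as np

# Assumed form: weight per-step collision probabilities with exp(-lambda * step),
# keep the primitives whose weighted cost is within COLLISION_THRESHOLD of the
# safest one, and declare a dead end when even the safest cost is too high.
TIME_WEIGHT_FACTOR = 0.1                 # lambda; 0 -> uniform weights
COLLISION_THRESHOLD = 0.05               # c_th
DEADEND_COL_SCORE_THRESHOLD_HIGH = 0.4   # illustrative value

p_col = np.random.rand(25, 10)           # (num_primitives, horizon) collision probabilities
steps = np.arange(p_col.shape[1])
weights = np.exp(-TIME_WEIGHT_FACTOR * steps)
cost = (p_col * weights).sum(axis=1) / weights.sum()

safest_cost = cost.min()
safe_mask = cost <= safest_cost + COLLISION_THRESHOLD       # "safe" action sequences
dead_end = safest_cost > DEADEND_COL_SCORE_THRESHOLD_HIGH   # trigger yaw-in-one-spot mode
```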

Waypoint params

  • WAYPOINT_FILE: path to the file containing the list of waypoints (we provide some example waypoint files in the waypoints folder)
  • WAYPOINT_DISTANCE_THRESHOLD: distance to check if the waypoint has been reached (only in the x-y plane), unit: meters
  • WAYPOINT_YAW_THRESHOLD: yaw difference to check if the robot has finished yaw in one spot action, unit: degrees
  • ALLOW_YAW_AT_WAYPOINT: allow the robot to yaw in one spot to face the next waypoint upon reaching a waypoint
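
A minimal sketch of the two checks described above (threshold values are illustrative):

```python
import numpy as np

WAYPOINT_DISTANCE_THRESHOLD = 1.0   # [m], checked in the x-y plane only
WAYPOINT_YAW_THRESHOLD = 10.0       # [deg]

def waypoint_reached(robot_xy, waypoint_xy):
    """True when the robot is within the distance threshold of the waypoint (x-y plane)."""
    return np.linalg.norm(np.asarray(robot_xy) - np.asarray(waypoint_xy)) < WAYPOINT_DISTANCE_THRESHOLD

def yaw_finished(robot_yaw_deg, target_yaw_deg):
    """True when the yaw-in-one-spot action is considered finished."""
    err = (robot_yaw_deg - target_yaw_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return abs(err) < WAYPOINT_YAW_THRESHOLD
```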

Uncertainty-aware

  • N_E: the number of CPNs in the Deep Ensembles
  • USE_UT: use the Unscented Transform or not

Noise parameters

  • USE_ADDITIVE_GAUSSIAN_IMAGE_NOISE: simulate additive Gaussian noise on the depth image
  • USE_ADDITIVE_GAUSSIAN_STATE_NOISE: simulate velocity noise on the robot's velocity estimate
  • IMAGE_NOISE_FACTOR: quadratic coefficient for the depth image noise model described in Link (value range: 0 - 0.005)
  • P_vx, P_vy, P_vz: diagonal values of the velocity estimate's covariance matrix ($diag([\sigma_{vx}^2, \sigma_{vy}^2, \sigma_{vz}^2])$)
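
A sketch of the simulated noise, assuming the depth noise standard deviation grows quadratically with depth ($\sigma =$ IMAGE_NOISE_FACTOR $\times d^2$) and the velocity noise is zero-mean Gaussian with the given covariance; all values below are illustrative:

```python
import numpy as np

IMAGE_NOISE_FACTOR = 0.002            # quadratic depth-noise coefficient (assumed model)
P_vx, P_vy, P_vz = 0.01, 0.01, 0.01   # velocity-estimate variances [m^2/s^2]

depth = np.random.uniform(0.5, 10.0, size=(270, 480))                 # depth image [m]
noisy_depth = depth + np.random.normal(0.0, IMAGE_NOISE_FACTOR * depth**2)

vel = np.array([1.0, 0.0, 0.2])                                        # velocity estimate [m/s]
noisy_vel = vel + np.random.normal(0.0, np.sqrt([P_vx, P_vy, P_vz]))   # std dev = sqrt(variance)
```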

A-ORACLE params

  • TIMEOUT_TYPE: timeout type for the visually-attentive navigation task, i.e., the condition used to switch back to normal ORACLE:
    • 0: after TIME_ALLOWED $\times$ T_STRAIGHT, where T_STRAIGHT is the time to travel the straight-line connection between the waypoints at velocity CMD_VELOCITY
    • 1: when total_time_from_previous_waypoint + current_distance_to_next_waypoint / CMD_VELOCITY < TIME_ALLOWED $\times$ T_STRAIGHT
    • 2 (default): when total_distance_from_previous_waypoint + current_distance_to_next_waypoint < TIME_ALLOWED $\times$ D_STRAIGHT, where D_STRAIGHT is the length of the straight-line connection between the waypoints
  • TIME_ALLOWED: parameter to determine timeout period (check TIMEOUT_TYPE description)

DATA COLLECTION AND EVALUATION PARAMS IN SIM

ROS topics in sim

  • SIM_DEPTH_TOPIC: the topic of the depth image in the simulator, type sensor_msgs/Image, unit: meters
  • SIM_ODOM_TOPIC: the topic of the robot's odometry in the world frame in the simulator (for calculating the unit goal vector and checking if the robot has reached the waypoints), type nav_msgs/Odometry
  • SIM_CMD_TOPIC: the command topic in the simulator, type geometry_msgs/Twist, containing the 3D velocity command in the vehicle frame (yaw-rotated world frame) plus the reference yaw angle (the angular.z field is used to store the reference yaw angle, not a yaw rate)
  • SIM_IMU_TOPIC: the IMU topic in the simulator, type sensor_msgs/Imu
  • SIM_MASK_TOPIC: the interestingness mask topic in the simulator for the visually attentive navigation task, type sensor_msgs/Image. This mask is used only when PLANNING_TYPE >= 2. It assigns each pixel of the depth image a value from 0 (least interesting) to 255 (most interesting).
  • SIM_LATENT_TOPIC: the latent_vector topic in the simulator when using PLANNING_TYPE = 1, type std_msgs/Float32MultiArray

Depth sensor params

  • MAX_RANGE: max range of the depth image, unit: meters
  • HORIZONTAL_FOV: horizontal FOV of the depth camera, unit: degrees
  • VERTICAL_FOV: vertical FOV of the depth camera, unit: degrees
  • DEPTH_TS: inverse of the depth sensor's FPS, i.e., the time between consecutive depth frames, unit: seconds
  • DEPTH_CX, DEPTH_CY, DEPTH_FX, DEPTH_FY: depth camera's intrinsic params for depth_to_pcl
  • CAM_PITCH: pitch angle of the depth camera with respect to the robot's body frame, positive value: pitch down, negative value: pitch up
  • t_BC: coordinate of the depth camera's origin in the robot's body frame (the axes in the body frame follow the ROS convention where $x_B, y_B, z_B$ point forward, to the left of the robot and upward, respectively). Format: np.array([[$x_{BC}$], [$y_{BC}$], [$z_{BC}$]])
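
For reference, the intrinsics are used for the standard pinhole back-projection in depth_to_pcl; a sketch assuming the usual camera-optical-frame convention (the repository's implementation may order axes or handle invalid pixels differently):

```python
import numpy as np

# Illustrative intrinsics (placeholder values).
DEPTH_FX, DEPTH_FY = 386.0, 386.0
DEPTH_CX, DEPTH_CY = 240.0, 135.0

def depth_to_pcl(depth):
    """depth: (H, W) array in meters -> (H*W, 3) points in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - DEPTH_CX) * depth / DEPTH_FX
    y = (v - DEPTH_CY) * depth / DEPTH_FY
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```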

Data collection params

  • THRESHOLD_DISTANCE: threshold distance to record one data point, unit: meters
  • SKIP_STEP_GENERATE: the number of depth images to skip before recording one data point
  • ACTION_HORIZON: prediction horizon H, or the length of the action sequence in the MPL

Simulation evaluation params

  • NUM_EPISODES_EVALUATE: the number of episodes to evaluate in simulation
  • EPISODE_TIMEOUT: timeout period for each episode
  • MAX_INITIAL_X, MAX_INITIAL_Y, MAX_INITIAL_Z, MAX_INITIAL_YAW, MIN_INITIAL_X, MIN_INITIAL_Y, MIN_INITIAL_Z, MIN_INITIAL_YAW: parameters to randomize the initial pose of the robot with uniform distribution, $x,y,z$ are in meters while the yaw angle is in degrees
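
A short sketch of the uniform initial-pose randomization (bounds are illustrative):

```python
import numpy as np

MIN_INITIAL_X, MAX_INITIAL_X = -5.0, 5.0            # [m]
MIN_INITIAL_Y, MAX_INITIAL_Y = -5.0, 5.0            # [m]
MIN_INITIAL_Z, MAX_INITIAL_Z = 1.0, 2.0             # [m]
MIN_INITIAL_YAW, MAX_INITIAL_YAW = -180.0, 180.0    # [deg]

x = np.random.uniform(MIN_INITIAL_X, MAX_INITIAL_X)
y = np.random.uniform(MIN_INITIAL_Y, MAX_INITIAL_Y)
z = np.random.uniform(MIN_INITIAL_Z, MAX_INITIAL_Z)
yaw = np.random.uniform(MIN_INITIAL_YAW, MAX_INITIAL_YAW)
```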

Flightmare params (only used when SIM_USE_FLIGHTMARE = True): these must match the config from agile_autonomy

  • SIM_USE_FLIGHTMARE: use Flightmare for evaluation in sim, only used when RUN_IN_SIM = True
  • SPACING: spacing between trees or objects. For comparison with Agile, this needs to be the same as test_time/spacings param in agile_autonomy/planner_learning/config/test_settings.yaml
  • UNITY_START_POS: start pose for the robot in Flightmare, format: [x, y, z, yaw]; when this is used, the MAX_INITIAL_... / MIN_INITIAL_... params above are ignored
  • TAKEOFF_HEIGHT: takeoff height of the robot, the same as /hummingbird/autopilot/optitrack_start_height in agile_autonomy
  • CRASHED_THR: collision radius in Flightmare
  • EXPERT_FOLDER: the path to the agile_autonomy/data_generation/data/ folder on your system

TRAINING AND OPTIMIZING PARAMS

  • TRAIN_INFOGAIN: process the recorded files to train ORACLE (False) or A-ORACLE (True). Please check Link
  • EVALUATE_MODE: collect data (False) or evaluate the trained network (True) in sim, only used when RUN_IN_SIM = True

Network's params

  • DI_SHAPE: input shape of the depth image, format (height, width, 1)
  • SKIP_STEP_INFERENCE_INFOGAIN: run IPN inference only once every this many steps
  • DI_LATENT_SIZE: size of the latent vector (for PLANNING_TYPE = 1)