# Parameters
*HuanNguyenARL edited this page Sep 26, 2023 · 6 revisions*
- `CPN_TF_CHECKPOINT_PATH`: list of TensorFlow weight files for the Collision Prediction Network (CPN)
- `CPN_TRT_CHECKPOINT_PATH`: list of folders containing TRT files (run the optimize scripts to create the TRT files)
- `seVAE_CPN_TF_CHECKPOINT_PATH`: list of TensorFlow weight files for the CPN part of seVAE-ORACLE
- `seVAE_CPN_TRT_CHECKPOINT_PATH`: list of folders containing TRT files (run the optimize scripts to create the TRT files)
- `IPN_TF_CHECKPOINT_PATH`: TensorFlow weight file for the Information gain Prediction Network (IPN)
- `IPN_TRT_CHECKPOINT_PATH`: folder containing TRT files (run the optimize scripts to create the TRT files)

- `PLANNING_TYPE`: 0 (end2end ORACLE), 1 (seVAE-ORACLE), 2 (A-ORACLE), 3 (Voxblox-expert)

- `ROBOT_DEPTH_TOPIC`: topic of the depth image, type `sensor_msgs/Image`, depth unit: mm
- `ROBOT_ODOM_TOPIC`: topic of the robot's odometry in the world frame (used for calculating the unit goal vector and checking whether the robot has reached the waypoints), type `nav_msgs/Odometry`
- `ROBOT_CMD_TOPIC`: command topic, type `geometry_msgs/Twist`, containing the 3D velocity command in the vehicle frame (yaw-rotated world frame) plus the reference yaw angle (we use the `angular.z` field to store the reference yaw angle!)
- `ROBOT_MASK_TOPIC`: interestingness mask topic for the visually attentive navigation task, type `sensor_msgs/Image`. This mask is used only when `PLANNING_TYPE >= 2`. Each pixel of the depth image is assigned a value from 0 (lowest) to 255 (highest) based on its interestingness.
- `TRAJECTORY_TOPIC`: visualization of the trajectory endpoints, type `visualization_msgs/MarkerArray`, estimated (roughly) from the motion primitives library and the robot's initial state
- `ROBOT_LATENT_TOPIC`: latent-vector topic when using `PLANNING_TYPE = 1`, type `std_msgs/Float32MultiArray`

- `PLANNING_HORIZONTAL_FOV`: horizontal FOV of the MPL (in the inference phase), unit: degrees
- `PLANNING_VERTICAL_FOV`: vertical FOV of the MPL (in the inference phase), unit: degrees
- `STEPS_TO_REPLAN`: replan after receiving this many depth images
- `CMD_VELOCITY`: forward velocity (in the current implementation, all primitives in the MPL have the same forward velocity, though this is not a requirement of the methods). Note: `CMD_VELOCITY` should be less than `VEL_MAX = MAX_RANGE / (ACTION_HORIZON * SKIP_STEP_GENERATE * DEPTH_TS)`
- `NUM_VEL_X`: number of discrete forward velocities in the MPL (only 1 is supported for now)
- `NUM_VEL_Z`: number of discrete vertical velocities in the MPL
- `NUM_YAW`: number of discrete steering angles in the MPL
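The `VEL_MAX` constraint on `CMD_VELOCITY` can be checked numerically before launching; the parameter values below are illustrative, not the repo's defaults:

```python
# Illustrative sanity check for CMD_VELOCITY (values are examples, not defaults).
MAX_RANGE = 10.0            # m, max range of the depth image
ACTION_HORIZON = 30         # prediction horizon H (steps)
SKIP_STEP_GENERATE = 3      # depth images skipped per recorded step
DEPTH_TS = 1.0 / 15.0       # s, inverse of the depth sensor FPS

VEL_MAX = MAX_RANGE / (ACTION_HORIZON * SKIP_STEP_GENERATE * DEPTH_TS)
CMD_VELOCITY = 1.0          # m/s

assert CMD_VELOCITY < VEL_MAX, "CMD_VELOCITY must be below VEL_MAX"
print(round(VEL_MAX, 3))
```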

- `COLLISION_USE_TENSORRT`: use the TensorRT (`True`) or TensorFlow (`False`) model for the CPN
- `INFOGAIN_USE_TENSORRT`: use the TensorRT (`True`) or TensorFlow (`False`) model for the IPN

- `ENABLE_VISUALIZATION`: publish messages for visualizing the networks' predictions in RViz
- `VISUALIZATION_MODE`: 0 (visualize only the timestamps at the end of the prediction horizon), 1 (visualize all timestamps), 3 (visualize only the timestamps at the end of the prediction horizon, for all the networks in the ensemble). The visualized positions of the robot at future timestamps are obtained by integrating first-order approximations of the velocity controllers and the yaw controller, using the parameters below:
- `ALPHA_V`: first-order approximation of the velocity controllers (obtained by discretizing the transfer function $\frac{1}{T_{vel} \times s + 1}$ with $T_{sampling} =$ `SKIP_STEP_GENERATE` $\times$ `DEPTH_TS` $\times$ 0.1)
- `ALPHA_PSI`: first-order approximation of the yaw controller (obtained by discretizing the transfer function $\frac{1}{T_{yaw} \times s + 1}$ with $T_{sampling} =$ `SKIP_STEP_GENERATE` $\times$ `DEPTH_TS` $\times$ 0.1)

Note: the green markers correspond to the estimated trajectory endpoints of the safe action sequences, while the blue marker denotes the estimated trajectory endpoint of the chosen action sequence.
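As a sketch of where an `ALPHA_V`-style coefficient can come from (the exact formula used in the repo may differ), a zero-order-hold discretization of the first-order transfer function $\frac{1}{T_{vel} \times s + 1}$ gives $\alpha = e^{-T_{sampling}/T_{vel}}$ and the update $v_{k+1} = \alpha v_k + (1 - \alpha) v_{cmd}$:

```python
import math

# Hypothetical derivation of an ALPHA_V-style coefficient; the repo's exact
# formula may differ. Zero-order-hold discretization of 1/(T*s + 1):
#   v[k+1] = alpha * v[k] + (1 - alpha) * v_cmd,  alpha = exp(-Ts / T)
T_vel = 0.3                      # s, assumed velocity-loop time constant
SKIP_STEP_GENERATE = 3           # example value
DEPTH_TS = 1.0 / 15.0            # s, example value
T_sampling = SKIP_STEP_GENERATE * DEPTH_TS * 0.1

alpha_v = math.exp(-T_sampling / T_vel)

# Integrate the first-order model toward a commanded velocity.
v, v_cmd = 0.0, 1.0
for _ in range(10):
    v = alpha_v * v + (1 - alpha_v) * v_cmd
print(round(alpha_v, 4), round(v, 4))
```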

- `USE_D455_HOLE_FILLING`: turn off (`True`) or on (`False`) the image pre-processing step on the real robot that fills in the missing pixels in the real-world depth images

- `DEADEND_COL_SCORE_THRESHOLD_HIGH`: when the collision cost of the safest action sequence is greater than this threshold, a dead end is detected and the robot is allowed to yaw in one spot to find a new free direction
- `DEADEND_COL_SCORE_THRESHOLD_LOW`: the robot can exit the yaw-in-one-spot mode when the collision cost of the safest action sequence is smaller than this threshold
- `TIME_WEIGHT_FACTOR`: time-step weighting factor $\lambda$ ($\lambda = 0$: every future time step in the prediction horizon is weighted equally when calculating the collision cost of each primitive; $\lambda > 0$: nearer future time steps are weighted more)
- `COLLISION_THRESHOLD`: the collision threshold $c_{th}$ (compared to the safest action sequence) used to classify an action sequence as "safe". Lowering this value leads to more conservative navigation behavior.
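One plausible instance of this time-step weighting, shown only for illustration (the repo's exact expression may differ), is an exponential weight $w_t = e^{-\lambda t}$ applied to the per-step collision probabilities:

```python
import math

# Hypothetical time-weighted collision cost; the exact weighting in the repo
# may differ. With lambda = 0 all steps count equally; lambda > 0 emphasizes
# nearer time steps.
def collision_cost(col_probs, lam):
    weights = [math.exp(-lam * t) for t in range(len(col_probs))]
    return sum(w * p for w, p in zip(weights, col_probs)) / sum(weights)

probs = [0.05, 0.1, 0.4, 0.8]                 # per-step collision probabilities
print(round(collision_cost(probs, 0.0), 4))   # uniform weighting
print(round(collision_cost(probs, 1.0), 4))   # near-term emphasis
```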

- `WAYPOINT_FILE`: path to the file containing the list of waypoints (some example waypoint files are provided in the `waypoints` folder)
- `WAYPOINT_DISTANCE_THRESHOLD`: distance used to check whether a waypoint has been reached (in the x-y plane only), unit: meters
- `WAYPOINT_YAW_THRESHOLD`: yaw difference used to check whether the robot has finished the yaw-in-one-spot action, unit: degrees
- `ALLOW_YAW_AT_WAYPOINT`: allow the robot to yaw in one spot to face the next waypoint upon reaching a waypoint
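A minimal sketch of the planar waypoint-reached check implied by `WAYPOINT_DISTANCE_THRESHOLD` (not the repo's exact code):

```python
import math

# Illustrative waypoint-reached check (x-y plane only), mirroring the
# WAYPOINT_DISTANCE_THRESHOLD description; not the repo's exact code.
def waypoint_reached(robot_xy, waypoint_xy, threshold_m):
    dx = robot_xy[0] - waypoint_xy[0]
    dy = robot_xy[1] - waypoint_xy[1]
    return math.hypot(dx, dy) < threshold_m

print(waypoint_reached((1.0, 2.0), (1.3, 2.4), 0.6))  # planar distance is 0.5 m
```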

- `N_E`: number of CPNs in the Deep Ensemble
- `USE_UT`: whether to use the Unscented Transform
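A Deep Ensemble of `N_E` networks is typically fused by averaging the member predictions and using their spread as an uncertainty estimate; a minimal sketch under that assumption (not necessarily the repo's exact aggregation rule):

```python
# Minimal sketch of Deep Ensemble fusion for per-primitive collision
# probabilities; the repo's exact aggregation rule may differ.
def ensemble_stats(member_preds):
    """member_preds: one collision probability per ensemble member."""
    n = len(member_preds)
    mean = sum(member_preds) / n
    var = sum((p - mean) ** 2 for p in member_preds) / n
    return mean, var

preds = [0.2, 0.25, 0.15, 0.3]   # N_E = 4 CPN outputs for one primitive
mean, var = ensemble_stats(preds)
print(round(mean, 4), round(var, 6))
```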

- `USE_ADDITIVE_GAUSSIAN_IMAGE_NOISE`: simulate additive Gaussian noise on the depth image
- `USE_ADDITIVE_GAUSSIAN_STATE_NOISE`: simulate velocity noise on the robot's velocity estimate
- `IMAGE_NOISE_FACTOR`: quadratic coefficient for the depth image noise model in Link (value: 0 - 0.005)
- `P_vx`, `P_vy`, `P_vz`: diagonal values of the velocity estimate's covariance matrix ($\mathrm{diag}([\sigma_{vx}^2, \sigma_{vy}^2, \sigma_{vz}^2])$)
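Assuming the quadratic coefficient means the noise standard deviation scales with the squared depth, $\sigma(d) =$ `IMAGE_NOISE_FACTOR` $\times\, d^2$ (the exact model behind the link may differ), the simulated per-pixel noise could look like:

```python
import random

# Assumed quadratic depth-noise model: sigma(d) = IMAGE_NOISE_FACTOR * d^2.
# The exact model referenced by the wiki ("Link") may differ.
IMAGE_NOISE_FACTOR = 0.002   # within the suggested 0 - 0.005 range

def noisy_depth(depth_m, rng):
    sigma = IMAGE_NOISE_FACTOR * depth_m ** 2
    return depth_m + rng.gauss(0.0, sigma)

rng = random.Random(0)
print(round(noisy_depth(5.0, rng), 4))   # sigma = 0.05 m at 5 m depth
```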

- `TIMEOUT_TYPE`: timeout type for the visually attentive navigation task, i.e. when to switch back to normal ORACLE:
  - 0: after `TIME_ALLOWED` $\times$ `T_STRAIGHT`, where `T_STRAIGHT` is the time to travel the straight-line connection between the waypoints at velocity `CMD_VELOCITY`
  - 1: when `total_time_from_previous_waypoint` + `current_distance_to_next_waypoint` / `CMD_VELOCITY` < `TIME_ALLOWED` $\times$ `T_STRAIGHT`
  - 2 (default): when `total_distance_from_previous_waypoint` + `current_distance_to_next_waypoint` < `TIME_ALLOWED` $\times$ `D_STRAIGHT`
- `TIME_ALLOWED`: parameter determining the timeout period (see the `TIMEOUT_TYPE` description)

- `SIM_DEPTH_TOPIC`: topic of the depth image in the simulator, type `sensor_msgs/Image`, unit: meters
- `SIM_ODOM_TOPIC`: topic of the robot's odometry in the world frame in the simulator (used for calculating the unit goal vector and checking whether the robot has reached the waypoints), type `nav_msgs/Odometry`
- `SIM_CMD_TOPIC`: command topic in the simulator, type `geometry_msgs/Twist`, containing the 3D velocity command in the vehicle frame (yaw-rotated world frame) plus the reference yaw angle (we use the `angular.z` field to store the reference yaw angle!)
- `SIM_IMU_TOPIC`: IMU topic in the simulator, type `sensor_msgs/Imu`
- `SIM_MASK_TOPIC`: interestingness mask topic in the simulator for the visually attentive navigation task, type `sensor_msgs/Image`. This mask is used only when `PLANNING_TYPE >= 2`. Each pixel of the depth image is assigned a value from 0 (lowest) to 255 (highest) based on its interestingness.
- `SIM_LATENT_TOPIC`: latent-vector topic in the simulator when using `PLANNING_TYPE = 1`, type `std_msgs/Float32MultiArray`

- `MAX_RANGE`: max range of the depth image, unit: meters
- `HORIZONTAL_FOV`: horizontal FOV of the depth camera, unit: degrees
- `VERTICAL_FOV`: vertical FOV of the depth camera, unit: degrees
- `DEPTH_TS`: inverse of the depth sensor's FPS
- `DEPTH_CX`, `DEPTH_CY`, `DEPTH_FX`, `DEPTH_FY`: depth camera's intrinsic parameters for `depth_to_pcl`
- `CAM_PITCH`: pitch angle of the depth camera with respect to the robot's body frame; positive value: pitch down, negative value: pitch up
- `t_BC`: coordinates of the depth camera's origin in the robot's body frame (the body-frame axes follow the ROS convention, where $x_B$, $y_B$, $z_B$ point forward, to the left of the robot, and upward, respectively). Format: `np.array([[x_BC], [y_BC], [z_BC]])`
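The intrinsics above define the usual pinhole back-projection used by `depth_to_pcl`-style conversions; a sketch with illustrative intrinsic values (not the repo's defaults):

```python
# Sketch of the pinhole back-projection implied by DEPTH_CX/CY/FX/FY (as used
# for depth_to_pcl); intrinsic values below are illustrative only.
DEPTH_FX, DEPTH_FY = 386.0, 386.0   # focal lengths, pixels
DEPTH_CX, DEPTH_CY = 320.0, 240.0   # principal point, pixels

def pixel_to_point(u, v, depth_m):
    """Back-project pixel (u, v) with depth z into the camera frame."""
    x = (u - DEPTH_CX) * depth_m / DEPTH_FX
    y = (v - DEPTH_CY) * depth_m / DEPTH_FY
    return x, y, depth_m

print(pixel_to_point(320.0, 240.0, 2.0))  # principal point -> (0.0, 0.0, 2.0)
```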

- `THRESHOLD_DISTANCE`: threshold distance to record one data point, unit: meters
- `SKIP_STEP_GENERATE`: number of depth images to skip before recording one data point
- `ACTION_HORIZON`: prediction horizon $H$, i.e. the length of the action sequence in the MPL

- `NUM_EPISODES_EVALUATE`: number of episodes to evaluate in simulation
- `EPISODE_TIMEOUT`: timeout period for each episode
- `MAX_INITIAL_X`, `MAX_INITIAL_Y`, `MAX_INITIAL_Z`, `MAX_INITIAL_YAW`, `MIN_INITIAL_X`, `MIN_INITIAL_Y`, `MIN_INITIAL_Z`, `MIN_INITIAL_YAW`: parameters to randomize the initial pose of the robot with a uniform distribution; $x, y, z$ are in meters while the yaw angle is in degrees
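A minimal sketch of the uniform initial-pose randomization these bounds describe (bound values below are examples, not the repo's defaults):

```python
import random

# Illustrative uniform sampling of the initial pose from the
# MIN_INITIAL_*/MAX_INITIAL_* bounds; not the repo's exact code.
MIN_INITIAL = {"x": -2.0, "y": -2.0, "z": 1.0, "yaw": -180.0}  # example bounds
MAX_INITIAL = {"x": 2.0, "y": 2.0, "z": 2.0, "yaw": 180.0}

def sample_initial_pose(rng):
    return {k: rng.uniform(MIN_INITIAL[k], MAX_INITIAL[k]) for k in MIN_INITIAL}

pose = sample_initial_pose(random.Random(42))
print(all(MIN_INITIAL[k] <= pose[k] <= MAX_INITIAL[k] for k in pose))
```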
Flightmare params (only used when `SIM_USE_FLIGHTMARE = True`); match the config from `agile_autonomy`:

- `SIM_USE_FLIGHTMARE`: use Flightmare for evaluation in sim; only used when `RUN_IN_SIM = True`
- `SPACING`: spacing between trees or objects. For comparison with Agile, this needs to be the same as the `test_time/spacings` param in `agile_autonomy/planner_learning/config/test_settings.yaml`
- `UNITY_START_POS`: start pose for the robot in Flightmare, format: `[x, y, z, yaw]`; ignores the `MAX_INITIAL_...`, `MIN_INITIAL_...` params above
- `TAKEOFF_HEIGHT`: takeoff height of the robot, the same as `/hummingbird/autopilot/optitrack_start_height` in `agile_autonomy`
- `CRASHED_THR`: collision radius in Flightmare
- `EXPERT_FOLDER`: path to the `agile_autonomy/data_generation/data/` folder on your system

- `TRAIN_INFOGAIN`: process the recorded files to train ORACLE (`False`) or A-ORACLE (`True`). Please check Link
- `EVALUATE_MODE`: collect data (`False`) or evaluate the trained network (`True`) in sim; only used when `RUN_IN_SIM = True`

- `DI_SHAPE`: input shape of the depth image, format: (height, width, 1)
- `SKIP_STEP_INFERENCE_INFOGAIN`: evaluate the IPN only once every this many steps
- `DI_LATENT_SIZE`: size of the latent vector (for `PLANNING_TYPE = 1`)