Reflections

This is a list of reflections I have encountered while working on this project; it will be updated as new reflections come up.

About Exploration (or Mapping)

I have one open question about the Exploration (or Mapping) part: am I required to design a random exploration policy? We can map the environment by driving the robot with the keyboard (sending messages to the /cmd_vel topic), but I am not sure whether I am expected to design a random exploration policy that can map the environment without human intervention, which might be more elegant and practical. A minimal sketch of such a policy is given below.
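
For illustration only: a minimal random-exploration sketch, assuming a differential-drive robot that publishes a LaserScan on /scan and accepts geometry_msgs/Twist on /cmd_vel; the speed, distance threshold, and frontal-sector width are arbitrary placeholders.

```python
#!/usr/bin/env python
# Hypothetical random exploration node: drive forward, turn in a random
# direction whenever the frontal laser readings get too close.
import random
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan


class RandomExplorer:
    def __init__(self):
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/scan', LaserScan, self.scan_cb)
        self.min_front = float('inf')

    def scan_cb(self, msg):
        # Closest obstacle in a narrow frontal sector (assumes index 0 faces forward).
        n = len(msg.ranges)
        front = list(msg.ranges[: n // 8]) + list(msg.ranges[-n // 8:])
        valid = [r for r in front if r > msg.range_min]
        self.min_front = min(valid) if valid else float('inf')

    def run(self):
        rate = rospy.Rate(10)
        turn_dir = 1.0
        while not rospy.is_shutdown():
            cmd = Twist()
            if self.min_front < 0.5:
                # Obstacle ahead: rotate in place in the currently chosen direction.
                cmd.angular.z = turn_dir
            else:
                cmd.linear.x = 0.2
                turn_dir = random.choice([-1.0, 1.0])  # re-sample for the next obstacle
            self.cmd_pub.publish(cmd)
            rate.sleep()


if __name__ == '__main__':
    rospy.init_node('random_explorer')
    RandomExplorer().run()
```

Such a policy is crude (it can get stuck in corners), but it would be enough to let the SLAM node build a map without teleoperation.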

How do I qualitatively and quantitatively analyse the SLAM performance?

I found that there is no ground truth against which to compare the SLAM output, so how do I qualitatively and quantitatively analyse the SLAM performance? (The summary below was generated by Claude 3 Opus.)

Qualitative Analysis

  • Visual inspection of generated map: clarity, completeness, consistency
  • Loop closure detection: success rate, global consistency, drift reduction
  • Localization accuracy: estimated poses vs. expected positions, orientation consistency

Quantitative Analysis

  • Relative Pose Error (RPE): local consistency, RMSE or median error (see the sketch after this list)
  • Map Quality Metrics:
    • Occupancy Grid Map: resolution, completeness, occupied/free space ratio
    • Point Cloud Map: density, distribution, nearest neighbor distances
  • Computational Performance: runtime, memory usage, scalability
  • Robustness and Reliability: performance in challenging scenarios, success rate
  • Comparative Analysis: performance vs. well-known or state-of-the-art methods
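
As an illustration of the RPE item above, here is a minimal sketch of the translational RPE (RMSE) on 2D poses, assuming the estimated and reference trajectories are already time-aligned lists of (x, y, yaw) tuples; in practice a tool such as evo computes this directly from recorded trajectories.

```python
import math

def rel_motion(a, b):
    """Pose of b expressed in the frame of a (2D rigid transform)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    c, s = math.cos(-a[2]), math.sin(-a[2])
    return (c * dx - s * dy, s * dx + c * dy, b[2] - a[2])

def rpe_rmse(est, ref, delta=1):
    """RMSE of the translational Relative Pose Error over windows of `delta` frames."""
    errors = []
    for i in range(len(est) - delta):
        de = rel_motion(est[i], est[i + delta])
        dr = rel_motion(ref[i], ref[i + delta])
        errors.append(math.hypot(de[0] - dr[0], de[1] - dr[1]))
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

In simulation, a ground-truth pose published by Gazebo could serve as the reference trajectory; on real data, loop-closure residuals are an alternative when no ground truth exists.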

Presentation and Discussion

  • Visualizations: maps, trajectories, error plots
  • Strengths and weaknesses based on analysis
  • Challenges encountered and proposed solutions
  • Evaluation methodology, assumptions, and limitations

Effect of the costmap params on the robot's navigation

I have considered the effect of the costmap parameters on the robot's navigation. These parameters are defined in the costmap_common_params.yaml file; an illustrative example of the kind of parameters involved is sketched below.
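
For reference, a hypothetical costmap_common_params.yaml in the style of the ROS navigation stack; the values below are placeholders, not the project's actual configuration.

```yaml
# Shared by the global and local costmaps.
obstacle_range: 3.0        # max distance [m] at which obstacles are marked
raytrace_range: 3.5        # max distance [m] used to clear free space
robot_radius: 0.17         # circular footprint approximation [m]
inflation_radius: 0.6      # how far cost is inflated around obstacles [m]
cost_scaling_factor: 3.0   # how quickly inflated cost decays with distance
map_type: costmap

observation_sources: scan
scan: {data_type: LaserScan, topic: scan, marking: true, clearing: true}
```

A smaller inflation_radius lets the planner squeeze through narrow gaps, while a larger one keeps the robot further from the boxes at the cost of longer (or failed) plans.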

How can I navigate to the dynamic goal?

I have considered two methods to detect the dynamic goal (the number 3 on the box) using the camera image:

  • Template matching method provided by the OpenCV library.
  • The find_object_2d package

I found that both methods require depth information to publish the detected object's position, but the originally provided camera is an RGB camera without depth, which means I cannot obtain depth information from it. So I came up with two ideas:

  • Replace the original camera with a depth camera
    • When the object is detected, we get its bounding box together with depth, and we can use the camera intrinsics (from calibration) to recover the 3D position of the detected object. We then only need to publish that position to the /move_base_simple/goal topic, and the robot navigates to the goal (see the sketch after this list).
  • Give the detected object a dummy depth
    • If the detection accuracy is high enough, we can always drive in the direction of the detected object, which means the goal can be reached using only a set of 2D images.
    • But if there are obstacles between the robot and the detected object, the robot may collide with them.
    • Sometimes the robot may lose track of the detected object, in which case it may simply stop along the path.
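
A minimal sketch of the first idea, assuming a depth camera with known pinhole intrinsics and a detector that yields the bounding-box centre pixel (u, v) plus its depth in metres; the intrinsic values, frame name, and example pixel are placeholders.

```python
import rospy
from geometry_msgs.msg import PoseStamped

# Placeholder intrinsics; real values come from the depth camera's camera_info.
FX, FY, CX, CY = 525.0, 525.0, 320.0, 240.0

def bbox_to_goal(u, v, depth, frame_id='camera_depth_optical_frame'):
    """Back-project the bounding-box centre (u, v) at the given depth [m]
    into a 3D point in the camera frame and wrap it as a goal pose."""
    goal = PoseStamped()
    goal.header.stamp = rospy.Time.now()
    goal.header.frame_id = frame_id
    goal.pose.position.x = (u - CX) * depth / FX
    goal.pose.position.y = (v - CY) * depth / FY
    goal.pose.position.z = depth
    goal.pose.orientation.w = 1.0  # no particular approach orientation
    return goal

if __name__ == '__main__':
    rospy.init_node('goal_from_detection')
    pub = rospy.Publisher('/move_base_simple/goal', PoseStamped, queue_size=1)
    rospy.sleep(1.0)              # give the publisher time to connect
    pub.publish(bbox_to_goal(320, 240, 1.5))  # dummy detection for illustration
```

In practice the pose, expressed here in the camera's optical frame, would still have to be transformed into the map frame (e.g. with tf2) and offset so the robot stops in front of the box rather than at its surface.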

How can I set a dynamic prohibition area for the randomly spawned cone?

Initially, we adopted the method shown in the dynamic_obstacle_updater.py script. The basic idea (sketched below) is to subscribe to the /gazebo/cone_position topic and then update the location of the prohibition area (by updating the prohibition_area rosparam) based on the received message. But we found that the prohibition area is loaded from the costmap_common_params.yaml file only when the costmap starts, which means we cannot update it dynamically this way.
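
A rough sketch of that first attempt; the message type of /gazebo/cone_position and the exact parameter path of the prohibition layer are assumptions, and the half-width of the area is a placeholder.

```python
import rospy
from geometry_msgs.msg import Point

HALF = 0.25  # half-width [m] of a square prohibition area around the cone

def cone_cb(msg):
    # Rebuild a square prohibition area centred on the cone and write it
    # back to the parameter server (which, as noted above, the costmap
    # only reads at startup, so this has no effect at runtime).
    area = [[msg.x - HALF, msg.y - HALF], [msg.x + HALF, msg.y + HALF]]
    rospy.set_param('/move_base/global_costmap/prohibition_layer/prohibition_area', [area])

rospy.init_node('dynamic_obstacle_updater')
rospy.Subscriber('/gazebo/cone_position', Point, cone_cb)
rospy.spin()
```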

How to check collision while set a random goal in the boxes area?

Initially, we implemented a function isPointInObstacle() in the box_explorer_node.cpp script, where the box_explorer_node subscribes to and keeps an updated copy of the global costmap to check whether the randomly generated goal lies in an obstacle area (i.e. collides with the boxes). But we then found that this function did not work, and we realized that the global costmap did not contain the information about the boxes.

We then changed our policy to subscribe to the gazebo/ground_truth/box_markers topic to get the true positions of the spawned boxes, so that we can check whether the randomly generated goal lies in an obstacle area; a sketch of this check is given below.
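
For illustration, a Python sketch of the ground-truth-based check (the project implements it in C++ in box_explorer_node.cpp); it assumes gazebo/ground_truth/box_markers publishes a visualization_msgs/MarkerArray with one cube marker per box, which is an assumption about the message type.

```python
import rospy
from visualization_msgs.msg import MarkerArray

boxes = []  # (centre_x, centre_y, half_size_x, half_size_y) per spawned box

def markers_cb(msg):
    global boxes
    boxes = [(m.pose.position.x, m.pose.position.y,
              m.scale.x / 2.0, m.scale.y / 2.0) for m in msg.markers]

def is_point_in_obstacle(x, y, margin=0.2):
    """Return True if a candidate goal (x, y) lies inside, or within
    `margin` metres of, any box footprint (axis-aligned approximation)."""
    for bx, by, hx, hy in boxes:
        if abs(x - bx) <= hx + margin and abs(y - by) <= hy + margin:
            return True
    return False

rospy.init_node('box_goal_checker')
rospy.Subscriber('gazebo/ground_truth/box_markers', MarkerArray, markers_cb)
```

A randomly sampled goal can then simply be re-drawn until is_point_in_obstacle() returns False.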

Reflections about the object detection accuracy

Neither the template matching approach nor the find_object_2d approach can detect the object accurately: the template matching approach is sensitive to the object's size and the background, while the find_object_2d approach is sensitive to the object's color and the background. So we need to consider the following factors to improve the detection accuracy:

  • Capturing a set of template images from the scene: By using multiple template images that match the object in different orientations and positions relative to the camera, we can significantly improve the accuracy of the detection. This approach ensures that the object can be detected regardless of its direction relative to the camera.
  • Scaling the template image (for the template matching approach): To enhance the robustness of the detection, we scale the template image to different sizes. This allows us to match the object in the scene regardless of whether it is far from or near the camera; by considering various scales, we accommodate changes in the object's apparent size due to its distance. A sketch of this multi-scale matching is given after this list.
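
A minimal sketch of multi-scale template matching with OpenCV, assuming grayscale scene and template images; the scale range and acceptance threshold are arbitrary placeholders.

```python
import cv2
import numpy as np

def multiscale_match(scene_gray, template_gray,
                     scales=np.linspace(0.5, 1.5, 11), threshold=0.8):
    """Run normalized template matching at several template scales and
    return the best bounding box (x, y, w, h), or None if below threshold."""
    best_box, best_score = None, -1.0
    for s in scales:
        tmpl = cv2.resize(template_gray, None, fx=s, fy=s)
        th, tw = tmpl.shape[:2]
        if th > scene_gray.shape[0] or tw > scene_gray.shape[1]:
            continue  # scaled template would be larger than the scene image
        result = cv2.matchTemplate(scene_gray, tmpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best_score:
            best_box, best_score = (max_loc[0], max_loc[1], tw, th), max_val
    return best_box if best_score >= threshold else None
```

Running this over each of the captured template images and keeping the highest-scoring match would combine both ideas above.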