This ROS 2 node implements real-time visual tracking and servoing for an underwater robot (ROV) by detecting a colored buoy using a camera. It processes camera images, extracts the buoy’s position and area, computes visual servoing control commands, and publishes velocity commands for autonomous target following.
- Subscriptions
  - `/bluerov2/camera/image` (`sensor_msgs/msg/Image`): Input image stream for processing.
- Publishers
  - `tracked_point` (`std_msgs/msg/Float64MultiArray`): Publishes `[x error, y error, depth error, x, y]`.
  - `camera_velocity` (`geometry_msgs/msg/Twist`): Camera-frame visual servoing velocity.
  - `/bluerov2/visual_tracker` (`geometry_msgs/msg/Twist`): Robot-frame velocity commands.
- Image Acquisition: Receives images from a ROS topic and converts them to OpenCV format using `cv_bridge`.
- Buoy Detection: Applies HSV filtering and contour extraction to locate the buoy and estimate its area (sketched after this list).
- Error Calculation: Computes the difference between the detected and desired (clicked) target location and the estimated depth.
- Control Computation:
  - Uses an interaction matrix and a pseudo-inverse control law for visual servoing.
  - Supports dynamic gain adjustment via ROS parameters (`lambda_gain`, `thruster_gain`).
  - Transforms camera velocities to the robot frame with an analytically defined 6x6 transformation matrix (sketched after the parameter list below).
- Command Publishing: Publishes tracking data and velocity commands for downstream robot control.
- User Interaction: Supports runtime selection of tracking points and HSV bounds via OpenCV window mouse events.
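A minimal sketch of the buoy detection step, assuming OpenCV 4 (where `cv2.findContours` returns two values); the HSV bounds here are placeholders, since the node sets `lower_hsv`/`upper_hsv` interactively:

```python
import cv2
import numpy as np

def detect_buoy(bgr_image, lower_hsv=(5, 100, 100), upper_hsv=(15, 255, 255)):
    """Return (cx, cy, area) of the largest blob in the HSV range, or None."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:  # degenerate contour, no valid centroid
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"], cv2.contourArea(largest)
```

The contour area presumably serves as the depth cue in the error calculation: a larger blob indicates a closer buoy.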
Parameters:
- `lower_hsv`, `upper_hsv`: HSV bounds for color tracking (can be set dynamically).
- `lambda_gain`: Control gain for visual servoing.
- `thruster_gain`: Gain for scaling output robot velocities.
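The 6x6 camera-to-robot transform mentioned under Control Computation is the standard velocity-twist matrix. A sketch follows; the rotation `R` and offset `t` below are placeholders, not the node's actual camera mounting geometry:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix such that skew(t) @ w == np.cross(t, w)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def camera_to_robot_twist(R, t):
    """6x6 matrix mapping a camera-frame twist [v; w] to the robot frame."""
    T = np.zeros((6, 6))
    T[:3, :3] = R
    T[:3, 3:] = skew(t) @ R  # lever-arm effect of the camera offset
    T[3:, 3:] = R
    return T

R = np.eye(3)                  # placeholder: camera axes aligned with body axes
t = np.array([0.2, 0.0, 0.0])  # placeholder: camera 0.2 m ahead of body origin
robot_twist = camera_to_robot_twist(R, t) @ np.ones(6)  # example camera twist
```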
This ROS 2 node implements multi-marker ArUco tag detection, tracking, and visual servoing for underwater robotics (e.g., BlueROV2). The node detects multiple ArUco tags, estimates their pixel locations, computes 2D tracking errors relative to user-selected targets, and publishes velocity commands for closed-loop visual servoing.
- Subscribed
  - `/bluerov2/camera/image` (`sensor_msgs/msg/Image`): Camera images for detection.
- Published
  - `/tagged_frames` (`sensor_msgs/msg/Image`): Annotated image stream with ArUco detections.
  - `/aruco_stats` (`std_msgs/msg/Float32MultiArray`): Detected `[ID, cx, cy]` for selected marker IDs.
  - `tracked_point` (`std_msgs/msg/Float64MultiArray`): Tracking errors for all relevant markers.
  - `camera_velocity` (`geometry_msgs/msg/Twist`): Visual servoing velocity (camera frame).
  - `/bluerov2/visual_tracker` (`geometry_msgs/msg/Twist`): Velocity command for the ROV (robot frame).
- Image Callback:
  - Converts ROS images to OpenCV, detects ArUco tags, computes centroids and areas, and visualizes all detections.
- Target Selection:
  - The user double-clicks on the video feed to register the currently visible marker locations as the desired positions.
- Error & Interaction Matrix:
  - For each marker present in both `desired_points` and `current_points`, computes the pixel error, builds a block interaction matrix, and applies IBVS (Image-Based Visual Servoing) for velocity computation (sketched after this list).
- Velocity Computation:
  - Camera-frame velocity is calculated from the interaction matrix and the error vector.
  - Transforms velocity from the camera frame to the robot frame using a predefined rotation and translation.
  - Applies scaling via the `lambda_gain` and `thruster_gain` parameters.
- ROS Messaging:
  - Publishes velocities, error vectors, and annotated images for downstream control and debugging.
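A minimal sketch of the IBVS computation referenced above, assuming normalized image coordinates and a single constant depth estimate `Z` (the actual node works from the pixel centroids of the detected markers):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 image Jacobian of a point feature at normalized coords (x, y), depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(current_points, desired_points, Z, lambda_gain):
    """Stack per-marker errors and Jacobians, then v = -lambda * pinv(L) @ e."""
    errors, blocks = [], []
    for (x, y), (xd, yd) in zip(current_points, desired_points):
        errors.extend([x - xd, y - yd])
        blocks.append(interaction_matrix(x, y, Z))
    L = np.vstack(blocks)  # block interaction matrix (2N x 6)
    e = np.array(errors)   # stacked error vector (2N,)
    return -lambda_gain * np.linalg.pinv(L) @ e  # camera-frame twist [v; w]
```

Stacking errors from multiple markers constrains more degrees of freedom than a single point would, which is why the node tracks every marker present in both sets.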
Parameters:
- `lambda_gain`: Visual servoing control gain (default: 0.5).
- `thruster_gain`: Scaling factor for the robot output velocity (default: 0.5).
Both parameters are dynamically adjustable via ROS 2 parameter services.
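A sketch of how such runtime-tunable gains are typically wired up in rclpy; the node name and callback here are illustrative, not the repo's exact code:

```python
import rclpy
from rclpy.node import Node
from rcl_interfaces.msg import SetParametersResult

class GainDemo(Node):
    def __init__(self):
        super().__init__('gain_demo')
        self.declare_parameter('lambda_gain', 0.5)
        self.declare_parameter('thruster_gain', 0.5)
        self.lambda_gain = self.get_parameter('lambda_gain').value
        self.thruster_gain = self.get_parameter('thruster_gain').value
        # Re-read the gains whenever they are changed via the parameter services.
        self.add_on_set_parameters_callback(self._on_params)

    def _on_params(self, params):
        for p in params:
            if p.name == 'lambda_gain':
                self.lambda_gain = p.value
            elif p.name == 'thruster_gain':
                self.thruster_gain = p.value
        return SetParametersResult(successful=True)
```

Either gain can then be changed live, e.g. `ros2 param set /gain_demo lambda_gain 0.8`.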
This collection of Python scripts automates the extraction, conversion, and visualization of time-series data from ROS 2 bag files. The scripts are tailored for a visual servoing ROV setup, targeting topics like /camera_velocity, /bluerov2/visual_tracker, and /tracked_point.
Purpose:
Recursively finds all ROS2 bag files (.db3) in a base directory, extracts selected topics, and saves them as CSV files.
- Target topics: `/camera_velocity`, `/bluerov2/visual_tracker`, `/tracked_point`
- Output: Each topic's messages are written as a CSV (`_<topic>.csv`) in the bag's folder.
- Usage: `python extract_rosbag_to_csv.py  # or whatever you named this script`
- Customizable: Edit the `TOPICS` list to specify which ROS 2 topics to extract.
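A condensed sketch of the extraction loop, assuming the standard `rosbag2_py` reader API; the file and function names are illustrative:

```python
import csv
from rosbag2_py import SequentialReader, StorageOptions, ConverterOptions
from rclpy.serialization import deserialize_message
from rosidl_runtime_py.utilities import get_message

TOPICS = ['/camera_velocity', '/bluerov2/visual_tracker', '/tracked_point']

def extract_bag(bag_dir, out_csv):
    """Dump the messages of TOPICS from one bag directory into a CSV file."""
    reader = SequentialReader()
    reader.open(StorageOptions(uri=bag_dir, storage_id='sqlite3'),
                ConverterOptions(input_serialization_format='cdr',
                                 output_serialization_format='cdr'))
    type_map = {t.name: t.type for t in reader.get_all_topics_and_types()}
    with open(out_csv, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['timestamp', 'topic', 'message'])
        while reader.has_next():
            topic, raw, stamp = reader.read_next()
            if topic in TOPICS:
                msg = deserialize_message(raw, get_message(type_map[topic]))
                writer.writerow([stamp, topic, str(msg)])
```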
Purpose:
Reads extracted camera velocity CSVs and generates plots for all linear and angular velocity components.
- Input: `_camera_velocity.csv`
- Plot Output:
  - Plots `linear_x`, `linear_y`, `linear_z`, `angular_x`, `angular_y`, `angular_z` vs. time.
  - Saves as both `.pdf` and `.png` in a `plot/` subfolder of each relevant directory.
- Usage: `python plot_camera_velocity.py`
- Features:
  - Automatically finds CSVs in subdirectories.
  - Handles parsing of ROS 2 `Vector3` strings (sketched after this list).
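A sketch of the parse-and-plot step. It assumes the CSV stores the Twist fields as ROS-style strings such as `geometry_msgs.msg.Vector3(x=0.1, y=0.0, z=-0.2)`; the column names (`timestamp`, `linear`, `angular`) are illustrative:

```python
import os
import re
import pandas as pd
import matplotlib.pyplot as plt

VEC_RE = re.compile(r'x=(?P<x>[-\d.e]+), y=(?P<y>[-\d.e]+), z=(?P<z>[-\d.e]+)')

def parse_vector3(text):
    """Pull (x, y, z) floats out of a stringified Vector3."""
    m = VEC_RE.search(text)
    return float(m['x']), float(m['y']), float(m['z'])

df = pd.read_csv('_camera_velocity.csv')
df[['linear_x', 'linear_y', 'linear_z']] = df['linear'].apply(
    lambda s: pd.Series(parse_vector3(s)))
df[['angular_x', 'angular_y', 'angular_z']] = df['angular'].apply(
    lambda s: pd.Series(parse_vector3(s)))

os.makedirs('plot', exist_ok=True)
fig, ax = plt.subplots()
for col in ['linear_x', 'linear_y', 'linear_z',
            'angular_x', 'angular_y', 'angular_z']:
    ax.plot(df['timestamp'], df[col], label=col)
ax.set_xlabel('time')
ax.set_ylabel('velocity')
ax.legend()
fig.savefig('plot/camera_velocity.pdf')
fig.savefig('plot/camera_velocity.png')
```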
Purpose:
Plots the visual servoing error and the currently detected positions over time from `_tracked_point.csv`.
- Input: `_tracked_point.csv`
- Plot Output:
  - Error X/Y (desired - current) and current X/Y over time.
  - Saves as `.pdf` and `.png` in a `plot/` subfolder.
- Usage: `python plot_tracked_point.py`
- Features:
  - Parses ROS-style array fields (sketched after this list).
  - Can be adapted for other servoing-related CSVs with a similar data structure.
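A short sketch of the array-field parsing, assuming each row stores the `Float64MultiArray` contents as text like `[err_x, err_y, err_depth, x, y]` in a hypothetical `data` column:

```python
import re
import pandas as pd

FLOAT_RE = re.compile(r'-?\d+\.?\d*(?:e-?\d+)?')

df = pd.read_csv('_tracked_point.csv')
values = df['data'].apply(lambda s: [float(v) for v in FLOAT_RE.findall(s)])
df[['err_x', 'err_y', 'err_depth', 'cur_x', 'cur_y']] = pd.DataFrame(
    values.tolist(), index=df.index)
```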
Purpose:
Plots only the `linear_y` and `angular_z` velocity components from the ROV's visual tracker output.
- Input: `_bluerov2_visual_tracker.csv`
- Plot Output:
  - `linear_y` and `angular_z` vs. time (the components most relevant for certain motion tasks).
  - Saved as `.pdf` and `.png` in a `plot/` subfolder.
- Usage: `python plot_visual_tracker.py`
- Features:
  - Uses the same vector parsing as the camera velocity script.