This project implements SLAM (Simultaneous Localization and Mapping) using RTAB-Map for a differential drive robot with optimized parameters for high-performance mapping and navigation. It supports:
- Intel RealSense D455 camera (real robot deployment)
- NVIDIA Isaac Sim simulation (testing and development)
- NVIDIA RTX 4070 laptop (GPU-accelerated processing)
- Unified launch system (seamless switching between sim and real hardware)
- ROS 2 Humble, Ubuntu 22.04
The system is designed for differential drive robots and provides robust SLAM capabilities with excellent loop closure detection and map optimization.
rtabmap_isaacsim_d455/
├── launch/
│ ├── rtabmap_main.launch.py # 🚀 Main unified launch file
│ ├── realsense_d455_stereo.launch.py # 📷 RealSense D455 camera setup (included by main launch)
│ ├── isaac_sim.launch.py # 🤖 Isaac Sim setup (included by main launch)
│ ├── stereo_image_processing.launch.py # 🖼️ Isaac Sim image processing (helper)
│ └── isaac_visual_slam.launch.py # 🤖 Isaac ROS Visual SLAM (helper)
├── config/
│ ├── rtabmap_params.yaml # ⚙️ RTAB-Map optimized parameters
│ └── nav2_rtabmap_params.yaml # 🧭 Nav2 navigation parameters
└── README.md # 📖 This documentation
- CPU: Intel i5-8th gen or AMD Ryzen 5 3600 (minimum)
- GPU: NVIDIA RTX 4070 or better (for GPU acceleration)
- RAM: 16GB (32GB recommended for large mapping sessions)
- Storage: SSD with at least 50GB free space
- Intel RealSense D455: USB 3.0+ connection, good lighting conditions
- Isaac Sim: NVIDIA Isaac Sim 2023.1.0+
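Before installing anything, it is worth confirming that the D455 actually enumerates on a USB 3 port. A minimal check, assuming the librealsense utilities (librealsense2-utils) are installed:

# List RealSense devices along with the negotiated USB speed
rs-enumerate-devices | grep -i usb
# A plain USB check also works
lsusb | grep Intel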
sudo apt install ros-humble-rtabmap-ros
sudo apt install ros-humble-nav2-bringup
sudo apt install ros-humble-realsense2-camera
sudo apt install ros-humble-imu-filter-madgwick

# Follow NVIDIA Isaac ROS installation guide
sudo apt install ros-humble-isaac-ros-visual-slam
sudo apt install ros-humble-isaac-ros-image-proc
sudo apt install ros-humble-isaac-ros-stereo-image-proc

sudo apt install ros-humble-teleop-twist-keyboard
sudo apt install ros-humble-rqt-robot-monitor
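After installation, a quick way to confirm the main dependencies are visible to your ROS 2 environment (a simple sketch; the package names match the apt packages above):

# Verify the key packages are on the ROS package path
source /opt/ros/humble/setup.bash
ros2 pkg list | grep -E "rtabmap|nav2_bringup|realsense2_camera"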
# Basic SLAM with D455
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py d455:=true
# With visual odometry for better accuracy
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py d455:=true vo:=rtabmap
# Localization mode (requires existing map)
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py d455:=true localization:=true

Step 1: Start Isaac Sim
- Launch NVIDIA Isaac Sim
- Open: Isaac Examples → ROS2 → Navigation → Carter Navigation
- In the Stage tab, enable the stereo cameras:
  - Navigate to World → Nova_Carter_ROS → front_hawk → left_camera_render_product
  - Under Property → Isaac Create Render Product Node → Inputs, check "Enabled"
  - Set height=600 and width=960 for better performance
  - Repeat for right_camera_render_product (a quick topic check follows this list)
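Once the render products are enabled and the simulation is playing, the stereo streams should appear on the ROS 2 graph. A quick sanity check, assuming the topic names used elsewhere in this README:

# Confirm the simulated stereo streams are being published
ros2 topic list | grep front_stereo_camera
ros2 topic hz /front_stereo_camera/left/image_raw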
Step 2: Launch RTAB-Map
# Basic simulation SLAM
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py d455:=false
# With Isaac visual odometry (disable wheel odom TF first!)
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py d455:=false vo:=isaac
# Custom image resolution
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py d455:=false image_width:=1280 image_height:=720

| Parameter | Type | Default | Description |
|---|---|---|---|
| d455 | bool | false | Use RealSense D455 instead of Isaac Sim |
| rtabmap_viz | bool | true | Launch RTAB-Map visualization GUI |
| localization | bool | false | Run in localization mode (map must exist) |
| vo | string | none | Visual odometry: none, rtabmap, isaac |
| stereo | bool | true | Use stereo vision instead of RGB+Depth |
| image_width | int | 960 | Image width for Isaac Sim processing |
| image_height | int | 600 | Image height for Isaac Sim processing |
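If in doubt about a default, the declared launch arguments can also be listed straight from the launch file:

# Show the launch arguments and their default values
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py --show-args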
# D455 with visual odometry and stereo
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py d455:=true vo:=rtabmap stereo:=true
# Simulation with higher resolution
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py d455:=false image_width:=1280 image_height:=720 vo:=rtabmap

# Use existing map for navigation
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py d455:=true localization:=true
# Disable visualization for headless operation
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py d455:=true localization:=true rtabmap_viz:=false

# RGB+Depth mode instead of stereo
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py d455:=true stereo:=false
# Pure wheel odometry (no visual odometry)
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py d455:=true vo:=none

The configuration includes optimized parameters for:
- Loop Closure Detection: Aggressive loop closure with Rtabmap/DetectionRate: 1.0
- Memory Management: Balanced STM/LTM with Mem/STMSize: 30
- Visual Features: GFTT detector with 400 max features for speed
- Registration: 3DoF mode optimized for differential robots
- Grid Mapping: 5cm resolution occupancy grids
- GPU Optimization: CUDA-accelerated stereo processing
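These values live in config/rtabmap_params.yaml. Once the node is up, a quick way to confirm what it actually loaded (a sketch, using the /rtabmap node name referenced later in this README):

# Inspect the parameters picked up by the running rtabmap node
ros2 param get /rtabmap Rtabmap/DetectionRate
ros2 param get /rtabmap Mem/STMSize
ros2 param get /rtabmap Grid/CellSize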
Key optimizations for RTX 4070:
Kp/DetectorStrategy: 6 # GFTT for speed
Kp/MaxFeatures: 400 # Balanced feature count
Vis/MaxFeatures: 1000 # High-quality matching
Grid/CellSize: 0.05 # 5cm grid resolution
Reg/Force3DoF: true     # 2D robot constraint

Optimized for differential drive robots with:
- DWB Local Planner: Smooth path following
- Costmap Integration: RTAB-Map point cloud obstacles
- Velocity Limits: Conservative for safety (max_vel_x: 0.26; runtime check below)
- Recovery Behaviors: Spin, backup, and wait actions
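The velocity limit is defined in config/nav2_rtabmap_params.yaml; it can also be queried at runtime with a sketch like the following, which assumes the default Nav2 controller plugin name FollowPath:

# Query the velocity limit loaded by the controller server
ros2 param get /controller_server FollowPath.max_vel_x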
# Keyboard control
ros2 run teleop_twist_keyboard teleop_twist_keyboard
# Gamepad control (if available)
ros2 launch teleop_twist_joy teleop-launch.py

- Set Initial Pose: Use RViz "2D Pose Estimate" tool
- Send Goal: Use RViz "Nav2 Goal" tool
- Monitor Progress: Check /cmd_vel and navigation status
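Goals can also be sent from the command line instead of RViz, which is handy for scripted tests. A sketch using the standard Nav2 action interface (the target pose below is a placeholder):

# Send a navigation goal to the Nav2 navigate_to_pose action server
ros2 action send_goal /navigate_to_pose nav2_msgs/action/NavigateToPose \
  "{pose: {header: {frame_id: map}, pose: {position: {x: 1.0, y: 0.5}, orientation: {w: 1.0}}}}"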
# Check camera topics
ros2 topic list | grep camera
# Monitor RTAB-Map status
ros2 topic echo /rtabmap/info
# Check navigation status
ros2 topic echo /navigation_result
# View point clouds
ros2 topic echo /rtabmap/cloud_map

# System monitor
ros2 run rqt_robot_monitor rqt_robot_monitor
# TF tree visualization
ros2 run rqt_tf_tree rqt_tf_tree
# Topic frequency check
ros2 topic hz /front_stereo_camera/left/image_raw

- GPU Memory: Monitor with nvidia-smi
- CPU Cores: RTAB-Map uses multi-threading effectively
- Storage: Use SSD for database storage
- Cooling: Ensure adequate cooling during long mapping sessions
- Database Location: Store on fast SSD (~/rtabmap.db)
- Clear Cache: Delete old databases to free space (example below)
- Parameter Tuning: Adjust Mem/STMSize based on available RAM
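When the database grows too large, checking its size and starting fresh is straightforward. A sketch, assuming the database path given above (stop RTAB-Map first so the file is not locked):

# Check how large the current database has grown
du -h ~/rtabmap.db
# Remove it to start a new map from scratch
rm ~/rtabmap.db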
# Check TF tree
ros2 run tf2_tools view_frames
# Verify nav2 parameters
ros2 param list /controller_server
# Check cmd_vel output
ros2 topic echo /cmd_vel

# Increase feature detection
ros2 param set /rtabmap Kp/MaxFeatures 600
# Try different detector
ros2 param set /rtabmap Kp/DetectorStrategy 9 # ORB detector
# Check camera calibration
ros2 topic echo /camera/color/camera_info

# Check USB connection
lsusb | grep Intel
# Restart camera driver
ros2 lifecycle set /camera/realsense2_camera_manager configure
ros2 lifecycle set /camera/realsense2_camera_manager activate
# Verify camera topics
ros2 topic list | grep camera

- Reduce Resolution: Use image_width:=640 image_height:=480 (see the example after this list)
- Disable Unnecessary Sensors: Turn off unused cameras
- GPU Memory: Close other GPU applications
- Simulation Speed: Reduce physics timestep
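Combining the resolution tip with the headless option described earlier, a lighter-weight simulation run might look like this:

# Lower-resolution, headless simulation run
ros2 launch rtabmap_isaacsim_d455 rtabmap_main.launch.py d455:=false image_width:=640 image_height:=480 rtabmap_viz:=false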
| Error | Solution |
|---|---|
| TF timeout | Check robot_state_publisher and the odom→base_link transform (see the command below) |
| Database locked | Delete the existing .db file or change the database path |
| No camera info | Verify camera calibration and topics |
| Memory limit | Reduce Mem/STMSize or increase system RAM |
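For the TF timeout case in particular, watching the transform directly is often the quickest diagnosis. A sketch using the standard tf2 tools:

# Echo the odom -> base_link transform to confirm it is being published
ros2 run tf2_ros tf2_echo odom base_link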
- Mapping Rate: 10-20 Hz depending on scene complexity
- Loop Closure: ~2-5 seconds detection time
- Memory Usage: 4-8GB RAM for typical office environment
- GPU Usage: 30-60% during active mapping
- Office Environment (50m × 50m): ~500MB
- Large Building (100m × 100m): ~2-5GB
- Outdoor Area (200m × 200m): ~10-20GB
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- RTAB-Map Team: For the excellent SLAM library
- NVIDIA Isaac Team: For Isaac ROS and simulation tools
- Intel RealSense Team: For camera drivers and SDK
- Nav2 Team: For the navigation stack
For issues and questions:
- Check this README and troubleshooting section
- Search existing GitHub issues
- Create new issue with detailed description and logs
- Join ROS Discourse for community support
Welcome, aspiring developer! You've built something magnificent, and now it's time to give it a permanent home in the digital cosmos. This guide will turn you from a Git-newbie into a version control virtuoso.
This is how you get your project onto GitHub for the first time.
Step 1: Initialize Your Local Time Machine (Git Repository)
First, we must turn your project folder into a repository. It's like installing a time machine right in your lab.
# Navigate to your project's root directory
cd /home/robot/robot_ws
# Initialize the repository
git init

Step 2: Prepare Your Files for Launch
Gather all your brilliant work and prepare it for the first snapshot in time.
# Add all files to the staging area
git add .

Step 3: Seal the First Time Capsule (Commit)
Create your first commit. This is a snapshot of your project at this exact moment. The message explains what's in the snapshot.
# Commit the files with a descriptive message
git commit -m "feat: Initial stable version of RTAB-Map project"Step 4: Create a Home in the Cosmos (GitHub Repository)
- Go to GitHub.com and log in.
- Click the + icon in the top-right corner and select "New repository".
- Name your repository (e.g., my-robot-slam-project).
- IMPORTANT: Do NOT initialize it with a README, .gitignore, or license. Your project already has these.
- Click "Create repository".
Step 5: Connect Your Lab to the Cosmos
You'll see a page with a URL. Copy it. Now, link your local repository to the one on GitHub.
# Replace <YOUR_GITHUB_REPO_URL> with the URL you copied
git remote add origin <YOUR_GITHUB_REPO_URL>
# Verify the connection
git remote -v

Step 6: The Final Push!
Launch your code into the GitHub galaxy!
# Rename your primary branch to 'main' (a common standard)
git branch -M main
# Push your code to the 'main' branch on GitHub
git push -u origin main

Congratulations! Your code is now safely stored on GitHub.
You wanted to mark this as a "stable version." The best way to do this is with a tag. Tags are markers for specific commits, perfect for releases.
# Create a tag for your first stable version
# The -a flag creates an annotated tag, and -m provides a message
git tag -a v1.0 -m "Stable Version 1.0: Initial setup for D455 and Isaac Sim"
# Push the tag to GitHub (they don't go up automatically)
git push origin v1.0

Now, if you look at your repository on GitHub, you'll see "v1.0" in the "Releases" or "Tags" section.
For all future changes, your workflow will be a simple loop.
- Make your changes: Edit code, add files, etc.
- Check the status: See what you've changed.
git status
- Add your changes: Stage the files you want to save in the next snapshot.
# Add a specific file
git add path/to/your/file.py
# Or add all changes
git add .
- Commit your changes: Create the new snapshot with a clear message.
git commit -m "feat: Add an amazing new feature" # or "fix: Fix a pesky bug" # or "docs: Update the README"
- Push your changes: Send your new commits to GitHub.
git push
And that's it! You are now officially a practitioner of the version control arts. Go forth and code with confidence!