Integrating Modular SLAM Pipeline into OpenCV Contrib #4043
QueenofUSSR wants to merge 29 commits into opencv:4.x from
Conversation
@QueenofUSSR build is failing.
@QueenofUSSR, can you add some tests and a sample?
Okay, we will upload some tests and a visualized sample soon!
```cpp
catch (...) { points4D.release(); }
if (points4D.empty()) return newPoints;
Mat p4d64;
if (points4D.type() != CV_64F) points4D.convertTo(p4d64, CV_64F); else p4d64 = points4D;
```
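For context, here is a minimal sketch of how a `cv::triangulatePoints` result can be turned from its homogeneous 4xN output into 3D points, including the same CV_64F conversion as above. The helper name `triangulateToPoints` is hypothetical and not part of the PR:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <cmath>
#include <vector>

// Triangulate matched 2D points with two 3x4 projection matrices and
// dehomogenize the result into Euclidean 3D points (sketch only).
std::vector<cv::Point3d> triangulateToPoints(const cv::Mat& P1, const cv::Mat& P2,
                                             const std::vector<cv::Point2f>& pts1,
                                             const std::vector<cv::Point2f>& pts2)
{
    std::vector<cv::Point3d> out;
    if (pts1.empty() || pts1.size() != pts2.size()) return out;

    cv::Mat points4D;                                   // 4xN homogeneous coordinates
    cv::triangulatePoints(P1, P2, pts1, pts2, points4D);

    cv::Mat p4d64;                                      // work in double precision
    if (points4D.type() != CV_64F) points4D.convertTo(p4d64, CV_64F);
    else                           p4d64 = points4D;

    for (int i = 0; i < p4d64.cols; ++i) {
        double w = p4d64.at<double>(3, i);
        if (std::abs(w) < 1e-12) continue;              // skip points at infinity
        out.emplace_back(p4d64.at<double>(0, i) / w,
                         p4d64.at<double>(1, i) / w,
                         p4d64.at<double>(2, i) / w);
    }
    return out;
}
```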
```cpp
    for (int kfId : localKfIndices) {
        if (fixedSet.find(kfId) != fixedSet.end()) continue;
        if (kfId < 0 || kfId >= static_cast<int>(keyframes.size())) continue;
        KeyFrame &kf = keyframes[kfId];
        // Build matchedMpIndices: for each keypoint in this KF, which mappoint index it corresponds to
        std::vector<int> matchedMpIndices(kf.kps.size(), -1);
        for (size_t mpIdx = 0; mpIdx < mappoints.size(); ++mpIdx) {
            const MapPoint &mp = mappoints[mpIdx];
            if (mp.isBad) continue;
            for (const auto &obs : mp.observations) {
                if (obs.first == kfId) {
                    int kpIdx = obs.second;
                    if (kpIdx >= 0 && kpIdx < static_cast<int>(matchedMpIndices.size()))
                        matchedMpIndices[kpIdx] = static_cast<int>(mpIdx);
                }
            }
        }
        std::vector<bool> inliers;
        // Use the same number of iterations as outer loop for RANSAC attempts
        optimizePose(kf, mappoints, matchedMpIndices, fx, fy, cx, cy, inliers, std::max(20, iterations));
    }
}
```
Avoid this triple-nested loop; you can precompute matchedMpIndices.
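One possible shape of that precomputation (illustrative only; it reuses the type and variable names from the snippet above and assumes `<unordered_map>` is included): build the per-keyframe lookup tables in a single pass over the map, then run the pose-optimization loop against them.

```cpp
// Keyframe id -> (keypoint index -> mappoint index) tables, built once.
std::unordered_map<int, std::vector<int>> matchedMpIndicesByKf;
for (int kfId : localKfIndices) {
    if (kfId < 0 || kfId >= static_cast<int>(keyframes.size())) continue;
    matchedMpIndicesByKf[kfId].assign(keyframes[kfId].kps.size(), -1);
}

// Single pass over all mappoints and their observations.
for (size_t mpIdx = 0; mpIdx < mappoints.size(); ++mpIdx) {
    const MapPoint& mp = mappoints[mpIdx];
    if (mp.isBad) continue;
    for (const auto& obs : mp.observations) {           // obs = {kfId, kpIdx}
        auto it = matchedMpIndicesByKf.find(obs.first);
        if (it == matchedMpIndicesByKf.end()) continue;  // not a local keyframe
        std::vector<int>& table = it->second;
        if (obs.second >= 0 && obs.second < static_cast<int>(table.size()))
            table[obs.second] = static_cast<int>(mpIdx);
    }
}

// Pose optimization now just looks up the precomputed table.
for (int kfId : localKfIndices) {
    auto it = matchedMpIndicesByKf.find(kfId);
    if (it == matchedMpIndicesByKf.end() || fixedSet.count(kfId)) continue;
    std::vector<bool> inliers;
    optimizePose(keyframes[kfId], mappoints, it->second, fx, fy, cx, cy,
                 inliers, std::max(20, iterations));
}
```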
modules/slam/src/vo.cpp
Outdated
```cpp
auto ratioKeep = [&](const std::vector<std::vector<DMatch>>& knn, bool forward) {
    std::vector<DMatch> filtered;
    for (size_t qi = 0; qi < knn.size(); ++qi) {
        if (knn[qi].empty()) continue;
        DMatch best = knn[qi][0];
        float ratio = 0.75f;
        if (knn[qi].size() >= 2) {
            if (knn[qi][1].distance > 0) {
                if (best.distance / knn[qi][1].distance > ratio) continue;
            }
        }
        // mutual check
        int t = forward ? best.trainIdx : (int)qi;
        // find reverse match for t
        const auto &rev = forward ? knn21 : knn12;
        if (t < 0 || t >= (int)rev.size() || rev[t].empty()) continue;
        DMatch rbest = rev[t][0];
        if ((forward && rbest.trainIdx == (int)qi) || (!forward && rbest.trainIdx == best.queryIdx)) {
            filtered.push_back(best);
        }
    }
    return filtered;
};
std::vector<DMatch> goodMatches = ratioKeep(knn12, true);
```
Why do we need this lambda function? You can use the code directly.
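For illustration, since the excerpt above only invokes the lambda with forward=true, the inlined forward branch could look roughly like this (a sketch, not the PR's final code; knn12/knn21 are the forward and reverse knnMatch results from the diff):

```cpp
// Lowe's ratio test plus a mutual (cross) check against the reverse k-NN matches.
std::vector<cv::DMatch> goodMatches;
const float ratio = 0.75f;
for (size_t qi = 0; qi < knn12.size(); ++qi) {
    if (knn12[qi].empty()) continue;
    const cv::DMatch& best = knn12[qi][0];
    if (knn12[qi].size() >= 2 && knn12[qi][1].distance > 0 &&
        best.distance / knn12[qi][1].distance > ratio)
        continue;                                        // fails the ratio test
    int t = best.trainIdx;
    if (t < 0 || t >= static_cast<int>(knn21.size()) || knn21[t].empty())
        continue;
    if (knn21[t][0].trainIdx == static_cast<int>(qi))    // mutual consistency
        goodMatches.push_back(best);
}
```

When the ratio test is not needed, cv::BFMatcher constructed with crossCheck=true already provides a built-in mutual check.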
… CSV and unused variables, and fix warnings.
@QueenofUSSR, please let me know once you are done adding the tests and sample.
Thank you for your notification! There is now a simple visualized VO sample. It will be built at [build directory]/bin/example_slam_run_vo_sample if …
…al, triangulation fix (camera↔world), switch to OpenCV logging, and minor robustness tweaks.
…y, SlamSystem, map persistence, and visualization/sample updates to replace the old VO interface.
@abhishek-gola We have tested our VO module on EuRoC MH01 cam0: the root mean square of the Absolute Trajectory Error (ATE) is 2.46924 m, and the root mean square of the Relative Pose Error (RPE) is 0.0367811 m. The test code is in …
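For reference, the RMS ATE reported here is typically the root mean square of the per-pose translation error between associated and aligned estimated/ground-truth trajectories. A generic sketch of that computation (not the PR's evaluation code; rmsAte is a hypothetical helper, and trajectory association/alignment is assumed to have been done beforehand):

```cpp
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// RMS of the translation error over already-associated, already-aligned poses.
double rmsAte(const std::vector<cv::Point3d>& est, const std::vector<cv::Point3d>& gt)
{
    CV_Assert(est.size() == gt.size() && !est.empty());
    double sumSq = 0.0;
    for (size_t i = 0; i < est.size(); ++i) {
        cv::Point3d d = est[i] - gt[i];                 // per-pose translation error
        sumSq += d.dot(d);
    }
    return std::sqrt(sumSq / static_cast<double>(est.size()));
}
```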
…ose conversions/gauge handling and exports, and tightens map/visualizer robustness.
Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
Patch to opencv_extra has the same branch name.
Summary
This PR adds the SLAM module (slam), a compact visual odometry and small-scale SLAM toolkit. The module includes feature extraction, matching, two-view initialization, pose estimation, local mapping (MapManager / MapPoint), a simple optimizer (local BA with an SFM fallback and optional g2o support), a visualizer with a top-down trajectory view, a localizer (PnP-based relocalization), and utilities for loading image sequences and intrinsics.
Motivation
The module offers a lightweight, self-contained SLAM implementation that demonstrates how to use OpenCV building blocks for VO/SLAM research and examples. It is useful as an educational reference and as a quick prototyping base for researchers who want to extend it or compare against other methods (the ORB-SLAM family, etc.). It also provides a straightforward path to enabling g2o, when available, for more robust BA.
What changed / what’s included
- slam.hpp: module aggregator
- vo.hpp: VisualOdometry wrapper
- data_loader.hpp: sequence loader with YAML intrinsics parsing
- feature.hpp: FeatureExtractor (ORB + ANMS + flow-aware selection)
- matcher.hpp: Matcher (ratio test, bucketing, mutual check)
- initializer.hpp: two-view initializer using H/F decomposition + triangulation
- pose.hpp: PoseEstimator (essential matrix + recoverPose)
- map.hpp & keyframe.hpp: MapManager, MapPoint, KeyFrame
- localizer.hpp: PnP-based relocalizer
- optimizer.hpp: Optimizer (local BA using either g2o or an OpenCV-based SFM fallback)
- visualizer.hpp: simple frame/top-down visualization and trajectory saving

Implementation (under src/):
- vo.cpp: main VisualOdometry run loop, backend thread for local BA, CSV diagnostics, initialization + tracking + triangulation pipeline
- data_loader.cpp: filesystem/glob image enumerator, optional sensor.yaml intrinsics parsing (small comment fix in this PR)
- feature.cpp, matcher.cpp, initializer.cpp, pose.cpp, optimizer.cpp, visualizer.cpp, localizer.cpp: corresponding implementations of the headers above
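To illustrate the OpenCV building blocks this module composes (ORB features, ratio-test matching, essential-matrix pose recovery), here is a minimal two-view pose estimation sketch. This is not the slam module's API; twoViewPose is a hypothetical helper and the parameter values are examples only.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Estimate relative rotation R and translation direction t between two frames
// of a calibrated camera with intrinsic matrix K.
void twoViewPose(const cv::Mat& img1, const cv::Mat& img2, const cv::Mat& K,
                 cv::Mat& R, cv::Mat& t)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kps1, kps2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kps1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kps2, desc2);
    if (desc1.empty() || desc2.empty()) return;

    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);

    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& m : knn) {                          // Lowe's ratio test
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) {
            pts1.push_back(kps1[m[0].queryIdx].pt);
            pts2.push_back(kps2[m[0].trainIdx].pt);
        }
    }
    if (pts1.size() < 5) return;                         // need >= 5 correspondences

    cv::Mat inlierMask;
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, inlierMask);
    cv::recoverPose(E, pts1, pts2, K, R, t, inlierMask);
}
```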