Problem Description: After visual fusion, the shape estimation of vehicles is inaccurate. During tracking and prediction this produces spurious objects that flash in and out (moving quickly, persisting for only one or two frames), and these objects appear to be driving toward our lane.
Scene Description: Our deployment and testing scenarios usually involve many large trucks, 15 to 30 meters long. We have found that the problems described above occur when detecting vehicles in adjacent lanes (one or two lanes away). In addition, there are cases where the vehicle's actual orientation is along the lane, but the detected orientation is at an angle to the lane, sometimes almost perpendicular.
After troubleshooting, we found that when only CenterPoint is active, or only the clustering pipeline is enabled, there are few problems with the orientation of the detected vehicles. However, with only the vision-based detection active, we traced the problem to this particular line of code. Having read the source code and papers of the shape estimation node, I believe this problem is very likely to occur there. I have also analyzed the YOLO detection boxes: because of the length of our trucks, a single vehicle may be detected as two or more vehicles, which may also contribute to this problem.
At first I wanted to handle this in the object_merge node, but found it difficult to optimize there. After further analysis, it turned out that visual fusion detects vehicles at long range (more than 80 meters ahead of or behind the ego vehicle) relatively accurately, while CenterPoint's detection range is roughly within 80 meters, so the two are complementary. Therefore, for now we simply remove the vehicle objects within 80 meters of base_link (ahead and behind) from the visual fusion detections, and pass the remaining objects on to the subsequent object_merge; a sketch of this filter is shown below.
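For reference, here is a minimal sketch of the kind of filter node we use. It assumes the fused objects use the autoware_auto_perception_msgs/DetectedObjects interface and are already expressed in the base_link frame; the node name, topic names, and parameter name are placeholders, not the actual names in our setup.

```python
#!/usr/bin/env python3
# Minimal sketch of the distance-based filter described above.
# Assumptions (not from the original setup): objects are in the base_link
# frame, and topic/parameter names are placeholders.
import rclpy
from rclpy.node import Node
from autoware_auto_perception_msgs.msg import DetectedObjects, ObjectClassification

VEHICLE_LABELS = {
    ObjectClassification.CAR,
    ObjectClassification.TRUCK,
    ObjectClassification.BUS,
    ObjectClassification.TRAILER,
}


class VisionFusionRangeFilter(Node):
    def __init__(self):
        super().__init__('vision_fusion_range_filter')
        # Vehicle objects closer than this (longitudinally) are dropped,
        # since CenterPoint already covers that range reliably.
        self.declare_parameter('min_longitudinal_range_m', 80.0)
        self.pub = self.create_publisher(DetectedObjects, '~/output/objects', 1)
        self.sub = self.create_subscription(
            DetectedObjects, '~/input/objects', self.on_objects, 1)

    def on_objects(self, msg: DetectedObjects):
        min_range = self.get_parameter('min_longitudinal_range_m').value
        filtered = DetectedObjects()
        filtered.header = msg.header
        for obj in msg.objects:
            labels = {c.label for c in obj.classification}
            x = obj.kinematics.pose_with_covariance.pose.position.x
            # Drop vehicle-class objects within +/- 80 m of base_link;
            # keep everything else for the downstream object_merge.
            if labels & VEHICLE_LABELS and abs(x) < min_range:
                continue
            filtered.objects.append(obj)
        self.pub.publish(filtered)


def main():
    rclpy.init()
    rclpy.spin(VisionFusionRangeFilter())


if __name__ == '__main__':
    main()
```

Only the longitudinal distance is checked here because the fused long-range detections we want to keep are ahead of or behind the ego vehicle; non-vehicle classes are passed through untouched.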
What I want to ask is: does this problem occur with the visual fusion vehicle objects in your tests? Are there more reliable solutions to this problem? If so, please share some ideas. Thank you.
Here are the merged objects without the filter node.
Here are the merged objects with the filter node.