omnimapper died with pcl::OrganizedMultiPlaneSegmentation::segment message #4
Hey @zikprid0, thanks for sharing an example bag file to reproduce and confirm the issue. Triggering the segfault using the bag file in a debug workspace, with the stack trace captured here:
Due to the lockdown, we've been unable to fully use the robots in the university lab, so we're working towards setting up a Gazebo simulation environment for testing as well. Could I ask which Gazebo plugin you used to generate the point clouds recorded in the bag file, and how you configured your world file for the depth sensor model? However, the height should not be one, but instead should correspond to the aspect ratio of the image resolution from the virtual depth camera. Are you using an external node for doing the RGB depth registration onto the point cloud? It seems like that's something the
Hello, @ruffsl
I think the problem is that `libgazebo_ros_camera` generates its data as non-organized point clouds. From the PCL docs:
Maybe we can use
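To make the organized/non-organized distinction concrete, here is a minimal sketch (plain Python, not omnimapper or PCL code) of the check that matters for `OrganizedMultiPlaneSegmentation`: an organized cloud has height > 1 and a width x height grid that accounts for every point. The `Cloud` class below is a hypothetical stand-in for the relevant fields of a `sensor_msgs/PointCloud2` message.

```python
# Sketch (not omnimapper code): checking whether a point cloud is
# "organized" in the PCL sense, i.e. laid out as an image grid.
# `Cloud` is a hypothetical stand-in for sensor_msgs/msg/PointCloud2.
from dataclasses import dataclass

@dataclass
class Cloud:
    width: int      # columns (image width for organized clouds)
    height: int     # rows; 1 means the cloud is unorganized
    n_points: int   # total number of points serialized in the data

def is_organized(cloud: Cloud) -> bool:
    # An organized cloud has height > 1 and a grid that accounts for
    # every point (including the NaN-filled "invalid" points).
    return cloud.height > 1 and cloud.width * cloud.height == cloud.n_points

# A gazebo_ros_camera-style cloud: flattened, height == 1.
print(is_organized(Cloud(width=921600, height=1, n_points=921600)))   # False
# A depth_image_proc-style cloud: full 1280x720 grid.
print(is_organized(Cloud(width=1280, height=720, n_points=921600)))   # True
```

Note that an unorganized cloud can hold exactly the same points; only the header layout differs, which is why an organized-only algorithm rejects it.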
@zikprid0, were you able to test the suggestion of using
@BillWSY, your linked page returned a 404 Not Found error.
@zikprid0 That repo is the one we used to deliver Omnimapper to you last November -- @yjkim046 and @hyunseok have access to it. I just added you as well (you probably need to accept the invitation to be able to access it). It's just that height should be > 1. Something like `width = 1280`, `height = 720` will look reasonable.
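For context on why height > 1 matters: an organized cloud is indexed row-major like an image, so a pixel (u, v) maps directly to one point offset in the flat data array. A small sketch using the 1280x720 resolution mentioned above (a hypothetical helper, not omnimapper code):

```python
# Sketch: row-major pixel-to-point indexing assumed by organized
# PCL clouds. width/height mirror the 1280x720 example above.
width, height = 1280, 720

def point_index(u: int, v: int) -> int:
    # pixel column u in [0, width), pixel row v in [0, height)
    assert 0 <= u < width and 0 <= v < height
    return v * width + u

print(point_index(0, 0))        # 0 (top-left pixel)
print(point_index(1279, 719))   # 921599 (bottom-right, width*height - 1)
```

With `height = 1` this mapping degenerates and per-pixel neighborhood lookups (which plane segmentation relies on) are no longer possible.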
Hello. Omnimapper did not die anymore. But I cannot be certain it works correctly. Thank you.
Did you download the latest code? We encountered similar issues before but fixed them in #3
By the way, the LGe deliverable repo contains a Dockerfile that patches the current ROS2 code and builds depth_image_proc. May I ask how you implemented your own depth image processing?
@BillWSY I think I downloaded the latest code, but I will check it and tell you.
I re-cloned the omnimapper_ros repository and tested it.
Thanks.
Hi -- thanks for the feedback. I think the error message `Input dataset is not from a projective device!` gives us a hint that the input data is malformed. Unless you can be sure the data in your msg is organized and flattened sequentially, I think it is a bit risky to just change the header -- it will be much safer if we can use
Hello, I tested depth_image_proc.
I checked the echo of the topics `/front_camera/camera_info_raw`, `/front_camera/rgb/image_raw`, and `/front_camera/depth/image_raw`. Thanks.
Hi -- thanks for the feedback. Did you happen to encounter any error messages while running depth_image_proc? I am worried it may not be running at all.
Hello. I could not see error messages.
Could you list the output from
Also, using markdown code blocks helps readability and avoids word wrapping:
Hello. This is info of nodes and topics.
Can we check the QoS policies on all the publishers/subscribers to see if there is a mismatch? What DDS vendor were you using?
Good suggestion @BillWSY. @zikprid0, could you verify that the respective subscribers and publishers are using compatible QoS? For example, by default omnimapper_ros attempts to subscribe to sensor topics using ROS's sensor QoS profile. There is a parameter in omnimapper_ros that can be used to change the QoS, e.g. when using rosbag, which publishes via the system default QoS: f1ee4fc
The latter should be less of an issue in ROS2 Foxy now that rosbag has support for preserving the QoS settings used when recording: ros2/ros2_documentation#627
However, configuring QoS at runtime is still an open design question:
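As a reminder of how a mismatch plays out: DDS matches QoS on a request-vs-offered basis, so a subscription requesting reliable delivery will silently fail to match a best-effort publisher. A minimal sketch of that reliability rule (plain Python, not actual rmw code; the constants are illustrative):

```python
# Sketch of the DDS "request vs offered" matching rule for the
# reliability QoS policy (not actual rmw code). A subscription
# requesting RELIABLE will not match a BEST_EFFORT publisher;
# the connection is silently never made.
BEST_EFFORT, RELIABLE = 1, 2  # higher value = stronger guarantee

def reliability_compatible(offered: int, requested: int) -> bool:
    # Compatible when the publisher offers at least as strong a
    # guarantee as the subscriber requests.
    return offered >= requested

# Sensor-data subscriber (best effort) vs default publisher (reliable):
print(reliability_compatible(offered=RELIABLE, requested=BEST_EFFORT))  # True
# Default subscriber (reliable) vs sensor-data publisher (best effort):
print(reliability_compatible(offered=BEST_EFFORT, requested=RELIABLE))  # False
```

The second case is the classic rosbag-playback pitfall: messages flow in `ros2 topic echo` (which requests best effort or matches defaults) while a reliable subscriber sees nothing.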
Hello, @ruffsl, @BillWSY. So I checked the depth_image_proc and image_transport source code.
And I got this result:
The `image_transport::Subscriber` of `sub_depth_` and `sub_rgb_` printed `0x1`. Thank you.
Could you step back and detail exactly what you changed, which launch file you are using, and what values you set for which parameters? For example, the
If that's the case, then I'm thinking
To repeat @BillWSY's question,
@BillWSY @ruffsl
to
Looking back at the world file you provided, what are you trying to do with the following plugin configuration?

```xml
<plugin name="plugin_name" filename="libgazebo_ros_camera.so">
  <ros>
    <argument>/front_camera/depth/image_raw:=/front_camera/depth/image_raw</argument>
    <argument>/front_camera/depth/camera_info:=/front_camera/depth/camera_info</argument>
    <argument>/front_camera/image_raw:=/front_camera/rgb/image_raw</argument>
    <argument>/front_camera/image_raw/compressed:=/front_camera/rgb/image_raw/compressedDepth</argument>
    <argument>/front_camera/image_raw/compressedDepth:=/front_camera/rgb/image_raw/compressedDepth</argument>
    <argument>/front_camera/image_raw/theora:=/front_camera/rgb/image_raw/theora</argument>
    <argument>/front_camera/camera_info_raw:=/front_camera/camera_info_raw</argument>
    <argument>/front_camera/points:=/front_camera/depth/points</argument>
  </ros>
</plugin>
```

If that's the case, then I think you should be using the

Could you please check that the Gazebo plugin is publishing the correct data to the expected topics, regardless of whether
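One quick way to spot odd entries in a remap table like the one above is to flag remaps whose last topic segment changes (e.g. `.../compressed` remapped onto `.../compressedDepth`, which would route two different transports to the same name). A hypothetical sketch, not a ROS tool:

```python
# Sketch: sanity-checking a remap table for entries whose transport
# suffix (last topic segment) changes between source and target.
# The remaps dict mirrors two of the <argument> entries above.
remaps = {
    "/front_camera/image_raw/compressed": "/front_camera/rgb/image_raw/compressedDepth",
    "/front_camera/image_raw/theora": "/front_camera/rgb/image_raw/theora",
}

def suspicious(remaps: dict) -> list:
    flagged = []
    for src, dst in remaps.items():
        # A remap normally relocates a topic without renaming its
        # transport suffix; a changed suffix is worth a second look.
        if src.rsplit("/", 1)[-1] != dst.rsplit("/", 1)[-1]:
            flagged.append((src, dst))
    return flagged

for src, dst in suspicious(remaps):
    print(f"suffix changes: {src} -> {dst}")
```

Run against the table above, only the `compressed` -> `compressedDepth` entry is flagged.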
Hello. Thanks.
Hi -- would you be able to record a bag file for all of the image_raw topics? We can try to run it here to see if we can find the issue.
Hi @zikprid0, I was inspecting the ros2 bag file you sent us, and it seems the depth image is malformed. In rviz I can only see a black image with a few bright pixels at the top. Do you have the same issue? Were you able to use rviz to visualize `/front_camera/depth/image_raw`?
@BillWSY, thanks.
I've been waiting on Foxy here, as the current LTS ships with QoS improvements to rosbag, given ros2/rosbag2#125 has been resolved. I had been suspecting that the differing QoS from recording vs playback may have been subtly affecting your setup, given that before Foxy you would have had to switch between reliable QoS publication to record bag files and the unreliable QoS publication you'd have to use for Omnimapper to subscribe. With Foxy, rosbag2 can now record and play back with the same original QoS, helping us determine if you've set your publication QoS options correctly.
I had also been hoping that ros-simulation/gazebo_ros_pkgs#1080 would be resolved by now, but one can still build the Gazebo ROS2 packages from source, as I've been doing over on ros-swg/turtlebot3_demo#34. There is an open PR for adding a depth-only option to avoid serializing a color-registered depth cloud, ros-simulation/gazebo_ros_pkgs#1118, so the reduction in the number of channels may help in debugging any cloud density issues.
With CogRob/omnimapper#30, omnimapper and omnimapper_ros are now building with Foxy, so we could try to debug your QoS issues further with the current LTS distro.
It seems your image does not include a visualization of the
Hello. I made new progress today.
I tried after moving the robot here and there.
Thanks for the update. We had seen similar issues before and fixed them with this commit: CogRob/omnimapper@79c8b12. Can you check that you are using the latest code?
@BillWSY, thank you. I made progress with your suggestion.
Hello. I tested omnimapper with the docker image cogrob/omnimapper_ros:latest.
Omnimapper died at the first cloud callback.
I tested it with ROS Dashing, Ubuntu Bionic, and the Gazebo simulator.
This is the error log of omnimapper:
It seems the PCL library could not extract a normal cloud from the input cloud.
This is the PointCloud2 ROS message of that test:
I also tested with the `is_dense` flag set to true by moving the robot,
but it was not helpful.