-
Hi @JuanDYB An easy solution for both cameras may be to use 1280x720 for both streams and align color to depth instead of depth to color. This causes the color FOV to automatically resize to fit the depth FOV, meaning that you get a fullscreen aligned image where no detail is cut off at the edges. Color-to-depth alignment can be done by changing Align(Stream.Color) to Align(Stream.Depth).
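In the C# wrapper, that might look roughly like the sketch below (the formats and frame rate are illustrative assumptions, not settings taken from this thread):

```csharp
using Intel.RealSense;

var pipeline = new Pipeline();
var cfg = new Config();

// Same 1280x720 resolution on both streams, as suggested above.
cfg.EnableStream(Stream.Depth, 1280, 720, Format.Z16, 30);
cfg.EnableStream(Stream.Color, 1280, 720, Format.Bgr8, 30);
pipeline.Start(cfg);

// Align(Stream.Depth) maps the color image into the depth viewpoint,
// so the color FOV is resized to fit the depth FOV.
var align = new Align(Stream.Depth);

using (var frames = pipeline.WaitForFrames())
using (var aligned = FrameSet.FromFrame(align.Process(frames)))
using (var depth = aligned.DepthFrame)
using (var color = aligned.ColorFrame)
{
    // Both frames now share the depth sensor's viewpoint and resolution.
}
```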
-
Hi @MartyG-RealSense, thanks for your quick reply! I have another question related to this. The D455 has a bigger FOV but it is designed for a minimum depth of 60 cm. I need the bigger FOV, but I also need to read depth at 30-40 cm at least. I have read that if I use a smaller resolution in the depth pipeline, the minimum depth will decrease. With regards,
-
The FOV size of the RGB sensor determines how much gets cut off when aligning depth to color. If the RGB FOV is smaller than the depth FOV, then when performing depth-to-color alignment the extra depth detail at the edges that the RGB FOV cannot see is excluded from the aligned image.

Reducing the resolution is a simple but crude way to reduce the minimum distance of the camera. The more refined, precision-control method is to change the value of the Disparity Shift option from its default of '0' to a higher value. Increasing the value reduces the minimum distance, enabling the camera to get closer to an object / surface. The drawback is that as the minimum distance decreases, the maximum observable depth distance also decreases, meaning that more of the background detail is excluded from the image as Disparity Shift is increased. For the large reduction in minimum distance that you need, I would recommend trying a Disparity Shift value of 200.

As you are using C#, it is more difficult to configure Disparity Shift directly than it is in C++ or Python. The best solution for C# is to define the value in a json camera configuration file and then load the json from your C# script to import the settings contained in the file, as described at #9609 (comment)

Using the Decimation Filter to 'downsample' the resolution to make the image less detailed (by using larger-size pixels) would not reduce the camera's minimum distance.
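As a rough sketch of that json-loading approach in C# (this assumes the wrapper's AdvancedDevice class; the file name is a placeholder, and enabling advanced mode resets the camera):

```csharp
using System.IO;
using Intel.RealSense;

var ctx = new Context();
using (var devices = ctx.QueryDevices())
{
    // Put the first connected camera into advanced mode, which is
    // required before a json preset can be applied.
    var adv = AdvancedDevice.FromDevice(devices[0]);
    if (!adv.AdvancedModeEnabled)
    {
        // Note: this resets the device, so in a real application you
        // would re-enumerate devices before loading the json.
        adv.AdvancedModeEnabled = true;
    }

    // "preset.json" is a placeholder for the edited configuration file
    // containing the Disparity Shift value.
    adv.JsonConfiguration = File.ReadAllText("preset.json");
}
```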
-
Thanks so much for your clarification! I will try loading the json configuration file.
-
I did some tests with a D455 and the RealSense Viewer's Disparity Shift. The ideal 'sweet spot' value for capturing objects 30 to 40 cm away whilst also capturing background detail 2 m away appeared to be '20' rather than 200. At a setting of 20, the near-range detail disappeared when the camera was 20 cm or less from a surface.

You can manually strip out the settings from a json file that you do not need. The link below has some examples of json preset files.

https://dev.intelrealsense.com/docs/d400-series-visual-presets#preset-table

The edited-down json file's contents would likely look like this:
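A minimal file keeping only the Disparity Shift setting might look like the sketch below (the key name is taken from Intel's published preset files; treat the exact contents as illustrative):

```json
{
    "param-disparityshift": "20"
}
```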
-
Hi!
-
I was using the RealSense Viewer's default settings without any changes other than Disparity Shift. I was testing with the Windows version, so there may be some variation if you are using Ubuntu. You could try '25' to see if you can get below 44 cm.
-
I was testing with 848x480 depth resolution.
-
Ah ok! I was testing with a Disparity Shift of 20, but I was using the full 1280x720 resolution because you said that decreasing the resolution is not the best way to read distances below 60 cm. Thanks! I will try this config!
-
Hi!
I'm testing the RealSense D415 and D455 and I have some doubts about image alignment between the depth and BGR/RGB streams.
In the camera specs, each camera has different FOV angles for each sensor.
D415
D455
My question is about the recommended way to run these camera streams, because if you enable streams with an aspect ratio other than 16:9 the image is cropped, so the FOV angle is not the same as in the camera specs.
So I suppose that if you want to get the FOV angle from the camera specs, you have to use specific resolutions when streaming.
Is that correct? What would be the recommended resolutions for the RGB and depth pipelines if I want to get the full FOV of the camera?
For example, in the case of the D455 you can enable the RGB stream at 1280x800 and the depth stream at 1280x720. With this configuration you get more FOV in the RGB frame than in the depth frame. So, if I want to get the depth of some point in the RGB frame, how can I get the correct pixel coordinate equivalence?
I'm doing it in the following way in C#. I have checked that after applying alignment both the RGB and depth frames have the same resolution, so if I point to a zone that the depth sensor doesn't reach I will get zero depth. Am I on the right track?
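(The original snippet is not reproduced here; for illustration, a minimal sketch of that depth-to-color alignment in the librealsense C# wrapper might look like this, using the resolutions mentioned above:)

```csharp
using Intel.RealSense;

var pipeline = new Pipeline();
var cfg = new Config();
cfg.EnableStream(Stream.Depth, 1280, 720, Format.Z16, 30);
cfg.EnableStream(Stream.Color, 1280, 800, Format.Bgr8, 30);
pipeline.Start(cfg);

// Align(Stream.Color) maps the depth image into the color viewpoint,
// so the aligned depth frame takes on the color stream's resolution.
var align = new Align(Stream.Color);

using (var frames = pipeline.WaitForFrames())
using (var aligned = FrameSet.FromFrame(align.Process(frames)))
using (var depth = aligned.DepthFrame)
{
    // A pixel coordinate from the RGB frame can now be used directly on
    // the aligned depth frame; zones the depth sensor cannot see read 0.
    float metres = depth.GetDistance(640, 400);
}
```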