
Parsing a .bag file to obtain depth and RGB images, then aligning the depth image to the RGB image using the intrinsics and extrinsics to get the depth value at a given pixel: this method gives values that differ from the depth shown when hovering the mouse over the depth image in Intel.RealSense.Viewer, and the Viewer's values seem more accurate #13629

Open
weishiguan opened this issue Dec 25, 2024 · 18 comments

@weishiguan

How can I parse a .bag file to obtain a more accurate depth value for a given pixel in each RGB frame? How does Intel.RealSense.Viewer do it, and can I use the same method to achieve my goal?
[screenshots attached]

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 25, 2024

Hello @weishiguan Have you written your own program script to read the depth from the bag file, please? If you have, then a difference between the RealSense Viewer and a program script is that the Viewer applies a range of filters to the depth information by default, but a script applies no filters unless you deliberately put the filters into the script. An absence of filters in a program script could cause the depth values to be different from those provided in the Viewer.

If you are performing depth to color alignment then it is important to use the color intrinsics or aligned intrinsics, and not the depth intrinsics. This is because when depth to color alignment is performed, the origin of depth changes from the center line of the left infrared sensor to the center line of the RGB sensor.

Also, looking at the '3204' distance value that you provided, I wonder if this is the raw pixel depth value. To get the real world distance in meters, you would multiply the raw depth value by the depth scale value of the camera. For most RealSense 400 Series camera models this value will be '0.001'. For example, 3204 (raw pixel depth value) x 0.001 (camera depth scale) = 3.204 (real-world meters)
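
As a rough sketch of both points together, aligning depth to color and converting a raw pixel value with the depth scale (the bag filename and pixel coordinates below are placeholders, not values from this issue):

    import numpy as np
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_device_from_file("recording.bag")  # placeholder path
    profile = pipeline.start(config)

    # The depth scale converts raw 16-bit depth units to meters ('0.001' on most D400 models).
    depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

    # Align depth onto the color stream so one (x, y) indexes both images, and read
    # the intrinsics from the aligned stream rather than the raw depth stream.
    align = rs.align(rs.stream.color)
    frames = pipeline.wait_for_frames()
    aligned_depth = align.process(frames).get_depth_frame()
    aligned_intrin = aligned_depth.profile.as_video_stream_profile().get_intrinsics()

    x, y = 320, 240  # placeholder pixel
    raw = np.asanyarray(aligned_depth.get_data())[y, x]
    meters = raw * depth_scale  # e.g. 3204 * 0.001 = 3.204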



@weishiguan
Author

I'm very happy to receive your reply. As you said, I did write my own program script to read depth from the depth images parsed out of the .bag. Can I introduce the same filters as the Viewer into my code, and could you provide the relevant code to help?

Also, for aligning depth to RGB, I used the RGB intrinsics and extrinsics, which is consistent with your advice. As for the distance value, the unit I provided is mm (millimeters); thank you for the reminder.

In addition, I am attaching my code as a reference for the alignment; if there are any problems, I would greatly appreciate your guidance.
coordinate.txt

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 26, 2024

The RealSense Viewer applies multiple filters including Decimation, Spatial, Disparity and Temporal. Each filter would have to be individually programmed into the script.

There is a Python example of post-processing and alignment code at #11246
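
For orientation, a minimal sketch of chaining those filters in the Viewer's usual order, with the disparity transform wrapping the Spatial and Temporal steps; the settings here are SDK defaults, not values read from the Viewer:

    import pyrealsense2 as rs

    decimation = rs.decimation_filter()
    depth_to_disparity = rs.disparity_transform(True)   # depth -> disparity
    spatial = rs.spatial_filter()
    temporal = rs.temporal_filter()
    disparity_to_depth = rs.disparity_transform(False)  # disparity -> depth

    def filter_depth(depth_frame):
        # Apply each filter in sequence, as the Viewer does by default.
        f = decimation.process(depth_frame)
        f = depth_to_disparity.process(f)
        f = spatial.process(f)
        f = temporal.process(f)
        return disparity_to_depth.process(f)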



@zhanyaoaaaaaa

You said: The RealSense Viewer applies multiple filters including Decimation, Spatial, Disparity and Temporal. Each filter would have to be individually programmed into the script.
However, how do I apply filters when reading from bag files? I have reviewed the information you provided, but it uses Python code to record the video streams. I still want to record with the RealSense Viewer and then read the file from Python code; how should I do that? In addition, the code examples you provided are not detailed enough, and most of the filters are not covered. Do you have more complete Python code?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 2, 2025

Hi @zhanyaoaaaaaa The effects of post-processing filters are not recorded to bag files, so you have to load in the bag file and then apply the filters in real-time.

Applying filters in real-time to bag files uses the same code as when using a live camera, because the main difference between a bag file script and a live-camera script is just that the bag file is being used as the data source instead of the camera.

There is not a single all-in-one Python script that covers all of the filters. If there is a particular filter that you would like to use then I can assist in finding references for it. For example, Python code for the Threshold Filter that sets a minimum and maximum depth range can be found at #8170 (comment)

You may also find the RealSense post-processing filter tutorial for Python at the link below to be helpful.

https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/depth_filters.ipynb
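
As a sketch of what that looks like in practice (a Viewer-recorded bag used as the data source, with filters created and applied exactly as in a live-camera script; the filename and threshold range are placeholder examples):

    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_device_from_file("viewer_recording.bag", repeat_playback=False)
    pipeline.start(config)

    threshold = rs.threshold_filter(min_dist=0.15, max_dist=4.0)  # example range in meters
    temporal = rs.temporal_filter()

    try:
        while True:
            frames = pipeline.wait_for_frames()  # raises when playback ends
            depth = frames.get_depth_frame()
            if depth:
                depth = threshold.process(depth)
                depth = temporal.process(depth)
    except RuntimeError:
        pipeline.stop()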

@zhanyaoaaaaaa

I hope you can give some practical suggestions. After trying to add a temporal filter to the code, I found that no matter how I write it, it always reports an error. How should I add the code for the temporal filter?
Here is the original code.

    try:
        for _ in range(num_frames_to_collect):
            frames = pipeline.wait_for_frames()
            aligned_frames = align.process(frames)
            aligned_depth_frame = aligned_frames.get_depth_frame()
            aligned_color_frame = aligned_frames.get_color_frame()

For example, when I change the code to the following form, an error will be reported: AttributeError: 'pyrealsense2.pyrealsense2.frame' object has no attribute 'get_distance'.

    try:
        for _ in range(num_frames_to_collect):
            frames = pipeline.wait_for_frames()
            aligned_frames = align.process(frames)
            aligned_depth_frame = aligned_frames.get_depth_frame()
            aligned_depth_frame = temporal.process(aligned_depth_frame)
            aligned_color_frame = aligned_frames.get_color_frame()

@weishiguan
Author

weishiguan commented Jan 2, 2025

When extracting the depth frame timestamps, I got the results shown below, which indicate that frames were skipped during recording. Can I avoid the frame skipping through some settings in the Viewer?
[screenshot attached]
Also, I need to confirm one thing with you: the Temporal, Spatial, and Hole-Filling filter settings in the Viewer do not affect whether the values obtained from the bag file are the raw depth values, correct?
My goal is to obtain every corresponding frame pair (RGB and depth). In previous use I have already confirmed that the Decimation Filter under Post-Processing downsamples the depth image, so the returned depth image does not match the configured resolution, and that the Threshold Filter only keeps depth within the configured distance range. What I want to ask is whether there are other options during recording that affect the final depth image. This is crucial for my data collection, and I would greatly appreciate your help!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 2, 2025

@zhanyaoaaaaaa Usually when adding a new filter, before you insert the .process line to apply the filter, you have to first define the filter itself. See the script at #10078 (comment)

temporal = rs.temporal_filter()

aligned_depth_frame = temporal.process(aligned_depth_frame)
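
One possible cause of that AttributeError worth checking (an observation about the pyrealsense2 API rather than something confirmed in this thread): process() returns a generic frame, so casting the result back with as_depth_frame() should restore depth-specific methods such as get_distance(). A minimal sketch:

    temporal = rs.temporal_filter()
    filtered = temporal.process(aligned_depth_frame)  # returns a generic rs.frame
    filtered_depth = filtered.as_depth_frame()        # cast back to a depth frame
    dis = filtered_depth.get_distance(x, y)           # depth methods available again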

@zhanyaoaaaaaa

Can you explain why, when I apply temporal filtering to a depth frame, the type of the frame changes from 'pyrealsense2.pyrealsense2.depth_frame' to 'pyrealsense2.pyrealsense2.frame', causing get_distance to fail with: AttributeError: 'pyrealsense2.pyrealsense2.frame' object has no attribute 'get_distance'?

Your references are different from my question every time, and I can't learn from them. My current problem is an error, not the filter failing to be applied. I have changed your code many times and it still reports an error. After applying the temporal filter, even the type of the depth frame has changed, which is really unbelievable.

The code is as follows:

    try:
        for _ in range(num_frames_to_collect):
            frames = pipeline.wait_for_frames()
            aligned_frames = align.process(frames)
            aligned_depth_frame = aligned_frames.get_depth_frame()
            temporal_filter = rs.temporal_filter(smooth_alpha=0, smooth_delta=20, persistence_control=1)
            aligned_depth_frame = temporal_filter.process(aligned_depth_frame)
            aligned_color_frame = aligned_frames.get_color_frame()

    def get_3d_camera_coordinate(depth_pixel, aligned_depth_frame, depth_intrin):
        x = depth_pixel[0]
        y = depth_pixel[1]
        dis = aligned_depth_frame.get_distance(x, y)
        camera_coordinate = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel, dis)
        camera_xyz = np.round(np.array(camera_coordinate), 6) * 1000
        return camera_xyz

Looking forward to your reply!

@MartyG-RealSense
Collaborator

Hello @weishiguan When a bad frame occurs, the RealSense SDK will go back to the last known good frame and then progress onwards from the frame that it returned to. This can cause frames to be repeated. There are things that you can do in program scripts that will reduce the risk of frames repeating, but not much that you can do about it in the RealSense Viewer tool.

You could try disabling all post-processing filters to see whether the filters are placing a burden on your computer's CPU that is making frame skips more likely to occur, because filters are processed on the CPU and not in the camera hardware. The Spatial filter especially can place a heavy processing burden on the CPU.

The Temporal filter could affect the final depth values that you get. For example, increasing the value of the Alpha Smooth setting of the Temporal filter will make the depth values update more frequently (become more unstable), whilst reducing Alpha Smooth reduces fluctuations in the depth values and provides more stable readings by updating the depth values less frequently.

The depth values resulting from areas that have been filled in by the Hole-Filling filter may be less accurate because the values are estimated rather than being ones that were produced by the camera.



@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 2, 2025

@zhanyaoaaaaaa I do not know the reason why adding a temporal filter is breaking your aligned-data script, unfortunately.

What happens if you test the Python script at #13099 (comment) which uses aligned_frames, post-processing and get_distance, please?

@zhanyaoaaaaaa

zhanyaoaaaaaa commented Jan 3, 2025

I succeeded! I modified the get_distance function code to the following and it worked. Thank you very much for your guidance. Do the two methods of getting the distance give the same result? (The commented-out code is the original code.)

def get_3d_camera_coordinate(depth_pixel, aligned_depth_frame, depth_intrin):
    x = depth_pixel[0]
    y = depth_pixel[1]
    depth_image = np.asanyarray(aligned_depth_frame.get_data())
    dis = depth_image[y, x]
    # dis = aligned_depth_frame.get_distance(x, y)
    camera_coordinate = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel, dis)
    camera_xyz = np.round(np.array(camera_coordinate), 6)
    return camera_xyz

Additionally, I have another question: How should the parameters of the temporal filter be set for tracking the motion trajectory of moving objects?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 3, 2025

If the depth pixel value is obtained with dis = depth_image[y, x] instead of get_distance(), and you are not multiplying the raw pixel value by 0.001 to convert it to meters, then the output should be approximately the same as the '3204' that you previously obtained when using get_distance() and working in the mm unit of measurement.

In regard to the temporal filter, I would recommend setting Filter Smooth Alpha to at least '0.1' instead of '0'. The normal default value if the Alpha is not customized is '0.4'. For tracking fast motion, '0.4' is likely to be a good setting so that the depth image updates frequently. The Smooth Delta parameter can usually be left on its default value, which is '20'.
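
A small sketch of the two readouts side by side, plus a temporal filter created with the settings suggested above (the pixel coordinates are placeholders):

    import numpy as np
    import pyrealsense2 as rs

    x, y = 320, 240  # placeholder pixel
    depth_image = np.asanyarray(aligned_depth_frame.get_data())

    raw = depth_image[y, x]                          # raw depth units, e.g. 3204 (mm when the scale is 0.001)
    meters = aligned_depth_frame.get_distance(x, y)  # depth scale already applied, e.g. 3.204

    # Temporal filter with the settings suggested above: alpha 0.4, delta 20.
    temporal = rs.temporal_filter(smooth_alpha=0.4, smooth_delta=20.0, persistence_control=3)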

@zhanyaoaaaaaa

zhanyaoaaaaaa commented Jan 3, 2025

For tracking slow-moving objects or objects with a small amplitude of motion, should I set the smoothing Alpha value to '0.1'? For slow-moving objects, frequent updates of the depth image can introduce unnecessary error. For example, a stationary object's depth value should remain largely unchanged over time, but with the smoothing Alpha set to '0.4' its depth value shows significant changes over time, with fluctuations of perhaps 5 to 10 mm.

In addition, how should the persistence_control be set for fast-moving objects and slow-moving objects, respectively?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 3, 2025

If the object is slow-moving then 0.1 should be okay.

The choice of persistency index number for the temporal filter should not depend on the speed of the object. Instead, it sets how careful the filter is about replacing missing pixels with the last valid value, with values ranging from 0 to 8, based on the history of previous frames received. The default is 3.

As the set value is increased, the filter should become less strict about the validity of a pixel.

0 - Disabled - Persistency filter is not activated and no hole filling occurs.
1 - Valid in 8/8 - Persistency activated if the pixel was valid in 8 out of the last 8 frames
2 - Valid in 2/last 3 - Activated if the pixel was valid in two out of the last 3 frames
3 - Valid in 2/last 4 - Activated if the pixel was valid in two out of the last 4 frames
4 - Valid in 2/8 - Activated if the pixel was valid in two out of the last 8 frames
5 - Valid in 1/last 2 - Activated if the pixel was valid in one of the last two frames
6 - Valid in 1/last 5 - Activated if the pixel was valid in one out of the last 5 frames
7 - Valid in 1/last 8 - Activated if the pixel was valid in one out of the last 8 frames
8 - Persist Indefinitely - Persistency will be imposed regardless of the stored history (most aggressive filtering)
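
For reference, a sketch of setting these values on an existing filter through the options interface; the persistency index is exposed as the holes_fill option on the temporal filter (option names here are from the SDK's public options list):

    temporal = rs.temporal_filter()
    temporal.set_option(rs.option.filter_smooth_alpha, 0.4)
    temporal.set_option(rs.option.filter_smooth_delta, 20)
    temporal.set_option(rs.option.holes_fill, 3)  # 3 = "Valid in 2/last 4", the default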

@zhanyaoaaaaaa

zhanyaoaaaaaa commented Jan 3, 2025

I got it!
Your guidance is meaningful to my study.
Thank you very much for your guidance!

@MartyG-RealSense
Collaborator

You are very welcome. :)

@MartyG-RealSense
Collaborator

Hi @weishiguan Do you require further assistance with this case, please? Thanks!

