
Generally correct rendered edge map but wrong edges extracted #8

peteryuan123 opened this issue Oct 1, 2024 · 4 comments

@peteryuan123

Hi! Thanks for the amazing work. I ran into some problems when trying to train EMAP on a custom dataset.
I provided 249 color images, their corresponding edge maps, and a meta_data.json file. I assume the "worldtogt" transformation in the meta file is only used for evaluation, so I set it to identity. I also copied all the settings from DTU.conf for training.
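For illustration, a minimal meta_data.json of this shape might look like the sketch below. Only "worldtogt" is actually named in this thread; every other key and value here is a placeholder and may differ from what EMAP really expects:

```json
{
  "worldtogt": [[1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]],
  "frames": [
    {
      "rgb_path": "images/000000.png",
      "camtoworld": [[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 1, 2],
                     [0, 0, 0, 1]],
      "intrinsics": [[500, 0, 320],
                     [0, 500, 240],
                     [0, 0, 1]]
    }
  ]
}
```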

After training, the rendered edge maps look fairly good.
The rendered edge maps in the validation process, after 195000 and 200000 iters.
The rendered depth maps in the validation process, after 195000 and 200000 iters.
The rendered normals in the validation process, after 195000 and 200000 iters.

The last log states,

iter:200000 loss = 0.0443 edge_loss = 0.0435 eki_loss = 0.0797 eki_ns_loss = 0.2072 
iter:200000 variance = 0.009402 beta = 0.004565 gamma = 0.0396 lr_geo=0.00000500 lr=0.00002500 
psnr = 13.6153 weight_sum = 0.2400 weight_sum_fg_bg = 0.2400 udf_min = 0.04729267 udf_mean = 0.6288 igr_ns_weight = 0.0000 igr_weight = 0.0100 

However, edge extraction produces no points at all:
before visible checking: 0 after visible checking: 0

Then I tried playing with the parameters in the config file and found that increasing udf_threshold increases the number of extracted points. I raised udf_threshold from 0.015 to 0.05, but the extracted points are a mess.
(screenshot of the extracted points attached)
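The zero-point result looks consistent with the log above: udf_min = 0.047 already exceeds udf_threshold = 0.015, so no sample can pass the cut. As a toy illustration of that thresholding step (this is not EMAP's actual extraction code), it works roughly like:

```python
import numpy as np

def extract_edge_points(points, udf_values, udf_threshold=0.015):
    """Keep sampled points whose predicted unsigned distance to the
    edge set falls below the threshold (toy version of the idea)."""
    mask = udf_values < udf_threshold
    return points[mask]

# If the smallest UDF value anywhere in the field is 0.047, a 0.015
# threshold selects nothing, matching "before visible checking: 0".
```

So raising the threshold does recover points, but if the UDF field itself never gets close to zero near the true edges, the recovered points will be noisy, which may be what the screenshot shows.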

Could you help me solve this problem, or point me in a direction to explore? Thank you very much!

@Endvour

Endvour commented Oct 30, 2024

Hi, can you tell me how you made your custom dataset? @peteryuan123

@peteryuan123
Author

> Hi, can you tell me how you made your custom dataset? @peteryuan123

I just prepared the color images and the edge maps extracted with pidiNet, and provided the meta_data.json containing poses and camera matrices. Then I organized the data into the structure mentioned in the README.

@rayeeli
Collaborator

rayeeli commented Oct 31, 2024

Hi @peteryuan123, sorry for the late reply! For custom datasets, please ensure the near and far parameters in the config file align with your data, setting far beyond your inputs' farthest depth value if the camera intrinsics and extrinsics haven’t been adjusted. Additionally, try adjusting igr_weight in the config to achieve a more stable UDF field if your dataset setup is correct.
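The near/far advice above can be sketched as a small helper. This is illustrative, not part of the EMAP codebase; it assumes you have world-to-camera extrinsics and some known scene points (e.g. a sparse SfM cloud) and derives bounds so that far lands beyond the deepest visible point:

```python
import numpy as np

def near_far_from_points(points_w, w2c_list, margin=1.1):
    """Suggest near/far bounds from known scene points.

    points_w : (N, 3) world-space points covering the scene.
    w2c_list : list of (4, 4) world-to-camera extrinsic matrices.
    margin   : safety factor so `far` lands beyond the deepest point.
    """
    pts_h = np.concatenate([points_w, np.ones((len(points_w), 1))], axis=1)
    depths = []
    for w2c in w2c_list:
        cam = pts_h @ w2c.T          # transform points into camera frame
        depths.append(cam[:, 2])     # z is depth along the optical axis
    depths = np.concatenate(depths)
    depths = depths[depths > 0]      # keep points in front of the camera
    return float(depths.min() / margin), float(depths.max() * margin)
```

Plugging the resulting values into the config's near/far (instead of the DTU defaults) should keep the ray samples inside the region your scene actually occupies.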

@peteryuan123
Copy link
Author

Thanks for your kind reply! I will try it, thank you!
