Hi sarlin:
I made a small test sample following maploc/data/mapillary/prepare.py and left all the key parameters at their defaults:
crop_size_meters: 64
z_max: 32
x_max: 32
pixel_per_meter: 2
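For reference, the same defaults written out as an OmegaConf config (the repo uses OmegaConf-style configs, as far as I can tell). This is only a sketch; the exact config group and key names in maploc may differ.

```python
# Minimal sketch: the four defaults above as an OmegaConf config. Key
# names mirror this issue, not necessarily the repo's actual config tree.
from omegaconf import OmegaConf

cfg = OmegaConf.create({
    "crop_size_meters": 64,   # side length of the queried map tile, meters
    "z_max": 32,              # max forward (depth) extent of the BEV, meters
    "x_max": 32,              # max lateral extent of the BEV, meters
    "pixel_per_meter": 2,     # raster resolution of the map and BEV
})
print(OmegaConf.to_yaml(cfg))
```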
It's a single test: I took 4 photos at the same spot, one by one, as the following map indicates:
This is really where I stood and the bearing I faced. I then used OrienterNet to get the results (in the same order):
The red arrow is a dummy position/direction: next to a tall building the GPS accuracy is really bad, so I manually filled in random data in image_infors/xxx.json.
I uploaded my test files as ON_test and shared access with you; for privacy reasons I don't want to make them public.
Just type:
python test_single.py
If you have time, please give it a try.
The results are not ideal. I tried many modifications of the parameters, but they all failed, such as (a sketch of this kind of sweep follows the list):
enlarging crop_size_meters
increasing/decreasing z_max/x_max/ppm
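To be concrete, this is roughly the kind of sweep I mean. It's a hypothetical sketch: run_single_test is a stand-in for a wrapper around test_single.py that returns the localization error; it is not a real function in the repo.

```python
# Hypothetical sweep over the knobs listed above. run_single_test is a
# stand-in (not in the repo) for a wrapper around test_single.py that
# returns the localization error in meters for one parameter setting.
import itertools

grid = {
    "crop_size_meters": [64, 128, 256],
    "z_max": [16, 32],
    "x_max": [16, 32],
    "pixel_per_meter": [1, 2],
}
for values in itertools.product(*grid.values()):
    params = dict(zip(grid, values))
    # err = run_single_test(**params)  # stand-in, see note above
    print(params)
```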
I think the problem may be the depth estimation: it's easy to infer relative depth from a monocular photo, but not absolute depth. For example, in the first photo there is a building about 200 meters away; it appears in the photo and gets placed in the BEV, yet it is far beyond the z_max range (32 meters), so it's hard for the BEV to match the map at real scale. I tried to increase z_max, but my GPU only has 40 GB of RAM, and after raising z_max to 36 it crashed with an OOM error.
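A rough back-of-envelope check of the OOM, under my own assumption (not from the paper or repo) that peak memory grows roughly in proportion to the number of BEV grid cells:

```python
# Back-of-envelope: why bumping z_max from 32 to 36 can tip a 40 GB GPU
# into OOM. Assumption (mine): peak memory scales roughly with the number
# of BEV cells, i.e. depth planes x lateral cells.
def bev_cells(z_max, x_max, ppm=2):
    return (z_max * ppm) * (2 * x_max * ppm)

print(bev_cells(36, 32) / bev_cells(32, 32))  # 1.125 -> ~45 GB on a 40 GB card
```

So even this small bump plausibly overshoots the card, and covering a 200 m building at full resolution is far out of reach.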
I know the current BEV-to-map registration is an FFT-accelerated template matching, where rotation is sampled at 64/256 angles. Would it be possible to add a scale axis to the matching, just like rotation (of course the resolution would have to be downsampled to fit in GPU RAM)?
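To make the suggestion concrete, here is a minimal NumPy/SciPy sketch of what I mean by a scale axis: resample the BEV template once per candidate scale, then reuse the same rotate-and-correlate loop. This is only an illustration of the idea with single-channel features, not the repo's actual matcher.

```python
# Illustration only: exhaustive (x, y, rotation, scale) template matching
# with FFT-based correlation. Single-channel features for simplicity; the
# real model matches multi-channel neural features.
import numpy as np
from scipy.ndimage import rotate, zoom
from scipy.signal import fftconvolve

def match_rot_scale(map_feat, bev_feat, n_rot=64, scales=(0.8, 1.0, 1.25)):
    """Return the best (score, y, x, angle_deg, scale) over all poses."""
    best = (-np.inf, 0, 0, 0.0, 1.0)
    for s in scales:
        # Resample the BEV template to simulate a different metric scale.
        tpl_s = zoom(bev_feat, s, order=1)
        for k in range(n_rot):
            tpl = rotate(tpl_s, 360.0 * k / n_rot, reshape=True, order=1)
            # Cross-correlation == convolution with the template flipped in
            # both axes; fftconvolve scores all translations at once.
            scores = fftconvolve(map_feat, tpl[::-1, ::-1], mode="valid")
            y, x = np.unravel_index(np.argmax(scores), scores.shape)
            if scores[y, x] > best[0]:
                best = (scores[y, x], y, x, 360.0 * k / n_rot, s)
    return best
```

The cost grows only linearly in the number of scales, so a coarse 3-5 step scale axis on downsampled features might still fit in memory.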
Thanks!
BTW, why don't you use SuperGlue in this project?