Description
I’m trying to reproduce image B from image A using the `genwarp_inference.ipynb` example, but the generated view never lands exactly on the target. I’ve attempted several ways of deriving the three camera parameters
```python
azi_deg  # horizontal (+ = right)
ele_deg  # vertical   (+ = up)
radius   # extra distance from the scene centre
```
yet the synthesis is always a few (or many) degrees off.
Could you clarify the canonical procedure for computing these values so that the camera that generates image B is reproduced exactly?
Below is what I have done so far:
Using NeRF-synthetic JSON – I take the two 4×4 `cam_to_world` matrices, subtract the camera centres, rotate the offset into A’s view space, and convert the result to spherical angles (sketched below). The angles look plausible, but the generated image is still misaligned.
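For reference, here is a minimal sketch of the derivation I am currently using. The OpenGL-style axis convention (-z forward, +y up) is my assumption about what the notebook expects, and `load_c2w` / `relative_spherical` are just illustrative helper names:

```python
import json
import numpy as np

def load_c2w(transforms_path, frame_idx):
    """Read one 4x4 cam_to_world matrix from a NeRF-synthetic transforms_*.json."""
    with open(transforms_path) as f:
        meta = json.load(f)
    return np.asarray(meta["frames"][frame_idx]["transform_matrix"], dtype=np.float64)

def relative_spherical(c2w_a, c2w_b):
    """Express camera B's centre in camera A's view frame as (azi_deg, ele_deg, dist)."""
    centre_a, centre_b = c2w_a[:3, 3], c2w_b[:3, 3]
    # Rotate the world-space offset into A's camera frame (R_c2w^T = R_w2c).
    offset = c2w_a[:3, :3].T @ (centre_b - centre_a)
    x, y, z = offset
    dist = np.linalg.norm(offset)
    # Assumed OpenGL-style frame: -z forward, +y up, +x right.
    azi_deg = np.degrees(np.arctan2(x, -z))    # + = right
    ele_deg = np.degrees(np.arcsin(y / dist))  # + = up
    return azi_deg, ele_deg, dist
```

One thing I am unsure about: `dist` here is the inter-camera distance, whereas `radius` is documented as extra distance from the scene centre, so I may be conflating two different quantities.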
Using COLMAP output – same idea with `images.bin` and `cameras.bin`, normalising the translation by focal length, but I get a similar misalignment (sketch below).
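And the COLMAP side. Reading the binary model is shown with pycolmap purely for brevity (the `cam_from_world` accessor is from recent pycolmap versions); the pose inversion and the axis flip are the parts I suspect, since COLMAP stores world-to-camera poses in a +y-down, +z-forward frame:

```python
import numpy as np
import pycolmap  # assumption: any images.bin reader works; pycolmap used for brevity

def colmap_c2w(model_dir, image_name):
    """cam_to_world for one image, flipped to OpenGL axes (-z forward, +y up)."""
    rec = pycolmap.Reconstruction(model_dir)
    image = rec.find_image_with_name(image_name)
    pose = image.cam_from_world            # world-to-camera pose (recent pycolmap)
    R, t = pose.rotation.matrix(), pose.translation
    c2w = np.eye(4)
    c2w[:3, :3] = R.T                      # invert the rotation
    c2w[:3, 3] = -R.T @ t                  # camera centre in world coordinates
    c2w[:3, 1:3] *= -1                     # COLMAP (+y down, +z fwd) -> OpenGL (+y up, -z fwd)
    return c2w
```

Two such matrices can then go straight into `relative_spherical` from the sketch above.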
Would it be possible to expose an `R|t` + `K` interface?
I noticed that `forward_warper` already has a branch that accepts explicit `R`, `t`, and `K`, but it is not wired into the current pipeline. Having an official pathway that takes exact extrinsics/intrinsics would make the process unambiguous, especially since the three-parameter form (azi, ele, radius) is hard to infer robustly.
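For concreteness, the quantities such an interface would consume are unambiguous to compute from any calibrated pair; a sketch (helper names are mine, not the repo’s):

```python
import numpy as np

def relative_pose(c2w_a, c2w_b):
    """R, t mapping points from camera A's frame into camera B's frame."""
    a_to_b = np.linalg.inv(c2w_b) @ c2w_a  # 4x4 chain: cam A -> world -> cam B
    return a_to_b[:3, :3], a_to_b[:3, 3]

def pinhole_K(fx, fy, cx, cy):
    """3x3 intrinsics, e.g. from a COLMAP PINHOLE camera's parameters."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])
```

Passing these directly would sidestep the spherical conversion entirely.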
Are there plans to provide an example notebook (or update the current one) that takes `R|t` and `K` directly?