Hi Clement,
Thanks for translating the tf repo to a pytorch one!
I have a small question about the 6-DOF pose used in

```python
def inverse_warp(img, depth, pose, intrinsics, rotation_mode='euler', padding_mode='zeros'):
```
My own dataset only contains intrinsics and extrinsics matrices. I wonder if it is possible to replace this 6-DOF pose with a series of matrix multiplications.
I already have a rough idea that the pose can be expressed in the form of:
```python
extrinsics_src @ extrinsics_tgt.inverse()
```
where extrinsics_src is the extrinsic matrix of the source image and extrinsics_tgt is the extrinsic matrix of the target image. So the whole warping process can be written (roughly) as:
```python
grid_sample(source_image, intrinsics @ extrinsics_src @ extrinsics_tgt.inverse() @ intrinsics_inverse() @ target_depth)
```
assuming all images are taken by the same camera and matrices are in homogeneous coordinates.
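To make the idea concrete, here is a minimal sketch of that pipeline: back-project target pixels with the depth and inverse intrinsics, apply the relative transform `extrinsics_src @ extrinsics_tgt.inverse()`, re-project with the intrinsics, and normalize for `grid_sample`. The function names (`relative_pose`, `warp_grid`) and the assumption of 4x4 homogeneous extrinsics and a 3x3 intrinsics matrix are mine, not from the repo:

```python
import torch
import torch.nn.functional as F

def relative_pose(extrinsics_src, extrinsics_tgt):
    # T_src<-tgt = T_src<-world @ T_world<-tgt, mapping points from the
    # target camera frame into the source camera frame (4x4 homogeneous).
    return extrinsics_src @ torch.linalg.inv(extrinsics_tgt)

def warp_grid(target_depth, intrinsics, extrinsics_src, extrinsics_tgt):
    # Build the sampling grid that grid_sample needs, from a target depth
    # map (H, W), a 3x3 intrinsics matrix shared by both views, and 4x4
    # homogeneous extrinsics for each view.
    h, w = target_depth.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32),
                            indexing='ij')
    # Homogeneous pixel coordinates, shape (3, H*W)
    pix = torch.stack([xs.reshape(-1), ys.reshape(-1), torch.ones(h * w)])
    # Back-project: depth * K^-1 @ pixel -> 3D points in the target frame
    cam_pts = (torch.linalg.inv(intrinsics) @ pix) * target_depth.reshape(1, -1)
    cam_pts_h = torch.cat([cam_pts, torch.ones(1, h * w)])  # (4, H*W)
    # Move points into the source camera frame, drop the homogeneous row
    src_pts = (relative_pose(extrinsics_src, extrinsics_tgt) @ cam_pts_h)[:3]
    # Re-project with the intrinsics and dehomogenize
    proj = intrinsics @ src_pts
    px, py = proj[0] / proj[2], proj[1] / proj[2]
    # grid_sample expects coordinates normalized to [-1, 1]
    gx = 2 * px / (w - 1) - 1
    gy = 2 * py / (h - 1) - 1
    return torch.stack([gx, gy], dim=-1).reshape(1, h, w, 2)

# Usage: warp the source image into the target view
# grid = warp_grid(target_depth, K, E_src, E_tgt)
# warped = F.grid_sample(source_image, grid, padding_mode='zeros',
#                        align_corners=True)
```

With identical extrinsics for both views the grid is the identity mapping, which is a quick sanity check. Note the depth multiplies the back-projected rays, so it is not literally one chain of matrix products as written above, but the structure `K @ E_src @ E_tgt^-1 @ K^-1` is exactly the one in your rough formula.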
Really appreciate your input here!