Camera Location Information

#3
by yaraalaa0 - opened

Hello,
Could you please tell me where I can find the files containing the camera GPS location and rotation information for every frame?
Thanks,

https://huggingface.co/datasets/Meehai/dronescapes/blob/main/raw_data/camera_matrices.tar.gz

They are here, though as far as I remember, these are only the camera rotation matrices. I'm not sure at the moment about the GPS location; I will come back.

Okay, I have looked over the raw data and found the archive that I have uploaded at:

https://huggingface.co/datasets/Meehai/dronescapes/blob/main/raw_data/raw_camera_info.tar.gz

I included some info in the README as well about this.

Each file there is an npz archive (a dictionary of arrays) with the following structure:

>>> a = dict(np.load("atanasie_DJI_0652_full.npz"))
>>> a.keys()
dict_keys(['linear_velocity_c', 'angular_velocity_c', 'linear_velocity_gt_c', 'angular_velocity_gt_c', 'linear_velocity_gt_R_c', 'angular_velocity_gt_R_c', 'linear_velocity_gt_d', 'angular_velocity_gt_d', 'position_w', 'position_ecef', 'position_w_full', 'lat_long_height', 'orientation_rpy_rad', 'orientation_rpy_rad_gt', 'orientation_rpy_rad_gt_lw', 'angular_velocity_gps_w', 'linear_velocity_gps_w', 'linear_velocity_gt_w', 'angular_velocity_gt_w', 'video_t', 'linear_velocity_camera_gt_d_rw', 'ang_velocity_camera_gt_d_rw', 'linear_velocity_camera_gt_d_rw_avgv', 'ang_velocity_camera_gt_d_rw_avgv', 'linear_velocity_camera_gt_d_rspl', 'ang_velocity_camera_gt_d_rspl'])


>>> a["lat_long_height"].shape
(9021, 3)

>>> a["lat_long_height"]
array([[45.31130865, 29.29587291, 29.89813934],
       [45.31131254, 29.29587061, 29.89852169],
       [45.31131643, 29.29586831, 29.89724953],
       ...,
       [45.30972288, 29.29537316, 30.06230333],
       [45.30972409, 29.29537258, 30.06323121],
       [45.30972531, 29.295372  , 30.06596606]])
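If you need the GPS track in metric units, one simple option is an equirectangular approximation around the first frame. This is a hypothetical helper (not part of the dataset's tooling), shown here on the first three rows of `a["lat_long_height"]` printed above:

```python
import numpy as np

EARTH_R = 6371000.0  # mean Earth radius in meters

def lat_long_to_local_xy(lat_long_height):
    """Convert a (N, 3) lat/long/height track into local metric offsets
    (east, north, height) relative to the first frame, using an
    equirectangular approximation (fine for small areas like one scene)."""
    llh = np.asarray(lat_long_height, dtype=np.float64)
    lat0, lon0 = np.radians(llh[0, 0]), np.radians(llh[0, 1])
    lat, lon = np.radians(llh[:, 0]), np.radians(llh[:, 1])
    x = (lon - lon0) * np.cos(lat0) * EARTH_R  # east offset, meters
    y = (lat - lat0) * EARTH_R                 # north offset, meters
    return np.stack([x, y, llh[:, 2]], axis=1)

# First three rows of a["lat_long_height"] from the atanasie scene
track = np.array([[45.31130865, 29.29587291, 29.89813934],
                  [45.31131254, 29.29587061, 29.89852169],
                  [45.31131643, 29.29586831, 29.89724953]])
local = lat_long_to_local_xy(track)
print(local[0])  # origin frame: zero x/y offset, height unchanged
```

Consecutive rows come out a fraction of a meter apart, which is plausible for a drone at 30 fps.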

I couldn't find the raw logs for the norway scene, and as far as I remember, that video came from a third party which never provided us with more than the MP4 file.

Thanks a lot for your help

Hello @Meehai ,
I have investigated the files "raw_camera_info" and "camera_matrices", and they contain useful information. However, I would appreciate it if you could clarify the following.
In my project it is required to know:

  • the position of the camera with respect to the world frame --> is this contained in 'position_w_full'?
  • the orientation of the camera with respect to the world frame --> it is unclear whether to use 'orientation_rpy_rad_gt', 'orientation_rpy_rad_gt_lw', or the camera rotation matrix in the file "camera_matrices".

Also, if possible, I would like to know the camera axes convention (meaning how the xyz axes of the camera are aligned)
These are example pictures:
xyz_axis_cam_.png
xyz_axis_cam2.png

Thanks in advance

Sadly, you will have to do a bunch of reverse engineering yourself here, perhaps correlating the npz files with the actual RGB images.

I mostly worked on the videos/ML/multi-task learning part of the project and didn't really work much with the raw sensors & geometric side of the data, nor do I have much knowledge in this domain.

There's a bunch of things that I can tell you (but take them with a grain of salt!):

  1. About the camera matrices archive
  • the rgb images were also loaded in a SfM tool and we extracted a 3D reconstruction of each scene. This is where "depth_sfm_manual202204" and (world) "normals_sfm_manual202204" representations come from (in data/ folder)
  • we used the camera rotation matrices to project from "normals_sfm" to "camera_normals_sfm" (see this: https://huggingface.co/datasets/Meehai/dronescapes#121-convert-camera-normals-to-world-normals). So my guess is that these camera matrices are the internal camera matrices of the SfM tool.
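The world-normals to camera-normals conversion mentioned above can be sketched as rotating each per-pixel normal by the frame's camera rotation matrix. This is my reconstruction under the assumption that R maps world coordinates into the camera frame, not the dataset's actual code:

```python
import numpy as np

def world_to_camera_normals(normals_w, R):
    """Rotate (H, W, 3) world-space unit normals into the camera frame
    using a (3, 3) rotation matrix R (assumed world -> camera)."""
    n = normals_w.reshape(-1, 3) @ R.T                  # rotate every normal
    n = n / np.linalg.norm(n, axis=1, keepdims=True)    # re-normalize
    return n.reshape(normals_w.shape)

# Toy check: a +90 degree yaw maps a world +x normal onto camera +y
R_yaw90 = np.array([[0.0, -1.0, 0.0],
                    [1.0,  0.0, 0.0],
                    [0.0,  0.0, 1.0]])
normals = np.array([[[1.0, 0.0, 0.0]]])  # (H=1, W=1, 3)
cam_normals = world_to_camera_normals(normals, R_yaw90)
```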

So you will need to be a bit careful with these keys as some may refer to the SfM world model, while others may refer to the actual position/orientation of the UAV as it flew in the real world. For example, there is a bunch of code that I found on the original server where the data was processed:

# note: `cam`, `Projection` and `tr` (torch) are project-internal modules from
# the original processing server; the body is reproduced as found, elisions included.
import os
from glob import glob

import numpy as np
import torch as tr

def depth_to_cloud(path_depth, path_pose, path_out, depth_norm=1):
    npzs = sorted(glob(os.path.join(path_depth, '*.npz')))
    ny, nx = np.load(npzs[0])['arr_0'].shape[:2]

    intrinsics = cam.fov_diag_to_intrinsic(71.35, (3840, 2160), (nx, ny))
    intrinsics = intrinsics.astype(np.float32)

    data = np.load(path_pose)
    poses = np.concatenate((data['position_w_full'], data['orientation_rpy_rad_gt_lw']), axis=1)    # velocities
    # poses = np.concatenate((data['position'], data['orientation']), axis=1)   # frame_data
    poses = poses.astype(np.float32)

    projection = Projection((ny, nx), tr.from_numpy(intrinsics))
    ...
    depth = next(iter(np.load(path).values())).astype(np.float32)
    cloud = projection.depth_to_cloud(depth)
    cloud = projection.apply_pose(cloud, poses[frame], invert_pose=False)
    ...

So for sure those entries were used to do 2D -> 3D projection via depth.
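Since `cam.fov_diag_to_intrinsic` and `Projection.depth_to_cloud` are project-internal and not public, here is a self-contained sketch of what they most likely do (my reconstruction, not the original code): build pinhole intrinsics from the diagonal field of view, then back-project a depth map into camera-frame 3D points.

```python
import numpy as np

def fov_diag_to_intrinsic(fov_diag_deg, sensor_res, out_res):
    """Pinhole intrinsics from a diagonal field of view (assumed behavior
    of the project's cam.fov_diag_to_intrinsic helper)."""
    w, h = sensor_res
    diag = np.hypot(w, h)
    f = (diag / 2.0) / np.tan(np.radians(fov_diag_deg) / 2.0)
    sx, sy = out_res[0] / w, out_res[1] / h  # rescale to output resolution
    return np.array([[f * sx, 0.0,    out_res[0] / 2.0],
                     [0.0,    f * sy, out_res[1] / 2.0],
                     [0.0,    0.0,    1.0]])

def depth_to_cloud(depth, K):
    """Back-project an (H, W) depth map into camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T     # rays at unit depth (z == 1)
    return rays * depth.reshape(-1, 1)  # scale each ray by its depth

# Same parameters as the server script above, tiny 2x2 depth map
K = fov_diag_to_intrinsic(71.35, (3840, 2160), (3840, 2160))
cloud = depth_to_cloud(np.full((2, 2), 10.0), K)
```

In the original script these camera-frame points would then be transformed into the world frame with `poses[frame]`, which is why the pose keys matter.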

  2. "position_w_full" vs "lat_long_height"
>>> a["position_w_full"]
array([[164.74842124, 262.08083751,  29.89813934],
       [164.56761922, 262.51281972,  29.89852169],
       [164.38701837, 262.94522609,  29.89724953],
       ...,
       [125.55643353,  85.8415207 ,  30.06230333],
       [125.51134745,  85.97569627,  30.06323121],
       [125.46604517,  86.1112753 ,  30.06596606]])
>>> a["lat_long_height"]
array([[45.31130865, 29.29587291, 29.89813934],
       [45.31131254, 29.29587061, 29.89852169],
       [45.31131643, 29.29586831, 29.89724953],
       ...,
       [45.30972288, 29.29537316, 30.06230333],
       [45.30972409, 29.29537258, 30.06323121],
       [45.30972531, 29.295372  , 30.06596606]])

So comparing the GPS (lat, long, height) with the other one also reveals some interesting information. I think (but maybe I'm wrong) that "position_w_full" is actually in the world coordinates of the SfM model (in meters), while the height is kept fixed (from GPS). So maybe ignore "position_w_full" and "orientation_rpy_rad_gt_lw" completely, as they are w.r.t. the SfM model?

I think all the keys that contain 'gt' refer to the SfM world model, so perhaps ignore them, only look into the other keys, and try to correlate them with the RGB images.

pprint({k: v[0:1].round(2).tolist()+v[500:501].round(2).tolist() for k, v in a.items() if k.find("gt") == -1})
{'angular_velocity_c': [[0.0, 0.01, 0.0], [0.0, -0.19, -0.07]],
 'angular_velocity_gps_w': [[0.0, 0.0, -0.01], [0.0, 0.0, 0.2]],
 'lat_long_height': [[45.31, 29.3, 29.9], [45.31, 29.29, 29.96]],
 'linear_velocity_c': [[-0.2, -4.8, 13.19], [5.6, -3.81, 10.48]],
 'linear_velocity_gps_w': [[-5.42, 12.94, 0.0], [-5.74, -11.08, 0.0]],
 'orientation_rpy_rad': [[0.0, 0.0, 1.95], [0.0, 0.0, -1.58]],
 'position_ecef': [[3918436.8, 2198553.06, 4511767.02],
                   [3918436.36, 2198400.49, 4511841.33]],
 'position_w': [[164.75, 262.08, 0.0], [31.9, 367.68, 0.0]],
 'position_w_full': [[164.75, 262.08, 29.9], [31.9, 367.68, 29.96]],
 'video_t': [0.0, 16.68]}

So it looks like position_w_full is position_w with its third column replaced by the height from lat_long_height (i.e. position_w_full[:, 2] == lat_long_height[:, 2]). Now, whether the x/y values represent the distance from the origin of the SfM scene or not, I don't know.
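This relation between the three position keys can be checked directly on the sample rows printed above (frames 0 and 500):

```python
import numpy as np

# Sample rows from the pprint output above: frames 0 and 500
position_w      = np.array([[164.75, 262.08, 0.0],  [31.9, 367.68, 0.0]])
position_w_full = np.array([[164.75, 262.08, 29.9], [31.9, 367.68, 29.96]])
lat_long_height = np.array([[45.31, 29.3, 29.9],    [45.31, 29.29, 29.96]])

# position_w_full shares x/y with position_w ...
assert np.allclose(position_w_full[:, :2], position_w[:, :2])
# ... and takes its z (height) from lat_long_height
assert np.allclose(position_w_full[:, 2], lat_long_height[:, 2])
```

The same check on the full arrays (not just these two rows) would confirm whether it holds for the whole flight.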

I will look more into the server to find how/where these keys were used, to perhaps help you more with your questions, but maybe these are good pointers for you to dig into as well.

I will come back; I may have found the original raw logs of the flights (DJI CSV format) and a very verbose script that produced the npz files I uploaded earlier.

Thank you for the information
