
Commit 770ce65

Publish pyopf (#11)

Authored by: tomashynek and pix4d_concourse_developers
Co-authored-by: pix4d_concourse_developers <[email protected]>
Parent: 62f1a95

File tree: 15 files changed (+1398, -840 lines)

CHANGELOG.md

Lines changed: 7 additions & 0 deletions

````diff
@@ -4,6 +4,13 @@ All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
 
+## 1.3.0
+
+### Added
+
+- Parameter to opf2nerf to produce Nerfstudio-ready outputs
+- Example script to compute the reprojection error of input GCPs in calibrated cameras
+
 ## 1.2.0
 
 ### Added
````

README.md

Lines changed: 36 additions & 11 deletions

````diff
@@ -5,14 +5,17 @@ For more information about what OPF is and its full specification, please refer
 
 ### Installation
 
-The tool can be installed using `pip` with the following command:
+The library can be installed using `pip` with the following command:
 
 ```shell
 pip install pyopf
 ```
 
-This command installs the `pyopf` package and tools.
+The additional command line tool dependencies are available through a package extra, and can be installed like so:
 
+```shell
+pip install pyopf[tools]
+```
 
 ### Structure of the PyOPF repository
 
````

````diff
@@ -30,7 +33,7 @@ from pyopf.uid64 import Uid64
 project_path = "spec/examples/project.opf"
 
 # We are going to search for the calibrated position of the camera with this ID
-camera_id = Uid64(hex = "0x57282923")
+camera_id = Uid64(hex = "0x2D1A1DE")
 
 # Load the json data and resolve the project, i.e. load the project items as named attributes.
 project = load(project_path)
````
````diff
@@ -45,8 +48,11 @@ if project.calibration is None:
 calibrated_camera = [camera for camera in project.calibration.calibrated_cameras.cameras if camera.id == camera_id]
 
 # Print the pose of the camera.
-print("The camera {} is calibrated at:".format(camera_id), calibrated_camera[0].position)
-print("with orientation", calibrated_camera[0].orientation_deg)
+if calibrated_camera:
+    print("The camera {} is calibrated at:".format(camera_id), calibrated_camera[0].position)
+    print("with orientation", calibrated_camera[0].orientation_deg)
+else:
+    print("There is no camera with id: {} in the project".format(camera_id))
 ```
 
 The custom attributes are stored per node in the `custom_attributes` dictionary. This dictionary might be `None` if
````
````diff
@@ -128,7 +134,7 @@ The conversion can be done by calling
 
 #### Convert to NeRF
 
-This tool converts OPF projects to NeRF. NeRF consists of transforms file(s), which contain information about distortion, intrinsic and extrinsinc parameters of cameras. Usually it is split in `transforms_train.json` and `transforms_test.json` files, but can sometimes also have only the train one. The split can be controlled with the parameter `--train-frac`, for example `--train-frac 0.7` will randomly assign 70% of images for training, and the remaining 30% for testing. If this parameter is unspecified or set to 1.0, only the `transforms_train.json` will be generated. Sometimes an additional `transforms_val.json` is required. It is to evaluate from new points of view, but the generation of new point of views is not managed by this tool, so it can just be a copy of `transforms_test.json` renamed.
+This tool converts OPF projects to NeRF. A NeRF dataset consists of transforms file(s), which contain the distortion, intrinsic and extrinsic parameters of the cameras. It is usually split into `transforms_train.json` and `transforms_test.json` files, but sometimes only the train file is present. The split can be controlled with the `--train-frac` parameter; for example, `--train-frac 0.7` randomly assigns 70% of the images to training and the remaining 30% to testing. If this parameter is unspecified or set to 1.0, only `transforms_train.json` is generated. Sometimes an additional `transforms_val.json` is required for evaluation from new points of view; since generating new viewpoints is not handled by this tool, it can simply be a renamed copy of `transforms_test.json`.
 
 The tool can also convert input images to other image formats using `--out-img-format`. An optional output directory can be given with `--out-img-dir`, otherwise the images are written to the same directory as the input ones. If `--out-img-dir` is used without `--out-img-format`, images will be copied. When copying or converting an image, the input directory layout is preserved.
 
````
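The `--train-frac` split semantics described above can be sketched in a few lines. This is a minimal illustration only: the function name, the file list, and the exact shuffling strategy are assumptions for the sketch, not the actual `opf2nerf` implementation.

```python
import random

def split_train_test(image_names, train_frac=1.0, seed=0):
    """Randomly assign a fraction of the images to training, the rest to testing."""
    names = list(image_names)
    random.Random(seed).shuffle(names)  # deterministic shuffle for reproducibility
    n_train = round(train_frac * len(names))
    # With train_frac == 1.0 the test list is empty, i.e. only transforms_train.json.
    return names[:n_train], names[n_train:]

train, test = split_train_test([f"img_{i:03}.jpg" for i in range(10)], train_frac=0.7)
print(len(train), len(test))  # 7 3
```

With `train_frac=1.0` the second list is empty, matching the documented behavior of generating only `transforms_train.json`.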
````diff
@@ -138,13 +144,22 @@ Only calibrated projects with perspective cameras are supported.
 
 ##### Examples
 
-Different NeRFs require different parameter settings, by default all values are set to work with Instant-NeRF, so it can be used as:
+Different NeRFs require different parameter settings; here are some popular examples:
+
+- **Instant-NeRF**:
+  By default all values are set to work with Instant-NeRF, so it can be used as:
+
+  `opf2nerf project.opf --output-extension`
+
+- **Nerfstudio**:
+  Nerfstudio is another popular tool. The converter has a parameter to use the proper options:
 
-`opf2nerf project.opf --output-extension`
+  `opf2nerf project.opf --out-dir out_dir/ --nerfstudio`
 
-DirectVoxGo only works with PNG image files, and contrary to Instant-NeRF it doesn't flip cameras orientation with respect to OPF. Thus it can be used as:
+- **DirectVoxGo**:
+  DirectVoxGo only works with PNG image files and, contrary to Instant-NeRF, it doesn't flip the camera orientations with respect to OPF. Thus it can be used as:
 
-`opf2nerf project.opf --out-img-format png --out-img-dir ./images --no-camera-flip`
+  `opf2nerf project.opf --out-img-format png --out-img-dir ./images --no-camera-flip`
 
 #### Convert to LAS
 
````
````diff
@@ -160,11 +175,21 @@ It can be used as follows:
 
 `opf2ply path_to/project.opf --out-dir your_output_dir`
 
+### Examples
+
+We also provide a few example command line scripts that illustrate various photogrammetry concepts using OPF projects.
+
+#### Compute reprojection error
+
+This script computes the reprojection error of the input GCPs in the calibrated cameras, using an OPF project as input.
+
+`python examples/compute_reprojection_error.py path_to/project.opf`
+
 ## License and citation
 
 If you use this work in your research or projects, we kindly request that you cite it as follows:
 
-The Open Photogrammetry Format Specification, Grégoire Krähenbühl, Klaus Schneider-Zapp, Bastien Dalla Piazza, Juan Hernando, Juan Palacios, Massimiliano Bellomo, Mohamed-Ghaïth Kaabi, Christoph Strecha, Pix4D, 2023, retrived from https://pix4d.github.io/opf-spec/
+The Open Photogrammetry Format Specification, Grégoire Krähenbühl, Klaus Schneider-Zapp, Bastien Dalla Piazza, Juan Hernando, Juan Palacios, Massimiliano Bellomo, Mohamed-Ghaïth Kaabi, Christoph Strecha, Pix4D, 2023, retrieved from https://pix4d.github.io/opf-spec/
 
 Copyright (c) 2023 Pix4D SA
 
````
examples/compute_reprojection_error.py (new file)

Lines changed: 252 additions & 0 deletions

```python
#!/usr/bin/env python

"""
This script computes the reprojection error of input GCPs in calibrated cameras.
"""

import argparse

import numpy as np

import pyopf.cameras
import pyopf.io
import pyopf.resolve

# == define some helper functions ==


def find_object_with_given_id(objects: list, target_id):
    """
    Returns the first object in the list that matches the given id, or None if not found.
    """
    return next((obj for obj in objects if obj.id == target_id), None)


def make_basis_change_matrix_from_opk_in_degrees(omega_phi_kappa: np.ndarray) -> np.ndarray:
    """
    Computes a basis change matrix from angles (in degrees) expressed in the omega-phi-kappa convention.
    This matrix transforms points from a right-top-back camera coordinate reference frame to the scene one.

    Please see the definition of the omega-phi-kappa angles in the OPF specification:
    https://pix4d.github.io/opf-spec/specification/input_cameras.html#omega-phi-kappa-orientation
    More details are provided in:
    https://s3.amazonaws.com/mics.pix4d.com/KB/documents/Pix4D_Yaw_Pitch_Roll_Omega_to_Phi_Kappa_angles_and_conversion.pdf
    """
    omega_rad, phi_rad, kappa_rad = np.deg2rad(omega_phi_kappa)

    r_x = np.array(
        [
            [1.0, 0.0, 0.0],
            [0.0, np.cos(omega_rad), -np.sin(omega_rad)],
            [0.0, np.sin(omega_rad), np.cos(omega_rad)],
        ],
        dtype=np.float64,
    )

    r_y = np.array(
        [
            [np.cos(phi_rad), 0.0, np.sin(phi_rad)],
            [0.0, 1.0, 0.0],
            [-np.sin(phi_rad), 0.0, np.cos(phi_rad)],
        ],
        dtype=np.float64,
    )

    r_z = np.array(
        [
            [np.cos(kappa_rad), -np.sin(kappa_rad), 0.0],
            [np.sin(kappa_rad), np.cos(kappa_rad), 0.0],
            [0.0, 0.0, 1.0],
        ],
        dtype=np.float64,
    )

    return r_x @ r_y @ r_z


def invert_transformation(transformation: np.ndarray) -> np.ndarray:
    """
    Computes the inverse of a given 4x4 rigid transformation.
    """
    inverse_transformation = np.identity(4, dtype=np.float64)

    inverse_transformation[0:3, 0:3] = transformation[0:3, 0:3].transpose()
    inverse_transformation[0:3, 3] = (
        -transformation[0:3, 0:3].transpose() @ transformation[0:3, 3]
    )

    return inverse_transformation


def make_camera_intrinsic_matrix(
    internals: pyopf.cameras.sensor_internals.PerspectiveInternals,
) -> np.ndarray:
    """
    Makes a 3x3 camera intrinsic matrix from the OPF internals.
    """
    intrinsic_matrix = np.zeros((3, 3), dtype=np.float64)

    intrinsic_matrix[0, 0] = internals.focal_length_px
    intrinsic_matrix[1, 1] = internals.focal_length_px
    intrinsic_matrix[0:2, 2] = internals.principal_point_px
    intrinsic_matrix[2, 2] = 1.0

    return intrinsic_matrix


def apply_distortion_model(
    ux: float, uy: float, internals: pyopf.cameras.sensor_internals.PerspectiveInternals
):
    """
    Applies the distortion model to undistorted image coordinates.
    """
    k1, k2, k3 = internals.radial_distortion
    t1, t2 = internals.tangential_distortion
    cpx, cpy = internals.principal_point_px
    f = internals.focal_length_px

    ux = (ux - cpx) / f
    uy = (uy - cpy) / f
    r2 = ux * ux + uy * uy  # squared radial distance from the principal point
    dr = 1.0 + r2 * k1 + r2**2 * k2 + r2**3 * k3
    dtx = 2.0 * t1 * ux * uy + t2 * (r2 + 2.0 * ux * ux)
    dty = 2.0 * t2 * ux * uy + t1 * (r2 + 2.0 * uy * uy)

    return f * (dr * ux + dtx) + cpx, f * (dr * uy + dty) + cpy


def project_point(
    camera: pyopf.cameras.CalibratedCamera,
    internals: pyopf.cameras.sensor_internals.PerspectiveInternals,
    point: np.ndarray,
) -> np.ndarray:
    """
    Computes the projection of a given 3D point in a camera given its internals.
    """
    basis_change_from_camera_to_scene = make_basis_change_matrix_from_opk_in_degrees(
        camera.orientation_deg
    )

    camera_pose = np.identity(4)
    camera_pose[0:3, 0:3] = basis_change_from_camera_to_scene
    camera_pose[0:3, 3] = camera.position

    # As per the definition of the omega-phi-kappa angles, the camera frame axes are:
    # X: camera/image right (looking through the camera/image)
    # Y: camera/image top (looking through the camera/image)
    # Z: camera back (opposite to viewing direction through camera)
    # To go to the standard computer vision convention, the Y and Z axes need to be flipped.
    flip_y_and_z = np.array(
        [
            [1.0, 0.0, 0.0, 0.0],
            [0.0, -1.0, 0.0, 0.0],
            [0.0, 0.0, -1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0],
        ],
        dtype=np.float64,
    )

    camera_pose_right_bottom_front = camera_pose @ flip_y_and_z
    camera_pose_inverse = invert_transformation(camera_pose_right_bottom_front)

    camera_intrinsic_matrix = make_camera_intrinsic_matrix(internals)

    point_homogeneous = np.append(point, 1.0)

    # Project the point on the camera image.
    point_in_camera_homogeneous = camera_pose_inverse @ point_homogeneous
    x, y, z = camera_intrinsic_matrix @ point_in_camera_homogeneous[:-1]

    ux = x / z
    uy = y / z

    # Apply the distortion model.
    distorted_ux, distorted_uy = apply_distortion_model(ux, uy, internals)

    return np.array([distorted_ux, distorted_uy], dtype=np.float64)


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Compute the reprojection error of GCPs in an OPF project."
    )
    parser.add_argument(
        "opf_path", type=str, help="[REQUIRED] The path to your project.opf file."
    )
    return parser.parse_args()


def main():
    args = parse_args()

    # == Load the OPF ==
    project = pyopf.resolve.resolve(pyopf.io.load(args.opf_path))

    input_gcps = project.input_control_points.gcps
    projected_gcps = project.projected_control_points.projected_gcps
    # Alternatively, we can also use the optimized coordinates for the GCPs:
    # projected_gcps = project.calibration.calibrated_control_points.points

    calibrated_cameras = project.calibration.calibrated_cameras.cameras
    sensors = project.calibration.calibrated_cameras.sensors

    # == For all GCPs, compute the reprojection error of all marks and the mean ==
    for gcp in input_gcps:
        # Get the corresponding projected GCP.
        scene_gcp = find_object_with_given_id(projected_gcps, gcp.id)

        all_reprojection_errors = []

        for mark in gcp.marks:
            # Find the corresponding calibrated camera.
            calibrated_camera = find_object_with_given_id(
                calibrated_cameras, mark.camera_id
            )

            # Find the internal parameters for this camera.
            calibrated_sensor = find_object_with_given_id(
                sensors, calibrated_camera.sensor_id
            )
            internal_parameters = calibrated_sensor.internals

            # Project the 3D point on the image.
            gcp_on_image = project_point(
                calibrated_camera, internal_parameters, scene_gcp.coordinates
            )

            # Compute the reprojection error.
            reprojection_error = gcp_on_image - mark.position_px
            all_reprojection_errors.append(reprojection_error)

        # Compute the mean of the norms of the reprojection errors.
        all_reprojection_errors = np.array(all_reprojection_errors)
        mean_reprojection_error = np.mean(
            np.apply_along_axis(np.linalg.norm, 1, all_reprojection_errors)
        )

        print(gcp.id, mean_reprojection_error)


if __name__ == "__main__":
    main()
```
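As a quick sanity check of the geometry used by the script, the snippet below re-implements its two pure-math helpers (copied here with numpy only, so the block is self-contained and runnable without pyopf) and verifies the expected properties: zero omega-phi-kappa angles yield the identity rotation, and a rigid transform composed with its computed inverse yields the identity.

```python
import numpy as np

def make_basis_change_matrix_from_opk_in_degrees(omega_phi_kappa):
    """Rotation R = Rx(omega) @ Ry(phi) @ Rz(kappa), angles in degrees."""
    o, p, k = np.deg2rad(np.asarray(omega_phi_kappa, dtype=np.float64))
    r_x = np.array([[1, 0, 0], [0, np.cos(o), -np.sin(o)], [0, np.sin(o), np.cos(o)]])
    r_y = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    r_z = np.array([[np.cos(k), -np.sin(k), 0], [np.sin(k), np.cos(k), 0], [0, 0, 1]])
    return r_x @ r_y @ r_z

def invert_transformation(t):
    """Invert a rigid 4x4 transform: inverse rotation R^T, inverse translation -R^T @ t."""
    inv = np.identity(4, dtype=np.float64)
    inv[:3, :3] = t[:3, :3].T
    inv[:3, 3] = -t[:3, :3].T @ t[:3, 3]
    return inv

# Zero angles must give the identity rotation.
assert np.allclose(make_basis_change_matrix_from_opk_in_degrees([0, 0, 0]), np.eye(3))

# A rigid transform composed with its inverse must be the identity.
pose = np.identity(4)
pose[:3, :3] = make_basis_change_matrix_from_opk_in_degrees([10.0, -20.0, 30.0])
pose[:3, 3] = [1.0, 2.0, 3.0]
assert np.allclose(pose @ invert_transformation(pose), np.eye(4))

# The OPK matrix is a proper rotation: orthonormal with determinant +1.
assert np.isclose(np.linalg.det(pose[:3, :3]), 1.0)
print("geometry helpers OK")
```

These checks exercise only the coordinate-frame math; the OPF-specific parts (loading projects, sensor internals) still require the pyopf package.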
