
RESEARCH ON CLOSE-RANGE PHOTOGRAMMETRY WITH BIG ROTATION ANGLE

Lu Jue

Department of Surveying and Geo-informatics Engineering, Tongji University, Shanghai, 200092 - lujue1985@126.com

KEY WORDS: Big Rotation Angle; Collinearity Equation; Coplanarity Equation; Exterior Orientation

ABSTRACT:

This paper studies the influence of big rotation angles on the classical solution of the analytical process. Close-range photogrammetry differs from traditional aerial photogrammetry: the former usually adopts oblique or convergent photography, while the latter takes near-vertical photos in which the angular orientation elements are very small. This paper explores the algorithm for every step of the analytical process, from dependent relative orientation to absolute orientation. To emphasize the differences, contrasts are drawn between the classical solution for aerial photography and the algorithm for close-range photogrammetry. A practical close-range photogrammetry experiment is presented to demonstrate the new algorithm; through this example it is shown that the coordinates of the ground points and the exterior orientation elements can be accurately calculated. The results indicate that the algorithm deduced in this paper is applicable to close-range photogrammetry with big rotation angles.

1. INTRODUCTION

Close-range photogrammetry determines the shape, size, and geometric position of close targets with photogrammetric technology [1]. It differs from traditional aerial photogrammetry in several ways. For example, the former usually takes oblique or convergent photographs, so the rotation angles among the exterior orientation elements are often very big, and the discrepancies between the adjacent camera stations' three-dimensional coordinates can also be quite large; the latter takes near-vertical photos in which the angular orientation elements are very small. So in close-range photogrammetry, in every step of the analytical process, such as dependent relative orientation, space intersection, strip formation, absolute orientation and space resection, a simplified model with small angles or a simple mathematical model can no longer be used to calculate the parameters or coordinates that we need.

In this paper, an exploration of the algorithm for every step of the analytical process in close-range photogrammetry is described. Special emphasis is placed on the contrasts between the solution for classic aerial photography and that for close-range photogrammetry, and corresponding examples are presented to explain and verify the algorithm.

2. BASIC PRINCIPLE OF SPACE RESECTION AND TLS

Collinearity condition equations are essential in analytical photogrammetry. Their fundamental principle is that the perspective center, the image point, and the corresponding object point all lie on one line. Based on these equations, the image and object coordinate systems are related by three position parameters and three orientation parameters, called the exterior orientation elements [2]. No matter in what format the three orientation parameters are presented, they are implicitly expressed in the nine elements of a 3×3 orthogonal rotation matrix R, so the collinearity equations can be expressed as follows:

x - x_0 = -f \frac{a_1(X - X_s) + b_1(Y - Y_s) + c_1(Z - Z_s)}{a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)}
y - y_0 = -f \frac{a_2(X - X_s) + b_2(Y - Y_s) + c_2(Z - Z_s)}{a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)}    (1)

R = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix}    (2)

where (x_0, y_0, f) are the interior orientation elements, (X, Y, Z) and (X_s, Y_s, Z_s) are the object space coordinates of the ground points and of the perspective centres, and a_1, a_2, a_3, b_1, b_2, b_3, c_1, c_2, c_3 are the nine elements of the rotation matrix. In resection, once the interior parameters are known, the collinearity equations can be employed to determine the six exterior orientation parameters. With more than three control points, to estimate better results, we linearize the collinearity equations and introduce the least squares (LS) method [3]. However, good initial values are required for the LS estimation to converge to appropriate values [2].

In traditional aerial photogrammetry, the initial values of the three orientation parameters can be set to zero because of the vertical manner of photography [4]. But in close-range photogrammetry with big rotation angles, if they are still set to zero, the process may not converge to proper results. So we need to determine the exterior orientation elements without knowledge of their approximate values.

In this paper, the collinearity solution based on the orthogonal matrix is applied. In this method, the nine elements of the matrix are used instead of the three angles, together with the three position parameters, as the unknowns. The linear observation equations can then be made along with six
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B6b. Beijing 2008

additional condition equations based on the orthogonal conditions of the rotation matrix (RR^T = R^T R = I) [5]. The six orthogonal conditions are:

\begin{cases}
a_1^2 + a_2^2 + a_3^2 = 1 \\
b_1^2 + b_2^2 + b_3^2 = 1 \\
c_1^2 + c_2^2 + c_3^2 = 1 \\
a_1 a_2 + b_1 b_2 + c_1 c_2 = 0 \\
a_1 a_3 + b_1 b_3 + c_1 c_3 = 0 \\
a_2 a_3 + b_2 b_3 + c_2 c_3 = 0
\end{cases}    (3)

Then the adjustment with conditions is applied. The error equations are:

v = Ax - l
v = \begin{bmatrix} v_x \\ v_y \end{bmatrix}, \quad l = \begin{bmatrix} l_x \\ l_y \end{bmatrix} = \begin{bmatrix} x - (x) \\ y - (y) \end{bmatrix}
x^T = [\,dX_s \; dY_s \; dZ_s \; da_1 \; da_2 \; da_3 \; db_1 \; db_2 \; db_3 \; dc_1 \; dc_2 \; dc_3\,]
A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{22} \\ b_{11} & b_{12} & \cdots & b_{22} \end{bmatrix}
  = \begin{bmatrix} \partial x/\partial X_s & \partial x/\partial Y_s & \partial x/\partial Z_s & \partial x/\partial a_1 & \cdots & \partial x/\partial c_3 \\ \partial y/\partial X_s & \partial y/\partial Y_s & \partial y/\partial Z_s & \partial y/\partial a_1 & \cdots & \partial y/\partial c_3 \end{bmatrix}    (4)

If \bar{Z} = a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s), then the elements of the coefficient matrix A are:

a_{11} = (a_1 f + a_3(x - x_0))/\bar{Z}, \quad b_{11} = (a_2 f + a_3(y - y_0))/\bar{Z}
a_{12} = (b_1 f + b_3(x - x_0))/\bar{Z}, \quad b_{12} = (b_2 f + b_3(y - y_0))/\bar{Z}
a_{13} = (c_1 f + c_3(x - x_0))/\bar{Z}, \quad b_{13} = (c_2 f + c_3(y - y_0))/\bar{Z}
a_{14} = -f(X - X_s)/\bar{Z}, \quad b_{14} = 0
a_{15} = 0, \quad b_{15} = -f(X - X_s)/\bar{Z}
a_{16} = -(x - x_0)(X - X_s)/\bar{Z}, \quad b_{16} = -(y - y_0)(X - X_s)/\bar{Z}
a_{17} = -f(Y - Y_s)/\bar{Z}, \quad b_{17} = 0
a_{18} = 0, \quad b_{18} = -f(Y - Y_s)/\bar{Z}
a_{19} = -(x - x_0)(Y - Y_s)/\bar{Z}, \quad b_{19} = -(y - y_0)(Y - Y_s)/\bar{Z}
a_{20} = -f(Z - Z_s)/\bar{Z}, \quad b_{20} = 0
a_{21} = 0, \quad b_{21} = -f(Z - Z_s)/\bar{Z}
a_{22} = -(x - x_0)(Z - Z_s)/\bar{Z}, \quad b_{22} = -(y - y_0)(Z - Z_s)/\bar{Z}    (5)

And the condition equations can be written as:

Cx + W = 0

C = \begin{bmatrix}
0 & 0 & 0 & 2a_1 & 2a_2 & 2a_3 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 2b_1 & 2b_2 & 2b_3 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2c_1 & 2c_2 & 2c_3 \\
0 & 0 & 0 & a_2 & a_1 & 0 & b_2 & b_1 & 0 & c_2 & c_1 & 0 \\
0 & 0 & 0 & a_3 & 0 & a_1 & b_3 & 0 & b_1 & c_3 & 0 & c_1 \\
0 & 0 & 0 & 0 & a_3 & a_2 & 0 & b_3 & b_2 & 0 & c_3 & c_2
\end{bmatrix}

W = \begin{bmatrix} a_1^2 + a_2^2 + a_3^2 - 1 \\ b_1^2 + b_2^2 + b_3^2 - 1 \\ c_1^2 + c_2^2 + c_3^2 - 1 \\ a_1 a_2 + b_1 b_2 + c_1 c_2 \\ a_1 a_3 + b_1 b_3 + c_1 c_3 \\ a_2 a_3 + b_2 b_3 + c_2 c_3 \end{bmatrix}    (6)

This process eliminates the need to differentiate the nonlinear observation equations with respect to the angles, and it can be used regardless of the format and the size of the three rotation angles.

3. ALGORITHM OF THE ANALYTICAL PROCESS WITH BIG ROTATION ANGLES

3.1 Dependent Relative Orientation

The model coordinate system in dependent relative orientation is parallel to the first image's coordinate system, leaving two position parameters and the three orientation parameters of the second image with respect to the model coordinate system to be determined. In one image pair, let the left and right image space coordinate systems be S1-X1Y1Z1 and S2-X2Y2Z2, let the coordinates of the image points A1, A2 in these two coordinate systems be (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2), and let the coordinates of the perspective centre S2 in coordinate system S1-X1Y1Z1 be (B_x, B_y, B_z). Then the coplanarity condition equation can be expressed as follows:

F = \begin{vmatrix} B_x & B_y & B_z \\ X_1 & Y_1 & Z_1 \\ X_2 + B_x & Y_2 + B_y & Z_2 + B_z \end{vmatrix} = \begin{vmatrix} B_x & B_y & B_z \\ X_1 & Y_1 & Z_1 \\ X_2 & Y_2 & Z_2 \end{vmatrix} = 0    (7)

In aerial photogrammetry, since the perspective centres lie nearly on a straight line, the usual procedure is to extract the base line component B_x; B_y and B_z can then be expressed through minute angles μ and ν. Together with the other three rotation angles, the five relative orientation parameters μ, ν, φ, ω, κ are formed, and the coplanarity condition equation is simplified as:

F = B_x \begin{vmatrix} 1 & \tan\mu & \tan\nu/\cos\mu \\ X_1 & Y_1 & Z_1 \\ X_2 & Y_2 & Z_2 \end{vmatrix} \approx B_x \begin{vmatrix} 1 & \mu & \nu \\ X_1 & Y_1 & Z_1 \\ X_2 & Y_2 & Z_2 \end{vmatrix}    (8)

where (X_2, Y_2, Z_2)^T is obtained by applying R, the rotation matrix from the right image to the left one in the form of equation (2), to the right image point's coordinates in its own image system. This method is


available in classic aerial photography, but in photogrammetry with big rotation angles these five relative orientation parameters are no longer small, so two problems may arise.

First, since the values of the relative orientation parameters φ, ω, κ are very big, if the initial values are not appropriate enough, the LS estimation of the nonlinear observation equations may not converge to the right values. So the relative orientation solution based on the orthogonal matrix is employed.

Second, in aerial photography, μ and ν are adopted instead of tan μ and tan ν / cos μ, because the two angles are small; this avoids differentiating the nonlinear function. But in close-range photogrammetry, μ and ν may be big angles, so the simplified forms are no longer suitable. The development of the solution is to calculate the three base line components B_x, B_y and B_z directly, without using the two angles μ and ν. So we use twelve unknowns: three are the components of the base line, and the other nine are the elements of the rotation matrix. Since relative orientation involves the determination of five degrees of freedom, seven condition equations are added to the observation equations: one limiting the base line components and the others based on the orthogonal conditions of the rotation matrix. The error equations are:

v = Ax - l
x^T = [\,dB_x \; dB_y \; dB_z \; da_1 \; da_2 \; da_3 \; db_1 \; db_2 \; db_3 \; dc_1 \; dc_2 \; dc_3\,]
l = -F_0 = X_2 Y_1 B_z + X_1 Z_2 B_y + Y_2 Z_1 B_x - Y_1 Z_2 B_x - X_1 Y_2 B_z - X_2 Z_1 B_y
A = [\,a_{11} \; a_{12} \; \cdots \; a_{22}\,] = [\,\partial F/\partial B_x \; \partial F/\partial B_y \; \partial F/\partial B_z \; \partial F/\partial a_1 \; \cdots \; \partial F/\partial c_3\,]    (9)

And the coefficients of the matrix A are:

a_{11} = \partial F/\partial B_x = Y_1 Z_2 - Y_2 Z_1, \quad a_{12} = \partial F/\partial B_y = X_2 Z_1 - X_1 Z_2
a_{13} = \partial F/\partial B_z = X_1 Y_2 - X_2 Y_1, \quad a_{14} = \partial F/\partial a_1 = (x_2 - x_0)(B_y Z_1 - B_z Y_1)
a_{15} = \partial F/\partial a_2 = (y_2 - y_0)(B_y Z_1 - B_z Y_1), \quad a_{16} = \partial F/\partial a_3 = -f (B_y Z_1 - B_z Y_1)
a_{17} = \partial F/\partial b_1 = (x_2 - x_0)(B_z X_1 - B_x Z_1), \quad a_{18} = \partial F/\partial b_2 = (y_2 - y_0)(B_z X_1 - B_x Z_1)
a_{19} = \partial F/\partial b_3 = -f (B_z X_1 - B_x Z_1), \quad a_{20} = \partial F/\partial c_1 = (x_2 - x_0)(B_x Y_1 - B_y X_1)
a_{21} = \partial F/\partial c_2 = (y_2 - y_0)(B_x Y_1 - B_y X_1), \quad a_{22} = \partial F/\partial c_3 = -f (B_x Y_1 - B_y X_1)    (10)

The conditions are:

\begin{cases}
B_x^2 + B_y^2 + B_z^2 = B^2 \\
a_1^2 + a_2^2 + a_3^2 = 1 \\
b_1^2 + b_2^2 + b_3^2 = 1 \\
c_1^2 + c_2^2 + c_3^2 = 1 \\
a_1 a_2 + b_1 b_2 + c_1 c_2 = 0 \\
a_1 a_3 + b_1 b_3 + c_1 c_3 = 0 \\
a_2 a_3 + b_2 b_3 + c_2 c_3 = 0
\end{cases}    (11)

where B is an arbitrary constant. So the condition equations can be expressed as:

Cx + W = 0

C = \begin{bmatrix}
2B_x & 2B_y & 2B_z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 2a_1 & 2a_2 & 2a_3 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 2b_1 & 2b_2 & 2b_3 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2c_1 & 2c_2 & 2c_3 \\
0 & 0 & 0 & a_2 & a_1 & 0 & b_2 & b_1 & 0 & c_2 & c_1 & 0 \\
0 & 0 & 0 & a_3 & 0 & a_1 & b_3 & 0 & b_1 & c_3 & 0 & c_1 \\
0 & 0 & 0 & 0 & a_3 & a_2 & 0 & b_3 & b_2 & 0 & c_3 & c_2
\end{bmatrix}

W = \begin{bmatrix} B_x^2 + B_y^2 + B_z^2 - B^2 \\ a_1^2 + a_2^2 + a_3^2 - 1 \\ b_1^2 + b_2^2 + b_3^2 - 1 \\ c_1^2 + c_2^2 + c_3^2 - 1 \\ a_1 a_2 + b_1 b_2 + c_1 c_2 \\ a_1 a_3 + b_1 b_3 + c_1 c_3 \\ a_2 a_3 + b_2 b_3 + c_2 c_3 \end{bmatrix}    (12)

By this means, the nonlinear function is changed into a linear one, eliminating the differentiation with respect to the angles, and the adjustment converges quickly to the correct result without knowledge of approximate parameter values. The deduction is not complicated, and the construction of the coefficients of the adjustment equations is clear and concise.

3.2 Intersection

Intersection refers to the determination of a point's position in object space by intersecting the image rays from two or more images. Here the standard method is used, i.e. the application of the collinearity equations, with two equations from each image observing the point. With this method, more than two images can be used to calculate the coordinates of the model points, making it a strict solution that is not bounded by the number of photos [6]:

v_x = \frac{\partial x}{\partial X} dX + \frac{\partial x}{\partial Y} dY + \frac{\partial x}{\partial Z} dZ - (x - (x))
v_y = \frac{\partial y}{\partial X} dX + \frac{\partial y}{\partial Y} dY + \frac{\partial y}{\partial Z} dZ - (y - (y))    (13)


3.3 Bridging of Models

The essence of model linking is to calculate the scale between two adjoining models [6]. In aerial photogrammetry, the connective condition is based on the equality of the depths of the common points [7].


In contrast to the traditional method, in close-range photogrammetry parallax appears not only along the X axis but also along the Z axis, so two adjoining stereo pairs of photographs are incongruous both in scale and in orientation. If the connective condition is only the depths of the common points, it is obviously not strict enough. So the three-dimensional coordinates of the common points are employed to bridge the successive models, for they should have the same coordinates in the same system. With the following equations, the scale parameter k_{i+1} can be calculated:

\begin{bmatrix} X_i^m \\ Y_i^m \\ Z_i^m \end{bmatrix} = \begin{bmatrix} B_{xi} \\ B_{yi} \\ B_{zi} \end{bmatrix} + k_{i+1} R(\varphi_{i+1}, \omega_{i+1}, \kappa_{i+1}) \begin{bmatrix} X_{i+1}^m \\ Y_{i+1}^m \\ Z_{i+1}^m \end{bmatrix}    (14)

Here (X_i^m, Y_i^m, Z_i^m) are the coordinates of the m-th model point in the i-th stereo pair, given in the image coordinate system of the left image. (B_{xi}, B_{yi}, B_{zi}) and (\varphi_i, \omega_i, \kappa_i) are the relative orientation parameters between the left image of the (i+1)-th stereo pair and the left image of the i-th stereo pair. So the only unknown parameter is the scale k_{i+1}.

3.4 Building the Integrated Successive Model

In close-range photogrammetry, there are two inconsistencies between two adjacent models: one is the scale, the other is the orientation. If R and k are the rotation matrix and the scale of the right image relative to the left one, then the rotation matrix R' and the scale k' relative to the first image are:

R'(1) = R(1) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
R'(2) = R(2) R(1) = R(2) = \begin{bmatrix} a_1(1) & a_2(1) & a_3(1) \\ b_1(1) & b_2(1) & b_3(1) \\ c_1(1) & c_2(1) & c_3(1) \end{bmatrix}
R'(j) = R(j) \times R(j-1) \times \cdots \times R(2) \times R(1), \quad (j = 3, 4, \ldots, n)    (15)

k'(1) = k(1) = 1
k'(2) = k(2) k(1) = k(2)
k'(j) = k(j) \times k(j-1) \cdots k(2) \times k(1), \quad (j = 3, 4, \ldots, n)    (16)

On this basis, the coordinates of every perspective center in the model coordinate system, (XXs(j), YYs(j), ZZs(j)), can be deduced as:

\begin{pmatrix} XXs(j+1) \\ YYs(j+1) \\ ZZs(j+1) \end{pmatrix} = \begin{pmatrix} XXs(j) \\ YYs(j) \\ ZZs(j) \end{pmatrix} + k'(j) \times R'(j) \times \begin{pmatrix} B_x(j) \\ B_y(j) \\ B_z(j) \end{pmatrix}, \quad (j = 2, 3, \ldots, n)    (17)

After this process, every model coordinate system in the strip is parallel to the first image's coordinate system, so the model coordinates of all unknown points can be calculated with the standard method of intersection using the collinearity equations.

3.5 Absolute Orientation

The final step is absolute orientation. Analytical absolute orientation is based on the three-dimensional similarity transformation. Seven parameters are involved: a uniform scale λ, three translations (X_0, Y_0, Z_0), and three rotations (Φ, Ω, Κ). In classic aerial photography, the simplified model with small angles is usually used, but with big rotation angles it can no longer be applied.

The solution here is also based on the orthogonal rotation matrix. Instead of calculating the three rotations Φ, Ω, Κ directly, the nine elements of the matrix are used to establish linear observation equations, and the adjustment with conditions is applied [8].

In the calculation, the first step is to centralize the coordinates. The initial values X_0^0, Y_0^0, Z_0^0 are the differences of the corresponding centre-of-gravity coordinates in the two coordinate systems, with \lambda^0 = 1 and R^0 = I. The error equations are:

v = Ax - l
x^T = [\,dX_0 \; dY_0 \; dZ_0 \; d\lambda \; da_1 \; da_2 \; da_3 \; db_1 \; db_2 \; db_3 \; dc_1 \; dc_2 \; dc_3\,]
A = \begin{bmatrix}
1 & 0 & 0 & a_1^0 x(i) + a_2^0 y(i) + a_3^0 z(i) & \lambda^0 x(i) & \lambda^0 y(i) & \lambda^0 z(i) & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & b_1^0 x(i) + b_2^0 y(i) + b_3^0 z(i) & 0 & 0 & 0 & \lambda^0 x(i) & \lambda^0 y(i) & \lambda^0 z(i) & 0 & 0 & 0 \\
0 & 0 & 1 & c_1^0 x(i) + c_2^0 y(i) + c_3^0 z(i) & 0 & 0 & 0 & 0 & 0 & 0 & \lambda^0 x(i) & \lambda^0 y(i) & \lambda^0 z(i)
\end{bmatrix}
l = \begin{bmatrix} X_0^0 \\ Y_0^0 \\ Z_0^0 \end{bmatrix} + \lambda^0 \begin{bmatrix} a_1^0 & a_2^0 & a_3^0 \\ b_1^0 & b_2^0 & b_3^0 \\ c_1^0 & c_2^0 & c_3^0 \end{bmatrix} \begin{bmatrix} x(i) \\ y(i) \\ z(i) \end{bmatrix} - \begin{bmatrix} X(i) \\ Y(i) \\ Z(i) \end{bmatrix}    (18)

(X(i), Y(i), Z(i)) denotes the object space coordinates of the control points, and (x(i), y(i), z(i)) the model space coordinates. This model of three-dimensional datum transformation suits any angle.

4. CASE STUDY

In this section, the preceding approaches to close-range photogrammetry with big rotation angles are tested. The target is a simple house, and 4 photos are taken, as shown in Figure 1, where the red points are perspective centers, the blue points are known ground points, the green lines represent the main lines of the house, and the yellow lines the windows. The house spans from (0, 0, 0) m to (10, 10, 10) m, and the coordinate of the rooftop is (5, 5, 11) m. Points No. 23, No. 24 and No. 25 are randomly selected as ground control points, and the remaining ones serve as check points. The interior orientation elements are (x_0, y_0, f) = (0, 0, 50) mm, and the exterior orientations are given in Table 1. The errors in the image coordinates are about 0.1 mm.
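The strip formation of Section 3.4, accumulating per-model rotations and scales through equations (15)-(17), reduces to a short recursion. Below is a minimal numpy sketch; the variable names are mine, and I assume the recursion of equation (17) also covers the step from the first to the second perspective centre, with the first centre placed at the origin:

```python
import numpy as np

def chain_models(rotations, scales, baselines):
    """Accumulate per-model rotations R(j) and scales k(j) into the
    strip-wide R'(j), k'(j) of equations (15)-(16), then propagate the
    perspective centres with equation (17):
        S(j+1) = S(j) + k'(j) * R'(j) @ B(j).

    rotations: R(1)..R(n) as 3x3 arrays, with R(1) = I
    scales:    k(1)..k(n), with k(1) = 1
    baselines: B(1)..B(n-1) as 3-vectors, one per adjacent model pair
    """
    R_acc = [np.asarray(rotations[0], dtype=float)]  # R'(1) = I
    k_acc = [float(scales[0])]                       # k'(1) = 1
    centres = [np.zeros(3)]                          # first centre at origin
    for j in range(1, len(rotations)):
        # Eq. (17): next centre uses the accumulations of the previous model.
        step = k_acc[-1] * (R_acc[-1] @ np.asarray(baselines[j - 1], dtype=float))
        centres.append(centres[-1] + step)
        R_acc.append(np.asarray(rotations[j], dtype=float) @ R_acc[-1])  # eq. (15)
        k_acc.append(float(scales[j]) * k_acc[-1])                       # eq. (16)
    return R_acc, k_acc, centres
```

After this chaining, every model is expressed at a consistent scale and orientation relative to the first image, which is exactly what the subsequent absolute orientation assumes.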


Figure 1. Image for experimental data

Image No.  Xs(m)  Ys(m)  Zs(m)  φ(°)  ω(°)  κ(°)
1          18     5      12     -30   0     0
2          16     16     12     -30   0     -20
3          5      18     12     0     -30   0
4          -6     16     12     30    0     20

Table 1. Given exterior orientations of every image

With the algorithms in the preceding section, the object space coordinates of the unknown points can be calculated. Two methods are presented to evaluate the result.

Firstly, the accuracy is evaluated using check points whose world coordinates are known but not used as control in the solution. The root mean square of the differences between the computed coordinates and the known values provides a measure of the solution accuracy. The distribution of the errors of the check points is shown in Figure 2.

Figure 2. Plot of check point error

Figure 2 indicates that the directions of the errors are random and their absolute values are small; the maximal one is at point No. 2, with values \max(dX, dY, dZ) = (0.891, 0.228, -0.662) mm. If the number of check points is n_{check}, then the mean square errors are:

(\sigma_X, \sigma_Y, \sigma_Z) = \left( \sqrt{\frac{\sum dX^2}{n_{check}}}, \sqrt{\frac{\sum dY^2}{n_{check}}}, \sqrt{\frac{\sum dZ^2}{n_{check}}} \right) = (0.416, 0.101, 0.279) mm

Secondly, the solution of space resection with big rotation angles (described in Section 2) is used to calculate the exterior orientations of every image, and a comparison is made between the computed values and the given ones. The result is given in Table 2.

Image No.  dXs(mm)  dYs(mm)  dZs(mm)  dφ(″)  dω(″)  dκ(″)
1          0.1205   0.0946   0.0223   2.251  1.371  1.198
2          0.1142   0.0775   0.1076   2.549  1.354  1.250
3          0.0250   0.0461   0.0411   2.560  2.591  1.874
4          -0.0042  -0.0344  -0.0234  1.898  3.248  1.066

Table 2. Differences of the exterior orientations between the computed results and the known values

From Table 2, we can see that the result is appropriate. More data have been analyzed outside this paper, and the outcome is also correct, which verifies the validity and robustness of the algorithms for close-range photogrammetry with big rotation angles in this paper.

5. CONCLUSION

The algorithms for close-range photogrammetry with big rotation angles have been developed in this paper to perform the analytical process, including dependent relative orientation, space intersection, strip formation, absolute orientation and space resection.

Using these algorithms, we have obtained accurate and robust results, which verify the validity and robustness of the algorithms deduced in this paper. Compared with the approaches of the analytical process in classic aerial photogrammetry, the algorithms in this paper have the following advantages:

(1) They are applicable in close-range photogrammetry with big rotation angles, and they differ from the methods used in classic photogrammetry in that they are strict and do not employ the simplified models.

(2) They remain suitable when the absolute values of the differences between the adjacent camera stations' three-dimensional coordinates are quite large.

With these algorithms, the images can be tied into a successive model like a strip, so fewer control points (more than 2) are needed to perform the absolute orientation, which reduces the amount of field work.

6. ACKNOWLEDGEMENTS

The paper is substantially supported by the National High Technology Research and Development Program of China (863 Program) "Complicated Features' Extraction and Analysis from Central Districts in Metropolis", the ChangJiang Scholars Program, Ministry of Education, PRC, and the Tongji University "985" Project sub-system "Integrated Monitoring for City Spatial Information". The author would like to thank ChangJiang Scholar Dr. Ron Li and Dr. Hongxing Sun for their substantial help.

REFERENCES
Chen Yi, Shen Yun-zhong, Liu Da-jie, 2004. A Simplified Model of Three-Dimensional Datum Transformation Adapted to Big Rotation Angle. Geomatics and Information Science of Wuhan University, 29: pp. 1101-1105.

Chen Yi, 2004. New Computation Method of Collinearity Equation Suiting Digital Photogrammetry. Journal of Tongji University, 32(5): pp. 660-663.

Edward M. Mikhail, James S. Bethel, J. Chris McGlone, 2001. Introduction to Modern Photogrammetry. Wiley, pp. 66-103.

Feng Wen-hao, 2002. Close Range Photogrammetry: The Rules for Photography with the Shape and Motion State of Objects. Wuhan: Wuhan University Press, pp. 1-14.

Li De-ren, Zheng Zhao-bao, 1992. Analytical Photogrammetry. Beijing: Surveying Press, pp. 114-128.

Li De-ren, Zhou Yue-qin, Jin Wei-xian, 2001. The Outline of Photogrammetry and Remote Sensing. Beijing: Surveying Press, pp. 15-68.

Qian Ceng-bo, Liu Jing-zi, Xiao Guo-chao, 1992. Aero Photogrammetry. Beijing: People's Liberation Army Press, pp. 80-198.

Wang Zhi-Zhuo, 1984. The Principle of Photogrammetry. Wuhan: Surveying Press, pp. 14-122.
