Robotic Vision
Abstract

This paper presents how to determine the position of an object in 3D coordinates using a stereo camera. In order to know where the object is, we have to calculate or estimate the intrinsic and extrinsic parameters of the cameras. Here, we use Newton's method to minimize the error, but we estimate only the translation vector among the external parameters. Besides, we place a stick in front of both cameras, so that we obtain its 2D points in the image plane of each camera; one stick yields two points, and we then use these points to find the relationship between the two cameras. The results show that we only need the points in the 2D coordinates to recover the relationship between the two cameras. Finally, we determine the position of the object from the parameters estimated before.

Keywords: Stereo camera, Newton's method, Computer vision, Camera calibration, Intrinsic and extrinsic parameters.

1. Introduction

Nowadays, most companies in the world apply cameras to robotics, because a camera works like our eyes: for example, to pick something up, we first have to know where it is and then control the hand to grasp it.

Camera calibration is an important issue in computer vision, especially for video surveillance, 3D reconstruction, image correspondence, image registration, and so on. Camera calibration estimates the parameters that are either internal or external to a camera. The internal (or intrinsic) parameters determine how the image coordinates of a point are derived, given the spatial position of the point with respect to the camera.

Many research efforts have been devoted to the issues of stereo camera calibration. There are several ways to estimate or calculate the camera parameters, such as Radial Basis Function networks (RBF) or camera calibration using a 1D pattern with a single camera. Each approach has its advantages and disadvantages: with a single camera and a 1D pattern, the external parameters cannot be estimated exactly [2], but only one camera is needed and the internal parameters can be estimated exactly.

This paper is organized as follows: Section 2 builds the calibration algorithm for the stereo camera, Section 3 simulates the relationship between the two cameras, and Section 4 concludes.

2. Calibration Algorithm

Stereo-camera calibration not only requires determining the internal and external parameters of each single camera, but also calibrating the relative pose between the two cameras. The mapping of target points from the right camera coordinate system to the left camera coordinate system can be described by the rotation matrix Rc and the translation vector Tc. Consider a 3D point Pw(Xw, Yw, Zw) whose coordinates in the left camera coordinate system are (Xl, Yl, Zl) and whose corresponding coordinates in the right camera coordinate system are (Xr, Yr, Zr). Then the mapping from right camera coordinates to left camera coordinates can be expressed as:

[Xl; Yl; Zl] = Rc * [Xr; Yr; Zr] + Tc    (1)
Fig. 1. Diagram of the geometry relationship

This algorithm follows the projection model of a pinhole camera, which can be expressed as:

u = fx * X / Z,    v = fy * Y / Z    (2)

Where:
X, Y, Z: the position of the object in the 3D camera coordinate system
u, v: the pixel coordinates on the camera
fx, fy: the focal lengths of the camera

Firstly, we choose the right camera (camera 1) as the base. From Eq. (2), the projection on the right camera is as follows:

u1 = fx1 * Xr / Zr,    v1 = fy1 * Yr / Zr    (3)

The position of the object in the left camera frame, based on Eq. (1) with Rc = I, can be simplified and written as:

Xl = Xr + t1,    Yl = Yr + t2,    Zl = Zr + t3    (4)

Let (u2, v2) be the pixel coordinates and (fx2, fy2) the focal lengths of camera 2 (the left camera). Projecting the object onto camera 2 gives:

Xl = (u2 / fx2) * Zl,    Yl = (v2 / fy2) * Zl    (5)

Then, combining with Eq. (4), we obtain the expressions relating the two cameras:

u2 * (Zr + t3) = fx2 * (Xr + t1)
v2 * (Zr + t3) = fy2 * (Yr + t2)

Substituting Xr = (u1/fx1) * Zr and Yr = (v1/fy1) * Zr from Eq. (3) and rearranging:

u2 * Zr = (u1/fx1) * fx2 * Zr + fx2 * t1 - u2 * t3
v2 * Zr = (v1/fy1) * fy2 * Zr + fy2 * t2 - v2 * t3    (6)

From Eq. (6) we have two linear conditions on the depth Zr; solving them in the least-squares sense gives:

Zr = [ (fx2*t1 - u2*t3) * (u2 - (u1/fx1)*fx2) + (fy2*t2 - v2*t3) * (v2 - (v1/fy1)*fy2) ]
     / [ (u2 - (u1/fx1)*fx2)^2 + (v2 - (v1/fy1)*fy2)^2 ]
   = (b1*t1 + b2*t2 + b3*t3) / d    (7)

Where:
b1 = fx2 * (u2 - (u1/fx1)*fx2)
b2 = fy2 * (v2 - (v1/fy1)*fy2)
b3 = -u2 * (u2 - (u1/fx1)*fx2) - v2 * (v2 - (v1/fy1)*fy2)
d = (u2 - (u1/fx1)*fx2)^2 + (v2 - (v1/fy1)*fy2)^2

In order to calculate the position of the object in the camera-1 (right camera) frame, we only need to substitute the depth of Eq. (7) into Eq. (3), and we obtain:

Xr = (u1 / fx1) * (b1*t1 + b2*t2 + b3*t3) / d
Yr = (v1 / fy1) * (b1*t1 + b2*t2 + b3*t3) / d

Finally, we apply Newton's method to estimate the translation vector. The target function is based on a ruler of known length L, which is about 80 cm. Reconstructing its two endpoints as above and denoting them P' = (X', Y', Z') and P'' = (X'', Y'', Z'') in the camera-1 frame, the target function can be expressed as:

G = 0.5*(X' - X'')^2 + 0.5*(Y' - Y'')^2 + 0.5*(Z' - Z'')^2 - 0.5*L^2 * sign(Z')

The sign(Z') factor ensures that G can reach zero only when the reconstructed ruler has the correct length and lies in front of the camera (positive depth). We need to drive this function to zero to obtain the estimate.

The Jacobian matrix with respect to the unknown translation is:

J = [ dG/dt1   dG/dt2   dG/dt3 ]

From the Taylor series:

G(x_{n+1}) = G(x_n) + G'(x_n) * (x_{n+1} - x_n) = 0
G(x_n) + J(x_n) * (x_{n+1} - x_n) = 0
J(x_n) * (x_{n+1} - x_n) = -G(x_n)
J^T(x_n) * J(x_n) * (x_{n+1} - x_n) = -J^T(x_n) * G(x_n)
x_{n+1} = x_n - (J^T(x_n) * J(x_n))^{-1} * J^T(x_n) * G(x_n)

where x_n is the current estimate of the translation vector (t1, t2, t3).
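The last line above is the Gauss-Newton form of the update. A generic sketch of this iteration (assuming NumPy and a forward-difference Jacobian, demonstrated on a toy residual rather than the paper's G) might look like:

```python
import numpy as np

def gauss_newton(G, x0, eps=1e-4, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - (J^T J)^{-1} J^T G(x_n), where G maps
    R^k -> R^m and J is estimated by forward differences."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = np.atleast_1d(G(x))
        # Forward-difference Jacobian, one column per parameter
        J = np.empty((g.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (np.atleast_1d(G(x + dx)) - g) / eps
        # Normal equations: J^T J step = J^T g
        step = np.linalg.solve(J.T @ J, J.T @ g)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy residual with known root x = (1, 2); three equations, two unknowns
root = gauss_newton(
    lambda x: np.array([x[0] - 1.0, x[1] - 2.0, x[0] + x[1] - 3.0]),
    [0.0, 0.0],
)
print(root)  # approximately [1. 2.]
```

In the paper's setting, x would be the translation vector (t1, t2, t3) and G would stack the ruler-length residuals from several observations.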
3. Calibration with Simulation Data

In this paper, we estimate only the translation vector Tc and suppose the rotation matrix and the focal lengths are known: Rc = I3, fx1 = fy1 = fx2 = fy2 = 1. The translation vector was set to Tc = [-0.4; 0.5; -0.2]. We use a ruler about 0.8 m long and make sure that it is placed in front of both cameras. The initial guess is [0; 0; 0.01].
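Under the same simulation assumptions (Rc = I3, unit focal lengths, Tc = [-0.4; 0.5; -0.2]), the depth reconstruction of Eq. (7) can be sanity-checked numerically with a hypothetical 3D point; a minimal sketch:

```python
import numpy as np

# Ground truth from the simulation setup: Rc = I, fx1 = fy1 = fx2 = fy2 = 1
Tc = np.array([-0.4, 0.5, -0.2])     # translation (t1, t2, t3)
Pr = np.array([0.3, -0.1, 2.5])      # hypothetical point in the camera-1 (right) frame
Pl = Pr + Tc                         # Eq. (4): camera-2 (left) frame

# Eq. (3)/(5): pinhole projections with unit focal lengths
u1, v1 = Pr[0] / Pr[2], Pr[1] / Pr[2]
u2, v2 = Pl[0] / Pl[2], Pl[1] / Pl[2]

# Eq. (7): least-squares depth in the camera-1 frame (all focal lengths = 1)
t1, t2, t3 = Tc
a1, a2 = u2 - u1, v2 - v1
b1, b2 = a1, a2
b3 = -u2 * a1 - v2 * a2
d = a1**2 + a2**2
Zr = (b1 * t1 + b2 * t2 + b3 * t3) / d

print(Zr)  # approximately 2.5, the true depth Pr[2]
```

Because the simulated projections are noise-free, the least-squares depth of Eq. (7) recovers the true depth exactly (up to floating-point error).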
As shown in Figure 2, we collect the ruler points on the image planes of the two cameras, and then use these data to estimate the translation vector with the algorithm of Section 2.

4. Conclusion

This paper focuses on solving the calibration problem for a stereo camera using Newton's method, but we have estimated only the translation vector. In real applications we must estimate both the translation vector and the rotation matrix; then we can reconstruct the 3D point in world coordinates from the 2D coordinates of the image points. In the next paper we will estimate the rotation matrix and calculate the focal lengths of the two cameras.

References

1. Richard Hartley and Andrew Zisserman, "Multiple View Geometry in Computer Vision," 2nd ed., Cambridge University Press, New York, pages 153-176, 2003.
2. Xiangjian He, Huaifeng Zhang, Jinwoong Kim, Qiang Wu and Taeone Kim, "Estimation of Internal and External Parameters for Camera Calibration Using 1D Pattern," Computer Society, 2018.