3D gaze tracking method using Purkinje images
$\sqrt{(x_1-x_4)^2+(y_1-y_4)^2}$ (the distance between the first and fourth Purkinje images)
F6: pupil diameter in pixels
Here, $(x_1, y_1)$, $(x_4, y_4)$, and $(c_x, c_y)$ are the positions of the first Purkinje image, the fourth Purkinje image, and the pupil center, respectively. With these six features, we obtain the depth gaze position ($Z_d$) using a multi-layered perceptron (MLP) [34], as shown in Fig. 8.
Using the back-propagation algorithm to train the MLP, we obtain the optimal parameters ($w_{ij}$, $w'_{j1}$), which are used to estimate the user's depth gaze position ($Z_d$). As shown in Fig. 8, there are six input nodes and one output node.
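As a minimal sketch of how the six-dimensional input vector might be assembled from the detected eye features (the function name and the exact ordering and sign conventions of F1–F4 are our assumptions; the text only states that the relative positions of the two Purkinje images to the pupil center, their inter-distance, and the pupil diameter are used):

```python
import numpy as np

def make_feature_vector(p1, p4, pupil_center, pupil_diameter_px):
    """Assemble the six MLP input features from one eye image.

    p1, p4            : (x, y) of the first and fourth Purkinje images (pixels)
    pupil_center      : (c_x, c_y) of the detected pupil center (pixels)
    pupil_diameter_px : pupil diameter in pixels (feature F6)
    """
    x1, y1 = p1
    x4, y4 = p4
    cx, cy = pupil_center
    return np.array([
        x1 - cx,                     # F1: first Purkinje image relative to pupil center (x), assumed form
        y1 - cy,                     # F2: first Purkinje image relative to pupil center (y), assumed form
        x4 - cx,                     # F3: fourth Purkinje image relative to pupil center (x), assumed form
        y4 - cy,                     # F4: fourth Purkinje image relative to pupil center (y), assumed form
        np.hypot(x1 - x4, y1 - y4),  # F5: distance between the two Purkinje images
        pupil_diameter_px,           # F6: pupil diameter in pixels
    ])
```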
The user's depth gaze position ($Z_d$) can be represented as follows:

$$Z_d = \mathrm{func2}\bigl(w'_{11}\cdot O\_{h_1} + w'_{21}\cdot O\_{h_2} + w'_{31}\cdot O\_{h_3} + \cdots + w'_{n1}\cdot O\_{h_n}\bigr) \quad (1)$$
where $O\_{h_i}$ is the output value of the hidden node $h_i$, and $w'_{i1}$ is the weight value between the hidden node $h_i$ and the output node ($o_1$). func2( ) is the kernel function of the output node ($o_1$). For the hidden nodes ($h_i$) and the output node ($o_1$), various kinds of functions, such as linear and sigmoid functions, can be used. For example, if func2( ) is a sigmoid function, Eq. (1) can be written as follows:
$$Z_d = \frac{1}{1+\exp\bigl(-\bigl(w'_{11}\cdot O\_{h_1} + w'_{21}\cdot O\_{h_2} + w'_{31}\cdot O\_{h_3} + \cdots + w'_{n1}\cdot O\_{h_n}\bigr)\bigr)} \quad (2)$$
$O\_{h_1}$, $O\_{h_2}$, $O\_{h_3}$, ..., $O\_{h_n}$ can be represented as follows:
$$
\begin{aligned}
O\_{h_1} &= \mathrm{func1}\bigl(F_1\cdot w_{11} + F_2\cdot w_{21} + F_3\cdot w_{31} + \cdots + F_6\cdot w_{61}\bigr)\\
O\_{h_2} &= \mathrm{func1}\bigl(F_1\cdot w_{12} + F_2\cdot w_{22} + F_3\cdot w_{32} + \cdots + F_6\cdot w_{62}\bigr)\\
O\_{h_3} &= \mathrm{func1}\bigl(F_1\cdot w_{13} + F_2\cdot w_{23} + F_3\cdot w_{33} + \cdots + F_6\cdot w_{63}\bigr)\\
&\;\;\vdots\\
O\_{h_n} &= \mathrm{func1}\bigl(F_1\cdot w_{1n} + F_2\cdot w_{2n} + F_3\cdot w_{3n} + \cdots + F_6\cdot w_{6n}\bigr)
\end{aligned} \quad (3)
$$
where func1( ) is the kernel function of the hidden nodes ($h_i$).

Fig. 7. Examples of pupil size change according to the depth gaze position: (a) gazing at the five reference positions in the depth direction on the left eye's visual line at (1) 10 cm, (2) 20 cm, (3) 30 cm, (4) 40 cm, and (5) 50 cm; (b) changes in the left eye's pupil size at the five reference positions of (a): (1) 120 pixels, (2) 150 pixels, (3) 165 pixels, (4) 185 pixels, and (5) 200 pixels.

Fig. 8. MLP for estimating the depth gaze position of a user.

By replacing $O\_{h_1}$, $O\_{h_2}$, $O\_{h_3}$, ..., $O\_{h_n}$ of Eq. (1) with Eq. (3), $Z_d$ can be represented as follows:
$$
\begin{aligned}
Z_d = \mathrm{func2}\bigl(
 & w'_{11}\cdot \mathrm{func1}\bigl(F_1\cdot w_{11} + F_2\cdot w_{21} + F_3\cdot w_{31} + \cdots + F_6\cdot w_{61}\bigr)\\
+\, & w'_{21}\cdot \mathrm{func1}\bigl(F_1\cdot w_{12} + F_2\cdot w_{22} + F_3\cdot w_{32} + \cdots + F_6\cdot w_{62}\bigr)\\
+\, & w'_{31}\cdot \mathrm{func1}\bigl(F_1\cdot w_{13} + F_2\cdot w_{23} + F_3\cdot w_{33} + \cdots + F_6\cdot w_{63}\bigr)\\
+\, & \cdots\\
+\, & w'_{n1}\cdot \mathrm{func1}\bigl(F_1\cdot w_{1n} + F_2\cdot w_{2n} + F_3\cdot w_{3n} + \cdots + F_6\cdot w_{6n}\bigr)\bigr)
\end{aligned} \quad (4)
$$
Fig. 9 shows that the mean square error (MSE) for various numbers of hidden nodes decreases with the learning epoch during MLP training. Based on this, we determined the optimal number of hidden nodes to be nine, for which the minimum MSE is obtained.
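To make Eqs. (2)–(4) concrete, the following is a minimal numpy sketch of the forward pass with six inputs, nine hidden nodes (the optimum found above), and sigmoid kernels at both layers. The weights would come from back-propagation training, which is not reproduced here, and since a sigmoid output lies in (0, 1) the depth target would presumably be normalized; neither detail is given in the text, so treat this as an illustration only.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def mlp_depth_gaze(F, W_hidden, w_out):
    """Forward pass of Eq. (4).

    F        : shape (6,)   -- input features F1..F6
    W_hidden : shape (6, n) -- w_ij between input node i and hidden node h_j
    w_out    : shape (n,)   -- w'_j1 between hidden node h_j and the output node o_1
    Returns the (normalized) depth gaze estimate Z_d with sigmoid kernels.
    """
    O_h = sigmoid(F @ W_hidden)  # Eq. (3): hidden-node outputs O_h_1 .. O_h_n
    return sigmoid(O_h @ w_out)  # Eq. (2): output-node value Z_d

# Example with nine hidden nodes (random weights for illustration only).
rng = np.random.default_rng(0)
F = rng.normal(size=6)
Z_d = mlp_depth_gaze(F, rng.normal(size=(6, 9)), rng.normal(size=9))
```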
2.8. Estimating planar gaze position using geometric transformation
considering the calculated depth gaze position
This section describes the method of calculating the gaze position in the planar space (X, Y). The extracted pupil center position ($C_x$, $C_y$) is used to calculate a user's gaze position in the planar space. As shown in Fig. 10, the transform matrix (mapping function) T between the movable pupil-center region (($C_{x1}$, $C_{y1}$), ($C_{x2}$, $C_{y2}$), ($C_{x3}$, $C_{y3}$), and ($C_{x4}$, $C_{y4}$)) and a user's view region (($S_{x1}$, $S_{y1}$), ($S_{x2}$, $S_{y2}$), ($S_{x3}$, $S_{y3}$), and ($S_{x4}$, $S_{y4}$)) is obtained using a geometric transformation [8–11,24,25,30,33]. The pupil's movable area is determined by gazing at the four corners of the user's view region at the initial stage of user calibration.

Fig. 9. Training procedure of the MLP according to the number of hidden nodes.

Fig. 10. Relation between the pupil's movable area and a user's view plane [8–11,24,25,30,33].

Fig. 11. Nine reference points gazed at for the five different depth positions: (a) five depth positions from 10 cm to 50 cm; (b) nine reference points in the planar region.

The geometric transformation is given by the following equation [8–11,24,25,30,33]; matrix T of Fig. 10 can be calculated from matrix S and the inverse of matrix C, and the eight parameters a–h can then be obtained [8–11,24,25,30,33]:
$$S = TC$$

$$
\begin{pmatrix}
S_{x1} & S_{x2} & S_{x3} & S_{x4}\\
S_{y1} & S_{y2} & S_{y3} & S_{y4}\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
=
\begin{pmatrix}
a & b & c & d\\
e & f & g & h\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
C_{x1} & C_{x2} & C_{x3} & C_{x4}\\
C_{y1} & C_{y2} & C_{y3} & C_{y4}\\
C_{x1}C_{y1} & C_{x2}C_{y2} & C_{x3}C_{y3} & C_{x4}C_{y4}\\
1 & 1 & 1 & 1
\end{pmatrix}
\quad (5)
$$
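A minimal sketch of the calibration step implied by Eq. (5): build C from the four pupil-center positions recorded while the user gazes at the view-region corners, build S from the corner coordinates, and recover the eight parameters a–h as the non-zero block of T = S·C⁻¹. The helper name and the use of numpy are our choices, not from the paper.

```python
import numpy as np

def calibration_matrix(pupil_corners, screen_corners):
    """Solve S = T C (Eq. (5)) for the 2 x 4 block of T holding a..h.

    pupil_corners  : four (C_xi, C_yi) pupil-center positions recorded while
                     gazing at the four corners of the view region
    screen_corners : four (S_xi, S_yi) corner positions of the view region
    """
    C = np.array([[cx for cx, cy in pupil_corners],
                  [cy for cx, cy in pupil_corners],
                  [cx * cy for cx, cy in pupil_corners],
                  [1.0, 1.0, 1.0, 1.0]])
    S = np.array([[sx for sx, sy in screen_corners],
                  [sy for sx, sy in screen_corners]])
    # The two non-zero rows of T: (a b c d) and (e f g h)
    return S @ np.linalg.inv(C)
```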
Since 15 persons took part in the experiment of planar gaze detection at the five Z distances (10, 20, 30, 40, and 50 cm) of Fig. 11(a), 75 (15 × 5) T matrices were obtained. That is, each person has five T matrices at the five Z distances (10, 20, 30, 40, and 50 cm). Examples of the real values of matrix T are as follows:
$$
T_{a10}=\begin{pmatrix}
1.17361 & 0.25277 & 0.0001 & 274.26\\
0.0748 & 1.66715 & 0.0003 & 202.97\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
$$

$$
T_{a30}=\begin{pmatrix}
3.27567 & 0.93095 & 0.000917914 & 1159.7\\
0.5602 & 3.84433 & 0.000810824 & 698.73\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
$$

$$
T_{a50}=\begin{pmatrix}
4.73389 & 0.93574 & 0.00054243 & 1824.6\\
0.22047 & 8.23577 & 0.003079604 & 1740\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
$$

$$
T_{b10}=\begin{pmatrix}
1.1155 & 0.20704 & 4.3\times10^{-5} & 260.35\\
0.2299 & 1.48583 & 0.00031 & 147.21\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
$$
$T_{a10}$, $T_{a30}$, and $T_{a50}$ are the T matrices obtained from one person at Z distances of 10, 30, and 50 cm, respectively. $T_{b10}$ is the T matrix obtained from another person at a Z distance of 10 cm.
After matrix T is obtained at the initial stage of user calibration, the user's gaze position ($S'_x$, $S'_y$) in the planar space (X, Y) can be obtained from the two pupil-center features ($C'_x$, $C'_y$), as shown in Eq. (6) [8–11,24,25,30,33]:
$$
\begin{pmatrix}
S'_x\\ S'_y\\ 0\\ 0
\end{pmatrix}
=
\begin{pmatrix}
a & b & c & d\\
e & f & g & h\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
C'_x\\ C'_y\\ C'_x C'_y\\ 1
\end{pmatrix}
\quad (6)
$$
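Continuing the sketch above, Eq. (6) then maps a newly observed pupil center ($C'_x$, $C'_y$) to the planar gaze position using the calibrated a–h block:

```python
import numpy as np

def planar_gaze(T_block, pupil_center):
    """Apply Eq. (6): map the pupil center (C'_x, C'_y) to the gaze point (S'_x, S'_y).

    T_block : the 2 x 4 block (a b c d / e f g h) returned by calibration_matrix().
    """
    cx, cy = pupil_center
    v = np.array([cx, cy, cx * cy, 1.0])
    sx, sy = T_block @ v
    return sx, sy
```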
Table 2
Average ZGE (standard deviation of the error) of depth gaze estimation by the proposed method and other methods (unit: cm).

                Linear regression (LR)   Support vector regression (SVR)   MLP (proposed), linear kernel   MLP (proposed), sigmoid kernel
Training data   4.90 (2.50)              11.80 (8.04)                      5.04 (2.36)                     2.26 (2.11)
Test data       6.67 (3.22)              11.89 (7.94)                      6.69 (3.09)                     4.59 (3.96)

Table 3
Average ZGE (standard deviation of the error) of depth gaze estimation by the proposed method and other methods according to Z distance (unit: cm).

                Z distance (cm)   LR             SVR             MLP (proposed), linear kernel   MLP (proposed), sigmoid kernel
Training data   10                0 (0)          3.33 (3.20)     0 (0)                           0 (0)
                20                5.44 (4.65)    13.05 (10.39)   5.77 (4.80)                     0 (0)
                30                9.81 (4.24)    14.01 (8.18)    10.18 (3.38)                    2.72 (2.67)
                40                0 (0)          13.47 (7.18)    0 (0)                           3.85 (3.62)
                50                9.23 (3.62)    15.15 (11.26)   9.23 (3.62)                     4.71 (4.24)
Testing data    10                1.92 (1.92)    3.33 (3.20)     1.92 (1.92)                     1.83 (1.82)
                20                7.70 (5.01)    11.71 (9.53)    8.16 (4.80)                     2.56 (2.51)
                30                11.39 (3.85)   14.14 (7.86)    12.02 (3.62)                    4.27 (3.43)
                40                2.72 (2.67)    13.61 (6.81)    1.92 (1.92)                     6.42 (5.55)
                50                9.62 (2.67)    16.67 (12.31)   9.43 (3.20)                     7.89 (6.47)

Table 4
Accuracies of the planar gaze estimation at five depth positions (depth gaze position calculated by the proposed method).

Depth position (cm)   Average XGE (deg./cm)   Average YGE (deg./cm)
10                    1.05/0.18               1.98/0.35
20                    1.05/0.37               2.06/0.72
30                    1.18/0.62               1.07/0.56
40                    0.81/0.57               1.75/1.22
50                    0.73/0.64               1.13/0.99
Average error         0.96/0.48               1.60/0.77

Table 5
Accuracies of the planar gaze estimation at five depth positions (depth gaze position measured manually).

Depth position (cm)   Average XGE (deg./cm)   Average YGE (deg./cm)
10                    1.02/0.18               1.64/0.29
20                    1.01/0.35               1.55/0.54
30                    1.08/0.57               1.04/0.54
40                    0.79/0.55               1.43/1.00
50                    0.72/0.63               1.05/0.92
Average error         0.92/0.46               1.34/0.66

However, this method has the problem that matrix T must be changed according to the planar spaces at different Z distances. For example, matrix T obtained from the planar space at the Z distance of 10 cm in Fig. 11(a) produces errors in the gaze position for the planar space at the Z distance of 50 cm in Fig. 11(a). This is because the user's view plane of Fig. 10 changes with the Z distance even though the pupil's movable area of Fig. 10 is the same. To overcome this problem, we use the following method. In the user calibration stage, a user is required to gaze at the 20 positions (four gaze positions in each planar space × five planar spaces at five Z distances) of Fig. 11(a). From these, the five T matrices of the five planar spaces are calculated. In the testing stage, when the depth gaze position is calculated using the method presented in Section 2.7, the T matrix of the corresponding Z distance is selected, and the (X, Y) gaze position is calculated using the selected T matrix. For example, if the depth gaze position is calculated as 22 cm, the T matrix calculated when the user gazed at the four positions of the planar space at 20 cm in Fig. 11(a) is used for calculating the (X, Y) gaze position.
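A small sketch of the selection rule just described; snapping the estimated depth to the nearest of the five calibrated planes is our reading of the 22 cm to 20 cm example, since the exact rounding rule is not spelled out:

```python
def select_T(z_calculated_cm, T_by_depth):
    """Pick the calibration matrix of the nearest calibrated plane.

    T_by_depth : dict mapping the calibrated depths {10, 20, 30, 40, 50} (cm)
                 to their 2 x 4 T blocks from Eq. (5)
    """
    nearest = min(T_by_depth, key=lambda z: abs(z - z_calculated_cm))
    return T_by_depth[nearest]

# e.g. a depth estimate of 22 cm selects the 20 cm plane, as in the example above.
```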
3. Experimental results
The proposed method for 3D gaze estimation was tested on a desktop computer with an Intel Core2 Quad 2.33 GHz CPU and 4 GB of RAM. The algorithm was implemented with Microsoft Foundation Class (MFC)-based C++ programming, and the image-capturing software for the proposed camera device used the DirectX 9.0 software development kit (SDK). In our experiments, a user gazed at reference points in 3D space, as shown in Fig. 11. The distances between the reference points at the five depth positions (10 cm, 20 cm, 30 cm, 40 cm, and 50 cm) are 1 cm, 2 cm, 3 cm, 4 cm, and 5 cm, respectively, as shown in Fig. 11. Fifteen subjects participated in the experiment, and each subject underwent six trials of gazing at the nine reference points at the five depth positions from 10 cm to 50 cm. Half of the trial data were randomly selected and used for training; the other half were used for testing. This procedure was repeated five times, and the average accuracy was measured.
In the first experiment, we measured the error of the depth gaze estimation, which is shown in Table 2. As the metric for evaluating accuracy, we use the Z gaze error (ZGE) between the calculated Z distance ($Z_c$) and the reference Z distance ($Z_r$), as shown in Eq. (7):

$$\mathrm{ZGE} = \lvert Z_c - Z_r \rvert \quad (7)$$
Since there is no previous research that measures a user's 3D gaze position using just one camera and one eye image, comparisons with previous studies were not performed in the experiments. Instead, we compared the accuracy of the proposed method with that of methods using linear regression (LR) and support vector regression (SVR).
Linear regression (LR) is a method that defines the relation between one output value and one or more input values using linear functions [35,36]. Support vector regression (SVR) is a supervised learning method that uses nonlinear regression based on a scalar function. SVR uses a nonlinear kernel function, which can project the input data into a high-dimensional space, and SVR can then use linear regression to fit a hyper-plane [25,37,38]:

$$f(x) = w \cdot \Phi(x) + b \quad (8)$$

Eq. (8) is established by flattening the hyper-plane. The data points lie outside a margin $\varepsilon$ surrounding the hyper-plane when minimizing $\lVert w \rVert^{2}$ and the sum of errors [7,25,37,38]. Any high-dimensional projection can then be calculated using nonlinear kernel functions [25]. For a fair comparison, the same six features explained in Section 2.7 were used for the LR, SVR, and MLP-based methods. As shown in Table 2, the accuracy of the proposed method was better than those of the other methods.
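For reference, a minimal sketch of such LR and SVR baselines on the same six features, using scikit-learn as an assumed implementation (the paper does not state which library or kernel settings were used; the data below are placeholders for illustration only):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

# X: (n_samples, 6) feature vectors F1..F6, z: (n_samples,) reference depths in cm.
# Placeholder data only; real experiments would use the measured eye features.
X_train = np.random.rand(100, 6)
z_train = np.random.uniform(10, 50, 100)

lr = LinearRegression().fit(X_train, z_train)
svr = SVR(kernel='rbf', epsilon=0.5).fit(X_train, z_train)  # Eq. (8) with an assumed RBF kernel

z_lr = lr.predict(X_train)
z_svr = svr.predict(X_train)
```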
Fig. 12. Reference (red diamond) and average calculated (blue cross) gaze positions in 3D space. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 13. X–Y plane view of Fig. 12 (red diamond: reference positions; blue cross: calculated gaze positions): (a) Z = 10 cm, (b) Z = 20 cm, (c) Z = 30 cm, (d) Z = 40 cm, and (e) Z = 50 cm.

Fig. 14. X–Z plane view of Fig. 12 (red diamond: reference positions; blue cross: calculated gaze positions): (a)–(k) Y = 1 cm to 11 cm in 1 cm steps.

Fig. 15. Y–Z plane view of Fig. 12 (red diamond: reference positions; blue cross: calculated gaze positions): (a)–(k) X = 1 cm to 11 cm in 1 cm steps.
In the next experiment, we measured the accuracy of depth gaze estimation for the proposed method and the other methods according to the Z distance, as shown in Table 3. Again, the accuracy of the proposed method was better than those of the other methods.
In the next experiment, we measured the accuracy of gaze estimation in the planar space (X, Y). As the metric for evaluating accuracy on the X-axis, we use the X gaze error (XGE) between the calculated X gaze position ($x_c$) and the reference gaze position ($x_r$), as shown in Eq. (9). As the metric for evaluating accuracy on the Y-axis, we use the Y gaze error (YGE) between the calculated Y gaze position ($y_c$) and the reference gaze position ($y_r$), as shown in Eq. (10):
$$\mathrm{XGE} = \lvert x_c - x_r \rvert \quad (9)$$

$$\mathrm{YGE} = \lvert y_c - y_r \rvert \quad (10)$$
As shown in Fig. 11, each user gazed at the nine reference points at the five depth positions from 10 cm to 50 cm. The accuracies of the planar gaze estimation at the five depth positions are shown in Table 4. The average XGE and YGE between the estimated and ground-truth positions (the reference points of Fig. 11) were 0.96° (0.48 cm) and 1.60° (0.77 cm), respectively. The XGE (°) and YGE (°) are obtained as follows:

$$\mathrm{XGE}(^{\circ}) = \tan^{-1}\bigl(\mathrm{XGE}\,(\mathrm{cm}) \,/\, Z\ \mathrm{distance}\,(\mathrm{cm})\ \text{between the user's eye and the gazed plane}\bigr) \quad (11)$$

$$\mathrm{YGE}(^{\circ}) = \tan^{-1}\bigl(\mathrm{YGE}\,(\mathrm{cm}) \,/\, Z\ \mathrm{distance}\,(\mathrm{cm})\ \text{between the user's eye and the gazed plane}\bigr) \quad (12)$$
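As a worked example of Eqs. (11) and (12), converting a 0.48 cm planar error to degrees gives about 0.55° at a 50 cm viewing distance and about 1.37° at 20 cm (our arithmetic; only the 0.48 cm figure comes from Table 4):

```python
import math

def linear_error_to_degrees(err_cm, z_cm):
    """Eqs. (11)/(12): convert a planar gaze error in cm to degrees at viewing distance z_cm."""
    return math.degrees(math.atan(err_cm / z_cm))

print(linear_error_to_degrees(0.48, 50))  # ~0.55 degrees
print(linear_error_to_degrees(0.48, 20))  # ~1.37 degrees
```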
The YGE was larger than the XGE because the eye camera captures the eye image at a slant from below the eye, as shown in Fig. 2. The gaze errors shown in Table 4 include the errors of depth gaze estimation (ZGE of Eq. (7)) of Tables 2 and 3, since the calculated depth gaze position was used to select the corresponding T matrix for calculating the planar gaze position, as explained in Section 2.8.
Thus, to measure the accuracy of planar gaze estimation without the error of the depth gaze estimation, we also measured the accuracy of gaze estimation in the planar space (X, Y) with a known depth gaze position. In other words, we assumed that the depth gaze position is known (manually measured) instead of calculated by the proposed method of Section 2.7. As shown in Figs. 11 and 15, users gazed at the nine reference points at the five depth positions from 10 cm to 50 cm. The accuracies of the planar gaze estimation at the five depth positions are shown in Table 5. The average XGE and YGE between the estimated and ground-truth positions (the reference points of Fig. 11) were 0.92° (0.46 cm) and 1.34° (0.66 cm), respectively. Comparing Tables 4 and 5, the errors in the latter are smaller than those in the former since the errors of depth gaze estimation are not included.
Fig. 12 shows the reference and calculated gaze positions of three persons in 3D space. Figs. 13–15 show the positions in the X–Y, X–Z, and Y–Z plane views, respectively.
In the last experiment, we measured the processing time of the proposed method. The processing times for detecting the pupil region and the Purkinje images were 16 ms and 0 ms, respectively. The time for calculating the gaze position in the Z direction was 0 ms, and that for calculating the gaze position in the X, Y plane was 20 ms.
In general, the pupil size becomes larger as a user gazes at a point farther from his or her position, as shown in Fig. 16(a). However, when the user has poor eyesight, there are cases where the pupil size does not change even though the depth gaze position moves farther away, as shown in Fig. 16(b); this causes an error in depth gaze estimation.
There is another cause of error in the X and Y gaze positions. As shown in Fig. 17, even if a user gazes at the same position, the pupil center position in the image changes due to movements of our device (Fig. 2), which causes an error when calculating the gaze position in the X, Y plane.
Fig. 16. Error cases of depth gaze estimation: (a) good case; (b) error case.

Fig. 17. Error in the X and Y gaze positions due to the movement of our device: (a) gazing at a point; (b) gazing at the same point as in (a), but after the device has moved.

4. Conclusions

In this paper, we propose a new method for estimating the 3D gaze position based on the illuminative reflections on the surfaces of the cornea and lens (Purkinje images) by considering the 3D structure of the human eye. We use a lightweight glasses-type device for capturing the eye image that includes one USB camera and one NIR-LED. We theoretically analyzed the generation models of the Purkinje images based on the 3D human eye model for 3D gaze estimation. The relative positions of the first and fourth Purkinje images to the pupil center, the inter-distance between these two Purkinje images, and the pupil size are used as the six features for calculating the Z gaze position. With these features as inputs, the final Z gaze position is calculated using a multi-layered perceptron (MLP). The X, Y gaze position on the 2D plane is calculated from the position of the pupil center based on a geometric transform considering the calculated Z gaze position. Experimental results showed that the average errors of the 3D gaze estimation were about 0.96° (0.48 cm) on the X-axis, 1.60° (0.77 cm) on the Y-axis, and 4.59 cm along the Z-axis in 3D space. In future work, we will test the proposed method with more people of various ages and races in various environments and examine the feasibility of combining the proposed one-eye-based method with that using two eyes.
Acknowledgments
This research was supported by Basic Science Research Pro-
gram through the National Research Foundation of Korea (NRF)
funded by the Ministry of Education, Science and Technology (No.
2011-0004362), and in part by the Public welfare and Safety
research program through the National Research Foundation of
Korea (NRF) funded by the Ministry of Education, Science, and
Technology (No. 2011-0020976).
References

[1] Lin C-S, Huan C-C, Chan C-N, Yeh M-S, Chiu C-C. Design of a computer game using an eye-tracking device for eye's activity rehabilitation. Opt Lasers Eng 2004;42(1):91–108.
[2] Lin C-S, Ho C-W, Chang K-C, Hung S-S, Shei H-J, Yeh M-S. A novel device for head gesture measurement system in combination with eye-controlled human–machine interface. Opt Lasers Eng 2006;44(6):597–614.
[3] Bulling A, Roggen D, Tröster G. Wearable EOG goggles: seamless sensing and context-awareness in everyday environments. J Ambient Intell Smart Environ 2009;1(2):157–71.
[4] Young L, Sheena D. Survey of eye movement recording methods. Behav Res Methods Instrum 1975;7(5):397–429.
[5] Yoo DH, Chung MJ. A novel non-intrusive eye gaze estimation using cross-ratio under large head motion. Comput Vision Image Understanding 2005;98(1):25–51.
[6] Wang J-G, Sung E. Study on eye gaze estimation. IEEE Trans Syst, Man, Cybern, Part B 2002;32(3):332–50.
[7] Murphy-Chutorian E, Doshi A, Trivedi MM. Head pose estimation for driver assistance systems: a robust algorithm and experimental evaluation. In: Proceedings of the 2007 IEEE Intelligent Transportation Systems Conference; 2007. p. 709–14.
[8] Cho CW, Lee JW, Lee EC, Park KR. Robust gaze-tracking method using frontal-viewing and eye-tracking cameras. Opt Eng 2009;48(12):127202-1–15.
[9] Ko YJ, Lee EC, Park KR. A robust gaze detection method by compensating for facial movements based on corneal specularities. Pattern Recognition Letters 2008;29(10):1474–85.
[10] Bang JW, Lee EC, Park KR. New computer interface combining gaze tracking and brainwave measurements. IEEE Transactions on Consumer Electronics; accepted for publication.
[11] Lee EC, Park KR, Whang MC, Park J. Robust gaze tracking method for stereoscopic virtual reality systems. Lecture Notes in Computer Science 2007;4552:700–9.
[12] Lee EC, Lee JW, Park KR. Experimental investigations of pupil accommodation factors. Invest Ophthalmol Visual Sci 2011;52(9):6478–85.
[13] http://www.avatarmovie.com/ [accessed 28.10.11].
[14] http://adisney.go.com/disneypictures/aliceinwonderland/ [accessed 28.10.11].
[15] http://www.imdb.com/title/tt0892791/ [accessed 28.10.11].
[16] Kwon Y-M, Jeon K-W, Ki J, Shahab QM, Jo S, Kim S-K. 3D gaze estimation and interaction to stereo display. Int J Virtual Reality 2006;5(3):41–5.
[17] Hennessey C, Lawrence P. 3D point-of-gaze estimation on a volumetric display. In: Proceedings of the 2008 symposium on eye tracking research and applications; 2008. p. 59.
[18] Essig K, Pomplun M, Ritter H. A neural network for 3D gaze recording with binocular eye trackers. Int J Parallel, Emergent Distributed Syst 2006;21(2):79–95.
[19] Pfeiffer T, Latoschik ME, Wachsmuth I. Evaluation of binocular eye trackers and algorithms for 3D gaze interaction in virtual reality environments. J Virtual Reality Broadcast 2008;5(16).
[20] Sumi K, Sugimoto A, Matsuyama T, Toda M, Tsukizawa S. Active wearable vision sensor: recognition of human activities and environments. In: Proceedings of international conference on informatics research for development of knowledge society infrastructure; 2004. p. 15–22.
[21] Mitsugami I, Ukita N, Kidode M. Estimation of 3D gazed position using view lines. In: Proceedings of international conference on image analysis and processing; 2003. p. 466–71.
[22] http://www.logitech.com [accessed 28.10.11].
[23] Yamato M, Monden A, Matsumoto K, Inoue K, Torii K. Quick button selection with eye gazing for general GUI environments. In: Proceedings of international conference on software: theory and practice; 2000. p. 712–9.
[24] Lee EC, Woo JC, Kim JH, Whang M, Park KR. A brain–computer interface method combined with eye tracking for 3D interaction. J Neurosci Methods 2010;190(2):289–98.
[25] Cho CW, Lee JW, Shin KY, Lee EC, Park KR, Lee HK, et al. Gaze tracking method for an IPTV interface based on support vector regression. ETRI J; submitted for publication.
[26] Gonzalez RC, Woods RE. Digital image processing. 2nd ed. NJ: Prentice-Hall; 2002.
[27] Lee EC, Ko YJ, Park KR. Fake iris detection method using Purkinje images based on gaze position. Opt Eng 2008;47(6):067204-1–16.
[28] Gullstrand A. Helmholtz's physiological optics. Opt Soc Am 1924:350–8.
[29] http://en.wikipedia.org/wiki/Accommodation_(eye) [accessed 28.10.11].
[30] Lee HC, Luong DT, Cho CW, Lee EC, Park KR. Gaze tracking system at a distance for controlling IPTV. IEEE Trans Consum Electron 2010;56(4):2577–83.
[31] Jain R, Kasturi R, Schunck BG. Machine vision. McGraw-Hill; 1995.
[32] Ripps H, Chin NB, Siegel IM, Breinin GM. The effect of pupil size on accommodation, convergence, and the AC/A ratio. Invest Ophthalmol Visual Sci 1962;1:127–35.
[33] Heo H, Lee EC, Park KR, Kim CJ, Whang M. A realistic game system using multi-modal user interfaces. IEEE Trans Consum Electron 2010;56(3):1364–72.
[34] Freeman JA, Skapura DM. Neural networks: algorithms, applications, and programming techniques. Addison-Wesley; 1991.
[35] Zou KH, Tuncali K, Silverman SG. Correlation and simple linear regression. Radiology 2003;227:617–22.
[36] Zanutto EL. A comparison of propensity score and linear regression analysis of complex survey data. J Data Sci 2006;4(1):67–91.
[37] Drucker H, Burges CJC, Kaufman L, Smola A, Vapnik V. Support vector regression machines. Adv Neural Inf Process Syst 1997;9:155–61.
[38] Smola AJ, Schölkopf B. A tutorial on support vector regression. Stat Comput 2004;14(3):199–222.