SICE Annual Conference 2007

Sept. 17-20, 2007, Kagawa University, Japan


Novel object tracking method using the block-based motion estimation

Heon Soo Shin1, Sung Min Kim1, Joo Woong Kim2 and Ki Hwan Eom1

1 Department of Electric Engineering, Dongguk University, Seoul, Korea.
(Tel: +81-2-2260-3332; E-mail: remyname@msn.com)
2 New Power Electronics Co., Korea.
(Tel: +81-2-2201-4103; E-mail: kimjoowoong@naver.com)

Abstract: In this paper, we propose a novel object tracking method that is applicable to surveillance systems. The proposed method predicts the object location from the motion vector information used in MPEG-2 video coding and controls the camera with a fuzzy control system. Because the algorithm relies on block-based motion estimation, it is simple and suitable for real-time operation, so a security system with a moving camera can be built. To the best of our knowledge, the method can be readily applied to existing object tracking systems. In order to verify the effectiveness of the proposed method, we performed experiments, and the experimental results demonstrate the effectiveness of the proposed system.

Keywords: Object tracking, motion vector, macro-block, fuzzy control.

1. INTRODUCTION

Object tracking is one of the attractive issues in many applications such as security and surveillance systems. Although many algorithms have been studied to improve object tracking, the previous methods have difficulty extracting the moving object when the illumination changes abruptly between consecutive video frames, because most of them use a color component of the image as the feature for object segmentation [2~4]. Tracking also becomes impossible when the camera itself is moving, since the previous methods extract the moving object only under the condition of a fixed camera. Therefore, the previous methods are not suitable for the omni-directional robot field.

To overcome these problems, we propose a novel object tracking method based on motion analysis. Motion information is robust to illumination changes, and the proposed method successfully extracts the moving object even while the camera is moving. Moreover, the feature used for extracting the object is one of the elements used in MPEG video compression, so the proposed method can easily be applied to existing MPEG-based systems.

To achieve good object tracking capability, this paper uses a fuzzy control system. In order to verify the efficacy of the proposed method, we perform experiments on object tracking.

2. PROPOSED OBJECT TRACKING METHOD

2.1 System composition

Fig. 1 shows a block diagram of the system composition. The PC acquires images from the camera, runs the proposed object tracking algorithm, and sends move orders to the pan-tilter.

Fig. 1 Block diagram.

2.2 Full Search Block Matching Algorithm

To extract the features used by the proposed method, each video frame is divided into blocks, and block-based motion information is extracted between consecutive frames. This keeps the algorithm simple, so the security system can be built with a pan-tilter or a mobile robot.

In the proposed method, the image is divided into several sub-blocks, and the motion information of the current image is then calculated by block-based motion estimation.

For the proposed algorithm, we use still images of 320 x 240 pixels. First, the reference still image is split into several macro-blocks, and each macro-block is compared with the blocks within the given search range of the previous image so that the best matched block can be found. We then obtain the difference of location between the macro-block and the best matched block; these values form the motion vector of the moving block. They are calculated using the MAD (Mean Absolute Difference) and the MSD (Mean Square Difference).
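As a small illustration of this block decomposition, the sketch below enumerates the 16 x 16 macro-blocks of a frame; the function name and the assumption that the frame is a 2-D numpy-style array are ours, not part of the paper.

def macro_blocks(image, n=16):
    # Yield the top-left corner of every n x n macro-block of a 2-D image
    # array (a 320 x 240 frame gives 20 x 15 = 300 macro-blocks).
    h, w = image.shape
    for by in range(0, h - n + 1, n):
        for bx in range(0, w - n + 1, n):
            yield bx, by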
The MAD between the macro-block and the previous image is calculated using the full-pixel search algorithm described here, and the MSD is then computed with a half-pixel search to refine the accuracy of the result. The following two equations define the MAD and the MSD, where (dx, dy) is a candidate displacement within the search range, N is the macro-block size, and F(i, j) and G(i, j) denote the pixel values of the macro-block in the reference image and of the corresponding block in the previous image, respectively [1].

MAD(dx, dy) = (1 / N^2) \sum_{i=1}^{N} \sum_{j=1}^{N} | F(i, j) - G(i + dx, j + dy) |    (1)

MSD(dx, dy) = (1 / N^2) \sum_{i=1}^{N} \sum_{j=1}^{N} ( F(i, j) - G(i + dx, j + dy) )^2    (2)
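For illustration, Eqs. (1) and (2) can be written as the following minimal Python sketch; the function names, the numpy dependency, and the default block size are our own choices rather than part of the original implementation.

import numpy as np

def mad(ref, prev, bx, by, dx, dy, n=16):
    # Mean Absolute Difference (Eq. 1) between the n x n macro-block of the
    # reference image at (bx, by) and the block of the previous image
    # displaced by the candidate motion vector (dx, dy).
    f = ref[by:by + n, bx:bx + n].astype(np.float64)
    g = prev[by + dy:by + dy + n, bx + dx:bx + dx + n].astype(np.float64)
    return np.abs(f - g).mean()

def msd(ref, prev, bx, by, dx, dy, n=16):
    # Mean Square Difference (Eq. 2) for the same macro-block and displacement.
    f = ref[by:by + n, bx:bx + n].astype(np.float64)
    g = prev[by + dy:by + dy + n, bx + dx:bx + dx + n].astype(np.float64)
    return ((f - g) ** 2).mean()

The best matched block is the candidate displacement (dx, dy) that minimises one of these costs.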
Fig. 2 illustrates how the full search algorithm proceeds. The macro-block of the reference image is compared with every 16 x 16 pixel block in the search range of the previous image, moving in a spiral from the center of the search range to its edge. Fig. 3 shows the geometry of the four causal neighbor blocks.

Fig. 2 Spiral search (still image, search range, and macro-block).

Fig. 3 Geometry of the four causal neighbor blocks.
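As a sketch of the full search itself, the candidate displacements can be visited from the centre of the search range outward, approximating the spiral scan of Fig. 2; the half-pixel refinement step is omitted, and the function names and the search-range default are assumptions of this sketch.

import numpy as np

def spiral_offsets(search_range):
    # Candidate displacements ordered from the centre of the search range
    # outward, ring by ring (an approximation of the spiral scan in Fig. 2).
    offsets = [(dx, dy)
               for dy in range(-search_range, search_range + 1)
               for dx in range(-search_range, search_range + 1)]
    return sorted(offsets, key=lambda o: max(abs(o[0]), abs(o[1])))

def full_search(ref, prev, bx, by, n=16, search_range=8):
    # Motion vector of the n x n macro-block at (bx, by): the displacement
    # that minimises the MAD (Eq. 1) over the whole search range.
    h, w = prev.shape
    block = ref[by:by + n, bx:bx + n].astype(np.float64)
    best, best_cost = (0, 0), float("inf")
    for dx, dy in spiral_offsets(search_range):
        x, y = bx + dx, by + dy
        if x < 0 or y < 0 or x + n > w or y + n > h:
            continue  # candidate block would fall outside the previous image
        cost = np.abs(block - prev[y:y + n, x:x + n]).mean()
        if cost < best_cost:
            best, best_cost = (dx, dy), cost
    return best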

2.3 Control system

Fig. 4 shows the basic configuration of the fuzzy system. The inputs of the fuzzy logic system are the motion vectors of the object and of the camera, and the output is the motion vector of the pan-tilter.

Fig. 4 Basic configuration of the fuzzy system.

3. EXPERIMENT

3.1 Experiment

The following figures show the camera and the pan-tilter used in this experiment.

Fig. 5 CCTV camera and pan-tilter: (a) CCTV camera; (b) pan-tilter.

The specifications of this surveillance camera define the minimum acceptable characteristics for color or B/W imaging and pan-tilt-zoom operation. It uses an interlaced scanning system, and its minimum scene illumination is 0.06 Lux.

Using the image processing functions on the PC requires memory storage for the digitally converted data, and the image grabber performs this function. The image grabber used in this experiment communicates with the PC as a PCI bus master, so images can be obtained immediately.

A pan-tilter makes camera tracking possible. The pan-tilter shown in Fig. 5(b) can move up-down and right-left at maximum velocities in the range of about 150 ms to 500 ms, and it communicates with the PC over RS-232C.
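The paper does not specify the serial command format of the pan-tilter, so the command strings below are purely hypothetical placeholders; the sketch only illustrates how a PC can send a move order over RS-232C using the pyserial package.

import serial  # pyserial

def send_move_order(port, dx, dy):
    # Hypothetical example: "PAN <dx>" and "TILT <dy>" are made-up commands,
    # since the real protocol of the pan-tilter is not given in the paper.
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        link.write(("PAN %d\r\n" % dx).encode("ascii"))
        link.write(("TILT %d\r\n" % dy).encode("ascii"))

# send_move_order("COM1", 3, -1)  # e.g. pan right 3 steps, tilt down 1 step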
3.2 Motion Estimation

Motion estimation is a fundamental measurement for object tracking, and hence an accurate estimation of motion is one of the most important steps [1]. The method presented in this paper calculates the motion vector of each macro-block. Fig. 6 shows two still images that were taken consecutively, and Fig. 7 shows the motion vectors between the (t-1) and (t) images. As the figure shows, the motion vectors appear near the moving object in Fig. 7.

Fig. 6 Still images: (a) (t-1) image; (b) (t) image.

Fig. 7 Motion vectors.
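To make the per-macro-block estimation concrete, the following self-contained sketch scans a frame in 16 x 16 macro-blocks and collects one MAD-minimising motion vector per block, which is the kind of field visualised in Fig. 7; the exhaustive scan order and the parameter defaults are our own simplifications.

import numpy as np

def motion_field(prev, ref, n=16, search_range=8):
    # One motion vector per n x n macro-block of the reference frame,
    # found by exhaustive MAD matching against the previous frame.
    h, w = ref.shape
    field = {}
    for by in range(0, h - n + 1, n):
        for bx in range(0, w - n + 1, n):
            block = ref[by:by + n, bx:bx + n].astype(np.float64)
            best, best_cost = (0, 0), float("inf")
            for dy in range(-search_range, search_range + 1):
                for dx in range(-search_range, search_range + 1):
                    x, y = bx + dx, by + dy
                    if x < 0 or y < 0 or x + n > w or y + n > h:
                        continue
                    cost = np.abs(block - prev[y:y + n, x:x + n]).mean()
                    if cost < best_cost:
                        best, best_cost = (dx, dy), cost
            field[(bx, by)] = best
    return field

# For a 320 x 240 frame this yields 20 x 15 = 300 vectors; ideally only the
# blocks covering the moving object have non-zero vectors (cf. Fig. 7).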
3.3 Fuzzy control system

The input control variables of the fuzzy controller are the object motion and the camera motion. They are calculated and observed periodically from the measured and desired positions.

In this experiment, the inputs of the fuzzy system are the motion vector of the object (x) and the motion vector of the surveillance camera (y), and the output is the command to the pan-tilter in terms of the calculated motion vector (z). Fig. 8 shows the membership functions of the inputs and the output.

Fig. 8 Membership functions: (a) object input; (b) camera input; (c) fuzzy output.

The input and output fuzzy relation is chosen as shown in Table 1.

Table 1. Fuzzy rule base of the implemented system (rows: camera input y; columns: object input x).

  y \ x |  NN    N    Z    P   PP
  ------+-------------------------
   PP   |  NN   NN   NN    N    Z
   P    |  NN   NN    N    Z    P
   Z    |  NN    N    Z    P   PP
   N    |   N    Z    P   PP   PP
   NN   |   Z    P   PP   PP   PP

NN – move to the left quickly.
N – move to the left slowly.
Z – stop.
P – move to the right slowly.
PP – move to the right quickly.

Rule base inference was accomplished using the Mamdani inference procedure, and defuzzification of the output was achieved through the center-of-gravity computation. Fig. 9 shows the fuzzy output surface obtained from a MATLAB simulation.

Fig. 9 Output surface of the fuzzy system.
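As a sketch of the controller described above: five triangular membership functions per variable on a normalised universe, the rule base of Table 1, min implication, max aggregation, and centre-of-gravity defuzzification over a sampled output universe. The exact membership shapes of Fig. 8 are not given in the paper, so the breakpoints below are assumptions.

import numpy as np

LABELS = ["NN", "N", "Z", "P", "PP"]

# Assumed triangular membership parameters (foot, peak, foot) on [-1, 1];
# the actual shapes of Fig. 8 are not specified in the paper.
MF = {
    "NN": (-1.5, -1.0, -0.5),
    "N":  (-1.0, -0.5,  0.0),
    "Z":  (-0.5,  0.0,  0.5),
    "P":  ( 0.0,  0.5,  1.0),
    "PP": ( 0.5,  1.0,  1.5),
}

# Table 1: rows are the camera input (y), columns are the object input (x).
RULES = {
    "PP": ["NN", "NN", "NN", "N",  "Z"],
    "P":  ["NN", "NN", "N",  "Z",  "P"],
    "Z":  ["NN", "N",  "Z",  "P",  "PP"],
    "N":  ["N",  "Z",  "P",  "PP", "PP"],
    "NN": ["Z",  "P",  "PP", "PP", "PP"],
}

def tri(v, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    if v <= a or v >= c:
        return 0.0
    return (v - a) / (b - a) if v <= b else (c - v) / (c - b)

def fuzzy_output(x, y):
    # Mamdani inference with centre-of-gravity defuzzification.
    z = np.linspace(-1.0, 1.0, 201)          # sampled output universe
    agg = np.zeros_like(z)
    for y_label, row in RULES.items():
        for x_label, z_label in zip(LABELS, row):
            strength = min(tri(x, *MF[x_label]), tri(y, *MF[y_label]))
            clipped = np.minimum(strength, [tri(v, *MF[z_label]) for v in z])
            agg = np.maximum(agg, clipped)   # max aggregation of clipped sets
    return 0.0 if agg.sum() == 0.0 else float((z * agg).sum() / agg.sum())

# Example: object moving right (x = 0.6) with the camera still (y = 0.0)
# gives a positive output, i.e. a "move to the right" command.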

3.4 Experimental result

Fig. 10 shows three images taken one after another at (t-1), (t), and (t+1). In Fig. 11, (a) shows the motion vectors between (t-1) and (t), and (b) shows the motion vectors between (t) and (t+1). As the figure shows, the motion vectors appear near the moving object in Fig. 11(a). In Fig. 11(b), however, the motion vectors spread over the whole image. The reason is that Fig. 11(b) was captured while the surveillance camera was moving: once the motion vector between (t-1) and (t) has been calculated, the order to move the camera according to the motion vector of the object has already been issued. To solve this problem, the previous motion vector is subtracted from the current motion vector in the second step.

Fig. 10 Still images: (a) (t-1) image; (b) (t) image; (c) (t+1) image.

Fig. 11 Motion vectors: (a) between (t-1) and (t); (b) between (t) and (t+1).
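The compensation step mentioned above can be sketched as follows; the function and variable names are ours.

def compensate(current_mv, previous_mv):
    # Remove the component caused by the camera's own motion: subtract the
    # previously commanded motion vector from the currently measured one.
    cx, cy = current_mv
    px, py = previous_mv
    return (cx - px, cy - py)

# e.g. a vector of (5, 0) measured while the camera was already moving by
# (4, 0) leaves an object motion of (1, 0).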

4. CONCLUSIONS

In this paper, we have proposed a novel tracking method using a macro-block algorithm and a fuzzy control system. Object tracking using block motion vectors has seldom been exploited, and such an approach can be implemented on a moving embedded system. To facilitate tracking, the first stage of the algorithm divides the still image into macro-blocks and then calculates the motion vectors using the full search algorithm. A fuzzy logic system is then used to control the camera: its inputs are the motion vectors of the object and of the camera, and its output is the motion vector of the pan-tilter. In order to verify the effectiveness of the proposed system, we performed experiments on object tracking. The results show a considerable improvement over the previous object tracking methods.

- This algorithm is simple enough to apply in real time.
- Object tracking is possible while the camera is moving.

ACKNOWLEDGEMENT

This work was supported by the ERC program of MOST/MOSEF (R11-1999-058-01006-0).

REFERENCES

[1] MSSG, "MPEG-2 Source Code," 1994.
[2] I. D. Reid and D. W. Murray, "Active Tracking of Foveated Feature Clusters using Affine Structures," International Journal of Computer Vision, vol. 10, no. 1, pp. 41~60, 1996.
[3] Karthik Hariharakrishnan and Dan Schonfeld, "Fast Object Tracking Using Adaptive Block Matching," IEEE Transactions on Multimedia, vol. 7, no. 5, October 2005.
[4] Dan Schonfeld and Dan Lelescu, "VORTEX: Video Retrieval and Tracking from Compressed Multimedia Databases – Multiple Object Tracking from MPEG-2 Bit Stream," Journal of Visual Communication and Image Representation, vol. 11, pp. 154~182, 2000.
[5] Jack Golten and Andy Verwer, "Control System Design and Simulation," McGraw-Hill International Editions, 1992.
[6] Mansoor Doostfatemeh and Stefan C. Kremer, "Developing a New Fuzzy Controller," The North American Fuzzy Information Processing Society, 2005.
[7] Li-Xin Wang, "A Course in Fuzzy Systems and Control," International Edition, Prentice Hall, 1997.
[8] Rong-Jong Wai, "Motion Control of Linear Induction Motor via Petri Fuzzy Neural Network," IEEE Transactions on Industrial Electronics, vol. 54, no. 1, February 2007.
[9] Gianluca Antonelli, Stefano Chiaverini and Giuseppe Fusco, "A Fuzzy-Logic-Based Approach for Mobile Robot Path Tracking," IEEE Transactions on Fuzzy Systems, vol. 15, no. 2, April 2007.
