Thesis: Facial Recognition Security System
FA09-BCE-051
Hashim Khan
SP08-BEE-153
Israr-ul-Haq
SP08-BCE-020
Acknowledgement
Almighty ALLAH is kind, merciful and compassionate; His benevolence and blessings enabled us to accomplish this task. We thank Almighty ALLAH, the most beneficent, the most merciful. We offer our humblest gratitude from the deepest core of our hearts to the Holy Prophet Hazrat Muhammad (Peace Be Upon Him), who is and forever will be a model of guidance and knowledge for humanity as a whole.
We dedicate this project to our parents and families, whose love and affection have been inspirational throughout our lives,
To our teachers, to whom we owe a lot for success in our careers,
AND
To all those who in any capacity have been helpful in this project.
We are thankful to our project supervisor Engr. Nauman Tareen, lecturer CIIT/EE, COMSATS University, Abbottabad. It is hard to find words of appropriate dimensions to express our gratitude to our worthy supervisor for his keen interest, suggestions, consistent encouragement and support throughout the course of this project.
We are highly grateful to Engr. Asmat Ali Shah, without whose help and guidance this project could not have been a success.
Abstract
Comprehensive research has been undertaken to design and develop a system that will detect Faces, count them and measure their speed for highways. This project describes a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates, counts the detected objects and measures their speed. There are five key contributions.
The first is the introduction of a new image representation called the Integral Image, which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features and yields extremely efficient classifiers. The third contribution is a method for combining classifiers in a cascade, which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The fourth and fifth contributions are to count Faces and measure their speed by processing the pixels of an image.
A facial recognition security system has a wide range of applications: it is useful for security in many sensitive areas and helps reduce security risks in many departments and institutes.
A set of experiments in the domain of facial recognition security systems is presented. The system yields performance comparable to the best previous systems.
Table of Contents:
Chapter 1 Introduction ......................................................................................... 10
Chapter 6 Conclusion.................................................................................................. 29
Table of Figures:
Figure 3.5 This feature is selected by AdaBoost. The feature measures the difference in intensity between the region of the headlights and a region of shadow between the tyres, capitalizing on the observation that the headlight region is often lighter than the shadow region. ..... 25
Figure 3.6 Schematic depiction of a detection cascade. A series of classifiers is applied to every sub-window; the initial classifier eliminates a large number of negative examples with very little processing, and subsequent layers eliminate additional negatives at additional computational cost. ..... 25
Figure 3.7 Positive samples used in the training process ..... 26
Figure 3.8 Negative samples used in the training process ..... 27
Figure 4.1 Counting of Faces in the detection window ..... 29
Figure 4.2 Different technologies of Face counting ..... 33
Figure 6.1 Face detection ..... 41
Figure 6.2 Face count ..... 42
Figure 6.3 Face speed ..... 43
Chapter 1
Introduction
1.2 Goal
The goals of this project are to enhance public safety, reduce congestion, improve travel and transit information, generate cost savings for carriers and emergency operators, and reduce detrimental environmental impacts. This technology assists states, cities, and towns nationwide in fully meeting the increasing demands on a Facial recognition security system. The efficiency of the system is mainly based on the performance and comprehensiveness of the Face detection technology. Face detection and tracking are an integral part of any Face detection technology, since they gather all or part of the information that is used in an efficient way.
Face recognition can be applied to a wide variety of practical applications, including criminal identification, security systems and identity verification. Face detection and recognition are used in many places nowadays, such as websites hosting images and social networking sites, and can be achieved using technologies related to computer science. Features extracted from a face are processed and compared with similarly processed faces present in the database. If a face is recognized it is known, or the system may show a similar face existing in the database; otherwise it is unknown. In a surveillance system, if an unknown face appears more than once, it is stored in the database for further recognition. These steps are very useful in criminal identification. In general, face recognition techniques can be divided into two groups based on the face representation they use: appearance-based, which uses holistic texture features and is applied to either the whole face or specific regions in a face image, and feature-based, which uses permanent facial features (mouth, eyes, brows, cheeks, etc.) and the geometric relationships between them.
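The database comparison described above can be sketched as a nearest-neighbour search over feature vectors. This is only an illustrative sketch: the feature vectors, names and distance threshold below are invented for the example, not taken from the actual system.

```python
import math

# Hypothetical database of labelled face feature vectors
# (names and values are illustrative only).
database = {
    "alice": [0.11, 0.52, 0.33, 0.80],
    "bob":   [0.90, 0.10, 0.45, 0.20],
}

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(features, db, threshold=0.25):
    """Return the closest known identity, or None if no stored face
    is within the distance threshold (i.e. the face is unknown)."""
    best_name, best_dist = None, float("inf")
    for name, vec in db.items():
        d = euclidean(features, vec)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

probe = [0.12, 0.50, 0.35, 0.78]          # close to "alice"
print(identify(probe, database))          # a known face
print(identify([9, 9, 9, 9], database))   # an unknown face -> None
```

An unknown face (no match within the threshold) would, as described above, be stored in the database for further recognition.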
1.7 Description
The AT89C51 is a low-power, high-performance CMOS 8-bit microcomputer with 4K bytes of Flash programmable and erasable read-only memory (PEROM). The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard MCS-51 instruction set and pinout. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer. By combining a versatile 8-bit CPU with Flash on a monolithic chip, the Atmel AT89C51 is a powerful microcomputer which provides a highly flexible and cost-effective solution for many embedded control applications.
1.8 Features
4K Bytes of In-System Reprogrammable Flash Memory
Chapter 2
Image Processing
The field of digital image processing refers to processing digital images by means of a digital
computer. There is no general agreement among authors regarding where image processing stops
and other related areas, such as image analysis and computer vision, start. Sometimes a
distinction is made by defining image processing as a discipline in which both the input and
output of a process are images. We believe this to be a limiting and somewhat artificial
boundary. For example, under this definition, even the trivial task of computing the average
intensity of an image (which yields a single number) would not be considered an image
processing operation. On the other hand, there are fields such as computer vision whose ultimate
goal is to use computers to emulate human vision, including learning and being able to make
inferences and take actions based on visual inputs. This area itself is a branch of artificial
intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its
earliest stages of infancy in terms of development, with progress having been much slower than
originally anticipated. The area of image analysis (also called image understanding) is in
between image processing and computer vision.
2.1 Background:
One of the first applications of digital images was in the newspaper industry, when pictures were
first sent by submarine cable between London and New York. Introduction of the Bartlane cable
picture transmission system in the early 1920s reduced the time required to transport a picture
across the Atlantic from more than a week to less than three hours. Specialized printing
equipment coded pictures for cable transmission and then reconstructed them at the receiving
end.
Some of the initial problems in improving the visual quality of these early digital pictures were related to the selection of printing procedures and the distribution of intensity levels. The early printing method was abandoned toward the end of 1921 in favor of a technique based on photographic reproduction made from tapes perforated at the telegraph receiving terminal.
2.2.1 Low-level processing:
Low-level processing involves primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images.
2.2.2 Mid-level processing:
Mid-level processing on images involves tasks such as segmentation (partitioning an image into
regions or objects), description of those objects to reduce them to a form suitable for computer
processing, and classification (recognition) of individual objects. A mid-level process is
characterized by the fact that its inputs generally are images, but its outputs are attributes
extracted from those images (e.g., edges, contours, and the identity of individual objects).
2.2.3 Higher-level processing:
Higher-level processing involves making sense of an ensemble of recognized objects, as in image analysis, and performing the cognitive functions normally associated with vision.
2.3 Components of an image processing system:
Two elements are required to acquire digital images. The first is a physical device that is sensitive to the energy radiated by the object we wish to image; the second, called a digitizer, is a device for converting the output of the physical sensing device into digital form. For instance, in a digital video camera, the sensors produce an electrical output proportional to light intensity.
Figure 2.1 Components of a general-purpose image processing system.
Specialized image processing hardware usually consists of the digitizer just mentioned, plus
hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), that
performs arithmetic and logical operations in parallel on entire images. One example of how an
ALU is used is in averaging images as quickly as they are digitized, for the purpose of noise
reduction. This type of hardware sometimes is called a front-end subsystem, and its most
distinguishing characteristic is speed. In other words, this unit performs functions that require
fast data throughputs (e.g., digitizing and averaging video images at 30 frames/s) that the typical
main computer cannot handle.
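The frame-averaging operation mentioned above (averaging video frames as they are digitized, for noise reduction) can be sketched in a few lines. This is a plain-Python illustration of the idea, not the front-end hardware's actual implementation; the frames and their sizes are invented for the example.

```python
def average_frames(frames):
    """Average a sequence of equally sized grayscale frames pixel by pixel.

    Averaging K noisy frames of the same static scene reduces additive
    noise, which is the operation the front-end ALU performs as frames
    are digitized. Frames are plain lists of rows of pixel intensities.
    """
    k = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / k for c in range(cols)]
            for r in range(rows)]

# Three noisy observations of the same 2x2 scene:
frames = [
    [[100, 102], [99, 101]],
    [[102, 100], [101, 99]],
    [[101, 101], [100, 100]],
]
print(average_frames(frames))  # noise averages out toward the true scene
```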
The computer in an image processing system is a general-purpose computer and can range from a PC to a supercomputer. In dedicated applications, sometimes custom computers are used to achieve a required level of performance, but our interest here is in general-purpose image processing systems. In these systems, almost any well-equipped PC-type machine is suitable for off-line image processing tasks.
Software for image processing consists of specialized modules that perform specific tasks. A
well-designed package also includes the capability for the user to write code that, as a minimum,
utilizes the specialized modules. More sophisticated software packages allow the integration of
those modules and general-purpose software commands from at least one computer language.
2.4 Applications:
Today, there is almost no area of technical endeavor that is not impacted in some way by digital
image processing. We can cover only a few of these applications in the context and space of the
current discussion. However, limited as it is, the material presented in this section will leave no
doubt in your mind regarding the breadth and importance of digital image processing.
This section surveys numerous areas of application, each of which routinely utilizes digital image processing techniques. The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field.
One of the simplest ways to develop a basic understanding of the extent of image processing
applications is to categorize images according to their source.
The principal energy source for images in use today is the electromagnetic energy spectrum.
Other important sources of energy include acoustic, ultrasonic, and electronic. Synthetic images,
used for modeling and visualization, are generated by computer.
Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy, and each bundle of energy is called a photon.
CHAPTER 3
FACE DETECTION
3.1 Introduction
A great deal of work has been done on the problem of Face detection, and several existing techniques can detect robustly with extremely high accuracy. However, due to various poses, different scales, different expressions, lighting conditions, complex backgrounds and orientation, the detection rate and real-time performance of existing methods are not yet satisfactory. For Face detection we use the Viola-Jones method, the most common method for Face detection. In this chapter we explain the Face detection process.
There are four key concepts in Face detection:
3.2. Computing simple rectangular features, called Haar features.
3.3. An integral image is first created, which is then used for rapid computation of features.
3.4. The AdaBoost machine-learning method is used to select and train weak classifiers from the feature set.
3.5. A cascaded classifier is constructed by combining many weak classifiers efficiently.
Now we will explain these concepts in more detail and describe the Face detection procedure. The main reasons for using features rather than pixels directly are that features can encode ad-hoc domain knowledge that is difficult to learn from a finite quantity of training data, and that a feature-based system operates much faster than a pixel-based system. The most commonly used features are shown in Fig 3.1. The value of a two-rectangle feature is the difference between the sums of the pixels within two rectangular regions. A three-rectangle feature computes the sum within two outside rectangles subtracted from the sum in a center rectangle. Finally, a four-rectangle feature computes the difference between diagonal pairs of rectangles. Given that the base resolution of the detector is 24x24, the exhaustive set of rectangle features is quite large: over 160,000.
Figure 3-1. Example rectangle features shown relative to the enclosing detection window. The sum of the pixels which lie within the white rectangles is subtracted from the sum of pixels in the grey rectangles. Two-rectangle features are shown in (A) and (B); figure (C) shows a three-rectangle feature, and (D) a four-rectangle feature.
Rectangle features can be computed very rapidly using the integral image. The integral image at location x, y is the sum of the pixels above and to the left of x, y, inclusive:

ii(x,y) = Σ over x' ≤ x, y' ≤ y of i(x',y')    (3-1)

where ii(x,y) is the integral image and i(x,y) is the original image. See figure (3.2). Using the following pair of recurrences:

s(x,y) = s(x,y-1) + i(x,y)
ii(x,y) = ii(x-1,y) + s(x,y)    (3-2)

where s(x,y) is the cumulative row sum, s(x,-1) = 0 and ii(-1,y) = 0. In an integral image the value at pixel (x,y) is the sum of pixels above and to the left of (x,y).
Figure 3-2. How an integral image is formed from the input image.
The integral image is computed in one pass over the original image. Once the integral image is computed, any rectangular sum can be computed in four array references (see Fig. 3.4), and the difference between two rectangular sums in eight. Since the two-rectangle features defined above involve adjacent rectangular sums, they can be computed in six array references, eight in the case of three-rectangle features, and nine for four-rectangle features.
Figure 3-3. The value of the integral image at point (x, y) is the sum
of all the pixels above and to the left.
Figure 3-4. The sum of the pixels within rectangle D can be computed with four array references. The value of the integral image at location 1 is the sum of the pixels in rectangle A. The value at location 2 is A + B, at location 3 is A + C, and at location 4 is A + B + C + D. The sum within D can be computed as 4 + 1 - (2 + 3).
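The recurrences of Eq. (3-2) and the four-array-reference rectangle sum of Fig. 3-4 can be sketched as follows. This is an illustrative plain-Python version; the (row, column) indexing convention and the toy 3x3 image are our own choices for the example, not part of the original detector.

```python
def integral_image(img):
    """Compute the integral image in one pass, using the pair of
    recurrences (3-2): a cumulative row sum s plus the value above."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        s = 0  # cumulative row sum s(x, y)
        for x in range(w):
            s += img[y][x]
            ii[y][x] = s + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, using at most four
    array references, as in Fig. 3-4: D = 4 + 1 - (2 + 3)."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28
```

A two-rectangle feature value is then simply the difference of two such rectangle sums.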
The figures above illustrate the concept of the Integral Image used in the computation of rectangular features. We now discuss the AdaBoost machine-learning algorithm, first applied to this problem by Viola and Jones.
For each feature, the weak learner determines the optimal threshold classification function, such that the minimum number of examples is misclassified. A weak classifier h(x, f, p, θ) thus consists of a feature f, a threshold θ and a polarity p indicating the direction of the inequality:

h(x, f, p, θ) = 1 if p·f(x) < p·θ, and 0 otherwise    (3-3)

Here x is a 24x24 pixel sub-window of an image. In practice no single feature can perform the classification task with low error. Table 3.1 shows the learning algorithm.
Table 3.1. The boosting algorithm for learning a query online. T hypotheses are constructed each using a
single feature. The final hypothesis is a weighted linear combination of the T hypotheses where the weights
are inversely proportional to the training errors.
Given example images (x1, y1), ..., (xn, yn), where yi = 0, 1 for negative and positive examples respectively.
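A minimal sketch of the boosting loop of Table 3.1, using single-threshold weak classifiers of the form of Eq. (3-3) on toy one-dimensional data. The data, number of rounds and exhaustive stump search below are illustrative simplifications; the real detector searches over thousands of rectangle feature values per round.

```python
import math

def train_stump(xs, ys, ws):
    """Pick the threshold theta and polarity p minimizing the weighted
    error, for scalar inputs standing in for one feature's values."""
    best = None
    for theta in sorted(set(xs)):
        for p in (1, -1):
            # classify 1 when p*x < p*theta, per Eq. (3-3)
            err = sum(w for x, y, w in zip(xs, ys, ws)
                      if (1 if p * x < p * theta else 0) != y)
            if best is None or err < best[0]:
                best = (err, theta, p)
    return best  # (weighted error, theta, p)

def adaboost(xs, ys, rounds=3):
    n = len(xs)
    ws = [1.0 / n] * n
    classifiers = []
    for _ in range(rounds):
        total = sum(ws)
        ws = [w / total for w in ws]              # normalize the weights
        err, theta, p = train_stump(xs, ys, ws)   # best weak classifier
        err = max(err, 1e-10)                     # avoid division by zero
        beta = err / (1.0 - err)
        # down-weight correctly classified examples
        ws = [w * (beta if (1 if p * x < p * theta else 0) == y else 1.0)
              for x, y, w in zip(xs, ys, ws)]
        classifiers.append((math.log(1.0 / beta), theta, p))  # alpha, stump
    return classifiers

def strong_classify(classifiers, x):
    """Weighted majority vote of the T weak hypotheses."""
    total_alpha = sum(a for a, _, _ in classifiers)
    vote = sum(a for a, theta, p in classifiers if p * x < p * theta)
    return 1 if vote >= 0.5 * total_alpha else 0

xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]   # toy one-dimensional "feature values"
ys = [1, 1, 1, 0, 0, 0]               # 1 = positive, 0 = negative
clfs = adaboost(xs, ys)
print([strong_classify(clfs, x) for x in xs])
```

As the caption states, the final hypothesis is a weighted linear combination of the T weak hypotheses, with weights inversely related to their training errors.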
100% of the Faces with a false positive rate of 50%. Fig. 3.5 shows a description of the features used in this classifier.
Figure 3.5 This feature is selected by AdaBoost. The feature measures the difference in intensity between the region of the headlights and a region of shadow between the tyres. The feature capitalizes on the observation that the headlight region is often lighter than the shadow region.
Training a cascade:
Since finding the optimum combination is extremely difficult, Viola and Jones suggested a heuristic algorithm for cascade training.
Manual Tweaking:
select fi (Maximum Acceptable False Positive rate / stage)
select di (Minimum Acceptable True Positive rate / stage)
select Ftarget (Target Overall False Positive rate)
Until Ftarget is met:
Add new stage:
Until fi, di rates are met for this stage
Keep adding features & train new strong classifier with AdaBoost.
Table 3-2 shows the pseudo code for building a cascade detector.
The user selects values for f, the maximum acceptable false positive rate per layer, and d, the minimum acceptable detection rate per layer.
The user selects the target overall false positive rate [1], Ftarget.
P = set of positive examples
N = set of negative examples
F0 = 1.0; D0 = 1.0
i = 0
while Fi > Ftarget:
    i = i + 1
    ni = 0; Fi = Fi-1
    while Fi > f x Fi-1:
        ni = ni + 1
        Use P and N to train a classifier with ni features using AdaBoost.
        Evaluate the current cascaded classifier on a validation set to determine Fi and Di.
        Decrease the threshold for the ith classifier until the current cascaded classifier has a detection rate of at least d x Di-1 (this also affects Fi).
    N = {}
    If Fi > Ftarget, evaluate the current cascaded detector on the set of non-Face images and put any false detections into the set N.
The overall form of the detection process is that of a degenerate decision tree, what is called a cascade (Quinlan, 1986) (see Fig. 3.6). A positive result from the first classifier triggers the evaluation of a second classifier, which has also been adjusted to achieve very high detection rates. A positive result from the second classifier triggers the evaluation of a third classifier, and so on. A negative outcome at any point means an immediate rejection of the sub-window. The structure of the cascade exploits the fact that within any single image an overwhelming majority of sub-windows are negative, so the cascade tries to reject as many negatives as possible at the earliest possible stage. We developed a 20-stage classifier; training took 30 hours to complete on a Pentium IV system with a 3 GHz processor and 2 GB of RAM.
Figure 3.6. Schematic depiction of a detection cascade. A series of classifiers is applied to every sub-window. The initial classifier eliminates a large number of negative examples with very little processing. Subsequent layers eliminate additional negatives but require additional computation. After several stages of processing the number of sub-windows has been reduced radically. Further processing can take any form, such as additional stages of the cascade (as in our detection system) or an alternative detection system.
While a positive instance triggers the evaluation of every classifier in the cascade, this is an exceedingly rare event. As in a decision tree, subsequent classifiers are trained using those examples which pass through all the previous stages, so the second classifier faces a more difficult task than the first: the examples which pass the first stage are harder than typical examples.
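The early-rejection behaviour described above can be sketched with a toy cascade. The stages below are stand-ins (simple thresholds on an invented scalar "face score"), not trained strong classifiers; the point is only that a negative outcome at any stage ends the evaluation immediately, so most negative sub-windows pay for just the first, cheapest stage.

```python
def cascade_classify(stages, window):
    """Evaluate classifiers in order; a negative outcome at any stage
    immediately rejects the sub-window. Returns (accepted, number of
    stages actually evaluated)."""
    for evaluations, stage in enumerate(stages, start=1):
        if not stage(window):
            return False, evaluations  # rejected early
    return True, len(stages)           # passed every stage

# Toy 3-stage cascade over a scalar score (illustrative only):
stages = [lambda s: s > 0.2,   # cheap stage, rejects obvious background
          lambda s: s > 0.5,   # stricter
          lambda s: s > 0.8]   # strictest

print(cascade_classify(stages, 0.1))  # rejected by stage 1
print(cascade_classify(stages, 0.9))  # accepted after all 3 stages
```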
We collected 1000 positive samples of Faces, of size 24x24, captured under varied conditions for training. To obtain a robust classifier we used good-quality positive samples. Figure 3.7 shows some of the positive samples used in our work. Similarly, 1500 negative images of size 640x480 were used; Figure 3.8 shows some of the negative images used in our training process.
The training algorithm, based on the AdaBoost learning algorithm, then takes the sets of positive and negative samples and generates a classifier that detects Faces.
Chapter 6
Conclusion
In this work we developed an application for highways. Our application can robustly detect Faces, count them and measure their speed; we achieved precise and robust Face detection, counting and speed measurement. For Face detection and counting we used Haar-like features and the AdaBoost algorithm for both feature selection and classification. The Face speed measurement system used the AdaBoost algorithm with the optical flow method. For Face recognition, general databases have been used. The tested Face videos include different lighting conditions, and in our application we tested Face videos of different image sizes. The recognition system worked efficiently even in the most difficult scenario, a sample size of 5 x 5. The results have shown that the developed application is fully applicable in real-world environments.
6.1 DETECTION:
Figure 6.1 shows the result of detection. The system detects only the respective Face and does not detect anything else in the surroundings.
REFERENCES:
1) "Detection Window Feature Data: Collection Methods and Applications", Guillaume Leduc, Working Papers on Energy, Transport and Climate Change.
2) "Automatic player detection and recognition in images using AdaBoost", Zahid Mahmood, August 2011.
3) "Digital Image Processing", Third Edition, Rafael C. Gonzalez, 7 October 2007.
4) "Robust Real-time Object Detection", Paul Viola and Michael Jones, 13 July 2001.
Websites:
5) http://ntl.bts.gov/DOCS/arizona_report.html
6) http://en.wikipedia.org/wiki/OpenCV
7) http://en.wikipedia.org/wiki/Optical_flow
Appendix A
OpenCV (Open Source Computer Vision Library) is a library of programming functions mainly aimed at real-time computer vision, developed by Intel and now supported by Willow Garage and Itseez. It is free for use under the open-source BSD license. The library is cross-platform and focuses mainly on real-time image processing. If the library finds Intel's Integrated Performance Primitives on the system, it will use these proprietary optimized routines to accelerate itself.
Officially launched in 1999, the OpenCV project was initially an Intel Research initiative to advance CPU-intensive applications, part of a series of projects including real-time ray tracing and 3D display walls. The main contributors to the project included a number of optimization experts in Intel Russia, as well as Intel's Performance Library Team. In the early days of OpenCV, the goals of the project were described as: "Advance vision research by providing not only open but also optimized code for basic vision infrastructure. No more reinventing the wheel."
The first alpha version of OpenCV was released to the public at the IEEE Conference on Computer Vision and Pattern Recognition in 2000, and five betas were released between 2001 and 2005. The first 1.0 version was released in 2006. In mid-2008, OpenCV obtained corporate support from Willow Garage, and is now again under active development. A version 1.1 "pre-release" was released in October 2008.
The second major release of OpenCV was in October 2009. OpenCV 2 includes major changes to the C++ interface, aiming at easier, more type-safe patterns, new functions, and better implementations of existing ones in terms of performance (especially on multi-core systems). Official releases now occur every six months and development is now done by an independent Russian team supported by commercial corporations.
In August 2012, support for OpenCV was taken over by a non-profit foundation, OpenCV.org,
which maintains a developer and user site.
OS support:
OpenCV runs on Windows, Android, Maemo, FreeBSD, OpenBSD, BlackBerry, Linux and OS X. The user can get official releases from SourceForge, or take the current snapshot under SVN from there. OpenCV uses CMake.
Description:
Syntax:
OpenCV(parent);
Fields:
BILATERAL - Blur method
BLUR - Blur method
BUFFER - Type of image
CASCADE_FRONTAL_ALT
CASCADE_FRONTAL_ALT2
CASCADE_FRONTAL_ALT_TREE
CASCADE_FRONTAL_DEFAULT
CASCADE_FULLBODY
CASCADE_UPPERBODY
FLIP_BOTH - Flip mode
FLIP_HORIZONTAL - Flip mode
FLIP_VERTICAL - Flip mode
GAUSSIAN - Blur method
GRAY
HAAR_DO_CANNY_PRUNING
HAAR_DO_ROUGH_SEARCH
HAAR_FIND_BIGGEST_OBJECT
HAAR_SCALE_IMAGE
INTER_AREA - Interpolation method
INTER_CUBIC - Interpolation method
INTER_LINEAR - Interpolation method
INTER_NN - Interpolation method
MAX_VERTICES
MEDIAN - Blur method
MEMORY - Type of image
MOVIE_FRAMES
MOVIE_MILLISECONDS
MOVIE_RATIO
RGB
SOURCE - Type of image
THRESH_BINARY - Thresholding method
THRESH_BINARY_INV - Thresholding method
THRESH_OTSU - Thresholding method
THRESH_TOZERO - Thresholding method
THRESH_TOZERO_INV - Thresholding method
THRESH_TRUNC - Thresholding method
height
width
ROI()
absDiff()
allocate()
blobs()
blur()
brightness()
capture()
cascade() - Load into memory the descriptor file for a trained cascade classifier.
contrast()
convert()
copy() - Copy the image (or a part of it) into the current OpenCV buffer (or a part of it).
detect()
flip()
image()
interpolation()
invert() - Invert the image.
jump()
loadImage()
movie()
pixels()
read()
remember()
restore()
stop()
threshold()
Appendix B
C++ language:
C++ (pronounced "see plus plus") is a statically typed, free-form, multi-paradigm, compiled, general-purpose programming language. It is regarded as an intermediate-level language, as it comprises both high-level and low-level language features. Developed by Bjarne Stroustrup starting in 1979 at Bell Labs, C++ was originally named C with Classes, adding object-oriented features, such as classes, and other enhancements to the C programming language. The language was renamed C++ in 1983, as a pun involving the increment operator.
C++ is one of the most popular programming languages and is implemented on a wide variety of hardware and operating system platforms. As an efficient compiler to native code, its application domains include systems software, application software, device drivers, embedded software, high-performance server and client applications, and entertainment software such as video games. Several groups provide both free and proprietary C++ compiler software, including the GNU Project, LLVM, Microsoft, Intel and Embarcadero Technologies. C++ has greatly influenced many other popular programming languages, most notably C# and Java. Other successful languages such as Objective-C use a very different syntax and approach to adding classes to C.
C++ is also used for hardware design, where the design is initially described in C++, then
analyzed, architecturally constrained, and scheduled to create a register-transfer level hardware
description language via high-level synthesis.
The language began as enhancements to C, first adding classes, then virtual functions, operator
overloading, multiple inheritance, templates and exception handling, among other features. After
years of development, the C++ programming language standard was ratified in 1998 as ISO/IEC
14882:1998.
Philosophy:
In The Design and Evolution of C++ (1994), Bjarne Stroustrup describes some rules that he used
for the design of C++.
Inside the C++ Object Model (Lippman, 1996) describes how compilers may convert C++
program statements into an in-memory layout. Compiler authors are, however, free to implement
the standard in their own manner.
Operators and their symbols:
Scope resolution operator - ::
Conditional operator - ?:
Pointer-to-member operator - .*
sizeof operator - sizeof
typeid operator - typeid
C++ provides more than 35 operators, covering basic arithmetic, bit manipulation, indirection, comparisons, logical operations and others. Almost all operators can be overloaded for user-defined types, with a few notable exceptions such as member access (. and .*) as well as the conditional operator. The rich set of overloadable operators is central to using user-created types in C++ as easily as built-in types (so that a user of them cannot tell the difference). The overloadable operators are also an essential part of many advanced C++ programming techniques, such as smart pointers. Overloading an operator does not change the precedence of calculations involving the operator, nor does it change the number of operands that the operator uses (any operand may, however, be ignored by the operator, though it will be evaluated prior to execution). Overloaded "&&" and "||" operators lose their short-circuit evaluation property.
APPENDIX C
FALSE POSITIVE RATE:
In statistics, when performing multiple comparisons, the term false positive ratio, also known as false alarm ratio, usually refers to the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate is the proportion of actual negatives that are incorrectly classified as positive.
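As a worked example (with invented counts), the false positive rate can be computed as FP / (FP + TN):

```python
def false_positive_rate(fp, tn):
    """FPR = FP / (FP + TN): the fraction of actual negatives that the
    detector wrongly reports as positive."""
    return fp / (fp + tn)

# E.g. a stage that wrongly passes 50 of 1000 background windows:
print(false_positive_rate(50, 950))  # 0.05
```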