
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 23, NO. 10, OCTOBER 2013

Fast Background Subtraction Based on a Multilayer Codebook Model for Moving Object Detection

Jing-Ming Guo, Senior Member, IEEE, Chih-Hsien Hsia, Member, IEEE, Yun-Fu Liu, Student Member, IEEE, Min-Hsiung Shih, Cheng-Hsin Chang, and Jing-Yu Wu

Abstract: Moving object detection is an important and fundamental step for intelligent video surveillance systems because it provides a focus of attention for post-processing. A multilayer codebook-based background subtraction (MCBS) model is proposed for video sequences to detect moving objects. By combining a multilayer block-based strategy with adaptive feature extraction from blocks of various sizes, the proposed method can remove most of the nonstationary (dynamic) background and significantly increase processing efficiency. Moreover, pixel-based classification is adopted to refine the results of the block-based background subtraction, further classifying pixels as foreground, shadow, or highlight. As a result, the proposed scheme provides high precision and an efficient processing speed that meet the requirements of real-time moving object detection.

Index Terms: Background subtraction, codebook model, foreground detection, hierarchical structure, shadow removal.

I. Introduction

BACKGROUND subtraction is an essential task in visual surveillance because it extracts moving objects for further analysis. However, a difficult issue in background subtraction is that the background is usually nonstationary, such as a waving tree or changing lights. Moreover, when moving objects are involved in a scene, they may cast shadows or change the lighting, which can result in incorrect detections. To solve this problem, many previous studies have proposed pixel classification algorithms to classify pixels as shadow, highlight, or foreground. Cucchiara et al. [1] proposed a hue-saturation-value color model to handle the shadow; this method defined shadows by the luminance and saturation values and used a predefined parameter for the hue variation. In [2] and [3], a red, green, and blue (RGB) color model was proposed to

Manuscript received September 21, 2012; revised January 8, 2013 and March 26, 2013; accepted March 26, 2013. Date of publication June 17, 2013; date of current version September 28, 2013. This work was supported by the National Science Council, R.O.C., under Contract NSC 100-2221-E-011-103-MY3. This paper was recommended by Associate Editor L. Onural.
J.-M. Guo, Y.-F. Liu, M.-H. Shih, C.-H. Chang, and J.-Y. Wu are with the Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei 10607, Taiwan (e-mail: jmguo@seed.net.tw; yunfuliu@gmail.com; onlybearbear@gmail.com; M10107304@mail.ntust.edu.tw; M10107305@mail.ntust.edu.tw).
C.-H. Hsia is with the Department of Electrical Engineering, Chinese Culture University, Taipei 11114, Taiwan (e-mail: chhsia625@gmail.com).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TCSVT.2013.2269011

remove the shadow; however, one problem with this model was that it contained too many parameters. All of these methods perform well in managing the shadow issue; however, some disadvantages are apparent, such as a limited capability to remove nonstationary background. Currently, the mainstream techniques of background subtraction can be roughly separated into three groups: the mixture of Gaussians (MoG), kernel density estimation (KDE), and the codebook (CB). Among these, the MoG attracts the most attention.
Stauffer and Grimson [4] used multiple Gaussian distributions to construct a background model for each pixel; this method can achieve good performance through a learning procedure that builds statistical models. However, it has some disadvantages; for example, it cannot detect and remove shadows. Martel-Brisson and Zaccarin [5] proposed a pixel-based statistical algorithm for detecting moving shadows of nonuniform objects. However, its high computational complexity results in a long learning time. Hu and Su [6] proposed a Gaussian distribution-based RGB color model; the cone-shaped color model can classify pixels as shadows and highlights, but the processing time is too long for practical implementation. Xue et al. [7] proposed a phase-based background modeling approach that combines Gabor wavelet transforms to handle illumination changes. Kim et al. [8] proposed a real-time method that uses a CB; this method gathers the background pixel values to construct the background model and compresses the pixel information to increase the processing speed. Wu and Peng [9] proposed a spatial-temporal CB model that includes the concept of a spatial relationship between pixels and uses a Markov random field (MRF) to address background subtraction. However, the applied MRF leads to a low processing speed. In [10], the Kohonen network concept and self-organizing maps [11] were utilized to construct the background model, which can adapt in a self-organizing manner. Guo et al. [12] adopted the concept of block-based CBs to construct four different background models. However, the fixed-size block-based process loses adaptation flexibility and thus causes more false detections. Barnich and Van Droogenbroeck [13] proposed a unique model that uses a random concept in the color space along with an updating method. This approach provides good detection performance and a high processing speed; however, it is challenging for a software platform to meet real-time performance at higher resolutions. In [14], a high frame

1051-8215 (c) 2013 IEEE


rate implementation based on the MoG is presented using general-purpose computing on graphics processing units, which can provide very high-speed processing performance for server systems. Compared to the MoG, the proposed complexity-reduced hierarchical method for foreground detection using the CB model can achieve good classification performance without substantial hardware resources.
In recent years, the resolutions of digital cameras and video recorders have increased well beyond standard definition (SD); however, the complexity of former foreground detection methods is still too high to handle high-resolution scenarios. To address this concern, this paper proposes a new multilayer, adaptive block-based background subtraction method and a pixel-based refinement procedure that uses the rather robust mean feature in the CB to yield high processing efficiency and detection precision (Pr) simultaneously. Herein, all of the parameters in the multilayer codebook-based background subtraction (MCBS) are fixed, and the parameters in this paper are unified to yield general results.
This paper is organized as follows. The feature extraction for blocks of various sizes is first introduced in Section II, in which the multilayer CB construction algorithm is also presented. Section III describes the combination of the proposed multilayer background subtraction and the CB model. Section IV describes the details of the fake foreground removal model (FFRM). Section V demonstrates the extensive experimental results and performance comparisons to prove the reliability of the proposed method. Finally, conclusions are drawn in Section VI.

Fig. 1. Background subtraction results of the HCB [12] using various block sizes with test sequence CAMPUS. (a) Original image. (b) 16x16. (c) 8x8. (d) 4x4.

II. Multilayer Background Model Construction

With the block-based CB background model, the hierarchical method for foreground detection using the CB model [termed the hierarchical codebook (HCB) [12]] was employed to handle the dynamic background and to improve the processing speed by adopting high-mean and low-mean values of blocks; however, many additional issues were introduced. For example, fixed-size block-based processing that uses an identical threshold can result in false detections in the background subtraction. Fig. 1(a) shows the original frame #366 of the video CAMPUS [18], and Figs. 1(b)-(d) are the results of the HCB using different block sizes. Among these, although a greater block size of 16x16 handles the dynamic background well, some false subtractions (false negatives) can be observed over the vehicle (left) and the pedestrian (right). For the smaller block size of 4x4, the foreground is well detected, but the background subtraction is rather poor for the dynamic background, which is why the tradeoff block size of 8x8 is adopted in the HCB [12]. However, in some sequences, a block of size 8x8 is not sufficiently large, which leads to noise, as is the case with a block of size 4x4 [Fig. 1(d)]. Conversely, for other sequences, the block of size 8x8 is not sufficiently small, which leads to false subtractions, as is the case with a block of size 16x16 [Fig. 1(b)]. To solve these issues, a multilayer block-based background model is proposed with three adaptive block-based layers for coarse detection and a pixel layer (block of size 1x1) for further refinement.

Fig. 2. Conceptual flowchart of the proposed scheme.

With this strategy, the reliability of the system against the dynamic background problem is improved, and the integrity of the foreground is well preserved. In the experiments, the proposed algorithm shows better performance in terms of various evaluation indices using the mean value of a block (as defined below) instead of the high and low means of the HCB.
Fig. 2 shows the conceptual flowchart of the proposed method, in which the right vertical axis denotes the time index (t). The flowchart can be separated into two parts: the first half (1 <= t <= T), on the top, is for training the background model using four CBs, as introduced below, and the other half (t > T) is for background subtraction using the hierarchical conceptual algorithm. The FFRM, which is adopted for adapting the background, is in the bottom-right corner. Moreover, as shown on the left of the figure, an illumination change procedure is also proposed to overcome changes in the lighting conditions. In this section, the first part (1 <= t <= T), known as the background construction, is introduced.
A. Feature Extraction

Feature extraction is a crucial factor in foreground detection because it can have a very large impact on the


results. Although the HCB [12] used block-based features and achieved good performance, the adopted features still have room for further improvement. First, each frame F_t = {X^t_{m,n} | 1 <= m <= I, 1 <= n <= J} of size IxJ of a sequence is separated into multiple nonoverlapping blocks of size MxM (in different time slots, each frame of size IxJ is divided into I/M x J/M blocks, and each block is recorded into a CB individually), where X^t_{m,n} = {x^e_{m,n} | e = R, G, B} denotes a color pixel in the RGB color domain, and M denotes the covered region (block size) of a CB. Subsequently, each block is processed independently. In the HCB, the concept of block truncation coding (BTC) was used, which entailed dividing an image into nonoverlapping blocks, and each block was simply represented by four regional means, namely the high-top mean, high-bottom mean, low-top mean, and low-bottom mean [thus, in total, 12 (three color channels x four means) feature values], which were used to construct the block-based CBs. However, these four means induce additional computational complexity; moreover, they are easily interfered with by environmental factors such as lighting conditions.
According to extensive experiments, one mean value of a block, as defined below, is employed in this paper to replace the roles of the former four means in the HCB. As a result, an extremely low computational complexity can be obtained without noticeably degrading the description capability:

mu^(M,e) = (1/(M*M)) * sum_{m=1}^{M} sum_{n=1}^{M} x^e_{m,n}    (1)

where e = R, G, and B represents the three color channels. Thus, only one mean value is used to describe a block in a specific color channel. In addition, each block can be represented by B_M = {mu^(M,e) | e = R, G, B}, where M = 1, 4, 8, and 16; notably, when M = 1, the corresponding B_1 is equivalent to {x^e_{m,n} | e = R, G, B}.
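As a plain illustration of (1), the sketch below computes the per-channel mean of each nonoverlapping MxM block; the nested-list frame layout and the helper name are our own illustrative choices, since the paper only specifies the mathematics.

```python
def block_means(frame, M):
    """Compute B_M: the per-channel mean of each nonoverlapping MxM block.

    `frame` is a height x width nested list of (R, G, B) pixels (an
    illustrative layout; the paper only defines Eq. (1) itself).
    Returns a (height//M) x (width//M) grid of (mu_R, mu_G, mu_B) tuples.
    """
    I, J = len(frame), len(frame[0])
    grid = []
    for bi in range(I // M):
        row = []
        for bj in range(J // M):
            sums = [0.0, 0.0, 0.0]
            for m in range(M):
                for n in range(M):
                    pixel = frame[bi * M + m][bj * M + n]
                    for e in range(3):
                        sums[e] += pixel[e]
            row.append(tuple(s / (M * M) for s in sums))
        grid.append(row)
    return grid
```

For M = 1 this degenerates to the pixels themselves, matching the remark that B_1 equals {x^e_{m,n}}.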
B. Background Model Construction

The features of a specific block can be described by B_M during a training period (1 <= t <= T), and a CB (the background model) for a block can be represented by C_M = {C_i^M | 1 <= i <= K_{C^M}}, where C_i^M denotes the ith codeword of size MxM in the CB, and K_{C^M} denotes the number of codewords in C_M. Herein, C_i^M = {B_i^M, w_{c_i^M}, time_{c_i^M}} (for the pixel-based case, C_i^M = {p_i^M, w_{c_i^M}, time_{c_i^M}}), where w_{c_i^M} denotes a weight variable, and time_{c_i^M} denotes a time variable. Each frame of size IxJ is divided into I/M x J/M nonoverlapping blocks of size MxM, and one block-based CB is employed to record each block (each block is processed independently). In addition, each layer is associated with a block of a specific size (1, 4, 8, and 16); thus, a total of (I*J + I/4 * J/4 + I/8 * J/8 + I/16 * J/16) CBs are required for a frame after the background model is constructed.
Fig. 3 illustrates the proposed algorithm of the four-layer background model construction with the updating method, where the block "add codeword" involves the following equations: K_{C^M} = K_{C^M} + 1, C^M_{K_C} = B_M, w_{C^M_{K_C}} = 1/T, and time_{C^M_{K_C}} = t. The block "update codeword" involves three equations: C_i^M = (1 - alpha) * C_i^M + alpha * B_M, w_{c_i^M} = w_{c_i^M} + 1/T, and time_{c_i^M} = t, where

Fig. 3. Background model training algorithm.

T denotes the number of training frames, and alpha denotes the learning rate that controls the proportion needed to maintain the current codeword while including the current block value in the codeword. The alpha is set at 0.05 in this paper; theoretically, a higher learning rate reflects that the current RGB color of successive matches possesses a higher confidence (when alpha = 1 is set, the original color is replaced completely). The proposed background model construction employs multiple codewords C_i^M to address the characteristics of a block.
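A minimal sketch of the "add codeword" and "update codeword" steps of Fig. 3, assuming a codeword is stored as a (mean vector, weight, time) triple; the class and variable names are illustrative, not the paper's.

```python
ALPHA = 0.05  # learning rate, as set in the paper

class Codebook:
    """One block's codebook: a list of [mean_vector, weight, last_time]."""

    def __init__(self, T):
        self.T = T          # number of training frames
        self.codewords = []

    def add_codeword(self, B, t):
        # K_C = K_C + 1, new codeword = B_M, weight = 1/T, time = t
        self.codewords.append([list(B), 1.0 / self.T, t])

    def update_codeword(self, i, B, t):
        # C_i = (1 - alpha) * C_i + alpha * B_M; weight += 1/T; time = t
        cw = self.codewords[i]
        cw[0] = [(1 - ALPHA) * c + ALPHA * b for c, b in zip(cw[0], B)]
        cw[1] += 1.0 / self.T
        cw[2] = t
```

Because each matched frame adds 1/T to one codeword's weight, the weights across a block's codewords accumulate toward 1 over the T training frames, which is what the refining rule (4) later exploits.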
Given an input block vector B_M, the match function defined below is employed to check the distance between the codewords c_i^M in the corresponding CB and the block vector B_M in the RGB color space:

match_function(v_source, v_codeword) = { true, if d^T d / 3 < theta_M^2; false, otherwise },  where M = 1, 4, 8, 16    (2)

d = v_source - v_codeword    (3)

where theta_4, theta_8, and theta_16 denote the thresholds for the block-based CBs, and theta_1 denotes the threshold for the pixel-based CB (set at 4 and 3 for the block-based CBs and the pixel-based CB, respectively). To set up these thresholds, note that the block-based CBs are established for dynamic background scenarios such as waving trees; thus, a greater threshold is used to filter out most of the noise, and a smaller threshold is adopted for refining the outputs. In general, when the block-based threshold value is small, more foreground is detected, and a greater pixel-based threshold is applied to refine the foreground; v_source denotes a 1-D feature vector from the test sequence, and v_codeword denotes a one-dimensional codeword stored in C_M. This match function is widely used in this paper for background model construction because the RGB


color model is employed in this paper, and the two vectors v_source and v_codeword are applied to yield the average distance d across the three color channels. Subsequently, the matching parameter theta_M is adopted to compare the Euclidean distance between the two vectors in the RGB color space. When a codeword c_i^M is matched, c_i^M is updated by the corresponding block vector B_M; the larger the number of block vectors of frames matched to the ith codeword, the higher the importance to which that codeword can be boosted (by increasing the weight variable w_{c_i^M}). Conversely, some codewords in a CB are not frequently used to describe the background.
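Under one plausible reading of (2)-(3) (the garbled original compares the mean squared per-channel difference against a squared threshold), the match test can be sketched as:

```python
# Per-layer matching thresholds from the paper: 4 for the block-based
# CBs (M = 4, 8, 16) and 3 for the pixel-based CB (M = 1).
THETA = {1: 3.0, 4: 4.0, 8: 4.0, 16: 4.0}

def match_function(v_source, v_codeword, M):
    """Return True if the mean squared per-channel distance between the
    source vector and the codeword falls below theta_M squared (Eq. (2))."""
    d = [s - c for s, c in zip(v_source, v_codeword)]   # Eq. (3)
    return sum(x * x for x in d) / 3.0 < THETA[M] ** 2
```

In practice theta_M here would be the illumination-adjusted threshold produced by the procedure of Section II-C rather than the fixed value.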
Consequently, a refining procedure is employed (as described below) to filter out those redundant codewords as well as to further reduce the computational complexity. The refining procedure extracts L_{C^M} codewords from the K_{C^M} (L_{C^M} <= K_{C^M}) codewords in a CB; the codewords c_i^M in the CB are first sorted according to the corresponding weight w_{c_i^M} from high (higher importance) to low (lower importance):

L_{C^M} = argmin_g ( sum_{i=1}^{g} w~_{c~_i^M} > epsilon ),  where g <= K_{C^M}    (4)

where w~_{c~_i^M} denotes the weight of the sorted codeword c~_i^M, and epsilon denotes the proportion parameter used to determine which proportion of the codewords should be maintained (epsilon = 0.7). A greater epsilon means that more codewords are retained during the updating procedure, and the completeness of the codebook model can be better maintained; yet, sometimes wrong codewords can be added into the codebook model in this scenario. If there are almost no moving objects during the background model construction, then epsilon has no effect on the codeword number; conversely, if many moving objects are involved, then a greater proportion parameter can lead to recording more wrong codewords. The refined CBs C~_M = {c~_i^M | 1 <= i <= L_{C^M}}, M = 1, 4, 8, 16, are then employed for the block-based background subtraction, which is introduced in Section III.
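The refining rule (4) (keep the heaviest codewords until their cumulative weight exceeds epsilon) can be sketched as follows; treating the weights as summing to about 1 is our reading, since each of the T training frames contributes 1/T to one codeword.

```python
EPSILON = 0.7  # proportion parameter from the paper

def refine_codebook(codewords):
    """Keep the smallest prefix of weight-sorted codewords whose
    cumulative weight exceeds EPSILON (Eq. (4)).

    `codewords` is a list of (feature, weight, time) tuples.
    """
    ranked = sorted(codewords, key=lambda cw: cw[1], reverse=True)
    total, kept = 0.0, []
    for cw in ranked:
        kept.append(cw)
        total += cw[1]
        if total > EPSILON:
            break
    return kept
```

Codewords created by transient moving objects accumulate little weight during training, so they fall outside the kept prefix, which is exactly why the paper can tolerate moving objects appearing during training.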
C. Illumination Change Procedure

Typically, lighting conditions change over time, and a fixed threshold, such as theta_M in (2), is usually not sufficient to fully cover these variations for most scenes. To address this concern, an adjustment strategy that adaptively modifies the background model along with the variations in the lighting is needed to obtain a higher suitability. To do so, the illumination change procedure shown in Fig. 2 is proposed. Fig. 4 illustrates this algorithm, and the variables are defined as follows:

dis = ctg(Value) - Gray,  where Value = B_M, Gray = Gray_M, M = 1, 4, 8, 16    (5)

where the gray conversion ctg(.) follows the form of YCbCr defined for a standard video capture system and used in the ITU-R BT.601 (formerly CCIR 601) standard for digital image compression. The component

Fig. 4. Illumination change procedure.

Y (= ctg(Value)) is derived from the corresponding RGB space [15] using the following equation:

ctg(V) = V^R * 0.299 + V^G * 0.587 + V^B * 0.114,  where V is in R^3.    (6)
In the "initialize variable" block, the variables Value and Gray denote the current gray value (of B_M) and the recorded gray value (the previous mean gray value) of the corresponding layer, respectively; the variable countG_M denotes the count of the recorded frames. In this figure, a distance dis that is too large suggests that a very large illumination change has occurred and the previous Gray might no longer represent the background. Thus, the incoming Value replaces the Gray value to shift the luminance range of the background model (which involves the following equations: Gray = Value and countG_M = 1). At the same time, the current theta_M is also discarded and replaced by the predefined theta_M, which is always the original value for initialization. Compared with the fixed strategy used in (2), this adaptive manner automatically adjusts the model location along with the fluctuation in the lighting; thus, more variations are covered reasonably. Conversely, if dis is sufficiently small, the recorded gray value is updated [involving the following equations: countG_M = countG_M + 1 and Gray = (Gray * (countG_M - 1) + Value)/countG_M] to yield a new theta_M, as follows:

theta_M = theta_M * (1 + dis/255)    (7)

for improving the tolerance of the background model to environmental changes, most importantly to illumination changes.
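A compact sketch of the procedure of Fig. 4, combining (5)-(7). The reset cut-off `DIS_LIMIT` is an illustrative placeholder (the paper defines the "too large" test in the figure rather than in the text), and taking |dis| in (7) is likewise our assumption.

```python
DIS_LIMIT = 50.0  # illustrative cut-off for a "very large" change

def ctg(v):
    """ITU-R BT.601 gray conversion of an (R, G, B) vector, Eq. (6)."""
    return v[0] * 0.299 + v[1] * 0.587 + v[2] * 0.114

def illumination_change(block_mean, state, theta_init):
    """Update one layer's state per Eqs. (5)-(7).

    `state` is a dict with keys 'gray', 'count', 'theta'.
    Returns the adjusted matching threshold theta_M.
    """
    dis = ctg(block_mean) - state['gray']            # Eq. (5)
    if abs(dis) > DIS_LIMIT:
        # Large change: re-anchor the gray value, reset the threshold.
        state['gray'] = ctg(block_mean)
        state['count'] = 1
        state['theta'] = theta_init
    else:
        # Small change: running mean of gray, widen theta with dis.
        state['count'] += 1
        state['gray'] = (state['gray'] * (state['count'] - 1)
                         + ctg(block_mean)) / state['count']
        state['theta'] = state['theta'] * (1 + abs(dis) / 255.0)  # Eq. (7)
    return state['theta']
```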
III. Multilayer Background Subtraction

The proposed hierarchical structural CB method reduces the computational complexity of foreground detection, in which multiple codewords are employed to fully describe an image block. Subsequently, the FFRM is employed to update the background model CBs. To adapt to the current situation, the nonbackground information is also used to update the block-based and pixel-based background models. This method provides an independent way to update the background model according to the time that the foreground stays, rather than the former method, which must confirm a color similarity (Sim). Finally, the results are further refined during the pixel-based phase. This phase also provides additional functions


TABLE I
Performance Comparison

A. Hierarchical Background Subtraction Using Multilayer Block-Based CBs

Fig. 5. Flowchart of the proposed background subtraction. Top block: multilayer block-based background subtraction. Bottom block: pixel classification.

that distinguish whether a target belongs to highlight or shadow, which might otherwise confuse the foreground determination procedure.
After the background model construction finishes by training on T frames, as shown in Fig. 2, four layers of CBs, the block-based C_16, C_8, and C_4 and the pixel-based C_1, are obtained; these are then utilized for the proposed multilayer background subtraction. Without loss of generality, the background subtraction starts after T, and the test input frames are F_{T+1}, F_{T+2}, .... By observing (4), we know that in the refining procedure, the proposed method retains the top 70% (proportion parameter epsilon = 0.7) of the codewords according to the priority of importance. Consequently, even if moving objects randomly appear during the training, the proposed method can still build the background information robustly because these objects are associated with unstable codewords, which mostly will not be retained by (4). The main difference between the HCB [12] and the proposed method is that, in total, four layers of various block sizes are employed in this paper, which not only boosts the processing efficiency of the foreground detection but also adaptively solves the dynamic background issue. The pixel-based CBs at the end of the proposed system can also classify the pixels into three types, foreground, shadow, and highlight, as detailed below.

The upper block of Fig. 5 shows the flow of the proposed MCBS. Initially, similar to the construction of the block-based background model, the input frames (F_t) are divided into multiple nonoverlapping blocks, and the feature extraction introduced in Section II-A is applied. Thus, each block is transformed into a 3-D mean-value vector B_M for background subtraction. In the first stage, B_16 is adopted for the first-layer process. Before applying (2) to determine the background, the illumination change procedure formulated in (7) is adopted to adjust the threshold theta_M [defined in (2)] to meet various changes in the lighting, where Value = B_M and Gray = Gray_M in this case. The current block mean value B_16 is used to match the blocks of size 16x16 in the background model C_16 = {c_i^16 | 1 <= i <= L_{C^16}} using the new threshold via (2). If a codeword is matched, then the matched ith codeword c_i^16 is updated [which involves the following equations: c_i^16 = (1 - alpha) * c_i^16 + alpha * B_16, time_{c_i^16} = t, and count = count + 1] together with the gray recorder value Gray_16, as shown below, and the current block is determined to be background:

countG_M = countG_M + 1
Gray_M = [Gray_M * (countG_M - 1) + ctg(B_M)] / countG_M,  M = 4, 8, 16    (8)

where ctg(.) is defined in (6). Otherwise (no codeword is matched), the current block is divided into four 8x8 blocks, and each block is transformed into a block-based vector B_8, which is adopted for the next block-based layer. Equation (2) is applied for the second layer with the new threshold after the illumination change procedure (7), to match the codewords of the background model C_8 against the current block vector B_8. If they match, then the matched codewords are updated, the current block is determined to be background, and the gray recorder Gray_8 is updated; otherwise, the 8x8 block is divided into four 4x4 blocks, similar to the algorithm for the second layer, and likewise for the updating phases. After the three stages are finished, the final phase combines the results yielded from the blocks of the three sizes. In this way, the block-based stage can remove most of the noise and dynamic background; however, it has low Pr. To overcome this problem, the pixel-based stage is adopted to enhance the Pr, which also reduces the false positive rate (FPR), as shown in Table I. The main contribution of the block-based stage is to reduce the redundant foreground detection operations and


to reduce the noise in the dynamic background, because the mean is considered in the block-based stage, and the mean is a good feature for protecting against noise. In addition, a larger block size is employed for the early stages while leaving a smaller block size to the later stages, so the processing speed can be further boosted. The result from the block-based stage is then fed into the pixel classification, which is introduced in the following section. In other words, as long as a block has been matched during the block-based stages, the pixel-based phase is not required, for simplification. To prevent the codeword representativeness in the pixel-based CB from decreasing, a pixel-based update procedure is proposed. The variable countG_M in the top block of Fig. 5 denotes the temporary counter that records the update times of C_16, and update_pixel denotes the period constant that controls how often C_16 triggers a pixel update (the updating parameter update_pixel = 3 for the pixel-based CB). The number 3 represents that our system will update the pixel information, to ensure that no data are missed, after there are three successive matches with a block of size 16x16. A smaller update_pixel means that the pixels can describe the current background more precisely, yet it also leads to a higher computational complexity. Thus, in every update_pixel time interval, the pixel-based update procedure updates the pixels X^t_{m,n} in the current block B_16.
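The cascade described above (try the 16x16 layer, split unmatched blocks into 8x8, then 4x4, and hand survivors to the pixel stage) can be sketched as follows; `match_layer` stands in for the per-layer match of (2) with the illumination-adjusted threshold, and the function names are ours, not the paper's.

```python
def subtract_block(block_mean_fn, match_layer, x0, y0, size):
    """Classify one 16x16 block by descending the block-size layers.

    `block_mean_fn(x, y, size)` returns B_M for the sub-block at (x, y);
    `match_layer(mean, size)` applies Eq. (2) against that layer's CB.
    Returns a list of (x, y, size) regions left for the pixel stage.
    """
    if match_layer(block_mean_fn(x0, y0, size), size):
        return []                      # matched: background at this layer
    if size == 4:
        return [(x0, y0, 4)]           # smallest block layer: defer to pixels
    half = size // 2
    pending = []
    for dx in (0, half):               # split into four sub-blocks
        for dy in (0, half):
            pending += subtract_block(block_mean_fn, match_layer,
                                      x0 + dx, y0 + dy, half)
    return pending
```

Because a match at any layer stops the descent for that region, static background is usually settled at the 16x16 layer, which is where the speedup over a single fixed block size comes from.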
B. Pixel Classification Using Pixel-Based CB

In the bottom block of Fig. 5, the pixel-based CB is used to further classify the property of the pixels in the nonsubtracted area after the block-based background subtraction. After the illumination change procedure [similar to (7)] using the new pixel-based matching threshold theta_1 in (2), the pixels in this area are matched against the corresponding CB C_1. If they match, then the matched codeword c_i^1 is updated, the pixel is classified as background, and the pixel-based gray recorder Gray_1 is updated as in (8). For the unmatched pixels, there might be some fake foregrounds, such as shadows and highlights. To separate these from the true foreground, the former algorithm of Carmona et al. [16] is applied, in which a cone-shaped color model is proposed to improve the CB's color model; this model is more robust and is employed in this paper to classify the pixels.
Fig. 6 shows the color model in the RGB color space, which is constructed by the pixel-based codeword c_i^1 = (p_i^e | e = R, G, B), the ith codeword in the background model CB of length L_{C^1}. To classify the pixels in the RGB color space, a high bound I_MAX and a low bound I_min are calculated as follows:

||c_i^1|| = sqrt( (p_i^R)^2 + (p_i^G)^2 + (p_i^B)^2 )    (9)

I_MAX = alpha * ||c_i^1||    (10)

I_min = beta * ||c_i^1||    (11)

where alpha is greater than 1 and beta is smaller than 1; more specifically, alpha = 1.25 and beta = 0.7, which set a high bound and a low bound, respectively. In Fig. 6, an angle parameter theta_color = 3

Fig. 6. Color model used in pixel classification.

is used to define the region of the cone-shaped color model. In a bright environment, the countermeasure is to widen the middle part of the color model by reducing beta (which also yields a smaller I_min) and increasing theta_color to separate the shadows. Conversely, under a darker environment the highlights are rather apparent, and a cone-shaped color model with a wider top is then obtained; by applying a greater theta_color and alpha (so that a greater I_MAX is yielded), the highlights can be classified correctly. To verify which region the current pixel vector X^t_{m,n} = {x^e_{m,n} | e = R, G, B} belongs to, the projected vector X_proj from X^t_{m,n} onto c_i^1 is calculated first, as follows:

X_proj = ( <X^t_{m,n}, c_i^1> / ||c_i^1|| ) * c^_i^1    (12)

where c^_i^1 = c_i^1 / ||c_i^1|| denotes the unit vector of c_i^1, and the inner product <X^t_{m,n}, c_i^1> is calculated as follows:

<X^t_{m,n}, c_i^1> = x^R_{m,n} * p_i^R + x^G_{m,n} * p_i^G + x^B_{m,n} * p_i^B    (13)

so that ||X_proj|| can be calculated as follows:

||X_proj|| = <X^t_{m,n}, c_i^1> / ||c_i^1||,  since ||c^_i^1|| = 1.    (14)

Subsequently, the angle theta_{X^t_{m,n}, c_i^1} between the current pixel X^t_{m,n} and the codeword c_i^1 can be calculated as follows:

theta_{X^t_{m,n}, c_i^1} = tan^{-1}( dist_{X^t_{m,n}, c_i^1} / ||X_proj|| )    (15)

where dist_{X^t_{m,n}, c_i^1} denotes the distance between X^t_{m,n} and X_proj, calculated as follows:

dist_{X^t_{m,n}, c_i^1} = sqrt( ||X^t_{m,n}||^2 - ||X_proj||^2 )    (16)

where ||X^t_{m,n}|| denotes the L2-norm of X^t_{m,n} and is defined as

||X^t_{m,n}|| = sqrt( (x^R_{m,n})^2 + (x^G_{m,n})^2 + (x^B_{m,n})^2 ).    (17)

If the angle theta_{X^t_{m,n}, c_i^1} is smaller than the angle parameter theta_color, then the current pixel vector belongs to the cone-shaped region. The high bound is shown in green and is defined as the highlight region; the low bound is shown in blue and is

defined as the shadow region. The overall color model used in this paper is organized as follows:

classification_pixel(X^t_{m,n}, c_i^1) =
    Shadow,      if theta_{X^t_{m,n}, c_i^1} < theta_color and I_min <= ||X_proj|| < ||c_i^1||
    Highlight,   if theta_{X^t_{m,n}, c_i^1} < theta_color and ||c_i^1|| <= ||X_proj|| < I_MAX
    Foreground,  otherwise.    (18)

With the color model above, the result of the block-based CB model is further refined in the pixel-based phase. From the cone-shaped color model applied to the public test sequences [18]-[21], a highlight is defined as a test vector greater than the high bound I_MAX in projection, which represents useless information in the foreground.
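Eqs. (9)-(18) combine into the following classifier sketch. The parameter values (alpha = 1.25, beta = 0.7, theta_color = 3) are those stated in the text; interpreting theta_color as degrees is our assumption.

```python
import math

ALPHA_HI, BETA_LO, THETA_COLOR_DEG = 1.25, 0.7, 3.0

def classify_pixel(x, p):
    """Classify pixel x against pixel codeword p (both RGB 3-tuples)
    using the cone-shaped color model of Eqs. (9)-(18)."""
    norm_p = math.sqrt(sum(c * c for c in p))            # Eq. (9)
    i_max, i_min = ALPHA_HI * norm_p, BETA_LO * norm_p   # Eqs. (10)-(11)
    inner = sum(a * b for a, b in zip(x, p))             # Eq. (13)
    proj = inner / norm_p                                # Eq. (14)
    norm_x2 = sum(c * c for c in x)
    dist = math.sqrt(max(norm_x2 - proj * proj, 0.0))    # Eq. (16)
    angle = math.degrees(math.atan2(dist, proj))         # Eq. (15)
    if angle < THETA_COLOR_DEG and i_min <= proj < norm_p:
        return "Shadow"                                  # darker, same hue
    if angle < THETA_COLOR_DEG and norm_p <= proj < i_max:
        return "Highlight"                               # brighter, same hue
    return "Foreground"                                  # Eq. (18)
```

Intuitively, a pixel inside the narrow cone around the codeword differs from the background only in brightness: a shorter projection is a cast shadow, a longer one a highlight, and anything outside the cone is true foreground.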
IV. Fake Foreground Removal Model

Although further refinement of the block-based phase using


the four-layer strategy can lead to a performance able to
address most situations, there are still some specific scenarios
to be considered. For example, a moving object becomes a
stationary background when it stands for a length of time
during the period of background subtraction. The bottom-right
corner of Fig. 2 shows the relationship between the FFRM and
the background model. The FFRM is to update the background
model CBs, while the background model is for background
subtraction.
To adapt to the current situation, the algorithm of the
FFRM, as illustrated in Fig. 7, is used to record the nonbackground information, and the construction method is identical to the background model. Each time that a block is
classified as a nonbackground region, the FFRM is used to
record the information in this block. The FFRM is constructed! and updated in a "frame, which can be expressed as
S M = SiM |1 i LS M , where LS M denotes the length
of the FFRM, and includes a vector to record the features, a
weight variable WsiM to record the updated times (also known
as importance), and a time variable timesiM . After updating
the block-based FFRM, the two-stage FFRM procedure is
proposed to address the current environment, as introduced
below, which is illustrated in the bottom parts of Fig. 7.
1) Delete the FFRM codeword: The time variable is used to check the codewords in the FFRM. If a block-based codeword stays in the FFRM for a long time without being updated, then it is regarded as a temporary codeword to be deleted. In detail, if the period between the current time $t$ and the last updated time of the $i$th temporary codeword, $time_{S_i^M}$, is greater than the fake-foreground deleting parameter $delete_S$ (= 5), then this redundant codeword is removed from the FFRM. The size of the updated $S_M$ changes accordingly, as follows:
$$S_M = \left\{ S_i^M \,\middle|\, t - time_{S_i^M} < delete_S \right\} \tag{19}$$

$$L_{S_M} = \dim(S_M). \tag{20}$$

Fig. 7. FFRM updating the background model for adapting to the scene.

2) Delete and add the background codeword: Following the previous procedure, the weight variable $W_{S_i^M}$ serves as a threshold for deciding whether a codeword should be moved from the FFRM to the background model: if the weight of the $i$th temporary codeword is greater than or equal to the parameter $add_{S,B}$ (= 100), the codeword is added to the background model $B_M$ from the FFRM $S_M$. Herein, the three FFRM parameters, $delete_S = 5$, $delete_B = 500$, and $add_{S,B} = 100$, are fixed for various environments; a further study could adaptively adjust these parameters under various circumstances to yield the optimum effects. This procedure means that the temporary codeword has stayed in the FFRM for a long time and is still being updated; thus, the information in this temporary codeword is sufficiently robust to construct the background, as follows:
$$C_M = \left\{ C_i^M \,\middle|\, t - time_{C_i^M} < delete_B \right\} \cup \left\{ S_i^M \,\middle|\, W_{S_i^M} \ge add_{S,B} \right\},\quad M = 1, 4, 8, 16 \tag{21}$$

$$S_M = \left\{ S_i^M \,\middle|\, W_{S_i^M} < add_{S,B} \right\} \tag{22}$$

$$L_{C_M} = \dim(C_M) \tag{23}$$

$$L_{S_M} = \dim(S_M). \tag{24}$$

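The two-stage maintenance in (19)-(24) amounts to a prune-then-promote pass over the codeword lists, sketched below. The dict layout for a codeword (`"time"`, `"weight"` keys) is an assumption about the data structure; the thresholds follow the parameters stated above.

```python
def update_ffrm(ffrm, background, t, delete_s=5, delete_b=500, add_sb=100):
    """Sketch of the two-stage FFRM maintenance in (19)-(24).

    Each codeword is a dict with 'time' (frame index of its last update)
    and 'weight' (number of updates).  Stage 1 drops temporary codewords
    not updated within delete_s frames; stage 2 promotes codewords updated
    at least add_sb times into the background model and drops background
    codewords stale for delete_b frames.
    """
    # Stage 1, (19)-(20): keep only recently updated temporary codewords.
    ffrm = [s for s in ffrm if t - s["time"] < delete_s]
    # Stage 2, (21)-(22): move sufficiently heavy codewords to the background.
    promoted = [s for s in ffrm if s["weight"] >= add_sb]
    ffrm = [s for s in ffrm if s["weight"] < add_sb]
    background = [c for c in background if t - c["time"] < delete_b] + promoted
    return ffrm, background
```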
Notably, four layers are considered in the block-based FFRM, one associated with each block-based background model; thus, the variable $M$ in Fig. 7 can be 1, 4, 8, and 16 ($M = 1$ is associated with the pixel-based background model). Similar to Fig. 3, the pixel-based CB updating algorithm resembles the block-based updating, and the algorithm used to construct the pixel-based FFRM is identical to the construction of the block-based model. The only difference is that the input vector changes to the pixel vector, which is classified as foreground by the color model in (18). In summary, the proposed FFRM

provides an independent method of updating the background model according to the staying time of the foreground, rather than the former method, which must conform to the color model. With this updating strategy, a background model with compound properties can address usual backgrounds as well as a moving foreground simultaneously.

TABLE II. FPS Comparison Between HCB and Proposed Method Using SD Images


V. Experimental Results
In this section, the performance of the proposed method is evaluated with respect to various criteria. Herein, six criteria are employed: the false positive rate (FPR), true positive rate (TPR), precision (Pr), similarity (Sim), percentage of wrong classifications (PWC) [17], and F-measure (Fm) [17], formulated as
$$\mathrm{FPR} = \frac{fp}{fp + tn};\quad \mathrm{TPR} = \frac{tp}{tp + fn};\quad \mathrm{Pr} = \frac{tp}{tp + fp}$$

$$\mathrm{Sim} = \frac{tp}{tp + fp + fn};\quad \mathrm{PWC} = 100 \cdot \frac{fn + fp}{tp + fn + fp + tn}$$

$$\mathrm{Fm} = 2 \cdot \frac{\mathrm{Pr} \cdot \mathrm{TPR}}{\mathrm{Pr} + \mathrm{TPR}} \tag{25}$$
where tp, tn, fp, and fn denote the numbers of true positives, true negatives, false positives, and false negatives, respectively.
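The six criteria in (25) translate directly into code; the function below is a straightforward transcription of the formulas from the raw counts.

```python
def evaluate(tp, tn, fp, fn):
    """Compute the six evaluation criteria of (25) from raw pixel counts."""
    fpr = fp / (fp + tn)                      # false positive rate
    tpr = tp / (tp + fn)                      # true positive rate (recall)
    pr = tp / (tp + fp)                       # precision
    sim = tp / (tp + fp + fn)                 # similarity
    pwc = 100.0 * (fn + fp) / (tp + fn + fp + tn)
    fm = 2.0 * pr * tpr / (pr + tpr)          # F-measure
    return {"FPR": fpr, "TPR": tpr, "Pr": pr, "Sim": sim, "PWC": pwc, "Fm": fm}
```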
In addition, the video sequences of the public databases [18]-[20] used in this paper lack foregrounds at the beginning; the starting points of the sequences without foregrounds are CAMPUS 200, WATERSURFACE 480, MEETINGROOM 1755, INDOORGTTEST1 342, and INTELLIGENT ROOM 82. However, the public database for change detection [21] does not provide training frames; instead, all of the frames before the first ground truth are used. Thus, different video sequences have different numbers of training frames. Although the training frames differ, the background model is continuously updated; hence, over a sufficiently long time, the proposed method is quite steady.

A. Comparison of the Extracted Features


The average performance and frames-per-second (FPS) comparisons between the HCB [12] and the proposed method are presented in Tables II-V. The proposed MCBS model can yield an even lower computing cost, as explained below. The block-based HCB model employed the high- and low-mean values of the blocks to handle the dynamic background and to improve the processing speed, whereas this paper uses the mean value only and still improves the reliability against the dynamic background problem. Table I shows the comparison in terms of FPS under the SD sequence format, which shows that the proposed scheme can still meet the real-time demand.
Fig. 8 shows the block-based results of these two methods with the test sequence WATERSURFACE [19], of size 160 × 128. The first column of Fig. 8 shows the original images of different frames, the second column shows the results of block-based HCB with a block of size 8 × 8, and the third column shows the results of the proposed method.

TABLE III. Average Performance Comparison of HCB and the Proposed Method (Block-Based Results) Using Video Sequence WATERSURFACE

TABLE IV. FPS Comparison of the HCB and the Proposed Method

The three rows represent different frames (e.g., #401 and #547). Fig. 9(a) shows the statistical curve of the HCB feature values at the same block position, which corresponds to the red block in the first row of Fig. 8. Fig. 9(b) shows the statistical curve of the mean values used in this paper for the same block. The horizontal axis denotes the time index (t) of the frames, and the vertical axis represents the values of the extracted features. Both curves are mostly smooth, which proves that the extracted features for the block-based background models are satisfactory for most parts of this sequence (without foreground). Before a foreground (a person, in this sequence) moves into this block at frame #488, curve (a) changes drastically for a long period, yet the curve should be stable before this block is filled with foreground. With the match function in the HCB, this block should be determined as background before the foreground enters, yet the curve in Fig. 9(a) has very large fluctuations before frame #488, which causes wrong detections. Thus, the BTC feature values used in the HCB are not sufficiently stable to address this issue. Conversely, the curve of the mean feature values used in the proposed method shows good reliability in the block-based background subtraction, as the practical results in the third column of Fig. 8 confirm. The average performance comparisons between the HCB [12] and the proposed method are presented in Table III; these comparisons involve frames 481-525 of the test sequence WATERSURFACE [18].
Beyond reliability, processing time is also an important issue in background subtraction. Table IV shows the FPS comparisons of the HCB scheme and the proposed method. The HCB's BTC values require multiple calculations per color channel and thus impede the processing speed. As discussed in this section, the processing speed is another important issue in computer vision. Herein, an SD (a.k.a. 480p) sequence is involved for the performance test in

TABLE V. Change Detection Benchmark Dataset (the best performance of each metric is circled)

Fig. 8. Block-based background subtraction results with WATERSURFACE. Column 1: Original images. Column 2: Block-based results of HCB [12]. Column 3: Block-based results of proposed method.

terms of the processing efficiency, with a test sequence of size 720 × 480 and a total of 347 frames [the test sequences were established at the National Taiwan University of Science and Technology (NTUST)]. Table II shows the comparison of the FPS rate between the HCB scheme [12] and the proposed method. As can be seen, the proposed method can meet the real-time requirement under this scenario.
B. Performance Comparison
Fig. 10 shows the results using the test sequence WATERSURFACE [18], which contains 636 frames of size 160 × 128. This sequence involves a nonstationary background, such as a rippling sea surface. Compared with the three former methods, MoG [3], CB [8], and HCB [12], the proposed method and the HCB provide better performance. To further examine the practical performance of the HCB and the proposed method, Table I shows the difference; the proposed method provides a better capacity to remove the dynamic background. Under the block size 8 × 8 scenario, the HCB [12] has some drawbacks, such as a blocking effect, as shown at the person's feet in Fig. 10(e). In addition, Fig. 11 shows the test sequence CAMPUS [18], which suffers from a serious dynamic background, such as waving trees and a waving flag in the scene. Again, the proposed method, as shown in Fig. 11(f), provides slightly superior performance to the other former schemes. Fig. 12 shows the indoor scenario using the video sequence MEETINGROOM [18] to compare the HCB and the proposed method. The sequence contains 2964 frames of size 160 × 128, and the background is nonstationary: the shutter in the background is waving, which is difficult to address. As shown in Fig. 12(c), the HCB renders more fake foreground, whereas in Fig. 12(d) the proposed method presents better quality by removing more of the fake foreground.
To provide a more objective evaluation, the average performances over all of the above three videos are organized in Table I, in which the best performance is circled for each metric. Obviously, the proposed scheme is superior to the former methods. Moreover, another comparison with 20 former techniques using a different dataset [21] is also provided, and the corresponding results are organized in Table V. This dataset involves six different scenarios that use the same parameter values: baseline, dynamic background, camera jitter, intermittent object motion,

1818

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 23, NO. 10, OCTOBER 2013

Fig. 11. Background subtraction results with CAMPUS. (a) Original image (frame #695). (b) Ground truth. (c) MoG [3]. (d) CB [8]. (e) HCB [12]. (f) Proposed method.

Fig. 9. Block-based feature values at the same block position [associated with row 1 in Fig. 8, size = 8 × 8, position (8,16) to (15,23)]. (a) BTC values of HCB [12]. (b) Proposed mean values.

Fig. 12. Background subtraction results with MEETINGROOM. (a) Original image (frame #2236). (b) Ground truth. (c) HCB [12]. (d) Proposed method.

Fig. 13. Background subtraction results with INDOORGTTEST1. (a) Original image (frame #365). (b) HCB [12]. (c) Proposed method without light procedure. (d) Proposed method with light procedure.

Fig. 10. Background subtraction results with WATERSURFACE. (a) Original image. (b) Ground truth. (c) MoG [3]. (d) CB [8]. (e) HCB [12]. (f) Proposed method.

shadow, and thermal. In addition, each measure in this table is averaged over all of the videos in the six different cases. According to the results, the proposed method obtains the best performance in terms of the metrics PWC and Fm. Moreover, with respect to the remaining five measures (Re, Sp, FPR, FNR, and Pr), the proposed method can still be considered a good method across various environments because of its balanced performance (Figs. 13 and 14).
Regarding the processing efficiency, the two test sequences WATERSURFACE and CAMPUS are adopted for testing without loss of generality, and the corresponding results are organized in Table IV. Moreover, consider a comparison with ViBe [13] at a higher resolution of 320 × 240: because ViBe simply collects the mean value and a set of samples to establish the background models, it achieves a high processing speed of 200 FPS [13], while the proposed scheme yields 120 FPS with frames of the same size. Although the proposed method obtains a relatively lower processing efficiency, it is still more than satisfactory in terms of the real-time requirement of prospective practical applications. Pertaining to the memory issue, suppose that the RGB color model is utilized in ViBe [13]; then, the corresponding memory consumption for a frame of size I × J is simply I × J × 3N bytes, where N denotes the number of samples stored in each pixel-based model, and the number 3 denotes the red, green, and blue color channels. Conversely, in the proposed CB algorithm, either a block or a pixel is represented by a CB individually, as a compressive form of the background of a long-term image sequence. Moreover, each CB is composed of codewords that comprise the colors transformed by an innovative color distortion metric (as defined in Section III-B). The required memory varies across different environments (the required number of codewords is not constant), which means that the memory consumption is difficult to estimate accurately. Hence, let C denote the length of each CB, and suppose that the four background layers (1, 4, 8, and 16) have an identical CB length in this estimation. For each frame of size I × J, the memory consumption of the proposed method is (1/16² + 1/8² + 1/4² + 1) × I × J × 3C bytes because each block requires only one mean per color channel (one byte each). Compared with ViBe: because the average length of the proposed CB usage is approximately 10 (C = 10) empirically and the length

GUO et al.: FAST BACKGROUND SUBTRACTION BASED ON A MULTILAYER CODEBOOK MODEL

Fig. 14.

1819

Accuracy value for the sequence INDOORGTTEST1. (a) FPR. (b) TPR. (c) Pr. (d) Sim. (e) F-measure. (f) PWC.

(N) of ViBe's background model is 20 according to its experimental settings [13], the memory consumption of the proposed method is about 32.5 bytes/pixel, which is superior to the 60 bytes/pixel required by ViBe. On top of this superiority in memory consumption, the performance of the proposed method is also better than that of ViBe as well as ViBe+, as shown in Table V, in terms of the PWC and Fm metrics.
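The per-pixel figures quoted above can be reproduced with a short calculation; the factor of three color bytes per codeword is an assumption made here to match the quoted 32.5 bytes/pixel, not a detail stated explicitly in the text.

```python
def mcbs_bytes_per_pixel(c=10):
    """Per-pixel memory of the four-layer codebook model: a layer of block
    size b contributes 1/b**2 codebooks per pixel, and each codebook holds
    C codewords of 3 color bytes (assumed) -- (1/16^2 + 1/8^2 + 1/4^2 + 1)
    codebooks per pixel in total."""
    layers = (16, 8, 4, 1)
    return sum(1.0 / b ** 2 for b in layers) * 3 * c

def vibe_bytes_per_pixel(n=20):
    """ViBe stores N RGB samples per pixel, i.e. 3N bytes."""
    return 3 * n
```

With C = 10 this gives roughly 32.5 bytes/pixel for the proposed model versus 60 bytes/pixel for ViBe with N = 20, matching the numbers in the text.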
Intuitively, the block size should adapt to the frame resolution for higher suitability. However, suppose that a larger block size (>16) is employed; then, two extreme conditions could arise: 1) all of the pixels in that block are passed (to the layer of the pixel level), or 2) all of the pixels in that block are rejected (as foreground/highlight/shadow). The first scenario arises because a large block cannot describe all of the variations of the pixels inside it when only one mean is adopted, as in the proposed method; in the second scenario, the layer structure of the proposed algorithm cannot yield the expected speed-up because all of the elements within that block are processed by the pixel-based model, causing a relatively high computational complexity. Consequently, according to the above considerations, we opt for the constant block sizes 1, 4, 8, and 16 in the proposed method.
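The coarse-to-fine pass over the block layers can be sketched as follows. This is an illustrative traversal, not the authors' implementation: `is_background_block` stands in for the codebook match test at each layer, and the queue-based splitting order is an assumption; only the 16/8/4/pixel hierarchy comes from the text.

```python
import numpy as np

def multilayer_pass(frame_gray, is_background_block, sizes=(16, 8, 4)):
    """Coarse-to-fine block testing sketch.  Blocks whose mean matches the
    background codebook at a layer are masked out; the rest are split into
    the next finer layer, and whatever survives the 4x4 layer is handed to
    the pixel-based model.  is_background_block(mean, size, y, x) is an
    assumed interface to the per-layer codebook match."""
    h, w = frame_gray.shape
    pending = [(0, y, x) for y in range(0, h, sizes[0])
               for x in range(0, w, sizes[0])]
    pixels_to_test = []
    while pending:
        level, y, x = pending.pop()
        b = sizes[level]
        mean = frame_gray[y:y + b, x:x + b].mean()
        if is_background_block(mean, b, y, x):
            continue                      # whole block explained as background
        if level + 1 < len(sizes):        # split into the next finer layer
            nb = sizes[level + 1]
            pending += [(level + 1, y + dy, x + dx)
                        for dy in range(0, b, nb) for dx in range(0, b, nb)]
        else:                             # hand the 4x4 residue to pixel level
            pixels_to_test += [(y + dy, x + dx)
                               for dy in range(b) for dx in range(b)]
    return pixels_to_test
```

Blocks removed at the 16 × 16 layer cost a single mean comparison for 256 pixels, which is where the speed-up of the multilayer scheme comes from.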

C. Performances With/Without the Illumination-Change Detection Procedure

The concept of pixel classification was introduced in Section III-B. In this section, the video sequence INTELLIGENT ROOM [20] is employed for testing, with the results given in Fig. 15: Fig. 15(a) shows the images over a period of time, and the corresponding results are shown in Fig. 15(b), without compensation by any post-processing. In the classification, the foreground, shadow, and highlight pixels are colored blue, red, and green, respectively. As can be seen, the pixel-based color model can resist the influence of illumination changes, and the foregrounds are classified successfully. The corresponding execution speed is 90.56 FPS. Notably, the proposed method cannot address the camouflage problem. Future work can focus on using multiple cameras or an infrared system to acquire the additional depth information required to solve this issue.

Fig. 15. Result of the pixel classification using INTELLIGENT ROOM. (a) Original image. (b) Result of the pixel classification.

VI. Conclusion

In this paper, a multilayer adaptive block-based strategy was proposed along with a mean feature adopted from the separated blocks. The proposed method removed most of the background when suffering from a dynamic background and solved the blocking-effect deficiency of the HCB method. Moreover, the multilayer scheme also significantly improved the processing efficiency: given a video of SD resolution, the proposed scheme still provided real-time processing capability. However, because the MCBS employed RGB information for modeling the background subtraction, it was difficult to distinguish the foreground from the background when they have similar colors. In future work, depth information will be incorporated into the proposed MCBS model to solve this camouflage issue. In more detail, by obtaining information from static video surveillance cameras and considering spatially registered, time-synchronized parameters of color and depth image spaces across a series of times, we believe that a better result for extracting foreground objects of a similar color from the backgrounds will be achieved. Overall, the proposed method was a good candidate for intelligent object detection.
References
[1] R. Cucchiara, C. Grana, M. Piccardi, and A. Prati, "Detecting moving objects, ghosts, and shadows in video streams," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 10, pp. 1337-1342, Oct. 2003.
[2] T. Horprasert, D. Harwood, and L. S. Davis, "A statistical approach for real-time robust background subtraction and shadow detection," in Proc. IEEE Int. Conf. Comput. Vision, vol. 99, Sep. 1999, pp. 1-19.
[3] E. J. Carmona, J. Martinez-Cantos, and J. Mira, "A new video segmentation method of moving objects based on blob-level knowledge," Pattern Recognit. Lett., vol. 29, no. 3, pp. 272-285, Feb. 2008.
[4] C. Stauffer and W. E. L. Grimson, "Learning patterns of activity using real-time tracking," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 747-757, Aug. 2000.
[5] N. Martel-Brisson and A. Zaccarin, "Learning and removing cast shadows through a multidistribution approach," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 7, pp. 1133-1146, Jul. 2007.
[6] J.-S. Hu and T.-M. Su, "Robust background subtraction with shadow and highlight removal for indoor surveillance," EURASIP J. Adv. Signal Process., vol. 2007, no. 1, pp. 1-14, Jan. 2007.
[7] G. Xue, J. Sun, and L. Song, "Background subtraction based on phase and distance transform under sudden illumination change," in Proc. IEEE Int. Conf. Image Process., Sep. 2010, pp. 3465-3468.
[8] K. Kim, T. H. Chalidabhongse, D. Harwood, and L. Davis, "Real-time foreground-background segmentation using codebook model," Real-Time Imaging, vol. 11, no. 3, pp. 172-185, Jun. 2005.
[9] M. Wu and X. Peng, "Spatio-temporal context for codebook-based dynamic background subtraction," AEU Int. J. Electron. Commun., vol. 64, no. 8, pp. 739-747, Aug. 2010.
[10] L. Maddalena and A. Petrosino, "A self-organizing approach to background subtraction for visual surveillance applications," IEEE Trans. Image Process., vol. 17, no. 7, pp. 1168-1177, Jul. 2008.
[11] T. Kohonen, Self-Organization and Associative Memory, 2nd ed. Berlin, Germany: Springer-Verlag, 1988.
[12] J.-M. Guo, Y.-F. Liu, C.-H. Hsia, M.-H. Shih, and C.-S. Hsu, "Hierarchical method for foreground detection using codebook model," IEEE Trans. Circuits Syst. Video Technol., vol. 21, no. 6, pp. 804-815, Jun. 2011.
[13] O. Barnich and M. Van Droogenbroeck, "ViBe: A universal background subtraction algorithm for video sequences," IEEE Trans. Image Process., vol. 20, no. 6, pp. 1709-1724, Jun. 2011.
[14] V. Pham, P. Vo, V. T. Hung, and L. H. Bac, "GPU implementation of extended Gaussian mixture model for background subtraction," in Proc. IEEE Int. Conf. Computing Communication Technologies Research Innovation Vision Future, Nov. 2010, pp. 1-4.
[15] J.-S. Chiang, C.-H. Hsia, H.-W. Peng, C.-H. Lien, and H.-T. Li, "Saturation adjustment method based on human vision with YCbCr color model characteristics and luminance changes," in Proc. IEEE Int. Symp. Intell. Signal Processing Commun. Syst., Nov. 2012, pp. 136-141.
[16] E. J. Carmona, J. Martinez, and J. Mira, "A new video segmentation method of moving objects based on blob-level knowledge," Pattern Recognit. Lett., vol. 29, no. 3, pp. 272-285, Feb. 2008.
[17] N. Goyette, P.-M. Jodoin, F. Porikli, J. Konrad, and P. Ishwar, "changedetection.net: A new change detection benchmark dataset," in Proc. IEEE Comput. Soc. Conf. Computer Vision Pattern Recognition Workshops, Jun. 2012, pp. 1-8.
[18] Statistical Modeling of Complex Background for Foreground Object Detection. [Online]. Available: http://perception.i2r.a-star.edu.sg/bk_model/bk_index.html
[19] Performance Evaluation of Surveillance Systems. [Online]. Available: http://www.research.ibm.com/peoplevision/performanceevaluation.html
[20] Shadow Detection. [Online]. Available: http://cvrr.ucsd.edu/aton/shadow/index.html
[21] A Change Detection Benchmark Dataset. [Online]. Available: http://www.changedetection.net

Jing-Ming Guo (M'06-SM'10) was born in Kaohsiung, Taiwan, on November 19, 1972. He received
the B.S.E.E. and M.S.E.E. degrees from National
Central University, Taoyuan, Taiwan, in 1995 and
1997, respectively, and the Ph.D. degree from the
Institute of Communication Engineering, National
Taiwan University, Taipei, Taiwan, in 2004.
He is a Professor with the Department of Electrical
Engineering, National Taiwan University of Science and Technology, Taipei. His research interests
include multimedia signal processing, multimedia
security, computer vision, and digital halftoning.
Dr. Guo was invited to be the Technical Program Chair for the IEEE
International Symposium on Intelligent Signal Processing and Communication
Systems in 2012 and the IEEE International Symposium on Consumer
Electronics in 2013. He has been invited to be a Lecturer for the IEEE Signal
Processing Society Summer School on Signal and Information Processing in
2012. He has been elected as the Chair of the IEEE Taipei Section GOLD
Group in 2012. He has served as a Guest Co-Editor of two special issues for
the Journal of the Chinese Institute of Engineers and the Journal of Applied
Science and Engineering. He serves on the Editorial Board of the Journal of
Engineering and The Scientific World Journal. Currently, he is an Associate
Editor of IEEE Signal Processing Letters, IEEE Transactions on
Multimedia, Information Sciences, and Signal Processing. He is a Senior
Member of the IEEE Signal Processing Society and a fellow of the IET. He
received the Outstanding Youth Electrical Engineer Award from the Chinese
Institute of Electrical Engineering in 2011, the Outstanding Young Investigator
Award from the Institute of System Engineering in 2011, the Best Paper Award
from the IEEE International Conference on System Science and Engineering
in 2011, the Excellence Teaching Award in 2009, the Research Excellence
Award in 2008, the Acer Dragon Thesis Award in 2005, the Outstanding
Paper Awards from IPPR, Computer Vision and Graphic Image Processing in
2005 and 2006, and the Outstanding Faculty Award in 2002 and 2003.
Chih-Hsien Hsia (M'10) was born in Taipei City,
Taiwan, in 1979. He received the B.S. degree in
computer science and information engineering from
Taipei Chengshih University of Science and Technology, Taipei, Taiwan, in 2003 and the M.S. degree
in electrical engineering and the Ph.D. degree from
Tamkang University, New Taipei, Taiwan, in 2005
and 2010, respectively.
He was a Visiting Scholar with Iowa State University, Ames, IA, USA, in 2007. From 2010 to
2013, he was a Post-Doctoral Research Fellow with
the Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan. From 2010 to 2013, he was also an Adjunct Associate Professor with the Department of Electrical Engineering, Tamkang University. He currently is an Associate Professor
with the Department of Electrical Engineering, Chinese Culture University,
Taipei, Taiwan. His research interests include DSP IC design, image/video
processing, multimedia compression system design, multiresolution signal
processing, and computer/robot vision processing.
Dr. Hsia is a member of the Phi Tau Phi scholastic honor society. He has
served as a Guest Editor of special issues for Journal of Applied Science and
Engineering.
Yun-Fu Liu (S'09) was born in Hualien, Taiwan,
on October 30, 1984. He received the M.S.E.E. degree from the Department of Electrical Engineering,
Chang Gung University, Taoyuan, Taiwan, in 2009.
He is currently pursuing the Ph.D. degree with the
Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei,
Taiwan.
He was a Visiting Scholar with the Department
of Electrical and Computer Engineering, University
of California, Santa Barbara, CA, USA, in 2012.
His research interests include computer vision, machine learning, digital
halftoning, steganography, image compression, and enhancement.
Mr. Liu is a Member of the IEEE Signal Processing Society.
Min-Hsiung Shih was born in Kaohsiung, Taiwan,
on December 25, 1987. He received the B.S. degree from the Department of Computer and Communication Engineering, National Kaohsiung First
University of Science and Technology, Kaohsiung,
Taiwan, in 2010. Currently, he is pursuing the
master's degree with the Department of Electronic
Engineering, National Taiwan University of Science
and Technology, Taipei, Taiwan.
His research interests include pattern recognition
and intelligent surveillance systems.
Cheng-Hsin Chang was born in Taipei, Taiwan,
on September 4, 1990. He received the B.S.E.E.
degree from National Taiwan University of Science
and Technology, Taipei, Taiwan, in 2012, where he
is currently pursuing the master's degree with the
Department of Electrical Engineering.
His research interests include video synopsis, object tracking, and intelligent surveillance systems.

Jing-Yu Wu was born in Nantou, Taiwan, on October 29, 1990. She received the B.S. degree from
the Department of Electronic Engineering and the
B.A. degree from the Department of Applied Foreign
Language, National Taiwan University of Science
and Technology, Taipei, Taiwan, in 2012, where she
is currently working toward the master's degree with
the Department of Electrical Engineering.
Her research interests include behavior analysis.

Voice-Activity Home Care System


Oscal T.-C. Chen, Senior Member, IEEE, Y. H. Tsai, C. W. Su, P. C. Kuo, and W. C. Lai
Dept. of Electrical Engineering, and Advanced Institute of Manufacturing with High-Tech Innovations,
National Chung Cheng University,
Chiayi, 62102 Taiwan

Abstract—This work proposes a voice-activity home care system which can construct a life log associated with voices at home. Accordingly, the techniques of sound-pressure-level calculation, abnormal sound detection, noise reduction, text-independent speaker recognition, and keyword spotting are developed. In abnormal sound detection and speaker recognition, we adopt a two-stage recognition process: a Gaussian Mixture Model (GMM) for sound rejection, followed by a Support Vector Machine (SVM) for sound classification. The experimental results reveal that the proposed abnormal sound detection, speaker recognition, and word spotting reach accuracy rates above 82%, 90%, and 87%, respectively. Based on the recognized abnormal sounds or special words, an emergency event can be identified for home care, where the speaker is known as well. Finally, the abovementioned recognition results versus time scales can fairly build a daily life log for home care.

Keywords—special sound recognition; noise reduction; speaker recognition; keyword spotting; daily log; Gaussian mixture model; support vector machine

I. INTRODUCTION
Due to the growth of the senior population, home care has become a critical topic in recent years. With increasing age, the majority of older people spend most of their time at home, so home care and accident prevention have grown into an area of concern to everyone. Accordingly, how a family caregiver can remotely and effectively monitor the elderly has become an important research direction.
In the prior art, many researchers have proposed various voice processing techniques to realize home care. For example, an event model was established to determine whether an emergency occurred [1]. Unusual sounds like coughs, groans, wheezes, and cries for help were detected to understand the health condition of a subject [2]. Some special words of an utterance, like "help", were recognized with location perception to provide necessary assistance [3]. In addition to the context of voices asking for help, abnormal sounds, such as screaming and glass breaking, were discriminated from normal sounds, where the Gaussian Mixture Model (GMM) and Support Vector Machine (SVM) were commonly used for recognition [4]-[6].
Thanks to the advance of technology, IP cameras are usually used to capture the life activities of elderly persons. The recorded video can be further processed to identify when a special event occurs. However, a camera has a limited angle of view and cannot be deployed in all areas of the home, such as the bathroom. Hence, IP microphones can aid cameras to overcome these shortcomings [7]. Besides, video-based surveillance deprives users of privacy. This work explores how to build a daily life log from recorded sounds for home care. Accordingly, the identification and context awareness of speakers' and environmental voices at home are developed to construct a life log.
II. PROPOSED HOME CARE SYSTEM
This work employs multiple IP microphones to obtain voice data. First,
silence detection is performed using an energy-ratio technique to
avoid unnecessary computing, where the Sound Pressure Level (SPL), in
decibels, is calculated over a voice frame of 256 points (32 ms). The
SPLs computed and recorded over a whole day present the energy
variations of environmental sounds. This parameter can reveal the
activity time of family members; restated, sounds in the daytime tend
to have high SPLs and sounds at midnight low SPLs. If the situation is
not consistent with the corresponding period in the daily log, an
unusual event may have occurred. Once a sound is acquired, it is
further classified as a speech or non-speech sound. Second, our system
identifies whether there is a special sound of a siren, scream, sob,
crash, glass breaking, or crying. Third, speakers are distinguished
and their speech is translated to text via the Google web speech API.
Meanwhile, the far-field recorded sounds go through noise cancellation
to raise speech clarity. Additionally, special words such as those
asking for help are recognized. Fourth, once an emergency event
arises, the proposed system will issue a call to a family member or a
hospital. Figure 1 shows the block diagram of the proposed
voice-activity home care system.
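The frame-level SPL computation and silence gating described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 256-point/32 ms frame implies an 8 kHz sampling rate, and the reference level and the -40 dB silence threshold below are assumed values not given in the paper.

```python
import numpy as np

FRAME_LEN = 256  # 32 ms per frame at the 8 kHz rate implied by the paper


def frame_spl(frame, ref=1.0):
    """SPL (dB) of one frame; `ref` is an assumed full-scale reference."""
    rms = np.sqrt(np.mean(np.square(frame.astype(float))))
    return 20.0 * np.log10(max(rms, 1e-12) / ref)


def silence_mask(signal, threshold_db=-40.0):
    """True for frames whose SPL falls below a (hypothetical) threshold."""
    n = len(signal) // FRAME_LEN
    frames = signal[: n * FRAME_LEN].reshape(n, FRAME_LEN)
    return np.array([frame_spl(f) < threshold_db for f in frames])
```

Frames flagged as silent would simply be skipped by the later recognition stages, which is how unnecessary computing is avoided.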

Figure 1. Proposed voice-activity home care system.

978-1-5090-2455-1/16/$31.00 ©2016 IEEE

A. Special Sound Recognition


The recognition of special sounds adopts a two-stage process in which
the first stage is a 16-mixture GMM that rejects normal sounds, and
the second stage is a linear-kernel SVM that identifies the six
anomalies. In particular, the Sequential Floating Backward Selection
(SFBS) feature selection scheme is employed to choose six features:
zero-crossing rate, spectral kurtosis, spectral flatness, spectrum
spread, spectral roll-off, and mel-frequency cepstral coefficients.
Figure 2 shows the flowchart of special sound recognition.

Figure 2. Flowchart of special sound recognition.
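The two-stage reject-then-classify idea can be sketched as below. This is a simplified stand-in, not the paper's models: a single diagonal Gaussian replaces the 16-mixture GMM, a nearest-class-mean rule replaces the linear-kernel SVM, and the rejection threshold is an assumed value.

```python
import numpy as np


class TwoStageRecognizer:
    """Stage 1 rejects 'normal' sounds; stage 2 labels the anomalies."""

    def fit(self, normal_feats, anomaly_feats, anomaly_labels):
        # Stage 1: one diagonal Gaussian over normal-sound features
        # (stand-in for the paper's 16-mixture GMM).
        self.mu = normal_feats.mean(axis=0)
        self.var = normal_feats.var(axis=0) + 1e-6
        # Stage 2: per-class means (stand-in for a linear-kernel SVM).
        labels = np.array(anomaly_labels)
        self.means = {c: anomaly_feats[labels == c].mean(axis=0)
                      for c in sorted(set(anomaly_labels))}
        return self

    def _loglik(self, x):
        return -0.5 * np.sum((x - self.mu) ** 2 / self.var
                             + np.log(2 * np.pi * self.var))

    def predict(self, x, reject_threshold=-10.0):
        if self._loglik(x) > reject_threshold:
            return "normal"  # stage 1: accepted as a normal sound
        dists = {c: np.linalg.norm(x - m) for c, m in self.means.items()}
        return min(dists, key=dists.get)  # stage 2: closest anomaly class
```

In the paper itself, each incoming feature vector would carry the six SFBS-selected features rather than the toy 2-D features used for illustration.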

B. Noise Reduction
Owing to far-field recording, a microphone receives many sound
sources, which may include the noise of a fan, an air conditioner, or
a range hood as well as speakers' voices. To make a speaker's voice
clear, a noise reduction scheme is applied. Based on the silence
frames of a dialogue, the ambient noise is modeled and estimated. The
waveform of a voice is partitioned into many frames, which go through
the Fourier transform. Based on the a priori SNR and the a posteriori
SNR [8], a spectral gain function is estimated and then multiplied
with the spectrum of each sound frame. Afterwards, the gain-weighted
spectrum is inversely transformed to obtain a time-domain sound frame
with lessened noise. In our system, only speaker recognition takes the
pre-processing of noise reduction. To lower the computational
complexity, the likelihood that a sound segment contains human speech
is estimated, where an autocorrelation scheme is performed frame by
frame. Such an approach prevents our system from performing noise
reduction on special and environmental sounds. The autocorrelation
scheme can be formulated as

R(τ) = Σ_{n=0}^{P-1} x(n) x(n+τ),                                  (1)

where x(n) is the signal of a voice frame, n is a time index, P is the
frame size, and τ is the number of delay points. Owing to the
short-term correlation among speech signals, τ can be set to around 2
to 20; in our tests, τ = 8 yields a fairly good estimate. The larger
R(τ) is, the higher the probability that the voice frame contains
speech signals.

C. Speaker Recognition
Text-independent speaker recognition is carried out in our system [9].
Speaker recognition uses a 16-mixture GMM to reject speakers who are
not family members, and then a linear-kernel SVM to identify who they
are. Because the task is text-independent, the characteristics of
speaker utterances, rather than their content, must be well captured.
Accordingly, the mean and standard deviation of the first-order
differential mel-frequency cepstral coefficients and of the
first-order differential linear prediction coefficients, together with
the mean, standard deviation, kurtosis, and skewness of the
fundamental frequencies, are considered to improve the recognition
rate. In particular, the SFBS feature selection scheme is utilized to
discover the adequate parameters, which include low-frequency
mel-frequency cepstral coefficients, high-frequency linear prediction
coefficients, and fundamental frequencies. Figure 3 shows the
flowchart of speaker recognition.

Figure 3. Flowchart of speaker recognition.

D. Keyword Spotting
Nowadays, the Google web speech API is a quite convenient speech
recognition engine that supports multiple languages. Hence, the
proposed system sends the enhanced speech waveform file to Google for
text generation. Additionally, we define special words associated with
emergencies (e.g., "help", "hurt", "fire", "police", "thief") that are
spotted in the resulting text for further action. To preserve privacy,
this function is only enabled by a special command, like the "Alexa"
wake word used by the Echo from Amazon [10].

III. EXPERIMENTAL RESULTS
The demo room has a length of 6.3 meters, a width of 3.2 meters, and a
height of 3.6 meters, and three webcams are placed on two interior
walls for the experiments. The room configuration with the three
webcams is depicted in Figs. 4 and 5. The sounds produced in this demo
room are recorded, processed, and identified by our system.

Figure 4. Configuration of our demo room.

Figure 5. Views of the three webcams at the demo room.

A. Special Sound Recognition


In the experiments, we emulated a home environment to record special
sounds for training and testing, where each special sound has 65 to
111 pieces. The lengths of the sound pieces are around 1 to 5 seconds,
depending on the sound type. For instance, a glass-breaking sound
takes around 1 to 2 seconds, whereas a siren is likely to last more
than 5 seconds. Each recorded sound goes through a two-stage
recognition process consisting of rejection and classification. The
measurement results in Table I reveal that the average accuracy rate
is up to 82.5%. In these results, the sounds of glass breaking and
sirens do not reach 80%. The reason is that the physical
characteristics of glass breaking and sirens are similar to those of
crashes and screams, respectively, resulting in a few misjudgments.
TABLE I. ACCURACIES OF SPECIAL SOUND RECOGNITION.

Special sounds | First step (GMM) | Second step (SVM) | Average
Glass broken   | 75.0%            | 93.3%             | 70.0%
Baby crying    | 90.0%            | 94.4%             | 85.0%
Crash          | 85.0%            | 94.1%             | 80.0%
Scream         | 95.0%            | 94.7%             | 90.0%
Sob            | 85.0%            | 94.1%             | 80.0%
Siren          | 95.0%            | 78.9%             | 75.0%
Accuracy       | 87.5%            | 94.3%             | 82.5%

B. Noise Reduction
IP cameras at the demo room were used to individually record far-field
speaker voices and the noise of an air conditioner at its breeze and
strong-wind modes. Afterwards, waveform summations of the speaker
sounds and the air-conditioning noise were performed at
Signal-to-Noise Ratios (SNRs) from 1 dB to 12 dB. Tables II and III
list the performance of noise reduction under the breeze and
strong-wind conditions, respectively. The smaller the input SNR is,
the larger the improvement is; restated, air-conditioning noise can be
fairly well minimized.

TABLE II. PERFORMANCE OF NOISE REDUCTION UNDER THE BREEZE CONDITION.

SNR (dB) of inputs                       | 1   | 2   | 3    | 4    | 5    | 6
SNR (dB) of outputs with noise reduction | 6.1 | 6.8 | 7.4  | 8.1  | 8.7  | 9.3
SNR (dB) of inputs                       | 7   | 8   | 9    | 10   | 11   | 12
SNR (dB) of outputs with noise reduction | 9.9 | 10.4 | 11.0 | 11.6 | 12.1 | 12.6

TABLE III. PERFORMANCE OF NOISE REDUCTION UNDER THE STRONG-WIND CONDITION.

SNR (dB) of inputs                       | 1    | 2    | 3    | 4    | 5    | 6
SNR (dB) of outputs with noise reduction | 6.9  | 7.5  | 8.1  | 8.6  | 9.3  | 9.9
SNR (dB) of inputs                       | 7    | 8    | 9    | 10   | 11   | 12
SNR (dB) of outputs with noise reduction | 10.4 | 11.0 | 11.5 | 12.0 | 12.4 | 12.9

C. Speaker Recognition
In the experiments, four persons of a family, three men and one woman,
stayed in the demo room and had free talks under background noise,
while far-field sound recording was carried out. Each person had 140
recorded sound pieces with lengths ranging from 2 to 10 seconds. The
recorded sound pieces from all family members were used to establish
the acoustic model for identification. In the performance evaluation,
a speaker uttered one or multiple sentences, whose sounds were
far-field recorded and cleansed of ambient noise. To accommodate the
various lengths of a speaker's voice, the proposed speaker recognition
is based on 2-second segments: when a sound piece is longer than 2
seconds, it is partitioned into many 2-second segments, and each
segment has its own identification result. The final speaker
recognition is determined by majority voting. Table IV illustrates the
performance of the proposed speaker recognition system, where the
rejection process is included. The longer the talking time is, the
more reliably the accuracy of speaker identification increases.

TABLE IV. PERFORMANCE OF SPEAKER RECOGNITION.

Talking time | 1-2 seconds | 4-5 seconds | 10 seconds
Accuracies   | 78%         | 88%         | 90%
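The segment-wise decision rule used for the final speaker identification can be sketched as follows. This is a minimal illustration under stated assumptions: per-segment classification is abstracted into a list of labels, and the 8 kHz sampling rate is implied rather than given by the paper.

```python
from collections import Counter


def split_into_segments(samples, rate=8000, seconds=2):
    """Partition a sound piece into 2-second segments (drop the last partial one)."""
    seg = rate * seconds
    return [samples[i:i + seg] for i in range(0, len(samples) - seg + 1, seg)]


def majority_vote(segment_labels):
    """Final speaker decision from the per-segment identification results."""
    return Counter(segment_labels).most_common(1)[0][0]
```

With more segments to vote over, a few misidentified segments are outvoted, which matches the observation that longer talking time yields more reliable identification.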

D. Keyword Spotting
Through the speech recognition service of the Google web speech API,
the content of a speaker's voice is converted into a text output.
Accordingly, special words can be pre-defined and searched for in the
text file returned by the Google web speech API. Here, ten popularly
used keywords associated with asking for help are adopted in our
system. Table V lists the accuracies of keyword spotting. Owing to
far-field recording and environmental noise, uttering a keyword more
times in one or multiple sentences yields better word-spotting
accuracy. Based on the recognized message, our system can issue a call
to another family member or a hospital for further verification.
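Once the transcript comes back from the speech recognizer, keyword spotting reduces to a simple text search. A minimal sketch follows; the keyword set below lists only the example words given in the text, not all ten keywords used by the system.

```python
# Example emergency words from the paper; the full system uses ten keywords.
EMERGENCY_KEYWORDS = {"help", "hurt", "fire", "police", "thief"}


def spot_keywords(transcript, keywords=EMERGENCY_KEYWORDS):
    """Return the emergency keywords found in a speech-to-text transcript."""
    words = transcript.lower().replace(",", " ").replace(".", " ").split()
    return sorted(set(words) & set(keywords))


def should_alert(transcript):
    """Trigger a call when any emergency keyword is spotted."""
    return bool(spot_keywords(transcript))
```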
TABLE V. RECOGNITION RATES OF KEYWORD SPOTTING.

Keyword utterances | One time | Two times | Three times | Four times
Accuracies         | 55%      | 75%       | 83%         | 87%

E. Daily Log of the Voice-Activity System
According to the above recognition results, a life log of voice
activities is established, as listed in Table VI. Abnormal events can
be identified from special sounds and help keywords. Additionally, the
SPLs of every day are recorded and analyzed to discover inconsistent
periods, where the profiling computation of SPLs is based on one week.
Speaker identification helps in understanding life activities and
social events. These data are beneficial for knowing who is at home
(family members or visitors) as well as when and what they talk about.
Therefore, the proposed voice-activity system can effectively realize
smart home care applications.

TABLE VI. LIFE LOG OF VOICE ACTIVITIES.

Time               | Member's Speech Context      | Abnormal Sound & Keywords | Conditions
2015/5/26 16:46:30 | "I'm thirsty."               | None                      | Normal (guest), SPL=56.2 dB
2015/5/26 16:46:50 | "please give me some water." | None                      | Normal, SPL=58.5 dB
2015/5/26 16:47:10 | "okay."                      | None                      | Normal, SPL=58.4 dB
2015/5/26 16:47:30 | None                         | Crash                     | Crash, SPL=72.8 dB
2015/5/26 16:47:50 | "Oh god that's hurt."        | None & "hurt"             | Accident, SPL=65.4 dB
2015/5/26 16:48:10 | "please help me."            | None & "help me"          | SOS, SPL=62.2 dB

IV. CONCLUSION
This work develops a voice-activity home care system that consists of
SPL calculation, noise reduction, special sound recognition, speaker
identification, keyword spotting, and daily-log establishment. To
reduce the computational complexity, the SPL and the speech intensity
of a sound piece are analyzed to determine whether voice-activity
recognition and speech-related identification, respectively, are
activated. In special sound and speaker recognition, the two-layer
recognition processes of rejection and classification are effectively
employed. Experimental results show that special sound, speaker, and
keyword recognition attain accuracies of 82%, 90%, and 87%,
respectively. Based on these computed and recognized results, a
voice-related daily log is effectively built for home care
applications.

ACKNOWLEDGMENT
This work is partially supported by the Ministry of Education and the
Ministry of Science and Technology, Taiwan, under contract number
MOST 104-2221-E-194-046-MY2.

REFERENCES
[1] Danilo Hollosi, Jens Schroder, Stefan Goetze, and Jens-E. Appell,
"Voice activity detection driven acoustic event classification for
monitoring in smart homes," Proc. of IEEE International Symposium on
Applied Sciences in Biomedical and Communication Technologies, pp.
1-5, 2010.
[2] Min-Quan Jing, Chao-Chun Wang, and Ling-Hwei Chen, "A real-time
unusual voice detector based on nursing at home," Proc. of IEEE
International Conference on Machine Learning and Cybernetics, vol. 4,
pp. 2368-2373, 2009.
[3] Y.-W. Liu et al., "Developing voice care: real-time methods for
event recognition and localization based on acoustic cues," Proc. of
IEEE International Conference on Multimedia and Expo Workshops, pp.
1-6, July 2014.
[4] Jianzhao Qin, Jun Cheng, Xinyu Wu, and Yangsheng Xu, "A learning
based approach to audio surveillance in household environment,"
International Journal of Information Acquisition, vol. 3, no. 3, pp.
1-7, 2006.
[5] Huy Dat Tran and Haizhou Li, "Sound event recognition with
probabilistic distance SVMs," IEEE Transactions on Audio, Speech, and
Language Processing, vol. 19, no. 6, pp. 1556-1568, August 2011.
[6] M. A. Sehili, D. Istrate, B. Dorizzi, and J. Boudy, "Daily sound
recognition using a combination of GMM and SVM for home automation,"
Proc. of the 20th European Signal Processing Conference, pp.
1673-1677, 2012.
[7] O. T.-C. Chen, Yi-Heng Tsai, Che-Wei Su, Po-Chen Kuo, and Pin-Chih
Chen, "Voice-activity recognition system for home care," Proc. of the
37th Annual International Conference of the IEEE Engineering in
Medicine and Biology Society, Milan, Italy, August 25-29, 2015. (Late
breaking research poster paper)
[8] Richard C. Hendriks, Richard Heusdens, and Jesper Jensen,
"Forward-backward decision directed approach for speech enhancement,"
Proc. of the Int. Workshop on Acoustic Echo and Noise Control, pp.
109-112, Sept. 2005.
[9] Tomi Kinnunen and Haizhou Li, "An overview of text-independent
speaker recognition: from features to supervectors," Speech
Communication, vol. 52, pp. 12-40, 2010.
[10] Amazon Echo: Always Ready, Connected, and Fast.
http://www.amazon.com/Amazon-SK705DI-Echo/dp/B00X4WHP5E

113

      


   
$C
$,KU)E,AU )68
',AU'2U )68
)6,A6
%,8U)E,AU

%]nDj>nUc]>Y}.DmD>fBR}%]mnUnrnD})%}
0513} *.1&5)%} "fD]cAYD}%*-}#>]cU}5]UtDfmUnw}cE}1BUD]BD}>]C}3DBS]cYcQw}
#>]cU}6UDn]>Z}
zCcC>nof>]}nRU}Y>]
YD}nR>_R}R>U
}nf>]}{}ZUB>
DCr
}t]}
#.ILF-/L ) ") !

") ) !) ) ) ) "" "$)

)  ) ") ! ) ) #!" ) ) ) ") ) !!#!) )
! ") ) !)  ) $") "") "") %!)  ) )
) !!!") !) !) !) !!) ) "!)  ) %) '() )
&") #")  ") ) ) #) ") !$) ")
 ) )  ) $") "") ) " !")  ) $"!)
) # ) !"#') #) ) ') "!!) ) ")  ) !"') )
!")

) ) #")  ) ) ) ")  ) !) )

! ) !#") )  ) !) )  ) )


) !#!) ) & ") !#"!) $#") ') !)  )
") ) !!"$"') !%) "") #) ) )  ")
") ) " ) ) ") #!) ) ")  ") %!)
"") )  ) # ) ) " !")  ) $"!) %) ) )
#!) )" ) !'!")  ) " ) ")

&3RQDF0I -.BDF?->U 3P3BL"U 03L3/L9DB"U />-II:/-L9DB"U LF-/=9B5"U


9?-53"UP903D"UIDNB0"U?N>L9?309-"U

%
}

8?GD@.I-G5@?}

"UtD]} nRD} CDtDYcdZD]n} cE} ZUBfcDYDBnfc]UB} nDBR]cYcQw}


U]EcfZ>nUc]} nDBR]cYcQw} >]C} >rncZ>nUc]} nDBS]cYcQw} mZ>fn}
RcZDm} mZ>fn} ArUYCU]Qm} ZcfD} >]C} ZcfD} uUCDYw} R>tD} ADD]}
CDtDYcdDC}>]C}ArUYn
}3RD}CDtDYcdZD]n}R>m}ADD]}DvnD]CDC} U]nc}
cnRDf} CcZ>U]m} >]C} RD>YnR} B>fD} Um} >} CcZ>U]} nR>n} R>m} ZrBR}
U]nDfDmn}U]}cE}fDmD>fBRDfm
}3RD} RD>YnR}mZ>fn}RcZD}Bc]BDdn}Um}>}
dfcZUmU]Q}>]C}BcmnDEEDBnUtD}u>w}cE}UZdfctU]Q}>BBDmm}nc}RcZD}
B>fD} Ecf} nRD} DYCDfYw} >]C} CUm>AYDC
} )>]w} fDmD>fBR} >]C}
CDtDYcdZD]n} dfcWDBnm}>fD}c]QcU]Q} O]CDC}Aw}U]nDj>nUc]>Y}>]C}
QctDjZD]n>Y} cfQ>]Uy>nUc]m
} 7D} B>]} BUnD} RDfD} mcZD} fDmD>fBR}
dfcWDBnm} U]} uRUBR} ZrYnUZDCU>} U]EcfZ>nUc]} >fD} dfcBDmmDC} Ecf}
Zc]UncfU]Q} >]C} DtD]n} CDnDBnUc]} n>mXm}:= :=
} %]}:=} :=}
B>ZDf>m} uDfD} rmDC} nc} >rncZ>nUB>YYw} CDnDBn} >]C} fDBcQ]UyD}
>BnUtUnUDm} >A]cfZ>Y} DtD]nm} mrBR} >m} E>YYU]Q} cE} dDcdYD} YUtU]Q} U]}
RcrmD
} ;]} mnrCUDm}:=}:=}:=} mcr]C} >]C} mdDDBR} fDBcQ]UnUc]}
uDfD} >ddYUDC} nc} CDnDBn} >rCUc} DtD]nm} uRUBR} B>]} AD} rmDErY} nc}
DYCDfYw}cf}CDdD]C>]n}dDcdYD}U]}B>mD}cE}dfcAYDZ
}

Bc]n>U]m} tUCDc}>]C}>rCUc}mnfD>Z}cE}mUv} >A]cfZ>Y} DtD]nm


}3RUm}
C>n>mDn}BcrYC}AD}rmDC}nc}D]Bcrf>QD}fDmD>fBR}c]}>A]cfZ>Y}DtD]n}
CDnDBnUc]}A>mDC}c]}ZrYnUZcC>Y}U]EcfZ>nUc]} UU}uD}dfcdcmDC}>}
ZDnRcC} Ecf} tUCDc} A>mDC} >A]cfZ>Y} CDnDBnUc]} nR>n} BcZAU]Dm}
ZcnUc]} nDZdY>nD} >]C} YcB>YUy>nUc]} U]EcfZ>nUc]} UUU} uD} Z>CD} >}
BcZd>f>nUtD}mnrCw}c]}>rCUc}BY>mmUIB>nUc]
}
3RD}fDZ>U]U]Q}cE}nRUm}d>dDf}Um}cfQ>]UyDC}>m}EcYYcum
};]}nRD}
mDBnUc]} %%} uD} Z>XD} >]} ctDftUDu} cE} crf} dfcdcmDC} mwmnDZ} Ecf}
>A]cfZ>Y} DtD]n} CDnDBnUc]
} 7D} CDmBfUAD} U]} CDn>UY} nuc} Z>U]}
ZcCrYDm} Ecf} >A]cfZ>Y} DtD]n} CDnDBnUc]} A>mDC} c]} tUCDc} >^C}
>rCUc} U]} mDBnUc]} %%%
} 1DBnUc]} %6} mRcum} DvdDfUZD]n>Y} fDmrYnm
}
1DBnUc]}6}Bc]BYrCDm}>]C}QUtDm}UCD>m}Ecf}OnrfD}ucfXm
}
%%
}

#(?@D=$<}0J0?G}.0G0-G5@?}FOFG0=}BJ0J50L}

;]} C>UYw} YUED} DYCDfYw} CDdD]C>]n} dDcdYD} uUnR} dRwmUB>Y}


CUm>AUYUnUDm} Ecf} U]mn>]BD} >]C} d>nUD]n} mcZDnUZD} uRc} YUtD} >Yc]D}
B>]}ZDDn}>]}>BBUCD]n}U]}nRDUf}RcZD}nR>n}BcrYC}R>fZ}nRDZ
}3R>n}
Um} uRw} Z>]w} fDmD>fBR} >]C} CDtDYcdZD]n} dfcWDBnm} R>tD} ADD]}
Bc]CrBnDC} U]} cfCDf} nc} >rncZ>nUB>YYw} CDnDBn} >A]cfZ>Y} >]C}
DZDfQD]Bw}DtD]nm
}

2da} } FxpqYi}RnVbdqYVqsnY}

%]} crf} mnrCw} uD} ucrYC} YUXD} nc} CD>Y} uUnR} ZcfD} >A]cfZ>Y}
DtD]nm}]rZADf}nR>n}BcrYC}R>ddD]}nc}DYCDfYw} d>nUD]nm} CUm>AYDC}
Aw} DvdYcUnU]Q} ZrYnUZDCU>} U]EcfZ>nUc]} tUCDc} >]C} >rCUc}
mUQ]>Ym
}3RD}CDtDYcdDC}mwmnDZ}Um}U]nD]CDC}nc}CDnDBn}>BBUCD]nm}
mrBR} >m} E>YYU]Q} ADU]Q} ZcnUc]YDmm} >]C} >BBUCD]nm} uRUBR} ucrYC}
dfcA>AYw}dfcCrBD}>A]cfZ>Y}mcr]Cm
}

;]} nRD} M>ZDucfX} cE} crf} mnrCw} uD} Bc]mUCDf} mUv} >A]cfZ>Y}
DtD]nm} }& 33.6+D } 3@.6+D57;.763%::D76D;,%D)779 D } :;!.6+D
.6D 9%:;D 9775 D } "%.6+D 7<;D $779D .6D 376+D ;.5% D } "6795 3D
:8%%#,D %+D :#9% 5.6+ D :,7<;.6+ D 6$D } "6795 3D 676C
:8%%#,D%+D"9% 2.6+D 6$ & 33.6+D:7<6$: D

3RD} dfcdcmDC} mwmnDZ} nRD]} Um} BcZdcmDC} cE} nuc} Z>U]}


ZcCrYDm} tUCDc} A>mDC} >A]c`a>Y} DtD]n} CDnDBnUc]} >]C} >rCUc}
A>mDC} >A]cfZ>Y} DtD]n} CDnDBnUc]
} ,rf} Z>U]} Bc]nfUArnUc]m} >fD}
nSfDDEcYC} U} uD} R>tD} BfD>nDC} >} drAYUBUnw} C>n>mDn} uRUBR}

3RD}!UQ
}}mRcum}nRD}>fBRUnDBnrfD}cE}crf}dfcdcmDC}mwmnDZ
}
;n} Bc]n>U]m} nSfDD} Z>U]} Y>wDfm} } %Qx_~__rl Un} B>dnrfDm}
tUmr>Y} >]C} >rCUc} mUQ]>Y} McZ} B>ZDf>m} >]C} ZUBfcdRc]Dm}
DerUddDC}U]}nRD}D]tUfc]ZD]n} }AzrQW~~_l[ Un}Bc]n>U]m}tUCDc}

!  !!   USU 4...

!U

>]>YwmUm} >]C} c]D} Ecf} >rCUc} >]>YwmUm} ZcCrYDm


} YY} BcfD}
dfcBDmmU]Q} } >]>YwyU]Q} >fD} dDfEcfZDC} >n} nRUm} Y>wDf} }
%vvd_QL_rl nRUm}Y>wDf}Um}QD]DfUB}Un}>YYcum}>YY}XU]Cm}cE}tUCDc} }
>rCUc} A>mDC} >ddYUB>nUc]
} %]} nRD} Bc]nDvn} cE} nRUm} ucfX} Um} nRD}
DtD]n}CDnDBnUc]
}
%%%
}

nRUm} u>w} uD} B>]}fDZctD} mcZD}E>YmD}>Y>fZm}cBBrlDC}uRD]}uD}


>ddYw}#,"16)}CDnDBncf}c]}nRD}uRcYD}UZ>QD
}

#(?@D=$<}1J0?G}/0G0-G5@?}+$F0.},?}
K5.0@#t.5@}8?2@D=$G5@?}

%]} nRD} Bc]nDvn} cE} crf} ucfX} uD} nfw} nc} DvdYcfD} AcnR} tUmr>Y}
>]C} >rCUc} U]EcfZ>nUc]} nc} fDBcQ]UyD} nRD} DtD]nm} cE} U]nDfDmn
} %]}
nRUm} mDBnUc]} nRD} B>d>BUnw} cE} D>BR} ZcC>YUnw} tUCDc} >]C} >rCUc}
Ecf}>A]cfZ>Y}DtD]n}CDnDBnUc]}Um}>]>YwyDC
}7D}Ifmn}dfDmD]n}crf}
dfcdcmUnUc]m} Ecf} D>BR} ZcC>YUnw} mDd>f>nDYw
} n} nRD} D]C} cE} nRUm}
mDBnUc]} uD} uUYY} QUtD} mcZD} CUmBrmmUc]m} c]} nRD} dcmmUAUYUnw} nc}
BcZAU]D}>rCUc}>]C}tUmr>Y}U]EcfZ>nUc]}

2da} } .YqYVqdkj} nYpshqp} kUqRdjYX} `ki} Xd\\YnYjVdja} VsnnYjq} \nRiY} vdqb}
URVganksjX} \nRiY}

DmUCD}nc}>tcUC}ZUmmDC}CDnDBnUc]}B>rmDC}Aw}#,"16)}uD}
uUYY} XDDd} CDnDBnUc]} nR>n} m>nUmP} Bc]CUnUc]m} nc} AD} mnUYY} >} RrZ>]}
f>nUc} ADnuDD]} uUCnR} >]C} RDUQRn} dDfBD]n>QD} cE} EcfDQfcr]C}
dUvDYm}>]C}nRD}Acr]CU]Q}Acv}nc}XDDd}q>BXV]Q}Yc]QDf
}

D .$%7D :%$D=%6;D%;%#;.76D

Zc]Q} nRD} } >A]cfZ>Y} DtD]nm} cE} U]nDfDmn} uD} >]>YwyD} U]}


nRUm} mDBnUc]} nRD} B>d>BUnw} cE} rmU]Q} tUCDc} Ecf} fDBcQ]UyU]Q} }
DtD]nm}nR>n}>fD} 4
& 33.6+D75D;,%D"%$D79D$<9.6+D> 32.6+ D }
3@.6+D 57;.763%::D 76D ;,%D)779 D } :;!.6+D ;77D 376+D .6D ;,%D 9%:;D
9775 D } "%.6+D 7<;D 7&D ;,%D 9775D ;77D 376+D 3c} Cc} nRUm} uD}
dfcdcmD} >} ZDnRcC} nR>n} BcZAU]Dm} ZcnUc]} nDZdY>nDm} >]C}
YcB>YUy>nUc]} U]EcfZ>nUc]
} %]} nRUm} mDBnUc]} uD} Ifmn} dfDmD]n}
RrZ>]} CDnDBnUc]} nf>BXU]Q} >]C} YcB>YUy>nUc]} ZcCrYD
}3RD]} uD}
CDmBfUAD}>A]cfZ>Y}DtD]n}CDnDBnUc]}ZDnRcC
}
 "1%#;D$%;%#;.76 D ;9 #2.6+D 6$D37# 3.A ;.76D
3c}AD}>AYD}nc}CDnDBn}nRD}dfDmD]BD}cE}dDcdYD}>]C}nRDVf}YcB>nUc]}
V]} } UZ>QD} dY>]D} >m} uDYY} >m} } fccZ} md>BD} fD>Y} ucfYC} uD}
B?@x} crn} nTiDD} Z>U]} n>mXm} A>BXQfcr]C} ZcCDYU]Q} Rs\>]}
CDnDBnUc]}>]C}RrZ>]}q>BXU]Q
}
D  #2+97<6$D7$%3.6+D
3RD} cAWDBn}CDnDBnUc]} >mmrfDm} >]} >rncZ>nUB} U]UnU>YUy>nUc]} cE} nRD}
nf>BXDf} >m} uDYY} >m} dfctUCDm} cAmDft>nUc]m} Ecf} C>n>} >mmcBU>nUc]
}
%]}crf}Bc]nDvn}nRD}B>ZDf>}Um}Iv}Arn}mBD]D}B>]}Bc]n>U]}ZctU]Q}
A>BXQfcr]C} YUXD} u>tU]Q} Brfn>U]} >]C} UYYrZU]>nUc]}t>fU>nUc]m
}
3c}R>]CYD}nc}nRUm} uD}dfcdcmDC}nc}rmD}mDQZD]n>nUc]}nDBR]UerD}
A>mDC} c]} BcCD}AccX} nDBS]UerD} :=
} 3RUm} ZDnRcC} Um}
DvdDfUZD]n>YYw} mRcu]} nc} AD} ZcfD} DHBUD]n} U]} nUZD ZDZcfx}
>]C} dfDBUmUc]} ZD>]U]Q} mcZD} ZctU]Q} DYDZD]nm} U]} nRD}
A>BXQfcr]C}Um}Bc]mUCDfDC}>m} A>BXQfcr]C
}!cf}nDBR]UB>Y}CDn>UY}
mDD}nRD}cfUQU]>Y}d>dDf}:=
}
" D 7=.6+D"1%#;D$%;%#;.76D
,]BD} nRD} A>BXQfcr]C} ZcCDY} Um} ArUYn} QUtD]} D>BR} tUCDc}
M>ZD} nRD} ZctU]Q} cAWDBnm} CDnDBnUc]} Um} B>ffUDC} crn} Aw}
CUEEDfD]BU]Q}nRD}BrffD]n} UZ>QD} uUnR}nRD}A>BXQfcr]C}ZcCDY
}3c}
fDZctD} ]cUmDm} uD} nSfDmRcYC} nRD} CUEEDfD]n} UZ>QD
}
)cfdRcYcQUB>Y} cdDf>ncfm} >fD} nRD]} >ddYUDC} EcYYcuDC} Aw}
Bc_]DBnDC} BcZdc]D]n} >]>YwmUm}nc}Qfcrd}dUvDYm}U]nc}AYcAm
}

7D}B>]}cAmDftD}V]}!UQ
}}nR>n}rmU]Q}A>BXQfcr]C}mrAq>BnUc]}
dDcdYD} >fD} CDnDBnDC} Arn} nRDVf} YcB>YUy>nUc]} Um} ]cn} fD>YYw} dDfEDBn}
nRD}Acr]CU]Q}AcvDm}>fD}ZcmnYw}AUQQDf}nR>]}RrZ>]
}1cZDnUZD}>}
d>fp} cE} A>BXQfcsbC} Um} Bc]mUCDfDC} >m} >} E>YmD} >Y>gZ
} 3c} fDZctD}
nRUm} XU]C} cE} E>YmD} >Y>gZm} Jfmn} uD} DvnD]C} >YY} Acr]CU]Q} AcvDm}
nRD]} >ddYw} #,"16)}:=} A>mDC} RrZ>]} CDnDBncf} c]} D>BR}
DvnD]CDC}Acr]CU]Q}Acv} Ecf}tDfUIB>nUc]
}7D}]cnUBD}>Ymc}nR>n}Aw}

K

O

2da} } R} .YqYVqdkj} nYpshqp} UhRVg} nYVqRjahYp} Ux} Rllhxdja} 4@3 FJ=} kj}
vbkhY}diRaY} U}.YqYVqdkj}nYpshqp}nYX}nYVqSahY}Ux}Rllhxdja}4@3 FJ=}kj}
qbY} YwqYjXYX} nYadkj} anYYj} nYVqRjahY} GbY} \RhpY} RhRni} dj} R} piRhhYn}
nYVqRjahY}dp}jkv}nYikuYX}dj}U }qbY}hkVRhdyRqdkj}k\}bsiRj}dp}iknY}lnYVdpY}

# D 9 #2.6+D
!cf}nf>BXU]Q}RrZ>]} uD}dfcdcmD}nc}rmD}nRD}nf>CUnUc]>Y}'>YZ>]}
GYnDf} nR>n} R>m} ADD]} mRcu]} nc} AD} QccC} D]crQR} U]} Ycn} cE}
mrftDUYY>]BD} >ddYUB>nUc]m
} ,AmDft>nUc]} >]C} dfcBDmm} ]cUmD} >fD}
mrddcmDC}>m}uRUnD}]cUmD}uUnR}">rmmU>]}CUmnfUArnUc]m
}

2da} } R}GnRVgdja}nYpshqp }R}(ksjXdja}Ukw}nYlnYpYjqp}qbY}VsnnYjq}hkVRqdkj}


k\}qbY}bsiS }qbY}nYX}hdjY}dp}bdp}qnRfYVqknx!} U}0RVb}lYnpkj}SX}bdp}qnRfYVqknx}
RnY}nYlnYpYjqYX}Ux}R}Vkhkn}dj}ishqdlhY}bsiRj}qnRVgdja}

%]} crf} B>mD} uD} ucrYC} YUXD} nc} ArUYC} >} ZrYnUdYD} RrZ>]}
nf>BXU]Q}mc}uD}]DDC}nc}Cc}>}ZcfD}BcZdYDv}nf>BX} cAmDft>nUc]}
>mmcBU>nUc]
} 3RD} >mmcBU>nUc]} ADnuDD]} >} nf>BX} >]C} >]}
cAmDft>nUc]}uUYY}AD}mDYDBnDC}A>mDC}c]}>}Z>nBR}ZD>mrfD}nR>n}Um}
nRD} rBYUCU>]} CUmn>]BD} ADnuDD]} nuc} #,"} CDmBfUdncfm
} %E} >}
nf>BX}CcDm}]cn}I]C}>]}cAmDft>nUc]}ZUmmDC}CDnDBnUc]}uD}XDDd}
nRUm} nf>BX} U]} mDtDf>Y} M>ZDm} r]nUY} Un} I]C} >]} cAmDft>nUc]} U]} nRD}
]Dvn} M>ZD
} NDf} UZdcfn>]n} ZUmmDC} cAmDft>nUc]m} uD} CDYDnD}
nRUm} nf>BX
} !cf} >YY} fDZ>U]U]Q} cAmDft>nUc]m} uD} BfD>nD} ]Du}
nf>BXm
} !UQ
} } mRcum} mcZD} Dv>ZdYDm} cE} RrZ>]} CDnDBnUc]} >]C}
nf>BXU]Q
}
 "6795 3D%=%6;D$%;%#;.76D
Zc]Q}nRDmD}Ecrf}DtD]nm}nRD}Ifmn}DtD]n}E>YY}R>m}>nnf>BnDC}
Z>]w}ucfXm}U]}nRD}BcZdrnDf}tUmUc]}BcZZr]Unw
}BBcfCU]Q}nc}
: =} nRUm} DtD]n} B>]} AD} CDBcZdcmDC} U]} Ecrf} dR>mDm
} 3RD} dfD

!U

E>YY}dR>mD}BcffDmdc]Cm}nc}C>UYw}YUED}ZcnUc]m} uUnR}cBB>mUc]>YYw}
mrCCD]} ZctDZD]nm} CUfDBnDC} ncu>fCm} nRD} Qfcr]C} YUXD} mUnnU]Q}
Ccu]}cf}BfcrBRU]Q}Ccu]
}3RD}BfUnUB>Y}dR>mD} BcffDmdc]CU]Q}nc}
nRD}E>YY} Um}DvnfDZDYw} mRcfn
}3RUm} dR>mD}B>]}AD}CDnDBnDC}Aw} nRD}
ZctDZD]n} cE} nRD} AcCw} ncu>fC} nRD} Qfcr]C} cf} Aw} nRD} UZd>Bn}
mRcBX} uUnR} nRD} Lccf
} 3RD} dcmnE>YY} dR>mD} Um} QD]Df>YYw}
BR>f>BnDfUyDC}Aw} >} dDfmc]} ZcnUc]YDmm} c]} nRD} Qfcr]C} Wrmn} >NDf}
nRD}E>YY
}%n}B>]}AD}CDnDBnDC}Aw}>}YwU]Q}dcmUnUc]}cf}Aw}>]}>AmD]BD}
cE}Y>fQD}ZcnUc]
}}fDBctDfw}dR>mD}B>]}DtD]nr>YYw}cBBrf}UE} nRD}
dDfmc]} Um} >AYD} nc} mn>]C} rd} >Yc]D} cf} uUnR} nRD} RDYd} cE} >]cnRDf}
dDfmc]
}!UQ
}}UYYrmnf>nDm}CUEEDfD]n}dR>mDm}cE}>}E>YY}crn}cE}ADC
}

3RD} RrZ>]} mR>dD} >]>YwmUm} Um} Cc]D} Aw} >ddfcvUZ>nU]Q} >}


dDfmc]}Aw}>]}DYYUdmD}CDF[DC}Aw}Unm}BD]nDf}Unm}cfUD]n>nUc]}>]C}nRD}
YD]QnR} cE} Unm} Z>Wcf} >]C} ZU]cf} mDZU>vDm
} 3RD} >ddfcvUZ>nDC}
DYYUdmD} QUtDm} rm} U]EcfZ>nUc]} >Acrn} nRD} mR>dD} >^C} cfUD]n>nUc]} cE}
nRD} dDfmc]} U]} nRD} UZ>QD
} 7D} BcZdrnD} nRD} cfUD]n>nUc]} mn>]C>fC}
CDtU>nUc]} >]C} mn>]C>fC} CDtU>nUc]} cE} f>nUc} ADnuDD]} Z>Wcf} >]C}
ZU]cf} mDZU>vDm} U]} Crf>nUc]} cE} nUZD} D
Q
} $m} >]C} A>mD} c]} nRD}
EcYYcuU]Q} cAmDft>nUc]m} } %E} >} dDfmc]} E>YYm}dDkD]CUBrY>fYw} nc}
nRD} B>ZDf>} cdnUB>Y} >vUm} nRD]} nRD} cfUD]n>nUc]} uUYY} BR>]QD}
mUQ]UIB>]nYw} >]C} cfUD]n>nUc]} mn>]C>gC} CDtU>nUc]} uUYY} AD} RUQR
} %E}
nRD} dDfmc]} Wrmn} u>YXm} cfUD]n>nUc]} mn>]C>fC} CDtU>nUc]} uUYY} AD}
Ycu} } %E} >} dDfmc]} E>YYm} d>g>YYDYYw} nc} nRD} B>ZDf>} cdnUB>Y} >vUm}
nRD]} nRD} f>nUc} uUYY} BR>]QD} >]C} nRD} mn>]C>fC} CDtU>nUc]} cE} f>nUc}
ADnuDD]}Z>Wcf}>]C}ZU]cf}mDZU>vDm}uUYY}AD}RUQR
} 4E nRD}dDfmc]}
Wrmn}u>YXm}nRUm}ZD>mshD}uUYY}AD}Ycu
}
D <$.7D" :%$D=%6;D%;%#;.76D

2da} } 5hhspqnRqdkj}k\}Xd_nYjq}lbRpYp}k\}qbY}YuYjq}\Rhh}ksq}k\}UYX}

3RDfD} >fD} >} ]rZADf} cE} ucfXm} R>tD} ADD]} dfcdcmDC} Ecf} E>YY}
DtD]n} CDnDBnUc]
} 3RDmD} ucfXm} B>]} AD} CUtUCDC} U]nc} nuc}
B>nDQcfUDm
} 3RD} ucfXm} ADYc]QU]Q} nc} nRD} Ifmn} B>nDQcfw} nfw} nc}
ZcCDY} >]C} nc} fDBcQ]UyD} nRD} E>YY} DtD]nm} Aw} rmU]Q} I]UnD} mn>nD}
Z>BRU]D} #))} #UCCD]} )>fXct} )cCDY} :=} uRUYD} nRD}
mDBc]C} BcZdrnD} nRD} ZcnUc]} nDZdY>nDm} mrBR} >m} )#%} )cnUc]}
#Umncfw} %Z>QD}:=
} %]} nRUm} d>dDf} Ecf} nRD} E>YY} DtD]n} U]mdUfDC}
nRD}ucfX}cE:}=}uD}dfcdcmD}>}E>YY}DtD]n}>YQcfUnRZ}BcZAU]U]Q}
AcnR}cAWDBn}YcB>YUy>nUc]}crndrn}>]C})#%
}
Zc]Q}}DtD]nm}cE}U]nDfDmn} nRD}nRUfC}>]C}nRD}EcrfnR}DtD]nm}
:; @.6+D;77D376+D.6D;,%D9%:;D9775D>]C} "%.6+D7<;D7B;,%D9775D
;77D 376+
D>fD} U]EDffDC} CUfDBnYw} McZ} nRD} crndrn} cE} RrZ>]}
YcB>YUy>nUc]
} 3RD}Ifmn}>]C}nRD}mDBc]C}DtD]nm} ' 33.6+D*75D;,%D
"%$D 79D $<9.6+D > 32.6+D >]C} 3@.6+D 57;.763%::D 76D ;,%D)779
D
>fD}fDBcQ]UyDC}Aw}nRD}>YQcfUnRZ}UYYrmnf>nDC}U]}!UQ
}
}
(++




#$ + %% + %#+ +  *% +

  ( !#$

+ $+ + %+ +

  ( #+ 


+

 % + "&%% + &$+  +
% + + %+  + #!#$%+
%+!#$ +
+


 
 

+ %+! $% + +!#$ + + %+!#' &$+

 +

+

  ( $+ +#

+

 +

+ +  % + #+ +


+ #+ %%+  % + #+ +

3RD}dfcAYDZ}cE}CDnDBnU]Q}>A]cfZ>Y}mcr]Cm}BcrYC}AD}mcYtDC}
Aw}BY>mmUPU]Q}mcr]Cm
}>dnrfDC}mcr]C}Um}IfmnYw}BY>mmUIDC}U]nc}
mdDDBR} >]C} ]c]mdDDBR
} n} nRUm} mnDd} nRDfD} mRcrYC} AD} >}
mdDDBR ]c]mdDDBR} CUmBfUZU]>ncf
} 1DBc]CYw} U]} nrj} mdDDBR}
mcr]C} Um} BY>mmUIDC} U]nc} ]cfZ>Y} mdDDBR} >]C} >A]cfZ>Y} mdDDBR}
>]C} ]c]mdDDBR} mcr]C} Um} BY>mmUIDC} U]nc} ]cfZ>Y} ]c]mdDDBR}
>]C} >A]cfZ>Y} ]c]mdDDBR
} n} nRUm} mnDd} Un} fDerUfDm} nuc} cnRDf}
CUmBfUZU]>ncfm}]cfZ>Y >A]cfZ>Y}mdDDBR}>]C}]cfZ>Y >A]cfZ>Y}
]c]mdDDBR
}!cf}nRD}drkcmD}cE}mnrCw}>]C}CDtDYcdU]Q}nRD}nSfDD}
CUmBfUZU]>ncfm} Un} Um} ]DDCDC} nc} ArUYC} Ecrf} mcr]C} Bcfdcf>}
]cfZ>Y} mdDDBR} >A]cfZ>Y} mdDDBR} ]cfZ>Y} ]c]mdDDBR} >]C}
>A]cfZ>Y}]c]mdDDBR
}

+



+

  ( 

1cr]C} BY>mmUIB>nUc]} dfcAYDZ} R>m} ADD]} mnrCUDC} Ecf} >} Yc]Q}


nUZD}>]C}>ddYUDC}uUCDYw}U]}Z>]w}>ddYUB>nUc]m}:=}:=}:=}:=}
:= :=
} >fYw} mnrCw} c]} mdDDBR ]c]mdDDBR} BY>mmUIB>nUc]}
uDfD} dfDmD]nDC} U]}:=} U]} uRUBR} >rnRcfm} Bc]BD]nf>nDC} U]}
CUmBfUZU]>nU]Q} mdDDBR} McZ}ZrmUB}c]}Afc>CB>mn
} %]}nRD}mnrCUDm}
:=} :=} :=} :=} mcr]C} BY>mmUIB>nUc]} uDfD} >ddYUDC} Ecf}
ArUYCU]Q} >A]cfZ>Y} mcr]C} CDnDBnUc]}>]C} BY>mmUIB>nUc]}mwmnDZm}
uRUBR} ucrYC} AD} rmDC} Ecf} mrftDUYY>]BD} >ddYUB>nUc]m
} ]C} U]}
nRDmD} mnrCUDm} >A]cfZ>Y} mcr]Cm} >fD} CDI]DC} >m} nRcmD} BcZU]Q}
McZ} DZDfQD]Bw} mUnr>nUc]m} mrBR} >m} mBfD>Z} Qfc>]} Bfw} cE}
d>nUD]n}cf}mcr]Cm}cE}E>YYm}cf}AfD>X}cE}cAWDBnm}

#$ + )$+
 % $$+ +
%+ 

#+

+ %% +

2da} } DYVkajdqdkj}Rhakndqbi}\kn}qvk}YuYjqp}2Rhhdja}`ki}qbY}UYX}kn}Xsndja}


vRhgdja }SX}<xdja}ikqdkjhYpp}kj}qbY}^kkn}

1U]BD} nRD} Ifmn} DtD]n} Um} CDmBfUADC} Aw} >} Y>fQD} ZcnUc]} uRUYD}
nRD} mDBc]C} Um} fDdfDmD]nDC} Aw} >} mZ>YY} ZcnUc]
} 7D} rmD} )#%} nc}
DmnUZ>nD}dDfmc]}ZcnUc]}>m}ZDnRcC}cE}:=
}#cuDtDf} nRD}Z>U]}
CUEEDfD]BD} cE} crf} ZDnRcC} >]C} nR>n} cE}:=} Um} nRD} %9:76D
$%;%#;.76 D ;9 #2.6+D 6$D 37# 3.A ;.76D mnDd
} >mDC} c]} nRUm} mnDd}
uD} tDfUP} nRD} RwdcnRDmUm} .:D ;,%D 8%9:76D 76D ;,%D "%$D 3RD}
fDmrYn} cE} nRUm} tDfUIB>nUc]} >YYcum} nc} fDZctD} E>YmD} CDnDBnUc]}
ADB>rmD} UE} dDfmc]} Y>wm} ZcnUc]YDmm} U]} nRD} ADC} nRUm} Um} ]cfZ>Y}
mUnr>nUc]
} )cfDctDf} UE}nRD}mwmnDZ}X]cum}nR>n}nRD}dDfmc]}Um}c]}
nRD}ADC}Un}CcDm}]cn}]DDC}nc}Cc}E>YY}DtD]n}CDnDBnUc]
}

 79879 D#76:;9<#;.76D


D ,%D795 3D8%%#,D
*cfZ>Y} mdDDBR} Um} nRD} mcr]C} BcZU]Q} McZ} d>nUD]nm} nSfc>n}
uRD]} RD mRD} Um} U]} QccC} RD>YnR
} !fcZ} 6*1dDDBRcfdrm}:=}
uRUBR} Um} >} 6UDn]>ZDmD} mdDDBR} Bcfdrm} rmDC} Ecf} >rncZ>nUB}
mdDDBR} fDBcQ]UnUc]} >} ]cfZ>Y} mdDDBR} Bcfdrm} Um} Dvnf>BnDC} >]C}
rmDC}Ecf}crf}mnrCw
}3RUm}Bckrm}Um}BcZdcmDC}cE}mdDDBR}mUQ]>Ym}
cE} }mdD>XDfm} }Z>YD}>]C}}EDZ>YD}uUnR}nRD}>QD} McZ} }nc}
 } DmUCDm} uD} ArUYn} >} ]Du} mrddYDZD]n>Y} mdDDBR} Bcfdrm} U]}
mUZrY>nDC} fccZ
} 3RUm} mrddYDZD]nDC} Bcfdrm} U]BYrCDm} mdDDBR}
mUQ]>Ym} McZ}  } ]Du} mdD>XDfm} } Z>YD} >]C} } EDZ>YD} >QU]Q}
McZ}}nc}
}
" D ,%D"6795 3D8%%#,D
A]cfZ>Y} mdDDBR} Um} nRD} mcr]C} BcZU]Q} McZ} d>nUD]nm}
nSfc>n} uRD]} RD mRD} Um} U]} A>C} RD>YnR} cf} U]} >} mDfUcrm} mUnr>nUc]}
D
Q
}E>YY} E>U]n]Dmm} mUBX}DnB
}rD}nc}nRD}CUHBrYnUDm}cE}ArUYCU]Q}
nRUm} nwdD} cE} Bcfdrm} U]} fD>Y} Bc]CUnUc]} uD} B>ffUDC} crn} ArUYCU]Q}
nRD} >A]cfZ>Y} mdDDBR} Bcfdrm} Aw} fDBcfCU]Q} nRD} mcr]C} U]} nRD}

!U

mUZrY>nDC} fccZ
} 3RD} nD]} mdD>XDfm} } Z>YDm} >]C} } EDZ>YDm}
uDfD} >mXDC} nc} Z>XD} mcr]C} YUXD} mBfD>Z} mRcrn} >]C} BcrQR
}
DmUCDm} uD} BcYYDBnDC} mcr]Cm} Bcfdrm} mBfD>Z} mRcrn} BcrQR}
Bfx}Zc>]}McZ}U]nDjDn}IYZm}>]C}mcr]C}DEEDBn}m
}
# D ,%D795 3D76:8%%#,D
*cfZ>Y} ]c]mdDDBR} Um} nRD} mcr]C} BcZU]Q} McZ} nRU]Qm} cf}
mcr]Cm} cE} ]cfZ>Y} YUED
} 3RcmD} >rCUc} mUQ]>Ym} BcrYC} cfUQU]>nD}
McZ} Ecf}U]mn>]BD} Cccfm} BYcmDC} Cccf}ADYY} BR>Ufm} Cf>QQDC}
Cf>uDfm}cdD]DC}>]C}BYcmDC}YUerUC}dcrf}U]}>]C}crn}QY>mmDm}
Brdm} CUmRDm} AcuYm} nRDfZcm} L>mX} DnB
} %]} crf} mnrCw} nRD}
d>nUD]nm} fccZ} cE} nRD}  } #cmdUn>Y} U]} #>]cU} 6UDn]>Z} R>m}
ADD]}Bc]mUCDfDC}>m}nRD}d>nnDj}fccZ
}7D}BcYYDBnDC}nRU]Qm}McZ}
nRD}d>nUD]nm}fccZ}mrBR}>m}ADC}BR>Uf}n>AYD}BrdAc>fC}QY>mmDm}
Brdm} DnB
} nRD]}uD}fDBcfCDC}nRDUf}mcr]Cm}cf}mcr]Cm} McZ}nRDUf}
BY>mRDm
}3RD}Bcfdrm}uDfD}fDBcfCDC}U]}nRD}fDBcfCU]Q}mnrCUc}>]C}
nRD} mUZrY>nDC} fccZ
} 7D} >Ymc} fDBcfCDC} mcr]Cm} cE} cAWDBnm} U]}
A>nSfccZ} mrBR} >m} mRcuDf} E>rBDn} LrmRU]Q} cE} ncUYDn} DnB
} 3c}
D_fUBR} nRUm} Bcfdrm} uD} >Ymc} BcYYDBnDC} nRcmD} mcr]Cm} McZ}
U]nDjDn}>]C}mcr]C}DEEDBn}m
}
$ D,%D"6795 3D76:8%%#,D
A]cfZ>Y} ]c]mdDDBR} Um} nRD} mcr]C} BcZU]Q} McZ} nRU]Qm} U]}
Zc]UncfDC}fccZ}uRD]}dDfmc]}Um}U]}A>C}RD>YnR
}!cf}U]mn>]BD}UE}
nRD} d>nUD]n} E>YYm} RD mRD} uUYY} dfcA>AYw} AfD>X} >} QY>mm} nRD]} nRD}
mcr]C} cE} >} AfcXD]} QY>mm} Um} Bc]mUCDfDC} >m} >]} >A]cfZ>Y} ]c]|
mdDDBR
} ]C} uRD]} RD} E>YYm} RD} uUYY} dfcA>AYw} Z>XD} >} BR>Uf}
nrZAYDC} Ccu]} U]} nRUm} B>mD} nRD} mcr]C} cE} >} nrZAYDC} BR>Uf} Um}
>]cnRDf}mUQ]>Y}cE}nRUm}Bcfdrm
} %]}crf}mnrCUc} uD}nrZAYD}BR>Ufm}
AfD>X} QY>mmDm} >]C} AcuYm} >]C} Brdm} DnB
} nc} fDBcfC} mcr]Cm} cE}
>A]cfZ>Y}]c]mdDDBR
}3RD]}c]BD}>Q>U]}uD}D]Y>fQD}nRD}Bcfdrm}
Aw}BcYYDBnU]Q}ZcfD}McZ}U]nDf]Dn}>]C}mcr]C}DEEDBn}m
}

Z>BRU]D} 16)} >fD} BRcmD]} >m} Bc]mUCDfDC} BY>mmUIB>nUc]}


ZcCDYm
}
D,%D8%%#-76:8%%#,D.:#9.5.6 ;79D
3RD}>YY} Ecrf}Bcfdcf>}>fD}rmDC}nc}ArUYC}>]C}nc} Dt>Yr>nD} nRUm}
CUmBfUZU]>ncf
} !fcZ} nRDmD} C>n>A>mDm} uD} Dvnf>Bn} } ED>nrfDm
}
3RD} >ddYUB>nUc]} cE} -} c]} nRDmD} ED>nrfDm} mRcum} nR>n} nRD} }
ED>nrfDm} )!} 9.} ;<U BD]nfcUC} D]DfQw} fcYYcEE}  .}
A>]CuUCnR} >]C} (1-} R>tD} >} ncn>Y} t>fU>AUYUnw} cE} 
}
3RDfDEcfD} nRDw} >fD} BRcmD]} nc} EcfZ} nRD} ED>nrfD} mDn} cE} nRUm}
CUmBfUZU]>ncf
} 3RUm} ED>nrfD} mDn} Um} rmDC} nc} Dt>Yr>nD} nRD}
dDfEcfZ>]BD} cE} D>BR} BY>mmUIB>nUc]} ZcCDY} U]} CUmBfUZU]>nU]Q}
mdDDBR ]c]mdDDBR}mUQ]>Y
}
" D,%D795 3"6795 3D8%%#,D.:#9.5.6 ;79D
3RD} ]cfZ>Y} mdDDBR} >]C} >A]cfZ>Y} mdDDBR} C>n>A>mDm} >fD}
rmDC} nc} Bc]mnfrBn} nRUm} CUmBfUZU]>ncf
} 3c} I]C} >ddfcdfU>nD}
CUmBfUZU]>]n} ED>nrfDm} -}Um}>ddYUDC
} vdDfUZD]nm}dfctD}nR>n}
>} mDn} cE} } ED>nrfDm} 9.} 4U BD]nfcUC} D]DfQw} fcYYcEE}
A>]CuUCnR} >]C}dUnBR}B>]} m>nUmP}}cE} nRD} ncn>Y}t>fU>AUYUnw
}
>mU]Q} c]} nRUm} mDn} nRD} } ZcCDYm} >fD} nDmnDC} nc} CUmnU]QrUmR}
ADnuDD]}]cfZ>Y}>]C}>A]cfZ>Y}mdDDBR
}
C. The Normal/Abnormal Nonspeech Discriminator
Experiments for this discriminator are based on the normal nonspeech and the abnormal nonspeech corpora. PCA was also applied; the features ZCR, centroid, energy, rolloff, bandwidth, and LSP occupy [?]% of the total variability. In the next stage, this set is fed to all the models to find the best one.
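The three discriminators are combined hierarchically: an input sound is first labeled speech or nonspeech, and the matching second-stage discriminator then decides normal versus abnormal. A sketch of this routing, with placeholder boolean classifiers standing in for the trained models:

```python
def classify_sound(features, is_speech, is_abnormal_speech, is_abnormal_nonspeech):
    """Route a feature vector through the two-stage discriminator cascade.

    Each argument after `features` is a boolean classifier function;
    these are placeholders for the trained models described in the text.
    """
    if is_speech(features):
        return "abnormal speech" if is_abnormal_speech(features) else "normal speech"
    return "abnormal nonspeech" if is_abnormal_nonspeech(features) else "normal nonspeech"
```

Note that only two of the three discriminators run for any given sound, which keeps the per-frame classification cost low.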
TABLE I
[The collected sound corpora; the table entries are not legible in the scan.]

The obtained sound corpora are shown in TAB. I. All signals are mono, recorded at [?] kHz and quantized at [?] bits.
TABLE II
[Discrimination ratios of the classification models; the table entries are not legible in the scan.]
SOUND-BASED ABNORMAL EVENT DETECTION
The process of developing the three discriminators is similar; the two following steps are performed.

Selecting a feature set for a discriminator: we start with the features which were applied in [?][?]: zero-crossing rate (ZCR), centroid, energy, rolloff, band energy ratio (BER), bandwidth, linear spectral pairs (LSP), Mel-frequency cepstral coefficients (MFCC), perceptual linear prediction (PLP), and pitch. Then PCA (Principal Component Analysis) is applied to select the most suitable features for each discriminator: a set of features will be selected if their total variability exceeds a certain threshold ([?]%, for instance).
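The variability criterion can be checked directly from the eigenvalues of the feature covariance matrix. A minimal sketch, assuming a samples-by-features matrix; the default threshold is illustrative, since the value used in the paper is not legible:

```python
import numpy as np

def n_components_for_variability(X, threshold=0.95):
    # X: (n_samples, n_features). Returns the smallest number of principal
    # components whose cumulative explained-variance ratio reaches `threshold`.
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending
    cumulative = np.cumsum(eigvals / eigvals.sum())
    return int(np.searchsorted(cumulative, threshold) + 1)
```

A feature set whose components reach the threshold with few principal components is the kind of compact, high-variability set the selection step looks for.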
Choosing a classification model for the discriminator: the selection criterion is the discrimination ratio, which is the ratio between correctly discriminated samples and tested samples. In this study, artificial neural network (ANN), Gaussian mixture model (GMM), decision tree (DT), and support vector machine (SVM) are chosen as the candidate classification models.
The preliminary results of our study presented in [?] showed that the DT model gives a better discrimination ratio for all three discriminators than the three remaining models. Thus, we had chosen the DT model to construct the three discriminators. However, on the one hand, when implementing the three discriminators using the obtained DT models in the simulated room, we found that the execution time is significant because the size of the DT model is quite large. On the other hand, by studying the ANN more deeply, we can get a discrimination ratio close to that of the DT model (TAB. II), and the run time of the three ANN-based discriminators is much better than that of the DT-based discriminators. Therefore, the ANN model is finally chosen for developing the sound classification system in our study.
DISCUSSION ON COMBINATION OF AUDIO AND VIDEO ABNORMAL EVENT DETECTION
In [?], three types of intermodal relations are presented: the trigger, integration, and collaboration relations. From the considered abnormal events in our study, we found that only the falling event can produce abnormal video and audio information simultaneously. In the case of a falling event, when a person falls, he/she can shout or scream, or a breaking sound is generated. Thus, the trigger relation, which models the triggering of one modality's processing by the detection of an event in another modality, was considered. Based on the algorithm that is applied for detecting the falling event and its experimental implementation, we found that the trigger relation did not give a good result: because the delay of the decision from the video information is quite large, it is not logical for the system to use this decision to trigger the abnormal sound detection module. Consequently, the two abnormal detection modules are implemented and work independently; the processed information is sent to the central processing module in parallel.
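The resulting architecture can be sketched as two independent detector loops feeding a central queue. This is a hypothetical structure for illustration, not the authors' code:

```python
import queue
import threading

def detector_loop(modality, detections, central_queue):
    # Stands in for one modality's detection module; each decision is
    # pushed to the central processing module as soon as it is made.
    for event in detections:
        central_queue.put((modality, event))

central = queue.Queue()
threads = [
    threading.Thread(target=detector_loop, args=("audio", ["scream"], central)),
    threading.Thread(target=detector_loop, args=("video", ["fall"], central)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The central module consumes decisions in arrival order.
decisions = []
while not central.empty():
    decisions.append(central.get())
```

With this design, a slow video decision delays only its own channel; the audio channel keeps reporting to the central module independently.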
Get out with a plastic vase; fall on the floor and drop the plastic vase; get out of the room.

We run our system on a computer with the following configuration: Intel Core i[?] CPU at [?] GHz with [?] GB RAM.
TABLE III. SOME INFORMATION OF THE TESTING ENVIRONMENTS

                         Show room                  Patient room
Size (L x W x H)         [?]                        [?]
Main door                [?]                        [?]
Windows                  [?]                        [?]
Toilet inside            [?]                        [?]
Objects                  Bed, medical cabinet       Bed, medical cabinet
Lighting condition       Neon and daylight          Neon and daylight
                         through windows and door   through windows and door
Video/audio captures     2 IP cameras (AXIS M[?]),  2 IP cameras (AXIS M[?]),
                         [?] microphones            [?] microphones

EXPERIMENTS AND RESULTS

A. Experiment Scenario and Evaluation Measure

To evaluate our system, we need to set up the environments and define scenarios. We carried out experiments in two environment conditions: (1) at the show room of the MICA Institute (the simulated room); (2) at a patient room at [?] Hospital in Hanoi, Vietnam. TAB. III gives some information on our testing environments. Both environments have a similar structure, as illustrated in Fig. [?]. In these environments, we equip two IP cameras (AXIS M[?]) and two high-quality microphones to capture the visual and audio information.

Fig. [?]. A corner of the room with the mounted camera and microphone.
We measure the two criteria below:

    Sensitivity = TP / (TP + FN)

    FAR = FP / (TP + FP)

where TP (True Positive) is the number of correct events detected, FP (False Positive) is the number of wrong events detected, and FN (False Negative) is the number of lost events. The smaller the FAR and the greater the Sensitivity are, the better the system is.
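In code, the two criteria reduce to simple ratios over the event counts. This assumes the FAR form above, computed over detected events, since no true-negative count is defined for event detection:

```python
def sensitivity(tp, fn):
    # Proportion of actual events that were detected.
    return tp / (tp + fn)

def false_alarm_rate(tp, fp):
    # Proportion of detections that were wrong.
    return fp / (tp + fp)
```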
B. Experimental Results
Fig. [?]. The health smart room with the audio/video captures; detected abnormal events will be alarmed at the monitor room.

The numbers of participants in the experiments are [?] ([?] male and [?] female) and [?] ([?] male and [?] female), aged from [?] to [?] years old, for the first condition and the second condition, respectively. Subjects are asked to play the following scenario [?] times. In the scenario, the person is asked to: enter the room, walk, and say something; sit on the bed; lay on the bed and scream; lay motionless on the bed; fall from the bed and shout; lay motionless on the floor; get up and walk to the table; go toward the rest room and say something; get into the rest room; pour water; drop a stainless steel tray or a plastic chamberpot; stay long in the rest room; …

The obtained results of the two experiments (the simulated room and the hospital room) are shown in TAB. IV and TAB. V, respectively. The total number of events for the first experiment at each time is [?] (Event 1: [?]; Event 2: [?]; Event 3: [?]; Event 4: [?]; Event 5: [?]; Event 6: [?]), while the total number of events for the second experiment is [?]. The experimental results show that our system is capable of detecting the six events of interest with a high value of sensitivity and a low value of false alarm rate. The overall result obtained in the simulated room is better than that of the hospital room. The reason is that our training databases were collected in the simulated room; therefore, the chosen parameters of the models are more suitable for the simulated room condition.
Among the events, Falling from the bed or during walking, Lying motionless on the floor, Abnormal speech, and Abnormal nonspeech obtain the best results in terms of Sensitivity. The value of FAR obtained for these events is also high. This means that our system detects these events whenever they happen, but it also produces several false positives. This result is acceptable in the context of a surveillance system for people with special needs, because these events are important and missing them can cause major health problems. The recognition results of Staying too long in the rest room and Being out of the room too long are relatively good. The main reason is that these events are recognized by using the results of the object localization and tracking module. The doors of our testing room are transparent; therefore, the illumination effect causes some false detections.

ACKNOWLEDGEMENTS

This study was done in the framework of the international cooperation project [?] and was supported by a research grant from Vietnam's National Foundation for Science and Technology Development (NAFOSTED), No. [?].
TABLE IV. Results of the experiment in the simulated room: Sensitivity and FAR for each event at times 1 to 5 (the table entries are not legible in the scan).
TABLE V. Results of the experiment in the hospital room: Sensitivity and FAR for each event at times 1 to 5 (the table entries are not legible in the scan).

CONCLUSIONS

In this paper, we have presented an abnormal event detection system using multimedia (audio and video) information for a patient monitoring system. We have analyzed the environment setup and introduced the audio-based and video-based abnormal event detection. We have also performed two experiments in two different conditions. The obtained results show that our system is capable of detecting the events of interest with a high value of sensitivity and a low value of false alarm rate. In the future, we would like to improve this system by: (1) improving the object detection, tracking, and localization method by taking into account illumination changes and object occlusion; (2) extending the system so that it can detect a larger number of events of interest.

REFERENCES

[1] G. Abowd, E. Mynatt, and T. Rodden, "The human experience [of ubiquitous computing]," IEEE Pervasive Computing.
[2] S. Bonhomme, E. Campo, D. Estève, and J. Guennec, "PROSAFE-extended, a telemedicine platform to contribute to medical diagnosis," Journal of Telemedicine and Telecare.
[3] D. Istrate, M. Vacher, E. Castelli, and C. Nguyen, "Sound processing for Health Smart Home," in International Conference on Smart Homes and Health Telematics.
[4] D. Istrate, J. Boudy, H. Medjahed, and L. Baldinger, "Medical remote monitoring using sound environment analysis and wearable sensors," Heraklion.
[5] G. Sacco, V. Joumier, et al., "Detection of activities of daily living impairment in Alzheimer's disease and mild cognitive impairment using information and communication technology," Clinical Interventions in Aging.
[6] M. Vacher, A. Fleury, F. Portet, J.-F. Serignat, and N. Noury, "Complete Sound and Speech Recognition System for Health Smart Homes: Application to the Recognition of Activities of Daily Living," in New Developments in Biomedical Engineering, InTech.
[7] J. Chen, "Bathroom Activity Monitoring Based on Sound," Munich.
[8] K. Kim, T. Chalidabhongse, D. Harwood, and L. Davis, "Real-time Foreground-Background Segmentation Using Codebook Model," Real-Time Imaging, Special Issue on Video Object Processing.
[9] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Computer Vision and Pattern Recognition, San Diego, CA, USA, vol. 1.
[10] N. Noury, P. Rumeau, A. K. Bourke, G. ÓLaighin, and J. E. Lundy, "A proposal for the classification and evaluation of fall detectors," IRBM.
[11] V. Vishwakarma, C. Mandal, and S. Sural, "Automatic Detection of Human Fall in Video," presented at the 2nd International Conference on Pattern Recognition and Machine Intelligence, Springer-Verlag, Berlin, Heidelberg.
[12] A. F. Bobick and J. W. Davis, "The recognition of human movement using temporal templates," IEEE Transactions on Pattern Analysis and Machine Intelligence.
[13] C. Rougier, A. St-Arnaud, J. Rousseau, and J. Meunier, "Video Surveillance for Fall Detection," in Video Surveillance.
[14] J. Saunders, "Real-time discrimination of broadcast speech/music," Atlanta, Georgia, USA.
[15] S. Shafiee, "A two-stage speech activity detection system considering fractal aspects of prosody," Pattern Recognition Letters.
[16] C.-F. Chan and E. W. M. Yu, "An abnormal sound detection and classification system for surveillance applications," Aalborg.
[17] C. P. Nguyễn and H. H. Trần, "Sound classification for event detection: Application into medical telemonitoring," in ComManTel, Ho Chi Minh City, Vietnam.
[18] V. B. Le, D. D. Tran, E. Castelli, L. Besacier, and J.-F. Serignat, "Spoken and written language resources for Vietnamese," Lisbon, Portugal.
[19] R. Martin, "Towards multimodal recognition of text and speech for the analysis of educational video documents" (in French), Ph.D. thesis, La Rochelle University.
