Attack Detection and Identification in Cyber-Physical Systems
Fabio Pasqualetti, Florian Dörfler, and Francesco Bullo
Abstract: Cyber-physical systems are ubiquitous in power systems, transportation networks, industrial control processes, and critical infrastructures. These systems need to operate reliably in the face of unforeseen failures and external malicious attacks. In this paper (i) we propose a mathematical framework for cyber-physical systems, attacks, and monitors; (ii) we characterize
fundamental monitoring limitations from system-theoretic and
graph-theoretic perspectives; and (iii) we design centralized and
distributed attack detection and identification monitors. Finally,
we validate our findings through compelling examples.
I. INTRODUCTION
Cyber-physical systems integrate physical processes, computational resources, and communication capabilities. Examples of cyber-physical systems include transportation networks, power generation and distribution networks, water
and gas distribution networks, and advanced communication
systems. As recently highlighted by the Maroochy water
breach [1] in March 2000, multiple recent power blackouts
in Brazil [2], the SQL Slammer worm attack on the Davis-Besse nuclear plant in January 2003 [3], the Stuxnet computer
worm [4] in June 2010, and by various industrial security
incidents [5], cyber-physical systems are prone to failures and
attacks on their physical infrastructure, and cyber attacks on
their data management and communication layer.
Concerns about security of control systems are not new,
as the numerous manuscripts on systems fault detection,
isolation, and recovery testify [6], [7]. Cyber-physical systems,
however, suffer from specific vulnerabilities which do not
affect classical control systems, and for which appropriate
detection and identification techniques need to be developed.
For instance, the reliance on communication networks and
standard communication protocols to transmit measurements
and control packets increases the possibility of intentional
and worst-case attacks against physical plants. On the other
hand, information security methods, such as authentication,
access control, and message integrity, appear inadequate for
a satisfactory protection of cyber-physical systems. Indeed,
these security methods do not exploit the compatibility of
the measurements with the underlying physical process or the
control mechanism, and they are therefore ineffective against
insider attacks targeting the physical dynamics [1].
This material is based upon work supported in part by NSF grant CNS-1135819 and by the Institute for Collaborative Biotechnologies through grant W911NF-09-0001 from the U.S. Army Research Office.
Fabio Pasqualetti, Florian Dörfler, and Francesco Bullo are with the Center for Control, Dynamical Systems and Computation, University of California at Santa Barbara, {fabiopas,dorfler,bullo}@engineering.ucsb.edu.
Related work. The analysis of vulnerabilities of cyber-physical systems to external attacks has received increasing attention in recent years. The general approach has been
to study the effect of specific attacks against particular systems. For instance, in [8] deception and denial of service
attacks against a networked control system are defined, and,
for the latter ones, a countermeasure based on semi-definite
programming is proposed. Deception attacks refer to the
possibility of compromising the integrity of control packets
or measurements, and they are cast by altering the behavior
of sensors and actuators. Denial of service attacks, instead,
compromise the availability of resources by, for instance,
jamming the communication channel. In [9] false data injection attacks against static state estimators are introduced.
False data injection attacks are specific deception attacks in
the context of static estimators. It is shown that undetectable
false data injection attacks can be designed even when the
attacker has limited resources. In a similar fashion, stealthy
deception attacks against the Supervisory Control and Data
Acquisition system are studied, among others, in [10]. In [11]
the effect of replay attacks on a control system is discussed.
Replay attacks are cast by hijacking the sensors, recording
the readings for a certain amount of time, and repeating
such readings while injecting an exogenous signal into the
system. It is shown that these attacks can be detected by
injecting a signal unknown to the attacker into the system.
In [12] the effect of covert attacks against control systems is
investigated. Specifically, a parameterized decoupling structure
allows a covert agent to alter the behavior of the physical
plant while remaining undetected from the original controller.
In [13] a resilient control problem is studied, in which control packets transmitted over a network are corrupted by a
human adversary. A receding-horizon Stackelberg control law
is proposed to stabilize the control system despite the attack.
Recently the problem of estimating the state of a linear system
with corrupted measurements has been studied [14]. More
precisely, the maximum number of tolerable faulty sensors is
characterized, and a decoding algorithm is proposed to detect
corrupted measurements. Finally, security issues of specific
cyber-physical systems have received considerable attention,
such as power networks [15]–[19], linear networks with misbehaving components [20], [21], and water networks [22], [23].
Contributions. The contributions of this paper are as follows.
First, we describe a unified modeling framework for cyber-physical systems and attacks (Section II). Motivated by existing cyber-physical systems and existing attack scenarios, we model a cyber-physical system under attack as a descriptor system subject to unknown inputs affecting the state and the measurements:

E\dot x(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t).   (1)
[36]. The vulnerability of open channel networks to cyber-physical attacks has been studied in [12], [22], and municipal
water networks are also known to be susceptible to attacks on
the hydraulics [1] and biochemical contamination threats [23].
Power networks and mass transport network under attack
can be modeled by descriptor systems with unknown inputs.
For instance, the small-signal version of the classical structure-preserving power network model reads as [30], [31]

\begin{bmatrix} I & 0 & 0 \\ 0 & M_g & 0 \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} \dot\delta(t) \\ \dot\omega(t) \\ \dot\theta(t) \end{bmatrix}
= -\begin{bmatrix} 0 & -I & 0 \\ L_{gg} & D_g & L_{gl} \\ L_{lg} & 0 & L_{ll} \end{bmatrix}
\begin{bmatrix} \delta(t) \\ \omega(t) \\ \theta(t) \end{bmatrix}
+ \begin{bmatrix} 0 \\ P_\omega(t) \\ P_\theta(t) \end{bmatrix},   (2)

where \delta and \omega denote the generator rotor angles and frequencies, \theta are the voltage angles at the buses, L = \begin{bmatrix} L_{gg} & L_{gl} \\ L_{lg} & L_{ll} \end{bmatrix} is the network susceptance matrix, M_g and D_g are the diagonal matrices of the generator inertial and damping coefficients, and P_\omega and P_\theta are the power injections at the generators and buses.
We refer to [35], [36] for the modeling of water networks.
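To make the structure of the model (2) concrete, the following Python sketch assembles the descriptor matrices E and A from the blocks of the susceptance matrix. The function signature, the block ordering, and the one-generator, one-bus example at the bottom are hypothetical illustrations, not numbers taken from the paper.

```python
import numpy as np

def power_network_matrices(Lgg, Lgl, Llg, Lll, Mg, Dg):
    """Assemble the descriptor matrices E, A of the small-signal
    structure-preserving power network model (2).
    State ordering: rotor angles, rotor frequencies, bus voltage angles."""
    ng = Mg.shape[0]          # number of generators
    nl = Lll.shape[0]         # number of load buses
    n = 2 * ng + nl
    E = np.zeros((n, n))
    E[:ng, :ng] = np.eye(ng)              # d/dt of rotor angles
    E[ng:2*ng, ng:2*ng] = Mg              # generator inertias
    # last nl rows of E stay zero: algebraic power-flow equations
    A = np.zeros((n, n))
    A[:ng, ng:2*ng] = np.eye(ng)          # angle derivative equals frequency
    A[ng:2*ng, :ng] = -Lgg
    A[ng:2*ng, ng:2*ng] = -Dg
    A[ng:2*ng, 2*ng:] = -Lgl
    A[2*ng:, :ng] = -Llg
    A[2*ng:, 2*ng:] = -Lll
    return E, A

# hypothetical toy network: 1 generator, 1 load bus
E, A = power_network_matrices(Lgg=np.array([[1.0]]), Lgl=np.array([[-1.0]]),
                              Llg=np.array([[-1.0]]), Lll=np.array([[1.0]]),
                              Mg=np.array([[2.0]]), Dg=np.array([[0.5]]))
print(E.shape, A.shape)
```

Note that E is singular by construction: its last block of rows is zero, which encodes the algebraic power-flow constraints at the load buses.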
Model of monitors. A monitor is a deterministic algorithm with access to continuous-time measurements and knowledge of the system dynamics, that is, its input is {E, A, C, y(t) ∀t ∈ R≥0}. The output of a monitor is ψ = {ψ1, ψ2}, with ψ1 ∈ {True, False} and ψ2 ⊆ {1, . . . , n + p}. In particular, the output ψ1 reveals the presence of attacks, while ψ2 corresponds to the attack set.
Let y(x, u, t) be the output signal of (1) generated from the initial state x by the attack input u. Then, the monitoring input y(t) equals y(x0, uK, t) at all times, where x0 is the system initial state and uK is the attack signal of the attack set K. Since we only consider deterministic cyber-physical systems, we assume monitors to be consistent, that is,
(i) ψ1 = True only if the attack set K is nonempty (ψ1 = False, otherwise),
(ii) ψ2 = ∅ if and only if ψ1 = False, and
(iii) ψ2 = K only if K is the (unique) smallest subset S ⊆ {1, . . . , n + p} satisfying y(t) = y(x1, uS, t) for some initial state x1 and at all times t ∈ R≥0 (ψ2 = {1, . . . , n + p}, otherwise).
Observe that, if S = {1, . . . , n+p}, then there always exists an
attack signal uS satisfying y(t) = y(x0 , uK , t) = y(x1 , uS , t).
Our consistency assumption ensures that false-alarms are not
triggered by our monitors. Examples of monitors can be found
in [10], [11], [17].
The objective of a monitor is twofold:
Definition 1: (Attack detection and identification) Consider system (1) with nonzero attack (BK uK , DK uK ). The
attack (BK uK , DK uK ) is detected by a monitor if 1 =
True. The attack (BK uK , DK uK ) is identified by a monitor
if 2 = K.
An attack is undetectable (respectively unidentifiable) if no
monitor detects (respectively identifies) the attack. Of course,
an undetectable attack is also unidentifiable, since it cannot
be distinguished from the zero attack. An attack set K is undetectable (respectively unidentifiable) if there exists an undetectable (respectively unidentifiable) attack (BK uK , DK uK ).
Model of attacks. In this work we consider colluding omniscient attackers with the ability of altering the cyber-physical dynamics.

[Figure: block-diagram representation of the descriptor system (1), with initial state x(0), state x(t), transfer (sE − A)^{-1}, output matrix C, feedthrough Du(t), and measurements y(t).]
Following Theorem 3.3, an attack (BK uK , DK uK ) is undetectable if it excites only zero dynamics for the dynamical
system (1). Moreover, the existence of undetectable attacks
for the attack set K is equivalent to the existence of invariant
zeros for the system (E, A, BK , C, DK ). For the notions of
zero dynamics and invariant zeros we refer the reader to [25],
[28]. The following theorem shows that analogous statements
hold for the identifiability of attacks.
Theorem 3.4: (Identifiability of cyber-physical attacks) For the descriptor system (1) and an attack set K, the following statements are equivalent:
(i) the attack set K is unidentifiable; and
(ii) there exists an attack set R, with |R| ≤ |K| and R ≠ K, s ∈ C, gK ∈ R^{|K|}, gR ∈ R^{|R|}, and x ∈ R^n, with x ≠ 0, such that (sE − A)x − BK gK − BR gR = 0 and Cx + DK gK + DR gR = 0.
Moreover, there exists an unidentifiable attack set K, with |K| = k ∈ N0, if and only if there exists an undetectable attack set K̄, with |K̄| ≤ 2k.
Proof: Due to linearity of the system (1), the unidentifiability condition in Lemma 3.2 is equivalent to the condition y(xK − xR, uK − uR, t) = 0, for some initial conditions xK, xR, and attack signals uK, uR. The equivalence between
statements (i) and (ii) follows analogously to the proof of
Theorem 3.3. Finally, the last statement follows from Theorem
3.3, and the fact that B = [I, 0] and D = [0, I].
Theorem 3.4 shows that the existence of an unidentifiable attack set K of cardinality k is equivalent to the existence of invariant zeros for the system (E, A, BK̄, C, DK̄), with |K̄| ≤ 2k.
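Numerically, the invariant zeros that characterize undetectable and unidentifiable attacks can be computed as the finite generalized eigenvalues of the Rosenbrock pencil [sE − A, −B; C, D]. The Python sketch below (using SciPy, restricted for simplicity to the square case of equally many inputs and outputs, and applied to a hypothetical toy system) illustrates the computation; it is not the paper's algorithm.

```python
import numpy as np
from scipy.linalg import eig

def invariant_zeros(E, A, B, C, D):
    """Finite invariant zeros of (E, A, B, C, D): the finite generalized
    eigenvalues of the Rosenbrock system pencil [sE - A, -B; C, D]."""
    n, m = B.shape
    p = C.shape[0]
    assert p == m, "this sketch handles the square pencil case only"
    # pencil s*M - N, with det(s*M - N) = det([sE - A, -B; C, D])
    M = np.block([[E, np.zeros((n, m))],
                  [np.zeros((p, n)), np.zeros((p, m))]])
    N = np.block([[A, B],
                  [-C, -D]])
    z = eig(N, M, right=False)      # generalized eigenvalues of (N, M)
    return z[np.isfinite(z)]        # discard infinite eigenvalues

# hypothetical toy system with a known invariant zero
z = invariant_zeros(np.eye(2),
                    np.array([[-2.0, 0.0], [0.0, -3.0]]),
                    np.array([[1.0], [1.0]]),
                    np.array([[1.0, 1.0]]),
                    np.array([[0.0]]))
print(z)
```

For this toy system the transfer function is 1/(s + 2) + 1/(s + 3) = (2s + 5)/((s + 2)(s + 3)), so the single finite invariant zero lies at s = −2.5.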
Remark 3: (Static and active monitors, and noisy dynamics) A particular monitor is the so-called static monitor which
verifies the consistency of the measurements without knowledge of the system dynamics and without exploiting relations
among measurements taken at discrete time instants. For
instance, the bad data detector in [9], [37] is a static monitor.
Then, an attack (BK uK, DK uK) is undetectable by a static monitor if and only if, for some state trajectory x : R≥0 → R^n and for all times t ∈ N, it holds that Cx(t) + DK uK(t) = 0. Note that state attacks are undetectable by static monitors [17].
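As an illustration of this limitation, the following Python sketch reproduces the stealthy false-data injection idea of [9] against a static least-squares estimator: any additive attack lying in the column space of the measurement matrix leaves the bad-data residual unchanged. All numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 3))     # hypothetical static measurement matrix
x = rng.standard_normal(3)          # true state
y = H @ x                           # noiseless measurements

def static_residual(y, H):
    """Norm of the bad-data residual y - H x_hat, with x_hat the
    least-squares state estimate."""
    x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.linalg.norm(y - H @ x_hat)

a_stealthy = H @ rng.standard_normal(3)   # attack in the column space of H
a_generic = rng.standard_normal(6)        # generic additive attack

print(static_residual(y + a_stealthy, H))  # residual stays (numerically) zero
print(static_residual(y + a_generic, H))   # residual nonzero: detected
```

The stealthy attack is indistinguishable from a legitimate state change for the static monitor, exactly because the monitor does not exploit the dynamics relating measurements over time.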
An active monitor injects an auxiliary input (Bv, Dv) to reveal attacks [11]. Since auxiliary inputs do not alter the invariant zeros of system (1), active monitors share the same fundamental limitations of our monitors.
An analogous reasoning shows that the existence of undetectable attacks for a noise-free system implies the existence of
undetectable attacks for the same system driven by noise. The
converse does not hold, since attackers may remain undetected
by injecting a signal compatible with the noise statistics.
Remark 4: (Inconsistent initial conditions and impulsive attacks) If the consistency assumption (A2) is dropped, then discontinuities in the state x at time t = 0 may affect the measurements y at time t = 0. For instance, for index-one systems, inconsistent initial conditions lead to initial jumps for the algebraic equations to be satisfied. Consequently, the inconsistent initial value [0^T x2(0)^T]^T ∈ Ker(E) cannot be recovered through the measurements.
[Figure: digraph of the example power network, with generators g1, g2, g3, buses b1–b6, attack inputs u1, u2, and measured outputs y1, y2.]
[The sparse 12 × 12 system matrix of the example, whose nonzero entries a_{i,j} are determined by the network interconnection, is not reproduced here.]
\det \begin{bmatrix} sE - A & -B \\ C & D \end{bmatrix} = |sE - A| \, \bigl| D + C(sE - A)^{-1} B \bigr| \neq 0,

where D + C(sE − A)^{-1}B corresponds to the transfer matrix of the cyber-physical system. Since there are |U| independent input vertices, consider any set of |U| output vertices, and let [C̄] and [D̄] be the structured output matrices corresponding to the chosen |U| output vertices. Consider the graph G(s[E] − [A]) that consists of N = |X| + |U| vertices, with an edge from vertex j to vertex i if Āij ≠ 0 or Ēij ≠ 0. Notice that a path from U to Y in the digraph associated with the structured system corresponds, possibly after relabeling the output variables, to a cycle involving input/output vertices in G(s[E] − [A]). Observe that there are only |U| − 1 such (disjoint) cycles. Hence, there is no cycle family of length N, and the system ([E], [A], [B], [C̄], [D̄]) fails to be structurally left-invertible [40, Theorem 1]. Since the same reasoning holds for every set of |U| output vertices, every realization of the pencil

\begin{bmatrix} s[E] - [A] & -[B] \\ [C̄] & [D̄] \end{bmatrix}

has no invertible minor of size N, and the claimed statement follows.
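The maximum size of a linking (a set of vertex-disjoint paths) from the input to the output vertices, on which these structural results rely, can be computed as a maximum flow with unit vertex capacities. A self-contained Python sketch using standard vertex splitting and the Edmonds-Karp augmenting-path scheme follows; the graphs in the examples are hypothetical.

```python
from collections import defaultdict, deque

def max_linking(edges, inputs, outputs):
    """Maximum number of vertex-disjoint paths from `inputs` to `outputs`
    in a digraph, via unit-capacity max-flow with vertex splitting."""
    cap = defaultdict(int)   # residual capacities
    adj = defaultdict(set)   # adjacency (both directions, for residual arcs)

    def add_edge(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)

    verts = set(inputs) | set(outputs)
    for u, v in edges:
        verts |= {u, v}
    for v in verts:                       # split v into v_in -> v_out, cap 1
        add_edge(("in", v), ("out", v), 1)
    for u, v in edges:
        add_edge(("out", u), ("in", v), 1)
    S, T = "S", "T"
    for v in inputs:
        add_edge(S, ("in", v), 1)
    for v in outputs:
        add_edge(("out", v), T, 1)

    def bfs():                            # shortest augmenting path
        parent = {S: None}
        q = deque([S])
        while q:
            u = q.popleft()
            if u == T:
                path = []
                while parent[u] is not None:
                    path.append((parent[u], u))
                    u = parent[u]
                return path
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        return None

    flow = 0
    while (path := bfs()) is not None:
        for u, v in path:                 # augment along the path
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
        flow += 1
    return flow

# two inputs forced through a single bottleneck vertex: linking size 1
print(max_linking([("u1", "a"), ("u2", "a"), ("a", "y1"), ("a", "y2")],
                  ["u1", "u2"], ["y1", "y2"]))
# two disjoint input-output paths: linking size 2
print(max_linking([("u1", "y1"), ("u2", "y2")], ["u1", "u2"], ["y1", "y2"]))
```

By Menger's theorem, this max-flow value equals the maximum number of vertex-disjoint input-output paths, which is the linking size used in the structural conditions.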
If the system initial state is unknown, then an undetectable
attack u is characterized by the existence of a pair of initial
conditions x1 and x2 such that y(x1 , 0, t) = y(x2 , u, t), or,
equivalently, by the existence of invariant zeros for the given
cyber-physical system. We will now show that, provided that a
cyber-physical system is left-invertible, its invariant zeros can
be computed by simply looking at an associated nonsingular
state space system. Let the state vector x of the descriptor system (1) be partitioned as [x_1^T x_2^T]^T, where x_1 corresponds to the dynamic variables. Let the network matrices E, A, B, C, and D be partitioned accordingly, and assume that the descriptor system (1) is given in semi-explicit form, that is, E = blkdiag(E11, 0), where E11 is nonsingular.2 In this case, the descriptor system (1) reads as

E_{11} \dot x_1(t) = A_{11} x_1(t) + B_1 u(t) + A_{12} x_2(t),
0 = A_{21} x_1(t) + A_{22} x_2(t) + B_2 u(t).   (3)
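For index-one systems, i.e., when A22 is invertible, the algebraic variables x2 can be eliminated from (3) as x2 = −A22^{-1}(A21 x1 + B2 u), which yields an ordinary state equation for x1. A minimal Python sketch of this reduction, with hypothetical numbers, reads:

```python
import numpy as np

def reduce_semi_explicit(E11, A11, A12, A21, A22, B1, B2):
    """Eliminate the algebraic variables x2 of the semi-explicit descriptor
    system (3), assuming A22 is invertible (index-one case):
    x2 = -A22^{-1}(A21 x1 + B2 u)  gives  x1' = Ar x1 + Br u."""
    A22_inv = np.linalg.inv(A22)
    Ar = np.linalg.solve(E11, A11 - A12 @ A22_inv @ A21)
    Br = np.linalg.solve(E11, B1 - A12 @ A22_inv @ B2)
    return Ar, Br

# hypothetical scalar example: 2 x1' = -x1 + x2, 0 = x1 - x2 + u
Ar, Br = reduce_semi_explicit(np.array([[2.0]]), np.array([[-1.0]]),
                              np.array([[1.0]]), np.array([[1.0]]),
                              np.array([[-1.0]]), np.array([[0.0]]),
                              np.array([[1.0]]))
print(Ar, Br)
```

In the scalar example the algebraic constraint gives x2 = x1 + u, so the dynamic equation collapses to x1' = u/2, i.e., Ar = 0 and Br = 0.5.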
E\dot w(t) = (A + GC) w(t) - G y(t),
r(t) = C w(t) - y(t),   (5)
Fig. 3. Partition of IEEE 118 bus system into 5 areas. Each area is monitored
and operated by a control center. These control centers cooperate to estimate
the state and to assess the functionality of the whole network.
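A toy simulation of a detection filter of the form (5) is sketched below in Python, for a stable two-dimensional system with nonsingular E = I. The system matrices, the output-injection gain G (chosen by hand so that A + GC is Hurwitz), and the additive sensor attack are all hypothetical, and forward-Euler integration replaces an exact solver.

```python
import numpy as np

# hypothetical plant and filter data (nonsingular-E instance of filter (5))
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])
G = np.array([[-3.0], [-2.0]])        # hand-picked: A + G C is Hurwitz
dt, steps, t_attack = 1e-3, 20000, 10000

x = np.array([1.0, 0.0])              # plant state
w = np.zeros(2)                       # filter state (wrong initial guess)
res = []
for k in range(steps):
    u_a = 0.5 if k >= t_attack else 0.0     # additive sensor attack
    y = (C @ x)[0] + u_a                    # corrupted measurement
    r = (C @ w)[0] - y                      # residual r = C w - y
    res.append(abs(r))
    x = x + dt * (A @ x)                    # forward-Euler plant step
    w = w + dt * ((A + G @ C) @ w - G.flatten() * y)   # filter step
print(max(res[9000:10000]), max(res[19000:20000]))
```

Before the attack the residual decays together with the filter error; after the attack starts it settles at a clearly nonzero value, which is the detection mechanism at work.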
\begin{bmatrix} sE - A - GC & -(B_K + G D_K) \\ C & D_K \end{bmatrix} \begin{bmatrix} \bar w \\ g_K \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.   (7)

The second equation of (7) yields C\bar w = -D_K g_K. By substituting C\bar w by -D_K g_K in the first equation of (7), we obtain

\begin{bmatrix} sE - A & -B_K \\ C & D_K \end{bmatrix} \begin{bmatrix} \bar w \\ g_K \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.   (8)
3 For a regular pair (E, A), let σ(E, A) = {λ ∈ C : |λ| < ∞, |λE − A| = 0}. The pair (E, A) is Hurwitz if Re(λ) < 0 for each λ ∈ σ(E, A).
A = \begin{bmatrix} A_1 & \cdots & A_{1N} \\ \vdots & \ddots & \vdots \\ A_{N1} & \cdots & A_N \end{bmatrix} = A_D + A_C,   (10)

where A_D = blkdiag(A_1, \dots, A_N) contains the diagonal (local) blocks and A_C the off-diagonal (coupling) blocks.
The overall error dynamics (12) can be written as the feedback interconnection of Σ1 : E ė(t) = (A_D + GC) e(t) + v(t) and Σ2 : v(t) = A_C e(t). When regarded as input-output systems with respective input/output pairs (v, e) and (e, v), both Σ1 and Σ2 are causal and internally stable. Hence, by [43, Theorem 4.11], the overall error dynamics (12) are stable if the loop transfer function Σ1(jω)Σ2 satisfies the spectral radius condition ρ(Σ1(jω)Σ2) < 1 for all ω ∈ R. The latter condition is equivalent to (11).
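The spectral radius condition can be checked numerically on a finite frequency grid. The Python sketch below does so for a hypothetical two-area example with block-diagonal local dynamics A_D, weak coupling A_C, and no output injection, using Σ1(jω) = (jωE − A_D − GC)^{-1} and Σ2 = A_C as in the decomposition above.

```python
import numpy as np

def spectral_radius_condition(E, AD, G, C, AC, omegas):
    """Largest value of rho(Sigma1(j w) Sigma2) over the frequency grid,
    with Sigma1(s) = (s E - AD - G C)^{-1} and Sigma2 = AC."""
    L = AD + G @ C
    radii = []
    for w in omegas:
        S1 = np.linalg.inv(1j * w * E - L)
        radii.append(np.abs(np.linalg.eigvals(S1 @ AC)).max())
    return max(radii)

# hypothetical two-area system with weak coupling
E = np.eye(2)
AD = np.diag([-2.0, -3.0])
G = np.zeros((2, 1))
C = np.zeros((1, 2))                       # no output injection in this toy
AC = np.array([[0.0, 0.4], [0.3, 0.0]])    # coupling between the areas
rho = spectral_radius_condition(E, AD, G, C, AC, np.linspace(0.0, 100.0, 2001))
print(rho)
```

For this example the worst frequency is ω = 0, where the loop gain equals sqrt(0.4/2 · 0.3/3) = sqrt(0.02) ≈ 0.14 < 1, so the small-gain test passes and the interconnected error dynamics are stable.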
An implementation of the decentrally stabilized filter (10)
under assumptions (A1)-(A7) requires the input AC w and
hence continuous communication among control centers. To
overcome this continuous communication obstacle we rely on
waveform relaxation methods [44], [45] developed for parallel
numerical integration. The Gauss-Jacobi waveform relaxation
applied to the filter (10) yields the waveform relaxation
iteration
E \dot w^{(k)}(t) = (A_D + GC) w^{(k)}(t) + A_C w^{(k-1)}(t) - G y(t),   (13)
where k ∈ N denotes the iteration index, t ∈ [0, T] is the integration interval for some uniform time horizon T > 0, and w^{(k)} : [0, T] → R^n is a trajectory with initial condition w^{(k)}(0) = w_0 for each k ∈ N. Notice that (13) is a descriptor system with state w^{(k)} and known input A_C w^{(k−1)}, since the value of w(t) at iteration k − 1 is used. The iteration (13) is initialized with an initial profile w^{(0)} : [0, T] → R^n.
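A toy Python implementation of the Gauss-Jacobi iteration (13) is sketched below, for a nonsingular E and forward-Euler time stepping (both simplifications of the setting above; all numbers are hypothetical). For comparison, the unsplit filter with A = A_D + A_C is integrated directly with the same scheme; the relaxation iterates converge to the same trajectory.

```python
import numpy as np

def waveform_relaxation(E, AD, AC, G, C, y, w0, dt, iters):
    """Gauss-Jacobi waveform relaxation for iteration (13),
    E w' = (AD + G C) w + AC w_prev - G y, with nonsingular E and
    forward-Euler integration (a sketch, not the paper's solver)."""
    Einv = np.linalg.inv(E)
    T = y.shape[0]
    w_prev = np.tile(w0, (T, 1))          # initial profile w^(0)
    for _ in range(iters):
        w = np.empty_like(w_prev)
        w[0] = w0
        for k in range(T - 1):
            dw = Einv @ ((AD + G @ C) @ w[k] + AC @ w_prev[k] - G @ y[k])
            w[k + 1] = w[k] + dt * dw     # previous iterate drives coupling
        w_prev = w
    return w_prev

# hypothetical two-state example
T, dt = 200, 0.005
y = np.sin(0.05 * np.arange(T)).reshape(T, 1)
E = np.eye(2)
AD = np.diag([-1.0, -1.0])
AC = np.array([[0.0, 0.5], [0.5, 0.0]])
C = np.array([[1.0, 0.0]])
G = np.array([[-1.0], [0.0]])
w0 = np.array([1.0, -1.0])

wr = waveform_relaxation(E, AD, AC, G, C, y, w0, dt, iters=40)

# direct Euler integration of the unsplit filter, for comparison
wd = np.empty((T, 2))
wd[0] = w0
for k in range(T - 1):
    wd[k + 1] = wd[k] + dt * ((AD + G @ C + AC) @ wd[k] - G @ y[k])

err = np.abs(wr - wd).max()
print(err)
```

On a finite horizon the Gauss-Jacobi sweeps contract superlinearly, so after a modest number of iterations the relaxed trajectory agrees with the direct integration to machine precision.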
The iteration (13) is said to be (uniformly) convergent if

\lim_{k\to\infty} \max_{t\in[0,T]} \|w^{(k)}(t) - w(t)\| = 0.   (14)
The identification problem can also be cast as a sparse optimization program:

\min_{\xi_0, v_x, v_y} \|v_x\|_{L_0} + \|v_y\|_{L_0}
subject to E\dot\xi(t) = A\xi(t) + v_x(t),
y(t) = C\xi(t) + v_y(t),
\xi(0) = \xi_0 \in R^n,   (19)

or, after eliminating v_y(t) = y(t) - C\xi(t),

\min_{\xi_0, v_x} \|v_x\|_{L_0} + \|y - C\xi\|_{L_0}.   (20)
As we show in Sections IV-A and IV-B, the output of the attack identification filter for the attack set K will be a residual signal rK. If each monitor is equipped with such an attack identification filter and if the attack K is identifiable, then the outputs of the monitor and the filter are related as follows: ψ2 = K if and only if rK(t) = 0 for all t ∈ R≥0.
The design of this residual filter consists of three steps: an input-output transformation, a state transformation, and an output injection together with the definition of a specific residual. We start by showing that the identification problem can be carried out for a modified system without corrupted measurements.
for a modified system without corrupted measurements.
Lemma 4.6: (Attack identification with safe measurements) Consider the descriptor system (1) with attack set K.
The attack set K is identifiable for the descriptor system (1) if
and only if it is identifiable for the following descriptor system
without corrupted measurements:
E\dot x(t) = (A - B_K D_K^† C) x(t) + B_K (I - D_K^† D_K) u_K(t),
y(t) = (I - D_K D_K^†) C x(t),   (21)

where D_K^† denotes the Moore-Penrose pseudoinverse of D_K.
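The matrices of the transformed system (21) are straightforward to form with the Moore-Penrose pseudoinverse. A Python sketch with a hypothetical toy example (one corrupted output channel out of two) reads:

```python
import numpy as np

def safe_measurement_form(A, BK, C, DK):
    """Matrices of the transformed system (21), which removes the corrupted
    measurements via the Moore-Penrose pseudoinverse of DK."""
    DKp = np.linalg.pinv(DK)
    A_new = A - BK @ DKp @ C
    B_new = BK @ (np.eye(DK.shape[1]) - DKp @ DK)
    C_new = (np.eye(DK.shape[0]) - DK @ DKp) @ C
    return A_new, B_new, C_new

# hypothetical example: the attack corrupts output 1 (DK = [1; 0])
# and enters the dynamics through the second state (BK = [0; 1])
An, Bn, Cn = safe_measurement_form(np.zeros((2, 2)),
                                   np.array([[0.0], [1.0]]),
                                   np.eye(2),
                                   np.array([[1.0], [0.0]]))
print(An, Bn, Cn)
```

In this example D_K has full column rank, so the factor I − D_K^†D_K vanishes: the attack input disappears from the transformed dynamics (B_new = 0), and only the uncorrupted output row survives in the new output matrix.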
\begin{bmatrix} sE - A & -B_K & -B_R \\ C & D_K & D_R \end{bmatrix} \begin{bmatrix} x \\ g_K \\ g_R \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},   (22)

or, equivalently, splitting the output equation by means of the projectors D_K D_K^† and I - D_K D_K^†,

\begin{bmatrix} sE - A & -B_K & -B_R \\ D_K D_K^† C & D_K & D_K D_K^† D_R \\ (I - D_K D_K^†) C & 0 & (I - D_K D_K^†) D_R \end{bmatrix} \begin{bmatrix} x \\ g_K \\ g_R \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.

The variable g_K can be eliminated in the first redundant (corrupted) output equation according to

g_K = -D_K^† C x - D_K^† D_R g_R + (I - D_K^† D_K) g_K.

Thus, P(s)[x^T g_K^T g_R^T]^T = 0 has no solution, where P(s) is

\begin{bmatrix} sE - A + B_K D_K^† C & -B_K (I - D_K^† D_K) & -B_R + B_K D_K^† D_R \\ (I - D_K D_K^†) C & 0 & (I - D_K D_K^†) D_R \end{bmatrix}.   (24)
Notice that the conditioned invariant S and an output injection L satisfying (23) and (24) always exist. In particular, there exist transformation matrices P and Q such that

P E Q = \begin{bmatrix} E_{11} & E_{12} \\ 0 & E_{22} \end{bmatrix}, \qquad P B_K (I - D_K^† D_K) = \begin{bmatrix} \bar B_K \\ 0 \end{bmatrix},

P (A - B_K D_K^† C + LC) Q = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}, \qquad (I - D_K D_K^†) C Q = \begin{bmatrix} C_1 & C_2 \end{bmatrix}.

The identification filter uses the modified output \tilde y(t) = (I - C_1 C_1^†) y(t), where w_2(0) = x_2(0), and \tilde G is such that the pair (E_{22}, A_{22} + \tilde G (I - C_1 C_1^†) C_2) is Hurwitz.
Consider the filter error e(t) = w(t) − x(t), and notice that

\begin{bmatrix} E_{11} & E_{12} \\ 0 & E_{22} \end{bmatrix} \begin{bmatrix} \dot e_1(t) \\ \dot e_2(t) \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix} \begin{bmatrix} e_1(t) \\ e_2(t) \end{bmatrix} - \begin{bmatrix} \bar B_K \\ 0 \end{bmatrix} u_K(t),

r_K(t) = (I - C_1 C_1^†) C_2 e_2(t).
Proceeding as above, the residual r_K vanishes for an attack of the set R if and only if the pencil equation (27), with block rows formed from sE − A, the transformed output matrices C_1 and C_2, D_R, and the projector I − C_1 C_1^†, and with right-hand side [0 0 0 0]^T, admits a nonzero solution [x^T g_K^T g_R^T]^T.
[Figure: generator frequencies ω1, ω2, ω3 and angles θ1, θ2, θ3 of the example network, with attack inputs u1, u2 and measurements y1, y2.]
In general, the attack set can be determined by designing a residual filter for each possible attack set of cardinality k and by intersecting the attack sets generating zero residuals.
In Section IV-C we show that this non-polynomial complexity
is inherent to the identification problem.
V. ILLUSTRATIVE EXAMPLES
A. An example of state attack against a power network
Consider the power network model analyzed in Example 1 and illustrated in Fig. 2(a). We consider a load altering attack [18] affecting the power demand P_θ at the load buses 4 and 5. Assume that the variables θ4 and θ5 are affected by the unknown and unmeasurable signals u1 and u2. Suppose that a monitoring unit measures the state variables of the first generator, that is, y1 = δ1 and y2 = ω1.
Notice from Fig. 4(a) that the maximum size of a linking from the failure to the output vertices is 1, so that, by our graph-theoretic results, the attack set {u1, u2} cannot be identified.
[Fig. 5: the IEEE 14 bus system, with its numerical susceptance matrix L.]
bus 3
Fig. 5. For the IEEE 14 bus system in Fig. 5, if the voltage angle of one bus is
measured exactly, then a cyber attack against the measurements data is always
detectable by our dynamic detection procedure. In contrary, as shown in [9],
a cyber attack may remain undetected by a static procedure if it compromises
as few as four measurements.
VI. CONCLUSION
REFERENCES
Fig. 6. Distributed detection of an output attack in the IEEE 118 system: The attacker compromises the measurements of all generators in area 1 from time 30s with a signal uniformly distributed in the interval [0, 0.5]. The residuals in Fig. 6(a) show that the attack is correctly detected, because the residual functions do not decay to zero. For the simulation, we run k = 100 iterations of the attack detection method. The plot in Fig. 6(b) represents the error max_{t∈[0,T]} ‖w^{(k)}(t) − w(t)‖ of our waveform relaxation based filter (15).
[1] J. Slay and M. Miller, "Lessons learned from the Maroochy water breach," Critical Infrastructure Protection, vol. 253, pp. 73–82, 2007.
[2] J. P. Conti, "The day the samba stopped," Engineering & Technology, vol. 5, no. 4, pp. 46–47, 2010.
[3] S. Kuvshinkova, "SQL Slammer worm lessons learned for consideration by the electricity sector," North American Electric Reliability Council, 2003.
[4] J. P. Farwell and R. Rohozinski, "Stuxnet and the future of cyber war," Survival, vol. 53, no. 1, pp. 23–40, 2011.
[5] G. Richards, "Hackers vs slackers," Engineering & Technology, vol. 3, no. 19, pp. 40–43, 2008.
[6] M.-A. Massoumnia, G. C. Verghese, and A. S. Willsky, "Failure detection and identification," IEEE Transactions on Automatic Control, vol. 34, no. 3, pp. 316–321, 1989.
[7] M. Basseville and I. V. Nikiforov, Detection of Abrupt Changes: Theory and Application. Prentice Hall, 1993.
[8] S. Amin, A. Cardenas, and S. Sastry, "Safe and secure networked control systems under denial-of-service attacks," in Hybrid Systems: Computation and Control, vol. 5469, Apr. 2009, pp. 31–45.
[9] Y. Liu, M. K. Reiter, and P. Ning, "False data injection attacks against state estimation in electric power grids," in ACM Conference on Computer and Communications Security, Chicago, IL, USA, Nov. 2009, pp. 21–32.
[10] A. Teixeira, S. Amin, H. Sandberg, K. H. Johansson, and S. Sastry, "Cyber security analysis of state estimators in electric power systems," in IEEE Conf. on Decision and Control, Atlanta, GA, USA, Dec. 2010, pp. 5991–5998.
[11] Y. Mo and B. Sinopoli, "Secure control against replay attacks," in Allerton Conf. on Communications, Control and Computing, Monticello, IL, USA, Sep. 2010, pp. 911–918.
[12] R. Smith, "A decoupled feedback structure for covertly appropriating network control systems," in IFAC World Congress, Milan, Italy, Aug. 2011, pp. 90–95.
[13] M. Zhu and S. Martínez, "Stackelberg-game analysis of correlated attacks in cyber-physical systems," in American Control Conference, San Francisco, CA, USA, Jul. 2011, pp. 4063–4068.
[14] H. Fawzi, P. Tabuada, and S. Diggavi, "Secure state-estimation for dynamical systems under active adversaries," in Allerton Conf. on Communications, Control and Computing, Sep. 2011.
[15] C. L. DeMarco, J. V. Sariashkar, and F. Alvarado, "The potential for malicious control in a competitive power systems environment," in IEEE Int. Conf. on Control Applications, Dearborn, MI, USA, 1996, pp. 462–467.
[16] G. Dán and H. Sandberg, "Stealth attacks and protection schemes for state estimators in power systems," in IEEE Int. Conf. on Smart Grid Communications, Gaithersburg, MD, USA, Oct. 2010, pp. 214–219.
[17] F. Pasqualetti, F. Dörfler, and F. Bullo, "Cyber-physical attacks in power networks: Models, fundamental limitations and monitor design," in IEEE Conf. on Decision and Control and European Control Conference, Orlando, FL, USA, Dec. 2011, pp. 2195–2201.
[18] A.-H. Mohsenian-Rad and A. Leon-Garcia, "Distributed internet-based load altering attacks against smart power grids," IEEE Transactions on Smart Grid, vol. 2, no. 4, pp. 667–674, 2011.
[19] S. Sridhar, A. Hahn, and M. Govindarasu, "Cyber-physical system security for the electric power grid," Proceedings of the IEEE, vol. 99, no. 1, pp. 1–15, 2012.
[20] S. Sundaram and C. Hadjicostis, "Distributed function calculation via linear iterative strategies in the presence of malicious agents," IEEE Transactions on Automatic Control, vol. 56, no. 7, pp. 1495–1508, 2011.
Fig. 7. In Fig. 7(a) we report our simulation results for the case of linear network dynamics without noise and for the proposed detection monitor (5) and
identification monitor (26), respectively. The state trajectory x consists of the generator angles and frequencies. The detection residual r becomes nonzero
after time 15s, and it reveals the presence of the attack. The identification residual rK is identically zero even after time 15s, and it reveals that the attack
set is K = {101, 102}. The identification residual rR is nonzero after time 15s, and it reveals that R is not the attack set. In Fig. 7(b) we report our
simulation results for the case of linear network dynamics driven by state and measurements noise. For this case, we choose the output injection matrices
of the detection and identification filters as the corresponding optimal Kalman gain. Due to the presence of noise, the residuals deviate from their nominal
behavior reported in Fig. 7(a). Although the attack is clearly still detectable and identifiable, additional statistical tools such as hypothesis testing [7] may be
adopted to analyze the residuals r, rK , and rR . In Fig. 7(c) we report our simulation results for the case of nonlinear network dynamics without noise. For
this case, the detection and identification filters are designed for the nominal linearized dynamics with output injection matrices as the corresponding optimal
Kalman gain. Despite the presence of unmodeled nonlinear dynamics, the residuals reflect their nominal behavior reported in Fig. 7(a).