221 Homework

EE221A Linear System Theory

Problem Set 1
Professor C. Tomlin
Department of Electrical Engineering and Computer Sciences, UC Berkeley
Fall 2011
Issued 9/1; Due 9/8
Problem 1: Functions. Consider $f : \mathbb{R}^3 \to \mathbb{R}^3$, defined as
$$f(x) = Ax, \qquad A = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 1 & 1 \end{bmatrix}, \qquad x \in \mathbb{R}^3.$$
Is f a function? Is it injective? Is it surjective? Justify your answers.
Problem 2: Fields.
(a) Use the axioms of the field to show that, in any field, the additive identity and the multiplicative identity are unique.
(b) Is $GL_n$, the set of all $n \times n$ nonsingular matrices, a field? Justify your answer.
Problem 3: Vector Spaces.
(a) Show that $(\mathbb{R}^n, \mathbb{R})$, the set of all ordered n-tuples of elements from the field of real numbers $\mathbb{R}$, is a vector space.
(b) Show that the set of all polynomials in s of degree k or less with real coefficients is a vector space over the field $\mathbb{R}$. Find a basis. What is the dimension of the vector space?
Problem 4: Subspaces.
Suppose $U_1, U_2, \ldots, U_m$ are subspaces of a vector space V. The sum of $U_1, U_2, \ldots, U_m$, denoted $U_1 + U_2 + \cdots + U_m$, is defined to be the set of all possible sums of elements of $U_1, U_2, \ldots, U_m$:
$$U_1 + U_2 + \cdots + U_m = \{u_1 + u_2 + \cdots + u_m : u_1 \in U_1, \ldots, u_m \in U_m\}$$
(a) Is $U_1 + U_2 + \cdots + U_m$ a subspace of V?
(b) Prove or give a counterexample: if $U_1, U_2, W$ are subspaces of V such that $U_1 + W = U_2 + W$, then $U_1 = U_2$.
Problem 5: Subspaces. Consider the space F of all functions $f : \mathbb{R}_+ \to \mathbb{R}$ which have a Laplace transform $\hat f(s) = \int_0^\infty f(t)e^{-st}\,dt$ defined for all $\operatorname{Re}(s) > 0$. For some fixed $s_0$ in the right half plane, is $\{f \mid \hat f(s_0) = 0\}$ a subspace of F?
Problem 6: Linear Independence.
Let V be the set of 2-tuples whose entries are complex-valued rational functions. Consider two vectors in V:
$$v_1 = \begin{bmatrix} 1/(s+1) \\ 1/(s+2) \end{bmatrix}, \qquad v_2 = \begin{bmatrix} (s+2)/((s+1)(s+3)) \\ 1/(s+3) \end{bmatrix}$$
Is the set $\{v_1, v_2\}$ linearly independent over the field of rational functions? Is it linearly independent over the field of real numbers?
Problem 7: Bases. Let U be the subspace of $\mathbb{R}^5$ defined by
$$U = \{[x_1, x_2, \ldots, x_5]^T \in \mathbb{R}^5 : x_1 = 3x_2 \text{ and } x_3 = 7x_4\}$$
Find a basis for U.
Problem 8: Bases. Prove that if $\{v_1, v_2, \ldots, v_n\}$ is linearly independent in V, then so is the set $\{v_1 - v_2,\ v_2 - v_3,\ \ldots,\ v_{n-1} - v_n,\ v_n\}$.
EE221A Problem Set 1 Solutions - Fall 2011
Note: these solutions are somewhat more terse than what we expect you to turn in, though the important thing is that you communicate the main idea of the solution.
Problem 1. Functions. It is a function; matrix multiplication is well defined. Not injective; it is easy to find a counterexample where $f(x_1) = f(x_2)$ with $x_1 \neq x_2$. Not surjective; suppose $x = (x_1, x_2, x_3)^T$. Then $f(x) = (x_1 + x_3,\ 0,\ x_2 + x_3)^T$; the range of f is not the whole codomain.
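As a quick numerical sanity check (my addition, not part of the original solutions), the rank of A confirms both failures at once:

```python
# Sanity check for Problem 1: rank(A) < 3 means f is neither injective nor surjective.
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 0., 0.],
              [0., 1., 1.]])

print(np.linalg.matrix_rank(A))   # 2 < 3, so the range is not all of R^3
# A nonzero nullspace vector gives the injectivity counterexample f(x) = f(0):
x = np.array([1., 1., -1.])
print(A @ x)                      # [0. 0. 0.]
```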
Problem 2. Fields. a) Suppose $0'$ and $0$ are both additive identities. Then $0' = 0' + 0 = 0$, so the additive identity is unique. Suppose $1$ and $1'$ are both multiplicative identities. Consider, for $x \neq 0$, $x \cdot 1 = x = x \cdot 1'$. Premultiply by $x^{-1}$ to see that $1 = 1'$.
b) We are not given what the operations $+$ and $\cdot$ are, but we can assume at least that $+$ is componentwise addition. The identity matrix I is nonsingular, so $I \in GL_n$. But $I + (-I) = 0$ is singular, so $GL_n$ is not closed under addition and cannot be a field.
Problem 3. Vector Spaces. a) This is the most familiar kind of vector space; all the vector space axioms can be trivially shown.
b) First write a general vector as $x(s) = a_ks^k + a_{k-1}s^{k-1} + \cdots + a_1s + a_0$. It is easy to show associativity and commutativity (just look at the operations componentwise). The additive identity is the zero polynomial ($a_0 = a_1 = \cdots = a_k = 0$) and the additive inverse just has each coefficient negated. The axioms of scalar multiplication are similarly trivial to show, as are the distributive laws.
A natural basis is $B := \{1, s, s^2, \ldots, s^k\}$. It spans the space (we can write a general $x(s)$ as a linear combination of the basis elements) and its elements are linearly independent, since only $a_0 = a_1 = \cdots = a_k = 0$ solves $a_ks^k + a_{k-1}s^{k-1} + \cdots + a_1s + a_0 = 0$ identically. The dimension of the vector space is thus the cardinality of B, which is $k + 1$.
Problem 4. Subspaces. a) Yes, it is a subspace. First, $U_1 + \cdots + U_m$ is a subset of V, since its elements are sums of vectors in subspaces (hence also subsets) of V, and since V is a vector space those sums are also in V. Also, a linear combination of two of its elements will be of the form
$$\alpha^1u_1^1 + \alpha^2u_1^2 + \cdots + \alpha^1u_m^1 + \alpha^2u_m^2 = w_1 + \cdots + w_m \in U_1 + \cdots + U_m,$$
where $u_k^1, u_k^2, w_k \in U_k$ (each $w_k := \alpha^1u_k^1 + \alpha^2u_k^2$ lies in $U_k$ because $U_k$ is a subspace).
b) Counterexample: $U_1 = \{0\}$, $U_2 = W \neq U_1$. Then $U_1 + W = W = U_2 + W$ but $U_1 \neq U_2$.
Problem 5. Subspaces. If we assume that $S = \{f \mid \hat f(s_0) = 0\}$ is a subset of F, then all that must be shown is closure under linear combinations. Let $f, g \in S$ and $\alpha, \beta \in \mathbb{R}$. Then
$$\mathcal{L}(\alpha f + \beta g) = \int_0^\infty[\alpha f(t) + \beta g(t)]e^{-st}\,dt = \alpha\int_0^\infty f(t)e^{-st}\,dt + \beta\int_0^\infty g(t)e^{-st}\,dt = \alpha\hat f(s) + \beta\hat g(s),$$
and thus we have closure, since $\alpha\hat f(s_0) + \beta\hat g(s_0) = \alpha\cdot 0 + \beta\cdot 0 = 0$.
If on the other hand we do not assume $S \subset F$, then one could construct a counterexample: a function whose transform has a zero at $s_0$ but a pole somewhere else in the RHP will be in S but not in F; $f(t) := e^{s_0t}\cos bt$ is one such counterexample.
Problem 6. Linear Independence. a) Linearly dependent over the field of rational functions. Take $\gamma = \frac{s+3}{s+2}$; then $v_1 = \gamma v_2$. b) Linearly independent over the field of real numbers. Let $\alpha, \beta \in \mathbb{R}$. Then $\alpha v_1 + \beta v_2 = 0$ implies, from the first components after multiplying through by $(s+1)(s+3)$, that $\alpha(s+3) + \beta(s+2) = 0$ for all s, which requires that $\alpha = \beta = 0$.
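A symbolic spot-check of part (a) (my addition; sympy and the variable names are assumptions, not from the original solutions):

```python
# Verify that v1 = gamma * v2 with gamma = (s+3)/(s+2), i.e. dependence over
# the field of rational functions.
import sympy as sp

s = sp.symbols('s')
v1 = sp.Matrix([1/(s + 1), 1/(s + 2)])
v2 = sp.Matrix([(s + 2)/((s + 1)*(s + 3)), 1/(s + 3)])
gamma = (s + 3)/(s + 2)

print(sp.simplify(gamma*v2 - v1))   # Matrix([[0], [0]])
```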
Problem 7. Bases. $B := \{b_1, b_2, b_3\} = \left\{[1, \tfrac{1}{3}, 0, 0, 0]^T,\ [0, 0, 1, \tfrac{1}{7}, 0]^T,\ [0, 0, 0, 0, 1]^T\right\}$ is a basis. The vectors are linearly independent by inspection, and they span U since we can find $a_1, a_2, a_3$ such that $u = a_1b_1 + a_2b_2 + a_3b_3$ for all $u \in U$.
Problem 8. Bases. Form the usual linear combination equalling zero:
$$\alpha_1(v_1 - v_2) + \alpha_2(v_2 - v_3) + \cdots + \alpha_{n-1}(v_{n-1} - v_n) + \alpha_nv_n = 0$$
$$\implies \alpha_1v_1 + (\alpha_2 - \alpha_1)v_2 + \cdots + (\alpha_{n-1} - \alpha_{n-2})v_{n-1} + (\alpha_n - \alpha_{n-1})v_n = 0$$
Now, since $\{v_1, \ldots, v_n\}$ is linearly independent, this requires $\alpha_1 = 0$, then $\alpha_2 - \alpha_1 = \alpha_2 = 0$, ..., and finally $\alpha_n = 0$. Thus the new set is also linearly independent.
EE221A Linear System Theory
Problem Set 2
Professor C. Tomlin
Department of Electrical Engineering and Computer Sciences, UC Berkeley
Fall 2011
Issued 9/8; Due 9/16
All answers must be justified.
Problem 1: Linearity. Are the following maps A linear?
(a) $A(u(t)) = u(t)$ for u(t) a scalar function of time
(b) How about $y(t) = A(u(t)) = \int_0^te^{-\sigma}u(t - \sigma)\,d\sigma$?
(c) How about the map $A : as^2 + bs + c \mapsto \int_0^s(bt + a)\,dt$ from the space of polynomials with real coefficients to itself?
Problem 2: Nullspace of linear maps. Consider a linear map A. Prove that N(A) is a subspace.
Problem 3: Linearity. Given $A, B, C, X \in \mathbb{C}^{n\times n}$, determine if the following maps (involving matrix multiplication) from $\mathbb{C}^{n\times n} \to \mathbb{C}^{n\times n}$ are linear.
1. $X \mapsto AX + XB$
2. $X \mapsto AX + BXC$
3. $X \mapsto AX + XBX$
Problem 4: Solutions to linear equations (this was part of Professor El Ghaoui's prelim question last year). Consider the set $S = \{x : Ax = b\}$ where $A \in \mathbb{R}^{m\times n}$, $b \in \mathbb{R}^m$ are given. What is the dimension of S? Does it depend on b?
Problem 5: Rank-Nullity Theorem. Let A be a linear map from U to V with $\dim U = n$ and $\dim V = m$. Show that
$$\dim R(A) + \dim N(A) = n$$
Problem 6: Representation of a Linear Map. Let $A : (U, F) \to (V, F)$ with $\dim U = n$ and $\dim V = m$ be a linear map with $\operatorname{rank}(A) = k$. Show that there exist bases $(u_i)_{i=1}^n$ and $(v_j)_{j=1}^m$ of U, V respectively, such that with respect to these bases A is represented by the block matrix
$$A = \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}$$
What are the dimensions of the different blocks?
Problem 7: Sylvester's Inequality. In class, we've discussed the range of a linear map, defining the rank of the map as the dimension of its range. Since all linear maps between finite dimensional vector spaces can be represented as matrix multiplication, the rank of such a linear map is the same as the rank of its matrix representation.
Given $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{n\times p}$, show that
$$\operatorname{rank}(A) + \operatorname{rank}(B) - n \leq \operatorname{rank}(AB) \leq \min[\operatorname{rank}(A), \operatorname{rank}(B)]$$
EE221A Problem Set 2 Solutions - Fall 2011
Problem 1. Linearity.
a) Linear: $A(\alpha u(t) + \beta v(t)) = \alpha u(t) + \beta v(t) = \alpha A(u(t)) + \beta A(v(t))$
b) Linear:
$$A(\alpha u(t) + \beta v(t)) = \int_0^te^{-\sigma}(\alpha u(t-\sigma) + \beta v(t-\sigma))\,d\sigma = \alpha\int_0^te^{-\sigma}u(t-\sigma)\,d\sigma + \beta\int_0^te^{-\sigma}v(t-\sigma)\,d\sigma = \alpha A(u(t)) + \beta A(v(t))$$
c) Linear:
$$A(a_1s^2 + b_1s + c_1 + a_2s^2 + b_2s + c_2) = \int_0^s((b_1 + b_2)t + (a_1 + a_2))\,dt = \int_0^s(b_1t + a_1)\,dt + \int_0^s(b_2t + a_2)\,dt = A(a_1s^2 + b_1s + c_1) + A(a_2s^2 + b_2s + c_2)$$
Problem 2. Nullspace of linear maps. Assume that $A : U \to V$ and that U is a vector space over the field F. $N(A) := \{x \in U : A(x) = \theta_V\}$. So by definition $N(A) \subseteq U$. Let $x, y \in N(A)$ and $\alpha, \beta \in F$. Then $A(\alpha x + \beta y) = \alpha A(x) + \beta A(y) = \alpha\theta_V + \beta\theta_V = \theta_V$. So N(A) is closed under linear combinations and is a subset of U; therefore it is a subspace of U.
Problem 3. Linearity. Call the map $\mathcal{A}$ in each example for clarity.
i) Linear: $\mathcal{A}(X + Y) = A(X + Y) + (X + Y)B = AX + AY + XB + YB = (AX + XB) + (AY + YB) = \mathcal{A}(X) + \mathcal{A}(Y)$
ii) Linear: $\mathcal{A}(X + Y) = A(X + Y) + B(X + Y)C = AX + AY + BXC + BYC = (AX + BXC) + (AY + BYC) = \mathcal{A}(X) + \mathcal{A}(Y)$
iii) Nonlinear:
$$\mathcal{A}(X + Y) = A(X + Y) + (X + Y)B(X + Y) = AX + AY + XBX + XBY + YBX + YBY = \mathcal{A}(X) + \mathcal{A}(Y) + XBY + YBX \neq \mathcal{A}(X) + \mathcal{A}(Y) \text{ in general}$$
Problem 4. Solutions to linear equations. If $b \notin R(A)$, then there are no solutions: $S = \emptyset \neq \{0\}$ ($\dim S$ is then $-1$ or undefined, depending on convention; taking it to be 0 is somewhat less preferable since it would make sense to reserve zero for the dimension of a singleton set). If $b \in R(A)$, then $A(x + z) = b$ for any $x \in S$, $z \in N(A)$, so S is a translate of the nullspace and $\dim S = \dim N(A)$, independent of which $b \in R(A)$ is given.
Lemma. Let $A : U \to V$ be linear with $\dim U = n$, let $\{u_j\}_{j=k+1}^n$ be a basis for N(A), and complete it to a basis $\{u_j\}_{j=1}^n$ of U (use the theorem of the incomplete basis). Then $S = \{A(u_j)\}_{j=1}^k$ is a basis for R(A).
Proof. $R(A) = \{A(u) : u \in U\} = \{A(\sum_{j=1}^na_ju_j) : a_j \in F\} = \{\sum_{j=1}^ka_jA(u_j)\}$, so S spans R(A). Now suppose S were not linearly independent, so that $a_1A(u_1) + \cdots + a_kA(u_k) = 0$ with $a_j \neq 0$ for some j. Then by linearity $A(a_1u_1 + \cdots + a_ku_k) = 0$, i.e. $a_1u_1 + \cdots + a_ku_k \in N(A)$. Since $\{u_j\}_1^n$ is a basis for U and $\{u_j\}_{k+1}^n$ is a basis for N(A), we must have $a_1u_1 + \cdots + a_ku_k = 0$, which contradicts the linear independence of $\{u_j\}_1^k$. Thus S is linearly independent and spans R(A), so it is a basis for R(A).
Problem 5. Rank-Nullity Theorem. The theorem follows directly from the above lemma: $\dim R(A) = k = n - \dim N(A)$.
Problem 6. Representation of a Linear Map. We have from the rank-nullity theorem that $\dim N(A) = n - k$. Let $\{u_i\}_{i=k+1}^n$ be a basis of N(A). Then $A(u_i) = \theta_V$ for all $i = k+1, \ldots, n$. Since the zero vector has all its coordinates zero in any basis, this implies that the last $n - k$ columns of A are zero. Now it remains to show that we can complete the basis for U and choose a basis for V such that the first k columns are as desired. But the lemma above gives us what we need. The form of the matrix A tells us that we want the i-th basis vector of V to be $A(u_i)$, for $i = 1, \ldots, k$. So let the basis for U be $B_U = \{u_i\}_1^n$ (where the last $n - k$ basis vectors are a basis for N(A) and the first k are arbitrarily chosen to complete the basis), and the basis for V be $B_V = \{v_i\}_1^m$, where the first k basis vectors are defined by $v_i = A(u_i)$ and the remaining $m - k$ are arbitrarily chosen (but we know we can find them by the theorem of the incomplete basis). Thus the block sizes are as follows:
$$A = \begin{bmatrix} I_{k\times k} & 0_{k\times(n-k)} \\ 0_{(m-k)\times k} & 0_{(m-k)\times(n-k)} \end{bmatrix}$$
Problem 7. Sylvester's Inequality.
Let $U = \mathbb{R}^p$, $V = \mathbb{R}^n$, $W = \mathbb{R}^m$, so that $B : U \to V$ and $A : V \to W$. Define $A|_{R(B)} : R(B) \to W : v \mapsto Av$, i.e. A restricted in domain to the range of B. Clearly $R(AB) = R(A|_{R(B)})$. Rank-nullity gives that $\dim R(A|_{R(B)}) + \dim N(A|_{R(B)}) = \dim R(B)$, so $\dim R(AB) \leq \dim R(B)$. Now $R(A|_{R(B)}) \subseteq R(A) \implies \dim R(AB) = \dim R(A|_{R(B)}) \leq \dim R(A)$. We now have one of the inequalities: $\dim R(AB) \leq \min\{\dim R(A), \dim R(B)\}$. Clearly $N(A|_{R(B)}) \subseteq N(A) \implies \dim N(A|_{R(B)}) \leq \dim N(A)$, so by rank-nullity, $\dim R(A|_{R(B)}) + \dim N(A) \geq \dim R(B) = \operatorname{rank}(B)$. Finally, by rank-nullity again, $\dim N(A) = n - \operatorname{rank}(A)$. So we have $\operatorname{rank}(AB) + n - \operatorname{rank}(A) \geq \operatorname{rank}(B)$. Rearranging this gives the other inequality we are looking for.
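A randomized spot-check of the inequality (my addition; the test harness and names are hypothetical):

```python
# Monte Carlo check of Sylvester's inequality:
# rank A + rank B - n <= rank(AB) <= min(rank A, rank B).
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    m, n, p = rng.integers(1, 6, size=3)
    A = rng.integers(-2, 3, (m, n))   # small integer matrices are often rank-deficient
    B = rng.integers(-2, 3, (n, p))
    rA, rB = np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)
    rAB = np.linalg.matrix_rank(A @ B)
    assert rA + rB - n <= rAB <= min(rA, rB)
print("Sylvester's inequality held on all random samples")
```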
EE221A Linear System Theory
Problem Set 3
Professor C. Tomlin
Department of Electrical Engineering and Computer Sciences, UC Berkeley
Fall 2010
Issued 9/22; Due 9/30
Problem 1. Let $A : \mathbb{R}^3 \to \mathbb{R}^3$ be a linear map. Consider two bases for $\mathbb{R}^3$: $E = \{e_1, e_2, e_3\}$ of standard basis elements for $\mathbb{R}^3$, and
$$B = \left\{\begin{bmatrix}1\\0\\2\end{bmatrix}, \begin{bmatrix}2\\0\\1\end{bmatrix}, \begin{bmatrix}0\\5\\1\end{bmatrix}\right\}$$
Now suppose that:
$$A(e_1) = \begin{bmatrix}2\\1\\0\end{bmatrix}, \qquad A(e_2) = \begin{bmatrix}0\\0\\0\end{bmatrix}, \qquad A(e_3) = \begin{bmatrix}0\\4\\2\end{bmatrix}$$
Write down the matrix representation of A with respect to (a) E and (b) B.
Problem 2: Representation of a Linear Map. Let A be a linear map of the n-dimensional linear space (V, F) onto itself. Assume that for some $\lambda \in F$ and basis $(v_i)_{i=1}^n$ we have
$$Av_k = \lambda v_k + v_{k+1}, \qquad k = 1, \ldots, n-1$$
and
$$Av_n = \lambda v_n$$
Obtain a representation of A with respect to this basis.
Problem 3: Norms. Show that for $x \in \mathbb{R}^n$,
$$\frac{1}{\sqrt n}\|x\|_1 \leq \|x\|_2 \leq \|x\|_1.$$
Problem 4. Prove that the induced matrix norm satisfies $\|A\|_{1,i} = \max_{j\in\{1,\ldots,n\}}\sum_{i=1}^m|a_{ij}|$.
Problem 5. Consider an inner product space V, with $x, y \in V$. Show, using properties of the inner product, that
$$\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2$$
where $\|\cdot\|$ is the norm induced by the inner product.
Problem 6. Consider an inner product space $(\mathbb{C}^n, \mathbb{C})$, equipped with the standard inner product in $\mathbb{C}^n$, and a map $A : \mathbb{C}^n \to \mathbb{C}^n$ which consists of matrix multiplication by an $n\times n$ matrix A. Find the adjoint of A.
Problem 7: Continuity and Linearity. Show that any linear map between finite dimensional vector spaces is continuous.
EE221A Problem Set 3 Solutions - Fall 2011
Problem 1.
a) A w.r.t. the standard basis is, by inspection,
$$A_E = \begin{bmatrix}2 & 0 & 0\\ 1 & 0 & 4\\ 0 & 0 & 2\end{bmatrix}.$$
b) Now consider the diagram from LN3, p.8. We are dealing with exactly this situation: we have one matrix representation and two bases, but we are using them in both the domain and the codomain, so we have all the ingredients. The matrices P and Q for the similarity transform in this case are
$$P = [e_1\ e_2\ e_3]^{-1}[b_1\ b_2\ b_3] = [b_1\ b_2\ b_3],$$
since the matrix formed from the E basis vectors is just the identity; and
$$Q = [b_1\ b_2\ b_3]^{-1}[e_1\ e_2\ e_3] = [b_1\ b_2\ b_3]^{-1} = P^{-1}.$$
Let $A_B$ be the matrix representation of A w.r.t. B. From the diagram, we have
$$A_B = QA_EP = P^{-1}A_EP = \begin{bmatrix}1 & 2 & 0\\ 0 & 0 & 5\\ 2 & 1 & 1\end{bmatrix}^{-1}\begin{bmatrix}2 & 0 & 0\\ 1 & 0 & 4\\ 0 & 0 & 2\end{bmatrix}\begin{bmatrix}1 & 2 & 0\\ 0 & 0 & 5\\ 2 & 1 & 1\end{bmatrix} = \frac{1}{15}\begin{bmatrix}16 & 4 & 12\\ 7 & 32 & 6\\ 21 & 6 & 12\end{bmatrix}$$
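The same computation can be reproduced numerically (my addition). One caveat: the extraction of this document appears to have dropped minus signs from the problem data, so the printed entries may not match this output exactly; the point is the computation pattern for a change of basis.

```python
# Similarity transform A_B = P^{-1} A_E P, with the data as printed above.
import numpy as np

A_E = np.array([[2., 0., 0.],
                [1., 0., 4.],
                [0., 0., 2.]])
P = np.array([[1., 2., 0.],     # columns are the B basis vectors b1, b2, b3
              [0., 0., 5.],
              [2., 1., 1.]])

A_B = np.linalg.solve(P, A_E @ P)   # P^{-1}(A_E P) without forming the inverse
print(15 * A_B)                     # compare against the 1/15-scaled matrix
```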
Problem 2. Representation of a linear map. This is straightforward from the definition of matrix representation (the k-th column carries the coordinates of $Av_k$ in the basis $(v_i)$):
$$A = \begin{bmatrix}\lambda & & & 0\\ 1 & \lambda & & \\ & \ddots & \ddots & \\ 0 & & 1 & \lambda\end{bmatrix}$$
Problem 3. Norms.
Proof. 1st inequality: Consider the Cauchy-Schwarz inequality, $\left(\sum_{i=1}^nx_iy_i\right)^2 \leq \left(\sum_{i=1}^nx_i^2\right)\left(\sum_{i=1}^ny_i^2\right)$. Now, let $y = \mathbf{1}$ (the vector of all ones) and apply the inequality to the vector of absolute values $(|x_1|, \ldots, |x_n|)$. Then we have $\|x\|_1^2 \leq n\|x\|_2^2$, which is equivalent to the first inequality.
2nd inequality: Note that $\|x\|_2 \leq \|x\|_1 \iff \|x\|_2^2 \leq \|x\|_1^2$. Consider that
$$\|x\|_2^2 = |x_1|^2 + \cdots + |x_n|^2,$$
while
$$\|x\|_1^2 = (|x_1| + \cdots + |x_n|)^2 = |x_1|^2 + |x_1||x_2| + \cdots + |x_1||x_n| + |x_2|^2 + |x_2||x_1| + \cdots + |x_n||x_{n-1}| + |x_n|^2 = \|x\|_2^2 + \text{(nonnegative cross terms)},$$
showing the second inequality.
Problem 4.
Proof. First note that the problem implies that $A \in F^{m\times n}$. By definition,
$$\|A\|_{1,i} = \sup_{u\neq 0}\frac{\|Au\|_1}{\|u\|_1}.$$
Consider $\|Au\|_1 = \left\|\sum_{j=1}^nA_ju_j\right\|_1$, where $A_j$ and $u_j$ represent the j-th column of A and the j-th component of u respectively. Then $\|Au\|_1 \leq \sum_{j=1}^n\|A_j\|_1|u_j|$. Let $a_{\max}$ be the maximum column 1-norm of A; that is,
$$a_{\max} = \max_{j\in\{1,\ldots,n\}}\sum_{i=1}^m|a_{ij}|.$$
Then $\|Au\|_1 \leq \sum_{j=1}^na_{\max}|u_j| = a_{\max}\sum_{j=1}^n|u_j| = a_{\max}\|u\|_1$. So we have that
$$\frac{\|Au\|_1}{\|u\|_1} \leq a_{\max}.$$
Now, it remains to find a u such that equality holds. Choose $\bar u = (0, \ldots, 1, \ldots, 0)^T$, where the 1 is in the k-th component, such that $A\bar u$ pulls out a column of A having the maximum 1-norm. Note that $\|\bar u\|_1 = 1$, and we see then that
$$\frac{\|A\bar u\|_1}{\|\bar u\|_1} = a_{\max}.$$
Thus in this case the supremum is achieved and we have the desired result.
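A small numerical check (my addition) that the induced 1-norm is the maximum absolute column sum, and that a standard basis vector attains it:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3))

col_sums = np.abs(A).sum(axis=0)
k = int(np.argmax(col_sums))
u = np.zeros(3); u[k] = 1.0            # the maximizer u-bar from the proof

print(np.linalg.norm(A, 1))            # numpy's induced 1-norm
print(col_sums.max())                  # max absolute column sum: identical
print(np.linalg.norm(A @ u, 1))        # achieved by u = e_k
```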
Problem 5.
Proof. Straightforward; we simply use properties of the inner product at each step:
$$\|x+y\|^2 + \|x-y\|^2 = \langle x+y, x+y\rangle + \langle x-y, x-y\rangle$$
$$= \langle x, x\rangle + \langle x, y\rangle + \langle y, x\rangle + \langle y, y\rangle + \langle x, x\rangle - \langle x, y\rangle - \langle y, x\rangle + \langle y, y\rangle$$
$$= 2\langle x, x\rangle + 2\langle y, y\rangle = 2\|x\|^2 + 2\|y\|^2$$
Problem 6. We will show that the adjoint map $\mathcal{A}^* : \mathbb{C}^n \to \mathbb{C}^n$ is identical to matrix multiplication by the complex conjugate transpose of A. Initially we will use the notation $A^a$ for the matrix representation of the adjoint of $\mathcal{A}$, and reserve the notation $v^*$ for the complex conjugate transpose of v. First, we know that we can represent $\mathcal{A}$ (w.r.t. the standard basis of $\mathbb{C}^n$) by a matrix in $\mathbb{C}^{n\times n}$; call this matrix A. Then we can use the defining property of the adjoint to write
$$\langle Au, v\rangle = \langle u, A^av\rangle \iff (Au)^*v = u^*A^av \iff u^*A^*v = u^*A^av.$$
Now, this must hold for all $u, v \in \mathbb{C}^n$. Choose $u = e_i$, $v = e_j$ (where $e_k$ is the vector that is all zeros except for a 1 in the k-th entry). This will give
$$a^*_{ij} = a^a_{ij}$$
for all $i, j \in \{1, \ldots, n\}$. Thus $A^a = A^*$; it is no accident that we use the same notation for both adjoints and complex conjugate transposes.
Problem 7. Continuity and Linearity.
Proof. Let $\mathcal{A} : (U, F) \to (V, F)$ with $\dim U = n$ and $\dim V = m$ be a linear map. Let $x, y \in U$, $x \neq y$, and $z = x - y$. Since $\mathcal{A}$ is a linear map between finite dimensional vector spaces we can represent it by a matrix A. Now, the induced norm
$$\|A\|_i := \sup_{z\in U, z\neq 0}\frac{\|Az\|}{\|z\|} \implies \|Az\| \leq \|A\|_i\|z\|.$$
Given some $\varepsilon > 0$, let
$$\delta = \frac{\varepsilon}{\|A\|_i}.$$
So
$$\|x - y\| = \|z\| < \delta \implies \|Ax - Ay\| = \|Az\| \leq \|A\|_i\|z\| < \|A\|_i\delta = \|A\|_i\frac{\varepsilon}{\|A\|_i} = \varepsilon,$$
and we have continuity.
Alternatively, we can also use the induced matrix norm to show Lipschitz continuity:
$$\forall x, y \in U, \quad \|Ax - Ay\| \leq K\|x - y\|,$$
where $K > \|A\|_i$, which shows that the map is Lipschitz continuous, and thus is continuous (LC $\implies$ C; note that the reverse implication is not true!).
EE221A Linear System Theory
Problem Set 4
Professor C. Tomlin
Department of Electrical Engineering and Computer Sciences, UC Berkeley
Fall 2011
Issued 9/30; Due 10/7
Problem 1: Existence and uniqueness of solutions to differential equations.
Consider the following two systems of differential equations:
$$\dot x_1 = -x_1 + e^{-t}\cos(x_1 - x_2)$$
$$\dot x_2 = -x_2 + 15\sin(x_1 - x_2)$$
and
$$\dot x_1 = -x_1 + x_1x_2$$
$$\dot x_2 = -x_2$$
(a) Do they satisfy a global Lipschitz condition?
(b) For the second system, your friend asserts that the solutions are uniquely defined for all possible initial conditions and they all tend to zero for all initial conditions. Do you agree or disagree?
Problem 2: Existence and uniqueness of solutions to linear differential equations.
Let A(t) and B(t) be respectively $n\times n$ and $n\times n_i$ matrices whose elements are real (or complex) valued piecewise continuous functions on $\mathbb{R}_+$. Let $u(\cdot)$ be a piecewise continuous function from $\mathbb{R}_+$ to $\mathbb{R}^{n_i}$. Show that for any fixed $u(\cdot)$, the differential equation
$$\dot x(t) = A(t)x(t) + B(t)u(t) \tag{1}$$
satisfies the conditions of the Fundamental Theorem.
Problem 3: Local or global Lipschitz condition. Consider the pendulum equation with friction and constant input torque:
$$\dot x_1 = x_2$$
$$\dot x_2 = -\frac{g}{l}\sin x_1 - \frac{k}{m}x_2 + \frac{T}{ml^2} \tag{2}$$
where $x_1$ is the angle that the pendulum makes with the vertical, $x_2$ is the angular rate of change, m is the mass of the bob, l is the length of the pendulum, k is the friction coefficient, and T is a constant torque. Let $B_r = \{x \in \mathbb{R}^2 : \|x\| < r\}$. For this system (represented as $\dot x = f(x)$), find whether f is locally Lipschitz in x on $B_r$ for sufficiently small r, locally Lipschitz in x on $B_r$ for any finite r, or globally Lipschitz in x (i.e. Lipschitz for all $x \in \mathbb{R}^2$).
Problem 4: Local or global Lipschitz condition. Consider the scalar differential equation $\dot x = x^2$ for $x \in \mathbb{R}$, with $x(t_0) = x_0 = c$ where c is a constant.
(a) Is this system locally or globally Lipschitz?
(b) Solve this scalar differential equation directly (using methods from undergraduate calculus) and discuss the existence of this solution (for all $t \in \mathbb{R}$, and for c both non-zero and zero).
Problem 5: Perturbed nonlinear systems.
Suppose that some physical system obeys the differential equation
$$\dot x = p(x, t), \qquad x(t_0) = x_0, \qquad t \geq t_0$$
where $p(\cdot, \cdot)$ obeys the conditions of the fundamental theorem. Suppose that as a result of some perturbation the equation becomes
$$\dot z = p(z, t) + f(t), \qquad z(t_0) = x_0 + \delta x_0, \qquad t \geq t_0$$
Given that for $t \in [t_0, t_0 + T]$, $\|f(t)\| \leq \varepsilon_1$ and $\|\delta x_0\| \leq \varepsilon_0$, find a bound on $\|x(t) - z(t)\|$ valid on $[t_0, t_0 + T]$.
EE221A Problem Set 4 Solutions - Fall 2011
Problem 1. Existence and uniqueness of solutions to differential equations.
Call the first system $\dot x = f(x, t)$ and the second one $\dot x = g(x)$.
a) Construct the Jacobians:
$$D_1f(x, t) = \begin{bmatrix}-1 - e^{-t}\sin(x_1 - x_2) & e^{-t}\sin(x_1 - x_2)\\ 15\cos(x_1 - x_2) & -1 - 15\cos(x_1 - x_2)\end{bmatrix}, \qquad Dg(x) = \begin{bmatrix}-1 + x_2 & x_1\\ 0 & -1\end{bmatrix}.$$
$D_1f(x, t)$ is bounded $\forall x$, and $f(x, t)$ is continuous in x, so f is globally Lipschitz continuous. But while g(x) is continuous, Dg(x) is unbounded (consider the (1,1) entry as $x_2 \to \infty$ or the (1,2) entry as $x_1 \to \infty$), so the function is not globally LC.
b) Agree. Note that $x_2$ does not depend on $x_1$; its equation satisfies the conditions of the Fundamental Theorem, and one can directly find the (unique by the FT) solution $x_2(t) = x_2(0)e^{-t} \to 0$ as $t \to \infty$. This solution for $x_2$ can be substituted into the first equation to get
$$\dot x_1 = -x_1 + x_1x_2(0)e^{-t} = x_1\left(x_2(0)e^{-t} - 1\right),$$
which again satisfies the conditions of the Fundamental Theorem, and can be solved to find the unique solution
$$x_1(t) = x_1(0)\exp\left[\left(1 - e^{-t}\right)x_2(0) - t\right],$$
which also tends to zero as $t \to \infty$, for any $x_1(0), x_2(0)$.
Problem 2. Existence and uniqueness of solutions to differential equations.
The FT requires:
i) a differential equation $\dot x = f(x, t)$
ii) an initial condition $x(t_0) = x_0$
iii) f(x, t) piecewise continuous (PC) in t
iv) f(x, t) Lipschitz continuous (LC) in x
We clearly have i), $f(x, t) = A(t)x(t) + B(t)u(t)$, and any IC will do for ii). We are given that A(t), B(t), u(t) are PC in t, so clearly f is also. It remains to be shown that f is LC in x. This is easily shown:
$$\|f(x, t) - f(y, t)\| = \|A(t)(x - y)\| \leq \|A(t)\|_i\|x - y\|$$
Let $k(t) := \|A(t)\|_i$. Since A(t) is PC and norms are continuous, k(t) is PC. Thus f is LC in x, so all the conditions of the FT are satisfied.
Problem 3. Local or global Lipschitz condition.
Construct the Jacobian,
$$Df = \begin{bmatrix}0 & 1\\ -\frac{g}{l}\cos x_1 & -\frac{k}{m}\end{bmatrix}.$$
This is bounded for all x, so the system is globally Lipschitz continuous in x.
Problem 4. Local or global Lipschitz condition.
a) It is only locally LC, since the derivative $2x$ is unbounded for $x \in \mathbb{R}$.
b) The equation is solved by $x(t) = \frac{c}{1 - c(t - t_0)}$, for $c \neq 0$. (For $c = 0$, the solution is simply $x(t) \equiv 0$, defined on all of $\mathbb{R}$.) We can see that $x(t_0) = c$ (the initial condition is satisfied) and $\dot x(t) = \frac{c^2}{(1 - c(t - t_0))^2} = (x(t))^2$ (the differential equation is satisfied). However, this solution is not defined on all of $\mathbb{R}$; consider the solution value as $t \to t_0 + \frac{1}{c}$, where x(t) escapes to infinity (the original includes a sketch of x(t) with a vertical asymptote at $t = t_0 + 1/c$).
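A short numerical illustration of the finite escape time (my addition; the choice c = 1 and the tolerances are arbitrary):

```python
# Integrate xdot = x^2 from x(0) = 1 and compare with x(t) = c/(1 - c t),
# which blows up at t = 1/c = 1.
import numpy as np
from scipy.integrate import solve_ivp

c = 1.0
sol = solve_ivp(lambda t, x: x**2, (0.0, 0.99), [c], rtol=1e-10, atol=1e-12)
exact = c / (1 - c * sol.t)

print(float(sol.y[0, -1]), float(exact[-1]))  # both ~100 as t -> 0.99
# The integrator tracks the analytic solution right up to the asymptote.
```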
Problem 5. Perturbed nonlinear systems.
Let $\phi$ be a solution of $\dot x = p(x, t)$, $x(t_0) = x_0$, and $\psi$ be a solution of $\dot z = p(z, t) + f(t)$, $z(t_0) = x_0 + \delta x_0$. Then we have
$$\phi(t) = x_0 + \int_{t_0}^tp(\phi(\tau), \tau)\,d\tau,$$
$$\psi(t) = x_0 + \delta x_0 + \int_{t_0}^t\left[p(\psi(\tau), \tau) + f(\tau)\right]d\tau,$$
so
$$\|\phi(t) - \psi(t)\| = \left\|\delta x_0 + \int_{t_0}^t\left[p(\phi(\tau), \tau) - p(\psi(\tau), \tau) - f(\tau)\right]d\tau\right\|$$
$$\leq \|\delta x_0\| + \int_{t_0}^t\left[\varepsilon_1 + \|p(\phi(\tau), \tau) - p(\psi(\tau), \tau)\|\right]d\tau$$
$$\leq \varepsilon_0 + \int_{t_0}^t\left[\varepsilon_1 + K(\tau)\|\phi(\tau) - \psi(\tau)\|\right]d\tau$$
$$= \varepsilon_0 + \varepsilon_1(t - t_0) + \int_{t_0}^tK(\tau)\|\phi(\tau) - \psi(\tau)\|\,d\tau$$
Now, identify $u(t) := \|\phi(t) - \psi(t)\|$, $k(t) = K(t)$, $c_1 = \varepsilon_0 + \varepsilon_1(t - t_0)$ and apply Bellman-Gronwall to get
$$\|\phi(t) - \psi(t)\| \leq (\varepsilon_0 + \varepsilon_1(t - t_0))\exp\left(\int_{t_0}^tK(\tau)\,d\tau\right).$$
Now, take $\bar K := \sup_{\tau\in[t_0, t_0+T]}K(\tau)$; then
$$\|\phi(t) - \psi(t)\| \leq (\varepsilon_0 + \varepsilon_1(t - t_0))\exp\left(\int_{t_0}^t\bar K\,d\tau\right) = (\varepsilon_0 + \varepsilon_1(t - t_0))\exp\left(\bar K(t - t_0)\right).$$
EE221A Linear System Theory


Problem Set 5
Professor C. Tomlin
Department of Electrical Engineering and Computer Sciences, UC Berkeley
Fall 2011
Issued 10/18; Due 10/27
Problem 1: Dynamical systems, time invariance.
Suppose that the output of a system is represented by
$$y(t) = \int_{-\infty}^te^{-(t-\tau)}u(\tau)\,d\tau$$
Show that it is a (i) dynamical system, and that it is (ii) time invariant. You may select the input space U to be the set of bounded, piecewise continuous, real-valued functions defined on $(-\infty, \infty)$.
Problem 2: Jacobian Linearization I. Consider the now familiar pendulum equation with friction and constant input torque:
$$\dot x_1 = x_2$$
$$\dot x_2 = -\frac{g}{l}\sin x_1 - \frac{k}{m}x_2 + \frac{T}{ml^2} \tag{1}$$
where $x_1$ is the angle that the pendulum makes with the vertical, $x_2$ is the angular rate of change, m is the mass of the bob, l is the length of the pendulum, k is the friction coefficient, and T is a constant torque. Considering T as the input to this system, derive the Jacobian linearized system which represents an approximate model for small angular motion about the vertical.
Problem 3: Satellite Problem, linearization, state space model.
Model the earth and a satellite as particles. The normalized equations of motion, in an earth-fixed inertial frame, simplified to 2 dimensions (from Lagrange's equations of motion, with Lagrangian $L = T - V = \frac12\dot r^2 + \frac12r^2\dot\theta^2 + \frac{k}{r}$):
$$\ddot r = r\dot\theta^2 - \frac{k}{r^2} + u_1$$
$$\ddot\theta = -\frac{2\dot\theta\dot r}{r} + \frac{1}{r}u_2$$
with $u_1, u_2$ representing the radial and tangential forces due to thrusters. The reference orbit with $u_1 = u_2 = 0$ is circular, with $r(t) \equiv p$ and $\theta(t) = \omega t$. From the first equation it follows that $p^3\omega^2 = k$. Obtain the linearized equation about this orbit. (How many state variables are there?)
Problem 4: Solution of a matrix differential equation.
Let $A_1(\cdot)$, $A_2(\cdot)$, and $F(\cdot)$ be known piecewise continuous $n\times n$ matrices. Let $\Phi_i$ be the transition matrix of $\dot x = A_i(t)x$, for $i = 1, 2$. Show that the solution of the matrix differential equation
$$\dot X(t) = A_1(t)X(t) + X(t)A_2^T(t) + F(t), \qquad X(t_0) = X_0$$
is
$$X(t) = \Phi_1(t, t_0)X_0\Phi_2^T(t, t_0) + \int_{t_0}^t\Phi_1(t, \tau)F(\tau)\Phi_2^T(t, \tau)\,d\tau$$
Problem 5: State Transition Matrix, calculations.
Calculate the state transition matrix for $\dot x(t) = A(t)x(t)$, with the following A(t):
(a) $A(t) = \begin{bmatrix}-1 & 0\\ 2 & -3\end{bmatrix}$; (b) $A(t) = \begin{bmatrix}2t & 0\\ 1 & -1\end{bmatrix}$; (c) $A(t) = \begin{bmatrix}0 & \gamma(t)\\ -\gamma(t) & 0\end{bmatrix}$
Hint: for part (c) above, let $\varphi(t) = \int_0^t\gamma(t')\,dt'$, and consider the matrix
$$\begin{bmatrix}\cos\varphi(t) & \sin\varphi(t)\\ -\sin\varphi(t) & \cos\varphi(t)\end{bmatrix}$$
Problem 6: State transition matrix is invertible.
Consider the matrix differential equation $\dot X(t) = A(t)X(t)$. Show that if there exists a $t_0$ such that $\det X(t_0) \neq 0$, then $\det X(t) \neq 0,\ \forall t \geq t_0$.
HINT: One way to do this is by contradiction. Assume that there exists some $t^*$ for which $\det X(t^*) = 0$, find a non-zero vector k in $N(X(t^*))$, and consider the solution $x(t) := X(t)k$ to the vector differential equation $\dot x(t) = A(t)x(t)$.
EE221A Problem Set 5 Solutions - Fall 2011
Problem 1. Dynamical systems, time invariance.
i) To show that this is a dynamical system we have to identify all the ingredients.
First we need a differential equation of the form $\dot x = f(x, u, t)$: let $x(t) = y(t)$ (so the readout function simply returns the state) and differentiate (using Leibniz) the given integral equation to get
$$\frac{d}{dt}x(t) = -x(t) + u(t).$$
This is a linear time invariant dynamical system by inspection (it is of the form $\dot x(t) = Ax(t) + Bu(t)$), but we can show the axioms. First let us call the system $D = (U, \Sigma, Y, s, r)$. The time domain is $T = \mathbb{R}$. The input space U is as specified in the problem; the state space $\Sigma$ and output space Y are identical and are $\mathbb{R}$. The state transition function is
$$s(t, t_0, x_0, u) = x(t) = e^{-(t-t_0)}x_0 + \int_{t_0}^te^{-(t-\tau)}u(\tau)\,d\tau$$
and the readout function is
$$r(t, x(t), u(t)) = y(t) = x(t).$$
Now to show the axioms. The state transition axiom is easy to prove, since $u(\cdot)$ only enters the state transition function within the integral, where it is only evaluated on $[t_0, t_1]$ (where $t_0$ and $t_1$ are the limits of the integral). For the semigroup axiom, let $s(t_1, t_0, x_0, u) = x(t_1)$ be as defined above. Then plug this into
$$s(t_2, t_1, s(t_1, t_0, x_0, u), u) = e^{-(t_2-t_1)}\left(e^{-(t_1-t_0)}x_0 + \int_{t_0}^{t_1}e^{-(t_1-\tau)}u(\tau)\,d\tau\right) + \int_{t_1}^{t_2}e^{-(t_2-\tau)}u(\tau)\,d\tau$$
$$= e^{-(t_2-t_0)}x_0 + \int_{t_0}^{t_1}e^{-(t_2-\tau)}u(\tau)\,d\tau + \int_{t_1}^{t_2}e^{-(t_2-\tau)}u(\tau)\,d\tau = e^{-(t_2-t_0)}x_0 + \int_{t_0}^{t_2}e^{-(t_2-\tau)}u(\tau)\,d\tau = s(t_2, t_0, x_0, u),$$
for all $t_0 \leq t_1 \leq t_2$, as required.
ii) To show that this dynamical system is time invariant, we need to show that the space of inputs is closed under the time shift operator $T_\tau$; it is (clearly if $u(t) \in U$, then $u(t - \tau) \in U$). Then we need to check that:
$$\rho(t_1, t_0, x_0, u) = e^{-(t_1-t_0)}x_0 + \int_{t_0}^{t_1}e^{-(t_1-\sigma)}u(\sigma)\,d\sigma = e^{-((t_1+\tau)-(t_0+\tau))}x_0 + \int_{t_0+\tau}^{t_1+\tau}e^{-(t_1+\tau-\sigma)}u(\sigma - \tau)\,d\sigma = \rho(t_1 + \tau, t_0 + \tau, x_0, T_\tau u)$$
Problem 2. Jacobian Linearization I.
Let $x := [x_1, x_2]^T$. We are given
$$\frac{d}{dt}x = f(x, u) = \begin{bmatrix}x_2\\ -\frac{g}{l}\sin x_1 - \frac{k}{m}x_2\end{bmatrix} + \begin{bmatrix}0\\ \frac{1}{ml^2}\end{bmatrix}u$$
Note that at the desired equilibrium, the equation for $\dot x_2$ implies that the nominal torque input is zero, so $u_0 = 0$. The Jacobian (w.r.t. x) evaluated at $x_0 = [0, 0]^T$ is
$$D_1f(x, u)|_{x_0, u_0} = \begin{bmatrix}0 & 1\\ -\frac{g}{l}\cos x_1 & -\frac{k}{m}\end{bmatrix}_{x=x_0, u=u_0} = \begin{bmatrix}0 & 1\\ -\frac{g}{l} & -\frac{k}{m}\end{bmatrix}.$$
We can see by inspection that
$$D_2f(x, u) = D_2f(x, u)|_{x=x_0, u=u_0} = \begin{bmatrix}0\\ \frac{1}{ml^2}\end{bmatrix}$$
So the linearized system is
$$\dot{\delta x}(t) = \begin{bmatrix}0 & 1\\ -\frac{g}{l} & -\frac{k}{m}\end{bmatrix}\delta x + \begin{bmatrix}0\\ \frac{1}{ml^2}\end{bmatrix}\delta u$$
(Note: if you assumed, based on the wording of the question, that the torque was held constant for the linearized system, i.e. $\delta u \equiv 0$, then this will also be accepted.)
Problem 3. Satellite Problem, linearization, state space model.
Write as a first-order system: $x_1 = r$, $x_2 = \dot r$, $x_3 = \theta$, $x_4 = \dot\theta$. In these variables the equations of motion are
$$\frac{d}{dt}\begin{bmatrix}x_1\\ x_2\\ x_3\\ x_4\end{bmatrix} = \begin{bmatrix}x_2\\ x_1x_4^2 - \frac{k}{x_1^2} + u_1\\ x_4\\ -\frac{2x_2x_4}{x_1} + \frac{1}{x_1}u_2\end{bmatrix}.$$
The reference orbit has $x_1 = p$, $x_2 = 0$, $x_3 = \omega t$, $x_4 = \omega$, with $u_1 = u_2 = 0$, i.e. $x_0 = [p, 0, \omega t, \omega]^T$, $u_0 = [0, 0]^T$. Let $u = u_0 + \delta u$, which produces the trajectory $x = x_0 + \delta x$, and take $\delta x(t_0) = 0$. So
$$\dot x_0 + \dot{\delta x} = f(x_0 + \delta x, u_0 + \delta u)$$
We can write this in a Taylor series approximation:
$$\dot x_0 + \dot{\delta x} = f(x_0 + \delta x, u_0 + \delta u) = f(x_0, u_0) + D_1f(x, u)|_{x_0, u_0}\delta x + D_2f(x, u)|_{x_0, u_0}\delta u + \text{h.o.t.}$$
$$\implies \dot{\delta x} = D_1f(x, u)|_{x_0, u_0}\delta x + D_2f(x, u)|_{x_0, u_0}\delta u$$
$$D_1f(x, u)|_{x_0, u_0} = \begin{bmatrix}0 & 1 & 0 & 0\\ x_4^2 + \frac{2k}{x_1^3} & 0 & 0 & 2x_1x_4\\ 0 & 0 & 0 & 1\\ \frac{2x_2x_4}{x_1^2} - \frac{u_2}{x_1^2} & -\frac{2x_4}{x_1} & 0 & -\frac{2x_2}{x_1}\end{bmatrix}_{x_0, u_0} = \begin{bmatrix}0 & 1 & 0 & 0\\ 3\omega^2 & 0 & 0 & 2p\omega\\ 0 & 0 & 0 & 1\\ 0 & -\frac{2\omega}{p} & 0 & 0\end{bmatrix}$$
$$D_2f(x, u)|_{x_0, u_0} = \begin{bmatrix}0 & 0\\ 1 & 0\\ 0 & 0\\ 0 & \frac{1}{x_1}\end{bmatrix}_{x_0, u_0} = \begin{bmatrix}0 & 0\\ 1 & 0\\ 0 & 0\\ 0 & \frac{1}{p}\end{bmatrix}$$
Problem 4. Solution of a matrix differential equation.
Proof. First check that the initial condition is satisfied:
$$X(t_0) = \underbrace{\Phi_1(t_0, t_0)}_{=I}X_0\underbrace{\Phi_2^T(t_0, t_0)}_{=I} + \underbrace{\int_{t_0}^{t_0}\Phi_1(t_0, \tau)F(\tau)\Phi_2^T(t_0, \tau)\,d\tau}_{=0} = X_0$$
Now check that the differential equation is satisfied (taking appropriate care of differentiation under the integral sign):
$$\frac{d}{dt}X(t) = A_1(t)\Phi_1(t, t_0)X_0\Phi_2^T(t, t_0) + \Phi_1(t, t_0)X_0\Phi_2^T(t, t_0)A_2^T(t) + \frac{d}{dt}\int_{t_0}^t\Phi_1(t, \tau)F(\tau)\Phi_2^T(t, \tau)\,d\tau$$
$$= A_1(t)\Phi_1(t, t_0)X_0\Phi_2^T(t, t_0) + \Phi_1(t, t_0)X_0\Phi_2^T(t, t_0)A_2^T(t) + \underbrace{\Phi_1(t, t)F(t)\Phi_2^T(t, t)}_{=F(t)} + \int_{t_0}^t\frac{d}{dt}\left(\Phi_1(t, \tau)F(\tau)\Phi_2^T(t, \tau)\right)d\tau$$
$$= A_1(t)\Phi_1(t, t_0)X_0\Phi_2^T(t, t_0) + \Phi_1(t, t_0)X_0\Phi_2^T(t, t_0)A_2^T(t) + F(t) + \int_{t_0}^t\left(A_1(t)\Phi_1(t, \tau)F(\tau)\Phi_2^T(t, \tau) + \Phi_1(t, \tau)F(\tau)\Phi_2^T(t, \tau)A_2^T(t)\right)d\tau$$
$$= A_1(t)\left[\Phi_1(t, t_0)X_0\Phi_2^T(t, t_0) + \int_{t_0}^t\Phi_1(t, \tau)F(\tau)\Phi_2^T(t, \tau)\,d\tau\right] + \left[\Phi_1(t, t_0)X_0\Phi_2^T(t, t_0) + \int_{t_0}^t\Phi_1(t, \tau)F(\tau)\Phi_2^T(t, \tau)\,d\tau\right]A_2^T(t) + F(t)$$
$$= A_1(t)X(t) + X(t)A_2^T(t) + F(t)$$
Problem 5. State Transition Matrix, calculations.
(a)
$$\Phi(t, 0) = e^{At} = \mathcal{L}^{-1}\left[(sI - A)^{-1}\right] = \mathcal{L}^{-1}\left[\begin{bmatrix}s+1 & 0\\ -2 & s+3\end{bmatrix}^{-1}\right] = \mathcal{L}^{-1}\left[\frac{1}{(s+1)(s+3)}\begin{bmatrix}s+3 & 0\\ 2 & s+1\end{bmatrix}\right] = \begin{bmatrix}e^{-t} & 0\\ e^{-t} - e^{-3t} & e^{-3t}\end{bmatrix}$$
Thus,
$$\Phi(t, t_0) = \Phi(t - t_0, 0) = \begin{bmatrix}e^{-(t-t_0)} & 0\\ e^{-(t-t_0)} - e^{-3(t-t_0)} & e^{-3(t-t_0)}\end{bmatrix}$$
(b) Here our approach will be to directly solve the system of equations. Let $x(t) = [x_1(t), x_2(t)]^T$. Then we have $\dot x_1(t) = 2tx_1(t)$. Recall from undergrad (or if not, from section 8) that the solution to the linear homogeneous equation $\dot x(t) = a(t)x(t)$ with initial condition $x(t_0)$ is $x(t) = e^{\int_{t_0}^ta(s)\,ds}x(t_0)$. In this case that gives
$$x_1(t) = x_1(t_0)e^{\int_{t_0}^t2s\,ds} = x_1(t_0)\exp\left(s^2\big|_{t_0}^t\right) = e^{(t^2 - t_0^2)}x_1(t_0).$$
We also have $\dot x_2(t) = x_1(t) - x_2(t) = x_1(t_0)e^{(t^2 - t_0^2)} - x_2(t)$. This can be considered a linear time-invariant system $\frac{d}{dt}x_2(t) = -x_2(t) + u(t)$, with state $x_2$ and input $u(t) = x_1(t_0)e^{(t^2 - t_0^2)}$, with solution $x_2(t) = e^{-(t-t_0)}x_2(t_0) + x_1(t_0)\int_{t_0}^te^{-(t-\tau)}e^{(\tau^2 - t_0^2)}\,d\tau$. We can now write down the s.t.m.,
$$\Phi(t, t_0) = \begin{bmatrix}e^{(t^2 - t_0^2)} & 0\\ \int_{t_0}^te^{-(t-\tau)}e^{(\tau^2 - t_0^2)}\,d\tau & e^{-(t-t_0)}\end{bmatrix}$$
(c) Let $\varphi(t, t_0) = \int_{t_0}^t\gamma(\tau)\,d\tau$. Guess that
$$\Phi(t, t_0) = \begin{bmatrix}\cos\varphi(t, t_0) & \sin\varphi(t, t_0)\\ -\sin\varphi(t, t_0) & \cos\varphi(t, t_0)\end{bmatrix}.$$
This is the s.t.m. if it satisfies the matrix d.e. $\dot X(t) = A(t)X(t)$ with $X(t_0) = I$. Note that $\varphi(t_0, t_0) = 0$, so $X(t_0) = \Phi(t_0, t_0) = I$. First notice $\frac{d}{dt}\varphi(t, t_0) = \frac{d}{dt}\int_{t_0}^t\gamma(\tau)\,d\tau = \gamma(t)$. Now look at the derivative,
$$\frac{d}{dt}\Phi(t, t_0) = \begin{bmatrix}-\sin\varphi(t, t_0)\,\gamma(t) & \cos\varphi(t, t_0)\,\gamma(t)\\ -\cos\varphi(t, t_0)\,\gamma(t) & -\sin\varphi(t, t_0)\,\gamma(t)\end{bmatrix} = \gamma(t)\begin{bmatrix}-\sin\varphi(t, t_0) & \cos\varphi(t, t_0)\\ -\cos\varphi(t, t_0) & -\sin\varphi(t, t_0)\end{bmatrix} = \begin{bmatrix}0 & \gamma(t)\\ -\gamma(t) & 0\end{bmatrix}\begin{bmatrix}\cos\varphi(t, t_0) & \sin\varphi(t, t_0)\\ -\sin\varphi(t, t_0) & \cos\varphi(t, t_0)\end{bmatrix} = A(t)\Phi(t, t_0)$$
Problem 6. State transition matrix is invertible.
Proof. By contradiction: Suppose that there exists $t^*$ such that $X(t^*)$ is singular; this means that there exists $k \neq \theta$ with $X(t^*)k = \theta$. Now let $x(t) := X(t)k$. Then we have that $\dot x(t) = \dot X(t)k = A(t)X(t)k = A(t)x(t)$, and $x(t^*) = X(t^*)k = \theta$. This has the unique solution $x(t) \equiv \theta$, for all t. But in particular this implies that $x(t_0) = X(t_0)k = \theta$ with $k \neq \theta$, which implies that $X(t_0)$ is singular, i.e. $\det X(t_0) = 0$, giving our contradiction.
EE221A Linear System Theory
Problem Set 6
Professor C. Tomlin
Department of Electrical Engineering and Computer Sciences, UC Berkeley
Fall 2011
Issued 10/27; Due 11/4
Problem 1: Linear systems. Using the definitions of linearity and time-invariance discussed in class, show that:
(a) $\dot x = A(t)x + B(t)u$, $y = C(t)x + D(t)u$, $x(t_0) = x_0$ is linear;
(b) $\dot x = Ax + Bu$, $y = Cx + Du$, $x(0) = x_0$ is time invariant (it is clearly linear, from the above).
Here, the matrices in the above are as defined in class for multiple input multiple output systems.
Problem 2: A linear time-invariant system.
Consider a single-input, single-output, time invariant linear state equation
$$\dot x(t) = Ax(t) + bu(t), \qquad x(0) = x_0 \tag{1}$$
$$y(t) = cx(t) \tag{2}$$
If the nominal input is a non-zero constant, $u(t) = \bar u$, under what conditions does there exist a constant nominal solution $x(t) = \bar x_0$, for some $\bar x_0$?
Under what conditions is the corresponding nominal output zero?
Under what conditions do there exist constant nominal solutions that satisfy $\bar y = \bar u$ for all $\bar u$?
Problem 3: Sampled Data System
You are given a linear, time-invariant system
$$\dot x = Ax + Bu \tag{3}$$
which is sampled every T seconds. Denote $x(kT)$ by $x(k)$. Further, the input u is held constant between kT and (k+1)T, that is, $u(t) = u(k)$ for $t \in [kT, (k+1)T)$. Derive the state equation for the sampled data system, that is, give a formula for $x(k+1)$ in terms of $x(k)$ and $u(k)$.
Problem 4: Discrete time linear system solution.
Consider the discrete time linear system:
$$x(k+1) = Ax(k) + Bu(k) \tag{4}$$
$$y(k) = Cx(k) + Du(k) \tag{5}$$
Here, $k \in \mathbb{N}$, $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times n_i}$, $C \in \mathbb{R}^{n_o\times n}$, $D \in \mathbb{R}^{n_o\times n_i}$. Use induction to obtain formulae for y(k), x(k) in terms of $x(k_0)$ and the input sequence $(u_{k_0}, \ldots, u_k)$.
Problem 5: Linear Quadratic Regulator. Consider the system described by the equations $\dot x = Ax + Bu$, $y = Cx$, where
$$A = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}, \qquad B = \begin{bmatrix}0\\ 1\end{bmatrix}, \qquad C = [1\ 0]$$
(a) Determine the optimal control $u^*(t) = -F^*x(t)$, $t \geq 0$, which minimizes the performance index $J = \int_0^\infty(y^2(t) + \rho u^2(t))\,dt$, where $\rho$ is positive and real.
(b) Observe how the eigenvalues of the dynamic matrix of the resulting closed loop system change as a function of $\rho$. Can you comment on the results?
Problem 6. Preservation of Eigenvalues under Similarity Transform.
Consider a matrix $A \in \mathbb{R}^{n\times n}$, and a non-singular matrix $P \in \mathbb{R}^{n\times n}$. Show that the eigenvalues of $\bar A = PAP^{-1}$ are the same as those of A.
Remark: This important fact in linear algebra is the basis for the similarity transform: a redefinition of the state (to a new set of state variables in which the equations above may have a simpler representation) does not affect the eigenvalues of the A matrix, and thus the stability of the system. We will use this similarity transform in our analysis of linear systems.
Problem 7. Using the dyadic expansion discussed in class (Lecture Notes 12), determine $e^{At}$ for square, diagonalizable A (and show your work).
EE221A Problem Set 6 Solutions - Fall 2011
Problem 1. Linear systems.
a) Call this dynamical system $L = (U, \Sigma, Y, s, r)$, where $U = \mathbb{R}^{n_i}$, $\Sigma = \mathbb{R}^n$, $Y = \mathbb{R}^{n_o}$. So clearly $U, \Sigma, Y$ are all linear spaces over the same field ($\mathbb{R}$). We also have the response map
$$\rho(t, t_0, x_0, u) = y(t) = C(t)x(t) + D(t)u(t)$$
and the state transition function
$$s(t, t_0, x_0, u) = x(t) = \Phi(t, t_0)x_0 + \int_{t_0}^t\Phi(t, \tau)B(\tau)u(\tau)\,d\tau$$
We need to check the linearity of the response map; we have that, $\forall t \geq t_0$, $t \in \mathbb{R}_+$:
$$\rho(t, t_0, \alpha_1x_1 + \alpha_2x_2, \alpha_1u_1 + \alpha_2u_2) = C(t)\left[\Phi(t, t_0)(\alpha_1x_1 + \alpha_2x_2) + \int_{t_0}^t\Phi(t, \tau)B(\tau)(\alpha_1u_1(\tau) + \alpha_2u_2(\tau))\,d\tau\right] + D(t)(\alpha_1u_1(t) + \alpha_2u_2(t))$$
$$= \alpha_1\left[C(t)\Phi(t, t_0)x_1 + C(t)\int_{t_0}^t\Phi(t, \tau)B(\tau)u_1(\tau)\,d\tau + D(t)u_1(t)\right] + \alpha_2\left[C(t)\Phi(t, t_0)x_2 + C(t)\int_{t_0}^t\Phi(t, \tau)B(\tau)u_2(\tau)\,d\tau + D(t)u_2(t)\right]$$
$$= \alpha_1\rho(t, t_0, x_1, u_1) + \alpha_2\rho(t, t_0, x_2, u_2)$$
b) Using the definition of time-invariance for dynamical systems, check:
$$\rho(t_1 + \tau, t_0 + \tau, x_0, T_\tau u) = Cx(t_1 + \tau) + Du((t_1 + \tau) - \tau)$$
$$= C\left[e^{A((t_1+\tau)-(t_0+\tau))}x_0 + \int_{t_0+\tau}^{t_1+\tau}e^{A(t_1+\tau-\sigma)}Bu(\sigma - \tau)\,d\sigma\right] + Du(t_1)$$
$$= Ce^{A(t_1-t_0)}x_0 + C\int_{t_0}^{t_1}e^{A(t_1-s)}Bu(s)\,ds + Du(t_1)$$
$$= \rho(t_1, t_0, x_0, u)$$
Problem 2. A linear time-invariant system.
a) The solution is constant exactly when $\dot x(t) = 0$, so $0 = A\bar x_0 + b\bar u \iff A\bar x_0 = -b\bar u$. Such an $\bar x_0$ exists iff $b\bar u \in R(A) \iff b \in R(A)$ (since $\bar u \neq 0$).
b) For the output to be zero, we also need $y(t) = c\bar x_0 = 0$. We can write both conditions as
$$\begin{bmatrix}A\\ c\end{bmatrix}\bar x_0 = \begin{bmatrix}-b\bar u\\ 0\end{bmatrix} = \bar u\begin{bmatrix}-b\\ 0\end{bmatrix},$$
which is equivalent to
$$\begin{bmatrix}-b\\ 0\end{bmatrix} \in R\left(\begin{bmatrix}A\\ c\end{bmatrix}\right).$$
c) Now we must have $\bar u = c\bar x_0$. Similar to the above analysis, this leads to
$$\begin{bmatrix}A\\ c\end{bmatrix}\bar x_0 = \begin{bmatrix}-b\bar u\\ \bar u\end{bmatrix} = \bar u\begin{bmatrix}-b\\ 1\end{bmatrix},$$
and such an $\bar x_0$ will exist whenever
$$\begin{bmatrix}-b\\ 1\end{bmatrix} \in R\left(\begin{bmatrix}A\\ c\end{bmatrix}\right).$$
Problem 3. Sampled Data System.
To prevent confusion between the continuous time system and its discretization, we will use the notation $x[k] := x(kT)$, $u[k] := u(kT)$ in the following:
$$x[k+1] = x((k+1)T) = e^{A((k+1)T - kT)}x(kT) + \int_{kT}^{(k+1)T}e^{A((k+1)T - \tau)}Bu(\tau)\,d\tau = e^{AT}x[k] + \int_{kT}^{(k+1)T}e^{A((k+1)T - \tau)}\,d\tau\,Bu[k]$$
Now, make the change of variables $\sigma = (k+1)T - \tau$ in the integral, to get
$$x[k+1] = e^{AT}x[k] + \int_0^Te^{A\sigma}\,d\sigma\,Bu[k] = A_dx[k] + B_du[k],$$
where
$$A_d := e^{AT}, \qquad B_d := \int_0^Te^{A\sigma}\,d\sigma\,B.$$
Remark. This is known as the exact discretization of the original continuous-time system. If A is invertible, then consider (with the usual disclaimer about proceeding formally where the infinite series is concerned),
$$\int_0^Te^{A\sigma}\,d\sigma = \int_0^T\left(I + A\sigma + \frac12A^2\sigma^2 + \cdots\right)d\sigma = IT + \frac12AT^2 + \frac{1}{3\cdot 2}A^2T^3 + \cdots = A^{-1}\left(AT + \frac12A^2T^2 + \frac{1}{3!}A^3T^3 + \cdots\right) = A^{-1}\left(e^{AT} - I\right)$$
So in this case we have $A_d = e^{AT}$, $B_d = A^{-1}\left(e^{AT} - I\right)B$.
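A numerical cross-check of the remark (my addition; the matrices are an arbitrary invertible example, not from the original):

```python
# Exact (zero-order-hold) discretization: B_d = A^{-1}(e^{AT} - I)B for invertible A.
import numpy as np
from scipy.linalg import expm
from scipy.signal import cont2discrete

A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
T = 0.1

Ad = expm(A * T)
Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B

Ad_ref, Bd_ref, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), T, method='zoh')
print(np.allclose(Ad, Ad_ref), np.allclose(Bd, Bd_ref))   # True True
```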
Problem 4. Discrete time linear system solution.
Assume $k > k_0$, and let $N = k - k_0$ (not to be confused with $\mathbb{N}$ in the problem statement). Then,
$$x(k_0 + 1) = Ax(k_0) + Bu_{k_0}$$
$$x(k_0 + 2) = A(Ax(k_0) + Bu_{k_0}) + Bu_{k_0+1} = A^2x(k_0) + ABu_{k_0} + Bu_{k_0+1}$$
$$x(k_0 + 3) = A(A^2x(k_0) + ABu_{k_0} + Bu_{k_0+1}) + Bu_{k_0+2} = A^3x(k_0) + A^2Bu_{k_0} + ABu_{k_0+1} + Bu_{k_0+2}$$
$$\vdots$$
$$x(k) = x(k_0 + N) = A^Nx(k_0) + A^{N-1}Bu_{k_0} + A^{N-2}Bu_{k_0+1} + \cdots + ABu_{k-2} + Bu_{k-1}$$
$$= A^Nx(k_0) + \begin{bmatrix}A^{N-1}B & A^{N-2}B & \cdots & AB & B\end{bmatrix}\begin{bmatrix}u_{k_0}\\ u_{k_0+1}\\ \vdots\\ u_{k-2}\\ u_{k-1}\end{bmatrix}$$
$$= A^Nx(k_0) + \sum_{i=1}^NA^{N-i}Bu_{k_0+i-1}$$
$$= A^{k-k_0}x(k_0) + \sum_{i=k_0}^{k-1}A^{k-1-i}Bu_i \tag{1}$$
$$= A^{k-k_0}x(k_0) + \sum_{i=1}^{k-k_0}A^{k-k_0-i}Bu_{k_0+i-1} \quad \text{(alternate form)}$$
$$= A^{k-k_0}x(k_0) + \sum_{i=0}^{k-k_0-1}A^iBu_{k-i-1} \quad \text{(alternate form)}$$
Thus,
$$y(k) = Cx(k) + Du(k) = CA^{k-k_0}x(k_0) + C\sum_{i=k_0}^{k-1}A^{k-1-i}Bu_i + Du(k)$$
Remark. Note the similarity between the form of (1) and the usual form of the analogous continuous time case,
$$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^te^{A(t-\tau)}Bu(\tau)\,d\tau.$$
Problem 5. Linear Quadratic Regulator.
a) We have a cost function of the form
$$J = \int_0^\infty\left(y^TQy + u^TRu\right)dt,$$
where in this case $Q = 1$, $R = \rho$. In LN11 we have a proof that the optimal control is
$$u^* = -F^*x(t) = -R^{-1}B^TPx(t) = -\frac{1}{\rho}B^TPx(t),$$
where P is the unique positive definite solution to the (algebraic) Riccati equation
$$PA + A^TP - PBR^{-1}B^TP + C^TQC = 0$$
In this case the sparsity of A, B, C suggests that we may be able to determine the solution to the ARE by hand:
$$\begin{bmatrix}p_{11} & p_{12}\\ p_{21} & p_{22}\end{bmatrix}\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix} + \begin{bmatrix}0 & 0\\ 1 & 0\end{bmatrix}\begin{bmatrix}p_{11} & p_{12}\\ p_{21} & p_{22}\end{bmatrix} - \frac{1}{\rho}\begin{bmatrix}p_{11} & p_{12}\\ p_{21} & p_{22}\end{bmatrix}\begin{bmatrix}0 & 0\\ 0 & 1\end{bmatrix}\begin{bmatrix}p_{11} & p_{12}\\ p_{21} & p_{22}\end{bmatrix} + \begin{bmatrix}1 & 0\\ 0 & 0\end{bmatrix} = \begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix}$$
$$\implies p_{11} = \sqrt2\,\rho^{1/4}, \quad p_{12} = p_{21} = \sqrt\rho, \quad p_{22} = \sqrt2\,\rho^{3/4} \implies P = \begin{bmatrix}\sqrt2\,\rho^{1/4} & \sqrt\rho\\ \sqrt\rho & \sqrt2\,\rho^{3/4}\end{bmatrix}$$
Thus,
$$u^*(t) = -\frac{1}{\rho}\begin{bmatrix}0 & 1\end{bmatrix}\begin{bmatrix}\sqrt2\,\rho^{1/4} & \sqrt\rho\\ \sqrt\rho & \sqrt2\,\rho^{3/4}\end{bmatrix}x(t) = -\begin{bmatrix}\rho^{-1/2} & \sqrt2\,\rho^{-1/4}\end{bmatrix}x(t) = -F^*x(t)$$
b) The closed loop system is
$$\dot x(t) = Ax(t) - BF^*x(t) = (A - BF^*)x(t) = \left(\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix} - \begin{bmatrix}0\\ 1\end{bmatrix}\begin{bmatrix}\rho^{-1/2} & \sqrt2\,\rho^{-1/4}\end{bmatrix}\right)x(t) = \begin{bmatrix}0 & 1\\ -\rho^{-1/2} & -\sqrt2\,\rho^{-1/4}\end{bmatrix}x(t),$$
and the closed loop dynamics $A_{CL} = A - BF^*$ has eigenvalues
$$\frac{\sqrt2}{2}\rho^{-1/4}(-1 \pm j),$$
so the poles lie on 45-degree lines from the origin in the left half plane. Since $\rho$ appears in the denominator, small values of $\rho$ correspond to poles far away from the origin; the system response will be faster than for larger values of $\rho$. However, in all cases the damping ratio $\zeta = \frac{\sqrt2}{2}$ will be the same.
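A numerical check of the hand solution (my addition), here for the sample value $\rho = 1$:

```python
# For rho = 1: P = [[sqrt(2), 1], [1, sqrt(2)]], F = [1, sqrt(2)],
# closed-loop poles at (-1 +/- j)/sqrt(2).
import numpy as np
from scipy.linalg import solve_continuous_are

rho = 1.0
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
Q = np.array([[1., 0.], [0., 0.]])   # C^T C with C = [1 0]
R = np.array([[rho]])

P = solve_continuous_are(A, B, Q, R)
F = np.linalg.solve(R, B.T @ P)
print(P)                                # ~[[1.414, 1], [1, 1.414]]
print(np.linalg.eigvals(A - B @ F))     # ~ -0.707 +/- 0.707j
```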
Problem 6. Preservation of Eigenvalues under Similarity Transform.
Recall the property of determinants that $\det AB = \det A\det B$. Then,
$$\det(sI - \bar A) = \det(sI - PAP^{-1}) = \det(sPP^{-1} - PAP^{-1}) = \det P\det(sI - A)\det P^{-1} = \det P\det P^{-1}\det(sI - A) = \det(sI - A)$$
Thus the characteristic polynomials of $\bar A$ and A are identical, and so are their eigenvalues.
Problem 7.
First, consider $(At)^n = A^nt^n = \left(\sum_{i=1}^n\lambda_ie_iv_i^T\right)^nt^n = \sum_{i=1}^n\lambda_i^nt^ne_iv_i^T$ (using the same argument as the $n = 2$ case in the lecture notes). Recall that $\sum_{i=1}^ne_iv_i^T = I$. Then,
$$e^{At} = I + At + \frac{t^2}{2!}A^2 + \frac{t^3}{3!}A^3 + \cdots = \sum_{i=1}^ne_iv_i^T + t\left(\sum_{i=1}^n\lambda_ie_iv_i^T\right) + \frac{t^2}{2!}\left(\sum_{i=1}^n\lambda_i^2e_iv_i^T\right) + \frac{t^3}{3!}\left(\sum_{i=1}^n\lambda_i^3e_iv_i^T\right) + \cdots$$
$$= \sum_{i=1}^n\left(1 + \lambda_it + \frac{t^2}{2!}\lambda_i^2 + \frac{t^3}{3!}\lambda_i^3 + \cdots\right)e_iv_i^T = \sum_{i=1}^ne^{\lambda_it}e_iv_i^T,$$
where we are treating the infinite series representation of the exponential formally.
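The dyadic expansion is easy to validate numerically (my addition; the example matrix is an arbitrary diagonalizable choice):

```python
# For diagonalizable A, the rows of T^{-1} supply the v_i^T, and
# e^{At} = sum_i e^{lambda_i t} e_i v_i^T.
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1.], [-2., -3.]])     # eigenvalues -1, -2
lam, T = np.linalg.eig(A)                # columns of T are eigenvectors e_i
V = np.linalg.inv(T)                     # rows of V are the dual basis v_i^T
t = 0.7

E = sum(np.exp(lam[i] * t) * np.outer(T[:, i], V[i, :]) for i in range(2))
print(np.allclose(E, expm(A * t)))       # True
```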
EE221A Linear System Theory
Problem Set 7
Professor C. Tomlin
Department of Electrical Engineering and Computer Sciences, UC Berkeley
Fall 2011
Issued 11/3; Due 11/10
Problem 1.
A has characteristic polynomial $(s - \lambda_1)^5(s - \lambda_2)^3$, it has four linearly independent eigenvectors, the largest Jordan block associated to $\lambda_1$ is of dimension 2, and the largest Jordan block associated to $\lambda_2$ is of dimension 3. Write down the Jordan form J of this matrix and write down $\cos(e^A)$ explicitly.
Problem 2.
A matrix $A \in \mathbb{R}^{6\times 6}$ has minimal polynomial $s^3$. Give bounds on the rank of A.
Problem 3: Jordan Canonical Form.
Given
$$A = \begin{bmatrix}-3 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & -3 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & -3 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -4 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & -4 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}$$
(a) What are the eigenvalues of A? How many linearly independent eigenvectors does A have? How many generalized eigenvectors?
(b) What are the eigenvalues of $e^{At}$?
(c) Suppose this matrix A were the dynamic matrix of an LTI system. What happens to the state trajectory over time (magnitude grows, decays, remains bounded...)?
Problem 4.
You are told that $A : \mathbb{R}^n \to \mathbb{R}^n$ and that $R(A) \subset N(A)$. Can you determine A up to a change of basis? Why or why not?
Problem 6.
Let $A \in \mathbb{R}^{n\times n}$ be non-singular. True or false: the nullspace of $\cos(\log(A))$ is an A-invariant subspace?
Problem 7.
Consider $A \in \mathbb{R}^{n\times n}$, $b \in \mathbb{R}^n$. Show that $\operatorname{span}\{b, Ab, \ldots, A^{n-1}b\}$ is an A-invariant subspace.
EE221A Problem Set 7 Solutions - Fall 2011
Problem 1.
With the given information, we can determine the Jordan form $J = TAT^{-1}$ of A to be
$$J = \operatorname{blockdiag}\left(\begin{bmatrix}\lambda_1 & 1\\ 0 & \lambda_1\end{bmatrix},\ \begin{bmatrix}\lambda_1 & 1\\ 0 & \lambda_1\end{bmatrix},\ \begin{bmatrix}\lambda_1\end{bmatrix},\ \begin{bmatrix}\lambda_2 & 1 & 0\\ 0 & \lambda_2 & 1\\ 0 & 0 & \lambda_2\end{bmatrix}\right)$$
(two $2\times 2$ blocks and one $1\times 1$ block for $\lambda_1$, and one $3\times 3$ block for $\lambda_2$: four blocks in all, matching the four linearly independent eigenvectors). Thus, with $f(x) = \cos(e^x)$, $f'(x) = -e^x\sin(e^x)$ and $f''(x) = -e^x\sin(e^x) - e^{2x}\cos(e^x)$, applying f block by block gives
$$\cos\left(e^J\right) = \operatorname{blockdiag}\left(\begin{bmatrix}\cos e^{\lambda_1} & -e^{\lambda_1}\sin e^{\lambda_1}\\ 0 & \cos e^{\lambda_1}\end{bmatrix},\ \begin{bmatrix}\cos e^{\lambda_1} & -e^{\lambda_1}\sin e^{\lambda_1}\\ 0 & \cos e^{\lambda_1}\end{bmatrix},\ \begin{bmatrix}\cos e^{\lambda_1}\end{bmatrix},\ \begin{bmatrix}\cos e^{\lambda_2} & -e^{\lambda_2}\sin e^{\lambda_2} & -\frac12\left(e^{\lambda_2}\sin e^{\lambda_2} + e^{2\lambda_2}\cos e^{\lambda_2}\right)\\ 0 & \cos e^{\lambda_2} & -e^{\lambda_2}\sin e^{\lambda_2}\\ 0 & 0 & \cos e^{\lambda_2}\end{bmatrix}\right),$$
and $\cos\left(e^A\right) = T^{-1}\cos\left(e^J\right)T$.
Problem 2.
We know that there is a single eigenvalue $\lambda = 0$ with multiplicity 6, and that the size of the largest Jordan block is 3. We know that $\operatorname{rank}(A) = \operatorname{rank}(T^{-1}JT) = \operatorname{rank}(J)$ since T is full rank (apply Sylvester's inequality). Then J must have rank of at least 2, arising from the 1s on the superdiagonal of the Jordan block of size 3. If all the other Jordan blocks were of size 1, then there would be no additional 1s on the superdiagonal, so the lower bound on $\operatorname{rank}(A)$ is 2. Now, the most 1s on the superdiagonal that this matrix could have is 4, which would be the case if there were two Jordan blocks of size 3. So $\operatorname{rank}(A) \leq 4$. Thus the bounds are
$$2 \leq \operatorname{rank}(A) \leq 4.$$
Problem 3. Jordan Canonical Form.
a) Since this matrix is upper triangular (indeed, already in Jordan form) we can read the eigenvalues from the diagonal elements: $\sigma(A) = \{-3, -4, 0\}$. Since there are 4 Jordan blocks, there are also 4 linearly independent eigenvectors, and 3 generalized eigenvectors (2 associated with the eigenvalue $-3$ and 1 with the eigenvalue $-4$).
b) By the spectral mapping theorem,
$$\sigma\left(e^{At}\right) = e^{\sigma(A)t} = \left\{e^{-3t}, e^{-4t}, 1\right\}$$
c) Since $\sigma(A)$ has an eigenvalue not in the open left half plane, the system is not (internally) asymptotically stable. (Note, however, that it is (internally) stable, since the questionable eigenvalues are on the $j\omega$-axis and have Jordan blocks of size 1.) In particular, the first 5 states will decay to zero asymptotically (indeed, exponentially), and the last two will remain bounded (indeed, constant).
Problem 4.
No. The given property $R(A) \subset N(A)$ is equivalent to $A^2v = \theta,\ \forall v \in \mathbb{R}^n$. Clearly $A_0 = 0_{n\times n}$ has this property, but so does, e.g.,
$$A_1 = \begin{bmatrix}0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 0 & \cdots & 0\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 0\end{bmatrix},$$
and since $A_0$, $A_1$ are both in Jordan form but are not the same (even with block reordering), this means that A cannot be determined up to a change of basis.
Problem 6.
True.
Proof. Let $f(x) := \cos(\log(x))$. We can write $A = T^{-1}JT$. So $f(A) = f(T^{-1}JT) = T^{-1}f(J)T$. Now consider $N(f(A)) = N(T^{-1}f(J)T)$. If $x \in N(T^{-1}f(J)T)$, then $T^{-1}f(J)Tx = \theta \implies f(J)Tx = \theta \implies Tx \in N(f(J))$. We need to show that $f(A)Ax = T^{-1}f(J)TT^{-1}JTx = T^{-1}f(J)JTx = \theta$. This will be true if J and f(J) commute, because if so, then $f(J)JTx = Jf(J)Tx = \theta$, since we have shown that $Tx \in N(f(J))$ whenever $x \in N(f(A))$.
Note that the block structure of f(J) and J leads to f(J)J and Jf(J) having the same block structure, and we only need to check whether $J_i$ and $f(J_i)$ commute, where $J_i$ is the i-th Jordan block. Write $J_i = \lambda_iI + S$, where S is an upper shift matrix (all zeros except for 1s on the superdiagonal). So we want to know whether $(\lambda_iI + S)f(J_i) = \lambda_if(J_i) + Sf(J_i)$ equals $f(J_i)\lambda_i + f(J_i)S$; in other words, does $Sf(J_i) = f(J_i)S$? Note that when S pre-multiplies a matrix, the result is the original matrix with its entries shifted up and the last row filled with zeros; when S post-multiplies a matrix, the result is the original matrix with its entries shifted to the right and the first column filled with zeros. Since $f(J_i)$ is an upper-triangular, banded (Toeplitz) matrix, the result is the same in either case, and so f(J) and J commute.
So indeed, the nullspace of $\cos(\log(A))$ is an A-invariant subspace.
Alternate proof: Let $f(x) := \cos(\log(x))$. By the spectral mapping theorem, $\sigma(f(A)) = f(\sigma(A))$; since we are interested in the nullspace of f(A), we want to consider the eigenvectors associated with eigenvalues of A that f maps to zero. These are the values of x that make $\cos(\log(x)) = 0$, namely $e^{\pi/2}$, $e^{3\pi/2}$, and so on. We have seen that for any eigenvalue $\lambda$ of A, the space $N(A - \lambda I)$ spanned by the eigenvectors associated with that eigenvalue is A-invariant. The nullspace of f(A) is thus the direct sum of such subspaces and is hence also A-invariant. (Thanks to Roy Dong for this proof.)
Another alternate proof: Since $f(x) := \cos(\log(x))$ is analytic for $x \neq 0$, and A nonsingular means $0 \notin \sigma(A)$, we have $f(A) = p(A)$ for some polynomial $p(x) = c_0 + c_1x + \cdots + c_{n-1}x^{n-1}$ of finite degree. Then A-invariance of the nullspace is easy to check. Let $v \in N(f(A))$, so $f(A)v = 0$. Then, since A commutes with any polynomial in A,
$$f(A)Av = \left(c_0I + c_1A + \cdots + c_{n-1}A^{n-1}\right)Av = A\left(c_0I + c_1A + \cdots + c_{n-1}A^{n-1}\right)v = Af(A)v = 0,$$
so $Av \in N(f(A))$.
Problem 7.
Proof. Let $v \in \mathcal{V} := \operatorname{span}\left\{b, Ab, A^2b, \ldots, A^{n-1}b\right\}$. Then $v = \beta_0b + \beta_1Ab + \beta_2A^2b + \cdots + \beta_{n-1}A^{n-1}b$.
Now consider $Av = \beta_0Ab + \beta_1A^2b + \beta_2A^3b + \cdots + \beta_{n-2}A^{n-1}b + \beta_{n-1}A^nb$.
Apply the Cayley-Hamilton theorem:
$$A^n = \alpha_0I + \alpha_1A + \cdots + \alpha_{n-1}A^{n-1},$$
so we have
$$Av = (\beta_{n-1}\alpha_0)b + (\beta_0 + \beta_{n-1}\alpha_1)Ab + (\beta_1 + \beta_{n-1}\alpha_2)A^2b + \cdots + (\beta_{n-2} + \beta_{n-1}\alpha_{n-1})A^{n-1}b,$$
and so $Av \in \mathcal{V}$.
EE221A Linear System Theory
Problem Set 8
Professor C. Tomlin
Department of Electrical Engineering and Computer Sciences, UC Berkeley
Fall 2011
Issued 11/10; Due 11/18
Problem 1: BIBO Stability.
Figure 1: A simple heat exchanger, for Problem 1 (hot and cold compartments with volumes $V_H$, $V_C$, flows $f_H$, $f_C$, compartment temperatures $T_H$, $T_C$, and inflow temperatures $T_{Hi}$, $T_{Ci}$).
Consider the simple heat exchanger shown in Figure 1, in which $f_C$ and $f_H$ are the flows (assumed constant) of cold and hot water, $T_H$ and $T_C$ represent the temperatures in the hot and cold compartments, respectively, $T_{Hi}$ and $T_{Ci}$ denote the temperatures of the hot and cold inflows, respectively, and $V_H$ and $V_C$ are the volumes of hot and cold water. The temperatures in both compartments evolve according to:
$$V_C\frac{dT_C}{dt} = f_C(T_{Ci} - T_C) + \beta(T_H - T_C) \tag{1}$$
$$V_H\frac{dT_H}{dt} = f_H(T_{Hi} - T_H) - \beta(T_H - T_C) \tag{2}$$
Let the inputs to this system be $u_1 = T_{Ci}$, $u_2 = T_{Hi}$, the outputs $y_1 = T_C$ and $y_2 = T_H$, and assume that $f_C = f_H = 0.1\ \mathrm{m^3/min}$, $\beta = 0.2\ \mathrm{m^3/min}$ and $V_H = V_C = 1\ \mathrm{m^3}$.
(a) Write the state space and output equations for this system in modal form.
(b) In the absence of any input, determine $y_1(t)$ and $y_2(t)$.
(c) Is the system BIBO stable? Show why or why not.
Problem 2: BIBO Stability
Consider a single input single output LTI system with transfer function $G(s) = \frac{1}{s^2 + 1}$. Is this system BIBO stable?
Problem 3: Exponential stability of LTI systems.
Prove that if the A matrix of the LTI system $\dot x = Ax$ has all of its eigenvalues in the open left half plane, then the equilibrium $x_e = 0$ is asymptotically stable.
Problem 4: Characterization of Internal (State Space) Stability for LTI systems.
(a) Show that the system $\dot x = Ax$ is internally stable if all of the eigenvalues of A are in the closed left half of the complex plane (closed means that the $j\omega$-axis is included), and each of the $j\omega$-axis eigenvalues has a Jordan block of size 1.
(b) Given
$$A = \begin{bmatrix}-3 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & -3 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & -3 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -4 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & -4 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}$$
Is the system $\dot x = Ax$ exponentially stable? Is it stable?
EE221A Problem Set 8 Solutions - Fall 2011
Problem 1. BIBO Stability.
a) First write this LTI system in state space form,
$$\dot x = Ax + Bu = \begin{bmatrix}-\frac{\beta + f_C}{V_C} & \frac{\beta}{V_C}\\ \frac{\beta}{V_H} & -\frac{\beta + f_H}{V_H}\end{bmatrix}x + \begin{bmatrix}\frac{f_C}{V_C} & 0\\ 0 & \frac{f_H}{V_H}\end{bmatrix}u = \begin{bmatrix}-0.3 & 0.2\\ 0.2 & -0.3\end{bmatrix}x + \begin{bmatrix}0.1 & 0\\ 0 & 0.1\end{bmatrix}u$$
$$y = Cx = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}x$$
where $x := (T_C, T_H)^T$, $u := (T_{Ci}, T_{Hi})^T$. This has two distinct eigenvalues (so we know it can be diagonalized): $\lambda_1 = -0.5$ with eigenvector $e_1 = (1, -1)^T$ and $\lambda_2 = -0.1$ with eigenvector $e_2 = (1, 1)^T$. Let $T^{-1} = [e_1\ e_2]$, so
$$T = \frac12\begin{bmatrix}1 & -1\\ 1 & 1\end{bmatrix}$$
and the modal form is
$$\dot z = \tilde Az + \tilde Bu, \qquad y = \tilde Cz,$$
where
$$\tilde A = TAT^{-1} = \begin{bmatrix}-0.5 & 0\\ 0 & -0.1\end{bmatrix}, \qquad \tilde B = TB = \begin{bmatrix}0.05 & -0.05\\ 0.05 & 0.05\end{bmatrix}, \qquad \tilde C = CT^{-1} = \begin{bmatrix}1 & 1\\ -1 & 1\end{bmatrix}.$$
b)
$$y(t) = \tilde Cz(t) = \tilde Ce^{\tilde At}z(0) = \tilde Ce^{\tilde At}Tx_0 = \frac12\begin{bmatrix}1 & 1\\ -1 & 1\end{bmatrix}\begin{bmatrix}e^{-0.5t} & 0\\ 0 & e^{-0.1t}\end{bmatrix}\begin{bmatrix}1 & -1\\ 1 & 1\end{bmatrix}\begin{bmatrix}x_{0,1}\\ x_{0,2}\end{bmatrix}$$
$$\implies y_1(t) = \frac12e^{-0.5t}(x_{0,1} - x_{0,2}) + \frac12e^{-0.1t}(x_{0,1} + x_{0,2})$$
$$y_2(t) = \frac12e^{-0.5t}(x_{0,2} - x_{0,1}) + \frac12e^{-0.1t}(x_{0,1} + x_{0,2})$$
c) Since all the eigenvalues are in the open left half plane, the system is (internally) exponentially stable, and since we have a minimal realization ((A, B) completely controllable and (A, C) completely observable; clear by inspection since B and C are both full rank), it is thus BIBO stable.
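A quick numerical confirmation of the modal data in part (a) (my addition):

```python
import numpy as np

A = np.array([[-0.3, 0.2],
              [ 0.2, -0.3]])
lam, V = np.linalg.eig(A)
print(lam)                        # -0.5 and -0.1
print(V)                          # eigenvectors along (1, -1) and (1, 1)
print(np.linalg.inv(V) @ A @ V)   # diag(-0.5, -0.1): the modal dynamics
```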
Problem 2. BIBO Stability.
The transfer function has poles at $\pm j$; thus there are some poles that are not in $\mathbb{C}_-^o$ (the open left half plane), therefore the system cannot be BIBO stable.
Consider for example the bounded input $u(t) = \sin t$. So $\hat u(s) = \frac{1}{s^2 + 1}$ and
$$\hat y(s) = G(s)\hat u(s) = \frac{1}{(s^2 + 1)^2} = \frac12\left[\frac{1}{s^2 + 1} - \frac{s^2 - 1}{(s^2 + 1)^2}\right] \implies y(t) = \frac12\left[\sin t - t\cos t\right],$$
which clearly grows without bound as $t \to \infty$.
Problem 3. Exponential stability of LTI systems.
We have seen that for an LTI system, $\Phi(t, t_0) = e^{A(t-t_0)}$. By the spectral mapping theorem, $\sigma\left(e^{A(t-t_0)}\right) = f(\sigma(A))$ where $f(x) = e^{x(t-t_0)}$; thus $f'(x) = (t-t_0)e^{x(t-t_0)}$, $f''(x) = (t-t_0)^2e^{x(t-t_0)}$, ..., $f^{(k)}(x) = (t-t_0)^ke^{x(t-t_0)}$. Note that the Jordan form of $e^{A(t-t_0)}$ will be comprised solely of entries of this form (scaled by $\frac{1}{(k-1)!} \leq 1$), i.e. products of polynomials in t and $e^{\lambda_i(t-t_0)}$. When $\operatorname{Re}(\lambda_i) < 0$, all these entries will go to zero as $t \to \infty$, since any decaying exponential eventually dominates any growing polynomial. So the magnitude of the state must also go to zero. The state is also bounded, by continuity of the polynomial-exponential products. This implies that $x_e = 0$ is asymptotically stable.
This is developed a bit more formally in LN15, p.5, but the idea is the same; and we don't need all the mechanics of that proof since we aren't trying to show that the state goes to zero exponentially fast.
Problem 4. Characterization of Internal (State Space) Stability for LTI systems.
(a) For internal stability we simply need the state to be bounded for all $t \geq t_0$. This requires that $\left\|e^{Jt}\right\|$ be bounded, where J is the Jordan form of A. By the analysis in Problem 3, this is clearly true for the subspaces of the state space corresponding to eigenvalues in the open left half plane. For subspaces corresponding to an eigenvalue $\lambda_i = j\omega$ on the imaginary axis, note that the corresponding Jordan block $J_i$ with block size 1 leads to simply $e^{J_it} = e^{\lambda_it} = e^{j\omega t} = \cos\omega t + j\sin\omega t$, hence $\left\|e^{J_it}\right\| = 1$.
(b) This system is in Jordan form; the eigenvalues either have negative real part, or they are on the imaginary axis and have Jordan block size 1, so by the result of part (a) the system is (internally) stable. However, because of the eigenvalues at zero, the system is not exponentially stable.
EE221A Linear System Theory
Problem Set 9
Professor C. Tomlin
Department of Electrical Engineering and Computer Sciences, UC Berkeley
Fall 2011
Issued 11/21; Due 12/1
Problem 1: Lyapunov Equation.
(a) Consider the linear map $L : \mathbb{R}^{n\times n} \to \mathbb{R}^{n\times n}$ defined by $L(P) = A^TP + PA$. Show that if $\lambda_i + \lambda_j \neq 0,\ \forall\lambda_i, \lambda_j \in \sigma(A)$, the equation
$$A^TP + PA = -Q$$
has a unique symmetric solution for given symmetric Q.
(b) Show that if $\sigma(A) \subset \mathbb{C}_-^o$ then for given $Q > 0$, there exists a unique positive definite P solving
$$A^TP + PA = -Q$$
(Hint: try $P = \int_0^\infty e^{A^Tt}Qe^{At}\,dt$)
Problem 2: Asymptotic and exponential stability.
True or False: If a linear time-varying system is asymptotically stable, it is also exponentially stable. If true, prove; if false, give a counterexample.
Problem 3: State observation problem.
Consider the linear time varying system:
$$\dot x(t) = A(t)x(t)$$
$$y(t) = C(t)x(t)$$
This system is not necessarily observable. The initial condition at time 0 is $x_0$.
(a) Suppose the output y(t) is observed over the interval [0, T]. Under what conditions can the initial state $x_0$ be determined? How would you determine it?
(b) Now suppose the output is subject to some error or measurement noise. Determine the best estimate of $x_0$ given $y(\cdot)$ and the system model.
(c) Consider all initial conditions $x_0$ such that $\|x_0\| = 1$. Defining the energy in the output signal as $\langle y(t), y(t)\rangle$, is it possible for the energy of the output signal to be zero?
Problem 4: State vs. Output Feedback.
Consider a dynamical system described by:
$$\dot x = Ax + Bu \tag{1}$$
$$y = Cx \tag{2}$$
where
$$A = \begin{bmatrix}0 & 1\\ 7 & 4\end{bmatrix}, \qquad B = \begin{bmatrix}1\\ 2\end{bmatrix}, \qquad C = [1\ 3] \tag{3}$$
For each of cases (a) and (b) below, derive a state space representation of the resulting closed loop system, and determine the characteristic equation of the resulting closed loop A matrix (called the closed loop characteristic equation): (a) $u = [f_1\ f_2]x$, and (b) $u = ky$.
Problem 5: Controllable canonical form.
Consider the linear time invariant system with state equation:
$$\begin{bmatrix}\dot x_1\\ \dot x_2\\ \dot x_3\end{bmatrix} = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -\alpha_3 & -\alpha_2 & -\alpha_1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} + \begin{bmatrix}0\\ 0\\ 1\end{bmatrix}u \tag{4}$$
Insert state feedback: the input to the overall closed loop system is v and $u = v - kx$, where k is a constant row vector. Show that given any polynomial $p(s) = \sum_{k=0}^3a_ks^{3-k}$ with $a_0 = 1$, there is a row vector k such that the closed loop system has p(s) as its characteristic equation. (This naturally extends to n dimensions, and implies that any system with a representation that can be put into the form above can be stabilized by state feedback.)
EE221A Linear System Theory
Problem Set 10
Professor C. Tomlin
Department of Electrical Engineering and Computer Sciences, UC Berkeley
Fall 2011
Issued 12/2; Due 12/9
Problem 1: Feedback control design by eigenvalue placement. Consider the dynamic system:
$$\frac{d^4\theta}{dt^4} + \alpha_1\frac{d^3\theta}{dt^3} + \alpha_2\frac{d^2\theta}{dt^2} + \alpha_3\frac{d\theta}{dt} + \alpha_4\theta = u$$
where u represents an input force and the $\alpha_i$ are real scalars. Assuming that $\frac{d^3\theta}{dt^3}$, $\frac{d^2\theta}{dt^2}$, $\frac{d\theta}{dt}$, and $\theta$ can all be measured, design a state feedback control scheme which places the closed-loop eigenvalues at $s_1 = -1$, $s_2 = -1$, $s_3 = -1 + j1$, $s_4 = -1 - j1$.
Problem 2: Controllability of Jordan Forms.
Given the Jordan Canonical Form of Problem Set 7:
$$A = \begin{bmatrix}-3 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & -3 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & -3 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -4 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & -4 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}$$
Suppose this matrix A were the dynamic matrix of a system to be controlled. What is the minimum number of inputs needed for the system to be controllable?
Problem 3: Observer design. Figure 1 shows a velocity observation system where x
1
is the velocity to be
x
1
x
2
z
1
u
input
velocity
observed variable
observer
output
observer
1
s
2+s
2 s
Figure 1: Velocity Observation System.
observed. An observer is to be constructed to track x
1
, using u and x
2
as inputs. The variable x
2
is obtained
from x
1
through a sensor having the known transfer function
2 s
2 + s
(1)
1
as shown in Figure 1.
(a) Derive a set of state-space equations for the system with state variables x₁ and x₂, input u, and output x₂.
(b) Design an observer with states z₁ and z₂ to track x₁ and x₂ respectively. Choose both observer eigenvalues to be at −4. Write out the state-space equations for the observer.
(c) Derive the combined state equation for the system plus observer. Take as state variables x₁, x₂, e₁ = x₁ − z₁, and e₂ = x₂ − z₂. Take u as the input and z₁ as the output. Is this system controllable and/or observable? Give physical reasons for any states being uncontrollable or unobservable.
(d) What is the transfer function relating u to z₁? Explain your result.
Problem 4: Observer-controller for a nonlinear system.
The simplified dynamics of a magnetically suspended steel ball are given by:

m ÿ = mg − c u²/y²

where the input u represents the current supplied to the electromagnet, y is the vertical position of the ball, which may be measured by a position sensor, g is gravitational acceleration, m is the mass of the ball, and c is a positive constant such that the force on the ball due to the electromagnet is c u²/y². Assume a normalization such that m = g = c = 1.
(a) Using the states x₁ = y and x₂ = ẏ, write down a nonlinear state space description of this system.
(b) What equilibrium control input u_e must be applied to suspend the ball at y = 1 m?
(c) Write the linearized state space equations for state and input variables representing perturbations away from the equilibrium of part (b).
(d) Is the linearized model stable? What can you conclude about the stability of the nonlinear system close to the equilibrium point x_e?
(e) Is the linearized model controllable? Observable?
(f) Design a state feedback controller for the linearized system, to place the closed-loop eigenvalues at −1, −1.
(g) Design a full-order observer, so that the state estimate error dynamics have eigenvalues at −5, −5.
(h) Now suppose that you applied this controller to the original nonlinear system; discuss how you would expect the system to behave. How would the behavior change if you had chosen controller eigenvalues at −5, −5, and observer eigenvalues at −20, −20?
Problem 5. Given a linear time-varying system R(·), show that if R(·) is completely controllable on [t₀, t₁], then R is completely controllable on any [t₀′, t₁′] with t₀′ ≤ t₀ < t₁ ≤ t₁′. Show that this is no longer true when the interval [t₀, t₁] is not a subset of [t₀′, t₁′].
EE221A Problem Set 10 Solutions - Fall 2011
Problem 1. Feedback control design by eigenvalue placement.
First write the system in state-space form, taking x = (θ, dθ/dt, d²θ/dt², d³θ/dt³) as the state:

ẋ = [0 1 0 0; 0 0 1 0; 0 0 0 1; −α₄ −α₃ −α₂ −α₁] x + [0; 0; 0; 1] u = Ax + Bu
y = x
We can check the controllability by considering

Q = [sI − A | B] = [s −1 0 0 | 0; 0 s −1 0 | 0; 0 0 s −1 | 0; α₄ α₃ α₂ s+α₁ | 1]

which clearly has rank 4 for any s ∈ C (moreover, by inspection (A, B) is in controllable canonical form), so (A, B) is completely controllable. Now, let
u = −f^T x = −[f₁ f₂ f₃ f₄] x

The closed-loop system is then

ẋ = ([0 1 0 0; 0 0 1 0; 0 0 0 1; −α₄ −α₃ −α₂ −α₁] − [0; 0; 0; 1][f₁ f₂ f₃ f₄]) x
  = [0 1 0 0; 0 0 1 0; 0 0 0 1; −(α₄+f₁) −(α₃+f₂) −(α₂+f₃) −(α₁+f₄)] x
  = A_CL x
We can compute the characteristic polynomial of the closed-loop system,

χ_{A_CL}(s) = det(sI − A_CL) = s⁴ + (α₁ + f₄)s³ + (α₂ + f₃)s² + (α₃ + f₂)s + (α₄ + f₁)

while our desired characteristic polynomial is

χ_des(s) = (s + 1)²(s + 1 + j)(s + 1 − j) = s⁴ + 4s³ + 7s² + 6s + 2

and by matching terms we conclude that

f₁ = 2 − α₄,   f₂ = 6 − α₃,   f₃ = 7 − α₂,   f₄ = 4 − α₁
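As a quick numerical sanity check of these gains (a sketch in Python; the αᵢ are not specified in the problem, so the values below are placeholders):

    import numpy as np

    # Placeholder plant coefficients alpha_1..alpha_4 (assumed for illustration).
    a1, a2, a3, a4 = 1.0, 2.0, 3.0, 4.0

    # Controllable canonical form of the 4th-order ODE.
    A = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [-a4, -a3, -a2, -a1]])
    B = np.array([[0], [0], [0], [1]])

    # Gains from the term matching: f1 = 2 - a4, f2 = 6 - a3, f3 = 7 - a2, f4 = 4 - a1.
    f = np.array([[2 - a4, 6 - a3, 7 - a2, 4 - a1]])

    print(np.linalg.eigvals(A - B @ f))  # expect -1, -1, -1 + 1j, -1 - 1j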
Problem 2. Controllability of Jordan Forms.
A minimum of two inputs is needed. Proof: the PBH test shows that no B matrix with a single column can provide complete controllability (the eigenvalue 0 has two Jordan blocks, so [0·I − A | B] loses rank whenever B has only one column); and it is easy to find a two-column B matrix that works, for example

B = [0 0; 0 0; 1 0; 0 0; 1 0; 1 0; 0 1]
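A minimal PBH check for this (A, B) (a sketch; with any single-column B the same test drops rank at λ = 0):

    import numpy as np

    A = np.array([[3, 1, 0, 0, 0, 0, 0],
                  [0, 3, 1, 0, 0, 0, 0],
                  [0, 0, 3, 0, 0, 0, 0],
                  [0, 0, 0, 4, 1, 0, 0],
                  [0, 0, 0, 0, 4, 0, 0],
                  [0, 0, 0, 0, 0, 0, 0],
                  [0, 0, 0, 0, 0, 0, 0]], dtype=float)
    B = np.array([[0, 0], [0, 0], [1, 0], [0, 0],
                  [1, 0], [1, 0], [0, 1]], dtype=float)

    # PBH: rank [lam*I - A, B] must equal 7 at every eigenvalue lam.
    for lam in (3.0, 4.0, 0.0):
        M = np.hstack([lam * np.eye(7) - A, B])
        print(lam, np.linalg.matrix_rank(M))  # 7 each time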
Problem 3. Observer design.
a) We have x₁(s) = (1/s)u(s) ⟹ s x₁(s) = u(s) ⟹ ẋ₁(t) = u(t). Also (2 + s)x₂(s) = (2 − s)x₁(s) ⟹ ẋ₂(t) = 2x₁(t) − 2x₂(t) − ẋ₁(t) = 2x₁(t) − 2x₂(t) − u(t). So the system in state-space form is

[ẋ₁; ẋ₂] = [0 0; 2 −2][x₁; x₂] + [1; −1] u
y = [0 1][x₁; x₂]
b) We want to place the eigenvalues of A − TC, where T = [t₁; t₂] and C = [0 1], so that

A − TC = [0 −t₁; 2 −2−t₂]

The characteristic polynomial of A − TC is

det(sI − (A − TC)) = det([s t₁; −2 s+2+t₂]) = s² + (2 + t₂)s + 2t₁

and we want it to equal the desired characteristic polynomial, (s + 4)² = s² + 8s + 16. Thus 2 + t₂ = 8 ⟹ t₂ = 6, and 2t₁ = 16 ⟹ t₁ = 8. The observer state-space equations are therefore

ż = [0 −8; 2 −8] z + [1; −1] u + [8; 6] y
c) The overall dynamics are described by

[ẋ; ė] = [A 0; 0 A−TC][x; e] + [B; 0] u
       = [0 0 0 0; 2 −2 0 0; 0 0 0 −8; 0 0 2 −8][x; e] + [1; −1; 0; 0] u
y = [1 0 −1 0][x; e]

where x = [x₁ x₂]^T, e = [e₁ e₂]^T, and B = [1; −1].
The overall system is neither completely controllable nor completely observable; the controllability matrix Q = [B | AB | A²B | A³B] is

Q = [1 0 0 0; −1 4 −8 16; 0 0 0 0; 0 0 0 0]

which has rank 2, and the observability matrix is

O = [C; CA; CA²; CA³] = [1 0 −1 0; 0 0 0 8; 0 0 16 −64; 0 0 −128 384]
which has rank 3. The error states are not controllable because the observer is designed precisely so that the error converges to zero regardless of the input; intuitively, one should not be able to control the state estimates separately from the states that are being estimated! The state x₂ is not observable. This is because the system is designed to ensure z₁ → x₁ independently of u, and one does not want variations in the controlled variable x₂ to affect the estimate of x₁.
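The two rank claims are easy to confirm numerically (a minimal sketch):

    import numpy as np

    # Composite (plant + error) system from part (c).
    A = np.array([[0, 0, 0, 0],
                  [2, -2, 0, 0],
                  [0, 0, 0, -8],
                  [0, 0, 2, -8]], dtype=float)
    B = np.array([[1], [-1], [0], [0]], dtype=float)
    C = np.array([[1, 0, -1, 0]], dtype=float)

    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
    obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(4)])
    print(np.linalg.matrix_rank(ctrb), np.linalg.matrix_rank(obsv))  # 2 3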
d)

C(sI − A)⁻¹B = [1 0 −1 0] [s 0 0 0; −2 s+2 0 0; 0 0 s 8; 0 0 −2 s+8]⁻¹ [1; −1; 0; 0]
             = [1 0 −1 0] [1/s 0 ∗ ∗; 2/(s(s+2)) 1/(s+2) ∗ ∗; 0 0 ∗ ∗; 0 0 ∗ ∗] [1; −1; 0; 0]
             = 1/s

where ∗ denotes terms that don't matter since they will be multiplied by zero.
The observer is essentially inverting the dynamics of the sensor, so that the transfer function from the input to the estimated velocity is identical to the transfer function from the input to the actual velocity.
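The transfer function can also be verified symbolically (a sketch using sympy):

    import sympy as sp

    s = sp.symbols('s')
    A = sp.Matrix([[0, 0, 0, 0],
                   [2, -2, 0, 0],
                   [0, 0, 0, -8],
                   [0, 0, 2, -8]])
    B = sp.Matrix([1, -1, 0, 0])
    C = sp.Matrix([[1, 0, -1, 0]])

    G = sp.simplify((C * (s * sp.eye(4) - A).inv() * B)[0, 0])
    print(G)  # 1/s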
Problem 4. Observer-controller for a nonlinear system.
a)

ẋ := d/dt [x₁; x₂] = [x₂; 1 − u²/x₁²] = f(x, u),   y = [1 0] x = Cx

b) At x₁ = 1, x₂ = 0, the input u = 1 gives ẋ = 0, so this input will keep the system in equilibrium at y = x₁ = 1 m.
c) Let A := D_x f|_{(x₀,u₀)}, B := D_u f|_{(x₀,u₀)}, where x₀ := (1, 0), u₀ := 1. Then

A = [0 1; 2 0],   B = [0; −2]

d) The eigenvalues of A are ±√2, so the equilibrium x₀ is unstable in the linearized system. The same equilibrium will consequently also be unstable in the nonlinear system.
e) The controllability and observability matrices are

[B AB] = [0 −2; −2 0] ⟹ controllable
[C; CA] = [1 0; 0 1] ⟹ observable

f) Let the feedback be u = −Fx, so the closed-loop dynamics are ẋ = (A − BF)x. By comparing with the desired characteristic polynomial (s + 1)² = s² + 2s + 1, we can determine that F = [−3/2 −1] gives the desired closed-loop eigenvalues.
g) Let the observer gain matrix be T = [t₁; t₂]. Then

det[sI − (A − TC)] = det([s+t₁ −1; t₂−2 s]) = s² + t₁s + t₂ − 2

while the desired characteristic polynomial is

(s + 5)² = s² + 10s + 25

Thus T = [10; 27] gives the desired spectrum for the observer dynamics A − TC.
h) In principle, near the equilibrium this controller/observer system will both control and observe the nonlinear system. More aggressive eigenvalue placement leads to higher gains in the controller, potentially degrading performance (especially in the presence of measurement noise, actuator saturation, signal digitization, unmodeled disturbances, etc.).
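As a sketch of the separation principle at work here, the composite linearized closed loop in (x, e) coordinates is block upper-triangular, so its spectrum is just {−1, −1} ∪ {−5, −5}:

    import numpy as np

    A = np.array([[0.0, 1.0], [2.0, 0.0]])
    B = np.array([[0.0], [-2.0]])
    C = np.array([[1.0, 0.0]])
    F = np.array([[-1.5, -1.0]])    # u = -F z, with z the state estimate
    T = np.array([[10.0], [27.0]])  # observer gain

    # With e = x - z:  x' = (A - BF) x + BF e,  e' = (A - TC) e.
    Acl = np.block([[A - B @ F, B @ F],
                    [np.zeros((2, 2)), A - T @ C]])
    print(np.sort(np.linalg.eigvals(Acl).real))  # [-5, -5, -1, -1]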
Problem 5.
a) Let (x₀′, t₀′) be the initial phase and (x₁′, t₁′) be an arbitrary final phase. Construct a control ũ(·) piecewise such that ũ(t) = 0 for t ∈ [t₀′, t₀) ∪ (t₁, t₁′]. Then we have that x(t₀) = Φ(t₀, t₀′)x₀′, and x(t₁′) = Φ(t₁′, t₁)x(t₁) ⟺ x(t₁) = Φ(t₁, t₁′)x(t₁′). But since R(·) is completely controllable on [t₀, t₁], we know there exists a control u on [t₀, t₁] that will transfer any (x₀, t₀) to any (x₁, t₁). So let ũ(t) = u(t) for t ∈ [t₀, t₁].
b) Counterexample: consider a system R(·) = (A(·), B(·), C(·), D(·)) where t₀ = t₀′ < t₁′ < t₁ and

B(t) = 0_{n×n} for t₀ ≤ t ≤ t₁′,   B(t) = I_{n×n} for t₁′ < t ≤ t₁

Then clearly R(·) is completely controllable on [t₀, t₁], but not on [t₀′, t₁′].
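A scalar instance makes the counterexample concrete (a sketch with n = 1, A = 0, t₀ = t₀′ = 0, t₁′ = 1, t₁ = 2, so Φ ≡ 1 and the controllability Gramian reduces to ∫ B(t)² dt):

    import numpy as np

    def B(t):
        return 0.0 if t <= 1.0 else 1.0  # B = 0 on [0, 1], B = 1 on (1, 2]

    def gramian(a, b, n=20001):
        ts = np.linspace(a, b, n)
        return np.trapz([B(t) ** 2 for t in ts], ts)

    print(gramian(0.0, 2.0))  # approx 1 > 0: controllable on [t0, t1]
    print(gramian(0.0, 1.0))  # 0: not controllable on [t0', t1']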
EE221A Problem Set 9 Solutions - Fall 2011
Problem 1. Lyapunov Equation.
(a) We want to show that L(P) = Q has a unique symmetric solution. So we are interested in whether L(P) := A^T P + PA is injective (for uniqueness) and surjective (a solution exists for any given symmetric Q). Thus we want to know whether L is bijective, or equivalently (since L maps R^{n×n} to itself), whether N(L) = {0}.
A sketch of the proof is as follows: we use the (ordinary and generalized) eigenvalues and eigenvectors of A and the given property that no two eigenvalues of A sum to zero, to show that Pv = 0 for each (ordinary and generalized) eigenvector v of A. Since the set of all (ordinary and generalized) eigenvectors is a basis for Cⁿ, the only P that satisfies this is P = 0, hence N(L) = {0} as desired.
Let e be an eigenvector of A with eigenvalue λ. Then

L(P) = 0 ⟹ A^T Pe + PAe = 0 ⟹ A^T (Pe) = −λ(Pe),

and since σ(A) = σ(A^T), this means that either (i) −λ is an eigenvalue of A, with e^T P a corresponding left eigenvector, or (ii) Pe = 0. But the first case is precluded by the given property on the eigenvalues of A. So we have shown that for every eigenvector e of A, Pe = 0. If A happens to be diagonable (i.e. it has a complete set of n linearly independent eigenvectors), then we are done. However, we can't assume this. Thus, consider also a generalized eigenvector v of A of degree 1 (so Av = λv + e, where e is some eigenvector of A). Then

L(P) = 0 ⟹ A^T Pv + PAv = 0 ⟹ A^T Pv = −λPv − Pe ⟹ A^T (Pv) = −λ(Pv),

where we recall that we have already shown that Pe = 0. By the same reasoning as before, we now have that Pv = 0 for all generalized eigenvectors of degree 1. One can continue this until all of the eigenvectors and generalized eigenvectors of A have been exhausted, with the result that P maps every eigenvector and generalized eigenvector of A to zero. But since the eigenvectors and generalized eigenvectors of A form a basis for Cⁿ, this implies that
L(P) = 0 ⟹ P = 0. So we have that L(P) = Q has unique solutions. Now, to show that any solution is symmetric,

Q = Q^T ⟹ A^T P + PA = P^T A + A^T P^T ⟹ L(P) = L(P^T) ⟹ P = P^T
(b) Note that σ(A) ⊂ C⁻ (the open left half plane) implies the property in part (a), so by that result we have existence of a unique, symmetric solution. Check that the hinted P is this solution:

A^T P + PA = A^T ∫₀^∞ e^{A^T t} Q e^{At} dt + ∫₀^∞ e^{A^T t} Q e^{At} dt · A
           = ∫₀^∞ (d/dt e^{At})^T Q e^{At} dt + ∫₀^∞ e^{A^T t} Q (d/dt e^{At}) dt
           = ∫₀^∞ d/dt (e^{A^T t} Q e^{At}) dt
           = e^{A^T t} Q e^{At} |_{t=0}^{∞}
           = −Q

And P is clearly positive definite because e^{At} is invertible and Q is positive definite.
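Both claims are easy to check numerically (a sketch; scipy.linalg.solve_continuous_lyapunov(a, q) solves aX + Xaᵀ = q, so passing Aᵀ and −Q yields AᵀP + PA = −Q):

    import numpy as np
    from scipy.linalg import expm, solve_continuous_lyapunov

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3)) - 5 * np.eye(3)  # shifted left so A is Hurwitz
    Q = np.eye(3)

    P = solve_continuous_lyapunov(A.T, -Q)           # A^T P + P A = -Q

    # Compare with P = int_0^inf e^{A^T t} Q e^{A t} dt (crude Riemann sum).
    ts = np.linspace(0.0, 8.0, 2001)
    dt = ts[1] - ts[0]
    P_int = sum(expm(A.T * t) @ Q @ expm(A * t) * dt for t in ts)

    print(np.max(np.abs(P - P_int)))          # small (discretization error only)
    print(np.all(np.linalg.eigvalsh(P) > 0))  # True: P > 0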
Problem 2. Asymptotic and exponential stability.
False. Counterexample: consider the system ẋ = −(1/(1+t)) x. This has solution x(t) = ((1+t₀)/(1+t)) x₀, i.e. Φ(t, t₀) = (1+t₀)/(1+t). So Φ(t, 0) → 0 as t → ∞; therefore x_e = 0 is asymptotically stable. But note that for any α > 0, |x(t)| / exp[−α(t − t₀)] → ∞ as t → ∞, so we can never satisfy the requirements of exponential stability.
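Numerically (a sketch): for every fixed α > 0 the log of the ratio |x(t)| / e^{−α(t−t₀)} eventually grows without bound, which is exactly the failure of exponential stability.

    import numpy as np

    t0, x0 = 0.0, 1.0
    t = np.array([1e1, 1e2, 1e3, 1e4])
    x = (1 + t0) / (1 + t) * x0   # exact solution of x' = -x / (1 + t)

    for alpha in (1.0, 0.1, 0.01):
        # log of |x(t)| / exp(-alpha (t - t0)); grows without bound for any alpha > 0
        print(alpha, np.log(np.abs(x)) + alpha * (t - t0))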
Problem 3. State observation problem.
(a) We have (L_o x₀)(t) = y(t) = C(t)Φ(t, 0)x₀. So of course it is necessary that y ∈ R(L_o); however, this should be guaranteed if y is the output of our system and there are no unmodeled dynamics or noise. Since Φ(t, 0) is invertible for all t, a sufficient condition would be the existence of some t ∈ [0, T] for which C⁻¹(t) exists. Generally, however, we don't have such a simple case. But we know that if the observability Grammian

W_o[0, T] = ∫₀^T Φ∗(τ, 0) C∗(τ) C(τ) Φ(τ, 0) dτ

is full rank, then the system is completely observable on [0, T]; in other words, we can determine x₀ exactly from the output. We could determine it as in the derivation of the continuous time Kalman filter from lecture notes 18:

x₀ = (L_o∗ L_o)⁻¹ L_o∗ y = W_o⁻¹[0, T] ∫₀^T Φ∗(τ, 0) C∗(τ) y(τ) dτ.
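A discretized sketch of this recovery for an assumed LTI example (so Φ(τ, 0) = e^{Aτ}; both integrals are approximated by Riemann sums):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-2.0, -0.1]])
    C = np.array([[1.0, 0.0]])
    x0_true = np.array([[0.7], [-0.3]])

    T, N = 5.0, 2000
    ts = np.linspace(0.0, T, N)
    dt = ts[1] - ts[0]
    Phi = [expm(A * t) for t in ts]
    y = [C @ Ph @ x0_true for Ph in Phi]              # noise-free output samples

    Wo = sum(Ph.T @ C.T @ C @ Ph * dt for Ph in Phi)  # observability Grammian
    b = sum(Ph.T @ C.T @ yk * dt for Ph, yk in zip(Phi, y))
    print(np.linalg.solve(Wo, b).ravel())             # approx [0.7, -0.3]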
(b) In this case we are not guaranteed that y ∈ R(L_o). But we can look for the least-norm approximate solution to L_o x₀ = y. Let y = y_R + y_N, where y_R ∈ R(L_o) and y_N ∈ R(L_o)^⊥ = N(L_o∗). Note then that y_R is the orthogonal projection of y onto the range of L_o; it is the element of R(L_o) that is closest to y in the L² norm sense. So we are looking for those x₀ such that L_o x₀ = y_R; these are exactly the solutions of the normal equations,

L_o∗ L_o x₀ = L_o∗ y = L_o∗ (y_R + y_N) = L_o∗ y_R

since L_o∗ y_N = 0.
Now, as we have seen, if N(L_o) = N(L_o∗ L_o) = {0}, i.e. L_o is injective, then L_o∗ L_o is invertible and we can recover a unique x₀: the initial condition that, with no noise, would produce the output closest (in the sense we have described) to the observed output. Now consider the other cases. Note that L_o cannot be surjective, since it maps into an infinite-dimensional vector space. If L_o is not injective, then at best we can define a set of possible initial conditions that would all result in the output y_R: X := {x | L_o x = y_R}. The x₀ obtained via the Moore-Penrose pseudoinverse,

x₀ = V₁ Σ_r⁻¹ U₁∗ L_o∗ y = V₁ Σ_r⁻¹ U₁∗ ∫₀^T Φ∗(τ, 0) C∗(τ) y(τ) dτ,

would be the solution of least (L²) norm (here the SVD is L_o∗ L_o =: U₁ Σ_r V₁∗).
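Continuing the sketch from part (a) with noisy measurements: discretizing L_o into a tall matrix H, np.linalg.lstsq solves the same normal equations and returns the least-norm least-squares estimate of x₀.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-2.0, -0.1]])
    C = np.array([[1.0, 0.0]])
    x0_true = np.array([0.7, -0.3])

    rng = np.random.default_rng(1)
    ts = np.linspace(0.0, 5.0, 500)
    H = np.vstack([C @ expm(A * t) for t in ts])            # discretized L_o
    y = H @ x0_true + 0.05 * rng.standard_normal(len(ts))   # noisy output

    x0_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
    print(x0_hat)  # close to [0.7, -0.3]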
(c) Yes; in the case that L_o is not injective, N(L_o) is nontrivial and there exist x₀ ∈ N(L_o) with unit norm; then for any such x₀, ⟨y, y⟩ = ⟨L_o x₀, L_o x₀⟩ = ⟨0, 0⟩ = 0.
Problem 4. State vs. Output Feedback.
(a) We have

ẋ = Ax + B(−[f₁ f₂] x) = (A − B[f₁ f₂]) x = A_cl x,

where

A_cl = [0 1; 7 −4] − [1; 2][f₁ f₂] = [−f₁ 1−f₂; 7−2f₁ −4−2f₂]

with characteristic equation

χ_{A_cl}(s) = (s + f₁)(s + 4 + 2f₂) − (1 − f₂)(7 − 2f₁)
            = s² + 4s + 2f₂s + f₁s + 4f₁ + 2f₁f₂ − 7 + 2f₁ + 7f₂ − 2f₁f₂
            = s² + (4 + 2f₂ + f₁)s + 6f₁ + 7f₂ − 7
(b) We have

ẋ = Ax + B(−ky) = Ax − kBCx = (A − kBC) x = A_cl x,

where

A_cl = [0 1; 7 −4] − k [1; 2][1 3] = [−k 1−3k; 7−2k −4−6k]

with characteristic equation

χ_{A_cl}(s) = s² + (7k + 4)s + 27k − 7
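Both characteristic polynomials are quick to verify symbolically (a sketch with sympy):

    import sympy as sp

    s, f1, f2, k = sp.symbols('s f1 f2 k')
    A = sp.Matrix([[0, 1], [7, -4]])
    B = sp.Matrix([1, 2])
    C = sp.Matrix([[1, 3]])

    Acl_state = A - B * sp.Matrix([[f1, f2]])  # u = -[f1 f2] x
    Acl_output = A - k * B * C                 # u = -k y
    print(sp.expand((s * sp.eye(2) - Acl_state).det()))
    print(sp.expand((s * sp.eye(2) - Acl_output).det()))  # s**2 + (7k+4)s + 27k - 7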
Problem 5. Controllable canonical form.
The closed-loop system is

[ẋ₁; ẋ₂; ẋ₃] = [0 1 0; 0 0 1; −α₃ −α₂ −α₁][x₁; x₂; x₃] + [0; 0; 1](v − [k₁ k₂ k₃] x)
             = [0 1 0; 0 0 1; −(α₃+k₁) −(α₂+k₂) −(α₁+k₃)][x₁; x₂; x₃] + [0; 0; 1] v
So

χ_{A_CL}(s) = s³ + (α₁ + k₃)s² + (α₂ + k₂)s + (α₃ + k₁)

The desired characteristic polynomial is

p(s) = s³ + a₁s² + a₂s + a₃

So setting k such that

k₁ = a₃ − α₃,   k₂ = a₂ − α₂,   k₃ = a₁ − α₁

gives the desired characteristic polynomial.
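A quick numerical check of this gain formula (a sketch; the αᵢ are placeholders and the target is p(s) = (s+1)(s+2)(s+3) = s³ + 6s² + 11s + 6):

    import numpy as np

    alpha1, alpha2, alpha3 = 2.0, -1.0, 0.5   # assumed plant coefficients
    A = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [-alpha3, -alpha2, -alpha1]])
    b = np.array([[0], [0], [1]])

    a1, a2, a3 = 6.0, 11.0, 6.0               # desired p(s) coefficients
    k = np.array([[a3 - alpha3, a2 - alpha2, a1 - alpha1]])

    print(np.sort(np.linalg.eigvals(A - b @ k).real))  # [-3, -2, -1]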