Math 5310, Homework #5

Problem 1
In[4]:= A = {{0, 1, 1}, {1, 4, -3}, {1, -3, 4}};
Part (a)
In[5]:= Eigenvalues[A]
Out[5]= {7, 2, -1}
In[6]:= V = Transpose[Eigenvectors[A]]
Out[6]= {{0, 1, -2}, {-1, 1, 1}, {1, 1, 1}}
Characteristic polynomial
In[7]:= Det[A - λ IdentityMatrix[3]]
Out[7]= -λ³ + 8 λ² - 5 λ - 14
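(A quick check, not in the original notebook: factoring the characteristic polynomial recovers the eigenvalues.)
Factor[-λ^3 + 8 λ^2 - 5 λ - 14]
This should return -(λ - 7)(λ - 2)(λ + 1), whose roots 7, 2, -1 match Out[5].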
Part (b)
V is the matrix whose columns are the eigenvectors. Let Λ(x) be the diagonal matrix,
In[9]:= Λ[x_] = DiagonalMatrix[Exp[x Eigenvalues[A]]]
Out[9]= {{e^(7 x), 0, 0}, {0, e^(2 x), 0}, {0, 0, e^(-x)}}
(We've used Λ(x) instead of D(x) because D is a reserved symbol in Mathematica.)
In[10]:= y0 = {1, 0, 2};
The solution is y(x) = V Λ(x) V^(-1) y0,
In[12]:= y[x_] = FullSimplify[V . Λ[x] . Inverse[V] . y0]
Out[12]= {e^(2 x), e^(2 x) - e^(7 x), e^(2 x) + e^(7 x)}
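(A sanity check, added here, assuming the system being solved is y′ = A y: substitute the solution back into the equation and the initial condition.)
Simplify[D[y[x], x] - A.y[x]]
Simplify[y[0] - y0]
Both lines should return {0, 0, 0}.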
Problem 2
Suppose (v, λ) is an eigenpair of A, so that Av = λv. It is given that A is nonsingular, so A^(-1) exists; note also that λ ≠ 0, since a nonsingular matrix cannot have a zero eigenvalue. Multiply both sides of Av = λv by A^(-1),

A^(-1) A v = v = λ A^(-1) v,

and divide by λ,

A^(-1) v = (1/λ) v.

Therefore 1/λ is an eigenvalue of A^(-1) paired with the same eigenvector v.
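(A numerical spot-check, not in the original solution, using the matrix A from Problem 1: its eigenvalues are 7, 2, -1, so the eigenvalues of A^(-1) should be the reciprocals.)
Eigenvalues[Inverse[A]]
This should return {-1, 1/2, 1/7} (Mathematica sorts exact eigenvalues by decreasing magnitude).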
Problem 3
Counterexample:
A = {{1, 0}, {0, 2}};
Eigenvalues[A]
{2, 1}
B = {{2, 0}, {3, 0}};
Eigenvalues[B]
{2, 0}
Eigenvalues[A + B]
{3, 2}
Were the proposition true, we would find 4 and 1 among the eigenvalues of A + B. Furthermore, the proposition claims that the sum of any two eigenvalues is an eigenvalue of A + B. With two eigenvalues each for A and B, there are four sums of eigenvalues, but a 2×2 matrix A + B has at most two distinct eigenvalues, so the proposition cannot hold in general.
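(My addition: tabulate every pairwise sum of the eigenvalues to make the four candidate sums explicit.)
Outer[Plus, Eigenvalues[A], Eigenvalues[B]]
This gives {{4, 2}, {3, 1}}; the sums 4 and 1 do not appear in Eigenvalues[A + B] = {3, 2}.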
Problem 4
Part (a)
We will prove by induction. Assume that z_k is a normalized eigenvector v_i with eigenvalue λ_i. Then A z_k = λ_i z_k. Consider the k-th iteration,

z_{k+1} = A z_k / ||A z_k|| = λ_i z_k / (|λ_i| ||z_k||) = sign(λ_i) z_k.

If z_k is normalized, then z_{k+1} will be normalized also. It has been stipulated that z_0 is a normalized eigenvector, so z_1 is as well, and the general case follows by induction.
Since each iterate z_k is an eigenvector,

λ_{k+1} = (z_k^T A z_k)/(z_k^T z_k) = λ_i (z_k^T z_k)/(z_k^T z_k) = λ_i.
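(A one-step numerical illustration, not in the original: with A the Problem 1 matrix and z its normalized dominant eigenvector, one application of the iteration map returns z itself, since sign(7) = +1.)
z = {0., -1., 1.}/Sqrt[2];
A.z/Norm[A.z]
This should return {0., -0.707107, 0.707107}, i.e. z again.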
Part (b)
powerIter[A_, W_] := Block[{z = W[[1]], Az = A.W[[1]]}, Return[{Az/Sqrt[Az.Az], z.Az/(z.z)}]]
powerMethod[A_, x0_] := Block[{w0 = {x0, 0.0}, out = {}},
  Nest[(AppendTo[out, #]; powerIter[A, #]) &, w0, 15];
  Grid[out, Frame -> All]]
powerMethod[A, {1.0, 2.0, 3.0}]

{1., 2., 3.}                             0.
{0.581238, 0., 0.813733}                 1.85714
{0.187485, -0.428537, 0.883858}          3.59459
{0.0696382, -0.639033, 0.76602}          6.30273
{0.0182496, -0.687597, 0.725863}         6.93536
{0.00546914, -0.701733, 0.712419}        6.99467
{0.00152661, -0.70556, 0.708649}         6.99956
{0.000441328, -0.706668, 0.707545}       6.99996
{0.000125358, -0.706981, 0.707232}       7.
{0.0000359217, -0.707071, 0.707143}      7.
{0.0000102483, -0.707097, 0.707117}      7.
{2.93024×10^-6, -0.707104, 0.70711}      7.
{8.36904×10^-7, -0.707106, 0.707108}     7.
{2.39159×10^-7, -0.707107, 0.707107}     7.
{6.83249×10^-8, -0.707107, 0.707107}     7.
After five iterations the eigenvalue estimate is accurate to 1 %.
Part (c)
v2 = Transpose[V][[2]]
{1, 1, 1}
v3 = Transpose[V][[3]]
{-2, 1, 1}
z0 = 2.0 v2 + v3
{0., 3., 3.}
powerMethod[A, z0]

{0., 3., 3.}                        0.
{0.816497, 0.408248, 0.408248}      1.
{0.426401, 0.639602, 0.639602}      1.66667
{0.646997, 0.539164, 0.539164}      1.90909
{0.540738, 0.594812, 0.594812}      1.97674
{0.595247, 0.56819, 0.56819}        1.99415
{0.568294, 0.581825, 0.581825}      1.99854
{0.581852, 0.575086, 0.575086}      1.99963
{0.575093, 0.578476, 0.578476}      1.99991
{0.578477, 0.576786, 0.576786}      1.99998
{0.576786, 0.577632, 0.577632}      1.99999
{0.577632, 0.577209, 0.577209}      2.
{0.577209, 0.577421, 0.577421}      2.
{0.577421, 0.577315, 0.577315}      2.
{0.577315, 0.577368, 0.577368}      2.
With this initial guess the power method does not converge to the dominant eigenpair. The dominant eigenvector is orthogonal to the space spanned by the two other eigenvectors, and the initial guess is a linear combination of those other eigenvectors. The next iterate is A times that linear combination, which is again a linear combination of the same two eigenvectors. Every subsequent iterate is likewise a linear combination of the non-dominant eigenvectors, and therefore perpendicular to the dominant eigenvector.
It's extremely unlikely for an initial guess to be exactly orthogonal to the dominant eigenvector (if you know measure theory, you'll recognize that any proper subspace has measure zero). With an initial guess chosen by a good pseudorandom number generator (rather than a "random-looking" guess such as (1, 2, 3)), this type of failure will almost never happen. If the application is one where failure would have serious consequences (a cost in lives or large sums of money), you can always check by repeating the power method with a different initial guess.
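(The orthogonality claim is easy to verify numerically; this check is my addition. The initial guess has zero component along the dominant eigenvector (0, -1, 1):)
z0.{0, -1, 1}
This returns 0.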
Part (d)
To find the smallest eigenvalue, apply the power method to A^(-1). If λ is the smallest-magnitude eigenvalue of A, then 1/λ is the largest-magnitude eigenvalue of A^(-1), so the iteration will (almost always) converge to the reciprocal of the smallest eigenvalue.
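(A sketch of this, my addition, reusing powerMethod from Part (b); for large systems one would solve A z = w at each step rather than form the inverse explicitly.)
powerMethod[Inverse[A], {1.0, 2.0, 3.0}]
The iteration should settle on -1, the dominant eigenvalue of A^(-1), which is the reciprocal of the smallest-magnitude eigenvalue -1 of A.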
Problem 5
f[x_] = Piecewise[{{Sin[2 x]^2, Abs[x] <= Pi/2}}]
sin²(2 x) for |x| ≤ π/2, and 0 otherwise
Part (a)
We need to look at continuity at the points x = ±π/2 where the function switches between the cases f(x) = sin²(2 x) and f(x) = 0. At these points sin(2 x) = 0, so we have C⁰ continuity.
To check C¹ continuity, look at f′:
f'[x]
4 cos(2 x) sin(2 x) for |x| ≤ π/2, and 0 otherwise
This is also zero at x = ±π/2 in both cases, so we have C¹ continuity. Next, look at f″,
f''[x]
8 cos²(2 x) - 8 sin²(2 x) for |x| ≤ π/2, and 0 otherwise
At x = ±π/2 this is not continuous: f″(x) = 0 for |x| > π/2, but f″(±π/2) = 8. So f ∉ C².
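(A direct check, my addition: compare the value of the interior formula for f″ at x = π/2 with the exterior value 0.)
{8 Cos[2 x]^2 - 8 Sin[2 x]^2 /. x -> Pi/2, 0}
This returns {8, 0}; the jump of 8 confirms f ∉ C².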
Part (b)
The basis is orthogonal, so the Gram matrix is diagonal and we can compute the coefficients easily.
fn[n_] := Integrate[f[x] Cos[n x], {x, -Pi, Pi}]/Integrate[Cos[n x]^2, {x, -Pi, Pi}]
Look at a few coefficients
Table[fn[n], {n, 0, 4}]
{1/4, 16/(15 π), 0, -16/(21 π), -1/4}
Write a function to sum terms through frequency M:
fSum[M_, x_] := Sum[fn[n] Cos[n x], {n, 0, M}]
Form sums for M = 2, 4, 8, 16, 32
f2[x_] = fSum[2, x]
16 cos(x)/(15 π) + 1/4
f4[x_] = fSum[4, x]
16 cos(x)/(15 π) - 16 cos(3 x)/(21 π) - (1/4) cos(4 x) + 1/4
f8[x_] = fSum[8, x];
f16[x_] = fSum[16, x];
f32[x_] = fSum[32, x];
Plot the approximations
Plot[{f[x], f2[x], f4[x], f8[x], f16[x], f32[x]}, {x, -Pi, Pi}]
[Plot of f and the approximations f2, f4, f8, f16, f32 over -π ≤ x ≤ π]
Plot the errors
Plot[{f2[x] - f[x], f4[x] - f[x], f8[x] - f[x], f16[x] - f[x], f32[x] - f[x]},
 {x, -Pi, Pi}, PlotRange -> All]
[Plot of the approximation errors over -π ≤ x ≤ π]
Because the errors change so much as M increases, the smaller errors are hard to see on the plot. To help see the errors, use a semilog plot. The peak error decreases by about a factor of 8 each time we double the number of terms.
Plot[Log[10, Abs[{f2[x] - f[x], f4[x] - f[x], f8[x] - f[x], f16[x] - f[x], f32[x] - f[x]}]],
 {x, -Pi, Pi}]
[Semilog plot of the absolute errors over -π ≤ x ≤ π]
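(To quantify the factor-of-8 decay numerically, one could compute the peak errors with NMaxValue; this check is my addition, and it is slow for the highly oscillatory large-M cases.)
Table[NMaxValue[{Abs[fSum[M, x] - f[x]], -Pi <= x <= Pi}, x], {M, {2, 4, 8, 16}}]
Successive entries should shrink by roughly a factor of 8.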
Part (c)
We'll use the method of undetermined coefficients. For the particular solution to

ψ_n′ + 3 ψ_n = cos(n x),

assume a solution of the form ψ_n(x) = A cos(n x) + B sin(n x).
trialSoln[A_, B_, n_, x_] = A Cos[n x] + B Sin[n x]
A cos(n x) + B sin(n x)
Plug the trial solution into the equation and compute the residual,
resid = Expand[D[trialSoln[A, B, n, x], x] + 3 trialSoln[A, B, n, x] - Cos[n x]]
3 A cos(n x) + B n cos(n x) - cos(n x) + 3 B sin(n x) - A n sin(n x)
The residual must be zero. Group terms,
Collect[resid, {Cos[n x], Sin[n x]}]
(3 A + B n - 1) cos(n x) + (3 B - A n) sin(n x)
The cosine and sine are linearly independent, so the coefficients must be zero independently. Write equations that set the coefficients to zero, and solve for A and B,
abSoln[n_] = Solve[{3 A + n B - 1 == 0, 3 B - n A == 0}, {A, B}]
{{A -> 3/(n² + 9), B -> n/(n² + 9)}}
A[n_] = A /. abSoln[n][[1]];
B[n_] = B /. abSoln[n][[1]];
Form the particular solution with frequency n,
partSolnN[n_, x_] = A[n] Cos[n x] + B[n] Sin[n x]
3 cos(n x)/(n² + 9) + n sin(n x)/(n² + 9)
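(Check, my addition: the particular solution satisfies ψ_n′ + 3 ψ_n = cos(n x) exactly.)
Simplify[D[partSolnN[n, x], x] + 3 partSolnN[n, x] - Cos[n x]]
This returns 0.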
Part (d)
Superpose the particular solutions caused by each term f_n cos(n x),
partSoln[M_, x_] := Expand[Sum[fn[n] partSolnN[n, x], {n, 0, M}]]
Look at the particular solution through M = 4,
partSoln[4, x]
8 cos(x)/(25 π) - 8 cos(3 x)/(63 π) - (3/100) cos(4 x) + 8 sin(x)/(75 π) - 8 sin(3 x)/(63 π) - (1/25) sin(4 x) + 1/12
Form a general solution by adding an arbitrary member of the kernel. The basis for the kernel is the single function e^(-3 x). Add any constant multiple S e^(-3 x) of this function to form the general solution. (The symbol C is reserved in Mathematica, so I use S instead.)
y(x) = S e^(-3 x) + y_p(x)
With an initial value y(0) = 1 we can solve for S: y(0) = S + y_p(0).
S[M_] := 1 - partSoln[M, 0]
For example,
S[2]
11/12 - 8/(25 π)
Now put everything together:
ySoln[M_, x_] := S[M] Exp[-3 x] + partSoln[M, x]
y2[x_] = ySoln[2, x]
8 cos(x)/(25 π) + 8 sin(x)/(75 π) + e^(-3 x) (11/12 - 8/(25 π)) + 1/12
y4[x_] = ySoln[4, x];
y8[x_] = ySoln[8, x];
y16[x_] = ySoln[16, x];
y32[x_] = ySoln[32, x];
Plot the solutions.
Plot[{y2[x], y4[x], y8[x], y16[x], y32[x]}, {x, 0, 4 Pi}]
[Plot of y2, y4, y8, y16, y32 over 0 ≤ x ≤ 4π]
The sequence of approximate solutions appears to be converging to a limit. To see this more clearly, look at the difference between two successive approximations.
Δ[M_, x_] := Expand[ySoln[M, x] - ySoln[2 M, x]]
d2[x_] = Δ[2, x];
d4[x_] = Δ[4, x];
d8[x_] = Δ[8, x];
d16[x_] = Δ[16, x];
Plot[Log[10, Abs[{d2[x], d4[x], d8[x], d16[x]}]], {x, 0, 4 Pi}]
[Semilog plot of |d2|, |d4|, |d8|, |d16| over 0 ≤ x ≤ 4π]
The difference between approximations gets smaller as the number of terms is increased. Intuitively, that suggests the approximations are converging to a limit. If you know about Cauchy sequences, you'll know that this intuitive idea can be made rigorous and used to prove convergence of a sequence of approximations. (If you don't know about Cauchy sequences, you'll learn about them next semester.)
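(A final consistency check, my addition: each truncated solution satisfies its truncated equation and the initial condition exactly.)
Simplify[D[ySoln[4, x], x] + 3 ySoln[4, x] - fSum[4, x]]
ySoln[4, 0]
The first line should return 0 and the second should return 1.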
Problem 6
Here's the operator:
L[y_] := y''[x] + 2 y'[x] + 10 y[x]
Form a trial solution
yp[x_] = A Cos[4 x] + B Sin[4 x]
A cos(4 x) + B sin(4 x)
Plug the trial solution into the equation L y = cos(4 x) and collect coefficients of cos(4 x) and sin(4 x).
Collect[L[yp] - Cos[4 x], {Cos[4 x], Sin[4 x]}]
(-6 A + 8 B - 1) cos(4 x) + (-8 A - 6 B) sin(4 x)
Set the coefficients to zero, and solve for A and B,
ypCoeffs = Solve[{8 B - 6 A == 1, -8 A - 6 B == 0}, {A, B}]
{{A -> -3/50, B -> 2/25}}
ypSoln[x_] = yp[x] /. ypCoeffs[[1]]
(2/25) sin(4 x) - (3/50) cos(4 x)
Form the general solution by adding a member of the kernel.
yg[x_] = ypSoln[x] + R Exp[-x] Cos[3 x] + S Exp[-x] Sin[3 x]
e^(-x) R cos(3 x) - (3/50) cos(4 x) + e^(-x) S sin(3 x) + (2/25) sin(4 x)
Use the initial conditions to solve for the remaining coefficients,
ygCoeffs = Solve[{yg[0] == 0, yg'[0] == 0}, {R, S}]
{{R -> 3/50, S -> -13/150}}
ySoln[x_] = yg[x] /. ygCoeffs[[1]]
(3/50) e^(-x) cos(3 x) - (3/50) cos(4 x) - (13/150) e^(-x) sin(3 x) + (2/25) sin(4 x)
Compare to DSolve,
soln = DSolve[{y''[x] + 2 y'[x] + 10 y[x] == Cos[4 x], y[0] == 0, y'[0] == 0}, y[x], x]
{{y(x) -> -(1/300) e^(-x) (25 e^x cos(x) cos(3 x) - 7 e^x cos(7 x) cos(3 x) - 25 e^x sin(x) cos(3 x) + e^x sin(7 x) cos(3 x) - 18 cos(3 x) - 25 e^x cos(x) sin(3 x) - e^x cos(7 x) sin(3 x) - 25 e^x sin(x) sin(3 x) + 26 sin(3 x) - 7 e^x sin(3 x) sin(7 x))}}
Simplify and expand,
Expand[FullSimplify[y[x] /. soln[[1]]]]
(3/50) e^(-x) cos(3 x) - (3/50) cos(4 x) - (13/150) e^(-x) sin(3 x) + (2/25) sin(4 x)
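(The agreement can be confirmed directly; this line is my addition.)
FullSimplify[ySoln[x] - (y[x] /. soln[[1]])]
This should return 0.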
Problem 7
Recall from a previous assignment that D_tt = {{0, 1}, {-1, 0}}. The eigenvalues are
Eigenvalues[{{0, 1}, {-1, 0}}]
{i, -i}
The Eigenvectors function returns eigenvectors as rows; it is more usual to view the eigenvectors as columns, so we take the transpose.
Transpose[Eigenvectors[{{0, 1}, {-1, 0}}]]
{{-i, i}, {1, 1}}
The columns are the eigenvectors written in the trigonometric basis. The first component is the coefficient of cos(x), the second component is the coefficient of sin(x). Writing out the eigenvectors in this way gives

ψ_1 = -i cos(x) + sin(x) = -i (cos(x) + i sin(x)) = -i e^(i x)
ψ_2 = i cos(x) + sin(x) = i (cos(x) - i sin(x)) = i e^(-i x)

The eigenvectors of the differentiation operator in F_1 are the complex exponentials! It should be clear why ±i are the eigenvalues of D_tt.
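(A direct check, my addition, that the complex exponentials are eigenfunctions of d/dx with eigenvalues ±i:)
{Simplify[D[Exp[I x], x] - I Exp[I x]], Simplify[D[Exp[-I x], x] + I Exp[-I x]]}
This returns {0, 0}.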
Problem 8
Part (a)
The matrix

D_vv = {{0, 1, 0}, {0, 0, 2}, {0, 0, 0}}

is upper triangular, so the eigenvalues are the diagonal elements. All diagonal elements are zero, so the eigenvalues are zero.
Part (b)
The eigenvectors are the solutions of the augmented system

0 1 0 | 0
0 0 2 | 0
0 0 0 | 0

The nontrivial solution is (1, 0, 0)^T.
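(Mathematica's NullSpace confirms this; the check is my addition.)
NullSpace[{{0, 1, 0}, {0, 0, 2}, {0, 0, 0}}]
This returns {{1, 0, 0}}.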
Part (c)
Because differentiating a polynomial always results in a polynomial of lower degree, there is no non-constant polynomial such that p′ = λp. When p is a constant, p′ = 0, so λ = 0.
In the Vandermonde basis, the constant polynomials are represented as multiples of (1, 0, 0)^T, which is the only nontrivial eigenvector. The eigenvalue is zero.