Professional Documents
Culture Documents
HW 3 Solutions
HW 3 Solutions
Section 3.4
4d) I wrote code, but that was not necessary. For MATLAB output, see the appendix. The free cubic
spline with six-digit rounding of the coefficients is

p(x) =
    .861995 + .175638(x + 1) + .065651(x + 1)³                          for x ∈ [−1, −.5]
    .958020 + .224876(x + .5) + .098476(x + .5)² + .028281(x + .5)³     for x ∈ [−.5, 0]
    1.09861 + .344563x + .140898x² − .093932x³                          for x ∈ [0, .5]
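The coefficients above can be reproduced with the standard tridiagonal construction for a natural (free) cubic spline; a minimal Python sketch (not the MATLAB code from the appendix), using the nodes and function values given there:

```python
# Natural cubic spline S_j(x) = a_j + b_j(x-x_j) + c_j(x-x_j)^2 + d_j(x-x_j)^3,
# with the natural end conditions c_0 = c_n = 0.
def natural_cubic_spline(x, a):
    n = len(x) - 1
    h = [x[i+1] - x[i] for i in range(n)]
    alpha = [0.0] * (n + 1)
    for i in range(1, n):
        alpha[i] = 3*(a[i+1] - a[i])/h[i] - 3*(a[i] - a[i-1])/h[i-1]
    # Crout factorization of the tridiagonal system
    l = [1.0] * (n + 1)
    mu = [0.0] * (n + 1)
    z = [0.0] * (n + 1)
    for i in range(1, n):
        l[i] = 2*(x[i+1] - x[i-1]) - h[i-1]*mu[i-1]
        mu[i] = h[i]/l[i]
        z[i] = (alpha[i] - h[i-1]*z[i-1])/l[i]
    c = [0.0] * (n + 1)   # c_n = 0 (natural boundary)
    b = [0.0] * n
    d = [0.0] * n
    for j in range(n-1, -1, -1):
        c[j] = z[j] - mu[j]*c[j+1]
        b[j] = (a[j+1] - a[j])/h[j] - h[j]*(c[j+1] + 2*c[j])/3
        d[j] = (c[j+1] - c[j])/(3*h[j])
    return b, c, d

# Nodes and values from the appendix
x = [-1.0, -0.5, 0.0, 0.5]
a = [0.86199480, 0.95802009, 1.0986123, 1.2943767]
b, c, d = natural_cubic_spline(x, a)
```

The returned b, c, d match the six-digit coefficients displayed above.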
8d) I wrote code, but that was not necessary. For MATLAB output, see the appendix. The clamped
cubic spline with six-digit rounding of the coefficients is

p(x) =
    .861995 + .155362(x + 1) + .065375(x + 1)² + .016003(x + 1)³        for x ∈ [−1, −.5]
    .958020 + .232740(x + .5) + .089379(x + .5)² + .015020(x + .5)³     for x ∈ [−.5, 0]
    1.09861 + .333384x + .111910x² + .008758x³                          for x ∈ [0, .5]
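The clamped case differs from the free case only in its boundary equations. A Python sketch, under the assumption that the end slopes are f′(−1) ≈ .155362 and f′(.5) ≈ .451863 (read off from the spline's own b₀ and from differentiating the last piece, since the clamped conditions are not restated here):

```python
# Clamped cubic spline: prescribed end slopes fp0 = S'(x_0), fpn = S'(x_n).
def clamped_cubic_spline(x, a, fp0, fpn):
    n = len(x) - 1
    h = [x[i+1] - x[i] for i in range(n)]
    alpha = [0.0] * (n + 1)
    alpha[0] = 3*(a[1] - a[0])/h[0] - 3*fp0
    alpha[n] = 3*fpn - 3*(a[n] - a[n-1])/h[n-1]
    for i in range(1, n):
        alpha[i] = 3*(a[i+1] - a[i])/h[i] - 3*(a[i] - a[i-1])/h[i-1]
    # Crout factorization; first and last rows come from the clamped conditions
    l = [2*h[0]] + [0.0]*n
    mu = [0.5] + [0.0]*n
    z = [alpha[0]/l[0]] + [0.0]*n
    for i in range(1, n):
        l[i] = 2*(x[i+1] - x[i-1]) - h[i-1]*mu[i-1]
        mu[i] = h[i]/l[i]
        z[i] = (alpha[i] - h[i-1]*z[i-1])/l[i]
    l[n] = h[n-1]*(2 - mu[n-1])
    z[n] = (alpha[n] - h[n-1]*z[n-1])/l[n]
    c = [0.0]*(n + 1); b = [0.0]*n; d = [0.0]*n
    c[n] = z[n]
    for j in range(n-1, -1, -1):
        c[j] = z[j] - mu[j]*c[j+1]
        b[j] = (a[j+1] - a[j])/h[j] - h[j]*(c[j+1] + 2*c[j])/3
        d[j] = (c[j+1] - c[j])/(3*h[j])
    return b, c, d

x = [-1.0, -0.5, 0.0, 0.5]
a = [0.86199480, 0.95802009, 1.0986123, 1.2943767]
b, c, d = clamped_cubic_spline(x, a, 0.155362, 0.451863)
```

With those end slopes the routine reproduces the clamped coefficients above to six digits.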
Section 3.5
For 2a and 2c, let (x0 , y0 ) = (0, 0) and (x1 , y1 ) = (5, 2). Let (x̂0 , ŷ0 ) and (x̂1 , ŷ1 ) denote the
left and right guidepoints, respectively. Then α0 = x̂0 − x0 , α1 = x1 − x̂1 , β0 = ŷ0 − y0 , and
β1 = y1 − ŷ1 . MATLAB output and figures are in the appendix.
2a) (x̂0 , ŷ0 ) = (1, 1) and (x̂1 , ŷ1 ) = (6, 1). So, α0 = β0 = β1 = 1 = −α1 . Therefore, (α0 + α1 ) =
0, (α1 + 2α0 ) = 1, (β0 + β1 ) = 2, and (β1 + 2β0 ) = 3. Using Equations (3.24) and (3.25) from page
162, we have
x(t) = 3t + 12t² − 10t³  and  y(t) = 3t − 3t² + 2t³.
2c) (x̂0 , ŷ0 ) = (1, 1) and (x̂1 , ŷ1 ) = (6, 3). So, x̂0 , x̂1 , α0 , and α1 are unchanged from problem 2a
and therefore so is x(t). Now β0 = 1 = −β1 , so (β0 + β1 ) = 0, and (β1 + 2β0 ) = 1. From Equation
(3.25) from page 162, we have
y(t) = 3t + 3t² − 4t³, with x(t) = 3t + 12t² − 10t³ as before.
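Expanding the cubic Bézier form into powers of t gives exactly the coefficient rows that bezierplot reports in the appendix; a small Python sketch of that expansion (the formulas are the standard expansion of Equations (3.24)–(3.25), with the α's and β's as defined above):

```python
# Cubic Bezier curve through (x0,y0), (x1,y1) with guidepoint offsets
# alpha0 = xhat0-x0, alpha1 = x1-xhat1, beta0 = yhat0-y0, beta1 = y1-yhat1.
# Returns coefficients of t^0..t^3 for x(t) and y(t).
def bezier_coeffs(p0, p1, a0, a1, b0, b1):
    x0, y0 = p0
    x1, y1 = p1
    cx = [x0, 3*a0, 3*(x1 - x0) - 3*(a1 + 2*a0), 2*(x0 - x1) + 3*(a0 + a1)]
    cy = [y0, 3*b0, 3*(y1 - y0) - 3*(b1 + 2*b0), 2*(y0 - y1) + 3*(b0 + b1)]
    return cx, cy

# Problem 2a: guidepoints (1,1) and (6,1) -> alpha0 = 1, alpha1 = -1, beta0 = beta1 = 1
cx_a, cy_a = bezier_coeffs((0, 0), (5, 2), 1, -1, 1, 1)
# Problem 2c: guidepoints (1,1) and (6,3) -> same alphas, beta0 = 1, beta1 = -1
cx_c, cy_c = bezier_coeffs((0, 0), (5, 2), 1, -1, 1, -1)
```

The output rows match the appendix: 0 3 12 −10 for x(t) in both problems, 0 3 −3 2 for y(t) in 2a, and 0 3 3 −4 for y(t) in 2c.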
Section 4.1
6b) Use the most accurate 3-point formula to fill in the missing data from the following table:
x f (x) f 0 (x)
7.4 -68.3193
7.6 -71.6982
7.8 -75.1576
8.0 -78.6974
In the first three cases h = .2, and in the final case h = −.2. For the first and last data points, we
must use the forward 3-point formula (where h < 0 actually makes it a backward formula), and the
two middle cases will use the central 3-point formula.
f′(7.4) ≈ (1/.4)[−3(−68.3193) + 4(−71.6982) − (−75.1576)] = −16.6933
f′(7.6) ≈ (1/.4)[−75.1576 − (−68.3193)] = −17.0958
f′(7.8) ≈ (1/.4)[−78.6974 − (−71.6982)] = −17.4980
f′(8.0) ≈ (1/.4)[−71.6982 − 4(−75.1576) + 3(−78.6974)] = −17.9000.
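The four evaluations above can be checked with a few lines of Python (the 3-point formulas themselves, not the book's code):

```python
# Endpoint (forward/backward) and midpoint three-point formulas for f'(x0).
def three_point_endpoint(f0, f1, f2, h):
    # f'(x0) ~ [-3f(x0) + 4f(x0+h) - f(x0+2h)] / (2h); h < 0 gives the backward form
    return (-3*f0 + 4*f1 - f2) / (2*h)

def three_point_midpoint(fm, fp, h):
    # f'(x0) ~ [f(x0+h) - f(x0-h)] / (2h)
    return (fp - fm) / (2*h)

fvals = {7.4: -68.3193, 7.6: -71.6982, 7.8: -75.1576, 8.0: -78.6974}
d74 = three_point_endpoint(fvals[7.4], fvals[7.6], fvals[7.8], 0.2)
d76 = three_point_midpoint(fvals[7.4], fvals[7.8], 0.2)
d78 = three_point_midpoint(fvals[7.6], fvals[8.0], 0.2)
d80 = three_point_endpoint(fvals[8.0], fvals[7.8], fvals[7.6], -0.2)
```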
8b) Now we know the data was generated from f(x) = ln(x + 2) − (x + 1)², so that
f′(x) = 1/(x + 2) − 2(x + 1) and f⁽³⁾(x) = 2/(x + 2)³. Therefore, our table should actually be
x f (x) f 0 (x)
7.4 -68.3193 -16.6936
7.6 -71.6982 -17.0958
7.8 -75.1576 -17.4980
8.0 -78.6974 -17.9000
Thus, for the final three values our approximations are exact. The absolute error for f 0 (7.4) is
.0004.
Let E(x) denote the theoretical error bound for f′(x). The theoretical bound for the forward
3-point formula is |f⁽³⁾(ξ)| h²/3, and for the central 3-point formula it is half of that. For x = 7.4 and
x = 7.6, ξ ∈ [7.4, 7.8]; for x = 7.8 and x = 8.0, ξ ∈ [7.6, 8.0]. Since f⁽³⁾(x) is decreasing on both
of these intervals, its maximum is achieved at the left endpoint of the appropriate interval. Hence,
E(7.4) ≤ [2/(ξ + 2)³](.04/3) ≤ [2/(9.4)³](.04/3) = .00003211
E(7.6) ≤ [2/(ξ + 2)³](.04/6) ≤ [2/(9.4)³](.04/6) = .00001605
E(7.8) ≤ [2/(ξ + 2)³](.04/6) ≤ [2/(9.6)³](.04/6) = .00001507
E(8.0) ≤ [2/(ξ + 2)³](.04/3) ≤ [2/(9.6)³](.04/3) = .00003014
The last three approximations obviously meet this bound. The first approximation, however,
does not. This is almost certainly due to the precision of the initial data, as the original data is
less precise than the bounds.
G28) To derive an approximation technique for f⁽³⁾(x0) we expand f(x) as a Taylor series centered
at x0 and evaluate at the points x0 ± h:
f(x0 + h) = f(x0) + f′(x0)h + (1/2)f″(x0)h² + (1/6)f⁽³⁾(x0)h³ + (1/24)f⁽⁴⁾(x0)h⁴ + (1/120)f⁽⁵⁾(x0)h⁵ + ⋯
f(x0 − h) = f(x0) − f′(x0)h + (1/2)f″(x0)h² − (1/6)f⁽³⁾(x0)h³ + (1/24)f⁽⁴⁾(x0)h⁴ − (1/120)f⁽⁵⁾(x0)h⁵ + ⋯
so f(x0 + h) − f(x0 − h) = 2f′(x0)h + (1/3)f⁽³⁾(x0)h³ + (1/60)f⁽⁵⁾(x0)h⁵ + ⋯.   (1)
Replacing h with 2h in (1) gives f(x0 + 2h) − f(x0 − 2h) = 4f′(x0)h + (8/3)f⁽³⁾(x0)h³ + O(h⁵).
Subtracting twice equation (1) eliminates the f′ term and leaves 2f⁽³⁾(x0)h³ + O(h⁵), so
f⁽³⁾(x0) ≈ [f(x0 + 2h) − 2f(x0 + h) + 2f(x0 − h) − f(x0 − 2h)]/(2h³),
an O(h²) approximation.
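Combining (1) with the analogous expansion at x0 ± 2h eliminates the f′ term and gives the standard centered approximation f⁽³⁾(x0) ≈ [f(x0 + 2h) − 2f(x0 + h) + 2f(x0 − h) − f(x0 − 2h)]/(2h³). A quick numerical sanity check in Python, using f(x) = sin x, whose third derivative is −cos x:

```python
import math

def third_derivative(f, x0, h):
    # Centered O(h^2) approximation of f'''(x0) from the Taylor expansions above
    return (f(x0 + 2*h) - 2*f(x0 + h) + 2*f(x0 - h) - f(x0 - 2*h)) / (2*h**3)

approx = third_derivative(math.sin, 0.5, 0.01)
exact = -math.cos(0.5)   # (sin x)''' = -cos x
```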
29) Of course we want to minimize error. Finding the minimum of the error e(h) is basic calculus:
e′(h) = −1/h² + (h/3)M
must equal zero. Thus h³ = 3/M, or h = (3/M)^{1/3}. Since h is positive, the second derivative
e″(h) = 2/h³ + M/3 is also positive, and therefore e(h) has a minimum at h = (3/M)^{1/3}.
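A quick numerical check that the critical point is in fact a minimum, taking the error function implied by the derivative above (constant in the 1/h term equal to 1) and a hypothetical value of M:

```python
# e(h) = 1/h + M*h^2/6, so e'(h) = -1/h^2 + (h/3)*M as in the derivation above.
def e(h, M):
    return 1.0/h + M*h*h/6.0

M = 2.0                          # hypothetical bound; any M > 0 behaves the same
hstar = (3.0/M) ** (1.0/3.0)     # critical point h = (3/M)^(1/3)
```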
Section 4.2
8) Let N1(h) = (1/h)[f(x0 + h) − f(x0)]. Then
f′(x0) = N1(h) − (h/2)f″(x0) − (h²/6)f‴(x0) + O(h³),   (5)
so f′(x0) = N1(h/2) − (h/4)f″(x0) − (h²/24)f‴(x0) + O(h³).   (6)
Twice (6) minus (5) eliminates the f″ term:
f′(x0) = 2N1(h/2) − N1(h) + (h²/12)f‴(x0) + O(h³).   (7)
Setting N2(h) = 2N1(h/2) − N1(h) and replacing h with h/2 in (7),
f′(x0) = N2(h/2) + (h²/48)f‴(x0) + O(h³).   (8)
Now we eliminate the h² term between (7) and (8), solve for f′(x0), and unravel the definition of N2(h) in terms of the original
function f:
f′(x0) = (4/3)N2(h/2) − (1/3)N2(h) + O(h³)
  = (4/3)[2N1(h/4) − N1(h/2)] − (1/3)[2N1(h/2) − N1(h)] + O(h³)
  = (8/3)N1(h/4) − (6/3)N1(h/2) + (1/3)N1(h) + O(h³)
  = (8·4/(3h))[f(x0 + h/4) − f(x0)] − (6·2/(3h))[f(x0 + h/2) − f(x0)] + (1/(3h))[f(x0 + h) − f(x0)] + O(h³)
  = (1/(3h))[32f(x0 + h/4) − 12f(x0 + h/2) + f(x0 + h) − 21f(x0)] + O(h³).   (10)
Substituting 4h for h in equation (10) and arranging the terms in an aesthetically pleasing order
gives us an O(h³) approximation of the derivative of f at x0:
f′(x0) = (1/(12h))[f(x0 + 4h) − 12f(x0 + 2h) + 32f(x0 + h) − 21f(x0)] + O(h³).
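The final formula is easy to test numerically; a Python sketch using f(x) = eˣ at x0 = 0, where f′(x0) = 1:

```python
import math

def extrapolated_derivative(f, x0, h):
    # O(h^3) one-sided extrapolated difference formula derived above
    return (f(x0 + 4*h) - 12*f(x0 + 2*h) + 32*f(x0 + h) - 21*f(x0)) / (12*h)

approx = extrapolated_derivative(math.exp, 0.0, 0.01)
```

With h = .01 the error is on the order of h³, far below the O(h) error of the raw difference quotient N1(h).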
Section 4.3
2a) With the trapezoid rule, we have h = .5 and
∫_{−.25}^{.25} cos²x dx ≈ (.5/2)[cos²(−.25) + cos²(.25)] = cos²(.25)/2 ≈ .4693956.
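As a check, the single-interval trapezoid rule in Python:

```python
import math

# Trapezoid rule on a single interval [a, b]: (b - a)/2 * [f(a) + f(b)]
def trapezoid(f, a, b):
    return (b - a) / 2 * (f(a) + f(b))

approx = trapezoid(lambda x: math.cos(x)**2, -0.25, 0.25)
```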
Section 4.4
14) f(x) = x ln x, so f′(x) = ln x + 1 and f″(x) = 1/x. Let ET(h), EM(h), and ES(h) denote
the absolute error for the trapezoidal, midpoint, and Simpson's methods, respectively, when
approximating ∫_{1}^{2} f(x) dx. In this case, b − a = 1 and |f″(x)| ≤ 1 for all x ∈ [1, 2]. Part (c) is done
prior to part (b).
(a) By Theorem 4.5, ET(h) = |f″(µ)/12| h² ≤ h²/12. Since we want ET(h) ≤ 10⁻⁵, we have h² ≤ .00012,
or h ≤ .010954 (truncating rather than rounding to ensure the bound). Since h = (b − a)/n = 1/n,
n ≥ 92 will suffice.
(c) By Theorem 4.6, EM(h) = |f″(µ)/6| h² = 2|f″(µ)/12| h² = 2ET(h). Then EM(h) ≤ 10⁻⁵ ⇔ ET(h) ≤
.000005, or h² ≤ .00006. So we need h ≤ .00774 and n ≥ 130.
(b) By Theorem 4.4, ES(h) = |f⁽⁴⁾(µ)/180| h⁴. Since f⁽⁴⁾(x) = 2/x³ is decreasing on [1, 2], |f⁽⁴⁾(µ)| ≤ 2. Thus
ES(h) ≤ 10⁻⁵ ⇔ h⁴ ≤ 90(10⁻⁵) ⇔ h ≤ (.0009)^{1/4} ≈ .17320.
Then n = 1/h ≥ 1/.17320 ≈ 5.7735. Therefore, n ≥ 6 will suffice.
The approximations are (a. Trapezoid) .636301186, (c. Midpoint) .636287731, and (b. Simpson)
.636297501, which are all within 10⁻⁵ of the true value, with Simpson's absolute error about half of
the other two absolute errors.
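A Python sketch of the three composite rules with the n values found above (note: the midpoint rule here uses n subintervals with one midpoint each, a slightly different indexing than the book's n + 2 points; the exact value is 2 ln 2 − 3/4):

```python
import math

def composite_trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (f(a)/2 + sum(f(a + i*h) for i in range(1, n)) + f(b)/2)

def composite_midpoint(f, a, b, n):
    # n subintervals, one midpoint evaluation per subinterval
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5)*h) for i in range(n))

def composite_simpson(f, a, b, n):   # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i*h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i*h) for i in range(2, n, 2))
    return h * s / 3

f = lambda x: x * math.log(x)
exact = 2*math.log(2) - 0.75     # antiderivative: (x^2/2) ln x - x^2/4
trap = composite_trapezoid(f, 1, 2, 92)
mid  = composite_midpoint(f, 1, 2, 130)
simp = composite_simpson(f, 1, 2, 6)
```

All three land within the 10⁻⁵ tolerance, as the error bounds predicted.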
22) Let To be the set of all times 6m for m = 1, 3, . . . , 13 and So the sum of the respective speeds.
Let Te be the set of times 6n for n = 2, 4, . . . , 12 and Se the sum of the respective speeds. Then,
since h/3 = 2, the length of the track in feet is approximately 2(124 + 4So + 2Se + 123) = 9858.
26) Let g(t) = (1/√(2π)) e^{−t²/2}. It is moderately helpful to recognize that this is the pdf of the normal
distribution with mean 0 and standard deviation 1, so G(x) = ∫_{−∞}^{x} g(t) dt is the cumulative
distribution function. We know that the normal distribution is symmetric about the mean (here
0). Additionally, loosely speaking, the probability of being within 2 standard deviations of the
mean is approximately .95 for the normal distribution. Also, for the continuous random variable X,
P(X ≤ x) = G(x) = .5 + ∫_{0}^{x} g(t) dt. So x = 2 is a reasonable initial guess, since we translate this
problem to one asking for what value x we have P(X ≤ x) = .95.
First I want to ensure that my function f(x) = ∫_{0}^{x} g(t) dt is accurate to within 10⁻⁵. By Theorem
4.4, the error can be determined using f⁽⁴⁾(x) = (1/√(2π))(x − x³) e^{−x²/2}. By graphing the absolute
value of this function (this is fine, as we don't need to be exact, just get some bound), we see that
|f⁽⁴⁾(x)| ≤ .35 on [0, ∞). So ES(x) ≤ (.35/180)(x⁵/n⁴), since b = x and a = 0. Now we know that it is very
unlikely that x ≥ 3, so let's use x = 3. We want ES(x) ≤ 10⁻⁵, so .4725/n⁴ ≤ 10⁻⁵, or n ≥ 15. So in my
composite Simpson method, I use n = 16.
Finally, if F(x) = f(x) − .45, then F′(x) = f′(x) = g(x), and I run Newton's method with
tolerance 10⁻⁵. From above, I used an initial approximation of 2. My algorithm returns the
approximation 1.64485333 which, using Simpson's method, gives an approximation to the integral
of .449999997. The error of 3 × 10⁻⁸ is certainly within our limits.
For the Trapezoid method, I need ET(x) ≤ (.35/12)(x³/n²) ≤ 10⁻⁵. Again using x = 3, n = 281
will suffice. I obtain 1.64485832 as the approximate root of FT(x) (the same as F above, but using the
Trapezoid method). This returns a Trapezoid approximation to the integral of .450000003 with the
same absolute error.
One could change the stopping criterion in Newton's method to a function-value tolerance for
this problem.
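The whole computation (composite Simpson with n = 16 inside Newton's method) can be sketched in Python; g is the standard normal pdf, and the iteration cap is an added safeguard:

```python
import math

def composite_simpson(f, a, b, n):   # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i*h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i*h) for i in range(2, n, 2))
    return h * s / 3

g = lambda t: math.exp(-t*t/2) / math.sqrt(2*math.pi)   # standard normal pdf

def F(x):
    # F(x) = f(x) - .45, where f(x) = integral of g from 0 to x (Simpson, n = 16)
    return composite_simpson(g, 0.0, x, 16) - 0.45

x = 2.0                        # initial guess from the symmetry argument above
for _ in range(50):            # iteration cap as a safeguard
    step = F(x) / g(x)         # Newton's method: F'(x) = g(x)
    x -= step
    if abs(step) < 1e-5:
        break
```

The iteration converges in a handful of steps to roughly 1.644853, matching the result reported above.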
Section 4.6
2f) We want to use adaptive quadrature (painfully by hand, to understand exactly how the method
works) to approximate the integral ∫_{1}^{1.6} 2x/(x² − 4) dx with tolerance 10⁻³. We use the book's
notation S(a, b).
S(1, 1.6) = .1[−2(1)/3 − 4(2(1.3)/2.31) − 2(1.6)/1.44] ≈ −.739105339
S(1, 1.3) = .05[−2(1)/3 − 4(2(1.15)/2.6775) − 2(1.3)/2.31] ≈ −.261412444
S(1.3, 1.6) = .05[−2(1.3)/2.31 − 4(2(1.45)/1.8975) − 2(1.6)/1.44] ≈ −.473053517.
Now |S(1, 1.6) − (S(1, 1.3) + S(1.3, 1.6))| = .004639378 < .015 = 15(10⁻³). Therefore the theoretical
development of adaptive quadrature says we should stop with this second level of approximation,
as our approximation of the error meets our tolerance. Thus
∫_{1}^{1.6} 2x/(x² − 4) dx ≈ S(1, 1.3) + S(1.3, 1.6) = −.734465960.
(The actual value is −.733969175, so the absolute error is .000496785 and we achieve the desired
accuracy.)
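The same first adaptive step in Python:

```python
# One level of adaptive Simpson on [1, 1.6] for f(x) = 2x/(x^2 - 4)
f = lambda x: 2*x / (x*x - 4)

def S(a, b):
    # Basic Simpson's rule, the book's S(a, b)
    return (b - a) / 6 * (f(a) + 4*f((a + b)/2) + f(b))

whole = S(1, 1.6)
left, right = S(1, 1.3), S(1.3, 1.6)
err_est = abs(whole - (left + right))   # compare against 15 * tolerance
```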
Section 4.7
4f) By a double substitution, ∫_{1}^{1.6} 2x/(x² − 4) dx = ∫_{−3}^{−1.44} (1/u) du = .78 ∫_{−1}^{1} dw/(.78w − 2.22).
Then for n = 5, we have
∫_{1}^{1.6} 2x/(x² − 4) dx ≈ .78 [ .2369268850 (1/(.78(.9061798459) − 2.22) + 1/(.78(−.9061798459) − 2.22))
  + .4786286705 (1/(.78(.5384693101) − 2.22) + 1/(.78(−.5384693101) − 2.22))
  + .5688888889 (−1/2.22) ].
Evaluating this in MATLAB returns −.733969133. The actual value of the integral is −.733969175 and
our absolute error is 4.2 × 10⁻⁷.
Alternatively, without the first substitution and with f(x) = 2x/(x² − 4), we have
∫_{1}^{1.6} f(x) dx = .3 ∫_{−1}^{1} f(.3x + 1.3) dx ≈ −.733968725.
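A Python sketch of the 5-point Gauss–Legendre evaluation, using the nodes and weights listed above:

```python
# 5-point Gauss-Legendre nodes and weights on [-1, 1], as listed above
nodes = [0.9061798459, 0.5384693101, 0.0]
weights = [0.2369268850, 0.4786286705, 0.5688888889]

def gauss5(f):
    # Symmetric rule: paired +/- nodes share a weight; the zero node stands alone
    total = weights[2] * f(0.0)
    for w, t in zip(weights[:2], nodes[:2]):
        total += w * (f(t) + f(-t))
    return total

# Transformed integrand from the double substitution: .78 / (.78w - 2.22)
val = 0.78 * gauss5(lambda w: 1.0 / (0.78*w - 2.22))
```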
APPENDIX: MATLAB OUTPUT
3.4 4d)
>> x=[-1 -.5 0 .5];
>> a=[.86199480 .95802009 1.0986123 1.2943767];
>> cubicfree(x,a)
ans =
ans =
3.5 2a) and 2c) This function also returns the correct values for the polynomial
coefficients in matrix form. The first four terms are the coefficients for t^0 through t^3 in
x(t), while the last four terms are the coefficients for t^0 through t^3 in y(t). The plots of
the curves are in this same order on the next page.
>> aa=[0 0;5 2];
>> bb=[1 1];
>> cc=[6 1];
>> bezierplot(aa,bb,cc)
ans =
0 3 12 -10 0 3 -3 2
>> dd=[6,3];
>> bezierplot(aa,bb,dd)
ans =
0 3 12 -10 0 3 3 -4
[Figures: plots of the Bézier curves for problems 2a and 2c, in that order.]