
DOUBLE SERIES AND PRODUCTS OF SERIES

KENT MERRYFIELD

1. Various ways to add up a doubly-indexed series:

Let (u_{jk}) be a sequence of numbers depending on the two variables j and k. I will assume that 0 \le j < \infty and 0 \le k < \infty. An ordinary sequence can be expressed as a list; to list such a doubly-indexed sequence would require an array:

u_{00}   u_{01}   u_{02}   \cdots
u_{10}   u_{11}   u_{12}   \cdots
u_{20}   u_{21}   u_{22}   \cdots                    (1)
\vdots   \vdots   \vdots   \ddots

We would like to consider series with such sequences as terms; that is, we would like to add up all of the numbers (u_{jk}). Our problem with making this notion meaningful is not that there is no way to do this; rather, there are too many ways to do this. Out of a much larger collection of possible meanings, let us pick out four to compare. What can we mean by the sum of all of the numbers (u_{jk})?

Possibility 1: the iterated sum

\sum_{j=0}^{\infty} \sum_{k=0}^{\infty} u_{jk}                    (2)

Possibility 2: the other iterated sum

\sum_{k=0}^{\infty} \sum_{j=0}^{\infty} u_{jk}                    (3)

Possibility 3: the limit of the square partial sums:

Let S_N = \sum_{j=0}^{N} \sum_{k=0}^{N} u_{jk}; interpret the series as \lim_{N \to \infty} S_N                    (4)

Possibility 4: the limit of the triangular partial sums:

Let T_N = \sum_{j=0}^{N} \sum_{k=0}^{N-j} u_{jk}; interpret the series as \lim_{N \to \infty} T_N                    (5)

You can get a sense of the meaning of the N-th triangular partial sum by doing the following: look at the array in (1), place a ruler at a 45-degree angle across the upper left corner of this array, then add up all of the numbers that you can see above and to the left of the ruler. The further you pull the ruler down and to the right, the larger the N that this partial sum represents. Pushing this image a little further, we get the following notion: for each one-step move of our ruler down and to the right, we add in one more lower-left to upper-right diagonal's worth of terms. We can look at T_N as:
T_N = \sum_{n=0}^{N} \sum_{k=0}^{n} u_{n-k,k} = u_{00} + (u_{10} + u_{01}) + (u_{20} + u_{11} + u_{02}) + (u_{30} + u_{21} + u_{12} + u_{03}) + \cdots                    (6)
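As a concrete illustration of these two kinds of partial sums, here is a minimal Python sketch (not part of the original notes; the helper names square_sum and triangular_sum and the sample term u_{jk} = 1/2^{j+k} are chosen purely for illustration):

    def square_sum(u, N):
        # S_N: add u(j, k) over the (N+1) x (N+1) upper-left block, as in (4)
        return sum(u(j, k) for j in range(N + 1) for k in range(N + 1))

    def triangular_sum(u, N):
        # T_N: add over the diagonals j + k = 0, 1, ..., N, as in (6)
        return sum(u(n - k, k) for n in range(N + 1) for k in range(n + 1))

    u = lambda j, k: 1.0 / 2 ** (j + k)               # a sample nonnegative term
    print(square_sum(u, 20), triangular_sum(u, 20))   # both close to 4.0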

So we have several different ways to add up a double series. (There are yet more ways, but let's not overburden the discussion.) The big question is this: do these methods necessarily yield the same number? I would hope that our discussions earlier this semester have prepared you to accept the answer without surprise - the answer is NO! We need an example. The following should be convincing enough. Define (u_{jk}) by the array:

 1   -1    0    0   \cdots
 0    1   -1    0   \cdots
 0    0    1   -1   \cdots                    (7)
 0    0    0    1   \cdots
\vdots  \vdots  \vdots  \vdots  \ddots

Adding up the rows first as in (2), we get
\sum_{j=0}^{\infty} \left( \sum_{k=0}^{\infty} u_{jk} \right) = \sum_{j=0}^{\infty} 0 = 0

Adding up the columns first as in (3), we get


\sum_{k=0}^{\infty} \left( \sum_{j=0}^{\infty} u_{jk} \right) = 1 + 0 + 0 + 0 + \cdots = 1
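These computations, along with the square and triangular partial sums discussed next, are easy to reproduce numerically. A short Python sketch (the encoding of the array in (7) as the function u is an illustration, not part of the original notes):

    # Array (7): 1 on the diagonal, -1 just above the diagonal, 0 elsewhere.
    u = lambda j, k: 1 if k == j else (-1 if k == j + 1 else 0)

    print([sum(u(j, k) for k in range(50)) for j in range(5)])   # row sums:    [0, 0, 0, 0, 0]
    print([sum(u(j, k) for j in range(50)) for k in range(5)])   # column sums: [1, 0, 0, 0, 0]

    S = [sum(u(j, k) for j in range(N + 1) for k in range(N + 1)) for N in range(6)]
    T = [sum(u(n - k, k) for n in range(N + 1) for k in range(n + 1)) for N in range(6)]
    print(S)   # square partial sums:     [1, 1, 1, 1, 1, 1]
    print(T)   # triangular partial sums: [1, 0, 1, 0, 1, 0]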

The sequence of square partial sums (S_N) goes as (1, 1, 1, 1, 1, 1, . . . ), which converges to 1. And finally, the sequence of triangular partial sums (T_N) goes as (1, 0, 1, 0, 1, 0, . . . ), which diverges. With four different methods, we got three different answers - and the sense that the fact that even two of them were the same is at best a lucky coincidence.

However, we also note that there is a great deal of cancellation going on here - positive and negative terms adding up to zero. It is clear that this double sum cannot possibly be absolutely convergent by any of these methods. (What do I mean by absolutely convergent? Simply that if we replaced each and every term by its absolute value, the resulting sum would converge to a number < \infty.) In 361A, we proved a theorem that a single series is absolutely convergent if and only if it is unconditionally convergent - that is, if and only if any rearrangement of that series still converges, and to the same number. All of these methods of trying to add up a double series should be seen as various rearrangements of the sum; if we are looking for a condition that guarantees that they will all give us the correct number, we should expect that absolute convergence is just the condition we need.

As a general plan, theorems that have absolute convergence as a hypothesis proceed in two stages: the first is the proof that everything works in the case of series with nonnegative terms; the second is the use of that first proof as a lemma in the proof of the general absolutely convergent case. What follows will be no exception.

Lemma 1: If u_{jk} \ge 0 for all j and k, then the square partial sums and triangular partial sums always have the same limit. This limit may be \infty: if the square partial sums tend to \infty, then so also do the triangular partial sums.

The proof depends on the following inequality, which I present without written proof. (A picture helps to explain it.) If u_{jk} \ge 0 for all j and k, then

T_N \le S_N \le T_{2N}   for all N                    (8)

Both of these sequences - (S_N) and (T_N) - are non-decreasing sequences and thus subject to the monotone sequence alternative: either they converge or they go to \infty. (T_{2N}) is just a subsequence of (T_N) and hence has the same limit. An appeal to the Squeeze Theorem completes the argument.
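The inequality (8) can also be spot-checked numerically. A sketch, using the nonnegative example term u_{jk} = 1/2^{j+k} (chosen here only for illustration):

    u = lambda j, k: 1.0 / 2 ** (j + k)
    S = lambda N: sum(u(j, k) for j in range(N + 1) for k in range(N + 1))
    T = lambda N: sum(u(n - k, k) for n in range(N + 1) for k in range(n + 1))

    for N in range(1, 8):
        # The triangle j + k <= N sits inside the square, which sits inside the triangle j + k <= 2N.
        assert T(N) <= S(N) <= T(2 * N)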

Lemma 2: If u_{jk} \ge 0 for all j and k, then the limit of the square partial sums is the same as either one of the iterated sums (2) or (3).

This time, the proof will take more work. To have some notation to work with,

let I = \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} u_{jk}   and   let L = \lim_{N \to \infty} S_N.

Our goal is to prove that I = L. The proof of this equality will be a classical analyst's proof: we very seldom prove equality directly. The way to show that I = L is to show that L \le I and that L \ge I. First note that for each j, since these are series with nonnegative terms,
\sum_{k=0}^{N} u_{jk} \le \sum_{k=0}^{\infty} u_{jk}.

Adding up these estimates for 0 \le j \le N yields


\sum_{j=0}^{N} \sum_{k=0}^{N} u_{jk} = S_N \le \sum_{j=0}^{N} \sum_{k=0}^{\infty} u_{jk} \le \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} u_{jk} = I

Since S_N \le I for all N, we must have that \lim S_N \le I, so L \le I. Naturally, we started with the easy half, but sooner or later we must face the other side. There will be an \epsilon in this argument, and a need to have some convenient convergent series with positive terms whose sum is \epsilon/2. Give us a choice of a convenient convergent series and we will usually take a geometric series; given our choice of ratio, we will usually pick 1/2. That is, we will use this fact:

\sum_{j=0}^{\infty} \frac{\epsilon}{2^{j+2}} = \frac{\epsilon}{2}                    (9)

where \epsilon is any positive number.

We are trying to prove that L \ge I. Start by assuming that I < \infty. Choose an M so large that for all m > M, we have

\sum_{j=0}^{m} \sum_{k=0}^{\infty} u_{jk} > I - \frac{\epsilon}{2}                    (10)

Next, for each j, 0 \le j \le m, choose an n_j large enough that


\sum_{k=0}^{n_j} u_{jk} > \sum_{k=0}^{\infty} u_{jk} - \frac{\epsilon}{2^{j+2}}                    (11)

Now we add up the estimates in (11) for 0 \le j \le m and use (9) and (10).
\sum_{j=0}^{m} \sum_{k=0}^{n_j} u_{jk} > \sum_{j=0}^{m} \left( \sum_{k=0}^{\infty} u_{jk} - \frac{\epsilon}{2^{j+2}} \right) = \sum_{j=0}^{m} \sum_{k=0}^{\infty} u_{jk} - \sum_{j=0}^{m} \frac{\epsilon}{2^{j+2}} > I - \frac{\epsilon}{2} - \frac{\epsilon}{2} = I - \epsilon

Finally, let N be the maximum of the finite collection of numbers {m, n_0, n_1, . . . , n_m}. Then

S_N = \sum_{j=0}^{N} \sum_{k=0}^{N} u_{jk} \ge \sum_{j=0}^{m} \sum_{k=0}^{n_j} u_{jk} > I - \epsilon                    (12)

and since S_N \le L, we have L > I - \epsilon for all \epsilon > 0. This forces L \ge I. That finishes the proof, at least in the case where both of these limits are assumed to be finite. A minor variation of this proof will show that if either one is infinite, then both are infinite.

Theorem 3: If u_{jk} \ge 0 for all j and k, and if any one of the four sums - the limit of the square partial sums, the limit of the triangular partial sums, or either of the iterated sums - converges, then all four converge to the same number. If any one of the four is infinite, then all four are infinite.

To prove this, just collect together Lemmas 1 and 2, and note that the proof of Lemma 2 would work just as well for the other iterated sum. The most interesting consequence of this theorem is the equality of the iterated sums:

\sum_{j=0}^{\infty} \sum_{k=0}^{\infty} u_{jk} = \sum_{k=0}^{\infty} \sum_{j=0}^{\infty} u_{jk}                    (13)
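As a numerical illustration of Theorem 3 (a sketch; the term u_{jk} = 1/2^{j+k} and the truncation level are chosen here for illustration, and all four interpretations approach (\sum 1/2^j)^2 = 4):

    u = lambda j, k: 1.0 / 2 ** (j + k)
    N = 25

    row_first = sum(sum(u(j, k) for k in range(N + 1)) for j in range(N + 1))
    col_first = sum(sum(u(j, k) for j in range(N + 1)) for k in range(N + 1))
    square    = sum(u(j, k) for j in range(N + 1) for k in range(N + 1))
    triangle  = sum(u(n - k, k) for n in range(N + 1) for k in range(n + 1))

    print(row_first, col_first, square, triangle)   # all close to 4.0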

The next stage is to claim that the same holds for absolutely convergent double series. We call the double series absolutely convergent if the double series with the terms |u_{jk}| converges. Which of the four possibilities do we mean when we say it converges? By Theorem 3, it doesn't matter: if any one of the four converges, they all do, and to the same sum.

Theorem 4: If the double series with terms u_{jk} converges absolutely, then both iterated sums, the limit of the square partial sums, and the limit of the triangular partial sums all equal the same number.

The proof makes direct use of Theorem 3. Let w_{jk} = u_{jk} + |u_{jk}|. Each w_{jk} \ge 0, so
\sum_{j=0}^{\infty} \sum_{k=0}^{\infty} w_{jk} = \sum_{k=0}^{\infty} \sum_{j=0}^{\infty} w_{jk} = \lim_{N \to \infty} \sum_{j=0}^{N} \sum_{k=0}^{N} w_{jk} = \lim_{N \to \infty} \sum_{n=0}^{N} \sum_{k=0}^{n} w_{n-k,k}

Starting with the leftmost item, we have:


\sum_{j=0}^{\infty} \sum_{k=0}^{\infty} w_{jk} = \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \left( u_{jk} + |u_{jk}| \right) = \sum_{j=0}^{\infty} \left( \sum_{k=0}^{\infty} u_{jk} + \sum_{k=0}^{\infty} |u_{jk}| \right) = \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} u_{jk} + \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} |u_{jk}|                    (14)

with each of these steps justified by the fact that the sum of two convergent series is the series of the termwise sums. We note that each inner sum of the iterated sum is absolutely convergent because it must add up to a number that is itself a term in a convergent series and is thus finite. Repeating very similar arguments for each of the other three items, we have

\sum_{k=0}^{\infty} \sum_{j=0}^{\infty} w_{jk} = \sum_{k=0}^{\infty} \sum_{j=0}^{\infty} u_{jk} + \sum_{k=0}^{\infty} \sum_{j=0}^{\infty} |u_{jk}|                    (15)

\lim_{N \to \infty} \sum_{j=0}^{N} \sum_{k=0}^{N} w_{jk} = \lim_{N \to \infty} \sum_{j=0}^{N} \sum_{k=0}^{N} u_{jk} + \lim_{N \to \infty} \sum_{j=0}^{N} \sum_{k=0}^{N} |u_{jk}|                    (16)

\lim_{N \to \infty} \sum_{n=0}^{N} \sum_{k=0}^{n} w_{n-k,k} = \lim_{N \to \infty} \sum_{n=0}^{N} \sum_{k=0}^{n} u_{n-k,k} + \lim_{N \to \infty} \sum_{n=0}^{N} \sum_{k=0}^{n} |u_{n-k,k}|                    (17)

Each of the right hand sides of (14), (15), (16), and (17) contains a sum of |u_{jk}|; by Theorem 3, each of those sums converges to the same finite number. If we then subtract this number from the four equal quantities, we get
\sum_{j=0}^{\infty} \sum_{k=0}^{\infty} u_{jk} = \sum_{k=0}^{\infty} \sum_{j=0}^{\infty} u_{jk} = \lim_{N \to \infty} \sum_{j=0}^{N} \sum_{k=0}^{N} u_{jk} = \lim_{N \to \infty} \sum_{n=0}^{N} \sum_{k=0}^{n} u_{n-k,k}

which finishes the proof of Theorem 4.

2. The Cauchy product of two series:

Suppose \sum_{k=0}^{\infty} a_k and \sum_{k=0}^{\infty} b_k are two series. We know that if they are both convergent, we can add them together simply by adding them term by term:

\sum_{k=0}^{\infty} a_k + \sum_{k=0}^{\infty} b_k = \sum_{k=0}^{\infty} (a_k + b_k)

What would we get if we multiplied them together? Certainly not the sum of the products of the terms - that's not the way multiplication works. Let's consider this from the perspective of finite sums: if you multiplied together two sums of ten terms each, how many terms would the product have? In general, 100 terms. If we had to figure out a way to organize this sum, we'd write these 100 terms in a 10 \times 10 array. It stands to reason that what you get when you multiply together two sums is a doubly indexed sum.
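A small Python sketch of this bookkeeping (the two ten-term lists are chosen here only for illustration):

    a = [1.0 / 2 ** k for k in range(10)]    # ten terms
    b = [1.0 / 3 ** k for k in range(10)]    # ten terms

    # The 10 x 10 array of all pairwise products a_j * b_k ...
    products = [[aj * bk for bk in b] for aj in a]

    # ... adds up to the product of the two sums (up to rounding).
    print(sum(sum(row) for row in products), sum(a) * sum(b))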

Suppose

\sum_{k=0}^{\infty} a_k = A   and   \sum_{k=0}^{\infty} b_k = B

both converge. Then

\left( \sum_{k=0}^{\infty} a_k \right) \left( \sum_{k=0}^{\infty} b_k \right) = AB
   = A \sum_{k=0}^{\infty} b_k
   = \sum_{k=0}^{\infty} A b_k
   = \sum_{k=0}^{\infty} b_k \sum_{j=0}^{\infty} a_j          (since j is as good as k as an index)
   = \sum_{k=0}^{\infty} \sum_{j=0}^{\infty} a_j b_k

This is exactly the kind of double sum that we have been considering. If both of the original series converge absolutely, then Theorem 4 applies and we can write this sum in each of the other three forms of that series and have it be equal. The form that is most interesting in what follows is the triangular partial sums. That is, we have this lemma:

Lemma 5: If \sum_{k=0}^{\infty} a_k and \sum_{k=0}^{\infty} b_k both converge absolutely, then

\left( \sum_{k=0}^{\infty} a_k \right) \left( \sum_{k=0}^{\infty} b_k \right) = \lim_{N \to \infty} \sum_{n=0}^{N} \sum_{k=0}^{n} a_{n-k} b_k = \sum_{n=0}^{\infty} \sum_{k=0}^{n} a_{n-k} b_k                    (18)
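A numerical sanity check of (18) (a sketch; the geometric terms a_k = 1/2^k and b_k = 1/3^k are chosen here for illustration, so the product of the two sums is 2 \cdot 3/2 = 3):

    a = lambda k: 1.0 / 2 ** k
    b = lambda k: 1.0 / 3 ** k
    N = 60

    cauchy = sum(a(n - k) * b(k) for n in range(N + 1) for k in range(n + 1))
    product_of_sums = sum(a(k) for k in range(N + 1)) * sum(b(k) for k in range(N + 1))
    print(cauchy, product_of_sums)   # both close to 3.0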

The double sum on the right of (18) is called the Cauchy product of the two series. The immediate application for Lemma 5 is to the product of two power series. In fact, we have the following:

Theorem 6 (The Cauchy Product of Power Series): Suppose R > 0 and the two power series

f(x) = \sum_{k=0}^{\infty} a_k x^k   and   g(x) = \sum_{k=0}^{\infty} b_k x^k

both have radius of convergence at least as large as R. Then the product f(x)g(x) is given by a power series with radius of convergence at least R, and that power series is

f(x)g(x) = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{n} a_{n-k} b_k \right) x^n                    (19)

We can rephrase equation (19) as follows: If h(x) = f(x)g(x), then h(x) = \sum_{n=0}^{\infty} c_n x^n, where the c_n's are computed as

c_n = \sum_{k=0}^{n} a_{n-k} b_k.
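In code, this recipe for producing the coefficients (c_n) from (a_n) and (b_n) takes only a few lines; a sketch working with truncated coefficient lists (the lists and the helper name are illustrative, not from the original notes):

    def product_coefficients(a, b):
        # c_n = sum of a[n - k] * b[k] over k, for n up to len(a) + len(b) - 2
        n_max = len(a) + len(b) - 2
        return [sum(a[n - k] * b[k]
                    for k in range(n + 1)
                    if k < len(b) and n - k < len(a))
                for n in range(n_max + 1)]

    print(product_coefficients([1, 1], [1, 1]))   # (1 + x)^2 = 1 + 2x + x^2  ->  [1, 2, 1]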

The name that we will give to getting a new coefficient sequence (c_n) from the two other coefficient sequences (a_n) and (b_n) is that it is the convolution of those two other sequences.

3. Two examples - the geometric series and the exponential:

Example 1: multiplying the geometric series by itself. We know - it is the geometric series, after all - that for |x| < 1, we have

\frac{1}{1-x} = \sum_{k=0}^{\infty} x^k

We now multiply this function by itself and use equation (19):

\frac{1}{(1-x)^2} = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{n} 1 \cdot 1 \right) x^n = \sum_{n=0}^{\infty} (n+1) x^n
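A quick numerical check of this identity (a sketch; the evaluation point x = 0.3 and the truncation level are chosen here for illustration):

    x = 0.3
    series = sum((n + 1) * x ** n for n in range(200))
    print(series, 1.0 / (1 - x) ** 2)   # both about 2.0408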

The same identity could have been derived in this case by differentiating the power series.

Example 2: the basic property of the exponential function. Define the function E(x) by the power series (which has an infinite radius of convergence)

E(x) = \sum_{k=0}^{\infty} \frac{x^k}{k!}.

We know that E ought to be the exponential function: E(x) = e^x. The most basic algebraic property of the exponential is the law of exponents: e^x e^y = e^{x+y}. That is, E(x)E(y) = E(x + y). But what if we have never heard of the exponential? Or, more likely, suppose we are looking for a rigorous way to define the exponential function and to prove that it has the required properties. Power series provide a direct way to prove this property, as follows.

E(x)E(y) = \left( \sum_{k=0}^{\infty} \frac{x^k}{k!} \right) \left( \sum_{k=0}^{\infty} \frac{y^k}{k!} \right)
   = \sum_{n=0}^{\infty} \sum_{k=0}^{n} \frac{x^{n-k}}{(n-k)!} \cdot \frac{y^k}{k!}          (by Lemma 5)
   = \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{k=0}^{n} \frac{n!}{(n-k)! \, k!} \, x^{n-k} y^k
   = \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k
   = \sum_{n=0}^{\infty} \frac{1}{n!} (x+y)^n = E(x+y)          (by the Binomial Theorem)
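As a numerical sanity check of this law (a sketch; the truncation level and the sample points are chosen here for illustration):

    from math import factorial

    def E(x, terms=40):
        # partial sum of the exponential series
        return sum(x ** k / factorial(k) for k in range(terms))

    x, y = 0.7, 1.3
    print(E(x) * E(y), E(x + y))   # both about 7.389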

4. Exercises:


(1) Assume that 0 r < 1. Compute the sum


j=0 k=0

rj+2k in two ways by computing both

possible iterated sums. Does the sum converge? (2) Let f (x) = [ln(1 + x)]2 . Use the series for the logarithm and Theorem 6 to compute that
n1

f (x) = [ln(1 + x)] =

(1)
n=2

n k=1

1 (n k)k

xn

Use this to compute the 5th derivative of f evaluated at 0.
