Infinite Sums


Interesting Questions to think about.

Lisgar Math Club


September 2022

Activity 1:

When we add several numbers we do it recursively. For example, when evaluating 1 + 2 + 3 we first add 1 + 2 = 3, then say 3 + 3 = 6. It is a fact that a finite sum can be rearranged without changing its value (be sure to convince yourself of this). But what happens when we consider an infinite sum? Can we use the same recursive process? How are we supposed to add 1 + 2 + 3 + · · ·? Do we add 1 + 2 first? Or do we add 184 + 2 first? The issue that quickly arises is that there will always be infinitely many terms left to add. Stop here to think about how you would add infinitely many things, and once you have a guess, or have taken a bit to think about it, move on with the exercise.
As if simply adding infinite series were not hard enough, we also have the following to consider. Let s = 1 + 1 + 1 + 1 + · · · . Then one could argue that 1 + s = 1 + (1 + 1 + 1 + · · · ) = s. So 1 + s = s, or 1 = 0.
One could then say that if an infinite sum adds to infinity it is pathological and we have to treat it differently. So consider now the sum s = 1 − 1 + 1 − 1 + · · · . What do you think a good value for it would be? Note that 1 − s = 1 − (1 − 1 + 1 − 1 + · · · ) = 1 − 1 + 1 − 1 + · · · = s, so 1 = 2s, or s = 1/2. Now adding infinitely many integers gives you a fraction. These results seem to make no sense, so these little calculations must rest on very faulty foundations. The moral of this story is that infinite sums are not finite sums: they require a separate and rigorous treatment, one that must be fitted into the existing fabric of mathematics with as little disturbance as possible. Fortunately we can deal with most infinite sums; for example, consider
\[ 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots. \]
What does this add up to? Stop here and give this a shot; it will help you get through the subsequent sections of this exercise. But there will always be some sums that are outcasts, like those shown above.
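If, after trying it by hand, you want a numerical hint, here is a quick Python sketch (my own illustration, not part of the activity) that prints the partial sums of this series; watch where they head.

```python
# Partial sums of 1 + 1/2 + 1/4 + 1/8 + ... : each term is 1/2**k.
def partial_sum(n):
    """Sum of the first n terms 1/2**k for k = 0 .. n-1."""
    return sum(1 / 2**k for k in range(n))

for n in (1, 2, 5, 10, 20):
    print(n, partial_sum(n))
```

The partial sums creep closer and closer to a single finite value, which is exactly the behaviour that will be called convergence below.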
Now we move to the infamous and outlandish claim that
\[ 1 + 2 + 3 + 4 + 5 + \cdots = -\frac{1}{12}. \]

There is a very nice Numberphile video showing how this result is derived, and I recommend that those who have not seen it watch it. Of course, even though there is no explicit mistake, one thing is assumed to be true which simply isn’t. Watch the video and try to spot what it is.
The concept at play here is that of convergence and divergence. We say an infinite sum converges if, as you add more and more terms, its partial sums approach a finite real number s (convergence exists for complex numbers too, but for the moment we are interested only in the real numbers). We say that it diverges if it does not. As an exercise, show that the first three sums shown are all divergent. Since a divergent series cannot add up to a value, it seems that we have reached a conclusion and that we are, quote-unquote, done thinking about this. This is not the case! There is indeed a far more profound explanation of why these series are commonly said to sum to the values above.
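To see the divergence concretely, here is a small Python sketch (my own illustration) printing the partial sums of the three series above: none of them settles down to a single finite value.

```python
# Partial sums of the three divergent series discussed above.
def partial_sums(terms):
    """Return the running totals of the given list of terms."""
    total, out = 0, []
    for t in terms:
        total += t
        out.append(total)
    return out

ones     = partial_sums([1] * 8)                      # 1 + 1 + 1 + ...
grandi   = partial_sums([(-1)**k for k in range(8)])  # 1 - 1 + 1 - 1 + ...
naturals = partial_sums(range(1, 9))                  # 1 + 2 + 3 + ...
print(ones)      # [1, 2, 3, 4, 5, 6, 7, 8]
print(grandi)    # [1, 0, 1, 0, 1, 0, 1, 0]
print(naturals)  # [1, 3, 6, 10, 15, 21, 28, 36]
```

The first and third grow without bound, while the second oscillates forever between 1 and 0, so none of the three converges.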
We shall begin with the series 1 − 1 + 1 − 1 + · · · . Consider the sum
\[ r^0 + r^1 + r^2 + r^3 + \cdots. \]
It is known that (and you can verify this by using polynomial long division)
\[ r^0 + r^1 + r^2 + r^3 + \cdots = \frac{1}{1-r}, \qquad r \in (-1, 1). \]
Here is the part that becomes a bit technical since we are extending (using
analytic continuation) this formula beyond its usual domain. If we now consider
an ϵ ∈ (0, 2), then
\[ (\epsilon - 1)^0 + (\epsilon - 1)^1 + (\epsilon - 1)^2 + (\epsilon - 1)^3 + \cdots = \frac{1}{1 - (\epsilon - 1)} = \frac{1}{2 - \epsilon}. \]

We can now say that as ϵ approaches 0 we have that

\[ (\epsilon - 1)^0 + (\epsilon - 1)^1 + (\epsilon - 1)^2 + (\epsilon - 1)^3 + \cdots = 1 - 1 + 1 - 1 + \cdots. \]

This is the series we wanted, but notice that as ϵ approaches zero the fraction 1/(2 − ϵ) → 1/2. So it is true that this extension of the sum 1 − 1 + 1 − 1 + · · · is equal to 1/2. BUT it is not true that the sum itself is one half. This is a very hard concept to wrap your head around, since the distinction between where these two live is almost invisible.
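You can also watch this extension at work numerically: for each ϵ in (0, 2) the geometric series genuinely converges, and its value tracks 1/(2 − ϵ), which approaches 1/2 as ϵ shrinks. A quick Python sketch (my own illustration):

```python
# Truncated geometric series at r = eps - 1, compared with 1/(2 - eps).
def geometric_partial(r, n):
    """Sum of the first n terms of r**0 + r**1 + r**2 + ..."""
    return sum(r**k for k in range(n))

for eps in (0.5, 0.1, 0.01):
    r = eps - 1                      # r lies in (-1, 1), so the series converges
    print(eps, geometric_partial(r, 5000), 1 / (2 - eps))
```

For every legal ϵ the two printed values agree; it is only the final step, ϵ = 0 itself, where the series stops converging while the fraction happily reports 1/2.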
(The next part is far more difficult.) The same situation occurs for the sum 1 + 2 + 3 + · · · , but the function used this time is a very famous one, namely the Riemann zeta function. For complex numbers s with ℜ(s) > 1, this function is defined to be
\[ \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}. \]
But we want ζ(−1), and −1 has real part less than 1, so this definition does not apply. To deal with this, Riemann came up with the relation that for s with ℜ(s) < 1,

\[ \zeta(s) = 2^s \pi^{s-1} \sin\!\left(\frac{\pi s}{2}\right) \Gamma(1 - s)\, \zeta(1 - s). \]
This looks like a mess, but given that Γ(2) = 1 and ζ(2) = π²/6 (come talk to me if you want to know how to calculate these two) we can say that
\[ \zeta(-1) = 2^{-1} \pi^{-2} \sin\!\left(-\frac{\pi}{2}\right) \Gamma(2)\, \zeta(2) = 2^{-1} \pi^{-2} \cdot (-1) \cdot 1 \cdot \frac{\pi^2}{6} = -\frac{1}{12}, \]
\[ \zeta(-1) = 1 + 2 + 3 + 4 + 5 + \cdots, \]
the infamous result (notice that this last equality is accepted notation rather than a literal statement). Once again, the misconception is to read ζ(−1) = 1 + 2 + 3 + · · · as being true in the way we usually look at that sum; there is a fine distinction between the two. Most people who use this result are conscious of the distinction and assume that their readers are too. The question to ask now is: why do we even bother with this distinction? Why do we need this? Is this just some fun exercise, or is there actually a method to this madness? Try to think about why this is useful. (There are applications; for example, you can google “Riemann zeta function in physics” and find a few.)
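If you have a computer handy, the arithmetic with the functional equation can be checked numerically at s = −1; here is a short Python sketch using only the standard library:

```python
import math

# zeta(-1) via Riemann's functional equation, using Gamma(2) = 1 and zeta(2) = pi^2/6.
s = -1
zeta_1_minus_s = math.pi**2 / 6   # zeta(1 - s) = zeta(2)
value = (2**s) * math.pi**(s - 1) * math.sin(math.pi * s / 2) \
        * math.gamma(1 - s) * zeta_1_minus_s
print(value)   # close to -1/12
```

The sine factor contributes the −1 and the powers of π cancel against ζ(2), leaving exactly −1/12 as computed above.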

Activity 2:

Euler’s Number, e, is defined in many ways, some of which are the following:
\[ e = 1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots, \]
\[ e = \lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n. \]
Even though the latter definition is in many ways more interesting (google “defining Euler’s Number” to see a nice analogy involving interest in a bank account that explains it well), for the time being we shall focus on the first definition. Any exponential function a^x can be written in base e, since a^x = (e^{log a})^x = e^{cx} for some c. This leads to the natural question: “we can define e as a sum; can we do the same for e^x?” Try to come up with a way to define e^x as an infinite sum of terms involving x.
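As a warm-up, the first definition converges remarkably quickly; here is a short Python check (my own illustration) of its partial sums against the built-in value of e:

```python
import math

# Partial sums of 1 + 1/1! + 1/2! + ... converge to e very quickly.
def e_partial(n):
    """Sum of the terms 1/k! for k = 0 .. n."""
    return sum(1 / math.factorial(k) for k in range(n + 1))

for n in (1, 2, 5, 10):
    print(n, e_partial(n))
print(math.e)   # 2.718281828...
```

Ten terms already agree with e to about eight decimal places, because the factorials in the denominators grow so fast.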
It might seem a hopeless exercise to try to multiply out the first definition x times to develop a sum. And it is. But, lucky for us, the second definition is nicer to work with in this case. Let P_n(x) be the polynomial (1 + x/n)^n (a small adjustment of our initial definition of e itself; as n → ∞ this now gives e^x, since informally we have e multiplied by itself x times). Using the Binomial Theorem we have that
\[ P_n(x) = 1 + \binom{n}{1}\frac{x}{n} + \binom{n}{2}\frac{x^2}{n^2} + \binom{n}{3}\frac{x^3}{n^3} + \cdots + \binom{n}{n}\frac{x^n}{n^n} = \sum_{k=0}^{n} \binom{n}{k}\frac{x^k}{n^k}. \]

The last part is simply notation. Now we also have that
\[ \frac{1}{n^k}\binom{n}{k} = \frac{1}{n^k} \cdot \frac{n}{k} \cdot \frac{n-1}{k-1} \cdots \frac{n-k+1}{1}. \]
Since k is fixed while n grows, the numerator n(n − 1) · · · (n − k + 1) has exactly k factors, each roughly of size n, so dividing by n^k leaves the coefficient of x^k approaching 1/k!. Hence
\[ \lim_{n \to \infty} P_n(x) = e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \cdots. \]
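A quick numerical check in Python (my own illustration) that P_n(x) and the truncated series both approach e^x, here at x = 1.5:

```python
import math

# Compare P_n(x) = (1 + x/n)**n with the truncated series sum of x**k / k!.
def P(x, n):
    return (1 + x / n)**n

def series(x, terms):
    return sum(x**k / math.factorial(k) for k in range(terms))

x = 1.5
print(P(x, 10**6))    # close to e**1.5
print(series(x, 20))  # very close to e**1.5
print(math.exp(x))
```

Notice that the series with 20 terms is far more accurate than P_n with a million-fold n, which is one practical reason to prefer the sum.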
Now that we have an ‘infinite polynomial’ (a power series) for e^x, we can ask ourselves: “why do we want this if we know that infinite sums are way harder to deal with?” Take a minute to think about why we would want to deal with an infinite sum.
There are many reasons, the first being that we know A LOT about polynomials. They are fairly simple compared to e^x; until now we did not even know how big e is. It could have been any number. It turns out that e = 2.7182818..., but even then, the number goes on forever with no known pattern (in fact, like π, it is far worse than just irrational: it is transcendental), so the question that naturally arises is: “why is it better to have an infinite sum than an infinite decimal?” The answer here is that our sum has a clear pattern, whereas the decimal places of e do not.

If we consider the complex numbers, does the expression e^z make any sense? (Remember z = a + bi, where i = √−1.) Can we deal with e^{a+bi}? Well, the answer is yes and no. The largest issue with simply evaluating this as-is can be shown as follows. We have e^{a+bi} = e^a · e^{bi} = e^a · (e^b)^i (for real a, b). So now we need to know what it means to raise a number to the i-th power. Can this make any sense? And, more importantly, can this extension be made to salvage some of the properties that the exponential has in the real numbers? The largest issue with simply defining e^z as an exponent is that, for instance, taking f(z) = e^{−1/z²}, if y is real and nonzero, then
\[ f(iy) = e^{-1/(iy)^2} = e^{1/y^2}. \]

The interesting fact about this expression is that it becomes large as y becomes small, so the function cannot be continuous at 0 in the complex numbers. Instead, we will simply take the infinite sum we have for e^x and allow complex numbers to be inputted into it. That is, we define

\[ e^z = 1 + \frac{z}{1!} + \frac{z^2}{2!} + \cdots. \]
This is all wonderful, but we have to ask ourselves: “what does this accomplish?” Just above I mentioned the concept of continuity. This sum ensures that e^z is continuous at every complex number! There is a very, very fine distinction here that drastically changes the defined function e^z (just as we saw in the first exercise). The process of using polynomials (namely, the Taylor series of a function) to extend a function to the complex numbers is a very common practice, but it is most beautifully shown with the exponential function. These kinds of definitions lead to ‘identities’ like e^{iπ} + 1 = 0. But I would like to illustrate one very interesting sequence of numbers: the Bernoulli numbers.
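Here is a quick Python sketch (my own illustration, using the built-in complex numbers) checking that the series definition really does reproduce e^{iπ} + 1 = 0:

```python
import cmath
import math

# Evaluate the series definition of e**z by accumulating terms z**k / k!.
def exp_series(z, terms=40):
    total, term = 0, 1
    for k in range(terms):
        total += term
        term *= z / (k + 1)   # next term: z**(k+1) / (k+1)!
    return total

z = 1j * math.pi
print(exp_series(z))   # close to -1 + 0j, so e**(i*pi) + 1 is (numerically) 0
print(cmath.exp(z))    # the library's own complex exponential agrees
```

Nothing in the loop cares whether z is real or complex, which is exactly the point: the series is the definition that survives the jump to the complex numbers.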
We observe with our new definition that
\[ \frac{e^z - 1}{z} = 1 + \frac{z}{2!} + \frac{z^2}{3!} + \cdots. \]
Since this power series has a nonzero constant term, its reciprocal is again a power series, so it makes sense that for some numbers b_n
\[ \frac{z}{e^z - 1} = \frac{b_0}{1} + \frac{b_1}{1!} z + \frac{b_2}{2!} z^2 + \cdots. \]
We call the numbers b_n that appear here the Bernoulli numbers. We might now ask: “why do we even want to know what these numbers are?”
Just as in the real numbers the exponential function can be used to define a large family of functions, in the complex numbers it can be used to define many more. Some examples are the trigonometric functions, many important sums, and any exponential function (the last is obvious in the reals, but we cannot define a continuous logarithm on the complex numbers, so the series definition matters even more now).
The Bernoulli numbers appear in many of these definitions, so knowing their values becomes important (most notably, they appear in the Euler–Maclaurin summation formula).
Now that we have a brief idea of the ways in which all of these infinite sums and Bernoulli numbers can be applied, it seems like a good idea to try to find some of the Bernoulli numbers. Since the constant term of the series being inverted is 1, the constant term b_0/1 of its reciprocal must also be 1, so b_0 = 1. Now a nice trick can be used (this part is a bit more technical). Multiplying both sides by e^z − 1, notice that
\[ z = \left( \sum_{k=0}^{\infty} \frac{b_k}{k!} z^k \right) \left( z + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots \right). \]
Now, if n > 1, the coefficient of z^n on the right side must be 0. Comparing coefficients (and multiplying through by n!), we have that
\[ \sum_{i=0}^{n-1} \binom{n}{i} b_i = 0. \]
The beauty of this result is that since we have the first Bernoulli number we
can (with enough patience or a really good computer) compute any Bernoulli
number we want.
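Here is a short Python sketch of exactly that computation: starting from b_0 = 1 and the recurrence above, it produces Bernoulli numbers using exact rational arithmetic so nothing is lost to rounding.

```python
from fractions import Fraction
from math import comb

# Bernoulli numbers from b_0 = 1 and sum_{i=0}^{n-1} C(n, i) * b_i = 0 for n > 1.
def bernoulli(count):
    """Return the exact Bernoulli numbers b_0 .. b_count."""
    b = [Fraction(1)]
    for n in range(2, count + 2):
        # Solve C(n, n-1) * b_{n-1} = -sum_{i=0}^{n-2} C(n, i) * b_i.
        s = sum(comb(n, i) * b[i] for i in range(n - 1))
        b.append(-s / Fraction(comb(n, n - 1)))
    return b

print([str(x) for x in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```

Notice that b_1 = −1/2 and that every odd-indexed Bernoulli number after it is 0, a pattern you can try to prove from the series itself.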
(These are just examples, I do not expect anyone to be able to figure out
how these were derived. Though if you do please talk to me since they are very
elegant solutions and I would love to discuss them. I can also provide some
insights as to how these are computed if you ask me.) Some results that we can
get using this are the following:

\[ z \cot z = \sum_{n=0}^{\infty} (-1)^n \frac{b_{2n}}{(2n)!} 2^{2n} z^{2n}, \]
\[ \tan z = \cot z - 2 \cot 2z, \]
\[ \tan z = \sum_{n=1}^{\infty} (-1)^{n-1} \frac{b_{2n}}{(2n)!} 2^{2n} (2^{2n} - 1) z^{2n-1}, \]
\[ \sum_{k=1}^{n} k^p = \frac{n^{p+1}}{p+1} + \frac{n^p}{2} + \sum_{k=2}^{p} \frac{b_k}{k} \binom{p}{k-1} n^{p-k+1}, \]
\[ \sum_{k=0}^{n} g(x + k) = \int_{x}^{x+n+1} g(t)\, dt + \sum_{k=1}^{N} \frac{b_k}{k!} \left[ g^{(k-1)}(x + n + 1) - g^{(k-1)}(x) \right] + S_N(x, n). \]
The last formula given is very complicated (it is the Euler–Maclaurin formula), and I include these only to show how vast the applications of these numbers are. The first and third results are power-series versions of classic trigonometric functions; thus we can now extend those functions to the complex numbers relatively easily (dividing infinite series directly is very hard). The second result is a trigonometric identity (you can try to show this one for yourselves; it can be done without the Bernoulli numbers). The last two results are used for computations and for finding the values of sums. As one quick side note, the Bernoulli numbers were discovered in the course of trying to find the fourth of the ‘results’ shown above, and the fourth result is generalised by the fifth. Though we went a bit crazy with all the notation and results that stem from one number, e, I hope that you have all gained an understanding of, and a respect for, the applications of infinite sums and why we need them.*

*Of course, there are many many more reasons as to why we need infinite
sums but these are some of the most interesting.
