Infinite Sums 1
Activity 1:
There is a very nice Numberphile video showing how this result is obtained, and I recommend that those who have not seen it watch it. Even though there is no explicit mistake in it, one thing is assumed to be true which simply isn't. Watch the video and think about what that is.
The concept at play here is that of convergence and divergence. We say an infinite sum converges if, as you add more and more terms to it, its partial sums approach a finite real number $s$ (convergence exists for complex numbers too, but for the moment we are interested only in the real numbers). We say that it diverges if it does not. As an exercise, show that the three sums already shown are all divergent. Since a divergent series cannot add up to a value, it seems that we have reached a conclusion and that we are, quote-unquote, done thinking about this. This is not the case! There is indeed a far more profound explanation as to why these series are commonly said to sum to the values they do.
We shall begin with the series $1 - 1 + 1 - 1 + \cdots$. Consider the sum
$$r^0 + r^1 + r^2 + r^3 + \cdots.$$
It is known that (and you can verify this by using polynomial long division)
$$r^0 + r^1 + r^2 + r^3 + \cdots = \frac{1}{1-r}, \qquad r \in (-1, 1).$$
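If you would like to see this formula in action, here is a small Python sketch (the function name is my own invention) that compares partial sums against $1/(1-r)$:

```python
# Minimal sketch: numerically checking the geometric series formula
# r^0 + r^1 + r^2 + ... = 1/(1-r) for r in (-1, 1).

def geometric_partial_sum(r, n_terms):
    """Sum the first n_terms terms r^0 + r^1 + ... + r^(n_terms-1)."""
    return sum(r**k for k in range(n_terms))

for r in (0.5, -0.5, 0.9):
    approx = geometric_partial_sum(r, 200)
    exact = 1 / (1 - r)
    print(f"r = {r:5}: partial sum = {approx:.10f}, 1/(1-r) = {exact:.10f}")
```

For any $r$ strictly inside $(-1, 1)$, a couple hundred terms already agree with $1/(1-r)$ to many decimal places.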
Here is the part that becomes a bit technical, since we are extending (in the spirit of analytic continuation) this formula beyond its usual domain. If we now consider an $\epsilon \in (0, 2)$, then
$$(\epsilon-1)^0 + (\epsilon-1)^1 + (\epsilon-1)^2 + (\epsilon-1)^3 + \cdots = \frac{1}{1-(\epsilon-1)} = \frac{1}{2-\epsilon}.$$
This is the series we wanted, but notice that as $\epsilon$ approaches zero the fraction $\frac{1}{2-\epsilon} \to \frac{1}{2}$. So it is true that this extension of the sum $1 - 1 + 1 - 1 + \cdots$ is equal to $1/2$. BUT it is not true that the sum itself is one half. This is a very hard concept to wrap your head around, since the distinction between where these two exist is almost nonexistent.
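To make the $\epsilon$ argument concrete, here is a short Python sketch (the function name is mine) that sums the genuinely convergent series $(\epsilon-1)^0 + (\epsilon-1)^1 + \cdots$ for several values of $\epsilon$ and watches the value approach $1/2$:

```python
# Sketch of the epsilon argument: for eps in (0, 2) the ratio eps - 1
# lies in (-1, 1), so the geometric series converges to 1/(2 - eps),
# and as eps -> 0 that value approaches 1/2.

def extended_grandi(eps, n_terms=10_000):
    r = eps - 1  # lies in (-1, 1), so the series converges
    return sum(r**k for k in range(n_terms))

for eps in (1.0, 0.5, 0.1, 0.01):
    print(f"eps = {eps:5}: sum = {extended_grandi(eps):.6f}, "
          f"1/(2-eps) = {1 / (2 - eps):.6f}")
```

The point the text makes survives the experiment: at $\epsilon = 0$ itself the series is $1 - 1 + 1 - 1 + \cdots$, which does not converge at all; only the extension takes the value $1/2$.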
(The next part is far more difficult.) The same situation occurs for the sum $1 + 2 + 3 + \cdots$, but the function used this time is a very famous one, namely the Riemann zeta function. For complex numbers $s$ with $\Re(s) > 1$, this function is defined to be
$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}.$$
But we want $\zeta(-1)$, and $\Re(-1) = -1$ lies outside this domain. To deal with this, Riemann came up with the functional equation: for $s$ with $\Re(s) < 1$,
$$\zeta(s) = 2^s \pi^{s-1} \sin\!\left(\frac{\pi s}{2}\right) \Gamma(1-s)\, \zeta(1-s).$$
This looks like a mess, but given that $\Gamma(2) = 1$ and $\zeta(2) = \frac{\pi^2}{6}$ (come talk to me if you want to know how to calculate these two) we can say that
$$\zeta(-1) = 2^{-1} \pi^{-2} \cdot (-1) \cdot \frac{\pi^2}{6} = -\frac{1}{12},$$
$$\zeta(-1) = 1 + 2 + 3 + 4 + 5 + \cdots,$$
the infamous result (notice I didn't say the two were equal, even though notationally it is accepted to write it that way). Once again, the misconception is to believe that $\zeta(-1)$ equals $1 + 2 + 3 + \cdots$ in the way in which we usually look at that sum; in that sense the two are not equal. There is a fine distinction between them. Most people who use this are conscious of the distinction and assume those who read their work are too. The question to ask now is: why do we even bother with this distinction? Why do we need this? Is this just some fun exercise, or is there actually a method to this madness? Try to think about why this is useful. (There are applications. For example, you can google “Riemann zeta function in physics” and find a few.)
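For the curious, here is a rough Python sketch of the computation above: it plugs $s = -1$ into the functional equation, using a truncated series for $\zeta(2)$ (the truncation length is an arbitrary choice of mine):

```python
import math

# Sketch: evaluating the functional equation at s = -1.
# zeta(1 - s) = zeta(2) is approximated by a truncated Dirichlet series,
# which is legitimate since Re(2) > 1.

def zeta_series(s, n_terms=100_000):
    """Truncated series for zeta(s); only valid for Re(s) > 1."""
    return sum(1 / n**s for n in range(1, n_terms + 1))

s = -1
zeta_minus_one = (2**s * math.pi**(s - 1)
                  * math.sin(math.pi * s / 2)   # sin(-pi/2) = -1
                  * math.gamma(1 - s)           # Gamma(2) = 1
                  * zeta_series(1 - s))         # ~ pi^2 / 6
print(zeta_minus_one)  # close to -1/12 ≈ -0.0833...
```

Note that the code never sums $1 + 2 + 3 + \cdots$; it only ever evaluates the extension, which is exactly the distinction the text is drawing.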
Activity 2:
Euler's Number, $e$, is defined in many ways, some of which are the following:
$$e = 1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots,$$
$$e = \lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n.$$
Even though the latter definition is in many ways more interesting (google “defining Euler's Number” to see a nice analogy, involving interest in a bank account, that explains it well), for the time being we shall focus on the first definition. Any exponential function $a^x$ can be written in base $e$, since $a^x = (e^{\log a})^x = e^{cx}$ for some $c$. This leads to the natural question: “we can define $e$ as a sum; can we do the same for $e^x$?” Try to come up with a way to define $e^x$ as an infinite sum of terms involving $x$.
It might seem a hopeless exercise to try to multiply out the first definition $x$ times to develop a sum. And it is. But, lucky for us, the second definition is nicer to work with here. Let $P_n(x)$ be the polynomial $(1 + x/n)^n$ (a small adjustment of our initial limit for $e$; as $n \to \infty$ it gives $P_n(x) \to e^x$, since what we have is, roughly, $e$ multiplied by itself $x$ times). Using the Binomial Theorem we have that
$$P_n(x) = 1 + \binom{n}{1}\frac{x}{n} + \binom{n}{2}\frac{x^2}{n^2} + \binom{n}{3}\frac{x^3}{n^3} + \cdots + \binom{n}{n}\frac{x^n}{n^n} = \sum_{k=0}^{n} \binom{n}{k}\frac{x^k}{n^k}.$$
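Here is a small Python sketch (function names are my own) comparing $P_n(x)$, the factorial series you are being nudged toward, and the built-in exponential for a concrete $x$:

```python
import math

# Sketch comparing the two routes to e^x for a concrete x:
# the limit definition P_n(x) = (1 + x/n)^n and the factorial
# series 1 + x/1! + x^2/2! + ... Both should approach e^x.

def p_n(x, n):
    return (1 + x / n)**n

def exp_series(x, n_terms=30):
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 2.0
for n in (10, 100, 10_000):
    print(f"P_{n}({x}) = {p_n(x, n):.8f}")
print(f"series    = {exp_series(x):.8f}")
print(f"math.exp  = {math.exp(x):.8f}")
```

Notice how slowly $P_n(x)$ closes in compared with the series: thirty factorial terms already beat ten thousand doublings of $n$.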
If we consider the complex numbers, does the expression $e^z$ make any sense? (Remember $z = a + bi$, where $i = \sqrt{-1}$.) Can we deal with $e^{a+bi}$? Well, the answer is yes and no. The largest issue we run into when simply evaluating this as is can be seen as follows: $e^{a+bi} = e^a \cdot e^{bi} = e^a \cdot (e^b)^i$ (for real $a, b$).
So now we need to know what it means to raise a number to the $i$th power. Can this make any sense? And, more importantly, can this extension be made so as to salvage some of the properties that $e$ has in the real numbers? The largest issue with simply defining $e^z$ symbol-by-symbol as an exponent is illustrated by the function $f(z) = e^{-1/z^2}$ (with $f(0) = 0$), which is perfectly well behaved on the real line: if $y$ is real and unequal to $0$, then
$$f(iy) = e^{-1/(iy)^2} = e^{1/y^2}.$$
The interesting fact about this expression is that it becomes large as $y$ becomes small. Thus $f$ will not be continuous at $0$ in the complex numbers. Instead, we will simply utilize the infinite sum we have for $e^x$ and allow complex numbers to be input into it. That is, we define
$$e^z = 1 + \frac{z}{1!} + \frac{z^2}{2!} + \cdots.$$
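As a quick check that this definition behaves sensibly, here is a Python sketch (the truncation at 40 terms is my own choice) that sums the series at a complex input and compares it with the standard library's complex exponential:

```python
import cmath

# Sketch: the series definition of e^z accepts complex inputs
# directly, and agrees with cmath.exp.

def exp_via_series(z, n_terms=40):
    term, total = 1 + 0j, 0 + 0j
    for k in range(n_terms):
        total += term
        term *= z / (k + 1)   # turn z^k/k! into z^(k+1)/(k+1)!
    return total

z = 1 + 1j
print(exp_via_series(z))   # should match cmath.exp(z)
print(cmath.exp(z))
```

Updating each term by multiplying the previous one by $z/(k+1)$ avoids computing powers and factorials separately, a common trick for evaluating such series.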
This is all wonderful, but we have to ask ourselves: “what does this accomplish?” Just above I mentioned the concept of continuity. This definition by a sum ensures that $e^z$ is continuous at every complex number! There is a very, very fine distinction that drastically changes the defined function $e^z$ here (just as we saw in the first activity). The process of using polynomials (namely, the Taylor series of a function) to extend a function to the complex numbers is a very common practice, but it is most beautifully shown with the exponential function. These kinds of definitions lead to ‘identities’ like $e^{i\pi} + 1 = 0$. But I would like to illustrate one very interesting sequence of numbers: the Bernoulli numbers.
We observe with our new definition that
$$\frac{e^z - 1}{z} = 1 + \frac{z}{2!} + \frac{z^2}{3!} + \cdots.$$
Then it makes sense that, for some numbers $b_n$, the reciprocal of this has the sum
$$\frac{z}{e^z - 1} = \frac{b_0}{0!} + \frac{b_1}{1!}z + \frac{b_2}{2!}z^2 + \cdots.$$
(This is because the numbers $b_n$ act as ‘arbitrary’ coefficients.) We call the numbers $b_n$ that appear here the Bernoulli numbers. We might now ask: “why do we even want to know what these numbers are?”
Just as, in the real numbers, the exponential function can be used to define a large family of functions, in the complex numbers the exponential function can be used to define many more. Some examples are the trigonometric functions, many infinite sums, and any exponential function (this last is obvious, but we cannot define a continuous logarithm on the complex numbers, so it matters even more now). The Bernoulli numbers appear in many of these definitions, so knowing their values becomes important (most notably, they appear in the Euler-Maclaurin summation formula).
Now that we have a brief idea of the ways in which all of these infinite sums and Bernoulli numbers can be applied, it seems like a good idea to try to find some of the Bernoulli numbers. Since the constant term of the series being inverted is $1$, the constant term $b_0/0!$ of the reciprocal must be $1$ as well, so $b_0 = 1$. Now a nice trick can be used (this part is a bit more technical). Notice that
$$z = \left(\sum_{k=0}^{\infty} \frac{b_k}{k!} z^k\right)\left(z + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots\right).$$
Now, if $n > 1$, then the coefficient of $z^n$ on the right side must be $0$. So we have that
$$\sum_{i=0}^{n-1} \binom{n}{i} b_i = 0.$$
The beauty of this result is that since we have the first Bernoulli number we
can (with enough patience or a really good computer) compute any Bernoulli
number we want.
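Here is a short Python sketch of exactly that computation, solving the recurrence above for one new Bernoulli number at a time with exact rational arithmetic (the function name is mine):

```python
from fractions import Fraction
from math import comb

# Sketch of the recurrence: b_0 = 1 and, for each n > 1,
# sum_{i=0}^{n-1} C(n, i) b_i = 0, which we solve for b_{n-1}.

def bernoulli_numbers(count):
    b = [Fraction(1)]  # b_0 = 1
    for n in range(2, count + 1):
        # C(n, n-1) * b_{n-1} = -sum_{i=0}^{n-2} C(n, i) b_i
        s = sum(comb(n, i) * b[i] for i in range(n - 1))
        b.append(-s / comb(n, n - 1))
    return b

print(bernoulli_numbers(7))
# b_1 = -1/2, b_2 = 1/6, b_3 = 0, b_4 = -1/30, b_5 = 0, b_6 = 1/42
```

Using `Fraction` rather than floats keeps every value exact, which matters because the Bernoulli numbers grow wildly and alternate between zero and nonzero values.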
(These are just examples, I do not expect anyone to be able to figure out
how these were derived. Though if you do please talk to me since they are very
elegant solutions and I would love to discuss them. I can also provide some
insights as to how these are computed if you ask me.) Some results that we can
get using this are the following:
$$z \cot z = \sum_{n=0}^{\infty} (-1)^n \frac{2^{2n}\, b_{2n}}{(2n)!} z^{2n},$$
$$\tan z = \cot z - 2\cot 2z,$$
$$\tan z = \sum_{n=1}^{\infty} (-1)^{n-1} \frac{2^{2n}(2^{2n}-1)\, b_{2n}}{(2n)!} z^{2n-1},$$
$$\sum_{k=1}^{n} k^p = \frac{n^{p+1}}{p+1} + \frac{n^p}{2} + \sum_{k=2}^{p} \frac{b_k}{k}\binom{p}{k-1} n^{p-k+1},$$
$$\sum_{k=0}^{n} g(x+k) = \int_{x}^{x+n+1} g(t)\, dt + \sum_{k=1}^{N} \frac{b_k}{k!}\left[g^{(k-1)}(x+n+1) - g^{(k-1)}(x)\right] + S_N(x, n).$$
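As a sanity check on the fourth formula, here is a Python sketch (the setup is my own) that evaluates it for $p = 4$, $n = 10$ and compares it with the brute-force sum:

```python
from fractions import Fraction
from math import comb

# Sketch: checking the power-sum formula above for a concrete case,
# with the first few Bernoulli numbers listed explicitly.

b = {2: Fraction(1, 6), 3: Fraction(0), 4: Fraction(-1, 30)}

def power_sum_formula(n, p):
    total = Fraction(n**(p + 1), p + 1) + Fraction(n**p, 2)
    for k in range(2, p + 1):
        total += b[k] / k * comb(p, k - 1) * n**(p - k + 1)
    return total

n, p = 10, 4
print(power_sum_formula(n, p))           # prints 25333
print(sum(k**p for k in range(1, n + 1)))  # prints 25333
```

The formula delivers the answer with a fixed number of terms regardless of $n$, which is precisely why it matters for computation.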
The last formula given is very complicated (it is the Euler-Maclaurin formula), and I include these only to show how vast the applications of these numbers are. The first and third results shown are power-series versions of classic trigonometric functions; thus we can now extend those functions to the complex numbers relatively easily (dividing infinite series is very hard). The second result is a trigonometric identity (you can try to show this one for yourselves; it can be done without the Bernoulli numbers). The last two results are used for computations and for approximating values. As one quick side note, the Bernoulli numbers were discovered in trying to find the fourth of the ‘results’
shown above, and the fourth result is generalised by the fifth. We may have gone a bit crazy with all the notation and results that stem from one number, $e$, but I hope that you all gained an understanding of, and a respect for, the applications of infinite sums and why we need them.*
*Of course, there are many, many more reasons as to why we need infinite sums, but these are some of the most interesting.