
The trouble with real numbers

N J Wildberger
School of Mathematics UNSW
Sydney 2052 Australia
webpage: http://web.maths.unsw.edu.au/~norman/

Abstract
This paper lays out the difficulties with the modern theory of real numbers. We look at the problems with infinite decimals, Cauchy sequences and Dedekind cuts. In particular, ‘uncomputable decimal numbers’ are an embarrassment to modern analysis. They need to be replaced by an appropriate theory of computable decimals. Contrary to popular wisdom, these are both uncountable and complete.

Finite versus infinite


There are several approaches to the modern theory of real numbers. Unfortunately, none of them makes complete sense. One hundred years ago, there was vigorous discussion about the ambiguities with them and with Cantor’s theory of ‘infinite sets’. As time went by, the debate subsided but the difficulties didn’t really go away. A largely unquestioning uniformity has settled on the discipline, with most students now only dimly aware of the logical problems with ‘uncomputable numbers’, ‘non-measurable functions’, the ‘Axiom of Choice’, hierarchies of ‘cardinals and ordinals’, and various anomalies and paradoxes that supposedly arise in topology, set theory and measure theory. Some of the stumbling blocks were described in [1]. In this paper we concentrate on the problems with real numbers and arithmetic with them.
The basic division in mathematics is between the discrete and the continuous. Discrete mathematics studies locally finite collections and patterns and relies on counting, beginning with the natural numbers 1, 2, 3, · · · and then extending to the integers, including −1, −2, −3, · · · and to rational numbers, or fractions, of the form a/b. Continuous mathematics studies the ‘continuum’ and functions on it, and relies on measurement, which involves also irrational numbers like √2, √5 and π introduced by the ancient Greeks, as well as more modern numbers such as e and γ arising from integrals and infinite series. Up to a hundred years ago, the notion of the ‘continuum’ appeared intuitively straightforward, but difficult to pin down precisely. However with the advent of quantum mechanics, the concept grew increasingly murky: if space has an inherent graininess and is not infinitely divisible, and time is relative and perhaps finite in extent, then what exactly are we modelling with our notion of the ‘number line’?
While engineers and scientists today view real numbers primarily as ‘infinite decimals’, mathematicians regard them as ‘Cauchy sequences of rational numbers’, or as ‘Dedekind cuts’. Each view has different difficulties, but always there is the crucial problem of discussing ‘infinite objects’ without sufficient regard to how to specify them.
A finite sequence such as L = 1, 5, 9 may be described in quite different
ways, for example as the increasing sequence of possible last digits in an odd
integer square, or as the sequence of numbers less than 10 which are congruent
to 1 modulo 4, or as the sequence of digits occurring in the 246-th prime after
removing repetitions. But ultimately there is only one way to specify the list L
completely and unambiguously: by explicitly listing all the elements.
When we make the big jump to infinite sequences, such as

M = 3, 5, 7, · · · (1)

the situation changes dramatically. It is never possible to explicitly list ‘all’ the
elements of an infinite sequence. Instead it is the rule generating the sequence
that comes to specify it, in this case perhaps: M is the list of all odd numbers
starting with 3, or perhaps: M is the list of all odd primes. Without such a
rule, a definition like (1) is almost meaningless.
The right model to have in mind when modelling an infinite sequence is a
computer, which is churning out number after number onto a long tape. The
process never stops, as long as you keep supplying more tape, electricity, and
occasionally additional memory banks. The sequence is not to be identified by
the ‘completed output tape’, which is a figment of our imagination, but rather
by the computer program that generates it, which is concrete and completely
specifiable.
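To make this model concrete, here is a minimal sketch in Python (my own illustration, not part of any formal theory): a generator playing the role of the computer, churning out the terms of M for as long as we care to ask.

from itertools import islice

def M():
    # The program, not any 'completed tape', is what specifies the sequence:
    # it churns out the odd numbers 3, 5, 7, ... for as long as it is run.
    k = 3
    while True:
        yield k
        k += 2

# We can only ever inspect finitely many terms:
print(list(islice(M(), 10)))   # [3, 5, 7, 9, 11, 13, 15, 17, 19, 21]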
A finite set such as {2, 4, 6, 8} can also be described in many ways, but
ultimately it too is only properly specified by showing all its elements. In this
case order is not important, so that for example the elements might be scattered
over a page. Finite sets whose elements cannot be explicitly shown have not been
specified, though we might agree that they have been described. An example
might be: let S be the set of all odd perfect numbers less than 10^(100^1000). Such a
description does not deserve to be called a specification of the set S, at least not
with our current understanding of perfect numbers, which doesn’t even allow
us to determine if S is empty or not.
With sets the dichotomy between finite and ‘infinite’ is much more severe
than for sequences, because we do not allow a steady exhibition of the elements
through time. It is impossible to exhibit all of the elements of an ‘infinite set’ at
once, so the notion is an ideal one that more properly belongs to philosophy–
it can only be approximated within mathematics. The notion of a ‘completed
infinite set’ is contrary to classical thinking; since we can’t actually collect
together more than a finite number of elements as a completed totality, why
pretend that we can? It is the same reason that ‘God’ or ‘the hereafter’ are not generally recognized as proper scientific entities. Infinite sets, God and
the hereafter may very well exist in our universe, but this is a philosophical or
religious inquiry, not a mathematical or scientific one.
The idea of ‘infinity’ as an unattainable ideal that can only be approached by
an endless sequence of better and better finite approximations is both humble
and ancient, and one I would strongly advocate to those wishing to understand
mathematics more deeply. This is the position that Archimedes, Newton, Euler
and Gauss would have taken, and it is a view that ought to be seriously reconsidered.
Why is any of this important? The real numbers are where Cantor’s hierarchies of infinities begin, and where much of modern set theory rests, so this is an issue with widespread consequences, even within algebra and combinatorics. Secondly, the real numbers are the arena where calculus and analysis are developed, so difficulties there lead to weakness in the calculus curriculum, confusion with aspects of measure theory, functional analysis and other advanced subjects, and obstacles in our attempts to understand physics. In my opinion, we need to
understand mathematics in the right way before we will be able to unlock the
deepest secrets of the universe.
By reorganizing our subject to be more careful and logical, and removing dubious ‘axiomatic assumptions’ and other unnecessary philosophizing, we make it easier for young people to learn, appreciate and contribute. This also strengthens the relationship between mathematics and computing, two subjects which ought to be more closely linked. It is time to acknowledge the orthodoxy that silently frames our discipline, learn from our colleagues in physics and computer science, and begin the slow, challenging but ultimately rewarding task of restructuring mathematics properly.

Real numbers as infinite decimals


‘Infinite decimals’ are not simple modifications of finite decimals; rather, they are much more complicated. The topic is invariably distorted when taught in
public and high schools. Let’s look at some basic issues; this will be a good
warm-up for discussing more serious difficulties that await down the road with
‘uncomputable decimals’.
A rational number or fraction has a finite decimal expansion precisely when
it is of the form a/b where a and b are integers with all the prime factors of
b either 2 or 5; numbers such as 7/8 or 37/40. Other rational numbers have
repeating decimal expansions, such as

13/7 = 1.857 142 857 142 . . . (repeating block 857 142)
1/37 = 0.027 027 027 . . . (repeating block 027)
1/41 = 0.024 39 024 39 . . . (repeating block 024 39).

Such expansions are generally unique, unless the repeated part is 0 or 9, in which case there are two different but equal expansions. Thus for example 4.125 999 9 . . . = 4.126 000 0 . . . . The exception is 0, which has only one representation as a string of zeros.

3
Repeating decimals are less ambiguous than fractions, since the latter have
many different but equal representations, as for example
2/3 = 4/6 = 28/42 = −20/−30 = · · · .
Now let’s consider arithmetical operations on rational numbers in a decimal
format, where we stick to decimals in the interval [0, 1] for simplicity. Here is
the sum of 1/37 = 0.027 027 . . . and 1/41 = 0.024 39 024 39 . . . , computed by truncating both periodic decimals and adding the two finite decimals:

0.027027027027027027027027
+ 0.024390243902439024390243
0.051417270929466051417270

Figure 1: Approximate sum of 1/37 and 1/41.

We see the answer appears to be the periodic decimal 0.051 417 270 929 466,
which is indeed the decimal for 1/37 + 1/41 = 78/1517. Since the periods of the
two summands are 3 and 5, the period of the sum will be 15, and this determines the required truncation we use; of course this requires some proof.
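Since only exact rational arithmetic is involved, this is easy to check by machine. Here is a sketch in Python using the standard fractions module (an illustration of the claim, not a proof):

from fractions import Fraction

print(Fraction(1, 37) + Fraction(1, 41))   # 78/1517

# Period of the repeating decimal of 1/n, for n coprime to 10:
# the least k with 10^k = 1 (mod n).
def period(n):
    k, r = 1, 10 % n
    while r != 1:
        r = (r * 10) % n
        k += 1
    return k

print(period(37), period(41), period(1517))   # 3 5 15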
Now let’s consider the more challenging problem of how to calculate the
product of 1/37 and 1/41 using their decimal expansions. It is reasonable to
expect the answer to also have period 15, so we use at least this many digits
in each of the multipliers. Figure 2 shows part of the multiplication table, 24
digits across. Each row represents a non-zero multiplication of 0.024 39 by a
single digit, and is truncated to length 24.
Figure 2 does not show a true product of two finite decimals: each row is
itself a truncation, as the entire table would be twice as wide. Adding up the
rows gives an answer that appears to be periodic of period fifteen: namely

0.000 659 195 781 147

which is indeed the correct answer: the decimal for 1/1517.


Multiplication by hand of an ‘infinite decimal’ like this is painful because carrying moves to the left: we would like to start the multiplication from the ‘extreme right’, but such a position does not exist, so we start at some arbitrary point, but then we need to continually modify the result as we move our truncation point to the right. A natural question is: how much can the truncated digits affect the computation at any stage? Clearly the last, or right-most, digit can be affected–but under what circumstances can the next-to-last digits also be affected? To put the question another way–when can we be sure that we have determined the first k digits of the product? Unfortunately if we encounter a string of 9’s as we proceed in our calculation, then we may have to go much further to establish the k-th digit. There are subtle issues here which are far beyond high school students, and it seems curious that the topic is rarely developed coherently, even in advanced texts.

0.024390243902439024390243
x 0.027027027027027027027027
0.000487804878048780487804
0.000170731707317073170731
0.000000487804878048780487
0.000000170731707317073170
0.000000000487804878048780
0.000000000170731707317073
0.000000000000487804878048
0.000000000000170731707310
0.000000000000000487804878
0.000000000000000170731707
0.000000000000000000487804
0.000000000000000000170731
0.000000000000000000000487
0.000000000000000000000004
0.000659195781147000659014

Figure 2: Approximate product of 1/37 and 1/41

A pleasant and interesting alternative to this subject is to use the delightful 10-adic numbers. Instead of considering decimals repeating infinitely to the right, we consider decimals repeating infinitely to the left!
To illustrate, let’s reconsider the product of 1/37 and 1/41 using two left-
sided infinite decimals as in Figure 3.
To understand this, declare that a two-sided repeating decimal always equals
zero (this is natural if you consider that a product by a suitable power of ten
ought to leave such a number invariant). So for example

. . . 243902439.0243902439 . . . = 0

so that

. . . 243902439.0 = −0.0243902439 . . . = −1/41.
The other factor is −1/37. Arithmetic with 10-adic numbers is reasonably
straightforward, as long as one uses the usual rules, with carrying moving to the
left. There is always a well-defined place to start, and there are always only a
finite number of terms to combine at any step. The answer is determined from right to left, as in ordinary multiplication, and once a digit is found it does not
change later. The result of our computation above, after dispensing with two
negative signs, is the 10-adic decimal

. . . 804218852999340804218853.0 = 1 + . . . 999340804218852.0
= 1 − 0.999340804218852 . . . = 1 − 1516/1517
= 1/1517.

... 243902439024390243902439.
x ... 027027027027027027027027.
... 707317073170731707317073
... 878048780487804878048780
... 000000000000000000000000
... 317073170731707317073000
... 048780487804878048780000
... 000000000000000000000000
... 073170731707317073000000
... 780487804878048780000000
... 000000000000000000000000
... 170731707317073000000000
... 487804878048780000000000
... 000000000000000000000000
... 731707317073000000000000
... 804878048780000000000000
... 000000000000000000000000
... 707317073000000000000000
... 878048780000000000000000
... 000000000000000000000000
... 317073000000000000000000
... 048780000000000000000000
... 000000000000000000000000
... 073000000000000000000000
... 780000000000000000000000
... 000000000000000000000000
...7804218852999340804218853

Figure 3: The 10-adic product of 1/37 and 1/41.

The analysis is simpler than with the usual decimal multiplication, and more
logically solid. One can verify that this arithmetic is just as valid as the usual
one, and generally allows us to get correct answers more efficiently when working
with fractions in decimal form. The method works even better if you replace 10
with a prime number.
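For readers who wish to experiment, here is a small Python sketch (my illustration) of the underlying mechanism: for b coprime to 10, the last N digits of the 10-adic expansion of a/b are just a·b⁻¹ mod 10^N, and 10-adic multiplication is then just multiplication of these residues.

N = 24
MOD = 10 ** N

def tenadic(a, b, n=N):
    # Last n 10-adic digits of a/b, for b coprime to 10.
    return (a * pow(b, -1, 10 ** n)) % (10 ** n)

x = tenadic(-1, 41)
y = tenadic(-1, 37)
print(str(x).zfill(N))                    # ...243902439024390243902439
print(str(y).zfill(N))                    # ...027027027027027027027027
print(str((x * y) % MOD).zfill(N))        # ...804218852999340804218853
print((x * y) % MOD == tenadic(1, 1517))  # True: the product is 1/1517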

Extending to non-periodic decimals


Let’s now return to real numbers, and consider more general infinite decimal
sequences, such as

α = .101101110111101111101111110 . . .
β = .1221112222111112222221111111 . . . .

In each case there is a more or less obvious rule that generates the sequence, but
to be precise we have not specified the decimal until we have unambiguously spelled out this underlying rule. For example α and β have the rules:

A) 1. Set k = 1.
   2. Print k ones and then a zero.
   3. Add one to k.
   4. Go to step 2.

B) 1. Set k = 1.
   2. Print k ones and then k + 1 twos.
   3. Add two to k.
   4. Go to step 2.

Both rules are simple enough that we could rewrite them as programs which
input a number N and output the N-th digit of the decimal. This is another reasonable form to describe such a number; however, any finite initial string has an endless number of programs that generate it. For example for α we could
modify step 4 of A) to read:

4a. If k = 10^10 then add three to k. Go to step 2.

This gives a decimal α′ that agrees with α up to the 10^10-th place, and then differs from it. Any description of an infinite non-periodic decimal as an initial
sequence and three dots does not specify the sequence unambiguously. To do
that requires a program or a function or something equivalent to it.
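To illustrate, here is rule A) rewritten in exactly that form: a Python sketch (mine) of a program which inputs N and outputs the N-th digit of α.

def alpha_digit(N):
    # N-th digit (1-indexed) of alpha: blocks of k ones followed by a zero.
    k, pos = 1, 0
    while True:
        if N <= pos + k:
            return 1          # inside the run of k ones
        if N == pos + k + 1:
            return 0          # the zero ending the block
        pos += k + 1          # move past this block
        k += 1

print(''.join(str(alpha_digit(n)) for n in range(1, 28)))
# 101101110111101111101111110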
And now appears the rock upon which the subject teeters–a key weakness
in modern mathematics: such a program is never unique. For example, we could
modify the above program for α by changing step 4 to either of the following:
4b. If k = 10^10 + 23 and k is prime then add one to k. Go to step 2.
4c. If k is an odd perfect number then add one to k. Go to step 2.
Since 10^10 + 23 is not prime, the alternate rule 4b does not effectively change
the outcome, although it certainly changes the program. Whether or not rule
4c changes the outcome is unknown to us at this point, since we do not know
if there are any odd perfect numbers, though we can compute if any particular
number is. [A perfect number, such as 6 or 28, is the sum of its divisors less than itself, i.e. 6 = 1 + 2 + 3 and 28 = 1 + 2 + 4 + 7 + 14.]
There are many other ways to subtly modify a program for a decimal sequence while keeping its output the same. Any infinite decimal sequence has an
infinite number of possible specifications as a program. But that phenomenon
is not new to us–it happens also with the case of rational numbers. We know
that a/b and c/d can represent the same fraction even if the integers a, b, c and
d look quite different. However in this case there is a finite rule that allows us
to decide: we know they represent the same fraction precisely when

ad − bc = 0.

There is no such rule for programs that generate infinite decimals. This should
be obvious from the examples above. After all, to tell whether or not the
algorithm A) with steps 1,2,3,4 gives the same output with steps 1,2,3,4a already
requires the solution of a famous unsolved problem in number theory.
So the specification of non-periodic decimals runs into a major hurdle–we
cannot determine in general if my program for a decimal gives the same result
as your program for a decimal. In particular we cannot in general decide if a given arithmetical statement about such decimals is correct. This is a far cry
from the situation with rational numbers!
Unfortunately the situation with specifying decimals becomes even more
problematic when we start to do arithmetic. How are we going to describe the
numbers α + β and αβ in terms of programs that generate decimal expansions?
Indeed how are we going to first define these numbers? There is an obvious
strategy: truncate both decimals to some order n, compute the finite decimal
sum or product, and then reiterate with ever larger n’s. One must then prove
that any initial finite sequence of the result stabilizes. But in general can we say
how large n must be before we are certain about the k-th digit of the result?
No doubt for simple examples like the current α and β this is feasible, but for
general decimal programs it seems a daunting prospect.
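Here is that strategy in sketch form (Python, my illustration), with rules A) and B) recast as digit generators: truncate, add exactly, and watch the leading digits of the sum appear to stabilize as n grows.

from fractions import Fraction
from itertools import islice

def alpha_digits():
    k = 1
    while True:                      # rule A): k ones, a zero, k -> k + 1
        for _ in range(k):
            yield 1
        yield 0
        k += 1

def beta_digits():
    k = 1
    while True:                      # rule B): k ones, k + 1 twos, k -> k + 2
        for _ in range(k):
            yield 1
        for _ in range(k + 1):
            yield 2
        k += 2

def truncation(gen, n):
    # The exact rational number given by the first n digits.
    ds = ''.join(str(d) for d in islice(gen(), n))
    return Fraction(int(ds), 10 ** n)

for n in (10, 20, 40):
    print(n, float(truncation(alpha_digits, n) + truncation(beta_digits, n)))
# the displayed leading digits settle down as n increases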
Why is this not well known and acknowledged? When a scientific problem
does not have a solution even remotely in sight, its importance is often downplayed. Mathematicians rarely discuss the core problem of how to create a framework in which we can consistently specify decimal numbers. The most common real numbers are too familiar: irrationals such as √2, √3, √5 and a few special ones like π, e and γ. By manipulating the symbols for these numbers, we may avoid worrying about specifications and programs. However the ambiguities are there, even if we ignore them.
For example, which of the dozens of possible definitions of the number π
is the correct one–or perhaps one should say the first one? Should one use
an infinite series, or choose half of the imaginary part of the first zero of the
complex exponential function e^z, or (my favourite) an integral such as

π ≡ ∫₀¹ 1/√(s(1 − s)) ds

or should one just stick to elementary geometry and say that π is the ratio of
the circumference to the diameter of a circle?
If you believe the latter is the right way of proceeding, then you have a little
problem. You must develop first a theory of arclengths of curves, and this will
require calculus. If you stick to approximations by inscribed and circumscribed
polygons as Archimedes did, you will need to prove that the result is independent
of your choices (this Archimedes didn’t do). You’ll also have to demonstrate
that the ratio is independent of the size of the circle. This little assumption
sneaks into many such arguments, but it is not true on the surface of a sphere,
so you will need to invoke some crucial defining properties of the Euclidean
plane. In other words, you will first need a theory of Euclidean geometry.
Sadly, aside from [2], there are no logically correct and complete developments
of this subject.
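Returning to the integral definition given above: it is at least easy to test numerically. A sketch (assuming the third-party scipy library is available; the endpoint singularities are integrable, and quad copes with them here):

import math
from scipy.integrate import quad

value, err = quad(lambda s: 1.0 / math.sqrt(s * (1.0 - s)), 0.0, 1.0)
print(value)                          # approximately 3.141592653589793
print(abs(value - math.pi) < 1e-9)    # True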
And allow me to remind you that you are allowed only one definition of a
number like π. Mathematicians often like to cheat and hedge their bets, with
several definitions of the same object up their sleeve, to be pulled out as different
situations warrant. If you are a professional mathematician and think this
doesn’t include you, then let me ask: how do you define the function sin x?

Is it a ratio of lengths in a right triangle, or given by a power series, or the
imaginary part of the complex exponential, or the inverse of a function given
by an integral, or is it defined in terms of some other trig function, such as
cos x or tan (x/2)? Whatever your choice, you must then establish all the other
formulations as theorems. There is a good reason why all those calculus texts
studiously ignore the important problem of actually defining sin x. And of
course why they conveniently pretend that the concept of ‘angle’ is exempt
from a proper definition (for more in this direction, see [2]).

Extending to ‘uncomputable decimals’


The difficulty of specifying the programs or algorithms behind non-periodic decimal numbers is not the only problem with the current thinking. The modern pure mathematician unfortunately asserts that those decimal numbers which are specified by programs, algorithms or functions–so-called ‘computable decimals’–are in fact an insubstantial minority in the zoo of ‘all possible decimals’. She believes that it is possible to talk meaningfully about infinite decimal expansions that are not given by a program, algorithm or function.
Can you put aside your deeply ingrained ‘education’ for just a minute and
begin to see how absurd such a notion might be? Whatever could it mean to have
an infinite pattern that is not ultimately described by a finite rule? Are there
any examples of such things, say in the scientific world, in applied mathematics,
in computer science? There are not. Are there any concrete examples in pure
mathematics? There are not.
One is misled by the terminology, which suggests that if there are ‘computable
decimals’, then there ought to be also ‘uncomputable decimals’. It is like arguing
that since there are mortal beings, there surely must also be immortal beings,
or that because there are mice that can be stopped, there should also be mice
that cannot be stopped.
Theories that rely on ‘uncomputable decimals’ (or ‘immortal beings’, or ‘unstoppable mice’) have no consequences for the applied sciences that cannot be deduced from more modest computable counterparts. If you can prove this claim wrong, please do so. Find a theorem–just one!–that actually says something about the real world occupied by engineers, economists, chemists and experimental physicists which genuinely requires the concept of ‘uncomputable decimal’–and I will happily retract my heresies.
Some analysts are loath to acknowledge the problem, perhaps because they recognize the danger to the established framework: if ‘uncomputable real numbers’ are revealed as dubious, then ‘non-measurable functions’ are also suspect, and so are a lot of other more sophisticated things they like to talk about. And if real numbers come down to only computable decimals, won’t that mean that we have to truly acknowledge the difficulties in specifying programs for sequences, as discussed in the previous section?
People will come up with curious apologies for their non-mathematical belief
in infinite sequences without patterns. How about just somebody, say me, sitting back and making up decimal digits ‘arbitrarily’? For example:
A = 0.431455582478237823947293428349283479182374918237491759700979
1234789759747234471978791279587219358712934712938479123487129
4324324834682648646836824634682348234726890989999958857717 . . . .
But those three little dots mean only: ‘I stopped typing’. There is no well
defined continuation of the sequence, so ‘the number A’ is a meaningless concept.
Indeed I am probably just a machine (this sad possibility appears increasingly
likely with each passing year), and an intelligence far greater than mine might
easily comprehend the workings of my brain and so find an obvious pattern
behind these seemingly random digits, even if I stick to it for a few years. But
in any case, eventually I die, so the sequence is not endless. The fact that in
our usual topology the ‘number A’–whatever it is–is so closely approximated
by the first hundred or so digits of the above decimal causes us to disregard the fuzziness in the tail of the sequence as being unimportant. This is an important
psychological explanation of the laxity with ‘uncomputable decimals’: the fact
that the confusion seems after all so vanishingly small.
Some might posit the existence of an angel that has been spewing out decimal
digits ‘forever’, or perhaps a slimy galactic superoctopus in an (infinite) corner
of the universe, that can hold up digits with its infinite number of arms to
specify an ‘uncomputable decimal’. Yet other mathematicians maintain that
even though there may be no examples of ‘uncomputable decimal sequences’,
they themselves can still imagine them, and thus they are proper objects for
mathematical thought and activity. Is this what we want our subject to descend
to? Are we incapable of separating metaphysics from mathematics?
Once an idea gets repeated sufficiently often by yourself or those around you,
it seems to take on a life of its own and simply must correspond to something
out there. Once you have repeated the phrase ‘uncomputable real number’,
‘uncomputable real number’, ‘uncomputable real number’ a few hundred times,
the thought of such objects no longer feels quite so strange. When prominent
mathematicians around you use the term without flinching or reddening, you
start to imagine sort of what they might be like, kind of, if you know what I
mean. Remarkably flexible organ, that old noggin’.

Real numbers as equivalence classes of Cauchy sequences

Another approach to real numbers is via sequences of rational numbers. Often
these arise from infinite series. So for example the series
π/4 = 1 − 1/3 + 1/5 − 1/7 + · · · (2)

gives rise to the sequence

1, 2/3, 13/15, 76/105, 263/315, 2578/3465, 36979/45045, 33976/45045, · · · . (3)

This latter sequence is a Cauchy sequence, meaning that no matter what
small positive number ε we choose, after some point in this sequence any two
numbers differ by at most ε. We might like to define the real number π/4 in
terms of this Cauchy sequence, and one way to do that is simply to identify the
Cauchy sequence itself with the number. The number is the sequence!
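For concreteness, here is a small Python sketch (my illustration) which generates the exact partial sums (3) and locates, for a given ε, a point beyond which any two terms differ by at most ε, using the standard bracketing property of alternating series:

from fractions import Fraction
from itertools import islice

def partial_sums():
    # Exact partial sums of 1 - 1/3 + 1/5 - 1/7 + ..., i.e. the sequence (3).
    total, sign, d = Fraction(0), 1, 1
    while True:
        total += Fraction(sign, d)
        yield total
        sign, d = -sign, d + 2

print([str(s) for s in islice(partial_sums(), 8)])
# ['1', '2/3', '13/15', '76/105', '263/315', '2578/3465',
#  '36979/45045', '33976/45045']

# For an alternating series the partial sums bracket the limit, so any two
# terms past index n differ by at most the next term 1/(2n + 3):
eps = Fraction(1, 1000)
n = 0
while Fraction(1, 2 * n + 3) > eps:
    n += 1
print(n)   # 499: past here, any two terms differ by at most eps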
However there is a slight snag. It is really only the tail end of the sequence
that encodes π/4, in the sense that if we changed the first two entries of (3) to
−17 and 2422, the resulting limit would be the same. More seriously, we could
change all of the entries of the sequence, say by using a different series for π. So
we use the idea of an equivalence class. Don’t ask me what the words precisely
mean, since I don’t know. But the rough idea is simple enough: you get a big
bucket and toss in all of the sequences that are going to have the same limit,
and then you define the limiting real number to be the bucket. Brilliant, isn’t
it?
But before we start looking at these buckets of sequences in their entirety,
let’s return to a rather crucial point about the sequences that we ought to allow.
The sequence (3) is not defined until we specify the rule that generates it. In
this case such a rule is not hard to construct from the relatively simple rule
generating (2).
But the question is, do we require this rule in order to specify the sequence?
Now a computer scientist will likely say, ‘of course you do, otherwise you and your colleagues don’t know what you are talking about’. But we modern mathematicians are not so severe on ourselves–we allow contemplation of such sequences without specifying the rules that generate them.
If a computer scientist were to create a bucket of sequences all converging
to π/4, the strip of paper with the sequence (3) would likely have on its reverse
the rule that generates that sequence, written in English, as a mathematical
formula, or as a computer program. Without such a rule, the sequence is ambiguous, since a finite number of terms does not specify an infinite sequence.
Today’s pure mathematician has quite a different idea. She reckons that
you can get away without the rule on the back, because the entire sequence
is already on the strip. No more information is needed! What a clever idea.
Of course now the strips of paper are all infinitely long, and require an infinite
amount of time to read. For example suppose you take a strip of paper from
my bucket, and you see an initial sequence of zeros. You read on, mile after
mile, light year after light year, and all you see is more and more zeros. You are
starting to suspect this might be the sequence consisting of all zeros. But you
are not sure. On the back of the strip, there is no handy little statement like ‘all
the entries of the sequence are zero’ as you might find on a computer scientist’s
strip. So you are obliged to keep on reading, forever and forever, past the end of
time, just to answer the simple question of whether you are holding the number
zero in your hand. Communing with the Divine Ompah’s Holy Phoenix would
be a relief in comparison! And why must you do this? So that we are allowed to
talk about ‘arbitrary sequences’ without undue embarrassment, and can avoid
worrying about programs. Make no mistake, this is the crux of the matter.
So let’s examine some specific real numbers. Here is my bucket of equivalent Cauchy sequences, and I see you also have such a bucket. Heh, I wonder what
real numbers these are? Let’s have a look, shall we? I’ll just pull this strip s
out of my bucket:

s = 2/3, 2/3, −145, 83001/7, −132424331, 32/15, −23/17, 999999 − 6, 4, 4, 12/89, −3, 41412341278, −4578, 3/2, 5, 7, 9, 14/17, 1/98731, −1144441, 1/2, · · ·

What do you reckon? Is it π? Or is it e? Wait, you claim that your bucket seems
to contain the same strip!?? What is true is that both our buckets contain an
infinite number–in fact an uncountably infinite number (let’s pretend that we
are metaphysicians for a minute so that such words have meaning to us) of
strips that all begin exactly the same way. In fact every initial sequence of
every strip in my bucket is repeated an infinite number of times in your bucket
and conversely. After pondering this curiosity, we realize that it’s not a surprise,
since any initial sequence of a Cauchy sequence can be changed arbitrarily and
it will still be equivalent to what it was.
We have just established the remarkable fact that any two buckets seem
to contain exactly the same strips. No finite being can detect the difference between any two real numbers! Therefore 0 = 1 = π = √2 = · · · . The mistake
of course is that our buckets may well be different, but this difference can only
be seen by an infinite being, such as God, the Divine Ompah, or a slimy galactic
superoctopus. Oh, alright then.
Am I being unfair in the example of a Cauchy sequence from my bucket?
Don’t most Cauchy sequences rather look like

t = 1, 1.4, 1.41, 1.414, 1.4142, 1.41421, 1.414213, 1.4142135, · · ·?

Only in your dreams, my friend, or in modern texts. While s is indeed rather atypical, that is because it is so shockingly regular! Each of its first 22 entries, such as 2/3, −145, 83001/7 and so on, is a tiny rational number–almost indistinguishable from zero and each other. A more honest representative Cauchy sequence would have its first term a more normal rational number, with numerator and denominator many millions of light years long, even in 10 point type.
Indeed you and I would ‘with probability one’ never be able to read even the
first entry. Such is the fantasy world in which modern analysis lives.
The difficulties with equivalence classes of Cauchy sequences pile up further once one starts thinking about how to do arithmetic with them, and how
to verify the laws that these operations should satisfy. I will leave it to some
future analyst to try to explain to us how these things might be accomplished.
Perhaps Monsieur Bourbaki might like to have a go? Or is there a good reason
why the famed French mathematician has so far avoided such issues?

Real numbers as Dedekind cuts


R. Dedekind supposedly had a deep insight into the nature of irrational numbers. It can be well illustrated by √2; however, for less simple irrationals it is much less convincing. When we wish to augment the rational numbers to include d = √2,
one way is to simply declare that d is a new number which has the property that
d^2 = 2. This works fine as long as we restrict ourselves to the purely algebraic
aspects of the new field, but when we wish to discuss topological or geometrical
questions we wish to know: where exactly is this number d? To make continuity
work, we would like to position d along the rational line, either between 1.4 and
1.42, or between −1.42 and −1.4. Dedekind had the idea that if we imagine
the rational numbers as lying on a line, then a real number ought to fit into
this line somewhere, and so separate the rational numbers into two separate
halves–those to the left and those to the right of the gap. If our metaphysics
is not constrained by finite considerations, why not define a real number to be
a splitting of the rationals into two ‘sets’ L and R, where each element of L is
less than each element of R?
For this idea, Herr Dedekind has received much praise. But is the argument in reality a sneaky attempt to get something for nothing? Things become clearer with more general real numbers, rather than the deceptively simple √2. Consider for example the number ω = π/4. How are we going to define this in a Dedekind cut fashion? The sequence

1, 2/3, 13/15, 76/105, 263/315, 2578/3465, 36979/45045, 33976/45045, · · · (4)
alternately overshoots and undershoots ω, since it is the partial sums of an
alternating series which converges to ω. So the ‘set’ L may be specified by the
string of inequalities

x < 2/3
or x < 76/105
or x < 2578/3465 · · · (5)

and so on, while the ‘set’ R will be specified by the string of inequalities

x > 1
or x > 13/15
or x > 263/315 · · · (6)

and so on. Of course the string of inequalities is highly non-unique. This kind
of Dedekind cut is equivalent to a pair of Cauchy sequences.
The set theorist may object that even though there are lots of different
ways of specifying say the left set L, it is after all unique. But the ‘set L’ is
an ideal object, only determined by a sequence of approximations to it. The
modern mathematician has been lulled into thinking that there is some decisive
difference between specifying an infinite sequence of inequalities such as (5) and
the ‘set’ of all solutions to it, because of the psychological propensity to identify
a set as a fixed object, pinned down at a given moment of time, while a sequence
is an unfolding process.

The chief advantage of the Dedekind cut approach is that it appears to make
the definition of operations on real numbers straightforward. If one real number r1 is specified by the Dedekind cut {L1, R1} and another real number r2 is specified by the Dedekind cut {L2, R2}, then we can define the real number r1 + r2 to have the Dedekind cut {L1 + L2, R1 + R2}, where the sum of two sets A and B of rational numbers is defined by

A + B = {a + b | a ∈ A, b ∈ B}.

Multiplication is similar, and other operations follow the same lines.
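As a sketch of what this might mean computationally (my own illustration, with hypothetical helper names): model a cut by a function which, for each n, returns a rational known to lie in L and one known to lie in R; the sum of two cuts then just adds the witnesses.

from fractions import Fraction

def sqrt2_cut(n):
    # Rational witnesses (l, r) with l in L and r in R for the cut of sqrt(2),
    # found by bisection to accuracy 2^-n.
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(n):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo, hi

def add_cuts(cut1, cut2):
    # The Dedekind sum {L1 + L2, R1 + R2}, witness by witness.
    def cut(n):
        l1, r1 = cut1(n)
        l2, r2 = cut2(n)
        return l1 + l2, r1 + r2
    return cut

l, r = add_cuts(sqrt2_cut, sqrt2_cut)(30)
print(float(l), float(r))   # both approximately 2.8284271..., i.e. 2*sqrt(2)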


However this simplicity is largely an optical illusion, as becomes apparent
when we ascend from the level of talking to the level of doing. Let’s consider
adding the second largest real zero µ of the polynomial function f(x) = x^7 + 2x^6 + x^4 − 5x + 2 to ω = π/4. To first specify µ concretely as a Dedekind cut,
we essentially need to generate two Cauchy sequences converging to µ from left
and right. There are infinitely many ways of generating such sequences.
Inevitably we must go back to Newton’s method for finding zeros, or some
equivalent technique, to find a sequence of finite decimal or rational number
approximations to µ, and then (artificially) concoct L and R in terms of this
sequence of approximations as we did above for ω, and then we combine the
two Dedekind cuts as described above. If you actually try this, you will quickly
realize that the Dedekind cut notion adds little to the computation; nothing is
gained that working with the rational approximations generated by Newton’s
method directly would not give us. Dedekind’s theory is like flying by flapping
your feet.
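For the curious, here is what ‘going back to Newton’s method’ might look like in exact rational arithmetic (a sketch, mine; a sign check suggests the relevant zero of f lies between 0.40 and 0.41, so we start at 0.4, and trim each iterate with limit_denominator purely to keep the fractions printable):

from fractions import Fraction

def f(x):
    return x**7 + 2*x**6 + x**4 - 5*x + 2

def fp(x):
    # the derivative f'(x)
    return 7*x**6 + 12*x**5 + 4*x**3 - 5

x = Fraction(2, 5)                     # starting guess 0.4
for _ in range(6):
    x = x - f(x) / fp(x)               # a Newton step, exactly in Q
    x = x.limit_denominator(10**40)    # trim, to keep the rationals readable
    print(float(x))
# the iterates settle down near 0.4077..., rational approximations to mu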

Time for some fresh thinking


The modern mathematician is uncomfortable in the face of serious inquiry on this subject, and usually seeks refuge behind an unwieldy and complicated system of ‘Axioms’, which–to be honest–only make sense to the previously initiated, and are almost never directly used by mathematicians in their ordinary work, with the exception of the ‘Axiom of Choice’ (see [1]). As long as the bluff
is not called, a large portion of mathematics remains essentially built on sand,
and many opportunities for advancing the discipline and improving education
are not pursued. What a regrettable state for the queen of the sciences.
So what might be contemplated? First of all we should relegate all talk about
concepts which cannot be pinned down concretely and finitely to philosophy.
‘Uncomputable real numbers’ ought to be the first to go. Then we need to think
deeply and creatively for a consistent way to deal with computable decimals.
The usual ‘facts’ about these objects will all need to be carefully revisited.
For example, it should become common knowledge that the computable
decimal numbers are not countable. Such a statement requires no use of ‘infinite
sets’. It means that there is no computer program that when input with the natural number N outputs a program for the N-th computable decimal number. For, by the famous argument of Cantor, if you have such a machine, I will use it to build another machine that churns out a decimal number not on your list–my machine will input number N, use your machine to calculate the program for the N-th decimal, calculate the N-th digit, say k, of that decimal, and then
will output something different from k, such as (k + 3) mod 10. I realize that
many will reject this argument for one philosophical reason or another, but let
me emphasize: if you think they are countable, then count them!
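The diagonal machine is easy to write down; here is a Python sketch (mine), where enumeration stands for the purported machine N ↦ (program for the N-th computable decimal) which the argument shows cannot exist:

def diagonal(enumeration):
    # Build a decimal differing from the N-th enumerated decimal in digit N.
    def new_decimal(N):
        nth_program = enumeration(N)     # program for the N-th decimal
        k = nth_program(N)               # its N-th digit
        return (k + 3) % 10              # output something different from k
    return new_decimal

# A toy (finite, hence certainly incomplete) enumeration, just to run it:
def toy_enumeration(N):
    return lambda i: (N * i) % 10

d = diagonal(toy_enumeration)
print([d(N) for N in range(8)])   # differs from decimal N in digit N, for each N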
By the way, there is in my view no problem with the fact that one could create
a list of ‘all’ potential computer programs (including nonsense programs which
don’t run, or which run forever without outputting), just by lexicographically
listing all possible codes in some computer language. And while any program
for any computable decimal may be found within such a list, there is no way
to systematically determine which possible codes correspond to programs for
computable decimals, because one cannot build a machine that will determine
whether a given program will halt. And yes, this appears to contradict a well
known ‘theorem’ from ‘modern set theory’.
It should also become common knowledge that the computable numbers are
complete–there are no holes there. If you have a machine that generates a
Cauchy sequence of computable decimals, in other words which when input N
outputs (a program for) the N-th decimal in your Cauchy sequence, then someone else
can use your machine to build a machine that constructs the limit.
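In sketch form (mine, under the extra assumption that the machine also supplies an explicit modulus of convergence, i.e. for each k an index past which all terms agree to within 10^-k), the limit machine might look as follows; it outputs rational approximations rather than digits, to sidestep the string-of-9s issue mentioned earlier:

from fractions import Fraction

def limit(sequence, modulus):
    # sequence(N): the N-th decimal, as a function i -> its i-th digit.
    # modulus(k):  an index past which all terms are within 10^-k of each other.
    # Returns: k -> a rational within 2 * 10^-k of the limit (k >= 1).
    def approx(k):
        x = sequence(modulus(k))
        digits = ''.join(str(x(i)) for i in range(1, k + 1))
        return Fraction(int(digits), 10 ** k)
    return approx

# Toy usage: the N-th term is 1/3 truncated to N digits, with modulus k + 1.
def seq(N):
    return lambda i: 3 if i <= N else 0

print(limit(seq, lambda k: k + 1)(5))   # 33333/100000, within 2*10^-5 of 1/3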
To make this all clear, we would like a theory of decimal numbers (may I
stop using the unnecessary adjective computable?) that assigns unique programs
to each number, or at least allows us to determine in a specified finite time
whether two of our programs generate the same number. If we cannot create
such a theory (a likely possibility), then we need to establish this, and begin to
search for viable alternatives. We need an appropriate theory of algorithms. We should also worry about running times, since programs that have unspecified running times are awkward.
Such a task will no doubt require bringing together ideas from both computer
science and mathematics. In my view, this is the most interesting and important
task facing modern analysis. Can we get serious now?

References
[1] N. J. Wildberger, ‘Set Theory: Should You Believe?’, posted 2006 at
http://web.maths.unsw.edu.au/~norman/views.htm
[2] N. J. Wildberger, Divine Proportions: Rational Trigonometry to Universal Geometry, Wild Egg Books, Sydney, 2005.
