
Software and Mind — related articles

A summary of Popper’s principles of demarcation


by Andrei Sorin

This article is a brief discussion of the principles of demarcation between science and
pseudoscience developed by philosopher Karl Popper. The full discussion can be found in my
book, Software and Mind, in the section “Popper’s Principles of Demarcation” in chapter 3. (The
book, as well as individual chapters and sections, can be downloaded free at
www.softwareandmind.com.)

It must be stressed from the start that, while associated largely with scientific theories, the
principles of demarcation are useful in any domain: business, politics, social issues, and even
personal affairs. These principles provide a simple and reliable method for determining whether
a given activity is rational or whether it is a worthless pursuit. Thus, the term “theory” must be
interpreted here as any idea, assertion, method, or claim. Popper himself recognized the value
of his principles outside the domain of science. The principles, however, have become even more
important in the last few decades. As our world becomes more and more complex, we increasingly face situations – in matters of health, education, work, technology, politics, investments, purchases – where a decision must be made even though we know little about the field in question. In these situations we are liable to fall victim to irrational thinking and fallacious
arguments; for example, we are liable to assess an idea by its popularity or by its promotion,
rather than by its usefulness. Popper’s principles show us how to assess an idea logically, and
help us to recognize mistaken ideas in any field.

According to Popper, we must accept the fact that in the empirical sciences (and hence for any
idea involving the real world) it is impossible to prove a theory in an absolute sense, as in
mathematics or logic. The only way to advance, therefore, is through a process of trial and error,
of conjectures and refutations. Since we can never be sure that our current knowledge is correct
or complete, we must treat our theories as mere conjectures: we must doubt them, do our best
to refute them, and continue to use them as long as we fail to refute them. We accept theories,
therefore, not because we have proved their validity (which is impossible), but because we have failed to prove their falsity.

The three principles of demarcation are as follows:

• A theory must be tested by looking for falsifications of its claims; it is unscientific to attempt to
verify a theory by looking for confirmations of its claims. Thus, we must accept a theory, not
when we find situations that confirm it, but when we fail to find situations that falsify it.

• A theory must be falsifiable. Someone who proposes a new theory must specify at the same
time certain situations which, if encountered, would be seen as falsifications of the theory. A
theory that is unfalsifiable in any circumstances is unscientific.

• It is unscientific to modify a theory if the modification aims to annul a falsification of its claims.
In other words, if we encounter a situation that falsifies the theory, we must not relax the
theory’s claims so as to make that situation acceptable. Instead, we must consider the theory
refuted.

The three principles are closely related and support each other. Thus, we could not apply the
first one (looking for falsifications) without the second one (there must exist falsifying
situations). And the second one would be meaningless without the third one (the theory must
not be relaxed so as to annul a falsification); for, by modifying it again and again to cope with
each new falsification, a theory that is obviously invalid could be made to appear successful.
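
As a rough illustration only, the discipline that the three principles impose can be sketched in code. The short Python sketch below is entirely invented for this summary (the names and structure are not part of Popper's formulation); it merely encodes the logic that a theory must declare its potential falsifiers in advance, that testing consists of searching for those falsifiers, and that a modified theory counts as a new theory.

    class Theory:
        def __init__(self, name, potential_falsifiers):
            # Second principle: a theory must specify, in advance, the
            # observations that would count as falsifications of its claims.
            if not potential_falsifiers:
                raise ValueError(name + " is unfalsifiable, hence unscientific")
            self.name = name
            self.potential_falsifiers = set(potential_falsifiers)

    def test(theory, observations):
        # First principle: test by searching for falsifying observations,
        # not by collecting confirmations.
        for obs in observations:
            if obs in theory.potential_falsifiers:
                return False        # the theory stands refuted
        return True                 # not proved, merely unrefuted so far

    def revise(new_name, new_falsifiers):
        # Third principle: a modified theory is a new theory; it must declare
        # its own falsifiers and be tested from scratch. The original version
        # remains refuted.
        return Theory(new_name, new_falsifiers)

The point of the sketch is only that acceptance is provisional: a theory is retained while attempts to falsify it fail, never because confirmations accumulate.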

Pseudoscientific theories
If we bear in mind the first two principles (a theory must be falsifiable, and it must be tested by
looking for falsifications, not for confirmations), it is not difficult to recognize pseudoscientific
theories; for their defenders cover up a theory's failures precisely by disregarding these two principles.

The simplest way to defend a pseudoscientific theory is by looking for confirmations. Since we
intuitively interpret confirmations as evidence of a theory’s validity, we are fooled by the
depiction of successes, failing to see that the theory's promoters deliberately restrict themselves to those
situations where the theory works. Thus, although crude, noting the few confirming situations
and ignoring the many falsifying ones is an effective way to make an invalid theory appear
successful.

A slightly more sophisticated stratagem is to make the theory entirely unfalsifiable, from the
start. This is done by keeping its predictions so vague and ambiguous that any event appears to
confirm it. Such a theory is, in effect, untestable. The fact that it cannot be tested, and therefore
is never falsified, makes the theory look successful; but this success is an illusion, because it is
achieved by avoiding tests, not by passing tests.

The most sophisticated stratagem is to start with a proper, falsifiable theory, and then make it
unfalsifiable by repeatedly modifying it. Whenever faced with a falsifying situation, instead of
admitting that the theory was refuted, its defenders expand it by making that situation look like
a legitimate part of it. In other words, they cover up the falsifications of the theory by turning
them into new features of the theory. Since they deal with one falsification at a time, and since
the changes look like improvements, the general degradation of the theory may go unnoticed.
But modifications that relax a theory’s claims are not improvements; on the contrary, these
modifications make the theory less and less rigorous, and hence less and less useful.

Thus, while the theory remains falsifiable in principle, it is rendered unfalsifiable in fact. To a
casual observer the theory looks just like the serious, scientific theories. It differs from them
only briefly, when a falsification is discovered; at that point it expands so as to permit that
situation, thus eliminating the threat. Then it appears again to be a serious theory – until
threatened by another falsifying situation, when the same trick is repeated. Clearly, a theory that is repeatedly expanded so as to permit every falsifying situation eventually becomes worthless, no different from those theories that are unfalsifiable from the start.

To prevent this method of covering up falsifications, Popper added the third principle to his
criterion of demarcation: a theory, once formulated, cannot be modified if the modification aims
to make it correspond to the reality that would have otherwise refuted it. We must admit that
the theory was refuted, and treat the modified version as a new theory.

We do not usually need to abandon the theory, of course; we merely declare the original version
to be refuted and the modified one to be a new theory. While this appears similar to simply
modifying the theory, in practice there is a big difference between the two concepts. By
proposing a new theory for each modification, we state in effect that we take each falsification
seriously. After doing this several times, it becomes difficult to delude ourselves that our project is sound. On the other hand, if we perceive each modification as an enhancement of the original theory, we may discover falsifications forever and continue to claim that the theory is valid and that all we are doing is improving it.

To conclude, here are the differences – according to Popper's principles of demarcation – between scientists and pseudoscientists; or, in general, between individuals who sincerely attempt to determine the value of an idea, and those who simply adhere to an idea, whether or not it is valid.

• Scientists doubt their theory, and use all current knowledge and severe tests in attempts to
refute it; they search for falsifications of the theory. Pseudoscientists defend their theory, and
their work consists in attempts to show that it is valid; they search for confirmations of the
theory.

• Scientists formulate their theory so as to allow only a narrow range of situations that count as
confirmations; the theory, therefore, has great practical value. Pseudoscientists formulate their
theory so as to allow a very broad and poorly defined range of situations that count as
confirmations; thus, since there are hardly any situations that count as falsifications, the
theory has little practical value.

• More commonly, the pseudoscientists start, just like the scientists, with a narrow range of
situations that count as confirmations. But in order to save their theory later, they expand that
range by adding, one by one, those situations that originally counted as falsifications. They thereby keep reducing the theory's practical value.

Examples
• The simplest example of pseudoscientific methods is found in promotional work like advertising
and public relations. Typically, we are asked to assess the merits of a product by reading a few
testimonials, or success stories, or case studies; in other words, by studying confirmations of
its usefulness. A product is like an empirical theory, in that certain claims are being made
about its usefulness – claims which must be tested. According to Popper's
principles of demarcation, therefore, the promoters ought to show us tests that attempt to
falsify those claims, not tests that confirm them. Intuitively, we tend to search for success
stories; but we must resist this, and remember that we can learn a lot more about a product’s
usefulness by studying the falsifications of those claims. The theory about the product’s
usefulness is falsifiable, and hence scientific; it is its promoters who, by ignoring the falsifications, render it pseudoscientific.

• A similar fallacy occurs when books, articles, or documentaries try to help us by describing a
few successful applications of an idea. We are shown, for example, how certain individuals
were successful in real estate, or in finding a job, or in losing weight; or how certain
companies were successful following one management concept or another, or using one
software system or another. Again, if what we need to know is how useful those ideas really
are, we ought to be shown falsifications, not confirmations. It is difficult to convince people
who seek advice, as much as those eager to give advice, that they should ignore the successes
and study instead the failures. But according to Popper’s principles of demarcation, this is the
only logical way to determine whether a certain idea is useful or useless. Specifically, we must
adopt the idea, not because we find confirmations, but because we fail to find falsifications. In
practice, the successes are irrelevant because they may be due to some unusual or subtle
conditions. In most cases, no one, not even the individuals involved, is aware of all the
conditions that contributed to a success.

• Popular economic theories provide another example of pseudoscientific thinking. Every day
we see in the media predictions about the future of such variables as the stock market,
inflation, or a particular commodity or currency. An expert, for instance, may list a number of
reasons why the stock market is likely to go up in the following year. Then we are told that if certain events occur, the stock market may actually go down. While appearing important
and informative, such statements are in fact meaningless, because there are no conceivable
events that could falsify them. This is an example of a theory that is pseudoscientific because
it is unfalsifiable from the start: no matter what happens in the future, the expert will appear
to have predicted it. Because the range of situations that count as confirmations is so broad,
the theory has no practical value.

• The mechanistic theories of mind and society are pseudoscientific according to Popper’s
principles of demarcation. These theories start with the assumption that human phenomena
can be explained with precision, just like the phenomena studied by physics. Then, in the face
of falsifying evidence, instead of doubting the theory, its supporters keep defending and modifying it, contrary to the principles of demarcation.

The theory of structuralism claimed that all aspects of culture can be explained by performing
various transformations on some basic binary concepts (good/bad, left/right, male/female,
day/night, etc.). However, both the binary concepts and the transformations were vague,
expandable notions, which could easily be employed to describe any tradition, piece of folklore,
work of art, social institution, etc. The theory was, therefore, unfalsifiable: no aspect of culture
exists that it could not explain.

The theory of behaviourism claimed that all human acts can be precisely explained and
predicted by reducing them to combinations of some elementary stimulus-response units. This
idea was falsified even in simple experiments with animals, but instead of abandoning it, the
behaviourists modified it again and again by relaxing its original claims, until the idea became
so vague that it appeared to explain almost any human act.

The theory of universal grammar claimed that the human capacity for language is an innate
faculty, which can be precisely explained, along with all natural languages, using mathematical
models. In reality, the use of language cannot be separated from the other knowledge present
in the mind, so no exact model can exist for the phenomenon of language. But instead of
admitting that the theory was refuted, its supporters kept modifying it in order to deal with the
falsifications. In the end, the idea of an exact model for the phenomenon of language became
unfalsifiable.

• The theories of software engineering are pseudoscientific according to Popper's principles of demarcation. These theories start with the assumption that software applications can be
developed by emulating the methods of manufacturing; that is, by designing a software
system as a perfect hierarchical structure of constructs and modules, just as an appliance is
designed as a hierarchical structure of parts and subassemblies. But this idea is fallacious,
because the parts of a software application are related through many structures, not one. So
the theories were refuted: applications could not be developed by strictly following the idea of
software engineering. But instead of recognizing this failure, the theorists responded by adding
features whose purpose was to bypass the restriction to a single structure. In other words,
contrary to the principles of demarcation, they attempted to rescue the theories by expanding
them so as to permit those situations that had originally been defined as falsifications. The
benefits promised by these theories could be achieved only if applications were restricted to a
single structure, just as originally claimed. Thus, by relaxing the claims and permitting
multiple, interacting structures, their defenders increased the range of confirmations to the
point where the theories, while appearing to work, had in fact lost the promised benefits.
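
To illustrate the claim that the parts of an application are related through several structures at once, here is a minimal, invented sketch (the functions and names are hypothetical, chosen only for illustration): the same two functions belong simultaneously to a call hierarchy, to a shared-data structure, and to an error-handling structure, so no single hierarchy can capture all their relationships.

    audit_log = []      # shared-data structure: written to from several levels

    def update_account(account, amount):
        # Call hierarchy: update_account sits above validate.
        validate(amount)
        account["balance"] += amount
        audit_log.append(("update", account["id"], amount))

    def validate(amount):
        if amount == 0:
            # Error-handling structure: the exception cuts across the call
            # hierarchy and is dealt with at some higher level.
            raise ValueError("amount must be non-zero")
        audit_log.append(("validate", amount))   # shared-data structure again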

The theory of structured programming claimed that applications can be represented as a hierarchical structure of three standard flow-control constructs, using various transformations
to turn all other constructs into standard ones. This idea was falsified even by simple
applications. But instead of abandoning it, its promoters expanded the theory by permitting
any non-standard constructs that were useful. The theory was saved by replacing its formal,
exact principles with some vague guidelines, at which point many programming styles that
were in fact falsifications appeared to conform to the idea of structured programming.
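
The three standard constructs in question are sequence, selection, and iteration. As a minimal, invented illustration, the two Python functions below search a list for a value: the first uses only the standard constructs (at the cost of an extra flag variable), while the second uses an early return, the kind of non-standard exit that the original theory required transforming away and that the relaxed guidelines later tolerated.

    def find_strict(items, target):
        # Only sequence, selection, and iteration; the loop has a single exit
        # through its condition, which requires a flag variable.
        i, found = 0, False
        while i < len(items) and not found:
            if items[i] == target:
                found = True
            i += 1
        return found

    def find_relaxed(items, target):
        # Early return: convenient, but outside the strict single-exit
        # hierarchy of standard constructs.
        for item in items:
            if item == target:
                return True
        return False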

The theory of object-oriented programming claimed that applications can be represented as software “objects” that are linked strictly through hierarchical relations. When this idea was
falsified, the theory was saved by expanding it so as to allow additional relationships between
objects (links between different hierarchies, the use of traditional programming concepts,
etc.). Thus, what had been falsifications of the theory became important features of the
theory.
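
As a minimal, invented illustration of the difference, in the sketch below a Document owns its Sections in a strict hierarchy, while an Index holds references to Sections it does not own; such cross-hierarchy links are the kind of additional relationship that the strictly hierarchical view did not permit.

    class Section:
        def __init__(self, title):
            self.title = title

    class Document:
        # Strictly hierarchical relation: a document owns its sections.
        def __init__(self):
            self.sections = []

        def add(self, section):
            self.sections.append(section)

    class Index:
        # Cross-hierarchy relation: the index refers to sections it does not own.
        def __init__(self):
            self.entries = {}

        def register(self, term, section):
            self.entries.setdefault(term, []).append(section)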

The theory behind the relational database model claimed that if the database is restricted to
“normalized” files, we will be able to represent the database structures and operations with a
formal, mathematical system (which is the same as saying that they can be represented as a
strict hierarchy). This idea was falsified in practice, and the relational model was saved by
inventing new features; for example, by annulling the restriction to normalized files and by
introducing programming languages that allow us to override the formal database operations.
Every feature added to the model was needed only in order to cover up a falsification of the
model.
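
As a minimal, invented illustration of what the original restriction meant, the sketch below (using Python's built-in sqlite3 module) contrasts a normalized design, where each fact is stored once and tables are linked by keys, with a denormalized shortcut that repeats customer data in every order row; relaxations of this kind are what later versions of the model came to permit.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Normalized design: each fact is stored once, tables are linked by keys.
    cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
                "customer_id INTEGER REFERENCES customers(id), item TEXT)")

    # Denormalized shortcut: the customer's name is repeated in every order row.
    cur.execute("CREATE TABLE orders_flat (id INTEGER PRIMARY KEY, "
                "customer_name TEXT, item TEXT)")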

Misrepresenting the principles
It is impossible to discuss Popper’s principles of demarcation without mentioning their
misrepresentation in the academic world. From the moment they were published many decades
ago, the academics have attempted to discredit these principles by belittling their significance.
And this effort continues. For example, as of 2013, some important websites, citing well-known
philosophers, misrepresent the principles and then conclude that they are inadequate as a
criterion of demarcation. (Popper, who had to spend much of his career dealing with the
misrepresentation of his work in general, called it “the Popper legend.”)

The academics reject Popper’s principles because this is the only way to justify senseless
research programs – programs that would be considered pseudoscientific if the principles were
observed. Typically, these programs promote mechanistic theories in fields like psychology,
sociology, linguistics, or economics, and in software development and software-related fields;
that is, theories which attempt to describe complex human phenomena with mathematical
precision. These theories never work, and Popper’s principles of demarcation can easily expose
them as pseudoscientific. If the principles were accepted, therefore, the academics could not justify pursuing such theories at all. But by rejecting the principles they can continue to pursue them while appearing to be engaged in serious research.

The critics misrepresent Popper’s principles by reducing them to a weaker version, which can
then be attacked. They use arguments such as the following.

• The critics claim that the concept of falsifiability ends up treating even silly and invalid theories
as scientific, simply because they are falsifiable. Astrology, for example, was shown long ago
to be useless. But, the critics say, it is falsifiable, and therefore scientific if we accept Popper’s
principles.

This claim is invalid: it distorts the principles by ignoring the requirement that a theory be
tested by looking for falsifications, not for confirmations. Astrologers, for example, always
advertise the successful predictions and disregard the wrong ones – contrary to this
requirement. Popper didn’t say that the only principle is falsifiability. He specifically
complemented it with the principle of searching for falsifications. It is the two principles
together that constitute the criterion of demarcation.

• The critics claim that the requirement to abandon a theory that has been falsified would have
forced us in the past to reject many theories that were later found to be, in fact, correct and
important.

Again, this is an invalid claim. Popper’s principles do not prevent us from modifying the
falsified theory and repeating the tests. What they say is that the new version must be
considered a new theory, and subjected to the same rigorous tests as a new theory. As we
saw, this is meant to prevent us from relaxing a theory’s tenets over and over, indefinitely,
rendering it in effect unfalsifiable.

• The critics point to the requirement that, when searching for falsifications, we must employ
severe tests and sincerely attempt to refute the theory. This, they claim, shows that the
principles are not a criterion of demarcation but merely an informal guideline for scientific
research.

Popper, however, made it clear that terms like “severe” and “sincere” are not subjective
assessments of the researcher’s attitude, but precise, technical concepts. Specifically, they
mean that only comprehensive attempts to falsify the theory count as tests; that is, only tests
which, given all current knowledge, are the most likely to falsify the theory. Thus, rather than
being just a guideline, the principles of demarcation constitute a formal, exact criterion.

• The critics point to some subtle philosophical issues arising from Popper’s principles, and claim
that, since these issues have not been resolved, the principles cannot be relied upon as a
criterion of demarcation.

There are, indeed, some unresolved issues, and Popper himself recognized and discussed
them. But these issues are important only when his principles are treated as a topic in the
philosophy of knowledge. They are immaterial when the principles are used strictly as a
criterion of demarcation. In other words, the validity and usefulness of Popper’s principles for
the purpose of demarcation would remain unaffected no matter how the philosophical
subtleties might be settled.

In conclusion, the critics misrepresent Popper's principles by extracting one aspect or another from the whole concept, and claiming that this aspect alone constitutes the principles. The isolated aspect is, indeed, too weak to function as a criterion of demarcation, so to a casual observer the criticism appears valid.

It is also worth noting that the academics do not offer an alternative to Popper’s principles. By
rejecting them without advancing another, equally rigorous, criterion of demarcation, they hope
to blur the difference between scientific and pseudoscientific research, and thereby ensure that
their mechanistic theories are perceived as scientific no matter how shallow they actually are.
