
The philosophy of Karl Popper, liberal governance and the social control of technology

A well-known episode of 20th-century philosophy of science is Karl Popper’s critique of inductivism, an epistemological doctrine then advocated by Bertrand Russell and by some of the logical positivists of the Vienna Circle. Inductivists aimed to offer a logical, rational account of scientific discovery by appealing to induction. For them, the accumulation of similar experiences could lead to observational statements upon which scientists grounded generalizations or empirical laws, and even theoretical concepts. Theory was thus supposed to be reducible to series of observations, with the exception of the analytical edifice of the theory, comprising the mathematics and the logical system in which theories were phrased. Induction was, then, the sort of reasoning that justified generalizations on the basis of verified observations.

Popper rejected this understanding of science. He showed that inductivism leads either to a vicious circle, to an infinite regress, or to the a priori statement of the principle of induction as the sole justification of all conjectures thereby derived. For Popper, the latter was unacceptable since, as the conflict between quantum mechanics and classical logic showed, the fallibility of all a priori principles is central to science. The reason inductivists were wrong, he claimed, is that they did not distinguish between contexts of discovery and contexts of justification. Against them, Popper argued that the context of discovery must be regarded as largely irrational and not subject to logic or rationales of any kind: non-rational influences such as metaphysical principles or religion are potentially acceptable as sources of new theories. It was therefore psychologists, not epistemologists, who should study the processes of discovery. The only criterion of demarcation between science and pseudoscience was, in turn, that scientific theories could always be refuted and changed in the face of compelling evidence. This demand confined all rationality to contexts of justification, where criticisms or re-interpretations of a theory in light of new results ought to lead to new conjectures. These conjectures are always refutable as well, and as a result the falsifiability criterion constitutes science as a potentially endless enterprise.

Interestingly, years later Popper would recast this epistemology as evolutionary, making it akin to a pragmatist metaphysics fully aligned with a Darwinian view of culture and attuned to liberal-democratic sentiments. Because scientific theories are applicable to society or technology, open societies should ruthlessly criticize them and, if necessary, let them die, so that our theories die in our stead (Eccles & Popper 2014). More recently, other philosophers have reinterpreted Popper’s epistemology in a cybernetic fashion, treating culture or science as epistemic systems which are flexible, adjustable and open to errors (Hooker 1995). Here, a guiding and commonplace idea is that, for the system (or theory) to be robust enough, errors should feed back onto it and correct its algorithms or constraints and, as a consequence, its output behaviour. If the system is truly robust or resilient, it can achieve better and safer performance without altering its basic functioning.
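
To make this cybernetic picture concrete, consider a minimal sketch, in Python, of an error-driven correction loop. The function name, the gain parameter and the data are illustrative inventions of my own, not anything found in Hooker or Popper: the system’s outputs are compared against observed outcomes, and the resulting error feeds back to adjust an internal constraint, changing future behaviour without any wholesale redesign.

    # Illustrative sketch of an error-correcting feedback loop (hypothetical
    # names and data, not drawn from Hooker 1995 or Popper). The "system" is
    # a simple predictor whose parameter is nudged by each observed error:
    # the cybernetic idea that errors feed back to correct constraints and
    # hence output behaviour.

    def run_feedback_loop(observations, gain=0.1):
        estimate = 0.0                     # the system's current "theory"
        for observed in observations:
            prediction = estimate          # output behaviour derived from it
            error = observed - prediction  # mismatch with the world
            estimate += gain * error       # the error feeds back as correction
        return estimate

    # The basic functioning never changes; only the constraint (the
    # parameter) is adjusted, yet the output converges toward the observed
    # regularity.
    print(run_feedback_loop([1.0, 1.2, 0.9, 1.1, 1.0] * 20))  # approx. 1.0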

Collingridge’s book on the social control of technology (1980) echoes all these Popperian and cybernetic motifs. Collingridge’s question is a normative one: how technological innovations should be controlled or regulated more efficiently. And he detects the following control dilemma: when innovations are at an early stage, our lack of knowledge about their effects makes them impossible to control; later on, their capacities and their social contexts of operation have become so entrenched that control is rendered inefficient, if not irrelevant. Here, Bayesianism fulfils a role similar to that of inductivism in Popper’s critique, for Bayesians understand rational decisions as decisions made when both their likely outcomes and the probabilities of those outcomes are known, that is, decisions supported by a relevant inductive base.

In response to this dilemma, Collingridge devises a fallibilist theory of decision-making under conditions of uncertainty or ignorance. This theory is also clearly inspired by cybernetics. Decision-making rules should be minimally intrusive in order to make governance sufficiently robust and flexible. In this way, errors can always be corrected, helping the decision-making system to learn and change.
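
The contrast can be made vivid with a toy decision problem, an illustration of my own rather than an example from Collingridge (1980). A Bayesian ranks options by expected value, which presupposes known outcomes and known probabilities; under ignorance of those probabilities, a Collingridge-style fallibilist rule instead prefers the option that is cheapest to reverse if it turns out to be an error. All names and numbers below are hypothetical.

    # Toy contrast (my own illustration, not from Collingridge 1980) between
    # Bayesian expected-value choice and a fallibilist, corrigibility-first
    # rule.

    options = {
        # option: (payoff_if_success, prob_of_success, cost_to_reverse)
        "large_dam":   (100.0, 0.6, 90.0),  # high payoff, very hard to undo
        "small_pilot": (20.0,  0.6, 2.0),   # modest payoff, easy to undo
    }

    def bayesian_choice(opts):
        # Requires known outcomes and probabilities: the inductive base.
        return max(opts, key=lambda o: opts[o][0] * opts[o][1])

    def corrigible_choice(opts):
        # Under ignorance of the probabilities, minimize the cost of being
        # wrong: pick the decision that is easiest to correct later.
        return min(opts, key=lambda o: opts[o][2])

    print(bayesian_choice(options))    # -> "large_dam" (highest expected value)
    print(corrigible_choice(options))  # -> "small_pilot" (cheapest to reverse)

The point of the sketch is only that the two rules can disagree: the expected-value rule is undefined once the probabilities are unknown, whereas the corrigibility rule still yields a decision.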

Both the framing of the problem and its solution fully portray the character of innovation and the style of governance in globalised liberal democracies with market economies. In a planned economy which operates consistently according to predefined rules or algorithms issued by a central system, no innovation is allowed that does not suit the constraints and goals prescribed by the rules. Innovations are taken to be rational from the beginning precisely by virtue of their rule-like origin, echoing the inductivist obsession with the rational character and the foundational power of applying rules. By contrast, liberal governance involves a sharp division of labour between the production of innovations and their governance, which echoes Popper’s distinction between contexts of discovery and justification. The innovation process is taken to be irrational, dynamic, creative, anarchic and inscrutable. There is only one overarching goal explicitly shared by innovators and policy-makers: to make economies grow in an endless, uninterrupted loop. To this effect, liberal culture, technological entrepreneurship and market mechanisms are taken to be a source of potential riches that must nevertheless be harnessed by bureaucratic and rationalizing mechanisms of a Weberian kind (Beniger 1986). Yet, in order to exert this control, decision-making must meet a series of conditions. Governance must be flexible and timely. Policies and rules should be reversible. They must be cheap, in order to guarantee the cost-effective application of technologies. And they must be precise, that is, they must deal with specific technologies or applications and with their potential consequences.

In fact, decision-makers usually address technologies in abstraction from their larger economic backgrounds, both in their origin (how and why a technology comes to be) and in their context of operation. They abstract from the origin of technology because doing so is a core commitment of liberal democracies with market economies: otherwise, creativity and dynamism would be deterred, and growth endangered. The context of operation is often disregarded as well, as part of the traditional technocratic approaches involved, for instance, in development policies such as the Green Revolution, as Collingridge himself reports. In this connection, Collingridge’s work is certainly welcome because it provides us with insights and methods to help us assess and control the consequences of specific innovations. These, as the failure of the Green Revolution shows, may involve unwanted effects. Yet one must also ask whether the unconditioned commitment to growth, coupled with the perpetual need to innovate and disrupt economies and cultures, is not further fuel for the sort of control dilemmas that Collingridge aims to address, making them more visible but also more pressing and harder to assess and solve, within an overall scheme that could never be robust. This critique of the insufficiency of the present style of governance to tackle some of the most pressing challenges of this century is not new, and I have explored its rationale in detail elsewhere (Cañizares 2016).

The question then arises whether we can afford to keep assuming this unconditioned and accelerating unfolding of innovation as an irrational source of radical novelty, or whether we should rather criticize this very assumption and start looking for ways to innovate that, though not centrally planned, at least avoid the accelerating concatenation of disruptions. Such ways of innovating would also avoid the increasing demands for control, bureaucracy and technocratic policy that societies must meet in order to cope with the radical and disruptive effects of these anarchic innovation schemes.

Beniger, J. R. (1986). The Control Revolution: Technological and Economic Origins of the Information Society. Cambridge, Mass.: Harvard University Press.

Cañizares, J. (2016, unpublished). The Ecological Contradiction: A Survey and Critique of Two Major Approaches to Ecological Crises. Available at https://www.academia.edu/29689259/The_Ecological_Contradiction._A_survey_and_critique_of_two_major_approaches_to_ecological_crises

Collingridge, D. (1980). The Social Control of Technology. New York: St. Martin's Press.

Eccles, J. C. & Popper, K. (2014). The Self and Its Brain: An Argument for Interactionism. Routledge.

Hooker, C. A. (1995). Reason, Regulation, and Realism: Towards a Regulatory Systems Theory of Reason and Evolutionary Epistemology. State University of New York Press.

Popper, K. R. (1959). The Logic of Scientific Discovery. Routledge.
